
Conversation

@adeeshperera commented Oct 12, 2025

  • Add estimated token counts for input/output in dry run display
  • Include cost estimation based on selected LLM provider pricing
  • Show processing time estimates for each provider type
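A common quick heuristic for the token counts described above is roughly four characters per token, plus a small fixed budget for the generated commit message. A minimal sketch under those assumptions (the function names and constants are illustrative, not the PR's actual code):

```go
package main

import "fmt"

// estimateInputTokens applies the rough ~4-characters-per-token heuristic
// often used for quick LLM estimates (an assumption, not exact tokenization).
func estimateInputTokens(prompt string) int {
	return len(prompt) / 4
}

// fixedOutputTokens is an assumed budget for a short commit message.
const fixedOutputTokens = 100

func main() {
	prompt := "Summarize the following git diff as a commit message: ..."
	in := estimateInputTokens(prompt)
	total := in + fixedOutputTokens
	fmt.Printf("Estimated Input Tokens:  %d\n", in)
	fmt.Printf("Estimated Output Tokens: %d\n", fixedOutputTokens)
	fmt.Printf("Estimated Total Tokens:  %d\n", total)
}
```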

$ go run cmd/commit-msg/main.go . --dry-run
[Screenshot: dry-run output]

Summary by CodeRabbit

  • New Features

    • Displays estimated input, output, and total tokens in dry-run and detailed prompts.
    • Shows optional cost estimates and processing time ranges when available, with per-provider calculations.
    • Provides clearer usage metrics replacing the previous single token estimate.
  • Refactor

    • Revamped dry-run Changes Summary to include token metrics, conditional cost, and processing time.

adeeshperera and others added 2 commits October 12, 2025 20:24

  • …e to dry run mode
    - Add estimated token counts for input/output in dry run display
    - Include cost estimation based on selected LLM provider pricing
    - Show processing time estimates for each provider type
  • Fix DFanso#113: add token count, cost estimation, and processing time to dry run mode

coderabbitai bot commented Oct 12, 2025

Walkthrough

Adds LLM usage estimation to dry-run output in cmd/cli/createMsg.go: computes token counts, estimates cost (non-Ollama only), and processing time; introduces estimateCost and estimateProcessingTime helpers; updates DRY RUN summary fields; retains commit message length validation.

Changes

Dry-run estimation and output update (cmd/cli/createMsg.go):
Added input/output/total token estimation, provider-based cost/time estimates, and new helpers estimateCost and estimateProcessingTime; revised the DRY RUN summary to display token metrics, conditional cost (non-Ollama), and min/max processing time. Commit length validation is preserved.
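The estimateCost and estimateProcessingTime helpers named above might look roughly like this. The per-provider rates and time ranges below are placeholder assumptions for illustration, not the PR's real pricing table:

```go
package main

import "fmt"

// Illustrative per-million-token rates in USD; these values are placeholder
// assumptions for this sketch, not the PR's actual pricing data.
var costPerMillionTokens = map[string]float64{
	"openai": 5.00,
	"claude": 3.00,
	"gemini": 1.25,
	"grok":   2.00,
	"groq":   0.59,
}

// estimateCost returns a rough dollar cost for totalTokens on the given
// provider; Ollama runs locally, so it reports zero cost.
func estimateCost(provider string, totalTokens int) float64 {
	if provider == "ollama" {
		return 0
	}
	rate, ok := costPerMillionTokens[provider]
	if !ok {
		return 0
	}
	return rate * float64(totalTokens) / 1_000_000
}

// estimateProcessingTime returns an assumed min/max range in seconds:
// local inference varies widely with hardware, hosted APIs less so.
func estimateProcessingTime(provider string) (minSec, maxSec int) {
	if provider == "ollama" {
		return 5, 30
	}
	return 2, 10
}

func main() {
	cost := estimateCost("openai", 1500)
	minT, maxT := estimateProcessingTime("openai")
	fmt.Printf("Estimated Cost: $%.4f\n", cost)
	fmt.Printf("Estimated Processing Time: %d-%d seconds\n", minT, maxT)
}
```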

Sequence Diagram(s)

```mermaid
sequenceDiagram
  autonumber
  actor User
  participant CLI as CLI createMsg
  participant Est as Estimation Helpers
  participant Provider as Provider Config

  User->>CLI: run dry-run
  CLI->>Est: compute input tokens from prompt
  Est-->>CLI: inputTokens
  CLI->>Est: estimate output tokens (fixed)
  Est-->>CLI: outputTokens
  CLI->>CLI: totalTokens = input + output

  CLI->>Provider: check provider type
  alt provider != Ollama
    CLI->>Est: estimateCost(provider, totalTokens)
    Est-->>CLI: cost
  else Ollama
    Note over CLI: Cost omitted for Ollama
  end

  CLI->>Est: estimateProcessingTime(provider, tokens)
  Est-->>CLI: min/max time

  CLI-->>User: DRY RUN summary (tokens, optional cost, time)
```
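The conditional branch in the diagram (cost omitted for Ollama) can be sketched as a summary-assembly step. The row layout echoes the statsData pattern quoted in the review; the function name, parameters, and formatting are otherwise assumptions:

```go
package main

import "fmt"

// buildDryRunSummary assembles label/value rows for the DRY RUN output,
// skipping the cost row for Ollama since local inference incurs no API cost.
// Names and formatting here are illustrative, not the PR's actual code.
func buildDryRunSummary(provider string, inputTokens, outputTokens int, cost float64, minTime, maxTime int) [][]string {
	total := inputTokens + outputTokens
	rows := [][]string{
		{"Estimated Input Tokens", fmt.Sprintf("%d", inputTokens)},
		{"Estimated Output Tokens", fmt.Sprintf("%d", outputTokens)},
		{"Estimated Total Tokens", fmt.Sprintf("%d", total)},
	}
	if provider != "ollama" {
		rows = append(rows, []string{"Estimated Cost", fmt.Sprintf("$%.4f", cost)})
	}
	rows = append(rows, []string{"Estimated Processing Time", fmt.Sprintf("%d-%d seconds", minTime, maxTime)})
	return rows
}

func main() {
	for _, row := range buildDryRunSummary("ollama", 1200, 100, 0, 5, 30) {
		fmt.Printf("%-28s %s\n", row[0], row[1])
	}
}
```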

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related PRs

  • Feature: Dry-Run Added #96 — Modifies cmd/cli/createMsg.go dry-run/display logic with token estimation; this PR extends it with cost and processing-time helpers.

Suggested labels

enhancement, go

Suggested reviewers

  • DFanso

Poem

I thump my paws and count each token,
Burrows of bytes, neatly spoken.
Costs hop in (unless it’s Ollama),
Time estimates—no drama.
With carrots tall and output tight,
My dry-run shines in moonlit night. 🥕✨

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
  • Description Check: ✅ Passed (check skipped; CodeRabbit's high-level summary is enabled)
  • Title Check: ✅ Passed (the title clearly summarizes the primary change, stating that token counting, cost estimation, and processing time have been added to dry-run mode, which aligns with the pull request's modifications)
  • Docstring Coverage: ✅ Passed (coverage is 100.00%, above the required 80.00% threshold)

@coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (1)
cmd/cli/createMsg.go (1)

625-637: Consider adding a disclaimer about processing time variability.

The fixed time ranges are reasonable rough estimates for dry-run previews, but actual processing time can vary significantly based on input size, specific model selection, network latency, and system load. Consider adding a comment or adjusting the display text to indicate these are approximate ranges.

For example, you could update line 569 to be more explicit:

```diff
-	statsData = append(statsData, []string{"Estimated Processing Time", fmt.Sprintf("%d-%d seconds", minTime, maxTime)})
+	statsData = append(statsData, []string{"Estimated Processing Time", fmt.Sprintf("~%d-%d seconds (approximate)", minTime, maxTime)})
```
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3d2e536 and 4a824ad.

📒 Files selected for processing (1)
  • cmd/cli/createMsg.go (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
cmd/cli/createMsg.go (2)
pkg/types/types.go (7)
  • ProviderOllama (13-13)
  • LLMProvider (5-5)
  • ProviderOpenAI (8-8)
  • ProviderClaude (9-9)
  • ProviderGemini (10-10)
  • ProviderGrok (11-11)
  • ProviderGroq (12-12)
cmd/cli/store/store.go (1)
  • LLMProvider (33-36)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: Test Build
  • GitHub Check: Build Go Binary (windows-latest)
🔇 Additional comments (2)
cmd/cli/createMsg.go (2)

551-571: LGTM! Well-structured dry-run enhancements.

The token estimation and conditional cost display logic is correct and appropriate for a dry-run preview. The separation of input, output, and total tokens provides clear visibility, and excluding cost for Ollama (local provider) is a thoughtful detail.


598-623: Unable to verify current pricing; please confirm updated API rates.

Pricing rates for GPT-4o, Claude Sonnet, Gemini Pro, Grok, and Groq may have changed since 2024. Please verify the current per-million-token input/output costs as of October 2025 and clarify which specific model tiers these estimates cover.

@DFanso (Owner) left a comment
LGTM 🎊

@DFanso added labels on Oct 12, 2025: enhancement (New feature or request), hacktoberfest (Eligible for Hacktoberfest), hacktoberfest-accepted (Approved Hacktoberfest contribution), go (Pull requests that update go code)
@DFanso DFanso linked an issue Oct 12, 2025 that may be closed by this pull request
@DFanso DFanso merged commit 51c6704 into DFanso:main Oct 12, 2025
13 checks passed

Labels

  • enhancement: New feature or request
  • go: Pull requests that update go code
  • hacktoberfest: Eligible for Hacktoberfest
  • hacktoberfest-accepted: Approved Hacktoberfest contribution


Development

Successfully merging this pull request may close these issues.

[FEATURE] Add token count and cost estimation to dry run mode

2 participants