Fix DFanso#113: add token count, cost estimation, and processing time to dry run mode #115
Conversation
Fix DFanso#113: add token count, cost estimation, and processing time to dry run mode

- Add estimated token counts for input/output in dry run display
- Include cost estimation based on selected LLM provider pricing
- Show processing time estimates for each provider type
Walkthrough
Adds LLM usage estimation to dry-run output in cmd/cli/createMsg.go: computes token counts, estimates cost (non-Ollama only), and processing time; introduces estimateCost and estimateProcessingTime helpers; updates the DRY RUN summary fields; retains commit message length validation.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    actor User
    participant CLI as CLI createMsg
    participant Est as Estimation Helpers
    participant Provider as Provider Config
    User->>CLI: run dry-run
    CLI->>Est: compute input tokens from prompt
    Est-->>CLI: inputTokens
    CLI->>Est: estimate output tokens (fixed)
    Est-->>CLI: outputTokens
    CLI->>CLI: totalTokens = input + output
    CLI->>Provider: check provider type
    alt provider != Ollama
        CLI->>Est: estimateCost(provider, totalTokens)
        Est-->>CLI: cost
    else Ollama
        Note over CLI: Cost omitted for Ollama
    end
    CLI->>Est: estimateProcessingTime(provider, tokens)
    Est-->>CLI: min/max time
    CLI-->>User: DRY RUN summary (tokens, optional cost, time)
```
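The walkthrough names two helpers, estimateCost and estimateProcessingTime, but their bodies are not shown on this page. Below is a minimal, self-contained sketch of what such helpers could look like; the provider constants mirror the identifiers listed in pkg/types/types.go, while the signatures, rates, and time ranges are illustrative assumptions rather than the PR's actual values.

```go
// Hedged sketch only: the helper names come from the PR walkthrough; the
// pricing and timing numbers below are placeholder assumptions, not the
// merged code.
package main

import "fmt"

type LLMProvider string

// Constants mirror identifiers listed in pkg/types/types.go.
const (
	ProviderOpenAI LLMProvider = "openai"
	ProviderClaude LLMProvider = "claude"
	ProviderOllama LLMProvider = "ollama"
)

// estimateCost returns an approximate USD cost for totalTokens, using an
// assumed blended per-million-token rate for each hosted provider.
func estimateCost(provider LLMProvider, totalTokens int) float64 {
	perMillion := map[LLMProvider]float64{ // $/1M tokens, placeholder rates
		ProviderOpenAI: 5.00,
		ProviderClaude: 6.00,
	}
	rate, ok := perMillion[provider]
	if !ok {
		return 0 // unknown or local provider: no API cost
	}
	return float64(totalTokens) / 1_000_000 * rate
}

// estimateProcessingTime returns an assumed min/max latency window in seconds.
func estimateProcessingTime(provider LLMProvider, tokens int) (minTime, maxTime int) {
	if provider == ProviderOllama {
		return 5, 30 // local inference: heavily hardware-dependent
	}
	return 2, 10 // hosted APIs: rough network + generation window
}

func main() {
	total := 1200 + 150 // input tokens + fixed output estimate
	fmt.Printf("cost ~ $%.4f\n", estimateCost(ProviderOpenAI, total))
	lo, hi := estimateProcessingTime(ProviderOpenAI, total)
	fmt.Printf("time ~ %d-%d seconds\n", lo, hi)
}
```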
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
Pre-merge checks and finishing touches
✅ Passed checks (3 passed)
Actionable comments posted: 0
🧹 Nitpick comments (1)
cmd/cli/createMsg.go (1)
625-637: Consider adding a disclaimer about processing time variability.
The fixed time ranges are reasonable rough estimates for dry-run previews, but actual processing time can vary significantly based on input size, specific model selection, network latency, and system load. Consider adding a comment or adjusting the display text to indicate these are approximate ranges.
For example, you could update line 569 to be more explicit:
```diff
- statsData = append(statsData, []string{"Estimated Processing Time", fmt.Sprintf("%d-%d seconds", minTime, maxTime)})
+ statsData = append(statsData, []string{"Estimated Processing Time", fmt.Sprintf("~%d-%d seconds (approximate)", minTime, maxTime)})
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
cmd/cli/createMsg.go (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
cmd/cli/createMsg.go (2)
pkg/types/types.go (7)
  ProviderOllama (13-13), LLMProvider (5-5), ProviderOpenAI (8-8), ProviderClaude (9-9), ProviderGemini (10-10), ProviderGrok (11-11), ProviderGroq (12-12)
cmd/cli/store/store.go (1)
  LLMProvider (33-36)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: Test Build
- GitHub Check: Build Go Binary (windows-latest)
🔇 Additional comments (2)
cmd/cli/createMsg.go (2)
551-571: LGTM! Well-structured dry-run enhancements.
The token estimation and conditional cost display logic is correct and appropriate for a dry-run preview. The separation of input, output, and total tokens provides clear visibility, and excluding cost for Ollama (local provider) is a thoughtful detail.
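To make that conditional display concrete, here is a hedged, runnable sketch in which a cost row is appended only for non-Ollama providers. The statsData field labels follow the nitpick's diff snippet above; the token figures and the rate are assumptions, not values from the PR.

```go
// Hedged sketch of the conditional cost row described in this comment.
// Labels follow the review's diff snippet; token counts and the rate are
// placeholder assumptions.
package main

import "fmt"

func main() {
	provider := "openai"
	inputTokens, outputTokens := 1200, 150 // assumed input + fixed output estimate
	totalTokens := inputTokens + outputTokens

	statsData := [][]string{
		{"Estimated Input Tokens", fmt.Sprintf("%d", inputTokens)},
		{"Estimated Output Tokens", fmt.Sprintf("%d", outputTokens)},
		{"Estimated Total Tokens", fmt.Sprintf("%d", totalTokens)},
	}

	// Ollama runs locally, so no API cost row is added for it.
	if provider != "ollama" {
		cost := float64(totalTokens) / 1_000_000 * 5.00 // assumed $5/1M-token rate
		statsData = append(statsData, []string{"Estimated Cost", fmt.Sprintf("$%.4f", cost)})
	}

	for _, row := range statsData {
		fmt.Printf("%-26s %s\n", row[0], row[1])
	}
}
```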
598-623: Unable to verify current pricing; please confirm updated API rates.
Pricing rates for GPT-4o, Claude Sonnet, Gemini Pro, Grok, and Groq may have changed since 2024. Please verify the current per-million-token input/output costs as of October 2025 and clarify which specific model tiers these estimates cover.
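For reference, estimateCost presumably consults a per-provider rate table along these lines; the structure is a guess, and every dollar figure below is a placeholder that would need exactly the verification requested here.

```go
// Hypothetical rate table of the kind estimateCost might consult. Provider
// keys mirror pkg/types; all dollar figures are unverified placeholders.
package main

import "fmt"

var pricingPerMillionTokens = map[string]struct{ Input, Output float64 }{
	"openai": {Input: 2.50, Output: 10.00}, // GPT-4o-class tier (unverified)
	"claude": {Input: 3.00, Output: 15.00}, // Claude Sonnet-class tier (unverified)
	"gemini": {Input: 1.25, Output: 5.00},  // placeholder
	"grok":   {Input: 2.00, Output: 10.00}, // placeholder
	"groq":   {Input: 0.10, Output: 0.10},  // placeholder
}

func main() {
	p := pricingPerMillionTokens["openai"]
	// Example: 1200 input + 150 output tokens at the assumed rates.
	cost := 1200.0/1e6*p.Input + 150.0/1e6*p.Output
	fmt.Printf("estimated cost ~ $%.4f\n", cost)
}
```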
DFanso left a comment
LGTM 🎊
$ go run cmd/commit-msg/main.go . --dry-run

Summary by CodeRabbit
- New Features
- Refactor