Feature: Dry-Run Added #96

Conversation
Walkthrough: Adds a CLI Dry Run (preview) mode via a new --dry-run flag on the commit command.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    actor User
    participant CLI as CLI (commit)
    participant Cmd as CreateCommitMsg(dryRun)
    participant LLM as LLM Provider
    User->>CLI: commit . [--dry-run]
    CLI->>Cmd: Execute with dryRun flag
    alt Dry-Run enabled
        Cmd->>Cmd: Resolve provider config (resolveOllamaConfig)
        Cmd->>Cmd: Build full prompt & change summary
        Cmd->>Cmd: Mask API key, estimate tokens (maskAPIKey, estimateTokens)
        Cmd-->>CLI: Render preview (provider, prompt, stats, token estimate) — no API call
    else Normal run
        Cmd->>Cmd: Build prompt
        Cmd->>LLM: Send prompt (API call)
        LLM-->>Cmd: Return generated commit message
        Cmd-->>CLI: Output generated message
    end
```
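For orientation, below is a minimal, self-contained sketch of the dry-run branch the diagram describes. The function name, parameters, and output layout are assumptions for illustration; only the "preview then return without calling the provider" behaviour and the length/4 token heuristic come from this PR.

```go
package main

import "fmt"

// previewSketch mirrors the flow above: when dryRun is set, print a preview of
// what would be sent to the provider and return without making an API call.
// This is an illustrative sketch, not the repository's actual CreateCommitMsg.
func previewSketch(provider, prompt string, dryRun bool) error {
	if dryRun {
		fmt.Println("Provider:       " + provider)
		fmt.Println("Prompt preview:\n" + prompt)
		fmt.Printf("Prompt size:    ~%d tokens (rough estimate)\n", len(prompt)/4)
		return nil // dry run: stop here, no LLM call
	}
	// A normal run would send the prompt to the configured LLM provider here.
	return fmt.Errorf("LLM call not implemented in this sketch")
}

func main() {
	_ = previewSketch("ollama", "Summarize the staged changes as a commit message.", true)
}
```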
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
Pre-merge checks and finishing touches: ❌ Failed checks (1 warning) · ✅ Passed checks (4 passed)
Actionable comments posted: 2
🧹 Nitpick comments (2)
cmd/cli/root.go (1)
56-57: Consider handling the error from GetBool.

While GetBool only returns an error if the flag is undefined (which won't happen here), explicitly handling or at least acknowledging the error improves code clarity and follows Go best practices.

Apply this diff:

```diff
-	dryRun, _ := cmd.Flags().GetBool("dry-run")
+	dryRun, err := cmd.Flags().GetBool("dry-run")
+	if err != nil {
+		return err
+	}
 	CreateCommitMsg(dryRun)
```

cmd/cli/createMsg.go (1)
526-528: Consider clarifying that the token estimate is approximate.

The token estimation uses a simple heuristic of len(text) / 4, which can vary significantly from actual tokenizer counts (especially for non-English text or code). Consider either:
- Adding a comment explaining this is a rough approximation
- Updating the display to make it clear this is approximate (e.g., "~1234 tokens (estimate)")

Option 1 - Add a comment:

```diff
 // estimateTokens provides a rough estimate of token count (1 token ≈ 4 characters)
+// Note: This is a simplistic heuristic and may differ significantly from actual tokenizer counts
 func estimateTokens(text string) int {
 	return len(text) / 4
 }
```

Option 2 - Update the display (in displayDryRunInfo):

```diff
-	{"Prompt Size (approx)", fmt.Sprintf("%d tokens", estimateTokens(prompt))},
+	{"Prompt Size (approx)", fmt.Sprintf("~%d tokens (rough estimate)", estimateTokens(prompt))},
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- README.md (2 hunks)
- cmd/cli/createMsg.go (3 hunks)
- cmd/cli/root.go (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
cmd/cli/root.go (1)
cmd/cli/createMsg.go (1)
- CreateCommitMsg (30-211)
cmd/cli/createMsg.go (4)
pkg/types/types.go (4)
- LLMProvider (5-5)
- Config (60-63)
- ProviderOllama (13-13)
- ProviderGrok (11-11)
cmd/cli/store/store.go (2)
- LLMProvider (16-19)
- Config (22-25)
pkg/types/options.go (1)
- GenerationOptions (4-10)
pkg/types/prompt.go (1)
- BuildCommitPrompt (30-51)
🔇 Additional comments (6)
cmd/cli/root.go (1)
73-74: LGTM! The flag definition is clear and correctly attached to the creatCommitMsg command.

README.md (1)

32-32: LGTM! The documentation clearly describes the dry-run feature with practical examples and use cases. The formatting is consistent with the rest of the README.
Also applies to: 116-136
cmd/cli/createMsg.go (4)
29-30: LGTM! The function signature change and updated documentation are clear and correct.

98-103: LGTM! The dry-run check correctly short-circuits before the API call, preventing unintended LLM usage.

437-511: LGTM with suggestions above! The dry-run display implementation provides comprehensive information to users. The structure is clear and the UI elements are well-formatted. Once the code duplication and masking issues are addressed, this will be a solid implementation.
470-474: Default display is sufficient
OpenAI, Gemini, Claude and Groq have no additional config fields beyond the API key, so the default case correctly covers their display.
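For context, a hedged sketch of what such a display split can look like follows. Only the Ollama-versus-everything-else distinction comes from the review above; the function name, row layout, and variable names are assumptions for illustration.

```go
// providerRowsSketch: Ollama exposes extra fields (endpoint URL and model), while
// every other provider falls through to the default case and shows only a masked
// API key. Names are illustrative, not the repository's actual identifiers.
func providerRowsSketch(provider, ollamaURL, ollamaModel, maskedKey string) [][]string {
	switch provider {
	case "ollama":
		return [][]string{
			{"Ollama URL", ollamaURL},
			{"Ollama Model", ollamaModel},
		}
	default: // OpenAI, Gemini, Claude, Groq, ...
		return [][]string{{"API Key", maskedKey}}
	}
}
```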
Actionable comments posted: 0
🧹 Nitpick comments (3)
cmd/cli/createMsg.go (3)
443-507: Clarify the token estimation limitation.

The dry-run display is comprehensive and well-structured. However, the token estimate at line 501 uses a rough 4-character-per-token heuristic that can vary significantly based on content type and language (technical content may be closer to 3 chars/token, non-English text can differ substantially).

Consider adding a note to the stats display:

```diff
 statsData := [][]string{
 	{"Total Lines", fmt.Sprintf("%d", linesCount)},
 	{"Total Characters", fmt.Sprintf("%d", charsCount)},
-	{"Prompt Size (approx)", fmt.Sprintf("%d tokens", estimateTokens(prompt))},
+	{"Prompt Size (approx)", fmt.Sprintf("~%d tokens (estimate)", estimateTokens(prompt))},
 }
```
526-528: Consider documenting the estimation variability.

The 4-character-per-token approximation is reasonable for English text with GPT-style tokenizers, but can vary by language, content type (code vs. prose), and model tokenizer.

Consider expanding the comment:

```diff
-// estimateTokens provides a rough estimate of token count (1 token ≈ 4 characters)
+// estimateTokens provides a rough estimate of token count (1 token ≈ 4 characters).
+// This approximation varies by language, content type, and tokenizer.
 func estimateTokens(text string) int {
 	return len(text) / 4
 }
```
29-507: Verify test coverage for the dry-run feature.

The PR description indicates no automated tests were added for this feature. While manual testing was performed, automated tests would improve maintainability and catch regressions.

Consider adding tests that verify (see the sketch after this list):
- The dry-run flag correctly bypasses API calls
- displayDryRunInfo renders expected output for each provider
- maskAPIKey handles empty keys, short keys, URLs, and normal keys correctly
- resolveOllamaConfig correctly resolves URL and model from various input combinations

Would you like assistance in generating test cases for this feature?
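If such tests were added, a table-driven sketch along these lines could cover the estimateTokens heuristic. The package name cli and the expected values are assumptions, derived only from the len(text)/4 behaviour quoted above.

```go
package cli

import "testing"

// Table-driven sketch for the len/4 heuristic. Expected values follow integer
// division, e.g. a 17-character prompt estimates to 4 tokens.
func TestEstimateTokens(t *testing.T) {
	cases := []struct {
		name string
		in   string
		want int
	}{
		{"empty input", "", 0},
		{"four characters", "feat", 1},
		{"short sentence", "feat: add dry-run", 4},
	}
	for _, tc := range cases {
		if got := estimateTokens(tc.in); got != tc.want {
			t.Errorf("%s: estimateTokens(%q) = %d, want %d", tc.name, tc.in, got, tc.want)
		}
	}
}
```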
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- cmd/cli/createMsg.go (5 hunks)
- cmd/cli/root.go (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
cmd/cli/root.go (1)
cmd/cli/createMsg.go (1)
- CreateCommitMsg (30-211)
cmd/cli/createMsg.go (4)
pkg/types/types.go (4)
- LLMProvider (5-5)
- Config (60-63)
- ProviderOllama (13-13)
- ProviderGrok (11-11)
cmd/cli/store/store.go (2)
- LLMProvider (16-19)
- Config (22-25)
pkg/types/options.go (1)
- GenerationOptions (4-10)
pkg/types/prompt.go (1)
- BuildCommitPrompt (30-51)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Build Go Binary (windows-latest)
🔇 Additional comments (6)
cmd/cli/root.go (2)
56-60: LGTM! The flag retrieval and error handling are correct. The dry-run value is properly passed to CreateCommitMsg.

75-77: LGTM! The --dry-run flag is well-defined with a clear description that accurately conveys its purpose.

cmd/cli/createMsg.go (4)
29-30: LGTM! The updated signature and documentation clearly convey the new dry-run capability.

98-103: LGTM! The early return for dry-run mode is well-placed and prevents any API interaction. The separation between preview and actual generation is clean.

237-251: LGTM! The helper function successfully eliminates the duplication flagged in the previous review. The fallback logic for URL and model is correct.
Based on past review comments (eafbec6).
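For readers without the diff in front of them, the fallback shape being praised here is roughly the following. The default endpoint and model strings are assumptions (Ollama's conventional local port and a placeholder model), not necessarily what resolveOllamaConfig actually uses.

```go
// resolveOllamaConfigSketch: prefer explicitly configured values and fall back to
// defaults when they are empty. An approximation of the reviewed helper, not a copy.
func resolveOllamaConfigSketch(configuredURL, configuredModel string) (url, model string) {
	url, model = configuredURL, configuredModel
	if url == "" {
		url = "http://localhost:11434" // Ollama's default local endpoint (assumed default here)
	}
	if model == "" {
		model = "llama3" // placeholder default model, assumption only
	}
	return url, model
}
```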
510-523: LGTM! The masking logic correctly handles URLs (for Ollama) while protecting API keys. This addresses the concern from the previous review.
Based on past review comments (eafbec6).
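The masking behaviour described here can be pictured with a sketch like the one below. The exact thresholds and prefix/suffix lengths in maskAPIKey may differ, so treat this as an assumed approximation rather than the repository's implementation.

```go
package main

import (
	"fmt"
	"strings"
)

// maskSecretSketch: endpoint URLs (the Ollama case) pass through unchanged, empty
// or very short values are fully hidden, and normal API keys keep only a short
// prefix and suffix. These rules are assumptions, not the actual maskAPIKey logic.
func maskSecretSketch(value string) string {
	switch {
	case value == "":
		return "(not set)"
	case strings.HasPrefix(value, "http://"), strings.HasPrefix(value, "https://"):
		return value // URL, nothing secret to mask
	case len(value) <= 8:
		return strings.Repeat("*", len(value))
	default:
		return value[:4] + strings.Repeat("*", len(value)-8) + value[len(value)-4:]
	}
}

func main() {
	fmt.Println(maskSecretSketch("http://localhost:11434")) // printed as-is
	fmt.Println(maskSecretSketch("sk-1234567890abcdef"))    // sk-1***********cdef
}
```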
DFanso left a comment
LGTM 🎊
Description
Provided a way for users to see what the LLM sees when an API call is made, without actually making one, using the commit . --dry-run command.

Type of Change
Related Issue
Fixes #92
Changes Made
commit . --dry-runTesting
Checklist
Screenshots (if applicable)
Command run when no changes made
Command run when changes made
Additional Notes
For Hacktoberfest Participants
Thank you for your contribution! 🎉
Summary by CodeRabbit
New Features
Documentation