
Conversation

@aneeshsunganahalli
Contributor

@aneeshsunganahalli aneeshsunganahalli commented Oct 8, 2025

Description

Provides a way for users to see exactly what the LLM would receive when an API call is made, without actually making one, via the commit . --dry-run command.

Type of Change

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • Documentation update
  • Code refactoring
  • Performance improvement
  • Other (please describe):

Related Issue

Fixes #92

Changes Made

  • Added a --dry-run flag, so the command becomes commit . --dry-run (see the wiring sketch after this list)
  • Added a function that displays everything the LLM would receive during an API call, without actually making the call
  • Added documentation for the feature to the README
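
For reference, a minimal sketch of how the flag wiring can look with cobra, based on the snippets quoted in the review comments below (GetBool("dry-run"), CreateCommitMsg(dryRun)); the surrounding command structure and stubbed CreateCommitMsg are assumptions, not the PR's code:

```go
package main

import (
	"fmt"

	"github.com/spf13/cobra"
)

// CreateCommitMsg is stubbed here; in the repository it lives in cmd/cli/createMsg.go
// and takes the dryRun flag introduced by this PR.
func CreateCommitMsg(dryRun bool) {
	fmt.Println("dry-run:", dryRun)
}

// commitCmd mirrors the wiring reviewed in cmd/cli/root.go: read the --dry-run flag
// and pass it straight into CreateCommitMsg.
var commitCmd = &cobra.Command{
	Use:   "commit [path]",
	Short: "Generate a commit message for staged changes",
	RunE: func(cmd *cobra.Command, args []string) error {
		dryRun, err := cmd.Flags().GetBool("dry-run")
		if err != nil {
			return err
		}
		CreateCommitMsg(dryRun)
		return nil
	},
}

func main() {
	// Register the flag; it defaults to false so normal runs are unchanged.
	commitCmd.Flags().Bool("dry-run", false, "Preview what would be sent to the LLM without calling it")
	if err := commitCmd.Execute(); err != nil {
		fmt.Println(err)
	}
}
```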

Testing

  • Tested with Gemini API
  • Tested with Grok API
  • Tested on Windows
  • Tested on Linux
  • Tested on macOS
  • Added/updated tests (if applicable)

Checklist

  • My code follows the project's code style
  • I have performed a self-review of my code
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings or errors
  • I have tested this in a real Git repository
  • I have read the CONTRIBUTING.md guidelines

Screenshots (if applicable)

Command run when no changes are made


Command run when changes are made

  • The output is quite long because it includes the changed, added, and deleted lines of code along with the prompts, so a screenshot isn't practical, but the feature has been tested and works.

Additional Notes


For Hacktoberfest Participants

  • This PR is submitted as part of Hacktoberfest 2025

Thank you for your contribution! 🎉

Summary by CodeRabbit

  • New Features

    • Added Preview Mode (Dry Run) to the CLI via --dry-run. Shows LLM provider details, masked API info, the exact prompt, change stats, token estimate, and skips API calls. CLI command updated to accept the dry-run flag.
  • Documentation

    • Updated README with a “Preview Mode (Dry Run)” usage section, examples, and recommended use cases (debugging, cost control, privacy, development). Clarifies how to preview prompts without sharing data.

@aneeshsunganahalli aneeshsunganahalli changed the title from Feat/dry run mode to Feature: Dry-Run Added on Oct 8, 2025
@coderabbitai
Contributor

coderabbitai bot commented Oct 8, 2025

Walkthrough

Adds a CLI dry-run (preview) mode: a new --dry-run flag wired into the command, an updated CreateCommitMsg that accepts a dryRun bool, a dry-run rendering path (provider, masked API key, full prompt, change stats, token estimate), and documentation for the feature in the README.

Changes

| Cohort / File(s) | Change Summary |
| --- | --- |
| Docs: Dry-Run feature (README.md) | Adds "Preview Mode (Dry Run)" usage and guidance: commit . --dry-run, displayed info (LLM provider, exact prompt, file stats, change summary, token estimate), and recommended use cases. |
| CLI: Dry-Run wiring & UI (cmd/cli/root.go) | Adds the --dry-run boolean flag and passes its value into CreateCommitMsg(dryRun bool) when running the command. |
| CLI: Dry-Run implementation & helpers (cmd/cli/createMsg.go) | Changes CreateCommitMsg() to CreateCommitMsg(dryRun bool); adds a dry-run branch that bypasses API calls and the spinner, renders the preview via displayDryRunInfo, adds helpers resolveOllamaConfig, maskAPIKey, and estimateTokens, and replaces inline Ollama config extraction with resolveOllamaConfig. |

Sequence Diagram(s)

sequenceDiagram
  autonumber
  actor User
  participant CLI as CLI (commit)
  participant Cmd as CreateCommitMsg(dryRun)
  participant LLM as LLM Provider

  User->>CLI: commit . [--dry-run]
  CLI->>Cmd: Execute with dryRun flag

  alt Dry-Run enabled
    Cmd->>Cmd: Resolve provider config (resolveOllamaConfig)
    Cmd->>Cmd: Build full prompt & change summary
    Cmd->>Cmd: Mask API key, estimate tokens (maskAPIKey, estimateTokens)
    Cmd-->>CLI: Render preview (provider, prompt, stats, token estimate) — no API call
  else Normal run
    Cmd->>Cmd: Build prompt
    Cmd->>LLM: Send prompt (API call)
    LLM-->>Cmd: Return generated commit message
    Cmd-->>CLI: Output generated message
  end
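
To make the "Dry-Run enabled" branch above concrete, here is a minimal, self-contained sketch in Go; displayDryRunInfo is stubbed, the parameters are simplified for illustration, and none of this is the PR's actual code (the real CreateCommitMsg takes only the dryRun flag):

```go
package main

import "fmt"

// displayDryRunInfo is a stub standing in for the renderer added in cmd/cli/createMsg.go;
// the real one also prints a masked API key, change stats, and a token estimate.
func displayDryRunInfo(provider, prompt string) {
	fmt.Println("Provider:", provider)
	fmt.Println("Prompt that would be sent:")
	fmt.Println(prompt)
}

// CreateCommitMsg sketches the control flow from the diagram: on dry-run, render the
// preview and return before any spinner or API call; otherwise proceed to the provider.
func CreateCommitMsg(dryRun bool, provider, prompt string) {
	if dryRun {
		displayDryRunInfo(provider, prompt)
		return
	}
	// Normal path: the real CLI sends the prompt to the configured LLM here.
	fmt.Println("(would call", provider, "with the prompt)")
}

func main() {
	CreateCommitMsg(true, "gemini", "Write a conventional commit message for the staged diff.")
}
```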

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Possibly related PRs

  • Cli interface #46 — Modifies the same CLI entrypoints and function signature (adds dry-run flag / CreateCommitMsg(dryRun bool)), likely directly related.

Suggested labels

enhancement, hacktoberfest, go

Suggested reviewers

  • DFanso

Poem

A rabbit peeks at prompts, gentle and spry,
No calls to the cloud, just a curious eye.
Keys masked and tokens counted with care,
A preview of change floats light in the air.
When all looks good — hop! — we’ll send it out there. 🐇✨

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning: Docstring coverage is 62.50%, which is below the required threshold of 80.00%. Resolution: run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (4 passed)
  • Description Check ✅ Passed: Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check ✅ Passed: The title "Feature: Dry-Run Added" clearly reflects the main change of introducing a dry-run mode and directly summarizes the addition of that feature in the CLI.
  • Linked Issues Check ✅ Passed: The changes fully satisfy issue #92 by introducing the --dry-run flag, bypassing the API call, displaying the exact payload that would be sent to the LLM, and updating the README with usage instructions.
  • Out of Scope Changes Check ✅ Passed: All modifications, including the flag handling, helper functions for configuration, masking and token estimation, and documentation updates, are directly related to implementing the dry-run functionality described in issue #92.
✨ Finishing touches
  • 📝 Generate docstrings
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment

Comment @coderabbitai help to get the list of available commands and usage tips.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (2)
cmd/cli/root.go (1)

56-57: Consider handling the error from GetBool.

While GetBool only returns an error if the flag is undefined (which won't happen here), explicitly handling or at least acknowledging the error improves code clarity and follows Go best practices.

Apply this diff:

-		dryRun, _ := cmd.Flags().GetBool("dry-run")
+		dryRun, err := cmd.Flags().GetBool("dry-run")
+		if err != nil {
+			return err
+		}
 		CreateCommitMsg(dryRun)
cmd/cli/createMsg.go (1)

526-528: Consider clarifying the token estimate is approximate.

The token estimation uses a simple heuristic of len(text) / 4, which can vary significantly from actual tokenizer counts (especially for non-English text or code). Consider either:

  1. Adding a comment explaining this is a rough approximation
  2. Updating the display to make it clear this is approximate (e.g., "~1234 tokens (estimate)")

Option 1 - Add a comment:

 // estimateTokens provides a rough estimate of token count (1 token ≈ 4 characters)
+// Note: This is a simplistic heuristic and may differ significantly from actual tokenizer counts
 func estimateTokens(text string) int {
 	return len(text) / 4
 }

Option 2 - Update the display (in displayDryRunInfo):

-		{"Prompt Size (approx)", fmt.Sprintf("%d tokens", estimateTokens(prompt))},
+		{"Prompt Size (approx)", fmt.Sprintf("~%d tokens (rough estimate)", estimateTokens(prompt))},
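
For scale, under this heuristic a 2,000-character prompt is reported as roughly 500 tokens; actual tokenizer counts will vary.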
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 7dd4621 and d65eccd.

📒 Files selected for processing (3)
  • README.md (2 hunks)
  • cmd/cli/createMsg.go (3 hunks)
  • cmd/cli/root.go (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
cmd/cli/root.go (1)
cmd/cli/createMsg.go (1)
  • CreateCommitMsg (30-211)
cmd/cli/createMsg.go (4)
pkg/types/types.go (4)
  • LLMProvider (5-5)
  • Config (60-63)
  • ProviderOllama (13-13)
  • ProviderGrok (11-11)
cmd/cli/store/store.go (2)
  • LLMProvider (16-19)
  • Config (22-25)
pkg/types/options.go (1)
  • GenerationOptions (4-10)
pkg/types/prompt.go (1)
  • BuildCommitPrompt (30-51)
🔇 Additional comments (6)
cmd/cli/root.go (1)

73-74: LGTM!

The flag definition is clear and correctly attached to the creatCommitMsg command.

README.md (1)

32-32: LGTM!

The documentation clearly describes the dry-run feature with practical examples and use cases. The formatting is consistent with the rest of the README.

Also applies to: 116-136

cmd/cli/createMsg.go (4)

29-30: LGTM!

The function signature change and updated documentation are clear and correct.


98-103: LGTM!

The dry-run check correctly short-circuits before the API call, preventing unintended LLM usage.


437-511: LGTM with suggestions above!

The dry-run display implementation provides comprehensive information to users. The structure is clear and the UI elements are well-formatted. Once the code duplication and masking issues are addressed, this will be a solid implementation.


470-474: Default display is sufficient
OpenAI, Gemini, Claude and Groq have no additional config fields beyond the API key, so the default case correctly covers their display.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (3)
cmd/cli/createMsg.go (3)

443-507: Clarify the token estimation limitation.

The dry-run display is comprehensive and well-structured. However, the token estimate at line 501 uses a rough 4-character-per-token heuristic that can vary significantly based on content type and language (technical content may be closer to 3 chars/token, non-English text can differ substantially).

Consider adding a note to the stats display:

 	statsData := [][]string{
 		{"Total Lines", fmt.Sprintf("%d", linesCount)},
 		{"Total Characters", fmt.Sprintf("%d", charsCount)},
-		{"Prompt Size (approx)", fmt.Sprintf("%d tokens", estimateTokens(prompt))},
+		{"Prompt Size (approx)", fmt.Sprintf("~%d tokens (estimate)", estimateTokens(prompt))},
 	}

526-528: Consider documenting the estimation variability.

The 4-character-per-token approximation is reasonable for English text with GPT-style tokenizers, but can vary by language, content type (code vs. prose), and model tokenizer.

Consider expanding the comment:

-// estimateTokens provides a rough estimate of token count (1 token ≈ 4 characters)
+// estimateTokens provides a rough estimate of token count (1 token ≈ 4 characters).
+// This approximation varies by language, content type, and tokenizer.
 func estimateTokens(text string) int {
 	return len(text) / 4
 }

29-507: Verify test coverage for the dry-run feature.

The PR description indicates no automated tests were added for this feature. While manual testing was performed, automated tests would improve maintainability and catch regressions.

Consider adding tests that verify:

  1. The dry-run flag correctly bypasses API calls
  2. displayDryRunInfo renders expected output for each provider
  3. maskAPIKey handles empty keys, short keys, URLs, and normal keys correctly
  4. resolveOllamaConfig correctly resolves URL and model from various input combinations

Would you like assistance in generating test cases for this feature?
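
As a starting point, a minimal table-driven sketch for the estimateTokens case; it assumes the package under cmd/cli is named cli and that estimateTokens is the len(text)/4 helper shown in the diffs above:

```go
package cli // assumption: the package name used under cmd/cli

import "testing"

// TestEstimateTokens exercises the 1 token ≈ 4 characters heuristic quoted in the review.
func TestEstimateTokens(t *testing.T) {
	cases := []struct {
		name string
		in   string
		want int
	}{
		{"empty string", "", 0},
		{"exactly one token", "abcd", 1},
		{"prompt-like text", "Summarize the staged changes in one line.", 10}, // 41 chars / 4
	}
	for _, c := range cases {
		if got := estimateTokens(c.in); got != c.want {
			t.Errorf("%s: estimateTokens(%q) = %d, want %d", c.name, c.in, got, c.want)
		}
	}
}
```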

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between d65eccd and b2112a5.

📒 Files selected for processing (2)
  • cmd/cli/createMsg.go (5 hunks)
  • cmd/cli/root.go (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
cmd/cli/root.go (1)
cmd/cli/createMsg.go (1)
  • CreateCommitMsg (30-211)
cmd/cli/createMsg.go (4)
pkg/types/types.go (4)
  • LLMProvider (5-5)
  • Config (60-63)
  • ProviderOllama (13-13)
  • ProviderGrok (11-11)
cmd/cli/store/store.go (2)
  • LLMProvider (16-19)
  • Config (22-25)
pkg/types/options.go (1)
  • GenerationOptions (4-10)
pkg/types/prompt.go (1)
  • BuildCommitPrompt (30-51)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Build Go Binary (windows-latest)
🔇 Additional comments (6)
cmd/cli/root.go (2)

56-60: LGTM!

The flag retrieval and error handling are correct. The dry-run value is properly passed to CreateCommitMsg.


75-77: LGTM!

The --dry-run flag is well-defined with a clear description that accurately conveys its purpose.

cmd/cli/createMsg.go (4)

29-30: LGTM!

The updated signature and documentation clearly convey the new dry-run capability.


98-103: LGTM!

The early return for dry-run mode is well-placed and prevents any API interaction. The separation between preview and actual generation is clean.


237-251: LGTM!

The helper function successfully eliminates the duplication flagged in the previous review. The fallback logic for URL and model is correct.

Based on past review comments (eafbec6).
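
For readers who have not opened the diff, a purely illustrative sketch of what fallback-style resolution can look like; the real resolveOllamaConfig is not quoted in this thread, so the type, field names, and defaults below are assumptions:

```go
package cli // assumption: same package as createMsg.go

// ollamaConfig is a hypothetical shape for the resolved settings; the PR's actual
// types live in pkg/types and cmd/cli/store and may differ.
type ollamaConfig struct {
	URL   string
	Model string
}

// resolveOllamaConfigSketch falls back to defaults when stored values are empty.
// The default URL and model here are placeholders, not values taken from the PR.
func resolveOllamaConfigSketch(storedURL, storedModel string) ollamaConfig {
	cfg := ollamaConfig{URL: storedURL, Model: storedModel}
	if cfg.URL == "" {
		cfg.URL = "http://localhost:11434" // common Ollama default, assumed
	}
	if cfg.Model == "" {
		cfg.Model = "llama3" // placeholder default model
	}
	return cfg
}
```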


510-523: LGTM!

The masking logic correctly handles URLs (for Ollama) while protecting API keys. This addresses the concern from the previous review.

Based on past review comments (eafbec6).
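
To illustrate the behavior being approved, a sketch under the assumption that URL values (such as an Ollama endpoint) are left readable while key-like values are masked; the PR's exact formatting may differ:

```go
package cli // assumption: same package as createMsg.go

import "strings"

// maskAPIKeySketch leaves endpoint URLs readable and hides all but the last four
// characters of anything that looks like a secret. Illustrative only.
func maskAPIKeySketch(value string) string {
	if strings.HasPrefix(value, "http://") || strings.HasPrefix(value, "https://") {
		return value // an Ollama URL is configuration, not a secret
	}
	if len(value) <= 4 {
		return strings.Repeat("*", len(value))
	}
	return strings.Repeat("*", len(value)-4) + value[len(value)-4:]
}
```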

@DFanso DFanso self-requested a review October 8, 2025 17:10
@DFanso DFanso added the enhancement (New feature or request), hacktoberfest (Eligible for Hacktoberfest), hacktoberfest-accepted (Approved Hacktoberfest contribution), and go (Pull requests that update go code) labels on Oct 8, 2025
Owner

@DFanso DFanso left a comment


LGTM 🎊

@DFanso DFanso merged commit 7f38168 into DFanso:main Oct 8, 2025
8 checks passed
@coderabbitai coderabbitai bot mentioned this pull request Oct 8, 2025
@aneeshsunganahalli aneeshsunganahalli deleted the feat/dry-run-mode branch October 8, 2025 18:15

Labels

  • enhancement: New feature or request
  • go: Pull requests that update go code
  • hacktoberfest: Eligible for Hacktoberfest
  • hacktoberfest-accepted: Approved Hacktoberfest contribution

Projects

None yet

Development

Successfully merging this pull request may close these issues.

[FEATURE] Add Dry-Run Mode

2 participants