
Conversation

@BjornMelin
Contributor

Description

Adds support for Google's newly launched Gemini 3 Flash model (gemini-3-flash-preview), which offers Pro-level reasoning at Flash-level speed and cost.

Changes Made

  • Added gemini-3-flash-preview to conf/gemini_models.json with proper capabilities (1M context, 64K output, 32K thinking tokens)
  • Added gemini-3-flash-preview to conf/openrouter_models.json for OpenRouter access
  • Updated flash alias to point to gemini-3-flash-preview (higher intelligence score: 17 vs 10)
  • Added flash2.5 alias for legacy access to gemini-2.5-flash
  • Updated default fallback model in providers/registry.py from gemini-2.5-flash to gemini-3-flash-preview
  • Updated providers/gemini.py model selection to prefer canonical model names
  • Updated tests to reflect new model defaults and aliases
  • Added new test file tests/test_thinking_modes.py for thinking mode validation
  • Updated documentation in .env.example, docs/advanced-usage.md, and docs/configuration.md

Model Details

| Property | Value |
| --- | --- |
| Model ID | `gemini-3-flash-preview` |
| Aliases | `flash`, `flash3`, `gemini-3-flash`, `gemini-flash` |
| Context Window | 1M tokens |
| Max Output | 64K tokens |
| Max Thinking | 32K tokens |
| Intelligence Score | 17 (vs 10 for Gemini 2.5 Flash) |
| Supports Thinking | Yes |
| Supports Images | Yes |

Testing

  • All linting passes (ruff, black, isort)
  • Unit tests pass: 866 passed, 4 skipped; 4 pre-existing failures on main, unrelated to this PR
  • Manual testing with model alias resolution
```shell
# Verified linting
.pal_venv/bin/python -m ruff check .  # All checks passed!
.pal_venv/bin/python -m black --check .  # 249 files unchanged
.pal_venv/bin/python -m isort --check-only .  # Passed

# Verified tests
.pal_venv/bin/python -m pytest tests/ -v -m "not integration"
# 866 passed, 4 skipped, 4 pre-existing failures on main
```

Related Issues

N/A - Proactive update for newly released Gemini 3 Flash model

Checklist

  • PR title follows conventional commits format (feat: add Gemini 3 Flash support)
  • Activated venv and ran code quality checks
  • Self-review completed
  • Tests added for changes (test_thinking_modes.py, updated existing tests)
  • Documentation updated (.env.example, docs/advanced-usage.md, docs/configuration.md)
  • All unit tests passing
  • Ready for review

Additional Notes

  • Gemini 3 Flash was launched by Google on December 17, 2025
  • This model offers significant improvements over Gemini 2.5 Flash:
    • Higher intelligence score (17 vs 10)
    • Better coding and agent capabilities
    • Pro-grade reasoning at Flash-level speed
  • The flash alias now points to gemini-3-flash-preview for best default experience
  • Users can still access legacy gemini-2.5-flash via flash2.5 alias
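
As a sketch of how the alias changes play out, the resolution behavior described above can be modeled like this (the helper and dictionary here are hypothetical stand-ins; the real mapping lives in conf/gemini_models.json and is resolved by the provider registry):

```python
# Hypothetical sketch of the alias table after this PR; not the actual
# registry code, just an illustration of the resolution behavior.
ALIASES = {
    "flash": "gemini-3-flash-preview",       # now points at Gemini 3 Flash
    "flash3": "gemini-3-flash-preview",
    "gemini-3-flash": "gemini-3-flash-preview",
    "gemini-flash": "gemini-3-flash-preview",
    "flash2.5": "gemini-2.5-flash",          # legacy access to 2.5 Flash
}

def resolve_model(name: str) -> str:
    """Return the canonical model ID for an alias, or the name unchanged."""
    return ALIASES.get(name, name)

print(resolve_model("flash"))     # gemini-3-flash-preview
print(resolve_model("flash2.5"))  # gemini-2.5-flash
```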

Copilot AI review requested due to automatic review settings December 18, 2025 12:44
@gemini-code-assist
Contributor

Summary of Changes

Hello @BjornMelin, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers get up to speed quickly.

This pull request introduces support for Google's recently launched Gemini 3 Flash model, gemini-3-flash-preview, enhancing the system's capabilities with a more intelligent and efficient 'flash' model. The update reconfigures the flash alias to leverage Gemini 3 Flash's superior reasoning and speed, while ensuring backward compatibility for Gemini 2.5 Flash through a new flash2.5 alias. This change also updates the default fallback model and refines model selection logic, providing users with access to the latest advancements in AI.

Highlights

  • Gemini 3 Flash Integration: Added full support for Google's new gemini-3-flash-preview model, including its capabilities (1M context, 64K output, 32K thinking tokens, intelligence score 17).
  • Default Model Update: The flash alias now points to gemini-3-flash-preview, making it the new default fast model due to its higher intelligence score (17 vs 10 for Gemini 2.5 Flash).
  • Legacy Flash Alias: Introduced flash2.5 as a new alias to provide continued access to the gemini-2.5-flash model.
  • Fallback Model Preference: Updated the system's ultimate fallback model to gemini-3-flash-preview for improved default performance.
  • Model Selection Logic: Enhanced Gemini model selection to prefer canonical model names, improving clarity and consistency.
  • Comprehensive Testing: Added new tests for Gemini-specific thinking mode configurations and updated existing tests to validate the new model defaults, aliases, and restriction policies.
  • Documentation Updates: All relevant documentation (.env.example, docs/advanced-usage.md, docs/configuration.md) has been updated to reflect the new Gemini 3 Flash model and alias changes.
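
The new thinking-mode tests mentioned above might check, among other things, that a requested thinking budget is capped at the model's 32K limit. A minimal sketch of such a check (the function and test names here are illustrative, not the actual contents of tests/test_thinking_modes.py):

```python
def clamp_thinking_budget(model: str, requested: int) -> int:
    """Clamp a requested thinking-token budget to the model's maximum.

    The 32768 cap for gemini-3-flash-preview comes from this PR's
    model configuration; unknown models get a budget of 0 here.
    """
    max_thinking = {"gemini-3-flash-preview": 32768}
    cap = max_thinking.get(model, 0)
    return min(requested, cap)

def test_flash3_thinking_budget_is_capped():
    # Requests above the cap are clamped; requests below pass through.
    assert clamp_thinking_budget("gemini-3-flash-preview", 100_000) == 32768
    assert clamp_thinking_budget("gemini-3-flash-preview", 8_192) == 8_192

test_flash3_thinking_budget_is_capped()
```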




@gemini-code-assist (bot) left a comment


Code Review

This pull request adds support for the new Gemini 3 Flash model. The changes are comprehensive, updating model configurations, documentation, and a large number of tests to reflect the new model and its aliases. The default fallback model is also updated to the new Flash model. I've found one area for improvement in the model selection logic within the Gemini provider to make it more robust. Overall, this is a well-executed and thoroughly tested feature addition.


Copilot AI left a comment


Pull request overview

This PR adds support for Google's newly launched Gemini 3 Flash model (gemini-3-flash-preview), which is positioned as offering Pro-level reasoning at Flash-level speed and cost. The changes update model aliases, configuration files, provider logic, tests, and documentation to accommodate this new model.

Key Changes:

  • Updated flash alias to point to gemini-3-flash-preview instead of gemini-2.5-flash (higher intelligence score: 17 vs 10)
  • Added flash2.5 alias for legacy access to gemini-2.5-flash
  • Updated default fallback model from gemini-2.5-flash to gemini-3-flash-preview

Reviewed changes

Copilot reviewed 35 out of 46 changed files in this pull request and generated 2 comments.

| File | Description |
| --- | --- |
| conf/gemini_models.json | Added Gemini 3 Flash model configuration with capabilities (1M context, 64K output, 32K thinking tokens) and updated aliases |
| conf/openrouter_models.json | Added Gemini 3 Flash support for OpenRouter with proper aliases and capabilities |
| .env.example | Updated documentation to list Gemini 3 Flash models and explain the new alias structure |
| providers/registry.py | Changed default fallback model from gemini-2.5-flash to gemini-3-flash-preview |
| providers/gemini.py | Updated model selection logic to prefer canonical names and added documentation about thinking configuration |
| docs/configuration.md | Added Gemini 3 Flash to the model table with updated aliases |
| docs/advanced-usage.md | Added Gemini 3 Flash documentation describing capabilities and use cases |
| tests/test_thinking_modes.py | Added new test class for Gemini-specific thinking configuration testing |
| Multiple test files | Updated test expectations to reflect new model defaults and alias changes |
| Various tool files | Applied PEP 8 formatting fixes for spacing around operators and string formatting |


Follow-up commit addressing the review feedback:

- Replace brittle string matching ("preview" or "-lite" suffix) with a
  registry-based approach using list_models(include_aliases=False)
- Update the thinking config comment to be less speculative about future
  SDK changes and Gemini 3 models
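
The registry-based approach described in that commit could look roughly like the following. The class and function here are simplified stand-ins for illustration, not the actual providers/gemini.py code; only the `list_models(include_aliases=False)` call is taken from the commit message:

```python
# Simplified stand-in for the fix described above: instead of guessing
# canonical names via string matching ("preview" / "-lite" suffixes),
# ask the registry for the authoritative list of canonical model names.

class FakeRegistry:
    """Stand-in for the real model registry."""

    def list_models(self, include_aliases: bool = True) -> list:
        canonical = ["gemini-3-flash-preview", "gemini-2.5-flash"]
        aliases = ["flash", "flash2.5", "flash3"]
        return canonical + aliases if include_aliases else canonical

def is_canonical(name: str, registry: FakeRegistry) -> bool:
    # Brittle (old): name.endswith("preview") or name.endswith("-lite")
    # Robust (new): membership in the registry's canonical model list
    return name in registry.list_models(include_aliases=False)

registry = FakeRegistry()
print(is_canonical("gemini-3-flash-preview", registry))  # True
print(is_canonical("flash", registry))                   # False
```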
