This Vim plugin brings the power of AI language models into your Vim editor, enabling you to request code explanations or improvements directly within Vim. With this plugin, you can effortlessly highlight code snippets and ask AI to explain, review, or rewrite them, with the option to include additional context for better results.
Supported Providers:
- OpenAI (ChatGPT, GPT-4, etc.)
- Anthropic (Claude)
- Gemini (Google)
- Ollama (local models)
- OpenRouter (unified API for multiple providers)
Requirements:
- Vim with Python 3 support.
- An API key from your chosen provider (OpenAI, Anthropic, Gemini, or OpenRouter).
The plugin requires only the requests library:
```
pip install requests
```

Important: Vim must be able to import `requests` from its Python interpreter. If you get a `ModuleNotFoundError: No module named 'requests'` error, see the Troubleshooting section below.
Copy the chatgpt.vim file to your Vim plugin directory. If you're using vim-pathogen, add the chatgpt directory to your bundle directory.
The plugin defaults to OpenAI for backward compatibility, but you can use any supported provider:
Get your API key from: https://platform.openai.com/account/api-keys
```
export OPENAI_API_KEY='sk-...'
```

Or in your `.vimrc`:
```
let g:openai_api_key='sk-...'
let g:chat_gpt_model='gpt-4o' " Optional: specify model
```

With Proxy or Custom Base URL:
```
export OPENAI_PROXY="http://localhost:1087"
# or
export OPENAI_API_BASE='https://openai.xxx.cloud/v1'
```

Or in your `.vimrc`:

```
let g:openai_base_url='https://openai.xxx.cloud/v1' " Custom base URL (alternative to env vars)
```

Azure OpenAI:
```
let g:api_type = 'azure'
let g:chat_gpt_key = 'your_azure_chatgpt_api'
let g:azure_endpoint = 'your_azure_endpoint'
let g:azure_deployment = 'your_azure_deployment'
let g:azure_api_version = '2023-03-15-preview'
```

Get your API key from: https://console.anthropic.com/

```
export ANTHROPIC_API_KEY='sk-ant-...'
```

Or in your `.vimrc`:
```
let g:chat_gpt_provider = 'anthropic'
let g:anthropic_api_key = 'sk-ant-...'
let g:anthropic_model = 'claude-sonnet-4-5-20250929' " Optional
```

With Custom Base URL (for proxies or custom deployments):

```
export ANTHROPIC_BASE_URL='https://your-proxy.com/v1'
```

Or in your `.vimrc`:

```
let g:anthropic_base_url = 'https://your-proxy.com/v1'
```

Get your API key from: https://makersuite.google.com/app/apikey
```
export GEMINI_API_KEY='...'
```

Or in your `.vimrc`:

```
let g:chat_gpt_provider = 'gemini'
let g:gemini_api_key = '...'
let g:gemini_model = 'gemini-2.5-flash' " Optional
```

Install Ollama from: https://ollama.ai
```
let g:chat_gpt_provider = 'ollama'
let g:ollama_model = 'llama3.2' " or codellama, mistral, etc.
let g:ollama_base_url = 'http://localhost:11434' " Optional
```

Get your API key from: https://openrouter.ai/keys

```
export OPENROUTER_API_KEY='sk-or-...'
```

Or in your `.vimrc`:

```
let g:chat_gpt_provider = 'openrouter'
let g:openrouter_api_key = 'sk-or-...'
let g:openrouter_model = 'anthropic/claude-3.5-sonnet' " Choose any available model
let g:openrouter_base_url = 'https://openrouter.ai/api/v1' " Optional: custom base URL (default shown)
```

```
" Select your AI provider (default: 'openai')
let g:chat_gpt_provider = 'openai' " Options: 'openai', 'anthropic', 'gemini', 'ollama', 'openrouter'

" Provider-specific models (optional - defaults shown)
let g:chat_gpt_model = 'gpt-4o' " For OpenAI
let g:anthropic_model = 'claude-sonnet-4-5-20250929' " For Anthropic
let g:gemini_model = 'gemini-2.5-flash' " For Gemini
let g:ollama_model = 'llama3.2' " For Ollama
let g:openrouter_model = 'anthropic/claude-3.5-sonnet' " For OpenRouter

let g:chat_gpt_max_tokens=2000
let g:chat_gpt_session_mode=1
let g:chat_gpt_temperature = 0.7
let g:chat_gpt_lang = 'Chinese'
let g:chat_gpt_split_direction = 'vertical'
let g:split_ratio=4
let g:chat_gpt_enable_tools=1
let g:chat_persona='default'
let g:chat_gpt_log_level=0 " 0=off, 1=basic, 2=verbose
```

Option Details:
- `g:chat_gpt_provider`: Select which AI provider to use. Options: `'openai'`, `'anthropic'`, `'gemini'`, `'ollama'`, `'openrouter'`. Default: `'openai'`
- `g:chat_gpt_max_tokens`: Maximum number of tokens in the AI response. Default: 2000
- `g:chat_gpt_model`: Model name for OpenAI (e.g., `'gpt-4o'`, `'gpt-3.5-turbo'`, `'o1'`). Note: when using other providers, set their respective model variables instead.
- `g:chat_gpt_session_mode`: Maintain persistent conversation history across sessions. Default: 1 (enabled). When enabled, conversations are saved to `.vim-chatgpt/history.txt`. Set to 0 to disable history persistence.
- `g:chat_gpt_temperature`: Controls response randomness (0.0-1.0). Higher = more creative, lower = more focused. Default: 0.7
- `g:chat_gpt_lang`: Request responses in a specific language (e.g., `'Chinese'`, `'Spanish'`). Default: none (English)
- `g:chat_gpt_split_direction`: Window split direction: `'vertical'` or `'horizontal'`. Default: `'vertical'`
- `g:split_ratio`: Split window size ratio. If set to 4, the window will be 1/4 of the screen. Default: 3
- `g:chat_persona`: Default AI persona to load on startup. Must match a key in `g:gpt_personas` or `g:chat_gpt_custom_persona`. Default: `'default'`. See the Custom Personas section.
- `g:chat_gpt_enable_tools`: Enable AI tool/function calling (allows the AI to search files, read files, etc.). Default: 1 (enabled). Supported by the OpenAI and Anthropic providers.
- `g:chat_gpt_require_plan_approval`: Require user approval before executing tool-based plans. When enabled, the AI presents a plan first, waits for your approval, then executes tools with review points (see the adaptive planning workflow below). Default: 1 (enabled).
- `g:chat_gpt_summary_compaction_size`: Trigger summary regeneration after this many bytes of new conversation since the last summary. Default: 51200 (50KB). This implements automatic conversation compaction.
- `g:chat_gpt_recent_history_size`: Keep this many bytes of recent conversation uncompressed; older content is compressed into the summary. Default: 20480 (20KB). Controls the sliding-window size.
Advanced Options:
- `g:chat_gpt_log_level`: Debug logging level for troubleshooting. Default: 0 (disabled). Options:
  - 0 - Logging disabled
  - 1 - Basic logging (INFO and WARNING messages)
  - 2 - Verbose logging (DEBUG, INFO, WARNING, and ERROR messages)
  Logs are written to `.vim-chatgpt/debug.log` in your project directory. Use this for troubleshooting API issues, tool execution problems, or understanding plugin behavior.
- `g:chat_gpt_suppress_display`: Internal flag to suppress response display in the buffer. Default: 0 (show responses). Used internally by commands such as `:GenerateCommit`, `:GptGenerateContext`, and `:GptGenerateSummary`. Not recommended for manual use.
The plugin includes a powerful tools framework that allows AI agents to interact with your codebase. When enabled, the AI can autonomously use tools to search files, read code, and find information to better answer your questions.
When `g:chat_gpt_require_plan_approval` is enabled (the default), the AI follows an adaptive planning workflow that adjusts based on results:
- Initial Plan Creation: The AI analyzes your request and creates a step-by-step plan
- User Approval: You review and approve the initial plan
- Execution & Reflection: The AI executes tools one step at a time, evaluating results
- Adaptive Revision: If results are unexpected or require a different approach:
  - The AI presents a REVISED PLAN explaining what changed and why
  - You approve or reject the revision
  - Execution continues with the new plan
- Natural Completion: The AI decides when the task is complete and summarizes the results
Key Benefits:
- Adaptive: Plans can change based on what the AI discovers
- Transparent: You see and approve any plan changes
- Efficient: No artificial batch limits - AI works until done
- Flexible: Handles unexpected situations (missing files, different structure, etc.)
Disable plan approval (tools execute immediately without confirmation):

```
let g:chat_gpt_require_plan_approval = 0
```

Project Exploration Tools:
- get_working_directory: Get the current working directory path
- list_directory: List files and directories in a specified path
Read-Only Tools:
- find_in_file: Search for text patterns in a specific file using grep
- find_file_in_project: Find files by name pattern in the current project
- read_file: Read the contents of a file (up to specified line limit)
File Modification Tools:
- create_file: Create a new file with specified content
- open_file: Open a file in the current Vim buffer (supports splits)
- edit_file: Edit an existing file by replacing specific content
- edit_file_lines: Edit specific line ranges in a file (efficient for large files)
When you ask the AI a question or give it a task, it can automatically (a sketch of this loop follows the list):
- Get the working directory using `get_working_directory`
- Explore project structure using `list_directory`
- Search for relevant files using `find_file_in_project`
- Read file contents using `read_file`
- Find specific patterns in code using `find_in_file`
- Create new files using `create_file`
- Open files in Vim using `open_file`
- Edit existing files using `edit_file` or `edit_file_lines`
- Use that information to provide accurate answers or complete tasks
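To make the mechanics concrete, here is a minimal Python sketch of this kind of tool-execution loop with a plan-approval gate. It is an illustration only, not the plugin's actual code: `call_model`, `ask_user_approval`, and the `reply` dictionary shape are hypothetical stand-ins.

```python
# Hypothetical sketch of an agentic tool loop with plan approval.
# The helpers and the reply format are illustrative, not the plugin's API.

def run_tool(name, args):
    """Dispatch a tool call (stand-in for the plugin's real dispatcher)."""
    tools = {"read_file": lambda a: open(a["path"]).read()}
    return tools[name](args)

def agent_loop(call_model, ask_user_approval, user_request):
    messages = [{"role": "user", "content": user_request}]
    plan_approved = False
    while True:
        reply = call_model(messages)  # returns a dict: content/plan/tool_calls
        if reply.get("plan") and not plan_approved:
            if not ask_user_approval(reply["plan"]):
                return "Cancelled by user."
            plan_approved = True
        if not reply.get("tool_calls"):  # no more tool requests: task is done
            return reply.get("content", "")
        for call in reply["tool_calls"]:
            result = run_tool(call["name"], call["args"])
            messages.append({"role": "tool", "content": str(result)})
```

The loop keeps feeding tool results back to the model until it stops requesting tools, which is what lets the AI chain exploration, reading, and editing on its own.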
Exploring the Project:

```
:Ask "What is the structure of this project?"
```

The AI might:
- Use `get_working_directory` to see the project root
- Use `list_directory` to explore the top-level structure
- Use `list_directory` on subdirectories to understand the organization
- Provide a summary of the project layout and key directories
Finding Information:

```
:Ask "Where is the user authentication logic implemented?"
```

The AI might:
- Use `find_file_in_project` to locate files matching `*auth*`
- Use `read_file` to examine relevant files
- Use `find_in_file` to search for specific functions
- Provide an answer based on the actual code
Creating Files:

```
:Ask "Create a new test file for the authentication module"
```

The AI might:
- Use `find_file_in_project` to locate the authentication module
- Use `read_file` to understand the code structure
- Use `create_file` to create a new test file with appropriate test cases
- Use `open_file` to open the new test file in Vim
Editing Files:

```
:Ask "Add error handling to the login function in auth.py"
```

The AI might:
- Use `find_file_in_project` to locate `auth.py`
- Use `read_file` to read the current implementation
- Use `edit_file` to add try/except blocks and error handling
- Confirm the changes were made successfully
Editing Large Files by Line Number:

```
:Ask "Replace lines 45-60 in config.py with updated settings"
```

The AI might:
- Use `read_file` to view the current content and line numbers
- Use `edit_file_lines` to efficiently replace the specific line range
- Confirm the changes without reading the entire file into memory (a sketch of such a line-range edit follows this list)
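The line-range replacement that `edit_file_lines` describes can be sketched in a few lines of Python. This is a hypothetical illustration, not the plugin's implementation:

```python
# Hypothetical sketch of a 1-indexed, inclusive line-range edit.
# Not the plugin's actual edit_file_lines implementation.
def replace_line_range(path: str, start: int, end: int, new_text: str) -> None:
    with open(path, "r", encoding="utf-8") as f:
        lines = f.readlines()
    replacement = [line + "\n" for line in new_text.split("\n")]
    lines[start - 1:end] = replacement  # splice the new lines in place
    with open(path, "w", encoding="utf-8") as f:
        f.writelines(lines)
```

Under that reading, "replace lines 45-60 in config.py" amounts to `replace_line_range('config.py', 45, 60, updated_settings)`.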
Adaptive Planning Example:

```
:Ask "Refactor the authentication module to use JWT tokens"
```

With `g:chat_gpt_require_plan_approval` enabled, the workflow adapts to discoveries:

1. AI presents the initial plan:

```
I'll refactor the authentication to use JWT tokens:
1. find_file_in_project to locate auth files
2. read_file to understand current implementation
3. create_file to add JWT utility functions
4. edit_file to update login/logout functions
5. edit_file to update middleware
============================================================
INITIAL PLAN FOR APPROVAL:
============================================================
```

2. You approve:

```
Approve plan? [y]es to proceed, [n]o to cancel: y
```

3. AI executes and discovers something unexpected:
   - Finds auth files in an unexpected location
   - Reads the current implementation
   - Discovers auth is split across multiple modules

4. AI presents a revised plan:

```
=== REVISED PLAN ===
I discovered the auth system is split across 3 modules:
- auth/session.py (session management)
- auth/tokens.py (already has basic token support!)
- middleware/auth.py (request validation)

Revised approach:
1. Extend existing auth/tokens.py with JWT functions
2. Update session.py to use JWT instead of sessions
3. Update middleware to validate JWT tokens
============================================================
REVISED PLAN FOR APPROVAL:
============================================================
```

5. You approve the revision:

```
Approve revised plan? [y]es to proceed, [n]o to cancel: y
```

6. AI continues with the adapted approach until complete.

7. AI summarizes: "Successfully refactored auth to use JWT tokens. Modified 3 files..."
Tools are currently supported by:
- OpenAI (GPT-4, GPT-4o, etc.)
- Anthropic (Claude Sonnet, Claude Opus)
If you prefer that the AI not access your files, disable tools:

```
let g:chat_gpt_enable_tools = 0
```

The plugin provides several commands for interacting with the AI:
- `Ask`: Ask a question
- `Rewrite`: Ask the model to rewrite a code snippet more idiomatically
- `Review`: Request a code review
- `Document`: Request documentation for a code snippet
- `Explain`: Ask the model to explain how a code snippet works
- `Test`: Ask the model to write a test for a code snippet
- `Fix`: Ask the model to fix an error in a code snippet

Each command takes a context as an argument, which can be any text describing the problem or question more specifically.

To ask the model to review a code snippet, visually select the code and execute the `Review` command:

```
:'<,'>Review 'Can you review this code for me?'
```

The model's response will be displayed in a new buffer.

You can also use the `GenerateCommit` command to generate a commit message for the current buffer.
When g:chat_gpt_session_mode is enabled (default), the plugin maintains conversation history to provide context across multiple interactions.
Conversation history is automatically saved to .vim-chatgpt/history.txt in your project directory. This allows:
- Persistent conversations across Vim sessions
- Project-specific history - each project has its own conversation log
- Easy review - you can view or edit the history file directly
- When you start a conversation, the plugin loads previous history from `.vim-chatgpt/history.txt`
- As you interact with the AI, responses are automatically appended to the history file
- The AI has access to previous conversation context (up to token limits)
- History is displayed in a Vim buffer and simultaneously saved to disk
View the history file:

```
:e .vim-chatgpt/history.txt
```

Clear history:

```
rm .vim-chatgpt/history.txt
```

Disable session mode (no history saved):

```
let g:chat_gpt_session_mode = 0
```

The plugin uses a conversation compaction strategy to maintain context while keeping token usage bounded. As conversations grow, older messages are compressed into a summary, while recent messages remain fully accessible.
The Strategy (a short code sketch follows this list):
- System Message = Context + Summary + Recent History
  - Project context (always loaded)
  - Conversation summary (compressed older conversation)
  - Last ~20KB of recent, uncompressed conversation
- Automatic Compaction:
  - When the conversation grows by 50KB (configurable), the summary is regenerated
  - Content from the last cutoff point to the current position (minus the recent window) is compressed into the summary
  - A cutoff marker is stored in the summary metadata
  - Only messages after the cutoff are loaded as full history
- Result:
  - Bounded token usage (summary + recent history is a fixed size)
  - Full context preserved (older parts compressed into the summary)
  - An automatic sliding window as conversations grow
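The trigger arithmetic can be sketched as follows; the helper names are hypothetical, and the byte values mirror the documented defaults:

```python
import os

# Illustrative defaults matching the documented option values.
COMPACTION_SIZE = 51200   # g:chat_gpt_summary_compaction_size (50KB)
RECENT_SIZE = 20480       # g:chat_gpt_recent_history_size (20KB)

def needs_compaction(history_path: str, cutoff_byte: int) -> bool:
    """True once enough new conversation has accumulated since the last cutoff."""
    total = os.path.getsize(history_path)
    return total - cutoff_byte >= COMPACTION_SIZE

def compaction_span(history_path: str, cutoff_byte: int) -> tuple[int, int]:
    """Byte range to compress: from the old cutoff up to the recent window."""
    total = os.path.getsize(history_path)
    new_cutoff = max(cutoff_byte, total - RECENT_SIZE)
    return cutoff_byte, new_cutoff  # compress [old_cutoff, new_cutoff)
```

The point of the `max()` is that the most recent ~20KB is never folded into the summary, which is exactly the sliding window the list above describes.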
While summaries are generated automatically through compaction, you can manually trigger an update:

```
:GptGenerateSummary
```

The AI will:
- Read the conversation history from `.vim-chatgpt/history.txt`
- Compress content from the last cutoff to the current position (minus the recent window)
- Identify key topics, decisions, and user preferences
- Merge with the existing summary if present
- Update `.vim-chatgpt/summary.md` with new cutoff metadata
The summary file (`.vim-chatgpt/summary.md`) contains:

Metadata Header:

```
<!-- SUMMARY_METADATA
cutoff_byte: 51200
last_updated: 2024-01-15
-->
```

Summary Content:
- Key Topics Discussed: Main subjects and decisions made
- Important Information to Remember: Critical details and context
- User Preferences: Inferred preferences such as:
- Coding style preferences (e.g., "prefers functional programming")
- Tool or technology preferences (e.g., "uses TypeScript over JavaScript")
- Communication preferences (e.g., "prefers concise explanations")
- Project-specific conventions
- Action Items: Pending tasks or future work
The `cutoff_byte` metadata tracks which portion of the history has been compressed, enabling the sliding-window strategy.
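A minimal sketch (again hypothetical, not the plugin's code) of reading this header and loading only the uncompressed tail of the history:

```python
import re

def read_cutoff(summary_path: str) -> int:
    """Parse cutoff_byte from the SUMMARY_METADATA comment; 0 if absent."""
    try:
        with open(summary_path, encoding="utf-8") as f:
            match = re.search(r"cutoff_byte:\s*(\d+)", f.read())
        return int(match.group(1)) if match else 0
    except FileNotFoundError:
        return 0

def load_recent_history(history_path: str, cutoff_byte: int) -> str:
    """Return only the conversation recorded after the compaction cutoff."""
    with open(history_path, "rb") as f:
        f.seek(cutoff_byte)
        return f.read().decode("utf-8", errors="replace")
```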
Configure compaction behavior:

```
" Trigger summary update after this many bytes of new conversation
let g:chat_gpt_summary_compaction_size = 51200 " Default: 50KB

" Keep this much recent history uncompressed
let g:chat_gpt_recent_history_size = 20480 " Default: 20KB
```

Automatic Compaction:
- New conversation is written to `.vim-chatgpt/history.txt`
- When new content since the last summary exceeds `g:chat_gpt_summary_compaction_size`:
  - The AI reads the existing summary plus the new content to compact
  - It generates an updated summary covering key topics, decisions, and preferences
  - It stores the cutoff position in the summary metadata
- On the next conversation:
  - The summary is loaded into the system message (compressed older content)
  - Only recent history after the cutoff is loaded as full messages
  - Token usage stays bounded

Manual Updates:
- Run `:GptGenerateSummary` anytime to manually trigger compaction
- Edit `.vim-chatgpt/summary.md` to manually adjust preferences
- The summary is automatically loaded into every conversation's system message
- Bounded Token Usage: Summary + recent history keeps context size predictable
- Full Context Preserved: Older conversations compressed, not lost
- Remembers Preferences: AI learns and retains your coding style, tool preferences, and communication style
- Automatic Maintenance: Compaction happens automatically as conversations grow
- Long-Running Conversations: Have extended discussions without hitting token limits
Example: As you work on a project over days/weeks:
- Day 1: Discuss architecture, make decisions (saved in history)
- Day 3: History grows, gets compacted into summary
- Day 7: AI still remembers Day 1 decisions (from summary) + recent conversation (full history)
- Your preferences (e.g., "prefers TypeScript", "uses Jest") persist across all sessions
The plugin can maintain project context to make the AI smarter about your specific codebase. This context is automatically loaded into every conversation.
Run this command to have the AI analyze your project and create a context file:

```
:GptGenerateContext
```

The AI will:
- Explore your project using available tools (list directories, read the README, package files, etc.)
- Analyze the project structure and technology stack
- Create a context summary at `.vim-chatgpt/context.md`
The generated context file contains:
- Project Name: Identified from the directory or metadata
- Type: Kind of project (web app, library, CLI tool, etc.)
- Purpose: What the project does
- Tech Stack: Technologies, frameworks, and key dependencies
- Structure: Overview of directory layout
- Key Files: Important entry points and configuration
You can manually edit .vim-chatgpt/context.md to:
- Add specific details the AI should know
- Highlight important patterns or conventions
- Document architectural decisions
- Note areas that need work
```
# Project: vim-chatgpt

## Type
Vim plugin

## Purpose
Brings AI language model capabilities into the Vim editor for code assistance

## Tech Stack
- VimScript
- Python 3
- Multiple AI providers (OpenAI, Anthropic, Gemini, Ollama, OpenRouter)

## Structure
- plugin/chatgpt.vim - Main plugin file with VimScript and embedded Python
- README.md - Documentation

## Key Files
- chatgpt.vim - Contains all functionality including provider abstraction and the tool framework
```

When you start any AI conversation:
- The plugin checks for `.vim-chatgpt/context.md` in the current working directory
- If found, the context is loaded into the system message
- The AI has this context for every request in that project
This means when you ask "What is this project?", the AI already knows!
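Conceptually, loading these files into the system message might look like the following sketch (the helper is hypothetical; the plugin's real assembly lives in `chatgpt.vim`):

```python
from pathlib import Path

def build_system_message(project_root: str) -> str:
    """Sketch: stitch context and summary files into one system prompt.

    Illustrative only; file names match the documented layout, the
    section labels are invented for this example.
    """
    base = Path(project_root) / ".vim-chatgpt"
    parts = []
    for name, label in [("context.md", "PROJECT CONTEXT"),
                        ("summary.md", "CONVERSATION SUMMARY")]:
        path = base / name
        if path.exists():
            parts.append(f"=== {label} ===\n{path.read_text(encoding='utf-8')}")
    return "\n\n".join(parts)
```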
All plugin files are stored in the `.vim-chatgpt/` directory in your project root:

```
.vim-chatgpt/
├── context.md   # Project context (auto-generated or manual)
├── summary.md   # Conversation summary & user preferences
└── history.txt  # Full conversation history
```

Files are automatically loaded:
- `context.md` - Loaded into every conversation's system message
- `summary.md` - Loaded into every conversation's system message
- `history.txt` - Loaded for conversation continuity (respects token limits)
Manual management:
```
# View files
ls .vim-chatgpt/

# Edit context or summary
vi .vim-chatgpt/context.md
vi .vim-chatgpt/summary.md

# Clear history
rm .vim-chatgpt/history.txt

# Start fresh (removes all plugin data)
rm -rf .vim-chatgpt/
```

To introduce custom personas into the system context, simply define them in your `.vimrc`:
```
let g:chat_gpt_custom_persona = {'neptune': 'You are an expert in all things Graph databases'}
```

With the custom persona defined, you can switch to it using the following command:

```
:GptBe neptune
```

If you try to switch to a non-existent persona, the plugin falls back to the preconfigured default persona.

You can also set a persona to load by default when Vim starts by setting it in your `.vimrc`:

```
let g:chat_persona='neptune'
```

You can add custom prompt templates using the `g:chat_gpt_custom_prompts` variable. This should be a dictionary mapping prompt keys to prompt templates.

For example, to add a 'debug' prompt:

```
let g:chat_gpt_custom_prompts = {'debug': 'Can you help me debug this code?'}
```

Afterwards, you can use the `Debug` command like any other command:

```
:'<,'>Debug 'I am encountering an issue where...'
```

This plugin exposes a binding to open a menu of options for a visual selection. You can map it like this:
```
vmap <silent> <leader>0 <Plug>(chatgpt-menu)
```

- Enter visual mode by pressing `V`.
- Select the lines of code you want to explain, review, or rewrite.
- Type `:Explain`, `:Review`, `:Rewrite`, `:Fix`, or `:Test` and press Enter.
If you see this error, it means `requests` is not installed for the Python version that Vim is using. Vim might use a different Python interpreter than your default `python3` command.
1. Check which Python version Vim uses:

```
vim --version | grep python
```

Look for a line like `-lpython3.13` or similar. This shows Vim is using Python 3.13.

2. Install requests for that specific Python version:

For Python 3.13 (adjust the version number as needed):

```
python3.13 -m pip install requests
```

3. If you get an "externally-managed-environment" error:

On newer macOS/Linux systems, Python prevents global package installation. Use one of these solutions:

```
# Option 1: Use --break-system-packages (simpler, but be aware of the implications)
python3.13 -m pip install --break-system-packages requests

# Option 2: Use --user flag (installs to user directory)
python3.13 -m pip install --user requests

# Option 3: Use Homebrew (macOS only, if requests is available)
brew install python-requests
```

4. Verify the installation:

```
python3.13 -c "import requests; print('✓ Success')"
```

Q: The plugin doesn't respond when I run commands
- Check that your API key is set correctly
- Verify you have an internet connection (not needed for Ollama)
- Check Vim's error messages with `:messages`
Q: Vim says "Python 3 support is required"
- Your Vim build doesn't include Python 3 support
- Install a version with Python 3: `brew install vim` (macOS) or compile with `--enable-python3interp`
Q: How do I know which provider/model I'm using?
- Check `:echo g:chat_gpt_provider` in Vim
- For OpenAI, check `:echo g:chat_gpt_model`
- For other providers, check `:echo g:anthropic_model`, etc.
This plugin is not affiliated with or endorsed by OpenAI, Anthropic, Google/Gemini, or any other AI provider. You are responsible for managing your API usage and any associated costs when using this plugin.
Previous versions required the `openai` Python package. The plugin now uses HTTP requests for all providers, requiring only the `requests` library.
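For the curious, the kind of raw HTTP call this implies looks roughly like the following sketch against OpenAI's public chat completions endpoint (an illustration, not the plugin's actual request code):

```python
import os
import requests

# Sketch of a direct chat-completions call using only the requests library.
# Endpoint and payload follow OpenAI's public API documentation.
resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-4o",
        "messages": [{"role": "user", "content": "Explain this code: x ^= y"}],
        "max_tokens": 2000,
        "temperature": 0.7,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```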
If you're upgrading from an older version:
- Uninstall the old dependency (optional): `pip uninstall openai`
- Install the new dependency: `pip install requests`
- Your existing OpenAI configuration will continue to work without changes!
Keywords:
- Vim plugin
- AI assistance
- ChatGPT
- Claude
- Anthropic
- Gemini
- Ollama
- OpenRouter
- Code assistance
- Programming help
- Code explanations
- Code review
- Code documentation
- Code rewrites
- Test generation
- Code fixes
- Commit messages
- OpenAI API
- Anthropic API
- Multi-provider
- LLM integration
- Python requests
- Vim integration