MCP (Model Context Protocol) server for integrating Claude CLI with Cursor and other MCP-compatible applications. Enables calling different Claude models (Sonnet, Opus, Haiku) through convenient tools.
- 🎯 Multiple Models: Support for Sonnet, Opus, and Haiku
- 🔄 Model Comparison: Get responses from multiple models simultaneously
- 📊 JSON Responses: Structured responses with metadata
- 🎭 System Prompts: Specialize model behavior with custom system prompts
- 📦 Batch Processing: Process multiple prompts sequentially
- 💬 Conversation Continuity: Continue previous conversations with context
- 🛡️ Fallback Support: Automatic fallback to backup models
- ⚡ Quick Integration: Simple setup in Cursor
- 🔧 11 Powerful Tools: Comprehensive set of tools for various use cases
- Node.js 18+
- Claude CLI installed (`claude` command in PATH)
- Cursor or another MCP-compatible application
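You can verify both prerequisites from a terminal (a quick sketch; the exact version string will differ on your machine):

```shell
# Check the Node.js version (needs v18 or later)
node --version

# Check that the Claude CLI is on PATH
command -v claude || echo "claude not found in PATH"
```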
- Clone the repository or create a folder:

  ```bash
  mkdir claude-mcp
  cd claude-mcp
  ```

- Install dependencies:

  ```bash
  npm install
  ```

- Build the project:

  ```bash
  npm run build
  ```

  The compiled output is placed in the `dist/` folder.
- Open the MCP configuration file: `~/.cursor/mcp.json` (or via Finder: `Cmd+Shift+G` → `~/.cursor/mcp.json`)
- Add the server configuration:
  ```json
  {
    "mcpServers": {
      "claude": {
        "command": "node",
        "args": ["/Users/your_username/mcp/claude/dist/index.js"]
      }
    }
  }
  ```

  Important: Replace `/Users/your_username/mcp/claude` with the full path to your project folder.
- Open the configuration file: `%APPDATA%\Cursor\mcp.json`
- Add the configuration:
  ```json
  {
    "mcpServers": {
      "claude": {
        "command": "node",
        "args": ["C:\\path\\to\\project\\claude-mcp\\dist\\index.js"]
      }
    }
  }
  ```
- Open the file: `~/.config/cursor/mcp.json`
- Add the configuration:
  ```json
  {
    "mcpServers": {
      "claude": {
        "command": "node",
        "args": ["/home/your_username/mcp/claude/dist/index.js"]
      }
    }
  }
  ```

If you want to run the TypeScript source directly without building:
```json
{
  "mcpServers": {
    "claude": {
      "command": "npx",
      "args": ["tsx", "/full/path/to/claude/src/index.ts"]
    }
  }
}
```

- Restart Cursor completely (close and reopen)
- The tools will then be available automatically
The basic tool, with model selection.

Parameters:

- `prompt` (string, required) - Prompt to send
- `model` (string, optional) - Model: "sonnet", "opus", "haiku", or a full model name
Example:

```javascript
claude_call({
  prompt: "Explain what TypeScript is",
  model: "sonnet",
});
```

Quick access to the Sonnet model.
Parameters:

- `prompt` (string) - Prompt

Quick access to the Opus model (most powerful).

Parameters:

- `prompt` (string) - Prompt

Quick access to the Haiku model (fastest).

Parameters:

- `prompt` (string) - Prompt
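The shortcut tools take only a prompt. Illustrative calls, in the same notation as the other examples in this README:

```javascript
claude_opus({ prompt: "Design a database schema for a blog" });
claude_haiku({ prompt: "Summarize this paragraph in one line" });
```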
Returns structured JSON with metadata (execution time, tokens, cost, etc.).
Parameters:

- `prompt` (string) - Prompt
- `model` (string, optional) - Model
Example response:

```json
{
  "result": {
    "type": "result",
    "result": "Response text",
    "duration_ms": 3509,
    "usage": {
      "input_tokens": 2,
      "output_tokens": 57
    },
    "total_cost_usd": 0.07092975
  },
  "success": true
}
```

Allows setting a system prompt to specialize model behavior.
Parameters:

- `prompt` (string) - User prompt
- `systemPrompt` (string) - System prompt (model role)
- `model` (string, optional) - Model
Example:

```javascript
claude_with_system_prompt({
  prompt: "How can I improve this code?",
  systemPrompt:
    "You are a TypeScript and clean code expert. Answer briefly and to the point.",
  model: "sonnet",
});
```

Gets responses from multiple models simultaneously for comparison.
Parameters:

- `prompt` (string) - Prompt
- `models` (array) - Array of models: `["sonnet", "opus", "haiku"]`
Example:

```javascript
claude_compare_models({
  prompt: "What is async/await?",
  models: ["sonnet", "haiku"],
});
```

Response:
```json
{
  "results": {
    "sonnet": {
      "result": "Sonnet response...",
      "success": true
    },
    "haiku": {
      "result": "Haiku response...",
      "success": true
    }
  },
  "success": true
}
```

Processes multiple prompts sequentially and returns all responses.
Parameters:

- `prompts` (array) - Array of prompts to process
- `model` (string, optional) - Model to use for all prompts
Example:

```javascript
claude_batch({
  prompts: ["What is TypeScript?", "What is React?", "What is Node.js?"],
  model: "sonnet",
});
```

Response:
```json
{
  "results": [
    {
      "prompt": "What is TypeScript?",
      "result": "TypeScript is...",
      "success": true
    },
    {
      "prompt": "What is React?",
      "result": "React is...",
      "success": true
    }
  ],
  "success": true
}
```

Continues the most recent Claude conversation with a new message.
Parameters:

- `prompt` (string) - New message to continue the conversation
- `model` (string, optional) - Model
Example:

```javascript
claude_continue({
  prompt: "Tell me more about that",
  model: "sonnet",
});
```

Note: This tool uses the `--continue` flag and maintains context from the previous conversation.
Calls Claude with a fallback model that activates when the primary model is overloaded.
Parameters:

- `prompt` (string) - Prompt to send
- `model` (string) - Primary model
- `fallbackModel` (string) - Fallback model to use if the primary is overloaded
Example:

```javascript
claude_with_fallback({
  prompt: "Explain async/await",
  model: "sonnet",
  fallbackModel: "haiku",
});
```

Calls Claude and appends additional context to the default system prompt (it adds to it rather than replacing it).
Parameters:

- `prompt` (string) - User prompt
- `appendSystemPrompt` (string) - Additional context to append to the system prompt
- `model` (string, optional) - Model
Example:

```javascript
claude_append_system_prompt({
  prompt: "Explain async/await",
  appendSystemPrompt: "Answer in one sentence",
  model: "haiku",
});
```

Note: Unlike `claude_with_system_prompt`, this appends to the default system prompt rather than replacing it.
Use `claude_compare_models` when you need different perspectives:

```javascript
claude_compare_models({
  prompt: "What are the advantages of TypeScript?",
  models: ["sonnet", "opus", "haiku"]
})
```
Use `claude_with_system_prompt` for role-based scenarios:

```javascript
claude_with_system_prompt({
  prompt: "Review this code for errors",
  systemPrompt: "You are an experienced code reviewer. Find bugs and provide constructive criticism.",
  model: "sonnet"
})
```
Use `claude_json` to get metadata:

```javascript
claude_json({
  prompt: "What is React?",
  model: "sonnet"
})
```
Useful for:
- Analyzing execution time
- Tracking request costs
- Token usage statistics
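As an illustration, the metadata can be summarized programmatically. A minimal sketch using the sample values from the `claude_json` response shown earlier (field names are taken from that sample and may vary between CLI versions):

```javascript
// Summarize claude_json metadata (sample values from the example response)
const response = {
  result: {
    type: "result",
    result: "Response text",
    duration_ms: 3509,
    usage: { input_tokens: 2, output_tokens: 57 },
    total_cost_usd: 0.07092975,
  },
  success: true,
};

const { duration_ms, usage, total_cost_usd } = response.result;
const totalTokens = usage.input_tokens + usage.output_tokens;
console.log(
  `took ${duration_ms} ms, ${totalTokens} tokens, $${total_cost_usd.toFixed(4)}`
);
```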
Use `claude_batch` to process multiple prompts at once:

```javascript
claude_batch({
  prompts: [
    "What is TypeScript?",
    "What are its advantages?",
    "How to get started?"
  ],
  model: "sonnet"
})
```
Use `claude_continue` to maintain context across multiple messages:

```javascript
// First message
claude_call({
  prompt: "My name is John",
  model: "sonnet"
})

// Continue the conversation
claude_continue({
  prompt: "What's my name?",
  model: "sonnet"
})
```
Use `claude_with_fallback` for critical requests that need guaranteed completion:

```javascript
claude_with_fallback({
  prompt: "Important analysis task",
  model: "sonnet",
  fallbackModel: "haiku"
})
```
Use `claude_append_system_prompt` to add context without replacing defaults:

```javascript
claude_append_system_prompt({
  prompt: "Review this code",
  appendSystemPrompt: "Focus on security issues",
  model: "sonnet"
})
```
Run in development mode:

```bash
npm run dev
```

Build for production:

```bash
npm run build
```

Project structure:

```
claude-mcp/
├── src/
│   └── index.ts      # Main server code
├── dist/             # Compiled code
├── package.json
├── tsconfig.json
└── README.md
```
- Check that the path to `dist/index.js` is correct
- Make sure you ran `npm run build`
- Verify Node.js is installed: `node --version`
- Completely restart Cursor (not just reload)
- Check Cursor logs for errors
- Ensure the configuration in `mcp.json` is valid (check the JSON syntax)
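One quick way to catch JSON syntax errors is to parse the file with Node itself (a sketch; the path shown is the macOS location, adjust for your OS):

```shell
node -e 'JSON.parse(require("fs").readFileSync(process.argv[1], "utf8")); console.log("valid JSON")' ~/.cursor/mcp.json
```

If the file contains a syntax error, the command throws a `SyntaxError` pointing at the problem instead of printing "valid JSON".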
Make sure Claude CLI is installed and available in PATH:

```bash
claude --version
```

If the command is not found, install Claude CLI according to the official documentation.
```bash
# Reinstall dependencies
rm -rf node_modules package-lock.json
npm install
npm run build
```

MIT
Pull requests are welcome! For major changes, please open an issue first to discuss what you would like to change.
Created for integrating Claude CLI with the MCP ecosystem.