MCP Code Executor

smithery badge

The MCP Code Executor is an MCP server that allows LLMs to execute Python code within a specified Python environment. This enables LLMs to run code with access to libraries and dependencies defined in the environment. It also supports incremental code generation for handling large code blocks that may exceed token limits.

Note: This server has been converted from STDIO to SSE (Server-Sent Events) transport for better web integration and debugging capabilities.

Code Executor MCP server

Features

  • Execute Python code from LLM prompts
  • Support for incremental code generation to overcome token limitations
  • Run code within a specified environment (Conda, virtualenv, or UV virtualenv)
  • Install dependencies when needed
  • Check if packages are already installed
  • Dynamically configure the environment at runtime
  • Configurable code storage directory
  • SSE transport for real-time communication and web-based debugging
  • Web-accessible endpoints for testing and integration

Prerequisites

  • Node.js installed
  • One of the following:
    • Conda installed with desired Conda environment created
    • Python virtualenv
    • UV virtualenv

Setup

  1. Clone this repository:
git clone https://github.com/bazinga012/mcp_code_executor.git
  2. Navigate to the project directory:
cd mcp_code_executor
  3. Install the Node.js dependencies:
npm install
  4. Build the project:
npm run build
  5. Create a .env file for configuration:
cp .env.example .env

Alternatively, create the .env file manually with your configuration:

PORT=4000
CODE_STORAGE_DIR=/tmp/mcp_code_storage
ENV_TYPE=venv
VENV_PATH=/path/to/your/venv

Configuration

The MCP Code Executor now runs as an SSE (Server-Sent Events) server instead of using STDIO transport. This provides better debugging capabilities and web integration.

Running the Server

Development Mode

npm run dev

Production Mode

npm start
# or
node build/index.js

The server will start on the port specified in your .env file (default: 4000) and provide the following endpoints:

  • SSE endpoint: http://localhost:4000/sse
  • Messages endpoint: http://localhost:4000/messages

Testing with MCP Inspector

You can test the server using the MCP Inspector:

npx @modelcontextprotocol/inspector http://localhost:4000/sse

Integration with MCP Clients

For MCP clients that support SSE transport, configure the connection to use:

  • SSE URL: http://localhost:4000/sse
  • Messages URL: http://localhost:4000/messages
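
As a rough illustration, a client built on the TypeScript MCP SDK could connect to these endpoints as sketched below. This is a minimal example assuming the @modelcontextprotocol/sdk package; the client name and tool arguments are placeholders, not part of this repository.

import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { SSEClientTransport } from "@modelcontextprotocol/sdk/client/sse.js";

async function main() {
  // Open the SSE stream; the transport handles the paired /messages POSTs internally.
  const transport = new SSEClientTransport(new URL("http://localhost:4000/sse"));
  const client = new Client({ name: "example-client", version: "1.0.0" }, { capabilities: {} });
  await client.connect(transport);

  // Discover the executor's tools, then run a short snippet.
  const { tools } = await client.listTools();
  console.log(tools.map((t) => t.name));

  const result = await client.callTool({
    name: "execute_code",
    arguments: { code: "print('hello from the executor')" },
  });
  console.log(result);

  await client.close();
}

main().catch(console.error);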

Legacy STDIO Configuration (if needed)

If you need to revert to STDIO transport for compatibility with older MCP clients, you can modify the main function in src/index.ts to use StdioServerTransport instead of the Express server setup.
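
As a rough sketch of that change (assuming the @modelcontextprotocol/sdk package; `server` stands for the MCP server instance that src/index.ts already creates):

import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";

// In main(), replace the Express/SSE setup with a STDIO transport
// and connect the existing MCP server instance to it.
const transport = new StdioServerTransport();
await server.connect(transport);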

Using Node.js (Legacy STDIO)

{
  "mcpServers": {
    "mcp-code-executor": {
      "command": "node",
      "args": [
        "/path/to/mcp_code_executor/build/index.js" 
      ],
      "env": {
        "CODE_STORAGE_DIR": "/path/to/code/storage",
        "ENV_TYPE": "conda",
        "CONDA_ENV_NAME": "your-conda-env"
      }
    }
  }
}

Using Docker

{
  "mcpServers": {
    "mcp-code-executor": {
      "command": "docker",
      "args": [
        "run",
        "-i",
        "--rm",
        "mcp-code-executor"
      ]
    }
  }
}

Note: The Dockerfile has been tested with the venv-uv environment type only. Other environment types may require additional configuration.

Environment Variables

Required Variables

  • CODE_STORAGE_DIR: Directory where the generated code will be stored

Environment Type (choose one setup)

  • For Conda:
    • ENV_TYPE: Set to conda
    • CONDA_ENV_NAME: Name of the Conda environment to use
  • For Standard Virtualenv:
    • ENV_TYPE: Set to venv
    • VENV_PATH: Path to the virtualenv directory
  • For UV Virtualenv:
    • ENV_TYPE: Set to venv-uv
    • UV_VENV_PATH: Path to the UV virtualenv directory
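
For example, a .env for the Conda setup might look like the following (the environment name is a placeholder):

PORT=4000
CODE_STORAGE_DIR=/tmp/mcp_code_storage
ENV_TYPE=conda
CONDA_ENV_NAME=my-conda-env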

Available Tools

The MCP Code Executor provides the following tools to LLMs:

1. execute_code

Executes Python code in the configured environment. Best for short code snippets.

{
  "name": "execute_code",
  "arguments": {
    "code": "import numpy as np\nprint(np.random.rand(3,3))",
    "filename": "matrix_gen"
  }
}

2. install_dependencies

Installs Python packages in the environment.

{
  "name": "install_dependencies",
  "arguments": {
    "packages": ["numpy", "pandas", "matplotlib"]
  }
}

3. check_installed_packages

Checks if packages are already installed in the environment.

{
  "name": "check_installed_packages",
  "arguments": {
    "packages": ["numpy", "pandas", "non_existent_package"]
  }
}

4. configure_environment

Dynamically changes the environment configuration.

{
  "name": "configure_environment",
  "arguments": {
    "type": "conda",
    "conda_name": "new_env_name"
  }
}

5. get_environment_config

Gets the current environment configuration.

{
  "name": "get_environment_config",
  "arguments": {}
}

6. initialize_code_file

Creates a new Python file with initial content. Use this as the first step for longer code that may exceed token limits.

{
  "name": "initialize_code_file",
  "arguments": {
    "content": "def main():\n    print('Hello, world!')\n\nif __name__ == '__main__':\n    main()",
    "filename": "my_script"
  }
}

7. append_to_code_file

Appends content to an existing Python code file. Use this to add more code to a file created with initialize_code_file.

{
  "name": "append_to_code_file",
  "arguments": {
    "file_path": "/path/to/code/storage/my_script_abc123.py",
    "content": "\ndef another_function():\n    print('This was appended to the file')\n"
  }
}

8. execute_code_file

Executes an existing Python file. Use this as the final step after building up code with initialize_code_file and append_to_code_file.

{
  "name": "execute_code_file",
  "arguments": {
    "file_path": "/path/to/code/storage/my_script_abc123.py"
  }
}

9. read_code_file

Reads the content of an existing Python code file. Use this to verify the current state of a file before appending more content or executing it.

{
  "name": "read_code_file",
  "arguments": {
    "file_path": "/path/to/code/storage/my_script_abc123.py"
  }
}

SSE Transport Conversion

This server has been converted from STDIO to SSE (Server-Sent Events) transport. The key changes include:

Benefits of SSE Transport:

  • Web-based debugging: Use tools like MCP Inspector with a simple HTTP URL
  • Real-time communication: Better for streaming responses and live updates
  • Better error handling: HTTP status codes and structured error responses
  • Browser compatibility: Can be tested directly in web browsers
  • Session management: Multiple concurrent connections with session IDs

Architecture Changes:

  • Express server: Now runs as a web server instead of STDIO process
  • Modular structure: Code split into modules (tools, transports)
  • Environment configuration: Uses .env files for easy configuration
  • Endpoint separation: Different endpoints for SSE connections and message handling
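
In outline, the SSE wiring described above looks roughly like the sketch below. This assumes Express and the @modelcontextprotocol/sdk SSE server transport; the actual src/index.ts may differ, and the McpServer shown here stands in for the real server with the executor tools registered.

import express from "express";
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { SSEServerTransport } from "@modelcontextprotocol/sdk/server/sse.js";

const server = new McpServer({ name: "mcp-code-executor", version: "1.0.0" });
const app = express();
const transports: Record<string, SSEServerTransport> = {};

// Each client opens a long-lived SSE stream here and is assigned a session ID.
app.get("/sse", async (_req, res) => {
  const transport = new SSEServerTransport("/messages", res);
  transports[transport.sessionId] = transport;
  res.on("close", () => delete transports[transport.sessionId]);
  await server.connect(transport);
});

// Clients POST JSON-RPC messages here, identified by their session ID.
app.post("/messages", async (req, res) => {
  const transport = transports[req.query.sessionId as string];
  if (transport) {
    await transport.handlePostMessage(req, res);
  } else {
    res.status(400).send("Unknown session ID");
  }
});

const port = Number(process.env.PORT) || 4000;
app.listen(port, () => console.log(`MCP Code Executor listening on port ${port}`));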

Testing the Conversion:

  1. Start the server: npm start
  2. Test with MCP Inspector: npm run inspector
  3. Manual testing: Access http://localhost:4000/sse in your browser or testing tool

Troubleshooting

Common Issues

  1. Port already in use: Change the PORT in your .env file
  2. Environment not found: Verify your Python environment paths are correct
  3. Permission errors: Ensure the CODE_STORAGE_DIR is writable
  4. Module import errors: Check that required packages are installed in your Python environment

Debug Mode

For detailed logging, you can run the server with debug output:

DEBUG=* npm start

Connection Testing

Test the SSE connection manually:

curl -N -H "Accept: text/event-stream" http://localhost:4000/sse

Usage

Once configured, the MCP Code Executor will allow LLMs to execute Python code by generating a file in the specified CODE_STORAGE_DIR and running it within the configured environment.

LLMs can generate and execute code by referencing this MCP server in their prompts.

Handling Large Code Blocks

For larger code blocks that might exceed LLM token limits, use the incremental code generation approach:

  1. Initialize a file with the basic structure using initialize_code_file
  2. Add more code in subsequent calls using append_to_code_file
  3. Verify the file content if needed using read_code_file
  4. Execute the complete code using execute_code_file

This approach allows LLMs to write complex, multi-part code without running into token limitations.
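
For illustration, an incremental session might issue tool calls in this order (the file path shown is a placeholder for whatever path initialize_code_file actually returns):

First, initialize the file:

{
  "name": "initialize_code_file",
  "arguments": {
    "content": "def main():\n    pass\n",
    "filename": "analysis"
  }
}

Then append to it, reusing the returned file path:

{
  "name": "append_to_code_file",
  "arguments": {
    "file_path": "/path/to/code/storage/analysis_abc123.py",
    "content": "\nif __name__ == '__main__':\n    main()\n"
  }
}

Finally, execute the completed file:

{
  "name": "execute_code_file",
  "arguments": {
    "file_path": "/path/to/code/storage/analysis_abc123.py"
  }
}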

Backward Compatibility

This package maintains backward compatibility with earlier versions. Configurations from previous versions that specified only a Conda environment will continue to work without any changes.

Contributing

Contributions are welcome! Please open an issue or submit a pull request.

License

This project is licensed under the MIT License.
