The MCP Code Executor is an MCP server that allows LLMs to execute Python code within a specified Python environment. This enables LLMs to run code with access to libraries and dependencies defined in the environment. It also supports incremental code generation for handling large code blocks that may exceed token limits.
Note: This server has been converted from STDIO to SSE (Server-Sent Events) transport for better web integration and debugging capabilities.
- Execute Python code from LLM prompts
- Support for incremental code generation to overcome token limitations
- Run code within a specified environment (Conda, virtualenv, or UV virtualenv)
- Install dependencies when needed
- Check if packages are already installed
- Dynamically configure the environment at runtime
- Configurable code storage directory
- SSE transport for real-time communication and web-based debugging
- Web-accessible endpoints for testing and integration
To run the server you will need:

- Node.js installed
- One of the following:
  - Conda installed with the desired Conda environment created
  - Python virtualenv
  - UV virtualenv
- Clone this repository:

  ```bash
  git clone https://github.com/bazinga012/mcp_code_executor.git
  ```

- Navigate to the project directory:

  ```bash
  cd mcp_code_executor
  ```

- Install the Node.js dependencies:

  ```bash
  npm install
  ```

- Build the project:

  ```bash
  npm run build
  ```

- Create a `.env` file for configuration:

  ```bash
  cp .env.example .env
  ```

  Or create a `.env` file with your configuration:
  ```env
  PORT=4000
  CODE_STORAGE_DIR=/tmp/mcp_code_storage
  ENV_TYPE=venv
  VENV_PATH=/path/to/your/venv
  ```

The MCP Code Executor now runs as an SSE (Server-Sent Events) server instead of using STDIO transport. This provides better debugging capabilities and web integration.
For development:

```bash
npm run dev
```

For production:

```bash
npm start
# or
node build/index.js
```

The server will start on the port specified in your `.env` file (default: 4000) and provide the following endpoints:
- SSE endpoint: `http://localhost:4000/sse`
- Messages endpoint: `http://localhost:4000/messages`
You can test the server using the MCP Inspector:
```bash
npx @modelcontextprotocol/inspector http://localhost:4000/sse
```

For MCP clients that support SSE transport, configure the connection to use:
- SSE URL: `http://localhost:4000/sse`
- Messages URL: `http://localhost:4000/messages`
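Exact client configuration formats vary, so treat the following purely as an illustrative sketch: the `url` field is an assumption about clients that accept URL-based server entries, not something defined by this project.

```json
{
  "mcpServers": {
    "mcp-code-executor": {
      "url": "http://localhost:4000/sse"
    }
  }
}
```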
If you need to revert to STDIO transport for compatibility with older MCP clients, you can modify the `main` function in `src/index.ts` to use `StdioServerTransport` instead of the Express server setup.
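As a minimal sketch only (it assumes the MCP server instance built in `src/index.ts` is available as `server`, and uses the standard SDK import path), the STDIO variant of `main` could look like:

```typescript
// Sketch: swap the Express/SSE setup for STDIO transport.
// Assumes `server` is the Server/McpServer instance this project already
// creates when it registers its tools.
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";

async function main() {
  const transport = new StdioServerTransport();
  await server.connect(transport);
  // Log to stderr: stdout is reserved for the MCP protocol in STDIO mode.
  console.error("MCP Code Executor running on STDIO");
}

main().catch((error) => {
  console.error("Fatal error:", error);
  process.exit(1);
});
```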
To use this MCP server, add a configuration like the following to your MCP client settings:

```json
{
"mcpServers": {
"mcp-code-executor": {
"command": "node",
"args": [
"/path/to/mcp_code_executor/build/index.js"
],
"env": {
"CODE_STORAGE_DIR": "/path/to/code/storage",
"ENV_TYPE": "conda",
"CONDA_ENV_NAME": "your-conda-env"
}
}
}
}
```

Or, to run the server via Docker, use a configuration like this:

```json
{
"mcpServers": {
"mcp-code-executor": {
"command": "docker",
"args": [
"run",
"-i",
"--rm",
"mcp-code-executor"
]
}
}
}
```

Note: The Dockerfile has been tested with the venv-uv environment type only. Other environment types may require additional configuration.
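The `mcp-code-executor` image referenced above is assumed to be built locally; with the Dockerfile at the repository root, that typically means something like:

```bash
docker build -t mcp-code-executor .
```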
Configure the server with the following environment variables (example `.env` snippets follow the list):

- `CODE_STORAGE_DIR`: Directory where the generated code will be stored
- For Conda:
  - `ENV_TYPE`: Set to `conda`
  - `CONDA_ENV_NAME`: Name of the Conda environment to use
- For Standard Virtualenv:
  - `ENV_TYPE`: Set to `venv`
  - `VENV_PATH`: Path to the virtualenv directory
- For UV Virtualenv:
  - `ENV_TYPE`: Set to `venv-uv`
  - `UV_VENV_PATH`: Path to the UV virtualenv directory
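For illustration only (the paths are placeholders, not values this project requires), the three environment types correspond to `.env` files along these lines:

```env
# Conda
CODE_STORAGE_DIR=/path/to/code/storage
ENV_TYPE=conda
CONDA_ENV_NAME=your-conda-env

# Standard virtualenv
CODE_STORAGE_DIR=/path/to/code/storage
ENV_TYPE=venv
VENV_PATH=/path/to/your/venv

# UV virtualenv
CODE_STORAGE_DIR=/path/to/code/storage
ENV_TYPE=venv-uv
UV_VENV_PATH=/path/to/your/uv-venv
```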
The MCP Code Executor provides the following tools to LLMs:
### 1. `execute_code`

Executes Python code in the configured environment. Best for short code snippets.

```json
{
"name": "execute_code",
"arguments": {
"code": "import numpy as np\nprint(np.random.rand(3,3))",
"filename": "matrix_gen"
}
}
```

### 2. `install_dependencies`

Installs Python packages in the environment.

```json
{
"name": "install_dependencies",
"arguments": {
"packages": ["numpy", "pandas", "matplotlib"]
}
}
```

### 3. `check_installed_packages`

Checks if packages are already installed in the environment.

```json
{
"name": "check_installed_packages",
"arguments": {
"packages": ["numpy", "pandas", "non_existent_package"]
}
}
```

### 4. `configure_environment`

Dynamically changes the environment configuration.

```json
{
"name": "configure_environment",
"arguments": {
"type": "conda",
"conda_name": "new_env_name"
}
}
```

### 5. `get_environment_config`

Gets the current environment configuration.

```json
{
"name": "get_environment_config",
"arguments": {}
}
```

### 6. `initialize_code_file`

Creates a new Python file with initial content. Use this as the first step for longer code that may exceed token limits.

```json
{
"name": "initialize_code_file",
"arguments": {
"content": "def main():\n print('Hello, world!')\n\nif __name__ == '__main__':\n main()",
"filename": "my_script"
}
}
```

### 7. `append_to_code_file`

Appends content to an existing Python code file. Use this to add more code to a file created with `initialize_code_file`.

```json
{
"name": "append_to_code_file",
"arguments": {
"file_path": "/path/to/code/storage/my_script_abc123.py",
"content": "\ndef another_function():\n print('This was appended to the file')\n"
}
}
```

### 8. `execute_code_file`

Executes an existing Python file. Use this as the final step after building up code with `initialize_code_file` and `append_to_code_file`.

```json
{
"name": "execute_code_file",
"arguments": {
"file_path": "/path/to/code/storage/my_script_abc123.py"
}
}
```

### 9. `read_code_file`

Reads the content of an existing Python code file. Use this to verify the current state of a file before appending more content or executing it.

```json
{
"name": "read_code_file",
"arguments": {
"file_path": "/path/to/code/storage/my_script_abc123.py"
}
}
```

This server has been converted from STDIO to SSE (Server-Sent Events) transport. The key changes include:
- Web-based debugging: Use tools like MCP Inspector with a simple HTTP URL
- Real-time communication: Better for streaming responses and live updates
- Better error handling: HTTP status codes and structured error responses
- Browser compatibility: Can be tested directly in web browsers
- Session management: Multiple concurrent connections with session IDs
- Express server: Now runs as a web server instead of STDIO process
- Modular structure: Code split into modules (tools, transports)
- Environment configuration: Uses `.env` files for easy configuration
- Endpoint separation: Different endpoints for SSE connections and message handling
To test the setup:

- Start the server: `npm start`
- Test with MCP Inspector: `npm run inspector`
- Manual testing: Access `http://localhost:4000/sse` in your browser or testing tool
Common issues:

- Port already in use: Change the `PORT` in your `.env` file
- Environment not found: Verify your Python environment paths are correct
- Permission errors: Ensure the `CODE_STORAGE_DIR` is writable
- Module import errors: Check that required packages are installed in your Python environment
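A few quick sanity checks from a shell can narrow these down (the paths below are the placeholders used earlier in this README; substitute your own):

```bash
# Does the configured virtualenv contain a Python interpreter?
ls -l /path/to/your/venv/bin/python

# Is the code storage directory writable?
touch /tmp/mcp_code_storage/.write_test && echo "storage dir is writable"

# Are required packages importable in that environment?
/path/to/your/venv/bin/python -c "import numpy; print(numpy.__version__)"
```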
For detailed logging, you can run the server with debug output:
```bash
DEBUG=* npm start
```

Test the SSE connection manually:

```bash
curl -N -H "Accept: text/event-stream" http://localhost:4000/sse
```
Once configured, the MCP Code Executor will allow LLMs to execute Python code by generating a file in the specified `CODE_STORAGE_DIR` and running it within the configured environment.
LLMs can generate and execute code by referencing this MCP server in their prompts.
For larger code blocks that might exceed LLM token limits, use the incremental code generation approach:
- Initialize a file with the basic structure using `initialize_code_file`
- Add more code in subsequent calls using `append_to_code_file`
- Verify the file content if needed using `read_code_file`
- Execute the complete code using `execute_code_file`
This approach allows LLMs to write complex, multi-part code without running into token limitations.
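For example, a complete incremental session might consist of the following sequence of tool calls (the `file_path` values are hypothetical; in practice, use the path returned by `initialize_code_file`):

```json
[
  {
    "name": "initialize_code_file",
    "arguments": {
      "content": "def load_data():\n    return [1, 2, 3]\n",
      "filename": "analysis"
    }
  },
  {
    "name": "append_to_code_file",
    "arguments": {
      "file_path": "/path/to/code/storage/analysis_abc123.py",
      "content": "\nif __name__ == '__main__':\n    print(sum(load_data()))\n"
    }
  },
  {
    "name": "read_code_file",
    "arguments": {
      "file_path": "/path/to/code/storage/analysis_abc123.py"
    }
  },
  {
    "name": "execute_code_file",
    "arguments": {
      "file_path": "/path/to/code/storage/analysis_abc123.py"
    }
  }
]
```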
This package maintains backward compatibility with earlier versions. Users of previous versions who only specified a Conda environment can continue to use their existing configuration without any changes.
Contributions are welcome! Please open an issue or submit a pull request.
This project is licensed under the MIT License.