From 8fdd69721fd67bb2fd1ed26e31da6cf194722021 Mon Sep 17 00:00:00 2001
From: Max Isbey <224885523+maxisbey@users.noreply.github.com>
Date: Thu, 12 Feb 2026 16:05:08 +0000
Subject: [PATCH 1/3] fix: revert README.md to v1 documentation

Commits 21822053, 4fc49c62, and 1a8c14a5 accidentally updated README.md with
v2 content after it was intentionally reverted to v1 docs in 2137c8fb.
Restore README.md to the v1 state. All three commits already applied their
changes to README.v2.md as well, so no content is lost.
---
 README.md | 433 ++++++++++++++++++++++++++++++------------------
 1 file changed, 238 insertions(+), 195 deletions(-)

diff --git a/README.md b/README.md
index dc23d0d1d..487d48bee 100644
--- a/README.md
+++ b/README.md
@@ -13,12 +13,13 @@
-> [!IMPORTANT]
-> **This is the `main` branch which contains v2 of the SDK (currently in development, pre-alpha).**
->
-> We anticipate a stable v2 release in Q1 2026. Until then, **v1.x remains the recommended version** for production use. v1.x will continue to receive bug fixes and security updates for at least 6 months after v2 ships to give people time to upgrade.
+
+
+> [!NOTE]
+> **This README documents v1.x of the MCP Python SDK (the current stable release).**
 >
-> For v1 documentation and code, see the [`v1.x` branch](https://github.com/modelcontextprotocol/python-sdk/tree/v1.x).
+> For v1.x code and documentation, see the [`v1.x` branch](https://github.com/modelcontextprotocol/python-sdk/tree/v1.x).
+> For the upcoming v2 documentation (pre-alpha, in development on `main`), see [`README.v2.md`](README.v2.md).
 ## Table of Contents
@@ -45,7 +46,7 @@
   - [Sampling](#sampling)
   - [Logging and Notifications](#logging-and-notifications)
   - [Authentication](#authentication)
-  - [MCPServer Properties](#mcpserver-properties)
+  - [FastMCP Properties](#fastmcp-properties)
   - [Session Properties and Methods](#session-properties-and-methods)
   - [Request Context Properties](#request-context-properties)
   - [Running Your Server](#running-your-server)
@@ -134,18 +135,19 @@ uv run mcp
 Let's create a simple MCP server that exposes a calculator tool and some data:
 
 ```python
-"""MCPServer quickstart example.
+"""
+FastMCP quickstart example.
 
 Run from the repository root:
-    uv run examples/snippets/servers/mcpserver_quickstart.py
+    uv run examples/snippets/servers/fastmcp_quickstart.py
 """
 
-from mcp.server.mcpserver import MCPServer
+from mcp.server.fastmcp import FastMCP
 
 # Create an MCP server
-mcp = MCPServer("Demo")
+mcp = FastMCP("Demo", json_response=True)
 
 
 # Add an addition tool
@@ -177,16 +179,16 @@ def greet_user(name: str, style: str = "friendly") -> str:
 
 # Run with streamable HTTP transport
 if __name__ == "__main__":
-    mcp.run(transport="streamable-http", json_response=True)
+    mcp.run(transport="streamable-http")
 ```
 
-_Full example: [examples/snippets/servers/mcpserver_quickstart.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/mcpserver_quickstart.py)_
+_Full example: [examples/snippets/servers/fastmcp_quickstart.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/fastmcp_quickstart.py)_
 
 You can install this server in [Claude Code](https://docs.claude.com/en/docs/claude-code/mcp) and interact with it right away.
First, run the server: ```bash -uv run --with mcp examples/snippets/servers/mcpserver_quickstart.py +uv run --with mcp examples/snippets/servers/fastmcp_quickstart.py ``` Then add it to Claude Code: @@ -216,7 +218,7 @@ The [Model Context Protocol (MCP)](https://modelcontextprotocol.io) lets you bui ### Server -The MCPServer server is your core interface to the MCP protocol. It handles connection management, protocol compliance, and message routing: +The FastMCP server is your core interface to the MCP protocol. It handles connection management, protocol compliance, and message routing: ```python @@ -226,7 +228,7 @@ from collections.abc import AsyncIterator from contextlib import asynccontextmanager from dataclasses import dataclass -from mcp.server.mcpserver import Context, MCPServer +from mcp.server.fastmcp import Context, FastMCP from mcp.server.session import ServerSession @@ -256,7 +258,7 @@ class AppContext: @asynccontextmanager -async def app_lifespan(server: MCPServer) -> AsyncIterator[AppContext]: +async def app_lifespan(server: FastMCP) -> AsyncIterator[AppContext]: """Manage application lifecycle with type-safe context.""" # Initialize on startup db = await Database.connect() @@ -268,7 +270,7 @@ async def app_lifespan(server: MCPServer) -> AsyncIterator[AppContext]: # Pass lifespan to server -mcp = MCPServer("My App", lifespan=app_lifespan) +mcp = FastMCP("My App", lifespan=app_lifespan) # Access type-safe lifespan context in tools @@ -288,9 +290,9 @@ Resources are how you expose data to LLMs. They're similar to GET endpoints in a ```python -from mcp.server.mcpserver import MCPServer +from mcp.server.fastmcp import FastMCP -mcp = MCPServer(name="Resource Example") +mcp = FastMCP(name="Resource Example") @mcp.resource("file://documents/{name}") @@ -319,9 +321,9 @@ Tools let LLMs take actions through your server. 
Unlike resources, tools are exp ```python -from mcp.server.mcpserver import MCPServer +from mcp.server.fastmcp import FastMCP -mcp = MCPServer(name="Tool Example") +mcp = FastMCP(name="Tool Example") @mcp.tool() @@ -340,14 +342,14 @@ def get_weather(city: str, unit: str = "celsius") -> str: _Full example: [examples/snippets/servers/basic_tool.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/basic_tool.py)_ -Tools can optionally receive a Context object by including a parameter with the `Context` type annotation. This context is automatically injected by the MCPServer framework and provides access to MCP capabilities: +Tools can optionally receive a Context object by including a parameter with the `Context` type annotation. This context is automatically injected by the FastMCP framework and provides access to MCP capabilities: ```python -from mcp.server.mcpserver import Context, MCPServer +from mcp.server.fastmcp import Context, FastMCP from mcp.server.session import ServerSession -mcp = MCPServer(name="Progress Example") +mcp = FastMCP(name="Progress Example") @mcp.tool() @@ -395,7 +397,7 @@ validated data that clients can easily process. **Note:** For backward compatibility, unstructured results are also returned. Unstructured results are provided for backward compatibility with previous versions of the MCP specification, and are quirks-compatible -with previous versions of MCPServer in the current version of the SDK. +with previous versions of FastMCP in the current version of the SDK. 
**Note:** In cases where a tool function's return type annotation causes the tool to be classified as structured _and this is undesirable_, @@ -414,10 +416,10 @@ from typing import Annotated from pydantic import BaseModel -from mcp.server.mcpserver import MCPServer +from mcp.server.fastmcp import FastMCP from mcp.types import CallToolResult, TextContent -mcp = MCPServer("CallToolResult Example") +mcp = FastMCP("CallToolResult Example") class ValidationModel(BaseModel): @@ -441,7 +443,7 @@ def validated_tool() -> Annotated[CallToolResult, ValidationModel]: """Return CallToolResult with structured output validation.""" return CallToolResult( content=[TextContent(type="text", text="Validated response")], - structured_content={"status": "success", "data": {"result": 42}}, + structuredContent={"status": "success", "data": {"result": 42}}, _meta={"internal": "metadata"}, ) @@ -465,9 +467,9 @@ from typing import TypedDict from pydantic import BaseModel, Field -from mcp.server.mcpserver import MCPServer +from mcp.server.fastmcp import FastMCP -mcp = MCPServer("Structured Output Example") +mcp = FastMCP("Structured Output Example") # Using Pydantic models for rich structured data @@ -567,10 +569,10 @@ Prompts are reusable templates that help LLMs interact with your server effectiv ```python -from mcp.server.mcpserver import MCPServer -from mcp.server.mcpserver.prompts import base +from mcp.server.fastmcp import FastMCP +from mcp.server.fastmcp.prompts import base -mcp = MCPServer(name="Prompt Example") +mcp = FastMCP(name="Prompt Example") @mcp.prompt(title="Code Review") @@ -595,7 +597,7 @@ _Full example: [examples/snippets/servers/basic_prompt.py](https://github.com/mo MCP servers can provide icons for UI display. 
Icons can be added to the server implementation, tools, resources, and prompts: ```python -from mcp.server.mcpserver import MCPServer, Icon +from mcp.server.fastmcp import FastMCP, Icon # Create an icon from a file path or URL icon = Icon( @@ -605,7 +607,7 @@ icon = Icon( ) # Add icons to server -mcp = MCPServer( +mcp = FastMCP( "My Server", website_url="https://example.com", icons=[icon] @@ -623,21 +625,21 @@ def my_resource(): return "content" ``` -_Full example: [examples/mcpserver/icons_demo.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/mcpserver/icons_demo.py)_ +_Full example: [examples/fastmcp/icons_demo.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/fastmcp/icons_demo.py)_ ### Images -MCPServer provides an `Image` class that automatically handles image data: +FastMCP provides an `Image` class that automatically handles image data: ```python -"""Example showing image handling with MCPServer.""" +"""Example showing image handling with FastMCP.""" from PIL import Image as PILImage -from mcp.server.mcpserver import Image, MCPServer +from mcp.server.fastmcp import FastMCP, Image -mcp = MCPServer("Image Example") +mcp = FastMCP("Image Example") @mcp.tool() @@ -660,9 +662,9 @@ The Context object is automatically injected into tool and resource functions th To use context in a tool or resource function, add a parameter with the `Context` type annotation: ```python -from mcp.server.mcpserver import Context, MCPServer +from mcp.server.fastmcp import Context, FastMCP -mcp = MCPServer(name="Context Example") +mcp = FastMCP(name="Context Example") @mcp.tool() @@ -678,11 +680,11 @@ The Context object provides the following capabilities: - `ctx.request_id` - Unique ID for the current request - `ctx.client_id` - Client ID if available -- `ctx.mcp_server` - Access to the MCPServer server instance (see [MCPServer Properties](#mcpserver-properties)) +- `ctx.fastmcp` - Access to the FastMCP server instance (see [FastMCP 
Properties](#fastmcp-properties)) - `ctx.session` - Access to the underlying session for advanced communication (see [Session Properties and Methods](#session-properties-and-methods)) - `ctx.request_context` - Access to request-specific data and lifespan resources (see [Request Context Properties](#request-context-properties)) - `await ctx.debug(message)` - Send debug log message -- `await ctx.info(message)` - Send info log message +- `await ctx.info(message)` - Send info log message - `await ctx.warning(message)` - Send warning log message - `await ctx.error(message)` - Send error log message - `await ctx.log(level, message, logger_name=None)` - Send log with custom level @@ -692,10 +694,10 @@ The Context object provides the following capabilities: ```python -from mcp.server.mcpserver import Context, MCPServer +from mcp.server.fastmcp import Context, FastMCP from mcp.server.session import ServerSession -mcp = MCPServer(name="Progress Example") +mcp = FastMCP(name="Progress Example") @mcp.tool() @@ -726,8 +728,9 @@ Client usage: ```python -"""cd to the `examples/snippets` directory and run: -uv run completion-client +""" +cd to the `examples/snippets` directory and run: + uv run completion-client """ import asyncio @@ -755,8 +758,8 @@ async def run(): # List available resource templates templates = await session.list_resource_templates() print("Available resource templates:") - for template in templates.resource_templates: - print(f" - {template.uri_template}") + for template in templates.resourceTemplates: + print(f" - {template.uriTemplate}") # List available prompts prompts = await session.list_prompts() @@ -765,20 +768,20 @@ async def run(): print(f" - {prompt.name}") # Complete resource template arguments - if templates.resource_templates: - template = templates.resource_templates[0] - print(f"\nCompleting arguments for resource template: {template.uri_template}") + if templates.resourceTemplates: + template = templates.resourceTemplates[0] + 
print(f"\nCompleting arguments for resource template: {template.uriTemplate}") # Complete without context result = await session.complete( - ref=ResourceTemplateReference(type="ref/resource", uri=template.uri_template), + ref=ResourceTemplateReference(type="ref/resource", uri=template.uriTemplate), argument={"name": "owner", "value": "model"}, ) print(f"Completions for 'owner' starting with 'model': {result.completion.values}") # Complete with context - repo suggestions based on owner result = await session.complete( - ref=ResourceTemplateReference(type="ref/resource", uri=template.uri_template), + ref=ResourceTemplateReference(type="ref/resource", uri=template.uriTemplate), argument={"name": "repo", "value": ""}, context_arguments={"owner": "modelcontextprotocol"}, ) @@ -824,12 +827,12 @@ import uuid from pydantic import BaseModel, Field -from mcp.server.mcpserver import Context, MCPServer +from mcp.server.fastmcp import Context, FastMCP from mcp.server.session import ServerSession from mcp.shared.exceptions import UrlElicitationRequiredError from mcp.types import ElicitRequestURLParams -mcp = MCPServer(name="Elicitation Example") +mcp = FastMCP(name="Elicitation Example") class BookingPreferences(BaseModel): @@ -908,7 +911,7 @@ async def connect_service(service_name: str, ctx: Context[ServerSession, None]) mode="url", message=f"Authorization required to connect to {service_name}", url=f"https://{service_name}.example.com/oauth/authorize?elicit={elicitation_id}", - elicitation_id=elicitation_id, + elicitationId=elicitation_id, ) ] ) @@ -931,11 +934,11 @@ Tools can interact with LLMs through sampling (generating text): ```python -from mcp.server.mcpserver import Context, MCPServer +from mcp.server.fastmcp import Context, FastMCP from mcp.server.session import ServerSession from mcp.types import SamplingMessage, TextContent -mcp = MCPServer(name="Sampling Example") +mcp = FastMCP(name="Sampling Example") @mcp.tool() @@ -968,10 +971,10 @@ Tools can send logs and 
notifications through the context: ```python -from mcp.server.mcpserver import Context, MCPServer +from mcp.server.fastmcp import Context, FastMCP from mcp.server.session import ServerSession -mcp = MCPServer(name="Notifications Example") +mcp = FastMCP(name="Notifications Example") @mcp.tool() @@ -1002,15 +1005,16 @@ MCP servers can use authentication by providing an implementation of the `TokenV ```python -"""Run from the repository root: -uv run examples/snippets/servers/oauth_server.py +""" +Run from the repository root: + uv run examples/snippets/servers/oauth_server.py """ from pydantic import AnyHttpUrl from mcp.server.auth.provider import AccessToken, TokenVerifier from mcp.server.auth.settings import AuthSettings -from mcp.server.mcpserver import MCPServer +from mcp.server.fastmcp import FastMCP class SimpleTokenVerifier(TokenVerifier): @@ -1020,9 +1024,10 @@ class SimpleTokenVerifier(TokenVerifier): pass # This is where you would implement actual token validation -# Create MCPServer instance as a Resource Server -mcp = MCPServer( +# Create FastMCP instance as a Resource Server +mcp = FastMCP( "Weather Service", + json_response=True, # Token verifier for authentication token_verifier=SimpleTokenVerifier(), # Auth settings for RFC 9728 Protected Resource Metadata @@ -1046,7 +1051,7 @@ async def get_weather(city: str = "London") -> dict[str, str]: if __name__ == "__main__": - mcp.run(transport="streamable-http", json_response=True) + mcp.run(transport="streamable-http") ``` _Full example: [examples/snippets/servers/oauth_server.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/oauth_server.py)_ @@ -1062,19 +1067,19 @@ For a complete example with separate Authorization Server and Resource Server im See [TokenVerifier](src/mcp/server/auth/provider.py) for more details on implementing token validation. 
-### MCPServer Properties +### FastMCP Properties -The MCPServer server instance accessible via `ctx.mcp_server` provides access to server configuration and metadata: +The FastMCP server instance accessible via `ctx.fastmcp` provides access to server configuration and metadata: -- `ctx.mcp_server.name` - The server's name as defined during initialization -- `ctx.mcp_server.instructions` - Server instructions/description provided to clients -- `ctx.mcp_server.website_url` - Optional website URL for the server -- `ctx.mcp_server.icons` - Optional list of icons for UI display -- `ctx.mcp_server.settings` - Complete server configuration object containing: +- `ctx.fastmcp.name` - The server's name as defined during initialization +- `ctx.fastmcp.instructions` - Server instructions/description provided to clients +- `ctx.fastmcp.website_url` - Optional website URL for the server +- `ctx.fastmcp.icons` - Optional list of icons for UI display +- `ctx.fastmcp.settings` - Complete server configuration object containing: - `debug` - Debug mode flag - `log_level` - Current logging level - `host` and `port` - Server network configuration - - `sse_path`, `streamable_http_path` - Transport paths + - `mount_path`, `sse_path`, `streamable_http_path` - Transport paths - `stateless_http` - Whether the server operates in stateless mode - And other configuration options @@ -1083,12 +1088,12 @@ The MCPServer server instance accessible via `ctx.mcp_server` provides access to def server_info(ctx: Context) -> dict: """Get information about the current server.""" return { - "name": ctx.mcp_server.name, - "instructions": ctx.mcp_server.instructions, - "debug_mode": ctx.mcp_server.settings.debug, - "log_level": ctx.mcp_server.settings.log_level, - "host": ctx.mcp_server.settings.host, - "port": ctx.mcp_server.settings.port, + "name": ctx.fastmcp.name, + "instructions": ctx.fastmcp.instructions, + "debug_mode": ctx.fastmcp.settings.debug, + "log_level": ctx.fastmcp.settings.log_level, + 
"host": ctx.fastmcp.settings.host, + "port": ctx.fastmcp.settings.port, } ``` @@ -1110,13 +1115,13 @@ The session object accessible via `ctx.session` provides advanced control over c async def notify_data_update(resource_uri: str, ctx: Context) -> str: """Update data and notify clients of the change.""" # Perform data update logic here - + # Notify clients that this specific resource changed await ctx.session.send_resource_updated(AnyUrl(resource_uri)) - + # If this affects the overall resource list, notify about that too await ctx.session.send_resource_list_changed() - + return f"Updated {resource_uri} and notified clients" ``` @@ -1145,11 +1150,11 @@ def query_with_config(query: str, ctx: Context) -> str: """Execute a query using shared database and configuration.""" # Access typed lifespan context app_ctx: AppContext = ctx.request_context.lifespan_context - + # Use shared resources connection = app_ctx.db settings = app_ctx.config - + # Execute query with configuration result = connection.execute(query, timeout=settings.query_timeout) return str(result) @@ -1203,9 +1208,9 @@ cd to the `examples/snippets` directory and run: python servers/direct_execution.py """ -from mcp.server.mcpserver import MCPServer +from mcp.server.fastmcp import FastMCP -mcp = MCPServer("My App") +mcp = FastMCP("My App") @mcp.tool() @@ -1234,7 +1239,7 @@ python servers/direct_execution.py uv run mcp run servers/direct_execution.py ``` -Note that `uv run mcp run` or `uv run mcp dev` only supports server using MCPServer and not the low-level server variant. +Note that `uv run mcp run` or `uv run mcp dev` only supports server using FastMCP and not the low-level server variant. 
### Streamable HTTP Transport @@ -1242,13 +1247,22 @@ Note that `uv run mcp run` or `uv run mcp dev` only supports server using MCPSer ```python -"""Run from the repository root: -uv run examples/snippets/servers/streamable_config.py +""" +Run from the repository root: + uv run examples/snippets/servers/streamable_config.py """ -from mcp.server.mcpserver import MCPServer +from mcp.server.fastmcp import FastMCP -mcp = MCPServer("StatelessServer") +# Stateless server with JSON responses (recommended) +mcp = FastMCP("StatelessServer", stateless_http=True, json_response=True) + +# Other configuration options: +# Stateless server with SSE streaming responses +# mcp = FastMCP("StatelessServer", stateless_http=True) + +# Stateful server with session persistence +# mcp = FastMCP("StatefulServer") # Add a simple tool to demonstrate the server @@ -1259,28 +1273,20 @@ def greet(name: str = "World") -> str: # Run server with streamable_http transport -# Transport-specific options (stateless_http, json_response) are passed to run() if __name__ == "__main__": - # Stateless server with JSON responses (recommended) - mcp.run(transport="streamable-http", stateless_http=True, json_response=True) - - # Other configuration options: - # Stateless server with SSE streaming responses - # mcp.run(transport="streamable-http", stateless_http=True) - - # Stateful server with session persistence - # mcp.run(transport="streamable-http") + mcp.run(transport="streamable-http") ``` _Full example: [examples/snippets/servers/streamable_config.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/streamable_config.py)_ -You can mount multiple MCPServer servers in a Starlette application: +You can mount multiple FastMCP servers in a Starlette application: ```python -"""Run from the repository root: -uvicorn examples.snippets.servers.streamable_starlette_mount:app --reload +""" +Run from the repository root: + uvicorn 
examples.snippets.servers.streamable_starlette_mount:app --reload """ import contextlib @@ -1288,10 +1294,10 @@ import contextlib from starlette.applications import Starlette from starlette.routing import Mount -from mcp.server.mcpserver import MCPServer +from mcp.server.fastmcp import FastMCP # Create the Echo server -echo_mcp = MCPServer(name="EchoServer") +echo_mcp = FastMCP(name="EchoServer", stateless_http=True, json_response=True) @echo_mcp.tool() @@ -1301,7 +1307,7 @@ def echo(message: str) -> str: # Create the Math server -math_mcp = MCPServer(name="MathServer") +math_mcp = FastMCP(name="MathServer", stateless_http=True, json_response=True) @math_mcp.tool() @@ -1322,16 +1328,16 @@ async def lifespan(app: Starlette): # Create the Starlette app and mount the MCP servers app = Starlette( routes=[ - Mount("/echo", echo_mcp.streamable_http_app(stateless_http=True, json_response=True)), - Mount("/math", math_mcp.streamable_http_app(stateless_http=True, json_response=True)), + Mount("/echo", echo_mcp.streamable_http_app()), + Mount("/math", math_mcp.streamable_http_app()), ], lifespan=lifespan, ) # Note: Clients connect to http://localhost:8000/echo/mcp and http://localhost:8000/math/mcp # To mount at the root of each path (e.g., /echo instead of /echo/mcp): -# echo_mcp.streamable_http_app(streamable_http_path="/", stateless_http=True, json_response=True) -# math_mcp.streamable_http_app(streamable_http_path="/", stateless_http=True, json_response=True) +# echo_mcp.settings.streamable_http_path = "/" +# math_mcp.settings.streamable_http_path = "/" ``` _Full example: [examples/snippets/servers/streamable_starlette_mount.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/streamable_starlette_mount.py)_ @@ -1389,7 +1395,8 @@ You can mount the StreamableHTTP server to an existing ASGI server using the `st ```python -"""Basic example showing how to mount StreamableHTTP server in Starlette. 
+""" +Basic example showing how to mount StreamableHTTP server in Starlette. Run from the repository root: uvicorn examples.snippets.servers.streamable_http_basic_mounting:app --reload @@ -1400,10 +1407,10 @@ import contextlib from starlette.applications import Starlette from starlette.routing import Mount -from mcp.server.mcpserver import MCPServer +from mcp.server.fastmcp import FastMCP # Create MCP server -mcp = MCPServer("My App") +mcp = FastMCP("My App", json_response=True) @mcp.tool() @@ -1420,10 +1427,9 @@ async def lifespan(app: Starlette): # Mount the StreamableHTTP server to the existing ASGI server -# Transport-specific options are passed to streamable_http_app() app = Starlette( routes=[ - Mount("/", app=mcp.streamable_http_app(json_response=True)), + Mount("/", app=mcp.streamable_http_app()), ], lifespan=lifespan, ) @@ -1436,7 +1442,8 @@ _Full example: [examples/snippets/servers/streamable_http_basic_mounting.py](htt ```python -"""Example showing how to mount StreamableHTTP server using Host-based routing. +""" +Example showing how to mount StreamableHTTP server using Host-based routing. 
Run from the repository root: uvicorn examples.snippets.servers.streamable_http_host_mounting:app --reload @@ -1447,10 +1454,10 @@ import contextlib from starlette.applications import Starlette from starlette.routing import Host -from mcp.server.mcpserver import MCPServer +from mcp.server.fastmcp import FastMCP # Create MCP server -mcp = MCPServer("MCP Host App") +mcp = FastMCP("MCP Host App", json_response=True) @mcp.tool() @@ -1467,10 +1474,9 @@ async def lifespan(app: Starlette): # Mount using Host-based routing -# Transport-specific options are passed to streamable_http_app() app = Starlette( routes=[ - Host("mcp.acme.corp", app=mcp.streamable_http_app(json_response=True)), + Host("mcp.acme.corp", app=mcp.streamable_http_app()), ], lifespan=lifespan, ) @@ -1483,7 +1489,8 @@ _Full example: [examples/snippets/servers/streamable_http_host_mounting.py](http ```python -"""Example showing how to mount multiple StreamableHTTP servers with path configuration. +""" +Example showing how to mount multiple StreamableHTTP servers with path configuration. 
Run from the repository root: uvicorn examples.snippets.servers.streamable_http_multiple_servers:app --reload @@ -1494,11 +1501,11 @@ import contextlib from starlette.applications import Starlette from starlette.routing import Mount -from mcp.server.mcpserver import MCPServer +from mcp.server.fastmcp import FastMCP # Create multiple MCP servers -api_mcp = MCPServer("API Server") -chat_mcp = MCPServer("Chat Server") +api_mcp = FastMCP("API Server", json_response=True) +chat_mcp = FastMCP("Chat Server", json_response=True) @api_mcp.tool() @@ -1513,6 +1520,12 @@ def send_message(message: str) -> str: return f"Message sent: {message}" +# Configure servers to mount at the root of each path +# This means endpoints will be at /api and /chat instead of /api/mcp and /chat/mcp +api_mcp.settings.streamable_http_path = "/" +chat_mcp.settings.streamable_http_path = "/" + + # Create a combined lifespan to manage both session managers @contextlib.asynccontextmanager async def lifespan(app: Starlette): @@ -1522,12 +1535,11 @@ async def lifespan(app: Starlette): yield -# Mount the servers with transport-specific options passed to streamable_http_app() -# streamable_http_path="/" means endpoints will be at /api and /chat instead of /api/mcp and /chat/mcp +# Mount the servers app = Starlette( routes=[ - Mount("/api", app=api_mcp.streamable_http_app(json_response=True, streamable_http_path="/")), - Mount("/chat", app=chat_mcp.streamable_http_app(json_response=True, streamable_http_path="/")), + Mount("/api", app=api_mcp.streamable_http_app()), + Mount("/chat", app=chat_mcp.streamable_http_app()), ], lifespan=lifespan, ) @@ -1540,7 +1552,8 @@ _Full example: [examples/snippets/servers/streamable_http_multiple_servers.py](h ```python -"""Example showing path configuration when mounting MCPServer. +""" +Example showing path configuration during FastMCP initialization. 
Run from the repository root: uvicorn examples.snippets.servers.streamable_http_path_config:app --reload @@ -1549,10 +1562,15 @@ Run from the repository root: from starlette.applications import Starlette from starlette.routing import Mount -from mcp.server.mcpserver import MCPServer +from mcp.server.fastmcp import FastMCP -# Create a simple MCPServer server -mcp_at_root = MCPServer("My Server") +# Configure streamable_http_path during initialization +# This server will mount at the root of wherever it's mounted +mcp_at_root = FastMCP( + "My Server", + json_response=True, + streamable_http_path="/", +) @mcp_at_root.tool() @@ -1561,14 +1579,10 @@ def process_data(data: str) -> str: return f"Processed: {data}" -# Mount at /process with streamable_http_path="/" so the endpoint is /process (not /process/mcp) -# Transport-specific options like json_response are passed to streamable_http_app() +# Mount at /process - endpoints will be at /process instead of /process/mcp app = Starlette( routes=[ - Mount( - "/process", - app=mcp_at_root.streamable_http_app(json_response=True, streamable_http_path="/"), - ), + Mount("/process", app=mcp_at_root.streamable_http_app()), ] ) ``` @@ -1585,10 +1599,10 @@ You can mount the SSE server to an existing ASGI server using the `sse_app` meth ```python from starlette.applications import Starlette from starlette.routing import Mount, Host -from mcp.server.mcpserver import MCPServer +from mcp.server.fastmcp import FastMCP -mcp = MCPServer("My App") +mcp = FastMCP("My App") # Mount the SSE server to the existing ASGI server app = Starlette( @@ -1601,28 +1615,41 @@ app = Starlette( app.router.routes.append(Host('mcp.acme.corp', app=mcp.sse_app())) ``` -You can also mount multiple MCP servers at different sub-paths. 
The SSE transport automatically detects the mount path via ASGI's `root_path` mechanism, so message endpoints are correctly routed: +When mounting multiple MCP servers under different paths, you can configure the mount path in several ways: ```python from starlette.applications import Starlette from starlette.routing import Mount -from mcp.server.mcpserver import MCPServer +from mcp.server.fastmcp import FastMCP # Create multiple MCP servers -github_mcp = MCPServer("GitHub API") -browser_mcp = MCPServer("Browser") -search_mcp = MCPServer("Search") +github_mcp = FastMCP("GitHub API") +browser_mcp = FastMCP("Browser") +curl_mcp = FastMCP("Curl") +search_mcp = FastMCP("Search") + +# Method 1: Configure mount paths via settings (recommended for persistent configuration) +github_mcp.settings.mount_path = "/github" +browser_mcp.settings.mount_path = "/browser" -# Mount each server at its own sub-path -# The SSE transport automatically uses ASGI's root_path to construct -# the correct message endpoint (e.g., /github/messages/, /browser/messages/) +# Method 2: Pass mount path directly to sse_app (preferred for ad-hoc mounting) +# This approach doesn't modify the server's settings permanently + +# Create Starlette app with multiple mounted servers app = Starlette( routes=[ + # Using settings-based configuration Mount("/github", app=github_mcp.sse_app()), Mount("/browser", app=browser_mcp.sse_app()), - Mount("/search", app=search_mcp.sse_app()), + # Using direct mount path parameter + Mount("/curl", app=curl_mcp.sse_app("/curl")), + Mount("/search", app=search_mcp.sse_app("/search")), ] ) + +# Method 3: For direct execution, you can also pass the mount path to run() +if __name__ == "__main__": + search_mcp.run(transport="sse", mount_path="/search") ``` For more information on mounting applications in Starlette, see the [Starlette documentation](https://www.starlette.io/routing/#submounting-routes). 
@@ -1635,8 +1662,9 @@ For more control, you can use the low-level server implementation directly. This ```python -"""Run from the repository root: -uv run examples/snippets/servers/lowlevel/lifespan.py +""" +Run from the repository root: + uv run examples/snippets/servers/lowlevel/lifespan.py """ from collections.abc import AsyncIterator @@ -1644,7 +1672,7 @@ from contextlib import asynccontextmanager from typing import Any import mcp.server.stdio -from mcp import types +import mcp.types as types from mcp.server.lowlevel import NotificationOptions, Server from mcp.server.models import InitializationOptions @@ -1692,7 +1720,7 @@ async def handle_list_tools() -> list[types.Tool]: types.Tool( name="query_db", description="Query the database", - input_schema={ + inputSchema={ "type": "object", "properties": {"query": {"type": "string", "description": "SQL query to execute"}}, "required": ["query"], @@ -1751,14 +1779,15 @@ The lifespan API provides: ```python -"""Run from the repository root: +""" +Run from the repository root: uv run examples/snippets/servers/lowlevel/basic.py """ import asyncio import mcp.server.stdio -from mcp import types +import mcp.types as types from mcp.server.lowlevel import NotificationOptions, Server from mcp.server.models import InitializationOptions @@ -1829,15 +1858,16 @@ The low-level server supports structured output for tools, allowing you to retur ```python -"""Run from the repository root: -uv run examples/snippets/servers/lowlevel/structured_output.py +""" +Run from the repository root: + uv run examples/snippets/servers/lowlevel/structured_output.py """ import asyncio from typing import Any import mcp.server.stdio -from mcp import types +import mcp.types as types from mcp.server.lowlevel import NotificationOptions, Server from mcp.server.models import InitializationOptions @@ -1851,12 +1881,12 @@ async def list_tools() -> list[types.Tool]: types.Tool( name="get_weather", description="Get current weather for a city", - input_schema={ 
+ inputSchema={ "type": "object", "properties": {"city": {"type": "string", "description": "City name"}}, "required": ["city"], }, - output_schema={ + outputSchema={ "type": "object", "properties": { "temperature": {"type": "number", "description": "Temperature in Celsius"}, @@ -1931,15 +1961,16 @@ For full control over the response including the `_meta` field (for passing data ```python -"""Run from the repository root: -uv run examples/snippets/servers/lowlevel/direct_call_tool_result.py +""" +Run from the repository root: + uv run examples/snippets/servers/lowlevel/direct_call_tool_result.py """ import asyncio from typing import Any import mcp.server.stdio -from mcp import types +import mcp.types as types from mcp.server.lowlevel import NotificationOptions, Server from mcp.server.models import InitializationOptions @@ -1953,7 +1984,7 @@ async def list_tools() -> list[types.Tool]: types.Tool( name="advanced_tool", description="Tool with full control including _meta field", - input_schema={ + inputSchema={ "type": "object", "properties": {"message": {"type": "string"}}, "required": ["message"], @@ -1969,7 +2000,7 @@ async def handle_call_tool(name: str, arguments: dict[str, Any]) -> types.CallTo message = str(arguments.get("message", "")) return types.CallToolResult( content=[types.TextContent(type="text", text=f"Processed: {message}")], - structured_content={"result": "success", "message": message}, + structuredContent={"result": "success", "message": message}, _meta={"hidden": "data for client applications only"}, ) @@ -2010,9 +2041,13 @@ For servers that need to handle large datasets, the low-level server provides pa ```python -"""Example of implementing pagination with MCP server decorators.""" +""" +Example of implementing pagination with MCP server decorators. 
+""" + +from pydantic import AnyUrl -from mcp import types +import mcp.types as types from mcp.server.lowlevel import Server # Initialize the server @@ -2036,14 +2071,14 @@ async def list_resources_paginated(request: types.ListResourcesRequest) -> types # Get page of resources page_items = [ - types.Resource(uri=f"resource://items/{item}", name=item, description=f"Description for {item}") + types.Resource(uri=AnyUrl(f"resource://items/{item}"), name=item, description=f"Description for {item}") for item in ITEMS[start:end] ] # Determine next cursor next_cursor = str(end) if end < len(ITEMS) else None - return types.ListResourcesResult(resources=page_items, next_cursor=next_cursor) + return types.ListResourcesResult(resources=page_items, nextCursor=next_cursor) ``` _Full example: [examples/snippets/servers/pagination_example.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/pagination_example.py)_ @@ -2053,7 +2088,9 @@ _Full example: [examples/snippets/servers/pagination_example.py](https://github. ```python -"""Example of consuming paginated MCP endpoints from a client.""" +""" +Example of consuming paginated MCP endpoints from a client. 
+""" import asyncio @@ -2082,8 +2119,8 @@ async def list_all_resources() -> None: print(f"Fetched {len(result.resources)} resources") # Check if there are more pages - if result.next_cursor: - cursor = result.next_cursor + if result.nextCursor: + cursor = result.nextCursor else: break @@ -2112,28 +2149,31 @@ The SDK provides a high-level client interface for connecting to MCP servers usi ```python -"""cd to the `examples/snippets/clients` directory and run: -uv run client +""" +cd to the `examples/snippets/clients` directory and run: + uv run client """ import asyncio import os +from pydantic import AnyUrl + from mcp import ClientSession, StdioServerParameters, types -from mcp.client.context import ClientRequestContext from mcp.client.stdio import stdio_client +from mcp.shared.context import RequestContext # Create server parameters for stdio connection server_params = StdioServerParameters( command="uv", # Using uv to run the server - args=["run", "server", "mcpserver_quickstart", "stdio"], # We're already in snippets dir + args=["run", "server", "fastmcp_quickstart", "stdio"], # We're already in snippets dir env={"UV_INDEX": os.environ.get("UV_INDEX", "")}, ) # Optional: create a sampling callback async def handle_sampling_message( - context: ClientRequestContext, params: types.CreateMessageRequestParams + context: RequestContext[ClientSession, None], params: types.CreateMessageRequestParams ) -> types.CreateMessageResult: print(f"Sampling request: {params.messages}") return types.CreateMessageResult( @@ -2143,7 +2183,7 @@ async def handle_sampling_message( text="Hello, world! 
from model", ), model="gpt-3.5-turbo", - stop_reason="endTurn", + stopReason="endTurn", ) @@ -2157,7 +2197,7 @@ async def run(): prompts = await session.list_prompts() print(f"Available prompts: {[p.name for p in prompts.prompts]}") - # Get a prompt (greet_user prompt from mcpserver_quickstart) + # Get a prompt (greet_user prompt from fastmcp_quickstart) if prompts.prompts: prompt = await session.get_prompt("greet_user", arguments={"name": "Alice", "style": "friendly"}) print(f"Prompt result: {prompt.messages[0].content}") @@ -2170,18 +2210,18 @@ async def run(): tools = await session.list_tools() print(f"Available tools: {[t.name for t in tools.tools]}") - # Read a resource (greeting resource from mcpserver_quickstart) - resource_content = await session.read_resource("greeting://World") + # Read a resource (greeting resource from fastmcp_quickstart) + resource_content = await session.read_resource(AnyUrl("greeting://World")) content_block = resource_content.contents[0] if isinstance(content_block, types.TextContent): print(f"Resource content: {content_block.text}") - # Call a tool (add tool from mcpserver_quickstart) + # Call a tool (add tool from fastmcp_quickstart) result = await session.call_tool("add", arguments={"a": 5, "b": 3}) result_unstructured = result.content[0] if isinstance(result_unstructured, types.TextContent): print(f"Tool result: {result_unstructured.text}") - result_structured = result.structured_content + result_structured = result.structuredContent print(f"Structured tool result: {result_structured}") @@ -2201,8 +2241,9 @@ Clients can also connect using [Streamable HTTP transport](https://modelcontextp ```python -"""Run from the repository root: -uv run examples/snippets/clients/streamable_basic.py +""" +Run from the repository root: + uv run examples/snippets/clients/streamable_basic.py """ import asyncio @@ -2240,8 +2281,9 @@ When building MCP clients, the SDK provides utilities to help display human-read ```python -"""cd to the 
`examples/snippets` directory and run: -uv run display-utilities-client +""" +cd to the `examples/snippets` directory and run: + uv run display-utilities-client """ import asyncio @@ -2254,7 +2296,7 @@ from mcp.shared.metadata_utils import get_display_name # Create server parameters for stdio connection server_params = StdioServerParameters( command="uv", # Using uv to run the server - args=["run", "server", "mcpserver_quickstart", "stdio"], + args=["run", "server", "fastmcp_quickstart", "stdio"], env={"UV_INDEX": os.environ.get("UV_INDEX", "")}, ) @@ -2280,7 +2322,7 @@ async def display_resources(session: ClientSession): print(f"Resource: {display_name} ({resource.uri})") templates_response = await session.list_resource_templates() - for template in templates_response.resource_templates: + for template in templates_response.resourceTemplates: display_name = get_display_name(template) print(f"Resource Template: {display_name}") @@ -2324,7 +2366,8 @@ The SDK includes [authorization support](https://modelcontextprotocol.io/specifi ```python -"""Before running, specify running MCP RS server URL. +""" +Before running, specify running MCP RS server URL. To spin up RS server locally, see examples/servers/simple-auth/README.md From 28a766f817811dca3fef919e928cb55611096a9a Mon Sep 17 00:00:00 2001 From: Max Isbey <224885523+maxisbey@users.noreply.github.com> Date: Thu, 12 Feb 2026 16:11:08 +0000 Subject: [PATCH 2/3] ci: add pre-commit hook to prevent README.md changes README.md is frozen at v1 docs. The hook fails with a clear message directing contributors to edit README.v2.md instead. 
--- .pre-commit-config.yaml | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml index 03a8ae038..f4e6630f1 100644 --- a/.pre-commit-config.yaml +++ b/.pre-commit-config.yaml @@ -55,6 +55,11 @@ repos: language: system files: ^(pyproject\.toml|uv\.lock)$ pass_filenames: false + - id: readme-v1-frozen + name: README.md is frozen (v1 docs) + entry: README.md is frozen at v1. Edit README.v2.md instead. + language: fail + files: ^README\.md$ - id: readme-snippets name: Check README snippets are up to date entry: uv run --frozen python scripts/update_readme_snippets.py --check From 5e7077cf74913be749b76598aad601f55e0e3575 Mon Sep 17 00:00:00 2001 From: Max Isbey <224885523+maxisbey@users.noreply.github.com> Date: Thu, 12 Feb 2026 16:12:51 +0000 Subject: [PATCH 3/3] Update .pre-commit-config.yaml Co-authored-by: Marcelo Trylesinski --- .pre-commit-config.yaml | 1 + 1 file changed, 1 insertion(+) diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml index f4e6630f1..42c12fded 100644 --- a/.pre-commit-config.yaml +++ b/.pre-commit-config.yaml @@ -55,6 +55,7 @@ repos: language: system files: ^(pyproject\.toml|uv\.lock)$ pass_filenames: false + # TODO(Max): Drop this in v2. - id: readme-v1-frozen name: README.md is frozen (v1 docs) entry: README.md is frozen at v1. Edit README.v2.md instead.