
Simple Python MCP-Server


A Python implementation of a Model Context Protocol (MCP) server built with FastMCP and FastAPI.


Overview

This repository is based on the official MCP Python SDK, with the objective of creating an MCP server in Python using FastMCP. The project has the following goals:

  • Make the Model Context Protocol (MCP) easier to understand and work with, starting from the fundamentals and in an accessible manner
  • Provide a testing platform for MCP clients
  • Integrate the server with FastAPI and expose it as a streamable HTTP service, keeping a clear separation between the service and the client

The project focuses on implementing a simple MCP server served through FastAPI over streamable HTTP, which is the recommended approach for creating MCP servers. For other implementation and serving options, consult the official documentation.

Transport

Streamable HTTP Transport

Note: Streamable HTTP transport is superseding SSE transport for production deployments.

from mcp.server.fastmcp import FastMCP
# Stateless server (no session persistence)
mcp = FastMCP("StatelessServer", stateless_http=True)

You can mount multiple FastMCP servers in a single FastAPI application:

# echo.py
from mcp.server.fastmcp import FastMCP
mcp = FastMCP(name="EchoServer", stateless_http=True)

@mcp.tool(description="A simple echo tool")
def echo(message: str) -> str:
    return f"Echo: {message}"
# math.py
from mcp.server.fastmcp import FastMCP
mcp = FastMCP(name="MathServer", stateless_http=True)

@mcp.tool(description="A simple add tool")
def add_two(n: int) -> int:
    return n + 2

# fast_api.py
import contextlib

from fastapi import FastAPI

# echo.py and math.py are assumed to live alongside this file
import echo
import math  # note: this local math.py shadows the standard library math module


# Create a combined lifespan to manage both session managers
@contextlib.asynccontextmanager
async def lifespan(app: FastAPI):
    async with contextlib.AsyncExitStack() as stack:
        await stack.enter_async_context(echo.mcp.session_manager.run())
        await stack.enter_async_context(math.mcp.session_manager.run())
        yield


app = FastAPI(lifespan=lifespan)
app.mount("/echo", echo.mcp.streamable_http_app())
app.mount("/math", math.mcp.streamable_http_app())

Deployment

Local Deployment

To set up the development environment, execute the following commands:

1. Install project dependencies

pip install -r requirements.txt

2. Start the server in development mode

uvicorn src.run:app --host 0.0.0.0 --port 8000 --reload

3. Verify proper server startup

To confirm that the server is running correctly, open a web browser and navigate to http://0.0.0.0:8000. You should be redirected to a help page that explains how to use the server.
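
The same check can be made from the command line; the response should show the redirect to the help page (the URL assumes the local deployment above):

curl -i http://0.0.0.0:8000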

4. Run tests

python tests/run.py
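
Independently of the bundled tests, a quick manual check can be done with the official MCP Python SDK client. This is only a sketch; the URL assumes the echo server from the Transport section is mounted at /echo on the local deployment.

import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

async def main():
    # Open a streamable HTTP connection to the server's MCP endpoint
    async with streamablehttp_client("http://0.0.0.0:8000/echo/mcp") as (read_stream, write_stream, _):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])
            result = await session.call_tool("echo", {"message": "hello"})
            print(result.content)

asyncio.run(main())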

Docker Deployment

The project can be run using Docker Compose:

docker compose -f docker-compose.yml up -d --build
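
The docker-compose.yml ships with the repository; purely as an illustration, a compose file for this kind of service typically looks like the following (the service name, build context, and command here are assumptions, not the repository's actual file):

services:
  mcp-server:
    build: .
    ports:
      - "8000:8000"
    command: uvicorn src.run:app --host 0.0.0.0 --port 8000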

Use Case

To verify that this server works end to end, it is recommended to install the mcp-llm-client package and create a project based on it by following the steps outlined below:

⚠️ Configuration Note: To use this chat with an LLM, an OpenAI API key is required. If you do not have one, you can create one by following the instructions on the official OpenAI page.

1. Server Deployment

Deploy this server according to the instructions provided in the Deployment section. This step is essential, as the server must be running either locally or on a cloud server. Once the server is deployed, it can be used through the MCP client.

2. Clone a template from GitHub

Clone a template from GitHub that provides a simple starting point for using the MCP client:

# clone repo
git clone https://github.com/rb58853/template_mcp_llm_client.git

# change to project dir
cd template_mcp_llm_client

# install dependencies
pip install -r requirements.txt

3. Add Server to Configuration

In the cloned project, locate the config.json file in the root directory and add the following configuration inside the mcp_servers object:

{
    "mcp_servers": {
        "example_mcp_server": {
            "http": "your_http_path (e.g., http://0.0.0.0:8000/server_name/mcp)",
            "name": "server_name (optional)",
            "description": "server_description (optional)"
        }
    }
}

💡 Hint: Once the server is deployed, you can access its root URL to obtain help. This section provides the exact configuration needed to add the server to the MCP client. For example, opening http://0.0.0.0:8000 in a browser will redirect to the help page.
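
For example, if the echo server from the Transport section is mounted at /echo on a locally deployed instance, the entry might look like this (the key, name, and description are illustrative):

{
    "mcp_servers": {
        "echo_server": {
            "http": "http://0.0.0.0:8000/echo/mcp",
            "name": "EchoServer",
            "description": "Echo tools served by simple-mcp-server"
        }
    }
}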

4. Execution

Follow the instructions in the readme.md file of the cloned project to run a local chat using this MCP server. Typically, this is done by running the following command in the console:

# Run the app (after setting your OpenAI API key and adding servers to config.json)
python3 main.py

References

For more detailed information on using this MCP client, please refer to its official repository.

License

MIT License. See license.
