Tools: MCP, LLM Agents
Topics: Model Context Protocol, LLM tools
In this workshop, we'll provide an overview of how to work with the Model Context Protocol (MCP), a model-agnostic protocol that has been rapidly adopted by many LLM-powered applications. We'll develop a sales assistant that can retrieve, analyze, and visualize a company's sales data to answer queries. MCP was developed by Anthropic to establish a unified framework for providing LLMs with tools, resources, and templates in a lightweight package. Branded as the USB-C for LLMs, the protocol allows developers to build plugins for AI software that can be connected to any client, regardless of the client's implementation or the model it uses.
Throughout this tutorial, we will develop an MCP server and client that can be used to turn any model into a business assistant. We will equip the server with tools, resources, and templates that allow the client to extract information from the company's sales database (a `sales.json` file), draw conclusions from it, and render plots. We will see that the server is compatible with any client, regardless of the client's model or implementation. You should test the tasks with both LLM providers (OpenAI and Gemini).
Time: 30min
The goal of this task is to configure the MCP client to connect to a locally running server container using the stdio transport protocol. This simulates a lightweight, command-line interaction with the server.
You have a Docker image named `sales_server` that runs our MCP server. Your task is to update the client's `mcp.json` configuration file and complete the `connect_to_server` method in `client.py` to correctly initialize the `stdio` connection.
- Open the `mcp.json` file and replace its contents with the following configuration. This tells the client to use a `stdio` connection and run the server by executing a Docker command (adjust the host path in the `-v` mount to a workspace directory on your own machine):

  ```json
  {
    "llm_provider": "google-genai",
    "llm_model": "gemini-2.0-flash-001",
    "mcpServers": {
      "dockerized_server": {
        "type": "stdio",
        "command": "docker",
        "args": [
          "run", "-i", "--rm", "--init",
          "-e", "DOCKER_CONTAINER=true",
          "-v", "/Users/danielgomez/mcp/workspace:/root/mcp/workspace",
          "sales_server"
        ]
      }
    }
  }
  ```
- Navigate to the `client.py` file and complete the code within the `## homework:start` and `## homework:end` blocks inside the `connect_to_server` method for the `"stdio"` transport. You'll need to create `StdioServerParameters`, initialize the `stdio_client`, and then create a new `ClientSession` (see the sketch after this list).
- After completing the code, run the client to test the connection. You should be able to interact with the server and its tools.
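For reference, here is a minimal sketch of what the `stdio` branch could look like. It assumes the client stores its connections on an `AsyncExitStack` and keeps the active session in `self.session`; the names and surrounding scaffold in the actual `client.py` may differ, so adapt accordingly.

```python
# Sketch only: the class structure, attribute names, and server_config shape
# are assumptions based on the mcp.json above, not the exact workshop scaffold.
from contextlib import AsyncExitStack

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


class MCPClient:
    def __init__(self) -> None:
        self.exit_stack = AsyncExitStack()
        self.session: ClientSession | None = None

    async def connect_to_server(self, server_config: dict) -> None:
        # server_config is the "dockerized_server" entry from mcp.json.
        if server_config["type"] == "stdio":
            ## homework:start
            # Describe how to launch the server process ("docker run ... sales_server").
            server_params = StdioServerParameters(
                command=server_config["command"],
                args=server_config["args"],
            )
            # Open the stdio transport and build a session over its read/write streams.
            read_stream, write_stream = await self.exit_stack.enter_async_context(
                stdio_client(server_params)
            )
            self.session = await self.exit_stack.enter_async_context(
                ClientSession(read_stream, write_stream)
            )
            # Perform the MCP initialization handshake before issuing any requests.
            await self.session.initialize()
            ## homework:end
```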
Time: 30min
Now, let's deploy the server as a long-running HTTP service in a Docker container. The client, running on your local machine, will connect to this container over a network port. This is a more typical production scenario.
- First, run the `sales_server` container in the background, mapping its internal port 8050 to the host's port 8050:

  ```bash
  docker run -p 8050:8050 --rm -d sales_server
  ```
- Next, modify the `mcp.json` file to switch the server type to `"http"`. You will need to define the server's `url` (e.g., `http://localhost:8050`) within the configuration. Refer to the `HTTPServerConfig` schema for the correct structure (an example is sketched after this list).
- Finally, navigate back to the `client.py` file and complete the code within the `## homework:start` and `## homework:end` blocks for the `"http"` transport. You will use `streamablehttp_client` to create the client session (see the sketch after this list).
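One possible `mcp.json` for this task, mirroring the structure used in the Docker Compose task below; the authoritative field list is whatever `HTTPServerConfig` defines in `schemas/models.py`:

```json
{
  "llm_provider": "google-genai",
  "llm_model": "gemini-2.0-flash-001",
  "mcpServers": {
    "dockerized_http_server": {
      "type": "http",
      "url": "http://localhost:8050"
    }
  }
}
```

And a minimal sketch of the `"http"` branch of `connect_to_server`, continuing the hypothetical scaffold from the stdio sketch above. Depending on how the server mounts its streamable HTTP endpoint, the URL may need a path suffix such as `/mcp`; check the server implementation.

```python
# Sketch only: reuses the assumed self.exit_stack / self.session attributes.
from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client


class MCPClient:
    ...  # __init__ and the "stdio" branch as sketched in the previous task

    async def connect_to_server(self, server_config: dict) -> None:
        if server_config["type"] == "http":
            ## homework:start
            # Open the streamable HTTP transport against the configured URL.
            read_stream, write_stream, _ = await self.exit_stack.enter_async_context(
                streamablehttp_client(server_config["url"])
            )
            self.session = await self.exit_stack.enter_async_context(
                ClientSession(read_stream, write_stream)
            )
            # Perform the MCP initialization handshake.
            await self.session.initialize()
            ## homework:end
```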
Time: 1h
In this final task, you will deploy both the client and server as containerized services using Docker Compose. This represents a fully containerized, microservice-based architecture.
- Ensure you have a `docker-compose.yml` file that defines services for both the client and the server (a sketch of such a file appears after these steps).
- Update your `mcp.json` file to configure the client to connect to the server using the service name defined in `docker-compose.yml`. For example, if your server service is named `server`, the URL should be `http://server:8050`:

  ```json
  {
    "llm_provider": "google-genai",
    "llm_model": "gemini-2.0-flash-001",
    "mcpServers": {
      "dockerized_http_server": {
        "type": "http",
        "url": "http://server:8050"
      }
    }
  }
  ```
- Launch the services using the following command:

  ```bash
  docker-compose up -d
  ```
- Since the client is now running inside a container, you must enter its container to interact with the LLM. Use the following command to execute a bash shell inside the client container:

  ```bash
  docker-compose exec -it client bash
  ```
- From within the client container's shell, run the client application to start the interactive session:

  ```bash
  python client.py
  ```
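The steps above assume a `docker-compose.yml` along the following lines. This is a hypothetical sketch: image names, build contexts, and any volumes depend on how your project is laid out, so adapt it to the files actually provided in the workshop.

```yaml
# Sketch only: the service names match the mcp.json example above; everything
# else is an assumption about the project layout.
services:
  server:
    image: sales_server
    ports:
      - "8050:8050"

  client:
    build: .           # assumes a Dockerfile for the client in the project root
    depends_on:
      - server
    tty: true          # keep the container alive so you can exec into it
```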
- Experiment with the LLM. Ask it questions that require using the tools, such as "What was the total revenue in May 2024?" or "Can you show me a plot of the revenue for the first quarter of 2024?".
- Explore the `schemas/models.py` file to understand how `pydantic` models are used to validate the configuration files for both the client and server (a sketch of what such models might look like follows this list).
- Create a more robust client that supports multiple concurrent server connections.
- Implement a tool that allows the LLM to write new data to the sales database.
- Explore the use of other LLMs like OpenAI's GPT models by changing the `llm_provider` in the `mcp.json` configuration.
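As a rough orientation for that exploration, configuration validation with `pydantic` could be structured as below. This is a hypothetical sketch: the actual class and field names live in `schemas/models.py`, and only `type`, `command`, `args`, and `url` are taken from the `mcp.json` examples above.

```python
# Sketch only: the real models are defined in schemas/models.py.
from typing import Literal

from pydantic import BaseModel


class StdioServerConfig(BaseModel):
    type: Literal["stdio"]
    command: str               # e.g. "docker"
    args: list[str] = []       # e.g. ["run", "-i", "--rm", ..., "sales_server"]


class HTTPServerConfig(BaseModel):
    type: Literal["http"]
    url: str                   # e.g. "http://localhost:8050" or "http://server:8050"


class ClientConfig(BaseModel):
    llm_provider: str          # e.g. "google-genai"
    llm_model: str             # e.g. "gemini-2.0-flash-001"
    mcpServers: dict[str, StdioServerConfig | HTTPServerConfig]
```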