Commit a53a0f4

MCP integration and observability blog

1 parent e87041b commit a53a0f4

2 files changed: +237 −0 lines

docs/blog/posts/lumo.md

Lines changed: 152 additions & 0 deletions
---
draft: false
date: 2025-05-01
authors:
- sonam
slug: mcp-agent
title: Easy MCP integration with our agentic framework, LUMO
---

## Building a Server with Lumo: A Step-by-Step Guide to MCP Integration

Lumo, a powerful Rust-based agent framework, offers seamless integration with the Model Context Protocol (MCP) and remarkable flexibility in implementation. While Lumo can be used as a library, a CLI tool, or a server, this guide focuses specifically on deploying Lumo in server mode for MCP integration.
<!-- more -->

## What is MCP?

The Model Context Protocol (MCP) is a standardized communication protocol that allows LLM applications to interact with external tools and data sources efficiently. MCP enables modular applications to communicate through a structured protocol, making it easier to build scalable, maintainable systems where components can be swapped or upgraded without disrupting the entire architecture.

## Architecture of MCP

MCP follows a client-server architecture with clearly defined roles:

- **Hosts**: LLM applications (like Claude Desktop or integrated development environments) that initiate connections
- **Clients**: Components that maintain one-to-one connections with servers inside the host application
- **Servers**: Systems that provide context, tools, and prompts to clients

This architecture is built around three main concepts:

1. **Resources**: Similar to GET endpoints, resources load information into the LLM's context
2. **Tools**: Functioning like POST endpoints, tools execute code or produce side effects
3. **Prompts**: Reusable templates that define interaction patterns for LLM communications

![MCP architecture diagram](https://royal-hygienic-522.notion.site/image/attachment%3Ac462d75f-ac1f-460b-b686-8bd3827a4f6d%3Aimage.png?table=block&id=1f981b6a-6bbe-805d-8ce5-e6b1bf4697ce&spaceId=f1bf59bf-2c3f-4b4d-a5f9-109d041ef45a&width=1270&userId=&cache=v2)
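To make the three concepts concrete, here is a small framework-agnostic sketch in Python. The registry, decorator names, and example tools are all hypothetical illustrations of the resource/tool/prompt split, not the Lumo or MCP SDK API:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict

# Hypothetical registry illustrating MCP's three building blocks;
# a real server would use an MCP SDK instead.
@dataclass
class ToyMCPServer:
    resources: Dict[str, Callable[[], str]] = field(default_factory=dict)  # GET-like: load context
    tools: Dict[str, Callable[..., str]] = field(default_factory=dict)     # POST-like: side effects
    prompts: Dict[str, str] = field(default_factory=dict)                  # reusable templates

    def resource(self, name: str):
        def register(fn):
            self.resources[name] = fn
            return fn
        return register

    def tool(self, name: str):
        def register(fn):
            self.tools[name] = fn
            return fn
        return register

server = ToyMCPServer()

@server.resource("readme")
def readme() -> str:
    return "Lumo is a Rust-based agent framework."

@server.tool("add")
def add(a: int, b: int) -> str:
    return str(a + b)

server.prompts["summarize"] = "Summarize the following text: {text}"

print(server.tools["add"](2, 3))   # -> "5"
print(server.resources["readme"]())
```

A client would discover these registrations over the protocol and decide which tool or resource to use for a given task.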
32+
33+
## Setting Up a Lumo Server with MCP Integration
34+
35+
## 🖥️ Server Usage
36+
37+
Lumo can also be run as a server, providing a REST API for agent interactions.
38+
39+
### Starting the Server
40+
41+
```
42+
cargo install --git https://github.com/StarlightSearch/lumo.git --branch new-updates --features mcp lumo-server
43+
44+
```
#### Using Binary

```bash
# Start the server (default port: 8080)
lumo-server
```

#### Using Docker

```bash
# Build the image
docker build -f server.Dockerfile -t lumo-server .

# Run the container with required API keys
docker run -p 8080:8080 \
  -e OPENAI_API_KEY=your-openai-key \
  -e GOOGLE_API_KEY=your-google-key \
  -e GROQ_API_KEY=your-groq-key \
  -e ANTHROPIC_API_KEY=your-anthropic-key \
  -e EXA_API_KEY=your-exa-key \
  lumo-server
```

You can also use the pre-built image:

```bash
docker pull akshayballal95/lumo-server:latest
```
### Server Configuration

You can configure multiple servers in the configuration file for MCP agent usage. The configuration file location varies by operating system:

```
Linux: ~/.config/lumo-cli/servers.yaml
macOS: ~/Library/Application Support/lumo-cli/servers.yaml
Windows: %APPDATA%\Roaming\lumo\lumo-cli\servers.yaml
```
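If you need to locate this file programmatically, a small helper can reproduce the lookup above. This is a sketch: the paths come from the list above, and the `%APPDATA%` fallback is an assumption, not Lumo's own resolution logic:

```python
import os
import platform
from pathlib import Path

def servers_yaml_path() -> Path:
    """Return the expected location of Lumo's servers.yaml for this OS."""
    system = platform.system()
    home = Path.home()
    if system == "Linux":
        return home / ".config" / "lumo-cli" / "servers.yaml"
    if system == "Darwin":  # macOS
        return home / "Library" / "Application Support" / "lumo-cli" / "servers.yaml"
    # Windows: fall back to a conventional default if %APPDATA% is unset
    appdata = os.environ.get("APPDATA", str(home / "AppData" / "Roaming"))
    return Path(appdata) / "lumo" / "lumo-cli" / "servers.yaml"

print(servers_yaml_path())
```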

Example config:

```yaml
exa-search:
  command: npx
  args:
    - "exa-mcp-server"
  env:
    EXA_API_KEY: "your-api-key"

fetch:
  command: uvx
  args:
    - "mcp_server_fetch"

system_prompt: |-
  You are a powerful agentic AI assistant...
```

### API Endpoints

#### Health Check

```bash
curl http://localhost:8080/health_check
```

#### Run Task

```bash
curl -X POST http://localhost:8080/run \
  -H "Content-Type: application/json" \
  -d '{
    "task": "What are the files in the folder?",
    "model": "gpt-4o-mini",
    "base_url": "https://api.openai.com/v1/chat/completions",
    "tools": ["DuckDuckGo", "VisitWebsite"],
    "max_steps": 5,
    "agent_type": "mcp"
  }'
```
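The same request can be issued from Python with only the standard library. This is a sketch: the endpoint, field names, and values mirror the curl example above, and the actual call is commented out because it assumes a Lumo server running locally:

```python
import json
import urllib.request

def build_run_payload(task: str, model: str, base_url: str, **optional) -> dict:
    """Assemble the /run request body; optional keys include tools, max_steps, agent_type, history."""
    payload = {"task": task, "model": model, "base_url": base_url}
    payload.update(optional)
    return payload

payload = build_run_payload(
    task="What are the files in the folder?",
    model="gpt-4o-mini",
    base_url="https://api.openai.com/v1/chat/completions",
    tools=["DuckDuckGo", "VisitWebsite"],
    max_steps=5,
    agent_type="mcp",
)

req = urllib.request.Request(
    "http://localhost:8080/run",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# Uncomment when a Lumo server is running locally:
# with urllib.request.urlopen(req) as resp:
#     print(resp.read().decode())
```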

#### Request Body Parameters

- `task` (required): The task to execute
- `model` (required): Model ID (e.g., "gpt-4", "qwen2.5", "gemini-2.0-flash")
- `base_url` (required): Base URL for the API
- `tools` (optional): Array of tool names to use
- `max_steps` (optional): Maximum number of steps to take
- `agent_type` (optional): Type of agent to use ("function-calling" or "mcp")
- `history` (optional): Array of previous messages for context

---
## MCP vs. Traditional Function-Calling

While both MCP and traditional function calling allow LLMs to interact with external tools, they differ in an important way. In traditional function calling, you define each function yourself, and the LLM chooses the right one for the given job; its main purpose is to translate natural language into JSON-formatted function calls. MCP, by contrast, is a protocol that standardizes how resources and tools are exposed to the LLM. The LLM still decides which tool to invoke, but because every call follows the same standard, MCP-based integrations are far more scalable.
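The contrast shows up in the request shapes. With traditional function calling, each integration invents its own call format; with MCP, every server answers the same standardized methods. Both shapes below are illustrative sketches, not exact wire formats:

```python
# Traditional function calling: the developer defines an ad-hoc schema per
# function, and the LLM emits a matching JSON call.
function_call = {
    "name": "get_weather",
    "arguments": {"city": "Amsterdam"},
}

# MCP: servers speak standardized JSON-RPC methods (e.g. "tools/list",
# "tools/call"), so any client can discover and invoke tools uniformly.
mcp_call = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "get_weather", "arguments": {"city": "Amsterdam"}},
}

# The arguments are identical; only the envelope is standardized.
assert function_call["arguments"] == mcp_call["params"]["arguments"]
```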

### Benefits of Lumo over other agentic systems

1. MCP agent support for multi-agent coordination
2. Multi-provider support: easily use OpenAI, Google, or Anthropic models
3. Asynchronous tool calling
4. Built-in observability with Langfuse

Open a discussion if you have any questions, and give the repo a star.

## Conclusion

As agents evolve, standardized protocols like MCP will become increasingly important for enabling sophisticated AI applications. By providing a common language for AI systems to interact with external tools and data sources, MCP helps bridge the gap between powerful language models and the specific capabilities needed for real-world applications.

For developers working with AI, understanding and adopting MCP offers a more sustainable, future-proof approach to building AI integrations compared to platform-specific function-calling implementations.

docs/blog/posts/tracing_in_lumo.md

Lines changed: 85 additions & 0 deletions
---
draft: false
date: 2025-05-01
authors:
- sonam
slug: observability
title: Easy observability in our agentic framework, LUMO
---

In the rapidly evolving landscape of AI agents, particularly those employing Large Language Models (LLMs), observability and tracing have emerged as fundamental requirements rather than optional features. As agents become more complex and handle increasingly critical tasks, understanding their inner workings, debugging issues, and establishing accountability becomes paramount.

## Understanding Observability in AI Agents

Observability refers to the ability to understand the internal state of a system through its external outputs. In AI agents, comprehensive observability encompasses:

1. **Decision Visibility**: Transparency into how and why an agent made specific decisions
2. **State Tracking**: Monitoring the agent's internal state as it evolves throughout task execution
3. **Resource Utilization**: Measuring computational resources, API calls, and external interactions
4. **Performance Metrics**: Capturing response times, completion rates, and quality indicators

## The Multi-Faceted Value of Tracing and Observability

### 1. Debugging and Troubleshooting

AI agents, especially those leveraging LLMs, operate with inherent complexity and sometimes unpredictability. Without proper observability:

- **Silent Failures** become common, where agents fail without clear indications of what went wrong
- **Root Cause Analysis** becomes nearly impossible as there's no trace of the execution path

### 2. Performance Optimization

Observability provides crucial insights for optimizing agent performance:

- **Caching Opportunities**: Recognize repeated patterns that could benefit from caching

### 3. Security and Compliance

As agents gain more capabilities and autonomy, security becomes increasingly critical:

- **Audit Trails**: Maintain comprehensive logs of all agent actions for compliance and security reviews
- **Prompt Injection Detection**: Identify potential attempts to manipulate the agent's behavior

### 4. User Trust and Transparency

For end-users working with AI agents, transparency builds trust:

- **Action Justification**: Provide clear explanations for why the agent took specific actions
- **Confidence Indicators**: Show reliability metrics for different types of responses

### 5. Continuous Improvement

Observability creates a foundation for systematic improvement:

- **Pattern Recognition**: Identify standard failure modes or suboptimal behaviors
- **A/B Testing**: Compare different agent configurations with detailed performance metrics
## Implementing Effective Observability in Lumo
58+
59+
For Tracing and Observability
60+
61+
```
62+
vim ~/.bashrc
63+
```
64+
Add the three keys from Langfuse:
65+
66+
```
67+
LANGFUSE_PUBLIC_KEY_DEV=your-dev-public-key
68+
LANGFUSE_SECRET_KEY_DEV=your-dev-secret-key
69+
LANGFUSE_HOST_DEV=http://localhost:3000 # Or your dev Langfuse instance URL
70+
```
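Lumo reads these variables internally, but if you want to sanity-check your environment, a short script can do it. This is a sketch using the variable names from the block above; the config dict layout is an assumption for illustration:

```python
import os

# Variable names as in the snippet above; empty defaults signal "not set".
langfuse_cfg = {
    "public_key": os.environ.get("LANGFUSE_PUBLIC_KEY_DEV", ""),
    "secret_key": os.environ.get("LANGFUSE_SECRET_KEY_DEV", ""),
    "host": os.environ.get("LANGFUSE_HOST_DEV", "http://localhost:3000"),
}

if not (langfuse_cfg["public_key"] and langfuse_cfg["secret_key"]):
    print("Langfuse keys not set; tracing will not reach the dashboard.")
else:
    print(f"Tracing to {langfuse_cfg['host']}")
```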
71+
72+
Start lumo-cli or lumo server then press:
73+
74+
```
75+
CTRL + C
76+
```
77+
And it’s added to the dashboard
78+
79+
![image.png](attachment:2e738a1a-0d90-4eca-80a6-23539ac38d43:image.png)

## Conclusion

Observability and tracing are no longer optional components for serious AI agent implementations. They form the foundation for reliable, secure, and continuously improving systems. As agents take on more responsibility and autonomy, the ability to observe, understand, and explain their behavior becomes not just a technical requirement but an ethical imperative.

Organizations building or deploying AI agents should invest early in robust observability infrastructure, treating it as a core capability rather than an afterthought. The insights gained will improve current systems and also inform the development of better, more trustworthy agents in the future.