A hands-on workshop demonstrating MCP (Model Context Protocol) best practices using Mastra. Build a "Customer Analytics" MCP server that showcases workflow-oriented tools, authentication patterns, and multi-system integration.
- Minimize surface area: 2 tools that handle many use cases
- Workflow-shaped tools: Capabilities (explore schema, run queries) vs endpoints (getUsers)
- Model compatibility: Test with multiple models, ensure consistent behavior
- Exploration: Let AI discover capabilities through resources
- Guardrails: Safe, deterministic, read-only by default
Customer Analytics MCP Server featuring:
- 📊 2 Tools: `compute_account_health` (multi-system workflow), `run_sql` (database queries)
- 📚 1 Resource: `schema://main` (discovery & exploration)
- 🔄 Workflow Patterns: External API integration, data fusion, scoring algorithms
- 🛡️ Built-in Safety: SELECT-only, implicit LIMIT, parsed queries, API rate limits
- 🔐 Authentication: Transparent role-based access control (admin/user/readonly)
# Required
node >= 20.9.0
pnpm >= 8.0.0
# Optional: OpenAI API key for demo
export OPENAI_API_KEY="your-key-here"
# Clone and install
git clone <this-repo>
cd mcp-server-workshop-best-practices
pnpm install
# Terminal 1: Start HTTP MCP server with real auth
pnpm mcp-http-server
# Terminal 2: Run HTTP workshop demo
pnpm workshop-demo-http
# Server runs on http://localhost:3001 with endpoints:
# - /mcp (MCP endpoint with auth)
# - /health (health check)
// src/mastra/mcp/server.ts
import { MCPServer } from "@mastra/mcp";
import { computeAccountHealthTool, runSqlTool } from "../tools";
import { resourceHandlers } from "../resources"; // adjust the path to where your resource handlers live
const server = new MCPServer({
name: "customer-analytics",
version: "0.5.0",
description: "Customer analytics MCP server with multi-system workflows",
tools: {
compute_account_health: computeAccountHealthTool,
run_sql: runSqlTool,
},
resources: resourceHandlers,
});
Key Features:
- ✅ Multi-system workflows: `compute_account_health` combines database, external APIs, and business logic
- ✅ Resource-based discovery: Schema exploration via the `schema://main` resource
- ✅ Transparent authentication: MCP-compliant auth context passed to tools
- ✅ Role-based access: admin/user/readonly permissions enforced per tool call
- ✅ Guardrails built-in: SELECT-only, auto-LIMIT, permission checking
- ✅ Error teaching: Structured, helpful error messages
// src/workshop-demo.ts
import { openai } from "@ai-sdk/openai";
import { MCPClient } from "@mastra/mcp";
import { Agent } from "@mastra/core/agent";
const mcpClient = new MCPClient({
servers: {
customerAnalytics: {
command: "pnpm",
args: ["mcp-server"],
env: { DEMO_API_KEY: "api_key_user_456" }, // Auth context
},
},
});
const agent = new Agent({
  name: "Customer Analytics Agent",
  instructions: "You analyze customer health using the provided MCP tools.",
  model: openai("gpt-4o-mini"),
});

// Use MCP tools with agent
const task = "Analyze customer health for high-value accounts";
const response = await agent.generate(task, {
  toolsets: await mcpClient.getToolsets(),
});
The workshop includes a production-ready HTTP server with real authentication:
// src/mastra/mcp/http-server.ts
import http from "node:http";
import { server } from "./server";
import type { AuthInfo, DemoUserInfo } from "./utils";
import { validateJWT, validateApiKey } from "./utils"; // assumed helpers alongside the types

const PORT = 3001;
// Authentication middleware
function authenticateRequest(req: http.IncomingMessage): AuthInfo | null {
const authHeader = req.headers.authorization;
// Support JWT Bearer tokens
if (authHeader?.startsWith("Bearer ")) {
return validateJWT(authHeader.slice(7));
}
// Support API keys
if (authHeader?.startsWith("ApiKey ")) {
return validateApiKey(authHeader.slice(7));
}
return null;
}
// HTTP request handler with MCP-compliant auth
const handleRequest = async (req: http.IncomingMessage, res: http.ServerResponse) => {
const authInfo = authenticateRequest(req);
if (!authInfo) {
res.writeHead(401, { "Content-Type": "application/json" });
res.end(
JSON.stringify({
error: "Authentication required",
message: "Please provide a valid Authorization header",
}),
);
return;
}
// Attach MCP-compliant auth to request
(req as any).auth = authInfo;
await server.startHTTP({
url: new URL(`http://localhost:${PORT}`),
httpPath: "/mcp",
req,
res,
});
};
Key Features:
- ✅ JWT & API Key Support: Multiple authentication methods
- ✅ MCP-Compliant Auth: Uses the official `AuthInfo` type from the MCP specification
- ✅ Real Auth Context: Authentication info passed to tools via `options.extra.authInfo`
- ✅ Session Management: Unique session IDs for each connection
- ✅ CORS Support: Web client compatibility
- ✅ Structured Errors: Clear auth failure responses
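The API-key branch of the middleware can be sketched as a simple lookup from demo keys to roles. The `AuthInfo` shape, `DEMO_KEYS` table, and `clientId` format below are illustrative assumptions, not the workshop's actual `utils` implementation:

```typescript
// Hypothetical sketch of validateApiKey: map demo API keys to roles
type Role = "admin" | "user" | "readonly";

interface AuthInfo {
  token: string;
  clientId: string;
  scopes: Role[];
}

// Assumed mapping, mirroring the test credentials listed below
const DEMO_KEYS: Record<string, Role> = {
  "sk-admin-123456789": "admin",
  "sk-user-987654321": "user",
  "sk-readonly-555666777": "readonly",
};

function validateApiKey(key: string): AuthInfo | null {
  const role = DEMO_KEYS[key];
  if (!role) return null; // unknown key -> caller responds with 401
  return { token: key, clientId: `demo-${role}`, scopes: [role] };
}
```

Returning `null` instead of throwing keeps the middleware's control flow simple: any falsy result becomes a structured 401 response.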
Test Credentials:
# Health check (no auth required)
curl localhost:3001/health
# MCP endpoint (requires auth)
curl -H "Authorization: ApiKey sk-user-987654321" \
-H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","method":"tools/list","id":1}' \
localhost:3001/mcp
# Available credentials:
# API Keys: sk-admin-123456789, sk-user-987654321, sk-readonly-555666777
# JWT Tokens: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.admin, eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.user
Purpose: Demonstrates complex workflow patterns that combine multiple data sources for business insights.
Data Flow:
Internal DB (orders) → External NPS API → External Support API → Risk Scoring → Actionable Insights
Key Features:
- Multi-source data fusion: Order history + NPS scores + support tickets
- Business logic: Configurable scoring weights (recency 30%, momentum 30%, satisfaction 25%, reliability 15%)
- Segmentation: Filter by customer value, activity patterns, risk levels
- External API simulation: Realistic delays, missing data, enterprise customer patterns
- Role-based limits: Readonly users limited to 10 accounts
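The weighted scoring above can be sketched as a plain function. The component normalization (0-100 scale) and the tier thresholds are assumptions for illustration; only the weights come from the text:

```typescript
// Sketch of the configurable scoring weights (recency 30%, momentum 30%,
// satisfaction 25%, reliability 15%). Component scales are assumed 0-100.
interface HealthComponents {
  recency: number;      // higher = ordered more recently
  momentum: number;     // spend trend vs the prior window
  satisfaction: number; // normalized NPS
  reliability: number;  // fewer support escalations = higher
}

const WEIGHTS = { recency: 0.3, momentum: 0.3, satisfaction: 0.25, reliability: 0.15 };

function healthScore(c: HealthComponents): number {
  const raw =
    c.recency * WEIGHTS.recency +
    c.momentum * WEIGHTS.momentum +
    c.satisfaction * WEIGHTS.satisfaction +
    c.reliability * WEIGHTS.reliability;
  return Math.round(raw);
}

// Bucket scores into the tiers used in the output; thresholds are assumed
function tier(score: number): "good" | "watch" | "at_risk" {
  if (score >= 70) return "good";
  if (score >= 40) return "watch";
  return "at_risk";
}
```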
Input Parameters:
- `segment`: "all" | "inactive" | "highValue"
- `windowDays`: Analysis period (default 90 days)
- `limit`: Maximum accounts to analyze (default 50)
- `includeReasons`: Include risk factor explanations
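Applying the defaults above could look like the following sketch; the `segment` and `includeReasons` fallbacks are assumptions, since only the `windowDays` and `limit` defaults are documented:

```typescript
// Hypothetical defaults applier for the compute_account_health input
interface HealthInput {
  segment?: "all" | "inactive" | "highValue";
  windowDays?: number;
  limit?: number;
  includeReasons?: boolean;
}

function withDefaults(input: HealthInput) {
  return {
    segment: input.segment ?? "all",          // assumed default
    windowDays: input.windowDays ?? 90,       // documented: 90-day window
    limit: input.limit ?? 50,                 // documented: 50 accounts
    includeReasons: input.includeReasons ?? false, // assumed default
  };
}
```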
Output Structure:
{
"accounts": [
{
"accountId": "1",
"name": "Alice Johnson",
"healthScore": 85,
"tier": "good",
"metrics": { "lastOrderDays": 5, "spendDeltaPct": 15.2, "nps": 72 },
"reasons": []
}
],
"summary": {
"totalAnalyzed": 20,
"segmentBreakdown": { "good": 15, "watch": 3, "at_risk": 2 },
"avgHealthScore": 75,
"externalDataCoverage": { "npsAvailable": 18, "supportDataAvailable": 20 }
}
}
Purpose: Safe, controlled database access with automatic guardrails.
Key Features:
- SELECT-only validation
- Automatic LIMIT injection
- Role-based row limiting
- Permission checking based on query content
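The first two guardrails above can be sketched as a single guard function. A real implementation would use a proper SQL parser (the doc says queries are parsed); the regex checks, `MAX_ROWS` value, and function name here are simplified assumptions:

```typescript
// Sketch of SELECT-only validation plus automatic LIMIT injection
const MAX_ROWS = 100; // assumed row cap

function guardQuery(sql: string, maxRows: number = MAX_ROWS): string {
  const trimmed = sql.trim().replace(/;+$/, "");
  // 1. SELECT-only: reject non-SELECT and multi-statement input
  if (!/^select\b/i.test(trimmed) || trimmed.includes(";")) {
    // structured error text teaches the model what IS allowed
    throw new Error("Only SELECT queries are allowed");
  }
  // 2. Implicit limit: append a LIMIT clause when the query has none
  if (!/\blimit\s+\d+\b/i.test(trimmed)) {
    return `${trimmed} LIMIT ${maxRows}`;
  }
  return trimmed;
}

console.log(guardQuery("SELECT * FROM orders"));
// -> "SELECT * FROM orders LIMIT 100"
```

Throwing a descriptive error rather than silently failing is the "error teaching" pattern: the model reads the message and retries with a valid SELECT.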
Run the same task with multiple models:
Task: "Analyze customer health for high-value accounts and show database schema"
Expected flow:
1. Schema resource → understand available data structure
2. compute_account_health → multi-system customer analysis
3. run_sql → supplementary database queries if needed
4. Valid JSON response with insights and recommendations
Testing Matrix:
- ✅ Right tool chosen? (compute_account_health for analysis, run_sql for queries)
- ✅ Args valid? (proper segment filtering, reasonable limits)
- ✅ Result shape? (structured health scores, actionable insights)
- ✅ Authentication? (role-based access control working)
- Schema Exploration: "What data is available?" (via schema resource)
- Customer Health Analysis: "Show me at-risk customers" (compute_account_health)
- Database Queries: "Show me recent high-value orders" (run_sql)
- Multi-system Integration: "Combine order data with support tickets" (workflow tool)
- Guardrails Test: "Try to delete users" (safely fails with helpful error)
- Authentication Test: Different access levels (admin/user/readonly)
- Capability-oriented: `run_sql` (workflow) vs `getUserById` (endpoint)
- Self-documenting: Descriptions include examples
- Composable: One general tool > many specific ones
- Read-only default: Only SELECT operations allowed
- Implicit limits: Auto-add LIMIT clauses
- Structured errors: "Only SELECT queries allowed" (teaches the model)
- Cross-model testing: GPT-4, GPT-3.5, etc.
- Consistent behavior: Same tools, same args, same output shape
- Graceful degradation: Weaker models still succeed
- Resource-driven: Schema exposed via MCP resources
- Progressive disclosure: Start simple, add complexity
- Documentation: Examples and usage notes included
Add a third tool to show workflow composition:
// Exercise skeleton — fill in the queries yourself
import { createTool } from "@mastra/core/tools";
import { z } from "zod";

const cityReportTool = createTool({
  id: "make_city_report",
  description: "Generate a comprehensive city analysis report",
  inputSchema: z.object({ limit: z.number().optional() }),
  execute: async ({ context }) => {
    // Internally runs 2-3 SQL queries
    // Returns: { city, userCount, totalSpend, avgOrderValue }[]
  },
});
Replace the in-memory data with real database connections:
// Update src/mastra/mcp/server.ts
import Database from "better-sqlite3";
const db = new Database("path/to/your/database.db");
// or with PostgreSQL
import postgres from "postgres";
const sql = postgres("postgresql://...");
// Update executeQuery function to use real SQL
Add models to the compatibility test:
// src/workshop-demo.ts
const MODELS_TO_TEST = [
{ name: "GPT-4o Mini", model: openai("gpt-4o-mini") },
{ name: "GPT-4o", model: openai("gpt-4o") },
{ name: "Claude Sonnet", model: anthropic("claude-3-sonnet-20240229") },
// Add your preferred models
];
- Tools are workflows, not APIs - Design for capabilities
- Fewer is better - 2 general tools > 10 specific ones
- Safety first - Guardrails built into every tool
- Test compatibility - Multiple models, same behavior
- Enable exploration - Resources help discovery
Found an issue or want to improve the workshop?
- Report bugs via GitHub issues
- Submit improvements via pull requests
- Share your workshop experiences
Built with ❤️ using Mastra - The TypeScript AI Framework