A Slack bot that integrates with Model Context Protocol (MCP) servers to provide AI-powered assistance with access to various tools and services. It summarizes the requirements in a Slack thread; the output is a Merge Request with analytics.
The bot is designed with four core components:
- SlackMCPBot: Core class managing Slack events and message processing
- LLMClient: Handles communication with OpenAI API
- Server: Manages communication with MCP servers
- Tool: Represents available tools from MCP servers
- 🔌 MCP Integration: Connect to multiple MCP servers and discover their tools
- 🤖 AI-Powered Responses: Uses OpenAI GPT models with function calling
- 💬 Slack Integration: Responds to mentions and direct messages in Socket Mode
- 🧵 Thread Support: Maintains conversation context within Slack threads
- 📝 Comprehensive Logging: Structured logging with Winston
- 🛑 Graceful Shutdown: Properly disconnects from all services on shutdown
- 🛠️ Tool Execution: Executes tools from MCP servers based on conversation context
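The graceful-shutdown behavior can be sketched as follows. The `servers` array shape and the `disconnect()` method name are illustrative assumptions, not the project's actual API:

```javascript
// Sketch of a graceful-shutdown routine: disconnect every MCP server,
// tolerating individual failures so one bad server doesn't block the rest.
// The server object shape and disconnect() method are illustrative assumptions.
async function shutdownAll(servers) {
  const results = await Promise.allSettled(
    servers.map((server) => server.disconnect())
  );
  const failed = results.filter((r) => r.status === "rejected").length;
  return { total: servers.length, failed };
}

// Typically wired to process signals, e.g.:
// process.on("SIGINT", async () => { await shutdownAll(servers); process.exit(0); });
```

Using `Promise.allSettled` (rather than `Promise.all`) ensures one failing disconnect does not abort the remaining cleanup.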
- Node.js 18.0.0 or higher
- A Slack app with Socket Mode enabled
- OpenAI API key
- MCP servers configured and accessible
1. Clone and install dependencies:

   ```bash
   npm install
   ```

2. Configure environment variables:

   ```bash
   cp env.example .env
   ```

   Edit `.env` with your configuration:

   ```env
   SLACK_BOT_TOKEN=xoxb-your-bot-token-here
   SLACK_APP_TOKEN=xapp-your-app-token-here
   SLACK_SIGNING_SECRET=your-signing-secret-here
   OPENAI_API_KEY=sk-your-openai-api-key-here
   ```

3. Configure MCP servers: edit `servers_config.json` to add your MCP servers:

   ```json
   {
     "mcpServers": {
       "mcp-gitlab": {
         "command": "uv",
         "args": [
           "--directory", "/path/to/mcp-gitlab",
           "run", "mcp-gitlab",
           "--gitlab-url", "https://your-gitlab.com",
           "--gitlab-token", "your-gitlab-token"
         ]
       }
     }
   }
   ```
1. Create a Slack App at https://api.slack.com/apps
2. Enable Socket Mode in your app settings
3. Add Bot Token Scopes:
   - `app_mentions:read`
   - `chat:write`
   - `im:read`
   - `im:write`
   - `channels:read`
   - `groups:read`
   - `mpim:read`
4. Subscribe to Bot Events:
   - `app_mention`
   - `message.im`
5. Install the app to your workspace
```bash
# Development mode
npm run dev

# Production mode
npm start
```
- Mention the bot in any channel:
  `@your-bot-name help me create a merge request`
- Direct message the bot:
  `What's the status of project X?`
- Use in threads for contextual conversations:
  - The bot maintains conversation history within threads
  - Tools are executed based on the conversation context
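Per-thread context like this is commonly kept in a map keyed by Slack's thread timestamp. A minimal sketch (names are illustrative, not the bot's actual internals):

```javascript
// Minimal per-thread conversation store, keyed by Slack's thread_ts.
// Messages in the same thread share history; a new thread starts fresh.
const conversations = new Map();

function addMessage(threadTs, role, content) {
  if (!conversations.has(threadTs)) conversations.set(threadTs, []);
  conversations.get(threadTs).push({ role, content });
  return conversations.get(threadTs);
}

// Two messages in one thread, one message in another:
addMessage("1700000000.000100", "user", "create an MR from feature-branch");
addMessage("1700000000.000100", "assistant", "On it.");
addMessage("1700000000.000200", "user", "unrelated question");
```

In practice the stored history is what gets replayed to the LLM on each new mention in the thread.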
The bot now features an enhanced View Details popup that provides comprehensive information about tool executions:
- Visual Tool Icons: Each tool displays with a contextual icon (🔧 for GitLab, 💬 for Slack, 📋 for JIRA, etc.)
- Execution Metadata: Shows timestamps, execution status, and performance information
- Enhanced Code Blocks: Properly formatted JSON, code, and text with syntax highlighting
- Structured Layout: Clear sections for parameters, results, and metadata
- Error Handling: Detailed error information with troubleshooting suggestions
- Batch Execution Summary: Overview of multiple tool executions with success/failure counts
| Icon | Tool Type | Description |
|---|---|---|
| 🦊 | GitLab | Repository management, MRs, pipelines |
| 📋 | JIRA | Task management, updates, transitions |
| 💬 | Slack | Messaging, channels, user management |
| 📚 | TechDocs | Documentation search and retrieval |
| 💭 | Think | AI reasoning and analysis |
| 🔧 | Generic | General-purpose tools |
The popup automatically formats different content types:
- JSON Objects: Properly indented with syntax highlighting
- Long Text: Multi-line code blocks for readability
- Thinking Results: Special formatting for AI reasoning
- Compact Mode: Summarized view for batch operations
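The content-type handling above can be sketched as a small formatter; the function name and heuristics here are illustrative assumptions, not the popup's actual implementation:

```javascript
// Formats a tool result for a Slack popup: objects become pretty-printed
// JSON code blocks, long or multi-line strings become code blocks, and
// short strings pass through unchanged. Heuristics are illustrative.
function formatResult(result) {
  const FENCE = "`".repeat(3); // Slack code-fence marker
  if (typeof result === "object" && result !== null) {
    return FENCE + "\n" + JSON.stringify(result, null, 2) + "\n" + FENCE;
  }
  const text = String(result);
  return text.includes("\n") || text.length > 80
    ? FENCE + "\n" + text + "\n" + FENCE
    : text;
}
```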
GitLab Integration (if the GitLab MCP server is configured):

```
@bot create a merge request from feature-branch to main
@bot show me recent pipelines for project myproject
@bot what are the failing jobs in the latest pipeline?
```

General Assistance:

```
@bot help me understand this error message: [paste error]
@bot summarize the recent activity in this project
```

Viewing Tool Details:
- Click the "View Details" button on any tool execution
- Get comprehensive information about parameters, results, and execution metadata
- Perfect for debugging, auditing, and understanding tool behavior
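The batch-execution summary shown in the popup can be computed with a simple reduction over execution records; the record shape (`status` field) is an assumption for illustration:

```javascript
// Summarizes a batch of tool executions into success/failure counts
// for the popup's overview line. The record shape is an assumption.
function summarizeBatch(executions) {
  const succeeded = executions.filter((e) => e.status === "success").length;
  return {
    total: executions.length,
    succeeded,
    failed: executions.length - succeeded,
  };
}
```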
SlackMCPBot
- Manages Slack app initialization and event handling
- Maintains conversation state per thread
- Coordinates between the LLM and MCP servers
- Handles graceful shutdown

LLMClient
- Wraps the OpenAI API with function calling support
- Manages conversation history and context
- Converts MCP tools to OpenAI function format
- Handles tool execution responses

Server
- Manages individual MCP server connections
- Spawns and communicates with MCP server processes
- Discovers and validates available tools
- Executes tool calls with proper error handling

Tool
- Represents tools available from MCP servers
- Validates tool arguments against schemas
- Converts to OpenAI function calling format
- Provides metadata for LLM context
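Converting an MCP tool description to the OpenAI function-calling format is largely a field mapping, since MCP tools already describe their arguments as JSON Schema. A sketch, assuming the common MCP tool shape (`name`, `description`, `inputSchema`); the class here is illustrative, not the project's actual `Tool`:

```javascript
// Sketch of a Tool wrapper: holds an MCP tool description and converts it
// to OpenAI's function-calling format. MCP's inputSchema is JSON Schema,
// which is what OpenAI expects under `parameters`.
class Tool {
  constructor({ name, description, inputSchema }) {
    this.name = name;
    this.description = description;
    this.inputSchema = inputSchema;
  }

  toOpenAIFormat() {
    return {
      type: "function",
      function: {
        name: this.name,
        description: this.description,
        parameters: this.inputSchema,
      },
    };
  }
}
```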
| Variable | Required | Description |
|---|---|---|
| `SLACK_BOT_TOKEN` | Yes | Slack bot token (starts with `xoxb-`) |
| `SLACK_APP_TOKEN` | Yes | Slack app token (starts with `xapp-`) |
| `SLACK_SIGNING_SECRET` | No | Slack signing secret for verification |
| `OPENAI_API_KEY` | Conditional | OpenAI API key (required if using OpenAI) |
| `ANTHROPIC_API_KEY` | Conditional | Anthropic API key (required if using Anthropic) |
| `LLM_MODEL` | No | Model to use for either provider (default: `gpt-4.1` for OpenAI, `claude-3-5-sonnet-20241022` for Anthropic) |
| `LLM_MAX_TOKENS` | No | Max tokens per response (default: 4000) |
| `LLM_TEMPERATURE` | No | Response randomness (default: 0.7) |
| `NODE_ENV` | No | Environment (development/production) |
| `LOG_LEVEL` | No | Logging level (default: info) |
The bot automatically selects the LLM provider based on which API key is provided:
- Anthropic Claude: used when only `ANTHROPIC_API_KEY` is provided
- OpenAI: used when only `OPENAI_API_KEY` is provided, or when both API keys are provided (OpenAI takes precedence)

At least one API key (either `OPENAI_API_KEY` or `ANTHROPIC_API_KEY`) must be provided.

Use `LLM_MODEL` to specify the model for either provider. If not specified, defaults are:
- OpenAI: `gpt-4.1`
- Anthropic: `claude-3-5-sonnet-20241022`
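The selection rule above reduces to a small function; a sketch using the defaults listed (the function name is illustrative):

```javascript
// Picks the LLM provider and default model from the available API keys,
// mirroring the precedence rule: OpenAI wins when both keys are set.
function selectProvider(env) {
  if (env.OPENAI_API_KEY) {
    return { provider: "openai", model: env.LLM_MODEL || "gpt-4.1" };
  }
  if (env.ANTHROPIC_API_KEY) {
    return {
      provider: "anthropic",
      model: env.LLM_MODEL || "claude-3-5-sonnet-20241022",
    };
  }
  throw new Error("Set OPENAI_API_KEY or ANTHROPIC_API_KEY");
}
```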
The `servers_config.json` file defines MCP servers:

```json
{
  "mcpServers": {
    "server-name": {
      "command": "executable-command",
      "args": ["arg1", "arg2"],
      "env": {
        "OPTIONAL_ENV_VAR": "value"
      }
    }
  }
}
```
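A config loader would typically validate this shape before spawning anything; a sketch under the schema above (the function name and error messages are illustrative):

```javascript
// Validates a parsed servers_config.json: every server entry needs a
// string `command`, and `args`, when present, must be an array.
// Returns the list of configured server names.
function validateServersConfig(config) {
  const servers = config.mcpServers;
  if (!servers || typeof servers !== "object") {
    throw new Error("servers_config.json must contain an mcpServers object");
  }
  for (const [name, srv] of Object.entries(servers)) {
    if (typeof srv.command !== "string") {
      throw new Error(`Server "${name}" is missing a command`);
    }
    if (srv.args !== undefined && !Array.isArray(srv.args)) {
      throw new Error(`Server "${name}": args must be an array`);
    }
  }
  return Object.keys(servers);
}
```

Failing fast here gives a clearer error than letting a malformed entry surface later as a failed process spawn.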
Logs are written to the `logs/` directory:
- `combined.log`: All log messages
- `error.log`: Error messages only
- `exceptions.log`: Uncaught exceptions
- `rejections.log`: Unhandled promise rejections

Log levels: `error`, `warn`, `info`, `debug`
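The project uses Winston for this, but the level semantics can be illustrated with a dependency-free sketch (the ordering matches the levels listed above):

```javascript
// Illustrates log-level filtering: a message is emitted only when its
// level is at or above the configured threshold (error < warn < info < debug).
const LEVELS = ["error", "warn", "info", "debug"];

function shouldLog(configuredLevel, messageLevel) {
  return LEVELS.indexOf(messageLevel) <= LEVELS.indexOf(configuredLevel);
}
```

So with `LOG_LEVEL=info`, `warn` and `error` messages are emitted but `debug` messages are suppressed.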
```
src/
├── bot/
│   └── SlackMCPBot.js   # Main bot class
├── llm/
│   └── LLMClient.js     # OpenAI & Anthropic integration
├── mcp/
│   └── Server.js        # MCP server management
├── types/
│   └── Tool.js          # Tool representation
├── utils/
│   ├── config.js        # Configuration loader
│   └── logger.js        # Logging setup
└── index.js             # Entry point
```
- New MCP Server: Add configuration to `servers_config.json`
- Custom Tools: Extend the `Tool` class for special handling
- Event Handlers: Add new Slack event handlers in `SlackMCPBot`
- LLM Customization: Modify prompts and behavior in `LLMClient`
- "Missing required environment variables"
  - Ensure all required env vars are set in `.env`
- "Failed to connect to MCP server"
  - Check the MCP server command and arguments
  - Verify the server executable is available
  - Check logs for detailed error messages
- "Tool execution failed"
  - Verify tool arguments match the expected schema
  - Check the MCP server logs
  - Ensure the server is still connected
- Slack connection issues
  - Verify bot tokens are correct
  - Check the Slack app configuration
  - Ensure Socket Mode is enabled
Set `LOG_LEVEL=debug` for detailed logging:

```bash
LOG_LEVEL=debug npm start
```
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests if applicable
- Submit a pull request
MIT License - see LICENSE file for details