This guide walks you through setting up the JIRA report agent using Google Gemini as the LLM, the CrewAI framework, and your existing `jira-mcp-snowflake` MCP server.
Gemini LLM → CrewAI Agents → MCP Tools → jira-mcp-snowflake Server → JIRA Data
- Google Gemini API Key - Get from Google AI Studio
- Your existing MCP server - The `jira-mcp-snowflake` server we tested
- Python 3.9+ with pip
```bash
# Install the Gemini + CrewAI requirements
pip install -r requirements.txt
```
Create a `.env` file or set environment variables:
```bash
# Copy the example and edit with your values
cp gemini_env_example.txt .env

# Edit .env with your actual API key:
# GEMINI_API_KEY=your-actual-gemini-api-key-here
# MCP_SERVER_URL=http://localhost:8000
```
Or export directly:
```bash
export GEMINI_API_KEY="your-gemini-api-key-here"
export MCP_SERVER_URL="http://localhost:8000"  # Your MCP server URL
```
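For reference, here is a minimal sketch of how these variables might be loaded in Python. It assumes the `python-dotenv` package; the repo's actual config loading may differ:

```python
import os

from dotenv import load_dotenv  # assumption: python-dotenv is installed

# Pull key=value pairs from .env into the process environment
load_dotenv()

GEMINI_API_KEY = os.environ["GEMINI_API_KEY"]  # fail fast if the key is missing
MCP_SERVER_URL = os.getenv("MCP_SERVER_URL", "http://localhost:8000")
```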
Then run the agent:

```bash
python3 crewai_gemini.py
```
Choose between different Gemini models based on your needs:
```bash
# Fast and cost-effective (default)
export GEMINI_MODEL="gemini-1.5-flash"

# More capable for complex analysis
export GEMINI_MODEL="gemini-1.5-pro"
```
You can switch between Gemini and OpenAI:
```bash
# Use Gemini (default)
export LLM_PROVIDER="gemini"
export GEMINI_API_KEY="your-key"

# Use OpenAI instead
export LLM_PROVIDER="openai"
export OPENAI_API_KEY="your-key"
```
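As a rough sketch, the switch could be implemented like this. `ChatGoogleGenerativeAI` (from `langchain_google_genai`) matches the expected output shown later in this guide; the `ChatOpenAI` branch and the `create_llm` helper are illustrative assumptions:

```python
import os

from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_openai import ChatOpenAI  # assumption: OpenAI path uses langchain-openai


def create_llm():
    """Hypothetical helper: pick the LLM based on LLM_PROVIDER."""
    if os.getenv("LLM_PROVIDER", "gemini") == "gemini":
        return ChatGoogleGenerativeAI(
            model=os.getenv("GEMINI_MODEL", "gemini-1.5-flash"),
            google_api_key=os.environ["GEMINI_API_KEY"],
        )
    return ChatOpenAI(api_key=os.environ["OPENAI_API_KEY"])
```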
```bash
# Local MCP server (default)
export MCP_SERVER_URL="http://localhost:8000"

# Remote MCP server
export MCP_SERVER_URL="https://your-mcp-server.com"

# Disable MCP and use the direct JIRA API
export USE_MCP_SERVER="false"
export JIRA_BASE_URL="https://your-jira.com"
export JIRA_USERNAME="your-username"
export JIRA_API_TOKEN="your-token"
```
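With `USE_MCP_SERVER=false`, the agent talks to JIRA directly. As an illustration only (not the repo's actual code), a direct call to JIRA's standard REST search endpoint might look like this; the JQL string is an assumption:

```python
import os

import requests

# Illustrative direct JIRA REST call; the JQL and field access are assumptions
resp = requests.get(
    f"{os.environ['JIRA_BASE_URL']}/rest/api/2/search",
    params={"jql": "project = CCITJEN AND status = Closed", "maxResults": 5},
    auth=(os.environ["JIRA_USERNAME"], os.environ["JIRA_API_TOKEN"]),
    timeout=30,
)
resp.raise_for_status()
for issue in resp.json()["issues"]:
    print(issue["key"], issue["fields"]["summary"])
```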
The Gemini implementation uses three specialized agents (a code sketch follows the list):

**Data Collector Agent**
- Role: Collects data from the MCP server
- Tools: `MCPJIRADataTool`
- Focus: Efficient data extraction from projects

**Data Analyst Agent**
- Role: Filters and validates data
- Tools: `DateFilterTool`
- Focus: Temporal analysis and data quality

**Report Generator Agent**
- Role: Creates professional reports
- Tools: None (pure LLM reasoning)
- Focus: Clear stakeholder communication
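For orientation, a minimal sketch of how these three agents might be wired up in CrewAI. The goals and backstories are illustrative, the import path for the custom tools is hypothetical, and `create_llm()` is the helper sketched earlier:

```python
from crewai import Agent

# Hypothetical import path; the custom tools live in the repo's implementation
from crewai_gemini_implementation import MCPJIRADataTool, DateFilterTool

llm = create_llm()  # the Gemini/OpenAI helper sketched above

data_collector = Agent(
    role="Data Collector",
    goal="Collect closed JIRA issues from the MCP server",
    backstory="Efficient at extracting raw project data.",
    tools=[MCPJIRADataTool()],
    llm=llm,
)

data_analyst = Agent(
    role="Data Analyst",
    goal="Filter issues to the reporting window and validate data quality",
    backstory="Careful about dates and data consistency.",
    tools=[DateFilterTool()],
    llm=llm,
)

report_generator = Agent(
    role="Report Generator",
    goal="Write a clear weekly report for stakeholders",
    backstory="Communicates results in concise, professional prose.",
    llm=llm,  # no tools: pure LLM reasoning
)
```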
```mermaid
graph TD
    A[Start] --> B[Data Collector Agent]
    B --> C[Fetch JIRA issues via MCP]
    C --> D[Data Analyst Agent]
    D --> E[Filter last 7 days]
    E --> F[Report Generator Agent]
    F --> G[Create markdown report]
    G --> H[weekly_jira_report_gemini.md]
```
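This flow maps onto a sequential CrewAI crew. Continuing the sketch from the agent definitions above (task wording is illustrative; the project keys and output filename come from this guide):

```python
from crewai import Crew, Process, Task

collect = Task(
    description="Fetch closed issues for CCITJEN, CCITRP and QEHS via the MCP server.",
    expected_output="Raw list of closed issues per project.",
    agent=data_collector,
)

analyze = Task(
    description="Keep only issues closed in the last 7 days; flag unparseable dates.",
    expected_output="A validated, date-filtered issue list.",
    agent=data_analyst,
)

report = Task(
    description="Write the weekly markdown report from the filtered issues.",
    expected_output="Markdown saved as weekly_jira_report_gemini.md.",
    agent=report_generator,
)

crew = Crew(
    agents=[data_collector, data_analyst, report_generator],
    tasks=[collect, analyze, report],
    process=Process.sequential,  # matches the linear flow in the diagram
)

result = crew.kickoff()
```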
Test your configuration:

```bash
python crewai_config.py
```
Expected output:
```text
✅ Environment configured for GEMINI + MCP integration
- LLM Provider: gemini
- Using MCP: true
- Projects: ['CCITJEN', 'CCITRP', 'QEHS']
- MCP URL: http://localhost:8000
- LLM created successfully: ChatGoogleGenerativeAI
```
Verify your MCP server is accessible:
```bash
curl -X POST http://localhost:8000/mcp/jira-mcp-snowflake/list_jira_issues \
  -H "Content-Type: application/json" \
  -d '{"project": "CCITJEN", "status": "6", "limit": 5}'
```
Then run the full implementation:

```bash
python crewai_gemini_implementation.py
```
```text
ValueError: GEMINI_API_KEY environment variable is required
```

Solution:
- Get an API key from Google AI Studio
- Set the environment variable: `export GEMINI_API_KEY="your-key"`
```text
Failed to connect to MCP server: Connection refused
```

Solutions:
- Check if the MCP server is running: `curl http://localhost:8000/health`
- Verify the URL in the `MCP_SERVER_URL` environment variable
- Check firewall/network connectivity
```text
429 Too Many Requests
```

Solutions:
- Switch to `gemini-1.5-flash` for higher rate limits
- Add delays between requests
- Upgrade your Gemini API quota
```text
ModuleNotFoundError: No module named 'langchain_google_genai'
```

Solution:

```bash
pip install -r requirements_crewai.txt
```
| Aspect | Gemini 1.5 Flash | Gemini 1.5 Pro |
|---|---|---|
| Cost | Lower | Higher |
| Speed | Faster | Slower |
| Capability | Good for reports | Better for analysis |
| Complex Reasoning | Basic | Advanced |
| Report Quality | High | Very High |
- Use `gemini-1.5-flash` for regular reporting (default)
- Use `gemini-1.5-pro` for complex analysis or when quality is critical
- The implementation includes robust error handling for MCP failures
- Automatic fallback to error messages in reports
- Graceful handling of date parsing issues (see the sketch below)
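To illustrate the kind of graceful date handling meant here, a minimal sketch; the `resolution_date` field name and ISO date format are assumptions, and the repo's `DateFilterTool` is the real implementation:

```python
from datetime import datetime, timedelta


def filter_last_7_days(issues):
    """Keep issues resolved in the last 7 days; skip unparseable dates instead of crashing."""
    cutoff = datetime.now() - timedelta(days=7)
    recent, skipped = [], 0
    for issue in issues:
        try:
            # Assumed field name and format; adjust to the real MCP payload
            resolved = datetime.fromisoformat(issue["resolution_date"])
        except (KeyError, ValueError):
            skipped += 1  # graceful handling: count the bad record and move on
            continue
        if resolved >= cutoff:
            recent.append(issue)
    return recent, skipped
```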
- Set `temperature=0.1` for consistent reports
- Enable `memory=True` for context retention
- Use `planning=True` for better task coordination (all three applied in the sketch below)
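Applied to the earlier sketches, those settings look roughly like this (`memory` and `planning` are `Crew` parameters in current CrewAI releases; `temperature` is set on the LLM):

```python
import os

from crewai import Crew, Process
from langchain_google_genai import ChatGoogleGenerativeAI

llm = ChatGoogleGenerativeAI(
    model=os.getenv("GEMINI_MODEL", "gemini-1.5-flash"),
    google_api_key=os.environ["GEMINI_API_KEY"],
    temperature=0.1,  # low temperature for consistent, repeatable reports
)

crew = Crew(
    agents=[data_collector, data_analyst, report_generator],
    tasks=[collect, analyze, report],
    process=Process.sequential,
    memory=True,    # retain context across tasks
    planning=True,  # plan task coordination before execution
)
```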
- Monitor API usage in Google Cloud Console
- Set usage alerts for cost control
- Use flash model for development/testing
If migrating from the Llama Stack version:
```bash
# Your MCP server continues to work
export USE_MCP_SERVER="true"
export MCP_SERVER_URL="http://localhost:8000"

# From: Llama Stack + OpenAI
# To:   CrewAI + Gemini
export LLM_PROVIDER="gemini"
export GEMINI_API_KEY="your-key"
```
- ✅ Same JIRA data access via MCP
- ✅ Better multi-agent collaboration
- ✅ More sophisticated reporting
- ✅ Better error handling
- ✅ Standalone deployment (no Llama Stack needed)
The agent generates a professional report like:
```markdown
# Weekly JIRA Closed Issues Report

**Report Period:** July 15-22, 2025

## Executive Summary
- Total issues closed: 12
- CCITJEN: 5 issues
- CCITRP: 4 issues
- QEHS: 3 issues

## Detailed Results
[Professional tables with issue details]
```
If you encounter issues:
- Check environment: `python crewai_config.py`
- Verify MCP server: Test the endpoint directly
- Check logs: Run with verbose mode
- Review API limits: Check Gemini quotas
- Run basic test: `python crewai_gemini_implementation.py`
To extend the setup:
- Customize prompts: Edit agent backstories and tasks
- Add more projects: Extend the projects list
- Schedule reports: Set up cron jobs for automation
- Add integrations: Connect to Slack, email, etc.
You're ready to use Gemini with CrewAI and your MCP server! The setup provides the same JIRA data access with enhanced AI capabilities and a better architectural design.