The first comprehensive monitoring and collaboration platform for Claude Code, Cursor, and GitHub Copilot
Quick Start • Features • User Guide • FAQ • Demo
You're using AI coding assistants like Claude Code or Cursor, but:
- 💸 No visibility into token usage until you hit limits
- ⚠️ No warnings when sessions are getting unstable
- 🤷 Can't collaborate with teammates on AI sessions
- 🔁 Repeat mistakes - no way to learn from past sessions
- 📊 No insights - just hoping the AI works well
LLM Session Manager gives you complete control over your AI coding workflow:
- 📊 Real-time monitoring - Track every token, error, and metric
- 🎯 Smart health scores - Know when to start fresh
- 👥 Team collaboration - Share sessions, chat, and learn together
- 🧠 AI-powered insights - Learn from patterns across all sessions
- 🔌 Zero configuration - Auto-detects all your AI tools
Comprehensive testing • 28/29 tests passing (96.6%) • Production ready
Choose your installation method:
**One-line setup:**

```bash
curl -fsSL https://raw.githubusercontent.com/iamgagan/llm-session-manager/main/setup.sh | bash
```

Then:

```bash
docker-compose up -d
open http://localhost:3000
```

Zero configuration required! Everything pre-installed!

**Other methods:** multiple installation methods, detailed steps, and troubleshooting are covered in the Detailed Installation Guide below.
- Tagging System
- Project Management
- Markdown Reports
- AI-Powered Recommendations
- Cross-Session Memory
- Team Collaboration API
Stop guessing when to start a new session:

```
$ llm-session health claude_code_65260

Session Health: 67% (CAUTION)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
⚠️ Token usage: 82% (16,400 / 20,000)
⚠️ Error rate increasing (15 errors)
💡 Recommendation: Consider starting fresh

Based on 50 similar sessions:
• Average failure point: 85% tokens
• Success rate drops 40% after 80%
```

Collaborate on AI sessions in real-time:
```
$ llm-session share claude_code_65260

✅ Session Sharing Active!
🔗 http://localhost:3000/session/claude_code_65260

Your team can now:
✓ View live metrics and token usage
✓ Chat and discuss the session
✓ See your cursor position in real-time
✓ Add comments at specific code locations
```

Understand team productivity and AI usage:
- Track total token spend across team
- Identify patterns in successful vs failed sessions
- Build team knowledge base from AI interactions
- Monitor health trends and optimize workflows
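The health check shown earlier blends several signals into one score. A toy sketch of such a blend follows — the weights and thresholds are illustrative assumptions, not the shipped formula:

```python
def health_score(tokens_used: int, token_limit: int,
                 error_count: int, max_errors: int = 20) -> int:
    """Blend token pressure and error rate into a 0-100 score.
    The 0.6/0.4 weights are illustrative, not the project's real formula."""
    token_penalty = min(tokens_used / token_limit, 1.0)
    error_penalty = min(error_count / max_errors, 1.0)
    score = 100 * (1 - 0.6 * token_penalty - 0.4 * error_penalty)
    return max(round(score), 0)

def status(score: int) -> str:
    """Map a numeric score onto the CLI's traffic-light labels."""
    return "HEALTHY" if score >= 75 else "CAUTION" if score >= 50 else "RESTART"

print(health_score(5000, 20000, 2), status(health_score(5000, 20000, 2)))
# 81 HEALTHY
```

A blend like this is why the CLI can warn you before you hit a hard token limit: the score degrades smoothly as either signal worsens.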
**One-line install:**

```bash
curl -fsSL https://raw.githubusercontent.com/iamgagan/llm-session-manager/main/setup.sh | bash
```

Then start using:

```bash
poetry run python -m llm_session_manager.cli list
```

**Manual install:**

```bash
git clone https://github.com/iamgagan/llm-session-manager.git
cd llm-session-manager
poetry install

# If upgrading from an older version, run database migration
python3 migrate_database.py

# Optional: Enable AI insights
export LLM_API_KEY="your-openai-or-anthropic-key"

# Start using
poetry run python -m llm_session_manager.cli list
```

**Docker:**

```bash
docker-compose up -d
open http://localhost:3000
```

Everything pre-configured and ready to use!
→ See Detailed Installation Guide
```bash
# Session Management
poetry run python -m llm_session_manager.cli list                               # List all active sessions
poetry run python -m llm_session_manager.cli monitor                            # Real-time TUI dashboard
poetry run python -m llm_session_manager.cli health <session-id>                # Detailed health breakdown
poetry run python -m llm_session_manager.cli export <session-id> --format json  # Export session data

# AI-Powered Insights 🧠
poetry run python -m llm_session_manager.cli recommend                          # Get smart recommendations
poetry run python -m llm_session_manager.cli memory-search "authentication"     # Search team knowledge

# Team Collaboration 👥
poetry run python -m llm_session_manager.cli share <session-id>                 # Share with team

# Organization & Search
poetry run python -m llm_session_manager.cli tag <session-id> feature auth      # Tag sessions

# MCP Integration (Claude Desktop)
poetry run python -m llm_session_manager.cli mcp-config                         # Generate config
```

- Quick Start Guide - Get running in 30 seconds
- Installation Options - Detailed setup for all methods
- First Session Tutorial - Your first monitored session
- AI Insights with Cognee - Unlock intelligent recommendations
- Team Collaboration - Share and collaborate on sessions
- MCP Integration - Use with Claude Desktop
- Architecture Overview - How it all works
- CLI Command Reference - All available commands
- API Documentation - REST API endpoints
- Changelog - Version history and updates
```
┌──────────────────────────────────────────────────────────────┐
│                   LLM Session Manager                        │
├──────────────────────────────────────────────────────────────┤
│                                                              │
│  ┌─────────────────┐  ┌─────────────────┐  ┌─────────────┐  │
│  │   CLI Monitor   │  │  Web Dashboard  │  │  AI Engine  │  │
│  │                 │  │                 │  │  (Cognee)   │  │
│  │  • Discovery    │  │  • Real-time UI │  │             │  │
│  │  • Tracking     │  │  • Team Chat    │  │  • Learn    │  │
│  │  • Health       │  │  • Presence     │  │  • Analyze  │  │
│  │  • Export       │  │  • Cursors      │  │  • Suggest  │  │
│  └────────┬────────┘  └────────┬────────┘  └──────┬──────┘  │
│           │                    │                  │          │
│           └────────────┬───────┴───────────────────┘         │
│                        ▼                                     │
│          ┌────────────────────────────┐                      │
│          │      FastAPI Backend       │                      │
│          │  • REST API                │                      │
│          │  • WebSocket Server        │                      │
│          │  • Session Management      │                      │
│          └────────────┬───────────────┘                      │
│                       ▼                                      │
│          ┌────────────────────────────┐                      │
│          │        Data Layer          │                      │
│          │  • SQLite (Sessions)       │                      │
│          │  • ChromaDB (Memories)     │                      │
│          │  • Cognee (Knowledge)      │                      │
│          └────────────────────────────┘                      │
│                                                              │
└──────────────────────────────────────────────────────────────┘
```
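To give a feel for the data layer, here's a minimal sketch of session persistence in SQLite using Python's stdlib. The table and column names are illustrative assumptions — the real schema is managed via SQLAlchemy and will differ:

```python
import sqlite3

# Illustrative sessions table; the project's actual schema (managed via
# SQLAlchemy) will differ.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE sessions (
        id TEXT PRIMARY KEY,
        tool TEXT NOT NULL,
        tokens_used INTEGER DEFAULT 0,
        health INTEGER DEFAULT 100
    )
""")
conn.execute(
    "INSERT INTO sessions VALUES (?, ?, ?, ?)",
    ("claude_code_65260", "claude_code", 16400, 67),
)
row = conn.execute(
    "SELECT tool, health FROM sessions WHERE id = ?",
    ("claude_code_65260",),
).fetchone()
print(row)  # ('claude_code', 67)
```

Keeping sessions in plain SQLite means the CLI, backend, and dashboard can all read the same local state without a database server.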
Supported AI Tools: Claude Code • Cursor • GitHub Copilot
Tech Stack:
- Backend: Python 3.10+, FastAPI, SQLAlchemy, WebSockets
- Frontend: React 18, Vite, TailwindCSS
- AI Layer: Cognee, LanceDB, ChromaDB, OpenAI/Anthropic APIs
- CLI: Typer, Rich (beautiful terminal UI)
- Testing: Comprehensive test suite, 96.6% pass rate (28/29 tests)
We have a comprehensive automated test suite with 28/29 tests passing (96.6% pass rate).
Quick test:
```bash
# Clone and install
git clone https://github.com/iamgagan/llm-session-manager.git
cd llm-session-manager
poetry install

# Run comprehensive test suites
python3 test_all_features.py      # CLI features (19/19 ✅)
python3 test_backend_features.py  # Backend API (7/7 ✅)
python3 test_mcp_features.py      # MCP integration (2/3 ✅)

# View detailed results
cat COMPREHENSIVE_TEST_REPORT.md
```

What gets tested:
- ✅ CLI commands (19 tests) - list, show, health, export, memory, tagging
- ✅ Backend API (7 tests) - REST endpoints, session stats, projects, insights
- ✅ MCP Integration (3 tests) - Config generation, server startup, tools
- ✅ Export functionality (all formats: JSON, YAML, Markdown)
- ✅ Memory system (add, search, list, stats)
- ✅ Team collaboration features
- ✅ AI-powered recommendations
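Exported data is easy to spot-check programmatically. The JSON record below is a hypothetical example of an export's shape — consult an actual export for the real schema:

```python
import json

# Hypothetical exported session record; field names are assumptions,
# not the tool's documented export schema.
raw = """{
    "id": "claude_code_65260",
    "tool": "claude_code",
    "tokens_used": 16400,
    "errors": 15,
    "tags": ["feature", "auth"]
}"""

session = json.loads(raw)
print(session["id"], len(session["tags"]))  # claude_code_65260 2
```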
1. Test CLI monitoring:
```bash
# List your active AI sessions
poetry run python -m llm_session_manager.cli list

# Monitor in real-time (dashboard view)
poetry run python -m llm_session_manager.cli monitor

# Get detailed health breakdown for a specific session
poetry run python -m llm_session_manager.cli health <session-id>

# Example with an actual session ID:
poetry run python -m llm_session_manager.cli health claude_code_60420
```

Note: the monitor command shows a live-updating dashboard; press Ctrl+C to exit, including if other keyboard shortcuts aren't working.
2. Test collaboration features:
```bash
# Terminal 1: Start backend
cd backend
uvicorn app.main:app --reload

# Terminal 2: Start frontend
cd frontend
npm install
npm run dev

# Browser: Open http://localhost:3000
# Create a session, invite teammates, test chat
```

3. Test export:
```bash
# Export session data
poetry run python -m llm_session_manager.cli export <session-id> --format json

# Verify file created
cat /tmp/export.json
```

Want to contribute? Here's how to get started:
- Fork and clone the repository
- Install dependencies: `poetry install`
- Run tests: `python3 test_all_features.py && python3 test_backend_features.py`
- Make changes and add tests
- Ensure all tests pass (28/29 passing)
- Submit a pull request
Areas we'd love help with:
- Additional AI tool integrations (Windsurf, Aider, etc.)
- Enhanced pattern recognition algorithms
- UI/UX improvements
- Documentation and tutorials
- Bug fixes and optimizations
See CONTRIBUTING.md for detailed guidelines.
Key Benefits:
- 💰 Reduce AI costs by tracking token usage across all sessions
- 📊 Get visibility into team AI usage patterns
- 🚀 Debug faster with complete session history
- ⚠️ Identify abandoned or inefficient sessions
- 📈 Optimize AI budget allocation
Key Benefits:
- ⏱️ Stop guessing when to restart sessions
- 🎯 Get AI-powered recommendations based on your patterns
- 📚 Build a personal knowledge base from all AI interactions
- 🔍 Search across all past sessions semantically
- 📊 Track your token usage and coding patterns
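Semantic search in the real system goes through embeddings (ChromaDB/Cognee). Purely as intuition for the idea of similarity-ranked memories, here is a toy bag-of-words stand-in — not how the project actually ranks results:

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(count * b[token] for token, count in a.items())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def search(query: str, memories: list[str]) -> str:
    """Return the stored memory most similar to the query (toy ranking)."""
    query_vec = Counter(query.lower().split())
    return max(memories, key=lambda m: cosine(query_vec, Counter(m.lower().split())))

memories = [
    "fixed authentication bug in the login flow",
    "refactored database migration scripts",
]
print(search("authentication login", memories))
```

Embedding-based search generalizes this by matching on meaning rather than exact word overlap, so "auth" can still surface the login-flow memory.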
Key Benefits:
- 👥 Collaborate on AI sessions in real-time
- 📈 Track team-wide AI productivity metrics
- 🧠 Build organizational knowledge from AI interactions
- 💬 Share insights and best practices
- 🎯 Learn from collective session patterns
- 📊 Monitor team health and token usage
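Team-wide token tracking ultimately reduces to aggregating per-session usage. A minimal sketch over session records (the `user` and `tokens_used` field names are assumptions for illustration):

```python
from collections import defaultdict

def token_spend_by_user(sessions: list[dict]) -> dict[str, int]:
    """Sum token usage per teammate from a list of session records.
    Field names here are illustrative, not the tool's actual schema."""
    totals: dict[str, int] = defaultdict(int)
    for session in sessions:
        totals[session["user"]] += session["tokens_used"]
    return dict(totals)

sessions = [
    {"user": "alice", "tokens_used": 16400},
    {"user": "bob", "tokens_used": 9200},
    {"user": "alice", "tokens_used": 4100},
]
print(token_spend_by_user(sessions))  # {'alice': 20500, 'bob': 9200}
```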
We'd love your help making LLM Session Manager better! Here's how to contribute:
- 🔍 Detection & Monitoring
- 🧠 AI & Intelligence
- 🎨 UI/UX
- 📚 Documentation
→ See Contributing Guide | → Good First Issues
If you find LLM Session Manager useful, please consider giving it a star! It helps others discover the project.
Built with these amazing open-source projects:
- FastAPI - High-performance web framework
- React + Vite - Modern frontend
- Cognee - AI knowledge graphs
- SQLAlchemy - Powerful ORM
- Rich - Beautiful terminal output
- Typer - CLI framework
MIT License - see LICENSE for details.
TL;DR: Free for personal and commercial use. Do whatever you want with it!
Get started in 30 seconds:
```bash
git clone https://github.com/iamgagan/llm-session-manager.git
cd llm-session-manager
poetry install
poetry run python -m llm_session_manager.cli list
```

Or explore the features:
```bash
# List all your AI sessions
poetry run python -m llm_session_manager.cli list

# Get health check on a session
poetry run python -m llm_session_manager.cli health <session-id>

# Real-time monitoring dashboard
poetry run python -m llm_session_manager.cli monitor
```

Monitor smarter • Collaborate better • Learn continuously
⭐ Star on GitHub • 📖 Read the Docs • 🚀 Try the Demo
Helping developers and teams get the most out of AI coding assistants since 2024









