Build your own AI-powered automation tools in the terminal with this extensible agent framework
Features • Installation • Usage • Development • Contributing
vibecore is a Do-it-yourself Agent Framework that transforms your terminal into a powerful AI workspace. More than just a chat interface, it's a complete platform for building and orchestrating custom AI agents that can manipulate files, execute code, run shell commands, and manage complex workflows—all from the comfort of your terminal.
Built on Textual and the OpenAI Agents SDK, vibecore provides the foundation for creating your own AI-powered automation tools. Whether you're automating development workflows, building custom AI assistants, or experimenting with agent-based systems, vibecore gives you the building blocks to craft exactly what you need.
- AI-Powered Chat Interface - Interact with state-of-the-art language models through an intuitive terminal interface
- Rich Tool Integration - Built-in tools for file operations, shell commands, Python execution, and task management
- MCP Support - Connect to external tools and services via Model Context Protocol servers
- Beautiful Terminal UI - Modern, responsive interface with dark/light theme support
- Real-time Streaming - See AI responses as they're generated with smooth streaming updates
- Extensible Architecture - Easy to add new tools and capabilities
- High Performance - Async-first design for responsive interactions
- Context Management - Maintains state across tool executions for coherent workflows
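To make the "Extensible Architecture" point concrete: adding a tool amounts to implementing a small, consistent interface and registering it. The real interfaces live in `src/vibecore/tools/base.py`; the sketch below is a hypothetical, simplified shape — the `Tool` protocol, `WordCountTool`, and `REGISTRY` are illustrative names, not vibecore's actual API.

```python
from typing import Protocol


class Tool(Protocol):
    """Hypothetical tool interface; the real one is defined in src/vibecore/tools/base.py."""

    name: str

    def run(self, **kwargs) -> str: ...


class WordCountTool:
    """Example custom tool: counts the words in a string."""

    name = "word_count"

    def run(self, text: str = "") -> str:
        return str(len(text.split()))


# A registry maps tool names to implementations so the agent can look them up.
REGISTRY: dict[str, Tool] = {}


def register(tool: Tool) -> None:
    REGISTRY[tool.name] = tool


register(WordCountTool())
print(REGISTRY["word_count"].run(text="hello agent world"))  # -> 3
```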
Prerequisites:

- Python 3.11 or higher
- The `uv` package manager
```shell
# Clone the repository
git clone https://github.com/serialx/vibecore.git
cd vibecore

# Install dependencies using uv
uv sync

# Configure your API key
export ANTHROPIC_API_KEY="your-api-key-here"
# or
export OPENAI_API_KEY="your-api-key-here"

# Run vibecore
uv run vibecore
```
Once vibecore is running, you can:
- Chat naturally - Type messages and press Enter to send
- Switch themes - Press `d` to toggle between dark and light modes
- Exit - Press `Ctrl+Q` to quit the application
vibecore comes with powerful built-in tools:
**File Operations**

- Read files and directories
- Write and edit files
- Multi-edit for batch file modifications
- Pattern matching with glob

**Shell Integration**

- Execute bash commands
- Search with grep
- List directory contents
- File system navigation

**Python Execution**

- Run Python code in isolated environments
- Persistent execution context
- Full standard library access

**Task Management**

- Create and manage todo lists
- Track task progress
- Organize complex workflows
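The "persistent execution context" above means that state survives across separate Python runs within a session. A minimal sketch of the idea — successive snippets executed against one shared namespace — is shown below; the actual implementation lives in `src/vibecore/tools/python/` and may differ.

```python
# Sketch of a persistent execution context: each snippet executes
# against the same namespace dict, so variables defined in one run
# are visible to the next.
namespace: dict = {}


def run_python(code: str) -> None:
    exec(code, namespace)


run_python("x = 21")
run_python("y = x * 2")  # sees x from the earlier run
print(namespace["y"])    # -> 42
```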
vibecore supports the Model Context Protocol, allowing you to connect to external tools and services through MCP servers.
Create a `config.yaml` file in your project directory or add MCP servers to your environment:
```yaml
mcp_servers:
  # Filesystem server for enhanced file operations
  - name: filesystem
    type: stdio
    command: npx
    args: ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/directory"]

  # GitHub integration
  - name: github
    type: stdio
    command: npx
    args: ["-y", "@modelcontextprotocol/server-github"]
    env:
      GITHUB_PERSONAL_ACCESS_TOKEN: "your-github-token"

  # Custom HTTP server
  - name: my-server
    type: http
    url: "http://localhost:8080/mcp"
    allowed_tools: ["specific_tool"]  # Optional: whitelist specific tools
```
Supported server types:

- `stdio`: Spawns a local process (npm packages, executables)
- `sse`: Server-Sent Events connection
- `http`: HTTP-based MCP servers
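An `sse` entry might look like the following. This is a hedged sketch that assumes `sse` servers take a `url` field like the `http` type; the server name and endpoint are placeholders, not a real service.

```yaml
mcp_servers:
  # Hypothetical SSE server entry (name and URL are placeholders)
  - name: events-server
    type: sse
    url: "http://localhost:8080/sse"
```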
Control which tools are available from each server:
```yaml
mcp_servers:
  - name: restricted-server
    type: stdio
    command: some-command
    allowed_tools: ["safe_read", "safe_write"]  # Only these tools are available
    blocked_tools: ["dangerous_delete"]         # These tools are blocked
```
```shell
# Clone and enter the repository
git clone https://github.com/serialx/vibecore.git
cd vibecore

# Install dependencies
uv sync

# Run tests
uv run pytest

# Run tests by category
uv run pytest tests/ui/       # UI and widget tests
uv run pytest tests/tools/    # Tool functionality tests
uv run pytest tests/session/  # Session tests

# Run linting and formatting
uv run ruff check .
uv run ruff format .

# Type checking
uv run pyright
```
```
vibecore/
├── src/vibecore/
│   ├── main.py                     # Application entry point & TUI orchestration
│   ├── context.py                  # Central state management for agents
│   ├── settings.py                 # Configuration with Pydantic
│   ├── agents/                     # Agent configurations & handoffs
│   │   └── default.py              # Main agent with tool integrations
│   ├── models/                     # LLM provider integrations
│   │   └── anthropic.py            # Claude model support via LiteLLM
│   ├── mcp/                        # Model Context Protocol integration
│   │   └── manager.py              # MCP server lifecycle management
│   ├── handlers/                   # Stream processing handlers
│   │   └── stream_handler.py       # Handle streaming agent responses
│   ├── session/                    # Session management
│   │   ├── jsonl_session.py        # JSONL-based conversation storage
│   │   └── loader.py               # Session loading logic
│   ├── widgets/                    # Custom Textual UI components
│   │   ├── core.py                 # Base widgets & layouts
│   │   ├── messages.py             # Message display components
│   │   ├── tool_message_factory.py # Factory for creating tool messages
│   │   ├── core.tcss               # Core styling
│   │   └── messages.tcss           # Message-specific styles
│   ├── tools/                      # Extensible tool system
│   │   ├── base.py                 # Tool interfaces & protocols
│   │   ├── file/                   # File manipulation tools
│   │   ├── shell/                  # Shell command execution
│   │   ├── python/                 # Python code interpreter
│   │   └── todo/                   # Task management system
│   └── prompts/                    # System prompts & instructions
├── tests/                          # Comprehensive test suite
│   ├── ui/                         # UI and widget tests
│   ├── tools/                      # Tool functionality tests
│   ├── session/                    # Session and storage tests
│   ├── cli/                        # CLI and command tests
│   ├── models/                     # Model integration tests
│   └── _harness/                   # Test utilities
├── pyproject.toml                  # Project configuration & dependencies
├── uv.lock                         # Locked dependencies
└── CLAUDE.md                       # AI assistant instructions
```
We maintain high code quality standards:
- Linting: Ruff for fast, comprehensive linting
- Formatting: Ruff formatter for consistent code style
- Type Checking: Pyright for static type analysis
- Testing: Pytest for comprehensive test coverage
Run all checks:
```shell
uv run ruff check . && uv run ruff format --check . && uv run pyright . && uv run pytest
```
Configure vibecore through environment variables:

```shell
# Model configuration
ANTHROPIC_API_KEY=sk-...  # For Claude models
OPENAI_API_KEY=sk-...     # For GPT models

# OpenAI models
VIBECORE_DEFAULT_MODEL=o3
VIBECORE_DEFAULT_MODEL=gpt-4.1

# Claude models
VIBECORE_DEFAULT_MODEL=anthropic/claude-sonnet-4-20250514

# Any LiteLLM-supported model
VIBECORE_DEFAULT_MODEL=litellm/deepseek/deepseek-chat

# Local models (use with OPENAI_BASE_URL)
VIBECORE_DEFAULT_MODEL=qwen3-30b-a3b-mlx@8bit
```
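For a local model, point `OPENAI_BASE_URL` at an OpenAI-compatible endpoint. The endpoint below is an assumption (an LM Studio-style server on port 1234); adjust it to your setup.

```shell
# Hypothetical local OpenAI-compatible endpoint -- adjust to your server
export OPENAI_BASE_URL="http://localhost:1234/v1"
export OPENAI_API_KEY="not-needed-for-local"
export VIBECORE_DEFAULT_MODEL="qwen3-30b-a3b-mlx@8bit"

# Then start the app as usual:
# uv run vibecore
```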
We welcome contributions! Here's how to get started:
- Fork the repository and create your branch from `main`
- Make your changes and ensure all tests pass
- Add tests for any new functionality
- Update documentation as needed
- Submit a pull request with a clear description
- Follow the existing code style and patterns
- Write descriptive commit messages
- Add type hints to all functions
- Ensure your code passes all quality checks
- Update tests for any changes
Found a bug or have a feature request? Please open an issue with:
- Clear description of the problem or feature
- Steps to reproduce (for bugs)
- Expected vs actual behavior
- Environment details (OS, Python version)
vibecore is built with a modular, extensible architecture:
- Textual Framework: Provides the responsive TUI foundation
- OpenAI Agents SDK: Powers the AI agent capabilities
- Async Design: Ensures smooth, non-blocking interactions
- Tool System: Modular tools with consistent interfaces
- Context Management: Maintains state across operations
- MCP Support: Full integration with Model Context Protocol for external tool connections
Recent improvements include:

- Tool Message Factory: Centralized widget creation for consistent UI across streaming and session loading
- Enhanced Tool Widgets: Specialized widgets for Python execution, file reading, and todo management
- Improved Session Support: Seamless save/load of conversations with full UI state preservation
- Print Mode: New `-p` flag for automation and Unix pipe integration
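The JSONL session storage mentioned above keeps one JSON object per line, which makes appending a message a single write and loading a line-by-line replay. The sketch below illustrates the general pattern; the actual record schema is defined in `src/vibecore/session/jsonl_session.py` and may differ from these hypothetical fields.

```python
import json
import tempfile
from pathlib import Path

# Hypothetical message records -- the real schema may include more fields.
records = [
    {"role": "user", "content": "List the files in this directory"},
    {"role": "assistant", "content": "Here are the files: ..."},
]

with tempfile.TemporaryDirectory() as tmp:
    path = Path(tmp) / "session.jsonl"

    # Saving: append one JSON object per line.
    with path.open("a") as f:
        for rec in records:
            f.write(json.dumps(rec) + "\n")

    # Loading: parse the file line by line.
    loaded = [json.loads(line) for line in path.read_text().splitlines()]

print(len(loaded))  # -> 2
```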
On the roadmap:

- More custom tool views (Python, Read, Todo widgets)
- Automation (`vibecore -p "prompt"`)
- MCP (Model Context Protocol) support
- Permission model
- Multi-agent system (agent-as-tools)
- Plugin system for custom tools
- Automated workflows
This project is licensed under the MIT License - see the LICENSE file for details.
- Built with Textual - The amazing TUI framework
- Powered by OpenAI Agents SDK
- Inspired by the growing ecosystem of terminal-based AI tools
Made with love by the vibecore community