
vibecore

Python 3.11+ · License: MIT · Code style: ruff · Checked with pyright

Build your own AI-powered automation tools in the terminal with this extensible agent framework

Features · Installation · Usage · Development · Contributing


vibecore terminal screenshot

Overview

vibecore is a do-it-yourself agent framework that turns your terminal into an AI workspace. More than a chat interface, it is a platform for building and orchestrating custom AI agents that can manipulate files, execute code, run shell commands, and manage complex workflows, all without leaving your terminal.

Built on Textual and the OpenAI Agents SDK, vibecore provides the foundation for creating your own AI-powered automation tools. Whether you're automating development workflows, building custom AI assistants, or experimenting with agent-based systems, vibecore gives you the building blocks to craft exactly what you need.

Key Features

  • AI-Powered Chat Interface - Interact with state-of-the-art language models through an intuitive terminal interface
  • Rich Tool Integration - Built-in tools for file operations, shell commands, Python execution, and task management
  • MCP Support - Connect to external tools and services via Model Context Protocol servers
  • Beautiful Terminal UI - Modern, responsive interface with dark/light theme support
  • Real-time Streaming - See AI responses as they're generated with smooth streaming updates
  • Extensible Architecture - Easy to add new tools and capabilities
  • High Performance - Async-first design for responsive interactions
  • Context Management - Maintains state across tool executions for coherent workflows

Installation

Prerequisites

  • Python 3.11 or higher
  • uv package manager
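
If uv is not installed yet, its documentation provides a standalone installer; at the time of writing it can be installed like this (check the uv docs for the current command):

# Install uv via the standalone installer (macOS/Linux)
curl -LsSf https://astral.sh/uv/install.sh | sh

# Or install it with pip
pip install uv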

Quick Start

# Clone the repository
git clone https://github.com/serialx/vibecore.git
cd vibecore

# Install dependencies using uv
uv sync

# Configure your API key
export ANTHROPIC_API_KEY="your-api-key-here"
# or
export OPENAI_API_KEY="your-api-key-here"

# Run vibecore
uv run vibecore

Usage

Basic Commands

Once vibecore is running, you can:

  • Chat naturally - Type messages and press Enter to send
  • Switch themes - Press d to toggle between dark and light modes
  • Exit - Press Control-Q to quit the application

Available Tools

vibecore comes with powerful built-in tools:

File Operations

- Read files and directories
- Write and edit files
- Multi-edit for batch file modifications
- Pattern matching with glob

Shell Commands

- Execute bash commands
- Search with grep
- List directory contents
- File system navigation

Python Execution

- Run Python code in isolated environments
- Persistent execution context
- Full standard library access

Task Management

- Create and manage todo lists
- Track task progress
- Organize complex workflows

MCP (Model Context Protocol) Support

vibecore supports the Model Context Protocol, allowing you to connect to external tools and services through MCP servers.

Configuring MCP Servers

Create a config.yaml file in your project directory, or configure MCP servers through environment variables:

mcp_servers:
  # Filesystem server for enhanced file operations
  - name: filesystem
    type: stdio
    command: npx
    args: ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/directory"]
    
  # GitHub integration
  - name: github
    type: stdio
    command: npx
    args: ["-y", "@modelcontextprotocol/server-github"]
    env:
      GITHUB_PERSONAL_ACCESS_TOKEN: "your-github-token"
    
  # Custom HTTP server
  - name: my-server
    type: http
    url: "http://localhost:8080/mcp"
    allowed_tools: ["specific_tool"]  # Optional: whitelist specific tools

Available MCP Server Types

  • stdio: Spawns a local process (npm packages, executables)
  • sse: Server-Sent Events connection (see the sketch after this list)
  • http: HTTP-based MCP servers
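
A stdio and an http server are shown in the config example above. SSE servers are declared the same way; the sketch below assumes the sse type takes a url (plus the optional tool filters) like the http type, so check your server's documentation for the exact fields:

mcp_servers:
  # Hypothetical SSE server (name and URL are placeholders)
  - name: events-server
    type: sse
    url: "http://localhost:8080/sse"
    allowed_tools: ["notify"]  # Optional: limit which tools are exposed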

Tool Filtering

Control which tools are available from each server:

mcp_servers:
  - name: restricted-server
    type: stdio
    command: some-command
    allowed_tools: ["safe_read", "safe_write"]  # Only these tools available
    blocked_tools: ["dangerous_delete"]         # These tools are blocked

Development

Setting Up Development Environment

# Clone and enter the repository
git clone https://github.com/serialx/vibecore.git
cd vibecore

# Install dependencies
uv sync

# Run tests
uv run pytest

# Run tests by category
uv run pytest tests/ui/        # UI and widget tests
uv run pytest tests/tools/     # Tool functionality tests
uv run pytest tests/session/   # Session tests

# Run linting and formatting
uv run ruff check .
uv run ruff format .

# Type checking
uv run pyright

Project Structure

vibecore/
├── src/vibecore/
│   ├── main.py              # Application entry point & TUI orchestration
│   ├── context.py           # Central state management for agents
│   ├── settings.py          # Configuration with Pydantic
│   ├── agents/              # Agent configurations & handoffs
│   │   └── default.py       # Main agent with tool integrations
│   ├── models/              # LLM provider integrations
│   │   └── anthropic.py     # Claude model support via LiteLLM
│   ├── mcp/                 # Model Context Protocol integration
│   │   └── manager.py       # MCP server lifecycle management
│   ├── handlers/            # Stream processing handlers
│   │   └── stream_handler.py # Handle streaming agent responses
│   ├── session/             # Session management
│   │   ├── jsonl_session.py # JSONL-based conversation storage
│   │   └── loader.py        # Session loading logic
│   ├── widgets/             # Custom Textual UI components
│   │   ├── core.py          # Base widgets & layouts
│   │   ├── messages.py      # Message display components
│   │   ├── tool_message_factory.py  # Factory for creating tool messages
│   │   ├── core.tcss        # Core styling
│   │   └── messages.tcss    # Message-specific styles
│   ├── tools/               # Extensible tool system
│   │   ├── base.py          # Tool interfaces & protocols
│   │   ├── file/            # File manipulation tools
│   │   ├── shell/           # Shell command execution
│   │   ├── python/          # Python code interpreter
│   │   └── todo/            # Task management system
│   └── prompts/             # System prompts & instructions
├── tests/                   # Comprehensive test suite
│   ├── ui/                  # UI and widget tests
│   ├── tools/               # Tool functionality tests
│   ├── session/             # Session and storage tests
│   ├── cli/                 # CLI and command tests
│   ├── models/              # Model integration tests
│   └── _harness/            # Test utilities
├── pyproject.toml           # Project configuration & dependencies
├── uv.lock                  # Locked dependencies
└── CLAUDE.md                # AI assistant instructions

Code Quality

We maintain high code quality standards:

  • Linting: Ruff for fast, comprehensive linting
  • Formatting: Ruff formatter for consistent code style
  • Type Checking: Pyright for static type analysis
  • Testing: Pytest for comprehensive test coverage

Run all checks:

uv run ruff check . && uv run ruff format --check . && uv run pyright . && uv run pytest

Configuration

Environment Variables

# Model configuration (API keys)
ANTHROPIC_API_KEY=sk-...   # For Claude models
OPENAI_API_KEY=sk-...      # For GPT models

# Default model selection (pick one)
# OpenAI models
VIBECORE_DEFAULT_MODEL=o3
VIBECORE_DEFAULT_MODEL=gpt-4.1
# Claude models
VIBECORE_DEFAULT_MODEL=anthropic/claude-sonnet-4-20250514
# Any LiteLLM-supported model
VIBECORE_DEFAULT_MODEL=litellm/deepseek/deepseek-chat
# Local models (use together with OPENAI_BASE_URL)
VIBECORE_DEFAULT_MODEL=qwen3-30b-a3b-mlx@8bit
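
For local models, point the OpenAI-compatible client at your server with OPENAI_BASE_URL. A minimal sketch, assuming a local server listening on port 1234 (the URL and model name are placeholders for your own setup):

# Point vibecore at a local OpenAI-compatible endpoint
export OPENAI_BASE_URL="http://localhost:1234/v1"
export OPENAI_API_KEY="local"   # many local servers accept any non-empty key
export VIBECORE_DEFAULT_MODEL="qwen3-30b-a3b-mlx@8bit"
uv run vibecore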

Contributing

We welcome contributions! Here's how to get started:

  1. Fork the repository and create your branch from main
  2. Make your changes and ensure all tests pass
  3. Add tests for any new functionality
  4. Update documentation as needed
  5. Submit a pull request with a clear description

Development Guidelines

  • Follow the existing code style and patterns
  • Write descriptive commit messages
  • Add type hints to all functions
  • Ensure your code passes all quality checks
  • Update tests for any changes

Reporting Issues

Found a bug or have a feature request? Please open an issue with:

  • Clear description of the problem or feature
  • Steps to reproduce (for bugs)
  • Expected vs actual behavior
  • Environment details (OS, Python version)

Architecture

vibecore is built with a modular, extensible architecture:

  • Textual Framework: Provides the responsive TUI foundation
  • OpenAI Agents SDK: Powers the AI agent capabilities
  • Async Design: Ensures smooth, non-blocking interactions
  • Tool System: Modular tools with consistent interfaces
  • Context Management: Maintains state across operations

Recent Updates

  • MCP Support: Full integration with Model Context Protocol for external tool connections
  • Tool Message Factory: Centralized widget creation for consistent UI across streaming and session loading
  • Enhanced Tool Widgets: Specialized widgets for Python execution, file reading, and todo management
  • Improved Session Support: Seamless save/load of conversations with full UI state preservation
  • Print Mode: New -p flag for automation and Unix pipe integration
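
A sketch of how print mode can slot into a shell pipeline (the prompts are illustrative, and piping input into vibecore is assumed based on the Unix pipe integration noted above):

# Non-interactive run: print the agent's answer to stdout
uv run vibecore -p "Summarize the TODOs in this repository"

# Compose with other commands in a pipeline
git diff | uv run vibecore -p "Review this diff and point out potential bugs"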

Roadmap

  • More custom tool views (Python, Read, Todo widgets)
  • Automation (vibecore -p "prompt")
  • MCP (Model Context Protocol) support
  • Permission model
  • Multi-agent system (agent-as-tools)
  • Plugin system for custom tools
  • Automated workflow

License

This project is licensed under the MIT License - see the LICENSE file for details.

Acknowledgments

  • Built with Textual - The amazing TUI framework
  • Powered by OpenAI Agents SDK
  • Inspired by the growing ecosystem of terminal-based AI tools

Made with love by the vibecore community

Report Bug · Request Feature · Join Discussions
