A conversational AI CLI tool with intelligent text editor capabilities and tool usage.
- Multi-LLM Provider Support: Choose from Anthropic Claude, OpenAI GPT, Grok, or Local LLM models
- Interactive Provider Selection: Easy-to-use interface for selecting providers and models at startup
- MCP Integration: Connect to Model Context Protocol servers for extended tool capabilities
- Smart File Operations: AI automatically uses tools to view, create, and edit files
- Bash Integration: Execute shell commands through natural conversation
- Automatic Tool Selection: AI intelligently chooses the right tools for your requests
- Interactive UI: Beautiful terminal interface built with Ink
- Persistent Settings: Save your preferred provider and model settings
- Global Installation: Install and use anywhere with npm i -g @graphteon/juriko-cli
- Node.js 16+
- API key from at least one supported provider (or a local LLM server):
- Anthropic Claude: Get your key from console.anthropic.com
- OpenAI: Get your key from platform.openai.com/api-keys
- Grok (X.AI): Get your key from console.x.ai
- Local LLM: Set up a local server (LM Studio, Ollama, llama.cpp, etc.)
npm install -g @graphteon/juriko-cli
git clone <repository>
cd juriko-cli
npm install
npm run build
npm link
JURIKO supports multiple AI providers. You can set up API keys for any or all of them:
Method 1: Environment Variables
# Anthropic Claude
export ANTHROPIC_API_KEY=your_anthropic_key_here
# OpenAI
export OPENAI_API_KEY=your_openai_key_here
# Grok (X.AI)
export GROK_API_KEY=your_grok_key_here
# Local LLM (optional)
export LOCAL_API_KEY=your_local_api_key_here
export LOCAL_BASE_URL=http://localhost:1234/v1
Method 2: .env File
cp .env.example .env
# Edit .env and add your API keys
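A minimal .env might look like the sketch below (same variable names as in Method 1; include only the providers you actually use):
ANTHROPIC_API_KEY=your_anthropic_key_here
OPENAI_API_KEY=your_openai_key_here
GROK_API_KEY=your_grok_key_here
LOCAL_BASE_URL=http://localhost:1234/v1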
Method 3: Command Line Flags
juriko --anthropic-key your_anthropic_key_here
juriko --openai-key your_openai_key_here
juriko --grok-key your_grok_key_here
Method 4: User Settings File
Create ~/.juriko/user-settings.json:
{
"provider": "anthropic",
"model": "claude-3-7-sonnet-latest",
"apiKeys": {
"anthropic": "your_anthropic_key_here",
"openai": "your_openai_key_here",
"grok": "your_grok_key_here",
"local": "your_local_api_key_here"
},
"baseURLs": {
"local": "http://localhost:1234/v1"
}
}
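If the ~/.juriko directory does not exist yet, you can create it from the shell before writing the file (a small convenience step; JURIKO can also create its settings automatically on first run):
mkdir -p ~/.juriko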
JURIKO supports connecting to local LLM servers that expose OpenAI-compatible APIs. This includes popular local LLM solutions like:
- LM Studio: Download and run models locally with a user-friendly interface
- Ollama: Lightweight, extensible framework for running LLMs locally
- llama.cpp: Direct C++ implementation for running LLaMA models
- Text Generation WebUI: Web interface for running various LLM models
- vLLM: High-throughput LLM serving engine
- LocalAI: OpenAI-compatible API for local models
LM Studio:
- Download and install LM Studio
- Download a model (e.g., Llama 2, Code Llama, Mistral)
- Start the local server (usually runs on http://localhost:1234/v1)
- Select "Local" provider in JURIKO and use the wizard
Ollama:
- Install Ollama
- Pull a model: ollama pull llama2
- Start the Ollama server: ollama serve (runs on http://localhost:11434/v1)
- Select "Local" provider in JURIKO and configure
llama.cpp:
- Build llama.cpp with server support
- Start the server: ./server -m model.gguf --port 8080
- Use http://localhost:8080/v1 as the base URL in JURIKO
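Whichever server you run, a quick sanity check before pointing JURIKO at it is to list the models the OpenAI-compatible endpoint exposes (adjust the port to match your setup):
curl http://localhost:1234/v1/models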
When you select "Local" as your provider, JURIKO will guide you through a 4-step configuration wizard:
- Base URL: Enter your local server URL (e.g., http://localhost:1234/v1)
- Model Name: Specify the model name your server uses
- API Key: Enter an API key if your local server requires authentication (optional)
- Save Configuration: Choose whether to save settings for future use
The wizard includes helpful examples and validates your configuration before proceeding.
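If you choose to save the configuration, the result lands in ~/.juriko/user-settings.json using the same fields shown earlier; a sketch of a saved local setup might look like this (the model name and key are placeholders for whatever your server expects):
{
  "provider": "local",
  "model": "your-local-model-name",
  "apiKeys": {
    "local": "your_local_api_key_here"
  },
  "baseURLs": {
    "local": "http://localhost:1234/v1"
  }
}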
When you first run JURIKO, you'll be presented with an interactive interface to:
- Select your preferred LLM provider (Anthropic, OpenAI, Grok, or Local)
- Choose a model from the available options for that provider
- Enter your API key if not already configured
- Save your preferences for future sessions
You can change providers anytime by:
- Typing provider or switch in the chat
- Pressing Ctrl+P for quick provider switching
- Running juriko again to go through the selection process
Anthropic Claude:
- claude-3-7-sonnet-latest (Latest Claude 3.7 Sonnet)
- claude-sonnet-4-20250514 (Claude Sonnet 4)
- claude-opus-4-20250514 (Claude Opus 4)
- claude-3-5-sonnet-20241022 (Claude 3.5 Sonnet)
- claude-3-5-haiku-20241022 (Fast and efficient)
- claude-3-opus-20240229 (Most capable Claude 3)
OpenAI:
- gpt-4o (Latest GPT-4 Omni)
- gpt-4o-mini (Fast and cost-effective)
- gpt-4-turbo (High performance)
- gpt-3.5-turbo (Fast and affordable)
Grok (X.AI):
- grok-beta (Latest Grok model)
- grok-vision-beta (With vision capabilities)
Local LLM:
- custom-model (Your custom local model)
- Configure any model name through the setup wizard
Start the conversational AI assistant:
juriko
Or specify a working directory:
juriko -d /path/to/project
Control the verbosity and communication style of JURIKO responses:
# Concise mode - short, direct responses (< 4 lines)
juriko --concise
# Verbose mode - detailed explanations and context
juriko --verbose
# Security level control
juriko --security-level high # Strict validation
juriko --security-level medium # Standard validation (default)
juriko --security-level low # Basic validation
Response Style Benefits:
- Concise Mode: Up to 65% reduction in response length, faster interactions
- Verbose Mode: Full explanations for learning and complex tasks
- Balanced Mode (default): Optimal mix of efficiency and helpfulness
Enable parallel execution of independent tools for improved performance:
# Enable batching (parallel execution)
juriko --enable-batching
# Disable batching (sequential execution)
juriko --disable-batching
Or via environment variable:
export JURIKO_ENABLE_BATCHING=true # or 'false'
Performance Benefits:
- Up to 40% faster execution when multiple independent tools are used
- Intelligent dependency detection ensures file operations remain safe
- Automatic fallback to sequential execution if parallel execution fails
- Smart categorization of tools (read-only, write, compute, network, bash)
How it works:
- Read-only tools (like view_file) can run in parallel with each other
- Write tools (like create_file, str_replace_editor) run sequentially for safety
- Bash commands run sequentially to prevent conflicts
- Network and compute tools are intelligently batched based on dependencies
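As an illustration (not an exact transcript), a single request that reads several files can fan its read-only tool calls out in parallel, while any edit still runs on its own afterwards:
❯ compare package.json and tsconfig.json, then fix the "include" paths
[view_file package.json and view_file tsconfig.json run in parallel]
[str_replace_editor tsconfig.json runs afterwards, sequentially]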
Enhanced file navigation with clickable references inspired by Claude Code patterns:
# Enable code references (enabled by default)
juriko --enable-code-references
# Disable code references
juriko --disable-code-references
Or via environment variable:
export JURIKO_ENABLE_CODE_REFERENCES=true # or 'false'
Features:
- Clickable file references: All file mentions become clickable links
- Line-specific navigation: Jump directly to specific lines in files
- VSCode integration: Links open directly in VSCode editor
- Automatic enhancement: Tool outputs automatically include clickable references
- Context awareness: Shows code context around referenced lines
Examples:
- File reference: package.json
- Line reference: src/index.ts:42
- Error reference: src/utils/helper.ts:15
On your first run, JURIKO will guide you through:
- Provider Selection: Choose from Anthropic, OpenAI, Grok, or Local
- Model Selection: Pick the best model for your needs
- API Key Setup: Enter your API key (with option to save it)
- Ready to Chat: Start conversing with your chosen AI
You can easily switch between providers and models:
- Type provider or switch in the chat
- Press Ctrl+P for quick access
- Your preferences are automatically saved to ~/.juriko/user-settings.json
You can provide custom instructions to tailor JURIKO's behavior to your project by creating a .juriko/JURIKO.md file in your project directory:
mkdir .juriko
Create .juriko/JURIKO.md with your custom instructions:
# Custom Instructions for JURIKO CLI
Always use TypeScript for any new code files.
When creating React components, use functional components with hooks.
Prefer const assertions and explicit typing over inference where it improves clarity.
Always add JSDoc comments for public functions and interfaces.
Follow the existing code style and patterns in this project.
JURIKO will automatically load and follow these instructions when working in your project directory. The custom instructions are added to JURIKO's system prompt and take priority over default behavior.
Instead of typing commands, just tell JURIKO what you want to do:
π¬ "Show me the contents of package.json"
π¬ "Create a new file called hello.js with a simple console.log"
π¬ "Find all TypeScript files in the src directory"
π¬ "Replace 'oldFunction' with 'newFunction' in all JS files"
π¬ "Run the tests and show me the results"
π¬ "What's the current directory structure?"
Concise Mode (--concise):
❯ view package.json
[file contents displayed directly]
❯ what files are in src?
- index.ts
- agent/
- tools/
- ui/
Verbose Mode (--verbose):
❯ view package.json
I'll help you view the package.json file. Let me use the view_file tool to read the contents for you.
This will show you all the dependencies, scripts, and configuration details in your package.json file.
[file contents with detailed explanations]
The package.json contains your project configuration including dependencies like React, TypeScript, and various development tools.
JURIKO works seamlessly with local LLM servers. Here are some examples:
# Using with LM Studio (Code Llama for coding tasks)
juriko # Select Local provider, use http://localhost:1234/v1
# Using with Ollama (Llama 2 for general tasks)
ollama serve
juriko # Select Local provider, use http://localhost:11434/v1
# Using with custom llama.cpp server
./server -m codellama-7b.gguf --port 8080
juriko # Select Local provider, use http://localhost:8080/v1
Local LLMs are particularly useful for:
- Privacy-sensitive projects where you don't want to send code to external APIs
- Offline development when internet connectivity is limited
- Cost optimization for high-volume usage
- Custom fine-tuned models specific to your domain or coding style
JURIKO supports the Model Context Protocol (MCP), allowing you to connect to external tools and resources through MCP servers. This extends JURIKO's capabilities beyond built-in tools.
Create ~/.juriko/mcp-settings.json to configure MCP servers:
{
"mcpServers": {
"filesystem": {
"type": "stdio",
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/files"],
"enabled": true
},
"brave-search": {
"type": "stdio",
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-brave-search"],
"env": {
"BRAVE_API_KEY": "your_brave_api_key_here"
},
"enabled": true
},
"example-sse": {
"type": "sse",
"url": "http://localhost:8080/sse",
"headers": {
"Authorization": "Bearer your_token_here"
},
"enabled": true
},
"example-http": {
"type": "httpStream",
"url": "http://localhost:8080/mcp",
"headers": {
"Authorization": "Bearer your_token_here"
},
"enabled": true
}
}
}
Example:
{
"mcpServers": {
"llmtxt": {
"type": "sse",
"url": "https://mcp.llmtxt.dev/sse",
"enabled": true,
"description": "LLMTXT MCP server for text processing and utilities",
"timeout": 30000,
"retryAttempts": 3,
"retryDelay": 1000
}
},
"globalTimeout": 30000,
"enableLogging": true,
"logLevel": "info"
}
Local MCP Servers (stdio):
- Run as child processes communicating via standard input/output
- Examples: filesystem access, local databases, system tools
- Use command and args to specify how to launch the server
HTTP Stream MCP Servers:
- Connect to HTTP-based servers using streaming
- Examples: web APIs, cloud services
- Use url and optional headers for authentication
SSE MCP Servers:
- Connect to HTTP-based servers using Server-Sent Events
- Examples: real-time APIs, streaming services
- Use url and optional headers for authentication
Once configured, MCP tools become available in JURIKO with the naming pattern mcp_{server}_{tool}. For example:
- mcp_filesystem_read_file - Read files through the filesystem server
- mcp_brave_search_web_search - Search the web using Brave Search
- mcp_weather_get_forecast - Get weather data from a weather server
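In practice you simply describe what you want in chat and JURIKO routes the request to the matching MCP tool; for example (illustrative prompts, assuming the servers above are configured and enabled):
💬 "Search the web for the latest Model Context Protocol spec" → mcp_brave_search_web_search
💬 "Read notes.txt from the allowed files directory" → mcp_filesystem_read_file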
MCP servers can also provide resources (data sources) that JURIKO can access for context, such as:
- File contents from filesystem servers
- API responses from web services
- Database query results
- System information
For detailed MCP setup and troubleshooting, see docs/MCP_INTEGRATION.md.
JURIKO includes automatic conversation condensing to manage token usage efficiently. When conversations approach the model's token limit, JURIKO automatically summarizes older messages while preserving recent context.
Condense Threshold Configuration:
The condense threshold determines when conversation condensing is triggered (default: 75% of model's token limit).
# Set via environment variable (highest priority)
export JURIKO_CONDENSE_THRESHOLD=80
juriko
# Or set for single session
JURIKO_CONDENSE_THRESHOLD=85 juriko
User Settings Configuration:
Edit ~/.juriko/user-settings.json:
{
"provider": "anthropic",
"model": "claude-3-5-sonnet-20241022",
"condenseThreshold": 80
}
Recommended Thresholds:
- Conservative (60-70%): Early condensing, lower token usage
- Balanced (75-80%): Default, good balance of context and efficiency
- Aggressive (85-95%): Maximum context retention, higher token usage
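As a rough illustration (assuming a model with a 200,000-token context window): the default threshold of 75 triggers condensing once the conversation reaches about 0.75 × 200,000 = 150,000 tokens, while raising it to 85 delays condensing until roughly 170,000 tokens.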
For comprehensive configuration details, troubleshooting, and advanced usage patterns, see CONDENSE_THRESHOLD_GUIDE.md.
# Install dependencies
npm install
# Development mode
npm run dev
# Build project
npm run build
# Run linter
npm run lint
# Type check
npm run typecheck
# Test response styles
npm run test:response-style
# Test concise mode
npm run test:concise
# Test verbose mode
npm run test:verbose
- Multi-LLM Client: Unified interface supporting Anthropic, OpenAI, Grok, and Local LLM APIs
- Provider Selection: Interactive UI for choosing providers and models with local LLM wizard
- Agent: Core command processing and execution logic with multi-provider support
- Tools: Text editor and bash tool implementations
- UI: Ink-based terminal interface components with provider management and local LLM configuration
- Settings: Persistent user preferences, API key management, and local server configuration
- Types: TypeScript definitions for the entire system including local LLM support
MIT