Polli-Agent is a production-ready AI coding assistant powered by Pollinations AI models.
Polli-Agent is an advanced LLM-based agent specifically designed for software engineering tasks, powered by Pollinations AI. It provides a powerful CLI interface that understands natural language instructions and executes complex coding workflows using multiple Pollinations models including OpenAI, DeepSeek, Qwen, and Mistral.
What Makes Polli-Agent Special: Polli-Agent combines the power of multiple Pollinations AI models with a transparent, modular architecture. Unlike other agents, it offers seamless multi-model switching, optional API key usage (works with free tier), and production-ready stability. The Pollinations integration provides access to cutting-edge models like DeepSeek Reasoning, Qwen Coder, and Mistral, all through a unified interface.
Project Status: Polli-Agent is production-ready and actively maintained. Built on the solid foundation of Trae-Agent, it's specifically optimized for Pollinations AI models with full tool calling support, multi-model capabilities, and robust error handling.
- Pollinations AI Integration: Native support for multiple Pollinations models
- Multi-Model Support: OpenAI, DeepSeek Reasoning, Qwen Coder, Mistral, and more
- Optional API Key: Works with the free tier (no API key) or premium models (with an API key)
- Lakeview: Provides short, concise summaries of agent steps
- Rich Tool Ecosystem: File editing, bash execution, sequential thinking, and more
- Interactive Mode: Conversational interface for iterative development
- Trajectory Recording: Detailed logging of all agent actions for debugging and analysis
- Flexible Configuration: JSON-based configuration with environment variable support
- Easy Installation: Simple uv-based installation
We strongly recommend using UV to set up the project.
git clone https://github.com/pollinations/polli-agent.git
cd polli-agent
uv sync
To use the polli command globally from anywhere:
git clone https://github.com/pollinations/polli-agent.git
cd polli-agent
./install.sh
git clone https://github.com/pollinations/polli-agent.git
cd polli-agent
pip install -e .
After installation, you can use polli from anywhere:
polli run "Create a Python script"
polli interactive
polli --help
For development, use UV to set up the project:
git clone https://github.com/pollinations/polli-agent.git
cd polli-agent
uv sync
# Use: uv run polli [command]
Polli-Agent works with or without an API key!
Polli-Agent works out of the box with basic Pollinations models:
# No setup needed - just start using it!
polli run "Create a hello world Python script"
For access to premium models like DeepSeek, Qwen, and Mistral:
Environment Variable (Recommended):
export POLLINATIONS_API_KEY="your-pollinations-api-key"
Or in Config File:
{
"model_providers": {
"pollinations": {
"api_key": "your-pollinations-api-key"
}
}
}
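To illustrate how the two options interact, here is a minimal sketch of key resolution (a hypothetical helper, not Polli-Agent's actual loader; the function name and lookup order are assumptions based on the configuration priority described below, where config file values take precedence over environment variables):

```python
import json
import os

def resolve_api_key(config_path: str) -> str:
    """Resolve the Pollinations API key: config file first, then env var.

    Illustrative sketch only; the real loader may differ.
    """
    key = ""
    try:
        with open(config_path) as f:
            cfg = json.load(f)
        key = cfg.get("model_providers", {}).get("pollinations", {}).get("api_key", "")
    except FileNotFoundError:
        pass
    # Fall back to the environment variable when the config leaves the key blank
    return key or os.environ.get("POLLINATIONS_API_KEY", "")
```

Either way, leaving `api_key` empty keeps you on the free tier.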
# Run with default model (OpenAI GPT-4o Mini - free tier)
polli run "Create a hello world Python script"
# Use specific Pollinations models (just select the provider!)
polli run "Create a Python script" --provider openai
polli run "Debug complex code" --provider deepseek-reasoning
polli run "Write documentation" --provider qwen-coder
polli run "Refactor code" --provider mistral
polli run "Large project analysis" --provider openai-large
The main entry point is the polli command with several subcommands:
# Basic task execution (uses default OpenAI model)
polli run "Create a Python script that calculates fibonacci numbers"
# With specific Pollinations models (pre-configured providers)
polli run "Fix the bug in main.py" --provider deepseek-reasoning
polli run "Optimize this code" --provider openai-large
polli run "Add documentation" --provider qwen-coder
polli run "Refactor code" --provider mistral
polli run "Fast coding task" --provider openai-fast
polli run "Advanced reasoning" --provider grok
# With custom working directory
polli run "Add unit tests for the utils module" --working-dir /path/to/project
# Save trajectory for debugging
polli run "Refactor the database module" --trajectory-file debug_session.json
# With API key for premium models
polli run "Complex analysis" --provider deepseek-reasoning --api-key "your-key"
# Force to generate patches
polli run "Update the API endpoints" --must-patch
# Start interactive session with default model
polli interactive
# With specific Pollinations models
polli interactive --provider deepseek-reasoning --max-steps 30
polli interactive --provider qwen-coder
polli interactive --provider grok
In interactive mode, you can:
- Type any task description to execute it
- Use status to see agent information
- Use help for available commands
- Use clear to clear the screen
- Use exit or quit to end the session
polli show-config
# With custom config file
polli show-config --config-file my_config.json
Polli-Agent uses a JSON configuration file (trae_config.json) with pre-configured providers for each model:
{
"default_provider": "openai",
"max_steps": 20,
"enable_lakeview": true,
"model_providers": {
"openai": {
"api_key": "",
"model": "openai",
"max_tokens": 4096,
"temperature": 0.7,
"top_p": 1,
"max_retries": 10
},
"anthropic": {
"api_key": "your_anthropic_api_key",
"model": "claude-sonnet-4-20250514",
"max_tokens": 4096,
"temperature": 0.5,
"top_p": 1,
"top_k": 0,
"max_retries": 10
},
"azure": {
"api_key": "your_azure_api_key",
"base_url": "your_azure_base_url",
"api_version": "2024-03-01-preview",
"model": "model_name",
"max_tokens": 4096,
"temperature": 0.5,
"top_p": 1,
"top_k": 0,
"max_retries": 10
},
"openrouter": {
"api_key": "your_openrouter_api_key",
"model": "openai/gpt-4o",
"max_tokens": 4096,
"temperature": 0.5,
"top_p": 1,
"top_k": 0,
"max_retries": 10
},
"doubao": {
"api_key": "your_doubao_api_key",
"model": "model_name",
"base_url": "your_doubao_base_url",
"max_tokens": 8192,
"temperature": 0.5,
"top_p": 1,
"max_retries": 20
}
},
"lakeview_config": {
"model_provider": "anthropic",
"model_name": "claude-sonnet-4-20250514"
}
}
Configuration Priority:
- Command-line arguments (highest)
- Configuration file values
- Environment variables
- Default values (lowest)
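The priority chain above can be sketched as a simple resolver (an illustrative helper, not the agent's actual code; the function name and argument shapes are assumptions):

```python
import os

def resolve_setting(name, cli_args, config, env_var, default):
    """Pick a setting by the stated priority:
    CLI argument > config file value > environment variable > default.
    (Sketch for illustration; the real resolver may differ.)
    """
    if cli_args.get(name) is not None:
        return cli_args[name]
    if name in config:
        return config[name]
    if env_var and os.environ.get(env_var):
        return os.environ[env_var]
    return default
```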
# Use different Pollinations models for specific tasks (pre-configured providers)
polli run "Write a Python script" --provider openai
polli run "Debug complex code" --provider deepseek-reasoning
polli run "Generate documentation" --provider qwen-coder
polli run "Refactor legacy code" --provider mistral
polli run "Large codebase analysis" --provider openai-large
polli run "Fast development" --provider openai-fast
polli run "Advanced reasoning" --provider grok
polli run "Latest AI capabilities" --provider llama-scout
Available Pre-Configured Providers:
- openai - General purpose, works without an API key (default)
- deepseek-reasoning - Excellent for complex problem solving
- qwen-coder - Specialized for coding tasks
- mistral - Fast and efficient for most tasks
- openai-large - Enhanced capabilities for complex projects
- openai-fast - Speed-optimized for quick tasks
- grok - Advanced reasoning capabilities
- llama-scout - Latest Llama 4 Scout model
- deepseek - DeepSeek V3 model
- phi - Microsoft Phi-4 with vision support
POLLINATIONS_API_KEY - Pollinations API key (optional; basic models work without it)

Note: Unlike other agents, Polli-Agent works without any API key for basic models. Set POLLINATIONS_API_KEY only if you want access to premium models like DeepSeek, Qwen, and Mistral.
Polli-Agent comes with several built-in tools:
- str_replace_based_edit_tool: Create, edit, view, and manipulate files
  - view - Display file contents or directory listings
  - create - Create new files
  - str_replace - Replace text in files
  - insert - Insert text at specific lines
- bash: Execute shell commands and scripts
  - Run commands with persistent state
  - Handle long-running processes
  - Capture output and errors
- sequential_thinking: Structured problem-solving and analysis
  - Break down complex problems
  - Iterative thinking with revision capabilities
  - Hypothesis generation and verification
- task_done: Signal task completion
  - Mark tasks as successfully completed
  - Provide final results and summaries
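To make the str_replace sub-command concrete, here is a minimal sketch of what such an edit does to a file (an illustration of the operation only, not the tool's actual implementation; real edit tools add validation and richer error reporting):

```python
from pathlib import Path

def str_replace(path: str, old: str, new: str) -> None:
    """Replace an exact, unique occurrence of `old` with `new` in a file.

    Sketch of a str_replace-style edit: the target text must match
    exactly once so the edit is unambiguous.
    """
    text = Path(path).read_text()
    count = text.count(old)
    if count != 1:
        # Refuse ambiguous or missing targets rather than guess
        raise ValueError(f"expected exactly one match for {old!r}, found {count}")
    Path(path).write_text(text.replace(old, new))
```

Requiring a unique match is what makes this style of edit safe: the agent must quote enough surrounding context to pin down a single location.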
Polli-Agent automatically records detailed execution trajectories for debugging and analysis:
# Auto-generated trajectory file
polli run "Debug the authentication module"
# Saves to: trajectory_20250612_220546.json
# Custom trajectory file
polli run "Optimize the database queries" --trajectory-file optimization_debug.json
Trajectory files contain:
- LLM Interactions: All messages, responses, and tool calls
- Agent Steps: State transitions and decision points
- Tool Usage: Which tools were called and their results
- Metadata: Timestamps, token usage, and execution metrics
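Assuming the trajectory file is plain JSON with fields along the lines listed above, a quick post-run inspection might look like this (the key names `steps`, `tool_calls`, and `token_usage` are assumptions for illustration; check an actual trajectory file for the real schema):

```python
import json

def summarize_trajectory(path: str) -> dict:
    """Summarize an agent trajectory file.

    The `steps` / `tool_calls` / `token_usage` keys are hypothetical;
    inspect a real trajectory JSON for the actual layout.
    """
    with open(path) as f:
        data = json.load(f)
    steps = data.get("steps", [])
    return {
        "num_steps": len(steps),
        "tools_used": sorted({c.get("name") for s in steps for c in s.get("tool_calls", [])}),
        "total_tokens": sum(s.get("token_usage", 0) for s in steps),
    }
```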
For more details, see TRAJECTORY_RECORDING.md.
- Fork the repository
- Set up a development install (uv sync --all-extras && pre-commit install)
- Create a feature branch (git checkout -b feature/amazing-feature)
- Make your changes
- Add tests for new functionality
- Commit your changes (git commit -m 'Add amazing feature')
- Push to the branch (git push origin feature/amazing-feature)
- Open a Pull Request
- Follow PEP 8 style guidelines
- Add tests for new features
- Update documentation as needed
- Use type hints where appropriate
- Ensure all tests pass before submitting
- Python 3.12+
- Optional: Pollinations API key (only needed for premium models)
- Free Tier: Works without any API key using basic models
- Premium Tier: Requires POLLINATIONS_API_KEY for advanced models like DeepSeek, Qwen, and Mistral
Import Errors:
# Try setting PYTHONPATH
PYTHONPATH=. polli run "your task"
API Key Issues:
# Check if Pollinations API key is set (optional)
echo $POLLINATIONS_API_KEY
# Check configuration
polli show-config
# Test without API key (should work with basic models)
polli run "Create a simple Python script" --provider pollinations
Permission Errors:
# Ensure proper permissions for file operations
chmod +x /path/to/your/project
This project is licensed under the MIT License - see the LICENSE file for details.
- Pollinations AI - For providing the powerful AI models that make Polli-Agent possible
- Trae-Agent - The excellent foundation that Polli-Agent is built upon
- Anthropic - For building the anthropic-quickstart project that served as a valuable reference for the tool ecosystem