A collection of AI agents built with Google ADK (Agent Development Kit) and Docker Model Runner, all running against locally served models.
| Agent | Description | Key Features |
|---|---|---|
| Sequential Agent | Code development pipeline | Write → Review → Refactor |
| Parallel Agent | Market intelligence analysis | Concurrent competitive/trend/sentiment analysis |
| Loop Agent | Iterative recipe development | Recipe creation with dietician feedback loops |
| Human-in-Loop | Travel planning with feedback | Human decision points in AI workflows |
| Google Search | Web research and synthesis | Live search with comprehensive reports |
| Find Job | Job market analysis | Career guidance and opportunity analysis |
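Each agent follows the same pattern: one or more `LlmAgent` steps composed by an ADK workflow agent. As a rough illustration (not the repo's exact code; the names and instructions below are invented), the Sequential agent's Write → Review → Refactor pipeline could be wired like this:

```python
# Hypothetical sketch of a Write -> Review -> Refactor pipeline with Google ADK.
# The model wrapper points LiteLLM at Docker Model Runner's OpenAI-compatible API.
from google.adk.agents import LlmAgent, SequentialAgent
from google.adk.models.lite_llm import LiteLlm

model = LiteLlm(
    model="openai/ai/llama3.2:1B-Q8_0",  # "openai/" prefix = OpenAI-compatible route
    api_base="http://localhost:12434/engines/llama.cpp/v1",
    api_key="anything",  # the local runner accepts any placeholder key
)

writer = LlmAgent(name="writer", model=model,
                  instruction="Write code that satisfies the user's request.")
reviewer = LlmAgent(name="reviewer", model=model,
                    instruction="Review the previous code and list concrete issues.")
refactorer = LlmAgent(name="refactorer", model=model,
                      instruction="Refactor the code to address the review.")

# SequentialAgent runs the sub-agents in order, passing state along the chain.
root_agent = SequentialAgent(name="code_pipeline",
                             sub_agents=[writer, reviewer, refactorer])
```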
Prerequisites:

- Docker Desktop 4.40+ with Model Runner enabled
- Python 3.9+ (for local development)
- Google API key (optional, required only for the Google Search agents)
```bash
# Enable Docker Model Runner with TCP access on port 12434
docker desktop enable model-runner --tcp 12434

# Pull a model
docker model pull ai/llama3.2:1B-Q8_0

# Verify Model Runner is responding
curl http://localhost:12434/engines/llama.cpp/v1/models
```
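Beyond `curl`, you can exercise the same OpenAI-compatible API from Python. A minimal smoke test, assuming the `openai` package is installed and the default endpoint and model set up above:

```python
# Smoke-test Docker Model Runner's OpenAI-compatible API.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:12434/engines/llama.cpp/v1",
    api_key="anything",  # the local runner accepts any placeholder key
)

# List the models Model Runner is serving; ai/llama3.2:1B-Q8_0 should appear.
print([m.id for m in client.models.list().data])

# Run one chat completion against the local model.
reply = client.chat.completions.create(
    model="ai/llama3.2:1B-Q8_0",
    messages=[{"role": "user", "content": "Say hello in five words."}],
)
print(reply.choices[0].message.content)
```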
```bash
# Clone the repository
git clone https://github.com/dockersamples/google-adk-docker-model-runner.git
cd google-adk-docker-model-runner

# Copy the environment template
cp agents/.env.example agents/.env

# Edit the .env file with your configuration
nano agents/.env
```
```bash
# Create a virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Run the agents via the ADK web UI
cd agents && adk web
```
Then open http://localhost:8000 in your browser.
```bash
# Build the image
docker build -t docker-adk-agents:v1 .

# Run with your local .env variables
docker run -p 8000:8000 --env-file agents/.env docker-adk-agents:v1
```
As before, open http://localhost:8000 in your browser.
| Agent | Sample Prompt |
|---|---|
| Sequential Agent | Write HTML code with a title and description for a website's main landing page |
| Parallel Agent | Customer sentiment and feedback trends on Docker Model Runner and Docker AI |
| Loop Agent | Suggest a healthy recipe with paneer |
| Human-in-Loop | Plan a trip to Dubai |
| Google Search | Share details about the Docker Model Runner features release |
| Find Job | Share jobs related to Python |
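The `adk web` UI is the easiest way to try these prompts, but an agent can also be driven programmatically. A hedged sketch using ADK's `Runner` and an in-memory session, with an illustrative single agent standing in for the repo's real ones:

```python
# Illustrative only: a stand-in agent run end-to-end with ADK's Runner.
import asyncio
from google.adk.agents import LlmAgent
from google.adk.models.lite_llm import LiteLlm
from google.adk.runners import Runner
from google.adk.sessions import InMemorySessionService
from google.genai import types

agent = LlmAgent(
    name="recipe_helper",  # hypothetical stand-in for the Loop agent
    model=LiteLlm(model="openai/ai/llama3.2:1B-Q8_0",
                  api_base="http://localhost:12434/engines/llama.cpp/v1",
                  api_key="anything"),
    instruction="You are a helpful cooking assistant.",
)

async def main() -> None:
    sessions = InMemorySessionService()
    session = await sessions.create_session(app_name="agents", user_id="u1")
    runner = Runner(agent=agent, app_name="agents", session_service=sessions)
    message = types.Content(role="user",
                            parts=[types.Part(text="Suggest a healthy recipe with paneer")])
    # Stream events and print the final model response.
    async for event in runner.run_async(user_id="u1", session_id=session.id,
                                        new_message=message):
        if event.is_final_response() and event.content:
            print(event.content.parts[0].text)

asyncio.run(main())
```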
| Variable | Description | Default | Required |
|---|---|---|---|
| `DOCKER_MODEL_RUNNER` | Model Runner endpoint | Auto-detected | No |
| `MODEL_NAME` | Model to use | `ai/llama3.2:1B-Q8_0` | No |
| `OPENAI_API_KEY` | API key for the local runner | `anything` | No |
| `GOOGLE_API_KEY` | Google API key | None | Yes (for search agents) |
| `AGENT_TYPE` | Which agent to run | `sequential` | No |
| `TEST_QUERY` | Query to process | Agent-specific default | No |
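In code, these variables reduce to a handful of `os.getenv` lookups with the defaults from the table. The repo centralizes this in `agents/shared/config.py`; the idea is roughly (attribute names there may differ):

```python
import os

# Rough shape of the configuration lookups, using the table's defaults.
api_base = os.getenv("DOCKER_MODEL_RUNNER",
                     "http://localhost:12434/engines/llama.cpp/v1")
model_name = os.getenv("MODEL_NAME", "ai/llama3.2:1B-Q8_0")
api_key = os.getenv("OPENAI_API_KEY", "anything")
agent_type = os.getenv("AGENT_TYPE", "sequential")
```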
The system automatically detects the correct Docker Model Runner endpoint, trying these strategies in order (a simplified sketch follows the endpoint list below):

1. Explicit Override: the `DOCKER_MODEL_RUNNER` environment variable
2. Container Auto-Detection: tests common container networking patterns
3. Localhost Fallback: uses `http://localhost:12434` for development
Supported endpoint patterns:

- Host/Development: `http://localhost:12434/engines/llama.cpp/v1`
- Docker Desktop: `http://host.docker.internal:12434/engines/llama.cpp/v1`
- Docker Internal: `http://model-runner.docker.internal:12434/engines/llama.cpp/v1`
- Docker Bridge: `http://172.17.0.1:12434/engines/llama.cpp/v1`
- Docker Compose: `http://model-runner:12434/engines/llama.cpp/v1`
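The actual detection lives in `agents/shared/config.py`; conceptually it boils down to honoring the override, then probing each candidate's `/models` route. A simplified sketch of that strategy:

```python
import os
import urllib.request

# Candidate endpoints, mirroring the list above.
CANDIDATES = [
    "http://localhost:12434/engines/llama.cpp/v1",
    "http://host.docker.internal:12434/engines/llama.cpp/v1",
    "http://model-runner.docker.internal:12434/engines/llama.cpp/v1",
    "http://172.17.0.1:12434/engines/llama.cpp/v1",
    "http://model-runner:12434/engines/llama.cpp/v1",
]

def detect_endpoint() -> str:
    # 1. An explicit override always wins.
    override = os.getenv("DOCKER_MODEL_RUNNER")
    if override:
        return override
    # 2. Probe each candidate's /models route; first responder wins.
    for base in CANDIDATES:
        try:
            with urllib.request.urlopen(f"{base}/models", timeout=2) as resp:
                if resp.status == 200:
                    return base
        except OSError:
            continue
    # 3. Fall back to localhost for host-side development.
    return "http://localhost:12434/engines/llama.cpp/v1"
```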
```bash
# Test endpoint connectivity from inside a container
docker run --rm curlimages/curl:latest \
  curl -f http://host.docker.internal:12434/engines/llama.cpp/v1/models

# Test agent configuration
docker run --rm \
  -e DOCKER_MODEL_RUNNER=http://host.docker.internal:12434/engines/llama.cpp/v1 \
  docker-adk-agents \
  python -c "
from agents.shared.config import ModelRunnerConfig
config = ModelRunnerConfig()
print(f'Endpoint: {config.api_base}')
print(f'Model: {config.model_name}')
"
```
```bash
# Check that async/await is used properly (create_session must be awaited)
grep -r "create_session" agents/

# Verify Model Runner is accessible from the host
curl http://localhost:12434/engines/llama.cpp/v1/models

# Check Docker networking from inside a container
docker run --rm curlimages/curl:latest \
  curl -f http://host.docker.internal:12434/engines/llama.cpp/v1/models
```
```bash
# Pull the model first, then confirm it is available locally
docker model pull ai/llama3.2:1B-Q8_0
docker model ls
```
```bash
# Test the candidate endpoints one by one
for endpoint in \
  "http://host.docker.internal:12434" \
  "http://172.17.0.1:12434" \
  "http://localhost:12434"; do
  echo "Testing $endpoint..."
  docker run --rm curlimages/curl:latest \
    curl -f "$endpoint/engines/llama.cpp/v1/models" || echo "Failed"
done
```
```bash
# Enable detailed logging
docker run --rm \
  -e LOG_LEVEL=DEBUG \
  -e DEV_MODE=true \
  -e DOCKER_MODEL_RUNNER=http://host.docker.internal:12434/engines/llama.cpp/v1 \
  docker-adk-agents
```
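On the Python side, honoring `LOG_LEVEL` is a one-liner. A plausible wiring (the repo's actual logging setup may differ):

```python
import logging
import os

# Map LOG_LEVEL (e.g. DEBUG) onto Python's logging module, defaulting to INFO.
logging.basicConfig(
    level=getattr(logging, os.getenv("LOG_LEVEL", "INFO").upper(), logging.INFO),
    format="%(asctime)s %(name)s %(levelname)s %(message)s",
)
```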
```
┌────────────────────────────────────┐
│  Application Layer                 │ ← Your Agent Logic
├────────────────────────────────────┤
│  Google ADK Framework              │ ← Agent Orchestration
│   • Multi-agent workflows          │
│   • State management               │
│   • Tool integration               │
├────────────────────────────────────┤
│  Centralized Configuration         │ ← Environment Detection
│   • Auto endpoint detection        │
│   • Container networking           │
│   • Model configuration            │
├────────────────────────────────────┤
│  LiteLLM Abstraction               │ ← Model API Layer
├────────────────────────────────────┤
│  Docker Model Runner               │ ← Local Inference
│   • llama.cpp engine               │
│   • OpenAI-compatible API          │
├────────────────────────────────────┤
│  AI Model (Llama 3.2)              │ ← The Actual Model
└────────────────────────────────────┘
```
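The LiteLLM layer is what makes the stack swap-friendly: one `completion()` call treats Docker Model Runner as just another OpenAI-compatible backend. For illustration (not repo code):

```python
# One LiteLLM completion() call against Docker Model Runner.
from litellm import completion

response = completion(
    model="openai/ai/llama3.2:1B-Q8_0",  # "openai/" prefix selects the OpenAI-compatible route
    api_base="http://localhost:12434/engines/llama.cpp/v1",
    api_key="anything",
    messages=[{"role": "user", "content": "Summarize llama.cpp in one line."}],
)
print(response.choices[0].message.content)
```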
Key design decisions:

- Centralized Configuration: all agents use `agents/shared/config.py`
- Environment Awareness: automatic detection of container vs. host execution
- Graceful Fallbacks: multiple endpoint detection strategies
- Async-First: proper async/await patterns throughout (see the sketch below)
- Error Handling: comprehensive error handling and logging
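The async-first point deserves a concrete example: in recent ADK releases, session creation is a coroutine, so it must be awaited rather than called synchronously. That is exactly what the `grep` for `create_session` in the troubleshooting section checks for. A minimal sketch:

```python
import asyncio
from google.adk.sessions import InMemorySessionService

async def new_session():
    service = InMemorySessionService()
    # create_session is a coroutine: await it, or the caller silently
    # gets a coroutine object instead of a session.
    return await service.create_session(app_name="agents", user_id="demo-user")

session = asyncio.run(new_session())
print(session.id)
```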
Contributions are welcome:

1. Fork the repository
2. Create a feature branch
3. Make your changes with proper error handling
4. Test in both local and container environments
5. Submit a pull request