This project demonstrates how to integrate Livepeer's LLM API with AutoGen Core 0.5+. It provides a custom client implementation that bridges the gap between Livepeer's API and AutoGen's expected interfaces.
The integration consists of two key components:
- LivepeerManualClient: A custom client that implements AutoGen Core's ChatCompletionClient interface, allowing it to interact with Livepeer's LLM API.
- Minimal Agent Demo: A simple demonstration of how to use this client in a multi-agent conversation flow.
The LivepeerManualClient implements the ChatCompletionClient interface from AutoGen Core to make requests to Livepeer's LLM API. It:
- Converts AutoGen message formats to Livepeer's expected format
- Handles authentication and request configuration
- Processes responses back into AutoGen's expected format
- Provides proper resource management and cleanup
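To make the bridging concrete, here is a simplified sketch of the request path the client follows. It assumes the Livepeer /llm endpoint accepts an OpenAI-style chat payload and returns choices[0].message.content plus a usage object; the helper names (to_livepeer_messages, livepeer_create) are illustrative rather than the client's actual method names, and livepeer_manual_client.py remains the authoritative implementation.

```python
# Simplified sketch of the request path inside the Livepeer client.
# Assumption: the /llm endpoint speaks an OpenAI-compatible chat format.
import httpx
from autogen_core.models import (
    AssistantMessage, CreateResult, RequestUsage, SystemMessage, UserMessage,
)


def to_livepeer_messages(messages):
    """Map AutoGen message objects to plain role/content dicts."""
    role_map = {SystemMessage: "system", UserMessage: "user", AssistantMessage: "assistant"}
    return [{"role": role_map[type(m)], "content": m.content} for m in messages]


async def livepeer_create(messages, *, base_url, model_name, auth_header, max_tokens=150):
    payload = {
        "model": model_name,
        "messages": to_livepeer_messages(messages),
        "max_tokens": max_tokens,
    }
    async with httpx.AsyncClient(timeout=60.0) as http:
        resp = await http.post(base_url, json=payload, headers={"Authorization": auth_header})
        resp.raise_for_status()
        data = resp.json()

    # Repackage the response in the shape AutoGen Core expects.
    usage = data.get("usage", {})
    return CreateResult(
        finish_reason="stop",
        content=data["choices"][0]["message"]["content"],
        usage=RequestUsage(
            prompt_tokens=usage.get("prompt_tokens", 0),
            completion_tokens=usage.get("completion_tokens", 0),
        ),
        cached=False,
    )
```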
The minimal demo is a standalone script that demonstrates a multi-agent conversation flow using the Livepeer client. It simulates a three-agent team:
- Planner: Creates a plan for fulfilling the user's request
- Executor: Implements the plan
- Critic: Evaluates the execution and provides feedback
The easiest way to install the required dependencies is using the provided requirements.txt file:
pip install -r livepeer_team_example/requirements.txt
If you prefer to install dependencies manually, you'll need:
pip install "autogen-core>=0.5.3" "httpx>=0.28.0"
This implementation is fully compatible with:
- AutoGen Core 0.5.3+: Microsoft's newer, more modular implementation
- Python 3.8+ (recommended: Python 3.10)
To verify that your environment is compatible with this implementation, you can run the included compatibility check:
python -m livepeer_team_example.test_compatibility
This will:
- Check your installed AutoGen Core version
- Verify that the LivepeerManualClient implements all required interfaces
- Confirm that all necessary methods and properties are available
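The sketch below illustrates the kind of checks the script performs (version report plus interface conformance); the actual test_compatibility module may structure them differently.

```python
# Illustrative sketch of the compatibility checks; not the actual test module.
from importlib.metadata import version

from autogen_core.models import ChatCompletionClient

from livepeer_team_example.livepeer_manual_client import LivepeerManualClient


def check_compatibility() -> None:
    # 1. Report the installed AutoGen Core version.
    print(f"autogen-core version: {version('autogen-core')}")

    # 2. Verify the client subclasses the ChatCompletionClient interface.
    assert issubclass(LivepeerManualClient, ChatCompletionClient), \
        "LivepeerManualClient does not implement ChatCompletionClient"

    # 3. Confirm the methods used by the demo are present.
    for name in ("create", "close"):
        assert callable(getattr(LivepeerManualClient, name, None)), f"missing {name}()"

    print("Compatibility checks passed.")


if __name__ == "__main__":
    check_compatibility()
```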
The minimal demo provides a complete example of a multi-agent team using the Livepeer client:
python -m livepeer_team_example.minimal_demo
When using the client in your own applications, you'll need to configure it with:
- base_url: The Livepeer API endpoint URL (example: https://dream-gateway.livepeer.cloud/llm)
- model_name: The model identifier to use (example: meta-llama/Meta-Llama-3.1-8B-Instruct)
- auth_header: Your authentication token (example: Bearer your-auth-token or Autogen-swarm)
- max_tokens (optional): Maximum completion length (default: 150)
Here's a complete example of using the client in an async context:
```python
import asyncio

from livepeer_team_example.livepeer_manual_client import LivepeerManualClient
from autogen_core.models import SystemMessage, UserMessage


async def main():
    # Create the client
    client = LivepeerManualClient(
        base_url="https://dream-gateway.livepeer.cloud/llm",
        model_name="meta-llama/Meta-Llama-3.1-8B-Instruct",
        auth_header="Autogen-swarm",
        max_tokens=150,
    )

    try:
        # Set up messages
        messages = [
            SystemMessage(content="You are a helpful assistant."),
            UserMessage(content="Write a haiku about clouds.", source="user"),
        ]

        # Make a request
        response = await client.create(messages=messages)
        print(f"Response: {response.content}")

        # Check token usage
        if response.usage:
            print(f"Tokens used: {response.usage.prompt_tokens} prompt, "
                  f"{response.usage.completion_tokens} completion")
    finally:
        # Always close the client when done
        await client.close()


if __name__ == "__main__":
    asyncio.run(main())
```
For multi-turn conversations, append previous responses as AssistantMessage objects:

```python
from autogen_core.models import AssistantMessage

# After getting the first response
messages.append(AssistantMessage(content=response.content, source="assistant"))

# Add the next user message
messages.append(UserMessage(content="Tell me more about haikus.", source="user"))

# Make another request with the full conversation history
response2 = await client.create(messages=messages)
```
For more complex agent systems like the minimal demo, follow this pattern (a short sketch follows the list):
- Create specialized system messages for each agent role
- Pass relevant context from previous agents to the next agent
- Use AssistantMessage objects to represent previous agent outputs
- Handle the conversation flow according to your application's needs
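As a rough illustration of this pattern, the sketch below wires the planner, executor, and critic roles from the minimal demo together by hand. The prompts and the run_team helper are examples only; the actual prompts and control flow live in minimal_demo.py.

```python
# Illustrative planner -> executor -> critic flow; prompts are examples only.
from autogen_core.models import AssistantMessage, SystemMessage, UserMessage


async def run_team(client, task: str) -> str:
    # Planner: turn the user's request into a plan.
    plan = (await client.create(messages=[
        SystemMessage(content="You are a planner. Produce a short, numbered plan."),
        UserMessage(content=task, source="user"),
    ])).content

    # Executor: carry out the plan, passing the plan along as context.
    execution = (await client.create(messages=[
        SystemMessage(content="You are an executor. Follow the plan exactly."),
        UserMessage(content=task, source="user"),
        AssistantMessage(content=plan, source="planner"),
        UserMessage(content="Execute the plan above.", source="user"),
    ])).content

    # Critic: evaluate the execution and give feedback.
    feedback = (await client.create(messages=[
        SystemMessage(content="You are a critic. Evaluate the result and give feedback."),
        UserMessage(content=f"Task: {task}\n\nResult:\n{execution}", source="user"),
    ])).content

    return feedback
```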
The project relies on two dependencies:
- autogen-core (0.5.x): Provides the core interfaces for the client implementation. This is the newer Microsoft AutoGen Core library, which is distinct from the original PyAutogen.
- httpx: For making asynchronous HTTP requests
It's important to understand the relationship between different AutoGen packages:
- PyAutogen (often just called "AutoGen"): The original implementation, available via pip install pyautogen. This is a higher-level framework with agents, conversations, and many built-in capabilities.
- AutoGen Core (used in this project): A newer, more modular implementation from Microsoft that provides lower-level interfaces and more flexibility. Available via pip install autogen-core.
This project specifically uses AutoGen Core, which offers better modularity and flexibility for custom integrations like this. The original PyAutogen is not used in this implementation.
The solution addresses several challenges in integrating Livepeer with AutoGen:
- Message Format Conversion: Converts between AutoGen's message format and Livepeer's expected format
- Proper Authentication: Handles Livepeer's authentication requirements
- Response Processing: Properly formats Livepeer responses for AutoGen consumption
- Resource Management: Ensures proper cleanup of resources
While this implementation successfully works with autogen-core, direct integration with the higher-level PyAutogen framework presents additional challenges:
- PyAutogen expects OpenAI-compatible clients with specific methods and behavior
- Event loop handling differs between the libraries
- Deep copying of clients can cause issues
The minimal_demo.py script shows how to implement agent interactions without running into these complications.
This project is available under the MIT License.