
Livepeer Integration with AutoGen

This project demonstrates how to integrate Livepeer's LLM API with AutoGen Core 0.5.3+. It provides a custom client implementation that bridges the gap between Livepeer's API and AutoGen's expected interfaces.

Overview

The integration consists of two key components:

  1. LivepeerManualClient: A custom client that implements AutoGen Core's ChatCompletionClient interface, allowing it to interact with Livepeer's LLM API.
  2. Minimal Agent Demo: A simple demonstration of how to use this client in a multi-agent conversation flow.

Components

LivepeerManualClient

This client implements the ChatCompletionClient interface from AutoGen Core to make requests to Livepeer's LLM API. It:

  • Converts AutoGen message formats to Livepeer's expected format (see the sketch after this list)
  • Handles authentication and request configuration
  • Processes responses back into AutoGen's expected format
  • Provides proper resource management and cleanup
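
As a rough illustration, the conversion step might look like the sketch below. It assumes Livepeer accepts OpenAI-style {"role": ..., "content": ...} chat payloads; the helper name is hypothetical, not part of the client's public API.

from __future__ import annotations

from autogen_core.models import (
    AssistantMessage,
    LLMMessage,
    SystemMessage,
    UserMessage,
)

def to_livepeer_messages(messages: list[LLMMessage]) -> list[dict]:
    # Hypothetical helper: map AutoGen message objects onto the
    # {"role": ..., "content": ...} dicts an OpenAI-style API expects.
    role_map = {
        SystemMessage: "system",
        UserMessage: "user",
        AssistantMessage: "assistant",
    }
    return [{"role": role_map[type(m)], "content": m.content} for m in messages]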

Minimal Demo

A standalone script that demonstrates a multi-agent conversation flow using the Livepeer client. It simulates a three-agent team:

  1. Planner: Creates a plan for fulfilling the user's request
  2. Executor: Implements the plan
  3. Critic: Evaluates the execution and provides feedback

Installation

Using pip

The easiest way to install the required dependencies is to use the provided requirements.txt file:

pip install -r livepeer_team_example/requirements.txt

Manual Installation

If you prefer to install dependencies manually, you'll need:

pip install "autogen-core>=0.5.3" "httpx>=0.28.0"

Compatibility

This implementation is fully compatible with:

  • AutoGen Core 0.5.3+ - Microsoft's newer, more modular implementation
  • Python 3.8+ (recommended: Python 3.10)

Usage

Verifying Compatibility

To verify that your environment is compatible with this implementation, you can run the included compatibility check:

python -m livepeer_team_example.test_compatibility

This will:

  • Check your installed AutoGen Core version
  • Verify that the LivepeerManualClient implements all required interfaces
  • Confirm that all necessary methods and properties are available (a minimal manual version is sketched after this list)
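
If you want to reproduce part of that check by hand, a minimal sketch (not the actual test module) could look like this:

import importlib.metadata

from autogen_core.models import ChatCompletionClient
from livepeer_team_example.livepeer_manual_client import LivepeerManualClient

# Report the installed AutoGen Core version.
print("autogen-core:", importlib.metadata.version("autogen-core"))

# LivepeerManualClient should subclass the ChatCompletionClient ABC;
# instantiating it would fail with a TypeError if any abstract method
# were left unimplemented.
assert issubclass(LivepeerManualClient, ChatCompletionClient)
print("LivepeerManualClient is a ChatCompletionClient")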

Running the Minimal Demo

The minimal demo provides a complete example of a multi-agent team using the Livepeer client:

python -m livepeer_team_example.minimal_demo

Configuration

When using the client in your own applications, you'll need to configure it with the following (an environment-variable sketch follows this list):

  1. base_url: The Livepeer API endpoint URL (example: https://dream-gateway.livepeer.cloud/llm)
  2. model_name: The model identifier to use (example: meta-llama/Meta-Llama-3.1-8B-Instruct)
  3. auth_header: Your authentication token (example: Bearer your-auth-token or Autogen-swarm)
  4. max_tokens (optional): Maximum completion length (default: 150)
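
In your own deployments you might pull these values from the environment instead of hard-coding them. A small sketch; the environment variable names are our own convention, not something the project defines:

import os

from livepeer_team_example.livepeer_manual_client import LivepeerManualClient

# Hypothetical environment variable names; adjust to your setup.
client = LivepeerManualClient(
    base_url=os.environ["LIVEPEER_BASE_URL"],
    model_name=os.environ["LIVEPEER_MODEL"],
    auth_header=os.environ["LIVEPEER_AUTH_HEADER"],
    max_tokens=int(os.environ.get("LIVEPEER_MAX_TOKENS", "150")),
)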

Using the Client in Your Own Code

Here's a complete example of using the client in an async context:

import asyncio
from livepeer_team_example.livepeer_manual_client import LivepeerManualClient
from autogen_core.models import SystemMessage, UserMessage

async def main():
    # Create the client
    client = LivepeerManualClient(
        base_url="https://dream-gateway.livepeer.cloud/llm",
        model_name="meta-llama/Meta-Llama-3.1-8B-Instruct",
        auth_header="Autogen-swarm",
        max_tokens=150
    )
    
    try:
        # Set up messages
        messages = [
            SystemMessage(content="You are a helpful assistant."),
            UserMessage(content="Write a haiku about clouds.", source="user")
        ]
        
        # Make a request
        response = await client.create(messages=messages)
        print(f"Response: {response.content}")
        
        # Check token usage
        if response.usage:
            print(f"Tokens used: {response.usage.prompt_tokens} prompt, " 
                  f"{response.usage.completion_tokens} completion")
    
    finally:
        # Always close the client when done
        await client.close()

if __name__ == "__main__":
    asyncio.run(main())

Creating Multi-Turn Conversations

For multi-turn conversations, append previous responses as AssistantMessage objects:

from autogen_core.models import AssistantMessage

# After getting the first response
messages.append(AssistantMessage(content=response.content, source="assistant"))

# Add the next user message
messages.append(UserMessage(content="Tell me more about haikus.", source="user"))

# Make another request with the full conversation history
response2 = await client.create(messages=messages)

Creating Multi-Agent Systems

For more complex agent systems like the minimal demo, follow this pattern:

  1. Create specialized system messages for each agent role
  2. Pass relevant context from previous agents to the next agent
  3. Use AssistantMessage objects to represent previous agent outputs
  4. Handle the conversation flow according to your application's needs (a condensed sketch follows this list)
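
The sketch below condenses that pattern into a single loop. The role prompts and helper name are illustrative, not taken from minimal_demo.py:

from autogen_core.models import AssistantMessage, SystemMessage, UserMessage

async def run_team(client, task: str) -> str:
    # Illustrative role prompts; each agent gets its own system message.
    roles = {
        "planner": "You create a step-by-step plan for the user's request.",
        "executor": "You carry out the plan you are given.",
        "critic": "You evaluate the execution and provide feedback.",
    }
    context = []  # AssistantMessage objects from earlier agents
    last = task
    for name, prompt in roles.items():
        messages = [
            SystemMessage(content=prompt),
            *context,  # pass earlier agents' outputs along as context
            UserMessage(content=last, source="user"),
        ]
        response = await client.create(messages=messages)
        context.append(AssistantMessage(content=response.content, source=name))
        last = response.content
    return last

Each iteration feeds the previous agent's output forward both as conversation context and as the next prompt, mirroring the Planner, Executor, Critic flow of the demo.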

Technical Details

Packages Used

  • autogen-core (0.5.x): Provides the core interfaces for the client implementation. This is the newer Microsoft AutoGen Core library, which is distinct from the original PyAutogen.
  • httpx: For making asynchronous HTTP requests

Relationship Between AutoGen Packages

It's important to understand the relationship between different AutoGen packages:

  1. PyAutogen (often just called "AutoGen"): The original implementation, available via pip install pyautogen. This is a higher-level framework with agents, conversations, and many built-in capabilities.

  2. AutoGen Core (used in this project): A newer, more modular implementation from Microsoft that provides lower-level interfaces and more flexibility. Available via pip install autogen-core.

This project specifically uses AutoGen Core, which offers better modularity and flexibility for custom integrations like this one. The original PyAutogen is not used in this implementation.

Integration Challenges

The solution addresses several challenges in integrating Livepeer with AutoGen:

  1. Message Format Conversion: Converts between AutoGen's message format and Livepeer's expected format
  2. Proper Authentication: Handles Livepeer's authentication requirements
  3. Response Processing: Properly formats Livepeer responses for AutoGen consumption
  4. Resource Management: Ensures proper cleanup of resources

Notes on PyAutogen Integration

While this implementation successfully works with autogen-core, direct integration with the higher-level PyAutogen framework presents additional challenges:

  • PyAutogen expects OpenAI-compatible clients with specific methods and behavior
  • Event loop handling differs between the libraries
  • Deep copying of clients can cause issues

The minimal_demo.py script shows how to implement agent interactions without these complications.

License

This project is available under the MIT License.
