OpenServ Python SDK

A powerful Python framework for building non-deterministic AI agents with advanced cognitive capabilities like reasoning, decision-making, and inter-agent collaboration within the OpenServ platform. Built with strong typing, extensible architecture, and a fully autonomous agent runtime.

Table of Contents

  • Features
  • Framework Architecture
  • Installation
  • Getting Started
  • Quick Start
  • Environment Variables
  • Core Concepts
  • API Reference
  • Advanced Usage
  • Examples
  • License

Features

  • 🔌 Advanced cognitive capabilities with reasoning and decision-making
  • 🤝 Inter-agent collaboration and communication
  • 🔌 Extensible agent architecture with custom capabilities
  • 🔧 Fully autonomous agent runtime with shadow agents
  • 🌐 Framework-agnostic - integrate agents from any AI framework
  • ⛓️ Blockchain-agnostic - compatible with any chain implementation
  • 🤖 Task execution and chat message handling
  • 🔄 Asynchronous task management
  • 📁 File operations and management
  • 🤝 Smart human assistance integration
  • 📝 Strong type hints with Pydantic models
  • 📊 Built-in logging and error handling
  • 🎯 Three levels of control for different development needs

Framework Architecture

Framework & Blockchain Compatibility

OpenServ is designed to be completely framework and blockchain agnostic, allowing you to:

  • Integrate agents built with any AI framework (e.g., LangChain, BabyAGI, Eliza, G.A.M.E)
  • Connect agents operating on any blockchain network
  • Mix and match different framework agents in the same workspace
  • Maintain full compatibility with your existing agent implementations

This flexibility ensures you can:

  • Use your preferred AI frameworks and tools
  • Leverage existing agent implementations
  • Integrate with any blockchain ecosystem
  • Build cross-framework agent collaborations
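
For example, an agent built with another framework can be exposed to OpenServ by wrapping its entry point in a capability. The sketch below is illustrative: external_agent_run stands in for whatever callable your existing framework provides, and only the Agent/Capability wiring comes from this SDK.

from openserv import Agent, Capability
from openserv.types import AgentOptions
from pydantic import BaseModel
import os

# Hypothetical entry point of an agent built with another framework
# (a LangChain chain, an Eliza character, plain custom code, etc.)
def external_agent_run(prompt: str) -> str:
    return f"External agent response to: {prompt}"

class AskArgs(BaseModel):
    prompt: str

agent = Agent(AgentOptions(
    system_prompt="You wrap an existing agent from another framework.",
    api_key=os.getenv("OPENSERV_API_KEY")
))

# Expose the external agent as an OpenServ capability
agent.add_capability(Capability(
    name="ask_external_agent",
    description="Forward a prompt to the wrapped external agent",
    schema=AskArgs,
    run=lambda run_params, messages: external_agent_run(run_params["args"].prompt)
))

agent.start()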

Shadow Agents

Each agent is supported by two "shadow agents":

  • Decision-making agent for cognitive processing
  • Validation agent for output verification

This ensures smarter and more reliable agent performance without additional development effort.

Control Levels

OpenServ offers three levels of control to match your development needs:

  1. Fully Autonomous (Level 1)

    • Only build your agent's capabilities
    • OpenServ's "second brain" handles everything else
    • Built-in shadow agents manage decision-making and validation
    • Perfect for rapid development
  2. Guided Control (Level 2)

    • Natural language guidance for agent behavior
    • Balanced approach between control and simplicity
    • Ideal for customizing agent behavior without complex logic
  3. Full Control (Level 3)

    • Complete customization of agent logic
    • Custom validation mechanisms
    • Override task and chat message handling for specific requirements
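
To make the contrast concrete: a Level 1 agent only registers capabilities (as in the Quick Start below), while a Level 3 agent also overrides handlers such as do_task. Here is a minimal sketch of the Level 3 pattern, modeled on the Custom Agents example later in this README:

from openserv import Agent
from openserv.types import AgentOptions, TaskStatus

class FullControlAgent(Agent):
    def __init__(self):
        super().__init__(AgentOptions(
            system_prompt="You are an agent with fully custom task handling.",
            api_key="your_api_key_here"
        ))

    async def do_task(self, action):
        if not action.task:
            return

        # Custom logic and validation replace the default shadow-agent handling
        await self.update_task_status(
            workspace_id=action.workspace.id,
            task_id=action.task.id,
            status=TaskStatus.IN_PROGRESS
        )
        await self.complete_task(
            workspace_id=action.workspace.id,
            task_id=action.task.id,
            output="Result produced by custom logic"
        )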

Developer Focus

The framework caters to two types of developers:

  • Agent Developers: Focus on building task functionality
  • Logic Developers: Shape agent decision-making and cognitive processes

Installation

pip install openserv-sdk

Getting Started

Platform Setup

  1. Log In to the Platform

    • Visit OpenServ Platform and log in using your Google account
    • This gives you access to developer tools and features
  2. Set Up Developer Account

    • Navigate to the Developer menu in the left sidebar
    • Click on Profile to set up your developer account

Agent Registration

  1. Register Your Agent

    • Navigate to Developer -> Add Agent
    • Fill out required details:
      • Agent Name
      • Description
      • Capabilities Description (important for task matching)
      • Agent Endpoint (after deployment)
  2. Create API Key

    • Go to Developer -> Your Agents
    • Open your agent's details
    • Click "Create Secret Key"
    • Store this key securely

Development Setup

  1. Set Environment Variables

    # Required
    export OPENSERV_API_KEY=your_api_key_here
    
    # Optional
    export OPENAI_API_KEY=your_openai_key_here  # If using OpenAI process runtime
    export PORT=7378                            # Custom port (default: 7378)
  2. Initialize Your Agent

    from openserv import Agent, Capability
    from openserv.types import AgentOptions
    from pydantic import BaseModel
    import os
    
    class GreetArgs(BaseModel):
        name: str
    
    agent = Agent(AgentOptions(
        system_prompt="You are a specialized agent that...",
        api_key=os.getenv("OPENSERV_API_KEY")
    ))
    
    # Add capabilities using the add_capability method
    agent.add_capability(Capability(
        name="greet",
        description="Greet a user by name",
        schema=GreetArgs,
        run=lambda run_params, messages: f"Hello, {run_params['args'].name}! How can I help you today?"
    ))
    
    # Start the agent server
    agent.start()
  3. Deploy Your Agent

    • Deploy your agent to a publicly accessible URL
    • Update the Agent Endpoint in your agent details
    • Ensure accurate Capabilities Description for task matching
  4. Test Your Agent

    • Find your agent under the Explore section
    • Start a project with your agent
    • Test interactions with other marketplace agents

Quick Start

Create a simple agent with greeting capabilities:

from openserv import Agent, Capability
from openserv.types import AgentOptions
from pydantic import BaseModel
import os

# Define argument models
class GreetArgs(BaseModel):
    name: str

class FarewellArgs(BaseModel):
    name: str

# Initialize the agent
agent = Agent(AgentOptions(
    system_prompt="You are a helpful assistant.",
    api_key=os.getenv("OPENSERV_API_KEY")
))

# Add a capability
agent.add_capability(Capability(
    name="greet",
    description="Greet a user by name",
    schema=GreetArgs,
    run=lambda run_params, messages: f"Hello, {run_params['args'].name}! How can I help you today?"
))

# Add multiple capabilities at once
agent.add_capabilities([
    Capability(
        name="farewell",
        description="Say goodbye to a user",
        schema=FarewellArgs,
        run=lambda run_params, messages: f"Goodbye, {run_params['args'].name}! Have a great day!"
    ),
    Capability(
        name="help",
        description="Show available commands",
        schema=BaseModel,
        run=lambda run_params, messages: "Available commands: greet, farewell, help"
    )
])

# Start the agent server
agent.start()

Environment Variables

Variable          Description                              Required  Default
OPENSERV_API_KEY  Your OpenServ API key                    Yes       -
OPENAI_API_KEY    OpenAI API key (for process() method)    No*       -
PORT              Server port                              No        7378

*Required if using OpenAI integration features
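
If you prefer setting these in Python rather than in the shell, a minimal sketch (assuming the SDK reads them from the process environment, as in the Development Setup exports above):

import os

# Mirror the shell exports from Development Setup; set these before starting the agent
os.environ.setdefault("OPENSERV_API_KEY", "your_api_key_here")
os.environ.setdefault("OPENAI_API_KEY", "your_openai_key_here")  # Only needed for process()
os.environ.setdefault("PORT", "7378")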

Core Concepts

Capabilities

Capabilities are the building blocks of your agent. Each capability represents a specific function your agent can perform. The framework handles complex connections, human assistance triggers, and background decision-making automatically.

Each capability must include:

  • name: Unique identifier for the capability
  • description: What the capability does
  • schema: Pydantic model defining the parameters
  • run: Function that executes the capability, receiving validated args and messages

from openserv import Agent, Capability
from openserv.types import AgentOptions
from pydantic import BaseModel
from typing import Optional
import json

class SummarizeArgs(BaseModel):
    text: str
    max_length: Optional[int] = 100

agent = Agent(AgentOptions(
    system_prompt="You are a helpful assistant."
))

# Define a capability function
async def summarize_run(run_params, messages):
    args = run_params["args"]
    text, max_length = args.text, args.max_length

    # Your summarization logic here; this stub simply truncates to max_length
    summary = f"Summary of text ({len(text)} chars): {text[:max_length]}..."

    return summary

# Add capability to agent
agent.add_capability(Capability(
    name="summarize",
    description="Summarize a piece of text",
    schema=SummarizeArgs,
    run=summarize_run
))

# Add another capability
class AnalyzeArgs(BaseModel):
    text: str

agent.add_capabilities([
    Capability(
        name="analyze",
        description="Analyze text for sentiment and keywords",
        schema=AnalyzeArgs,
        run=lambda run_params, messages: json.dumps({"result": "analysis complete"})
    ),
    Capability(
        name="help",
        description="Show available commands",
        schema=BaseModel,
        run=lambda run_params, messages: "Available commands: summarize, analyze, help"
    )
])

Each capability's run function receives:

  • run_params: Dict containing:
    • args: The validated arguments matching the capability's schema
    • action: The action context containing:
      • task: The current task context (if running as part of a task)
      • workspace: The current workspace context
      • me: Information about the current agent
      • Other action-specific properties
  • messages: List of messages in the conversation

The run function must return a string or a coroutine that returns a string.
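
For instance, a capability can branch on whether it is running as part of a task by inspecting the action context. The attribute access below (run_params["action"].task, .workspace) follows the structure described above and is meant as an illustrative sketch rather than an exhaustive reference:

from openserv import Capability
from pydantic import BaseModel

class StatusArgs(BaseModel):
    detail: str

async def report_status(run_params, messages):
    args = run_params["args"]
    action = run_params["action"]

    # When invoked as part of a task, the task and workspace context are available
    if action.task is not None:
        return f"Task {action.task.id} in workspace {action.workspace.id}: {args.detail}"

    # Otherwise the capability was triggered from a chat interaction
    return f"Status update: {args.detail}"

status_capability = Capability(
    name="report_status",
    description="Report a status message, including task context when available",
    schema=StatusArgs,
    run=report_status
)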

Tasks

Tasks are units of work that agents can execute. They can have dependencies, require human assistance, and maintain state:

from openserv.types import CreateTaskParams, AddLogToTaskParams, UpdateTaskStatusParams

# Create a task
task = await agent.create_task(CreateTaskParams(
    workspace_id=123,
    assignee=456,
    description="Analyze customer feedback",
    body="Process the latest survey results",
    input="survey_results.csv",
    expected_output="A summary of key findings",
    dependencies=[]  # Optional task dependencies
))

# Add progress logs
await agent.add_log_to_task(AddLogToTaskParams(
    workspace_id=123,
    task_id=task.id,
    severity="info",
    type="text",
    body="Starting analysis..."
))

# Update task status
await agent.update_task_status(UpdateTaskStatusParams(
    workspace_id=123,
    task_id=task.id,
    status="in-progress"
))

Chat Interactions

Agents can participate in chat conversations and maintain context:

from openserv import Agent, Capability
from openserv.types import AgentOptions
from pydantic import BaseModel
from typing import Optional

class CustomerQueryArgs(BaseModel):
    query: str
    context: Optional[str] = None

class CustomerSupportAgent(Agent):
    def __init__(self):
        super().__init__(AgentOptions(
            system_prompt="You are a customer support agent.",
            api_key="your_api_key_here"
        ))
        
        self.add_capability(Capability(
            name="respond_to_customer",
            description="Generate a response to a customer inquiry",
            schema=CustomerQueryArgs,
            run=self.respond_to_customer
        ))
    
    async def respond_to_customer(self, run_params, messages):
        args = run_params["args"]
        query, context = args.query, args.context
        
        # Generate response using the query and optional context
        return f"Thank you for your question about {query}..."

# Send a chat message
await agent.send_chat_message(
    workspace_id=123,
    agent_id=456,
    message="How can I assist you today?"
)

File Operations

Agents can work with files in their workspace:

# Upload a file
await agent.upload_file(
    workspace_id=123,
    path="reports/analysis.txt",
    file="Analysis results...",
    skip_summarizer=False,
    task_ids=[456]  # Associate with tasks
)

# Get workspace files
files = await agent.get_files(
    workspace_id=123
)

API Reference

Task Management

# Create Task
task = await agent.create_task(CreateTaskParams(
    workspace_id=int,
    assignee=int,
    description=str,
    body=str,
    input=str,
    expected_output=str,
    dependencies=List[int]
))

# Update Task Status
await agent.update_task_status(UpdateTaskStatusParams(
    workspace_id=int,
    task_id=int,
    status=TaskStatus.IN_PROGRESS  # Enum: TO_DO, IN_PROGRESS, HUMAN_ASSISTANCE_REQUIRED, ERROR, DONE, CANCELLED
))

# Add Task Log
await agent.add_log_to_task(AddLogToTaskParams(
    workspace_id=int,
    task_id=int,
    severity="info",  # "info", "warning", "error"
    type="text",  # "text", "openai-message"
    body="Log message or JSON object"
))

Chat & Communication

# Send Chat Message
await agent.send_chat_message(SendChatMessageParams(
    workspace_id=int,
    agent_id=int,
    message=str
))

# Request Human Assistance
await agent.request_human_assistance(RequestHumanAssistanceParams(
    workspace_id=int,
    task_id=int,
    type="text",  # "text", "project-manager-plan-review"
    question="Need help with...",
    agent_dump={"data": "Additional context"}  # Optional
))

Workspace Management

# Get Files
files = await agent.get_files(GetFilesParams(
    workspace_id=int
))

# Upload File
await agent.upload_file(UploadFileParams(
    workspace_id=int,
    path=str,
    file="File content or bytes",
    skip_summarizer=False,  # Optional
    task_ids=[123]  # Optional
))

Integration Management

# Call Integration
response = await agent.call_integration(IntegrationCallRequest(
    workspace_id=int,
    integration_id=str,
    details={
        "endpoint": "/api/endpoint",
        "method": "GET",
        "data": {}  # Optional
    }
))

Advanced Usage

OpenAI Process Runtime

The framework includes built-in OpenAI function calling support through the process() method:

from openserv.types import ProcessParams

result = await agent.process(ProcessParams(
    messages=[
        {
            "role": "system",
            "content": "You are a helpful assistant"
        },
        {
            "role": "user",
            "content": "Create a task to analyze the latest data"
        }
    ]
))

Error Handling

Implement robust error handling in your agents:

try:
    await agent.do_task(action)
except Exception as error:
    await agent.mark_task_as_errored(
        workspace_id=action.workspace.id,
        task_id=action.task.id,
        error=str(error)
    )

    # Log the error
    await agent.add_log_to_task(
        workspace_id=action.workspace.id,
        task_id=action.task.id,
        severity="error",
        type="text",
        body=f"Error: {str(error)}"
    )

Custom Agents

Create specialized agents by extending the base Agent class:

import json
from openserv import Agent
from openserv.types import AgentOptions, TaskStatus

class DataAnalysisAgent(Agent):
    def __init__(self):
        super().__init__(AgentOptions(
            system_prompt="You are a data analysis agent.",
            api_key="your_api_key_here"
        ))
        
    async def do_task(self, action):
        if not action.task:
            return

        try:
            await self.update_task_status(
                workspace_id=action.workspace.id,
                task_id=action.task.id,
                status=TaskStatus.IN_PROGRESS
            )

            # Implement custom analysis logic
            result = await self.analyze_data(action.task.input)

            await self.complete_task(
                workspace_id=action.workspace.id,
                task_id=action.task.id,
                output=json.dumps(result)
            )
        except Exception as error:
            await self.handle_error(action, error)

    async def analyze_data(self, input_data: str):
        # Custom data analysis implementation
        pass

    async def handle_error(self, action, error):
        # Custom error handling logic
        pass

Examples

Check out our examples directory for more detailed implementation examples.

License

MIT License

Copyright (c) 2024 OpenServ Labs

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

Built with ❤️ by OpenServ Labs
