A Production-Ready Runtime Framework for Intelligent Agent Applications
AgentScope Runtime tackles two critical challenges in agent development: secure sandboxed tool execution and scalable agent deployment. Built with a dual-core architecture, it provides framework-agnostic infrastructure for deploying agents with full observability and safe tool interactions.
- Deployment Infrastructure: Built-in services for session management, memory, and sandbox environment control
- Sandboxed Tool Execution: Isolated sandboxes ensure safe tool execution without system compromise
- Framework Agnostic: Not tied to any specific framework; works seamlessly with popular open-source agent frameworks and custom implementations
- Developer Friendly: Simple deployment with powerful customization options
- Observability: Comprehensive tracing and monitoring for runtime operations
You are welcome to join our community on Discord or DingTalk.
- Quick Start
- Cookbook
- Agent Framework Integration
- Deployment
- Contributing
- License
- Python 3.10 or higher
- pip or uv package manager
From PyPI:

```bash
# Install core dependencies
pip install agentscope-runtime

# Install sandbox dependencies
pip install "agentscope-runtime[sandbox]"
```
(Optional) From source:

```bash
# Pull the source code from GitHub
git clone -b main https://github.com/agentscope-ai/agentscope-runtime.git
cd agentscope-runtime

# Install core dependencies
pip install -e .

# Install sandbox dependencies
pip install -e ".[sandbox]"
```
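As a quick sanity check, the imports used throughout this README should now succeed (the `BaseSandbox` import requires the sandbox extra):

```python
# Minimal installation check: these imports come from the examples below.
from agentscope_runtime.engine import Runner
from agentscope_runtime.sandbox import BaseSandbox  # requires the [sandbox] extra

print("AgentScope Runtime is ready:", Runner.__name__, BaseSandbox.__name__)
```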
This example demonstrates how to create a simple LLM agent using AgentScope Runtime and stream responses from the Qwen model.
```python
import asyncio
import os

from agentscope_runtime.engine import Runner
from agentscope_runtime.engine.agents.llm_agent import LLMAgent
from agentscope_runtime.engine.llms import QwenLLM
from agentscope_runtime.engine.schemas.agent_schemas import AgentRequest
from agentscope_runtime.engine.services.context_manager import ContextManager


async def main():
    # Set up the language model and agent
    model = QwenLLM(
        model_name="qwen-turbo",
        api_key=os.getenv("DASHSCOPE_API_KEY"),
    )
    llm_agent = LLMAgent(model=model, name="llm_agent")

    async with ContextManager() as context_manager:
        runner = Runner(agent=llm_agent, context_manager=context_manager)

        # Create a request and stream the response
        request = AgentRequest(
            input=[
                {
                    "role": "user",
                    "content": [
                        {
                            "type": "text",
                            "text": "What is the capital of France?",
                        },
                    ],
                },
            ],
        )

        async for message in runner.stream_query(request=request):
            if hasattr(message, "text"):
                print(f"Streamed Answer: {message.text}")


asyncio.run(main())
```
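Before running the script, export a valid `DASHSCOPE_API_KEY` environment variable; the answer is printed incrementally as message chunks arrive from the stream.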
This example demonstrates how to create a sandbox and execute tools within it.
```python
from agentscope_runtime.sandbox import BaseSandbox

with BaseSandbox() as box:
    print(box.run_ipython_cell(code="print('hi')"))
    print(box.run_shell_command(command="echo hello"))
```
Note
The current version requires Docker or Kubernetes to be installed and running on your system. Please refer to this tutorial for more details.
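Multiple tool calls can be combined within one `with` block. A minimal sketch reusing only the two methods shown above, under the assumption that shell commands and IPython cells share the same sandbox filesystem:

```python
from agentscope_runtime.sandbox import BaseSandbox

with BaseSandbox() as box:
    # Write a file via a shell command inside the sandbox...
    box.run_shell_command(command="echo 'sandboxed data' > /tmp/data.txt")
    # ...then read it back from an IPython cell in the same sandbox
    # (assumes both calls see the same container filesystem).
    print(box.run_ipython_cell(code="print(open('/tmp/data.txt').read())"))
```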
- Cookbook: Comprehensive tutorials
- Concept: Core concepts and architecture overview
- Quick Start: Quick start tutorial
- Demo House: Rich example projects
- API Reference: Complete API documentation
```python
# pip install "agentscope-runtime[agentscope]"
import os

from agentscope.agent import ReActAgent
from agentscope.model import OpenAIChatModel
from agentscope_runtime.engine.agents.agentscope_agent import AgentScopeAgent

agent = AgentScopeAgent(
    name="Friday",
    model=OpenAIChatModel(
        "gpt-4",
        api_key=os.getenv("OPENAI_API_KEY"),
    ),
    agent_config={
        "sys_prompt": "You're a helpful assistant named {name}.",
    },
    agent_builder=ReActAgent,
)
```
```python
# pip install "agentscope-runtime[agno]"
from agno.agent import Agent
from agno.models.openai import OpenAIChat
from agentscope_runtime.engine.agents.agno_agent import AgnoAgent

agent = AgnoAgent(
    name="Friday",
    model=OpenAIChat(
        id="gpt-4",
    ),
    agent_config={
        "instructions": "You're a helpful assistant.",
    },
    agent_builder=Agent,
)
```
```python
# pip install "agentscope-runtime[autogen]"
from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient
from agentscope_runtime.engine.agents.autogen_agent import AutogenAgent

agent = AutogenAgent(
    name="Friday",
    model=OpenAIChatCompletionClient(
        model="gpt-4",
    ),
    agent_config={
        "system_message": "You're a helpful assistant",
    },
    agent_builder=AssistantAgent,
)
```
```python
# pip install "agentscope-runtime[langgraph]"
from typing import TypedDict

from langgraph import graph, types

from agentscope_runtime.engine.agents.langgraph_agent import LangGraphAgent


# define the state
class State(TypedDict, total=False):
    id: str


# define the node functions
async def set_id(state: State):
    new_id = state.get("id")
    assert new_id is not None, "must set ID"
    return types.Command(update=State(id=new_id), goto="REVERSE_ID")


async def reverse_id(state: State):
    new_id = state.get("id")
    assert new_id is not None, "ID must be set before reversing"
    return types.Command(update=State(id=new_id[::-1]))


state_graph = graph.StateGraph(state_schema=State)
state_graph.add_node("SET_ID", set_id)
state_graph.add_node("REVERSE_ID", reverse_id)
state_graph.set_entry_point("SET_ID")
compiled_graph = state_graph.compile(name="ID Reversal")

agent = LangGraphAgent(graph=compiled_graph)
```
Note
More agent framework integrations are coming soon!
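Whichever framework you choose, the resulting adapter agent plugs into the same `Runner` interface shown in the quick start. A minimal sketch, reusing only APIs demonstrated earlier, where `agent` is any of the adapter agents constructed above:

```python
import asyncio

from agentscope_runtime.engine import Runner
from agentscope_runtime.engine.schemas.agent_schemas import AgentRequest
from agentscope_runtime.engine.services.context_manager import ContextManager


async def run_adapter_agent(agent):
    # Any adapter agent (AgentScope, Agno, AutoGen, LangGraph) works here.
    async with ContextManager() as context_manager:
        runner = Runner(agent=agent, context_manager=context_manager)
        request = AgentRequest(
            input=[
                {
                    "role": "user",
                    "content": [{"type": "text", "text": "Hello!"}],
                },
            ],
        )
        async for message in runner.stream_query(request=request):
            if hasattr(message, "text"):
                print(message.text)


# asyncio.run(run_adapter_agent(agent))
```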
The agent runner exposes a `deploy` method that takes a `DeployManager` instance and deploys the agent. The service port is set via the `port` parameter when creating the `LocalDeployManager`, and the service endpoint path is set via the `endpoint_path` parameter when deploying the agent. In this example, we set the endpoint path to `/process`, so after deployment you can access the service at `http://localhost:8090/process`.
```python
from agentscope_runtime.engine.deployers import LocalDeployManager

# Create deployment manager
deploy_manager = LocalDeployManager(
    host="localhost",
    port=8090,
)

# Deploy the agent as a streaming service
deploy_result = await runner.deploy(
    deploy_manager=deploy_manager,
    endpoint_path="/process",
    stream=True,  # Enable streaming responses
)
```
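Once deployed, the service can be exercised with any HTTP client. The following is a hypothetical sketch using the `requests` library: the payload mirrors the `AgentRequest` structure from the quick start, but the exact wire format and response framing are assumptions to verify against the deployment documentation.

```python
import requests

# Hypothetical client call; the payload shape mirrors AgentRequest (assumption).
payload = {
    "input": [
        {
            "role": "user",
            "content": [{"type": "text", "text": "What is the capital of France?"}],
        },
    ],
}

with requests.post(
    "http://localhost:8090/process", json=payload, stream=True
) as response:
    response.raise_for_status()
    # Print streamed response chunks line by line as they arrive.
    for line in response.iter_lines():
        if line:
            print(line.decode("utf-8"))
```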
We welcome contributions from the community! Here's how you can help:
To report bugs:

- Use GitHub Issues to report bugs
- Include detailed reproduction steps
- Provide system information and logs

To suggest features:

- Discuss new ideas in GitHub Discussions
- Follow the feature request template
- Consider implementation feasibility
To contribute code:

- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
For detailed contributing guidelines, please see CONTRIBUTE.
AgentScope Runtime is released under the Apache License 2.0.
Copyright 2025 Tongyi Lab
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Thanks goes to these wonderful people:

- Weirui Kuang
- Bruce Luo
- Zhicheng Zhang
- ericczq
- qbc
- Ran Chen
This project follows the all-contributors specification. Contributions of any kind welcome!