This project is an AI chatbot powered by the OpenAI Agents SDK, capable of using different LLM providers, including OpenAI and AWS Bedrock. It features multi-agent orchestration, guardrails, and a Streamlit-based interface for interaction. The project supports secure configuration via `.env` and seamless provider switching using LiteLLM.
This project leverages the OpenAI Agents SDK, a lightweight and powerful framework for building multi-agent workflows. The SDK simplifies the development of agentic applications by providing:
- Agents: LLMs equipped with instructions and tools.
- Handoffs: Mechanisms that allow agents to delegate tasks to other agents.
- Guardrails: Input validations and checks that run in parallel to agents.
- Function Tools: Python functions turned into tools with automatic schema generation and validation.
- Tracing: Built-in tracing to visualize and debug agentic flows.
For more details, refer to the official documentation.
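For orientation, here is a minimal sketch of agents and handoffs using the SDK's `Agent` and `Runner` primitives; the agent names and instructions are illustrative, not this project's actual configuration:

```python
from agents import Agent, Runner

# An illustrative specialist the triage agent can hand off to.
genai_specialist = Agent(
    name="GenAI Specialist",
    instructions="Answer questions about generative AI architecture.",
)

# A triage agent that delegates via handoff when appropriate.
triage_agent = Agent(
    name="Triage Agent",
    instructions="Route each question to the right specialist.",
    handoffs=[genai_specialist],
)

result = Runner.run_sync(triage_agent, "How do I design a RAG pipeline?")
print(result.final_output)
```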
- Chatbot Interface: A user-friendly chatbot built with Streamlit.
- Agent Management: Backend logic for managing multiple agents, including a triage agent and specialized agents.
- Guardrails: Input validation and routing using guardrails.
- Asynchronous Execution: Supports asynchronous operations for efficient processing.
- Multi-Provider Support: Automatically switches between OpenAI and AWS Bedrock (Anthropic Claude 3) using LiteLLM, based on available environment credentials.
- Testing: Includes unit tests for backend logic using `pytest` and `pytest-asyncio`.
- Custom Tools with Function Calling: Create Python functions (such as fetching the ChatGPT IP whitelist) and expose them to LLMs using `@function_tool`.
- File Retrieval with Vector Store (Optional): Retrieve relevant information from your uploaded documents using OpenAI's `FileSearchTool`; a wiring sketch follows this list. If no `VECTOR_STORE_ID` is provided, the agent still functions normally using only the base model.
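As a rough sketch of how the optional retrieval piece can be wired up (assuming the SDK's `FileSearchTool`; the agent name and instructions are placeholders, not this project's exact code):

```python
import os
from agents import Agent, FileSearchTool

tools = []
vector_store_id = os.getenv("VECTOR_STORE_ID")
if vector_store_id:
    # Attach file search only when a vector store is configured;
    # without it, the agent falls back to the base model alone.
    tools.append(FileSearchTool(vector_store_ids=[vector_store_id], max_num_results=3))

agent = Agent(
    name="SA GenAI Specialist",  # placeholder name
    instructions="Answer generative AI architecture questions.",
    tools=tools,
)
```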
The `sa_genai_agent` is designed to dynamically select between LLM providers:
- If `MODEL_PROVIDER` is set to `bedrock` and `AWS_ACCESS_KEY_ID` and related AWS credentials are found in your environment or `.env`, it uses Anthropic Claude 3 Sonnet via AWS Bedrock.
- If not, it defaults to OpenAI's GPT-4o model, with no configuration changes needed.
This is enabled using the LiteLLM integration, making the agent flexible and cost-efficient depending on your cloud environment.
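A condensed sketch of what that selection can look like with the SDK's LiteLLM extension (installed via `openai-agents[litellm]`); the Bedrock model ID here is an assumption and may differ from this repo's actual configuration:

```python
import os
from agents import Agent
from agents.extensions.models.litellm_model import LitellmModel

def pick_model():
    # Assumed model IDs for illustration; verify against your Bedrock account.
    if os.getenv("MODEL_PROVIDER") == "bedrock" and os.getenv("AWS_ACCESS_KEY_ID"):
        return LitellmModel(model="bedrock/anthropic.claude-3-sonnet-20240229-v1:0")
    return "gpt-4o"  # default: OpenAI

sa_genai_agent = Agent(
    name="SA GenAI Specialist",
    instructions="Answer generative AI architecture questions.",
    model=pick_model(),
)
```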
Configure the following variables in a `.env` file:

```
OPENAI_API_KEY=your_openai_api_key_here
VECTOR_STORE_ID=your_vector_store_id_here              # optional
AWS_ACCESS_KEY_ID=your_aws_access_key_id_here          # optional
AWS_SECRET_ACCESS_KEY=your_aws_secret_access_key_here  # optional
AWS_REGION_NAME=eu-west-1                              # optional
MODEL_PROVIDER=openai                                  # "openai" or "bedrock"
```
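These variables are typically loaded at startup with python-dotenv; a minimal example of reading them (variable names as above, the rest illustrative):

```python
import os
from dotenv import load_dotenv

load_dotenv()  # reads .env from the project root into the process environment

provider = os.getenv("MODEL_PROVIDER", "openai")   # falls back to OpenAI
vector_store_id = os.getenv("VECTOR_STORE_ID")     # may be None; retrieval is optional
```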
- Clone the repository:

  ```bash
  git clone https://github.com/your-repo/openAiAgentText.git
  cd openAiAgentText
  ```

- Install dependencies:

  ```bash
  pip3 install -r requirements.txt
  ```

- Create a `.env` file in the root directory and add the credentials shown above. The app uses OpenAI by default; if AWS credentials are present and `MODEL_PROVIDER` is set to `bedrock`, it uses AWS Bedrock with the Anthropic Claude 3 model.
- To obtain a `VECTOR_STORE_ID` (optional; the app still works without a vector store):
  - Upload files via the OpenAI Platform UI:
    - Navigate to OpenAI's platform.
    - Go to the "Files" section.
    - Create a new vector store and attach your uploaded file.
    - Retrieve the `VECTOR_STORE_ID` from the vector store details.
  - Or use the OpenAI CLI:

    ```bash
    openai file create -p assistants -f my_document.pdf
    openai vector-store create -f <file_id>
    ```

  - Or create the store programmatically, as in the sketch below.
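A sketch of the programmatic route with the official `openai` Python client, assuming a recent client version where vector stores sit at the top level (file and store names are placeholders):

```python
from openai import OpenAI

client = OpenAI()  # uses OPENAI_API_KEY from the environment

# Upload a document, create a vector store, and attach the file to it.
uploaded = client.files.create(file=open("my_document.pdf", "rb"), purpose="assistants")
store = client.vector_stores.create(name="my-knowledge-base")  # placeholder name
client.vector_stores.files.create(vector_store_id=store.id, file_id=uploaded.id)

print("VECTOR_STORE_ID =", store.id)  # copy this value into your .env
```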
To run the Streamlit chatbot:

```bash
python3 -m streamlit run frontend/chatbot_app.py
```

To run the backend directly:

```bash
python3 backend/main.py
```
To execute the test suite:

```bash
python3 -m pytest tests/test_main.py
```

To save test results to a file:

```bash
python3 -m pytest tests/test_main.py > test_results.txt
```
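Because the backend runs asynchronously, the tests rely on `pytest-asyncio`. A hypothetical test shape (the `run_triage` import is a placeholder, not necessarily this project's actual entry point; see `tests/test_main.py` for the real tests):

```python
import pytest

@pytest.mark.asyncio
async def test_triage_handles_genai_question():
    # Placeholder import: substitute the actual entry point exposed by backend/main.py.
    from backend.main import run_triage
    result = await run_triage("How can I integrate Amazon Bedrock with LangChain?")
    assert result is not None
```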
These examples show how user inputs are routed to the correct agent or rejected, based on the triage logic and guardrails.
User Input: How can I integrate Amazon Bedrock with LangChain?
- 🔍 Summary: A question about using generative AI tools and orchestration frameworks.
- 🤖 Routed Agent: `SA Generative AI Specialist (sa_genai_agent)`
User Input: How can I automate deployment pipelines using GitHub Actions and AWS CodePipeline?
- 🔍 Summary: A DevOps/operations-related question about CI/CD automation.
- 🤖 Routed Agent: `SA Operations Specialist (sa_operations_agent)`
User Input: Regarding my Generative AI Chatbot architecture, I need to whitelist an IP address for ChatGPT. Could you provide one?
- 🔍 Summary: A generative AI architecture question with a specific technical requirement (ChatGPT IPs); this triggers the custom tool (`chatgpt_actions_tool.py`).
- 🤖 Routed Agent: `SA Generative AI Specialist (sa_genai_agent)`
User Input: What is 1 + 1?
- 🔍 Summary: A general question not related to architecture.
- ❌ Routed Agent: None; blocked by guardrail (`is_architecture: false`)
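A sketch of how such a guardrail can be expressed with the SDK's `@input_guardrail` decorator, assuming a Pydantic output model mirroring the `is_architecture` flag above (the real prompts live in `backend/prompts/guardrails.yaml`):

```python
from pydantic import BaseModel
from agents import Agent, GuardrailFunctionOutput, Runner, input_guardrail

class ArchitectureCheck(BaseModel):
    is_architecture: bool
    summary: str

# Illustrative checker agent; the real instructions come from guardrails.yaml.
guardrail_agent = Agent(
    name="Architecture check",
    instructions="Decide whether the question concerns solution architecture.",
    output_type=ArchitectureCheck,
)

@input_guardrail
async def architecture_guardrail(ctx, agent, user_input):
    result = await Runner.run(guardrail_agent, user_input, context=ctx.context)
    check = result.final_output
    # Tripwire fires for non-architecture questions, blocking routing
    # (the "is_architecture: false" case shown above).
    return GuardrailFunctionOutput(
        output_info=check,
        tripwire_triggered=not check.is_architecture,
    )
```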
This project includes a custom tool (`get_chatgpt_actions`) that uses a public OpenAI API to fetch the latest ChatGPT IP whitelist. The tool is implemented as a Python function and exposed to the LLM via the `@function_tool` decorator.
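A minimal sketch of that tool, assuming `requests` and an illustrative endpoint URL (the actual implementation is in `backend/tools/chatgpt_actions_tool.py` and may differ):

```python
import requests
from agents import function_tool

# Assumed endpoint for illustration; the real tool may use a different source.
CHATGPT_ACTIONS_URL = "https://openai.com/chatgpt-actions.json"

@function_tool
def get_chatgpt_actions() -> str:
    """Fetch the latest ChatGPT Actions IP whitelist from OpenAI's public feed."""
    response = requests.get(CHATGPT_ACTIONS_URL, timeout=10)
    response.raise_for_status()
    return response.text  # raw JSON; the LLM formats it for the user
```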
```
openAiAgentsChatbot/
├── backend/                          # Backend logic
│   ├── agents/                       # Agent definitions
│   │   ├── sa_genai_agent.py         # Generative AI specialist agent
│   │   └── sa_operations_agent.py    # Operations specialist agent
│   ├── tools/
│   │   └── chatgpt_actions_tool.py   # Tool to fetch ChatGPT IP whitelist
│   ├── prompts/                      # Agent instructions
│   │   ├── guardrails.yaml
│   │   ├── sa_operations_instructions.yaml
│   │   ├── sa_genai_agent_instructions.yaml
│   │   └── triage_agent_instructions.yaml
│   └── main.py                       # Backend entry point
├── frontend/                         # Frontend logic
│   └── chatbot_app.py                # Streamlit chatbot application
├── tests/                            # Unit and integration tests
│   └── test_main.py                  # Tests for backend logic
├── .env                              # Environment variables (ignored by Git)
├── requirements.txt                  # Python dependencies
└── README.md                         # Project documentation
```
Install the following Python packages:

- `openai-agents` (plus the `litellm` extra for multi-provider support)
- `python-dotenv`
- `pyyaml`
- `openai`
- `streamlit`
- `pytest`
- `pytest-asyncio`
- `litellm`
- `boto3` (for AWS Bedrock access)
You can install all dependencies using:

```bash
pip3 install -r requirements.txt
```

or one by one:

```bash
pip3 install openai-agents
pip3 install python-dotenv
pip3 install pyyaml
pip3 install openai streamlit
pip3 install pytest
pip3 install pytest-asyncio
pip3 install "openai-agents[litellm]"
pip3 install litellm
pip3 install boto3
```
Developed by imKikev.
This project is licensed under the MIT License. See the `LICENSE` file for details.