diff --git a/README.md b/README.md
index e19270d..667c0ad 100644
--- a/README.md
+++ b/README.md
@@ -1,280 +1,17 @@
-**New**
+# Agents SDK Chat
-########## 90%
+This project demonstrates a minimal chat application using the OpenAI Agents SDK with Next.js.
-Code Refactor...loading
+## Local Development
-
-https://github.com/admineral/OpenAI-Assistant-API-Chat/tree/Code_refactor
-
-##
-
-[)
-
-
-
-# OpenAI Assistant API Chat
-
-[](https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgithub.com%2Fadmineral%2FOpenAI-Assistant-API-Chat&env=OPENAI_API_KEY&envDescription=OpenAI%20API%20Key&envLink=https%3A%2F%2Fplatform.openai.com%2Faccount%2Fapi-keys&project-name=openai-assistant-api-chat&repository-name=OpenAI-Assistant-API-Chat)
-[](https://open-ai-assistant-api-chat.vercel.app)
-
-## Introduction
-
-Welcome to the OpenAI Assistant API Chat repository! This innovative chat application allows users to interact with an AI assistant powered by OpenAI's latest "gpt-4-1106-preview" model. It's an exciting space where technology meets conversation, offering a unique experience of AI interaction.
-
-# [Demo](https://open-ai-assistant-api-chat.vercel.app)
-
-
-
-## Beta & Work in Progress
-
-Please note that this application is currently in the beta phase and is continuously evolving. We are working diligently to enhance the user experience and add new features. During this phase, you may encounter some hiccups or unexpected behavior.
-
-## Deployment
-
-This application is ready to be deployed with Vercel, a cloud platform for static sites and Serverless Functions. Vercel provides an easy way to deploy your applications directly from your repository.
-
-To deploy this application with Vercel, click on the "Deploy with Vercel" button below. This will take you to the Vercel platform where you'll be guided through the deployment process.
-
-Please note that you'll need to provide your OpenAI API key during the deployment process. This key is used to authenticate your application's requests to the OpenAI API.
-
-[](https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgithub.com%2Fadmineral%2FOpenAI-Assistant-API-Chat&env=OPENAI_API_KEY&envDescription=OpenAI%20API%20Key&envLink=https%3A%2F%2Fplatform.openai.com%2Faccount%2Fapi-keys&project-name=openai-assistant-api-chat&repository-name=OpenAI-Assistant-API-Chat)
-
-
-In addition to the OpenAI API key, you can also specify a default Assistant ID during the deployment process. This ID determines which AI assistant is used in the chat application. If you set this ID, the application will use this assistant for the chat. If you do not set this ID, the application will prompt the user to enter the assistant details.
-
-To deploy the application with both the OpenAI API key and a hardcoded Assistant ID, click on the "Deploy with Vercel" button below. You will be prompted to enter both your OpenAI API key and your Assistant ID.
-
-[](https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgithub.com%2Fadmineral%2FOpenAI-Assistant-API-Chat&env=OPENAI_API_KEY,REACT_APP_ASSISTANT_ID&envDescription=OpenAI%20API%20Key,Assistant%20ID&envLink=https%3A%2F%2Fplatform.openai.com%2Faccount%2Fapi-keys&project-name=openai-assistant-api-chat&repository-name=OpenAI-Assistant-API-Chat)
-## Features
-
-- **Personalized AI Assistant**: Customize the assistant's name, model, and description for a unique chat experience.
-- **Interactive Chat Experience**: Engage in dynamic conversations with the AI assistant.
-- **Robust AI Responses**: Leveraging OpenAI's "gpt-4-1106-preview" model (128k context) for intelligent, context-aware chat responses.
-- **File Upload**: Users can upload files for the assistant to analyze.
-- **GPT-4 Vision Integration**: Send pictures to the AI, and it will describe what it sees, providing insights and understanding of the visual content.(improoved version soon)
-
-
-- **Function Calls**: (Coming Soon) Experience interactive functionalities such as API calls based on chat context.
-- **Code Interpretation**: (Coming Soon) The assistant can execute Pytho code.
-
-
-
-
-
-
-
-
-## Getting Started
-
-### Prerequisites
-- Node.js installed on your machine.
-- An active OpenAI API key.
-
-### Installation
-1. **Clone the Repository**:
- ```
- git clone https://github.com/admineral/OpenAI-Assistant-API-Chat.git
- ```
-2. **Install Dependencies**:
- Navigate to the project directory and run:
- ```
- npm install
- ```
-3. **Environment Setup**:
- Create a `.env` file in the root directory and add your OpenAI API key:
- ```
- OPENAI_API_KEY=your_openai_api_key
- ```
-4. **Run the Application**:
- Start the server with:
- ```
- npm run dev
- ```
-
-## Contributing
-
-Your contributions make this project thrive. Whether it's reporting bugs, suggesting features, or submitting code changes, every bit of help is greatly appreciated.
-
-- **Report Issues**: If you encounter any problems, please open an issue on our GitHub page.
-- **Feature Requests**: Have an idea? Share it with us by opening an issue.
-- **Pull Requests**: Want to make a direct impact? Fork the repository, make your changes, and submit a pull request.
-
-We look forward to growing this project with the community's support and creativity!
-
-
-
-## Application Architecture Overview
-
-### ChatManager (`ChatManager.ts`)
-- **Role**: Central component for managing chat state and operations.
-- **Functions**:
- - `startAssistant`: Initializes the chat assistant, manages file uploads, and handles thread creation.
- - `sendMessage`: Sends user messages to the assistant and updates the chat.
- - `getChatState`: Retrieves the current state of the chat, including messages and assistant status.
-
-### API Layer (`api.js`)
-- **Purpose**: Acts as an intermediary between the front-end and various API routes.
-- **Key Functions**:
- - `uploadImageAndGetDescription`: Uploads images and gets descriptions using the GPT-4 Vision API.
- - `createAssistant`, `createThread`, `runAssistant`: Handles assistant creation, thread management, and assistant operations.
-
-### Assistant Modules (`assistantModules.ts`)
-- **Role**: Manages tasks related to the chat assistant, such as file preparation and assistant initialization.
-- **Key Functions**:
- - `prepareUploadFile`: Prepares and uploads files for the chat assistant.
- - `initializeAssistant`: Initializes a chat assistant with specific details.
- - `createChatThread`: Creates a chat thread with an initial message.
-
-### Chat Modules (`chatModules.ts`)
-- **Purpose**: Manages chat-related functionalities.
-- **Key Functions**:
- - `submitUserMessage`: Submits user messages to the chat.
- - `fetchAssistantResponse`: Fetches the latest messages from the assistant.
- - `updateChatState`: Updates the chat state with new messages.
-
-
-## Detailed Code Explanation
-
-### ChatManager Implementation (`ChatManager.ts`)
-- **Singleton Pattern**: Ensures a single instance of `ChatManager` manages the chat state and operations.
-- **State Management**: Handles chat state, including messages, thread IDs, assistant status, and loading states.
-- **Error Handling**: Robust error handling during chat operations.
-- **API Integration**: Integrates with API layer for message sending/receiving and chat thread management.
-
-### API Layer (`api.js`)
-- **Central API Management**: Simplifies front-end interactions with a clean API interface.
-- **Error Handling**: Ensures smooth application operation with error handling in API requests.
-
-### Front-End Interaction
-- **React Hooks**: Utilizes hooks in `useChatState.ts` for state management.
-- **User Interface**: `InputForm` and `MessageList` interact with `ChatManager` for displaying messages and handling user inputs.
-
-
-
-
-
-
-
-
-### Main Components and Flow
-- **ChatManager (`ChatManager.ts`)**: Central component managing the chat state and operations.
-- **API Layer (`api.js`)**: Intermediary for API interactions.
-- **Assistant Modules (`assistantModules.ts`)**: Handles tasks related to the chat assistant.
-- **Chat Modules (`chatModules.ts`)**: Manages chat functionalities.
-
-## Detailed Breakdown
-
-### `ChatManager.ts`
-This is the core class managing the chat's state and operations.
-
-```typescript
-class ChatManager {
- private state: ChatState;
- private static instance: ChatManager | null = null;
-
- // Singleton pattern to ensure a single ChatManager instance
- private constructor(setChatMessages: (messages: any[]) => void, setStatusMessage: (message: string) => void) {
- this.state = {
- /* State initialization */
- };
- console.log('ChatManager initialized');
- }
-
- // Method to get the current instance of ChatManager
- public static getInstance(setChatMessages: (messages: any[]) => void, setStatusMessage: (message: string) => void): ChatManager {
- if (this.instance === null) {
- this.instance = new ChatManager(setChatMessages, setStatusMessage);
- }
- return this.instance;
- }
-
- // Method to start the assistant
- async startAssistant(assistantDetails: any, file: File | null, initialMessage: string): Promise {
- // ... Function logic including API calls to initialize assistant and create chat thread
- }
-
- // Method to send a message
- async sendMessage(input: string): Promise {
- // ... Function logic to handle message sending
- }
-
- // Method to get the current chat state
- getChatState(): ChatState {
- console.log('Getting chat state');
- return this.state;
- }
-}
-```
-- **Key Features**:
- - Singleton pattern ensures only one instance of `ChatManager` is created.
- - Manages the chat's state, including messages, assistant's ID, thread ID, and loading states.
- - `startAssistant`: Initiates the assistant and sets up the chat thread.
- - `sendMessage`: Handles sending messages to the assistant.
- - `getChatState`: Retrieves the current state of the chat.
-
-### `api.js`
-This module contains functions for various API interactions required by the chat application.
-
-```javascript
-// Example of an API function
-export const uploadImageAndGetDescription = async (base64Image) => {
- // Code to upload an image and get a description using the OpenAI API
-};
-
-export const createAssistant = async (assistantDetails) => {
- // Code to create an assistant
-};
-
-// Other API functions like 'createThread', 'runAssistant', etc.
-```
-- **Purpose**: Provides a centralized and clean interface for API interactions.
-- **Key Functions**:
- - `uploadImageAndGetDescription`: Uploads a base64 encoded image and gets a description.
- - `createAssistant`: Creates a new assistant instance.
- - Other functions for managing threads, running assistants, etc.
-
-### `assistantModules.ts`
-Contains functions related to preparing and managing the chat assistant.
-
-```typescript
-export const prepareUploadFile = async (file: File, setStatusMessage: (message: string) => void): Promise => {
- // Logic to prepare and upload a file for the chat assistant
-};
-
-export const initializeAssistant = async (assistantDetails, fileId): Promise => {
- // Logic to initialize an assistant with given details
-};
-
-export const createChatThread = async (inputMessage: string): Promise => {
- // Logic to create a chat thread
-};
-```
-- **Purpose**: Handles assistant-related tasks such as file preparation and assistant initialization.
-
-### `chatModules.ts`
-Manages chat-related functionalities, primarily dealing with messages.
-
-```typescript
-export const submitUserMessage = async (input: string, threadId: string): Promise => {
- // Logic to submit a user's message to the chat
-};
-
-export const fetchAssistantResponse = async (runId: string, threadId: string): Promise => {
- // Logic to fetch the latest messages from the assistant
-};
-
-export const updateChatState = (prevMessages: Message[], newMessages: Message[], setChatMessages: (messages: any[]) => void): Promise => {
- // Logic to update the chat state with new messages
-};
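+The Agents SDK routes read your key from `process.env.OPENAI_API_KEY`, so set it before starting the dev server (for example, put `OPENAI_API_KEY=your_openai_api_key` in a `.env` file in the project root). Then install dependencies and start the app:
+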
+```bash
+npm install
+npm run dev
```
-- **Purpose**: Manages sending user messages, fetching assistant responses, and updating the chat state.
-
-### React Components
-- **`WelcomeForm`**, **`InputForm`**, and **`MessageList`** are React components that build the user interface of the chat application.
- They use hooks and states to manage user interactions and display chat messages.
+## API Routes
-### API Routes (`/api/*.ts`)
-These files define various API routes for handling tasks like creating assistants, listing messages, checking run status, etc. They interact with the OpenAI API and provide endpoints for the frontend to call.
+- `POST /api/agentsSDKChat` — Send `{ "message": "Hello" }` and receive the agent reply.
+- `POST /api/agentsTriage` — Example showing how to hand off between multiple agents.
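+
+For example, with the dev server running you can call the chat endpoint from any client. The request and response fields (`message` in, `reply` out) match the route handler; the snippet itself is only a sketch:
+
+```ts
+// Minimal sketch: POST a message to the chat route and read the agent's reply.
+const res = await fetch('http://localhost:3000/api/agentsSDKChat', {
+  method: 'POST',
+  headers: { 'Content-Type': 'application/json' },
+  body: JSON.stringify({ message: 'Hello' }),
+});
+const { reply } = await res.json();
+console.log(reply);
+```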
+
+The old Assistants API implementation was removed in favor of the new Agents SDK.
diff --git a/app/api/README.md b/app/api/README.md
index 0f6d2e6..c019a79 100644
--- a/app/api/README.md
+++ b/app/api/README.md
@@ -1,93 +1,8 @@
-# README for `app/api/` Folder - Individual API Routes
+# API Routes
-## Overview
-The `app/api/` directory is a crucial part of our application, dedicated to defining the API routes that handle server-side operations. These routes are integral to the functionality of our chat application, enabling file uploads, message retrieval, and interaction with the OpenAI API.
+This application exposes two example endpoints using the OpenAI Agents SDK.
+- **POST `/api/agentsSDKChat`** – Run a simple agent. Send `{ "message": "Hello" }` and receive a reply.
+- **POST `/api/agentsTriage`** – Demonstrates handing off between specialized agents. Send `{ "question": "..." }`.
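+
+For example, with the dev server running (`npm run dev`), the triage endpoint can be exercised like this; the `question`/`reply` fields match the route handler, and the snippet itself is only a sketch:
+
+```ts
+// Minimal sketch: send a homework question and print the chosen tutor's answer.
+const res = await fetch('http://localhost:3000/api/agentsTriage', {
+  method: 'POST',
+  headers: { 'Content-Type': 'application/json' },
+  body: JSON.stringify({ question: 'Who was the first president of the United States?' }),
+});
+const { reply } = await res.json();
+console.log(reply);
+```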
-
-
-
-## `upload/route.ts`
-
-### Description
-Handles the uploading of files from the client, which are necessary for initiating chat sessions with the AI assistant.
-
-### Key Features
-- Processes and uploads files received from the client.
-- Integrates with storage solutions for file persistence.
-- Returns file identifiers or metadata back to the client.
-
-
-## `listMessages/route.ts`
-
-### Description
-Manages fetching messages from specific chat threads, vital for displaying the chat history to the user.
-
-### Key Features
-- Retrieves messages based on thread IDs.
-- Ensures secure and efficient data retrieval.
-- Formats and returns chat messages for client display.
-
-
----
-
-## `createAssistant/route.ts`
-
-### Description
-Responsible for creating a new instance of the AI assistant.
-
-### Key Features
-- Receives configuration parameters for assistant creation.
-- Initializes an AI assistant using services like OpenAI.
-- Returns essential details like the assistant ID to the client.
-
----
-
-## `createThread/route.ts`
-
-### Description
-Handles the creation of a new chat thread, which is essential for managing a conversation context.
-
-### Key Features
-- Takes initial messages or context for thread setup.
-- Establishes a chat thread in the backend system.
-- Provides the thread ID for subsequent message handling.
-
----
-
-## `runAssistant/route.ts`
-
-### Description
-Executes the assistant's logic within a specific chat thread.
-
-### Key Features
-- Initiates assistant interaction in a given thread.
-- Manages and monitors the assistant's chat activities.
-- Provides real-time status updates of the assistant's actions.
-
----
-
-## `addMessage/route.ts`
-
-### Description
-Manages the addition of new user messages to a chat thread.
-
-### Key Features
-- Processes incoming messages from the client.
-- Adds messages to the appropriate chat thread.
-- Confirms successful message addition or reports issues.
-
----
-
-## `checkRunStatus/route.ts`
-
-### Description
-Monitors and reports the operational status of the AI assistant within a chat thread.
-
-### Key Features
-- Regularly checks the assistant's activity status.
-- Provides updates on the assistant's operational state to the client.
-- Handles various statuses like 'active', 'completed', or 'failed'.
-
-
----
+
+All former Assistants API routes have been removed.
diff --git a/app/api/addMessage/route.ts b/app/api/addMessage/route.ts
deleted file mode 100644
index 98821d1..0000000
--- a/app/api/addMessage/route.ts
+++ /dev/null
@@ -1,63 +0,0 @@
-/**
- * API Route - Add Message to Thread
- *
- * This route provides the functionality to add new messages to a specific
- * thread via the OpenAI API. It is designed to handle POST requests, where
- * it receives 'threadId' and 'input' within the form data. The 'threadId'
- * identifies the target conversation thread, while 'input' contains the
- * message content to be added. This route plays a crucial role in facilitating
- * dynamic interactions within AI-powered threads, allowing users to continue
- * conversations or add new queries and instructions.
- *
- * Path: /api/addMessage
- */
-
-import { NextRequest, NextResponse } from 'next/server';
-import OpenAI from "openai";
-
-const openai = new OpenAI({
- apiKey: process.env.OPENAI_API_KEY,
-});
-
-export async function POST(req: NextRequest) {
- try {
- // Extract thread ID, input content, and fileIds from JSON data
- const data = await req.json();
- const threadId = data.threadId;
- const input = data.input;
- const fileIds = data.fileIds; // This is the new line
-
- // Log the received thread ID, input, and fileIds for debugging purposes
- console.log(`inside add_Message -Thread ID: ${threadId}`);
- console.log(`inside add_Message -Input: ${input}`);
- console.log(`inside add_Message -File IDs: ${fileIds}`); // This is the new line
-
- // Validate the input data
- if (typeof input !== 'string') {
- throw new Error('Input is not a string');
- }
-
- // If input is provided, create a new message in the thread using the OpenAI API
- if (input) {
- await openai.beta.threads.messages.create(threadId, {
- role: "user",
- content: input,
- file_ids: fileIds || [], // This is the new line
- });
- console.log("add_Message successfully");
- return NextResponse.json({ message: "Message created successfully" });
- }
-
- // Respond with a message indicating no action was performed if input is empty
- return NextResponse.json({ message: 'No action performed' });
- } catch (error) {
- // Error handling with detailed logging
- if (error instanceof Error) {
- console.error('Error:', error);
- return NextResponse.json({ error: error.message });
- } else {
- console.error('Unknown error:', error);
- return NextResponse.json({ error: 'An unknown error occurred' });
- }
- }
-}
\ No newline at end of file
diff --git a/app/api/agentsSDKChat/route.ts b/app/api/agentsSDKChat/route.ts
new file mode 100644
index 0000000..b3bfe38
--- /dev/null
+++ b/app/api/agentsSDKChat/route.ts
@@ -0,0 +1,27 @@
+import { NextRequest, NextResponse } from 'next/server';
+import { Agent, run, setDefaultOpenAIKey } from '@openai/agents';
+
+// Initialize API key for the Agents SDK
+setDefaultOpenAIKey(process.env.OPENAI_API_KEY!);
+
+// Simple demo agent using the new Agents SDK
+const agent = new Agent({
+ name: 'Assistant',
+ instructions: 'You are a helpful assistant.',
+});
+
+export async function POST(request: NextRequest) {
+ try {
+ const { message } = await request.json();
+
+ if (typeof message !== 'string') {
+ return NextResponse.json({ error: 'Invalid message' }, { status: 400 });
+ }
+
+ const result = await run(agent, message);
+ return NextResponse.json({ reply: result.finalOutput });
+ } catch (err) {
+ console.error('Agent run error:', err);
+ return NextResponse.json({ error: 'Failed to run agent' }, { status: 500 });
+ }
+}
diff --git a/app/api/agentsTriage/route.ts b/app/api/agentsTriage/route.ts
new file mode 100644
index 0000000..55c5369
--- /dev/null
+++ b/app/api/agentsTriage/route.ts
@@ -0,0 +1,42 @@
+import { NextRequest, NextResponse } from 'next/server';
+import { Agent, run, setDefaultOpenAIKey } from '@openai/agents';
+
+setDefaultOpenAIKey(process.env.OPENAI_API_KEY!);
+
+// Two specialist agents
+const historyTutor = new Agent({
+ name: 'History Tutor',
+ instructions:
+ 'You provide assistance with historical queries. Explain important events and context clearly.',
+});
+
+const mathTutor = new Agent({
+ name: 'Math Tutor',
+ instructions:
+    'You provide help with math problems. Explain your reasoning at each step and include examples.',
+});
+
+// Triage agent handing off to specialists
+const triageAgent = new Agent({
+ name: 'Triage Agent',
+ instructions:
+ "You determine which agent to use based on the user's homework question",
+ handoffs: [historyTutor, mathTutor],
+});
+
+export async function POST(req: NextRequest) {
+ try {
+ const { question } = await req.json();
+ if (typeof question !== 'string') {
+ return NextResponse.json({ error: 'Invalid question' }, { status: 400 });
+ }
+ const result = await run(triageAgent, question);
+ return NextResponse.json({ reply: result.finalOutput });
+ } catch (err) {
+ console.error('Triage agent error:', err);
+ return NextResponse.json(
+ { error: 'Failed to run triage agent' },
+ { status: 500 },
+ );
+ }
+}
diff --git a/app/api/checkRunStatus/route.ts b/app/api/checkRunStatus/route.ts
deleted file mode 100644
index 56addfd..0000000
--- a/app/api/checkRunStatus/route.ts
+++ /dev/null
@@ -1,46 +0,0 @@
-/**
- * API Route - Check Run Status
- *
- * This route is designed to check the status of a specific run in a thread
- * using the OpenAI API. It accepts POST requests containing 'threadId' and
- * 'runId' in the form data. The route then queries the OpenAI API to retrieve
- * the current status of the specified run. This information is crucial for
- * understanding the state of an ongoing interaction with an AI assistant,
- * such as whether the interaction is completed, ongoing, or has encountered
- * any issues. The status of the run is returned as a JSON response, providing
- * a simple and effective way for client applications to monitor and react to
- * the progress of AI-assisted conversations or tasks.
- *
- * Path: /api/checkRunStatus
- */
-
-import { NextRequest, NextResponse } from 'next/server';
-import OpenAI from "openai";
-
-const openai = new OpenAI({
- apiKey: process.env.OPENAI_API_KEY,
-});
-
-export async function POST(req: NextRequest) {
- try {
- // Extract JSON data from the request
- const data = await req.json();
- const threadId = data.threadId;
- const runId = data.runId;
-
- // Log the received thread ID and run ID for debugging
- console.log(`Received request with threadId: ${threadId} and runId: ${runId}`);
-
- // Retrieve the status of the run for the given thread ID and run ID using the OpenAI API
- const runStatus = await openai.beta.threads.runs.retrieve(threadId, runId);
-
- // Log the retrieved run status for debugging
- console.log(`Retrieved run status: ${runStatus.status}`);
-
- // Return the retrieved run status as a JSON response
- return NextResponse.json({ status: runStatus.status });
- } catch (error) {
- // Log any errors that occur during the process
- console.error(`Error occurred: ${error}`);
- }
-}
\ No newline at end of file
diff --git a/app/api/createAssistant/route.ts b/app/api/createAssistant/route.ts
deleted file mode 100644
index d1774d0..0000000
--- a/app/api/createAssistant/route.ts
+++ /dev/null
@@ -1,73 +0,0 @@
-/**
- * API Route - Create Assistant
- *
- * This route handles the creation of a new OpenAI assistant. It accepts POST requests
- * with necessary data such as assistant name, model, description, and an optional file ID.
- * This data is used to configure and create an assistant via the OpenAI API. The route
- * returns the ID of the newly created assistant, allowing for further operations involving
- * this assistant. It's designed to provide a seamless process for setting up customized
- * OpenAI assistants as per user requirements.
- *
- * Path: /api/createAssistant
- */
-import { NextRequest, NextResponse } from 'next/server'
-import { OpenAI } from 'openai';
-
-const openai = new OpenAI({
- apiKey: process.env.OPENAI_API_KEY,
- });
-
-
-
- export async function POST(req: NextRequest) {
- if (req.method === 'POST') {
- try {
- const requestBody = await req.json();
- console.log('Request Body:', requestBody); // Log the entire request body
-
- const { assistantDetails, fileIds } = requestBody;
- const { name: assistantName, model: assistantModel, description: assistantDescription } = assistantDetails;
-
- // Log the fileIds
- console.log('Assistant Name:', assistantName);
- console.log('Assistant Model:', assistantModel);
- console.log('Assistant Description:', assistantDescription);
- console.log('File IDs:', fileIds);
-
- if (!assistantName || !assistantModel || !assistantDescription) {
- throw new Error('Missing required assistant parameters');
- }
-
- const assistantOptions: any = {
- name: assistantName,
- instructions: assistantDescription,
- model: assistantModel,
- tools: [{ "type": "retrieval" }],
- };
- if (fileIds) {
- assistantOptions.file_ids = fileIds;
- }
-
- // Log the assistantOptions
- console.log('Assistant Options:', assistantOptions);
-
- const assistant = await openai.beta.assistants.create(assistantOptions);
- const assistantId = assistant.id;
-
- return NextResponse.json({
- message: 'Assistant created successfully',
- assistantId: assistantId
- });
- } catch (error) {
- if (error instanceof Error) {
- console.error('Error:', error);
- return NextResponse.json({ error: error.message });
- } else {
- console.error('Unknown error:', error);
- return NextResponse.json({ error: 'An unknown error occurred' });
- }
- }
- } else {
- return NextResponse.json({ error: 'Method Not Allowed' });
- }
- };
\ No newline at end of file
diff --git a/app/api/createThread/route.ts b/app/api/createThread/route.ts
deleted file mode 100644
index 2bcd10a..0000000
--- a/app/api/createThread/route.ts
+++ /dev/null
@@ -1,51 +0,0 @@
-/**
- * API Route - Create Chat Thread
- *
- * This API route facilitates the creation of a new chat thread using the OpenAI API.
- * It processes POST requests that contain an initial input message. This route is primarily
- * used to start a new conversation thread, initializing it with a user-specified message.
- * The newly created thread ID is then returned, enabling further interaction within that thread.
- *
- * Path: /api/createThread
- */
-
-import { NextRequest, NextResponse } from 'next/server';
-import OpenAI from "openai";
-
-const openai = new OpenAI({
- apiKey: process.env.OPENAI_API_KEY,
-});
-
-export async function POST(req: NextRequest) {
- console.log('CREATE THREAD started');
- if (req.method === 'POST') {
- try {
- const data = await req.json();
- const inputMessage = data.inputmessage;
-
- // Überprüfen, ob die Eingabemessage vorhanden und ein String ist
- if (!inputMessage || typeof inputMessage !== 'string') {
- throw new Error('inputmessage is missing or not a string');
- }
-
- // Thread erstellen
- const thread = await openai.beta.threads.create({
- messages: [
- {
- role: "user",
- content: inputMessage,
- },
- ],
- });
- const threadId = thread.id;
- console.log('Thread ID:', threadId);
-
- return NextResponse.json({ threadId });
- } catch (error) {
- console.error('Error:', error);
- return NextResponse.json({ error: (error as Error).message });
- }
- } else {
- return NextResponse.json({ error: 'Method not allowed' });
- }
-}
\ No newline at end of file
diff --git a/app/api/deleteFile/route.ts b/app/api/deleteFile/route.ts
deleted file mode 100644
index f108a20..0000000
--- a/app/api/deleteFile/route.ts
+++ /dev/null
@@ -1,32 +0,0 @@
-// my-app/pages/api/deleteFile.ts
-import { NextRequest, NextResponse } from 'next/server';
-import OpenAI from "openai";
-
-// Initialize the OpenAI client with the API key
-const openai = new OpenAI({
- apiKey: process.env.OPENAI_API_KEY,
-});
-
-export async function DELETE(req: NextRequest) {
- const { fileId } = await req.json();
-
- // Check if a file ID was provided in the request
- if (!fileId) {
- console.log('No file ID found in the request');
- return NextResponse.json({ success: false }, { status: 400 });
- }
-
- try {
- // Deleting the file from OpenAI
- console.log(`Starting file deletion from OpenAI, File ID: ${fileId}`);
- const deletionStatus = await openai.files.del(fileId);
- console.log(`File deleted, ID: ${deletionStatus.id}, Status: ${deletionStatus.deleted}`);
-
- // Respond with the deletion status
- return NextResponse.json({ success: deletionStatus.deleted, fileId: deletionStatus.id });
- } catch (error) {
- // Log and respond to any errors during the deletion process
- console.error('Error deleting file:', error);
- return NextResponse.json({ success: false, message: 'Error deleting file' }, { status: 500 });
- }
-}
\ No newline at end of file
diff --git a/app/api/listMessages/route.ts b/app/api/listMessages/route.ts
deleted file mode 100644
index 0f21945..0000000
--- a/app/api/listMessages/route.ts
+++ /dev/null
@@ -1,63 +0,0 @@
-/**
- * API Route - List Messages in a Thread
- *
- * This API route is responsible for retrieving messages from a specific chat thread using the OpenAI API.
- * It processes POST requests that include a 'threadId' in the form data. The route fetches the messages
- * associated with the provided thread ID and returns them in a structured JSON format. It's designed to
- * facilitate the tracking and review of conversation threads created and managed via OpenAI's GPT models.
- *
- * Path: /api/listMessages
- */
-
-import { NextRequest, NextResponse } from 'next/server';
-import OpenAI from "openai";
-
-// Initialize OpenAI client using the API key from environment variables
-const openai = new OpenAI({
- apiKey: process.env.OPENAI_API_KEY,
-});
-
-// Define an asynchronous POST function to handle incoming requests
-export async function POST(req: NextRequest) {
- try {
- // Extract JSON data from the request
- const data = await req.json();
-
- // Retrieve 'threadId' from JSON data
- const threadId = data.threadId;
-
- // Log the received thread ID for debugging
- console.log(`Received request with threadId: ${threadId}`);
-
- // Retrieve messages for the given thread ID using the OpenAI API
- const messages = await openai.beta.threads.messages.list(threadId);
-
- messages.data.forEach((message, index) => {
- console.log(`Message ${index + 1} content:`, message.content);
- });
- // Log the count of retrieved messages for debugging
- console.log(`Retrieved ${messages.data.length} messages`);
-
-
- // Find the first assistant message
- const assistantMessage = messages.data.find(message => message.role === 'assistant');
-
- if (!assistantMessage) {
- return NextResponse.json({ error: "No assistant message found" });
- }
-
- const assistantMessageContent = assistantMessage.content.at(0);
- if (!assistantMessageContent) {
- return NextResponse.json({ error: "No assistant message content found" });
- }
-
- if (assistantMessageContent.type !== "text") {
- return NextResponse.json({ error: "Assistant message is not text, only text supported in this demo" });
- }
- // Return the retrieved messages as a JSON response
- return NextResponse.json({ ok: true, messages: assistantMessageContent.text.value });
- } catch (error) {
- // Log any errors that occur during the process
- console.error(`Error occurred: ${error}`);
- }
-}
\ No newline at end of file
diff --git a/app/api/runAssistant/route.ts b/app/api/runAssistant/route.ts
deleted file mode 100644
index 5d1c148..0000000
--- a/app/api/runAssistant/route.ts
+++ /dev/null
@@ -1,56 +0,0 @@
-/**
- * API Route - Run Assistant
- *
- * This API route is crafted to facilitate interaction with the OpenAI API, specifically for running
- * a session with an AI assistant. The route is responsible for receiving the assistant ID and thread ID,
- * which are crucial for identifying the specific assistant and conversation thread to interact with.
- * Upon receiving these IDs, the route invokes the OpenAI API to create a new run (interaction) within
- * the specified thread and then returns the run ID for tracking and further operations.
- *
- * Path: /api/runAssistant
- */
-
-import { NextRequest, NextResponse } from 'next/server';
-import OpenAI from "openai";
-
-
-
-// Initialize the OpenAI client with the API key. The API key is essential for authenticating
-// and authorizing the requests to OpenAI's services.
-const openai = new OpenAI({
- apiKey: process.env.OPENAI_API_KEY,
-});
-
-export async function POST(req: NextRequest) {
- try {
- // Extracting the assistant ID and thread ID from the JSON payload of the request.
- // These IDs are essential for specifying which assistant and conversation thread
- // to interact with.
- const data = await req.json();
- const assistantId = data.assistantId;
- const threadId = data.threadId;
-
- // Logging the received IDs for debugging purposes. This helps in verifying that
- // the correct IDs are being processed.
- console.log(`Inside -runAssistant --> assistantId: ${assistantId}`);
- console.log(`Inside -runAssistant --> threadId: ${threadId}`);
-
- // Creating a new run (interaction) using the OpenAI API with the provided assistant and thread IDs.
- // This step is crucial for initiating the interaction with the AI assistant.
- const run = await openai.beta.threads.runs.create(threadId, {
- assistant_id: assistantId,
- });
-
- // Logging the details of the created run for debugging. This includes the run ID and any other relevant information.
- console.log(`run: ${JSON.stringify(run)}`);
-
- // Responding with the run ID in JSON format. This ID can be used for further operations
- // such as retrieving the run's output or continuing the conversation.
- return NextResponse.json({ runId: run.id });
- } catch (error) {
- // Handling and logging any errors that occur during the process. This includes errors in
- // API requests, data extraction, or any other part of the interaction flow.
- console.error(`Error in -runAssistant: ${error}`);
- return NextResponse.json({ error: 'Failed to run assistant' }, { status: 500 });
- }
-}
diff --git a/app/api/upload/route.ts b/app/api/upload/route.ts
deleted file mode 100644
index 84e0db4..0000000
--- a/app/api/upload/route.ts
+++ /dev/null
@@ -1,60 +0,0 @@
-/**
- * API Route - Upload Files
- *
- * This API route is designed for initiating a chat session within an application.
- * It handles the processing and uploading of a file necessary for starting a chat session
- * with the OpenAI API. The route manages the receipt of a file through POST request,
- * temporarily saves it, and then uploads it to OpenAI, ultimately returning the
- * file ID for use in further chat-related operations.
- *
- * Path: /api/upload
- */
-
-import { NextRequest, NextResponse } from 'next/server';
-import { writeFile } from 'fs/promises';
-import { createReadStream } from 'fs';
-import OpenAI from "openai";
-
-// Initialize the OpenAI client with the API key
-const openai = new OpenAI({
- apiKey: process.env.OPENAI_API_KEY,
-});
-
-export async function POST(request: NextRequest) {
- // Logging the start of the upload process
- console.log(`Upload API call started`);
-
- // Retrieving the file from the form data
- const data = await request.formData();
- const file: File | null = data.get('file') as unknown as File;
-
- // Check if a file was provided in the request
- if (!file) {
- console.log('No file found in the request');
- return NextResponse.json({ success: false });
- }
-
- // Convert file to buffer and write to a temporary location
- const bytes = await file.arrayBuffer();
- const buffer = Buffer.from(bytes);
- const path = `/tmp/${file.name}`;
- await writeFile(path, buffer);
- console.log(`File written to ${path}`);
-
- try {
- // Uploading the file to OpenAI
- console.log('Starting file upload to OpenAI');
- const fileForRetrieval = await openai.files.create({
- file: createReadStream(path),
- purpose: "assistants",
- });
- console.log(`File uploaded, ID: ${fileForRetrieval.id}`);
-
- // Respond with the file ID
- return NextResponse.json({ success: true, fileId: fileForRetrieval.id });
- } catch (error) {
- // Log and respond to any errors during the upload process
- console.error('Error uploading file:', error);
- return NextResponse.json({ success: false, message: 'Error uploading file' });
- }
-}
\ No newline at end of file
diff --git a/app/api/upload_gpt4v/route.ts b/app/api/upload_gpt4v/route.ts
deleted file mode 100644
index 6784f5c..0000000
--- a/app/api/upload_gpt4v/route.ts
+++ /dev/null
@@ -1,83 +0,0 @@
-/**
- * API Route - Image Processing
- *
- * This API route is designed for processing images within an application using the OpenAI API.
- * It handles the reception of an image file (in base64 format) and an optional custom prompt through a POST request.
- * The route then sends this data to OpenAI for analysis, typically involving image description or any other
- * relevant vision-based task. The response from OpenAI, containing the analysis of the image, is then returned
- * to the user. This functionality is integral for applications requiring advanced image analysis capabilities.
- *
- * Path: /api/upload_gpt4v
- */
-
-import { NextRequest, NextResponse } from 'next/server';
-import OpenAI from "openai";
-
-// Initialize the OpenAI client with the API key. This key is essential for authenticating
-// the requests with OpenAI's API services.
-const openai = new OpenAI({
- apiKey: process.env.OPENAI_API_KEY,
-});
-
-export async function POST(request: NextRequest) {
- // Logging the start of the image processing API call
- console.log('Starting the image processing API call');
-
- // Extracting the file (in base64 format) and an optional custom prompt
- // from the request body. This is essential for processing the image using OpenAI's API.
- const { file: base64Image, prompt: customPrompt } = await request.json();
-
- // Check if the image file is included in the request. If not, return an error response.
- if (!base64Image) {
- console.error('No file found in the request');
- return NextResponse.json({ success: false, message: 'No file found' });
- }
-
- // Log the receipt of the image in base64 format
- console.log('Received image in base64 format');
-
- // Utilize the provided custom prompt or a default prompt if it's not provided.
- // This prompt guides the analysis of the image by OpenAI's model.
- const promptText = customPrompt || "Analyze and describe the image in detail. Focus on visual elements like colors, object details, people's positions and expressions, and the environment. Transcribe any text as 'Content: “[Text]”', noting font attributes. Aim for a clear, thorough representation of all visual and textual aspects.";
-
- // Log the chosen prompt
- console.log(`Using prompt: ${promptText}`);
-
- // Sending the image and prompt to OpenAI for processing. This step is crucial for the image analysis.
- console.log('Sending request to OpenAI');
- try {
- const response = await openai.chat.completions.create({
- model: "gpt-4-vision-preview",
- messages: [
- {
- role: "user",
- content: [
- { type: "text", text: promptText },
- {
- type: "image_url",
- image_url: {
- url: base64Image
- }
- }
- ]
- }
- ],
- max_tokens: 200
- });
-
- // Log the response received from OpenAI, which includes the analysis of the image.
- console.log('Received response from OpenAI');
- console.log('Response:', JSON.stringify(response, null, 2)); // Log the response for debugging
-
- // Extract and log the analysis from the response
- const analysis = response?.choices[0]?.message?.content;
- console.log('Analysis:', analysis);
-
- // Return the analysis in the response
- return NextResponse.json({ success: true, analysis: analysis });
- } catch (error) {
- // Log and handle any errors encountered during the request to OpenAI
- console.error('Error sending request to OpenAI:', error);
- return NextResponse.json({ success: false, message: 'Error sending request to OpenAI' });
- }
-}
diff --git a/app/components/InputForm.tsx b/app/components/InputForm.tsx
index e9e5ca2..fc50f5b 100644
--- a/app/components/InputForm.tsx
+++ b/app/components/InputForm.tsx
@@ -1,160 +1,41 @@
-// app/components/InputForm.tsx
-
-import clsx from 'clsx';
import Textarea from 'react-textarea-autosize';
-import { SendIcon, LoadingCircle, DocumentIcon, XIcon, ImageIcon } from '../icons';
-import { useContext } from 'react';
+import { SendIcon } from '../icons';
+import { useContext, useState } from 'react';
import { ChatStateContext } from '../ChatStateContext';
-type ChatFile = {
- name: string;
- type: string;
- size: number;
-};
-
const InputForm: React.FC = () => {
- const {
- input, setInput, inputRef, formRef, disabled, chatStarted, isSending, isLoading,
- chatUploadedFiles, setChatUploadedFiles, chatFileDetails, setChatFileDetails,
- chatManager, setChatStarted, setChatMessages, setStatusMessage, setIsSending,
- setProgress, setIsLoadingFirstMessage
- } = useContext(ChatStateContext);
+ const { chatManager, chatStarted, setChatMessages } = useContext(ChatStateContext);
+ const [input, setInput] = useState('');
- const handleFormSubmit = async (e: React.FormEvent) => {
+ const handleSubmit = async (e: React.FormEvent) => {
e.preventDefault();
- if (isSending) {
- return;
- }
- const message = input;
+ if (!chatManager || !chatStarted) return;
+ const message = input.trim();
+ if (!message) return;
setInput('');
- setIsSending(true);
- if (chatManager) {
- const currentFiles = chatUploadedFiles;
- setChatUploadedFiles([]);
- setChatFileDetails([]);
- try {
- await chatManager.sendMessage(message, currentFiles, chatFileDetails);
- } catch (error) {
- console.error('Error sending message:', error);
- } finally {
- setIsSending(false);
- }
- }
+ await chatManager.sendMessage(message, setChatMessages);
};
- const handleChatFilesUpload = (event: React.ChangeEvent) => {
- if (event.target.files) {
- const newFiles = Array.from(event.target.files);
- if (chatFileDetails.length + newFiles.length > 10) {
- alert('You can only upload up to 10 files.');
- return;
- }
- const fileArray = newFiles.map((file) => ({
- name: file.name,
- type: file.type,
- size: file.size,
- }));
- setChatFileDetails([...chatFileDetails, ...fileArray]);
- setChatUploadedFiles([...chatUploadedFiles, ...newFiles]);
- }
- event.target.value = '';
- };
-
- const removeChatFile = (fileName: string) => {
- const updatedFileDetails = chatFileDetails.filter((file: ChatFile) => file.name !== fileName);
- setChatFileDetails(updatedFileDetails);
-
- const updatedUploadedFiles = chatUploadedFiles.filter((file: File) => file.name !== fileName);
- setChatUploadedFiles(updatedUploadedFiles);
- };
-
-return (
-