
Commit 05f9af9

partial spec sync (#312)
* updated AssistantObject
* updated FileSearchRankingOptions
* updated doc comment for ResponseFormatJsonSchema in types/chat.rs
* updates related to CreateAssistantRequest
* updated ModifyAssistantRequest
* update doc comments in types/audio.rs
* Updates for CreateChatCompletionRequest and Chats API group
* updated doc comment in CreateChatCompletionRequest
* add ChatCompletionModalities
* update CreateChatCompletionRequest with prediction field
* add ChatCompletionAudio type and related types
* added ChatCompletionRequestDeveloperMessage and associated types
* added user message type input_audio: ChatCompletionRequestMessageContentPartAudio
* updated ChatCompletionRequestAssistantMessage
* added ChatCompletionResponseMessageAudio
* updates for chat completion streaming response
* update CreateFineTuningJobRequest
* update for FineTuningJobEvent
* fix example compilation
* fix CreateChatCompletionRequest
* cleanup
* fix MaxResponseOutputTokens
* update model in examples/realtime
* update model
* update partially implemented
1 parent 13b8fc8 · commit 05f9af9

File tree

12 files changed (+371 −58 lines)


async-openai/README.md

Lines changed: 2 additions & 2 deletions
```diff
@@ -34,8 +34,8 @@
 - [x] Images
 - [x] Models
 - [x] Moderations
-- [x] Organizations | Administration
-- [x] Realtime API types (Beta)
+- [x] Organizations | Administration (partially implemented)
+- [x] Realtime (Beta) (partially implemented)
 - [x] Uploads
 - SSE streaming on available APIs
 - Requests (except SSE streaming) including form submissions are retried with exponential backoff when [rate limited](https://platform.openai.com/docs/guides/rate-limits).
```

async-openai/src/chat.rs

Lines changed: 15 additions & 1 deletion
```diff
@@ -19,7 +19,21 @@ impl<'c, C: Config> Chat<'c, C> {
         Self { client }
     }
 
-    /// Creates a model response for the given chat conversation.
+    /// Creates a model response for the given chat conversation. Learn more in
+    /// the
+    ///
+    /// [text generation](https://platform.openai.com/docs/guides/text-generation),
+    /// [vision](https://platform.openai.com/docs/guides/vision),
+    ///
+    /// and [audio](https://platform.openai.com/docs/guides/audio) guides.
+    ///
+    ///
+    /// Parameter support can differ depending on the model used to generate the
+    /// response, particularly for newer reasoning models. Parameters that are
+    /// only supported for reasoning models are noted below. For the current state
+    /// of unsupported parameters in reasoning models,
+    ///
+    /// [refer to the reasoning guide](https://platform.openai.com/docs/guides/reasoning).
     pub async fn create(
         &self,
         request: CreateChatCompletionRequest,
```
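
For orientation, here is a minimal sketch of calling the `create` method documented above through the high-level client. This snippet is not part of the commit: the model id and prompt are placeholders, and it assumes a `tokio` runtime and an `OPENAI_API_KEY` in the environment.

```rust
use async_openai::{
    types::{ChatCompletionRequestUserMessageArgs, CreateChatCompletionRequestArgs},
    Client,
};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Client::new() picks up OPENAI_API_KEY from the environment.
    let client = Client::new();

    // Build a non-streaming chat completion request; "gpt-4o-mini" is a placeholder model id.
    let request = CreateChatCompletionRequestArgs::default()
        .model("gpt-4o-mini")
        .messages([ChatCompletionRequestUserMessageArgs::default()
            .content("Summarize the text generation guide in one sentence.")
            .build()?
            .into()])
        .build()?;

    // `Chat::create` sends the request and returns the model response.
    let response = client.chat().create(request).await?;
    if let Some(choice) = response.choices.first() {
        println!("{:?}", choice.message.content);
    }
    Ok(())
}
```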

async-openai/src/types/assistant.rs

Lines changed: 16 additions & 18 deletions
```diff
@@ -52,7 +52,7 @@ pub struct AssistantVectorStore {
     pub chunking_strategy: Option<AssistantVectorStoreChunkingStrategy>,
 
     /// Set of 16 key-value pairs that can be attached to a vector store. This can be useful for storing additional information about the vector store in a structured format. Keys can be a maximum of 64 characters long and values can be a maxium of 512 characters long.
-    pub metadata: Option<HashMap<String, serde_json::Value>>,
+    pub metadata: Option<HashMap<String, String>>,
 }
 
 #[derive(Clone, Serialize, Debug, Deserialize, PartialEq, Default)]
@@ -63,10 +63,7 @@ pub enum AssistantVectorStoreChunkingStrategy {
     #[serde(rename = "auto")]
     Auto,
     #[serde(rename = "static")]
-    Static {
-        #[serde(rename = "static")]
-        config: StaticChunkingStrategy,
-    },
+    Static { r#static: StaticChunkingStrategy },
 }
 
 /// Static Chunking Strategy
@@ -93,22 +90,20 @@ pub struct AssistantObject {
     pub name: Option<String>,
     /// The description of the assistant. The maximum length is 512 characters.
     pub description: Option<String>,
+    /// ID of the model to use. You can use the [List models](https://platform.openai.com/docs/api-reference/models/list) API to see all of your available models, or see our [Model overview](https://platform.openai.com/docs/models) for descriptions of them.
     pub model: String,
     /// The system instructions that the assistant uses. The maximum length is 256,000 characters.
     pub instructions: Option<String>,
     /// A list of tool enabled on the assistant. There can be a maximum of 128 tools per assistant. Tools can be of types `code_interpreter`, `file_search`, or `function`.
+    #[serde(default)]
     pub tools: Vec<AssistantTools>,
-
     /// A set of resources that are used by the assistant's tools. The resources are specific to the type of tool. For example, the `code_interpreter` tool requires a list of file IDs, while the `file_search` tool requires a list of vector store IDs.
     pub tool_resources: Option<AssistantToolResources>,
-    /// Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maxium of 512 characters long.
-    pub metadata: Option<HashMap<String, serde_json::Value>>,
-
+    /// Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.
+    pub metadata: Option<HashMap<String, String>>,
     /// What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
     pub temperature: Option<f32>,
-
     /// An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
-    ///
     /// We generally recommend altering this or temperature but not both.
     pub top_p: Option<f32>,
 
@@ -156,15 +151,17 @@ pub enum FileSearchRanker {
     Default2024_08_21,
 }
 
-/// The ranking options for the file search.
+/// The ranking options for the file search. If not specified, the file search tool will use the `auto` ranker and a score_threshold of 0.
 ///
-/// See the [file search tool documentation](/docs/assistants/tools/file-search/customizing-file-search-settings) for more information.
-#[derive(Clone, Serialize, Debug, Deserialize, PartialEq)]
+/// See the [file search tool documentation](https://platform.openai.com/docs/assistants/tools/file-search#customizing-file-search-settings) for more information.
+#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
 pub struct FileSearchRankingOptions {
     /// The ranker to use for the file search. If not specified will use the `auto` ranker.
+    #[serde(skip_serializing_if = "Option::is_none")]
     pub ranker: Option<FileSearchRanker>,
+
     /// The score threshold for the file search. All values must be a floating point number between 0 and 1.
-    pub score_threshold: Option<f32>,
+    pub score_threshold: f32,
 }
 
 /// Function tool
@@ -208,12 +205,13 @@ pub struct CreateAssistantRequest {
     #[serde(skip_serializing_if = "Option::is_none")]
     pub tools: Option<Vec<AssistantTools>>,
 
-    /// A set of resources that are used by the assistant's tools. The resources are specific to the type of tool. For example, the `code_interpreter` tool requires a list of file IDs, while the `file_search` tool requires a list of vector store IDs.
+    /// A set of resources that are used by the assistant's tools. The resources are specific to the type of tool. For example, the `code_interpreter` tool requires a list of file IDs, while the `file_search` tool requires a list of vector store IDs. 
     #[serde(skip_serializing_if = "Option::is_none")]
     pub tool_resources: Option<CreateAssistantToolResources>,
 
+    /// Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.
     #[serde(skip_serializing_if = "Option::is_none")]
-    pub metadata: Option<HashMap<String, serde_json::Value>>,
+    pub metadata: Option<HashMap<String, String>>,
 
     /// What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
     #[serde(skip_serializing_if = "Option::is_none")]
@@ -261,7 +259,7 @@ pub struct ModifyAssistantRequest {
     pub tool_resources: Option<AssistantToolResources>,
     /// Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maxium of 512 characters long.
     #[serde(skip_serializing_if = "Option::is_none")]
-    pub metadata: Option<HashMap<String, serde_json::Value>>,
+    pub metadata: Option<HashMap<String, String>>,
 
     /// What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
     #[serde(skip_serializing_if = "Option::is_none")]
```
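
To make the effect of these type changes concrete for callers, here is a rough, untested sketch that is not part of the commit: the assistant name, metadata keys, and model id are invented, and it assumes a `tokio` runtime, that `CreateAssistantRequestArgs` exposes a `metadata` setter, and that `FileSearchRanker` has an `Auto` variant. It shows `metadata` as plain string pairs and `score_threshold` as a required `f32`.

```rust
use std::collections::HashMap;

use async_openai::{
    types::{CreateAssistantRequestArgs, FileSearchRanker, FileSearchRankingOptions},
    Client,
};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::new();

    // `metadata` is now HashMap<String, String> rather than HashMap<String, serde_json::Value>.
    let metadata: HashMap<String, String> =
        HashMap::from([("team".to_string(), "docs".to_string())]);

    let request = CreateAssistantRequestArgs::default()
        .model("gpt-4o") // placeholder model id
        .name("spec-sync-demo")
        .instructions("Answer questions about this repository.")
        .metadata(metadata)
        .build()?;

    let assistant = client.assistants().create(request).await?;
    println!("created assistant {}", assistant.id);

    // `score_threshold` is no longer optional; `ranker` still is.
    let _ranking = FileSearchRankingOptions {
        ranker: Some(FileSearchRanker::Auto),
        score_threshold: 0.5,
    };

    Ok(())
}
```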

async-openai/src/types/audio.rs

Lines changed: 4 additions & 3 deletions
```diff
@@ -78,7 +78,7 @@ pub struct CreateTranscriptionRequest {
     /// ID of the model to use. Only `whisper-1` (which is powered by our open source Whisper V2 model) is currently available.
     pub model: String,
 
-    /// An optional text to guide the model's style or continue a previous audio segment. The [prompt](https://platform.openai.com/docs/guides/speech-to-text/prompting) should match the audio language.
+    /// An optional text to guide the model's style or continue a previous audio segment. The [prompt](https://platform.openai.com/docs/guides/speech-to-text#prompting) should match the audio language.
     pub prompt: Option<String>,
 
     /// The format of the transcript output, in one of these options: json, text, srt, verbose_json, or vtt.
@@ -204,13 +204,14 @@ pub struct CreateSpeechRequest {
 #[builder(derive(Debug))]
 #[builder(build_fn(error = "OpenAIError"))]
 pub struct CreateTranslationRequest {
-    /// The audio file to transcribe, in one of these formats: mp3, mp4, mpeg, mpga, m4a, wav, or webm.
+    /// The audio file object (not file name) translate, in one of these
+    ///formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, or webm.
     pub file: AudioInput,
 
     /// ID of the model to use. Only `whisper-1` (which is powered by our open source Whisper V2 model) is currently available.
     pub model: String,
 
-    /// An optional text to guide the model's style or continue a previous audio segment. The [prompt](https://platform.openai.com/docs/guides/speech-to-text/prompting) should be in English.
+    /// An optional text to guide the model's style or continue a previous audio segment. The [prompt](https://platform.openai.com/docs/guides/speech-to-text#prompting) should be in English.
     pub prompt: Option<String>,
 
     /// The format of the transcript output, in one of these options: json, text, srt, verbose_json, or vtt.
```
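
As a usage sketch for the `CreateTranslationRequest` documented above (not part of the commit): the audio path and prompt are placeholders, and it assumes a `tokio` runtime and a local file in one of the listed formats.

```rust
use async_openai::{types::CreateTranslationRequestArgs, Client};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::new();

    // "./audio/speech.m4a" is a placeholder path; any format from the doc comment
    // (flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, webm) should work.
    let request = CreateTranslationRequestArgs::default()
        .file("./audio/speech.m4a")
        .model("whisper-1")
        .prompt("The speaker is describing a Rust crate.") // optional, English
        .build()?;

    // Translates the audio into English text.
    let response = client.audio().translate(request).await?;
    println!("{}", response.text);
    Ok(())
}
```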
