* updated AssistantObject
* updated FileSearchRankingOptions
* updated doc comment for ResponseFormatJsonSchema in types/chat.rs
* updates related to CreateAssistantRequest
* updated ModifyAssistantRequest
* update doc comments in types/audio.rs
* Updates for CreateChatCompletionRequest and Chats API group
* updated doc comment in CreateChatCompletionRequest
* add ChatCompletionModalities
* update CreateChatCompletionRequest with prediction field
* add ChatCompletionAudio type and related types (see the sketch after this list)
* added ChatCompletionRequestDeveloperMessage and associated types
* added user message type input_audio: ChatCompletionRequestMessageContentPartAudio
* updated ChatCompletionRequestAssistantMessage
* added ChatCompletionResponseMessageAudio
* updates for chat completion streaming response
* update CreateFineTuningJobRequest
* update for FineTuningJobEvent
* fix example compilation
* fix CreateChatCompletionRequest
* cleanup
* fix MaxResponseOutputTokens
* update model in examples/realtime
* update model
* update partially implemented
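Several of the bullets above add audio support to chat completions. For orientation, here is a hedged sketch of the underlying API payload the new types model, shown as raw JSON via `serde_json` because the exact Rust field names are not part of this excerpt; the model name and voice are illustrative only:

```rust
// Sketch of the request body the new chat-audio types (ChatCompletionModalities,
// ChatCompletionAudio, developer messages) are meant to serialize. Based on the
// public OpenAI chat completions API, not on field names from this PR.
fn main() {
    let body = serde_json::json!({
        "model": "gpt-4o-audio-preview",                // illustrative model id
        "modalities": ["text", "audio"],                // ChatCompletionModalities
        "audio": { "voice": "alloy", "format": "wav" }, // ChatCompletionAudio
        "messages": [
            // Developer role: the successor to system messages on newer models.
            { "role": "developer", "content": "You are a helpful assistant." },
            { "role": "user", "content": "Say hello." }
        ]
    });
    println!("{body}");
}
```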
- Requests (except SSE streaming), including form submissions, are retried with exponential backoff when [rate limited](https://platform.openai.com/docs/guides/rate-limits).
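As an illustration of that retry behavior, here is a minimal, self-contained sketch of exponential backoff around a rate-limited call; `send` and its fake 429 responses are hypothetical stand-ins for the client's real HTTP layer:

```rust
use std::thread::sleep;
use std::time::Duration;

// Hypothetical request function standing in for the real HTTP layer:
// pretend the first two attempts are rate limited (HTTP 429).
fn send(attempt: u32) -> Result<String, u16> {
    if attempt < 2 { Err(429) } else { Ok("response body".to_string()) }
}

fn send_with_backoff(max_retries: u32) -> Result<String, u16> {
    let mut delay = Duration::from_millis(500);
    for attempt in 0..=max_retries {
        match send(attempt) {
            // Only HTTP 429 (rate limited) triggers a retry here.
            Err(429) if attempt < max_retries => {
                sleep(delay);
                delay *= 2; // exponential backoff: 0.5s, 1s, 2s, ...
            }
            other => return other,
        }
    }
    unreachable!("the final iteration always returns");
}

fn main() {
    println!("{:?}", send_with_backoff(5));
}
```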
/// Set of 16 key-value pairs that can be attached to a vector store. This can be useful for storing additional information about the vector store in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.

/// The description of the assistant. The maximum length is 512 characters.
pub description: Option<String>,

/// ID of the model to use. You can use the [List models](https://platform.openai.com/docs/api-reference/models/list) API to see all of your available models, or see our [Model overview](https://platform.openai.com/docs/models) for descriptions of them.
pub model: String,

/// The system instructions that the assistant uses. The maximum length is 256,000 characters.
pub instructions: Option<String>,

/// A list of tools enabled on the assistant. There can be a maximum of 128 tools per assistant. Tools can be of types `code_interpreter`, `file_search`, or `function`.
#[serde(default)]
pub tools: Vec<AssistantTools>,

/// A set of resources that are used by the assistant's tools. The resources are specific to the type of tool. For example, the `code_interpreter` tool requires a list of file IDs, while the `file_search` tool requires a list of vector store IDs.
pub tool_resources: Option<AssistantToolResources>,

/// Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.
pub metadata: Option<HashMap<String, String>>,

/// What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
pub temperature: Option<f32>,

/// An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
///
/// We generally recommend altering this or temperature but not both.
pub top_p: Option<f32>,
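A minimal usage sketch of the fields shown above, assuming the request struct derives `Default` (the crate also generates builder types, which callers would normally prefer); the model id and strings are illustrative:

```rust
// Hedged sketch: fills only the fields shown above and defaults the rest.
let request = CreateAssistantRequest {
    model: "gpt-4o".to_string(),                      // illustrative model id
    description: Some("Answers questions about project docs.".to_string()),
    instructions: Some("You are a concise documentation assistant.".to_string()),
    temperature: Some(0.2), // lower temperature => more deterministic output
    ..Default::default()    // tools, tool_resources, metadata, top_p, ...
};
```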
@@ -156,15 +151,17 @@ pub enum FileSearchRanker {
    Default2024_08_21,
}

/// The ranking options for the file search. If not specified, the file search tool will use the `auto` ranker and a score_threshold of 0.
///
/// See the [file search tool documentation](https://platform.openai.com/docs/assistants/tools/file-search#customizing-file-search-settings) for more information.
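A hedged sketch of how the documented defaults map onto the updated type; the field names `ranker` and `score_threshold` are assumptions inferred from the doc comment above, so check the crate for the real definition:

```rust
// Hypothetical construction inferred from the doc comments above.
let options = FileSearchRankingOptions {
    ranker: Some(FileSearchRanker::Default2024_08_21), // or omit for `auto`
    score_threshold: 0.0, // 0.0 mirrors the documented default cutoff
};
```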
/// A set of resources that are used by the assistant's tools. The resources are specific to the type of tool. For example, the `code_interpreter` tool requires a list of file IDs, while the `file_search` tool requires a list of vector store IDs.

/// Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.

/// What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
/// ID of the model to use. Only `whisper-1` (which is powered by our open source Whisper V2 model) is currently available.
pub model: String,

/// An optional text to guide the model's style or continue a previous audio segment. The [prompt](https://platform.openai.com/docs/guides/speech-to-text#prompting) should match the audio language.
pub prompt: Option<String>,

/// The format of the transcript output, in one of these options: json, text, srt, verbose_json, or vtt.
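A hedged usage sketch of the transcription request fields shown above, assuming the struct derives `Default` and carries a `file` field for the audio input (neither is shown in this excerpt):

```rust
// Sketch only: `Default` and the unnamed remaining fields (e.g. the audio
// `file` input) are assumptions about the full struct.
let request = CreateTranscriptionRequest {
    model: "whisper-1".to_string(),
    // Per the doc comment, the prompt should match the audio language.
    prompt: Some("A discussion about the Rust async ecosystem.".to_string()),
    ..Default::default()
};
```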
/// ID of the model to use. Only `whisper-1` (which is powered by our open source Whisper V2 model) is currently available.
pub model: String,

/// An optional text to guide the model's style or continue a previous audio segment. The [prompt](https://platform.openai.com/docs/guides/speech-to-text#prompting) should be in English.
pub prompt: Option<String>,

/// The format of the transcript output, in one of these options: json, text, srt, verbose_json, or vtt.