# Connect to OpenAI® Chat Completion API from MATLAB®
# Creation
## Syntax
`chat = openAIChat(___,Name=Value)` specifies additional options using one or more name\-value arguments.
# Input Arguments
### `systemPrompt` — System prompt

character vector | string scalar

Specify the system prompt and set the `SystemPrompt` property. The system prompt is a natural\-language description that provides the framework in which a large language model generates its responses. The system prompt can include instructions about tone, communication style, language, etc.

**Example**: "You are a helpful assistant who provides answers to user queries in iambic pentameter."
## Name\-Value Arguments
### `APIKey` — OpenAI API key

character vector | string scalar

OpenAI API key to access OpenAI APIs such as ChatGPT.

Instead of using the `APIKey` name\-value argument, you can also set the environment variable `OPENAI_API_KEY`. For more information, see [OpenAI API](../OpenAI.md).
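As a sketch of the environment\-variable approach (the key value below is a placeholder, not a real key):

```matlab
% Store the key in the environment once, then construct chats without
% passing APIKey explicitly. "<your key>" is a placeholder.
setenv("OPENAI_API_KEY","<your key>")
chat = openAIChat("You are a helpful assistant.");
```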
### `Tools` — OpenAI functions to use during output generation

`openAIFunction` object | array of `openAIFunction` objects

Custom functions used by the model to collect or generate additional data.

For an example, see [Analyze Scientific Papers Using ChatGPT Function Calls](../examples/AnalyzeScientificPapersUsingFunctionCalls.md).
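A minimal sketch of passing a tool, assuming the `openAIFunction` and `addParameter` interface from this package; the function name and parameter here are hypothetical:

```matlab
% Hypothetical tool: let the model request weather data for a city.
f = openAIFunction("getCurrentWeather","Get the current weather for a city");
f = addParameter(f,"city",type="string",description="Name of the city");
chat = openAIChat("You are a weather assistant.",Tools=f);
```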
# Properties Settable at Construction

Optionally specify these properties at construction using name\-value arguments. Specify `PropertyName1=PropertyValue1,...,PropertyNameN=PropertyValueN`, where `PropertyName` is the property name and `PropertyValue` is the corresponding value.
### `ModelName` — Model name

Name of the OpenAI model to use for text or image generation.

For a list of currently supported models, see [OpenAI API](../OpenAI.md).
### `Temperature` — Temperature

`1` (default) | numeric scalar between `0` and `2`

Temperature value for controlling the randomness of the output. Higher temperature increases the randomness of the output. Setting the temperature to `0` results in fully deterministic output.
### `TopP` — Top probability mass

`1` (default) | numeric scalar between `0` and `1`

Top probability mass for controlling the diversity of the generated output using top\-p sampling. Higher top probability mass corresponds to higher diversity.
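For instance, a sketch contrasting the two sampling controls (assumes a valid API key is configured; OpenAI generally recommends adjusting one of the two, not both):

```matlab
% Deterministic output: temperature 0.
chatExact = openAIChat("You are a concise assistant.",Temperature=0);
% More focused sampling: restrict generation to the top 10% probability mass.
chatFocused = openAIChat("You are a concise assistant.",TopP=0.1);
```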
### `StopSequences` — Stop sequences

`[]` (default) | string array with between `0` and `4` elements

Sequences that stop generation of tokens.

**Example:** `["The end.","And that is all she wrote."]`
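As a sketch (assumes a valid API key is configured):

```matlab
% Generation halts as soon as the model emits a listed stop sequence.
chat = openAIChat(StopSequences=["The end."]);
```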
### `PresencePenalty` — Presence penalty

`0` (default) | numeric scalar between `-2` and `2`

Penalty value for using a token that has already been used at least once in the generated output. Higher values reduce the repetition of tokens. Negative values increase the repetition of tokens.

The presence penalty is independent of the number of instances of a token, so long as it has been used at least once. To increase the penalty for every additional time a token is generated, use the `FrequencyPenalty` name\-value argument.
### `FrequencyPenalty` — Frequency penalty

`0` (default) | numeric scalar between `-2` and `2`

Penalty value for repeatedly using the same token in the generated output. Higher values reduce the repetition of tokens. Negative values increase the repetition of tokens.

The frequency penalty increases with every instance of a token in the generated output. To use a constant penalty for a repeated token, independent of the number of instances that token is generated, use the `PresencePenalty` name\-value argument.
### `TimeOut` — Connection timeout in seconds

`10` (default) | nonnegative numeric scalar

After construction, this property is read\-only.

If the OpenAI server does not respond within the timeout, then the function throws an error.
### `StreamFun` — Custom streaming function

function handle

Specify a custom streaming function to process the generated output token by token as it is being generated, rather than having to wait for the end of the generation. For example, you can use this function to print the output as it is generated.
For an example, see [Process Generated Text in Real Time by Using ChatGPT™ in Streaming Mode](../../examples/ProcessGeneratedTextinRealTimebyUsingChatGPTinStreamingMode.md).
**Example:** `@(token) fprintf("%s",token)`
### `ResponseFormat` — Response format

`"text"` (default) | `"json"`

After construction, this property is read\-only.

Format of generated output.

If you set the response format to `"text"`, then the generated output is a string.

If you set the response format to `"json"`, then the generated output is a string containing JSON\-encoded data.
To configure the format of the generated JSON output, describe the format using natural language and provide it to the model either in the system prompt or as a user message. The prompt or message describing the format must contain the word `"json"` or `"JSON"`.

For an example, see [Analyze Sentiment in Text Using ChatGPT in JSON Mode](../../examples/AnalyzeSentimentinTextUsingChatGPTinJSONMode.md).
The JSON response format is not supported for these models:

- `ModelName="gpt-4"`
- `ModelName="gpt-4-0613"`
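A sketch of JSON mode, assuming a valid API key is configured; the requested fields are illustrative:

```matlab
% The prompt must mention "JSON" when ResponseFormat="json".
chat = openAIChat("Reply in JSON with fields ""city"" and ""country"".", ...
    ResponseFormat="json");
```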
# Other Properties
### `SystemPrompt` — System prompt

character vector | string scalar

This property is read\-only.
241
186
242
187
243
-
Format of generated output.
244
-
245
-
246
-
If the response format is `"text"`, then the generated output is a string.
247
-
248
-
249
-
If the response format is `"json"`, then the generated output is a string containing JSON encoded data.
250
-
251
-
252
-
To configure the format of the generated JSON file, describe the format using natural language and provide it to the model either in the system prompt or as a user message. The prompt or message describing the format must contain the word `"json"` or `"JSON"`.
253
-
254
-
255
-
For an example, see [Analyze Sentiment in Text Using ChatGPT in JSON Mode](../../examples/AnalyzeSentimentinTextUsingChatGPTinJSONMode.md).
256
-
188
+
The system prompt is a natural\-language description that provides the framework in which a large language model generates its responses. The system prompt can include instructions about tone, communications style, language, etc.
**Example**: "You are a helpful assistant who provides answers to user queries in iambic pentameter."
### `FunctionNames` — Names of OpenAI functions to use during output generation

string array

Names of the custom functions specified in the `Tools` name\-value argument.
# Object Functions

[`generate`](generate.md) — Generate output from large language models
# Examples
## Create OpenAI Chat
279
209
```matlab
280
-
modelName = "gpt-3.5-turbo";
210
+
loadenv(".env")
211
+
modelName = "gpt-4o-mini";
281
212
chat = openAIChat("You are a helpful assistant awaiting further instructions.",ModelName=modelName)
282
213
```
```matlabTextOutput
chat = 
  openAIChat with properties:

           ModelName: "gpt-4o-mini"
         Temperature: 1
                TopP: 1
       StopSequences: [0x0 string]
             TimeOut: 10
        SystemPrompt: {[1x1 struct]}
      ResponseFormat: "text"
     PresencePenalty: 0
    FrequencyPenalty: 0
       FunctionNames: []

```
## Generate and Stream Text
```matlab
loadenv(".env")
sf = @(x) fprintf("%s",x);
chat = openAIChat(StreamFun=sf);
generate(chat,"Why is a raven like a writing desk?",MaxNumTokens=50)
```

```matlabTextOutput
The phrase "Why is a raven like a writing desk?" comes from Lewis Carroll's "Alice's Adventures in Wonderland." Initially posed by the Mad Hatter during the tea party scene, the question is often interpreted as nonsense, in line with the book
ans = "The phrase "Why is a raven like a writing desk?" comes from Lewis Carroll's "Alice's Adventures in Wonderland." Initially posed by the Mad Hatter during the tea party scene, the question is often interpreted as nonsense, in line with the book"
```