Commit 3eb22c5

Add newly created openAIChat.md to documentation.
1 parent ada0ff8 commit 3eb22c5

1 file changed: +82 −127 lines changed

doc/functions/openAIChat.md

Lines changed: 82 additions & 127 deletions
@@ -1,7 +1,7 @@
 
-# openAIChat
+# openAIChat
 
-Connect to OpenAI Chat Completion API
+Connect to OpenAI® Chat Completion API from MATLAB®
 
 # Creation
 ## Syntax
@@ -36,29 +36,43 @@ To connect to the OpenAI API, you need a valid API key. For information on how t
 
 `chat = openAIChat(___,Name=Value)` specifies additional options using one or more name\-value arguments.
 
-## Input Arguments
-### `systemPrompt` \- System prompt
+# Input Arguments
+### `systemPrompt` — System prompt
 
 character vector | string scalar
 
 
-The system prompt is a natural\-language description that provides the framework in which a large language model generates its responses. The system prompt can include instructions about tone, communications style, language, etc.
+Specify the system prompt and set the `SystemPrompt` property. The system prompt is a natural\-language description that provides the framework in which a large language model generates its responses. The system prompt can include instructions about tone, communications style, language, etc.
 
 
 **Example**: "You are a helpful assistant who provides answers to user queries in iambic pentameter."
 
 ## Name\-Value Arguments
-### `APIKey` \- OpenAI API key
+### `APIKey` — OpenAI API key
 
 character vector | string scalar
 
 
 OpenAI API key to access OpenAI APIs such as ChatGPT.
 
 
-Instead of using the `APIKey` name\-value argument, you can also set the environment variable OPEN\_API\_KEY. For more information, see [OpenAI API](../OpenAI.md).
+Instead of using the `APIKey` name\-value argument, you can also set the environment variable OPENAI\_API\_KEY. For more information, see [OpenAI API](../OpenAI.md).
+
+### `Tools` — OpenAI functions to use during output generation
+
+`openAIFunction` object | array of `openAIFunction` objects
+
+
+Custom functions used by the model to collect or generate additional data.
+
 
-### `ModelName` \- Model name
+For an example, see [Analyze Scientific Papers Using ChatGPT Function Calls](../../examples/AnalyzeScientificPapersUsingFunctionCalls.md).
+
+# Properties Settable at Construction
+
+Optionally specify these properties at construction using name\-value arguments. Specify `PropertyName1=PropertyValue1,...,PropertyNameN=PropertyValueN`, where `PropertyName` is the property name and `PropertyValue` is the corresponding value.
+
+### `ModelName` — Model name
 
 `"gpt-4o-mini"` (default) | `"gpt-4"` | `"gpt-3.5-turbo"` | `"dall-e-2"` | ...
 
@@ -68,38 +82,31 @@ Name of the OpenAI model to use for text or image generation.
 
 For a list of currently supported models, see [OpenAI API](../OpenAI.md).
 
-### `Temperature` \- Temperature
+### `Temperature` — Temperature
 
 `1` (default) | numeric scalar between `0` and `2`
 
 
 Temperature value for controlling the randomness of the output. Higher temperature increases the randomness of the output. Setting the temperature to `0` results in fully deterministic output.
 
-### `TopP` \- Top probability mass
+### `TopP` — Top probability mass
 
 `1` (default) | numeric scalar between `0` and `1`
 
 
 Top probability mass for controlling the diversity of the generated output. Higher top probability mass corresponds to higher diversity.
 
-### `Tools` \- OpenAI functions to use during output generation
-
-`openAIFunction` object | array of `openAIFunction` objects
-
-
-Custom functions used by the model to process its input and output.
+### `StopSequences` — Stop sequences
 
-### `StopSequences` \- Stop sequences
-
-`""` (default) | string array with between `1` and `4` elements
+`[]` (default) | string array with between `0` and `4` elements
 
 
 Sequences that stop generation of tokens.
 
 
 **Example:** `["The end.","And that is all she wrote."]`
 
-### `PresencePenalty` \- Presence penalty
+### `PresencePenalty` — Presence penalty
 
 `0` (default) | numeric scalar between `-2` and `2`
 
@@ -109,158 +116,81 @@ Penalty value for using a token that has already been used at least once in the
 
 The presence penalty is independent of the number of incidents of a token, so long as it has been used at least once. To increase the penalty for every additional time a token is generated, use the `FrequencyPenalty` name\-value argument.
 
-### `FrequencyPenalty` \- Frequency penalty
+### `FrequencyPenalty` — Frequency penalty
 
 `0` (default) | numeric scalar between `-2` and `2`
 
 
 Penalty value for repeatedly using the same token in the generated output. Higher values reduce the repetition of tokens. Negative values increase the repetition of tokens.
 
 
-The frequence penalty increases with every instance of a token in the generated output. To use a constant penalty for a repeated token, independent of the number of instances that token is generated, use the `PresencePenalty` name\-value argument.
+The frequency penalty increases with every instance of a token in the generated output. To use a constant penalty for a repeated token, independent of the number of instances that token is generated, use the `PresencePenalty` name\-value argument.
 
-### `TimeOut` \- Connection timeout in seconds
+### `TimeOut` — Connection timeout in seconds
 
 `10` (default) | nonnegative numeric scalar
 
 
+After construction, this property is read\-only.
+
+
 If the OpenAI server does not respond within the timeout, then the function throws an error.
 
-### `StreamFun` \- Custom streaming function
+### `StreamFun` — Custom streaming function
 
 function handle
 
 
 Specify a custom streaming function to process the generated output token by token as it is being generated, rather than having to wait for the end of the generation. For example, you can use this function to print the output as it is generated.
 
 
-**Example:** `@(token) fprint("%s",token)`
-
-### `ResponseFormat` \- Response format
-
-`"text"` (default) | `"json"`
-
-
-Format of generated output.
-
-
-If you set the response format to `"text"`, then the generated output is a string.
-
-
-If you set the response format to `"json"`, then the generated output is a JSON (\*.json) file. This option is not supported for these models:
-
-- `ModelName="gpt-4"`
-- `ModelName="gpt-4-0613"`
-
-To configure the format of the generated JSON file, describe the format using natural language and provide it to the model either in the system prompt or as a user message. For an example, see [Analyze Sentiment in Text Using ChatGPT in JSON Mode](../../examples/AnalyzeSentimentinTextUsingChatGPTinJSONMode.md).
-
-# Properties
-### `SystemPrompt` \- System prompt
-
-character vector | string scalar
-
-
-This property is read\-only.
-
-
-The system prompt is a natural\-language description that provides the framework in which a large language model generates its responses. The system prompt can include instructions about tone, communications style, language, etc.
-
-
-**Example**: "You are a helpful assistant who provides answers to user queries in iambic pentameter."
-
-### `ModelName` \- Model name
-
-`"gpt-4o-mini"` (default) | `"gpt-4"` | `"gpt-3.5-turbo"` | `"dall-e-2"` | ...
-
-
-Name of the OpenAI model to use for text or image generation.
-
-
-For a list of currently supported models, see [OpenAI API](../OpenAI.md).
-
-### `Temperature` \- Temperature
-
-`1` (default) | numeric scalar between `0` and `2`
-
-
-Temperature value for controlling the randomness of the output. Higher temperature increases the randomness of the output. Setting the temperature to `0` results in no randomness.
-
-### `TopP` \- Top probability mass
-
-`1` (default) | numeric scalar between `0` and `1`
-
-
-Top probability mass for controlling the diversity of the generated output using top-p sampling. Higher top probability mass corresponds to higher diversity.
-
-### `StopSequences` \- Stop sequences
-
-`""` (default) | string array with between `1` and `4` elements
-
-
-Sequences that stop generation of tokens.
-
-
-**Example:** `["The end.","And that is all she wrote."]`
+For an example, see [Process Generated Text in Real Time by Using ChatGPT™ in Streaming Mode](../../examples/ProcessGeneratedTextinRealTimebyUsingChatGPTinStreamingMode.md).
 
-### `PresencePenalty` \- Presence penalty
 
-`0` (default) | numeric scalar between `-2` and `2`
+**Example:** `@(token) fprintf("%s",token)`
 
+### `ResponseFormat` — Response format
 
-Penalty value for using a token that has already been used at least once in the generated output. Higher values reduce the repetition of tokens. Negative values increase the repetition of tokens.
+`"text"` (default) | `"json"`
 
 
-The presence penalty is independent of the number of incidents of a token, so long as it has been used at least once. To increase the penalty for every additional time a token is generated, use the `FrequencyPenalty` name\-value argument.
+After construction, this property is read\-only.
 
-### `FrequencyPenalty` \- Frequency penalty
 
-`0` (default) | numeric scalar between `-2` and `2`
+Format of generated output.
 
 
-Penalty value for repeatedly using the same token in the generated output. Higher values reduce the repetition of tokens. Negative values increase the repetition of tokens.
+If you set the response format to `"text"`, then the generated output is a string.
 
 
-The frequence penalty increases with every instance of a token in the generated output. To use a constant penalty for a repeated token, independent of the number of instances that token is generated, use the `PresencePenalty` name\-value argument.
+If you set the response format to `"json"`, then the generated output is a string containing JSON encoded data.
 
-### `TimeOut` \- Connection timeout in seconds
 
-`10` (default) | nonnegative numeric scalar
+To configure the format of the generated JSON file, describe the format using natural language and provide it to the model either in the system prompt or as a user message. The prompt or message describing the format must contain the word `"json"` or `"JSON"`.
 
 
-This property is read\-only.
+For an example, see [Analyze Sentiment in Text Using ChatGPT in JSON Mode](../../examples/AnalyzeSentimentinTextUsingChatGPTinJSONMode.md).
 
 
-If the OpenAI server does not respond within the timeout, then the function throws an error.
+The JSON response format is not supported for these models:
 
-### `ResponseFormat` \- Response format
+- `ModelName="gpt-4"`
+- `ModelName="gpt-4-0613"`
+# Other Properties
+### `SystemPrompt` — System prompt
 
-`"text"` (default) | `"json"`
+character vector | string scalar
 
 
 This property is read\-only.
 
 
-Format of generated output.
-
-
-If the response format is `"text"`, then the generated output is a string.
-
-
-If the response format is `"json"`, then the generated output is a string containing JSON encoded data.
-
-
-To configure the format of the generated JSON file, describe the format using natural language and provide it to the model either in the system prompt or as a user message. The prompt or message describing the format must contain the word `"json"` or `"JSON"`.
-
-
-For an example, see [Analyze Sentiment in Text Using ChatGPT in JSON Mode](../../examples/AnalyzeSentimentinTextUsingChatGPTinJSONMode.md).
-
+The system prompt is a natural\-language description that provides the framework in which a large language model generates its responses. The system prompt can include instructions about tone, communications style, language, etc.
 
-The JSON response format is not supported for these models:
 
-- `ModelName="gpt-4"`
-- `ModelName="gpt-4-0613"`
+**Example**: "You are a helpful assistant who provides answers to user queries in iambic pentameter."
 
-### `FunctionNames` \- Names of OpenAI functions to use during output generation
+### `FunctionNames` — Names of OpenAI functions to use during output generation
 
 string array
 
@@ -272,24 +202,49 @@ Names of the custom functions specified in the `Tools` name\-value argument.
 
 # Object Functions
 
-`generate` \- Generate text
+[`generate`](generate.md) — Generate output from large language models
 
 # Examples
 ## Create OpenAI Chat
 ```matlab
-modelName = "gpt-3.5-turbo";
+loadenv(".env")
+modelName = "gpt-4o-mini";
 chat = openAIChat("You are a helpful assistant awaiting further instructions.",ModelName=modelName)
 ```
+
+```matlabTextOutput
+chat = 
+  openAIChat with properties:
+
+           ModelName: "gpt-4o-mini"
+         Temperature: 1
+                TopP: 1
+       StopSequences: [0x0 string]
+             TimeOut: 10
+        SystemPrompt: {[1x1 struct]}
+      ResponseFormat: "text"
+     PresencePenalty: 0
+    FrequencyPenalty: 0
+       FunctionNames: []
+
+```
 ## Generate and Stream Text
 ```matlab
+loadenv(".env")
 sf = @(x) fprintf("%s",x);
 chat = openAIChat(StreamFun=sf);
-generate(chat,"Why is a raven like a writing desk?")
+generate(chat,"Why is a raven like a writing desk?",MaxNumTokens=50)
+```
+
+```matlabTextOutput
+The phrase "Why is a raven like a writing desk?" comes from Lewis Carroll's "Alice's Adventures in Wonderland." Initially posed by the Mad Hatter during the tea party scene, the question is often interpreted as nonsense, in line with the book
+ans = "The phrase "Why is a raven like a writing desk?" comes from Lewis Carroll's "Alice's Adventures in Wonderland." Initially posed by the Mad Hatter during the tea party scene, the question is often interpreted as nonsense, in line with the book"
 ```
 # See Also
 - [Create Simple Chat Bot](../../examples/CreateSimpleChatBot.md)
 - [Process Generated Text in Real Time Using ChatGPT in Streaming Mode](../../examples/ProcessGeneratedTextinRealTimebyUsingChatGPTinStreamingMode.md)
 - [Analyze Scientific Papers Using Function Calls](../../examples/AnalyzeScientificPapersUsingFunctionCalls.md)
 - [Analyze Sentiment in Text Using ChatGPT in JSON Mode](../../examples/AnalyzeSentimentinTextUsingChatGPTinJSONMode.md)
 
-Copyright 2024 The MathWorks, Inc.
+*Copyright 2024 The MathWorks, Inc.*
+
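Beyond the examples in the committed file, the `ResponseFormat="json"` option documented above pairs naturally with MATLAB's built\-in `jsondecode`. The following is a minimal sketch, not part of the commit: it assumes a valid API key is available via a `.env` file, and the requested field names (`city`, `country`) are purely illustrative.

```matlab
% Sketch only: request JSON-encoded output and decode it into a struct.
% Assumes a valid API key (e.g., loaded from a .env file) and network access.
loadenv(".env")

% JSON mode requires the word "json" in the system prompt or a user message.
chat = openAIChat("You are a helpful assistant. Respond with a JSON object " + ...
    "with fields ""city"" and ""country"".", ResponseFormat="json");

txt = generate(chat,"Where is the Eiffel Tower located?");

% In JSON mode, the generated output is a string containing JSON-encoded data.
data = jsondecode(txt);
disp(data.city)
```

Because generation is nondeterministic, production code should validate the decoded struct (or wrap `jsondecode` in `try`/`catch`) rather than assume the model returned the requested fields.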
