Commit 289ef5d

lapp0 authored and rlouf committed
reflect in docs: models.openai with new interface, also remove references to gpt-3 and gpt-4
1 parent 5591950 commit 289ef5d

File tree

10 files changed, +63 −28 lines changed


docs/reference/index.md

Lines changed: 1 addition & 1 deletion

@@ -10,6 +10,6 @@ By default, language models stop generating tokens after and <EOS> token was gen
 ```python
 import outlines.models as models


-complete = models.openai("gpt-3.5-turbo")
+complete = models.openai("gpt-4o-mini")
 expert = complete("Name an expert in quantum gravity.", stop_at=["\n", "."])
 ```

docs/reference/models/models.md

Lines changed: 5 additions & 1 deletion

@@ -42,7 +42,11 @@ model = outlines.models.openai(
 | Stream ||||| ? |||
 | **`outlines.generate`** | | | | | | | |
 | Text ||||||||
-| Structured* ||||||||
+| __Structured__ ||||||||
+| JSON Schema ||||||||
+| Choice ||||||||
+| Regex ||||||||
+| Grammar ||||||||


 ## Caveats

docs/reference/models/openai.md

Lines changed: 40 additions & 10 deletions

@@ -2,30 +2,29 @@

 !!! Installation

-    You need to install the `openai` and `tiktoken` libraries to be able to use the OpenAI API in Outlines.
+    You need to install the `openai` library to be able to use the OpenAI API in Outlines.

 ## OpenAI models

-Outlines supports models available via the OpenAI Chat API, e.g. ChatGPT and GPT-4. You can initialize the model by passing the model name to `outlines.models.openai`:
+Outlines supports models available via the OpenAI Chat API, e.g. GPT-4o, ChatGPT and GPT-4. You can initialize the model by passing the model name to `outlines.models.openai`:

 ```python
 from outlines import models


-model = models.openai("gpt-3.5-turbo")
-model = models.openai("gpt-4-turbo")
+model = models.openai("gpt-4o-mini")
 model = models.openai("gpt-4o")
 ```

-Check the [OpenAI documentation](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4) for an up-to-date list of available models. You can pass any parameter you would pass to `openai.AsyncOpenAI` as keyword arguments:
+Check the [OpenAI documentation](https://platform.openai.com/docs/models/gpt-4o) for an up-to-date list of available models. You can pass any parameter you would pass to `openai.AsyncOpenAI` as keyword arguments:

 ```python
 import os
 from outlines import models


 model = models.openai(
-    "gpt-3.5-turbo",
+    "gpt-4o-mini",
     api_key=os.environ["OPENAI_API_KEY"]
 )
 ```
@@ -56,8 +55,8 @@ from outlines import models

 model = models.azure_openai(
     "azure-deployment-name",
-    "gpt-3.5-turbo",
-    api_version="2023-07-01-preview",
+    "gpt-4o-mini",
+    api_version="2024-07-18",
     azure_endpoint="https://example-endpoint.openai.azure.com",
 )
 ```
@@ -111,6 +110,37 @@ model = models.openai(client, config)

 You need to pass the async client to be able to do batch inference.

+## Structured Generation Support
+
+Outlines provides support for [OpenAI Structured Outputs](https://platform.openai.com/docs/guides/structured-outputs/json-mode) via `outlines.generate.json` and `outlines.generate.choice`.
+
+```python
+from pydantic import BaseModel, ConfigDict
+import outlines.models as models
+from outlines import generate
+
+model = models.openai("gpt-4o-mini")
+
+class Person(BaseModel):
+    model_config = ConfigDict(extra='forbid')  # required for openai
+    first_name: str
+    last_name: str
+    age: int
+
+generator = generate.json(model, Person)
+generator("current indian prime minister on january 1st 2023")
+# Person(first_name='Narendra', last_name='Modi', age=72)
+
+generator = generate.choice(model, ["Chicken", "Egg"])
+print(generator("Which came first?"))
+# Chicken
+```
+
+!!! Warning
+
+    Structured generation support is only provided for OpenAI-compatible endpoints that conform to OpenAI's standard. Additionally, `generate.regex` and `generate.cfg` are not supported.
+
+
 ## Advanced configuration

 For more advanced configuration options, such as proxy support, please consult the [OpenAI SDK's documentation](https://github.com/openai/openai-python):
@@ -146,7 +176,7 @@ config = OpenAIConfig(
     top_p=.95,
     seed=0,
 )
-model = models.openai("gpt-3.5-turbo", config)
+model = models.openai("gpt-4o-mini", config)
 ```

 ## Monitoring API use
@@ -158,7 +188,7 @@ from openai import AsyncOpenAI
 import outlines.models


-model = models.openai("gpt-4")
+model = models.openai("gpt-4o")

 print(model.prompt_tokens)
 # 0
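The `# required for openai` comment in the `Person` model above has a concrete reason: OpenAI's Structured Outputs mode rejects JSON schemas whose objects allow additional properties, and pydantic only emits `"additionalProperties": false` when the model forbids extra fields. A minimal sketch of that effect, using pydantic alone with no API call (the field names are just the ones from the diff):

```python
# Sketch: why the commit's Person model sets extra='forbid'. OpenAI's
# Structured Outputs mode rejects object schemas unless they declare
# "additionalProperties": false, and pydantic only emits that key when
# the model config forbids extra fields. No API request is made here.
from pydantic import BaseModel, ConfigDict


class Person(BaseModel):
    model_config = ConfigDict(extra='forbid')  # required for openai
    first_name: str
    last_name: str
    age: int


schema = Person.model_json_schema()
print(schema["additionalProperties"])
# False
```

Dropping the `model_config` line removes the `additionalProperties` key from the emitted schema, which is what makes the API reject it.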

docs/reference/text.md

Lines changed: 1 addition & 1 deletion

@@ -9,7 +9,7 @@ Outlines provides a unified interface to generate text with many language models
 ```python
 from outlines import models, generate

-model = models.openai("gpt-4")
+model = models.openai("gpt-4o-mini")
 generator = generate.text(model)
 answer = generator("What is 2+2?")


examples/babyagi.py

Lines changed: 1 addition & 1 deletion

@@ -10,7 +10,7 @@
 import outlines
 import outlines.models as models

-model = models.openai("gpt-3.5-turbo")
+model = models.openai("gpt-4o-mini")
 complete = outlines.generate.text(model)


examples/math_generate_code.py

Lines changed: 1 addition & 1 deletion

@@ -35,7 +35,7 @@ def execute_code(code):


 prompt = answer_with_code_prompt(question, examples)
-model = models.openai("gpt-3.5-turbo")
+model = models.openai("gpt-4o-mini")
 answer = outlines.generate.text(model)(prompt)
 result = execute_code(answer)
 print(f"It takes Carla {result:.0f} minutes to download the file.")

examples/meta_prompting.py

Lines changed: 1 addition & 1 deletion

@@ -140,7 +140,7 @@ def run_example(model_fn, question, model_name):
 parser.add_argument(
     "--model",
     type=str,
-    default="gpt-3.5-turbo-1106",
+    default="gpt-4o-mini",
     help="The Large Language Model to use to run the examples.",
 )
 args = parser.parse_args()

examples/pick_odd_one_out.py

Lines changed: 1 addition & 1 deletion

@@ -31,7 +31,7 @@ def build_ooo_prompt(options):

 options = ["sea", "mountains", "plains", "sock"]

-model = models.openai("gpt-3.5-turbo")
+model = models.openai("gpt-4o-mini")
 gen_text = outlines.generate.text(model)
 gen_choice = outlines.generate.choice(model, options)

examples/react.py

Lines changed: 11 additions & 10 deletions

@@ -13,6 +13,7 @@
 import requests  # type: ignore

 import outlines
+import outlines.generate as generate
 import outlines.models as models


@@ -45,25 +46,25 @@ def search_wikipedia(query: str):


 prompt = build_reAct_prompt("Where is Apple Computers headquarted? ")
-model = models.openai("gpt-3.5-turbo")
-complete = outlines.generate.text(model)
+model = models.openai("gpt-4o-mini")
+
+mode_generator = generate.choice(model, choices=["Tho", "Act"])
+action_generator = generate.choice(model, choices=["Search", "Finish"])
+text_generator = generate.text(model)

 for i in range(1, 10):
-    mode = complete.generate_choice(prompt, choices=["Tho", "Act"], max_tokens=128)
+    mode = mode_generator(prompt, max_tokens=128)
     prompt = add_mode(i, mode, "", prompt)

     if mode == "Tho":
-        thought = complete(prompt, stop_at="\n", max_tokens=128)
+        thought = text_generator(prompt, stop_at="\n", max_tokens=128)
         prompt += f"{thought}"
     elif mode == "Act":
-        action = complete.generate_choice(
-            prompt, choices=["Search", "Finish"], max_tokens=128
-        )
+        action = action_generator(prompt, max_tokens=128)
         prompt += f"{action} '"

-        subject = complete(
-            prompt, stop_at=["'"], max_tokens=128
-        )  # Apple Computers headquartered
+        subject = text_generator(prompt, stop_at=["'"], max_tokens=128)
+        # Apple Computers headquartered
         subject = " ".join(subject.split()[:2])
         prompt += f"{subject}'"

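The react.py changes above show the interface shift this commit documents: instead of one `complete` object carrying a `generate_choice` method, each constraint is built once into its own callable generator that is then invoked per prompt. A stdlib-only sketch of that calling convention (the `choice` factory and `"dummy-model"` name are hypothetical stand-ins, not the outlines API):

```python
# Hypothetical stand-in for the generator-building pattern: a factory binds
# the model and the allowed choices once, and returns a plain callable that
# is invoked per prompt. Real constrained decoding is replaced by a trivial
# deterministic pick so the sketch runs without any model or API key.
def choice(model, choices):
    def generator(prompt, max_tokens=None):
        # A real generator would constrain token sampling to `choices`;
        # here we return the first option to show the call shape only.
        return choices[0]
    return generator


mode_generator = choice("dummy-model", choices=["Tho", "Act"])
action_generator = choice("dummy-model", choices=["Search", "Finish"])

print(mode_generator("Should I think or act?", max_tokens=128))
# Tho
print(action_generator("What next?", max_tokens=128))
# Search
```

Building the generator once and reusing it inside the loop is what lets the new react.py drop the per-call `choices=` argument from the old `generate_choice` method.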

examples/self_consistency.py

Lines changed: 1 addition & 1 deletion

@@ -55,7 +55,7 @@ def few_shots(question, examples):
 """


-model = models.openai("gpt-3.5-turbo")
+model = models.openai("gpt-4o-mini")
 generator = outlines.generate.text(model)
 prompt = few_shots(question, examples)
 answers = generator(prompt, samples=10)
