Commit 4d88ea2

Update code snippets in README (#19)

* Update code snippets in README
* Rename llm variable to model
1 parent 2bbad8f commit 4d88ea2

File tree

1 file changed (+7, -7 lines)


README.md

Lines changed: 7 additions & 7 deletions
````diff
@@ -24,30 +24,30 @@ Using this convenience API, requesting text completion from an already
 loaded LLM is as straightforward as:
 
 ```python
-import lmstudio as lm
+import lmstudio as lms
 
-llm = lm.llm()
-llm.complete("Once upon a time,")
+model = lms.llm()
+model.complete("Once upon a time,")
 ```
 
 Requesting a chat response instead only requires the extra step of
 setting up a `Chat` helper to manage the chat history and include
 it in response prediction requests:
 
 ```python
-import lmstudio as lm
+import lmstudio as lms
 
 EXAMPLE_MESSAGES = (
     "My hovercraft is full of eels!",
     "I will not buy this record, it is scratched."
 )
 
-llm = lm.llm()
-chat = lm.Chat("You are a helpful shopkeeper assisting a foreign traveller")
+model = lms.llm()
+chat = lms.Chat("You are a helpful shopkeeper assisting a foreign traveller")
 for message in EXAMPLE_MESSAGES:
     chat.add_user_message(message)
     print(f"Customer: {message}")
-    response = llm.respond(chat)
+    response = model.respond(chat)
     chat.add_assistant_response(response)
     print(f"Shopkeeper: {response}")
 ```
````
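The chat loop in the updated snippet requires a running LM Studio instance with a model loaded. Its history-management pattern can, however, be sketched offline: the `respond` function below is a hypothetical stub standing in for `model.respond()`, and a plain list stands in for the `Chat` helper; everything else mirrors the loop from the README.

```python
# Offline sketch of the chat-history pattern from the updated README snippet.
# `respond` is a hypothetical stub in place of model.respond(), which would
# normally generate text with a model loaded in LM Studio.

EXAMPLE_MESSAGES = (
    "My hovercraft is full of eels!",
    "I will not buy this record, it is scratched."
)

def respond(history):
    # Stub: number the reply by counting user messages seen so far,
    # instead of generating text with an actual LLM.
    user_turns = sum(1 for role, _ in history if role == "user")
    return f"(reply #{user_turns})"

history = []     # stands in for the lms.Chat helper
transcript = []
for message in EXAMPLE_MESSAGES:
    history.append(("user", message))          # chat.add_user_message(...)
    transcript.append(f"Customer: {message}")
    response = respond(history)                # model.respond(chat)
    history.append(("assistant", response))    # chat.add_assistant_response(...)
    transcript.append(f"Shopkeeper: {response}")

print("\n".join(transcript))
```

The point of the pattern is that the full alternating history, not just the latest message, is passed to the model on every turn, which is exactly what handing the `Chat` object to `model.respond()` accomplishes.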
