Using this convenience API, requesting text completion from an already
loaded LLM is as straightforward as:
```python
import lmstudio as lms

model = lms.llm()
model.complete("Once upon a time,")
```
Requesting a chat response instead only requires the extra step of
setting up a `Chat` helper to manage the chat history and include
it in response prediction requests:
```python
import lmstudio as lms

EXAMPLE_MESSAGES = (
    "My hovercraft is full of eels!",
    "I will not buy this record, it is scratched."
)

model = lms.llm()
chat = lms.Chat("You are a helpful shopkeeper assisting a foreign traveller")
for message in EXAMPLE_MESSAGES:
    chat.add_user_message(message)
    print(f"Customer: {message}")
    response = model.respond(chat)
    chat.add_assistant_response(response)
    print(f"Shopkeeper: {response}")
```
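The `Chat` helper's job in the loop above is essentially bookkeeping: it accumulates the system prompt plus alternating user and assistant messages so that each `respond()` call sees the whole conversation so far. As a rough illustration of that pattern, here is a minimal stand-in class (this is a hypothetical sketch, not the real `lmstudio.Chat` API, which also handles things like message formatting and serialization):

```python
# Hypothetical sketch of the history bookkeeping a chat helper performs.
# This is NOT the real lmstudio.Chat class; it only illustrates the idea
# of accumulating a conversation transcript for prediction requests.
class MiniChat:
    def __init__(self, system_prompt: str):
        # The conversation starts with the system prompt.
        self.messages = [{"role": "system", "content": system_prompt}]

    def add_user_message(self, content: str) -> None:
        self.messages.append({"role": "user", "content": content})

    def add_assistant_response(self, content: str) -> None:
        self.messages.append({"role": "assistant", "content": content})


chat = MiniChat("You are a helpful shopkeeper assisting a foreign traveller")
chat.add_user_message("My hovercraft is full of eels!")
chat.add_assistant_response("I'm afraid we don't stock eel repellent.")
# The full transcript (system + user + assistant) would be sent on the
# next prediction request.
print(len(chat.messages))
```

Because the helper owns the history, the calling code stays a simple loop: add the user message, request a response, add the response back, repeat.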