# go-llm

The module implements a simple API interface for large language models
which run on [Ollama](https://github.com/ollama/ollama/blob/main/docs/api.md),
[Anthropic](https://docs.anthropic.com/en/api/getting-started), [Mistral](https://docs.mistral.ai/)
and [OpenAI](https://platform.openai.com/docs/api-reference). It provides the ability to:

* Maintain a session of messages;
* Call tools, including your own (aka Tool plugins);
* Create embedding vectors from text;
* Stream responses;
* Use multi-modal inputs (aka images, audio and attachments);
* Convert completions to speech (OpenAI only).

There is a command-line tool included in the module which can be used to interact with the API.
If you have docker installed, you can use the following command to run the tool, without
needing to install it:

```bash
docker run ghcr.io/mutablelogic/go-llm:latest --help

# Interact with Claude to retrieve news headlines, assuming
# you have an API key for Anthropic and NewsAPI
docker run \
  -e OLLAMA_URL -e MISTRAL_API_KEY -e ANTHROPIC_API_KEY -e OPENAI_API_KEY \
  -e NEWSAPI_KEY \
  ghcr.io/mutablelogic/go-llm:latest \
  chat mistral-small-latest --prompt "What is the latest news?" --no-stream
```

You can also install the tool if you have a `go` compiler.

## Programmatic Usage

See the documentation [here](https://pkg.go.dev/github.com/mutablelogic/go-llm)
for integration into your own code.

### Agent Instantiation
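
For example, to create an [Anthropic](https://docs.anthropic.com/en/api/getting-started)
agent, you can use the following. This is a minimal sketch, assuming an `anthropic.New`
constructor which mirrors the `openai.New` call shown below:

```go
import (
  "os"

  "github.com/mutablelogic/go-llm/pkg/anthropic"
)

func main() {
  // Create a new agent (anthropic.New is assumed to mirror openai.New)
  agent, err := anthropic.New(os.Getenv("ANTHROPIC_API_KEY"))
  if err != nil {
    panic(err)
  }
  // ...
}
```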

Similarly, for [OpenAI](https://pkg.go.dev/github.com/mutablelogic/go-llm/pkg/openai)
models, you can use:

```go
import (
  "os"

  "github.com/mutablelogic/go-llm/pkg/openai"
)

func main() {
  // Create a new agent
  agent, err := openai.New(os.Getenv("OPENAI_API_KEY"))
  if err != nil {
    panic(err)
  }
  // ...
}
```

You can append options to the agent creation to set the client/server communication options,
such as user agent strings, timeouts, debugging, rate limiting, adding custom headers, etc.
See [here](https://pkg.go.dev/github.com/mutablelogic/go-client#readme-basic-usage) for more information.
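
For example, here is a minimal sketch which sets a request timeout and enables verbose
tracing. The `OptTimeout` and `OptTrace` option names are assumptions based on the
go-client package, as is the assumption that the agent constructors accept them:

```go
import (
  "os"
  "time"

  client "github.com/mutablelogic/go-client"
  "github.com/mutablelogic/go-llm/pkg/openai"
)

func main() {
  // Create a new agent with a 30s request timeout and verbose
  // request/response tracing on stderr (OptTimeout and OptTrace
  // are assumed from the go-client package)
  agent, err := openai.New(
    os.Getenv("OPENAI_API_KEY"),
    client.OptTimeout(30*time.Second),
    client.OptTrace(os.Stderr, true),
  )
  if err != nil {
    panic(err)
  }
  // ...
}
```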

You can also create an agent which aggregates several providers:

```go
import (
  "os"

  "github.com/mutablelogic/go-llm/pkg/agent"
)

func main() {
  // Create a new agent with several providers
  agent, err := agent.New(
    agent.WithAnthropic(os.Getenv("ANTHROPIC_API_KEY")),
    agent.WithMistral(os.Getenv("MISTRAL_API_KEY")),
    agent.WithOpenAI(os.Getenv("OPENAI_API_KEY")),
    agent.WithOllama(os.Getenv("OLLAMA_URL")),
  )
  if err != nil {
    panic(err)
  }
  // ...
}
```

### Completion

You can generate a **completion** as follows,

```go
import (
  "context"

  "github.com/mutablelogic/go-llm"
)

func completion(ctx context.Context, agent llm.Agent) (string, error) {
  completion, err := agent.
    Model(ctx, "claude-3-5-haiku-20241022").
    Completion(ctx, "Why is the sky blue?")
  if err != nil {
    return "", err
  }
  return completion.Text(0), nil
}
```

The zero index argument on `completion.Text(int)` indicates you want the text from the zeroth
completion choice, for providers which can generate several different choices simultaneously.

### Chat Sessions

You create a **chat session** with a model as follows,

```go
import (
  "context"

  "github.com/mutablelogic/go-llm"
)

func session(ctx context.Context, agent llm.Agent) error {
  // Create a new chat session
  session := agent.
    Model(ctx, "claude-3-5-haiku-20241022").
    Context()

  // Repeat forever
  for {
    // ... append user prompts with session.FromUser and read
    // the replies back with session.Text
  }
}
```

The `Context` object will continue to store the current session and options, and will
ensure the session is maintained across multiple completion calls.
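
For example, two turns in the same session might look like this, a sketch using the
`FromUser` and `Text` calls shown elsewhere in this README:

```go
// First turn
if err := session.FromUser(ctx, "Why is the sky blue?"); err != nil {
  return err
}
fmt.Println(session.Text(0))

// Second turn: the session retains the first exchange, so the
// model can resolve "it" from the earlier question
if err := session.FromUser(ctx, "And why does it turn red at sunset?"); err != nil {
  return err
}
fmt.Println(session.Text(0))
```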

### Embedding Generation

You can generate embedding vectors using an appropriate Ollama, OpenAI or Mistral model:

```go
import (
  "context"

  "github.com/mutablelogic/go-llm"
)

func embedding(ctx context.Context, agent llm.Agent) error {
  // Generate an embedding vector for the text "hello"
  vector, err := agent.
    Model(ctx, "mistral-embed").
    Embedding(ctx, "hello")
  // ...
}
```

### Attachments & Multi-modal

You can attach images and other files to a prompt. For example, to generate a caption
for an image:

```go
import (
  "context"
  "os"

  "github.com/mutablelogic/go-llm"
)

func generate_image_caption(ctx context.Context, agent llm.Agent, path string) (string, error) {
  f, err := os.Open(path)
  if err != nil {
    return "", err
  }
  defer f.Close()

  // Describe the image
  completion, err := agent.
    Model(ctx, "claude-3-5-sonnet-20241022").
    Completion(ctx, "Provide a short caption for this image", llm.WithAttachment(f))
  if err != nil {
    return "", err
  }

  return completion.Text(0), nil
}
```

Summarizing a text or PDF document works in exactly the same way with an Anthropic model,
but perhaps with a different prompt.
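
A minimal sketch of that variation, where only the prompt changes (the prompt string
here is illustrative):

```go
// Summarize an attached document instead of captioning an image
completion, err := agent.
  Model(ctx, "claude-3-5-sonnet-20241022").
  Completion(ctx, "Summarize this document in one paragraph", llm.WithAttachment(f))
```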

### Streaming

You can stream a completion as it is generated by passing a callback with the
`llm.WithStream` option:

```go
import (
  "context"
  "fmt"

  "github.com/mutablelogic/go-llm"
)

func generate_completion(ctx context.Context, agent llm.Agent, prompt string) (string, error) {
  completion, err := agent.
    Model(ctx, "claude-3-5-haiku-20241022").
    Completion(ctx, prompt, llm.WithStream(stream_callback))
  if err != nil {
    return "", err
  }
  return completion.Text(0), nil
}

func stream_callback(completion llm.Completion) {
  // Handle each fragment of the completion as it arrives
  fmt.Println(completion.Text(0))
}
```

### Tool Support

All providers support tools, but not all models. Your own tools should implement the
following interface:

```go
package llm

// Definition of a tool
type Tool interface {
  Name() string                     // The name of the tool
  Description() string              // The description of the tool
  Run(context.Context) (any, error) // Run the tool with a deadline and
                                    // return the result
}
```

For example, if you want to implement a tool which adds two numbers,

```go
package addition

type Adder struct {
  A float64 `name:"a" help:"The first number" required:"true"`
  B float64 `name:"b" help:"The second number" required:"true"`
}

func (Adder) Name() string {
  return "add_two_numbers"
}

func (Adder) Description() string {
  return "Add two numbers together and return the result"
}

func (a Adder) Run(context.Context) (any, error) {
  return a.A + a.B, nil
}
```

Then you can include your tool as part of the completion. It's possible that a
completion will continue to call additional tools, in which case you should
loop through completions until no more tool calls are made:

```go
import (
  "context"
  "fmt"

  "github.com/mutablelogic/go-llm"
  "github.com/mutablelogic/go-llm/pkg/tool"
)

func add_two_numbers(ctx context.Context, agent llm.Agent) (string, error) {
  session := agent.Model(ctx, "claude-3-5-haiku-20241022").Context()
  toolkit := tool.NewToolKit()
  toolkit.Register(Adder{})

  // Get the tool call
  if err := session.FromUser(ctx, "What is five plus seven?", llm.WithToolKit(toolkit)); err != nil {
    return "", err
  }

  // Call tools
  for {
    calls := session.ToolCalls(0)
    if len(calls) == 0 {
      break
    }

    // Print out any intermediate messages
    if session.Text(0) != "" {
      fmt.Println(session.Text(0))
    }

    // Get the results from the toolkit
    results, err := toolkit.Run(ctx, calls...)
    if err != nil {
      return "", err
    }

    // Get another tool call or a user response
    if err := session.FromTool(ctx, results...); err != nil {
      return "", err
    }
  }

  // Return the result
  return session.Text(0), nil
}
```

## Options

You can add options to sessions, or to prompts. Different providers and models support
different options.

```go
package llm

type Model interface {
  // Set session-wide options
  Context(...Opt) Context

  // Create a completion from a text prompt
  Completion(context.Context, string, ...Opt) (Completion, error)

  // Create a completion from a chat session
  Chat(context.Context, []Completion, ...Opt) (Completion, error)

  // Embedding vector generation
  Embedding(context.Context, string, ...Opt) ([]float64, error)
}

type Context interface {
  // Generate a response from a user prompt (with attachments and
  // other options)
  FromUser(context.Context, string, ...Opt) error
}
```
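
For example, here is a sketch which sets a toolkit session-wide and attaches a file to
a single prompt, reusing the `llm.WithToolKit` and `llm.WithAttachment` options shown
earlier. Passing `llm.WithToolKit` to `Context` as a session-wide option is an
assumption based on the interface above:

```go
// Session-wide option: the toolkit applies to every prompt
session := agent.Model(ctx, "claude-3-5-haiku-20241022").Context(llm.WithToolKit(toolkit))

// Per-prompt option: the attachment applies to this prompt only
if err := session.FromUser(ctx, "Provide a short caption for this image", llm.WithAttachment(f)); err != nil {
  return err
}
```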