Commit 94e5c1e ("much cleaner UX"), 1 parent: 6cc3155

5 files changed: +322 −188 lines

README.md (18 additions & 46 deletions)
````diff
@@ -27,17 +27,13 @@ Think Python code, but with an LLM brain transplant!
 
 <img width="534" alt="image" src="https://github.com/user-attachments/assets/921734fc-49b6-4efd-b702-d00d3f9b60e4" />
 
-Most applications will need to perform some logic that allows you to control the workflow of your Agente with good old if/else statements. For example, given a question in plain English, you want to do something different, like checking if the email sounds urgent or not:
+Most applications will need to perform some logic that allows you to control the workflow of your Agent with good old if/else statements. For example, given a question in plain English, you want to do something different, like checking if the email sounds urgent or not:
 
 ```python
-
-llm.set_context(email=email)
-
-if llm.true_or_false('is this email urgent?'):
+if llm.is_true('is this email urgent?', email=email):
     -- do something
 else:
     -- do something else
-
 ```
 
 ### Workflow: Routing
````
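This hunk collapses `set_context` plus `true_or_false` into a single `is_true(question, **context)` call. A minimal sketch of the per-call-context pattern, assuming a stand-in model: `FakeLLM` and its keyword check are invented here for illustration, not flat_ai's actual implementation.

```python
# Illustrative stand-in for `is_true(question, **context)`: context is
# passed per call as keyword arguments instead of being set up front.
# A real LLM would reason over a prompt built from question + context;
# this fake just keyword-matches the context values.

class FakeLLM:
    def is_true(self, question, **context):
        blob = " ".join(str(value) for value in context.values()).lower()
        return "urgent" in blob


llm = FakeLLM()
email = "URGENT: the server is down, please respond ASAP"
if llm.is_true('is this email urgent?', email=email):
    print("escalate")  # prints "escalate" for this email
else:
    print("archive")
```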
````diff
@@ -47,26 +43,21 @@ Similar to if/else statements, but for when your LLM needs to be more dramatic w
 
 *For example*, let's say we want to classify a message into different categories:
 
-``` python
-
+```python
 options = {
    'meeting': 'this is a meeting request',
    'spam': 'people trying to sell you stuff you dont want',
    'other': 'this is sounds like something else'
-}
-
-llm.set_context(email=email)
+}
 
-match llm.get_key(options):
+match llm.classify(options, email=email):
     case 'meeting':
-        # you can add more context whenever you want
-        llm.add_context(meeting=True)
+        -- do something
     case 'spam':
-        llm.add_context(spam=True)
+        -- do something
     case 'other':
         -- do something
 
-
 ```
 
 ## Agents
````
````diff
@@ -90,10 +81,7 @@ class EmailSummary(BaseModel):
     label: str
 
 
-llm.set_context(email=email)
-
-ret = llm.generate_object(EmailSummary)
-
+ret = llm.generate_object(EmailSummary, email=email)
 ```
 
 
````
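`generate_object(schema, **context)` asks the model to fill a schema from the context. A sketch of that shape, using a stdlib dataclass instead of the README's pydantic `BaseModel`, with the model's JSON output hard-coded; none of this is flat_ai's real code.

```python
# Stand-in for `generate_object(schema, **context)`. A real call would
# have the model emit JSON matching the schema; fake_model_output plays
# the model's role here.
import json
from dataclasses import dataclass


@dataclass
class EmailSummary:
    subject: str
    label: str


def generate_object(schema, **context):
    # Pretend the model read the context and returned this JSON.
    fake_model_output = json.dumps({
        "subject": context["email"].splitlines()[0],
        "label": "meeting",
    })
    return schema(**json.loads(fake_model_output))


email = "Quarterly sync\nCan we meet Tuesday at 10?"
ret = generate_object(EmailSummary, email=email)
# ret == EmailSummary(subject='Quarterly sync', label='meeting')
```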

````diff
@@ -114,33 +102,32 @@ class ActionItem(BaseModel):
 
 object_schema = List[ActionItem]
 
-llm.set_context(email=email)
-
+# lets pass the context to the LLM once, so we don't have to pass it every time
+llm.set_context(email=email, today = date.today())
 if llm.true_or_false('are there action items in this email?'):
     for action_item in llm.generate_object(object_schema):
         -- do something
+
+llm.clear_context()
 ```
 
 ### Function Calling
 
 And of course, we want to be able to call functions. But you want the llm to figure out the arguments for you.
 
 *For example*, let's say we want to call a function that sends a calendar invite to a meeting, we want the llm to figure out the arguments for the function given some information:
-```python
-
 
+```python
 def send_calendar_invite(
     subject = str,
     time = str,
     location = str,
     attendees = List[str]):
     -- send a calendar invite to the meeting
 
-llm.set_context(email=email)
-
-if llm.true_or_false('is this an email requesting for a meeting?'):
-    ret = llm.call_function(send_calendar_invite)
 
+if llm.true_or_false('is this an email requesting for a meeting?', email=email):
+    ret = llm.call_function(send_calendar_invite, email=email, today = date.today())
 ```
 
 ### Function picking
````
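The Function Calling hunk above has `call_function` infer the target's arguments from per-call context. A sketch of signature-driven argument filling: `inspect.signature` reads the parameter names, and the `fake_args` dict stands in for the model's inferred values (invented for illustration, not flat_ai's implementation).

```python
# Stand-in for `call_function(func, **context)`: read the parameter
# names, have the "model" propose a value for each, then call the
# function with them. fake_args plays the model's role.
import inspect


def send_calendar_invite(subject, time, location, attendees):
    return f"invited {len(attendees)} people to '{subject}' at {location}"


def call_function(func, **context):
    params = inspect.signature(func).parameters
    # A real LLM would infer these values from the context kwargs.
    fake_args = {
        'subject': 'Project kickoff',
        'time': '10:00',
        'location': 'Room 4',
        'attendees': ['ana@example.com', 'bo@example.com'],
    }
    kwargs = {name: fake_args[name] for name in params if name in fake_args}
    return func(**kwargs)


ret = call_function(send_calendar_invite, email="let's meet", today="2024-05-28")
# ret == "invited 2 people to 'Project kickoff' at Room 4"
```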
````diff
@@ -150,7 +137,6 @@ Sometimes you want to pick a function from a list of functions. You can do that
 *For example*, let's say we want to pick a function from a list of functions:
 
 ```python
-
 def send_calendar_invites(
     subject = str,
     time = str,
````
````diff
@@ -174,13 +160,7 @@ else
 send a calendar invites to the meeting
 """
 
-llm.set_context(email=email)
-
-# pick the function and the arguments
-function, args = llm.pick_a_function(instructions, [send_calendar_invite, send_email])
-
-# call the function with the arguments
-function(**args)
+function, args = llm.pick_a_function(instructions, [send_calendar_invite, send_email], email=email, today = date.today())
 ```
 
 
````
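`pick_a_function` now takes the context kwargs directly and returns a `(function, args)` pair; dispatching with `function(**args)` works exactly as before, even though the new README example no longer shows it. A sketch with the LLM's selection faked by a keyword check (all names and logic here are illustrative, not flat_ai's code):

```python
# Stand-in for `pick_a_function(instructions, functions, **context)`:
# returns the chosen function plus its argument dict, which the caller
# dispatches with function(**args). Keyword-based selection fakes the
# model's choice.

def send_calendar_invite(subject, attendees):
    return f"invite '{subject}' sent to {len(attendees)} attendees"


def send_email(subject, body):
    return f"email '{subject}' sent"


def pick_a_function(instructions, functions, **context):
    blob = " ".join(str(value) for value in context.values()).lower()
    if "meeting" in blob:
        return send_calendar_invite, {'subject': 'Sync', 'attendees': ['ana']}
    return send_email, {'subject': 'Re: your note', 'body': 'Thanks!'}


function, args = pick_a_function(
    'find the function that best handles this email',
    [send_calendar_invite, send_email],
    email="can we set up a meeting?",
)
result = function(**args)  # dispatch is unchanged by this commit
# result == "invite 'Sync' sent to 1 attendees"
```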

````diff
@@ -189,24 +169,16 @@ function(**args)
 Sometimes you just want a simple string response from the LLM. You can use the `get_string` method for this, I know! boring AF but it may come in handy:
 
 ```python
-
-llm.set_context(email=email)
-
-ret = llm.get_string('what is the subject of the email?')
-
+ret = llm.get_string('what is the subject of the email?', email=email)
 ```
 
 ### Streaming Response
 
 Sometimes you want to stream the response from the LLM. You can use the `get_stream` method for this:
 
 ```python
-
-llm.set_context(email=email)
-
-for chunk in llm.get_stream('what is the subject of the email?'):
+for chunk in llm.get_stream('what is the subject of the email?', email=email):
     print(chunk)
-
 ```
 
 
````
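The streaming change follows the same pattern: context moves into the call, and the consuming loop is untouched because `get_stream` yields chunks like any generator. A canned sketch (the generator body is invented; a real client yields chunks as the API produces them):

```python
# Stand-in for `get_stream(question, **context)`: yields response chunks
# one at a time. Here a generator yields words from a canned answer
# instead of streaming tokens from a model.

def get_stream(question, **context):
    canned_answer = "Quarterly sync"  # pretend this is the model's reply
    for word in canned_answer.split():
        yield word


chunks = []
for chunk in get_stream('what is the subject of the email?', email="Quarterly sync\n..."):
    chunks.append(chunk)
# chunks == ['Quarterly', 'sync']
```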

flat_ai/__init__.py (1 addition & 0 deletions)

````diff
@@ -1 +1,2 @@
 from .flat_ai import FlatAI
+from .trace_llm import configure_logging
````
