Commit 7cded01

Merge branch 'main' into generators
2 parents 9eba7bc + fe4ffe4

45 files changed, +3451 -377 lines


.github/workflows/cicd.yml (+32 -10)

@@ -74,24 +74,46 @@ jobs:
           ruby-version: '3.3'
           bundler-cache: true

-      - name: Check if version has changed
+      - name: Check if version needs publishing
         id: check_version
         run: |
-          VERSION=$(ruby -r ./lib/ruby_llm/version.rb -e "puts RubyLLM::VERSION")
-          echo "Current version: $VERSION"
+          VERSION_FILE="./lib/ruby_llm/version.rb"
+          VERSION=$(ruby -r $VERSION_FILE -e "puts RubyLLM::VERSION" 2>/dev/null)

-          # Fetch all versions including prereleases
-          ALL_VERSIONS=$(gem list ruby_llm -r --prerelease | grep -oE '[0-9a-zA-Z\.\-]+')
-          echo "Available versions: $ALL_VERSIONS"
+          if [ -z "$VERSION" ]; then
+            echo "Error: Could not extract VERSION from $VERSION_FILE" >&2
+            exit 1
+          fi
+          echo "Local version: $VERSION"
+
+          GEM_NAME="ruby_llm"
+          echo "Fetching published versions for '$GEM_NAME'..."
+
+          # Combine results from --all and --prerelease, extract, clean, unique sort
+          ALL_PUBLISHED_VERSIONS=$(( \
+            gem list ^${GEM_NAME}$ -r --all 2>/dev/null || true; \
+            gem list ^${GEM_NAME}$ -r --prerelease 2>/dev/null || true; \
+          ) | sed -n 's/.*(\(.*\)).*/\1/p' \
+            | tr ',' '\n' \
+            | sed 's/^[[:space:]]*//;s/[[:space:]]*$//' \
+            | grep . \
+            | sort -u )
+
+          echo "Checking against published versions:"
+          if [ -n "$ALL_PUBLISHED_VERSIONS" ]; then
+            echo "$ALL_PUBLISHED_VERSIONS" | sed 's/^/ /' # Log found versions nicely
+          else
+            echo " (none found)"
+          fi

-          # Check if current version is among published versions
-          if echo "$ALL_VERSIONS" | grep -Fxq "$VERSION"; then
-            echo "Version $VERSION already published, skipping publish"
+          if echo "$ALL_PUBLISHED_VERSIONS" | grep -Fxq "$VERSION"; then
+            echo "Verdict: Version $VERSION already published."
             echo "version_changed=false" >> $GITHUB_OUTPUT
           else
-            echo "Version $VERSION is new"
+            echo "Verdict: Version $VERSION is new."
             echo "version_changed=true" >> $GITHUB_OUTPUT
           fi
+        shell: bash

       - name: Test with real APIs before publishing
         if: steps.check_version.outputs.version_changed == 'true'
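The rewritten check scrapes `gem list` output, where each line looks like `ruby_llm (1.1.0, 1.0.1, 1.0.0)`. A minimal Ruby sketch of what the sed/tr/grep/sort pipeline computes, run against a hypothetical sample line rather than a live RubyGems query:

```ruby
# Hypothetical `gem list` output; the workflow queries RubyGems for real.
sample = 'ruby_llm (1.1.0, 1.0.1, 1.0.0)'

versions = sample[/\((.*)\)/, 1]  # keep what's inside the parentheses (the sed -n step)
              .split(',')         # one entry per comma (tr ',' '\n')
              .map(&:strip)       # trim surrounding whitespace (the second sed)
              .reject(&:empty?)   # drop blank entries (grep .)
              .sort.uniq          # unique sort (sort -u)

p versions                   # => ["1.0.0", "1.0.1", "1.1.0"]
p versions.include?('1.1.0') # the grep -Fxq "$VERSION" membership test => true
```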

CONTRIBUTING.md (-4)

@@ -185,10 +185,6 @@ When adding new features, please include documentation updates:
 - Add inline documentation using YARD comments
 - Keep the README clean and focused on helping new users get started quickly

-## Philosophy
-
-RubyLLM follows certain design philosophies and conventions. Please refer to our [Philosophy Guide](https://rubyllm.com/philosophy) to ensure your contributions align with the project's vision.
-
 ## Discussions and Issues

 - For questions and discussions, please use [GitHub Discussions](https://github.com/crmne/ruby_llm/discussions)

README.md (+2 -2)

@@ -135,7 +135,7 @@ chat.ask "Tell me a story about a Ruby programmer" do |chunk|
   print chunk.content
 end

-# Set personality or behavior with instructions (aka system prompts) - available from 1.1.0
+# Set personality or behavior with instructions (aka system prompts)
 chat.with_instructions "You are a friendly Ruby expert who loves to help beginners"

 # Understand content in multiple forms
@@ -197,7 +197,7 @@ end
 # In a background job
 chat = Chat.create! model_id: "gpt-4o-mini"

-# Set personality or behavior with instructions (aka system prompts) - they're persisted too! - available from 1.1.0
+# Set personality or behavior with instructions (aka system prompts) - they're persisted too!
 chat.with_instructions "You are a friendly Ruby expert who loves to help beginners"

 chat.ask("What's your favorite Ruby gem?") do |chunk|

docs/guides/chat.md (+1 -11)

@@ -43,11 +43,6 @@ claude_chat = RubyLLM.chat(model: 'claude-3-5-sonnet-20241022')
 chat.with_model('gemini-2.0-flash')
 ```

-{: .warning-title }
-> Coming in v1.1.0
->
-> The following model aliases and provider selection features are available in the upcoming version.
-
 RubyLLM supports model aliases, so you don't need to remember specific version numbers:

 ```ruby
@@ -69,11 +64,6 @@ See [Working with Models]({% link guides/models.md %}) for more details on model

 ## Instructions (aka System Prompts)

-{: .warning-title }
-> Coming in v1.1.0
->
-> chat.with_instructions is coming in 1.1.0. 1.0.x users should use `add_message role: system, content: PROMPT`
-
 System prompts allow you to set specific instructions or context that guide the AI's behavior throughout the conversation. These prompts are not directly visible to the user but help shape the AI's responses:

 ```ruby
@@ -86,7 +76,7 @@ chat.with_instructions "You are a helpful Ruby programming assistant. Always inc
 # Now the AI will follow these instructions in all responses
 response = chat.ask "How do I handle file operations in Ruby?"

-# You can add multiple system messages or update them during the conversation - available from 1.1.0
+# You can add multiple system messages or update them during the conversation
 chat.with_instructions "Always format your code using proper Ruby style conventions and include comments."

 # System prompts are especially useful for:

docs/guides/error-handling.md (+10 -4)

@@ -97,7 +97,7 @@ end

 ## Handling Tool Errors

-When using tools, errors can be handled within the tool or in the calling code:
+There are two kinds of errors when working with tools: those the LLM should know about and retry, and those that should bubble up to your application code. Let's handle them appropriately:

 ```ruby
 # Error handling within tools
@@ -110,13 +110,17 @@ class Weather < RubyLLM::Tool
     url = "https://api.open-meteo.com/v1/forecast?latitude=#{latitude}&longitude=#{longitude}&current=temperature_2m,wind_speed_10m"

     response = Faraday.get(url)
-    data = JSON.parse(response.body)
-  rescue => e
+    JSON.parse(response.body)
+  rescue Faraday::ClientError => e
+    # Return errors the LLM should know about and can retry
     { error: e.message }
   end
 end
+```
+
+Handle program-ending errors at the application level:

-# Error handling when using tools
+```ruby
 begin
   chat = RubyLLM.chat.with_tool(Calculator)
   chat.ask "What's 1/0?"
@@ -125,6 +129,8 @@ rescue RubyLLM::Error => e
 end
 ```

+Return errors to the LLM when it should try a different approach (like invalid parameters or temporary failures), but let serious problems bubble up to be handled by your application's error tracking. The LLM is smart enough to work with error messages and try alternative approaches, but it shouldn't have to deal with program-ending problems.
+
 ## Automatic Retries

 RubyLLM automatically retries on certain transient errors:

docs/guides/models.md (-14)

@@ -29,11 +29,6 @@ chat.with_model('claude-3-5-sonnet')

 ### Model Resolution

-{: .warning-title }
-> Coming in v1.1.0
->
-> Provider-Specific Match and Alias Resolution will be available in the next release.
-
 When you specify a model, RubyLLM follows these steps to find it:

 1. **Exact Match**: First tries to find an exact match for the model ID
@@ -66,21 +61,12 @@ chat = RubyLLM.chat(model: 'claude-3-5-sonnet', provider: 'bedrock')

 ### Model Aliases

-{: .warning-title }
-> Coming in v1.1.0
->
-> Alias Resolution will be available in the next release.
-
 RubyLLM provides convenient aliases for popular models, so you don't have to remember specific version numbers:

 ```ruby
 # These are equivalent
 chat = RubyLLM.chat(model: 'claude-3-5-sonnet')
 chat = RubyLLM.chat(model: 'claude-3-5-sonnet-20241022')
-
-# These are also equivalent
-chat = RubyLLM.chat(model: 'gpt-4o')
-chat = RubyLLM.chat(model: 'gpt-4o-2024-11-20')
 ```

 If you want to ensure you're always getting a specific version, use the full model ID:

docs/guides/rails.md (+2 -7)

@@ -153,25 +153,20 @@ end

 ## Instructions (aka System Prompts)

-{: .warning-title }
-> Coming in v1.1.0
->
-> chat.with_instructions is coming in 1.1.0. 1.0.x users should use `chat.messages.create! role: system, content: PROMPT`
-
 Instructions help guide the AI's behavior throughout a conversation. With Rails integration, these messages are automatically persisted just like regular chat messages:

 ```ruby
 # Create a new chat
 chat = Chat.create!(model_id: 'gpt-4o-mini')

-# Add instructions (these are persisted) - available from 1.1.0
+# Add instructions (these are persisted)
 chat.with_instructions("You are a helpful Ruby programming assistant. Always include code examples in your responses and explain them line by line.")

 # Ask questions - the AI will follow the instructions
 response = chat.ask("How do I handle file operations in Ruby?")
 puts response.content # Will include detailed code examples

-# Add additional instructions - available from 1.1.0
+# Add additional instructions
 chat.with_instructions("Always format your code using proper Ruby style conventions and include comments.")
 # Both instructions are now persisted and active

docs/guides/tools.md (+66 -10)

@@ -184,7 +184,7 @@ chat.ask "What's the weather in New York? Coordinates are 40.7128, -74.0060"

 ## Error Handling

-Tools can handle errors gracefully:
+Tools should handle errors differently based on whether they're recoverable by the LLM or require application intervention:

 ```ruby
 class Weather < RubyLLM::Tool
@@ -193,20 +193,76 @@ class Weather < RubyLLM::Tool
   param :longitude, desc: "Longitude (e.g., 13.4050)"

   def execute(latitude:, longitude:)
-    url = "https://api.open-meteo.com/v1/forecast?latitude=#{latitude}&longitude=#{longitude}&current=temperature_2m,wind_speed_10m"
-
-    response = Faraday.get(url)
-    data = JSON.parse(response.body)
-  rescue => e
-    { error: e.message }
+    validate_coordinates!(latitude, longitude)
+    response = Faraday.get(weather_api_url(latitude, longitude))
+
+    case response.status
+    when 429
+      # Return errors the LLM should know about and can retry
+      { error: "Rate limit exceeded. Please try again in 60 seconds." }
+    when 200
+      JSON.parse(response.body)
+    else
+      # Let serious problems bubble up
+      raise "Weather API error: #{response.status}"
+    end
   end
+
+  private
+  def validate_coordinates!(lat, long)
+    lat = lat.to_f
+    long = long.to_f
+
+    if lat.abs > 90 || long.abs > 180
+      # Return validation errors to the LLM
+      { error: "Invalid coordinates. Latitude must be between -90 and 90, longitude between -180 and 180." }
+    end
+  end
+
+  def weather_api_url(lat, long)
+    "https://api.open-meteo.com/v1/forecast?latitude=#{lat}&longitude=#{long}&current=temperature_2m"
+  end
 end
+```
+
+Handle application-level errors in your code:

-# When there's an error, the model will receive and explain it
-chat.ask "What's the weather at invalid coordinates 1000, 1000?"
-# => "The coordinates 1000, 1000 are not valid for any location on Earth, as latitude must be between -90 and 90, and longitude must be between -180 and 180. Please provide valid coordinates or a city name for weather information."
+```ruby
+begin
+  chat = RubyLLM.chat.with_tool(Weather)
+  response = chat.ask "What's the weather in Berlin?"
+rescue RubyLLM::Error => e
+  # Handle LLM-specific errors
+  Rails.logger.error "LLM error: #{e.message}"
+  raise
+rescue StandardError => e
+  # Handle other unexpected errors
+  Rails.logger.error "Tool execution failed: #{e.message}"
+  raise
+end
 ```

+### Error Handling Guidelines
+
+When implementing tools, follow these principles:
+
+1. **Return errors to the LLM when:**
+   - Input validation fails
+   - The operation can be retried (rate limits, temporary failures)
+   - Alternative approaches might work
+
+2. **Let errors bubble up when:**
+   - The tool encounters unexpected states
+   - System resources are unavailable
+   - Authentication or authorization fails
+   - Data integrity is compromised
+
+The LLM can handle returned errors intelligently by:
+- Retrying with different parameters
+- Suggesting alternative approaches
+- Explaining the issue to the user
+- Using different tools to accomplish the task
+
 ## Simple Tool Parameters

 RubyLLM currently only supports simple parameter types: strings, numbers, and booleans. Complex types like arrays and objects are not supported.

docs/index.md (-5)

@@ -21,11 +21,6 @@ A delightful Ruby way to work with AI through a unified interface to OpenAI, Ant

 ---

-{: .warning-title }
-> Coming in v1.1.0
->
-> Amazon Bedrock support is coming in v1.1.0
-
 <div style="display: flex; align-items: center; flex-wrap: wrap; gap: 1em; margin-bottom: 1em">
   <img src="https://upload.wikimedia.org/wikipedia/commons/4/4d/OpenAI_Logo.svg" alt="OpenAI" height="40" width="120">
   <img src="https://upload.wikimedia.org/wikipedia/commons/7/78/Anthropic_logo.svg" alt="Anthropic" height="40" width="120">

lib/ruby_llm/active_record/acts_as.rb (+3 -3)

@@ -25,13 +25,13 @@ def acts_as_chat(message_class: 'Message', tool_call_class: 'ToolCall')
              to: :to_llm
   end

-  def acts_as_message(chat_class: 'Chat', tool_call_class: 'ToolCall') # rubocop:disable Metrics/MethodLength
+  def acts_as_message(chat_class: 'Chat', tool_call_class: 'ToolCall', touch_chat: false) # rubocop:disable Metrics/MethodLength
     include MessageMethods

     @chat_class = chat_class.to_s
     @tool_call_class = tool_call_class.to_s

-    belongs_to :chat, class_name: @chat_class, foreign_key: "#{@chat_class.underscore}_id"
+    belongs_to :chat, class_name: @chat_class, touch: touch_chat
     has_many :tool_calls, class_name: @tool_call_class, dependent: :destroy

     belongs_to :parent_tool_call,
@@ -209,4 +209,4 @@ def extract_content
     end
   end
 end
-end
\ No newline at end of file
+end
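Since `touch_chat:` is passed straight through to `belongs_to :chat, touch: touch_chat`, host apps can now opt into touching the parent chat on message writes. A sketch of the opt-in (model names follow the gem's Rails conventions; the default of `false` preserves existing behavior):

```ruby
class Message < ApplicationRecord
  # With touch_chat: true, saving or destroying a message bumps the parent
  # chat's updated_at - useful for cache invalidation and for sorting chats
  # by recent activity.
  acts_as_message touch_chat: true
end
```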

lib/ruby_llm/aliases.json (-27)

@@ -34,32 +34,5 @@
   "claude-2-1": {
     "anthropic": "claude-2.1",
     "bedrock": "anthropic.claude-2.1"
-  },
-  "gpt-4o": {
-    "openai": "gpt-4o-2024-11-20"
-  },
-  "gpt-4o-mini": {
-    "openai": "gpt-4o-mini-2024-07-18"
-  },
-  "gpt-4-turbo": {
-    "openai": "gpt-4-turbo-2024-04-09"
-  },
-  "gemini-1.5-flash": {
-    "gemini": "gemini-1.5-flash-002"
-  },
-  "gemini-1.5-flash-8b": {
-    "gemini": "gemini-1.5-flash-8b-001"
-  },
-  "gemini-1.5-pro": {
-    "gemini": "gemini-1.5-pro-002"
-  },
-  "gemini-2.0-flash": {
-    "gemini": "gemini-2.0-flash-001"
-  },
-  "o1": {
-    "openai": "o1-2024-12-17"
-  },
-  "o3-mini": {
-    "openai": "o3-mini-2025-01-31"
   }
 }
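Each remaining entry maps a version-free alias to provider-specific model IDs. A minimal sketch of how such a table can be consulted (illustrative only; RubyLLM's actual resolution logic lives in the gem):

```ruby
require 'json'

aliases = JSON.parse(File.read('lib/ruby_llm/aliases.json'))

# Return the provider-specific ID if aliased, else the literal model ID.
def resolve(aliases, model, provider)
  aliases.dig(model, provider) || model
end

puts resolve(aliases, 'claude-2-1', 'bedrock') # => "anthropic.claude-2.1"
puts resolve(aliases, 'gpt-4o', 'openai')      # => "gpt-4o" (alias removed by this commit)
```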
