
Commit 635a215

docs: enhance README and guides for clarity and consistency; update gemspec description
1 parent e9077f7 commit 635a215

File tree

5 files changed: +156 -277 lines changed

README.md

Lines changed: 54 additions & 118 deletions
@@ -1,6 +1,6 @@
 <img src="/docs/assets/images/logotype.svg" alt="RubyLLM" height="120" width="250">
 
-A delightful Ruby way to work with AI. No configuration madness, no complex callbacks, no handler hell – just beautiful, expressive Ruby code.
+**A delightful Ruby way to work with AI.** RubyLLM provides **one** beautiful, Ruby-like interface to interact with modern AI models. Chat, generate images, create embeddings, and use tools – all with clean, expressive code that feels like Ruby, not like patching together multiple services.
 
 <div class="provider-icons">
 <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/anthropic-text.svg" alt="Anthropic" class="logo-small">
@@ -11,14 +11,14 @@ A delightful Ruby way to work with AI. No configuration madness, no complex call
 <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/deepseek-color.svg" alt="DeepSeek" class="logo-medium">
 <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/deepseek-text.svg" alt="DeepSeek" class="logo-small">
 &nbsp;
+<img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/gemini-brand-color.svg" alt="Gemini" class="logo-large">
+&nbsp;
 <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/ollama.svg" alt="Ollama" class="logo-medium">
 <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/ollama-text.svg" alt="Ollama" class="logo-medium">
 &nbsp;
 <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/openai.svg" alt="OpenAI" class="logo-medium">
 <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/openai-text.svg" alt="OpenAI" class="logo-medium">
 &nbsp;
-<img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/gemini-brand-color.svg" alt="Gemini" class="logo-large">
-&nbsp;
 <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/openrouter.svg" alt="OpenRouter" class="logo-medium">
 <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/openrouter-text.svg" alt="OpenRouter" class="logo-small">
 &nbsp;
@@ -39,17 +39,6 @@ Every AI provider comes with its own client library, its own response format, it
 
 RubyLLM fixes all that. One beautiful API for everything. One consistent format. Minimal dependencies — just Faraday and Zeitwerk. Because working with AI should be a joy, not a chore.
 
-## Features
-
-- 💬 **Chat** with Anthropic, AWS Bedrock Anthropic, DeepSeek, Ollama, OpenAI, Gemini, and OpenRouter models
-- 👁️ **Vision and Audio** understanding
-- 📄 **PDF Analysis** for analyzing documents
-- 🖼️ **Image generation** with DALL-E and other providers
-- 📊 **Embeddings** for vector search and semantic analysis
-- 🔧 **Tools** that let AI use your Ruby code
-- 🚂 **Rails integration** to persist chats and messages with ActiveRecord
-- 🌊 **Streaming** responses with proper Ruby patterns
-
 ## What makes it great
 
 ```ruby
@@ -96,142 +85,89 @@
 chat.with_tool(Weather).ask "What's the weather in Berlin? (52.5200, 13.4050)"
 ```
 
+## Core Capabilities
+
+* 💬 **Unified Chat:** Converse with models from OpenAI, Anthropic, Gemini, Bedrock, OpenRouter, DeepSeek, Ollama, or any OpenAI-compatible API using `RubyLLM.chat`.
+* 👁️ **Vision:** Analyze images within chats.
+* 🔊 **Audio:** Transcribe and understand audio content.
+* 📄 **PDF Analysis:** Extract information and summarize PDF documents.
+* 🖼️ **Image Generation:** Create images with `RubyLLM.paint`.
+* 📊 **Embeddings:** Generate text embeddings for vector search with `RubyLLM.embed`.
+* 🔧 **Tools (Function Calling):** Let AI models call your Ruby code using `RubyLLM::Tool`.
+* 🚂 **Rails Integration:** Easily persist chats, messages, and tool calls using `acts_as_chat` and `acts_as_message`.
+* 🌊 **Streaming:** Process responses in real-time with idiomatic Ruby blocks.
+
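The streaming capability added above is a plain block-passing pattern. A minimal stub, runnable without an API key, shows how a caller consumes streamed chunks; `Chunk` and `fake_ask` here are stand-ins for the gem's real message objects, not part of RubyLLM:

```ruby
# Sketch of the streaming interface: `chat.ask(prompt) { |chunk| ... }`
# yields partial-response chunks as they arrive. This stub fakes the
# provider so the block-handling pattern can be seen offline.
Chunk = Struct.new(:content)

def fake_ask(prompt)
  full = +""
  ["Ruby ", "makes ", "AI ", "fun."].each do |piece|
    chunk = Chunk.new(piece)
    yield chunk if block_given? # stream each chunk to the caller
    full << chunk.content
  end
  Chunk.new(full)               # final accumulated message
end

streamed = +""
response = fake_ask("Tell me about Ruby") { |chunk| streamed << chunk.content }
puts streamed         # => Ruby makes AI fun.
puts response.content # => Ruby makes AI fun.
```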
 ## Installation
 
+Add to your Gemfile:
 ```ruby
-# In your Gemfile
 gem 'ruby_llm'
-
-# Then run
-bundle install
-
-# Or install it yourself
-gem install ruby_llm
 ```
+Then `bundle install`.
 
-Configure with your API keys:
-
+Configure your API keys (using environment variables is recommended):
 ```ruby
+# config/initializers/ruby_llm.rb or similar
 RubyLLM.configure do |config|
   config.openai_api_key = ENV.fetch('OPENAI_API_KEY', nil)
-  config.anthropic_api_key = ENV.fetch('ANTHROPIC_API_KEY', nil)
-  config.gemini_api_key = ENV.fetch('GEMINI_API_KEY', nil)
-  config.deepseek_api_key = ENV.fetch('DEEPSEEK_API_KEY', nil)
-
-  # Bedrock
-  config.bedrock_api_key = ENV.fetch('AWS_ACCESS_KEY_ID', nil)
-  config.bedrock_secret_key = ENV.fetch('AWS_SECRET_ACCESS_KEY', nil)
-  config.bedrock_region = ENV.fetch('AWS_REGION', nil)
-  config.bedrock_session_token = ENV.fetch('AWS_SESSION_TOKEN', nil)
+  # Add keys ONLY for providers you intend to use
+  # config.anthropic_api_key = ENV.fetch('ANTHROPIC_API_KEY', nil)
+  # ... see Configuration guide for all options ...
 end
 ```
+See the [Installation Guide](https://rubyllm.com/installation) for full details.
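A side note on the `ENV.fetch(KEY, nil)` idiom in the configuration above: with an explicit `nil` default it returns `nil` for unset variables instead of raising `KeyError` (which a bare `ENV.fetch(KEY)` would), so unconfigured providers are simply left disabled. The `DEMO_*` variable names below are made up for illustration:

```ruby
# Hypothetical variables, set only for this demonstration.
ENV['DEMO_OPENAI_KEY'] = 'sk-demo'

puts ENV.fetch('DEMO_OPENAI_KEY', nil)          # => sk-demo
puts ENV.fetch('DEMO_MISSING_KEY', nil).inspect # => nil

begin
  ENV.fetch('DEMO_MISSING_KEY') # no default: raises for unset variables
rescue KeyError => e
  puts "raised #{e.class}"      # => raised KeyError
end
```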

-## Have great conversations
+## Rails Integration
 
-```ruby
-# Start a chat with the default model (gpt-4.1-nano)
-chat = RubyLLM.chat
-
-# Or specify what you want
-chat = RubyLLM.chat(model: 'claude-3-7-sonnet-20250219')
-
-# Simple questions just work
-chat.ask "What's the difference between attr_reader and attr_accessor?"
-
-# Multi-turn conversations are seamless
-chat.ask "Could you give me an example?"
-
-# Stream responses in real-time
-chat.ask "Tell me a story about a Ruby programmer" do |chunk|
-  print chunk.content
-end
-
-# Set personality or behavior with instructions (aka system prompts)
-chat.with_instructions "You are a friendly Ruby expert who loves to help beginners"
-
-# Understand content in multiple forms
-chat.ask "Compare these diagrams", with: { image: ["diagram1.png", "diagram2.png"] }
-chat.ask "Summarize this document", with: { pdf: "contract.pdf" }
-chat.ask "What's being said?", with: { audio: "meeting.wav" }
-
-# Need a different model mid-conversation? No problem
-chat.with_model('gemini-2.0-flash').ask "What's your favorite algorithm?"
-```
-
-## Rails integration that makes sense
+Add persistence to your chat models effortlessly:
 
 ```ruby
 # app/models/chat.rb
 class Chat < ApplicationRecord
-  acts_as_chat
-
-  # Works great with Turbo
-  broadcasts_to ->(chat) { "chat_#{chat.id}" }
+  acts_as_chat # Automatically saves messages & tool calls
+  # ... your other model logic ...
 end
 
 # app/models/message.rb
 class Message < ApplicationRecord
   acts_as_message
+  # ...
 end
 
-# app/models/tool_call.rb
+# app/models/tool_call.rb (if using tools)
 class ToolCall < ApplicationRecord
   acts_as_tool_call
+  # ...
 end
 
-# In a background job
-chat = Chat.create! model_id: "gpt-4.1-nano"
-
-# Set personality or behavior with instructions (aka system prompts) - they're persisted too!
-chat.with_instructions "You are a friendly Ruby expert who loves to help beginners"
-
-chat.ask("What's your favorite Ruby gem?") do |chunk|
-  Turbo::StreamsChannel.broadcast_append_to(
-    chat,
-    target: "response",
-    partial: "messages/chunk",
-    locals: { chunk: chunk }
-  )
-end
-
-# That's it - chat history is automatically saved
-```
-
-## Creating tools is a breeze
-
-```ruby
-class Search < RubyLLM::Tool
-  description "Searches a knowledge base"
-
-  param :query, desc: "The search query"
-  param :limit, type: :integer, desc: "Max results", required: false
-
-  def execute(query:, limit: 5)
-    # Your search logic here
-    Document.search(query).limit(limit).map(&:title)
-  end
-end
-
-# Let the AI use it
-chat.with_tool(Search).ask "Find documents about Ruby 3.3 features"
+# Now interacting with a Chat record persists the conversation:
+chat_record = Chat.create!(model_id: "gpt-4.1-nano")
+chat_record.ask("Explain Active Record callbacks.") # User & Assistant messages saved
 ```
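For `acts_as_chat` and friends to persist anything, the backing tables must exist. A rough schema sketch follows; the column names are assumptions for illustration, so consult the Rails Integration Guide for the schema the gem actually expects:

```ruby
# Hypothetical migration sketch (column names assumed, not authoritative).
class CreateRubyLlmTables < ActiveRecord::Migration[7.1]
  def change
    create_table :chats do |t|
      t.string :model_id      # e.g. "gpt-4.1-nano"
      t.timestamps
    end

    create_table :messages do |t|
      t.references :chat
      t.string :role          # "user", "assistant", "system", "tool"
      t.text :content
      t.timestamps
    end

    create_table :tool_calls do |t|
      t.references :message
      t.string :name
      t.json :arguments
      t.timestamps
    end
  end
end
```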
-
-## Learn more
-
-Check out the guides at https://rubyllm.com for deeper dives into conversations with tools, streaming responses, embedding generations, and more.
+Check the [Rails Integration Guide](https://rubyllm.com/guides/rails) for more.
+
+## Learn More
+
+Dive deeper with the official documentation:
+
+- [Installation](https://rubyllm.com/installation)
+- [Configuration](https://rubyllm.com/configuration)
+- **Guides:**
+  - [Getting Started](https://rubyllm.com/guides/getting-started)
+  - [Chatting with AI Models](https://rubyllm.com/guides/chat)
+  - [Using Tools](https://rubyllm.com/guides/tools)
+  - [Streaming Responses](https://rubyllm.com/guides/streaming)
+  - [Rails Integration](https://rubyllm.com/guides/rails)
+  - [Image Generation](https://rubyllm.com/guides/image-generation)
+  - [Embeddings](https://rubyllm.com/guides/embeddings)
+  - [Working with Models](https://rubyllm.com/guides/models)
+  - [Error Handling](https://rubyllm.com/guides/error-handling)
+  - [Available Models](https://rubyllm.com/guides/available-models)
 
 ## Contributing
 
-We welcome contributions to RubyLLM!
-
-See [CONTRIBUTING.md](CONTRIBUTING.md) for detailed instructions on how to:
-- Run the test suite
-- Add new features
-- Update documentation
-- Re-record VCR cassettes when needed
-
-We appreciate your help making RubyLLM better!
+We welcome contributions! Please see [CONTRIBUTING.md](CONTRIBUTING.md) for details on setup, testing, and contribution guidelines.
 
 ## License
 
-Released under the MIT License.
+Released under the MIT License.

docs/guides/getting-started.md

Lines changed: 34 additions & 39 deletions
@@ -9,7 +9,7 @@ permalink: /guides/getting-started
 # Getting Started with RubyLLM
 {: .no_toc }
 
-Welcome to RubyLLM! This guide will get you up and running quickly. We'll cover installing the gem, configuring your first API key, and making basic chat, image, and embedding requests.
+Welcome to RubyLLM! This guide will get you up and running quickly. We'll cover installing the gem, minimal configuration, and making your first chat, image, and embedding requests.
 {: .fs-6 .fw-300 }
 
 ## Table of contents
@@ -23,10 +23,10 @@ Welcome to RubyLLM! This guide will get you up and running quickly. We'll cover
 After reading this guide, you will know:
 
 * How to install RubyLLM.
-* How to configure API keys.
+* How to perform minimal configuration.
 * How to start a simple chat conversation.
 * How to generate an image.
-* How to create text embeddings.
+* How to create a text embedding.
 
 ## Installation

@@ -38,34 +38,33 @@
 
 Then run `bundle install`.
 
-Alternatively, install it manually: `gem install ruby_llm`
-
 (For full details, see the [Installation Guide]({% link installation.md %})).
 
-## Configuration
+## Minimal Configuration
 
-RubyLLM needs API keys for the AI providers you want to use. Configure them, typically in an initializer (`config/initializers/ruby_llm.rb` in Rails) or at the start of your script.
+RubyLLM needs API keys for the AI providers you want to use. Configure them once, typically when your application starts.
 
 ```ruby
+# config/initializers/ruby_llm.rb (in Rails) or at the start of your script
 require 'ruby_llm'
 
 RubyLLM.configure do |config|
-  # Add keys for the providers you plan to use.
-  # Using environment variables is recommended.
+  # Add keys ONLY for the providers you intend to use.
+  # Using environment variables is highly recommended.
   config.openai_api_key = ENV.fetch('OPENAI_API_KEY', nil)
   # config.anthropic_api_key = ENV.fetch('ANTHROPIC_API_KEY', nil)
-  # ... add other provider keys as needed
 end
 ```
 
-You only need to configure keys for the providers you intend to use. See the [Installation Guide]({% link installation.md %}#configuration) for all configuration options.
+{: .note }
+You only need to configure keys for the providers you actually plan to use. See the [Configuration Guide]({% link configuration.md %}) for all options, including setting defaults and connecting to custom endpoints.
 
 ## Your First Chat
 
-The primary way to interact with language models is through the `RubyLLM.chat` interface.
+Interact with language models using `RubyLLM.chat`.
 
 ```ruby
-# Create a chat instance (uses the default model, usually GPT)
+# Create a chat instance (uses the configured default model)
 chat = RubyLLM.chat
 
 # Ask a question
@@ -74,62 +73,58 @@ response = chat.ask "What is Ruby on Rails?"
 
 # The response is a RubyLLM::Message object
 puts response.content
 # => "Ruby on Rails, often shortened to Rails, is a server-side web application..."
-
-# Continue the conversation naturally
-response = chat.ask "What are its main advantages?"
-puts response.content
-# => "Some key advantages of Ruby on Rails include..."
 ```
 
-RubyLLM automatically handles conversation history. Dive deeper in the [Chatting with AI Models Guide]({% link guides/chat.md %}).
+RubyLLM handles the conversation history automatically. See the [Chatting with AI Models Guide]({% link guides/chat.md %}) for more details.
 
 ## Generating an Image
 
-You can generate images using models like DALL-E 3 via the `RubyLLM.paint` method.
+Generate images using models like DALL-E 3 via `RubyLLM.paint`.
 
 ```ruby
-# Generate an image (uses the default image model, usually DALL-E 3)
-image = RubyLLM.paint("A futuristic cityscape at sunset, watercolor style")
-
-# Access the image URL
-puts image.url
-# => "https://oaidalleapiprodscus.blob.core.windows.net/..."
+# Generate an image (uses the default image model)
+image = RubyLLM.paint("A photorealistic red panda coding Ruby")
+
+# Access the image URL (or Base64 data depending on provider)
+if image.url
+  puts image.url
+  # => "https://oaidalleapiprodscus.blob.core.windows.net/..."
+else
+  puts "Image data received (Base64)."
+end
 
-# See the potentially revised prompt the model used
-puts image.revised_prompt
-# => "A watercolor painting of a futuristic cityscape bathed in the warm hues of a setting sun..."
+# Save the image locally
+image.save("red_panda.png")
 ```
 
 Learn more in the [Image Generation Guide]({% link guides/image-generation.md %}).
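When a provider returns raw Base64 data instead of a URL (the `else` branch in the snippet above), saving the image boils down to decode-and-write. A self-contained sketch with fake bytes; `image.save` in the gem presumably wraps similar logic, and the byte values here are just the PNG magic prefix, not a real image:

```ruby
require 'base64'
require 'tempfile'

# Stand-in for real image bytes from a provider response.
fake_png_bytes = [0x89, 0x50, 0x4E, 0x47].pack('C*') # PNG magic prefix
encoded = Base64.strict_encode64(fake_png_bytes)     # what the API would send

Tempfile.create(['demo', '.png']) do |f|
  f.binmode
  f.write(Base64.strict_decode64(encoded))           # decode back to raw bytes
  f.flush
  puts File.size(f.path)                             # => 4
end
```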

-## Creating Embeddings
+## Creating an Embedding
 
-Embeddings represent text as numerical vectors, useful for tasks like semantic search. Use `RubyLLM.embed`.
+Create numerical vector representations of text using `RubyLLM.embed`.
 
 ```ruby
-# Create an embedding for a single piece of text
+# Create an embedding (uses the default embedding model)
 embedding = RubyLLM.embed("Ruby is optimized for programmer happiness.")
 
 # Access the vector (an array of floats)
 vector = embedding.vectors
-puts "Vector dimension: #{vector.length}" # e.g., 1536 for text-embedding-3-small
-
-# Embed multiple texts at once
-texts = ["Convention over configuration", "Model-View-Controller", "Metaprogramming"]
-embeddings = RubyLLM.embed(texts)
+puts "Vector dimension: #{vector.length}" # e.g., 1536
 
-puts "Generated #{embeddings.vectors.length} vectors." # => 3
+# Access metadata
+puts "Model used: #{embedding.model}"
 ```
 
 Explore further in the [Embeddings Guide]({% link guides/embeddings.md %}).
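Since `embedding.vectors` is a plain Ruby array of floats, similarity math for semantic search needs no extra dependencies. A minimal cosine-similarity sketch; the sample vectors below are hard-coded stand-ins for what `RubyLLM.embed` would return:

```ruby
# Cosine similarity between two embedding vectors (arrays of floats).
def cosine_similarity(a, b)
  dot    = a.zip(b).sum { |x, y| x * y }
  norm_a = Math.sqrt(a.sum { |x| x * x })
  norm_b = Math.sqrt(b.sum { |x| x * x })
  dot / (norm_a * norm_b)
end

# Hard-coded stand-ins for real embedding vectors.
vec_a = [1.0, 0.0, 1.0]
vec_b = [1.0, 1.0, 0.0]

puts cosine_similarity(vec_a, vec_a).round(3) # => 1.0
puts cosine_similarity(vec_a, vec_b).round(3) # => 0.5
```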

 ## What's Next?
 
-You've seen the basics! Now you're ready to explore RubyLLM's features in more detail:
+You've covered the basics! Now you're ready to explore RubyLLM's features in more detail:
 
 * [Chatting with AI Models]({% link guides/chat.md %})
 * [Working with Models]({% link guides/models.md %}) (Choosing models, custom endpoints)
 * [Using Tools]({% link guides/tools.md %}) (Letting AI call your code)
 * [Streaming Responses]({% link guides/streaming.md %})
 * [Rails Integration]({% link guides/rails.md %})
+* [Configuration]({% link configuration.md %})
 * [Error Handling]({% link guides/error-handling.md %})
