Commit a15855a (parent: 555cf0a)

Add issue templates for bug reports and feature requests, and enhance contributing guide.

File tree

5 files changed (+224, -106 lines)


.github/ISSUE_TEMPLATE/bug_report.yml

Lines changed: 67 additions & 0 deletions
```yaml
name: Bug Report
description: Report a bug in RubyLLM
title: "[BUG] "
labels: ["bug"]
body:
  - type: markdown
    attributes:
      value: |
        Found a bug? Let's fix it.

  - type: checkboxes
    id: checks
    attributes:
      label: Basic checks
      options:
        - label: I searched existing issues - this hasn't been reported
          required: true
        - label: I can reproduce this consistently
          required: true
        - label: This is a RubyLLM bug, not my application code
          required: true

  - type: textarea
    id: description
    attributes:
      label: What's broken?
      description: Clear description of the bug
    validations:
      required: true

  - type: textarea
    id: reproduction
    attributes:
      label: How to reproduce
      placeholder: |
        1. Configure RubyLLM with...
        2. Call method...
        3. See error...
    validations:
      required: true

  - type: textarea
    id: expected
    attributes:
      label: Expected behavior
    validations:
      required: true

  - type: textarea
    id: actual
    attributes:
      label: What actually happened
      description: Include error messages and debug logs (RUBYLLM_DEBUG=true)
    validations:
      required: true

  - type: textarea
    id: environment
    attributes:
      label: Environment
      placeholder: |
        - Ruby version:
        - RubyLLM version:
        - Provider (OpenAI, Anthropic, etc.):
        - OS:
    validations:
      required: true
```
.github/ISSUE_TEMPLATE/config.yml

Lines changed: 11 additions & 0 deletions
```yaml
blank_issues_enabled: false
contact_links:
  - name: 💬 Ask Questions
    url: https://github.com/crmne/ruby_llm/discussions
    about: Questions about using RubyLLM, implementation approaches, or general discussion
  - name: 📖 Documentation
    url: https://rubyllm.com
    about: Check the docs first - guides and examples
  - name: 🚀 Priority Development
    url: mailto:carmine@paolino.work
    about: Need a feature implemented quickly? Paid development available
```
.github/ISSUE_TEMPLATE/feature_request.yml

Lines changed: 59 additions & 0 deletions

```yaml
name: Feature Request
description: Suggest a new feature for RubyLLM
title: "[FEATURE] "
labels: ["enhancement"]
body:
  - type: markdown
    attributes:
      value: |
        **Read this first:** Our [Contributing Guide](https://github.com/crmne/ruby_llm/blob/main/CONTRIBUTING.md) explains what belongs in RubyLLM vs. your app.

        We focus on **LLM communication**, not application architecture.

  - type: checkboxes
    id: scope_check
    attributes:
      label: Scope check
      description: "Does this belong in RubyLLM?"
      options:
        - label: This is **core LLM communication** (not application logic)
          required: true
        - label: This **benefits most users** (not just my use case)
          required: true
        - label: This **can't be solved in application code** with current RubyLLM
          required: true
        - label: I read the [Contributing Guide](https://github.com/crmne/ruby_llm/blob/main/CONTRIBUTING.md)
          required: true

  - type: checkboxes
    id: existing_check
    attributes:
      label: Due diligence
      options:
        - label: I searched existing issues
          required: true
        - label: I checked the documentation
          required: true

  - type: textarea
    id: problem
    attributes:
      label: What problem does this solve?
      description: Focus on LLM communication and developer experience
    validations:
      required: true

  - type: textarea
    id: solution
    attributes:
      label: Proposed solution
    validations:
      required: true

  - type: textarea
    id: why_library
    attributes:
      label: Why this belongs in RubyLLM
      description: Explain why this can't/shouldn't be application code
    validations:
      required: true
```

.github/pull_request_template.md

Lines changed: 36 additions & 0 deletions
```markdown
## What this does

<!-- Clear description of what this PR does and why -->

## Type of change

- [ ] Bug fix
- [ ] New feature
- [ ] Breaking change
- [ ] Documentation
- [ ] Performance improvement

## Scope check

- [ ] I read the [Contributing Guide](https://github.com/crmne/ruby_llm/blob/main/CONTRIBUTING.md)
- [ ] This aligns with RubyLLM's focus on **LLM communication**
- [ ] This isn't application-specific logic that belongs in user code
- [ ] This benefits most users, not just my specific use case

## Quality check

- [ ] I ran `overcommit --install` and all hooks pass
- [ ] I tested my changes thoroughly
- [ ] I updated documentation if needed
- [ ] I didn't modify auto-generated files manually (`models.json`, `aliases.json`)

## API changes

- [ ] Breaking change
- [ ] New public methods/classes
- [ ] Changed method signatures
- [ ] No API changes

## Related issues

<!-- Link issues: "Fixes #123" or "Related to #123" -->
```

CONTRIBUTING.md

Lines changed: 51 additions & 106 deletions
````diff
@@ -1,138 +1,83 @@
 # Contributing to RubyLLM
 
-Thank you for considering contributing to RubyLLM! We're aiming to build a high-quality, robust library, and thoughtful contributions are welcome.
-
-## Development Setup and Workflow
-
-Getting started and contributing follows a typical GitHub-based workflow:
-
-1. **Fork & Clone**: Fork the repository to your own GitHub account and then clone it locally.
-   ```bash
-   gh repo fork crmne/ruby_llm --clone
-   cd ruby_llm
-   ```
-2. **Install Dependencies**:
-   ```bash
-   bundle install
-   ```
-3. **Set Up Git Hooks**: Required.
-   ```bash
-   overcommit --install
-   ```
-4. **Branch**: Create a new branch for your feature or bugfix. If it relates to an existing issue, you can use the `gh` CLI to help:
-   ```bash
-   gh issue develop 123 --checkout # Substitute 123 with the relevant issue number
-   ```
-5. **Code & Test**: Make your changes and ensure they are well-tested. (See "Running Tests" section for more details).
-6. **Commit**: Write clear and concise commit messages.
-7. **Pull Request**: Create a Pull Request (PR) against the `main` branch of the `crmne/ruby_llm` repository.
-   * **Thoroughly review your own PR before submitting.** Check for any "vibe coding" – unnecessary files, experimental code that doesn't belong, or incomplete work.
-   * Write a **clear and detailed PR description** explaining the "what" and "why" of your changes. Link to any relevant issues.
-   * Badly/vibe-coded PRs with minimal descriptions will likely be closed or receive extensive review comments, slowing things down for everyone. Follow the existing conventions of RubyLLM. Aim for quality.
-   ```bash
-   gh pr create --web
-   ```
-
-## Model Registry (`models.json`) & Aliases (`aliases.json`)
-
-These files are critical for how RubyLLM identifies and uses AI models. **Both are auto-generated by rake tasks. Do not edit them manually or include manual changes to them in PRs.**
-
-### `models.json`: The Model Catalog
-
-* **How it's made**: The `rake models:update` task builds this file. It fetches model data directly from configured provider APIs (processing these details via each provider's `capabilities.rb` file) and also from the [Parsera LLM Specs API](https://api.parsera.org/v1/llm-specs). These lists are then merged, with Parsera's standardized data generally taking precedence for common models, augmented by provider-specific metadata. Models unique to a provider's API (and not yet in Parsera) are also included.
-* **Updating Model Information**:
-  * **Incorrect public specs (pricing, context size, etc.)?** Parsera scrapes public provider documentation. If data for a publicly documented model is wrong or missing on Parsera, please [file an issue with Parsera](https://github.com/parsera-labs/api-llm-specs/issues). Once they update, `rake models:update` will fetch the corrections.
-  * **Models not in public docs / Provider-specifics**: If a model isn't well-documented publicly by the provider (e.g., older or preview models) or needs provider-specific handling within RubyLLM, update the relevant `lib/ruby_llm/providers/<provider>/capabilities.rb` and potentially `models.rb`. Then run `bundle exec rake models:update`.
-  * **New Provider Support**: This involves more in-depth work to create the provider-specific modules and ensure integration with the `models:update` task.
-
-### `aliases.json`: User-Friendly Shortcuts
-
-* **Purpose**: Maps common names (e.g., `claude-3-5-sonnet`) to precise, versioned model IDs.
-* **How it's made**: Generated by `rake aliases:generate` using the current `models.json`. Run this task *after* `models.json` is updated.
-
-## Running Tests
-
-Tests are crucial. We use RSpec and VCR.
+Thanks for considering contributing! Here's what you need to know.
 
-```bash
-# Run all tests (uses existing VCR cassettes)
-bundle exec rspec
-
-# Run a specific test file
-bundle exec rspec spec/ruby_llm/chat_spec.rb
+## Philosophy & Scope
 
-# To re-record a specific test's cassette, first remove its .yml file:
-rm spec/fixtures/vcr_cassettes/chat_vision_models_*_can_understand_local_images.yml # Adjust file name as needed
-# Then run the specific test or test file that uses this cassette.
+RubyLLM does one thing well: **LLM communication in Ruby**.
 
-# Run a specific test by its description string (or part of it)
-bundle exec rspec -e "can understand local images"
-```
+### ✅ We Want
+- LLM provider integrations and new provider features
+- Convenience that benefits most users (Rails generators, etc.)
+- Performance and API consistency improvements
 
-### Testing Philosophy & VCR
+### ❌ We Don't Want
+- Application architecture (testing, persistence, error tracking)
+- One-off solutions you can build in your app
+- Auxiliary features unrelated to LLM communication
 
-* New tests should generally be **end-to-end** to verify integration with actual provider APIs (via VCR).
-* Keep tests **minimal and focused**. We don't need to test every single model variant for every feature if the underlying API mechanism is the same. One or two representative models per provider for a given feature is usually sufficient.
-* **API Call Costs**: VCR cassettes are used to avoid hitting live APIs on every test run. However, recording these cassettes costs real money for API calls. Please be mindful of this when adding tests that would require new recordings. If you're adding extensive tests that significantly increase API usage for VCR recording, consider [sponsoring the project on GitHub](https://github.com/sponsors/crmne) to help offset these costs.
+### Requests We'll Close
+- **RAG support** → Use dedicated libraries
+- **Prompt templates** → Use ERB/Mustache in your app
+- **Model data fixes** → File with [Parsera](https://github.com/parsera-labs/api-llm-specs/issues)
+- **Auto-failover** → Use `.with_model()` (works mid-conversation, even across providers)
+- **Tool interface changes** → Handle in your tool's initializer
+- **Testing helpers** → Use dependency injection
 
-### Recording VCR Cassettes
+**The rule:** If you can solve it in application code, you should.
 
-If your changes affect API interactions, you'll need to re-record the VCR cassettes.
+## Response Times & Paid Work
 
-To re-record cassettes for specific providers (e.g., OpenAI and Anthropic):
-
-```bash
-# Set necessary API keys as environment variables
-export OPENAI_API_KEY="your_openai_key"
-export ANTHROPIC_API_KEY="your_anthropic_key"
+This is unpaid work I do between other priorities. I respond when I can.
 
-# Run the rake task, specifying providers
-bundle exec rake vcr:record[openai,anthropic]
-```
+**Need something fast?** Email **carmine@paolino.work** for paid development. $200/hour, 10-hour minimum ($2000).
 
-To re-record all cassettes (requires all relevant API keys to be set):
+## Quick Start
 
 ```bash
-bundle exec rake vcr:record[all]
+gh repo fork crmne/ruby_llm --clone && cd ruby_llm
+bundle install
+overcommit --install
+gh issue develop 123 --checkout # or create your own branch
+# make changes, add tests
+gh pr create --web
 ```
 
-The rake task will delete the relevant existing cassettes and re-run the tests to record fresh interactions.
+## Essential Rules
 
-**CRITICAL**: After recording new or updated VCR cassettes, **manually inspect the YAML files in `spec/fixtures/vcr_cassettes/`**. Ensure that no sensitive information (API keys, personal data, etc.) has accidentally been recorded. The VCR configuration has filters for common keys, but diligence is required.
+1. **Run `overcommit --install` before doing anything** - it auto-fixes style, runs tests, and updates model files on commit
+2. **Don't edit `models.json` or `aliases.json`** - overcommit regenerates them automatically
+3. **Write clear PR descriptions** - explain what and why
 
-## Coding Style
+The git hooks handle style, tests, and file generation for you. No excuses for broken commits.
 
-We follow the [Standard Ruby](https://github.com/testdouble/standard) style guide.
+## Testing
 
-```bash
-# Check your code style
-bundle exec rubocop
+Run tests: `bundle exec rspec`
 
-# Auto-fix style issues where possible
-bundle exec rubocop -A
+**Re-recording VCR cassettes?** Set API keys and run:
+```bash
+bundle exec rake vcr:record[openai,anthropic] # specific providers
+bundle exec rake vcr:record[all] # everything
 ```
 
-The Overcommit pre-commit hook should help enforce this.
-
-## Documentation
-
-If you add new features or change existing behavior, please update the documentation:
+**Then inspect the YAML files** - make sure no API keys leaked.
 
-* Update relevant guides in the `docs/guides/` directory.
-* Ensure the `README.md` remains a concise and helpful entry point for new users.
+## Model Registry
 
-## Release Process
+**Don't touch these files directly:**
+- `models.json` - auto-generated from provider APIs + [Parsera](https://api.parsera.org/v1/llm-specs)
+- `aliases.json` - auto-generated from models.json
 
-Gem versioning follows [Semantic Versioning (SemVer)](https://semver.org/):
+**To update model info:**
+- Public model issues → File with [Parsera](https://github.com/parsera-labs/api-llm-specs/issues)
 
-1. **MAJOR** version for incompatible API changes.
-2. **MINOR** version for adding functionality in a backward-compatible manner.
-3. **PATCH** version for backward-compatible bug fixes.
+## Support the Project
 
-Releases are handled by the maintainers through the CI/CD pipeline.
+Consider [sponsoring RubyLLM](https://github.com/sponsors/crmne) to help with ongoing costs. Sponsorship supports general maintenance - for priority features, use paid development above.
 
 ---
 
-Thanks for contributing to RubyLLM,
+That's it. Let's make Ruby the best AI development experience possible.
 
-Carmine
+Carmine
````
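The new guide's answer to "testing helpers" is dependency injection: rather than the library shipping test hooks, your own classes accept any chat-like collaborator. A minimal sketch with hypothetical names (`Summarizer` and `FakeChat` are not part of RubyLLM's API); in production code you would pass in a real RubyLLM chat object instead of the fake:

```ruby
# A hypothetical application class that accepts any chat-like object.
# In production you would inject a real RubyLLM chat; in tests, a fake.
class Summarizer
  def initialize(chat:)
    @chat = chat
  end

  def summarize(text)
    @chat.ask("Summarize in one sentence: #{text}")
  end
end

# Test double standing in for the real chat client - no API calls, no VCR.
class FakeChat
  attr_reader :prompts

  def initialize(reply)
    @reply = reply
    @prompts = []
  end

  def ask(prompt)
    @prompts << prompt  # record what was sent so specs can assert on it
    @reply
  end
end

fake = FakeChat.new("A short summary.")
result = Summarizer.new(chat: fake).summarize("Long article text...")
puts result # => A short summary.
```

Because the fake records every prompt it receives, specs can assert on both the return value and what was sent, which is exactly the pattern the guide suggests instead of library-provided testing helpers.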
