ferrous-llm


A unified cross-platform Rust library for interacting with multiple Large Language Model providers. Ferrous-LLM provides a modular, type-safe, and performant abstraction layer that allows developers to easily switch between different LLM providers while maintaining consistent APIs.

🚀 Features

  • Multi-Provider Support: Unified interface for OpenAI, Anthropic, and Ollama providers
  • Modular Architecture: Separate crates for core functionality and each provider
  • Type Safety: Leverages Rust's type system for safe LLM interactions
  • Streaming Support: Real-time streaming capabilities for chat completions
  • Memory Management: Dedicated memory crate for conversation context handling
  • Async/Await: Full async support with tokio runtime
  • Comprehensive Examples: Working examples for all supported providers
  • Extensible Design: Easy to add new providers and capabilities

📦 Installation

Add ferrous-llm to your Cargo.toml:

[dependencies]
ferrous-llm = "0.4.1"

Feature Flags

By default, no providers are enabled. Enable the providers you need:

[dependencies]
ferrous-llm = { version = "0.4.1", features = ["openai", "anthropic", "ollama"] }

Available features:

  • openai - OpenAI provider support
  • anthropic - Anthropic Claude provider support
  • ollama - Ollama local model provider support
  • full - All providers (equivalent to enabling all individual features)
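
For example, enabling every provider at once via the full feature:

[dependencies]
ferrous-llm = { version = "0.4.1", features = ["full"] }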

🏗️ Architecture

Ferrous-LLM is organized as a workspace with the following crates:

  • ferrous-llm - the main crate, re-exporting each provider behind a feature flag
  • ferrous-llm-core - shared traits, request/response types, and error handling
  • ferrous-llm-openai - OpenAI provider implementation
  • ferrous-llm-anthropic - Anthropic provider implementation
  • ferrous-llm-ollama - Ollama provider implementation
  • ferrous-llm-memory - conversation context and memory handling

🔧 Quick Start

Basic Chat Example

use ferrous_llm::{
    ChatProvider, ChatRequest, Message, MessageContent,
    Parameters, Role, Metadata,
    openai::{OpenAIConfig, OpenAIProvider},
};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Load configuration from environment
    let config = OpenAIConfig::from_env()?;
    let provider = OpenAIProvider::new(config)?;

    // Create a chat request
    let request = ChatRequest {
        messages: vec![
            Message {
                role: Role::User,
                content: MessageContent::Text("Hello! Explain Rust in one sentence.".to_string()),
                name: None,
                tool_calls: None,
                tool_call_id: None,
                created_at: chrono::Utc::now(),
            }
        ],
        parameters: Parameters {
            temperature: Some(0.7),
            max_tokens: Some(100),
            ..Default::default()
        },
        metadata: Metadata::default(),
    };

    // Send the request
    let response = provider.chat(request).await?;
    println!("Response: {}", response.content());

    Ok(())
}

Streaming Chat Example

use ferrous_llm::{
    StreamingProvider, ChatRequest, Message, MessageContent, Role,
    anthropic::{AnthropicConfig, AnthropicProvider},
};
use futures::StreamExt;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let config = AnthropicConfig::from_env()?;
    let provider = AnthropicProvider::new(config)?;

    let request = ChatRequest {
        messages: vec![
            Message {
                role: Role::User,
                content: MessageContent::Text("Tell me a story".to_string()),
                name: None,
                tool_calls: None,
                tool_call_id: None,
                created_at: chrono::Utc::now(),
            }
        ],
        parameters: Default::default(),
        metadata: Default::default(),
    };

    let mut stream = provider.chat_stream(request).await?;

    while let Some(chunk) = stream.next().await {
        match chunk {
            Ok(data) => print!("{}", data.content()),
            Err(e) => eprintln!("Stream error: {}", e),
        }
    }

    Ok(())
}

🔌 Supported Providers

OpenAI

use ferrous_llm::openai::{OpenAIConfig, OpenAIProvider};

// From environment variables
let config = OpenAIConfig::from_env()?;

// Or configure manually
let config = OpenAIConfig {
    api_key: "your-api-key".to_string(),
    model: "gpt-4".to_string(),
    base_url: "https://api.openai.com/v1".to_string(),
    timeout: std::time::Duration::from_secs(30),
};

let provider = OpenAIProvider::new(config)?;

Environment Variables:

  • OPENAI_API_KEY - Your OpenAI API key (required)
  • OPENAI_MODEL - Model to use (default: "gpt-3.5-turbo")
  • OPENAI_BASE_URL - API base URL (default: "https://api.openai.com/v1")
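
For example, a shell setup matching these variables (the values below are placeholders):

export OPENAI_API_KEY="sk-your-key"                   # required
export OPENAI_MODEL="gpt-4"                           # optional
export OPENAI_BASE_URL="https://api.openai.com/v1"    # optional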

Anthropic

use ferrous_llm::anthropic::{AnthropicConfig, AnthropicProvider};

let config = AnthropicConfig::from_env()?;
let provider = AnthropicProvider::new(config)?;

Environment Variables:

  • ANTHROPIC_API_KEY - Your Anthropic API key (required)
  • ANTHROPIC_MODEL - Model to use (default: "claude-3-sonnet-20240229")
  • ANTHROPIC_BASE_URL - API base URL (default: "https://api.anthropic.com")

Ollama

use ferrous_llm::ollama::{OllamaConfig, OllamaProvider};

let config = OllamaConfig::from_env()?;
let provider = OllamaProvider::new(config)?;

Environment Variables:

  • OLLAMA_MODEL - Model to use (default: "llama2")
  • OLLAMA_BASE_URL - Ollama server URL (default: "http://localhost:11434")
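
Ollama needs no API key. Assuming a local Ollama installation, a setup matching these defaults might look like:

# Pull a model and make sure the local server is running
ollama pull llama2
ollama serve

# Optional overrides
export OLLAMA_MODEL="llama2"
export OLLAMA_BASE_URL="http://localhost:11434"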

🎯 Core Traits

Ferrous-LLM follows the Interface Segregation Principle with focused traits:

ChatProvider

Core chat functionality that most LLM providers support.

#[async_trait]
pub trait ChatProvider: Send + Sync {
    type Config: ProviderConfig;
    type Response: ChatResponse;
    type Error: ProviderError;

    async fn chat(&self, request: ChatRequest) -> Result<Self::Response, Self::Error>;
}

StreamingProvider

Extends ChatProvider with streaming capabilities.

#[async_trait]
pub trait StreamingProvider: ChatProvider {
    type StreamItem: Send + 'static;
    type Stream: Stream<Item = Result<Self::StreamItem, Self::Error>> + Send + 'static;

    async fn chat_stream(&self, request: ChatRequest) -> Result<Self::Stream, Self::Error>;
}
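
Because all providers share these traits, application code can stay provider-agnostic. A minimal sketch, assuming only the trait bounds shown above:

use ferrous_llm::{ChatProvider, ChatRequest};

// Generic helper: works with any provider (OpenAI, Anthropic, Ollama, or your own).
async fn ask<P: ChatProvider>(provider: &P, request: ChatRequest) -> Result<P::Response, P::Error> {
    provider.chat(request).await
}

Swapping OpenAIProvider for AnthropicProvider then only changes how the provider value is constructed, not the calling code.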

Additional Traits

📚 Examples

The examples/ directory contains comprehensive examples for each supported provider.

Run examples with:

# Set up environment variables
export OPENAI_API_KEY="your-key"
export ANTHROPIC_API_KEY="your-key"

# Run specific examples
cargo run --example openai_chat --features openai
cargo run --example anthropic_chat_streaming --features anthropic
cargo run --example ollama_chat --features ollama

🧪 Testing

Run tests for all crates:

# Run all tests
cargo test --workspace

# Run tests for specific provider
cargo test -p ferrous-llm-openai
cargo test -p ferrous-llm-anthropic
cargo test -p ferrous-llm-ollama

# Run integration tests
cargo test --test integration_tests

🀝 Contributing

We welcome contributions! Please see our Contributing Guidelines for details.

Development Setup

  1. Clone the repository:

    git clone https://github.com/your-username/ferrous-llm.git
    cd ferrous-llm
  2. Install Rust (if not already installed):

    curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
  3. Set up environment variables:

    cp .env.example .env
    # Edit .env with your API keys
  4. Run tests:

    cargo test --workspace

Adding a New Provider

  1. Create a new crate in crates/ferrous-llm-{provider}/
  2. Implement the required traits from ferrous-llm-core (a rough skeleton is sketched after this list)
  3. Add integration tests
  4. Update the main crate's feature flags
  5. Add examples demonstrating usage
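
A rough skeleton for step 2, purely illustrative (MyConfig, MyResponse, and MyError are hypothetical types that would need to implement the corresponding core traits):

use async_trait::async_trait;
use ferrous_llm_core::{ChatProvider, ChatRequest};

pub struct MyProvider {
    config: MyConfig, // hypothetical config type implementing ProviderConfig
}

#[async_trait]
impl ChatProvider for MyProvider {
    type Config = MyConfig;
    type Response = MyResponse; // hypothetical type implementing ChatResponse
    type Error = MyError;       // hypothetical type implementing ProviderError

    async fn chat(&self, request: ChatRequest) -> Result<Self::Response, Self::Error> {
        // Translate `request` into the provider's wire format, call its API,
        // and map the reply back into Self::Response.
        todo!()
    }
}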

📄 License

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.

🔗 Links

🙏 Acknowledgments

  • The Rust community for excellent async and HTTP libraries
  • OpenAI, Anthropic, and Ollama for their APIs and documentation
  • Contributors and users of this library

Note: This library is in active development. APIs may change before the 1.0 release.
