
HB AI Python SDK

A secure Python SDK for the HB AI service, with optional TEE (Trusted Execution Environment) attestation support.

Features

  • 🤖 AI Inference - Send messages to the HB AI service and get responses
  • 🔒 TEE Attestation - Optional GPU and CPU attestation for enhanced security
  • 🛡️ Verification - Verify attestations to ensure environment integrity
  • 💬 Session Management - Support for conversation continuity
  • 🔄 Async Support - Full async/await support for efficient operations
  • 🎯 Tinfoil-inspired API - Familiar API design for easy adoption

Installation

cd hb-ai-python
uv sync

Quick Start

Basic Usage

import asyncio
from hb_ai import HBClient, TEEConfig

async def main():
    # Initialize client
    client = HBClient(
        endpoint="http://localhost:8734",
        tee_config=TEEConfig(enabled=True)
    )
    
    async with client:
        # Send a message
        response = await client.chat("Hello, how are you?")
        print(f"AI: {response.content}")
        
        # Check attestations
        attestations = await client.get_attestations()
        print(f"Generated {len(attestations)} attestations")
        
        # Verify attestations
        results = await client.verify_latest_attestations()
        for result in results:
            status = "✓ VERIFIED" if result.verified else "✗ FAILED"
            print(f"{result.attestation_type.upper()}: {status}")

if __name__ == "__main__":
    asyncio.run(main())

Session Management

async def session_example():
    client = HBClient(endpoint="http://localhost:8734")
    
    async with client:
        # Start a session
        session = client.start_session(model_id="ISrbGzQot05rs_HKC08O_SmkipYQnqgB1yC3mjZZeEo")
        
        # Have a conversation
        response1 = await client.chat("What is Python?", session_id=session.session_id)
        response2 = await client.chat("Give me an example", session_id=session.session_id)
        
        # Session maintains conversation context
        print(f"Session has {len(session.messages)} messages")

TEE Attestation

async def attestation_example():
    client = HBClient(
        endpoint="http://localhost:8734",
        tee_config=TEEConfig(
            enabled=True,
            gpu_attestation=True,
            cpu_attestation=True
        )
    )
    
    async with client:
        # Send message (automatically generates attestations)
        response = await client.chat("Explain quantum computing")
        
        # Parse attestation details
        attestations = await client.get_attestations()
        for attestation in attestations:
            if attestation.type == "gpu":
                parsed = client.parse_gpu_attestation(attestation)
                print(f"GPU: {parsed.get('hardware_model', 'N/A')}")
                print(f"Driver: {parsed.get('driver_version', 'N/A')}")
            elif attestation.type == "cpu":
                parsed = client.parse_cpu_attestation(attestation)
                print(f"CPU Version: {parsed.get('version', 'N/A')}")
                print(f"Measurement: {parsed.get('measurement', 'N/A')}")

Configuration

TEE Configuration

from hb_ai import TEEConfig

# Default configuration
config = TEEConfig(
    enabled=True,          # Enable TEE attestation
    auto_verify=True,      # Auto-verify attestations
    gpu_attestation=True,  # Enable GPU attestation
    cpu_attestation=True,  # Enable CPU attestation
    timeout=30            # Request timeout in seconds
)

Client Configuration

from hb_ai import HBClient

client = HBClient(
    endpoint="http://localhost:8734",
    tee_config=TEEConfig(enabled=True),
    timeout=30,
    # Additional httpx client options
    follow_redirects=True,
    verify=True
)

Available Models

The SDK supports various AI models:

  • Phi-3 Mini 4k Instruct: ISrbGzQot05rs_HKC08O_SmkipYQnqgB1yC3mjZZeEo
  • CodeQwen 1.5 7B Chat q3: sKqjvBbhqKvgzZT4ojP1FNvt4r_30cqjuIIQIr-3088
  • Llama3 8B Instruct q4: Pr2YVrxd7VwNdg6ekC0NXWNKXxJbfTlHhhlrKbAd1dA
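
The IDs above can be passed per request. The sketch below wraps this in a small helper; the MODELS mapping and ask() function are illustrative conveniences, not part of the SDK:

```python
# Model IDs from the list above; MODELS and ask() are illustrative
# conveniences, not part of the SDK itself.
MODELS = {
    "phi3-mini": "ISrbGzQot05rs_HKC08O_SmkipYQnqgB1yC3mjZZeEo",
    "codeqwen-7b": "sKqjvBbhqKvgzZT4ojP1FNvt4r_30cqjuIIQIr-3088",
    "llama3-8b": "Pr2YVrxd7VwNdg6ekC0NXWNKXxJbfTlHhhlrKbAd1dA",
}

async def ask(client, prompt, model="phi3-mini"):
    # Select the model per request via the chat() method's model_id parameter
    return await client.chat(prompt, model_id=MODELS[model])
```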

Command Line Interface

The SDK includes a CLI for easy testing:

# Basic chat
python examples/cli.py chat "Hello, how are you?"

# Interactive session
python examples/cli.py interactive

# With specific model
python examples/cli.py chat "Write a Python function" --model-id ISrbGzQot05rs_HKC08O_SmkipYQnqgB1yC3mjZZeEo

# Generate and display attestations
python examples/cli.py attestation "Explain AI safety"

# List available models
python examples/cli.py models

API Reference

HBClient

The main client class for interacting with the HB AI service.

Methods

  • async chat(message, model_id=None, session_id=None, include_attestation=None) - Send chat message
  • async get_attestations() - Get latest attestations
  • async verify_attestation(attestation_data) - Verify specific attestation
  • async verify_latest_attestations() - Verify all latest attestations
  • start_session(model_id=None) - Start new chat session
  • get_session() - Get current session
  • end_session() - End current session

Properties

  • endpoint_url - Service endpoint URL
  • is_tee_enabled - TEE attestation status

TEEConfig

Configuration for TEE attestation.

Parameters

  • enabled: bool = True - Enable TEE attestation
  • auto_verify: bool = True - Auto-verify attestations
  • gpu_attestation: bool = True - Enable GPU attestation
  • cpu_attestation: bool = True - Enable CPU attestation
  • timeout: int = 30 - Request timeout

Models

AIResponse

Response from AI inference.

  • content: str - Response content
  • session_id: Optional[str] - Session ID
  • model_id: Optional[str] - Model ID used
  • timestamp: datetime - Response timestamp
  • metadata: Dict[str, Any] - Additional metadata

AttestationData

Raw attestation data from TEE.

  • type: str - Attestation type ("gpu" or "cpu")
  • raw_data: Any - Raw attestation data
  • nonce: Optional[str] - Nonce used (GPU only)
  • timestamp: datetime - Generation timestamp

AttestationResult

Result of attestation verification.

  • verified: bool - Verification status
  • attestation_type: str - Attestation type
  • details: Dict[str, Any] - Verification details
  • timestamp: datetime - Verification timestamp
  • error: Optional[str] - Error message if failed
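
A typical pattern is to surface the error field whenever verified is false. The dataclass below is a simplified stand-in mirroring the fields documented above, for illustration only; it is not the SDK's own class:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Any, Dict, List, Optional

# Simplified stand-in mirroring the documented fields (illustration only)
@dataclass
class AttestationResult:
    verified: bool
    attestation_type: str
    details: Dict[str, Any] = field(default_factory=dict)
    timestamp: datetime = field(default_factory=datetime.now)
    error: Optional[str] = None

def report(results: List[AttestationResult]) -> List[str]:
    """Summarize verification results, surfacing the error for failures."""
    lines = []
    for r in results:
        if r.verified:
            lines.append(f"{r.attestation_type.upper()}: VERIFIED")
        else:
            lines.append(f"{r.attestation_type.upper()}: FAILED ({r.error or 'unknown error'})")
    return lines

print(report([
    AttestationResult(verified=True, attestation_type="gpu"),
    AttestationResult(verified=False, attestation_type="cpu", error="measurement mismatch"),
]))
```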

Examples

See the examples/ directory for complete examples:

  • basic_usage.py - Basic usage patterns
  • advanced_usage.py - Advanced features and error handling
  • cli.py - Command line interface

Testing

Run the test suite:

uv run pytest tests/

Development

Setup

# Clone and setup
git clone <repository>
cd hb-ai-python
uv sync --all-extras

# Install development dependencies
uv sync --group dev

Code Quality

# Format code
uv run black src/ tests/ examples/

# Lint code
uv run ruff check src/ tests/ examples/

# Type checking
uv run mypy src/

Comparison with Tinfoil SDK

This SDK is inspired by Tinfoil's design but adapted for HB AI service:

Similarities

  • Async/await support
  • Similar client initialization patterns
  • TEE attestation integration
  • Verification capabilities

Differences

  • No API Key: HB AI uses endpoint-only authentication
  • Optional TEE: TEE attestation is opt-in rather than required to establish the connection
  • Session Management: Built-in conversation session support
  • Multiple Models: Support for different AI models
  • Flexible Attestation: Both GPU and CPU attestation options

Migration from Tinfoil

# Tinfoil style
from tinfoil import SecureClient
client = SecureClient(enclave="inference.tinfoil.sh")

# HB AI style
from hb_ai import HBClient, TEEConfig
client = HBClient(
    endpoint="http://localhost:8734",
    tee_config=TEEConfig(enabled=True)
)

Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Add tests
  5. Run quality checks
  6. Submit a pull request

License

MIT License - see LICENSE file for details.

Support

For issues and questions:

  • Check the examples directory
  • Review the test cases
  • Open an issue on GitHub
