A secure Python SDK for HB AI service with optional TEE (Trusted Execution Environment) attestation support.
- 🤖 AI Inference - Send messages to HB AI service and get responses
- 🔒 TEE Attestation - Optional GPU and CPU attestation for enhanced security
- 🛡️ Verification - Verify attestations to ensure environment integrity
- 💬 Session Management - Support for conversation continuity
- 🔄 Async Support - Full async/await support for efficient operations
- 🎯 Tinfoil-inspired API - Familiar API design for easy adoption
```bash
cd hb-ai-python
uv sync
```
```python
import asyncio

from hb_ai import HBClient, TEEConfig

async def main():
    # Initialize client
    client = HBClient(
        endpoint="http://localhost:8734",
        tee_config=TEEConfig(enabled=True)
    )

    async with client:
        # Send a message
        response = await client.chat("Hello, how are you?")
        print(f"AI: {response.content}")

        # Check attestations
        attestations = await client.get_attestations()
        print(f"Generated {len(attestations)} attestations")

        # Verify attestations
        results = await client.verify_latest_attestations()
        for result in results:
            status = "✓ VERIFIED" if result.verified else "✗ FAILED"
            print(f"{result.attestation_type.upper()}: {status}")

if __name__ == "__main__":
    asyncio.run(main())
```
```python
async def session_example():
    client = HBClient(endpoint="http://localhost:8734")

    async with client:
        # Start a session
        session = client.start_session(model_id="ISrbGzQot05rs_HKC08O_SmkipYQnqgB1yC3mjZZeEo")

        # Have a conversation
        response1 = await client.chat("What is Python?", session_id=session.session_id)
        response2 = await client.chat("Give me an example", session_id=session.session_id)

        # The session maintains conversation context
        print(f"Session has {len(session.messages)} messages")

        # End the session when the conversation is done
        client.end_session()
```
```python
async def attestation_example():
    client = HBClient(
        endpoint="http://localhost:8734",
        tee_config=TEEConfig(
            enabled=True,
            gpu_attestation=True,
            cpu_attestation=True
        )
    )

    async with client:
        # Send message (automatically generates attestations)
        response = await client.chat("Explain quantum computing")

        # Parse attestation details
        attestations = await client.get_attestations()
        for attestation in attestations:
            if attestation.type == "gpu":
                parsed = client.parse_gpu_attestation(attestation)
                print(f"GPU: {parsed.get('hardware_model', 'N/A')}")
                print(f"Driver: {parsed.get('driver_version', 'N/A')}")
            elif attestation.type == "cpu":
                parsed = client.parse_cpu_attestation(attestation)
                print(f"CPU Version: {parsed.get('version', 'N/A')}")
                print(f"Measurement: {parsed.get('measurement', 'N/A')}")
```
```python
from hb_ai import TEEConfig

# Default configuration
config = TEEConfig(
    enabled=True,           # Enable TEE attestation
    auto_verify=True,       # Auto-verify attestations
    gpu_attestation=True,   # Enable GPU attestation
    cpu_attestation=True,   # Enable CPU attestation
    timeout=30              # Request timeout in seconds
)
```
```python
from hb_ai import HBClient, TEEConfig

client = HBClient(
    endpoint="http://localhost:8734",
    tee_config=TEEConfig(enabled=True),
    timeout=30,
    # Additional httpx client options
    follow_redirects=True,
    verify=True
)
```
The SDK supports various AI models:
- Phi-3 Mini 4k Instruct: `ISrbGzQot05rs_HKC08O_SmkipYQnqgB1yC3mjZZeEo`
- CodeQwen 1.5 7B Chat q3: `sKqjvBbhqKvgzZT4ojP1FNvt4r_30cqjuIIQIr-3088`
- Llama3 8B Instruct q4: `Pr2YVrxd7VwNdg6ekC0NXWNKXxJbfTlHhhlrKbAd1dA`
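Because the raw IDs are long and easy to mistype, one convenient pattern is to keep a small lookup table and resolve names to IDs at the call site. The sketch below uses the IDs from the list above; the shorthand names are local conveniences, not identifiers the SDK defines:

```python
# Model IDs copied from the list above; the short names are local shorthand,
# not names the SDK itself knows about.
MODELS = {
    "phi-3-mini-4k-instruct": "ISrbGzQot05rs_HKC08O_SmkipYQnqgB1yC3mjZZeEo",
    "codeqwen-1.5-7b-chat-q3": "sKqjvBbhqKvgzZT4ojP1FNvt4r_30cqjuIIQIr-3088",
    "llama3-8b-instruct-q4": "Pr2YVrxd7VwNdg6ekC0NXWNKXxJbfTlHhhlrKbAd1dA",
}

def model_id(name: str) -> str:
    """Resolve a shorthand name to its model ID, failing loudly on typos."""
    try:
        return MODELS[name]
    except KeyError:
        raise ValueError(f"Unknown model {name!r}; choose from {sorted(MODELS)}")

print(model_id("llama3-8b-instruct-q4"))
```

The resolved ID can then be passed as the `model_id=` argument to `client.chat(...)` or `client.start_session(...)`.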
The SDK includes a CLI for easy testing:
```bash
# Basic chat
python examples/cli.py chat "Hello, how are you?"

# Interactive session
python examples/cli.py interactive

# With a specific model
python examples/cli.py chat "Write a Python function" --model-id ISrbGzQot05rs_HKC08O_SmkipYQnqgB1yC3mjZZeEo

# Generate and display attestations
python examples/cli.py attestation "Explain AI safety"

# List available models
python examples/cli.py models
```
The main client class for interacting with the HB AI service.

Methods:
- `async chat(message, model_id=None, session_id=None, include_attestation=None)` - Send a chat message
- `async get_attestations()` - Get the latest attestations
- `async verify_attestation(attestation_data)` - Verify a specific attestation
- `async verify_latest_attestations()` - Verify all latest attestations
- `start_session(model_id=None)` - Start a new chat session
- `get_session()` - Get the current session
- `end_session()` - End the current session

Properties:
- `endpoint_url` - Service endpoint URL
- `is_tee_enabled` - Whether TEE attestation is enabled
Configuration for TEE attestation.

- `enabled: bool = True` - Enable TEE attestation
- `auto_verify: bool = True` - Auto-verify attestations
- `gpu_attestation: bool = True` - Enable GPU attestation
- `cpu_attestation: bool = True` - Enable CPU attestation
- `timeout: int = 30` - Request timeout in seconds
Response from AI inference.

- `content: str` - Response content
- `session_id: Optional[str]` - Session ID
- `model_id: Optional[str]` - Model ID used
- `timestamp: datetime` - Response timestamp
- `metadata: Dict[str, Any]` - Additional metadata
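These fields can be consumed like those of any dataclass. The snippet below uses a local stand-in with the same shape as the documented fields (it is not the SDK class) to show a typical pattern: rendering a response as a one-line transcript entry.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any, Dict, Optional

@dataclass
class ChatResponse:  # local stand-in mirroring the documented fields, for illustration
    content: str
    session_id: Optional[str] = None
    model_id: Optional[str] = None
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    metadata: Dict[str, Any] = field(default_factory=dict)

def format_line(r: ChatResponse) -> str:
    """Render a one-line transcript entry: time, model (if known), content."""
    model = r.model_id or "unknown-model"
    return f"[{r.timestamp:%H:%M:%S}] {model}: {r.content}"

r = ChatResponse(content="Hello!", model_id="phi-3-mini")
print(format_line(r))
```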
Raw attestation data from the TEE.

- `type: str` - Attestation type (`"gpu"` or `"cpu"`)
- `raw_data: Any` - Raw attestation data
- `nonce: Optional[str]` - Nonce used (GPU only)
- `timestamp: datetime` - Generation timestamp
Result of attestation verification.

- `verified: bool` - Verification status
- `attestation_type: str` - Attestation type
- `details: Dict[str, Any]` - Verification details
- `timestamp: datetime` - Verification timestamp
- `error: Optional[str]` - Error message if verification failed
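A common follow-up to `verify_latest_attestations()` is to reduce the result list to a pass/fail summary. The sketch below again uses a local stand-in with the documented field shape (not the SDK class) and a hypothetical `summarize` helper:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any, Dict, Optional

@dataclass
class VerificationResult:  # local stand-in mirroring the documented fields
    verified: bool
    attestation_type: str
    details: Dict[str, Any] = field(default_factory=dict)
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    error: Optional[str] = None

def summarize(results):
    """Count verified results and collect error messages for the failed ones."""
    failed = [r for r in results if not r.verified]
    return {
        "total": len(results),
        "verified": len(results) - len(failed),
        "failures": {r.attestation_type: r.error for r in failed},
    }

results = [
    VerificationResult(verified=True, attestation_type="gpu"),
    VerificationResult(verified=False, attestation_type="cpu", error="measurement mismatch"),
]
print(summarize(results))
```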
See the `examples/` directory for complete examples:

- `basic_usage.py` - Basic usage patterns
- `advanced_usage.py` - Advanced features and error handling
- `cli.py` - Command line interface
Run the test suite:

```bash
uv run pytest tests/
```
```bash
# Clone and setup
git clone <repository>
cd hb-ai-python
uv sync --all-extras

# Install development dependencies
uv sync --group dev

# Format code
uv run black src/ tests/ examples/

# Lint code
uv run ruff check src/ tests/ examples/

# Type checking
uv run mypy src/
```
This SDK is inspired by Tinfoil's design but adapted for the HB AI service.

Similarities:
- Async/await support
- Similar client initialization patterns
- TEE attestation integration
- Verification capabilities

Key differences:
- No API key: HB AI uses endpoint-only authentication
- Optional TEE: TEE attestation is opt-in and not coupled to the TLS connection
- Session management: Built-in conversation session support
- Multiple models: Support for different AI models
- Flexible attestation: Both GPU and CPU attestation options
```python
# Tinfoil style
from tinfoil import SecureClient
client = SecureClient(enclave="inference.tinfoil.sh")

# HB AI style
from hb_ai import HBClient, TEEConfig
client = HBClient(
    endpoint="http://localhost:8734",
    tee_config=TEEConfig(enabled=True)
)
```
1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Add tests
5. Run quality checks
6. Submit a pull request
MIT License - see LICENSE file for details.
For issues and questions:
- Check the examples directory
- Review the test cases
- Open an issue on GitHub