# seekgpt

`seekgpt` is a Python library that provides a simple, unified interface for interacting with various Large Language Model (LLM) APIs, focusing on compatibility with the OpenAI API standard. It defaults to connecting to `https://api.seekgpt.org/v1` but can easily be configured for other services.
## Features

- Connect to the default SeekGPT API (`https://api.seekgpt.org/v1`).
- Connect to any OpenAI-compatible API endpoint (OpenAI, Ollama, vLLM, Anyscale, Together AI, etc.) using `SeekGPT`.
- Simple `chat` interface following the `chat/completions` standard.
- Support for streaming responses.
- Handles API key authentication (Bearer token).
- Basic error handling and custom exceptions.
- Configurable API base URL, API key, default model, and timeout.
- Uses environment variables for configuration (`SEEKGPT_API_KEY`, `SEEKGPT_API_BASE`, etc.).
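The Bearer-token authentication mentioned above boils down to sending the API key in an `Authorization` header on every request. A minimal sketch of what that looks like (`build_headers` is a hypothetical illustration, not part of seekgpt's public API):

```python
def build_headers(api_key: str) -> dict:
    # The API key travels as a standard HTTP Bearer token,
    # alongside a JSON content type for chat/completions requests.
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }

print(build_headers("sk-test")["Authorization"])  # Bearer sk-test
```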
## Installation

```bash
pip install seekgpt
```
## Usage

### SeekGPT API (Default)

Make sure your SeekGPT API key is set as an environment variable:

```bash
export SEEKGPT_API_KEY="your-seekgpt-api-key"
# Optional: set a default model
# export SEEKGPT_DEFAULT_MODEL="seekgpt-model-name"
```

Then use the `SeekGPT` client:
```python
from seekgpt import SeekGPT

# The client automatically picks up SEEKGPT_API_KEY from the environment.
# You can also pass it directly: SeekGPT(api_key="your-key")
client = SeekGPT(api_base="https://api.seekgpt.org/v1")

try:
    response = client.chat(
        model="SeekGPT-mini",  # Or rely on the SEEKGPT_DEFAULT_MODEL env var
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "What is the capital of France?"}
        ]
    )
    print(response['choices'][0]['message']['content'])

    # Streaming example
    stream = client.chat(
        model="SeekGPT-mini",
        messages=[{"role": "user", "content": "Tell me a short story."}],
        stream=True
    )
    print("\nStreaming response:")
    for chunk in stream:
        # Each chunk is a raw SSE line; parsing it may require additional
        # logic depending on the exact format.
        print(chunk, end="")
    print()
except Exception as e:
    print(f"An error occurred: {e}")
```
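If the stream yields OpenAI-style SSE lines (`data: {...}` payloads ending with a `data: [DONE]` sentinel), decoding them into text deltas can look like the sketch below. `sse_delta` is a hypothetical helper, not part of seekgpt; the exact wire format of your endpoint may differ.

```python
import json

def sse_delta(line: str):
    """Extract the text delta from one OpenAI-style SSE line, or None."""
    line = line.strip()
    if not line.startswith("data:"):
        return None  # blank keep-alive lines, comments, etc.
    payload = line[len("data:"):].strip()
    if payload == "[DONE]":
        return None  # end-of-stream sentinel
    chunk = json.loads(payload)
    # Streaming chunks carry partial text under choices[0].delta.content
    return chunk["choices"][0]["delta"].get("content")

line = 'data: {"choices": [{"delta": {"content": "Hi"}}]}'
print(sse_delta(line))  # Hi
```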
### OpenAI

```bash
export OPENAI_API_KEY="your-openai-api-key"
```
```python
from seekgpt import SeekGPT

# SeekGPT falls back to OPENAI_API_KEY if SEEKGPT_API_KEY is not set
client = SeekGPT(
    api_base="https://api.openai.com/v1",
    default_model="gpt-4o"  # Set a default model for this client instance
)

response = client.chat(
    messages=[{"role": "user", "content": "Hello OpenAI!"}]
)
print(response['choices'][0]['message']['content'])
```
### Ollama (Local)

Ensure Ollama is running (usually at `http://localhost:11434`).
```python
from seekgpt import SeekGPT, APIConnectionError

# Ollama typically doesn't require an API key for local access,
# but some libraries expect *something*, so we pass a dummy key.
# The client detects localhost and won't raise an auth error if the key is None.
client = SeekGPT(
    api_base="http://localhost:11434/v1",
    api_key="ollama",       # Dummy key often needed; api_key=None may also work
    default_model="llama3"  # Specify the local model you want to use
)

try:
    response = client.chat(
        messages=[{"role": "user", "content": "Hi Ollama! Write a haiku about code."}]
    )
    print(response['choices'][0]['message']['content'])
except APIConnectionError as e:
    print(f"Could not connect to Ollama. Is the server running? Error: {e}")
except Exception as e:
    print(f"An error occurred: {e}")
```
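Before blaming the client, it can help to confirm the Ollama server is actually listening. A quick standard-library check (hypothetical helper, independent of seekgpt):

```python
import socket

def server_reachable(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, or DNS failure
        return False

if not server_reachable("localhost", 11434):
    print("Ollama does not appear to be running on localhost:11434")
```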
### Other OpenAI-Compatible Providers

Set the appropriate API key environment variable (e.g., `ANYSCALE_API_KEY`, `TOGETHER_API_KEY`) or pass `api_key` directly. Use the `SeekGPT` client and provide the correct `api_base` for the service.
```python
# Example for Anyscale Endpoints
# export ANYSCALE_API_KEY="your-anyscale-key"
from seekgpt import SeekGPT

client = SeekGPT(
    api_base="https://api.endpoints.anyscale.com/v1",
    # SeekGPT tries ANYSCALE_API_KEY if SEEKGPT_API_KEY/OPENAI_API_KEY are not set
    # Or pass explicitly: api_key="your-anyscale-key"
    default_model="mistralai/Mixtral-8x7B-Instruct-v0.1"
)

# ... use client.chat(...) ...
```
## Configuration

The clients can be configured via:

- **Environment variables:**
  - `SEEKGPT_API_KEY`: API key (used by `SeekGPTClient` and `SeekGPT`, highest priority).
  - `SEEKGPT_API_BASE`: API base URL (used by `SeekGPT` if the `api_base` argument is not provided). Defaults to `https://api.seekgpt.org/v1`.
  - `SEEKGPT_DEFAULT_MODEL`: Default model name.
  - `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, etc.: Fallback keys used by `SeekGPT` if `SEEKGPT_API_KEY` is missing.
  - `SEEKGPT_LOGLEVEL`: Logging level (e.g., `DEBUG`, `INFO`, `WARNING`). Default is `WARNING`.
- **Client initialization arguments:** Pass `api_key`, `api_base`, `default_model`, or `timeout` directly when creating `SeekGPTClient` or `SeekGPT`.
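The key-resolution order described above (explicit argument first, then `SEEKGPT_API_KEY`, then provider fallbacks) can be sketched as a small resolver. This is a hypothetical illustration of the precedence, not seekgpt's actual internals:

```python
import os

def resolve_api_key(explicit=None,
                    fallbacks=("OPENAI_API_KEY", "ANTHROPIC_API_KEY")):
    """Pick an API key: explicit argument, then env vars in priority order."""
    if explicit:
        return explicit
    for var in ("SEEKGPT_API_KEY", *fallbacks):
        value = os.environ.get(var)
        if value:
            return value
    return None

os.environ["SEEKGPT_API_KEY"] = "sk-env"  # simulate an exported key
print(resolve_api_key())          # sk-env
print(resolve_api_key("sk-arg"))  # sk-arg (explicit argument wins)
```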
## Services with Different APIs

This library focuses on the OpenAI API standard (`/v1/chat/completions`). For services with significantly different APIs, authentication methods, or response structures, using their official Python SDKs is strongly recommended:

- **Google Generative AI (Gemini):** Use the `google-generativeai` or `google-cloud-aiplatform` libraries.
- **Anthropic (Claude):** Use the `anthropic` library. It requires specific headers (`x-api-key`, `anthropic-version`) and has a different request/response structure.
- **Cohere:** Use the `cohere` library.

While you could adapt `SeekGPT._request` or add new client classes to this library for these services, that usually means reimplementing logic their official SDKs already handle well.
## Error Handling

The library defines custom exceptions inheriting from `SeekGPTError`:

- `AuthenticationError`: 401/403 errors.
- `APIConnectionError`: Network issues (timeout, connection refused).
- `InvalidRequestError`: 400/422/429 errors (bad request, rate limit).
- `APIError`: Other non-2xx API errors.

Wrap API calls in `try...except` blocks to handle potential issues.
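The status-code groupings above map naturally onto a small dispatch helper. A sketch of how such a mapping could look (the exception names mirror the ones listed, but this helper and the class stubs are hypothetical, not seekgpt's actual implementation):

```python
class SeekGPTError(Exception): pass
class AuthenticationError(SeekGPTError): pass
class APIConnectionError(SeekGPTError): pass
class InvalidRequestError(SeekGPTError): pass
class APIError(SeekGPTError): pass

def exception_for_status(status: int) -> type:
    """Map an HTTP status code to the exception class raised for it."""
    if status in (401, 403):
        return AuthenticationError
    if status in (400, 422, 429):
        return InvalidRequestError
    return APIError  # any other non-2xx response

print(exception_for_status(401).__name__)  # AuthenticationError
print(exception_for_status(429).__name__)  # InvalidRequestError
```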
## Contributing

[Details on how to contribute - optional]
## License

This project is licensed under the MIT License - see the LICENSE file for details.