A modern REST API service for managing and serving AI prompts. This service provides a centralized repository for storing, versioning, and retrieving prompts for various AI applications. It uses PostgreSQL as the database for robust and scalable data management.
💡 GUI Available! This service comes with a modern web interface. Check out the Exemplar Prompt Hub UI for a user-friendly way to manage your prompts, test them in the playground, and compare responses from different AI models.
- Features
- Getting Started
- Running Tests
- Contributing
- License
- API Documentation
- API Usage Examples
- Project Structure
- Database Table Structure
- Updating Prompts with Versioning
For a detailed checklist of implemented and planned features, see FEATURES.md.
- RESTful API for prompt management
- Version control for prompts
- Tag-based prompt organization
- Metadata support for prompts
- Authentication and authorization
- Search and filtering capabilities
- Prompt Playground API via OpenRouter
- Python 3.8 or higher
- pip (Python package manager)
- Git
- PostgreSQL (for the database; the default `.env.example` configuration uses SQLite)
- Docker and Docker Compose (for containerized setup)
You can install the package directly from PyPI:
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
pip install exemplar-prompt-hub
# Create a .env file from .env.example (available in the GitHub repo)
cp .env.example .env
# Edit .env as needed
prompt-hub
Or install from the source:
# Clone the repository
git clone https://github.com/yourusername/exemplar-prompt-hub.git
cd exemplar-prompt-hub
# Create and activate a virtual environment
python -m venv venv
source venv/bin/activate # On Windows, use `venv\Scripts\activate`
# Install the package
pip install -e .
# Copy .env.example to .env
cp .env.example .env
# Edit .env to configure your database and other settings
After installation, you can use the following command:
prompt-hub
This starts the FastAPI server.
The easiest way to get started is using Docker Compose:
- Clone the repository:
git clone https://github.com/yourusername/exemplar-prompt-hub.git
cd exemplar-prompt-hub
- Start the services:
docker-compose up -d
This will start:
- FastAPI backend at http://localhost:8000
- PostgreSQL database at localhost:5432
- Access the services:
  - API Documentation: http://localhost:8000/docs
- Stop the services:
docker-compose down
If you prefer to run the services manually:
- Clone the repository:
git clone https://github.com/yourusername/exemplar-prompt-hub.git
cd exemplar-prompt-hub
- Create and activate a virtual environment:
python -m venv venv
source venv/bin/activate  # On Windows, use `venv\Scripts\activate`
- Install dependencies:
pip install -r requirements.txt
- Set up environment variables:
  - Copy .env.example to .env:
    cp .env.example .env
  - Edit .env to configure your database and other settings.
- Start the application:
uvicorn app.main:app --reload
To run the tests, use:
pytest
For detailed test coverage, use:
pytest --cov=app --cov-report=term-missing
Contributions are welcome! Please feel free to submit a Pull Request. For detailed contribution guidelines, please refer to the CONTRIBUTING.md file.
This project is licensed under the MIT License - see the LICENSE file for details.
Once the server is running, you can access the interactive API documentation at:
- Swagger UI: http://localhost:8000/docs
- ReDoc: http://localhost:8000/redoc
Here are some example curl commands to interact with the API:
curl -X POST "http://localhost:8000/api/v1/prompts/" \
-H "Content-Type: application/json" \
-d '{
"name": "greeting-template",
"text": "Hello {{ name }}! Welcome to {{ platform }}. Your role is {{ role }}.",
"description": "A greeting template with dynamic variables",
"meta": {
"template_variables": ["name", "platform", "role"],
"author": "test-user"
},
"tags": ["template", "greeting"]
}'
Note: The version field is optional and handled automatically by the API. New prompts start with version 1, and each subsequent update increments the version number.
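As a rough illustration of this behavior, here is a minimal Python sketch using the requests library against the endpoints shown in this README (the prompt name and field values are hypothetical; the version field is assumed to appear in the response as in the examples below):

```python
import requests

BASE = "http://localhost:8000/api/v1/prompts/"

# Create a prompt without sending a version -- the API assigns version 1.
created = requests.post(BASE, json={
    "name": "demo-greeting",  # hypothetical prompt name
    "text": "Hello {{ name }}!",
    "description": "A demo greeting",
    "meta": {"template_variables": ["name"]},
    "tags": ["demo"],
}).json()
print(created["version"])  # expected: 1

# Update the same prompt -- the API increments the version automatically.
updated = requests.put(f"{BASE}{created['id']}", json={
    "text": "Hello {{ name }}! Welcome to {{ platform }}.",
    "description": "A demo greeting with a platform",
    "meta": {"template_variables": ["name", "platform"]},
    "tags": ["demo"],
}).json()
print(updated["version"])  # expected: 2
```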
# Get all prompts
curl "http://localhost:8000/api/v1/prompts/"
# Get prompts with search
curl "http://localhost:8000/api/v1/prompts/?search=example"
# Get prompts with tag filter
curl "http://localhost:8000/api/v1/prompts/?tag=test"
# Get prompts with pagination
curl "http://localhost:8000/api/v1/prompts/?skip=0&limit=10"
# Replace {prompt_id} with actual ID
curl "http://localhost:8000/api/v1/prompts/{prompt_id}"
curl -X PUT "http://localhost:8000/api/v1/prompts/{prompt_id}" \
-H "Content-Type: application/json" \
-d '{
"text": "Hello {{ name }}! Welcome to {{ platform }}. Your role is {{ role }}. Your department is {{ department }}.",
"description": "Updated greeting template with department",
"meta": {
"template_variables": ["name", "platform", "role", "department"],
"author": "test-user",
"updated": true
},
"tags": ["template", "greeting", "updated"]
}'
curl -X DELETE "http://localhost:8000/api/v1/prompts/{prompt_id}"
exemplar-prompt-hub/
├── app/
│ ├── api/
│ │ └── endpoints/
│ │ └── prompts.py
│ ├── core/
│ │ └── config.py
│ ├── db/
│ │ ├── base_class.py
│ │ ├── models.py
│ │ └── session.py
│ ├── schemas/
│ │ └── prompt.py
│ └── main.py
├── tests/
│ └── test_prompts.py
├── alembic/
│ └── versions/
├── .env.example
├── .gitignore
├── docker-compose.yml
├── Dockerfile
├── LICENSE
├── MANIFEST.in
├── pyproject.toml
├── README.md
├── requirements.txt
└── setup.py
CREATE TABLE prompts (
id INTEGER PRIMARY KEY AUTOINCREMENT,
name TEXT NOT NULL UNIQUE,
text TEXT NOT NULL,
description TEXT,
version INTEGER NOT NULL,
meta TEXT, -- Store JSON as TEXT in SQLite
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
updated_at TIMESTAMP
);
CREATE TABLE tags (
id INTEGER PRIMARY KEY AUTOINCREMENT,
name TEXT NOT NULL UNIQUE
);
CREATE TABLE prompt_tags (
prompt_id INTEGER,
tag_id INTEGER,
PRIMARY KEY (prompt_id, tag_id),
FOREIGN KEY (prompt_id) REFERENCES prompts(id) ON DELETE CASCADE,
FOREIGN KEY (tag_id) REFERENCES tags(id) ON DELETE CASCADE
);
CREATE TABLE prompt_versions (
id INTEGER PRIMARY KEY AUTOINCREMENT,
prompt_id INTEGER,
version INTEGER,
text TEXT,
description TEXT,
meta TEXT, -- Store JSON as TEXT in SQLite
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
FOREIGN KEY (prompt_id) REFERENCES prompts(id) ON DELETE CASCADE
);
CREATE INDEX idx_prompts_name ON prompts(name);
CREATE INDEX idx_tags_name ON tags(name);
CREATE INDEX idx_prompt_versions_prompt_id ON prompt_versions(prompt_id);
CREATE INDEX idx_prompt_tags_prompt_id ON prompt_tags(prompt_id);
CREATE INDEX idx_prompt_tags_tag_id ON prompt_tags(tag_id);
CREATE TRIGGER update_prompt_timestamp
AFTER UPDATE ON prompts
BEGIN
UPDATE prompts SET updated_at = CURRENT_TIMESTAMP
WHERE id = NEW.id;
END;
Key differences from PostgreSQL:
- Uses INTEGER PRIMARY KEY AUTOINCREMENT instead of SERIAL
- Uses TEXT instead of VARCHAR and JSONB
- Stores JSON as TEXT with manual serialization
- Requires explicit foreign key support with PRAGMA foreign_keys = ON (see the sketch after this list)
- Uses triggers for updated_at timestamp management
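To make these differences concrete, here is a small standalone sqlite3 sketch (not part of the application code) showing the per-connection PRAGMA foreign_keys = ON requirement and JSON stored as TEXT with manual serialization:

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite requires this on every connection

conn.execute("""
    CREATE TABLE prompts (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        name TEXT NOT NULL UNIQUE,
        text TEXT NOT NULL,
        version INTEGER NOT NULL,
        meta TEXT  -- JSON stored as TEXT
    )
""")

# Serialize the meta dict to a JSON string before inserting...
meta = {"template_variables": ["name"], "author": "test-user"}
conn.execute(
    "INSERT INTO prompts (name, text, version, meta) VALUES (?, ?, ?, ?)",
    ("greeting-template", "Hello {{ name }}!", 1, json.dumps(meta)),
)

# ...and deserialize it when reading it back.
row = conn.execute(
    "SELECT meta FROM prompts WHERE name = ?", ("greeting-template",)
).fetchone()
print(json.loads(row[0])["author"])  # test-user
```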
The API supports versioning of prompts. When updating a prompt:
- The current version is incremented
- A new record is created with the updated content
- The old version is preserved for reference
To update a prompt, use the PUT endpoint with the prompt ID:
curl -X PUT "http://localhost:8000/api/v1/prompts/{prompt_id}" \
-H "Content-Type: application/json" \
-d '{
"text": "Hello {{ name }}! Welcome to {{ platform }}. Your role is {{ role }}. Your department is {{ department }}.",
"description": "Updated greeting template with department",
"meta": {
"template_variables": ["name", "platform", "role", "department"],
"author": "test-user",
"updated": true
},
"tags": ["template", "greeting", "updated"]
}'
The API will automatically handle versioning and maintain the history of changes.
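For example, after an update the previous content should remain retrievable through the versions endpoint shown earlier. A minimal Python sketch (the prompt ID is hypothetical):

```python
import requests

BASE = "http://localhost:8000/api/v1/prompts"
prompt_id = 1  # hypothetical ID; replace with a real one

# The prompt itself always carries the latest version...
latest = requests.get(f"{BASE}/{prompt_id}").json()
print(latest["version"], latest["text"])

# ...while earlier content stays available under /versions/{n}.
if latest["version"] > 1:
    previous = requests.get(
        f"{BASE}/{prompt_id}/versions/{latest['version'] - 1}"
    ).json()
    print(previous["version"], previous["text"])
```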
The API supports Jinja2 templating in prompts, allowing you to create dynamic prompts with variables. Here's how to use it:
# Create a basic greeting template
curl -X POST "http://localhost:8000/api/v1/prompts/" \
-H "Content-Type: application/json" \
-d '{
"name": "greeting-template",
"text": "Hello {{ name }}! Welcome to {{ platform }}. Your role is {{ role }}.",
"description": "A greeting template with dynamic variables",
"meta": {
"template_variables": ["name", "platform", "role"],
"author": "test-user"
},
"tags": ["template", "greeting"]
}'
# Response:
{
"id": 1,
"name": "greeting-template",
"text": "Hello {{ name }}! Welcome to {{ platform }}. Your role is {{ role }}.",
"description": "A greeting template with dynamic variables",
"version": 1,
"meta": {
"template_variables": ["name", "platform", "role"],
"author": "test-user"
},
"tags": [
{"id": 1, "name": "template"},
{"id": 2, "name": "greeting"}
],
"created_at": "2024-03-20T10:00:00",
"updated_at": null
}
See examples/jinja_open_ai.py for a complete Python implementation of how to use this template with Jinja2.
# Fetch a template by ID
curl "http://localhost:8000/api/v1/prompts/1"
# Response:
{
"id": 1,
"name": "greeting-template",
"text": "Hello {{ name }}! Welcome to {{ platform }}. Your role is {{ role }}.",
"description": "A greeting template with dynamic variables",
"version": 1,
"meta": {
"template_variables": ["name", "platform", "role"],
"author": "test-user"
},
"tags": [
{"id": 1, "name": "template"},
{"id": 2, "name": "greeting"}
],
"created_at": "2024-03-20T10:00:00",
"updated_at": null
}
# Fetch a specific version of a prompt
curl "http://localhost:8000/api/v1/prompts/1/versions/2"
# Response:
{
"id": 2,
"prompt_id": 1,
"version": 2,
"text": "Updated greeting template text",
"description": "Updated description",
"meta": {
"template_variables": ["name", "platform", "role"],
"author": "test-user",
"updated": true
},
"created_at": "2024-03-20T11:00:00"
}
# Fetch a template by name
curl "http://localhost:8000/api/v1/prompts/?search=greeting-template"
# Response:
[
{
"id": 1,
"name": "greeting-template",
"text": "Hello {{ name }}! Welcome to {{ platform }}. Your role is {{ role }}.",
"description": "A greeting template with dynamic variables",
"version": 1,
"meta": {
"template_variables": ["name", "platform", "role"],
"author": "test-user"
},
"tags": [
{"id": 1, "name": "template"},
{"id": 2, "name": "greeting"}
],
"created_at": "2024-03-20T10:00:00",
"updated_at": null
}
]
# Update a template with new variables
curl -X PUT "http://localhost:8000/api/v1/prompts/1" \
-H "Content-Type: application/json" \
-d '{
"text": "Hello {{ name }}! Welcome to {{ platform }}. Your role is {{ role }}. Your department is {{ department }}.",
"description": "Updated greeting template with department",
"meta": {
"template_variables": ["name", "platform", "role", "department"],
"author": "test-user",
"updated": true
},
"tags": ["template", "greeting", "updated"]
}'
# Response:
{
"id": 1,
"name": "greeting-template",
"text": "Hello {{ name }}! Welcome to {{ platform }}. Your role is {{ role }}. Your department is {{ department }}.",
"description": "Updated greeting template with department",
"version": 2,
"meta": {
"template_variables": ["name", "platform", "role", "department"],
"author": "test-user",
"updated": true
},
"tags": [
{"id": 1, "name": "template"},
{"id": 2, "name": "greeting"},
{"id": 5, "name": "updated"}
],
"created_at": "2024-03-20T10:00:00",
"updated_at": "2024-03-20T10:15:00"
}
# Delete a template
curl -X DELETE "http://localhost:8000/api/v1/prompts/1"
# Response: 204 No Content
See the complete examples in the examples/templating/python directory:
- Basic string template: examples/templating/python/string_template_example.py
- F-strings: examples/templating/python/f_strings_example.py
- Mako template engine: examples/templating/python/mako_example.py
- Control structures: examples/templating/python/control_structures_example.py
- Jinja2 macros: examples/templating/python/macro_example.py
Here's a simple example using Jinja2:
import requests
from jinja2 import Template
# Fetch the prompt template
response = requests.get("http://localhost:8000/api/v1/prompts/1")
prompt_data = response.json()
# Create a Jinja template
template = Template(prompt_data["text"])
# Render with variables
rendered_prompt = template.render(
name="John",
platform="Exemplar Prompt Hub",
role="Developer",
department="Engineering"
)
print(rendered_prompt)
# Output: Hello John! Welcome to Exemplar Prompt Hub. Your role is Developer. Your department is Engineering.
For more advanced examples including control structures, macros, and different template engines, refer to the example files mentioned above. Each example demonstrates a different approach to template rendering:
- string_template_example.py: Uses Python's built-in string.Template for simple variable substitution (see the short sketch after this list)
- f_strings_example.py: Shows how to use Python's f-strings for template rendering
- mako_example.py: Demonstrates the Mako template engine for high-performance templating
- control_structures_example.py: Shows how to use if-else statements and loops in templates
- macro_example.py: Demonstrates reusable template components using Jinja2 macros
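As a quick taste of the first approach, here is a minimal string.Template sketch (standalone, and not necessarily the exact contents of string_template_example.py):

```python
from string import Template

# string.Template uses $-style placeholders rather than Jinja2's {{ }} syntax.
template = Template("Hello $name! Welcome to $platform. Your role is $role.")
print(template.substitute(name="John", platform="Exemplar Prompt Hub", role="Developer"))
# Output: Hello John! Welcome to Exemplar Prompt Hub. Your role is Developer.
```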
See the complete examples in the examples/templating/javascript directory:
- Basic template literals: examples/templating/javascript/template_literals.js
- Handlebars.js: examples/templating/javascript/handlebars_example.js
- Mustache.js: examples/templating/javascript/mustache_example.js
- React component: examples/templating/javascript/react_example.jsx
- Control structures: examples/templating/javascript/control_structures_example.js
- Macros/Partials: examples/templating/javascript/macro_example.js
Here's a simple example using template literals:
// Fetch and render a template
async function renderPrompt(promptId, variables) {
const response = await fetch(`http://localhost:8000/api/v1/prompts/${promptId}`);
const promptData = await response.json();
// Create template function
const template = new Function('variables', `
with(variables) {
return \`${promptData.text}\`;
}
`);
// Render with variables
return template(variables);
}
// Usage
const renderedPrompt = await renderPrompt(1, {
name: 'John',
platform: 'Exemplar Prompt Hub',
role: 'Developer',
department: 'Engineering'
});
console.log(renderedPrompt);
// Output: Hello John! Welcome to Exemplar Prompt Hub. Your role is Developer. Your department is Engineering.
For more advanced examples including control structures, macros, and React integration, refer to the example files mentioned above.
- Document Variables: Always document template variables in the prompt's meta field
- Default Values: Consider providing default values in the template (see the sketch after this list)
- Error Handling: Use Jinja2's error handling features
- Security: Be careful with user input in templates
- Versioning: Use the API's versioning feature to track template changes
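For example, Jinja2's default filter can provide a fallback when a variable is missing (a minimal sketch, independent of any stored prompt):

```python
from jinja2 import Template

# The default filter supplies a value when the variable is not passed to render().
template = Template("Hello {{ name }}! Your role is {{ role | default('Guest') }}.")
print(template.render(name="John"))
# Output: Hello John! Your role is Guest.
```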
The Prompt Playground API allows you to compare responses from different LLM models using the same prompt. It leverages the OpenRouter API to access multiple models through a single endpoint.
First, create a prompt template as shown in the API Usage Examples section above.
# Using latest version of the prompt
curl -X POST "http://localhost:8000/api/v1/prompts/playground" \
-H "Content-Type: application/json" \
-d '{
"prompt_id": 4,
"models": ["openai/gpt-4", "anthropic/claude-3-opus"],
"variables": {
"name": "John",
"platform": "Exemplar Prompt Hub",
"role": "Developer"
}
}'
# Using a specific version of the prompt
curl -X POST "http://localhost:8000/api/v1/prompts/playground" \
-H "Content-Type: application/json" \
-d '{
"prompt_id": 4,
"version": 2,
"models": ["openai/gpt-4", "anthropic/claude-3-opus"],
"variables": {
"name": "John",
"platform": "Exemplar Prompt Hub",
"role": "Developer"
}
}'
{
"prompt_id": 4,
"prompt_name": "greeting-template",
"prompt_version": 1,
"variables_used": {
"name": "John",
"platform": "Exemplar Prompt Hub",
"role": "Developer"
},
"responses": {
"openai/gpt-4": {
"response": "Hello John! How can I assist you in your developer role today? Are you looking for help with coding, debugging, or perhaps ideas for a new project?",
"model": "openai/gpt-4",
"prompt_used": "Hello John! Welcome to Exemplar Prompt Hub. Your role is Developer.",
"metadata": {
"prompt_id": 4,
"prompt_version": 1,
"variables_used": {
"name": "John",
"platform": "Exemplar Prompt Hub",
"role": "Developer"
},
"usage": {
"prompt_tokens": 34,
"completion_tokens": 32,
"total_tokens": 66,
"prompt_tokens_details": {
"cached_tokens": 0
},
"completion_tokens_details": {
"reasoning_tokens": 0
}
},
"model_info": "openai/gpt-4"
}
},
"anthropic/claude-3-opus": {
"response": "Hello! It's great to be here at Exemplar Prompt Hub. As an AI assistant in the Developer role, I'm happy to help with any programming, coding, software development, or technical questions you may have. Feel free to ask me about languages like Python, Java, C++, web development, databases, algorithms, or anything else related to software engineering. I'll do my best to provide helpful explanations, code samples, debugging tips, or guidance. Let me know what development topics you'd like to explore!",
"model": "anthropic/claude-3-opus",
"prompt_used": "Hello John! Welcome to Exemplar Prompt Hub. Your role is Developer.",
"metadata": {
"prompt_id": 4,
"prompt_version": 1,
"variables_used": {
"name": "John",
"platform": "Exemplar Prompt Hub",
"role": "Developer"
},
"usage": {
"prompt_tokens": 31,
"completion_tokens": 113,
"total_tokens": 144
},
"model_info": "anthropic/claude-3-opus"
}
}
}
}
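The same playground call can be made from Python. Here is a minimal sketch using the requests library (the prompt ID is hypothetical, and the response fields are assumed to match the example above):

```python
import requests

payload = {
    "prompt_id": 4,  # hypothetical ID
    "models": ["openai/gpt-4", "anthropic/claude-3-opus"],
    "variables": {"name": "John", "platform": "Exemplar Prompt Hub", "role": "Developer"},
}
result = requests.post(
    "http://localhost:8000/api/v1/prompts/playground", json=payload
).json()

# Print each model's response and token usage side by side.
for model, details in result["responses"].items():
    print(f"--- {model} ---")
    print(details["response"])
    print("total tokens:", details["metadata"]["usage"]["total_tokens"])
```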
- Multiple Model Support: Compare responses from different LLM models simultaneously
- Template Variables: Support for Jinja2-style template variables
- Detailed Metadata: Includes token usage, model information, and prompt versioning
- Error Handling: Graceful error handling for each model independently
- OpenRouter Integration: Uses OpenRouter API for accessing multiple models through a single endpoint
Add the following to your .env file:
OPENROUTER_API_KEY=your_api_key_here
PROJECT_URL=http://your-app-url.com # Optional: for OpenRouter rankings
The playground supports all models available through OpenRouter, including:
- OpenAI models (GPT-4, GPT-3.5)
- Anthropic models (Claude 3 Opus, Claude 3 Sonnet)
- And many more...
For a complete list of available models, visit the OpenRouter Models page.
- Model Selection: Choose models that best fit your use case
- Variable Validation: Validate template variables before sending to models (see the sketch after this list)
- Error Handling: Handle model-specific errors appropriately
- Token Usage: Monitor token usage for cost optimization
- Response Comparison: Compare responses to identify model strengths and weaknesses
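One way to validate variables before calling the playground is to check them against the prompt's meta.template_variables list used throughout this README (a hedged sketch; the helper below is hypothetical and not part of the API):

```python
import requests

def validate_playground_variables(prompt_id: int, variables: dict) -> None:
    """Raise if a variable declared in meta.template_variables is missing."""
    prompt = requests.get(f"http://localhost:8000/api/v1/prompts/{prompt_id}").json()
    expected = set(prompt.get("meta", {}).get("template_variables", []))
    missing = expected - set(variables)
    if missing:
        raise ValueError(f"Missing template variables: {sorted(missing)}")

# Usage: validate before POSTing to /api/v1/prompts/playground
validate_playground_variables(4, {
    "name": "John",
    "platform": "Exemplar Prompt Hub",
    "role": "Developer",
})
```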
from jinja2 import Template, TemplateError

try:
    # prompt_data is the prompt JSON fetched from the API (see the examples above)
    template = Template(prompt_data["text"])
    rendered_prompt = template.render(
        name="John",
        platform="Exemplar Prompt Hub"
        # role is missing; a default is used if the template defines one
    )
except TemplateError as e:
    print(f"Template error: {e}")