PromptLens is an open-source tool for comparing and analyzing LLM responses across different models and prompts.
- **Prompt Discovery & Management**
  - Dual prompt system with user and system prompts
  - Prompt history tracking
  - Continue conversations with selected or all models
  - Multi-turn conversation support
- **LLM Integration**
  - Support for OpenAI GPT & o1 models and Anthropic Claude
  - Streaming responses for real-time feedback
  - Cost calculation and usage tracking
  - API key management in settings
  - Model parameter configuration
- **Results Display**
  - Side-by-side comparison of responses
  - Syntax highlighting for code
  - Export and share functionality
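Cost calculation for a comparison boils down to token counts multiplied by per-model prices. A minimal sketch of how such tracking might work (the model names and per-1M-token prices here are illustrative assumptions, not PromptLens's actual rate table):

```typescript
// Illustrative per-1M-token prices in USD — NOT PromptLens's real rate table.
const PRICES: Record<string, { input: number; output: number }> = {
  "gpt-4o": { input: 2.5, output: 10 },
  "claude-3-5-sonnet": { input: 3, output: 15 },
};

// Cost in USD for one completion, given token usage reported by the provider.
function completionCost(
  model: string,
  inputTokens: number,
  outputTokens: number
): number {
  const price = PRICES[model];
  if (!price) throw new Error(`Unknown model: ${model}`);
  return (inputTokens * price.input + outputTokens * price.output) / 1_000_000;
}
```

With the hypothetical prices above, 1,000 input tokens and 500 output tokens on `"gpt-4o"` would cost `completionCost("gpt-4o", 1000, 500)` = $0.0075.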
The easiest way to get started is using Docker:

```yaml
services:
  app:
    image: siteboonai/promptlens:latest
    ports:
      - "3000:3000" # Application port (serves both frontend and API)
    environment:
      - OPENAI_KEY=your_key    # Optional, can be configured in the settings later
      - ANTHROPIC_KEY=your_key # Optional, can be configured in the settings later
    volumes:
      - ./data:/app/data # SQLite database (stores prompts, comparisons, and encrypted API keys)
```

Save as `docker-compose.yml` and run:

```shell
docker-compose up -d
```

The application will be available at http://localhost:3000.
- Clone the repository:

  ```shell
  git clone https://github.com/siteboon/promptlens.git
  cd promptlens
  ```

- Install all dependencies:

  ```shell
  npm run install:all
  ```
- Configure environment:
  - Copy `.env.example` to `.env`
  - Add your OpenAI and/or Anthropic API keys (optional, can be configured in the UI)
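A minimal `.env` might look like the following (the key names match the Docker configuration above; the values are placeholders):

```shell
# .env — both keys are optional and can instead be set in the settings UI
OPENAI_KEY=your_openai_key
ANTHROPIC_KEY=your_anthropic_key
```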
- Start the application:

  ```shell
  # Start both frontend and backend in development mode
  npm run dev          # Frontend on port 3000, backend on port 3001

  # Or start them separately:
  npm run dev:client   # Frontend on port 3000
  npm run dev:server   # Backend on port 3001
  ```

  This starts all packages in parallel and watches for changes; the website will be available at http://localhost:3000.
In development mode:
- Frontend: http://localhost:3000
- Backend API: http://localhost:3001
In production mode (Docker):
- Everything runs on http://localhost:3000 with API routes prefixed with `/api`
- `POST /api/completions` - Generate completions from LLM models
- `GET /api/comparisons` - List recent prompt comparisons
- `GET /api/models` - List available models
- `POST /api/keys` - Manage API keys
- `GET /api/keys/info` - Get API key status
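As a sketch, a completion request from a script might look like this. The request body fields below are assumptions inferred from the feature list (dual prompts, multiple models), not a documented schema:

```typescript
// Hypothetical request shape — the actual schema may differ.
type CompletionRequest = {
  userPrompt: string;
  systemPrompt?: string;
  models: string[]; // models to compare side by side
};

function buildCompletionRequest(
  userPrompt: string,
  models: string[],
  systemPrompt?: string
): CompletionRequest {
  return { userPrompt, models, ...(systemPrompt ? { systemPrompt } : {}) };
}

// In production (Docker), both frontend and API are served from port 3000.
async function requestCompletions(req: CompletionRequest): Promise<unknown> {
  const res = await fetch("http://localhost:3000/api/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(req),
  });
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return res.json();
}
```

In development, the same request would target the backend directly on port 3001.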
```
promptlens-v2/
├── src/                # Frontend source
│   ├── components/     # React components
│   ├── services/       # API and external services
│   ├── utils/          # Helper functions
│   └── assets/         # Static assets
├── server/             # Backend source
│   ├── src/
│   │   ├── migrations/ # Database migrations
│   │   └── index.ts    # Server entry point
│   └── package.json
├── public/             # Static files
└── package.json
```
- **Frontend**
  - React with TypeScript
  - Tailwind CSS + DaisyUI
  - Vite build tool
- **Backend**
  - Node.js with TypeScript
  - SQLite database
  - Express server
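The single-port production layout, where requests under `/api` hit the backend and everything else serves the frontend build, can be sketched with Node's built-in `http` module. This is an illustration of the routing split only, not the project's actual Express code:

```typescript
import http from "node:http";

// True for requests the API should handle rather than the static frontend.
function isApiRoute(url: string): boolean {
  return url === "/api" || url.startsWith("/api/");
}

const server = http.createServer((req, res) => {
  if (isApiRoute(req.url ?? "")) {
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ handled: "api", url: req.url }));
  } else {
    // The real server serves the built frontend's static files here.
    res.writeHead(200, { "Content-Type": "text/html" });
    res.end("<!doctype html><title>PromptLens</title>");
  }
});

// server.listen(3000); // one port for both frontend and API in production
```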
We welcome contributions! Please see our Contributing Guide for details.
- API Documentation - Details about the backend API
- Contributing Guide - How to contribute to the project
AGPL-3.0 - See LICENSE for details.