🚀 A comprehensive tool for testing and comparing Large Language Model API performance
Languages: English | 中文 | العربية | Deutsch | Español | Français | 日本語
LLM API Test is a powerful, web-based tool designed to benchmark and compare the performance of various Large Language Model APIs. Whether you're evaluating different providers, optimizing your AI applications, or conducting research, this tool provides comprehensive metrics and insights.
- OpenAI: GPT-3.5, GPT-4, and latest models
- Google Gemini: Gemini Pro, Gemini Pro Vision
- Custom APIs: Any OpenAI-compatible API endpoint
- Response Time: First token latency measurement
- Output Speed: Tokens per second calculation
- Success Rate: API reliability tracking
- Quality Assessment: Response comparison tools
- Multilingual Interface: 7 languages supported
- Responsive Design: Works on desktop and mobile
- Real-time Results: Live performance monitoring
- History Tracking: Persistent test records
- Local Development: Simple HTTP server setup
- Static Hosting: Compatible with any static host
- Modern web browser (Chrome, Firefox, Safari, Edge)
- Node.js and npm installed
- API keys for the services you want to test
1. Clone the repository

   ```bash
   git clone https://github.com/qjr87/llm-api-test.git
   cd llm-api-test
   ```

2. Install dependencies and start the server

   ```bash
   npm install
   npm start
   ```

   Alternative methods:

   ```bash
   # Using Python 3
   python -m http.server 8000

   # Using PHP
   php -S localhost:8000
   ```

3. Open in browser

   Navigate to `http://localhost:8000`
- Select Protocol: Choose your API provider (OpenAI, Gemini, or Custom)
- Enter Endpoint: API URL (auto-filled for standard providers)
- Add API Key: Your authentication key
- Configure Models: Specify which models to test
- Test Rounds: Number of iterations per model
- Prompts: Custom test prompts or use defaults
- Concurrency: Parallel request handling
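The models and rounds settings combine into a flat queue of test tasks, one per model per iteration. A minimal sketch of that expansion (`buildTestPlan` is an illustrative name, not the tool's actual API):

```javascript
// Expand a comma-separated model list and a round count into a flat
// list of test tasks. Illustrative only; the real logic lives in app.js.
function buildTestPlan(models, rounds) {
  const tasks = [];
  for (const model of models.split(',').map((m) => m.trim())) {
    for (let round = 1; round <= rounds; round++) {
      tasks.push({ model, round });
    }
  }
  return tasks;
}

// Two models with two rounds each yield four tasks.
console.log(buildTestPlan('gpt-3.5-turbo,gpt-4', 2).length); // 4
```

The concurrency setting then controls how many of these tasks run in parallel.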
```text
// OpenAI Configuration
Protocol: "openai"
API URL: "https://api.openai.com/v1/chat/completions"
API Key: "sk-..."
Models: "gpt-3.5-turbo,gpt-4"

// Gemini Configuration
Protocol: "gemini"
API URL: "https://generativelanguage.googleapis.com/v1beta/models"
API Key: "AIza..."
Models: "gemini-pro"
```
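For the OpenAI and Custom protocols, these settings map onto a standard chat completions request. A hedged sketch of that request shape (`buildChatRequest` is an illustrative helper, not the tool's actual handler in `api-handlers.js`):

```javascript
// Build the fetch arguments for an OpenAI-compatible chat completions
// call. Illustrative only; the tool's own implementation may differ.
function buildChatRequest(apiUrl, apiKey, model, prompt) {
  return {
    url: apiUrl,
    options: {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        Authorization: `Bearer ${apiKey}`,
      },
      body: JSON.stringify({
        model,
        messages: [{ role: 'user', content: prompt }],
        stream: true, // streaming is required to measure first-token time
      }),
    },
  };
}

// Usage (live network call, shown for illustration only):
// const { url, options } = buildChatRequest(
//   'https://api.openai.com/v1/chat/completions', 'sk-...', 'gpt-3.5-turbo', 'Hello');
// const response = await fetch(url, options);
```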
Deploy to any static hosting service:
- Vercel: `vercel --prod`
- Netlify: Drag and drop the project folder
- GitHub Pages: Enable in repository settings
- Firebase Hosting: `firebase deploy`
```dockerfile
FROM nginx:alpine
COPY . /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
```

```bash
docker build -t llm-api-test .
docker run -p 8080:80 llm-api-test
```
```text
llm-api-test/
├── 📄 index.html       # Main application interface
├── 🧠 app.js           # Core application logic & test orchestration
├── 🔌 api-handlers.js  # API protocol implementations
├── 🎨 styles.css       # Responsive UI styling
├── 🌍 i18n.js          # Internationalization & language support
└── 📜 LICENSE          # MIT License
```
- APITester Class: Main test orchestration and UI management
- APIHandler Class: Protocol-specific API implementations
- I18n System: Multi-language support with dynamic loading
- Results Engine: Real-time performance metrics calculation
- HTML5: Semantic markup and accessibility
- CSS3: Modern styling with Flexbox/Grid
- Vanilla JavaScript: No framework dependencies
- Web APIs: Fetch, LocalStorage, Internationalization
- Modular Design: Separation of concerns
- Event-Driven: Reactive UI updates
- Progressive Enhancement: Core markup stays readable if scripts fail to load
- Mobile-First: Responsive design principles
- Static Hosting: Universal compatibility
- CDN Ready: Global content distribution
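History tracking is built on the LocalStorage API listed above. A minimal sketch, with the storage object injected so the code also runs outside a browser (`saveResult`, `loadHistory`, and the storage key are illustrative names, not the tool's actual API):

```javascript
// Persist test results under a single key. The storage parameter is any
// object with getItem/setItem (window.localStorage in the browser).
const HISTORY_KEY = 'llm-api-test-history'; // illustrative key name

function loadHistory(storage) {
  return JSON.parse(storage.getItem(HISTORY_KEY) || '[]');
}

function saveResult(storage, result) {
  const history = loadHistory(storage);
  history.push(result);
  storage.setItem(HISTORY_KEY, JSON.stringify(history));
  return history.length;
}

// In-memory stand-in for localStorage, so the sketch runs under Node:
const memoryStorage = {
  data: {},
  getItem(k) { return this.data[k] ?? null; },
  setItem(k, v) { this.data[k] = v; },
};
saveResult(memoryStorage, { model: 'gpt-4', tokensPerSec: 22.5 });
```

In the browser, passing `window.localStorage` instead of the stub makes records survive page reloads.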
| Metric | Description | Good Range |
|---|---|---|
| First Token Time | Time to receive the first response token | < 2 seconds |
| Output Speed | Tokens generated per second | > 10 tokens/sec |
| Success Rate | Percentage of successful requests | > 95% |
| Total Time | Complete response generation time | Varies by length |
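The metrics above derive from a few raw timestamps per request. A sketch of the arithmetic, assuming millisecond timestamps such as those from `performance.now()` (`computeMetrics` and its field names are illustrative, not the tool's actual code):

```javascript
// Derive the table's metrics from raw timing data for one model.
// All names are illustrative; timestamps are in milliseconds.
function computeMetrics({ startMs, firstTokenMs, endMs, tokenCount, failures, total }) {
  return {
    firstTokenTimeMs: firstTokenMs - startMs,
    totalTimeMs: endMs - startMs,
    // Tokens per second over the generation window after the first token.
    tokensPerSec: tokenCount / ((endMs - firstTokenMs) / 1000),
    successRate: ((total - failures) / total) * 100,
  };
}

const m = computeMetrics({
  startMs: 0, firstTokenMs: 800, endMs: 5800,
  tokenCount: 100, failures: 1, total: 20,
});
// m.firstTokenTimeMs === 800, m.tokensPerSec === 20, m.successRate === 95
```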
We welcome contributions! Here's how you can help:
- Fork the repository
- Clone your fork locally
- Create a feature branch: `git checkout -b feature/amazing-feature`
- Make your changes
- Test thoroughly
- Commit with clear messages: `git commit -m "feat: add amazing feature"`
- Push to your fork
- Submit a Pull Request
- Follow existing code style
- Add tests for new features
- Update documentation
- Ensure cross-browser compatibility
- 🌐 Additional language translations
- 🔌 New API provider support
- 📊 Enhanced metrics and visualizations
- 🎨 UI/UX improvements
- 🐛 Bug fixes and optimizations
How do I get API keys?
- OpenAI: Visit platform.openai.com
- Google Gemini: Get started at ai.google.dev
- Custom APIs: Check your provider's documentation
Why are my tests failing?
- Verify API key is correct and has sufficient credits
- Check if the API endpoint URL is accurate
- Ensure your IP isn't blocked by the provider
- Try reducing concurrency or test rounds
Can I test custom models?
Yes! Use the "Custom" protocol option and provide:
- Your API endpoint URL
- Authentication method
- Model names
This project is licensed under the MIT License - see the LICENSE file for details.
- Thanks to all contributors who help improve this tool
- Inspired by the need for transparent AI performance testing
- Built with ❤️ for the AI development community
⭐ Star this repo if you find it helpful!
Made with ❤️ by qjr87