Create production-ready FastAPI applications in seconds
From simple APIs to LLM-enabled applications, all without build configuration.
Get a fully functional FastAPI app running in 30 seconds:
# Install
pip install fastapi-gen
# Create your app
fastapi-gen my_app
# Run it
cd my_app && make start
Or use pipx for one-time execution:
pipx run fastapi-gen my_app
cd my_app && make start
That's it! Open http://localhost:8000/docs to see your OpenAPI documentation.
Platform Support: Works on macOS and Linux | Report Issues
| Focus on Code | Production Ready | Testing Included | Zero Config |
|---|---|---|---|
| Skip boilerplate setup | Enterprise patterns | Real test coverage | Ready-to-run templates |
Hello World - Perfect for Learning FastAPI
Best for: Learning FastAPI fundamentals and starting new projects
Key Features:
- REST API Fundamentals - Complete CRUD with validation
- Configuration Management - Both pydantic-settings & dotenv
- Dependency Injection - Clean architecture with `Depends()` (see the sketch after this list)
- Background Tasks - Async processing with logging
- Exception Handling - Professional error responses
- Input Validation - Advanced Pydantic constraints
- Health Monitoring - Built-in health endpoints
- Complete Tests - 100% test coverage
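As a rough illustration of how these pieces fit together in FastAPI (the routes, models, and names below are hypothetical, not the template's actual layout):

```python
# Hypothetical sketch of the patterns listed above; the generated
# template's module names and structure will differ.
from fastapi import BackgroundTasks, Depends, FastAPI, HTTPException
from pydantic import BaseModel, Field

app = FastAPI()


class Item(BaseModel):
    # Input validation via Pydantic constraints
    name: str = Field(min_length=1, max_length=50)
    price: float = Field(gt=0)


FAKE_DB: dict[int, Item] = {}


def get_db() -> dict[int, Item]:
    # Dependency injection target for Depends()
    return FAKE_DB


def log_created(item_id: int) -> None:
    # Runs after the response is sent (background task)
    print(f"created item {item_id}")


@app.post("/items/{item_id}", status_code=201)
def create_item(
    item_id: int,
    item: Item,
    tasks: BackgroundTasks,
    db: dict[int, Item] = Depends(get_db),
) -> Item:
    if item_id in db:
        raise HTTPException(status_code=409, detail="Item already exists")
    db[item_id] = item
    tasks.add_task(log_created, item_id)
    return item


@app.get("/health")
def health() -> dict[str, str]:
    # Health monitoring endpoint
    return {"status": "ok"}
```

In a generated project you don't wire this up yourself; `make start` runs the app in development mode with auto-reload (see the commands section below).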
Advanced - Enterprise Production Template
Best for: Production applications with enterprise features
Key Features:
- JWT Authentication - Registration, login, protected routes (see the sketch after this list)
- Database Integration - SQLAlchemy 2.0 async (SQLite/PostgreSQL)
- Rate Limiting - DDoS protection per endpoint
- Caching System - In-memory + Redis integration ready
- WebSocket Support - Real-time communication
- File Upload - Secure handling + cloud storage ready
- Enhanced Security - CORS, validation, production patterns
- Full Test Suite - Auth, CRUD, WebSocket, integration
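A minimal sketch of a JWT-protected route plus a WebSocket endpoint, assuming PyJWT for token decoding; the generated template's actual auth flow, models, and structure will differ:

```python
# Hypothetical sketch of a protected route and a WebSocket echo endpoint.
from fastapi import Depends, FastAPI, HTTPException, WebSocket, WebSocketDisconnect, status
from fastapi.security import OAuth2PasswordBearer
import jwt  # PyJWT, assumed here; the template may use a different JWT library

SECRET_KEY = "change-me"  # placeholder; never hard-code secrets in real apps
app = FastAPI()
oauth2_scheme = OAuth2PasswordBearer(tokenUrl="auth/login")


def get_current_user(token: str = Depends(oauth2_scheme)) -> str:
    try:
        payload = jwt.decode(token, SECRET_KEY, algorithms=["HS256"])
    except jwt.PyJWTError:
        raise HTTPException(status_code=status.HTTP_401_UNAUTHORIZED,
                            detail="Invalid token")
    username = payload.get("sub")
    if not username:
        raise HTTPException(status_code=status.HTTP_401_UNAUTHORIZED,
                            detail="Invalid token")
    return username


@app.get("/users/me")
def read_me(user: str = Depends(get_current_user)) -> dict[str, str]:
    # Protected route: requires a valid Bearer token
    return {"username": user}


@app.websocket("/ws")
async def ws_echo(websocket: WebSocket) -> None:
    # Minimal real-time communication channel
    await websocket.accept()
    try:
        while True:
            message = await websocket.receive_text()
            await websocket.send_text(f"echo: {message}")
    except WebSocketDisconnect:
        pass
```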
NLP - Comprehensive AI Language Processing
Best for: AI applications with natural language processing
Key Features:
- 8 NLP Capabilities - Summarization, NER, generation, QA, embeddings, sentiment, classification, similarity
- Production Architecture - Startup model loading, device auto-detection (see the sketch after this list)
- Smart Configuration - Environment-based config, multiple models
- Performance Optimized - Model caching, concurrent handling, hardware acceleration
- Production Monitoring - Health checks, model status, logging
- Real AI Testing - Actual model inference validation
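A rough sketch of the startup-loading pattern with Hugging Face transformers and a single sentiment endpoint; the actual template covers more tasks and richer configuration:

```python
# Hypothetical sketch: load a model once at startup, detect the device,
# and serve inference from the cached pipeline.
from contextlib import asynccontextmanager
from typing import Any

import torch
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

models: dict[str, Any] = {}


@asynccontextmanager
async def lifespan(app: FastAPI):
    # Load once at startup; use the GPU automatically when available
    device = 0 if torch.cuda.is_available() else -1
    models["sentiment"] = pipeline("sentiment-analysis", device=device)
    yield
    models.clear()


app = FastAPI(lifespan=lifespan)


class TextIn(BaseModel):
    text: str


@app.post("/sentiment")
def sentiment(payload: TextIn) -> dict[str, Any]:
    # Real model inference on the cached pipeline
    result = models["sentiment"](payload.text)[0]
    return {"label": result["label"], "score": float(result["score"])}
```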
LangChain - Modern LLM Integration
Best for: Applications using LangChain for LLM workflows
Key Features:
- Optimized Loading - Startup caching, memory management
- Modern Patterns - Latest LangChain best practices
- Smart Config - Auto device detection (CPU/GPU)
- Production Ready - Health checks, monitoring, error handling
- Real Testing - Actual model inference tests
- Dual Endpoints - Text generation & question answering (a minimal chain sketch follows this list)
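A minimal sketch assuming the langchain-huggingface integration and an LCEL-style chain; the template's actual models, prompts, and endpoints will differ:

```python
# Hypothetical sketch: a small local model wrapped in a LangChain chain,
# exposed through a FastAPI endpoint.
from fastapi import FastAPI
from langchain_core.prompts import PromptTemplate
from langchain_huggingface import HuggingFacePipeline
from pydantic import BaseModel

app = FastAPI()

# Small local model loaded once at startup/import time
llm = HuggingFacePipeline.from_model_id(
    model_id="gpt2",
    task="text-generation",
    pipeline_kwargs={"max_new_tokens": 64},
)
prompt = PromptTemplate.from_template("Answer briefly: {question}")
chain = prompt | llm  # LCEL: prompt piped into the model


class Question(BaseModel):
    question: str


@app.post("/qa")
def answer(payload: Question) -> dict[str, str]:
    return {"answer": chain.invoke({"question": payload.question})}
```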
Llama - Local LLM Powerhouse
Best for: Local LLM inference with llama-cpp-python
Key Features:
- Local LLM Focus - Optimized for Gemma/Llama GGUF models (see the sketch below)
- GPU Acceleration - Auto GPU detection, configurable layers
- Advanced Config - Context windows, threading, performance tuning
- Production Ready - Lifecycle management, health monitoring
- Real Testing - Actual model inference validation
- Easy Setup - Auto model download, optimized defaults
Requirements: ~4GB model download + 4GB+ RAM
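A minimal sketch using llama-cpp-python with a placeholder GGUF path; the generated template adds model download, lifecycle management, and performance tuning on top of this:

```python
# Hypothetical sketch of local GGUF inference served via FastAPI.
from fastapi import FastAPI
from llama_cpp import Llama
from pydantic import BaseModel

app = FastAPI()

llm = Llama(
    model_path="models/model.gguf",  # placeholder path to a downloaded GGUF model
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to the GPU when available
    n_threads=4,
)


class Prompt(BaseModel):
    prompt: str
    max_tokens: int = 128


@app.post("/generate")
def generate(payload: Prompt) -> dict[str, str]:
    out = llm(payload.prompt, max_tokens=payload.max_tokens)
    return {"text": out["choices"][0]["text"]}
```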
| Template | Best For | Complexity | AI/ML | Database | Auth |
|---|---|---|---|---|---|
| Hello World | Learning, Simple APIs | ⭐ | ❌ | ❌ | ❌ |
| Advanced | Production Apps | ⭐⭐⭐ | ❌ | ✅ | ✅ |
| NLP | AI Text Processing | ⭐⭐⭐⭐ | ✅ | ❌ | ❌ |
| LangChain | LLM Workflows | ⭐⭐⭐⭐ | ✅ | ❌ | ❌ |
| Llama | Local LLM | ⭐⭐⭐⭐⭐ | ✅ | ❌ | ❌ |
Zero Configuration • Production Patterns • Complete Testing • Code Quality • Auto Documentation • Deployment Ready
Focus on Your Code, Not Setup
All dependencies (FastAPI, Pydantic, Pytest, etc.) are preconfigured. Just create and run:
fastapi-gen my_app # Create
cd my_app # Enter
make start # Run!
Every Template Includes:
- Ready-to-run development environment
- Industry-standard project structure
- Comprehensive test suites with examples (see the sketch after this list)
- Ruff linting and formatting
- Auto-generated OpenAPI documentation
- Makefile with common development commands
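For instance, a test in a generated suite might look roughly like this (the `main` import path is an assumption, not the template's guaranteed module name):

```python
# Hypothetical sketch of a test against the built-in health endpoint.
from fastapi.testclient import TestClient

from main import app  # assumed entry module; adjust to the generated layout


def test_health_returns_ok():
    client = TestClient(app)
    response = client.get("/health")
    assert response.status_code == 200
```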
You'll need Python 3.12 or later on your local development machine. We recommend using the latest stable release. You can use pyenv (macOS/Linux) to switch Python versions between different projects.
Hello World template (default):
pip install fastapi-gen
fastapi-gen my_app
or
pip install fastapi-gen
fastapi-gen my_app --template hello_world

Advanced template:
pip install fastapi-gen
fastapi-gen my_app --template advanced

NLP template:
pip install fastapi-gen
fastapi-gen my_app --template nlp

LangChain template:
pip install fastapi-gen
fastapi-gen my_app --template Langchain

Llama template:
pip install fastapi-gen
fastapi-gen my_app --template llama
Inside the newly created project, you can run some built-in commands:
`make start`
Runs the app in development mode.
Open http://localhost:8000/docs to view the OpenAPI documentation in your browser.
The page will automatically reload when you make changes to the code.

`make test`
Runs tests.
By default, runs tests related to files changed since the last commit.
fastapi-gen is distributed under the terms of the MIT license.