A platform to design, test, and deploy LangChain agents visually, while capturing every artefact in version-controlled object storage and authenticating users through MongoDB.
- `frontend/`: Frontend code for the visual playground
- `backend/`: Backend API and services
- `infra/`: Infrastructure as code and deployment configurations
- `docs/`: Project documentation
```bash
# Clone the repository
git clone https://github.com/yourusername/langchain-playground.git
cd langchain-playground

# Create and configure your environment file
cp .env.template .env
# Edit .env with your preferred settings

# Start all services
docker-compose up
```
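For orientation, a Compose file along the following lines would wire the services together. The service names match those used later in this README, but the images, ports, and build contexts shown here are illustrative assumptions, not the project's actual `docker-compose.yml`:

```yaml
# Illustrative sketch only — see the repository's real docker-compose.yml.
services:
  backend:
    build: ./backend
    ports:
      - "8000:8000"     # Flask API (port used throughout this README)
    env_file: .env
    depends_on: [mongodb, minio]
  frontend:
    build: ./frontend
    ports:
      - "3000:3000"     # Next.js UI
  mongodb:
    image: mongo:6
    ports:
      - "27017:27017"
  minio:
    image: minio/minio
    command: server /data
    ports:
      - "9000:9000"     # assumed default MinIO port
  jupyter:
    image: jupyter/base-notebook
```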
The `.env` file contains important configuration options:
- **OpenAI API (default):**

  ```bash
  LLM_PROVIDER=openai
  OPENAI_API_KEY=your-openai-api-key
  ```

- **Local LLM:**

  ```bash
  LLM_PROVIDER=local
  LOCAL_LLM_URL=http://localhost:8000/v1  # URL for local LLM API (e.g., LM Studio, Ollama)
  LOCAL_LLM_MODEL=llama2                  # Model name for local LLM
  ```

- **MongoDB Authentication (default):**

  ```bash
  USE_MONGODB=true
  MONGO_URI=mongodb://localhost:27017/langchain-playground
  # Optional: MONGO_USER, MONGO_PASSWORD
  ```

- **In-Memory Authentication:**

  ```bash
  USE_MONGODB=false
  ```

  Note: User accounts will be lost when the application is restarted.
- Visual LangChain agent builder
- S3-compatible storage for artefacts (MinIO)
- User authentication with MongoDB (optional; can run with in-memory authentication)
- Support for both OpenAI API and local LLMs
- Jupyter integration for prototyping
- YAML export for portable agent definitions
- Streaming responses for real-time token generation
- API documentation with Swagger UI
To use a local LLM instead of OpenAI:
1. Set up a local LLM server that implements the OpenAI API format (e.g., LM Studio or Ollama).

2. Configure your `.env` file:

   ```bash
   LLM_PROVIDER=local
   LOCAL_LLM_URL=http://localhost:8000/v1
   LOCAL_LLM_MODEL=llama2
   ```

3. Start your local LLM server according to its documentation.

4. Start the LangChain Playground application.
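Conceptually, the backend's provider selection comes down to a switch on `LLM_PROVIDER`. The helper below is a hedged sketch of that idea — the function name, defaults, and return shape are assumptions for illustration, not the project's actual code:

```python
import os

def resolve_llm_config(env=os.environ):
    """Pick an LLM endpoint based on .env-style settings (illustrative sketch)."""
    provider = env.get("LLM_PROVIDER", "openai")
    if provider == "local":
        return {
            # Local servers expose an OpenAI-compatible API at this base URL
            "base_url": env.get("LOCAL_LLM_URL", "http://localhost:8000/v1"),
            "model": env.get("LOCAL_LLM_MODEL", "llama2"),
            "api_key": "not-needed",  # local servers typically ignore the key
        }
    return {
        "base_url": "https://api.openai.com/v1",
        "model": env.get("OPENAI_MODEL", "gpt-3.5-turbo"),  # assumed variable name
        "api_key": env.get("OPENAI_API_KEY", ""),
    }
```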
If you don't need persistent user authentication, you can run the application without MongoDB:
1. Configure your `.env` file:

   ```bash
   USE_MONGODB=false
   ```

2. Start the application with Docker Compose:

   ```bash
   docker-compose up backend frontend minio jupyter
   ```

This will use an in-memory user store instead of MongoDB. Note that user accounts will be lost when the application is restarted.
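Conceptually, the in-memory fallback behaves like the sketch below: users live in a plain dict keyed by username, with salted password hashes, and vanish when the process exits. This is illustrative only, not the project's implementation:

```python
import hashlib
import os

class InMemoryUserStore:
    """Illustrative sketch of the USE_MONGODB=false fallback."""

    def __init__(self):
        self._users = {}  # username -> (salt, password digest); lost on restart

    def register(self, username, password):
        if username in self._users:
            raise ValueError("user already exists")
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        self._users[username] = (salt, digest)

    def verify(self, username, password):
        entry = self._users.get(username)
        if entry is None:
            return False
        salt, digest = entry
        return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000) == digest
```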
See the tasks list for current development status and upcoming features.
The backend API is documented using Swagger UI, which provides an interactive interface to explore and test the API endpoints.
When the application is running, you can access the Swagger UI at:
http://localhost:8000/api/docs/
The Swagger UI provides:
- A list of all available API endpoints
- Request parameters and body schemas
- Response schemas and examples
- The ability to try out API calls directly from the browser
The API is organized into the following categories:
- Authentication: User registration, login, token refresh, and user information
- Chat: Text generation and streaming endpoints
- Graph: LangChain graph execution
- System: Health check and monitoring endpoints
You can run the backend locally without Docker using the provided script:
```bash
# Make sure the script is executable
chmod +x run_backend_locally.sh

# Run the backend
./run_backend_locally.sh
```
This script will:
- Create a Python virtual environment if it doesn't exist
- Install all required dependencies (including watchdog for file monitoring)
- Set up the necessary environment variables from your .env file
- Start the Flask backend server with auto-reloading enabled
The backend will be available at http://localhost:8000.
The server includes a file watcher that automatically detects code changes and reloads the application, similar to how uvicorn works with FastAPI applications. This means you can edit your code and see the changes immediately without having to manually restart the server.
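The reload trigger boils down to noticing when source files change. A minimal polling sketch of that idea follows; the real script uses watchdog's event-driven observers rather than polling, so this is purely illustrative:

```python
import os

def changed_files(paths, since):
    """Return the files modified after timestamp `since` — a simplified,
    polling-based stand-in for what the watchdog-powered reloader reacts to."""
    return [p for p in paths if os.path.getmtime(p) > since]
```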
Note: You'll need to have Python 3 installed on your system. Also, make sure you've configured your .env file properly before running the script.
You can run the frontend locally without Docker using the provided script:
```bash
# Make sure the script is executable
chmod +x run_frontend_locally.sh

# Run the frontend
./run_frontend_locally.sh
```
This script will:
- Install all required dependencies
- Set up the necessary environment variables from your .env file
- Build the Next.js application
- Start the Next.js server in production mode
The frontend will be available at http://localhost:3000.
Note: You'll need to have Node.js installed on your system. Also, make sure you've configured your .env file properly before running the script.
This project uses DVC (Data Version Control) to manage large notebook files and datasets. DVC tracks these files in MinIO storage instead of Git, keeping your repository lightweight.
DVC is already configured to use MinIO as remote storage. The configuration is in `.dvc/config`.
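For reference, a `.dvc/config` pointing at a MinIO (S3-compatible) remote looks roughly like this; the remote name, bucket, and endpoint below are illustrative assumptions, not the project's actual values:

```ini
[core]
    remote = minio
['remote "minio"']
    url = s3://notebooks
    endpointurl = http://localhost:9000
```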
1. Create a new notebook:

   ```bash
   # Create your notebook in the notebooks directory
   jupyter notebook notebooks/your_notebook.ipynb
   ```

2. Track the notebook with DVC:

   ```bash
   # Add the notebook to DVC
   dvc add notebooks/your_notebook.ipynb

   # Add the .dvc file to Git
   git add notebooks/your_notebook.ipynb.dvc
   git commit -m "Add notebook tracking file"
   ```

3. Push the notebook to MinIO storage:

   ```bash
   # Push to MinIO
   dvc push
   ```

4. Pull notebooks from MinIO storage:

   ```bash
   # Pull from MinIO
   dvc pull
   ```

5. Update a notebook:

   ```bash
   # After making changes to your notebook
   dvc add notebooks/your_notebook.ipynb
   git add notebooks/your_notebook.ipynb.dvc
   git commit -m "Update notebook tracking file"
   dvc push
   ```
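For context, the small `.ipynb.dvc` pointer file that Git tracks looks roughly like the fragment below; the hash and size are placeholders that DVC generates for you:

```yaml
outs:
- md5: 1a2b3c4d5e6f7a8b9c0d1e2f3a4b5c6d
  size: 48213
  path: your_notebook.ipynb
```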
To export a notebook as a Python script:

- Open the notebook in Jupyter
- Go to File → Download as → Python (.py)
- Save the Python script to the appropriate location in the project

Alternatively, you can export from the command line with `jupyter nbconvert --to script notebooks/your_notebook.ipynb`.
The LangChain Playground includes several security features to protect your data and monitor for suspicious activities:
All security-sensitive operations are logged with detailed information:
- Authentication events (login, logout, token refresh)
- Data access operations
- Administrative actions
Audit logs include:
- Timestamp
- User ID
- Client IP address
- User agent
- Success/failure status
- Detailed event information
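A single audit entry might look like the following; the field names mirror the list above, but the exact schema of the project's logs may differ:

```json
{
  "timestamp": "2024-05-01T12:34:56Z",
  "event": "auth.login",
  "user_id": "6630f1e2a9b4c8d7e6f5a4b3",
  "client_ip": "203.0.113.42",
  "user_agent": "Mozilla/5.0",
  "success": true,
  "details": {"method": "password"}
}
```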
The application includes real-time monitoring for suspicious activities:
- Brute force attack detection
- Account takeover attempt detection
- Suspicious access pattern detection
When suspicious activities are detected, alerts are sent via:
- Email (if configured)
- Slack (if configured)
- Security log file
The CI/CD pipeline includes comprehensive security scanning:
- CodeQL for code scanning (Python and JavaScript)
- Dependency scanning for vulnerable packages
- Container scanning with Trivy
- Secret scanning with TruffleHog
Security features can be configured in the `.env` file:
```bash
# Email Alerts
SMTP_SERVER=smtp.example.com
SMTP_PORT=587
SMTP_USERNAME=alerts@example.com
SMTP_PASSWORD=your-smtp-password
ALERT_EMAIL=security@example.com

# Slack Alerts
SLACK_WEBHOOK_URL=https://hooks.slack.com/services/your-webhook-url
```
MIT