LangChain Playground

A platform to design, test, and deploy LangChain agents visually, while capturing every artefact in version-controlled object storage and authenticating users through MongoDB.

Project Structure

  • frontend/: Frontend code for the visual playground
  • backend/: Backend API and services
  • infra/: Infrastructure as code and deployment configurations
  • docs/: Project documentation

Quick Start

# Clone the repository
git clone https://github.com/yourusername/langchain-playground.git
cd langchain-playground

# Create and configure your environment file
cp .env.template .env
# Edit .env with your preferred settings

# Start all services
docker-compose up

Environment Configuration

The .env file contains important configuration options:

LLM Configuration (choose one)

  • OpenAI API (default):

    LLM_PROVIDER=openai
    OPENAI_API_KEY=your-openai-api-key
    
  • Local LLM:

    LLM_PROVIDER=local
    LOCAL_LLM_URL=http://localhost:8000/v1  # URL for local LLM API (e.g., LM Studio, Ollama)
    LOCAL_LLM_MODEL=llama2  # Model name for local LLM
    

Authentication (choose one)

  • MongoDB Authentication (default):

    USE_MONGODB=true
    MONGO_URI=mongodb://localhost:27017/langchain-playground
    # Optional: MONGO_USER, MONGO_PASSWORD
    
  • In-Memory Authentication:

    USE_MONGODB=false
    

    Note: User accounts will be lost when the application is restarted.
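A minimal sketch of how the backend might read these variables at startup. The helper names are hypothetical (not taken from the repository); only the environment variable names and defaults come from the configuration above:

```python
import os

def resolve_llm_config(env=os.environ):
    """Pick the LLM backend from environment variables (hypothetical helper)."""
    provider = env.get("LLM_PROVIDER", "openai")
    if provider == "local":
        return {
            "provider": "local",
            "base_url": env.get("LOCAL_LLM_URL", "http://localhost:8000/v1"),
            "model": env.get("LOCAL_LLM_MODEL", "llama2"),
        }
    return {"provider": "openai", "api_key": env.get("OPENAI_API_KEY")}

def resolve_auth_config(env=os.environ):
    """Choose MongoDB or in-memory authentication (hypothetical helper)."""
    if env.get("USE_MONGODB", "true").lower() == "true":
        return {
            "store": "mongodb",
            "uri": env.get("MONGO_URI", "mongodb://localhost:27017/langchain-playground"),
        }
    return {"store": "memory"}  # accounts are lost on restart
```

Passing an explicit dict instead of `os.environ` keeps the selection logic easy to unit-test.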

Features

  • Visual LangChain agent builder
  • S3-compatible storage for artefacts (MinIO)
  • User authentication with MongoDB (optional; can fall back to in-memory authentication)
  • Support for both OpenAI API and local LLMs
  • Jupyter integration for prototyping
  • YAML export for portable agent definitions
  • Streaming responses for real-time token generation
  • API documentation with Swagger UI

Using Local LLMs

To use a local LLM instead of OpenAI:

  1. Set up a local LLM server that exposes an OpenAI-compatible API (e.g., LM Studio or Ollama)

  2. Configure your .env file:

    LLM_PROVIDER=local
    LOCAL_LLM_URL=http://localhost:8000/v1
    LOCAL_LLM_MODEL=llama2
    
  3. Start your local LLM server according to its documentation

  4. Start the LangChain Playground application
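Because the local server speaks the OpenAI API format, any OpenAI-compatible client can talk to it directly. A sketch using the `openai` Python SDK with the `.env` values above — `main()` is only meant to be called once the local server is actually listening:

```python
import os

def local_llm_settings(env=os.environ):
    """Read the local-LLM connection settings from the environment."""
    return {
        "base_url": env.get("LOCAL_LLM_URL", "http://localhost:8000/v1"),
        "model": env.get("LOCAL_LLM_MODEL", "llama2"),
    }

def main():
    # Requires `pip install openai` and a running OpenAI-compatible server.
    from openai import OpenAI

    cfg = local_llm_settings()
    # Local servers typically ignore the API key, but the SDK requires one.
    client = OpenAI(base_url=cfg["base_url"], api_key="not-needed")
    reply = client.chat.completions.create(
        model=cfg["model"],
        messages=[{"role": "user", "content": "Hello from LangChain Playground"}],
    )
    print(reply.choices[0].message.content)
```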

Running Without MongoDB

If you don't need persistent user authentication, you can run the application without MongoDB:

  1. Configure your .env file:

    USE_MONGODB=false
    
  2. Start the application with Docker Compose:

    docker-compose up backend frontend minio jupyter

This will use an in-memory user store instead of MongoDB. Note that user accounts will be lost when the application is restarted.
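Conceptually, the in-memory store behaves like the following sketch — a dict keyed by username with salted password hashes. This is illustrative, not the repository's actual implementation:

```python
import hashlib
import hmac
import os

class InMemoryUserStore:
    """Dict-backed user store; all accounts disappear when the process exits."""

    def __init__(self):
        self._users = {}  # username -> (salt, password_hash)

    def register(self, username, password):
        if username in self._users:
            raise ValueError("user already exists")
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        self._users[username] = (salt, digest)

    def verify(self, username, password):
        entry = self._users.get(username)
        if entry is None:
            return False
        salt, digest = entry
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        return hmac.compare_digest(candidate, digest)
```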

Development

See the tasks list for current development status and upcoming features.

API Documentation

The backend API is documented using Swagger UI, which provides an interactive interface to explore and test the API endpoints.

Accessing Swagger UI

When the application is running, you can access the Swagger UI at:

http://localhost:8000/api/docs/

The Swagger UI provides:

  • A list of all available API endpoints
  • Request parameters and body schemas
  • Response schemas and examples
  • The ability to try out API calls directly from the browser

API Endpoints

The API is organized into the following categories:

  • Authentication: User registration, login, token refresh, and user information
  • Chat: Text generation and streaming endpoints
  • Graph: LangChain graph execution
  • System: Health check and monitoring endpoints
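The streaming chat endpoint delivers tokens incrementally. A client sketch assuming a server-sent-events style wire format — the endpoint path and the payload field names here are guesses; check the Swagger UI for the real schema. `main()` is only meant to run against a live backend:

```python
import json

def parse_sse_line(line):
    """Parse one server-sent-events line into a payload dict; return None for non-data lines."""
    line = line.strip()
    if not line.startswith("data:"):
        return None
    payload = line[len("data:"):].strip()
    if payload == "[DONE]":
        return None
    return json.loads(payload)

def main():
    # Requires `pip install requests` and the backend running on localhost:8000.
    import requests

    # Hypothetical path and fields -- confirm against /api/docs/.
    resp = requests.post(
        "http://localhost:8000/api/chat/stream",
        json={"message": "Hello"},
        stream=True,
    )
    for raw in resp.iter_lines(decode_unicode=True):
        chunk = parse_sse_line(raw or "")
        if chunk:
            print(chunk.get("token", ""), end="", flush=True)
```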

Running Backend Locally

You can run the backend locally without Docker using the provided script:

# Make sure the script is executable
chmod +x run_backend_locally.sh

# Run the backend
./run_backend_locally.sh

This script will:

  1. Create a Python virtual environment if it doesn't exist
  2. Install all required dependencies (including watchdog for file monitoring)
  3. Set up the necessary environment variables from your .env file
  4. Start the Flask backend server with auto-reloading enabled

The backend will be available at http://localhost:8000.

The server includes a file watcher that automatically detects code changes and reloads the application, similar to how uvicorn works with FastAPI applications. This means you can edit your code and see the changes immediately without having to manually restart the server.

Note: You'll need to have Python 3 installed on your system. Also, make sure you've configured your .env file properly before running the script.

Running Frontend Locally

You can run the frontend locally without Docker using the provided script:

# Make sure the script is executable
chmod +x run_frontend_locally.sh

# Run the frontend
./run_frontend_locally.sh

This script will:

  1. Install all required dependencies
  2. Set up the necessary environment variables from your .env file
  3. Build the Next.js application
  4. Start the Next.js server in production mode

The frontend will be available at http://localhost:3000.

Note: You'll need to have Node.js installed on your system. Also, make sure you've configured your .env file properly before running the script.

Notebook Version Control with DVC

This project uses DVC (Data Version Control) to manage large notebook files and datasets. DVC tracks these files in MinIO storage instead of Git, keeping your repository lightweight.

Setup

DVC is already configured to use MinIO as remote storage. The configuration is in .dvc/config.

Working with Notebooks

  1. Create a new notebook:

    # Create your notebook in the notebooks directory
    jupyter notebook notebooks/your_notebook.ipynb
  2. Track the notebook with DVC:

    # Add the notebook to DVC
    dvc add notebooks/your_notebook.ipynb
    
    # Add the .dvc file to Git
    git add notebooks/your_notebook.ipynb.dvc
    git commit -m "Add notebook tracking file"
  3. Push notebook to MinIO storage:

    # Push to MinIO
    dvc push
  4. Pull notebooks from MinIO storage:

    # Pull from MinIO
    dvc pull
  5. Update a notebook:

    # After making changes to your notebook
    dvc add notebooks/your_notebook.ipynb
    git add notebooks/your_notebook.ipynb.dvc
    git commit -m "Update notebook tracking file"
    dvc push

Export as Python Script

To export a notebook as a Python script:

  1. Open the notebook in Jupyter
  2. Go to File → Download as → Python (.py)
  3. Save the Python script to the appropriate location in the project

Security Features

The LangChain Playground includes several security features to protect your data and monitor for suspicious activities:

Audit Logging

All security-sensitive operations are logged with detailed information:

  • Authentication events (login, logout, token refresh)
  • Data access operations
  • Administrative actions

Audit logs include:

  • Timestamp
  • User ID
  • Client IP address
  • User agent
  • Success/failure status
  • Detailed event information
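An audit entry carrying those fields might look like the following sketch. The schema is illustrative, not the repository's actual log format:

```python
import datetime
import json

def build_audit_record(event, user_id, client_ip, user_agent, success, details=None):
    """Assemble one audit-log entry with the fields listed above."""
    return {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "event": event,
        "user_id": user_id,
        "client_ip": client_ip,
        "user_agent": user_agent,
        "success": success,
        "details": details or {},
    }

record = build_audit_record("login", "user-42", "203.0.113.5", "curl/8.0", True)
print(json.dumps(record, indent=2))
```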

Security Monitoring

The application includes real-time monitoring for suspicious activities:

  • Brute force attack detection
  • Account takeover attempt detection
  • Suspicious access pattern detection

When suspicious activities are detected, alerts are sent via:

  • Email (if configured)
  • Slack (if configured)
  • Security log file

CI/CD Security Scanning

The CI/CD pipeline includes comprehensive security scanning:

  • CodeQL for code scanning (Python and JavaScript)
  • Dependency scanning for vulnerable packages
  • Container scanning with Trivy
  • Secret scanning with TruffleHog

Configuration

Security features can be configured in the .env file:

# Email Alerts
SMTP_SERVER=smtp.example.com
SMTP_PORT=587
SMTP_USERNAME=alerts@example.com
SMTP_PASSWORD=your-smtp-password
ALERT_EMAIL=security@example.com

# Slack Alerts
SLACK_WEBHOOK_URL=https://hooks.slack.com/services/your-webhook-url

License

MIT
