
Crypto LLM Backend

A Temporal-orchestrated backend service for cryptocurrency news analysis using ChatGPT and Perplexity AI.

Overview

This service receives a cryptocurrency ticker symbol (BTC, ETH, or APT), generates analytical questions with ChatGPT, retrieves answers with Perplexity AI, formats the results, and delivers them over the requested channel (webhook, email, or Slack).

Architecture

  • Workflow Orchestration: Temporal.io (Python SDK)
  • AI APIs: OpenAI GPT-4, Perplexity AI
  • Database: PostgreSQL (persistence), Redis (caching)
  • Containerization: Docker + docker-compose
  • Monitoring: Temporal Web UI, structured logging, health checks

Components

Core Services

  • LLM Workflow Service: Main orchestrator handling the analysis workflow
  • Activities: Validation, ChatGPT question generation, Perplexity search, formatting, and delivery
  • API Service: REST API for API Gateway integration

Data Flow

  1. Validation: Validate ticker symbol (BTC/ETH/APT)
  2. Question Generation: Generate 3 analytical questions using ChatGPT
  3. Answer Search: Get answers for each question via Perplexity AI (parallel)
  4. Formatting: Format results for different delivery channels
  5. Delivery: Deliver via webhook/email/Slack with retry logic
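
For orientation, here is a minimal sketch of this pipeline as a Temporal workflow in Python. The activity names, the request dataclass, and the timeouts are illustrative assumptions, not the repository's actual definitions.

import asyncio
from dataclasses import asdict, dataclass
from datetime import timedelta

from temporalio import workflow


@dataclass
class AnalysisRequest:
    ticker: str
    delivery_type: str
    delivery_destination: str


@workflow.defn
class CryptoNewsAnalysisWorkflow:
    @workflow.run
    async def run(self, request: AnalysisRequest) -> dict:
        # 1. Validate the ticker symbol (BTC/ETH/APT)
        await workflow.execute_activity(
            "validate_ticker", request.ticker,
            start_to_close_timeout=timedelta(seconds=10),
        )
        # 2. Generate three analytical questions with ChatGPT
        questions = await workflow.execute_activity(
            "generate_questions", request.ticker,
            start_to_close_timeout=timedelta(seconds=30),
        )
        # 3. Answer each question via Perplexity AI, in parallel
        answers = await asyncio.gather(*[
            workflow.execute_activity(
                "search_answer", question,
                start_to_close_timeout=timedelta(seconds=60),
            )
            for question in questions
        ])
        # 4. Format results for the requested delivery channel
        formatted = await workflow.execute_activity(
            "format_results", {"ticker": request.ticker, "answers": answers},
            start_to_close_timeout=timedelta(seconds=10),
        )
        # 5. Deliver via webhook/email/Slack (retries handled by Temporal)
        await workflow.execute_activity(
            "deliver_results", {"payload": formatted, "request": asdict(request)},
            start_to_close_timeout=timedelta(seconds=30),
        )
        return {"ticker": request.ticker, "answers": answers}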

Quick Start

Prerequisites

  • Docker and docker-compose
  • OpenAI API key
  • Perplexity API key

Setup

  1. Clone the repository and navigate to the crypto-llm-backend directory
  2. Copy environment configuration:
    cp .env.example .env
  3. Edit .env with your API keys and configuration
  4. Start all services:
    docker compose up -d

Services

API Endpoints

Start Workflow

POST /workflows/start
Content-Type: application/json

{
  "ticker": "BTC",
  "delivery_type": "webhook",
  "delivery_destination": "https://your-api.com/crypto-webhook",
  "execution_type": "adhoc",
  "metadata": {"priority": "high"}
}
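
As a usage sketch, the endpoint can be called from Python with any HTTP client. The base URL (http://localhost:8000) and the exact response fields are assumptions; adjust them to your deployment.

import requests

# Hypothetical base URL for the API service; change to match your deployment.
response = requests.post(
    "http://localhost:8000/workflows/start",
    json={
        "ticker": "BTC",
        "delivery_type": "webhook",
        "delivery_destination": "https://your-api.com/crypto-webhook",
        "execution_type": "adhoc",
        "metadata": {"priority": "high"},
    },
    timeout=30,
)
response.raise_for_status()
print(response.json())  # expected to contain the workflow_id used by the endpoints below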

Get Workflow Status

GET /workflows/{workflow_id}/status

Get Workflow Result

GET /workflows/{workflow_id}/result
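
A caller might poll the status endpoint and fetch the result once the workflow finishes. The status strings ("completed", "failed") match the filter values used elsewhere in this README; the response field names are otherwise assumptions.

import time

import requests

BASE_URL = "http://localhost:8000"  # assumed API service address


def wait_for_result(workflow_id: str, poll_seconds: int = 5, timeout_seconds: int = 180) -> dict:
    """Poll /status until the workflow completes, then return /result."""
    deadline = time.time() + timeout_seconds
    while time.time() < deadline:
        status = requests.get(f"{BASE_URL}/workflows/{workflow_id}/status", timeout=10).json()
        if status.get("status") == "completed":
            return requests.get(f"{BASE_URL}/workflows/{workflow_id}/result", timeout=10).json()
        if status.get("status") == "failed":
            raise RuntimeError(f"workflow {workflow_id} failed: {status}")
        time.sleep(poll_seconds)
    raise TimeoutError(f"workflow {workflow_id} did not finish within {timeout_seconds}s")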

List Workflows

GET /workflows?ticker=BTC&status=completed&limit=10

Health Check

GET /health

Metrics

GET /metrics

Configuration

Environment Variables

Temporal Configuration

  • TEMPORAL_HOST: Temporal server host (default: localhost:7233)
  • TEMPORAL_NAMESPACE: Temporal namespace (default: default)
  • TEMPORAL_TASK_QUEUE: Task queue name (default: crypto-news-analysis)

AI Service Configuration

  • OPENAI_API_KEY: Your OpenAI API key (required)
  • PERPLEXITY_API_KEY: Your Perplexity API key (required)

Database Configuration

  • POSTGRES_HOST: PostgreSQL host (default: localhost)
  • POSTGRES_PORT: PostgreSQL port (default: 5432)
  • POSTGRES_DB: Database name (default: crypto_news)
  • POSTGRES_USER: Database user (default: postgres)
  • POSTGRES_PASSWORD: Database password (required)

Redis Configuration

  • REDIS_HOST: Redis host (default: localhost)
  • REDIS_PORT: Redis port (default: 6379)
  • REDIS_DB: Redis database number (default: 0)

Delivery Configuration

  • WEBHOOK_TIMEOUT: Webhook timeout in seconds (default: 30)
  • SLACK_BOT_TOKEN: Slack bot token (optional)
  • SMTP_HOST: SMTP server host (default: smtp.gmail.com)
  • SMTP_PORT: SMTP server port (default: 587)
  • SMTP_USERNAME: SMTP username (optional)
  • SMTP_PASSWORD: SMTP password (optional)

Application Configuration

  • LOG_LEVEL: Logging level (default: INFO)
  • MAX_CONCURRENT_WORKFLOWS: Max concurrent workflows (default: 10)
  • WORKFLOW_TIMEOUT: Workflow timeout in seconds (default: 120)
  • CACHE_TTL_CHATGPT: ChatGPT cache TTL in seconds (default: 86400)
  • CACHE_TTL_PERPLEXITY: Perplexity cache TTL in seconds (default: 3600)
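
As an illustration of how these variables map to runtime settings, a loader might read them with the documented defaults. This is a sketch only; the repository's actual configuration module may differ.

import os


def load_settings() -> dict:
    """Read the environment variables documented above, applying the same defaults."""
    return {
        "temporal_host": os.getenv("TEMPORAL_HOST", "localhost:7233"),
        "temporal_namespace": os.getenv("TEMPORAL_NAMESPACE", "default"),
        "temporal_task_queue": os.getenv("TEMPORAL_TASK_QUEUE", "crypto-news-analysis"),
        "openai_api_key": os.environ["OPENAI_API_KEY"],          # required; raises KeyError if unset
        "perplexity_api_key": os.environ["PERPLEXITY_API_KEY"],  # required; raises KeyError if unset
        "postgres_host": os.getenv("POSTGRES_HOST", "localhost"),
        "postgres_port": int(os.getenv("POSTGRES_PORT", "5432")),
        "postgres_db": os.getenv("POSTGRES_DB", "crypto_news"),
        "redis_host": os.getenv("REDIS_HOST", "localhost"),
        "redis_port": int(os.getenv("REDIS_PORT", "6379")),
        "redis_db": int(os.getenv("REDIS_DB", "0")),
        "workflow_timeout": int(os.getenv("WORKFLOW_TIMEOUT", "120")),
        "cache_ttl_chatgpt": int(os.getenv("CACHE_TTL_CHATGPT", "86400")),
        "cache_ttl_perplexity": int(os.getenv("CACHE_TTL_PERPLEXITY", "3600")),
    }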

Delivery Channels

Webhook

{
  "ticker": "BTC",
  "delivery_type": "webhook",
  "delivery_destination": "https://your-api.com/webhook"
}

Email

{
  "ticker": "BTC", 
  "delivery_type": "email",
  "delivery_destination": "user@example.com"
}

Slack

{
  "ticker": "BTC",
  "delivery_type": "slack", 
  "delivery_destination": "#crypto-analysis"
}

Or via webhook URL:

{
  "ticker": "BTC",
  "delivery_type": "slack",
  "delivery_destination": "https://hooks.slack.com/services/YOUR/WEBHOOK/URL"
}
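
All three request shapes share the same fields; only delivery_type and delivery_destination change. A delivery activity might dispatch on them roughly as follows (the function and payload field names are hypothetical, not the repository's actual code):

import requests


def deliver(delivery_type: str, destination: str, payload: dict) -> None:
    """Route a formatted analysis to the requested channel."""
    if delivery_type == "webhook":
        # POST the formatted analysis to the caller's endpoint
        requests.post(destination, json=payload, timeout=30).raise_for_status()
    elif delivery_type == "slack" and destination.startswith("https://hooks.slack.com/"):
        # Incoming-webhook style Slack delivery
        requests.post(destination, json={"text": payload["text"]}, timeout=30).raise_for_status()
    elif delivery_type == "slack":
        # Channel name such as "#crypto-analysis": would go through the Slack Web API
        # using SLACK_BOT_TOKEN (omitted in this sketch)
        raise NotImplementedError("bot-token Slack delivery not shown here")
    elif delivery_type == "email":
        # destination is an address such as "user@example.com"; an SMTP send using
        # SMTP_HOST/SMTP_PORT/SMTP_USERNAME/SMTP_PASSWORD would go here
        raise NotImplementedError("SMTP delivery not shown here")
    else:
        raise ValueError(f"unknown delivery_type: {delivery_type}")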

Monitoring & Observability

Health Checks

  • Database connectivity
  • Redis cache functionality
  • Temporal server connection
  • Overall system health

Metrics

  • Workflows started/completed/failed
  • Average workflow duration
  • Activity execution counts
  • Cache hit/miss ratios

Logging

  • Structured JSON logging
  • Workflow and activity tracing
  • Error tracking with context
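
A minimal version of structured JSON logging can be built with the standard library alone; the repository may use a dedicated logging library instead, so treat this purely as an illustration.

import json
import logging


class JsonFormatter(logging.Formatter):
    """Emit one JSON object per log record, with optional workflow context."""

    def format(self, record: logging.LogRecord) -> str:
        entry = {
            "timestamp": self.formatTime(record),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        }
        # Extra context passed via logger.info(..., extra={"workflow_id": ...})
        if hasattr(record, "workflow_id"):
            entry["workflow_id"] = record.workflow_id
        return json.dumps(entry)


handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logging.basicConfig(level="INFO", handlers=[handler])

logging.getLogger("crypto_llm").info("workflow started", extra={"workflow_id": "abc-123"})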

Performance Characteristics

Target Performance

  • Workflow Duration: under 2 minutes for a full run
  • Concurrent Workflows: 10+ workflows simultaneously
  • Success Rate: 99% under normal conditions
  • Error Handling: Robust retry policies and circuit breakers
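
On the retry side, Temporal lets each activity call carry an explicit retry policy; the values below are illustrative, not taken from this repository (circuit-breaker logic would live in the activity implementations and is not shown).

from datetime import timedelta

from temporalio import workflow
from temporalio.common import RetryPolicy


@workflow.defn
class AnswerWorkflow:
    @workflow.run
    async def run(self, question: str) -> str:
        # Temporal re-schedules the activity with exponential backoff on failure.
        return await workflow.execute_activity(
            "search_answer",  # hypothetical activity name
            question,
            start_to_close_timeout=timedelta(seconds=60),
            retry_policy=RetryPolicy(
                initial_interval=timedelta(seconds=1),
                backoff_coefficient=2.0,
                maximum_interval=timedelta(seconds=30),
                maximum_attempts=5,
            ),
        )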

Caching Strategy

  • ChatGPT Questions: 24-hour TTL (questions don't change frequently)
  • Perplexity Answers: 1-hour TTL (news updates frequently)
  • Rate Limiting: Per-service rate limiting via Redis
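
A sketch of the read-through, TTL-based caching idea with redis-py follows; the key naming and wrapper function are illustrative.

import json
import os

import redis

r = redis.Redis(
    host=os.getenv("REDIS_HOST", "localhost"),
    port=int(os.getenv("REDIS_PORT", "6379")),
    db=int(os.getenv("REDIS_DB", "0")),
)


def cached_answer(question: str, fetch) -> dict:
    """Return a cached Perplexity answer, or call fetch(question) and cache it for 1 hour."""
    key = f"perplexity:{question}"
    hit = r.get(key)
    if hit is not None:
        return json.loads(hit)
    answer = fetch(question)  # e.g. the Perplexity API call
    r.setex(key, int(os.getenv("CACHE_TTL_PERPLEXITY", "3600")), json.dumps(answer))
    return answer

# ChatGPT questions would be cached the same way under CACHE_TTL_CHATGPT (24 hours).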

Development

Running Locally

  1. Start infrastructure:
    docker-compose up postgresql redis temporal temporal-ui -d
  2. Install dependencies:
    pip install -r requirements.txt
  3. Run database migrations:
    python -c "from core.database import create_tables; create_tables()"
  4. Start worker:
    python worker.py
  5. Start API service:
    python api_service.py

Testing

pytest tests/ -v

Deployment

The service is designed for container deployment with docker-compose. For production:

  1. Use environment-specific .env files
  2. Configure proper secrets management
  3. Set up log aggregation
  4. Configure monitoring and alerting
  5. Scale worker instances based on load

Security Considerations

  • API keys stored in environment variables only
  • No secrets logged or committed to repository
  • Database credentials secured via environment
  • HTTPS required for production deployments
  • Input validation on all API endpoints

Support

For issues and questions:

  • Check service health at /health endpoint
  • Review logs in the ./logs directory
  • Monitor Temporal Web UI at http://localhost:8080
  • Check system metrics at /metrics endpoint
