Full-Stack AI Tweet Generator with Langfuse Observability

This project is a simple, fully functional example of how to integrate Langfuse into a full-stack application. It consists of a React frontend and a Python (Flask) backend.

The application allows a user to enter a topic, and the backend uses a Generative AI model to create a tweet about it. Every request to the backend is traced using Langfuse, providing valuable observability into the LLM's performance, cost, and output.

Core Technologies

  • Frontend: React (with Vite) & Tailwind CSS
  • Backend: Python (with Flask)
  • Observability: Langfuse
  • AI Model: Google Gemini (via google-generativeai)

How Langfuse is Used Here

Langfuse allows us to see exactly what happens inside our AI application. For each generated tweet, we create a trace. This trace contains:

  1. A Generation Event: This logs the specific LLM call.
  2. Input: The prompt we constructed.
  3. Output: The exact tweet generated by the model.
  4. Metadata: Information like the model name (gemini-pro).
  5. Usage: Token counts for prompt, completion, and total.
  6. Latency: How long the generation took.

This is invaluable for debugging, monitoring costs, and evaluating the quality of your AI's responses over time.
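
As a concrete illustration, here is a minimal sketch of the tracing pattern described above, using the Langfuse v2 Python SDK together with google-generativeai. The function name, trace name, and prompt wording are illustrative assumptions, not the repository's exact code.

import google.generativeai as genai
from langfuse import Langfuse

langfuse = Langfuse()  # reads the LANGFUSE_* keys from the environment

def generate_tweet(topic: str) -> str:
    prompt = f"Write a short, engaging tweet about: {topic}"

    # One trace per request; the generation event hangs off the trace.
    trace = langfuse.trace(name="tweet-generation", input={"topic": topic})
    generation = trace.generation(
        name="gemini-call",
        model="gemini-pro",  # logged so the model name shows up in metadata
        input=prompt,
    )

    model = genai.GenerativeModel("gemini-pro")
    response = model.generate_content(prompt)
    tweet = response.text

    # Token usage, if the SDK returns usage metadata for this response.
    meta = getattr(response, "usage_metadata", None)
    usage = (
        {"input": meta.prompt_token_count, "output": meta.candidates_token_count}
        if meta
        else None
    )

    # Langfuse timestamps the generation's start and end itself,
    # so latency is captured automatically.
    generation.end(output=tweet, usage=usage)
    trace.update(output=tweet)
    return tweet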

Project Setup & Running

Prerequisites:

  • Python 3 and pip
  • Node.js and npm
  • A Langfuse account (cloud or self-hosted) for the API keys
  • A Gemini API key from Google AI Studio

Step 1: Set up Environment Variables

Create a .env file in the backend directory and add your keys:

# backend/.env

# Get from your Langfuse project settings  
LANGFUSE_SECRET_KEY="sk-lf-..."  
LANGFUSE_PUBLIC_KEY="pk-lf-..."  
LANGFUSE_HOST="https://cloud.langfuse.com" # Or your self-hosted instance

# Get from Google AI Studio  
GEMINI_API_KEY="..."
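
For reference, here is a minimal sketch of how the backend can load these variables at startup; it assumes python-dotenv is listed in requirements.txt, which may differ from the repository's actual setup code.

import os

from dotenv import load_dotenv
from langfuse import Langfuse
import google.generativeai as genai

load_dotenv()  # reads backend/.env into the process environment

# google-generativeai takes its key explicitly.
genai.configure(api_key=os.environ["GEMINI_API_KEY"])

# The Langfuse client picks up LANGFUSE_SECRET_KEY, LANGFUSE_PUBLIC_KEY,
# and LANGFUSE_HOST from the environment automatically.
langfuse = Langfuse()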

Step 2: Backend Setup

# Navigate to the backend directory  
cd backend

# Create a virtual environment (recommended)  
python -m venv venv  
source venv/bin/activate  # On Windows, use `venv\Scripts\activate`

# Install dependencies  
pip install -r requirements.txt

# Run the Flask server  
flask run

The backend will be running at http://127.0.0.1:5000.
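
To sanity-check the server from Python before wiring up the frontend, something like the following works; the /generate route and the JSON shapes here are assumptions for illustration, so match them to the routes actually defined in the Flask app.

import requests

# Hypothetical endpoint and payload; check the Flask routes for the real ones.
resp = requests.post(
    "http://127.0.0.1:5000/generate",
    json={"topic": "observability"},
)
resp.raise_for_status()
print(resp.json())  # e.g. {"tweet": "..."}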

Step 3: Frontend Setup

Open a new terminal and navigate to the project root.

# Create a new Vite project called 'frontend'  
npm create vite@latest frontend -- --template react

# Navigate into the new frontend directory  
cd frontend

# Install dependencies  
npm install

# (IMPORTANT) Replace the content of src/App.jsx with the code provided in the App.jsx file.  
# Also, install Tailwind CSS for styling  
npm install -D tailwindcss postcss autoprefixer  
npx tailwindcss init -p

# Configure tailwind.config.js by adding "./src/**/*.{js,ts,jsx,tsx}" to the content array.  
# Add the tailwind directives to your src/index.css file.

# Start the development server  
npm run dev

The frontend will be running at http://localhost:5173. Open this URL in your browser to use the app.
