Created by AI Afterdark - Building Innovation with AI at Night
An AI-powered application that generates summaries of YouTube videos and enables interactive conversations about their content.
Experience the YouTube Video Summarizer in action: https://aiafterdark-youtube-summarizer.streamlit.app/
- Robust YouTube video transcript extraction with multiple fallback methods (see the sketch after this feature list):
  - YouTube Transcript API (primary)
  - Pytube captions
  - yt-dlp caption extraction
- AI-powered content summarization using OpenRouter's LLMs
- Interactive Q&A about video content
- Adjustable summary detail levels
- Clean, responsive UI
- Comprehensive error handling and reporting
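The fallback chain can be sketched roughly as follows. This is illustrative rather than the app's exact code: it assumes the classic `get_transcript` interface of youtube-transcript-api and a pytube install, and it omits the yt-dlp fallback for brevity.

```python
# Minimal sketch of the transcript fallback chain (not the app's exact implementation).
from youtube_transcript_api import YouTubeTranscriptApi
from pytube import YouTube

def fetch_transcript(video_id: str, url: str) -> str:
    # 1) Primary: YouTube Transcript API
    try:
        segments = YouTubeTranscriptApi.get_transcript(video_id)
        return " ".join(segment["text"] for segment in segments)
    except Exception:
        pass
    # 2) Fallback: Pytube captions ("en" = manual track, "a.en" = auto-generated;
    #    exact caption lookup depends on the installed pytube version)
    try:
        captions = YouTube(url).captions
        track = captions.get("en") or captions.get("a.en")
        if track:
            return track.generate_srt_captions()
    except Exception:
        pass
    # 3) Last resort: yt-dlp caption extraction (omitted here for brevity)
    raise RuntimeError("No transcript available for this video")
```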
- Python 3.11+
- pip (Python package manager)
- Git
- OpenRouter API key
- Clone the Repository
git clone https://github.com/AIAfterDark/youtube-summarizer-app.git
cd youtube-summarizer-app
- Set Up Virtual Environment
python -m venv venv
# Windows
venv\Scripts\activate
# Unix/macOS
source venv/bin/activate
- Install Dependencies
pip install -r requirements.txt
- Configure Environment
Create a `.env` file in the root directory and add your OpenRouter API key:
OPENROUTER_API_KEY=your_api_key_here
- Run the Application
streamlit run app.py
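For reference, a minimal sketch of how the key can be read at startup, assuming the python-dotenv package is installed (variable names below are illustrative):

```python
# Sketch: reading the OpenRouter key from .env at startup (assumes python-dotenv).
import os
from dotenv import load_dotenv

load_dotenv()  # loads variables from a .env file in the working directory
OPENROUTER_API_KEY = os.getenv("OPENROUTER_API_KEY")
if not OPENROUTER_API_KEY:
    raise RuntimeError("OPENROUTER_API_KEY is not set; add it to your .env file")
```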
The app-local.py version allows you to run the summarizer using Ollama on your local machine, which is free and doesn't require an API key.
- All requirements from Cloud Deployment
- Ollama installed on your machine
- Install Ollama
  - Download from ollama.ai
  - Follow the installation instructions for your OS
  - Make sure Ollama is running in the background
- Pull Your Preferred Model
# Pull the default model (recommended)
ollama pull llama2
# Or pull other supported models
ollama pull codellama
ollama pull mistral
ollama pull neural-chat
- Run the Local Version
streamlit run app-local.py
The following models are tested and supported in app-local.py:
- llama2 (default, recommended)
- codellama
- mistral
- neural-chat
You can modify these settings in app-local.py:
- Default model: change `model="llama2"` in the `ollama_completion` function
- API endpoint: default is `http://localhost:11434/api/chat`
- Timeout settings: default is 30 seconds
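As a rough illustration of what such a helper can look like (the real `ollama_completion` in app-local.py may differ in detail), here is a minimal sketch against Ollama's `/api/chat` endpoint:

```python
# Illustrative helper against Ollama's local /api/chat endpoint.
import requests

OLLAMA_URL = "http://localhost:11434/api/chat"

def ollama_completion(prompt: str, model: str = "llama2", timeout: int = 30) -> str:
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # return a single JSON response instead of a stream
    }
    response = requests.post(OLLAMA_URL, json=payload, timeout=timeout)
    response.raise_for_status()
    return response.json()["message"]["content"]
```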
Adjust the chunk size based on video length:
- Short videos (<30 mins): 4000
- Long content (1hr+): 7000+
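A simple way to encode this rule, assuming the numbers above are character counts per chunk (the mid-length value below is an interpolation, not taken from this README):

```python
# Sketch: picking a chunk size from video length (assumed character counts).
def pick_chunk_size(duration_minutes: float) -> int:
    if duration_minutes < 30:
        return 4000   # short videos
    if duration_minutes < 60:
        return 5000   # mid-length videos (interpolated value, adjust to taste)
    return 7000       # 1hr+ content
```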
The app uses OpenRouter's API to access various LLM models:
- meta-llama/llama-2-13b-chat (default)
- anthropic/claude-2
- openai/gpt-3.5-turbo
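A minimal sketch of one summarization call through OpenRouter's OpenAI-compatible chat completions endpoint, using the default model listed above (illustrative only; the app's actual request code may differ):

```python
# Sketch: one summarization call through OpenRouter's chat completions API.
import os
import requests

def summarize_chunk(chunk: str, model: str = "meta-llama/llama-2-13b-chat") -> str:
    response = requests.post(
        "https://openrouter.ai/api/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
        json={
            "model": model,
            "messages": [
                {"role": "system", "content": "Summarize the following video transcript chunk."},
                {"role": "user", "content": chunk},
            ],
        },
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]
```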
We welcome contributions! Please feel free to submit a Pull Request.
This project is licensed under the MIT License - see the LICENSE file for details.
Created by Will at AI Afterdark. Built using:
- Streamlit for web interface
- OpenRouter for cloud AI
- Ollama for local AI
- YouTube Transcript API for content extraction
- Twitter: @AIAfterdark
- GitHub: AI Afterdark
Built by AI Afterdark - Innovating with AI at Night
An AI-powered Streamlit app that generates summaries of YouTube videos and allows you to chat with the content. Available in two versions: Cloud (OpenRouter) and Local (Ollama).
- YouTube video transcript extraction
- Intelligent text chunking and processing
- AI-powered summarization
- Interactive chat with video content
- Multiple transcript retrieval methods
- Cloud and Local deployment options
- Uses OpenRouter API for AI inference
- Requires OpenRouter API key
- Better for deployment and sharing
- Uses meta-llama/llama-3.2-3b-instruct:free model
- Uses Ollama for local AI inference
- No API key required
- Better for privacy and offline use
- Supports multiple Ollama models
- Clone the repository:
git clone https://github.com/yourusername/youtube-summarizer-app.git
cd youtube-summarizer-app
- Install dependencies:
pip install -r requirements.txt
- Setup based on version:
- Get an API key from OpenRouter
- Create a `.env` file:
OPENROUTER_API_KEY=your_api_key_here
- Run the app:
streamlit run app.py
- Install Ollama from ollama.ai
- Pull the Llama2 model:
ollama pull llama2
- Start Ollama:
ollama serve
- Run the app:
streamlit run app-local.py
- Enter a YouTube URL
- Adjust the Summary Detail Level slider (1000-10000)
- Click "Generate Summary"
- View the generated summary
- Use the chat to ask questions about the video content
- Primary: YouTube Transcript API
- Fallback: yt-dlp
- Supports both manual and auto-generated captions
- Smart chunking based on sentence boundaries
- Context-aware summarization
- Clean transcript formatting
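A rough sketch of greedy, sentence-boundary chunking (illustrative; the app's actual chunking logic may differ):

```python
# Sketch: greedy chunking on sentence boundaries.
import re
from typing import List

def chunk_transcript(text: str, chunk_size: int = 4000) -> List[str]:
    sentences = re.split(r"(?<=[.!?])\s+", text)  # naive sentence splitter
    chunks, current = [], ""
    for sentence in sentences:
        if current and len(current) + len(sentence) + 1 > chunk_size:
            chunks.append(current)
            current = sentence
        else:
            current = f"{current} {sentence}".strip()
    if current:
        chunks.append(current)
    return chunks
```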
- Context-aware responses
- Strictly based on generated summary
- Clear indication when information isn't available
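One way to enforce this, sketched as a simple prompt builder (illustrative; the exact prompt used by the app may differ):

```python
# Sketch: constraining chat answers to the generated summary.
def build_chat_prompt(summary: str, question: str) -> str:
    return (
        "Answer the question using only the video summary below. "
        "If the summary does not contain the answer, say that the "
        "information isn't available in the summary.\n\n"
        f"Summary:\n{summary}\n\nQuestion: {question}"
    )
```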
- Python 3.7+
- Streamlit
- youtube-transcript-api
- yt-dlp
- For Cloud Version: OpenRouter API key
- For Local Version: Ollama
Feel free to open issues or submit pull requests with improvements.
MIT License - feel free to use this project as you wish.