Discover high-signal YouTube videos for any learning goal, then chat with a local LLM grounded on the video transcript.
- Query refinement: Turn a short learning goal into a sharp YouTube search query using your local Ollama model.
- YouTube search: Fetch relevant videos via the YouTube Data API v3 (see the search sketch after this list).
- Transcript fetch: Prefers manual transcripts, then auto-generated, then translated-to-English (see the transcript sketch after this list).
- Grounded chat: Ask questions about the selected video; answers are based on its transcript.
- Simple, responsive UI: Built with Streamlit.
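For reference, a minimal sketch of what the YouTube search step might look like against the Data API v3's `search` endpoint; the `search_videos` helper, its parameters, and the returned shape are illustrative assumptions, not the app's actual code:

```python
import os

import requests

YOUTUBE_SEARCH_URL = "https://www.googleapis.com/youtube/v3/search"


def search_videos(query: str, max_results: int = 5, order: str = "relevance",
                  duration: str = "any") -> list[dict]:
    """Hypothetical helper: fetch candidate videos for a refined query."""
    params = {
        "part": "snippet",
        "q": query,
        "type": "video",
        "maxResults": max_results,
        "order": order,             # relevance | date | viewCount | rating
        "videoDuration": duration,  # any | short | medium | long
        "key": os.environ["YOUTUBE_API_KEY"],
    }
    resp = requests.get(YOUTUBE_SEARCH_URL, params=params, timeout=10)
    resp.raise_for_status()
    return [
        {"video_id": item["id"]["videoId"], "title": item["snippet"]["title"]}
        for item in resp.json().get("items", [])
    ]
```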
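The transcript preference order could be implemented roughly as below with the `youtube-transcript-api` package (this sketch assumes its pre-1.0 interface; the `fetch_transcript` helper is hypothetical, not the app's code):

```python
from youtube_transcript_api import NoTranscriptFound, YouTubeTranscriptApi


def fetch_transcript(video_id: str) -> str:
    """Hypothetical helper: prefer a manual English transcript, then an
    auto-generated one, then any transcript translated to English."""
    transcripts = YouTubeTranscriptApi.list_transcripts(video_id)
    try:
        transcript = transcripts.find_manually_created_transcript(["en"])
    except NoTranscriptFound:
        try:
            transcript = transcripts.find_generated_transcript(["en"])
        except NoTranscriptFound:
            # Fall back to the first available transcript, translated to English.
            transcript = next(iter(transcripts)).translate("en")
    return " ".join(chunk["text"] for chunk in transcript.fetch())
```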
You'll need:

- Python 3.11+
- An Ollama server running locally or reachable over HTTP
- A YouTube Data API v3 key
Clone the repository:

```bash
git clone https://github.com/Anishrkhadka/asktube.git
cd asktube
```
Create a `.env` file in the project root:
```env
# Required: YouTube Data API key
YOUTUBE_API_KEY=YOUR_API_KEY_HERE

# Optional: Ollama host (defaults to http://localhost:11434)
OLLAMA_HOST=http://localhost:11434
```
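A minimal sketch of how these settings might be read at startup, assuming `python-dotenv` (the variable names match the `.env` above; the loading code itself is illustrative):

```python
import os

from dotenv import load_dotenv

load_dotenv()  # read .env from the project root

YOUTUBE_API_KEY = os.environ["YOUTUBE_API_KEY"]                   # required
OLLAMA_HOST = os.getenv("OLLAMA_HOST", "http://localhost:11434")  # optional, with default
```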
The app defaults to `gemma3:12b` and falls back to the model tags it discovers on your Ollama server. Pull at least one model:

```bash
ollama pull gemma3:12b
# or another compatible model, e.g. llama3.1:8b, mistral:7b
```
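Ollama lists installed models at `GET /api/tags`, so the fallback to discovered tags could look roughly like this (the `pick_model` helper is a hypothetical sketch, not the app's code):

```python
import os

import requests

DEFAULT_MODEL = "gemma3:12b"


def pick_model(host: str | None = None) -> str:
    """Hypothetical helper: prefer the default model, else the first installed tag."""
    host = host or os.getenv("OLLAMA_HOST", "http://localhost:11434")
    models = requests.get(f"{host}/api/tags", timeout=5).json().get("models", [])
    names = [m["name"] for m in models]
    if DEFAULT_MODEL in names:
        return DEFAULT_MODEL
    if names:
        return names[0]
    raise RuntimeError("No Ollama models found; run `ollama pull <model>` first.")
```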
Build and run with Docker Compose:

```bash
docker compose up --build
```
- In the text area, describe what you want to learn.
- Choose number of videos, model, sort order, and duration filter.
- Click Find videos to fetch results.
- Pick a video from the sidebar. Transcript loads automatically (if available).
- Use the chat box to ask questions grounded on the transcript (see the sketch below).
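Under the hood, grounding amounts to prepending the transcript to the prompt and calling Ollama's chat endpoint (`POST /api/chat`); a minimal sketch, where the `ask` helper and the prompt wording are illustrative rather than the app's actual code:

```python
import os

import requests

OLLAMA_HOST = os.getenv("OLLAMA_HOST", "http://localhost:11434")


def ask(question: str, transcript: str, model: str = "gemma3:12b") -> str:
    """Hypothetical helper: answer a question using only the video transcript."""
    messages = [
        {"role": "system",
         "content": "Answer using only the video transcript below. "
                    "If the transcript does not contain the answer, say so.\n\n"
                    f"TRANSCRIPT:\n{transcript}"},
        {"role": "user", "content": question},
    ]
    resp = requests.post(
        f"{OLLAMA_HOST}/api/chat",
        json={"model": model, "messages": messages, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["message"]["content"]
```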
- YouTube API key → Create one in the Google Cloud Console (enable the YouTube Data API v3, then create an API key).
- Ollama → Install Ollama, run `ollama serve`, and pull a model with `ollama pull <model>`.
MIT License © 2025 AskTube