
LLM Projects Collection

Welcome to the LLM Projects repository — a growing collection of hands-on applications powered by Large Language Models (LLMs) like Google Gemini 2.0 Flash and Meta's LLaMA 3.2. This repo showcases real-world use cases built with Streamlit, focused on automating content creation and enhancing productivity with AI.

Whether you're a student, developer, or AI enthusiast, these projects demonstrate how to integrate cutting-edge LLMs into useful tools like blog generators, SQL query builders, and data analysis assistants.


Projects Included

Blog Assistant

Generate SEO-friendly blog posts with just a title and keywords. Built using Gemini 2.0 Flash and optionally powered by LLaMA 3.2 for experimentation with open-source models.

Features:

  • Input-based article generation
  • Multi-model support (Gemini & LLaMA)
  • SEO keyword handling
  • Markdown export
  • Beautiful, responsive UI

View README
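
For a sense of how such a generator can be wired up, here is a minimal, illustrative sketch using the google-generativeai SDK. The prompt wording, function name, and word-count parameter are assumptions, not the project's actual code; see its README for the real implementation.

import google.generativeai as genai
from api_key import GEMINI_API_KEY  # per-project key file, see "Getting Started" below

genai.configure(api_key=GEMINI_API_KEY)
model = genai.GenerativeModel("gemini-2.0-flash")

def generate_blog(title: str, keywords: list[str], words: int = 600) -> str:
    """Ask Gemini for an SEO-friendly Markdown article built around the keywords."""
    prompt = (
        f"Write an SEO-friendly blog post titled '{title}'.\n"
        f"Naturally include these keywords: {', '.join(keywords)}.\n"
        f"Aim for roughly {words} words and return Markdown."
    )
    return model.generate_content(prompt).text  # Markdown, ready to render or export

print(generate_blog("Getting Started with LLMs", ["Gemini", "Streamlit", "prompting"]))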


AI SQL Generator

Turn plain English into fully-formed SQL queries. The tool accepts optional schema context and dialect settings, and returns AI-generated explanations and sample outputs alongside the query.

Features:

  • Converts natural language to SQL queries
  • Supports Gemini 2.0 and LLaMA 3.2
  • Accepts optional DB schema and dialect
  • Returns SQL, explanation, and sample output
  • Clean Streamlit interface

View README
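
As a rough illustration of the natural-language-to-SQL pattern (not the project's exact prompting), a Gemini-backed version could look like the sketch below; the function name, prompt text, and default dialect are assumptions.

import google.generativeai as genai
from api_key import GEMINI_API_KEY

genai.configure(api_key=GEMINI_API_KEY)
model = genai.GenerativeModel("gemini-2.0-flash")

def nl_to_sql(question: str, schema: str = "", dialect: str = "PostgreSQL") -> str:
    """Turn a plain-English request into a SQL query plus explanation and sample output."""
    prompt = (
        f"You are an expert {dialect} engineer.\n"
        + (f"Database schema:\n{schema}\n" if schema else "")
        + f"Write a {dialect} query for: {question}\n"
        "Then briefly explain the query and show a small sample result."
    )
    return model.generate_content(prompt).text

print(nl_to_sql("Total revenue per month in 2024", schema="orders(id, order_date, amount)"))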


CSV Assistant

Upload a CSV file and interact with your dataset through natural language questions using either Google Gemini or LLaMA 3.2 via Ollama. No embeddings or external databases required.

Features:

  • Upload and preview CSV files
  • Ask questions about your data
  • Gemini or local LLaMA backend support
  • No vector store needed
  • Session-based Q&A memory

View README
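
The "no vector store" approach can be as simple as sending a compact preview of the dataframe along with the question. The sketch below uses the ollama Python client with a local llama3.2 model purely as an illustration; the real app's prompting and session handling may differ.

import pandas as pd
import ollama  # assumes a local Ollama server with the llama3.2 model pulled

def ask_csv(csv_path: str, question: str) -> str:
    """Answer a question about a CSV by sending a compact preview to the model."""
    df = pd.read_csv(csv_path)
    # No embeddings or vector store: the model only sees the schema and a small sample.
    context = (
        f"Columns: {', '.join(df.columns)}\n"
        f"First rows:\n{df.head(10).to_string(index=False)}\n"
        f"Summary statistics:\n{df.describe(include='all').to_string()}"
    )
    reply = ollama.chat(
        model="llama3.2",
        messages=[
            {"role": "system", "content": "Answer questions about the dataset you are given."},
            {"role": "user", "content": f"{context}\n\nQuestion: {question}"},
        ],
    )
    return reply["message"]["content"]

print(ask_csv("sales.csv", "Which product category has the highest average price?"))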


Website Summarizer

Extract and summarize website content using the Google Gemini API or LLaMA 3.2 (via Ollama). The script scrapes the raw text, removes unnecessary elements, and outputs a clean Markdown summary.

Features:

  • Extracts title and main text from a URL
  • Removes scripts, styles, and irrelevant tags
  • Summarizes using Gemini or LLaMA 3.2
  • Works with Jupyter or standalone scripts
  • No need for embeddings or RAG setup

View README
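
A stripped-down version of this scrape-then-summarize flow might look like the following sketch, assuming requests and beautifulsoup4 are available; the tag list, length cap, and prompt are illustrative choices rather than the project's exact code.

import requests
from bs4 import BeautifulSoup
import google.generativeai as genai
from api_key import GEMINI_API_KEY

genai.configure(api_key=GEMINI_API_KEY)
model = genai.GenerativeModel("gemini-2.0-flash")

def summarize_url(url: str) -> str:
    """Fetch a page, strip non-content tags, and ask Gemini for a Markdown summary."""
    html = requests.get(url, timeout=15).text
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup(["script", "style", "nav", "footer"]):
        tag.decompose()  # remove markup that carries no readable content
    title = soup.title.string if soup.title else url
    text = soup.get_text(separator="\n", strip=True)[:20000]  # keep the prompt bounded
    prompt = f"Summarize the page '{title}' below as concise Markdown:\n\n{text}"
    return model.generate_content(prompt).text

print(summarize_url("https://example.com"))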


Document Summarizer

Summarize content from uploaded PDF and Word documents in seconds using Google Gemini 1.5 Flash and LangChain. The app creates a vector-based knowledge base and queries the content for an intelligent summary.

Features:

  • Upload .pdf or .docx files
  • Automatically extracts and cleans text
  • Uses FAISS and LangChain for context-aware summarization
  • Gemini 1.5 Flash as the LLM backend
  • Clean and responsive Streamlit interface

View README
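
For orientation only, here is one way the extract, embed, retrieve, and summarize pipeline can be assembled with LangChain, FAISS, and Gemini 1.5 Flash; the package layout, chunk sizes, and PDF-only handling are assumptions rather than the app's exact implementation.

from pypdf import PdfReader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_google_genai import GoogleGenerativeAIEmbeddings, ChatGoogleGenerativeAI
from langchain_community.vectorstores import FAISS
from api_key import GEMINI_API_KEY

def summarize_pdf(path: str) -> str:
    # 1. Extract raw text from the uploaded document (.docx handling omitted here).
    text = "\n".join(page.extract_text() or "" for page in PdfReader(path).pages)

    # 2. Chunk the text and build a FAISS knowledge base with Gemini embeddings.
    chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100).split_text(text)
    embeddings = GoogleGenerativeAIEmbeddings(model="models/embedding-001", google_api_key=GEMINI_API_KEY)
    store = FAISS.from_texts(chunks, embeddings)

    # 3. Retrieve the most relevant chunks and ask Gemini 1.5 Flash for the summary.
    relevant = store.similarity_search("Summarize the key points of this document.", k=4)
    llm = ChatGoogleGenerativeAI(model="gemini-1.5-flash", google_api_key=GEMINI_API_KEY)
    context = "\n\n".join(doc.page_content for doc in relevant)
    return llm.invoke(f"Summarize this document concisely:\n\n{context}").content

print(summarize_pdf("report.pdf"))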


Getting Started

1. Clone the Repository

git clone https://github.com/MoustafaMohamed01/llm-projects.git
cd llm-projects

2. Navigate to Any Subproject

For example, to use the AI SQL Generator:

cd ai-sql-query-generator
pip install -r requirements.txt

3. Set Your API Key (for Gemini-powered apps)

Create a file named api_key.py inside the subproject folder:

GEMINI_API_KEY = "your_google_gemini_api_key"
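
A Gemini-powered app can then import this constant and configure the client, roughly like the sketch below (illustrative; keep api_key.py out of version control):

import google.generativeai as genai
from api_key import GEMINI_API_KEY

genai.configure(api_key=GEMINI_API_KEY)
model = genai.GenerativeModel("gemini-2.0-flash")
print(model.generate_content("Say hello").text)  # quick check that the key works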

Requirements

Each app has its own requirements.txt, but common dependencies include:

  • streamlit
  • google-generativeai
  • pandas
  • requests

The json module used by some scripts ships with Python's standard library and needs no separate installation.

Install them globally or per-project as needed.


Gallery

Screenshots in the repository: Blog Assistant UI, SQL Generator UI, CSV Assistant UI, Document Summarizer UI.

Author

Created by Moustafa Mohamed - feel free to reach out!

