# local-llm-integration

Here are 16 public repositories matching this topic...

🚀 A powerful Flutter-based AI chat application that lets you run LLMs directly on your mobile device or connect to local model servers. Features offline model execution, Ollama/LLMStudio integration, and a beautiful modern UI. Privacy-focused, cross-platform, and fully open source.

  • Updated Apr 14, 2025
  • Dart
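
For context on what "connecting to local model servers" typically involves: Ollama exposes a local HTTP API, by default on port 11434. Below is a minimal sketch of a single chat request against that API, written in Python for illustration (the app above is in Dart); the model name `llama3` is an assumption, not something the repository specifies.

```python
import requests

# Ollama's default local endpoint; the daemon listens on port 11434.
OLLAMA_URL = "http://localhost:11434/api/chat"

def chat(prompt: str, model: str = "llama3") -> str:
    """Send one chat turn to a locally running Ollama server."""
    # Model name is an assumption -- substitute whatever you have pulled locally.
    resp = requests.post(
        OLLAMA_URL,
        json={
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
            "stream": False,  # request a single JSON response rather than a stream
        },
        timeout=120,
    )
    resp.raise_for_status()
    # The non-streaming response carries the reply under the "message" key.
    return resp.json()["message"]["content"]

if __name__ == "__main__":
    print(chat("Why run an LLM locally?"))
```

Because everything goes through `localhost`, no prompt or response ever leaves the machine, which is the privacy property this kind of app is built around.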

An advanced, fully local, and GPU-accelerated RAG pipeline. Features a sophisticated LLM-based preprocessing engine, state-of-the-art Parent Document Retriever with RAG Fusion, and a modular, Hydra-configurable architecture. Built with LangChain, Ollama, and ChromaDB for 100% private, high-performance document Q&A.

  • Updated Jul 30, 2025
  • Python
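
To make the stack above concrete, here is a minimal sketch of a fully local RAG loop using LangChain, Ollama, and Chroma. This is not the repository's actual pipeline (which adds LLM-based preprocessing, a Parent Document Retriever, and RAG Fusion); the model names, file path, and chunking parameters are assumptions for illustration.

```python
from langchain_community.llms import Ollama
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.vectorstores import Chroma
from langchain.text_splitter import RecursiveCharacterTextSplitter

# Split the source document into overlapping chunks for retrieval.
text = open("document.txt").read()  # hypothetical input file
splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
chunks = splitter.split_text(text)

# Embed chunks with a local Ollama embedding model and index them in Chroma.
# Model names are assumptions -- substitute whatever you have pulled.
embeddings = OllamaEmbeddings(model="nomic-embed-text")
store = Chroma.from_texts(chunks, embeddings)
retriever = store.as_retriever(search_kwargs={"k": 3})

# Answer a question using only retrieved context; nothing leaves the machine.
llm = Ollama(model="llama3")
question = "What does the document say about deployment?"
context = "\n\n".join(d.page_content for d in retriever.invoke(question))
answer = llm.invoke(
    f"Answer using only the context below.\n\n"
    f"Context:\n{context}\n\nQuestion: {question}"
)
print(answer)
```

Techniques like Parent Document Retrieval and RAG Fusion layer on top of this loop: the former retrieves small chunks but feeds their larger parent sections to the LLM, while the latter merges rankings from several reformulated queries.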
