HAG (Hybrid Augmented Generation) is a knowledge-enhanced generation framework that combines vector databases and knowledge graphs to provide intelligent Q&A. Built on LangChain, Neo4j, and Weaviate, HAG excels at domain-specific knowledge retrieval and reasoning.
- Multi-dimensional Understanding: Deep analysis of user query intent with precise knowledge need matching
- Context Awareness: Personalized responses based on conversation history and semantic understanding
- Vector Database: Weaviate provides efficient semantic similarity search
- Knowledge Graph: Neo4j enables complex relationship reasoning and entity discovery
- Hybrid Retrieval: Intelligent fusion of two data sources ensuring retrieval accuracy and completeness
- RESTful Interface: Standardized API design, callable from any programming language
- Modular Architecture: Independent embedding, retrieval, and generation services with flexible composition
- LangChain Integration: Runnable pipeline architecture supporting complex workflow orchestration
- Modern Interface: Clean and elegant user experience following LINEAR design principles
- Real-time Feedback: Streaming response display with instant status updates
- Intelligent Interaction: Intuitive chat interface supporting multi-turn conversations and history
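The modular embed → retrieve → generate design above can be sketched as a composable pipeline. This is a minimal illustration in plain Python with placeholder functions, not HAG's actual API; the real services would call bge-m3, Weaviate/Neo4j, and an LLM respectively:

```python
from typing import Callable, List

def embed(query: str) -> List[float]:
    # Placeholder embedding (a real system would call an embedding model)
    return [sum(map(ord, query)) / max(len(query), 1)]

def retrieve(vector: List[float]) -> List[str]:
    # Placeholder retrieval (a real system would query Weaviate and Neo4j)
    return [f"doc matching vector {vector[0]:.1f}"]

def generate(docs: List[str]) -> str:
    # Placeholder generation (a real system would prompt an LLM with the docs)
    return "Answer grounded in: " + "; ".join(docs)

def compose(*steps: Callable):
    """Chain single-argument steps left to right, like a Runnable pipeline."""
    def pipeline(x):
        for step in steps:
            x = step(x)
        return x
    return pipeline

qa_pipeline = compose(embed, retrieve, generate)
print(qa_pipeline("What is Parkinson's disease?"))
```

Because each stage is an independent callable, stages can be swapped or recombined freely, which is the same property the LangChain Runnable architecture provides.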
LINEAR style frontend interface
Hybrid retrieval workflow demonstration, integrating vector database and knowledge graph
Intelligent Q&A result display with complete knowledge sources and reasoning process
Session-based conversation management with persistent history
- Python 3.8 or higher
- Docker and Docker Compose
- Git
- Clone Repository

```bash
git clone https://github.com/yankmo/HAG.git
cd HAG
```
- Install Dependencies

```bash
pip install -r requirements.txt
```
- Start Required Services

```bash
# Start Neo4j
docker run -d --name neo4j \
  -p 7474:7474 -p 7687:7687 \
  -e NEO4J_AUTH=neo4j/your_password \
  neo4j:latest

# Start Weaviate
docker run -d --name weaviate \
  -p 8080:8080 \
  -e QUERY_DEFAULTS_LIMIT=25 \
  -e AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED=true \
  semitechnologies/weaviate:latest

# Start Ollama
docker run -d --name ollama \
  -p 11434:11434 \
  ollama/ollama:latest
```
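The three `docker run` commands above can equivalently be captured in a single Docker Compose file so the services start together. This is a sketch under the same images and settings shown above; adjust credentials and pin image versions for your setup:

```yaml
services:
  neo4j:
    image: neo4j:latest
    ports: ["7474:7474", "7687:7687"]
    environment:
      - NEO4J_AUTH=neo4j/your_password
  weaviate:
    image: semitechnologies/weaviate:latest
    ports: ["8080:8080"]
    environment:
      - QUERY_DEFAULTS_LIMIT=25
      - AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED=true
  ollama:
    image: ollama/ollama:latest
    ports: ["11434:11434"]
```

With this file saved as `docker-compose.yml`, `docker compose up -d` brings all three services up at once.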
- Configure System

```bash
# Copy the example configuration file
cp config/config.yaml.example config/config.yaml
# Then edit it to update database credentials and service URLs
```
- Run Application

```bash
# Start the web interface
streamlit run app_simple.py

# Or use the API directly
python api.py
```
Edit `config/config.yaml` to customize your settings:
```yaml
# Neo4j Configuration
neo4j:
  uri: "bolt://localhost:7687"
  username: "neo4j"
  password: "your_password"

# Ollama Configuration
ollama:
  base_url: "http://localhost:11434"
  default_model: "gemma3:4b"
  embedding_model: "bge-m3:latest"

# Weaviate Configuration
weaviate:
  url: "http://localhost:8080"
```
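Application code can load this file with a standard YAML parser. A minimal sketch, assuming PyYAML is installed (the config is inlined here for illustration; in the project it would be read from `config/config.yaml`):

```python
import yaml

# The same structure as the configuration shown above
config_text = """
neo4j:
  uri: "bolt://localhost:7687"
  username: "neo4j"
ollama:
  base_url: "http://localhost:11434"
  default_model: "gemma3:4b"
weaviate:
  url: "http://localhost:8080"
"""

config = yaml.safe_load(config_text)
print(config["neo4j"]["uri"])             # bolt://localhost:7687
print(config["ollama"]["default_model"])  # gemma3:4b
```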
```bash
streamlit run app_simple.py
```

Navigate to http://localhost:8501 and start asking questions!
```python
from api import HAGIntegratedAPI

# Initialize the system
hag = HAGIntegratedAPI()

# Ask a question
response = hag.runnable_chain.invoke("What are the symptoms of Parkinson's disease?")
print(response)
```
```python
from src.services import HybridRetrievalService

# Use hybrid retrieval directly
hybrid_service = HybridRetrievalService(...)
results = hybrid_service.search("medical query", limit=5)
```
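One common way to fuse ranked results from two retrievers, such as Weaviate's semantic search and Neo4j's relationship traversal, is reciprocal rank fusion. The sketch below is a generic illustration of that technique with hypothetical document IDs, not necessarily the fusion strategy HAG implements:

```python
from collections import defaultdict
from typing import Dict, List

def reciprocal_rank_fusion(ranked_lists: List[List[str]], k: int = 60) -> List[str]:
    """Merge ranked result lists into one, scoring each doc by sum of 1/(k + rank)."""
    scores: Dict[str, float] = defaultdict(float)
    for results in ranked_lists:
        for rank, doc_id in enumerate(results, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    # Highest combined score first
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical results from the two backends
vector_hits = ["doc_a", "doc_b", "doc_c"]  # semantic similarity order
graph_hits = ["doc_b", "doc_d", "doc_a"]   # relationship-reasoning order

fused = reciprocal_rank_fusion([vector_hits, graph_hits])
print(fused)  # documents appearing in both lists rank highest
```

Documents retrieved by both backends accumulate score from each list, so agreement between the vector and graph views pushes a result toward the top.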
Run the test suite to verify your installation:

```bash
# Test basic functionality
python -c "from api import HAGIntegratedAPI; api = HAGIntegratedAPI(); print('✅ HAG initialized successfully')"
```
We welcome contributions! Please check our Contributing Guide for details.
- Fork the repository
- Create your feature branch (`git checkout -b feature/AmazingFeature`)
- Commit your changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
YankMo
- GitHub: @yankmo
- CSDN Blog: YankMo's Tech Blog
⭐ If this project helps you, please give us a Star!