Vaasthu Vision AI is an intelligent GenAI system designed to provide authentic guidance, directional insights, and remedial suggestions based on Vaasthu Shastra. Unlike generic chatbots, it uses a Retrieval-Augmented Generation (RAG) architecture to ensure that every response is accurate, reliable, and context-aware.
The system carefully routes queries using similarity scores and critical keyword checks, preventing wrong or hallucinated answers while always prioritizing trustworthy knowledge from its Vaasthu knowledge base.
I've shared the project demo and explanation on LinkedIn; check them out below:
Click to watch the full explanation or view the video output directly.
To build an AI assistant that:
- Understands Vaasthu rules deeply
- Provides reliable answers
- Avoids hallucinations
- Runs fast on the web (connected to a slick frontend)
- Faithfulness: 0.9–1.0
- Answer Relevancy: ~0.7–0.85
- Latency: Measured and optimized for real-time responses
- Hallucination Rate: Reduced via retrieval grounding and fallback strategies
- Frontend: Built the website using Bolt AI and customized it as desired.
- Backend: Python (FastAPI / Streamlit for local)
- LLM: LLaMA3-8B-8192 via Groq API
- Vector DB: Qdrant with `all-MiniLM-L6-v2` embeddings
- RAG: LangChain-powered pipeline
- Started with 40 structured Vaasthu elements in JSON
- Converted to 350+ high-quality natural-language rules
- Added metadata: `zone`, `rule_id`, `category`
- Stored using `RULE_START` and `RULE_END` delimiters
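The delimiter scheme above can be parsed with a short sketch like the following. The exact field layout (`rule_id`, `zone`, `category`, `text`) is illustrative, not the project's actual storage format:

```python
import re

# Example rule store using the RULE_START / RULE_END delimiters described above.
# The metadata fields shown here are illustrative stand-ins.
RAW_RULES = """
RULE_START
rule_id: V-001
zone: north-east
category: pooja_room
text: The pooja room is best placed in the north-east zone of the house.
RULE_END
RULE_START
rule_id: V-002
zone: south-east
category: kitchen
text: The kitchen is ideally located in the south-east zone.
RULE_END
"""

def parse_rules(raw: str):
    """Extract each RULE_START...RULE_END block into a metadata dict."""
    rules = []
    for block in re.findall(r"RULE_START(.*?)RULE_END", raw, re.DOTALL):
        rule = {}
        for line in block.strip().splitlines():
            key, _, value = line.partition(":")
            rule[key.strip()] = value.strip()
        rules.append(rule)
    return rules

rules = parse_rules(RAW_RULES)
print(len(rules))          # 2
print(rules[0]["zone"])    # north-east
```

Explicit delimiters like these keep each rule an atomic chunk during ingestion, so a splitter never cuts a rule in half before embedding.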
- Final prompt: minimalist, 4–6 lines per answer
- Designed for clarity, consistency, and production use
- Used temperature=0 and top_p=1 for deterministic output
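The prompt and generation settings described above might look like the sketch below. The prompt wording is hypothetical; only the model name and the `temperature=0` / `top_p=1` settings come from this write-up:

```python
# Hedged sketch of the deterministic generation setup described above.
# The prompt text is illustrative, not the project's actual prompt.
PROMPT_TEMPLATE = (
    "You are a Vaasthu Shastra assistant.\n"
    "Answer ONLY from the rules below, in 4-6 short lines.\n"
    "If the rules do not cover the question, say you don't know.\n\n"
    "Rules:\n{context}\n\nQuestion: {question}\nAnswer:"
)

GENERATION_PARAMS = {
    "model": "llama3-8b-8192",   # served via the Groq API
    "temperature": 0,            # deterministic output
    "top_p": 1,
}

prompt = PROMPT_TEMPLATE.format(
    context="- The kitchen belongs in the south-east zone.",
    question="Where should the kitchen be?",
)
print(GENERATION_PARAMS["temperature"])  # 0
```

Pinning `temperature=0` and `top_p=1` makes repeated runs of the same query reproducible, which simplifies evaluating faithfulness.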
- Query → Embedding
- Qdrant → Top 3 relevant rules
- Custom prompt → Groq LLM (LLaMA3)
- Final response → Displayed in UI
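The "Qdrant → Top 3 relevant rules" step can be sketched with plain cosine similarity. The real system uses `all-MiniLM-L6-v2` embeddings stored in Qdrant; the 3-dimensional vectors and rule ids here are toy stand-ins:

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy 3-d embeddings standing in for MiniLM vectors stored in Qdrant.
rule_vectors = {
    "kitchen-south-east": [0.9, 0.1, 0.0],
    "bedroom-south-west": [0.1, 0.9, 0.1],
    "entrance-north-east": [0.0, 0.2, 0.9],
    "pooja-north-east": [0.1, 0.3, 0.8],
}

def top_k(query_vec, k=3):
    """Return the k most similar rule ids, mimicking Qdrant's top-3 search."""
    scored = sorted(rule_vectors.items(),
                    key=lambda item: cosine(query_vec, item[1]),
                    reverse=True)
    return [rule_id for rule_id, _ in scored[:k]]

print(top_k([0.85, 0.15, 0.05]))  # the kitchen vector is the closest match
```

Qdrant performs the same ranking server-side over the full rule set; only the top 3 chunks are passed into the prompt, which keeps context small and grounded.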
```mermaid
graph TD
    A[User Question] --> B[Critical Keyword Check]
    B -->|Yes| C[RAG QA Chain]
    B -->|No| D[Vectorstore Retrieval]
    D --> E[Qdrant Similarity Score]
    E --> F{Confidence Thresholds}
    F -->|High| C
    F -->|Medium| G["I don't know Response"]
    F -->|Low| H[Fallback Chat Chain]
    C --> I[Final Vaasthu Answer]
    G --> I
    H --> I
```
This project implements a smart query-routing system that decides whether a user's question should be answered via the RAG pipeline (vector-database retrieval) or by an LLM fallback, based on similarity scores and critical keywords.
- User enters a query.
- The system runs a similarity search on the vector database.
- A similarity score is calculated for the top retrieved chunks.
- Based on this score and rules, the query is routed:
- Case 1: High Confidence (≥ HIGH_THRESHOLD)
  - Strong match found in the vector DB
  - Response generated by the RAG pipeline (qa_chain)
- Case 2: Low Confidence (< LOW_THRESHOLD)
  - Retrieved chunks are unreliable
  - Routed to the LLM fallback, which replies: "Sorry, I don't have an idea about this query."
- Case 3: Critical Keywords Override
  - Even if the similarity score is below LOW_THRESHOLD,
  - if the query contains critical keywords (e.g., Kitchen, Bathroom, Hall),
  - it is still answered through the RAG pipeline (domain relevance guaranteed)
- Case 4: Nonsense / Out-of-Domain Queries
  - No relevant match and no critical keywords
  - Routed to the LLM fallback for casual/nonsense handling
- Confidence-based Routing: prevents misleading answers from weak retrievals.
- Domain Awareness: critical keywords always prioritize vector-DB results.
- LLM Fallback: handles nonsense or completely unrelated queries.
- Accuracy & Safety First: ensures only reliable information is returned.
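The routing cases above can be condensed into one small function. The threshold values and keyword list here are illustrative placeholders, not the project's actual configuration:

```python
# Minimal sketch of the confidence-based routing logic; thresholds and
# keywords are illustrative, not the project's exact values.
HIGH_THRESHOLD = 0.75
LOW_THRESHOLD = 0.45
CRITICAL_KEYWORDS = {"kitchen", "bathroom", "hall", "bedroom", "entrance"}

def route_query(query: str, similarity_score: float) -> str:
    """Decide whether a query goes to the RAG chain or an LLM fallback."""
    has_keyword = any(word in query.lower() for word in CRITICAL_KEYWORDS)
    if similarity_score >= HIGH_THRESHOLD:
        return "rag"          # Case 1: strong vector-DB match
    if has_keyword:
        return "rag"          # Case 3: domain-keyword override
    if similarity_score < LOW_THRESHOLD:
        return "fallback"     # Cases 2 & 4: unreliable or off-domain
    return "dont_know"        # mid-band: honest "I don't know" reply

print(route_query("Where should the kitchen face?", 0.30))  # rag
print(route_query("Tell me a joke", 0.10))                  # fallback
```

Keeping the routing in plain, hardcoded rules like this (rather than a learned intent classifier) matches the "simpler is better" lesson noted below.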
- Contribute Feature: allows users to submit data or upload files for review. After admin verification, contributions can be incorporated into the project.
- Prompt structure impacts hallucination significantly
- Simpler is better: hardcoded rules > complex intent classifier
- Fast, reliable LLMs like Groq drastically improve UX
- Semantic granularity in rules increases RAG accuracy
Visit the site: https://jazzy-entremet-70cc2a.netlify.app/
The system is ready for real-world integration and can be expanded to:
- Multiple languages
- Room-by-room suggestions
- Vaasthu-based house plan checker
We provide a Docker image for Vaasthu Vision AI so you can run the app anywhere without installing dependencies.
- Docker Desktop installed: https://www.docker.com/products/docker-desktop
- A `.env` file with:
  - `QDRANT_URL=<your-qdrant-cloud-url>`
  - `QDRANT_API_KEY=<your-qdrant-api-key>`
  - `GROQ_API_KEY=<your-groq-api-key>`
Pull the latest image from Docker Hub:
- `docker pull docker.io/shivaprasadnaroju/vaasthu-vision-ai:latest`

Run the container with your `.env`:
- `docker run -p 8000:8000 --env-file .env docker.io/shivaprasadnaroju/vaasthu-vision-ai:latest`
Visit: http://localhost:8000/docs to access FastAPI Swagger UI.
- The image is preconfigured to connect to Qdrant Cloud via `.env`.
- For a local Qdrant setup, a separate Docker Compose file can be used.
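A local Qdrant setup might be wired up with a compose file along these lines. This is a sketch, not the project's actual compose file: the service names and volume are assumptions, while `qdrant/qdrant` is the official Qdrant image and 6333 its standard REST port:

```yaml
# Hypothetical docker-compose.yml for running the app against a local Qdrant.
version: "3.8"
services:
  qdrant:
    image: qdrant/qdrant:latest
    ports:
      - "6333:6333"            # Qdrant REST API
    volumes:
      - qdrant_data:/qdrant/storage
  app:
    image: shivaprasadnaroju/vaasthu-vision-ai:latest
    ports:
      - "8000:8000"
    env_file: .env             # still provides GROQ_API_KEY
    environment:
      - QDRANT_URL=http://qdrant:6333   # override cloud URL with local service
    depends_on:
      - qdrant
volumes:
  qdrant_data:
```

With this layout the app reaches Qdrant over the compose network by service name, so no Qdrant Cloud credentials are needed.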
Inspired by traditional Indian architecture wisdom and empowered by modern AI.
I'm open to collaborations, feedback, or AI-based consulting.
Email: shivanaroju26@gmail.com