Prototype rebuilt with Mistral LLM, LangChain, and a modern UI.
This is an enhanced, more efficient version of the Patient Health Report Analyzer project I initially created as a prototype. The earlier version relied on basic PDF parsing and keyword matching. In this release, I've rebuilt it on a modern stack:
- 🧠 Mistral LLM (via Ollama) – for local, privacy-preserving AI inference
- 🔗 LangChain – for prompt management and chaining logic
- 📄 PDF Parsing – clean extraction of health report text using pdfplumber (see the sketch after this list)
- 💻 Streamlit UI – for an intuitive and interactive frontend
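As a rough sketch of how these pieces fit together, the snippet below wires a Streamlit file uploader to a pdfplumber extraction step. This is only an illustration under assumed names (the `extract_report_text` helper and widget labels are mine, not the project's actual code in `app.py`):

```python
# Minimal sketch of the upload + extraction step.
# Assumes: pdfplumber and streamlit installed; names here are illustrative.
import pdfplumber
import streamlit as st

def extract_report_text(pdf_file) -> str:
    """Concatenate the text of every page of the uploaded PDF."""
    with pdfplumber.open(pdf_file) as pdf:
        # extract_text() can return None for pages with no text layer
        return "\n".join(page.extract_text() or "" for page in pdf.pages)

uploaded = st.file_uploader("Upload a health report (PDF)", type="pdf")
if uploaded is not None:
    report_text = extract_report_text(uploaded)
    st.text_area("Extracted report text", report_text, height=300)
```

pdfplumber works page by page, so concatenating `page.extract_text()` preserves the report's reading order while tolerating pages with no extractable text.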
- Upload and analyze health/lab PDF reports
- Extracts and summarizes:
  - 🔍 Key medical observations
  - ⚠️ Critical abnormalities or conditions
  - 📋 Suggested follow-ups or tests
  - ❓ Missing or ambiguous information
- All AI processing runs locally via Mistral (no cloud model dependency); a minimal sketch of the summarization chain is shown below
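The summarization step could be wired with LangChain and the local Mistral model roughly as follows. This sketch assumes the `langchain_community` Ollama wrapper and an `ollama run mistral` instance already running; the prompt wording, chain composition, and sample report text are illustrative, not the project's actual prompt:

```python
# Minimal sketch of the prompt + local-inference chain.
# Assumes: langchain-community and langchain-core installed, Ollama serving "mistral".
from langchain_community.llms import Ollama
from langchain_core.prompts import PromptTemplate

llm = Ollama(model="mistral")  # talks to the local Ollama server, no cloud API

prompt = PromptTemplate.from_template(
    "You are a medical report assistant. From the health report below, list:\n"
    "1. Key medical observations\n"
    "2. Critical abnormalities or conditions\n"
    "3. Suggested follow-ups or tests\n"
    "4. Missing or ambiguous information\n\n"
    "Report:\n{report_text}"
)

chain = prompt | llm  # LangChain runnable: fill the prompt, then call Mistral

# Placeholder input; in the app this would be the text extracted from the PDF.
report_text = "Hemoglobin: 9.8 g/dL (low) ..."
summary = chain.invoke({"report_text": report_text})
print(summary)
```

Because the model is served by Ollama on localhost, the report text never leaves the machine, which is what backs the privacy claim above.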
- Clone the repository:
git clone https://github.com/rahulprajapati08/PHR-Analyzer.git
cd PHR-Analyzer
- Install dependencies:
pip install -r requirements.txt
- Install Ollama if not already installed, then run:
ollama run mistral
- Launch the App:
streamlit run app.py
- This project is fully local and private — no external LLM APIs are used.
- To share the app, you can expose the locally running Streamlit server over the internet using a tunneling tool such as ngrok or localtunnel.
- Add PDF export of summary
- Build patient history dashboard
- Integrate OCR for scanned reports