Ever wondered whether your data is safe? Ever wondered how much of it you are unknowingly handing over to chat applications on the internet? If not, take a look at the article below:

If you read the article, your reaction probably looks something like this right now.

Well, I have some good news for you! Google's recent high-performance Gemma 3n model is the answer for all your therapeutic needs. Instead of asking ChatGPT or other AI tools for therapy, why not ask MindWell? It is your very own offline AI desktop app, where you stay in control of your data: you can analyze and journal your special memories and view your progress over time. All of this is possible thanks to a lightweight model from Google called "Gemma 3n".
Gemma 3n is truly one of a kind. You know how models generally produce better responses the more parameters they have? Gemma 3n is unusual here: the E2B variant has a footprint of about 5B parameters but runs with the memory usage of roughly a 2B model. That lets MindWell spin up two instances of the model for internal calls when required while using far less memory, and rely on Ollama's keep-alive (TTL) mechanism to unload an instance automatically when it sits idle (see the sketch after the list below), keeping the whole desktop application smooth:
- Summarization
- Chat
- Tracking
- Journaling

All at once! How awesome is that?
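The keep-alive behaviour mentioned above can be set per request through Ollama's HTTP API. Below is a minimal TypeScript sketch, assuming Ollama's default local port and the gemma3n:e2b tag; the function name and prompt are illustrative, not MindWell's actual code.

```ts
// Minimal sketch: ask the local Ollama instance a question and have the model
// unloaded after 2 minutes of inactivity via the keep_alive (TTL) option.
// Assumes Ollama is running on its default port with gemma3n:e2b already pulled.
async function askGemma(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "gemma3n:e2b",
      prompt,
      stream: false,    // return one JSON object instead of a token stream
      keep_alive: "2m", // TTL: Ollama unloads the model after 2 idle minutes
    }),
  });
  const data = await res.json();
  return data.response as string;
}

askGemma("Summarize my day in two sentences.").then(console.log);
```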

Below you can check out the architecture diagram and tech stack used in this project.
Interact with a powerful local AI assistant from Google (the Gemma 3n model via Ollama) for various tasks, including:
- Question answering
- Re-affirmations
- Wellness tracking
- Storing Special Memories
- Journaling
- Language Switching
- Exporting or accessing your database
- and a lot more planned...

- Log daily moods with detailed entries
- Interactive mood trend visualizations (see the sketch after this list)
- Insightful summaries over time
- Pattern recognition and insights
- Multilingual Summarization
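The roadmap further down mentions that mood tracking uses Chart.js. Here is a minimal sketch of the kind of line chart that implies; the entry shape, canvas id, and values are illustrative assumptions, not MindWell's actual code.

```ts
import { Chart, registerables } from "chart.js";

Chart.register(...registerables);

// Illustrative mood entries; in the app these would come from the local database.
const entries = [
  { date: "2024-07-01", mood: 3 },
  { date: "2024-07-02", mood: 4 },
  { date: "2024-07-03", mood: 2 },
];

// Render a simple mood-over-time line chart into a canvas element.
const canvas = document.getElementById("mood-chart") as HTMLCanvasElement;
new Chart(canvas, {
  type: "line",
  data: {
    labels: entries.map((e) => e.date),
    datasets: [{ label: "Mood (1-5)", data: entries.map((e) => e.mood), tension: 0.3 }],
  },
  options: { scales: { y: { min: 1, max: 5 } } },
});
```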


- Any special memories tagged by Gemma 3n are stored in Memory Lane (a hypothetical entry shape is sketched after this list)
- Allows users to journal and modify a memory on command
- Revisit old memories with timestamps
- Delete memories on command
- Multilingual memory storage
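To make the Memory Lane features above concrete, here is a hypothetical TypeScript shape for a stored memory; the field names are assumptions for illustration, not the app's real schema.

```ts
// Hypothetical shape of a Memory Lane entry (illustrative, not the real schema).
interface MemoryEntry {
  id: string;
  createdAt: string;      // ISO timestamp shown when revisiting old memories
  content: string;        // the journaled text, editable on command
  language: string;       // e.g. "en", "es" - memories are stored multilingually
  taggedByModel: boolean; // true when Gemma 3n flagged the message as special
}

const example: MemoryEntry = {
  id: "m-001",
  createdAt: new Date().toISOString(),
  content: "Finished my first week of daily journaling.",
  language: "en",
  taggedByModel: true,
};
```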


- User name setting used as context for the chat
- Default language setting so summaries and journal entries come back in different languages (see the sketch after this list)
- Adaptive UI design and interactive animations
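A minimal sketch of how these settings could be passed to Gemma 3n as context; the interface and prompt wording are illustrative assumptions rather than MindWell's exact implementation.

```ts
// Hypothetical settings object and prompt construction (illustrative only).
interface Settings {
  userName: string;        // used as context for the chat
  defaultLanguage: string; // language for summaries and journal entries
}

function buildSystemPrompt(s: Settings): string {
  return (
    `You are MindWell, a supportive wellness assistant. ` +
    `Address the user as ${s.userName} and reply in ${s.defaultLanguage}.`
  );
}

console.log(buildSystemPrompt({ userName: "Alex", defaultLanguage: "Spanish" }));
```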

A fully configured NSIS (Nullsoft Scriptable Install System) installer bundles the entire Electron app along with required binaries. It handles:
- Offline installation of MindWell
- Offline installation of Ollama
Terminal emulation is provided via:
- `@xterm/xterm` ^5.5.0 – provides the core terminal interface
- `@xterm/addon-fit` ^0.10.0 – auto-resizes the terminal to fit its container

These libraries allow for an embedded terminal that displays real-time model downloads (e.g., `ollama pull gemma3n:e2b`) during first launch, as sketched below.
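A minimal sketch of wiring these two libraries together; the container element id is an illustrative assumption, and MindWell's real setup may differ.

```ts
import { Terminal } from "@xterm/xterm";
import { FitAddon } from "@xterm/addon-fit";
import "@xterm/xterm/css/xterm.css";

// Create the embedded terminal and keep it sized to its container.
const term = new Terminal({ convertEol: true });
const fit = new FitAddon();
term.loadAddon(fit);
term.open(document.getElementById("setup-terminal")!); // hypothetical container id
fit.fit();
window.addEventListener("resize", () => fit.fit());

// Output from the first-launch model download can then be streamed in:
term.writeln("Pulling gemma3n:e2b ...");
```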
- The Ollama binary is pre-packaged within the app's `resources/` directory
- If Ollama is not already installed, it is installed immediately after MindWell during setup
- Post-install, Ollama auto-updates and manages local models efficiently
- App launches with the embedded terminal
- Checks for the ollama binary and the gemma3n:e2b model
- If missing, silently installs Ollama
- Runs `ollama pull gemma3n:e2b` (a rough sketch of this check-and-pull flow follows this list)
- User sees progress and status via the in-app terminal
- Offline-ready and installer-integrated
- Terminal transparency during the initial first-time setup and download
- Seamless first-time AI model provisioning
- Supports auto-updating of Ollama in the background via Ollama itself
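Here is a rough TypeScript sketch of that check-and-pull flow, calling the Ollama CLI from Node; the function name and error handling are illustrative, not the app's actual code.

```ts
import { execFile, spawn } from "node:child_process";
import { promisify } from "node:util";

const exec = promisify(execFile);

// Check that the Ollama CLI and the Gemma model are present, pulling if missing.
async function ensureModel(model = "gemma3n:e2b"): Promise<void> {
  try {
    const { stdout } = await exec("ollama", ["list"]); // throws if ollama is not on PATH
    if (stdout.includes(model)) return;                // model already downloaded
  } catch {
    // At this point MindWell would run the bundled installer from resources/.
    throw new Error("Ollama binary not found - install it before pulling models");
  }

  await new Promise<void>((resolve, reject) => {
    const pull = spawn("ollama", ["pull", model]); // progress goes to the in-app terminal
    pull.stdout.on("data", (chunk) => process.stdout.write(chunk));
    pull.stderr.on("data", (chunk) => process.stderr.write(chunk));
    pull.on("close", (code) =>
      code === 0 ? resolve() : reject(new Error(`ollama pull exited with code ${code}`))
    );
  });
}

ensureModel().catch(console.error);
```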

- Node.js (v18 or higher)
- npm or yarn
- Git
- Clone the repository:
  ```bash
  git clone https://github.com/MirangBhandari/MindWell.git
  cd MindWell/electron-app
  ```
- Install dependencies:
  ```bash
  npm install
  ```
- Start the development environment:
  ```bash
  npm run dev
  ```
This command will:
- Start the React development server
- Launch the Electron application
- Initialize the Python backend
- Connect to the local Ollama instance (Ollama is required when running the dev environment)
- Install Ollama (if not already installed):
  - Visit ollama.ai and follow the installation instructions
  - Pull the Gemma model:
    ```bash
    ollama pull gemma3n:e2b
    ```
- Launch MindWell:
  - The application will automatically detect your Ollama installation (see the sketch after this list)
  - Complete the initial setup wizard
  - Start exploring your new AI assistant!
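One plausible way to perform that automatic detection is to ping Ollama's local HTTP API; this is a sketch under that assumption, not necessarily MindWell's actual check.

```ts
// Detect a running Ollama installation: the local server answers on port 11434,
// and /api/tags lists the models that are already installed.
async function detectOllama(): Promise<boolean> {
  try {
    const res = await fetch("http://localhost:11434/api/tags");
    if (!res.ok) return false;
    const { models } = await res.json();
    return Array.isArray(models); // reachable and responding with a model list
  } catch {
    return false; // connection refused: Ollama is not running
  }
}

detectOllama().then((found) => console.log(found ? "Ollama detected" : "Ollama not found"));
```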
| Platform | Command | Output |
|---|---|---|
| Windows | `npm run dist:win` | `.exe` installer |

Built applications will be available in `electron-app/dist/`.
Currently, the build scripts and setup have been created with Windows in mind only.
- Standalone Executables: No Python installation required
- Custom NSIS Installer: Professional Windows installation experience
- Core Electron application setup
- React-based UI with TypeScript
- FastAPI backend integration
- Ollama AI integration
- Mood tracking with Chart.js visualizations
- Goal management system
- Digital journaling (Memory Lane)
- Wellness tracking
- Cross-platform build system
- Python-less packaging
- Cloud sync (optional, encrypted)
- Mobile companion app
- Advanced analytics dashboard
- Integration with Mobile devices
- Voice interaction capabilities once Gemma 3n's multi-modal support lands in Ollama
- RAG implementation and so much more
Contributions from the community are welcome! If you have a blazing new idea, let me know, then follow the steps below to submit a pull request:
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Make your changes
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request

(I will review pull requests manually, as I haven't integrated any tests yet.)
- Follow the existing code style
- Write clear commit messages
- Add tests for new features
- Update documentation as needed
This project is licensed under the MIT License - see the LICENSE file for details.
This project wouldn't have been possible without the incredible contributions and innovations from various teams and communities:
- Ollama Team – For building an exceptional local AI infrastructure that makes running large language models accessible and efficient
- Electron Community – For providing the powerful desktop framework that bridges web technologies with native applications
- React & FastAPI Teams – For creating robust, developer-friendly tools that form the backbone of modern applications
- Open Source Community – For the endless inspiration, collaborative spirit, and unwavering support that has brought infinite joy throughout my entire career as a Software Development Engineer. The open-source ethos continues to fuel innovation and creativity in ways that never cease to amaze me.
A special recognition goes to the Google Gemma 3n Team for creating something truly extraordinary. The innovation achieved in these models is nothing short of jaw-dropping:
- Blazing Performance – The response times and memory efficiency make the application feel incredibly fast and responsive
- Exceptional Quality – The model's output quality and reasoning capabilities are remarkable for a local deployment
- Multi-Modal Vision – The multi-modal capabilities make this model genuinely one-of-a-kind in the local AI space
Note: While Ollama hasn't yet updated to support the full multi-modal capabilities of Gemma 3n, working with what's currently available has been an absolute pleasure.
This hackathon has been an incredibly rewarding experience. Building with cutting-edge AI technology, exploring the boundaries of what's possible with local models, and creating something meaningful in such a short timeframe has been both challenging and exhilarating.
Thank you for this amazing opportunity; it's experiences like these that remind me why I fell in love with software development in the first place.
Made with ❤️ for mental wellness and productivity
⭐ Star this project if you find it helpful or interesting!