Med Gemma is a medical chatbot based on the edwardlo12/medgemma-4b-it-Q4_K_M language model, accessible via a Chainlit chat interface.

Note: The model used here is an image-text-to-text (multimodal) model. On an M-series Mac, or on a PC with a GPU or sufficient RAM and VRAM, I recommend installing LM Studio and downloading a multimodal model in GGUF format (or a 4-bit MLX vision model on Mac) for a quicker start. MedGemma from Google is the first multimodal medical model at such a minimal size (4B parameters); I suspect it is only capable of simple radiology vision tasks. Also check the Ollama model search for "medgemma" vision models: newer vision variants similar to the one chosen here have become available in the meantime, and you might want to use one of those instead.
Before you begin, ensure that Ollama is installed and running.
- Install Ollama: Visit ollama.com and download the version for your operating system.
- Download the MedGemma model: Open your terminal and run the following command:

```
ollama pull edwardlo12/medgemma-4b-it-Q4_K_M
```
Ensure the Ollama service is running in the background before starting the application.
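To verify programmatically that the service is up and the model has been pulled, you can run a small check like this sketch (it uses the `ollama` Python package from the dependencies installed below; the script name is just an example):

```python
# check_setup.py (hypothetical script name) - verify the Ollama daemon is
# reachable and the model has been pulled, before launching the app.
import ollama

MODEL = "edwardlo12/medgemma-4b-it-Q4_K_M"

try:
    ollama.show(MODEL)  # raises if the daemon is down or the model is missing
    print(f"OK: Ollama is running and {MODEL} is available.")
except Exception as exc:
    print(f"Setup problem: {exc}")
```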
You can install the project dependencies using either Conda or pip.
- Create a new Conda environment (recommended): Download and install Miniconda, then run:

```
conda create -n medgemma python=3.11  # You can choose a different Python version if desired
conda activate medgemma
```
- Install dependencies: Create a requirements.txt file (if not already present) with the following content:

```
chainlit
langchain
langchain-community
ollama
```

Then install the packages:

```
pip install -r requirements.txt
```
(Note: Although we are using Conda, pip within a Conda environment is often the easiest way to install Python packages that are not directly available as Conda packages, or to pin specific versions from requirements.txt.)
- Create a virtual environment (recommended if not using Conda):

```
python -m venv venv
source venv/bin/activate  # On macOS/Linux
# venv\Scripts\activate   # On Windows
```
- Install dependencies: Create the same requirements.txt file described above (if not already present), then install the packages:

```
pip install -r requirements.txt
```
- Ensure your (Conda or virtual) environment is activated.
- Ensure the Ollama service is running and the edwardlo12/medgemma-4b-it-Q4_K_M model is available.
- Navigate to the project directory in your terminal.
- Start the Chainlit application:

```
chainlit run main.py -w
```

The -w flag enables automatic reloading on code changes. (A sketch of a minimal main.py follows after this list.)
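For reference, here is a minimal sketch of what a Chainlit app like main.py could look like when wired to Ollama through langchain-community. This illustrates the pattern, not necessarily the repository's actual implementation:

```python
# main.py - a minimal sketch, assuming the app wires Chainlit to Ollama
# through langchain-community; the repository's actual main.py may differ.
import chainlit as cl
from langchain_community.llms import Ollama

# The model pulled earlier with `ollama pull`.
llm = Ollama(model="edwardlo12/medgemma-4b-it-Q4_K_M")

@cl.on_message
async def on_message(message: cl.Message):
    # llm.invoke() is blocking; run it in a worker thread so the
    # Chainlit event loop stays responsive.
    reply = await cl.make_async(llm.invoke)(message.content)
    await cl.Message(content=reply).send()
```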
After starting the application, you will see output in the terminal similar to this:

```
Your app is available at http://localhost:8000
```
Open your web browser and navigate to the displayed address (it defaults to http://localhost:8000, but may vary if that port is already in use). You should now see the Med Gemma chat interface.