Skintector is a web application that allows users to self-diagnose skin conditions present in their images using AI. The model is built on EfficientNetV2M and has been fine-tuned on the SD-198 dataset.
The front-end was built using React, styled using Tailwind and Mantine, uses Dexie (IndexedDB) as a database, and was developed on Vite. The back-end API runs on Flask and serves a Tensorflow model.
This project was developed by Jeffrey Jin, Hui Hua (Emily) Huang, Long Tran, and Albert Hong.
This is an example of how a user can diagnose an image on our website.
Watch the full demo video below.
The file structure of our project is listed as follows.
repository
├── /api/
│   ├── /model/ ## Saved Tensorflow model for API
│   ├── /training/ ## Code for training the model
│   ├── main.py ## Code for the Flask API
│   └── requirements.txt ## List of dependencies for the back-end and model training
├── /public/ ## Static elements for the website
├── /src/ ## Source code for the website
├── .eslintrc.cjs ## Config file for linting
├── .gitignore ## Ignores files that shouldn't be tracked
├── .vercelignore ## Ignores back-end files when hosting front-end on Vercel
├── demo.gif ## Demo gif for readme
├── index.html ## Loads React code
├── LICENSE ## GNU AGPL 3.0 license
├── package-lock.json, package.json ## Config files for running front-end and installing dependencies
├── postcss.config.cjs ## Config file for Mantine
├── README.md ## You are here
├── tailwind.config.js ## Config file for Tailwind
├── tsconfig.json, tsconfig.node.json ## Config files for TypeScript
├── vercel.json ## Config file for Vercel
└── vite.config.ts ## Config file for Vite
To clone the repo, use
git clone https://github.com/sfu-cmpt340/2024_1_project_01
cd 2024_1_project_01
The training and evaluation scripts require Python (tested on 3.11), Keras, and KerasCV. Keras also requires a back-end library, which can be Jax, Torch, or Tensorflow. Regardless of which back-end you choose, Tensorflow is required for data loading and Matplotlib for plotting the training history.
Tensorflow is the recommended back-end since it has CUDA support, which speeds up training and evaluation of the models. If you choose Jax or Torch, you will have to install Tensorflow without CUDA support.
Keras and KerasCV should both be installed after the back-end is installed.
If you have chosen Jax or Torch as the back-end, export the KERAS_BACKEND environment variable as follows
export KERAS_BACKEND="jax"
or edit your local config file at ~/.keras/keras.json to configure the back-end.
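The same choice can be made programmatically inside a script; a minimal sketch (the "jax" value here is just an example):

```python
import os

# KERAS_BACKEND must be set *before* keras is first imported:
# Keras 3 reads it once, at import time.
os.environ["KERAS_BACKEND"] = "jax"  # or "torch" / "tensorflow"

# import keras
# keras.backend.backend() would now report "jax"
```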
If you are on Linux with an Nvidia GPU, the dependencies for the training scripts can be installed by using
pip install -r api/training/requirements.txt
It is recommended to install the dependencies for both the training scripts and the back-end in a Python virtual environment.
The back-end requires Python (tested on 3.11).
To install the back-end dependencies, use
pip install -r api/requirements.txt
It is also recommended to use a venv for the back-end dependencies.
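As a sketch, a virtual environment can also be created from Python's standard library venv module (the `.venv` directory name is just a common convention, not something this project requires):

```python
import pathlib
import venv

# Create an isolated environment for the back-end (or training) dependencies.
# Equivalent to running `python -m venv .venv` on the command line.
env_dir = pathlib.Path(".venv")
venv.create(env_dir, with_pip=True)  # with_pip=True so requirements can be installed
```

After activating the environment (`source .venv/bin/activate` on Linux/macOS), the `pip install -r ...` commands above will install into it instead of the system Python.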
The front-end requires the installation of Node.js (tested on v16.17.0).
To install the front-end dependencies, use
npm install
The required packages will be installed to /node_modules/.
The project uses a modified version of SD-198, in which each image's bottom caption has been cropped off so the burned-in text does not corrupt the features the model learns. Download the dataset here and place its contents inside /api/training/.
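As an illustration of that preprocessing step, here is a minimal sketch of cropping a bottom caption with NumPy array slicing; the function name and the caption height are hypothetical, and the dataset linked above has already been cropped for you:

```python
import numpy as np

def crop_caption(image: np.ndarray, caption_height: int) -> np.ndarray:
    """Drop the bottom `caption_height` pixel rows of an H x W x C image."""
    if not 0 < caption_height < image.shape[0]:
        raise ValueError("caption_height must be between 0 and the image height")
    return image[:-caption_height]

# A 100x64 RGB image loses its bottom 20 rows:
cropped = crop_caption(np.zeros((100, 64, 3), dtype=np.uint8), 20)
print(cropped.shape)  # (80, 64, 3)
```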
To train a model, use
python ./api/training/train_model.py -m [model_number]
where [model_number] is one of the six models mentioned in the report. Once complete, the model will be saved at api/training/models/model_[model_number].keras.
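The -m flag can be handled with argparse; a minimal sketch of such flag parsing (the real train_model.py may accept more options, and the 1–6 range is an assumption based on the six models in the report):

```python
import argparse

parser = argparse.ArgumentParser(description="Train one of the models")
parser.add_argument(
    "-m", "--model-number", type=int, required=True,
    choices=range(1, 7),  # assumes the models are numbered 1-6
    help="which model to train",
)

# Simulate `python train_model.py -m 3`:
args = parser.parse_args(["-m", "3"])
print(args.model_number)  # 3
```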
To export one of your trained models at api/training/models as a Tensorflow serving model, use
python ./api/training/export_serving_model.py -m [model_number]
The serving model will be saved at /api/training/models/model_[model_number]/.
You can download our trained models and their corresponding serving models here, in the /models/ folder.
To evaluate a trained or downloaded model at /api/training/models/model_[model_number].keras, use
python ./api/training/evaluate_model.py -m [model_number]
Our website runs a Tensorflow serving model. If you have trained a model as instructed above, you should have /api/training/models/model_[model_number].keras. Use /api/training/export_serving_model.py to export it as a Tensorflow serving model as instructed above. You will get /api/training/models/model_[model_number]/, whose contents should be copied into /api/model/. Alternatively, you can download a serving model here.
Then, set the current directory to api by using
cd api
All of the following commands must be run in /api/.
To run the back-end in development mode, use
python main.py
The API will be accessible at http://127.0.0.1:5000 or http://localhost:5000. Note that this is not the recommended method to run the back-end for production.
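For reference, a minimal Flask app in the same spirit as main.py might look like the sketch below; the /classify route and the JSON response shape are illustrative assumptions, not the project's actual API:

```python
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/classify", methods=["POST"])
def classify():
    # The real back-end would decode the uploaded image and run it through
    # the TensorFlow serving model; this stub returns a placeholder
    # prediction instead.
    return jsonify({"condition": "placeholder", "confidence": 0.0})

if __name__ == "__main__":
    # Development server only; use a WSGI server (e.g. gunicorn) in production.
    app.run(host="127.0.0.1", port=5000)
```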
If you're running the back-end locally, this step can be omitted. Otherwise, create a .env file in the base directory and fill it in as follows
VITE_CLASSIFY="address_of_flask_server"
To run the front-end in development mode, use
npm run dev
The website can then be accessed at http://127.0.0.1:5173 or http://localhost:5173.
To compile the front-end and run it in production mode, use
npm run build
npm run preview
The TypeScript code will be compiled into JavaScript and stored at /dist/. The website can then be accessed at http://127.0.0.1:4173 or http://localhost:4173.