Get the APK here.
- Open the app
- Click the camera button on the dashboard page
- Take a picture of the waste object
- Click the predict button
- See the result
See the Jupyter Notebook here.
Contents of the Jupyter Notebook:
- Import libraries
- Import dataset
The dataset contains 11 classes of images: battery, cardboard, clothes, e-waste, food, glass, light bulbs, metal, paper, plastic, and shoes.
- Image preprocessing using ImageDataGenerator
- Build the model on the InceptionV3 pre-trained base with additional layers (ending in a dense layer with 11 output units and softmax activation)
- Compile and train the model
- Plot validation accuracy and loss
- Model testing
- Save the model in .h5 format and also as TFLite (optional)
- Load the saved .h5 model and retrain it
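The preprocessing, model-building, and export steps above can be sketched as follows. This is a minimal sketch, not the notebook's exact code: the dataset directory layout, the 150×150 input size, the augmentation parameters, and the training hyperparameters are assumptions.

```python
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator

IMG_SIZE = (150, 150)  # assumed input size; the notebook may use another
NUM_CLASSES = 11       # the 11 waste classes in the dataset

def make_generators(data_dir="dataset/"):
    """Rescale and augment the images with ImageDataGenerator.

    The directory layout (one sub-folder per class) and the augmentation
    parameters are assumptions.
    """
    gen = ImageDataGenerator(rescale=1.0 / 255, rotation_range=20,
                             horizontal_flip=True, validation_split=0.2)
    train = gen.flow_from_directory(data_dir, target_size=IMG_SIZE,
                                    class_mode="categorical", subset="training")
    val = gen.flow_from_directory(data_dir, target_size=IMG_SIZE,
                                  class_mode="categorical", subset="validation")
    return train, val

def build_model(weights="imagenet"):
    """InceptionV3 base plus a dense softmax head for the 11 classes."""
    base = tf.keras.applications.InceptionV3(
        include_top=False, weights=weights, input_shape=IMG_SIZE + (3,))
    base.trainable = False  # freeze the pre-trained layers
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    train, val = make_generators()
    model = build_model()
    model.fit(train, validation_data=val, epochs=10)
    model.save("model.h5")  # .h5 export, reloadable for retraining
    # Optional TFLite export for on-device use
    tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()
    with open("model.tflite", "wb") as f:
        f.write(tflite_model)
```

Retraining then amounts to `tf.keras.models.load_model("model.h5")` followed by another `fit` call on the generators.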
- Activate the Cloud Run API and Cloud Build API
- Package the resources (model.h5, Flask RESTful API) into a container image using a Dockerfile
- Build the image with Cloud Build and deploy it to Cloud Run
- API endpoint: https://getpredict-ehmfuclc5q-et.a.run.app
- Map the prediction results into 3 categories:
Organik (Organic): food, cardboard, paper
Anorganik (Inorganic): clothes, glass, light bulbs, metal, plastic, shoes
B3 (Toxic and Hazardous Material): battery, e-waste
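The Flask service and the category mapping above can be sketched as a single app. This is a hedged sketch, not the deployed code: the `/predict` route path, the 150×150 preprocessing, and the alphabetical ordering of `CLASS_NAMES` are assumptions; only the 11 class names, the "image" key, and the 3-category mapping come from this README.

```python
import io

import numpy as np
from flask import Flask, request, jsonify
from PIL import Image

# Class names in the generator's alphabetical order (an assumption).
CLASS_NAMES = ["battery", "cardboard", "clothes", "e-waste", "food", "glass",
               "light bulbs", "metal", "paper", "plastic", "shoes"]

# Mapping of the 11 classes into the 3 waste categories listed above.
CATEGORY_MAP = {
    "food": "Organik", "cardboard": "Organik", "paper": "Organik",
    "clothes": "Anorganik", "glass": "Anorganik", "light bulbs": "Anorganik",
    "metal": "Anorganik", "plastic": "Anorganik", "shoes": "Anorganik",
    "battery": "B3", "e-waste": "B3",
}

def create_app(model):
    """Build the Flask app around any object with a .predict(batch) method."""
    app = Flask(__name__)

    @app.route("/predict", methods=["POST"])  # route path is an assumption
    def get_predict():
        # The client sends the captured photo under the "image" form key.
        file = request.files["image"]
        img = Image.open(io.BytesIO(file.read())).convert("RGB").resize((150, 150))
        batch = np.expand_dims(np.asarray(img, dtype="float32") / 255.0, axis=0)
        probs = model.predict(batch)[0]
        label = CLASS_NAMES[int(np.argmax(probs))]
        return jsonify({"class": label, "category": CATEGORY_MAP[label]})

    return app
```

Inside the container, the real model would be loaded with `tf.keras.models.load_model("model.h5")` and the app served on port 8080, the port Cloud Run expects by default.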
- Capture image using CameraX
- Call the API endpoint with the image as the request key
- Display the result
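The app itself uses CameraX on Android, but the API call above can be sketched in Python with `requests`. The base URL comes from this README; the `/predict` path suffix, the "image" key name, and the JSON response shape are assumptions.

```python
import requests

API_URL = "https://getpredict-ehmfuclc5q-et.a.run.app/predict"  # path suffix is an assumption

def predict_waste(image_path, url=API_URL, timeout=30):
    """POST the captured photo under the "image" key and return the JSON result."""
    with open(image_path, "rb") as f:
        resp = requests.post(url, files={"image": f}, timeout=timeout)
    resp.raise_for_status()
    return resp.json()
```

The Android client performs the equivalent multipart POST and then renders the returned class and category on the result screen.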