
# Image-Clasiifier

This is an image-classifier web app built using Streamlit.
Link to Web app

Steps:

1. Web scraping using Selenium, a Python library
2. Convolutional Neural Network model building and training
3. Saving the model and creating a web app using Streamlit

## PART-1: Web Scraping (done by Himangshu Deka)

1. First created a Chrome driver instance.

   *(Screenshot 2023-07-29 at 14:28:49)*

2. Then wrote a function that takes a URL and the number of images to be downloaded, and downloads the images from that URL into a folder created by the functions depicted below:

   *(Screenshot 2023-07-29 at 14:29:59)*

   Note that we exclude every 25th image. When clicking through images in Google Images, every 25th indexed element is not an image but a "view images like this" link, so we discard this non-image element rather than collect it.

3. Now that our functions are ready, type an image description into Google Images, paste the resulting URL, and apply the function.
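The steps above can be sketched roughly as follows. This is an illustrative reconstruction, not the repository's actual code: the function names, the CSS selector for thumbnails, and the output folder are all assumptions.

```python
# Sketch of the scraping step: a Chrome driver instance plus a download
# function that skips every 25th Google Images element (a "view images
# like this" link, counting from 1). Selenium/requests are only imported
# inside the function so the pure helper below works on its own.
import os

def is_image_thumbnail(index):
    """True for 0-based indices that are real thumbnails; elements
    25, 50, 75, ... (1-based) are non-image links and are skipped."""
    return (index + 1) % 25 != 0

def download_images(url, n_images, out_dir="images"):
    """Open the Google Images URL and save up to n_images thumbnails."""
    from selenium import webdriver
    from selenium.webdriver.common.by import By
    import requests

    os.makedirs(out_dir, exist_ok=True)
    driver = webdriver.Chrome()  # the Chrome driver instance (step 1)
    try:
        driver.get(url)
        # "img.rg_i" is a guessed selector for result thumbnails
        thumbs = driver.find_elements(By.CSS_SELECTOR, "img.rg_i")
        saved = 0
        for i, thumb in enumerate(thumbs):
            if saved >= n_images:
                break
            if not is_image_thumbnail(i):  # drop every 25th element
                continue
            src = thumb.get_attribute("src") or thumb.get_attribute("data-src")
            if not src or not src.startswith("http"):
                continue
            with open(os.path.join(out_dir, f"img_{saved}.jpg"), "wb") as f:
                f.write(requests.get(src, timeout=10).content)
            saved += 1
    finally:
        driver.quit()
```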

-------------------------------------------------------------------------------------------------------------------------------------

## PART-2: Building the Model (done by Himangshu Deka; Koushik Mukka)

1. Created the train and test datasets as follows:

*(Screenshot 2023-07-29 at 14:45:52)*
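The split in the screenshot is not reproduced here; a minimal file-based train/test split might look like this (the 80/20 ratio, directory layout, and function name are assumptions):

```python
# Hypothetical sketch of splitting the scraped images into train and
# test folders; the repository's actual split may differ.
import os
import random
import shutil

def train_test_split_files(src_dir, train_dir, test_dir,
                           test_ratio=0.2, seed=42):
    """Shuffle image files in src_dir and copy them into train/test
    folders; returns (train_names, test_names)."""
    files = sorted(f for f in os.listdir(src_dir)
                   if f.lower().endswith((".jpg", ".jpeg", ".png")))
    random.Random(seed).shuffle(files)  # fixed seed for reproducibility
    n_test = int(len(files) * test_ratio)
    os.makedirs(train_dir, exist_ok=True)
    os.makedirs(test_dir, exist_ok=True)
    for i, name in enumerate(files):
        dest = test_dir if i < n_test else train_dir
        shutil.copy(os.path.join(src_dir, name), os.path.join(dest, name))
    return files[n_test:], files[:n_test]
```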

2. Then preprocessed the downloaded images:

*(Screenshot 2023-07-29 at 14:36:37)*
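Typical preprocessing for a CNN is resizing to a fixed input size and rescaling pixels to [0, 1]; the exact steps in the screenshot may differ (e.g. Keras' `ImageDataGenerator(rescale=1/255)`). A sketch, with the 256x256 target size being an assumption:

```python
# Illustrative preprocessing: scale 0-255 pixel values to [0, 1] and
# resize each image to the model's (assumed) input size.
def normalize_pixels(pixels):
    """Map 0-255 integer pixel values to floats in [0, 1]."""
    return [p / 255.0 for p in pixels]

def load_and_preprocess(path, size=(256, 256)):
    """Load an image file, resize it, and rescale it (PIL + numpy);
    imports are local so the pure helper above works without them."""
    from PIL import Image
    import numpy as np
    img = Image.open(path).convert("RGB").resize(size)
    return np.asarray(img, dtype="float32") / 255.0
```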

3. Then built the model as follows. *(Screenshot 2023-07-29 at 14:48:33)* Note: after some fine-tuning of the number of layers (whether to go with 2 or 3) and the padding layers, we decided to go with these, as they gave us higher accuracy.

4. Our model:

*(Screenshot 2023-07-29 at 14:50:02)*
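Since the architecture lives in the screenshots, here is only a generic small Keras CNN standing in for it; the layer counts, filter sizes, input shape, and class count are all assumptions chosen to illustrate the conv → pool → dense pattern the steps describe.

```python
# Generic small CNN sketch, NOT the repository's exact model.
def conv_output_size(n, kernel, stride=1, padding=0):
    """Spatial output size of a conv layer: floor((n - k + 2p)/s) + 1.
    With kernel=3, padding=1 ("same"-style), the size is preserved."""
    return (n - kernel + 2 * padding) // stride + 1

def build_model(input_shape=(256, 256, 3), n_classes=2):
    from tensorflow import keras  # local import: only needed to build
    model = keras.Sequential([
        keras.layers.Conv2D(32, 3, activation="relu", padding="same",
                            input_shape=input_shape),
        keras.layers.MaxPooling2D(),
        keras.layers.Conv2D(64, 3, activation="relu", padding="same"),
        keras.layers.MaxPooling2D(),
        keras.layers.Flatten(),
        keras.layers.Dense(128, activation="relu"),
        keras.layers.Dense(n_classes, activation="softmax"),
    ])
    return model
```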

5. Then we fit our data as follows (we tried the Adam optimizer as well, but accuracy was better with SGD, so Koushik Mukka decided to use the SGD optimizer here):

*(Screenshot 2023-07-29 at 14:51:23)*
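A sketch of the compile/fit/save step, using SGD as the README says; the learning rate, loss, epoch count, and helper name are illustrative assumptions, while `my_model2.hdf5` is the filename the README itself uses:

```python
# Compile with SGD (chosen over Adam per the README), fit, then save
# the model for the Streamlit deployment in PART-3.
import math

def steps_per_epoch(n_samples, batch_size):
    """Number of batches needed to cover the dataset once."""
    return math.ceil(n_samples / batch_size)

def compile_and_fit(model, train_ds, val_ds, epochs=10):
    from tensorflow import keras  # local import, as above
    model.compile(optimizer=keras.optimizers.SGD(learning_rate=0.01),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    history = model.fit(train_ds, validation_data=val_ds, epochs=epochs)
    model.save("my_model2.hdf5")  # filename from the README
    return history
```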

6. Final model accuracy:

*(Screenshot 2023-07-29 at 14:55:19)*

7. Then saved our model as `my_model2.hdf5`, to be deployed using Streamlit.

-------------------------------------------------------------------------------------------------------------------------------------

## PART-3: Deployment (done by Koushik Mukka)
1. To deploy the model, it was saved to the `my_model2.hdf5` file. Here I used Streamlit, installed as shown below, to host the deep-learning model on localhost:

*(Screenshot 2023-07-29 at 3:12:15 PM)*

2. Then `app.py` is created, which is hosted using Streamlit. Streamlit is imported and used to create an option to accept an image. Once an image is accepted, I load the previously saved model, resize the accepted image to match the model's input size, make a prediction with the model, and use that prediction to print an appropriate message.

*(Screenshots 2023-07-29 at 3:19:02 PM and 3:19:06 PM)*
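A minimal sketch of what an `app.py` like the one in the screenshots might contain; the widget labels, class names, 256x256 input size, and 0.5 threshold are all assumptions, while `my_model2.hdf5` comes from PART-2. Run it with `streamlit run app.py`.

```python
# Streamlit app sketch: accept an image, load the saved model, resize
# the image to the model's input size, predict, and print a message.
def label_from_score(score, threshold=0.5):
    """Turn a model score into a printable message (class names are
    placeholders, not the repository's actual classes)."""
    return "Class A" if score >= threshold else "Class B"

def main():
    import numpy as np
    import streamlit as st
    from PIL import Image
    from tensorflow import keras

    st.title("Image Classifier")
    uploaded = st.file_uploader("Upload an image", type=["jpg", "png"])
    if uploaded is not None:
        model = keras.models.load_model("my_model2.hdf5")  # from PART-2
        img = Image.open(uploaded).convert("RGB").resize((256, 256))
        batch = np.asarray(img, dtype="float32")[None, ...] / 255.0
        score = float(model.predict(batch)[0][0])
        st.write(label_from_score(score))
```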

3. Then, to host the app on the internet, I used Streamlit's cloud hosting, where the repository link is added along with `app.py`, the model, and `requirements.txt` (which lists all the dependencies that must be included).

*(Screenshots 2023-07-29 at 3:30:36 PM, 3:30:40 PM, and 3:45:32 PM)*
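A `requirements.txt` for this stack might look like the following; this is an illustrative list based on the libraries the README mentions, and the repository's actual file (and any version pins) may differ:

```text
streamlit
tensorflow
numpy
Pillow
```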

-------------------------------------------------------------------------------------------------------------------------------------
