This is a web app built using Streamlit.
Link to Web app
The project has three parts:
1. Web scraping using Selenium, a Python library
2. Convolutional Neural Network model building and training
3. Saving the model and creating a web app using Streamlit
PART-1: WEB SCRAPING (DONE BY HIMANGSHU DEKA)
1. First created a Chrome driver instance.
2. Then wrote a function that takes a URL and the number of images to download, and saves the images from that URL into a folder created with helper functions, as sketched at the end of this list.
   Note that every 25th element is excluded: when clicking through Google Images, every 25th indexed element is not an image but a "view images like this" link, so this non-image element is discarded rather than collected.
3. Now that the functions are ready, type an image description into Google Images, paste that URL, and apply the function.
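A minimal sketch of how these pieces could fit together, assuming Selenium 4 and the `requests` library; the function names, the CSS selector, the folder layout, and the example query are illustrative assumptions, not the repository's actual code.

```python
import os
import time
import requests
from selenium import webdriver
from selenium.webdriver.common.by import By

# Assumption: chromedriver is available on PATH (Selenium 4 can also manage it automatically).
driver = webdriver.Chrome()

def create_download_folder(name):
    # Create the folder the images will be saved into, if it does not exist yet.
    os.makedirs(name, exist_ok=True)
    return name

def download_images(url, n_images, folder="scraped_images"):
    """Open a Google Images results page and save up to n_images thumbnails."""
    create_download_folder(folder)
    driver.get(url)
    time.sleep(2)  # give the page time to load

    # The CSS selector is an assumption; Google changes its markup frequently.
    thumbnails = driver.find_elements(By.CSS_SELECTOR, "img.Q4LuWd")

    saved = 0
    for idx, thumb in enumerate(thumbnails, start=1):
        if idx % 25 == 0:
            # Every 25th indexed element is a "view images like this" link, not an image.
            continue
        src = thumb.get_attribute("src")
        if not src or not src.startswith("http"):
            continue
        with open(os.path.join(folder, f"img_{saved}.jpg"), "wb") as f:
            f.write(requests.get(src).content)
        saved += 1
        if saved >= n_images:
            break

# Usage: search for something in Google Images, copy the URL, and call the function.
download_images("https://www.google.com/search?q=cats&tbm=isch", 100)
```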
-------------------------------------------------------------------------------------------------------------------------------------
PART-2: BUILDING THE ACTUAL MODEL (DONE BY HIMANGSHU DEKA; KOUSHIK MUKKA)
1. Created the train and test datasets.
2. Then preprocessed the images; a sketch of both steps is shown below.
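A minimal sketch of the dataset creation and preprocessing, assuming the scraped images were sorted into one subfolder per class and rescaled with Keras' `ImageDataGenerator`; the directory name, image size, batch size, and binary class mode are assumptions.

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

IMG_SIZE = (150, 150)   # assumed input size; must match the model below
BATCH_SIZE = 32

# Rescale pixel values to [0, 1] and hold out part of the data for testing.
datagen = ImageDataGenerator(rescale=1.0 / 255, validation_split=0.2)

train_data = datagen.flow_from_directory(
    "scraped_images",          # assumed layout: one subfolder per class
    target_size=IMG_SIZE,
    batch_size=BATCH_SIZE,
    class_mode="binary",       # assumption: two classes
    subset="training",
)

test_data = datagen.flow_from_directory(
    "scraped_images",
    target_size=IMG_SIZE,
    batch_size=BATCH_SIZE,
    class_mode="binary",
    subset="validation",
)
```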
3. Then built the model as follows.
   Note: after some fine-tuning of the number of convolutional layers (whether to go with 2 or 3) and the padding layers, we settled on this configuration because it gave higher accuracy.
4. Our model (a sketch is shown below):
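The exact architecture from the original notebook is not reproduced here; this is a plausible sketch of a small CNN with three convolutional blocks and "same" padding, matching the assumed 150x150 input above.

```python
from tensorflow.keras import layers, models

model = models.Sequential([
    # Three convolutional blocks; the layer count, filter sizes, and padding were tuned.
    layers.Conv2D(32, (3, 3), activation="relu", padding="same",
                  input_shape=(150, 150, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu", padding="same"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(128, (3, 3), activation="relu", padding="same"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # binary output, matching class_mode="binary"
])

model.summary()
```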
5. Then we fit the model as follows (we also tried the Adam optimizer, but accuracy was better with SGD, so Koushik Mukka decided to use the SGD optimizer here; a sketch appears after this list).
6. Final model accuracy:
7. Then saved the model as my_model2.hdf5 so it could be deployed with Streamlit.
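A sketch of compiling with SGD, fitting, evaluating, and saving the model; the learning rate and epoch count are assumptions.

```python
from tensorflow.keras.optimizers import SGD

# SGD gave better accuracy than Adam in our experiments.
model.compile(optimizer=SGD(learning_rate=0.01),
              loss="binary_crossentropy",
              metrics=["accuracy"])

history = model.fit(train_data, validation_data=test_data, epochs=20)

# Report the final accuracy on the held-out data.
loss, accuracy = model.evaluate(test_data)
print(f"Final model accuracy: {accuracy:.3f}")

# Save the trained model for deployment with Streamlit.
model.save("my_model2.hdf5")
```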
-------------------------------------------------------------------------------------------------------------------------------------
PART-3: DEPLOYMENT (DONE BY KOUSHIK MUKKA)
1. To deploy the model, it was saved to the my_model2.hdf5 file. Streamlit was used to host the deep learning model on localhost, so Streamlit was installed first.
2. Then app.py was created, which is the script Streamlit hosts. In it, Streamlit is imported and used to create an image-upload widget. Once an image is uploaded, the previously saved model is loaded, the image is resized to match the model's input size, a prediction is made with the model, and the prediction is used to print an appropriate message (a sketch of such an app.py appears after this list).
3. Then, to host the web app on the internet, Streamlit's cloud hosting was used: the repository link is provided, and the repository contains app.py, the model file, and requirements.txt (which lists all the dependencies that must be installed).
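A minimal sketch of what such an app.py could look like; the widget labels, the class messages, and the 150x150 input size are assumptions rather than the original code.

```python
import numpy as np
import streamlit as st
from PIL import Image
from tensorflow.keras.models import load_model

st.title("Image Classifier")

# Load the model saved in Part 2.
model = load_model("my_model2.hdf5")

# Let the user upload an image.
uploaded = st.file_uploader("Upload an image", type=["jpg", "jpeg", "png"])

if uploaded is not None:
    image = Image.open(uploaded).convert("RGB")
    st.image(image, caption="Uploaded image")

    # Resize to the model's input size and scale pixels the same way as in training.
    resized = image.resize((150, 150))
    batch = np.expand_dims(np.array(resized) / 255.0, axis=0)

    # Predict and print an appropriate message.
    prediction = model.predict(batch)[0][0]
    if prediction > 0.5:
        st.write("The model predicts: class 1")
    else:
        st.write("The model predicts: class 0")
```

The app can be tested locally with `streamlit run app.py` before hosting it.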
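Given the sketch above, a plausible requirements.txt would list at least the following dependencies (version pins omitted here and left as a choice for the actual deployment):

```
streamlit
tensorflow
pillow
numpy
```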
-------------------------------------------------------------------------------------------------------------------------------------