Hello,
I am having trouble understanding the procedure for training my own detection model. I have both the 2GB and 4GB Jetson Nano variants.
My objective is to detect whether a person is wearing sunglasses or not. To accomplish this, my main questions are as follows.
- I will have to train a detection model on my own dataset. The Custom Containers document mentions that the model needs to be compatible with DeepStream. If I do manage to do this, which code in the Docker container should I change so that it runs this different object detection network?
- I am under the assumption that if I train a custom object detection network following the instructions on the DeepStream docs page, I will end up with a compatible network. I would then put the trained weights on a shared drive, run the container, copy the weights into a particular folder (whose location I do not know), and change maskcam_run.py or maskcam_inference.py to point to the updated weights (a rough sketch of what I imagine is below, after this list). Are there flaws in my assumptions? Could you please correct me if I am wrong? I am new to Docker as well, so I might be missing something fundamental.
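
To make the second point concrete, this is roughly the kind of change I expect to need in maskcam_inference.py. The nvinfer element and its config-file-path property come from standard DeepStream GStreamer usage; the element name and file paths are only my guesses, since I do not know the actual layout inside the container:

```python
# Rough sketch of what I imagine the change would look like.
# The paths and element name are hypothetical, not the real ones in maskcam.
import gi

gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# nvinfer is DeepStream's inference element; I assume maskcam_inference.py
# builds one of these and I would just point it at a config file that
# references my retrained sunglasses model instead of the mask model.
pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
pgie.set_property(
    "config-file-path",
    "/opt/maskcam/sunglasses_model/config_infer_sunglasses.txt",  # hypothetical path
)
```
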
My workflow is exactly the same as MaskCam, including remote deployment, web server access, and the rest; I only need to change the object detection mechanism. Even the statistics it reports will be unchanged.
Thank you.