-
Hi @mrdbourke, I have a question about how people or companies like Tesla usually use their deep learning models: do they run the models directly on physical devices, or serve them from a server? Any pointers on what a typical deep learning architecture looks like when integrated into software would also be appreciated. Thank you
-
Hi @rizki4106, Tesla runs its models on the cars themselves. That is, the model lives on a chip inside the car, the cameras feed it images/video, and it performs inference (makes predictions) live on the vehicle. This is because of the latency requirements of self-driving: the models need to respond so quickly that there is no time to upload data to a server and wait for a response. However, many companies do use servers for inference. One of the best ways to get started exploring how to use a deployed model is Gradio: https://gradio.app
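For a feel of the server-style approach, here is a minimal Gradio sketch. It assumes a hypothetical image-classification setup; the pretrained torchvision ResNet and its ImageNet labels just stand in for whatever model you actually train:

```python
import gradio as gr
import torch
from torchvision import models

# Assumption for this sketch: a pretrained ResNet stands in for your own trained model
weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights)
model.eval()

preprocess = weights.transforms()          # resize/normalize the way the model expects
class_names = weights.meta["categories"]   # ImageNet class labels

def predict(image):
    """Take a PIL image, return {class_name: probability} for Gradio's Label output."""
    x = preprocess(image).unsqueeze(0)     # add a batch dimension
    with torch.inference_mode():
        probs = torch.softmax(model(x), dim=1).squeeze(0)
    top = torch.topk(probs, k=5)
    return {class_names[i]: float(p) for p, i in zip(top.values, top.indices)}

demo = gr.Interface(
    fn=predict,
    inputs=gr.Image(type="pil"),
    outputs=gr.Label(num_top_classes=5),
)
demo.launch()  # serves the model behind a simple web UI (inference happens server-side)
```

Running this gives you a local web page (and optionally a shareable link) where anyone can upload an image and get predictions, which is the opposite end of the spectrum from Tesla's on-device setup.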