This is a modified version of naokishibuya/car-behavioral-cloning.

- Clone the Udacity simulator from udacity/self-driving-car-sim
- Set up a virtual environment with Python 3.8
- Install the dependencies from req.txt
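For example, assuming a Unix-like shell and that req.txt is a standard pip requirements file:

```sh
python3.8 -m venv venv
source venv/bin/activate
pip install -r req.txt
```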
The CNN model we use is a modified version of the architecture described in NVIDIA's paper, End to End Learning for Self-Driving Cars. This repository contains a pre-trained model, but its accuracy may be limited since it was trained on a smaller dataset.
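For reference, below is a minimal Keras sketch of an NVIDIA-style network. The layer sizes follow the paper, but the exact model defined in model.py may differ; the ELU activations and dropout here are assumptions.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Lambda, Conv2D, Dropout, Flatten, Dense

def build_model(input_shape=(66, 200, 3)):
    """NVIDIA-style CNN: normalization, 5 conv layers, 4 dense layers."""
    return Sequential([
        # Normalize pixel values to [-1, 1] inside the model
        Lambda(lambda x: x / 127.5 - 1.0, input_shape=input_shape),
        Conv2D(24, (5, 5), strides=(2, 2), activation='elu'),
        Conv2D(36, (5, 5), strides=(2, 2), activation='elu'),
        Conv2D(48, (5, 5), strides=(2, 2), activation='elu'),
        Conv2D(64, (3, 3), activation='elu'),
        Conv2D(64, (3, 3), activation='elu'),
        Dropout(0.5),
        Flatten(),
        Dense(100, activation='elu'),
        Dense(50, activation='elu'),
        Dense(10, activation='elu'),
        Dense(1),  # single output: the steering angle
    ])
```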
- Run the Udacity simulator in training mode and record your gameplay to a folder.
- This will generate a folder named `IMG` and a CSV file `driving_log.csv`. The `IMG` folder contains all the frames captured from the gameplay, from cameras placed at 3 different angles (left, center and right).
- The CSV file `driving_log.csv` maps each set of images captured at a single moment to the corresponding steering angle, throttle, speed and brake values.
- The CSV columns are (see the loading sketch after this list): `Center, Left, Right, Steering angle, Throttle, Speed, Brake`
- Copy the CSV file and the contents of the `IMG` folder into a single folder.
- To train the model, run `python model.py -d path-of-the-folder-with-training-images-and-csv`
- This will print a summary of the model and start training.
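As referenced above, a quick way to load and inspect `driving_log.csv` is with pandas, assuming (as the simulator typically does) that the file is written without a header row:

```python
import pandas as pd

# Column names follow the CSV structure listed above
columns = ['center', 'left', 'right', 'steering', 'throttle', 'speed', 'brake']
log = pd.read_csv('driving_log.csv', names=columns)

print(log.head())                  # first few samples
print(log['steering'].describe())  # distribution of steering angles
```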
To run the server without recording the simulator output:
`python drive.py name-of-model-file.h5`
To record the simulator output:
`python drive.py name-of-model-file.h5 run1`
where run1 is the name of the folder in which the recorded frames will be stored.
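Conceptually, drive.py loads the trained model and predicts a steering angle for every camera frame streamed from the simulator. A simplified sketch of that prediction step is below; the real script also runs the socket server that talks to the simulator, and any preprocessing must match what model.py applies during training.

```python
import numpy as np
from PIL import Image
from tensorflow.keras.models import load_model

model = load_model('name-of-model-file.h5')  # placeholder path

def predict_steering(frame: Image.Image) -> float:
    # Resize to the input shape assumed above (200x66 RGB); apply the same
    # preprocessing here that was used in training.
    image = np.asarray(frame.resize((200, 66)), dtype=np.float32)
    return float(model.predict(image[None, ...], verbose=0)[0, 0])
```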
You can construct a video from these frames with:
`python video.py run1 --fps 30`
This will combine all the frames in the run1 folder into a video at 30 fps. The default fps value is 60.
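For reference, a rough sketch of what video.py likely does, assuming it stitches frames together with moviepy as in the original Udacity project (check video.py for the actual implementation):

```python
import os
from moviepy.editor import ImageSequenceClip

def make_video(image_folder: str, fps: int = 60) -> None:
    # Collect the recorded frames in name (capture) order
    frames = sorted(
        os.path.join(image_folder, f)
        for f in os.listdir(image_folder)
        if f.lower().endswith(('.jpg', '.jpeg', '.png'))
    )
    clip = ImageSequenceClip(frames, fps=fps)
    clip.write_videofile(image_folder + '.mp4')

make_video('run1', fps=30)
```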