Photonf22/3D_Object_Tracking

SFND 3D Object Tracking

Flow Diagram of Project

TTC Image #16

Project Rubric:

FP.0 Final Report

  • Provide a Writeup / README that includes all the rubric points and how you addressed each one. You can submit your writeup as markdown or pdf.

FP.1 Match 3D Objects

  • Implement the method "matchBoundingBoxes", which takes as input both the previous and the current data frames and provides as output the ids of the matched regions of interest (i.e. the boxID property). Matches must be the ones with the highest number of keypoint correspondences.

```cpp
void matchBoundingBoxes(std::vector<cv::DMatch> &matches, std::map<int, int> &bbBestMatches, DataFrame &prevFrame, DataFrame &currFrame)
```
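A minimal sketch of the counting approach, using hypothetical stand-ins (`Point2`, `Box`, `Match`) for the project's `DataFrame`, `cv::KeyPoint`, and `cv::DMatch` types, and returning the map directly instead of filling an out-parameter:

```cpp
#include <map>
#include <utility>
#include <vector>

// Hypothetical stand-ins for cv::KeyPoint / cv::DMatch / the project's BoundingBox.
struct Point2 { float x, y; };
struct Box {
    int boxID;
    float xmin, ymin, xmax, ymax;
    bool contains(const Point2 &p) const {
        return p.x >= xmin && p.x <= xmax && p.y >= ymin && p.y <= ymax;
    }
};
struct Match { int prevIdx, currIdx; };  // indices into the keypoint vectors

// For every keypoint match, count which (prev box, curr box) pair encloses it,
// then keep the pairing with the highest count for each previous box.
std::map<int, int> matchBoundingBoxes(const std::vector<Match> &matches,
                                      const std::vector<Point2> &kptsPrev,
                                      const std::vector<Point2> &kptsCurr,
                                      const std::vector<Box> &boxesPrev,
                                      const std::vector<Box> &boxesCurr)
{
    std::map<std::pair<int, int>, int> counts;  // (prevBoxID, currBoxID) -> #matches
    for (const Match &m : matches)
        for (const Box &bp : boxesPrev)
            if (bp.contains(kptsPrev[m.prevIdx]))
                for (const Box &bc : boxesCurr)
                    if (bc.contains(kptsCurr[m.currIdx]))
                        ++counts[{bp.boxID, bc.boxID}];

    std::map<int, int> bbBestMatches;  // prevBoxID -> best-matching currBoxID
    std::map<int, int> bestCount;      // prevBoxID -> highest count seen so far
    for (const auto &kv : counts) {
        if (kv.second > bestCount[kv.first.first]) {
            bestCount[kv.first.first] = kv.second;
            bbBestMatches[kv.first.first] = kv.first.second;
        }
    }
    return bbBestMatches;
}
```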

FP.2 Compute Lidar-based TTC

  • Compute the time-to-collision in seconds for all matched 3D objects, using only Lidar measurements from the matched bounding boxes between the current and previous frames.

```cpp
void computeTTCLidar(std::vector<LidarPoint> &lidarPointsPrev, std::vector<LidarPoint> &lidarPointsCurr, double frameRate, double &TTC)
```
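Under a constant-velocity model the Lidar TTC follows from two range estimates: TTC = d1 · Δt / (d0 − d1). A minimal sketch, using the mean forward distance (x) of each point cloud as the range estimate (as the write-up in FP.5 describes), with a simplified `LidarPoint` struct and the result returned rather than written through an out-parameter:

```cpp
#include <vector>

struct LidarPoint { double x, y, z, r; };  // x is the forward (driving) direction

// Constant-velocity model: TTC = d1 * dT / (d0 - d1), where d0 and d1 are
// the mean forward distances in the previous and current frame.
double computeTTCLidar(const std::vector<LidarPoint> &lidarPointsPrev,
                       const std::vector<LidarPoint> &lidarPointsCurr,
                       double frameRate)
{
    auto meanX = [](const std::vector<LidarPoint> &pts) {
        double sum = 0.0;
        for (const auto &p : pts) sum += p.x;
        return sum / pts.size();
    };
    double d0 = meanX(lidarPointsPrev);  // range in the previous frame
    double d1 = meanX(lidarPointsCurr);  // range in the current frame
    double dT = 1.0 / frameRate;         // time between frames in seconds
    return d1 * dT / (d0 - d1);
}
```

Averaging all points dampens single-point outliers, but as noted in FP.5 it does not eliminate them.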

FP.3 Associate Keypoint Correspondences with Bounding Boxes

  • Prepare the TTC computation based on camera measurements by associating keypoint correspondences to the bounding boxes which enclose them. All matches which satisfy this condition must be added to a vector in the respective bounding box.

```cpp
void clusterKptMatchesWithROI(BoundingBox &boundingBox, std::vector<cv::KeyPoint> &kptsPrev, std::vector<cv::KeyPoint> &kptsCurr, std::vector<cv::DMatch> &kptMatches)
```
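A sketch of the association step, with hypothetical stand-ins (`Point2`, `Box`, `Match`) for the OpenCV and project types. Matches are first filtered by the ROI, then matches whose prev→curr displacement is far from the mean displacement are dropped; the 1.3 default threshold is an assumption mirroring the value mentioned in FP.5:

```cpp
#include <cmath>
#include <vector>

// Hypothetical stand-ins for cv::KeyPoint / cv::DMatch / the project's BoundingBox.
struct Point2 { float x, y; };
struct Box {
    float xmin, ymin, xmax, ymax;
    bool contains(const Point2 &p) const {
        return p.x >= xmin && p.x <= xmax && p.y >= ymin && p.y <= ymax;
    }
};
struct Match { int prevIdx, currIdx; };

// Keep matches whose current keypoint lies inside the box, then drop matches
// whose prev->curr displacement deviates too far from the mean displacement.
std::vector<Match> clusterKptMatchesWithROI(const Box &box,
                                            const std::vector<Point2> &kptsPrev,
                                            const std::vector<Point2> &kptsCurr,
                                            const std::vector<Match> &kptMatches,
                                            double threshold = 1.3)
{
    std::vector<Match> inROI;
    std::vector<double> dists;
    double meanDist = 0.0;
    for (const Match &m : kptMatches) {
        const Point2 &pc = kptsCurr[m.currIdx];
        if (!box.contains(pc)) continue;     // ROI filter
        const Point2 &pp = kptsPrev[m.prevIdx];
        double d = std::hypot(pc.x - pp.x, pc.y - pp.y);
        inROI.push_back(m);
        dists.push_back(d);
        meanDist += d;
    }
    if (inROI.empty()) return inROI;
    meanDist /= inROI.size();

    std::vector<Match> kept;                 // matches that survive the outlier filter
    for (size_t i = 0; i < inROI.size(); ++i)
        if (dists[i] <= meanDist * threshold)
            kept.push_back(inROI[i]);
    return kept;
}
```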

FP.4 Compute Camera-based TTC

  • Compute the time-to-collision in seconds for all matched 3D objects, using only keypoint correspondences from the matched bounding boxes between the current and previous frames.

```cpp
void computeTTCCamera(std::vector<cv::KeyPoint> &kptsPrev, std::vector<cv::KeyPoint> &kptsCurr, std::vector<cv::DMatch> kptMatches, double frameRate, double &TTC, cv::Mat *visImg = nullptr)
```
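The camera TTC follows from the scale change of the preceding vehicle: with h1/h0 approximated by ratios of keypoint-pair distances between frames, TTC = −Δt / (1 − h1/h0) under constant velocity. A sketch using the median distance ratio to suppress outlier matches, with hypothetical `Point2`/`Match` stand-ins for the OpenCV types and the result returned directly:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Hypothetical stand-ins for cv::KeyPoint and cv::DMatch.
struct Point2 { float x, y; };
struct Match { int prevIdx, currIdx; };

// Ratios of keypoint-pair distances approximate the scale change h1/h0 of the
// preceding vehicle; under constant velocity, TTC = -dT / (1 - medianRatio).
// Taking the median makes the estimate robust against outlier matches.
double computeTTCCamera(const std::vector<Point2> &kptsPrev,
                        const std::vector<Point2> &kptsCurr,
                        const std::vector<Match> &kptMatches,
                        double frameRate)
{
    std::vector<double> ratios;
    for (size_t i = 0; i < kptMatches.size(); ++i) {
        for (size_t j = i + 1; j < kptMatches.size(); ++j) {
            const Point2 &c1 = kptsCurr[kptMatches[i].currIdx];
            const Point2 &c2 = kptsCurr[kptMatches[j].currIdx];
            const Point2 &p1 = kptsPrev[kptMatches[i].prevIdx];
            const Point2 &p2 = kptsPrev[kptMatches[j].prevIdx];
            double dCurr = std::hypot(c1.x - c2.x, c1.y - c2.y);
            double dPrev = std::hypot(p1.x - p2.x, p1.y - p2.y);
            if (dPrev > 1e-6)                // avoid division by ~zero
                ratios.push_back(dCurr / dPrev);
        }
    }
    if (ratios.empty()) return NAN;

    std::sort(ratios.begin(), ratios.end());
    size_t mid = ratios.size() / 2;
    double medianRatio = (ratios.size() % 2 == 1)
                             ? ratios[mid]
                             : 0.5 * (ratios[mid - 1] + ratios[mid]);
    double dT = 1.0 / frameRate;             // time between frames in seconds
    return -dT / (1.0 - medianRatio);
}
```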

FP.5 Performance Evaluation 1

image

According to the examples above, the cases where the Lidar TTC is somewhat off compared to the camera TTC can be attributed to outlier points. Even though the mean distance of all points between the current and previous frames was used, this was not enough to remove these outliers, which affected the results. One way to address this would be to lower the threshold that determines which points are acceptable when computing the Lidar TTC, so that points that fall above the mean Euclidean distance times the threshold are ignored. The threshold is currently set to 1.3; lowering it to 1.1 would discard more of the points that skew the Lidar TTC calculation.
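The filter described above can be sketched as follows, assuming the Euclidean distance is taken in the x-y plane; `filterLidarOutliers` is a hypothetical helper name, not a function from the project:

```cpp
#include <cmath>
#include <vector>

struct LidarPoint { double x, y, z, r; };

// Ignore points whose Euclidean distance exceeds the mean distance of the
// cloud times the threshold (1.3 currently; 1.1 would discard more points).
std::vector<LidarPoint> filterLidarOutliers(const std::vector<LidarPoint> &points,
                                            double threshold)
{
    double meanDist = 0.0;
    for (const auto &p : points) meanDist += std::hypot(p.x, p.y);
    meanDist /= points.size();

    std::vector<LidarPoint> kept;
    for (const auto &p : points)
        if (std::hypot(p.x, p.y) <= meanDist * threshold)
            kept.push_back(p);
    return kept;
}
```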

Harris-BRISK

Image 10 (previous), Image 11 (current and previous), and Image 12 (current), top view

image

FAST-SIFT

Image 6 (Previous) and Image 7 (Current) Top-View

image

Image 9 (Previous) and Image 10 (Current) Top-View

image

AKAZE-ORB

Image 8 (Previous) and Image 9 (Current) Top-View

image

Image 11 (Previous) and Image 12 (Current) Top-View

image

FP.6 Performance Evaluation 2

Top Choices according to speed and accuracy

image

Image indices along with their respective Lidar and camera TTC estimates.

image

Graph of the chosen detector/descriptor combinations showing the differences between the Lidar and camera TTC estimates

image

Chosen detector/descriptor combinations and Lidar TTC observations:

FAST-BRISK:

  • Looking at the FAST-BRISK combination, one can easily see that in Image 9 the Lidar TTC estimate is off due to the number of outlier points in the image.

FAST-ORB:

  • Looking at the FAST-ORB combination, one can easily see that in Image 10 the Lidar TTC estimate is off due to the number of outlier points in the image.

AKAZE-ORB:

  • Looking at the AKAZE-ORB combination, one can easily see that in Image 9 the Lidar TTC estimate is off due to the number of outlier points in the image.

FAST-BRIEF:

  • Looking at the FAST-BRIEF combination, one can easily see that in Image 10 the Lidar TTC estimate is off due to the number of outlier points in the image.

I chose the following three top detector/descriptor combinations because they are the ones that perform the fastest. My answer to this question is therefore biased toward the speed of the detector/descriptor combination. The faster and more accurately an estimated TTC can be obtained, the better, since in real-life applications both speed and reliability matter in life-and-death scenarios.

Below is a table which shows some examples where TTC Camera estimation is way off.

  • In Harris-BRISK Image 10, Excel was unable to calculate or display the estimated camera TTC because the value would have been infinity.

image

Dependencies for Running Locally

Basic Build Instructions

  1. Clone this repo.
  2. Make a build directory in the top-level project directory: `mkdir build && cd build`
  3. Compile: `cmake .. && make`
  4. Run it: `./3D_object_tracking`

About

3D Camera and Lidar Object tracking final project from Udacity Sensor Fusion nano-degree
