Dear Author, thank you again for this great work.
I'm working on implementing the evaluation metrics for the RaTrack approach in my research project. I've read your paper "RaTrack: Moving Object Detection and Tracking with 4D Radar Point Cloud" published at ICRA 2024 with great interest.
After running `python main.py --config configs_eval.yaml` to generate predictions, I'm trying to understand how to properly evaluate these results against the ground truth. According to your paper and README, you've developed a point-based version of the AB3DMOT evaluation that adapts to your cluster-based detection approach.
Specific Questions:
- Point Cloud IoU Calculation (see the IoU sketch below):
  - How exactly is the point-based IoU calculated between prediction clusters and ground truth bounding boxes?
  - Is there a specific approach for determining which points belong to a ground truth bounding box?
- Format Adaptation (see the parsing sketch below):
  - What is the format of the prediction files generated in the `results` folder?
  - How are these predictions matched with the KITTI-format ground truth labels?
- Evaluation Metrics (see the metrics sketch below):
  - Is there any guidance on implementing the point-based versions of the sAMOTA, AMOTA, and AMOTP metrics?
  - Are there any specific considerations when using the 0.25 IoU threshold you mention in the paper?
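To make the first question concrete, here is the point-based IoU I'm currently assuming, purely as a sketch of my own: IoU over point index sets, with box membership tested by rotating points into the box frame. The function names, the box parameterisation, and the coordinate frame are all my assumptions, not taken from your code.

```python
# My own sketch of a point-based IoU -- NOT from the RaTrack code.
# A prediction is the set of point indices in a cluster; a ground truth
# is the set of point indices that fall inside its 3D box.
import numpy as np

def points_in_box(points, center, size, yaw):
    """Indices of points inside a 3D box (my assumed parameterisation).

    points: (N, 3); center: (3,) geometric centre of the box (note that
    KITTI locations are bottom-centre, so z may need a +h/2 shift);
    size: (l, w, h); yaw: rotation about the vertical axis.
    """
    c, s = np.cos(-yaw), np.sin(-yaw)
    # Rotate points into the box frame so the box becomes axis-aligned.
    rot = np.array([[c, -s, 0.0],
                    [s,  c, 0.0],
                    [0.0, 0.0, 1.0]])
    local = (points - np.asarray(center)) @ rot.T
    half = np.asarray(size) / 2.0
    return np.flatnonzero(np.all(np.abs(local) <= half, axis=1))

def point_iou(cluster_idx, gt_idx):
    """IoU over point index sets: |A & B| / |A | B|."""
    a, b = set(cluster_idx), set(gt_idx)
    union = len(a | b)
    return len(a & b) / union if union > 0 else 0.0
```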
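For the second question, this is how I'm currently reading the ground truth for matching. The per-line layout is the standard KITTI tracking label format; the per-frame grouping and field names are my own, and I'm still guessing at the layout of the prediction files in `results`.

```python
# Parse KITTI tracking labels and group them by frame (standard KITTI
# tracking format; the dict layout is my own choice).
from collections import defaultdict

def load_kitti_tracking_labels(label_path):
    """Each line: frame track_id type truncated occluded alpha
    bbox_left bbox_top bbox_right bbox_bottom h w l x y z rotation_y."""
    per_frame = defaultdict(list)
    with open(label_path) as f:
        for line in f:
            fields = line.split()
            h, w, l = map(float, fields[10:13])
            per_frame[int(fields[0])].append({
                "track_id": int(fields[1]),
                "type": fields[2],
                "size": (l, w, h),
                "center": tuple(map(float, fields[13:16])),  # x, y, z
                "yaw": float(fields[16]),
            })
    return per_frame
```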
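For the third question, this is my reading of the integral metrics from the AB3DMOT paper, which I assume carry over unchanged to the point-based variant (with only the affinity measure and the 0.25 threshold changing): average MOTA/MOTP over L recall thresholds, with sAMOTA built from the scaled sMOTA. The recall sweep here is simplified to a nearest-neighbour lookup; AB3DMOT sweeps confidence thresholds to hit each recall exactly.

```python
# My reading of the AB3DMOT integral metrics (Weng et al.); `stats` and
# the nearest-recall lookup are my simplifications, not the reference code.
import numpy as np

def integral_metrics(stats, num_gt):
    """stats: list of (recall, fp, fn, ids, motp) operating points from a
    confidence sweep; num_gt: total number of ground truth objects."""
    recalls = np.linspace(0.025, 1.0, 40)  # L = 40 thresholds, as in AB3DMOT
    mota_r, smota_r, motp_r = [], [], []
    for r in recalls:
        # Use the operating point closest to recall r (a simplification).
        _rec, fp, fn, ids, motp = min(stats, key=lambda s: abs(s[0] - r))
        mota_r.append(max(0.0, 1.0 - (fp + fn + ids) / num_gt))
        # Scaled MOTA: corrects for the FN floor implied by recall r.
        smota_r.append(max(0.0, 1.0 - (fp + fn + ids - (1.0 - r) * num_gt)
                                 / (r * num_gt)))
        motp_r.append(motp)
    amota, amotp, samota = np.mean(mota_r), np.mean(motp_r), np.mean(smota_r)
    return amota, amotp, samota
```

If any of these three sketches roughly matches what you implemented, even a quick confirmation or correction would be a big help.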
I'm working on [brief description of your research project or application]. Understanding these implementation details would greatly help me evaluate and compare tracking approaches against your method.
I understand from the README that you're currently working on integrating the evaluation scripts. Any guidance, code snippets, or conceptual explanations would be extremely valuable, even if the full integration is still in progress.
Thank you for your innovative work in this field!