
Vehicle Re-Identification in a city scenario that is robust to variations of the lighting context and, more generally, to any variation in the scene.


SlimShadys/Vehicle-ReID


Vehicle Re-Identification

(Figure: normal images alongside their density maps)

Install proper libraries

To install the required libraries, first install PyTorch. The following instructions are written for Windows 10/11; adjust them for your platform as needed.

PyTorch

pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124

Once PyTorch has been installed, install the remaining dependencies with:

pip install -r requirements.txt

Configuration

The project has a dedicated config.py file where all the default parameters are set, grouped as follows:

  • Misc: Contains general settings such as the random seed, device configuration (CPU/GPU), and experimental features like Automatic Mixed Precision (AMP).
  • Dataset: Defines the dataset-related configurations, including paths, dataset names, sizes, and sampling strategies. It also handles dataset splitting for combined datasets.
  • Model: Configures the ReID model architecture, including backbone choices, pretraining options, and advanced features like GeM pooling, stride adjustments, and normalization layers.
  • Color Model: Specifies the color classification model used alongside the ReID model, including its architecture and pretrained weights.
  • Augmentation: Controls the data augmentation pipeline, including resizing, cropping, padding, color jitter, and normalization settings.
  • Loss: Configures the loss functions used during training, including triplet loss variants, label smoothing, and advanced techniques like Relation Preserving Triplet Mining (RPTM) and Multi-Attribute Loss Weighting (MALW).
  • Training: Manages the training process, including epochs, batch size, optimizer settings, learning rate schedules, and checkpoint loading.
  • Validation: Sets up the validation process, including batch size, validation intervals, and re-ranking options.
  • Test: Configures the testing phase, including embedding normalization, similarity algorithms, and paths to test images or models.
  • Tracking: Defines settings for object tracking, including YOLO configurations, filtering thresholds, and output paths for bounding boxes and videos.
  • Database: Configures the MongoDB database connection and collections for storing vehicle, camera, trajectory, and bounding box data.
  • Metrics: Handles evaluation metrics for tracking and ReID tasks, including MOTA metrics, IoU thresholds, and prediction file management.
  • Pipeline: Controls the overall pipeline execution, including video paths, ROI masks, target search, and database unification processes.
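To illustrate how YAML overrides are typically layered onto such defaults, here is a minimal, self-contained sketch. The section and field names below are illustrative and may differ from the actual config.py; the project's real merging mechanism may also differ.

```python
# Hypothetical sketch: overlay override values (e.g. parsed from a YAML
# file) onto nested defaults. Names are illustrative, not the real API.
DEFAULTS = {
    "misc": {"seed": 42, "device": "cuda", "use_amp": False},
    "training": {"epochs": 120, "batch_size": 64, "lr": 3e-4},
}

def merge(defaults, overrides):
    """Recursively overlay override values onto the defaults,
    leaving unspecified fields at their default values."""
    out = dict(defaults)
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(out.get(key), dict):
            out[key] = merge(out[key], value)
        else:
            out[key] = value
    return out

# Override only two fields; everything else keeps its default.
cfg = merge(DEFAULTS, {"training": {"batch_size": 32},
                       "misc": {"use_amp": True}})
```

The key property is that partial overrides leave untouched fields at their config.py defaults.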

If you wish to override some parameters, create a config.yaml file.
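For example, an override file might look roughly like the sketch below. The key names mirror the sections listed above but are illustrative; check config.py for the actual field names.

```yaml
# Hypothetical config.yaml override; field names are illustrative.
misc:
  seed: 1234
  use_amp: true
training:
  epochs: 60
  batch_size: 32
```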

Full Pipeline of Tracking + ReID

To run the full pipeline (Object Detection + Tracking + ReID), execute:

python pipeline.py <config_file>.yml

The YAML file specified here must be a configuration file containing the following sections:

  • camera_x: (path, roi, info, gt) Contains the paths to the video/frames, the ROI, the camera information, and optionally the ground truth.
  • layout: (layout path) Probably the most important and MANDATORY field. It contains the geometric information between all pairs of cameras: FPS, scales and offsets, a compatibility matrix (determining whether a car seen by one camera can be seen by another), dtmin (the minimum time for a vehicle to transition between a pair of cameras) and dtmax (the maximum). There is no pipeline without this field. You can find examples of configuration files in the config folder.
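A pipeline configuration file might be structured roughly as follows. Apart from camera_x and layout, the specific keys and paths are illustrative; consult the real examples in the config folder for the actual schema.

```yaml
# Illustrative sketch only -- see the config folder for real examples.
camera_1:
  path: data/S02/c006/vdo.avi       # video or frames
  roi: data/S02/c006/roi.jpg        # region-of-interest mask
  info: data/S02/c006/info.txt      # camera information
  gt: data/S02/c006/gt/gt.txt       # optional ground truth
layout: configs/layouts/s02_layout.yml  # MANDATORY inter-camera geometry
```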

For AICity22, it would be:

python pipeline.py configs/cameras_s02_cityflow.yml

Please edit the config.py file to adjust the MTMC settings (YOLO, Detector, Tracking & Pipeline configs).

ReID

Training

To train the model, run the following command in your terminal:
python -m reid.main <config_file>.yml

N.B. If you do not specify a .yml file, the script will use the pre-defined config.py file to set up the Dataset, Model, and Training parameters.

N.B.2 For RPTM training, for example, you can use the existing configuration file config_rptm.yml.

Testing

To evaluate a trained model, use the following command:
python -m reid.test <config_test_file>.yml

N.B. If you do not specify a .yml file, the script will use the pre-defined config_test.yml file to set up the Model and Testing parameters.

N.B. This script will load a pre-trained model specified in the config_test.yml file and either:

  • Compare two specific images: If run_reid_metrics is set to False in the config file, the script will compute the similarity or distance between the two images specified in PATH_IMG_1 and PATH_IMG_2. The similarity metric (e.g., Euclidean distance or cosine similarity) is determined by the SIMILARITY_ALGORITHM setting in the config file.
  • Run re-identification metrics: If run_reid_metrics is set to True, the script will evaluate the model on the validation set and compute re-identification metrics.
  • Run color metrics: If run_color_metrics is set to True, the script will evaluate the color classification model on the validation set and compute color-related metrics.

Additionally, if stack_images is set to True and run_reid_metrics is False, the script will compute a similarity matrix for all images in the test directory and display it as a heatmap; adjust the Test configuration accordingly.
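The two similarity options mentioned above can be sketched as follows, assuming the embeddings are 1-D feature vectors. The function names are illustrative, not the project's actual API.

```python
# Illustrative sketch of the two similarity measures, using plain
# Python lists as stand-ins for embedding vectors.
import math

def euclidean_distance(a, b):
    """Smaller means more similar; 0.0 for identical vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def cosine_similarity(a, b):
    """Ranges in [-1, 1]; 1.0 for vectors pointing the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

emb1 = [1.0, 0.0, 1.0]
emb2 = [1.0, 0.0, 1.0]
```

Note the opposite conventions: Euclidean distance is 0 for a perfect match, while cosine similarity is 1, so thresholds must be chosen per SIMILARITY_ALGORITHM.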
