PML for benchmarking different methods #440
-
Hi, thanks for the great and easy-to-use library. Is there an easy way to use PML to benchmark the variety of losses and distances that are already implemented? I mean both benchmarking and hyperparameter tuning. Thanks.
-
Not at the moment. My repo powerful-benchmarker was built for this purpose, but I'm no longer maintaining its metric-learning branch. It's outdated and the code is awful. The master/dev branches are current and the code is a lot nicer, but I'm using those branches to benchmark a different library, for domain adaptation. I use Optuna for hyperparameter tuning, and just regular Python files for configs. Creating configs for PML is probably more straightforward because there is a consistent format among losses, distances, etc. In domain adaptation there is more variety in the algorithm structures, so configuration is more complicated, which is why I don't use YAML files.
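
For illustration, here is a minimal sketch of what Optuna-based tuning over PML losses and distances could look like. The `train_and_evaluate` helper is hypothetical: you would supply something that trains an embedding model with the given loss and returns a validation metric such as precision@1 (higher is better).

```python
# Sketch: sweeping loss/distance combinations with Optuna.
import optuna
from pytorch_metric_learning import distances, losses


def objective(trial):
    # PML's consistent interface makes it easy to swap distances into losses.
    distance_name = trial.suggest_categorical("distance", ["lp", "cosine"])
    if distance_name == "lp":
        distance = distances.LpDistance()
    else:
        distance = distances.CosineSimilarity()

    loss_name = trial.suggest_categorical("loss", ["triplet", "contrastive"])
    if loss_name == "triplet":
        margin = trial.suggest_float("margin", 0.01, 1.0)
        loss_fn = losses.TripletMarginLoss(margin=margin, distance=distance)
    else:
        neg_margin = trial.suggest_float("neg_margin", 0.5, 1.5)
        loss_fn = losses.ContrastiveLoss(neg_margin=neg_margin, distance=distance)

    # Hypothetical helper: trains with loss_fn and returns e.g. precision@1.
    return train_and_evaluate(loss_fn)


study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=50)
print(study.best_params)
```

Because every PML loss accepts a `distance` argument, a single objective function like this can sweep over loss/distance combinations without per-algorithm config files.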