PolyNet

This repository provides the implementation for the paper PolyNet: Learning Diverse Solution Strategies for Neural Combinatorial Optimization.

PolyNet is a neural combinatorial optimization approach that learns complementary policies within a single model. These diverse policies are used to generate multiple high-quality solutions to the same problem instance in parallel, with the best-performing solution selected as the final output. This exploration strategy allows PolyNet to achieve better performance than existing neural combinatorial optimization (NCO) methods within the same computational budget.
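
As a rough illustration of this best-of-many inference scheme, the self-contained toy below constructs many tours for one random TSP instance with a simple randomized heuristic standing in for PolyNet's learned policies and keeps the shortest one. The heuristic, instance size, and number of rollouts are placeholders for illustration only and are not the trained models or code in this repository.

import torch

def tour_length(coords, tour):
    # total length of a closed tour visiting the nodes in the given order
    ordered = coords[tour]
    return (ordered - ordered.roll(-1, dims=0)).norm(dim=1).sum()

def noisy_greedy_tour(coords, noise=0.1):
    # nearest-neighbour construction with random noise; the noise plays the
    # role of "different solution strategies" only in this toy example
    n = coords.shape[0]
    visited = torch.zeros(n, dtype=torch.bool)
    tour = [0]
    visited[0] = True
    for _ in range(n - 1):
        dist = (coords - coords[tour[-1]]).norm(dim=1) + noise * torch.rand(n)
        dist = dist.masked_fill(visited, float("inf"))
        nxt = int(dist.argmin())
        tour.append(nxt)
        visited[nxt] = True
    return torch.tensor(tour)

torch.manual_seed(0)
coords = torch.rand(100, 2)                              # one toy TSP-100 instance
tours = [noisy_greedy_tour(coords) for _ in range(64)]   # 64 parallel rollouts
lengths = torch.stack([tour_length(coords, t) for t in tours])
print(f"single rollout: {lengths[0]:.3f}   best of 64: {lengths.min():.3f}")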

Key Highlights

  • State-of-the-art performance on the Traveling Salesperson Problem (TSP) with 100 nodes, achieving a gap to optimality of < 0.001% in just 4 minutes over 10,000 instances.
  • Supports more complex tasks such as:
    • Capacitated Vehicle Routing Problem (CVRP)
    • CVRP with Time Windows (CVRPTW)
  • In combination with Efficient Active Search (EAS), PolyNet can perform extended search runs (see the sketch after this list). For example, it achieves a 0.14% gap to HGS in 4 hours over 10,000 CVRP-100 instances.
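
The extended search mentioned above relies on Efficient Active Search (EAS, Hottung et al., 2022), which fine-tunes a small set of instance-specific parameters at test time while the rest of the model stays frozen. The following is a heavily simplified, self-contained toy sketch of that idea on one random TSP instance: the distance-based softmax policy, the per-node bias standing in for the EAS parameters, and all hyperparameters are illustrative assumptions, not the EAS implementation used in this repository.

import torch

def tour_length(coords, tour):
    # total length of a closed tour visiting the nodes in the given order
    ordered = coords[tour]
    return (ordered - ordered.roll(-1, dims=0)).norm(dim=1).sum()

def sample_tour(coords, bias, temperature=1.0):
    # sample a tour node by node from a softmax over negative distances
    # plus the learnable per-node bias (the "instance-specific" parameters)
    n = coords.shape[0]
    visited = torch.zeros(n, dtype=torch.bool)
    tour, log_prob, current = [0], torch.zeros(()), 0
    visited[0] = True
    for _ in range(n - 1):
        dist = (coords - coords[current]).norm(dim=1)
        logits = ((-dist + bias) / temperature).masked_fill(visited, float("-inf"))
        probs = torch.softmax(logits, dim=0)
        current = int(torch.multinomial(probs, 1))
        log_prob = log_prob + torch.log(probs[current])
        visited[current] = True
        tour.append(current)
    return torch.tensor(tour), log_prob

torch.manual_seed(0)
coords = torch.rand(20, 2)                     # one toy TSP-20 instance
bias = torch.zeros(20, requires_grad=True)     # the only parameters updated at test time
optimizer = torch.optim.Adam([bias], lr=0.1)
best = float("inf")
for step in range(50):
    lengths, log_probs = [], []
    for _ in range(16):                        # a small batch of sampled solutions
        tour, lp = sample_tour(coords, bias)
        lengths.append(tour_length(coords, tour))
        log_probs.append(lp)
    lengths = torch.stack(lengths)
    best = min(best, lengths.min().item())
    advantage = lengths - lengths.mean()       # shared baseline over the batch
    loss = (advantage * torch.stack(log_probs)).mean()   # REINFORCE update
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
print(f"best tour length after instance-specific fine-tuning: {best:.3f}")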

Note that this implementation builds upon the POMO codebase.

Paper

@inproceedings{hottung2025polynet,
    title={PolyNet: Learning Diverse Solution Strategies for Neural Combinatorial Optimization},
    author={Andr{\'e} Hottung and Mridul Mahajan and Kevin Tierney},
    booktitle={The Thirteenth International Conference on Learning Representations},
    year={2025},
    url={https://openreview.net/forum?id=TKuYWeFE6S}
}

Requirements

  • Python >= 3.9
  • PyTorch == 2.0.0
  • tqdm (required for the CVRPTW instance generator)
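
The dependencies can be installed with pip, for example as follows (the appropriate PyTorch 2.0.0 build, e.g. CPU or CUDA, depends on your platform):

pip install torch==2.0.0 tqdm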

Evaluation

To reproduce the results shown in Table 1 of the paper, run the following commands:

Sampling

# TSP Testing
python TSP/exp/sampling/eval_n100.py
python TSP/exp/sampling/eval_n200.py
# TSP Generalization
python TSP/exp/sampling/eval_n150.py
python TSP/exp/sampling/eval_n300.py
# CVRP Testing
python CVRP/exp/sampling/eval_n100.py
python CVRP/exp/sampling/eval_n200.py
# CVRP Generalization
python CVRP/exp/sampling/eval_n150.py
python CVRP/exp/sampling/eval_n300.py
# CVRPTW Testing
python CVRPTW/exp/sampling/eval_n100.py
python CVRPTW/exp/sampling/eval_n200.py
# CVRPTW Generalization
python CVRPTW/exp/sampling/eval_n150.py
python CVRPTW/exp/sampling/eval_n300.py

Efficient Active Search (EAS)

# TSP Testing
python TSP/exp/eas/eval_n100.py
python TSP/exp/eas/eval_n200.py
# TSP Generalization
python TSP/exp/eas/eval_n150.py
python TSP/exp/eas/eval_n300.py
# CVRP Testing
python CVRP/exp/eas/eval_n100.py
python CVRP/exp/eas/eval_n200.py
# CVRP Generalization
python CVRP/exp/eas/eval_n150.py
python CVRP/exp/eas/eval_n300.py
# CVRPTW Testing
python CVRPTW/exp/eas/eval_n100.py
python CVRPTW/exp/eas/eval_n200.py
# CVRPTW Generalization
python CVRPTW/exp/eas/eval_n150.py
python CVRPTW/exp/eas/eval_n300.py

Training

New models can be trained via the following commands. All training runs are initialized from the provided pre-trained POMO models.

TSP100

python TSP/exp/training/train_n100.py

CVRP100

python CVRP/exp/training/train_n100.py

CVRPTW100

For the CVRPTW, we do not train on uniformly generated instances; instead, we use the instance generator from Queiroga et al. (2022) to generate the customer locations and demands. Before training, a set of CVRP training instances must therefore be generated with this generator. The time windows for the CVRPTW are still generated on the fly during training. The training set can be generated via

python CVRPTW/generate_CVRPTW_dataset.py --name train --n 100 --seed 0 --problem CVRP --dataset_size 1_000_000 --capacity 50 --data_dir "CVRPTW/data"

Note that this may take more than one hour. After the training set has been generated, training can be started via

python CVRPTW/exp/training/train_n100.py

Acknowledgements

We thank the authors of the POMO paper for making their code publicly available.
