# Code for the ICLR 2025 Paper: Rethinking Neural Multi-Objective Combinatorial Optimization via Neat Weight Embedding
## Quick Start for WE-Add or WE-CA
- To train a model, e.g., MOTSP with 20 nodes, run `train_motsp_n20.py` in the corresponding folder.
- To test a model, e.g., MOTSP with 20 nodes, run `test_motsp_n20.py` in the corresponding folder.
- Pretrained models for each problem can be found in the `result` folder.
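As a concrete sketch, training and then testing MOTSP with 20 nodes might look like the following shell session (the `WE-CA/MOTSP` path is an assumption; substitute the folder that corresponds to your problem and method):

```shell
# Sketch of the workflow described above; the folder layout is an assumption.
cd WE-CA/MOTSP             # hypothetical path to the MOTSP folder
python train_motsp_n20.py  # train MOTSP with 20 nodes
python test_motsp_n20.py   # test using the checkpoint in the result folder
```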
## Quick Start for WE-CA-U
- To train a unified model, e.g., for MOTSP, run `train_motsp.py` in the `WE-CA/POMO-U` folder.
- To test a model, e.g., MOTSP with 20 nodes, run `test_motsp_n20.py` in the `WE-CA/POMO-U` folder.
- Pretrained models for each problem can be found in the `WE-CA/POMO-U/result` folder.
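For the unified WE-CA-U variant, the same workflow runs from the `WE-CA/POMO-U` folder, as sketched below (script names follow the description above; adjust to your checkout):

```shell
# Sketch of the unified-model workflow described above.
cd WE-CA/POMO-U            # folder for the unified WE-CA-U models
python train_motsp.py      # train a single unified MOTSP model
python test_motsp_n20.py   # evaluate the unified model on 20-node instances
```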
## Reference
If our work is helpful for your research, please cite our paper:
```bibtex
@inproceedings{chen2025rethinking,
  title     = {Rethinking Neural Multi-Objective Combinatorial Optimization via Neat Weight Embedding},
  author    = {Chen, Jinbiao and Cao, Zhiguang and Wang, Jiahai and Wu, Yaoxin and Qin, Hanzhang and Zhang, Zizhen and Gong, Yue-Jiao},
  booktitle = {The Thirteenth International Conference on Learning Representations},
  year      = {2025}
}
```