This project accompanies the paper "PAINT: Parallel-in-time Neural Twins for Dynamical System Reconstruction". PAINT is an architecture-agnostic method that trains a generative neural network to model the distribution of system states in parallel over time. At inference, it uses a sliding window of measurements to predict subtrajectories, enabling robust state estimation even from sparse data. Unlike autoregressive models, PAINT is theoretically on-trajectory, ensuring long-term fidelity.
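The sliding-window inference idea can be illustrated with a minimal sketch. This is a hypothetical illustration, not the repo's implementation: `model` stands in for a trained PAINT network that maps a window of measurements to a predicted subtrajectory, and overlapping predictions are simply averaged.

```python
import numpy as np

def sliding_window_inference(model, measurements, window, stride):
    """Predict overlapping subtrajectories from a sliding window of
    measurements and average predictions where windows overlap.

    measurements: array of shape (T, D); model: callable mapping a
    (window, D) slice to a (window, D) predicted subtrajectory.
    """
    T = measurements.shape[0]
    sums = np.zeros_like(measurements, dtype=float)
    counts = np.zeros(T)
    for start in range(0, T - window + 1, stride):
        # Predict one subtrajectory from the current measurement window.
        sub = model(measurements[start:start + window])
        sums[start:start + window] += sub
        counts[start:start + window] += 1
    # Average overlapping predictions; avoid division by zero at uncovered steps.
    return sums / np.maximum(counts, 1)[:, None]
```

With an identity `model`, full window coverage reproduces the input, which makes the averaging logic easy to sanity-check.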
Install the dependencies, ideally in a fresh environment:
pip install -r requirements.txt
Set the paths and the logger; we use wandb, which also plots intermediate generations.
configs/paths/default.yaml
configs/logger/wandb.yaml
Training can be started with:
python src/train.py --config-name=paint data.batch_size=2 trainer.devices=[1] data.num_workers=1 checkpoint_interval=1
or, for the autoregressive baseline:
python src/train.py --config-name=autoregressive data.batch_size=2 trainer.devices=[1] data.num_workers=1
In principle, use as high a batch size as you can afford; we used a batch size of 144. If you have compute constraints, you can enable gradient checkpointing for intermediate layers of the transformer by configuring the checkpoint interval (layer i is checkpointed if i % checkpoint_interval == 0):
checkpoint_interval=2
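The interval rule above can be sketched in PyTorch. This is a hedged illustration of the technique, not the repo's code: `CheckpointedTransformer` and its layer stack are hypothetical names, and `torch.utils.checkpoint` recomputes activations of the selected layers during the backward pass to trade compute for memory.

```python
import torch
from torch import nn
from torch.utils.checkpoint import checkpoint

class CheckpointedTransformer(nn.Module):
    """Hypothetical sketch: checkpoint layer i when
    i % checkpoint_interval == 0 (an interval of 0 disables it)."""

    def __init__(self, layers, checkpoint_interval=0):
        super().__init__()
        self.layers = nn.ModuleList(layers)
        self.checkpoint_interval = checkpoint_interval

    def forward(self, x):
        for i, layer in enumerate(self.layers):
            if self.checkpoint_interval and i % self.checkpoint_interval == 0:
                # Activations of this layer are recomputed in backward,
                # saving memory at the cost of extra compute.
                x = checkpoint(layer, x, use_reentrant=False)
            else:
                x = layer(x)
        return x
```

With `checkpoint_interval=2`, every second layer is checkpointed; `checkpoint_interval=1` checkpoints all layers for maximum memory savings.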
@article{radler2025paint,
title={PAINT: Parallel-in-time Neural Twins for Dynamical System Reconstruction},
author={Radler, Andreas and Seyfried, Vincent and Pirker, Stefan and Brandstetter, Johannes and Lichtenegger, Thomas},
journal={arXiv preprint arXiv:2510.16004},
year={2025}
}
Another GIF, this time with the mean field, i.e. averaged over 10 different samples from PAINT.


