This is the official implementation for the paper Optimal Lattice Boltzmann Closures through Multi-Agent Reinforcement Learning.
Reinforcement Learning (RL) is used for the automatic discovery of hybrid turbulence models for Lattice Boltzmann Methods (LBM). LBM offers advantages such as easy access to macroscopic flow features and complete locality, which allows efficient parallelization. RL eliminates the need for costly direct numerical simulation data during training by using an energy spectrum similarity measure as a reward. We have implemented several multi-agent RL models (ReLBM) with fully convolutional networks, achieving stabilization of turbulent 2D Kolmogorov flows and more accurate energy spectra than traditional LBM turbulence models. An example of the performance of a ReLBM is shown in the accompanying figure.
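For intuition, here is a minimal, illustrative sketch of a reward based on energy spectrum similarity: it computes a radially averaged 2D energy spectrum of a velocity field and penalizes the mismatch to a reference spectrum. The function names and the exact similarity measure are illustrative assumptions, not the code used in this repository.

```python
import numpy as np

def energy_spectrum(u, v):
    """Radially averaged kinetic-energy spectrum of a square 2D velocity field."""
    n = u.shape[0]
    uh = np.fft.fft2(u) / (n * n)
    vh = np.fft.fft2(v) / (n * n)
    e2d = 0.5 * (np.abs(uh) ** 2 + np.abs(vh) ** 2)   # spectral energy density
    k = np.fft.fftfreq(n, d=1.0 / n)                  # integer wavenumbers
    kmag = np.sqrt(k[:, None] ** 2 + k[None, :] ** 2)
    shells = np.rint(kmag).astype(int).ravel()        # nearest integer wavenumber shell
    spectrum = np.bincount(shells, weights=e2d.ravel())
    return spectrum[: n // 2]                         # keep only the resolved shells

def spectrum_reward(u, v, reference_spectrum, eps=1e-12):
    """Hypothetical scalar reward: negative mean squared log-spectrum mismatch."""
    spec = energy_spectrum(u, v)
    return -float(np.mean((np.log(spec + eps) - np.log(reference_spectrum + eps)) ** 2))
```

Here `reference_spectrum` is a precomputed target spectrum of length `n // 2`.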
Clone repository and import XLB
$ git clone git@github.com:cselab/Optimal_LB_Closures.git
$ cd Optimal_LB_Closures
$ git clone git@github.com:Autodesk/XLB.git -b main-old
To disable print statements in XLB, optionally modify XLB/src/base.py
by removing or commenting out:
self.show_simulation_parameters()
...
print("Time to create the grid mask:", time.time() - start)
...
print("Time to create the local masks and normal arrays:", time.time() - start)
Create a Python environment and install the requirements, for example as a Python venv:
python -m venv <env_name>
source <env_name>/bin/activate
pip install -r requirements.txt
Install JAX:
python -m pip install -U "jax[cuda12]"
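To verify that JAX picked up the GPU (optional sanity check):

```python
import jax

# Should report "gpu" and list the visible CUDA devices if the install succeeded.
print(jax.default_backend())
print(jax.devices())
```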
Install PyTorch for CUDA 12.6, which is currently only available as a nightly build. Check the PyTorch website for updates:
pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu126
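To verify the PyTorch CUDA build (optional sanity check):

```python
import torch

# Should print True and the detected GPU if the CUDA nightly build is active.
print(torch.__version__)
print(torch.cuda.is_available())
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
```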
A burn-in simulation is used to statistically stabilize the Kolmogorov flow. Run:
$ cd xlb_flows
$ python run_burn_in.py
to run an XLB simulation of the 2D Kolmogorov flow for the burn-in period.
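For reference, the Kolmogorov flow is a periodic 2D flow driven by a steady sinusoidal body force. A minimal sketch of such a forcing field is shown below; the grid size, amplitude, and forcing wavenumber are placeholders, not the values used in run_burn_in.py.

```python
import numpy as np

n = 128                         # grid size (placeholder)
amplitude, wavenumber = 1.0, 4  # forcing parameters (placeholders)

# Kolmogorov forcing: a steady sinusoidal body force on the x-velocity,
# f_x(y) = A * sin(k * y), f_y = 0, on a 2*pi-periodic domain.
y = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
force_x = amplitude * np.sin(wavenumber * y)[None, :] * np.ones((n, 1))
force_y = np.zeros((n, n))
```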
To reproduce the training of the policies, run:
$ cd closure_discovery
Global:
CUDA_VISIBLE_DEVICES=1 PYTHONPATH=..:../XLB python rl_klmgrv_PPO.py --max_epoch 100 --setup "glob" --num_agents 1 --ent_coef -0.01 --seed 66
Local:
CUDA_VISIBLE_DEVICES=1 PYTHONPATH=..:../XLB python rl_klmgrv_PPO.py --max_epoch 300 --setup "loc" --num_agents 128 --lr_decay 1 --seed 44
Interpolating:
CUDA_VISIBLE_DEVICES=1 PYTHONPATH=..:../XLB python rl_klmgrv_PPO.py --max_epoch 200 --setup "interp" --num_agents 16 --seed 33
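The three setups can also be launched from a small driver script. The sketch below simply wraps the commands above with subprocess; the arguments and environment variables are taken verbatim from the commands listed above.

```python
import os
import subprocess

# Launch the three training setups sequentially with the same arguments as above.
configs = [
    ["--max_epoch", "100", "--setup", "glob",   "--num_agents", "1",   "--ent_coef", "-0.01", "--seed", "66"],
    ["--max_epoch", "300", "--setup", "loc",    "--num_agents", "128", "--lr_decay", "1",     "--seed", "44"],
    ["--max_epoch", "200", "--setup", "interp", "--num_agents", "16",  "--seed", "33"],
]

env = dict(os.environ, CUDA_VISIBLE_DEVICES="1", PYTHONPATH="..:../XLB")
for args in configs:
    subprocess.run(["python", "rl_klmgrv_PPO.py", *args], env=env, check=True)
```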
These steps are optional, as we provide the weights of the trained models in results/weights.
To create all the reference solutions used for testing the ClosureRL models, run:
$ cd xlb_flows
$ python create_reference_runs.py
It runs a BGK and a KBC simulation at the same resolution as the ClosureRL model, a BGK simulation at twice the resolution, and a BGK simulation at DNS resolution.
To evaluate the trained models, run:
$ cd closure_discovery
$ python create_test_runs.py
This evaluates all three models (global, interpolating, and local) on the three test cases and stores the velocity fields used to create the figures.
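As an example of how the stored velocity fields might be post-processed, the sketch below loads a snapshot and computes its vorticity; the file name, array layout, and domain size are assumptions for illustration, so check the actual output of create_test_runs.py.

```python
import numpy as np

# Hypothetical snapshot layout: an .npz file with x- and y-velocity components.
data = np.load("velocity_field.npz")   # assumed file name, not produced verbatim by the scripts
u, v = data["u"], data["v"]

# Vorticity via central differences, assuming a square, 2*pi-periodic domain
# with axis 0 along x and axis 1 along y.
dx = 2.0 * np.pi / u.shape[0]
vorticity = np.gradient(v, dx, axis=0) - np.gradient(u, dx, axis=1)
print(vorticity.shape, vorticity.std())
```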
- The testing figures are plotted in results/analysis.ipynb.
- The action interpretation figures are plotted in closure_discovery/action_analysis.ipynb.
To measure the speedup and create the speedup plot, run:
$ cd xlb_flows
$ python measure_speedup.py
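A minimal sketch of how such a wall-clock speedup measurement can be structured is shown below; the timed functions are stand-ins, and measure_speedup.py remains the authoritative script.

```python
import time

def timed(run_fn):
    """Wall-clock a single simulation run (placeholder for the actual XLB calls)."""
    start = time.perf_counter()
    run_fn()
    return time.perf_counter() - start

# Compare a coarse ClosureRL-style run against a high-resolution reference run.
t_closure = timed(lambda: time.sleep(0.1))    # stand-in for the ClosureRL simulation
t_reference = timed(lambda: time.sleep(0.4))  # stand-in for the high-resolution BGK run
print(f"speedup: {t_reference / t_closure:.2f}x")
```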