
Optimal Lattice Boltzmann Closures through Multi-Agent Reinforcement Learning

Description

This is the official implementation for the paper Optimal Lattice Boltzmann Closures through Multi-Agent Reinforcement Learning.

Reinforcement Learning (RL) is used to automatically discover hybrid turbulence models for Lattice Boltzmann Methods (LBM). LBM offers advantages such as easy access to macroscopic flow features and complete locality, which allows efficient parallelization. RL eliminates the need for costly direct numerical simulation (DNS) data during training by using an energy spectrum similarity measure as the reward. We have implemented several multi-agent RL models (ReLBM) with fully convolutional networks, achieving stabilization of turbulent 2D Kolmogorov flows and more accurate energy spectra than traditional LBM turbulence models. An example of the performance of a ReLBM at resolution $N = 128$, compared to a coarse LBGK simulation (128_BGK) and a resolved direct numerical simulation (2048_BGK), is shown below. For RL training we used Tianshou, and for the LBM simulations we used XLB.
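To illustrate the idea behind the energy spectrum reward, here is a minimal sketch: radially binning the 2D Fourier energy of a velocity field into shells, then scoring a simulation by its log-distance to a reference spectrum. The function names and the exact distance measure are assumptions for illustration, not the repository's implementation.

```python
import numpy as np

def energy_spectrum(u, v):
    """Radially binned kinetic-energy spectrum of a square 2D velocity field."""
    n = u.shape[0]
    uh = np.fft.fft2(u) / n**2
    vh = np.fft.fft2(v) / n**2
    e = 0.5 * (np.abs(uh) ** 2 + np.abs(vh) ** 2)   # spectral energy density
    k = np.fft.fftfreq(n, d=1.0 / n)                 # integer wavenumbers
    kmag = np.sqrt(k[:, None] ** 2 + k[None, :] ** 2)
    kbins = np.arange(0.5, n // 2 + 1.0)             # shell edges around k = 1..n/2
    shell = np.digitize(kmag.ravel(), kbins)
    spec = np.bincount(shell, weights=e.ravel(), minlength=len(kbins) + 1)
    return spec[1:len(kbins)]                        # drop k=0 mode and corner tail

def spectrum_reward(u, v, ref_spec, eps=1e-30):
    """Hypothetical reward: negative mean log-distance to a reference spectrum."""
    spec = energy_spectrum(u, v)
    return -np.mean(np.abs(np.log(spec + eps) - np.log(ref_spec + eps)))
```

A reward of this shape is maximized (at zero) when the coarse simulation reproduces the reference spectrum shell by shell, which is what makes DNS data unnecessary during training: only a target spectrum is needed.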

Getting Started

Clone the repository and fetch XLB:

$ git clone git@github.com:cselab/Optimal_LB_Closures.git
$ cd Optimal_LB_Closures
$ git clone git@github.com:Autodesk/XLB.git -b main-old

To disable print statements in XLB, optionally modify XLB/src/base.py by removing or commenting out:

self.show_simulation_parameters()
...
print("Time to create the grid mask:", time.time() - start)
...
print("Time to create the local masks and normal arrays:", time.time() - start)

Install Requirements

For example, using a Python virtual environment:

python -m venv <env_name>
source <env_name>/bin/activate
pip install -r requirements.txt

Install JAX:

python -m pip install -U "jax[cuda12]"

Install PyTorch for CUDA 12.6, which is currently only available as a nightly build. Check the PyTorch website for updates.

pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu126

Setup (Optional)

1. Burn-in simulation

A burn-in simulation is used to statistically stabilize the Kolmogorov flow. Run:

$ cd xlb_flows
$ python run_burn_in.py

to run an XLB simulation of the 2D Kolmogorov flow for $T=645$ at resolution $N=2048$ for seeds $s \in \{102, 99, 33\}$. The final density and velocity fields are used to initialize all subsequent Kolmogorov flows: $s=102$ is used for training, $s=99$ for validation, and $s=33$ for testing. This step is optional, as the resulting fields from the burn-in simulation are included in results/init_fields.

Model Training (Optional)

To reproduce the training of the policies run:

cd closure_discovery

Global:

CUDA_VISIBLE_DEVICES=1 PYTHONPATH=..:../XLB python rl_klmgrv_PPO.py --max_epoch 100 --setup "glob" --num_agents 1 --ent_coef -0.01 --seed 66

Local:

CUDA_VISIBLE_DEVICES=1 PYTHONPATH=..:../XLB python rl_klmgrv_PPO.py --max_epoch 300 --setup "loc" --num_agents 128 --lr_decay 1 --seed 44

Interpolating:

CUDA_VISIBLE_DEVICES=1 PYTHONPATH=..:../XLB python rl_klmgrv_PPO.py --max_epoch 200 --setup "interp" --num_agents 16 --seed 33

These steps are optional, as the weights of the trained models are provided in results/weights.

Model Testing

1. Create references for testing

To create all the reference solutions used for testing the ClosureRL models, run:

$ cd xlb_flows
$ python create_reference_runs.py

It runs a BGK and a KBC simulation at the same resolution as the ClosureRL model, a BGK simulation at twice the resolution, and a BGK simulation at DNS resolution $N=2048$. This is done for all three test cases: Kolmogorov flow at $Re=10^4$ and $Re=10^5$, and a decaying flow at $Re=10^4$. All simulations run for $T=227$, and the velocity field is saved every $32$ steps at the coarse-grained simulation (CGS) resolution.
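Since the reference runs live on finer grids than the ClosureRL model, comparing fields requires projecting the reference onto the coarse grid first. A minimal sketch of one way to do this, assuming the fields are square NumPy arrays; block averaging and a relative L2 metric are illustrative choices here, not the repository's actual comparison (which is based on energy spectra):

```python
import numpy as np

def downsample(field, factor):
    """Block-average a square 2D field by an integer factor."""
    n = field.shape[0]
    return field.reshape(n // factor, factor, n // factor, factor).mean(axis=(1, 3))

def rel_l2_error(coarse, reference):
    """Relative L2 error between a coarse field and a block-averaged reference."""
    ref = downsample(reference, reference.shape[0] // coarse.shape[0])
    return np.linalg.norm(coarse - ref) / np.linalg.norm(ref)
```

For example, a $2048^2$ reference field would be reduced to $128^2$ with `downsample(ref, 16)` before comparing against the ClosureRL output.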

2. Evaluate ClosureRL models

To evaluate the trained models, run:

$ cd closure_discovery
$ python create_test_runs.py

This will evaluate all three models (global, interpolating, and local) on the three test cases and store the velocity fields used to create the figures.

3. Create figures

4. Measure speedup

To measure the speedup and create the speedup plot, run:

$ cd xlb_flows
$ python measure_speedup.py

Acknowledgements

  • The RL part is based on Tianshou.
  • The LBM part is based on XLB.
