In this project, we introduce a powerful new class of neural networks — the Fourier Neural Operator (FNO) — designed to efficiently learn operators arising from partial differential equations (PDEs).
By parameterizing the integral kernel directly in Fourier space, we develop an expressive and scalable architecture that outperforms existing methods in both accuracy and speed.
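The core idea, multiplying the low Fourier modes of the input by learned complex weights and transforming back, can be sketched in a simplified single-channel form (NumPy, for illustration only; the actual layers in models/ are multi-channel PyTorch modules):

```python
import numpy as np

def spectral_conv_1d(x, weights, modes):
    """Single-channel sketch of a Fourier layer: FFT, multiply the lowest
    `modes` coefficients by learned complex weights, inverse FFT.
    The real layers in models/ operate on batched, multi-channel tensors."""
    n = x.shape[-1]
    x_ft = np.fft.rfft(x)                            # to Fourier space: n//2 + 1 modes
    out_ft = np.zeros_like(x_ft)
    out_ft[:modes] = x_ft[:modes] * weights[:modes]  # act only on the low modes
    return np.fft.irfft(out_ft, n=n)                 # back to physical space
```

Truncating to a fixed number of modes is what makes the layer resolution-invariant: the same weights apply at any discretization of the input function.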
Our model has been extensively tested on:
- Burgers' Equation
- Darcy Flow
- Navier-Stokes Equation (including turbulent regimes)
Key features:
- Efficient Kernel Learning: Directly learning in Fourier space drastically reduces computational complexity.
- Fast and Scalable: Inference is up to three orders of magnitude faster than traditional PDE solvers.
- State-of-the-art Performance: Outperforms existing neural network-based methods on a wide range of benchmarks.
- Simple and Modular Code: All scripts are standalone, clean, and easy to adapt for different applications.
Install the required library using:
pip install torch
Each script in this repository is independent and directly runnable.
- main.py — Training loop and evaluation scripts.
- utilities.py — Dataset generation, loading, and preprocessing utilities.
- models/ — Model architecture files.
- data/ — Datasets for different PDE problems.
We provide datasets for the Burgers equation and Darcy flow.
Data generation scripts are available in utilities.py. Download the datasets and place them inside the data/ directory.
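Preprocessing for PDE datasets like these typically includes pointwise Gaussian normalization of inputs and targets. A generic sketch of that idea (the class name and details here are illustrative, not the repo's exact API; see utilities.py for the actual preprocessing):

```python
import numpy as np

class GaussianNormalizer:
    """Pointwise Gaussian normalization over the sample axis.
    Illustrative sketch only; see utilities.py for the real utilities."""
    def __init__(self, x, eps=1e-8):
        self.mean = x.mean(axis=0)   # per-gridpoint mean over samples
        self.std = x.std(axis=0)     # per-gridpoint std over samples
        self.eps = eps               # guards against division by zero

    def encode(self, x):
        return (x - self.mean) / (self.std + self.eps)

    def decode(self, x):
        return x * (self.std + self.eps) + self.mean
```

Normalizing per grid point (rather than globally) keeps spatially varying fields, such as Darcy permeability coefficients, on a comparable scale everywhere in the domain.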
Evaluate the pre-trained models using the provided scripts, such as eval.py or super_resolution.py.
Train the model:
python main.py
Evaluate the model:
python eval.py
Super-resolution tasks:
python super_resolution.py
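Accuracy on these benchmarks is usually reported as the relative L2 error between predicted and reference solution fields. A minimal sketch of that metric (the exact definition used in eval.py may differ):

```python
import numpy as np

def relative_l2(pred, true):
    """Relative L2 error ||pred - true|| / ||true||, computed per sample
    and averaged over the batch. Sketch only; check eval.py for the
    definition actually used in this repository."""
    pred = pred.reshape(pred.shape[0], -1)
    true = true.reshape(true.shape[0], -1)
    err = np.linalg.norm(pred - true, axis=1) / np.linalg.norm(true, axis=1)
    return err.mean()
```

Normalizing by the norm of the reference solution makes scores comparable across problems whose solution magnitudes differ widely.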
| Problem | FNO Performance | Traditional Solvers |
|---|---|---|
| Burgers' Equation | 3x faster | Slower |
| Darcy Flow | 100x faster | Much slower |
| Navier-Stokes Equation | State-of-the-art | Poor generalization |
The Fourier Neural Operator provides an efficient, scalable, and highly accurate method for learning PDE mappings.
By working in Fourier space, we achieve faster training, better generalization, and superior performance over classical methods and traditional neural networks.
For any questions or collaborations, feel free to reach out!