A framework for calibrated, physics-informed uncertainty estimates for neural PDE solvers. The approach uses the physics residual error as a nonconformity score within conformal prediction, yielding data-free, model-agnostic, and statistically guaranteed uncertainty estimates.
- Physics Residual Error (PRE) as a nonconformity score for Conformal Prediction
- Data-free uncertainty quantification
- Model-agnostic implementation
- Marginal and Joint coverage guarantees (the marginal guarantee is stated below)
- Efficient gradient estimation using convolutional kernels
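
For reference, the marginal guarantee is the standard split-conformal one, stated here for intuition (see the paper for the joint case): with exchangeable calibration scores $s_1, \dots, s_n$ (here, PRE values) and miscoverage level $\alpha$,

```math
\hat{q} = \mathrm{Quantile}\left(s_1, \dots, s_n;\ \tfrac{\lceil (n+1)(1-\alpha) \rceil}{n}\right),
\qquad
\mathbb{P}\left[\, s_{n+1} \le \hat{q} \,\right] \ \ge\ 1 - \alpha .
```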
```
├── Active_Learning/    # Active learning experiments
├── Expts_initial/      # Initial experiments
├── Joint/              # Joint conformal prediction implementation
├── Marginal/           # Marginal conformal prediction implementation
├── Neural_PDE/         # Neural PDE solver implementations
├── Physics_Informed/   # Physics-informed components
├── Tests/              # Test suite
├── Utils/              # Utility functions
└── Other_UQ/           # Bayesian Deep Learning experiments
```
As part of the codebase for the paper, we release a utility function that constructs convolutional layers for gradient estimation based on your choice of the order of differentiation and the order of the Taylor approximation. This allows the PRE score function to be expressed in a single line of code. This section provides an overview of the implementation and the algorithm for estimating the PRE using convolution operations, illustrated on an arbitrary PDE with a temporal gradient and a Laplacian.
```python
from ConvOps_2d import ConvOperator

# Define operators for the PDE terms
D_t = ConvOperator(domain='t', order=1)            # time derivative
D_xx_yy = ConvOperator(domain=('x','y'), order=2)  # Laplacian
D_identity = ConvOperator()                        # identity operator
```
The ConvOperator class sets up a gradient operation. It takes the variable(s) of differentiation and the order of differentiation as arguments, designs the appropriate forward-difference stencil, and sets up a convolutional layer with the stencil as its kernel. Under the hood, the class devises a 3D convolutional layer and configures the kernel to act on a spatio-temporal tensor of shape [BS, Nt, Nx, Ny], i.e. batch size, temporal discretisation, and the spatial discretisations in x and y.
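For intuition, here is a minimal standalone sketch of the idea behind such an operator: a first-order forward-difference stencil embedded along the time axis of a 3D convolution kernel. This is a schematic reconstruction under stated assumptions, not the actual ConvOperator internals; kernel shapes, padding, and the handling of the Taylor order in the repository may differ.

```python
import torch
import torch.nn.functional as F

# First-order forward-difference stencil for du/dt: (u[t+1] - u[t]) / dt.
# The 1/dt scaling factor is omitted here for brevity.
stencil = torch.tensor([-1.0, 1.0])

# Embed the stencil along the temporal axis of a 3D kernel so that a single
# conv3d call differentiates a [BS, Nt, Nx, Ny] field (with a channel dim added).
kernel = torch.zeros(1, 1, 2, 1, 1)   # [out_ch, in_ch, k_t, k_x, k_y]
kernel[0, 0, :, 0, 0] = stencil

u = torch.randn(4, 1, 32, 64, 64)     # [BS, channel, Nt, Nx, Ny]
du_dt = F.conv3d(u, kernel)           # finite-difference estimate of du/dt
print(du_dt.shape)                    # torch.Size([4, 1, 31, 64, 64])
```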
```python
# Combine the operators into the full PDE residual operator
alpha, beta = 1.0, 0.5  # PDE coefficients
D = ConvOperator()
D.kernel = D_t.kernel - alpha * D_xx_yy.kernel - beta * D_identity.kernel
```
The convolutional kernels are additive, i.e. to estimate the residual in a single convolution operation, the individual kernels can be added together to form a composite kernel that characterises the entire PDE residual.
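This works because convolution is linear in its kernel: conv(u, k1 + k2) = conv(u, k1) + conv(u, k2). A quick standalone check of that property with plain PyTorch (arbitrary shapes, unrelated to the repository's kernels):

```python
import torch
import torch.nn.functional as F

# Convolution is linear in the kernel, so summed kernels give summed responses.
u = torch.randn(1, 1, 16, 32, 32)
k1 = torch.randn(1, 1, 3, 3, 3)
k2 = torch.randn(1, 1, 3, 3, 3)
print(torch.allclose(F.conv3d(u, k1 + k2),
                     F.conv3d(u, k1) + F.conv3d(u, k2), atol=1e-4))
```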
Once the kernels are set up, estimating the PRE is as simple as passing the model output through the composite class instance:
```python
# Estimate the PRE from the model prediction
y_pred = model(X)
PRE = D(y_pred)
```
Since it operates only on the model outputs, this method of PRE estimation is memory-efficient and computationally cheap, and with the ConvOperator the PDE residual can be evaluated in a single line of code.
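To sketch how such PRE values would then drive the conformal calibration step, here is a minimal, self-contained illustration of the standard split-conformal quantile rule. The score values and names here are stand-ins, not the repository's API; in the paper's setting the scores come from the physics residual of the model outputs rather than from held-out labels.

```python
import numpy as np

rng = np.random.default_rng(0)
n, alpha = 1000, 0.1

# Stand-in calibration scores; in practice these would be |PRE| values
# computed on a calibration set of model predictions.
cal_scores = np.abs(rng.normal(size=n))

# Standard split-conformal quantile, giving marginal coverage >= 1 - alpha.
q_level = np.ceil((n + 1) * (1 - alpha)) / n
q_hat = np.quantile(cal_scores, q_level, method="higher")

# A new prediction conforms (is covered) if its score is below the threshold.
test_score = np.abs(rng.normal())
print(f"threshold = {q_hat:.3f}, covered = {test_score <= q_hat}")
```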
Standalone, reproducible experiments (require no data or pretrained models):
```bash
python Marginal/Advection_Residuals_CP.py  # Run the 1D advection experiment to obtain marginal bounds
python Joint/Advection_Residuals_CP.py     # Run the 1D advection experiment to obtain joint bounds
```
To run the other experiments, you will need the data (which can also be generated by running the scripts in Neural_PDE/Numerical_Solvers) and the pretrained models, which can be downloaded from here.
The repository includes experiments over the following PDEs:
- 1D Advection Equation
- 1D Burgers' Equation
- 2D Wave Equation
- 2D Navier-Stokes Equations
- 2D Magnetohydrodynamics (MHD)
The methodology is benchmarked against several Bayesian deep learning methods:
- MC Dropout
- Deep Ensembles
- Bayesian Neural Networks
- Stochastic Weight Averaging - Gaussian (SWAG)
- NumPy
- SciPy
- PyTorch
- Matplotlib
- tqdm
If you use this code in your research, please cite:
```bibtex
@misc{gopakumar2025calibratedphysicsinformeduncertaintyquantification,
  title={Calibrated Physics-Informed Uncertainty Quantification},
  author={Vignesh Gopakumar and Ander Gray and Lorenzo Zanisi and Timothy Nunn and Stanislas Pamela and Daniel Giles and Matt J. Kusner and Marc Peter Deisenroth},
  year={2025},
  eprint={2502.04406},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2502.04406},
}
```
MIT License
- Vignesh Gopakumar
- Ander Gray
- Lorenzo Zanisi
- Stanislas Pamela
- Dan Giles
- Matt J. Kusner
- Marc Peter Deisenroth
For questions and feedback, please contact v.gopakumar@ucl.ac.uk