✨ Add a Laplace wrapper #96


Merged: 2 commits, Jun 13, 2024
1 change: 1 addition & 0 deletions README.md
@@ -84,6 +84,7 @@ To date, the following post-processing methods have been implemented:

- Temperature, Vector, & Matrix scaling - [Tutorial](https://torch-uncertainty.github.io/auto_tutorials/tutorial_scaler.html)
- Monte Carlo Batch Normalization - [Tutorial](https://torch-uncertainty.github.io/auto_tutorials/tutorial_mc_batch_norm.html)
- A wrapper for the Laplace approximation using the [Laplace library](https://github.com/aleximmer/Laplace)

## Tutorials

11 changes: 10 additions & 1 deletion docs/source/api.rst
@@ -242,6 +242,16 @@ Post-Processing Methods

.. currentmodule:: torch_uncertainty.post_processing

.. autosummary::
   :toctree: generated/
   :nosignatures:
   :template: class_inherited.rst

   MCBatchNorm
   Laplace

Scaling Methods
^^^^^^^^^^^^^^^

.. autosummary::
   :toctree: generated/
   :nosignatures:
@@ -250,7 +260,6 @@ Post-Processing Methods
   TemperatureScaler
   VectorScaler
   MatrixScaler
MCBatchNorm

Datamodules
-----------
10 changes: 10 additions & 0 deletions docs/source/references.rst
@@ -193,6 +193,16 @@ For Monte-Carlo Batch Normalization, consider citing:
* Authors: *Mathias Teye, Hossein Azizpour, and Kevin Smith*
* Paper: `ICML 2018 <https://arxiv.org/pdf/1802.06455.pdf>`__.

Laplace Approximation
^^^^^^^^^^^^^^^^^^^^^

For Laplace Approximation, consider citing:

**Laplace Redux - Effortless Bayesian Deep Learning**

* Authors: *Erik Daxberger, Agustinus Kristiadi, Alexander Immer, Runa Eschenhagen, Matthias Bauer, Philipp Hennig*
* Paper: `NeurIPS 2021 <https://arxiv.org/abs/2106.14806>`__.

Metrics
-------

7 changes: 5 additions & 2 deletions pyproject.toml
@@ -46,7 +46,7 @@ dependencies = [
]

[project.optional-dependencies]
image = ["scikit-image", "h5py",]
image = ["scikit-image", "h5py", "webdataset"]
tabular = ["pandas"]
dev = [
    "torch_uncertainty[image]",
@@ -63,7 +63,10 @@ docs = [
    "sphinx-design",
    "sphinx-codeautolink",
]
all = ["torch_uncertainty[dev,docs,image,tabular]"]
all = [
    "torch_uncertainty[dev,docs,image,tabular]",
    "laplace-torch"
]

[project.urls]
homepage = "https://torch-uncertainty.github.io/"
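With the `all` extra above, the new dependency can be pulled in together with everything else; a sketch of the relevant pip invocations, assuming the package is published as `torch-uncertainty` on PyPI:

```shell
# Everything, including laplace-torch:
pip install "torch-uncertainty[all]"

# Or just the Laplace dependency on top of an existing install:
pip install laplace-torch
```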
68 changes: 68 additions & 0 deletions torch_uncertainty/post_processing/laplace.py
@@ -0,0 +1,68 @@
from importlib import util
from typing import Literal

from torch import Tensor, nn
from torch.utils.data import Dataset

if util.find_spec("laplace"):
    # Alias the import so it is not shadowed by the wrapper class below.
    from laplace import Laplace as LaplaceLib

    laplace_installed = True
else:
    laplace_installed = False


class Laplace(nn.Module):
    def __init__(
        self,
        model: nn.Module,
        task: Literal["classification", "regression"],
        subset_of_weights: Literal["last_layer", "subnetwork", "all"] = "last_layer",
        hessian_structure: Literal["diag", "kron", "full", "lowrank"] = "kron",
        pred_type: Literal["glm", "nn"] = "glm",
        link_approx: Literal[
            "mc", "probit", "bridge", "bridge_norm"
        ] = "probit",
    ) -> None:
        """Laplace approximation for uncertainty estimation.

        This class is a wrapper around the Laplace classes from the
        laplace-torch library.

        Args:
            model (nn.Module): model to be converted.
            task (Literal["classification", "regression"]): task type.
            subset_of_weights (str): subset of weights to be considered.
                Defaults to "last_layer".
            hessian_structure (str): structure of the Hessian matrix.
                Defaults to "kron".
            pred_type (Literal["glm", "nn"], optional): type of posterior
                predictive. See the Laplace library for more details.
                Defaults to "glm".
            link_approx (Literal["mc", "probit", "bridge", "bridge_norm"],
                optional): how to approximate the classification link function
                for the `'glm'`. See the Laplace library for more details.
                Defaults to "probit".

        Reference:
            Daxberger et al. Laplace Redux - Effortless Bayesian Deep
            Learning. In NeurIPS 2021.
        """
        super().__init__()
        if not laplace_installed:
            raise ImportError(
                "The laplace-torch library is not installed. Please install "
                "it via `pip install laplace-torch`."
            )
        self.la = LaplaceLib(
            model=model,
            task=task,
            subset_of_weights=subset_of_weights,
            hessian_structure=hessian_structure,
        )
        self.pred_type = pred_type
        self.link_approx = link_approx

    def fit(self, dataset: Dataset) -> None:
        """Fit the Laplace approximation on the training data."""
        self.la.fit(dataset=dataset)

    def forward(self, x: Tensor) -> Tensor:
        return self.la(
            x, pred_type=self.pred_type, link_approx=self.link_approx
        )
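The `find_spec` guard used in `laplace.py` is a common pattern for optional dependencies: probe for the package without importing it eagerly, then fail loudly only when the feature is actually used. A minimal, self-contained sketch of the idea (the helper name `is_installed` is illustrative, not part of torch-uncertainty):

```python
from importlib import util


def is_installed(package: str) -> bool:
    """Check whether a top-level package is importable without importing it."""
    return util.find_spec(package) is not None


# Standard-library modules resolve; a made-up name does not.
print(is_installed("json"))                  # True
print(is_installed("surely_not_a_package"))  # False

# A guarded feature can then raise a helpful error on first use:
if not is_installed("surely_not_a_package"):
    message = "Install the optional dependency via `pip install <name>`."
```

Deferring the `ImportError` to `__init__` (rather than import time) keeps `torch_uncertainty.post_processing` importable for users who never touch the Laplace wrapper.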
2 changes: 1 addition & 1 deletion torch_uncertainty/post_processing/mc_batch_norm.py
Original file line number Diff line number Diff line change
@@ -99,7 +99,7 @@ def _est_forward(self, x: Tensor) -> Tensor:
    def forward(
        self,
        x: Tensor,
    ) -> tuple[Tensor, Tensor]:
    ) -> Tensor:
        if self.training:
            return self.model(x)
        if not self.trained: