@article{Vehtari+etal:2024:PSIS,
  author  = {Aki Vehtari and Daniel Simpson and Andrew Gelman and Yuling Yao and Jonah Gabry},
  title   = {Pareto smoothed importance sampling},
  journal = {Journal of Machine Learning Research},
  year    = {2024},
  volume  = {25},
  number  = {72},
  pages   = {1--58}
}

@article{Gelman:etal:2020:workflow,
  title   = {Bayesian workflow},
  author  = {Gelman, Andrew and Vehtari, Aki and Simpson, Daniel and Margossian, Charles C and Carpenter, Bob and Yao, Yuling and Kennedy, Lauren and Gabry, Jonah and B{\"u}rkner, Paul-Christian and Modr{\'a}k, Martin},
  journal = {arXiv preprint arXiv:2011.01808},
  year    = {2020}
}

@article{Magnusson+etal:2024:posteriordb,
  title   = {posteriordb: Testing, benchmarking and developing {Bayesian} inference algorithms},
  author  = {Magnusson, M{\aa}ns and Torgander, Jakob and B{\"u}rkner, Paul-Christian and Zhang, Lu and Carpenter, Bob and Vehtari, Aki},
  journal = {arXiv preprint arXiv:2407.04967},
  year    = {2024}
}

@article{egozcue+etal:2003,
  title   = {Isometric logratio transformations for compositional data analysis},
  author  = {Egozcue, Juan Jos{\'e} and Pawlowsky-Glahn, Vera and Mateu-Figueras, Gl{\`o}ria and Barcelo-Vidal, Carles},

src/reference-manual/pathfinder.qmd (+28 -3)
@@ -4,7 +4,7 @@ pagetitle: Pathfinder
# Pathfinder

Stan supports the Pathfinder algorithm [@zhang_pathfinder:2022].
Pathfinder is a variational method for approximately sampling from differentiable log densities. Starting from a random initialization, Pathfinder locates normal approximations to the target
@@ -22,6 +22,31 @@ the problem of L-BFGS getting stuck at local optima or in saddle points on plate
Compared to ADVI and short dynamic HMC runs, Pathfinder requires one to two orders of magnitude fewer log density and gradient evaluations, with greater reductions for more challenging posteriors. While the evaluations by @zhang_pathfinder:2022 found that single-path and multi-path Pathfinder outperform ADVI for most of the models in the PosteriorDB [@Magnusson+etal:2024:posteriordb] evaluation set, we recognize the need for further experiments on a wider range of models.

## Diagnosing Pathfinder

Pathfinder diagnoses the accuracy of the approximation by computing the density ratios between the true posterior and the approximation and using the Pareto-$\hat{k}$ diagnostic [@Vehtari+etal:2024:PSIS] to assess whether these ratios can be used to improve the approximation via resampling. The normalization of the posterior can be estimated reliably [@Vehtari+etal:2024:PSIS, Section 3], which is the first requirement for reliable resampling. If the estimated Pareto-$\hat{k}$ for the ratios is smaller than 0.7, there is still a need to further diagnose the reliability of the importance sampling estimates for all quantities of interest [@Vehtari+etal:2024:PSIS, Section 2.2]. If the estimated Pareto-$\hat{k}$ is larger than 0.7, then the estimate of the normalization is unreliable and any Monte Carlo estimate may have a large error. The resampled draws can still contain useful information about the location and shape of the posterior, which can be used in the early parts of a Bayesian workflow [@Gelman:etal:2020:workflow].
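
The sketch below is a rough, hypothetical illustration of this check, not Stan's implementation: it evaluates log density ratios for stand-in target and approximation densities and estimates the Pareto tail shape with a simple Hill-type estimator, whereas PSIS [@Vehtari+etal:2024:PSIS] fits a generalized Pareto distribution to the tail and smooths the largest weights.

```python
import numpy as np

# Stand-in example: draws from a normal approximation to a slightly wider
# normal target, playing the role of Pathfinder output (both densities are
# hypothetical and unnormalized).
rng = np.random.default_rng(0)
draws = rng.normal(size=(4000, 2))               # approximate posterior draws
log_q = -0.5 * np.sum(draws**2, axis=1)          # log density of the approximation
log_p = -0.5 * np.sum((draws / 1.2)**2, axis=1)  # log density of the target

log_ratios = log_p - log_q                       # log importance ratios

# Crude Hill-type estimate of the Pareto tail shape from the largest 20% of
# the ratios; PSIS instead fits a generalized Pareto distribution and
# smooths the tail weights.
tail = np.sort(log_ratios)[-len(log_ratios) // 5:]
k_hat = float(np.mean(tail - tail[0]))
print(f"crude Pareto k-hat: {k_hat:.2f}")

if k_hat < 0.7:
    print("ratios look usable for importance resampling")
else:
    print("normalization estimate is unreliable; use draws only for rough exploration")
```
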

## Using Pathfinder for initializing MCMC

If the estimated Pareto-$\hat{k}$ for the ratios is smaller than 0.7, the resampled posterior draws are almost as good for initializing MCMC as independent draws from the posterior would be. If the estimated Pareto-$\hat{k}$ for the ratios is larger than 0.7, the Pathfinder draws are not reliable for posterior inference directly, but they are still very likely better for initializing MCMC than random draws from an arbitrary pre-defined distribution (e.g., the uniform distribution from -2 to 2 that Stan uses by default). If Pareto-$\hat{k}$ is larger than 0.7, it is likely that one of the ratios is much larger than the others, and the default resampling with replacement would then produce many copies of a single draw. For initializing several Markov chains, it is better to use resampling without replacement, which guarantees a unique initialization for each chain. At the moment, Stan allows turning off the resampling completely, and the resampling without replacement can then be done outside of Stan.
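
As a hypothetical post-processing sketch (the function and array names below are illustrative, not a Stan interface), resampling without replacement from the Pathfinder draws might look like this:

```python
import numpy as np

def pathfinder_inits(draws, log_ratios, num_chains, seed=0):
    """Pick unique MCMC initializations from (hypothetical) Pathfinder draws
    by importance resampling without replacement."""
    rng = np.random.default_rng(seed)
    # Subtract the max before exponentiating for numerical stability,
    # then normalize the ratios into resampling probabilities.
    w = np.exp(log_ratios - np.max(log_ratios))
    p = w / w.sum()
    # replace=False guarantees a distinct draw for every chain, even when
    # one ratio dominates all the others.
    idx = rng.choice(len(draws), size=num_chains, replace=False, p=p)
    return draws[idx]

# Usage with stand-in draws and log ratios:
rng = np.random.default_rng(1)
draws = rng.normal(size=(4000, 3))
log_ratios = rng.normal(size=4000)
inits = pathfinder_inits(draws, log_ratios, num_chains=4)
print(inits.shape)  # (4, 3)
```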