
Commit 9ac24c6

committed "updates on rhat and ess"
1 parent c127edc commit 9ac24c6

File tree

2 files changed: +52 −27 lines changed


src/cmdstan-guide/stansummary.qmd

Lines changed: 10 additions & 11 deletions
@@ -12,22 +12,20 @@ diagnostic statistics on the sampler chains, reported in the following order:
 - Mean - sample mean
 - MCSE - Monte Carlo Standard Error, a measure of the amount of noise in the sample
-- StdDev - sample standard deviation - the variance around the sample mean.
-- MAD - Median Absolute Deviation - the variance around the sample median.
+- StdDev - sample standard deviation - the standard deviation around the sample mean.
+- MAD - Median Absolute Deviation - the median absolute deviation around the sample median.
 - Quantiles - default 5%, 50%, 95%
 - ESS_bulk
 - ESS_tail
-- R_hat - $\hat{R}$ statistic, a measure of chain equilibrium, must be within $0.05$ of $1.0$.
+- R_hat - $\hat{R}$ statistic, an MCMC convergence diagnostic

 When reviewing the `stansummary` output, it is important to check the final three
-output columns first - these are the diagnostic statistics on chain convergence and
-number of independent draws in the sample.
-A $\hat{R}$ statistic of greater than $1.01$ indicates that the chain has not converged and
-therefore the sample is not drawn from the posterior, thus the estimates of the mean and
-all other summary statistics are invalid.
+output columns first - these are the diagnostic statistics on MCMC convergence and
+effective sample size.
+An $\hat{R}$ statistic greater than $1$ indicates potential convergence problems and that the sample may not be representative of the target posterior, in which case the estimates of the mean and all other summary statistics are likely to be invalid. A value of $1.01$ can be used as a generic threshold for deciding whether more iterations or further convergence analysis is needed, but other thresholds can be used depending on the specific use case.

 Estimation by sampling produces an approximate value for the model parameters;
-the MCSE statistic indicates the amount of noise in the estimate.
+the MCSE statistic indicates the amount of uncertainty in the estimate.
 Therefore the MCSE column is placed next to the sample mean column,
 in order to make it easy to compare this sample with others.
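To make the role of MCSE concrete, here is a small Python sketch (illustrative only, not CmdStan's implementation; the variable names are mine) of the standard relationship for the posterior mean, MCSE = StdDev / sqrt(ESS), using synthetic independent draws:

```python
import numpy as np

# Illustrative sketch (not CmdStan's implementation): for the posterior
# mean, the Monte Carlo standard error shrinks with the effective sample
# size as MCSE = StdDev / sqrt(ESS).
rng = np.random.default_rng(0)
draws = rng.normal(loc=2.0, scale=3.0, size=4000)  # stand-in for posterior draws

std_dev = draws.std(ddof=1)     # StdDev column
ess = draws.size                # these draws are independent, so ESS = N
mcse = std_dev / np.sqrt(ess)   # MCSE column for the mean

# The sample mean should be within a few MCSEs of the true mean (2.0).
print(f"Mean {draws.mean():.3f}, StdDev {std_dev:.3f}, MCSE {mcse:.3f}")
```

With autocorrelated MCMC draws the ESS in the denominator is smaller than the number of draws, which is why MCSE is reported alongside the mean rather than derived from StdDev alone.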

@@ -37,8 +35,9 @@ chapter of the Stan Reference Manual which describes both the theory and practice of
 estimation techniques.

 The statistics - Mean, StdDev, MAD, and Quantiles - are computed directly from all draws across all chains.
-The diagnostic statistics - MCSE, ESS_bulk, ESS_tail, and R_hat are computed from the rank-normalized,
-folded chains according to the definitions in @Vehtari+etal:2021:Rhat.
+The diagnostic statistics - ESS_bulk, ESS_tail, and R_hat - are computed from the rank-normalized,
+folded, and split chains according to the definitions by @Vehtari+etal:2021:Rhat.
+The MCSE statistic is computed using the split-chain R_hat and autocorrelations.
 The summary statistics and the algorithms used to compute them are described in sections
 [Notation for samples](https://mc-stan.org/docs/reference-manual/analysis.html#notation-for-samples-chains-and-draws)
 and

src/reference-manual/analysis.qmd

Lines changed: 42 additions & 16 deletions
@@ -5,7 +5,7 @@ pagetitle: Posterior Analysis
 # Posterior Analysis {#analysis.chapter}

 Stan uses Markov chain Monte Carlo (MCMC) techniques to generate
-samples from the posterior distribution for full Bayesian inference.
+draws from the posterior distribution for full Bayesian inference.
 Markov chain Monte Carlo (MCMC) methods were developed for situations
 in which it is not straightforward to make independent draws
 @Metropolis:1953.
@@ -17,7 +17,7 @@ despite the fact that it is not actually a Markov chain.
 Stan's Laplace algorithm produces a sample from a normal approximation
 centered at the mode of a distribution in the unconstrained space.
 If the mode is a maximum a posteriori (MAP) estimate,
-the samples provide an estimate of the mean and standard deviation
+the sample provides an estimate of the mean and standard deviation
 of the posterior distribution.
 If the mode is a maximum likelihood estimate (MLE),
 the sample provides an estimate of the standard error of the likelihood.
@@ -66,15 +66,15 @@ applies. These problems are addressed in the next two sections.

 Stan's posterior analysis tools compute a number of summary
 statistics, estimates, and diagnostics for Markov chain Monte Carlo
-(MCMC) samples. Stan's estimators and diagnostics are more robust in
+(MCMC) sample. Stan's estimators and diagnostics are more robust in
 the face of non-convergence, antithetical sampling, and long-term
 Markov chain correlations than most of the other tools available. The
 algorithms Stan uses to achieve this are described in this chapter.


 ## Convergence

-By definition, a Markov chain generates samples from the target
+By definition, a Markov chain samples from the target
 distribution only after it has converged to equilibrium (i.e.,
 equilibrium is defined as being achieved when $p(\theta^{(n)})$ is the
 target density). The following point cannot be expressed strongly
@@ -105,21 +105,23 @@ One way to monitor whether a chain has converged to the equilibrium
 distribution is to compare its behavior to other randomly initialized
 chains. This is the motivation for the @GelmanRubin:1992 potential
 scale reduction statistic, $\hat{R}$. The $\hat{R}$ statistic
-measures the ratio of the average variance of samples within each
-chain to the variance of the pooled samples across chains; if all
+measures the ratio of the average variance of draws within each
+chain to the variance of the pooled draws across chains; if all
 chains are at equilibrium, these will be the same and $\hat{R}$ will
 be one. If the chains have not converged to a common distribution,
 the $\hat{R}$ statistic will be greater than one.

 Gelman and Rubin's recommendation is that the independent Markov
 chains be initialized with diffuse starting values for the parameters
-and sampled until all values for $\hat{R}$ are below 1.1. Stan
+and sampled until all values for $\hat{R}$ are below some threshold.
+@Vehtari+etal:2021:Rhat suggest in general using a threshold of $1.01$, but
+other thresholds can be used depending on the use case. Stan
 allows users to specify initial values for parameters and it is also
 able to draw diffuse random initializations automatically satisfying
 the declared parameter constraints.

 The $\hat{R}$ statistic is defined for a set of $M$ Markov chains,
-$\theta_m$, each of which has $N$ samples $\theta^{(n)}_m$. The
+$\theta_m$, each of which has $N$ draws $\theta^{(n)}_m$. The
 *between-chain variance* estimate is

 $$
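The between-chain/within-chain variance construction can be sketched in Python as follows. This is an illustrative implementation of basic split-$\hat{R}$ only (without the rank normalization and folding refinements), not Stan's production code; the function and variable names are mine.

```python
import numpy as np

def split_rhat(chains):
    """Basic split R-hat for an (M, N) array of draws.

    Illustrative sketch only: no rank normalization or folding.
    Each chain is split in half so within-chain trends show up as
    apparent between-chain disagreement.
    """
    M, N = chains.shape
    half = N // 2
    split = chains[:, : 2 * half].reshape(2 * M, half)
    chain_means = split.mean(axis=1)
    chain_vars = split.var(axis=1, ddof=1)
    W = chain_vars.mean()                          # within-chain variance
    B = half * chain_means.var(ddof=1)             # between-chain variance
    var_plus = (half - 1) / half * W + B / half    # pooled variance estimate
    return np.sqrt(var_plus / W)

rng = np.random.default_rng(0)
mixed = rng.normal(size=(4, 1000))                    # four well-mixed chains
stuck = mixed + np.array([[0.0], [0.0], [0.0], [5.0]])  # one chain off target
print(split_rhat(mixed))  # near 1 for well-mixed chains
print(split_rhat(stuck))  # well above 1 when a chain disagrees
```

When all split chains share the same mean, `B` is small, `var_plus` is close to `W`, and the statistic is close to one; a shifted chain inflates `B` and drives the statistic above one, as the text describes.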
@@ -190,12 +192,12 @@ because the first half of each chain has not mixed with the second
 half.


-### Rank-normalization helps when there are heavy tails {-}
+### Rank normalization helps when there are heavy tails {-}

 Split R-hat and the effective sample size (ESS) are well defined only if
 the marginal posteriors have finite mean and variance.
-Therefore, following @Vehtari+etal:2021:Rhat, we compute the rank-normalized
-paramter values and then feed them into the formulas for split R-hat and ESS.
+Therefore, following @Vehtari+etal:2021:Rhat, we compute the rank normalized
+parameter values and then feed them into the formulas for split R-hat and ESS.

 Rank normalization proceeds as follows:
@@ -210,7 +212,31 @@
 z_{(nm)} = \Phi^{-1} \left( \frac{r_{(nm)} - 3/8}{S - 1/4} \right)
 $$

+To further improve sensitivity to chains having different scales,
+rank normalized R-hat is also computed for the corresponding *folded*
+draws $\zeta^{(nm)}$, the absolute deviations from the median,
+$$
+\zeta^{(nm)} = \left|\theta^{(nm)} - \mathrm{median}(\theta)\right|.
+$$
+The rank normalized split-$\widehat{R}$ measure computed on the
+$\zeta^{(nm)}$ values is called *folded-split*-$\widehat{R}$.
+This measures convergence in the
+tails rather than in the bulk of the distribution.
+
+To obtain a single conservative $\widehat{R}$ estimate, we report
+the maximum of rank normalized split-$\widehat{R}$ and
+rank normalized folded-split-$\widehat{R}$ for each parameter.
+
+Bulk-ESS is defined as the ESS of the rank normalized split chains. Tail-ESS
+is defined as the minimum of the ESS for the 5% and 95% quantiles. See
+the [Effective Sample Size section](#effective-sample-size.section) for
+details on how ESS is estimated.
+
 ### Convergence is global {-}
+
 A question that often arises is whether it is acceptable to monitor
 convergence of only a subset of the parameters or generated
 quantities. The short answer is "no," but this is elaborated
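The rank normalization and folding transformations added in this hunk can be sketched as below. This follows the formulas in the text but is only an illustrative sketch (ordinal ranks for simplicity; the helper names are mine and this is not Stan's implementation):

```python
import numpy as np
from statistics import NormalDist

def rank_normalize(theta):
    """Replace draws by normal scores z = Phi^{-1}((r - 3/8) / (S - 1/4)),
    where r is the rank among all S pooled draws (ordinal ranks are used
    here for simplicity; ties are essentially absent for continuous draws)."""
    flat = theta.ravel()
    S = flat.size
    ranks = np.empty(S)
    ranks[np.argsort(flat)] = np.arange(1, S + 1)   # ranks r_(nm)
    inv_cdf = NormalDist().inv_cdf                  # Phi^{-1}
    z = np.array([inv_cdf((r - 0.375) / (S - 0.25)) for r in ranks])
    return z.reshape(theta.shape)

def fold(theta):
    """Folded draws zeta = |theta - median(theta)|, which move
    information about the tails into the bulk."""
    return np.abs(theta - np.median(theta))

# Heavy-tailed (Cauchy) chains have no finite mean or variance, but
# their rank normalized values are well-behaved normal scores.
rng = np.random.default_rng(1)
theta = rng.standard_cauchy(size=(4, 1000))
z = rank_normalize(theta)
print(z.mean(), z.std())  # approximately 0 and 1
```

Applying the split-$\widehat{R}$ formula to `rank_normalize(theta)` and to `rank_normalize(fold(theta))`, and taking the maximum of the two, gives the conservative per-parameter $\widehat{R}$ described above.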
@@ -275,7 +301,7 @@ analytically.
 So what we do in practice is hope that the finite number of draws is
 large enough for the expectations to be reasonably accurate. Removing
 warmup iterations improves the accuracy of the expectations but there
-is no guarantee that removing any finite number of samples will be
+is no guarantee that removing any finite number of draws will be
 enough.

@@ -297,7 +323,7 @@ distribution for the log posterior density (`lp__`) almost
 never looks Gaussian; instead it features long tails that can lead to
 large $\hat{R}$ even in the large $N$ limit. Tweaks to $\hat{R}$,
 such as using quantiles in place of raw values, have the flavor of
-making the samples of interest more Gaussian and hence the $\hat{R}$
+making the sample of interest more Gaussian and hence the $\hat{R}$
 statistic more accurate.

@@ -316,7 +342,7 @@ and can apply the standard tests.
 ## Effective sample size {#effective-sample-size.section}

 The second technical difficulty posed by MCMC methods is that the
-samples will typically be autocorrelated (or anticorrelated) within a
+draws will typically be autocorrelated (or anticorrelated) within a
 chain. This increases (or reduces) the uncertainty of the estimation of posterior
 quantities of interest, such as means, variances, or quantiles; see
 @Geyer:2011.
@@ -371,7 +397,7 @@
 \, d\theta - \frac{\mu^2}{\sigma^2}.
 $$

-The effective sample size of $N$ samples generated by a process with
+The effective sample size of $N$ draws generated by a process with
 autocorrelations $\rho_t$ is defined by
 $$
 N_{\mathrm{eff}}
@@ -397,7 +423,7 @@ correlation.
 In practice, the probability function in question cannot be tractably
 integrated and thus the autocorrelation cannot be calculated, nor the
 effective sample size. Instead, these quantities must be estimated
-from the samples themselves. The rest of this section describes a
+from the draws themselves. The rest of this section describes an
 autocorrelation and split-$\hat{R}$ based effective sample
 size estimator, based on multiple chains. As before, each chain
 $\theta_m$ will be assumed to be of length $N$.
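As a rough single-chain illustration of the definition $N_{\mathrm{eff}} = N / (1 + 2\sum_{t=1}^{\infty} \rho_t)$, the sketch below estimates autocorrelations from the draws and truncates the sum when the estimate first turns negative. This is a simplification of the initial-positive-sequence rule, not Stan's estimator (which combines multiple chains and uses split chains as described in this section); the names are mine.

```python
import numpy as np

def ess(chain):
    """Single-chain ESS sketch: N / (1 + 2 * sum of estimated
    autocorrelations), truncating the sum at the first negative
    autocorrelation estimate. Illustrative only."""
    N = chain.size
    x = chain - chain.mean()
    var = x.var()
    rho_sum = 0.0
    for t in range(1, N):
        rho = np.dot(x[:-t], x[t:]) / (N * var)  # autocorrelation at lag t
        if rho < 0:
            break
        rho_sum += rho
    return N / (1 + 2 * rho_sum)

rng = np.random.default_rng(2)
iid = rng.normal(size=2000)          # independent draws: ESS near N

# AR(1) chain with correlation 0.9: strong autocorrelation shrinks ESS.
ar = np.empty(2000)
ar[0] = rng.normal()
for n in range(1, 2000):
    ar[n] = 0.9 * ar[n - 1] + np.sqrt(1 - 0.9**2) * rng.normal()

print(ess(iid))  # close to 2000
print(ess(ar))   # far fewer effective draws
```

For the AR(1) example the theoretical factor is $(1 - 0.9)/(1 + 0.9) \approx 0.05$, so roughly one effective draw per twenty actual draws, which the estimate recovers up to sampling noise.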
