
Commit 7e1378b

Tom's April 24 edits of importance sampling lecture
1 parent 5153ffc commit 7e1378b

File tree

1 file changed (+9, -11 lines)

lectures/imp_sample.md

Lines changed: 9 additions & 11 deletions
@@ -11,15 +11,15 @@ kernelspec:
   name: python3
 ---
 
-# Computing Mean of a Likelihood Ratio Process
+# Mean of a Likelihood Ratio Process
 
 ```{contents} Contents
 :depth: 2
 ```
 
 ## Overview
 
-In {doc}`this lecture <likelihood_ratio_process>` we described a peculiar property of a likelihood ratio process, namely, that it's mean equals one for all $t \geq 0$ despite it's converging to zero almost surely.
+In {doc}`this lecture <likelihood_ratio_process>` we described a peculiar property of a likelihood ratio process, namely, that its mean equals one for all $t \geq 0$ despite it's converging to zero almost surely.
 
 While it is easy to verify that peculiar properly analytically (i.e., in population), it is challenging to use a computer simulation to verify it via an application of a law of large numbers that entails studying sample averages of repeated simulations.
 
@@ -178,7 +178,7 @@ plt.ylim([0., 3.])
 plt.show()
 ```
 
-## Approximating a cumulative likelihood ratio
+## Approximating a Cumulative Likelihood Ratio
 
 We now study how to use importance sampling to approximate
 ${E} \left[L(\omega^t)\right] = {E} \left[\prod_{i=1}^T \ell \left(\omega_i\right)\right]$.
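The section edited in this hunk approximates ${E}\left[L(\omega^t)\right]$ by importance sampling: draw $\omega_i$ from a sampling density $h$ and reweight each factor $\ell(\omega_i) = f(\omega_i)/g(\omega_i)$ by $g(\omega_i)/h(\omega_i)$, so each simulated path contributes $\prod_i f(\omega_i)/h(\omega_i)$. A minimal sketch of that idea follows; note the Beta parameters chosen for $f$ and $g$ are illustrative assumptions (the diff only names $h_1 = Beta(0.5, 0.5)$, used as $h$ below), not the lecture's actual values.

```python
import numpy as np
from math import gamma

rng = np.random.default_rng(1234)

def beta_pdf(w, a, b):
    # Beta(a, b) density, vectorized over numpy arrays
    c = gamma(a + b) / (gamma(a) * gamma(b))
    return c * w**(a - 1) * (1 - w)**(b - 1)

# Hypothetical parameters for illustration -- this diff only names
# h_1 = Beta(0.5, 0.5); f and g are assumptions of this sketch.
F = (1.0, 1.0)     # "true" density f
G = (3.0, 1.2)     # density generating the data, so E_g[L] = 1
H = (0.5, 0.5)     # importance-sampling density

def mc_estimate(T, N=10_000):
    # Plain Monte Carlo: average L = prod_i f(w_i)/g(w_i) with w_i ~ g
    w = rng.beta(*G, size=(N, T))
    return (beta_pdf(w, *F) / beta_pdf(w, *G)).prod(axis=1).mean()

def is_estimate(T, N=10_000):
    # Importance sampling: draw w_i ~ h and reweight each factor by
    # g(w_i)/h(w_i), so each path contributes prod_i f(w_i)/h(w_i)
    w = rng.beta(*H, size=(N, T))
    return (beta_pdf(w, *F) / beta_pdf(w, *H)).prod(axis=1).mean()

for T in (1, 5, 10):
    print(f"T={T:2d}  plain MC: {mc_estimate(T):.3f}  "
          f"importance sampling: {is_estimate(T):.3f}")
```

Because the ratio $f/h$ is bounded here, the importance-sampling average stays close to the population value $1$ as $T$ grows, while the plain Monte Carlo average under $g$ tends to drift below it, which is the bias the simulation exercises in this commit discuss.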
@@ -319,12 +319,11 @@ for i, t in enumerate([1, 5, 10, 20]):
 plt.show()
 ```
 
-The simulation exercises above show that the importance sampling estimates are unbiased under all $T$
-while the standard Monte Carlo estimates are biased downwards.
+The simulation exercises above show that the importance sampling estimates are unbiased under all $T$ while the standard Monte Carlo estimates are biased downwards.
 
 Evidently, the bias increases with increases in $T$.
 
-## More Thoughts about Choice of Sampling Distribution
+## Choosing a Sampling Distribution
 
 +++
 
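On the renamed section's theme of choosing a sampling distribution: a quick way to compare candidate samplers is the per-draw variance of the $T = 1$ estimator, $\int f(\omega)^2 / h(\omega)\, d\omega - 1$, which is small exactly when $h$ tracks $f$. The numerical sketch below assumes $f = Beta(1, 1)$ for illustration (this diff does not show the lecture's $f$), so the second moment reduces to $\int 1/h(\omega)\, d\omega$.

```python
import numpy as np
from math import gamma, pi

def beta_pdf(w, a, b):
    # Beta(a, b) density, vectorized over numpy arrays
    c = gamma(a + b) / (gamma(a) * gamma(b))
    return c * w**(a - 1) * (1 - w)**(b - 1)

def is_variance_per_draw(a, b, n=200_000):
    # Per-draw variance of the T = 1 importance-sampling estimate of
    # the integral of f, with f = Beta(1, 1) assumed for illustration:
    # E_h[(f/h)^2] - 1 = (integral of 1/h) - 1, via a midpoint sum.
    w = (np.arange(n) + 0.5) / n
    return np.mean(1.0 / beta_pdf(w, a, b)) - 1.0

for a, b in [(1.0, 1.0), (0.5, 0.5), (0.8, 1.2)]:
    print(f"h = Beta({a}, {b}): per-draw variance ~ {is_variance_per_draw(a, b):.4f}")
```

Under this assumed $f$, $h = Beta(1, 1)$ matches $f$ exactly and has zero variance, while $h = Beta(0.5, 0.5)$ gives $\pi^2/8 - 1 \approx 0.234$; a sampler with $a \ge 2$ or $b \ge 2$ puts so little mass near an endpoint that $\int 1/h$ diverges, i.e. the estimator has infinite variance.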

@@ -375,7 +374,7 @@ plt.ylim([0., 3.])
 plt.show()
 ```
 
-We consider two additonal distributions.
+We consider two additional distributions.
 
 As a reminder $h_1$ is the original $Beta(0.5,0.5)$ distribution that we used above.
 

@@ -458,10 +457,9 @@ for i, t in enumerate([1, 20]):
 plt.show()
 ```
 
-However, $h_3$ is evidently a poor importance sampling distribution forpir problem,
+However, $h_3$ is evidently a poor importance sampling distribution for our problem,
 with a mean estimate far away from $1$ for $T = 20$.
 
-Notice that evan at $T = 1$, the mean estimate with importance sampling is more biased than just sampling with $g$ itself.
+Notice that even at $T = 1$, the mean estimate with importance sampling is more biased than sampling with just $g$ itself.
 
-Thus, our simulations suggest that we would be better off simply using Monte Carlo
-approximations under $g$ than using $h_3$ as an importance sampling distribution for our problem.
+Thus, our simulations suggest that for our problem we would be better off simply using Monte Carlo approximations under $g$ than using $h_3$ as an importance sampling distribution.
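The point about a badly matched sampler can be seen in a one-period simulation. This diff does not show $h_3$'s parameters, so $Beta(5, 5)$ below is a hypothetical stand-in for a poor choice (it concentrates its draws near $0.5$ and almost never visits the endpoints, where the weights $f/h$ are enormous); $f = Beta(1, 1)$ is likewise an illustrative assumption.

```python
import numpy as np
from math import gamma

rng = np.random.default_rng(42)

def beta_pdf(w, a, b):
    # Beta(a, b) density, vectorized over numpy arrays
    c = gamma(a + b) / (gamma(a) * gamma(b))
    return c * w**(a - 1) * (1 - w)**(b - 1)

def one_period_estimate(h_a, h_b, N=100_000):
    # Estimate the integral of f over [0, 1] (exactly 1) by averaging
    # f(w)/h(w) over draws w ~ h, with f = Beta(1, 1) assumed, so the
    # per-draw weight is 1 / h(w).
    w = rng.beta(h_a, h_b, size=N)
    return np.mean(1.0 / beta_pdf(w, h_a, h_b))

print("well matched   h = Beta(0.5, 0.5):", one_period_estimate(0.5, 0.5))
print("poorly matched h = Beta(5, 5):    ", one_period_estimate(5.0, 5.0))
```

The well-matched sampler lands close to the true value $1$; the mismatched one typically comes in well below it, because the rare, huge weights that would restore the mean are almost never drawn — the same downward-bias mechanism the paragraph above attributes to $h_3$.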
