lectures/imp_sample.md
9 additions & 11 deletions
@@ -11,15 +11,15 @@ kernelspec:
   name: python3
 ---
 
-# Computing Mean of a Likelihood Ratio Process
+# Mean of a Likelihood Ratio Process
 
 ```{contents} Contents
 :depth: 2
 ```
 
 ## Overview
 
-In {doc}`this lecture <likelihood_ratio_process>` we described a peculiar property of a likelihood ratio process, namely, that it's mean equals one for all $t \geq 0$ despite it's converging to zero almost surely.
+In {doc}`this lecture <likelihood_ratio_process>` we described a peculiar property of a likelihood ratio process, namely, that its mean equals one for all $t \geq 0$ despite its converging to zero almost surely.
 
 While it is easy to verify that peculiar property analytically (i.e., in population), it is challenging to verify it via computer simulation, which entails applying a law of large numbers to sample averages of repeated simulations.
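The difficulty the hunk above describes can be illustrated with a short simulation. This is a minimal sketch under assumed densities (not necessarily the lecture's exact choices): take $f = Beta(2,2)$ and $g = Beta(1,1)$, so the one-period likelihood ratio under $g$ is $\ell(w) = f(w)/g(w) = 6w(1-w)$, which has mean one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative densities (assumptions of this sketch, not the lecture's):
# f = Beta(2, 2) with pdf 6 w (1 - w), g = Uniform(0, 1), so the
# one-period likelihood ratio under g is ell(w) = 6 w (1 - w).

def mc_mean_of_L(T, N=10_000):
    """Plain Monte Carlo estimate of E[L(w^T)] = E[prod_t ell(w_t)], w_t ~ g."""
    w = rng.uniform(size=(N, T))
    L = np.prod(6 * w * (1 - w), axis=1)
    return L.mean()

# In population E[L(w^T)] = 1 for every T >= 1, but for large T the sample
# average typically falls short of 1: L(w^T) is heavily skewed, with almost
# all paths tiny and rare paths huge.
print(mc_mean_of_L(T=1))   # close to 1
print(mc_mean_of_L(T=50))  # typically far below 1
```

The sample average is exactly unbiased for every $T$, yet for $T = 50$ a feasible number of draws rarely contains enough of the rare, huge paths to pull the average up to one.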
@@ -178,7 +178,7 @@ plt.ylim([0., 3.])
 plt.show()
 ```
 
-## Approximating a cumulative likelihood ratio
+## Approximating a Cumulative Likelihood Ratio
 
 We now study how to use importance sampling to approximate
@@ -319,12 +319,11 @@ for i, t in enumerate([1, 5, 10, 20]):
 plt.show()
 ```
 
-The simulation exercises above show that the importance sampling estimates are unbiased under all $T$
-while the standard Monte Carlo estimates are biased downwards.
+The simulation exercises above show that the importance sampling estimates are unbiased for all $T$, while the standard Monte Carlo estimates are biased downwards.
 
 Evidently, the bias increases with increases in $T$.
 
-## More Thoughts about Choice of Sampling Distribution
+## Choosing a Sampling Distribution
 
 +++
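The contrast discussed in the hunk above can be sketched in a few lines. The Beta densities below are illustrative assumptions, not the lecture's exact ones. Drawing $w_t \sim q$ and averaging $\prod_t f(w_t)/q(w_t)$ estimates $E_g[L(w^T)]$, because the importance weights $\prod_t g(w_t)/q(w_t)$ cancel the $g$'s in $L(w^T) = \prod_t f(w_t)/g(w_t)$; taking $q = g$ recovers plain Monte Carlo.

```python
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(0)

# Illustrative setup (assumed for this sketch): the likelihood ratio is
# ell(w) = f(w) / g(w), data are drawn from g, and q is the sampling density.
f = beta(2, 2)        # "numerator" density
g = beta(1, 1)        # "denominator" density; plain Monte Carlo draws from g
h = beta(1.5, 1.5)    # proposal density that puts mass where f does

def estimate_mean_L(T, q, N=10_000):
    """Estimate E_g[L(w^T)] by drawing w_t ~ q and averaging prod_t f/q."""
    w = q.rvs(size=(N, T), random_state=rng)
    return np.prod(f.pdf(w) / q.pdf(w), axis=1).mean()

print(estimate_mean_L(T=20, q=g))  # plain Monte Carlo: noisy, biased down in practice
print(estimate_mean_L(T=20, q=h))  # importance sampling: much closer to 1
```

Because $h$ resembles $f$, each ratio $f(w_t)/h(w_t)$ stays near one and the product has far lighter tails than under $g$, which is why the importance sampling average settles near one even for larger $T$.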
@@ -375,7 +374,7 @@ plt.ylim([0., 3.])
 plt.show()
 ```
 
-We consider two additonal distributions.
+We consider two additional distributions.
 
 As a reminder, $h_1$ is the original $Beta(0.5,0.5)$ distribution that we used above.
@@ -458,10 +457,9 @@ for i, t in enumerate([1, 20]):
 plt.show()
 ```
 
-However, $h_3$ is evidently a poor importance sampling distribution forpir problem,
+However, $h_3$ is evidently a poor importance sampling distribution for our problem,
 with a mean estimate far away from $1$ for $T = 20$.
 
-Notice that evan at $T = 1$, the mean estimate with importance sampling is more biased than just sampling with $g$ itself.
+Notice that even at $T = 1$, the mean estimate with importance sampling is more biased than sampling with just $g$ itself.
 
-Thus, our simulations suggest that we would be better off simply using Monte Carlo
-approximations under $g$ than using $h_3$ as an importance sampling distribution for our problem.
+Thus, our simulations suggest that for our problem we would be better off simply using Monte Carlo approximations under $g$ than using $h_3$ as an importance sampling distribution.
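The sensitivity to the proposal that this hunk describes can be sketched as follows. The densities here are illustrative assumptions (with $f = Beta(2,2)$ standing in for the target): a proposal that puts its mass where $f$ is small produces occasional enormous importance weights, so its estimates are far noisier for the same budget.

```python
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(0)

# Illustrative densities (assumptions of this sketch): h_good puts mass
# where f does; h_bad piles mass near 0 and 1, where f is nearly zero.
f = beta(2, 2)
h_good = beta(1.5, 1.5)
h_bad = beta(0.5, 0.5)

def is_estimate(q, T=20, N=1_000):
    """One importance sampling estimate of E_g[L(w^T)] using proposal q."""
    w = q.rvs(size=(N, T), random_state=rng)
    return np.prod(f.pdf(w) / q.pdf(w), axis=1).mean()

# Repeat each estimator and compare the spread of the resulting estimates.
good = np.array([is_estimate(h_good) for _ in range(100)])
bad = np.array([is_estimate(h_bad) for _ in range(100)])
print(good.std(), bad.std())  # the poor proposal is far noisier
```

Both estimators are unbiased in population; the spread across repetitions is what separates a good proposal from a bad one, matching the pattern the simulations above display for $h_3$.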