
Commit f0bfd5a

Tom's May 7 edits of navy_captain lecture

1 parent 10d8880

File tree

1 file changed

lectures/navy_captain.md

Lines changed: 29 additions & 30 deletions
@@ -42,59 +42,56 @@ This lecture follows up on ideas presented in the following lectures:
 * {doc}`Exchangeability and Bayesian Updating <exchangeable>`
 * {doc}`Likelihood Ratio Processes <likelihood_ratio_process>`

-In {doc}`A Problem that Stumped Milton Friedman <wald_friedman>` we described a problem
+{doc}`A Problem that Stumped Milton Friedman <wald_friedman>` described a problem
 that a Navy Captain presented to Milton Friedman during World War II.

-The Navy had instructed the Captain to use a decision rule for quality control that the Captain suspected
-could be dominated by a better rule.
+The Navy had told the Captain to use a decision rule for quality control.

-(The Navy had ordered the Captain to use an instance of a **frequentist decision rule**.)
+In particular, the Navy had ordered the Captain to use an instance of a **frequentist decision rule**.

-Milton Friedman recognized the Captain's conjecture as posing a challenging statistical problem that he and other
-members of the US Government's Statistical Research Group at Columbia University proceeded to try to solve.
+The Captain doubted that that rule was a good one.

-One of the members of the group, the great mathematician Abraham Wald, soon solved the problem.
+Milton Friedman recognized the Captain's conjecture as posing a challenging statistical problem that he and other members of the US Government's Statistical Research Group at Columbia University proceeded to try to solve.
+
+A member of the group, the great mathematician and economist Abraham Wald, soon solved the problem.

 A good way to formulate the problem is to use some ideas from Bayesian statistics that we describe in
 this lecture {doc}`Exchangeability and Bayesian Updating <exchangeable>` and in this lecture
 {doc}`Likelihood Ratio Processes <likelihood_ratio_process>`, which describes the link between Bayesian
 updating and likelihood ratio processes.

-The present lecture uses Python to generate simulations that evaluate expected losses under **frequentist** and **Bayesian**
-decision rules for an instance of the Navy Captain's decision problem.
+The present lecture uses Python to generate simulations that evaluate expected losses under **frequentist** and **Bayesian** decision rules for an instance of the Navy Captain's decision problem.

-The simulations validate the Navy Captain's hunch that there is a better rule than the one the Navy had ordered him
-to use.
+The simulations confirm the Navy Captain's hunch that there is a better rule than the one the Navy had ordered him to use.

 ## Setup

-To formalize the problem of the Navy Captain whose questions posed the
-problem that Milton Friedman and Allan Wallis handed over to Abraham
-Wald, we consider a setting with the following parts.
+To formalize the problem that had confronted the Navy Captain, we consider a setting with the following parts.

 - Each period a decision maker draws a non-negative random variable
-  $Z$ from a probability distribution that he does not completely
-  understand. He knows that two probability distributions are possible,
+  $Z$. He knows that two probability distributions are possible,
   $f_{0}$ and $f_{1}$, and that whichever distribution it
   is remains fixed over time. The decision maker believes that before
-  the beginning of time, nature once and for all selected either
+  the beginning of time, nature once and for all had selected either
   $f_{0}$ or $f_1$ and that the probability that it
   selected $f_0$ is $\pi^{*}$.
 - The decision maker observes a sample
-  $\left\{ z_{i}\right\} _{i=0}^{t}$ from the the distribution
+  $\left\{ z_{i}\right\} _{i=0}^{t}$ from the distribution
   chosen by nature.

 The decision maker wants to decide which distribution actually governs
-$Z$ and is worried by two types of errors and the losses that they
+$Z$.
+
+He is worried about two types of errors and the losses that they will
 impose on him.

-- a loss $\bar L_{1}$ from a **type I error** that occurs when he decides that
+- a loss $\bar L_{1}$ from a **type I error** that occurs if he decides that
   $f=f_{1}$ when actually $f=f_{0}$
-- a loss $\bar L_{0}$ from a **type II error** that occurs when he decides that
+- a loss $\bar L_{0}$ from a **type II error** that occurs if he decides that
   $f=f_{0}$ when actually $f=f_{1}$

 The decision maker pays a cost $c$ for drawing
-another $z$
+another $z$.

 We mainly borrow parameters from the quantecon lecture
 {doc}`A Problem that Stumped Milton Friedman <wald_friedman>` except that we increase both $\bar L_{0}$
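
To make the setup in the hunk above concrete, here is a minimal Python sketch of the environment it describes. The Beta densities and the values of $\pi^{*}$, $\bar L_{0}$, $\bar L_{1}$, and $c$ are illustrative placeholders, not the calibration the lecture actually uses.

```python
import numpy as np
from scipy.stats import beta

# Hypothetical candidate densities for Z (placeholders, not the lecture's)
f0 = beta(1, 1).pdf   # density of Z if nature selected f_0
f1 = beta(9, 9).pdf   # density of Z if nature selected f_1

π_star = 0.5          # P(nature selected f_0) -- placeholder
L0, L1 = 25.0, 25.0   # losses from type II and type I errors -- placeholders
c = 1.25              # cost of drawing another z -- placeholder

# Nature chooses a distribution once and for all; samples then come from it
rng = np.random.default_rng(0)
nature_chose_f0 = rng.random() < π_star
true_dist = beta(1, 1) if nature_chose_f0 else beta(9, 9)
z_sample = true_dist.rvs(size=10, random_state=rng)
```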
@@ -215,8 +212,10 @@ In particular, it gave him a decision rule that the Navy had designed by using
 frequentist statistical theory to minimize an
 expected loss function.

-That decision rule is characterized by a sample size $t$ and a
-cutoff $d$ associated with a likelihood ratio.
+That decision rule is characterized by
+
+* a sample size $t$, and
+* a cutoff value $d$ of a likelihood ratio

 Let
 $L\left(z^{t}\right)=\prod_{i=0}^{t}\frac{f_{0}\left(z_{i}\right)}{f_{1}\left(z_{i}\right)}$
@@ -227,6 +226,7 @@ The decision rule associated with a sample size $t$ is:

 - decide that $f_0$ is the distribution if the likelihood ratio
   is greater than $d$
+- decide that $f_1$ is the distribution if the likelihood ratio is less than $d$

 To understand how that rule was engineered, let null and alternative
 hypotheses be
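
The last two hunks together describe a rule that is easy to sketch in code. Below is a hedged illustration, reusing the placeholder `f0` and `f1` from the earlier sketch and leaving the cutoff `d` as a free parameter.

```python
def likelihood_ratio(z, f0, f1):
    """L(z^t) = prod_{i=0}^{t} f0(z_i) / f1(z_i) for a sample z of length t+1."""
    return np.prod(f0(z) / f1(z))

def frequentist_decide(z, f0, f1, d):
    """Return 0 (decide f = f_0) if the likelihood ratio exceeds d, else 1."""
    return 0 if likelihood_ratio(z, f0, f1) > d else 1
```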
@@ -259,9 +259,8 @@ Here
 - $PD$ denotes the probability of a **detection error**, i.e.,
   not rejecting $H_0$ when $H_1$ is true

-For a given sample size $t$, the pairs $\left(PFA,PD\right)$
-lie on a **receiver operating characteristic curve** and can be uniquely
-pinned down by choosing $d$.
+For a given sample size $t$, the pairs $\left(PFA,PD\right)$ lie on a **receiver operating characteristic curve**.
+* by choosing $d$, we select a particular pair $\left(PFA,PD\right)$ along the curve for a given $t$

 To see some receiver operating characteristic curves, please see this
 lecture {doc}`Likelihood Ratio Processes <likelihood_ratio_process>`.
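
To make the $\left(PFA,PD\right)$ pair concrete: for a given $(t, d)$ both probabilities can be estimated by Monte Carlo, as in the rough sketch below. It again uses the placeholder densities from above, with $H_0$ being $f = f_0$.

```python
def pfa_pd(t, d, n_sims=10_000, seed=1):
    """Monte Carlo estimates of (PFA, PD) for sample size t and cutoff d."""
    rng = np.random.default_rng(seed)
    # PFA: rejecting H_0 (deciding f_1) when f_0 truly generates the data
    z0 = beta(1, 1).rvs(size=(n_sims, t + 1), random_state=rng)
    pfa = np.mean([frequentist_decide(z, f0, f1, d) == 1 for z in z0])
    # PD (detection error): not rejecting H_0 when f_1 truly generates the data
    z1 = beta(9, 9).rvs(size=(n_sims, t + 1), random_state=rng)
    pd = np.mean([frequentist_decide(z, f0, f1, d) == 0 for z in z1])
    return pfa, pd

# Sweeping d at a fixed t traces out one receiver operating characteristic curve
```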
@@ -702,7 +701,7 @@ then compute $\bar{V}_{Bayes}\left(\pi_{0}\right)$.
 We can then determine an initial Bayesian prior $\pi_{0}^{*}$ that
 minimizes this objective concept of expected loss.

-The figure 9 below plots four cases corresponding to
+The figure below plots four cases corresponding to
 $\pi^{*}=0.25,0.3,0.5,0.7$.

 We observe that in each case $\pi_{0}^{*}$ equals $\pi^{*}$.
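
A one-function sketch of the minimization this hunk refers to; here `v_bayes` is only a stand-in for the lecture's $\bar{V}_{Bayes}\left(\pi_{0}\right)$, which is not reproduced in this diff.

```python
def best_initial_prior(v_bayes, grid=np.linspace(0.01, 0.99, 99)):
    """Grid-search the initial prior π_0 that minimizes v_bayes(π_0)."""
    return grid[int(np.argmin([v_bayes(π0) for π0 in grid]))]
```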
@@ -860,8 +859,8 @@ t_idx = t_optimal - 1

 ## Distribution of Bayesian Decision Rule’s Time to Decide

-By using simulations, we compute the frequency distribution of time to
-deciding for the Bayesian decision rule and compare that time to the
+We use simulations to compute the frequency distribution of the time to
+decide for the Bayesian decision rule and compare that time to the
 frequentist rule’s fixed $t$.

 The following Python code creates a graph that shows the frequency
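
The Python code that hunk refers to is truncated out of this diff. A rough sketch of simulating the Bayesian rule's time to decide might look as follows: draw $z$'s one at a time, update the posterior on $f_0$ by Bayes' rule, and stop when it exits an interval of thresholds. The thresholds and all numbers below are placeholders, not the cutoffs the lecture computes.

```python
def bayes_time_to_decide(true_dist, f0, f1, π0=0.5, lower=0.2, upper=0.8,
                         max_draws=200, rng=None):
    """Update π_t = P(f = f_0 | data) draw by draw and stop once it
    exits (lower, upper); return the number of draws used."""
    rng = rng if rng is not None else np.random.default_rng()
    π = π0
    for n in range(1, max_draws + 1):
        z = true_dist.rvs(random_state=rng)
        π = π * f0(z) / (π * f0(z) + (1 - π) * f1(z))   # Bayes' rule
        if π <= lower or π >= upper:
            return n
    return max_draws

# Frequency distribution of decision times when f_0 is the truth,
# to set against the frequentist rule's fixed t
times = [bayes_time_to_decide(beta(1, 1), f0, f1) for _ in range(1_000)]
```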
