where we consider the expectation ``\mathbb E`` with respect to ``d_3\sim N(0, 0.01)`` with density ``f_3``. The first possibility to compute the expectation is to discretize the integral.
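On a uniform grid ``x_1 < \dots < x_m`` with spacing ``\Delta x`` (the grid notation is ours, not from the original text), the discretization amounts to the Riemann sum

```math
\mathbb{E}\, h(d_3) = \int h(x) f_3(x)\,\mathrm{d}x \approx \sum_{i=1}^{m} h(x_i) f_3(x_i)\, \Delta x.
```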
The second possibility is to approximate the integral by the sample mean ``\frac{1}{n}\sum_{i=1}^{n} h(x_i)``, where ``x_i`` are sampled from ``d_3``. We do this in `expectation1` and `expectation2`, where the former generates samples using the Distributions package while the latter uses our rejection sampling. We use the method of the `mean` function which takes a function as its first argument.
```@example monte
expectation1(h, d; n = 1000000) = mean(h, rand(d, n))

nothing # hide
```
Sampling from a different distribution `d_gen` and reweighting the samples gives rise to another implementation of the same expectation.
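The reweighting is the standard importance-sampling identity: for any sampling density ``g`` that is positive wherever ``f_3`` is,

```math
\mathbb{E}_{f_3} h = \int h(x) f_3(x)\,\mathrm{d}x = \int \frac{h(x) f_3(x)}{g(x)}\, g(x)\,\mathrm{d}x = \mathbb{E}_{g} \frac{h f_3}{g},
```

so averaging ``h(x)f_3(x)/g(x)`` over samples drawn from ``g`` estimates the same expectation.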
```@example monte
function expectation3(h, f, d_gen; n=1000000)
    g(x) = h(x)*f(x)/pdf(d_gen, x)
    return mean(g, rand(d_gen, n))
end

nothing # hide
```
We run these three approaches for ``20`` repetitions.
```@example monte
n = 100000
n_rep = 20

Random.seed!(666)
e1 = [expectation1(h, d3; n=n) for _ in 1:n_rep]
e2 = [expectation2(h, f3, f3(d3.μ), xlims; n=n) for _ in 1:n_rep]
e3 = [expectation3(h, f3, d1; n=n) for _ in 1:n_rep]

nothing # hide
```
Quantiles form an important concept in statistics. Its definition is slightly complicated.
The quantile at level ``\alpha=0.5`` is the median. Quantiles play an important role in estimates, where they form upper and lower bounds for confidence intervals. They are also used in hypothesis testing.
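Formally, for a distribution with cumulative distribution function ``F``, the quantile at level ``\alpha \in (0,1)`` can be written as

```math
q_\alpha = \inf\{x \in \mathbb{R} : F(x) \ge \alpha\},
```

which reduces to ``F^{-1}(\alpha)`` whenever ``F`` is continuous and strictly increasing.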
This part will investigate how quantiles computed on a finite sample differ from the true quantile. We will consider two ways of computing the quantile. Both of them sample ``n`` points from some distribution ``d``. The first one follows the statistical definition and selects the ``n\alpha``-th smallest observation by the `partialsort` function. The second one uses the function `quantile`, which performs some interpolation.
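A minimal sketch of the two estimates on a single sample (the helper name `quantile_sampled` is ours, not from the original text):

```julia
using Random, Statistics

# Definition-based estimate: the ⌈nα⌉-th smallest observation,
# found by `partialsort` without sorting the whole sample.
quantile_sampled(xs, α) = partialsort(xs, ceil(Int, α * length(xs)))

Random.seed!(0)
xs = randn(10^5)

q_def = quantile_sampled(xs, 0.9)  # statistical definition
q_int = quantile(xs, 0.9)          # interpolated version from Statistics
```

Both values approximate the true quantile ``\Phi^{-1}(0.9) \approx 1.2816`` of the standard normal distribution.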
Now we add the sampled quantiles and the mean over all repetitions. Since we work with two plots, we specify into which plot we want to add the new data. It would be better to create a function for plotting and call it for `qs1` and `qs2`, but we wanted to show how to work with two plots simultaneously.
Both sampled estimates give a lower estimate than the true quantile. In statistical terminology, these estimates are biased. We observe that the interpolated estimate is closer to the true value and that computing the quantile even on ``10000`` points gives an uncertainty interval of approximately ``0.25``.