We show an example of generating data drawn according to the Bernoulli distribution and learning from them.

First, we create an instance of a probabilistic data generative model. Here, the parameter `theta`, which represents the occurrence probability of 1, is set to 0.7.

``` python
from bayesml import bernoulli

# A generative model whose parameter theta is set to the true value 0.7.
gen_model = bernoulli.GenModel(theta=0.7)
gen_model.visualize_model()
```
>x4:[1 1 1 0 1 1 0 1 1 1 1 1 1 1 1 1 0 1 1 0]
>![bernoulli_example1](./doc/images/README_ex_img1.png)

After confirming that the frequency of occurrence of 1 is around `theta=0.7`, we generate a sample and store it in the variable `x`.

``` python
x = gen_model.gen_sample(sample_size=20)
```
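
Next, we create an instance of a model for learning the posterior distribution and visualize that distribution before any data are observed, i.e., the prior. This is a minimal sketch, assuming `bernoulli.LearnModel` can be constructed with default hyperparameters of the prior distribution (check the BayesML API reference for the exact signature).

``` python
# Assumed constructor: a learning model with default prior hyperparameters.
learn_model = bernoulli.LearnModel()
# Before update_posterior() is called, the visualized posterior equals the prior.
learn_model.visualize_posterior()
```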
>![bernoulli_example2](./doc/images/README_ex_img2.png)

After learning from the data, we can see that the density of the posterior distribution is concentrated around the true parameter `theta=0.7`.

``` python
learn_model.update_posterior(x)
learn_model.visualize_posterior()
```
>![bernoulli_example3](./doc/images/README_ex_img3.png)

In Bayesian decision theory, the optimal estimator under the Bayes criterion is derived as follows. First, we set a loss function, e.g., the squared-error, absolute-error, or 0-1 loss. Then, the Bayes risk function is defined as the expectation of the loss function with respect to the joint distribution of the data and the parameters. By minimizing the Bayes risk function, we obtain the optimal estimator under the Bayes criterion. For example, under the squared-error loss, the optimal estimator of the parameter `theta` under the Bayes criterion is the mean of the posterior distribution.
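
This optimality is easy to check numerically. The standalone snippet below (independent of BayesML) samples from a hypothetical Beta(16, 6) posterior, the conjugate form that arises for Bernoulli data under a Beta prior, and searches a grid of point estimates for the one minimizing a Monte Carlo estimate of the expected squared-error loss; the minimizer agrees with the posterior mean 16/22 ≈ 0.73.

``` python
import numpy as np

# Hypothetical Beta(a, b) posterior, used for illustration only.
a, b = 16, 6
rng = np.random.default_rng(0)
theta_samples = rng.beta(a, b, size=100_000)

# Grid search for the point estimate d minimizing E[(theta - d)^2].
candidates = np.linspace(0.01, 0.99, 99)
best_risk, best_d = min((np.mean((theta_samples - d) ** 2), d) for d in candidates)

print(f"risk-minimizing estimate: {best_d:.2f}")       # -> 0.73
print(f"posterior mean a/(a+b):   {a / (a + b):.2f}")  # -> 0.73
```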
In BayesML, this calculation is performed by the following method.

``` python
# Under the squared-error loss, this returns the posterior mean.
print(learn_model.estimate_params(loss='squared'))
```
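
The other losses mentioned above yield different estimators: the posterior median for the absolute-error loss and the posterior mode for the 0-1 loss. Assuming `estimate_params` accepts the loss names `'abs'` and `'0-1'` (check the BayesML API reference), they can be requested the same way:

``` python
print(learn_model.estimate_params(loss='abs'))  # posterior median (assumed loss name)
print(learn_model.estimate_params(loss='0-1'))  # posterior mode (assumed loss name)
```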
