README.md: 10 additions, 8 deletions
@@ -17,6 +17,8 @@ BayesML has the following characteristics.
 * Many of our learning algorithms are much faster than general-purpose Bayesian learning algorithms such as MCMC methods because they effectively use the conjugate property of a probabilistic data generative model and a prior distribution. Moreover, they are suitable for online learning.
 * All packages have methods to visualize the probabilistic data generative model, data generated from that model, and the posterior distribution learned from the data in 2- or 3-dimensional space. Thus, you can effectively understand the characteristics of probabilistic data generative models and algorithms through the generation of synthetic data and learning from them.
+
+For more details, see our [website](https://yuta-nakahara.github.io/BayesML/ "BayesML's Documentation").
 
 ## Installation
 
 Please use the following commands to install BayesML.
@@ -107,14 +109,14 @@ Different settings of a loss function yield different optimal estimates.
 The following packages are currently available. In this library, a probabilistic data generative model, prior distribution, posterior distribution (or approximate posterior distribution), and predictive distribution (or approximate predictive distribution) are collectively called a model.
 In the future, we will add packages for models such as the normal mixture model and the hidden Markov model, for which exact Bayesian inference is intractable, by using variational Bayes methods.
doc/index.rst: 6 additions, 3 deletions
@@ -2,6 +2,9 @@
    sphinx-quickstart on Mon Feb 21 21:21:00 2022.
    You can adapt this file completely to your liking, but it should at least
    contain the root `toctree` directive.
+.. Document Author
+   Yuta Nakahara <yuta.nakahara@aoni.waseda.jp>
+   Shota Saito <shota.s@gunma-u.ac.jp>
 
 BayesML's Documentation
 =======================
@@ -41,7 +44,7 @@ Example
 We show an example of generating data drawn according to the Bernoulli distribution and learning from them.
 
-First, we create an instance of a probabilistic data generative model. Here, the parameter $\theta$, which represents an occurrence probability of 1, is set to 0.7.
+First, we create an instance of a probabilistic data generative model. Here, the parameter `theta`, which represents an occurrence probability of 1, is set to 0.7.
 
 .. code-block::
@@ -64,7 +67,7 @@ Outputs:
     |x4:[1 1 1 0 1 1 0 1 1 1 1 1 1 1 1 1 0 1 1 0]
 
 .. image:: ./images/README_ex_img1.png
 
-After confirming that the frequency of occurrence of 1 is around $\theta$=0.7, we generate a sample and store it to variable x.
+After confirming that the frequency of occurrence of 1 is around `theta=0.7`, we generate a sample and store it to variable `x`.
 
 .. code-block::
@@ -88,7 +91,7 @@ Outputs:
 .. image:: ./images/README_ex_img2.png
 
-After learning from the data, we can see that the density of the posterior distribution is concentrated around the true parameter $\theta$=0.7.
+After learning from the data, we can see that the density of the posterior distribution is concentrated around the true parameter `theta=0.7`.
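The example above describes the whole workflow: generate Bernoulli data with `theta=0.7`, then learn a posterior that concentrates near the true value. The sketch below mirrors that flow using only the standard library (it deliberately does not call the `bayesml` package; the exact Beta posterior stands in for what `LearnModel.update_posterior` computes internally).

```python
# Sketch of the documented workflow, without the bayesml package:
# draw Bernoulli(theta=0.7) data, then form the exact Beta posterior
# and check that its mean concentrates near the true parameter.
import random

random.seed(0)
theta = 0.7
x = [1 if random.random() < theta else 0 for _ in range(1000)]  # synthetic data

ones = sum(x)
a0 = b0 = 0.5                      # Beta(0.5, 0.5) prior (an assumption here)
a_n = a0 + ones                    # posterior: Beta(a0 + #ones, b0 + #zeros)
b_n = b0 + (len(x) - ones)
post_mean = a_n / (a_n + b_n)
print(round(post_mean, 3))         # close to 0.7 for 1000 samples
```

With 1000 observations the posterior mass is tightly concentrated, which is the behavior the plotted posterior density in the example visualizes.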
docs/_sources/index.rst.txt: 6 additions, 3 deletions

@@ -2,6 +2,9 @@
    sphinx-quickstart on Mon Feb 21 21:21:00 2022.
    You can adapt this file completely to your liking, but it should at least
    contain the root `toctree` directive.
+.. Document Author
+   Yuta Nakahara <yuta.nakahara@aoni.waseda.jp>
+   Shota Saito <shota.s@gunma-u.ac.jp>
 
 BayesML's Documentation
 =======================

@@ -41,7 +44,7 @@ Example
 We show an example of generating data drawn according to the Bernoulli distribution and learning from them.
 
-First, we create an instance of a probabilistic data generative model. Here, the parameter $\theta$, which represents an occurrence probability of 1, is set to 0.7.
+First, we create an instance of a probabilistic data generative model. Here, the parameter `theta`, which represents an occurrence probability of 1, is set to 0.7.
 
 .. code-block::

@@ -64,7 +67,7 @@ Outputs:
     |x4:[1 1 1 0 1 1 0 1 1 1 1 1 1 1 1 1 0 1 1 0]
 
 .. image:: ./images/README_ex_img1.png
 
-After confirming that the frequency of occurrence of 1 is around $\theta$=0.7, we generate a sample and store it to variable x.
+After confirming that the frequency of occurrence of 1 is around `theta=0.7`, we generate a sample and store it to variable `x`.
 
 .. code-block::

@@ -88,7 +91,7 @@ Outputs:
 .. image:: ./images/README_ex_img2.png
 
-After learning from the data, we can see that the density of the posterior distribution is concentrated around the true parameter $\theta$=0.7.
+After learning from the data, we can see that the density of the posterior distribution is concentrated around the true parameter `theta=0.7`.
Generated HTML for `bayesml.autoregressive.GenModel.set_h_params`:

 <span class="sig-name descname"><span class="pre">set_h_params</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="o"><span class="pre">**</span></span><span class="n"><span class="pre">kwargs</span></span></em><span class="sig-paren">)</span><a class="headerlink" href="#bayesml.autoregressive.GenModel.set_h_params" title="Permalink to this definition">¶</a></dt>
 <dd><p>Set the hyperparameters of the prior distribution.</p>
-<dl class="field-list">
+<dl class="field-list simple">
 <dt class="field-odd">Parameters</dt>
-<dd class="field-odd"><dl>
+<dd class="field-odd"><dl class="simple">
 <dt><strong>**kwargs</strong></dt><dd><p>a python dictionary {‘h_mu_vec’:ndarray, ‘h_lambda_mat’:ndarray, ‘h_alpha’:float, ‘h_beta’:float} or
 {‘hn_mu_vec’:ndarray, ‘hn_lambda_mat’:ndarray, ‘hn_alpha’:float, ‘hn_beta’:float}.
+They are obtained by <code class="docutils literal notranslate"><span class="pre">get_h_params()</span></code> of GenModel,
 <code class="docutils literal notranslate"><span class="pre">get_h0_params</span></code> of LearnModel or <code class="docutils literal notranslate"><span class="pre">get_hn_params</span></code> of LearnModel.</p>
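The docstring above describes a round-trip pattern: hyperparameters returned by `get_h_params()` can be passed straight back into `set_h_params(**kwargs)`. The sketch below illustrates that keyword-dictionary pattern with a toy class; `ToyGenModel` is invented here and is not BayesML's implementation, though the hyperparameter names match the docstring.

```python
# Toy stand-in (NOT bayesml code) for the get/set hyperparameter round-trip.
class ToyGenModel:
    def __init__(self):
        # Names follow the docstring above; plain lists stand in for ndarrays.
        self.h_params = {
            "h_mu_vec": [0.0, 0.0],
            "h_lambda_mat": [[1.0, 0.0], [0.0, 1.0]],
            "h_alpha": 1.0,
            "h_beta": 1.0,
        }

    def get_h_params(self):
        return dict(self.h_params)  # copy, so callers cannot mutate state

    def set_h_params(self, **kwargs):
        # Accept only known hyperparameter keys, as such a docstring implies.
        for key, value in kwargs.items():
            if key not in self.h_params:
                raise KeyError(f"unknown hyperparameter: {key}")
            self.h_params[key] = value

src = ToyGenModel()
dst = ToyGenModel()
dst.set_h_params(**src.get_h_params())  # round-trip: get_h_params -> set_h_params
```

Unpacking the returned dictionary with `**` is what makes the output of one model's `get_*_params` directly usable as the input of another's `set_*_params`.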