@@ -254,7 +254,7 @@ <h5 class="modal-title" id="aboutModalLabel">About Data Space</h5>
< div class ="modal fade " id ="bayesInfoModal " tabindex ="-1 " role ="dialog " aria-labelledby ="bayesInfoModalLabel "
256
256
aria-hidden ="true ">
-        <div class="modal-dialog modal-dialog-centered" role="document">
+        <div class="modal-dialog modal-dialog-centered modal-lg" role="document">
             <div class="modal-content">
                 <div class="modal-header">
                     <h5 class="modal-title" id="bayesInfoModalLabel">Naive Bayes</h5>
@@ -263,7 +263,56 @@ <h5 class="modal-title" id="bayesInfoModalLabel">Naive Bayes</h5>
                     </button>
                 </div>
                 <div class="modal-body">
-                    <p>TODO</p>
+                    <p>
+                        <a href="https://scikit-learn.org/stable/modules/naive_bayes.html" target="_blank">
+                            Naive Bayesian models</a> are a collection of supervised classification algorithms
+                        that apply Bayes' theorem of conditional probability with the "naive" assumption
+                        of conditional independence between every pair of features given the value of the
+                        class variable. Predictions are based on the joint probability of all features and
+                        the target class. Because the features are treated as likelihoods, the primary
+                        difference between the classifiers is the assumptions each makes about the
+                        distribution of the features.
+                    </p>
+
+                    <ul class="list-unstyled">
+                        <li>
+                            <a href="https://scikit-learn.org/stable/modules/generated/sklearn.naive_bayes.GaussianNB.html#sklearn.naive_bayes.GaussianNB" target="_blank">
+                                GaussianNB</a>: Assumes the likelihood of the features is Gaussian, i.e. continuous real-valued features.
+                        </li>
+                        <li>
+                            <a href="https://scikit-learn.org/stable/modules/generated/sklearn.naive_bayes.MultinomialNB.html#sklearn.naive_bayes.MultinomialNB" target="_blank">
+                                MultinomialNB</a>: Features are treated as counts of a finite number of discrete events, modeled with a multinomial distribution.
+                        </li>
+                        <li>
+                            <a href="https://scikit-learn.org/stable/modules/generated/sklearn.naive_bayes.BernoulliNB.html#sklearn.naive_bayes.BernoulliNB" target="_blank">
+                                BernoulliNB</a>: Features are distributed according to a multivariate Bernoulli distribution, i.e. each feature is either 1 or 0.
+                        </li>
+                        <li>
+                            <a href="https://scikit-learn.org/stable/modules/generated/sklearn.naive_bayes.ComplementNB.html#sklearn.naive_bayes.ComplementNB" target="_blank">
+                                ComplementNB</a>: A modification of MultinomialNB that uses the complement of each class to compute weights, well suited to imbalanced datasets.
+                        </li>
+                    </ul>
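+
+                    <p>
+                        As a minimal sketch (assuming a feature matrix <code>X</code> and a label
+                        vector <code>y</code> are already loaded), choosing and fitting one of these
+                        classifiers in scikit-learn looks like:
+                    </p>
+                    <pre><code>from sklearn.naive_bayes import GaussianNB
+
+model = GaussianNB()            # continuous features assumed Gaussian
+model.fit(X, y)                 # X: (n_samples, n_features), y: (n_samples,)
+predictions = model.predict(X)</code></pre>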
+
+                    <h6>Hyperparameters</h6>
+                    <dl>
+                        <dt>Priors/Class Prior · <code>array-like, shape (n_classes,)</code></dt>
+                        <dd>Prior probabilities of the classes. If specified, the priors are not adjusted according to the data. (Not used with ComplementNB)</dd>
+
+                        <dt>Smoothing · <code>float</code></dt>
+                        <dd>Portion of the largest variance of all features that is added to the variances for calculation stability.</dd>
+
+                        <dt>Alpha · <code>float</code></dt>
+                        <dd>Additive (Laplace/Lidstone) smoothing parameter (0 for no smoothing).</dd>
+
+                        <dt>Fit Prior · <code>bool</code></dt>
+                        <dd>Whether to learn class prior probabilities or not. If false, a uniform prior is used.</dd>
+
+                        <dt>Binarize · <code>float or None</code></dt>
+                        <dd>Threshold for binarizing (mapping to booleans) sample features. If None, the input is presumed to already consist of binary vectors.</dd>
+
+                        <dt>Norm · <code>bool</code></dt>
+                        <dd>Whether a second normalization of the weights is performed.</dd>
+                    </dl>
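+
+                    <p>
+                        As an illustrative sketch, these controls map onto the corresponding
+                        scikit-learn constructor arguments:
+                    </p>
+                    <pre><code>from sklearn.naive_bayes import BernoulliNB, ComplementNB
+
+BernoulliNB(alpha=1.0, binarize=0.0, fit_prior=True)  # Alpha, Binarize, Fit Prior
+ComplementNB(alpha=1.0, norm=False)                   # Alpha, Norm</code></pre>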
                 </div>
                 <div class="modal-footer">
                     <button type="button" class="btn btn-secondary" data-dismiss="modal">Close</button>