
Commit 81395bc

Volodymyr Orlov authored and committed
fix: formatting
1 parent 3a3f904 commit 81395bc

File tree

3 files changed: +40 −40 lines changed


src/svm/mod.rs

Lines changed: 11 additions & 11 deletions
@@ -1,20 +1,20 @@
//! # Support Vector Machines
//!
//! Support Vector Machines (SVM) is one of the most performant off-the-shelf machine learning algorithms.
//! SVM is based on the [Vapnik–Chervonenkis theory](https://en.wikipedia.org/wiki/Vapnik%E2%80%93Chervonenkis_theory) that was developed during 1960–1990 by Vladimir Vapnik and Alexey Chervonenkis.
//!
//! SVM splits data into two sets using a maximal-margin decision boundary, \\(f(x)\\). For regression, the algorithm uses the value of the function \\(f(x)\\) to predict a target value.
//! To classify a new point, the algorithm calculates the sign of the decision function to see where the new point lies relative to the boundary.
//!
//! SVM is memory efficient since it uses only a subset of the training data to find the decision boundary. This subset is called the support vectors.
//!
//! In SVM, the distance between a data point and the support vectors is defined by the kernel function.
//! SmartCore supports multiple kernel functions, but you can always define a new kernel function by implementing the `Kernel` trait. Not every function can be a kernel.
//! Building a new kernel requires a good mathematical understanding of [Mercer's theorem](https://en.wikipedia.org/wiki/Mercer%27s_theorem),
//! which gives a necessary and sufficient condition for a function to be a kernel function.
//!
//! Pre-defined kernel functions:
//!
//! * *Linear*, \\( K(x, x') = \langle x, x' \rangle\\)
//! * *Polynomial*, \\( K(x, x') = (\gamma\langle x, x' \rangle + r)^d\\), where \\(d\\) is the polynomial degree, \\(\gamma\\) is a kernel coefficient and \\(r\\) is an independent term in the kernel function.
//! * *RBF (Gaussian)*, \\( K(x, x') = e^{-\gamma \lVert x - x' \rVert ^2} \\), where \\(\gamma\\) is a kernel coefficient
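
For reference, the three pre-defined kernels listed above can be computed directly. The sketch below implements them as plain Rust functions over `f64` slices; the function names are made up for this illustration, and it does not use SmartCore's `Kernel` trait.

```rust
// Illustrative stand-alone kernel functions over f64 slices
// (hypothetical names; this is not SmartCore's `Kernel` trait).

fn dot(x: &[f64], y: &[f64]) -> f64 {
    x.iter().zip(y).map(|(a, b)| a * b).sum()
}

/// Linear kernel: K(x, x') = <x, x'>
fn linear_kernel(x: &[f64], y: &[f64]) -> f64 {
    dot(x, y)
}

/// Polynomial kernel: K(x, x') = (gamma * <x, x'> + r)^d
fn polynomial_kernel(x: &[f64], y: &[f64], gamma: f64, r: f64, d: i32) -> f64 {
    (gamma * dot(x, y) + r).powi(d)
}

/// RBF (Gaussian) kernel: K(x, x') = exp(-gamma * ||x - x'||^2)
fn rbf_kernel(x: &[f64], y: &[f64], gamma: f64) -> f64 {
    let squared_distance: f64 = x.iter().zip(y).map(|(a, b)| (a - b) * (a - b)).sum();
    (-gamma * squared_distance).exp()
}

fn main() {
    let (a, b) = ([1.0, 2.0], [2.0, 1.0]);
    println!("linear     = {}", linear_kernel(&a, &b)); // 4
    println!("polynomial = {}", polynomial_kernel(&a, &b, 0.5, 1.0, 2)); // (0.5*4 + 1)^2 = 9
    println!("rbf        = {}", rbf_kernel(&a, &b, 0.5)); // exp(-0.5 * 2) ≈ 0.368
}
```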

src/svm/svc.rs

Lines changed: 18 additions & 18 deletions
@@ -1,27 +1,27 @@
//! # Support Vector Classifier.
//!
//! Support Vector Classifier (SVC) is a binary classifier that uses an optimal hyperplane to separate the points in the input variable space by their class.
//!
//! During training, SVC chooses a Maximal-Margin hyperplane that can separate all training instances with the largest margin.
//! The margin is calculated as the perpendicular distance from the boundary to only the closest points. Hence, only these points are relevant in defining
//! the hyperplane and in the construction of the classifier. These points are called the support vectors.
//!
//! While SVC selects a hyperplane with the largest margin, it allows some points in the training data to violate the separating boundary.
//! The parameter `C` > 0 gives you control over how SVC will handle violating points. The bigger the value of this parameter, the more we penalize the algorithm
//! for incorrectly classified points. In other words, setting this parameter to a small value will result in a classifier that allows for a large number
//! of misclassified samples. Mathematically, the SVC optimization problem can be defined as:
//!
//! \\[\underset{w, \zeta}{minimize} \space \space \frac{1}{2} \lVert \vec{w} \rVert^2 + C\sum_{i=1}^m \zeta_i \\]
//!
//! subject to:
//!
//! \\[y_i(\langle\vec{w}, \vec{x}_i \rangle + b) \geq 1 - \zeta_i \\]
//! \\[\zeta_i \geq 0 \space for \space any \space i = 1, ... , m\\]
//!
//! Where \\( m \\) is the number of training samples, \\( y_i \\) is a label value (either 1 or -1) and \\(\langle\vec{w}, \vec{x}_i \rangle + b\\) is the decision boundary.
//!
//! To solve this optimization problem, SmartCore uses an [approximate SVM solver](https://leon.bottou.org/projects/lasvm).
//! The optimizer reaches accuracies similar to those of a real SVM after performing two passes through the training examples. You can choose the number of passes
//! through the data that the algorithm takes by changing the `epoch` parameter of the classifier.
//!
//! Example:
@@ -73,7 +73,7 @@
//!
//! * ["Support Vector Machines", Kowalczyk A., 2017](https://www.svm-tutorial.com/2017/10/support-vector-machines-succinctly-released/)
//! * ["Fast Kernel Classifiers with Online and Active Learning", Bordes A., Ertekin S., Weston J., Bottou L., 2005](https://www.jmlr.org/papers/volume6/bordes05a/bordes05a.pdf)
//!
//! <script src="https://polyfill.io/v3/polyfill.min.js?features=es6"></script>
//! <script id="MathJax-script" async src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-mml-chtml.js"></script>
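
As a worked illustration of the optimization problem documented above, the sketch below evaluates the soft-margin objective for a fixed linear boundary, with each slack computed as max(0, 1 − y_i(⟨w, x_i⟩ + b)), and classifies a point by the sign of the decision function. All names are hypothetical; this is not SmartCore's LASVM-based solver.

```rust
// Illustrative evaluation of the soft-margin objective and the sign-based
// prediction rule (hypothetical names; not SmartCore's LASVM solver).

fn dot(a: &[f64], b: &[f64]) -> f64 {
    a.iter().zip(b).map(|(x, y)| x * y).sum()
}

/// 1/2 ||w||^2 + C * sum of slacks, with zeta_i = max(0, 1 - y_i * (<w, x_i> + b)).
fn soft_margin_objective(w: &[f64], b: f64, c: f64, xs: &[Vec<f64>], ys: &[f64]) -> f64 {
    let slack: f64 = xs
        .iter()
        .zip(ys)
        .map(|(x, &y)| (1.0 - y * (dot(w, x) + b)).max(0.0))
        .sum();
    0.5 * dot(w, w) + c * slack
}

/// Classify a point by the sign of the decision function <w, x> + b.
fn predict(w: &[f64], b: f64, x: &[f64]) -> f64 {
    if dot(w, x) + b >= 0.0 { 1.0 } else { -1.0 }
}

fn main() {
    let xs = vec![vec![0.0, 0.0], vec![1.0, 1.0]];
    let ys = vec![-1.0, 1.0]; // labels are either 1 or -1
    let (w, b) = (vec![1.0, 1.0], -1.0);
    // Both points sit exactly on the margin, so every slack is zero here.
    println!("objective = {}", soft_margin_objective(&w, b, 1.0, &xs, &ys));
    println!("class of (2, 2) = {}", predict(&w, b, &[2.0, 2.0]));
}
```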

src/svm/svr.rs

Lines changed: 11 additions & 11 deletions
@@ -1,21 +1,21 @@
//! # Epsilon-Support Vector Regression.
//!
//! Support Vector Regression (SVR) is a popular algorithm used for regression that uses the same principle as SVM.
//!
//! Just like [SVC](../svc/index.html), SVR finds an optimal decision boundary, \\(f(x)\\), that separates all training instances with the largest margin.
//! Unlike SVC, in \\(\epsilon\\)-SVR regression the goal is to find a function \\(f(x)\\) that has at most \\(\epsilon\\) deviation from the
//! known targets \\(y_i\\) for all the training data. To find this function, we need to find a solution to this optimization problem:
//!
//! \\[\underset{w, \zeta}{minimize} \space \space \frac{1}{2} \lVert \vec{w} \rVert^2 + C\sum_{i=1}^m \zeta_i \\]
//!
//! subject to:
//!
//! \\[\lvert y_i - \langle\vec{w}, \vec{x}_i \rangle - b \rvert \leq \epsilon + \zeta_i \\]
//! \\[\lvert \langle\vec{w}, \vec{x}_i \rangle + b - y_i \rvert \leq \epsilon + \zeta_i \\]
//! \\[\zeta_i \geq 0 \space for \space any \space i = 1, ... , m\\]
//!
//! Where \\( m \\) is the number of training samples, \\( y_i \\) is a target value and \\(\langle\vec{w}, \vec{x}_i \rangle + b\\) is the decision boundary.
//!
//! The parameter `C` > 0 determines the trade-off between the flatness of \\(f(x)\\) and the amount up to which deviations larger than \\(\epsilon\\) are tolerated.
//!
//! Example:
@@ -66,7 +66,7 @@
//! * ["A Fast Algorithm for Training Support Vector Machines", Platt J.C., 1998](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/tr-98-14.pdf)
//! * ["Working Set Selection Using Second Order Information for Training Support Vector Machines", Rong-En Fan et al., 2005](https://www.jmlr.org/papers/volume6/fan05a/fan05a.pdf)
//! * ["A tutorial on support vector regression", Smola A.J., Scholkopf B., 2003](https://alex.smola.org/papers/2004/SmoSch04.pdf)
//!
//! <script src="https://polyfill.io/v3/polyfill.min.js?features=es6"></script>
//! <script id="MathJax-script" async src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-mml-chtml.js"></script>
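
As a worked illustration of the formulation documented above, the sketch below evaluates the epsilon-insensitive objective for a fixed linear model, using a single slack per sample as in the simplified constraints shown in the doc comment. All names are hypothetical; this is not SmartCore's solver.

```rust
// Illustrative evaluation of the epsilon-insensitive objective from the
// SVR formulation above (hypothetical names; not SmartCore's solver).

fn dot(a: &[f64], b: &[f64]) -> f64 {
    a.iter().zip(b).map(|(x, y)| x * y).sum()
}

/// 1/2 ||w||^2 + C * sum of slacks, with zeta_i = max(0, |y_i - (<w, x_i> + b)| - eps):
/// residuals inside the epsilon-tube contribute nothing.
fn svr_objective(w: &[f64], b: f64, c: f64, eps: f64, xs: &[Vec<f64>], ys: &[f64]) -> f64 {
    let slack: f64 = xs
        .iter()
        .zip(ys)
        .map(|(x, &y)| ((y - (dot(w, x) + b)).abs() - eps).max(0.0))
        .sum();
    0.5 * dot(w, w) + c * slack
}

fn main() {
    let xs = vec![vec![1.0], vec![2.0], vec![3.0]];
    let ys = vec![1.1, 1.9, 3.2];
    let (w, b) = (vec![1.0], 0.0); // f(x) = x
    // Residuals are 0.1, 0.1 and 0.2, all within eps = 0.2, so every slack is
    // zero and only the 1/2 ||w||^2 term remains: the objective is 0.5.
    println!("objective = {}", svr_objective(&w, b, 1.0, 0.2, &xs, &ys));
}
```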
