
Commit 7007e06

Merge pull request #19 from smartcorelib/svm-documentation
SVM documentation
2 parents 3732ad4 + a9446c0 commit 7007e06

3 files changed: +75 -7 lines changed


src/svm/mod.rs

Lines changed: 20 additions & 0 deletions
@@ -1,5 +1,25 @@
 //! # Support Vector Machines
 //!
+//! Support Vector Machines (SVM) is one of the most performant off-the-shelf machine learning algorithms.
+//! SVM is based on the [Vapnik–Chervonenkis theory](https://en.wikipedia.org/wiki/Vapnik%E2%80%93Chervonenkis_theory) that was developed during 1960–1990 by Vladimir Vapnik and Alexey Chervonenkis.
+//!
+//! SVM splits data into two sets using a maximal-margin decision boundary, \\(f(x)\\). For regression, the algorithm uses the value of the function \\(f(x)\\) to predict a target value.
+//! To classify a new point, the algorithm calculates the sign of the decision function to see where the new point lies relative to the boundary.
+//!
+//! SVM is memory efficient since it uses only a subset of the training data to find the decision boundary. This subset is called the support vectors.
+//!
+//! In SVM, the distance between a data point and the support vectors is defined by the kernel function.
+//! SmartCore supports multiple kernel functions, but you can always define a new kernel function by implementing the `Kernel` trait. Not every function can be a kernel.
+//! Building a new kernel requires a good mathematical understanding of [Mercer's theorem](https://en.wikipedia.org/wiki/Mercer%27s_theorem),
+//! which gives the necessary and sufficient conditions for a function to be a kernel function.
+//!
+//! Pre-defined kernel functions:
+//!
+//! * *Linear*, \\( K(x, x') = \langle x, x' \rangle\\)
+//! * *Polynomial*, \\( K(x, x') = (\gamma\langle x, x' \rangle + r)^d\\), where \\(d\\) is the polynomial degree, \\(\gamma\\) is the kernel coefficient and \\(r\\) is an independent term in the kernel function.
+//! * *RBF (Gaussian)*, \\( K(x, x') = e^{-\gamma \lVert x - x' \rVert ^2} \\), where \\(\gamma\\) is the kernel coefficient.
+//! * *Sigmoid (hyperbolic tangent)*, \\( K(x, x') = \tanh ( \gamma \langle x, x' \rangle + r ) \\), where \\(\gamma\\) is the kernel coefficient and \\(r\\) is an independent term in the kernel function.
+//!
 //! <script src="https://polyfill.io/v3/polyfill.min.js?features=es6"></script>
 //! <script id="MathJax-script" async src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-mml-chtml.js"></script>
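The kernel formulas listed above translate directly into code. The sketch below is plain Rust for illustration only: it is not the SmartCore `Kernel` trait or API, and the function names, the toy support vectors, and the coefficients `alphas` are made-up assumptions. It evaluates each pre-defined kernel on two feature vectors and shows how a kernelized decision function \\(f(x) = \sum_i \alpha_i y_i K(x_i, x) + b\\) is turned into a class label by taking its sign.

```rust
// Illustrative stand-alone kernel implementations; these are NOT the
// SmartCore API, just the formulas from the list above written out in Rust.

fn dot(a: &[f64], b: &[f64]) -> f64 {
    a.iter().zip(b).map(|(x, y)| x * y).sum()
}

fn linear(x: &[f64], y: &[f64]) -> f64 {
    dot(x, y)
}

fn polynomial(x: &[f64], y: &[f64], gamma: f64, r: f64, degree: i32) -> f64 {
    (gamma * dot(x, y) + r).powi(degree)
}

fn rbf(x: &[f64], y: &[f64], gamma: f64) -> f64 {
    let sq_dist: f64 = x.iter().zip(y).map(|(a, b)| (a - b) * (a - b)).sum();
    (-gamma * sq_dist).exp()
}

fn sigmoid(x: &[f64], y: &[f64], gamma: f64, r: f64) -> f64 {
    (gamma * dot(x, y) + r).tanh()
}

/// Kernelized decision function: f(x) = sum_i alpha_i * y_i * K(x_i, x) + b.
/// `support_vectors`, `alphas`, `labels` and `b` would normally come from training.
fn decision<K: Fn(&[f64], &[f64]) -> f64>(
    kernel: K,
    support_vectors: &[Vec<f64>],
    alphas: &[f64],
    labels: &[f64],
    b: f64,
    x: &[f64],
) -> f64 {
    support_vectors
        .iter()
        .zip(alphas.iter().zip(labels))
        .map(|(sv, (a, y))| a * y * kernel(sv.as_slice(), x))
        .sum::<f64>()
        + b
}

fn main() {
    let x = [1.0, 2.0];
    let y = [0.5, -1.0];
    println!("linear     = {}", linear(&x, &y));
    println!("polynomial = {}", polynomial(&x, &y, 0.5, 1.0, 3));
    println!("rbf        = {}", rbf(&x, &y, 0.5));
    println!("sigmoid    = {}", sigmoid(&x, &y, 0.5, 1.0));

    // Classify by the sign of the decision function (toy, hand-picked values).
    let svs = vec![vec![1.0, 1.0], vec![-1.0, -1.0]];
    let f = decision(|a, b| rbf(a, b, 0.5), &svs, &[1.0, 1.0], &[1.0, -1.0], 0.0, &x);
    let label = if f >= 0.0 { 1 } else { -1 };
    println!("f(x) = {}, predicted class = {}", f, label);
}
```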

src/svm/svc.rs

Lines changed: 30 additions & 3 deletions
@@ -1,6 +1,30 @@
 //! # Support Vector Classifier.
 //!
-//! Example
+//! Support Vector Classifier (SVC) is a binary classifier that uses an optimal hyperplane to separate the points in the input variable space by their class.
+//!
+//! During training, SVC chooses a maximal-margin hyperplane that separates all training instances with the largest margin.
+//! The margin is calculated as the perpendicular distance from the boundary to only the closest points. Hence, only these points are relevant in defining
+//! the hyperplane and in the construction of the classifier. These points are called the support vectors.
+//!
+//! While SVC selects a hyperplane with the largest margin, it allows some points in the training data to violate the separating boundary.
+//! The parameter `C` > 0 gives you control over how SVC handles violating points. The bigger the value of this parameter, the more we penalize the algorithm
+//! for incorrectly classified points. In other words, setting this parameter to a small value will result in a classifier that allows for a large number
+//! of misclassified samples. Mathematically, the SVC optimization problem can be defined as:
+//!
+//! \\[\underset{w, \zeta}{minimize} \space \space \frac{1}{2} \lVert \vec{w} \rVert^2 + C\sum_{i=1}^m \zeta_i \\]
+//!
+//! subject to:
+//!
+//! \\[y_i(\langle\vec{w}, \vec{x}_i \rangle + b) \geq 1 - \zeta_i \\]
+//! \\[\zeta_i \geq 0 \space for \space any \space i = 1, ... , m\\]
+//!
+//! where \\( m \\) is the number of training samples, \\( y_i \\) is a label value (either 1 or -1) and \\(\langle\vec{w}, \vec{x}_i \rangle + b\\) is the decision boundary.
+//!
+//! To solve this optimization problem, SmartCore uses an [approximate SVM solver](https://leon.bottou.org/projects/lasvm).
+//! The optimizer reaches accuracies similar to those of a real SVM after performing two passes through the training examples. You can choose the number of passes
+//! through the data that the algorithm takes by changing the `epoch` parameter of the classifier.
+//!
+//! Example:
 //!
 //! ```
 //! use smartcore::linalg::naive::dense_matrix::*;
@@ -47,8 +71,11 @@
 //!
 //! ## References:
 //!
-//! * ["Support Vector Machines" Kowalczyk A., 2017](https://www.svm-tutorial.com/2017/10/support-vector-machines-succinctly-released/)
+//! * ["Support Vector Machines", Kowalczyk A., 2017](https://www.svm-tutorial.com/2017/10/support-vector-machines-succinctly-released/)
 //! * ["Fast Kernel Classifiers with Online and Active Learning", Bordes A., Ertekin S., Weston J., Bottou L., 2005](https://www.jmlr.org/papers/volume6/bordes05a/bordes05a.pdf)
+//!
+//! <script src="https://polyfill.io/v3/polyfill.min.js?features=es6"></script>
+//! <script id="MathJax-script" async src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-mml-chtml.js"></script>

 use std::collections::{HashMap, HashSet};
 use std::fmt::Debug;
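To make the role of `C` and the slack variables \\(\zeta_i\\) in the documented objective more concrete, the sketch below evaluates the primal soft-margin objective \\(\frac{1}{2}\lVert \vec{w} \rVert^2 + C\sum_i \zeta_i\\) for a fixed linear boundary, with \\(\zeta_i = \max(0, 1 - y_i(\langle \vec{w}, \vec{x}_i \rangle + b))\\) measuring each margin violation. This is plain Rust for illustration only; it is not the LaSVM-style solver SmartCore uses, and the helper names and toy data are assumptions.

```rust
// Primal soft-margin objective for a fixed linear decision boundary (w, b).
// Illustrative only: SmartCore's actual solver works differently.

fn dot(a: &[f64], b: &[f64]) -> f64 {
    a.iter().zip(b).map(|(x, y)| x * y).sum()
}

/// 1/2 * ||w||^2 + C * sum_i max(0, 1 - y_i * (<w, x_i> + b))
fn soft_margin_objective(w: &[f64], b: f64, c: f64, xs: &[Vec<f64>], ys: &[f64]) -> f64 {
    let regularizer = 0.5 * dot(w, w);
    let slack: f64 = xs
        .iter()
        .zip(ys)
        .map(|(x, &y)| (1.0 - y * (dot(w, x) + b)).max(0.0)) // zeta_i
        .sum();
    regularizer + c * slack
}

fn main() {
    let xs = vec![vec![2.0, 2.0], vec![-2.0, -1.0], vec![0.2, 0.1]];
    let ys = vec![1.0, -1.0, -1.0]; // the last point violates the margin of w = (1, 0), b = 0
    let w = [1.0, 0.0];

    // A larger C penalizes the violating point more heavily.
    for &c in &[0.1, 1.0, 10.0] {
        println!("C = {:>4}: objective = {}", c, soft_margin_objective(&w, 0.0, c, &xs, &ys));
    }
}
```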
@@ -220,7 +247,7 @@ impl<T: RealNumber, M: Matrix<T>, K: Kernel<T, M::RowVector>> SVC<T, M, K> {

 impl<T: RealNumber, M: Matrix<T>, K: Kernel<T, M::RowVector>> PartialEq for SVC<T, M, K> {
     fn eq(&self, other: &Self) -> bool {
-        if self.b != other.b
+        if (self.b - other.b).abs() > T::epsilon() * T::two()
             || self.w.len() != other.w.len()
             || self.instances.len() != other.instances.len()
         {
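The `PartialEq` change above replaces a direct `!=` on the bias term with a tolerance check, since two floating-point values produced by independent computations are rarely bit-identical. A minimal stand-alone sketch of the same idea, using `f64::EPSILON` directly rather than the generic `RealNumber` bound from the diff:

```rust
// Tolerance-based float comparison, mirroring the diff's
// `(self.b - other.b).abs() > T::epsilon() * T::two()` check.

fn approx_eq(a: f64, b: f64, tol: f64) -> bool {
    (a - b).abs() <= tol
}

fn main() {
    let b1 = 0.1 + 0.2; // 0.30000000000000004
    let b2 = 0.3;

    println!("bitwise equal: {}", b1 == b2);                              // false
    println!("within 2*eps:  {}", approx_eq(b1, b2, 2.0 * f64::EPSILON)); // true
}
```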

src/svm/svr.rs

Lines changed: 25 additions & 4 deletions
@@ -1,6 +1,24 @@
 //! # Epsilon-Support Vector Regression.
 //!
-//! Example
+//! Support Vector Regression (SVR) is a popular algorithm used for regression that uses the same principle as SVM.
+//!
+//! Just like [SVC](../svc/index.html), SVR finds an optimal decision boundary, \\(f(x)\\), that separates all training instances with the largest margin.
+//! Unlike SVC, in \\(\epsilon\\)-SVR the goal is to find a function \\(f(x)\\) that has at most \\(\epsilon\\) deviation from the
+//! known targets \\(y_i\\) for all the training data. To find this function, we need to find a solution to this optimization problem:
+//!
+//! \\[\underset{w, \zeta}{minimize} \space \space \frac{1}{2} \lVert \vec{w} \rVert^2 + C\sum_{i=1}^m \zeta_i \\]
+//!
+//! subject to:
+//!
+//! \\[y_i - \langle\vec{w}, \vec{x}_i \rangle - b \leq \epsilon + \zeta_i \\]
+//! \\[\langle\vec{w}, \vec{x}_i \rangle + b - y_i \leq \epsilon + \zeta_i \\]
+//! \\[\zeta_i \geq 0 \space for \space any \space i = 1, ... , m\\]
+//!
+//! where \\( m \\) is the number of training samples, \\( y_i \\) is a target value and \\(\langle\vec{w}, \vec{x}_i \rangle + b\\) is the decision boundary.
+//!
+//! The parameter `C` > 0 determines the trade-off between the flatness of \\(f(x)\\) and the amount up to which deviations larger than \\(\epsilon\\) are tolerated.
+//!
+//! Example:
 //!
 //! ```
 //! use smartcore::linalg::naive::dense_matrix::*;
@@ -44,10 +62,13 @@
 //!
 //! ## References:
 //!
-//! * ["Support Vector Machines" Kowalczyk A., 2017](https://www.svm-tutorial.com/2017/10/support-vector-machines-succinctly-released/)
+//! * ["Support Vector Machines", Kowalczyk A., 2017](https://www.svm-tutorial.com/2017/10/support-vector-machines-succinctly-released/)
 //! * ["A Fast Algorithm for Training Support Vector Machines", Platt J.C., 1998](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/tr-98-14.pdf)
 //! * ["Working Set Selection Using Second Order Information for Training Support Vector Machines", Rong-En Fan et al., 2005](https://www.jmlr.org/papers/volume6/fan05a/fan05a.pdf)
-//! * ["A tutorial on support vector regression", SMOLA A.J., Scholkopf B., 2003](https://alex.smola.org/papers/2004/SmoSch04.pdf)
+//! * ["A tutorial on support vector regression", Smola A.J., Scholkopf B., 2003](https://alex.smola.org/papers/2004/SmoSch04.pdf)
+//!
+//! <script src="https://polyfill.io/v3/polyfill.min.js?features=es6"></script>
+//! <script id="MathJax-script" async src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-mml-chtml.js"></script>

 use std::cell::{Ref, RefCell};
 use std::fmt::Debug;
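The \\(\epsilon\\)-SVR constraints documented earlier in this file amount to the epsilon-insensitive loss: deviations smaller than \\(\epsilon\\) cost nothing, and anything beyond that becomes slack \\(\zeta_i = \max(0, \lvert y_i - f(x_i) \rvert - \epsilon)\\) paid for at rate `C`. The sketch below is a small stand-alone Rust illustration of that loss under a fixed linear predictor; it is not SmartCore's solver, and the helper names and toy data are assumptions.

```rust
// Epsilon-insensitive loss for a fixed linear predictor f(x) = <w, x> + b.
// Deviations within the epsilon tube are free; larger ones become slack.

fn dot(a: &[f64], b: &[f64]) -> f64 {
    a.iter().zip(b).map(|(x, y)| x * y).sum()
}

/// zeta_i = max(0, |y_i - f(x_i)| - eps) for every training sample.
fn slacks(w: &[f64], b: f64, eps: f64, xs: &[Vec<f64>], ys: &[f64]) -> Vec<f64> {
    xs.iter()
        .zip(ys)
        .map(|(x, &y)| ((y - (dot(w, x) + b)).abs() - eps).max(0.0))
        .collect()
}

fn main() {
    let xs = vec![vec![1.0], vec![2.0], vec![3.0]];
    let ys = vec![1.05, 2.4, 2.95]; // targets for f(x) = x, i.e. w = [1.0], b = 0.0
    let (w, b, eps, c) = ([1.0], 0.0, 0.1, 1.0);

    let zeta = slacks(&w, b, eps, &xs, &ys);
    let objective = 0.5 * dot(&w, &w) + c * zeta.iter().sum::<f64>();
    println!("slacks = {:?}", zeta); // only the second point exceeds the epsilon tube
    println!("objective = {}", objective);
}
```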
@@ -183,7 +204,7 @@ impl<T: RealNumber, M: Matrix<T>, K: Kernel<T, M::RowVector>> SVR<T, M, K> {

 impl<T: RealNumber, M: Matrix<T>, K: Kernel<T, M::RowVector>> PartialEq for SVR<T, M, K> {
     fn eq(&self, other: &Self) -> bool {
-        if self.b != other.b
+        if (self.b - other.b).abs() > T::epsilon() * T::two()
             || self.w.len() != other.w.len()
             || self.instances.len() != other.instances.len()
         {
