
Commit c5a7bea

Merge pull request #45 from smartcorelib/api_doc
feat: version change + api documentation updated
2 parents ba16c25 + 9475d50 commit c5a7bea


4 files changed: +123 -21 lines changed


Cargo.toml

Lines changed: 1 addition & 1 deletion
@@ -2,7 +2,7 @@
 name = "smartcore"
 description = "The most advanced machine learning library in rust."
 homepage = "https://smartcorelib.org"
-version = "0.1.0"
+version = "0.2.0"
 authors = ["SmartCore Developers"]
 edition = "2018"
 license = "Apache-2.0"

src/cluster/dbscan.rs

Lines changed: 15 additions & 1 deletion
@@ -1,6 +1,20 @@
 //! # DBSCAN Clustering
 //!
-//! DBSCAN - Density-Based Spatial Clustering of Applications with Noise.
+//! DBSCAN stands for density-based spatial clustering of applications with noise. This algorithm works well for arbitrarily shaped clusters and clusters with noise.
+//! The main idea behind DBSCAN is that a point belongs to a cluster if it is close to many points from that cluster. There are two key parameters of DBSCAN:
+//!
+//! * `eps`, the maximum distance that specifies a neighborhood. Two points are considered to be neighbors if the distance between them is less than or equal to `eps`.
+//! * `min_samples`, the minimum number of data points that defines a cluster.
+//!
+//! Based on these two parameters, points are classified as core points, border points, or outliers:
+//!
+//! * A point is a core point if there are at least `min_samples` points, including the point itself, in its vicinity.
+//! * A point is a border point if it is reachable from a core point and there are fewer than `min_samples` points within its surrounding area.
+//! * All points not reachable from any other point are outliers or noise points.
+//!
+//! The algorithm starts by picking an arbitrary point in the dataset.
+//! If there are at least `min_samples` points within a radius of `eps` of that point, then all of these points are considered part of the same cluster.
+//! The clusters are then expanded by recursively repeating the neighborhood calculation for each neighboring point.
 //!
 //! Example:
 //!
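The hunk above documents `eps` and `min_samples`, but the module's usage example sits below the shown context. As a rough illustration of how those two parameters could be applied, here is a hedged sketch; the `with_eps`/`with_min_samples` builder calls and the exact `fit`/`predict` signatures follow SmartCore's usual `fit(x, parameters)` pattern and are assumptions, not lines from this commit.

```rust
// Hedged sketch, not part of this commit: fitting DBSCAN with explicit
// `eps` and `min_samples`. The builder method names are assumptions.
use smartcore::cluster::dbscan::{DBSCAN, DBSCANParameters};
use smartcore::linalg::naive::dense_matrix::DenseMatrix;

fn main() {
    // Two tight, well separated blobs plus one far-away point that should
    // end up labeled as noise.
    let x = DenseMatrix::from_2d_array(&[
        &[1.0, 2.0],
        &[1.1, 2.1],
        &[0.9, 1.9],
        &[8.0, 8.0],
        &[8.1, 8.1],
        &[7.9, 7.9],
        &[25.0, 25.0],
    ]);

    // Assumed builder API: `eps` is the neighborhood radius, `min_samples`
    // the smallest neighborhood that still counts as a cluster core.
    let params = DBSCANParameters::default().with_eps(0.5).with_min_samples(3);

    // Fit, then ask for the cluster label of every row in `x`.
    let labels = DBSCAN::fit(&x, params)
        .and_then(|model| model.predict(&x))
        .unwrap();

    println!("cluster labels: {:?}", labels);
}
```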

src/lib.rs

Lines changed: 9 additions & 14 deletions
@@ -10,16 +10,11 @@
 //!
 //! Welcome to SmartCore, the most advanced machine learning library in Rust!
 //!
-//! In SmartCore you will find implementation of these ML algorithms:
-//! * __Regression__: Linear Regression (OLS), Decision Tree Regressor, Random Forest Regressor, K Nearest Neighbors
-//! * __Classification__: Logistic Regressor, Decision Tree Classifier, Random Forest Classifier, Supervised Nearest Neighbors (KNN)
-//! * __Clustering__: K-Means
-//! * __Matrix Decomposition__: PCA, LU, QR, SVD, EVD
-//! * __Distance Metrics__: Euclidian, Minkowski, Manhattan, Hamming, Mahalanobis
-//! * __Evaluation Metrics__: Accuracy, AUC, Recall, Precision, F1, Mean Absolute Error, Mean Squared Error, R2
+//! SmartCore features various classification, regression and clustering algorithms, including support vector machines, random forests, k-means and DBSCAN,
+//! as well as tools for model selection and model evaluation.
 //!
-//! Most of algorithms implemented in SmartCore operate on n-dimentional arrays. While you can use Rust vectors with all functions defined in this library
-//! we do recommend to go with one of the popular linear algebra libraries available in Rust. At this moment we support these packages:
+//! SmartCore is well integrated with a wide variety of libraries that provide support for large, multi-dimensional arrays and matrices. At this moment,
+//! all of SmartCore's algorithms work with ordinary Rust vectors, as well as matrices and vectors defined in these packages:
 //! * [ndarray](https://docs.rs/ndarray)
 //! * [nalgebra](https://docs.rs/nalgebra/)
 //!
@@ -28,21 +23,21 @@
 //! To start using SmartCore simply add the following to your Cargo.toml file:
 //! ```ignore
 //! [dependencies]
-//! smartcore = "0.1.0"
+//! smartcore = "0.2.0"
 //! ```
 //!
-//! All ML algorithms in SmartCore are grouped into these generic categories:
+//! All machine learning algorithms in SmartCore are grouped into these broad categories:
 //! * [Clustering](cluster/index.html), unsupervised clustering of unlabeled data.
 //! * [Martix Decomposition](decomposition/index.html), various methods for matrix decomposition.
 //! * [Linear Models](linear/index.html), regression and classification methods where output is assumed to have linear relation to explanatory variables
 //! * [Ensemble Models](ensemble/index.html), variety of regression and classification ensemble models
 //! * [Tree-based Models](tree/index.html), classification and regression trees
 //! * [Nearest Neighbors](neighbors/index.html), K Nearest Neighbors for classification and regression
+//! * [Naive Bayes](naive_bayes/index.html), statistical classification technique based on Bayes' Theorem
+//! * [SVM](svm/index.html), support vector machines
 //!
-//! Each category is assigned to a separate module.
 //!
-//! For example, KNN classifier is defined in [smartcore::neighbors::knn_classifier](neighbors/knn_classifier/index.html). To train and run it using standard Rust vectors you will
-//! run this code:
+//! For example, you can use this code to fit a [K Nearest Neighbors classifier](neighbors/knn_classifier/index.html) to a dataset defined as a standard Rust vector:
 //!
 //! ```
 //! // DenseMatrix defenition
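The diff context stops right after the opening fence of the lib.rs example, so the KNN snippet itself is not visible here. As a rough sketch of fitting a K Nearest Neighbors classifier on a `DenseMatrix` built from plain Rust vectors (illustrative data and values, not the actual lib.rs example):

```rust
// Hedged sketch, not the actual lib.rs snippet: a KNN classifier fit on a
// small DenseMatrix with default parameters.
use smartcore::linalg::naive::dense_matrix::DenseMatrix;
use smartcore::neighbors::knn_classifier::KNNClassifier;

fn main() {
    // DenseMatrix definition: 6 samples, 2 features each (illustrative values).
    let x = DenseMatrix::from_2d_array(&[
        &[1.0, 2.0],
        &[1.2, 1.8],
        &[0.9, 2.2],
        &[8.0, 9.0],
        &[8.2, 8.8],
        &[7.9, 9.1],
    ]);
    // One class label per row of `x`.
    let y = vec![0., 0., 0., 1., 1., 1.];

    // Fit with default parameters, then predict on the training data.
    let knn = KNNClassifier::fit(&x, &y, Default::default()).unwrap();
    let y_hat = knn.predict(&x).unwrap();

    println!("predictions: {:?}", y_hat);
}
```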

src/model_selection/mod.rs

Lines changed: 98 additions & 5 deletions
@@ -1,13 +1,106 @@
 //! # Model Selection methods
 //!
-//! In statistics and machine learning we usually split our data into multiple subsets: training data and testing data (and sometimes to validate),
-//! and fit our model on the train data, in order to make predictions on the test data. We do that to avoid overfitting or underfitting model to our data.
+//! In statistics and machine learning we usually split our data into two sets: one for training and the other one for testing.
+//! We fit our model to the training data in order to make predictions on the test data. We do that to avoid overfitting or underfitting the model to our data.
 //! Overfitting is bad because the model we trained fits trained data too well and can’t make any inferences on new data.
 //! Underfitted is bad because the model is undetrained and does not fit the training data well.
-//! Splitting data into multiple subsets helps to find the right combination of hyperparameters, estimate model performance and choose the right model for
-//! your data.
+//! Splitting data into multiple subsets helps us to find the right combination of hyperparameters, estimate model performance and choose the right model for
+//! the data.
 //!
-//! In SmartCore you can split your data into training and test datasets using `train_test_split` function.
+//! In SmartCore a random split into training and test sets can be quickly computed with the [train_test_split](./fn.train_test_split.html) helper function.
+//!
+//! ```
+//! use crate::smartcore::linalg::BaseMatrix;
+//! use smartcore::linalg::naive::dense_matrix::DenseMatrix;
+//! use smartcore::model_selection::train_test_split;
+//!
+//! // Iris data
+//! let x = DenseMatrix::from_2d_array(&[
+//!     &[5.1, 3.5, 1.4, 0.2],
+//!     &[4.9, 3.0, 1.4, 0.2],
+//!     &[4.7, 3.2, 1.3, 0.2],
+//!     &[4.6, 3.1, 1.5, 0.2],
+//!     &[5.0, 3.6, 1.4, 0.2],
+//!     &[5.4, 3.9, 1.7, 0.4],
+//!     &[4.6, 3.4, 1.4, 0.3],
+//!     &[5.0, 3.4, 1.5, 0.2],
+//!     &[4.4, 2.9, 1.4, 0.2],
+//!     &[4.9, 3.1, 1.5, 0.1],
+//!     &[7.0, 3.2, 4.7, 1.4],
+//!     &[6.4, 3.2, 4.5, 1.5],
+//!     &[6.9, 3.1, 4.9, 1.5],
+//!     &[5.5, 2.3, 4.0, 1.3],
+//!     &[6.5, 2.8, 4.6, 1.5],
+//!     &[5.7, 2.8, 4.5, 1.3],
+//!     &[6.3, 3.3, 4.7, 1.6],
+//!     &[4.9, 2.4, 3.3, 1.0],
+//!     &[6.6, 2.9, 4.6, 1.3],
+//!     &[5.2, 2.7, 3.9, 1.4],
+//! ]);
+//! let y: Vec<f64> = vec![
+//!     0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
+//! ];
+//!
+//! let (x_train, x_test, y_train, y_test) = train_test_split(&x, &y, 0.2, true);
+//!
+//! println!("X train: {:?}, y train: {}, X test: {:?}, y test: {}",
+//!     x_train.shape(), y_train.len(), x_test.shape(), y_test.len());
+//! ```
+//!
+//! When we partition the available data into two disjoint sets, we drastically reduce the number of samples that can be used for training.
+//!
+//! One way to solve this problem is to use k-fold cross-validation. With k-fold validation, the dataset is split into k disjoint sets.
+//! A model is trained using k - 1 of the folds, and the resulting model is validated on the remaining portion of the data.
+//!
+//! The simplest way to run cross-validation is to use the [cross_validate](./fn.cross_validate.html) helper function on your estimator and the dataset.
+//!
+//! ```
+//! use smartcore::linalg::naive::dense_matrix::DenseMatrix;
+//! use smartcore::model_selection::{KFold, cross_validate};
+//! use smartcore::metrics::accuracy;
+//! use smartcore::linear::logistic_regression::LogisticRegression;
+//!
+//! // Iris data
+//! let x = DenseMatrix::from_2d_array(&[
+//!     &[5.1, 3.5, 1.4, 0.2],
+//!     &[4.9, 3.0, 1.4, 0.2],
+//!     &[4.7, 3.2, 1.3, 0.2],
+//!     &[4.6, 3.1, 1.5, 0.2],
+//!     &[5.0, 3.6, 1.4, 0.2],
+//!     &[5.4, 3.9, 1.7, 0.4],
+//!     &[4.6, 3.4, 1.4, 0.3],
+//!     &[5.0, 3.4, 1.5, 0.2],
+//!     &[4.4, 2.9, 1.4, 0.2],
+//!     &[4.9, 3.1, 1.5, 0.1],
+//!     &[7.0, 3.2, 4.7, 1.4],
+//!     &[6.4, 3.2, 4.5, 1.5],
+//!     &[6.9, 3.1, 4.9, 1.5],
+//!     &[5.5, 2.3, 4.0, 1.3],
+//!     &[6.5, 2.8, 4.6, 1.5],
+//!     &[5.7, 2.8, 4.5, 1.3],
+//!     &[6.3, 3.3, 4.7, 1.6],
+//!     &[4.9, 2.4, 3.3, 1.0],
+//!     &[6.6, 2.9, 4.6, 1.3],
+//!     &[5.2, 2.7, 3.9, 1.4],
+//! ]);
+//! let y: Vec<f64> = vec![
+//!     0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
+//! ];
+//!
+//! let cv = KFold::default().with_n_splits(3);
+//!
+//! let results = cross_validate(LogisticRegression::fit, // estimator
+//!     &x, &y,              // data
+//!     Default::default(),  // hyperparameters
+//!     cv,                  // cross validation split
+//!     &accuracy).unwrap(); // metric
+//!
+//! println!("Training accuracy: {}, test accuracy: {}",
+//!     results.mean_train_score(), results.mean_test_score());
+//! ```
+//!
+//! The function [cross_val_predict](./fn.cross_val_predict.html) has a similar interface to `cross_validate`,
+//! but instead of test error it calculates predictions for all samples in the test set.
 
 use crate::api::Predictor;
 use crate::error::Failed;
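The new docs state that `cross_val_predict` mirrors the `cross_validate` interface but returns per-sample predictions rather than a test score. A hedged sketch of that usage follows; dropping the metric argument and the exact argument order are assumptions inferred from that description, not shown in this diff.

```rust
// Hedged sketch: out-of-fold predictions via cross_val_predict. The argument
// list mirrors the cross_validate call above minus the metric; that omission
// is an assumption inferred from the doc text.
use smartcore::linalg::naive::dense_matrix::DenseMatrix;
use smartcore::linear::logistic_regression::LogisticRegression;
use smartcore::model_selection::{cross_val_predict, KFold};

fn main() {
    // A small two-class dataset (illustrative values, same shape as above).
    let x = DenseMatrix::from_2d_array(&[
        &[5.1, 3.5, 1.4, 0.2],
        &[4.9, 3.0, 1.4, 0.2],
        &[4.7, 3.2, 1.3, 0.2],
        &[6.4, 3.2, 4.5, 1.5],
        &[6.9, 3.1, 4.9, 1.5],
        &[5.5, 2.3, 4.0, 1.3],
    ]);
    let y: Vec<f64> = vec![0., 0., 0., 1., 1., 1.];

    let cv = KFold::default().with_n_splits(3);

    // Each sample is predicted by the model of the fold that held it out.
    let y_hat = cross_val_predict(
        LogisticRegression::fit, // estimator
        &x,
        &y,
        Default::default(), // hyperparameters
        cv,
    )
    .unwrap();

    println!("out-of-fold predictions: {:?}", y_hat);
}
```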
