Gradient descent

Essentials

One of the simplest ways to find an optimum of a function is Gradient Descent, which is why it was the first method implemented in this library. Let us consider the foundations of Gradient Descent.
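As a reminder of how the method proceeds, gradient descent repeatedly moves the current point a small step against the gradient of the function being minimized. A minimal sketch of the update rule (the symbols w, f, and the learning rate η are introduced here only for illustration; they are not defined elsewhere on this page):

```latex
% Gradient descent update: move the current point w_t a small step
% against the gradient of the objective f; \eta is the learning rate.
w_{t+1} = w_t - \eta \, \nabla f(w_t)
```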

Let us assume that we have a matrix whose every row is a set of coordinates (a vector of features), and a column vector of values of the function we want to optimize (a vector of target labels). We need to find a weights vector that minimizes the discrepancy between the actual function value and the dot product of the weights vector and the features vector.
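One common way to write this setup down, assuming the discrepancy is measured by the mean squared error (the symbols X, y, w, n, and η below are illustrative and are not defined elsewhere on this page):

```latex
% X is the n-by-m feature matrix, y the column vector of target labels,
% and w the weights vector we are looking for.
% Objective: mean squared error between predictions Xw and targets y.
L(w) = \frac{1}{n}\,\lVert Xw - y \rVert^{2}

% Gradient of the objective with respect to the weights:
\nabla L(w) = \frac{2}{n}\, X^{\top}(Xw - y)

% Gradient descent then repeats the update from the sketch above:
w_{t+1} = w_t - \eta\, \nabla L(w_t)
```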
