stochastic-gradient-descent

A very simple C++ library for building and customizing feed-forward fully-connected deep neural networks (DNNs), utilizing the stochastic gradient descent (SGD) optimization algorithm. The library enables users to specify activation functions and loss criteria, providing flexibility for various neural network configurations.

How to use it?

  1. Create an object of type "NeuralNetwork":

     //specify the neuron count at each layer
     std::vector<uint32_t> layers_lengths = { 250, 40, 30, 5 };

     //specify the activation functions (one per layer after the input)
     NeuralNetwork::acts_t activations = { 3, tanh };

     //specify the activation functions' derivatives
     //(tanh' written in terms of the tanh output: 1 - tanh(z)^2)
     NeuralNetwork::acts_t activations_derivatives = { 3, [](double x) {return 1.0 - x * x; } };

     //specify the criterion (mean squared error; pow requires <cmath>)
     NeuralNetwork::cri_t criterion = [](std::vector<double> p, std::vector<double> y)
     {
         double loss = 0.0;
         for (size_t i = 0; i < p.size(); i++)
             loss += pow(p[i] - y[i], 2);
         loss /= p.size();
         return loss;
     };

     //specify the criterion's derivative
     NeuralNetwork::cri_d_t criterion_derivative = [](double p, double y) {return p - y; };

     NeuralNetwork NN(layers_lengths, activations, activations_derivatives, criterion, criterion_derivative);
  2. Call the "forward_pass" function to calculate the output layer values:

    //assuming that the input vector "sample" is defined somewhere
    NN.forward_pass(sample);
  3. You can read the output layer values from the member variable "neurons":

    for (size_t i = 0; i < NN.neurons.back().size(); i++)
    {
      double output = NN.neurons.back()[i];
      //...
    }
  4. Calculate the loss:

    double loss = NN.loss(desired_output);
  5. To optimize the neural network, call the "backward_pass" function (a complete training-loop sketch combining these steps follows this list):

    //the vector "desired_output" holds the correct values that the neural network was supposed to give 
    NN.backward_pass(desired_output, 0.3/*learning rate*/);
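
Putting the steps together, a minimal SGD training loop might look like the sketch below. It reuses the "NN" object created in step 1; the "train" function, the "samples"/"targets" containers, and the epoch count and learning rate are illustrative names and values, not part of the library.

    #include <iostream>
    #include <vector>
    //plus the header that declares NeuralNetwork

    //minimal SGD training-loop sketch built on the API shown above;
    //"samples" and "targets" are hypothetical containers holding the
    //training inputs and their desired outputs
    void train(NeuralNetwork& NN,
               std::vector<std::vector<double>>& samples,
               std::vector<std::vector<double>>& targets)
    {
        const double learning_rate = 0.3; //illustrative value
        const int epochs = 10;            //illustrative value

        for (int epoch = 0; epoch < epochs; epoch++)
        {
            double epoch_loss = 0.0;
            for (size_t i = 0; i < samples.size(); i++)
            {
                NN.forward_pass(samples[i]);                 //step 2: compute the outputs
                epoch_loss += NN.loss(targets[i]);           //step 4: accumulate the loss
                NN.backward_pass(targets[i], learning_rate); //step 5: update the weights
            }
            std::cout << "epoch " << epoch << ", mean loss: " << epoch_loss / samples.size() << '\n';
        }
    }

Updating the weights after every single sample is what makes this stochastic gradient descent; shuffling "samples" between epochs is a common refinement.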

License

MIT License
