Gradient With or Without Backpropagation

Experiment code

Gradient-based optimization plays a central role in modern machine learning. Backpropagation, or reverse-mode automatic differentiation, is the most widely used method for computing gradients during the optimization stage of learning algorithms. Recently, a formulation called the forward gradient was proposed that sidesteps the main drawback of forward-mode automatic differentiation (its cost grows with the number of inputs when computing a full gradient), providing an efficient way to obtain an unbiased estimate of the true gradient from a single forward pass of a given neural network model. In this project, we implement the proposed formulation and show that forward gradients converge under SGD. In addition to reproducing the original experiments under SGD, we also train with the Adam optimizer using forward gradients; the training dynamics across various learning rates suggest that backpropagation performs significantly better than forward gradients when paired with Adam.
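A minimal sketch of the forward-gradient estimator described above, assuming JAX is available; the repository's own experiment code may use a different framework, and the function and variable names here are illustrative only.

```python
import jax
import jax.numpy as jnp

def forward_gradient(f, theta, key):
    # Sample a random tangent v ~ N(0, I) with the same shape as theta.
    v = jax.random.normal(key, theta.shape)
    # One forward-mode pass returns f(theta) and the directional
    # derivative (grad f(theta) . v) without any backward pass.
    value, directional = jax.jvp(f, (theta,), (v,))
    # Scaling the tangent by the directional derivative gives an
    # unbiased estimate of the true gradient: E[(grad f . v) v] = grad f.
    return value, directional * v

# Example: estimate the gradient of a simple quadratic and take one SGD step.
f = lambda theta: jnp.sum(theta ** 2)
theta = jnp.array([1.0, -2.0, 3.0])
key = jax.random.PRNGKey(0)
value, g_hat = forward_gradient(f, theta, key)
theta = theta - 0.1 * g_hat  # SGD update using the forward gradient
```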
