
shantanupatne/fractals


Fractal Dynamics in Neural Network Boundaries

The trainability of a neural network, the ease with which it can learn from data, is a critical area of study. Recent research suggests that the boundaries separating trainable from non-trainable configurations in hyperparameter space are not merely irregular but fractal. This project examines the implications of such a fractal boundary, highlighting the intricate interplay between network architecture, initial conditions, and training success. Through a series of experiments, this report validates the fractal nature of trainability boundaries and discusses what it implies for the dynamics and stability of neural network training. The findings offer insight into optimizing network architectures and training methods, potentially leading to more robust and efficient learning algorithms.
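The core experiment can be sketched as follows (a minimal illustration in NumPy, not this project's actual code): sweep two per-layer learning rates over a 2D grid, run full-batch gradient descent on a tiny network at each grid point, and label the point by whether the loss stays bounded. The boundary between the trainable and divergent regions is the object whose fractal structure is studied. The network size, data, step counts, and divergence threshold here are all illustrative assumptions.

```python
import numpy as np

def trains_successfully(lr1, lr2, steps=100, seed=0):
    """Train a tiny 2-4-1 tanh network with separate learning rates
    per layer; return True if the loss stays bounded (illustrative
    stand-in for 'trainable')."""
    rng = np.random.default_rng(seed)
    # Toy regression data (assumption: any small fixed dataset suffices).
    X = rng.standard_normal((8, 2))
    y = rng.standard_normal(8)
    W1 = rng.standard_normal((2, 4)) * 0.5   # input -> hidden weights
    W2 = rng.standard_normal(4) * 0.5        # hidden -> output weights
    for _ in range(steps):
        h = np.tanh(X @ W1)                  # hidden activations
        err = h @ W2 - y                     # prediction error
        loss = np.mean(err ** 2)
        if not np.isfinite(loss) or loss > 1e6:
            return False                     # diverged
        # Backprop through the two layers (full-batch gradient descent).
        g2 = h.T @ err * (2 / len(y))
        gh = np.outer(err, W2) * (1 - h ** 2)
        g1 = X.T @ gh * (2 / len(y))
        W1 -= lr1 * g1
        W2 -= lr2 * g2
    return True

# Sweep a 2D grid of per-layer learning rates; the True/False pattern
# is a coarse version of the trainability images in the paper, and the
# boundary between the two regions is what exhibits fractal structure.
lrs = np.logspace(-2, 1, 20)
grid = np.array([[trains_successfully(a, b) for b in lrs] for a in lrs])
```

Rendering `grid` at much higher resolution, and repeatedly zooming into the boundary, is how the fractal structure is visualized.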

An exploration of the findings of Sohl-Dickstein's paper "The boundary of neural network trainability is fractal" (2024).

Course Project for EEE560 - Mathematical Foundations of ML
