Soft-exponential-activation-function-research-paper-implementation-

Implementation of the parameterized soft-exponential activation function on the MNIST dataset. In this implementation, the trainable parameter alpha is shared across all neurons and is initialized to 1. The soft-exponential function is intended to be a good choice for networks with many neurons and connections.

This activation function interpolates between logarithmic, linear and exponential behaviour, depending on its parameter alpha.

The equation for the soft-exponential function is:

f(α, x) = −ln(1 − α(x + α)) / α    for α < 0
f(α, x) = x                        for α = 0
f(α, x) = (e^(αx) − 1) / α + α     for α > 0
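
For reference, here is a minimal NumPy sketch of the piecewise definition above (an illustration only, not the repository's exact code):

```python
import numpy as np

def soft_exponential(alpha, x):
    """Piecewise soft-exponential activation for a scalar alpha."""
    if alpha < 0:
        # Logarithmic regime; only valid while 1 - alpha * (x + alpha) > 0.
        return -np.log(1 - alpha * (x + alpha)) / alpha
    if alpha == 0:
        # Linear regime: identity.
        return x
    # Exponential regime.
    return (np.exp(alpha * x) - 1) / alpha + alpha
```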

Problems faced:

1. Misinformation about the function

In Figure 2 of the paper "A continuum among logarithmic, linear, and exponential functions, and its potential to improve generalization in neural networks", the soft-exponential function is shown as a logarithmic function. This is not the case.

[Figure: the plot as given in the paper]

The correct figure is shown here:

[Figure: the actual plot of the function]

Here we can see that the soft-exponential function is undefined for some combinations of (alpha, x): for alpha < 0, the argument of the logarithm, 1 − alpha(x + alpha), must remain positive, and whether it does depends on both alpha and x, so neither can be treated as a constant.
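
A quick numeric check (the values below are made up purely for illustration) shows how the alpha < 0 branch can fall outside the domain of the logarithm:

```python
import numpy as np

alpha, x = -0.5, -3.0               # hypothetical values
arg = 1 - alpha * (x + alpha)       # 1 - (-0.5) * (-3.5) = -0.75
print(arg)                          # -0.75: negative, so the log is undefined
print(-np.log(arg) / alpha)         # nan (NumPy warns about the invalid value)
```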

2. Negative values inside logarithm

Here comes the tricky part. During training the activation has to be evaluated for arbitrary values of alpha and x, but the logarithm in the alpha < 0 branch is not defined when its argument becomes negative.

In the Keras issues, someone suggested using the inverse of sinh() (i.e. asinh()) instead of the log(), since asinh() is defined for all real inputs.
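
One way to read that suggestion is to swap ln(1 + u) for asinh(u), which agrees with ln(1 + u) to first order near zero but is defined for every real u. The exact expression from the Keras issue is not reproduced here, so the sketch below is only an assumed TensorFlow version of the idea:

```python
import tensorflow as tf

def soft_exponential_asinh(alpha, x):
    """Soft-exponential with asinh() in place of the log (assumed substitution)."""
    u = -alpha * (x + alpha)
    negative = -tf.math.asinh(u) / alpha                        # alpha < 0 branch
    positive = (tf.math.exp(alpha * x) - 1.0) / alpha + alpha   # alpha > 0 branch
    # tf.where still evaluates both branches, so alpha == 0 (division by zero)
    # needs extra care in real training code; this is only a sketch.
    return tf.where(alpha < 0.0, negative,
                    tf.where(alpha > 0.0, positive, x))
```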

3. Initialization of alpha

Starting alpha at an initial value of 1, the soft-exponential activation is steep at the beginning and becomes more gradual towards the end. This turned out to be a good choice.
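
A minimal sketch of how a single shared, trainable alpha initialized to 1 could be wired into a custom Keras layer (the layer below is illustrative, not the repository's exact code):

```python
import tensorflow as tf

class SoftExponential(tf.keras.layers.Layer):
    """Soft-exponential activation with one trainable alpha shared by all units."""

    def __init__(self, alpha_init=1.0, **kwargs):
        super().__init__(**kwargs)
        self.alpha_init = alpha_init

    def build(self, input_shape):
        # Single scalar alpha shared across the layer, starting at 1.
        self.alpha = self.add_weight(
            name="alpha",
            shape=(),
            initializer=tf.keras.initializers.Constant(self.alpha_init),
            trainable=True,
        )

    def call(self, x):
        a = self.alpha
        neg = -tf.math.log(1.0 - a * (x + a)) / a
        pos = (tf.math.exp(a * x) - 1.0) / a + a
        # As above, both branches are evaluated; the asinh variant can be
        # substituted here to keep the alpha < 0 branch well defined.
        return tf.where(a < 0.0, neg, tf.where(a > 0.0, pos, x))
```

Such a layer could then be dropped in after a Dense layer when building the MNIST model, e.g. `tf.keras.layers.Dense(128)` followed by `SoftExponential()`.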

Acknowledgements:
