tkellehe edited this page Aug 5, 2021 · 8 revisions

Nerve

Nerve is a programming language built around finding a small neural network that guarantees learning. I have spent a long time exploring and varying different techniques to achieve this.

One of the main hopes for this project is that it will drive the creation of a neural network design that covers the input space with a minimum number of bytes. That is, given the input space of all possible bytes, the network can map those bytes to appropriate outputs while using only a small amount of memory. It should also work with parallel computing, be insensitive to training order, and scale with the problem.

Most neural networks are built around supervised learning. These seem to work best when you find the perfect network for the problem at hand, which is very time-consuming and, frankly, boring. They also typically solve problems well because they require lots of data, which effectively covers the input space. Not to mention, you have to train them in a special way based on the samples.

Unsupervised learners, however, tend to provide the same level of coverage as supervised learners while requiring less hands-on decision making about training or the shaping of the network. They still have the problem of size. But I have found that the byte SOM (self-organizing map) tends to require less memory and no sophisticated hardware to run efficiently.

The main breakthrough with the byte SOM has been its layout.
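The page does not describe the byte SOM's actual layout, so as context, here is a minimal sketch of a classic one-dimensional self-organizing map trained on single-byte inputs (0-255). The function name, parameters, and 1-D node arrangement are all illustrative assumptions, not Nerve's design:

```python
import math
import random

def train_byte_som(samples, n_nodes=16, epochs=20, lr=0.5, radius=2.0, seed=0):
    """Train a 1-D SOM whose node weights live in the byte range [0, 255].

    NOTE: generic SOM sketch for illustration only; Nerve's byte-SOM
    layout is not documented on this page.
    """
    rng = random.Random(seed)
    # Each node holds a single weight in the byte range.
    weights = [rng.uniform(0, 255) for _ in range(n_nodes)]
    for epoch in range(epochs):
        # Decay the learning rate and neighborhood radius over time.
        t = epoch / max(epochs - 1, 1)
        cur_lr = lr * (1.0 - t) + 0.01
        cur_radius = radius * (1.0 - t) + 0.5
        for x in samples:
            # Best-matching unit: the node closest to the input byte.
            bmu = min(range(n_nodes), key=lambda i: abs(weights[i] - x))
            # Pull the BMU and its map neighbors toward the input,
            # weighted by a Gaussian over map distance.
            for i in range(n_nodes):
                influence = math.exp(-((i - bmu) ** 2) / (2 * cur_radius ** 2))
                weights[i] += cur_lr * influence * (x - weights[i])
    return weights

if __name__ == "__main__":
    # Two byte clusters, near 11 and near 200; nodes should settle around them.
    w = train_byte_som([10, 12, 200, 205, 198, 11])
    print(sorted(round(v) for v in w))
```

Because each node is a single byte-range value and updates are local to the BMU's neighborhood, a map like this stays small and needs no special hardware, which matches the memory argument above.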
