
Commit f91a677

docs: Improve internal resource linking
1 parent 5e3433d commit f91a677

File tree

3 files changed: +7 −7 lines changed
  • content
    • about
    • blog
      • spiking-neural-network-framework-benchmarking
      • truenorth-deep-dive-ibm-neuromorphic-chip-design


content/about/_index.md

Lines changed: 1 addition & 1 deletion
@@ -46,7 +46,7 @@ ONM is dedicated to providing a comprehensive ecosystem for the neuromorphic com
 * Our **[GitHub organization](https://github.com/open-neuromorphic)** serves as a platform for open-source neuromorphic projects. We welcome new contributions and can help migrate existing projects.
 * A thriving **[Discord community](https://discord.gg/hUygPUdD8E)** for discussions, Q&A, networking, and real-time collaboration.
 * **Clear Project Focus:** We concentrate on projects and resources related to:
-* Spiking Neural Networks (SNNs) for training, inference, Machine Learning, and neuroscience applications.
+* [Spiking Neural Networks (SNNs)](/neuromorphic-computing/software/snn-frameworks/) for training, inference, Machine Learning, and neuroscience applications.
 * Event-based sensor data handling and processing.
 * Digital and mixed-signal neuromorphic hardware designs and concepts.

content/blog/spiking-neural-network-framework-benchmarking/index.md

Lines changed: 5 additions & 5 deletions
@@ -24,7 +24,7 @@ Open Neuromorphic's [list of SNN frameworks](https://github.com/open-neuromorphi
 
 {{< chart data="framework-benchmarking-16k" caption="Comparison of time taken for forward and backward passes in different frameworks, for 16k neurons." mobile="framework-benchmarking-16k.png">}}
 
-The first figure shows runtime results for a 16k neuron network. The SNN libraries evaluated can be broken into three categories: 1. frameworks with tailored/custom CUDA kernels, 2. frameworks that purely leverage PyTorch functionality, and 3. a library that uses JAX exclusively for acceleration. For the custom CUDA libraries, [SpikingJelly](https://github.com/fangwei123456/spikingjelly) with a CuPy backend clocks in at just 0.26s for both forward and backward call combined. The libraries that use an implementation of [SLAYER](https://proceedings.neurips.cc/paper_files/paper/2018/hash/82f2b308c3b01637c607ce05f52a2fed-Abstract.html) ([Lava DL](https://github.com/lava-nc/lava-dl)) or [EXODUS](https://www.frontiersin.org/articles/10.3389/fnins.2023.1110444/full) ([Sinabs EXODUS](/neuromorphic-computing/software/snn-frameworks/sinabs/) / [Rockpool EXODUS](/neuromorphic-computing/software/snn-frameworks/rockpool/)) benefit from custom CUDA code and vectorization across the time dimension in both forward and backward passes and come within 1.5-2x the latency. It is noteworthy that such custom implementations exist for specific neuron models (such as the LIF under test), but not for arbitrary neuron models. On top of that, custom CUDA/CuPy backend implementations need to be compiled and then it is up to the maintainer to test it on different systems. Networks that are implemented in SLAYER, EXODUS or SpikingJelly with a CuPy backend cannot be executed on a CPU (unless converted).
+The first figure shows runtime results for a 16k neuron network. The SNN libraries evaluated can be broken into three categories: 1. frameworks with tailored/custom CUDA kernels, 2. frameworks that purely leverage PyTorch functionality, and 3. a library that uses JAX exclusively for acceleration. For the custom CUDA libraries, [SpikingJelly](/neuromorphic-computing/software/snn-frameworks/spikingjelly/) with a CuPy backend clocks in at just 0.26s for both forward and backward call combined. The libraries that use an implementation of [SLAYER](https://proceedings.neurips.cc/paper_files/paper/2018/hash/82f2b308c3b01637c607ce05f52a2fed-Abstract.html) ([Lava DL](/neuromorphic-computing/software/snn-frameworks/lava/)) or [EXODUS](https://www.frontiersin.org/articles/10.3389/fnins.2023.1110444/full) ([Sinabs EXODUS](/neuromorphic-computing/software/snn-frameworks/sinabs/) / [Rockpool EXODUS](/neuromorphic-computing/software/snn-frameworks/rockpool/)) benefit from custom CUDA code and vectorization across the time dimension in both forward and backward passes and come within 1.5-2x the latency. It is noteworthy that such custom implementations exist for specific neuron models (such as the LIF under test), but not for arbitrary neuron models. On top of that, custom CUDA/CuPy backend implementations need to be compiled and then it is up to the maintainer to test it on different systems. Networks that are implemented in SLAYER, EXODUS or SpikingJelly with a CuPy backend cannot be executed on a CPU (unless converted).
 
 In contrast, frameworks such as [snnTorch](/neuromorphic-computing/software/snn-frameworks/snntorch/), [Norse](/neuromorphic-computing/software/snn-frameworks/norse/), [Sinabs](/neuromorphic-computing/software/snn-frameworks/sinabs/) or [Rockpool](/neuromorphic-computing/software/snn-frameworks/rockpool/) are very flexible when it comes to defining custom neuron models.
 For some libraries, that flexibility comes at a cost of slower computation.
@@ -47,13 +47,13 @@ The memory usage benchmarks were collected using PyTorch's [max_memory_allocated
 The ideal library will often depend on a multitude of factors, such as accessible documentation, usability of the API or pre-trained models. Generally speaking, PyTorch offers good support when custom neuron models (that have additional states, recurrence) are to be explored. For larger networks, it will likely pay off to rely on CUDA-accelerated existing implementations, or ensure your model is compatible with the recent compilation techniques to leverage the backend-specific JIT optimizations. The development of Spyx offers an interesting new framework as it enables the flexible neuron definitions of PyTorch frameworks while also enabling the speed of libraries which utilize custom CUDA backends. One more note on the accuracy of gradient computation: In order to speed up computation, some frameworks will approximate this calculation over time. Networks will still manage to *learn* in most cases, but EXODUS, correcting an approximation in SLAYER and therefore calculating gradients that are equivalent to BPTT, showed that it can make a substantial difference in certain experiments. So while speed is extremely important, other factors such as memory consumption and quality of gradient calculation matter as well.
 
 ## Edits
-**13/08/2023**: Sumit Bam Shrestha fixed Lava's out-of-memory issue by disactivating quantization. That makes it one of the best performing frameworks.
+**13/08/2023**: [Sumit Bam Shrestha](/contributors/sumit-bam-shrestha/) fixed Lava's out-of-memory issue by deactivating quantization. That makes it one of the best performing frameworks.
 
-**22/10/2023**: Kade Heckel reperformed experiments on an A100 and added his Spyx framework.
+**22/10/2023**: [Kade Heckel](/contributors/kade-heckel/) reperformed experiments on an A100 and added his Spyx framework.
 
-**07/11/2023**: Cameron Barker containerised the benchmark suite and added the memory utilisation benchmark. The updated benchmarks were run on a RTX 3090 with a batchsize of 16.
+**07/11/2023**: [Cameron Barker](/contributors/cameron-barker/) containerized the benchmark suite and added the memory utilization benchmark. The updated benchmarks were run on a RTX 3090 with a batchsize of 16.
 
-**19/2/2024**: Jens Pedersen updated the benchmark for Norse to use the correct neuron model and `torch.compile`.
+**19/2/2024**: [Jens E. Pedersen](/contributors/jens-e-pedersen/) updated the benchmark for Norse to use the correct neuron model and `torch.compile`.
 
 ## Code and comments
 The code for this benchmark is available [here](https://github.com/open-neuromorphic/open-neuromorphic.github.io/blob/main/content/english/blog/spiking-neural-network-framework-benchmarking/). The order of dimensions in the input tensor and how it is fed to the respective models differs between libraries.
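The runtime and memory figures discussed in this file come from timing one forward and one backward pass and reading PyTorch's peak-memory counter. Below is a minimal sketch of that kind of measurement, assuming a plain-PyTorch LIF layer with a surrogate gradient; the function names, tensor shapes, and hyperparameters are illustrative assumptions, not the actual benchmark code linked above.

```python
# Minimal sketch (not the benchmark itself): time forward + backward for a
# single LIF layer and report peak GPU memory via torch.cuda.max_memory_allocated().
import time
import torch


class SpikeFn(torch.autograd.Function):
    """Heaviside spike in the forward pass, sigmoid surrogate gradient in the backward pass."""

    @staticmethod
    def forward(ctx, v_minus_thr):
        ctx.save_for_backward(v_minus_thr)
        return (v_minus_thr > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v_minus_thr,) = ctx.saved_tensors
        sig = torch.sigmoid(5.0 * v_minus_thr)
        return grad_output * 5.0 * sig * (1.0 - sig)


def lif_forward(x, weight, beta=0.9, threshold=1.0):
    """Loop over time: leaky integration, spike, soft reset. x: (batch, n_steps, n_in)."""
    batch, n_steps, _ = x.shape
    v = torch.zeros(batch, weight.shape[0], device=x.device)
    spikes = []
    for t in range(n_steps):
        v = beta * v + x[:, t] @ weight.t()   # leaky integration of weighted input
        s = SpikeFn.apply(v - threshold)      # spike generation with surrogate gradient
        v = v - s * threshold                 # soft reset
        spikes.append(s)
    return torch.stack(spikes, dim=1)


if __name__ == "__main__":
    device = "cuda"
    batch, n_steps, n_in, n_out = 16, 100, 512, 512   # illustrative sizes
    x = torch.rand(batch, n_steps, n_in, device=device)
    weight = torch.randn(n_out, n_in, device=device, requires_grad=True)

    torch.cuda.reset_peak_memory_stats()
    torch.cuda.synchronize()
    start = time.time()

    out = lif_forward(x, weight)   # forward pass
    out.sum().backward()           # backward pass (BPTT through the time loop)

    torch.cuda.synchronize()
    elapsed = time.time() - start
    peak_mb = torch.cuda.max_memory_allocated() / 1e6
    print(f"forward+backward: {elapsed:.3f} s, peak memory: {peak_mb:.1f} MB")
```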

content/blog/truenorth-deep-dive-ibm-neuromorphic-chip-design/index.md

Lines changed: 1 addition & 1 deletion
@@ -202,7 +202,7 @@ There are some additional blocks, such as the Pseudo Random Number Generator (PR
 
 ### Neuron model
 
-Let's get to the equations now! The neuron model employed in TrueNorth is the **Leaky Integrate and Fire** (LIF) one. The update equation is the following:
+Let's get to the equations now! The neuron model employed in TrueNorth is the **[Leaky Integrate and Fire](/blog/spiking-neurons-digital-hardware-implementation/)** (LIF) one. The update equation is the following:
 
 {{< math >}}
 V_{j}[t] = V_{j}[t-1] + \sum_{i=0}^{255} A_{i}[t] \cdot w_{i,j} \cdot s_{j}^{G_{i}} - \lambda_{j}
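To make the quoted update rule concrete, here is a minimal NumPy sketch of one membrane update for a single neuron j in a 256-axon crossbar. It is not TrueNorth's implementation: threshold, reset, and the chip's stochastic modes are omitted, and all function and variable names are assumptions for illustration only.

```python
# Sketch of V_j[t] = V_j[t-1] + sum_i A_i[t] * w_{i,j} * s_j^{G_i} - lambda_j
# for one neuron, under the assumptions stated above.
import numpy as np

N_AXONS = 256

def membrane_update(v_prev, axon_spikes, connectivity, axon_types, signed_weights, leak):
    """One timestep of integration for a single neuron j.

    axon_spikes    : (256,) binary input spikes A_i[t]
    connectivity   : (256,) binary crossbar column w_{i,j}
    axon_types     : (256,) axon type index G_i selecting a signed weight
    signed_weights : per-type signed weights s_j^{G_i}
    leak           : scalar lambda_j
    """
    synaptic_input = np.sum(axon_spikes * connectivity * signed_weights[axon_types])
    return v_prev + synaptic_input - leak

# Illustrative values only.
rng = np.random.default_rng(0)
v = membrane_update(
    v_prev=0,
    axon_spikes=rng.integers(0, 2, N_AXONS),
    connectivity=rng.integers(0, 2, N_AXONS),
    axon_types=rng.integers(0, 4, N_AXONS),
    signed_weights=np.array([1, 2, -1, -2]),
    leak=1,
)
print(v)
```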
