content/about/_index.md
Lines changed: 1 addition & 1 deletion
@@ -46,7 +46,7 @@ ONM is dedicated to providing a comprehensive ecosystem for the neuromorphic com
* Our **[GitHub organization](https://github.com/open-neuromorphic)** serves as a platform for open-source neuromorphic projects. We welcome new contributions and can help migrate existing projects.
* A thriving **[Discord community](https://discord.gg/hUygPUdD8E)** for discussions, Q&A, networking, and real-time collaboration.
* **Clear Project Focus:** We concentrate on projects and resources related to:
- * Spiking Neural Networks (SNNs) for training, inference, Machine Learning, and neuroscience applications.
+ * [Spiking Neural Networks (SNNs)](/neuromorphic-computing/software/snn-frameworks/) for training, inference, Machine Learning, and neuroscience applications.
* Event-based sensor data handling and processing.
* Digital and mixed-signal neuromorphic hardware designs and concepts.
content/blog/spiking-neural-network-framework-benchmarking/index.md
Lines changed: 5 additions & 5 deletions
@@ -24,7 +24,7 @@ Open Neuromorphic's [list of SNN frameworks](https://github.com/open-neuromorphi
{{< chart data="framework-benchmarking-16k" caption="Comparison of time taken for forward and backward passes in different frameworks, for 16k neurons." mobile="framework-benchmarking-16k.png">}}
- The first figure shows runtime results for a 16k neuron network. The SNN libraries evaluated can be broken into three categories: 1. frameworks with tailored/custom CUDA kernels, 2. frameworks that purely leverage PyTorch functionality, and 3. a library that uses JAX exclusively for acceleration. For the custom CUDA libraries, [SpikingJelly](https://github.com/fangwei123456/spikingjelly) with a CuPy backend clocks in at just 0.26s for both forward and backward call combined. The libraries that use an implementation of [SLAYER](https://proceedings.neurips.cc/paper_files/paper/2018/hash/82f2b308c3b01637c607ce05f52a2fed-Abstract.html) ([Lava DL](https://github.com/lava-nc/lava-dl)) or [EXODUS](https://www.frontiersin.org/articles/10.3389/fnins.2023.1110444/full) ([Sinabs EXODUS](/neuromorphic-computing/software/snn-frameworks/sinabs/) / [Rockpool EXODUS](/neuromorphic-computing/software/snn-frameworks/rockpool/)) benefit from custom CUDA code and vectorization across the time dimension in both forward and backward passes and come within 1.5-2x the latency. It is noteworthy that such custom implementations exist for specific neuron models (such as the LIF under test), but not for arbitrary neuron models. On top of that, custom CUDA/CuPy backend implementations need to be compiled and then it is up to the maintainer to test it on different systems. Networks that are implemented in SLAYER, EXODUS or SpikingJelly with a CuPy backend cannot be executed on a CPU (unless converted).
+ The first figure shows runtime results for a 16k neuron network. The SNN libraries evaluated can be broken into three categories: 1. frameworks with tailored/custom CUDA kernels, 2. frameworks that purely leverage PyTorch functionality, and 3. a library that uses JAX exclusively for acceleration. For the custom CUDA libraries, [SpikingJelly](/neuromorphic-computing/software/snn-frameworks/spikingjelly/) with a CuPy backend clocks in at just 0.26s for both forward and backward call combined. The libraries that use an implementation of [SLAYER](https://proceedings.neurips.cc/paper_files/paper/2018/hash/82f2b308c3b01637c607ce05f52a2fed-Abstract.html) ([Lava DL](/neuromorphic-computing/software/snn-frameworks/lava/)) or [EXODUS](https://www.frontiersin.org/articles/10.3389/fnins.2023.1110444/full) ([Sinabs EXODUS](/neuromorphic-computing/software/snn-frameworks/sinabs/) / [Rockpool EXODUS](/neuromorphic-computing/software/snn-frameworks/rockpool/)) benefit from custom CUDA code and vectorization across the time dimension in both forward and backward passes and come within 1.5-2x the latency. It is noteworthy that such custom implementations exist for specific neuron models (such as the LIF under test), but not for arbitrary neuron models. On top of that, custom CUDA/CuPy backend implementations need to be compiled and then it is up to the maintainer to test it on different systems. Networks that are implemented in SLAYER, EXODUS or SpikingJelly with a CuPy backend cannot be executed on a CPU (unless converted).
In contrast, frameworks such as [snnTorch](/neuromorphic-computing/software/snn-frameworks/snntorch/), [Norse](/neuromorphic-computing/software/snn-frameworks/norse/), [Sinabs](/neuromorphic-computing/software/snn-frameworks/sinabs/) or [Rockpool](/neuromorphic-computing/software/snn-frameworks/rockpool/) are very flexible when it comes to defining custom neuron models.
For some libraries, that flexibility comes at a cost of slower computation.
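
To make the trade-off in the two paragraphs above concrete, here is a minimal sketch, not the benchmark's actual code (that is linked at the end of the post), of a LIF layer written in plain PyTorch with a surrogate gradient, plus a synchronized forward-plus-backward timing loop. Layer sizes, the surrogate shape, and the reset rule are illustrative assumptions; fused CUDA/CuPy or SLAYER/EXODUS kernels replace the Python time loop below with code vectorized across time, which is where their speed advantage comes from.

```python
import time
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass, boxcar surrogate gradient in the backward pass."""
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0.0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        return grad_output * (v.abs() < 0.5).float()

class PlainLIF(torch.nn.Module):
    """A LIF layer that loops over time in Python: flexible, but not fused across time steps."""
    def __init__(self, n_in, n_out, beta=0.9, threshold=1.0):
        super().__init__()
        self.fc = torch.nn.Linear(n_in, n_out)
        self.beta, self.threshold = beta, threshold

    def forward(self, x):                      # x: (time, batch, features)
        v = torch.zeros(x.shape[1], self.fc.out_features, device=x.device)
        spikes = []
        for x_t in x:                          # this Python loop is what custom kernels avoid
            v = self.beta * v + self.fc(x_t)
            s = SurrogateSpike.apply(v - self.threshold)
            v = v - s * self.threshold         # soft reset by subtraction
            spikes.append(s)
        return torch.stack(spikes)

def time_fwd_bwd(model, inp, n_runs=10):
    """Mean seconds per combined forward + backward call, with GPU synchronization."""
    times = []
    for _ in range(n_runs):
        if torch.cuda.is_available():
            torch.cuda.synchronize()
        start = time.perf_counter()
        model(inp).sum().backward()            # dummy scalar loss, only for timing
        if torch.cuda.is_available():
            torch.cuda.synchronize()           # wait for all kernels before stopping the clock
        times.append(time.perf_counter() - start)
    return sum(times) / len(times)
```

Timing the combined call between two `torch.cuda.synchronize()` barriers avoids measuring only kernel launch time.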
@@ -47,13 +47,13 @@ The memory usage benchmarks were collected using PyTorch's [max_memory_allocated
The ideal library will often depend on a multitude of factors, such as accessible documentation, usability of the API or pre-trained models. Generally speaking, PyTorch offers good support when custom neuron models (that have additional states, recurrence) are to be explored. For larger networks, it will likely pay off to rely on CUDA-accelerated existing implementations, or ensure your model is compatible with the recent compilation techniques to leverage the backend-specific JIT optimizations. The development of Spyx offers an interesting new framework as it enables the flexible neuron definitions of PyTorch frameworks while also enabling the speed of libraries which utilize custom CUDA backends. One more note on the accuracy of gradient computation: In order to speed up computation, some frameworks will approximate this calculation over time. Networks will still manage to *learn* in most cases, but EXODUS, correcting an approximation in SLAYER and therefore calculating gradients that are equivalent to BPTT, showed that it can make a substantial difference in certain experiments. So while speed is extremely important, other factors such as memory consumption and quality of gradient calculation matter as well.
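
Since the hunk header above cites PyTorch's `max_memory_allocated`, here is a hedged sketch of how peak-memory figures of this kind are commonly collected; the actual containerized suite may differ, and `torch.compile` appears only to illustrate the backend-specific JIT optimizations mentioned in the paragraph. The model (reusing the `PlainLIF` sketch above) and the tensor shapes are placeholders.

```python
import torch

device = "cuda"
model = torch.compile(PlainLIF(700, 700).to(device))   # optional PyTorch 2.x JIT compilation
inp = torch.randn(500, 16, 700, device=device)         # (time, batch, features), illustrative sizes

torch.cuda.reset_peak_memory_stats(device)             # clear the peak counter before the run
model(inp).sum().backward()
torch.cuda.synchronize(device)
peak_mib = torch.cuda.max_memory_allocated(device) / 2**20
print(f"peak GPU memory: {peak_mib:.1f} MiB")
```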
## Edits
- **13/08/2023**: Sumit Bam Shrestha fixed Lava's out-of-memory issue by disactivating quantization. That makes it one of the best performing frameworks.
+ **13/08/2023**: [Sumit Bam Shrestha](/contributors/sumit-bam-shrestha/) fixed Lava's out-of-memory issue by deactivating quantization. That makes it one of the best performing frameworks.
- **22/10/2023**: Kade Heckel reperformed experiments on an A100 and added his Spyx framework.
+ **22/10/2023**: [Kade Heckel](/contributors/kade-heckel/) reperformed experiments on an A100 and added his Spyx framework.
- **07/11/2023**: Cameron Barker containerised the benchmark suite and added the memory utilisation benchmark. The updated benchmarks were run on a RTX 3090 with a batchsize of 16.
+ **07/11/2023**: [Cameron Barker](/contributors/cameron-barker/) containerized the benchmark suite and added the memory utilization benchmark. The updated benchmarks were run on a RTX 3090 with a batchsize of 16.
- **19/2/2024**: Jens Pedersen updated the benchmark for Norse to use the correct neuron model and `torch.compile`.
+ **19/2/2024**: [Jens E. Pedersen](/contributors/jens-e-pedersen/) updated the benchmark for Norse to use the correct neuron model and `torch.compile`.
## Code and comments
The code for this benchmark is available [here](https://github.com/open-neuromorphic/open-neuromorphic.github.io/blob/main/content/english/blog/spiking-neural-network-framework-benchmarking/). The order of dimensions in the input tensor and how it is fed to the respective models differs between libraries.
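
To illustrate that last point with a small, assumption-laden example: some libraries step through a time-first tensor while others expect batch-first, so the same data has to be permuted per framework (the shapes below are illustrative, not the benchmark's exact configuration).

```python
import torch

x_btf = torch.randn(16, 500, 700)            # (batch, time, features) for batch-first APIs
x_tbf = x_btf.permute(1, 0, 2).contiguous()  # (time, batch, features) for time-first APIs;
                                             # permute() returns a view, contiguous() packs memory
```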
content/blog/truenorth-deep-dive-ibm-neuromorphic-chip-design/index.md
Lines changed: 1 addition & 1 deletion
@@ -202,7 +202,7 @@ There are some additional blocks, such as the Pseudo Random Number Generator (PR
### Neuron model
- Let's get to the equations now! The neuron model employed in TrueNorth is the **Leaky Integrate and Fire** (LIF) one. The update equation is the following:
+ Let's get to the equations now! The neuron model employed in TrueNorth is the **[Leaky Integrate and Fire](/blog/spiking-neurons-digital-hardware-implementation/)** (LIF) one. The update equation is the following:
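
The equation itself sits outside this hunk. For orientation only, a simplified discrete-time LIF update of the kind the post goes on to derive can be written as follows; the full TrueNorth formulation adds configurable leak, reset, and stochastic modes, so treat this as a reference form rather than the chip's exact equation:

$$
V_j(t) = V_j(t-1) + \sum_i A_i(t)\, w_{i,j} - \lambda_j,
\qquad
V_j(t) \ge \alpha_j \;\Rightarrow\; \text{spike and } V_j(t) \leftarrow R_j
$$

where $V_j$ is the membrane potential of neuron $j$, $A_i(t)$ are the incoming spikes, $w_{i,j}$ the synaptic weights, $\lambda_j$ the leak, $\alpha_j$ the firing threshold, and $R_j$ the reset value.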