
Commit d3f3b65

Merge pull request #263 from neural-loop/main
Improving internal linking
2 parents 4edefa8 + 4185220 commit d3f3b65

File tree

11 files changed (+36, -21 lines)
  • content
    • about
    • blog
      • digital-neuromorphic-hardware-read-list
      • efficient-compression-event-based-data-neuromorphic-applications
      • open-neuromorphic-evolves-charter-first-executive-committee-election
      • spiking-neural-network-framework-benchmarking
      • strategic-vision-open-neuromorphic
      • truenorth-deep-dive-ibm-neuromorphic-chip-design
    • contributors
    • getting-involved
  • scripts
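
The diffs below are straightforward link substitutions in the site's markdown: bare mentions and external GitHub URLs are replaced with internal links under /neuromorphic-computing/ and /contributors/. As a hedged sketch of how such substitutions could be applied across the content tree (the mapping, paths, and helper below are hypothetical illustrations, not the repository's actual scripts/ tooling, and the commit's edits may well have been made by hand):

```python
import re
from pathlib import Path

# Hypothetical phrase-to-URL mapping for illustration only.
LINK_MAP = {
    "Spiking Neural Networks (SNNs)": "/neuromorphic-computing/software/snn-frameworks/",
    "neuromorphic computing": "/neuromorphic-computing/",
}

def add_internal_links(markdown: str) -> str:
    """Link the first bare occurrence of each phrase, skipping text that is already a link."""
    for phrase, target in LINK_MAP.items():
        pattern = re.compile(rf"(?<!\[){re.escape(phrase)}(?!\]\()")
        markdown = pattern.sub(f"[{phrase}]({target})", markdown, count=1)
    return markdown

if __name__ == "__main__":
    for path in Path("content").rglob("*.md"):
        path.write_text(add_internal_links(path.read_text()))
```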


content/about/_index.md

Lines changed: 1 addition & 1 deletion
@@ -46,7 +46,7 @@ ONM is dedicated to providing a comprehensive ecosystem for the neuromorphic com
* Our **[GitHub organization](https://github.com/open-neuromorphic)** serves as a platform for open-source neuromorphic projects. We welcome new contributions and can help migrate existing projects.
* A thriving **[Discord community](https://discord.gg/hUygPUdD8E)** for discussions, Q&A, networking, and real-time collaboration.
* **Clear Project Focus:** We concentrate on projects and resources related to:
- * Spiking Neural Networks (SNNs) for training, inference, Machine Learning, and neuroscience applications.
+ * [Spiking Neural Networks (SNNs)](/neuromorphic-computing/software/snn-frameworks/) for training, inference, Machine Learning, and neuroscience applications.
* Event-based sensor data handling and processing.
* Digital and mixed-signal neuromorphic hardware designs and concepts.

content/blog/digital-neuromorphic-hardware-read-list/index.md

Lines changed: 2 additions & 2 deletions
@@ -12,7 +12,7 @@ show_author_bios: true
Here's a list of articles and theses related to digital hardware designs for neuomorphic applications. I plan to update it regularly. To be redirected directly to the sources, click on the titles!

- If you are new to neuromorphic computing, I strongly suggest to get a grasp of how an SNN works from [this paper](https://arxiv.org/abs/2109.12894). Otherwise, it will be pretty difficult to understand the content of the papers listed here.
+ If you are new to [neuromorphic computing](/neuromorphic-computing/), I strongly suggest to get a grasp of how an SNN works from [this paper](https://arxiv.org/abs/2109.12894). Otherwise, it will be pretty difficult to understand the content of the papers listed here.

## 2015

@@ -44,7 +44,7 @@ The Loihi chip employs **128 neuromorphic cores**, each of which consisting of *
[*A 0.086-mm2 12.7-pJ/SOP 64k-Synapse 256-Neuron Online-Learning Digital Spiking Neuromorphic Processor in 28nm CMOS*](https://arxiv.org/abs/1804.07858), Charlotte Frenkel et al., 2019

- In this paper, a digital neuromorphic processor is presented. The Verilog is also [open source](https://github.com/ChFrenkel/ODIN)!
+ In this paper, a digital neuromorphic processor is presented. The Verilog is also [open source](https://github.com/ChFrenkel/ODIN)! The processor is also known as [ODIN](/neuromorphic-computing/hardware/odin-frenkel/).

The neurons states and the synapses weights are stored in two foundry SRAMs on chip. In order to emulate a crossbar, **time-multiplexing** is adopted: the synapses weights and neurons states are updated in a sequential manner instead of in parallel. On the core, **256 neurons (4kB SRAM)** and **256x256 synapses (64kB SRAM)** are embedded. This allows to get a very high synapses and neuron densities: **741k synapses per squared millimiters** and **3k neurons per squared millimeters**, using a **28nm CMOS FDSOI** process.
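
The time-multiplexing mentioned in the paragraph above can be pictured with a small sketch: a single update unit steps through the neuron array sequentially, fetching weights and membrane state from memory, instead of a physical crossbar updating everything in parallel. The sizes mirror ODIN's 256 neurons and 256x256 synapses, but the neuron model, data types, and threshold below are simplified assumptions rather than the chip's actual implementation:

```python
import numpy as np

N = 256                                               # neurons, as on ODIN
weights = np.random.randint(-8, 8, size=(N, N))       # stands in for the synapse SRAM
membrane = np.zeros(N, dtype=np.int32)                # stands in for the neuron-state SRAM
threshold, leak = 64, 1                               # made-up parameters

def tick(input_spikes: np.ndarray) -> np.ndarray:
    """One time step, processed neuron by neuron (time-multiplexed)."""
    out = np.zeros(N, dtype=bool)
    for i in range(N):                                # sequential loop instead of a crossbar
        membrane[i] += weights[i] @ input_spikes - leak
        if membrane[i] >= threshold:
            out[i] = True
            membrane[i] = 0                           # reset after a spike
    return out

output = tick(np.random.rand(N) < 0.05)               # ~5% of inputs spike this step
```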

content/blog/efficient-compression-event-based-data-neuromorphic-applications/index.md

Lines changed: 2 additions & 2 deletions
@@ -14,7 +14,7 @@ show_author_bios: true
---

## Datasets grow larger in size
- As neuromorphic algorithms tackle more complex tasks that are linked to bigger datasets, and event cameras mature to have higher spatial resolution, it is worth looking at how to encode that data efficiently when storing it on disk. To give you an example, Prophesee's latest automotive [object detection dataset](https://docs.prophesee.ai/stable/datasets.html) is some 3.5 TB in size for under 40h of recordings with a single camera.
+ As [neuromorphic algorithms](/neuromorphic-computing/software/) tackle more complex tasks that are linked to bigger datasets, and event cameras mature to have higher spatial resolution, it is worth looking at how to encode that data efficiently when storing it on disk. To give you an example, Prophesee's latest automotive [object detection dataset](https://docs.prophesee.ai/stable/datasets.html) is some 3.5 TB in size for under 40h of recordings with a single camera.

## Event cameras record with fine-grained temporal resolution
In contrast to conventional cameras, event cameras output changes in illumination, which is already a form of compression. But the output data rate is still a lot higher cameras because of the microsecond temporal resolution that event cameras are able to record with. When streaming data, we get millions of tuples of microsecond timestamps, x/y coordinates and polarity indicators per second that look nothing like a frame but are a list of events:
@@ -39,7 +39,7 @@ Ideally, we want to be close to the origin where we read fast and compression is
The authors of this post have released [Expelliarmus](/neuromorphic-computing/software/data-tools/expelliarmus/) as a lightweight, well-tested, pip-installable framework that can read and write different formats easily. If you're working with dat, evt2 or evt3 formats, why not give it a try?

## Summary
- When training spiking neural networks on event-based data, we want to be able to feed new data to the network as fast as possible. But given the high data rate of an event camera, the amount of data quickly becomes an issue itself, especially for more complex tasks. So we want to choose a good trade-off between a dataset size that's manageable and reading speed. We hope that this article will help future groups that record large-scale datasets to pick a good encoding format.
+ When training [spiking neural networks](/neuromorphic-computing/software/snn-frameworks/) on event-based data, we want to be able to feed new data to the network as fast as possible. But given the high data rate of an event camera, the amount of data quickly becomes an issue itself, especially for more complex tasks. So we want to choose a good trade-off between a dataset size that's manageable and reading speed. We hope that this article will help future groups that record large-scale datasets to pick a good encoding format.

## Comments
The aedat4 file contains IMU events as well as change detection events, which increases the file size artificially in contrast to the other benchmarked formats.
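
The hunks above describe event streams as long lists of (timestamp, x, y, polarity) tuples rather than frames. As a hedged illustration of what that looks like in memory, here is a structured NumPy array with an assumed field layout (13 bytes per event with this packing; the on-disk encodings benchmarked in the post pack events far more aggressively):

```python
import numpy as np

# Assumed field layout for illustration only: microsecond timestamps,
# pixel coordinates and a binary polarity per event.
event_dtype = np.dtype([("t", "<u8"), ("x", "<u2"), ("y", "<u2"), ("p", "<u1")])

events = np.zeros(3, dtype=event_dtype)
events["t"] = [1_000, 1_012, 1_037]   # microseconds since recording start
events["x"] = [120, 121, 640]
events["y"] = [45, 45, 300]
events["p"] = [1, 0, 1]               # ON/OFF polarity

print(f"{events.nbytes} bytes for {len(events)} events")
```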

content/blog/open-neuromorphic-evolves-charter-first-executive-committee-election/index.md

Lines changed: 1 addition & 1 deletion
@@ -55,6 +55,6 @@ Can't make it? You can transfer your vote to another attending member. Details o
## Get Involved!

- This is a significant milestone for Open Neuromorphic, driven by our commitment to building a sustainable and impactful open-source community. Your participation is vital! Please review the charter, consider running for a position, and plan to attend the AGM to cast your vote.
+ This is a significant milestone for Open Neuromorphic, driven by our commitment to building a sustainable and impactful open-source community. Your participation is vital! Please review the charter, consider running for a position, and plan to attend the AGM to cast your vote on the [Executive Committee](/neuromorphic-computing/initiatives/executive-committee/).

Let's shape the future of ONM together!

content/blog/spiking-neural-network-framework-benchmarking/index.md

Lines changed: 6 additions & 6 deletions
@@ -18,13 +18,13 @@ show_author_bios: true
## Introduction

- Open Neuromorphic's [list of SNN frameworks](https://github.com/open-neuromorphic/open-neuromorphic) currently counts 11 libraries, and those are only the most popular ones! As the sizes of spiking neural network models grow thanks to deep learning, optimization becomes more important for researchers and practitioners alike. Training SNNs is often slow, as the stateful networks are typically fed sequential inputs. Today's most popular training method then is some form of backpropagation through time, whose time complexity scales with the number of time steps. We benchmark libraries that all take slightly different approaches on how to extend deep learning frameworks for gradient-based optimization of SNNs. We focus on the total time it takes to pass data forward and backward through the network as well as the memory required to do so. However, there are obviously other, non-tangible qualities of frameworks such as extensibility, quality of documentation, ease of install or support for neuromorphic hardware that we're not going to try to capture here. In our benchmarks, we use a single fully-connected (linear) and a leaky integrate and fire (LIF) layer. The input data has batch size of 16, 500 time steps and n neurons.
+ Open Neuromorphic's [SNN frameworks guide](/neuromorphic-computing/software/snn-frameworks/) currently counts 11 libraries, and those are only the most popular ones! As the sizes of [spiking neural network models](/neuromorphic-computing/software/snn-frameworks/) grow thanks to deep learning, optimization becomes more important for researchers and practitioners alike. Training SNNs is often slow, as the stateful networks are typically fed sequential inputs. Today's most popular training method then is some form of backpropagation through time, whose time complexity scales with the number of time steps. We benchmark libraries that all take slightly different approaches on how to extend deep learning frameworks for gradient-based optimization of SNNs. We focus on the total time it takes to pass data forward and backward through the network as well as the memory required to do so. However, there are obviously other, non-tangible qualities of frameworks such as extensibility, quality of documentation, ease of install or support for neuromorphic hardware that we're not going to try to capture here. In our benchmarks, we use a single fully-connected (linear) and a leaky integrate and fire (LIF) layer. The input data has batch size of 16, 500 time steps and n neurons.

## Benchmark Results

{{< chart data="framework-benchmarking-16k" caption="Comparison of time taken for forward and backward passes in different frameworks, for 16k neurons." mobile="framework-benchmarking-16k.png">}}

- The first figure shows runtime results for a 16k neuron network. The SNN libraries evaluated can be broken into three categories: 1. frameworks with tailored/custom CUDA kernels, 2. frameworks that purely leverage PyTorch functionality, and 3. a library that uses JAX exclusively for acceleration. For the custom CUDA libraries, [SpikingJelly](https://github.com/fangwei123456/spikingjelly) with a CuPy backend clocks in at just 0.26s for both forward and backward call combined. The libraries that use an implementation of [SLAYER](https://proceedings.neurips.cc/paper_files/paper/2018/hash/82f2b308c3b01637c607ce05f52a2fed-Abstract.html) ([Lava DL](https://github.com/lava-nc/lava-dl)) or [EXODUS](https://www.frontiersin.org/articles/10.3389/fnins.2023.1110444/full) ([Sinabs EXODUS](https://github.com/synsense/sinabs-exodus) / [Rockpool EXODUS](https://rockpool.ai/reference/_autosummary/nn.modules.LIFExodus.html?)) benefit from custom CUDA code and vectorization across the time dimension in both forward and backward passes and come within 1.5-2x the latency. It is noteworthy that such custom implementations exist for specific neuron models (such as the LIF under test), but not for arbitrary neuron models. On top of that, custom CUDA/CuPy backend implementations need to be compiled and then it is up to the maintainer to test it on different systems. Networks that are implemented in SLAYER, EXODUS or SpikingJelly with a CuPy backend cannot be executed on a CPU (unless converted).
+ The first figure shows runtime results for a 16k neuron network. The SNN libraries evaluated can be broken into three categories: 1. frameworks with tailored/custom CUDA kernels, 2. frameworks that purely leverage PyTorch functionality, and 3. a library that uses JAX exclusively for acceleration. For the custom CUDA libraries, [SpikingJelly](/neuromorphic-computing/software/snn-frameworks/spikingjelly/) with a CuPy backend clocks in at just 0.26s for both forward and backward call combined. The libraries that use an implementation of [SLAYER](https://proceedings.neurips.cc/paper_files/paper/2018/hash/82f2b308c3b01637c607ce05f52a2fed-Abstract.html) ([Lava DL](/neuromorphic-computing/software/snn-frameworks/lava/)) or [EXODUS](https://www.frontiersin.org/articles/10.3389/fnins.2023.1110444/full) ([Sinabs EXODUS](/neuromorphic-computing/software/snn-frameworks/sinabs/) / [Rockpool EXODUS](/neuromorphic-computing/software/snn-frameworks/rockpool/)) benefit from custom CUDA code and vectorization across the time dimension in both forward and backward passes and come within 1.5-2x the latency. It is noteworthy that such custom implementations exist for specific neuron models (such as the LIF under test), but not for arbitrary neuron models. On top of that, custom CUDA/CuPy backend implementations need to be compiled and then it is up to the maintainer to test it on different systems. Networks that are implemented in SLAYER, EXODUS or SpikingJelly with a CuPy backend cannot be executed on a CPU (unless converted).

In contrast, frameworks such as [snnTorch](/neuromorphic-computing/software/snn-frameworks/snntorch/), [Norse](/neuromorphic-computing/software/snn-frameworks/norse/), [Sinabs](/neuromorphic-computing/software/snn-frameworks/sinabs/) or [Rockpool](/neuromorphic-computing/software/snn-frameworks/rockpool/) are very flexible when it comes to defining custom neuron models.
For some libraries, that flexibility comes at a cost of slower computation.
@@ -47,13 +47,13 @@ The memory usage benchmarks were collected using PyTorch's [max_memory_allocated
The ideal library will often depend on a multitude of factors, such as accessible documentation, usability of the API or pre-trained models. Generally speaking, PyTorch offers good support when custom neuron models (that have additional states, recurrence) are to be explored. For larger networks, it will likely pay off to rely on CUDA-accelerated existing implementations, or ensure your model is compatible with the recent compilation techniques to leverage the backend-specific JIT optimizations. The development of Spyx offers an interesting new framework as it enables the flexible neuron definitions of PyTorch frameworks while also enabling the speed of libraries which utilize custom CUDA backends. One more note on the accuracy of gradient computation: In order to speed up computation, some frameworks will approximate this calculation over time. Networks will still manage to *learn* in most cases, but EXODUS, correcting an approximation in SLAYER and therefore calculating gradients that are equivalent to BPTT, showed that it can make a substantial difference in certain experiments. So while speed is extremely important, other factors such as memory consumption and quality of gradient calculation matter as well.

## Edits
- **13/08/2023**: Sumit Bam Shrestha fixed Lava's out-of-memory issue by disactivating quantization. That makes it one of the best performing frameworks.
+ **13/08/2023**: [Sumit Bam Shrestha](/contributors/sumit-bam-shrestha/) fixed Lava's out-of-memory issue by deactivating quantization. That makes it one of the best performing frameworks.

- **22/10/2023**: Kade Heckel reperformed experiments on an A100 and added his Spyx framework.
+ **22/10/2023**: [Kade Heckel](/contributors/kade-heckel/) reperformed experiments on an A100 and added his Spyx framework.

- **07/11/2023**: Cameron Barker containerised the benchmark suite and added the memory utilisation benchmark. The updated benchmarks were run on a RTX 3090 with a batchsize of 16.
+ **07/11/2023**: [Cameron Barker](/contributors/cameron-barker/) containerized the benchmark suite and added the memory utilization benchmark. The updated benchmarks were run on a RTX 3090 with a batchsize of 16.

- **19/2/2024**: Jens Pedersen updated the benchmark for Norse to use the correct neuron model and `torch.compile`.
+ **19/2/2024**: [Jens E. Pedersen](/contributors/jens-e-pedersen/) updated the benchmark for Norse to use the correct neuron model and `torch.compile`.

## Code and comments
The code for this benchmark is available [here](https://github.com/open-neuromorphic/open-neuromorphic.github.io/blob/main/content/english/blog/spiking-neural-network-framework-benchmarking/). The order of dimensions in the input tensor and how it is fed to the respective models differs between libraries.
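
The hunks above revolve around the benchmark setup: a single fully-connected layer feeding a LIF layer, batch size 16, 500 time steps, and a forward plus backward pass whose runtime and peak memory are measured. As a rough, hedged sketch of that kind of measurement, assuming snnTorch's `Leaky` neuron and a CUDA device (the sizes and hyperparameters are illustrative; this is not the benchmark code linked above):

```python
import time
import torch
import snntorch as snn

batch, steps, n = 16, 500, 4096                  # illustrative, smaller than the 16k run
device = "cuda"

fc = torch.nn.Linear(n, n).to(device)
lif = snn.Leaky(beta=0.95).to(device)            # LIF layer; beta is an assumed decay
x = torch.rand(steps, batch, n, device=device)

torch.cuda.reset_peak_memory_stats(device)
torch.cuda.synchronize()
start = time.time()

mem = lif.init_leaky()
spikes = []
for t in range(steps):                           # sequential time loop: the costly part
    spk, mem = lif(fc(x[t]), mem)
    spikes.append(spk)
torch.stack(spikes).sum().backward()             # backward through all 500 steps (BPTT)

torch.cuda.synchronize()
print(f"forward + backward: {time.time() - start:.2f} s")
print(f"peak memory: {torch.cuda.max_memory_allocated(device) / 2**30:.2f} GiB")
```

The last line uses the same `max_memory_allocated` counter that the hunk header above refers to for the memory benchmarks.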

content/blog/strategic-vision-open-neuromorphic/index.md

Lines changed: 2 additions & 2 deletions
@@ -20,15 +20,15 @@ show_author_bios: true
This post presents a **vision for Open Neuromorphic** towards more open, reproducible, and competitive neuromorphics.

- The post will be the first in a series that lays out the **Open Neuromorphic Strategic Initiative** where we later discuss **Neuromorphic UX** and **new initiatives** that will be kickstarted by the [newly elected Executive Committee](/blog/open-neuromorphic-evolves-charter-first-executive-committee-election/).
+ The post will be the first in a series that lays out the **Open Neuromorphic Strategic Initiative** where we later discuss **Neuromorphic UX** and **new initiatives** that will be kickstarted by the [newly elected Executive Committee](/neuromorphic-computing/initiatives/executive-committee/).

Join the discussion [on Discord](https://discord.gg/hUygPUdD8E), star us [on GitHub](https://github.com/open-neuromorphic/), follow us [on LinkedIn](https://www.linkedin.com/company/98345683/), and give us a watch [on YouTube](https://www.youtube.com/@openneuromorphic).

## What now, Open Neuromorphic?

Open Neuromorphic is almost 4 years old.

- We set out to make the field of neuromorphic engineering more transparent, open, and accessible to newcomers. It's been a tremendous success: Open Neuromorphic is the biggest online neuromorphic community *in the world*, our videos are seen by thousands of researchers, our material is reaching even further, and the 2000+ academics and students on our Discord server are actively and happily collaborating to further the scientific vision of neuromorphic engineering.
+ We set out to make the field of [neuromorphic engineering](/neuromorphic-computing/) more transparent, open, and accessible to newcomers. It's been a tremendous success: Open Neuromorphic is the biggest online neuromorphic community *in the world*, our videos are seen by thousands of researchers, our material is reaching even further, and the 2000+ academics and students on our Discord server are actively and happily collaborating to further the scientific vision of neuromorphic engineering.

But, let's face it: we still have a long way to go.
