content/about/_index.md (+1 -1)
@@ -46,7 +46,7 @@ ONM is dedicated to providing a comprehensive ecosystem for the neuromorphic com
* Our **[GitHub organization](https://github.com/open-neuromorphic)** serves as a platform for open-source neuromorphic projects. We welcome new contributions and can help migrate existing projects.
* A thriving **[Discord community](https://discord.gg/hUygPUdD8E)** for discussions, Q&A, networking, and real-time collaboration.
* **Clear Project Focus:** We concentrate on projects and resources related to:
- * Spiking Neural Networks (SNNs) for training, inference, Machine Learning, and neuroscience applications.
+ * [Spiking Neural Networks (SNNs)](/neuromorphic-computing/software/snn-frameworks/) for training, inference, Machine Learning, and neuroscience applications.
* Event-based sensor data handling and processing.
* Digital and mixed-signal neuromorphic hardware designs and concepts.
content/blog/digital-neuromorphic-hardware-read-list/index.md (+2 -2)
@@ -12,7 +12,7 @@ show_author_bios: true
Here's a list of articles and theses related to digital hardware designs for neuromorphic applications. I plan to update it regularly. Click on a title to be taken directly to the source!
- If you are new to neuromorphic computing, I strongly suggest getting a grasp of how an SNN works from [this paper](https://arxiv.org/abs/2109.12894). Otherwise, it will be pretty difficult to understand the content of the papers listed here.
+ If you are new to [neuromorphic computing](/neuromorphic-computing/), I strongly suggest getting a grasp of how an SNN works from [this paper](https://arxiv.org/abs/2109.12894). Otherwise, it will be pretty difficult to understand the content of the papers listed here.
## 2015
@@ -44,7 +44,7 @@ The Loihi chip employs **128 neuromorphic cores**, each of which consisting of *
[*A 0.086-mm2 12.7-pJ/SOP 64k-Synapse 256-Neuron Online-Learning Digital Spiking Neuromorphic Processor in 28nm CMOS*](https://arxiv.org/abs/1804.07858), Charlotte Frenkel et al., 2019
- In this paper, a digital neuromorphic processor is presented. The Verilog is also [open source](https://github.com/ChFrenkel/ODIN)!
+ In this paper, a digital neuromorphic processor is presented. The Verilog is also [open source](https://github.com/ChFrenkel/ODIN)! The processor is also known as [ODIN](/neuromorphic-computing/hardware/odin-frenkel/).
The neuron states and the synapse weights are stored in two foundry SRAMs on chip. In order to emulate a crossbar, **time-multiplexing** is adopted: the synapse weights and neuron states are updated sequentially instead of in parallel. On the core, **256 neurons (4kB SRAM)** and **256x256 synapses (64kB SRAM)** are embedded. This yields very high synapse and neuron densities: **741k synapses per square millimeter** and **3k neurons per square millimeter**, using a **28nm CMOS FDSOI** process.
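To make the time-multiplexing idea concrete, here is a minimal Python sketch, not the ODIN RTL or its exact update rule; the array names, leak/threshold values and the plain LIF dynamics are assumptions for illustration. One shared weight memory and one neuron-state memory are visited sequentially, one neuron per time slot, instead of instantiating a physical 256x256 crossbar.

```python
# Illustrative time-multiplexed update loop (not the ODIN hardware description).
import numpy as np

N = 256
weights = np.zeros((N, N), dtype=np.int8)   # stand-in for the 64 kB synapse SRAM
v_mem = np.zeros(N, dtype=np.int32)         # stand-in for the 4 kB neuron-state SRAM
threshold, leak = 100, 1                    # assumed values for illustration

def step(pre_spikes):
    """Update all neurons sequentially for the currently active input spikes."""
    fired = []
    for post in range(N):                          # one neuron per time slot
        v = v_mem[post]                            # read state from "SRAM"
        v += int(weights[pre_spikes, post].sum())  # accumulate active synapse weights
        v -= leak                                  # apply leak
        if v >= threshold:                         # fire and reset
            fired.append(post)
            v = 0
        v_mem[post] = v                            # write state back
    return fired

print(step(pre_spikes=[3, 42, 100]))               # e.g. three active presynaptic neurons
```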
content/blog/efficient-compression-event-based-data-neuromorphic-applications/index.md (+2 -2)
@@ -14,7 +14,7 @@ show_author_bios: true
---
## Datasets grow larger in size
- As neuromorphic algorithms tackle more complex tasks that are linked to bigger datasets, and event cameras mature to have higher spatial resolution, it is worth looking at how to encode that data efficiently when storing it on disk. To give you an example, Prophesee's latest automotive [object detection dataset](https://docs.prophesee.ai/stable/datasets.html) is some 3.5 TB in size for under 40h of recordings with a single camera.
+ As [neuromorphic algorithms](/neuromorphic-computing/software/) tackle more complex tasks that are linked to bigger datasets, and event cameras mature to have higher spatial resolution, it is worth looking at how to encode that data efficiently when storing it on disk. To give you an example, Prophesee's latest automotive [object detection dataset](https://docs.prophesee.ai/stable/datasets.html) is some 3.5 TB in size for under 40h of recordings with a single camera.
## Event cameras record with fine-grained temporal resolution
In contrast to conventional cameras, event cameras output changes in illumination, which is already a form of compression. But the output data rate is still a lot higher than that of conventional cameras because of the microsecond temporal resolution at which event cameras record. When streaming data, we get millions of tuples of microsecond timestamps, x/y coordinates and polarity indicators per second that look nothing like a frame but are a list of events:
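For illustration only (the values below are made up, and the field names are a common convention rather than any specific camera format), such a stream can be thought of as a flat array of (timestamp, x, y, polarity) records:

```python
# A hypothetical slice of an event stream: microsecond timestamps, pixel
# coordinates and an ON/OFF polarity bit per event -- no frames anywhere.
import numpy as np

event_dtype = np.dtype([("t", np.int64), ("x", np.int16), ("y", np.int16), ("p", np.int8)])
events = np.array(
    [(12, 480, 210, 1), (15, 481, 210, 0), (27, 112, 399, 1)],
    dtype=event_dtype,
)
print(events["t"])            # [12 15 27] -> microsecond timestamps
print(events.itemsize * 1e6)  # ~13 MB per million events at 13 bytes/event, before compression
```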
@@ -39,7 +39,7 @@ Ideally, we want to be close to the origin where we read fast and compression is
The authors of this post have released [Expelliarmus](/neuromorphic-computing/software/data-tools/expelliarmus/) as a lightweight, well-tested, pip-installable framework that can read and write different formats easily. If you're working with dat, evt2 or evt3 formats, why not give it a try?
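As a rough sketch of how reading such a file might look with Expelliarmus (the class and argument names below are recalled from its documentation and may differ between versions, so please check the project's README; the file path is hypothetical):

```python
# Hedged sketch: decode a Prophesee raw file into a structured numpy array.
# API names may differ across Expelliarmus versions -- see its docs.
from expelliarmus import Wizard

wizard = Wizard(encoding="evt3")          # other encodings such as "dat" or "evt2" are selected the same way
events = wizard.read("./recording.raw")   # hypothetical file path
print(events.shape, events.dtype.names)   # one (t, x, y, p) record per event
```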
## Summary
- When training spiking neural networks on event-based data, we want to be able to feed new data to the network as fast as possible. But given the high data rate of an event camera, the amount of data quickly becomes an issue itself, especially for more complex tasks. So we want to choose a good trade-off between a dataset size that's manageable and reading speed. We hope that this article will help future groups that record large-scale datasets to pick a good encoding format.
+ When training [spiking neural networks](/neuromorphic-computing/software/snn-frameworks/) on event-based data, we want to be able to feed new data to the network as fast as possible. But given the high data rate of an event camera, the amount of data quickly becomes an issue itself, especially for more complex tasks. So we want to choose a good trade-off between a dataset size that's manageable and reading speed. We hope that this article will help future groups that record large-scale datasets to pick a good encoding format.
## Comments
The aedat4 file contains IMU events as well as change detection events, which increases the file size artificially in contrast to the other benchmarked formats.
content/blog/open-neuromorphic-evolves-charter-first-executive-committee-election/index.md (+1 -1)
@@ -55,6 +55,6 @@ Can't make it? You can transfer your vote to another attending member. Details o
## Get Involved!
- This is a significant milestone for Open Neuromorphic, driven by our commitment to building a sustainable and impactful open-source community. Your participation is vital! Please review the charter, consider running for a position, and plan to attend the AGM to cast your vote.
+ This is a significant milestone for Open Neuromorphic, driven by our commitment to building a sustainable and impactful open-source community. Your participation is vital! Please review the charter, consider running for a position, and plan to attend the AGM to cast your vote for the [Executive Committee](/neuromorphic-computing/initiatives/executive-committee/).
content/blog/spiking-neural-network-framework-benchmarking/index.md (+6 -6)
@@ -18,13 +18,13 @@ show_author_bios: true
## Introduction
- Open Neuromorphic's [list of SNN frameworks](https://github.com/open-neuromorphic/open-neuromorphic) currently counts 11 libraries, and those are only the most popular ones! As the sizes of spiking neural network models grow thanks to deep learning, optimization becomes more important for researchers and practitioners alike. Training SNNs is often slow, as the stateful networks are typically fed sequential inputs. Today's most popular training method is some form of backpropagation through time, whose time complexity scales with the number of time steps. We benchmark libraries that all take slightly different approaches to extending deep learning frameworks for gradient-based optimization of SNNs. We focus on the total time it takes to pass data forward and backward through the network, as well as the memory required to do so. However, there are obviously other, non-tangible qualities of frameworks such as extensibility, quality of documentation, ease of install or support for neuromorphic hardware that we're not going to try to capture here. In our benchmarks, we use a single fully-connected (linear) and a leaky integrate-and-fire (LIF) layer. The input data has a batch size of 16, 500 time steps and n neurons.
+ Open Neuromorphic's [SNN frameworks guide](/neuromorphic-computing/software/snn-frameworks/) currently counts 11 libraries, and those are only the most popular ones! As the sizes of [spiking neural network models](/neuromorphic-computing/software/snn-frameworks/) grow thanks to deep learning, optimization becomes more important for researchers and practitioners alike. Training SNNs is often slow, as the stateful networks are typically fed sequential inputs. Today's most popular training method is some form of backpropagation through time, whose time complexity scales with the number of time steps. We benchmark libraries that all take slightly different approaches to extending deep learning frameworks for gradient-based optimization of SNNs. We focus on the total time it takes to pass data forward and backward through the network, as well as the memory required to do so. However, there are obviously other, non-tangible qualities of frameworks such as extensibility, quality of documentation, ease of install or support for neuromorphic hardware that we're not going to try to capture here. In our benchmarks, we use a single fully-connected (linear) and a leaky integrate-and-fire (LIF) layer. The input data has a batch size of 16, 500 time steps and n neurons.
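For orientation, here is a plain-PyTorch sketch of the kind of workload being timed. It is a stand-in written for this post, not the code of any benchmarked framework; the surrogate slope, decay and threshold values are arbitrary. It shows one linear layer feeding one LIF layer, unrolled over 500 time steps and trained with backpropagation through time.

```python
# Minimal stand-in for the benchmarked workload: Linear -> LIF, unrolled over
# T time steps, trained with BPTT via a surrogate gradient for the spike.
import torch

B, T, N = 16, 500, 512              # batch size, time steps, neurons (N is swept in the benchmark)
beta, threshold = 0.95, 1.0         # arbitrary LIF decay and firing threshold

def spike_fn(v):
    """Heaviside forward pass, sigmoid surrogate gradient backward."""
    surrogate = torch.sigmoid(5.0 * (v - threshold))
    return (v >= threshold).float().detach() + surrogate - surrogate.detach()

fc = torch.nn.Linear(N, N)
x = torch.randn(T, B, N)            # time-major input

v = torch.zeros(B, N)
spikes = []
for t in range(T):                  # sequential state update: cost grows with T
    v = beta * v + fc(x[t])
    spk = spike_fn(v)
    v = v - spk * threshold         # soft reset
    spikes.append(spk)

loss = torch.stack(spikes).mean()
loss.backward()                     # backpropagation through all 500 time steps
```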
## Benchmark Results
{{< chart data="framework-benchmarking-16k" caption="Comparison of time taken for forward and backward passes in different frameworks, for 16k neurons." mobile="framework-benchmarking-16k.png">}}
- The first figure shows runtime results for a 16k neuron network. The SNN libraries evaluated can be broken into three categories: 1. frameworks with tailored/custom CUDA kernels, 2. frameworks that purely leverage PyTorch functionality, and 3. a library that uses JAX exclusively for acceleration. For the custom CUDA libraries, [SpikingJelly](https://github.com/fangwei123456/spikingjelly) with a CuPy backend clocks in at just 0.26s for forward and backward calls combined. The libraries that use an implementation of [SLAYER](https://proceedings.neurips.cc/paper_files/paper/2018/hash/82f2b308c3b01637c607ce05f52a2fed-Abstract.html) ([Lava DL](https://github.com/lava-nc/lava-dl)) or [EXODUS](https://www.frontiersin.org/articles/10.3389/fnins.2023.1110444/full) ([Sinabs EXODUS](https://github.com/synsense/sinabs-exodus) / [Rockpool EXODUS](https://rockpool.ai/reference/_autosummary/nn.modules.LIFExodus.html?)) benefit from custom CUDA code and vectorization across the time dimension in both forward and backward passes and come within 1.5-2x of that latency. It is noteworthy that such custom implementations exist for specific neuron models (such as the LIF under test), but not for arbitrary neuron models. On top of that, custom CUDA/CuPy backend implementations need to be compiled, and it is then up to the maintainer to test them on different systems. Networks that are implemented in SLAYER, EXODUS or SpikingJelly with a CuPy backend cannot be executed on a CPU (unless converted).
+ The first figure shows runtime results for a 16k neuron network. The SNN libraries evaluated can be broken into three categories: 1. frameworks with tailored/custom CUDA kernels, 2. frameworks that purely leverage PyTorch functionality, and 3. a library that uses JAX exclusively for acceleration. For the custom CUDA libraries, [SpikingJelly](/neuromorphic-computing/software/snn-frameworks/spikingjelly/) with a CuPy backend clocks in at just 0.26s for forward and backward calls combined. The libraries that use an implementation of [SLAYER](https://proceedings.neurips.cc/paper_files/paper/2018/hash/82f2b308c3b01637c607ce05f52a2fed-Abstract.html) ([Lava DL](/neuromorphic-computing/software/snn-frameworks/lava/)) or [EXODUS](https://www.frontiersin.org/articles/10.3389/fnins.2023.1110444/full) ([Sinabs EXODUS](/neuromorphic-computing/software/snn-frameworks/sinabs/) / [Rockpool EXODUS](/neuromorphic-computing/software/snn-frameworks/rockpool/)) benefit from custom CUDA code and vectorization across the time dimension in both forward and backward passes and come within 1.5-2x of that latency. It is noteworthy that such custom implementations exist for specific neuron models (such as the LIF under test), but not for arbitrary neuron models. On top of that, custom CUDA/CuPy backend implementations need to be compiled, and it is then up to the maintainer to test them on different systems. Networks that are implemented in SLAYER, EXODUS or SpikingJelly with a CuPy backend cannot be executed on a CPU (unless converted).
In contrast, frameworks such as [snnTorch](/neuromorphic-computing/software/snn-frameworks/snntorch/), [Norse](/neuromorphic-computing/software/snn-frameworks/norse/), [Sinabs](/neuromorphic-computing/software/snn-frameworks/sinabs/) or [Rockpool](/neuromorphic-computing/software/snn-frameworks/rockpool/) are very flexible when it comes to defining custom neuron models.
For some libraries, that flexibility comes at a cost of slower computation.
@@ -47,13 +47,13 @@ The memory usage benchmarks were collected using PyTorch's [max_memory_allocated
The ideal library will often depend on a multitude of factors, such as accessible documentation, usability of the API or pre-trained models. Generally speaking, PyTorch offers good support when custom neuron models (that have additional states, recurrence) are to be explored. For larger networks, it will likely pay off to rely on existing CUDA-accelerated implementations, or to ensure your model is compatible with recent compilation techniques so as to leverage backend-specific JIT optimizations. The newly developed Spyx is an interesting framework, as it combines the flexible neuron definitions of PyTorch-based frameworks with the speed of libraries that use custom CUDA backends. One more note on the accuracy of gradient computation: in order to speed up computation, some frameworks approximate this calculation over time. Networks will still manage to *learn* in most cases, but EXODUS, which corrects an approximation in SLAYER and therefore calculates gradients that are equivalent to BPTT, showed that this can make a substantial difference in certain experiments. So while speed is extremely important, other factors such as memory consumption and quality of gradient calculation matter as well.
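The peak-memory figures mentioned above come from PyTorch's CUDA memory statistics. A minimal sketch of that measurement pattern (the model and input below are placeholders, not the benchmark code, and a CUDA device is assumed to be available) looks like this:

```python
# Sketch of measuring peak GPU memory for one forward+backward pass with PyTorch.
import torch

model = torch.nn.Linear(4096, 4096).cuda()   # placeholder model
x = torch.randn(16, 4096, device="cuda")     # placeholder input batch

torch.cuda.reset_peak_memory_stats()          # clear previous peak statistics
model(x).sum().backward()                     # run the pass being measured
torch.cuda.synchronize()                      # wait for all kernels to finish
print(f"peak memory: {torch.cuda.max_memory_allocated() / 2**20:.1f} MiB")
```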
## Edits
- **13/08/2023**: Sumit Bam Shrestha fixed Lava's out-of-memory issue by disactivating quantization. That makes it one of the best performing frameworks.
+ **13/08/2023**: [Sumit Bam Shrestha](/contributors/sumit-bam-shrestha/) fixed Lava's out-of-memory issue by deactivating quantization. That makes it one of the best performing frameworks.
- **22/10/2023**: Kade Heckel reperformed experiments on an A100 and added his Spyx framework.
+ **22/10/2023**: [Kade Heckel](/contributors/kade-heckel/) reperformed experiments on an A100 and added his Spyx framework.
- **07/11/2023**: Cameron Barker containerised the benchmark suite and added the memory utilisation benchmark. The updated benchmarks were run on a RTX 3090 with a batchsize of 16.
+ **07/11/2023**: [Cameron Barker](/contributors/cameron-barker/) containerized the benchmark suite and added the memory utilization benchmark. The updated benchmarks were run on an RTX 3090 with a batch size of 16.
- **19/2/2024**: Jens Pedersen updated the benchmark for Norse to use the correct neuron model and `torch.compile`.
+ **19/2/2024**: [Jens E. Pedersen](/contributors/jens-e-pedersen/) updated the benchmark for Norse to use the correct neuron model and `torch.compile`.
## Code and comments
The code for this benchmark is available [here](https://github.com/open-neuromorphic/open-neuromorphic.github.io/blob/main/content/english/blog/spiking-neural-network-framework-benchmarking/). The order of dimensions in the input tensor and how it is fed to the respective models differs between libraries.
content/blog/strategic-vision-open-neuromorphic/index.md (+2 -2)
@@ -20,15 +20,15 @@ show_author_bios: true
This post presents a **vision for Open Neuromorphic** towards more open, reproducible, and competitive neuromorphics.
- The post will be the first in a series that lays out the **Open Neuromorphic Strategic Initiative** where we later discuss **Neuromorphic UX** and **new initiatives** that will be kickstarted by the [newly elected Executive Committee](/blog/open-neuromorphic-evolves-charter-first-executive-committee-election/).
+ The post will be the first in a series that lays out the **Open Neuromorphic Strategic Initiative** where we later discuss **Neuromorphic UX** and **new initiatives** that will be kickstarted by the [newly elected Executive Committee](/neuromorphic-computing/initiatives/executive-committee/).
Join the discussion [on Discord](https://discord.gg/hUygPUdD8E), star us [on GitHub](https://github.com/open-neuromorphic/), follow us [on LinkedIn](https://www.linkedin.com/company/98345683/), and give us a watch [on YouTube](https://www.youtube.com/@openneuromorphic).
## What now, Open Neuromorphic?
Open Neuromorphic is almost 4 years old.
- We set out to make the field of neuromorphic engineering more transparent, open, and accessible to newcomers. It's been a tremendous success: Open Neuromorphic is the biggest online neuromorphic community *in the world*, our videos are seen by thousands of researchers, our material is reaching even further, and the 2000+ academics and students on our Discord server are actively and happily collaborating to further the scientific vision of neuromorphic engineering.
+ We set out to make the field of [neuromorphic engineering](/neuromorphic-computing/) more transparent, open, and accessible to newcomers. It's been a tremendous success: Open Neuromorphic is the biggest online neuromorphic community *in the world*, our videos are seen by thousands of researchers, our material is reaching even further, and the 2000+ academics and students on our Discord server are actively and happily collaborating to further the scientific vision of neuromorphic engineering.
But, let's face it: we still have a long way to go.