content/neuromorphic-computing/student-talks/legendre-snn-on-loihi-2/index.md
for Time Series Classification (TSC). Reservoir Computing is a well-established
processing where a reservoir of statically (and recurrently) connected neurons compute high
dimensional temporal features, over which a linear readout layer learns the mapping to the output.'
---
In his recent work [1], Ram designed the Legendre-SNN (LSNN), a simple yet high-performing SNN model for univariate TSC, in which he used the Legendre Delay Network (LDN) [2] as a non-spiking reservoir (in fact, the LDN in the LSNN is implemented with just basic matrix operations). In a subsequent work (currently under review), he extended the LSNN to the DeepLSNN, which also handles multivariate time-series signals; in his experiments, DeepLSNN models outperformed a popular (and complex) LSTM-Conv integrated model [3] on more than 30% of 101 TSC datasets. His latest work, on which this talk focuses, evaluates the Legendre-SNN on the Loihi-2 chip [4].
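For readers unfamiliar with the LDN, the "basic matrix operations" amount to repeatedly applying a fixed linear state-space update. Below is a minimal sketch, not Ram's implementation: the closed form for `(A, B)` follows the LDN literature [2], while the Euler discretization, function names, and parameters are illustrative assumptions.

```python
import numpy as np

def ldn_matrices(d, theta):
    """State-space matrices (A, B) of a d-dimensional Legendre Delay
    Network approximating a sliding window of theta seconds [2]."""
    i = np.arange(d)
    A = np.zeros((d, d))
    for r in range(d):
        for c in range(d):
            # Closed form from the LDN derivation: row/column-dependent signs.
            A[r, c] = (2 * r + 1) * (-1.0 if r < c else (-1.0) ** (r - c + 1))
    A /= theta
    B = ((2 * i + 1) * (-1.0) ** i / theta).reshape(-1, 1)
    return A, B

def ldn_step(x, u, A, B, dt):
    """One (assumed) Euler-discretized update for scalar input u:
    nothing but matrix-vector products and additions."""
    return x + dt * (A @ x + B * u)
```

Feeding a signal through `ldn_step` sample by sample yields a state vector whose entries are coefficients of a Legendre-polynomial approximation of the signal's recent history; in the LSNN, the spiking layers then learn from these temporal features.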
The Legendre-SNN is composed of a non-spiking LDN followed by one spiking hidden layer and an output layer. The Loihi-2 chip has two kinds of on-chip computational resources: low-power x86 Lakemont (LMT) microprocessors (six in total) and NeuroCores (128 in total). The LMT cores support only 32-bit integer (INT32) operations, while the NeuroCores support the deployment of spiking networks. With minimal documentation available for programming the LMT cores, the challenge was how to deploy, and evaluate, the Legendre-SNN in its entirety on a Loihi-2 chip. In this talk, Ram will present the technical specifics of implementing the non-spiking LDN on an LMT core (the spiking network after the LDN is deployed on NeuroCores). His work "Legendre-SNN on Loihi-2" [4] adds to the scarce technical documentation on programming LMT cores and presents a pipeline to deploy an SNN model composed of non-spiking and spiking components entirely on Loihi-2.
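The exact fixed-point scheme used on the LMT core is not spelled out here, but the constraint is clear: with only INT32 arithmetic available, the LDN's matrix-vector products must run in fixed point. A hypothetical sketch in a Q16.16 format follows; the `Q` choice, helper names, and saturation behavior are assumptions for illustration, not the published pipeline.

```python
import numpy as np

Q = 16  # assumed Q16.16 fixed-point format: 16 integer bits, 16 fractional

def to_fixed(x):
    """Quantize a float array to Q16.16 stored in int32."""
    return np.round(np.asarray(x) * (1 << Q)).astype(np.int32)

def fixed_matvec(A_fx, x_fx):
    """Matrix-vector product under INT32-only constraints: widen to a
    64-bit accumulator, shift back to Q16.16, saturate to int32 range."""
    acc = (A_fx.astype(np.int64) @ x_fx.astype(np.int64)) >> Q
    return np.clip(acc, -(1 << 31), (1 << 31) - 1).astype(np.int32)
```

Multiplying two Q16.16 values gives a product scaled by 2^32, so the sketch accumulates in 64 bits before shifting back and saturating, the usual pattern when the target only guarantees 32-bit integer operations.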
References:
[1]: Gaurav, Ramashish, Terrence C. Stewart, and Yang Yi. "Reservoir based spiking models for univariate Time Series Classification." Frontiers in Computational Neuroscience 17 (2023): 1148284.
[2]: Voelker, Aaron R., and Chris Eliasmith. "Improving spiking dynamical networks: Accurate delays, higher-order synapses, and time cells." Neural Computation 30.3 (2018): 569-609.
[3]: Karim, Fazle, et al. "LSTM fully convolutional networks for time series classification." IEEE Access 6 (2017): 1662-1669.
[4]: Gaurav, Ramashish, Terrence C. Stewart, and Yang Yi. "Legendre-SNN on Loihi-2: Evaluation and Insights." NeurIPS 2024 Workshop on Machine Learning with New Compute Paradigms.