Commit 0c8aeaf

update SpiNNaker2 neurons / synapses

1 parent 9819b74

File tree

1 file changed (+3, -3 lines)
  • content/english/neuromorphic-computing/hardware/spinnaker-2-university-of-dresden

content/english/neuromorphic-computing/hardware/spinnaker-2-university-of-dresden/index.md

Lines changed: 3 additions & 3 deletions
@@ -17,8 +17,8 @@ product:
   announced_date: 2021-07-27
   applications: Real-time simulation of SNN; DNN; Symbolic; HPC
   chip_type: Digital
-  neurons: 1 million
-  synapses: 10000
+  neurons: 152k
+  synapses: 152m
   weight_bits: null
   activation_bits: null
   on_chip_learning: true
@@ -44,7 +44,7 @@ SpiNNaker2 is the successor of the SpiNNaker (Spiking Neural Network Architectur

 ## Overview

-SpiNNaker2 aims to achieve a 10x increase in core count over SpiNNaker1, with a target of 10 million ARM processor cores on a single machine. Along with architectural improvements, the shift to a 22nm manufacturing process is expected to provide over 10x more neural simulation capacity while staying within a comparable power envelope.
+SpiNNaker2 aims to achieve a 10x increase in core count over SpiNNaker1, with a target of 10 million ARM processor cores on a single machine. One SpiNNaker2 chip contains 152 thousand neurons and 152 million synapses across its 152 cores. Along with architectural improvements, the shift to a 22nm manufacturing process is expected to provide over 10x more neural simulation capacity while staying within a comparable power envelope.

 The system retains the flexible, software-based approach of SpiNNaker1, using independent ARM cores arranged in a Globally Asynchronous Locally Synchronous (GALS) configuration to model groups of neurons in parallel. Additional dedicated hardware has been added to accelerate common mathematical operations involved in synapse modeling and neural simulation. Dynamic voltage and frequency scaling techniques allow each core to scale its performance to match instantaneous load, optimizing the power efficiency.

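As a quick sanity check on the updated figures (a sketch based only on the totals quoted in this commit, not on any official per-core specification), the new values of 152 cores, 152k neurons, and 152m synapses per chip imply round per-core numbers:

```python
# Sanity check of the updated SpiNNaker2 figures from this commit.
# Assumption: the quoted totals (152k neurons, 152m synapses) are per chip,
# spread across the chip's 152 cores.
cores = 152
neurons = 152_000
synapses = 152_000_000

neurons_per_core = neurons // cores        # 1_000 neurons per core
synapses_per_neuron = synapses // neurons  # 1_000 synapses per neuron

print(neurons_per_core, synapses_per_neuron)  # prints: 1000 1000
```

The round ratios (1k neurons per core, 1k synapses per neuron) suggest the corrected values are derived per-core estimates rather than independently measured totals.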