
Commit adc84bd

feat(katex): upgrade katex version to 0.16.22
fix(blog): add guard to renderMathInElement function
feat(blog): add math shortcode for inline equations
1 parent 0cf659a

File tree (4 files changed: +154, -126 lines)

  • content/blog
    • spiking-neurons-digital-hardware-implementation
    • truenorth-deep-dive-ibm-neuromorphic-chip-design
  • layouts

content/blog/spiking-neurons-digital-hardware-implementation/index.md

Lines changed: 51 additions & 29 deletions
@@ -4,45 +4,59 @@ date: 2023-01-02
description: "Learn how to model Leaky Integrate and Fire (LIF) neurons in digital hardware. Understand spike communication, synapse integration, and more for hardware implementation."
math: true
draft: false
author:
  - "Fabrizio Ottati"
image: banner.png
tags: ["hardware", "digital", "spiking", "snn", "rtl", "verilog", "AI", "machine learning"]
show_author_bios: true
---

## Introduction

In this article, we will try to model a layer of Leaky Integrate and Fire (LIF) spiking neurons using digital hardware: registers, memories, adders and so on. To do so, we will consider a single output neuron connected to multiple input neurons from a previous layer.
{{< image src="neurons-connected.png" position="center" alt="Multiple pre-synaptic neurons connected to a post-synaptic one." caption="Multiple pre-synaptic neurons connected to a post-synaptic one." >}}

In a Spiking Neural Network (SNN), neurons communicate by means of **spikes**: these activation voltages are converted into currents through the **synapses**, charging the **membrane potential** of the destination neuron. In the following, the destination neuron is denoted as the **post-synaptic** neuron, with index $i$, while the input neuron under consideration is denoted as the **pre-synaptic** neuron, with index $j$.

We denote the input spike train incoming from the pre-synaptic neuron with $\sigma_{j}(t)$:
{{< math >}}
\sigma_{j}(t) = \sum_{k} \delta(t-t_{k})
{{< /math >}}
where $t_{k}$ are the spike timestamps of the spike train $\sigma_{j}(t)$.

The **synapse** connecting the pre-synaptic neuron with the post-synaptic neuron is denoted with $w_{ij}$. All the incoming spike trains are then **integrated** by the post-synaptic neuron membrane; the integration function can be modeled by a **first-order low-pass filter**, denoted with $\alpha_{i}(t)$:
{{< math >}}
\alpha_{i}(t) = \frac{1}{\tau_{u_{i}}} e^{-\frac{t}{\tau_{u_{i}}}}
{{< /math >}}
The spike train incoming from the pre-synaptic neuron, hence, is convolved with the membrane function; in real neurons, this corresponds to the **input currents** coming from the pre-synaptic neurons that **charge** the post-synaptic neuron membrane potential, $v_{i}(t)$. The sum of the currents in input to the post-synaptic neuron is denoted with $u_{i}(t)$ and modeled through the following equation:
{{< math >}}
u_{i}(t) = \sum_{j \neq i}{w_{ij} \cdot (\alpha_{i} \ast \sigma_{j})(t)}
{{< /math >}}
Each pre-synaptic neuron contributes with a current (spike train multiplied by the $w_{ij}$ synapse) and these sum up at the input of the post-synaptic neuron. Given the membrane potential of the destination neuron, denoted with $v_{i}(t)$, the differential equation describing its evolution through time is the following:
{{< math >}}
\frac{\partial}{\partial t} v_{i}(t) = -\frac{1}{\tau_{v}} v_{i}(t) + u_{i}(t)
{{< /math >}}
In addition to the input currents, we have the **neuron leakage**, $\frac{1}{\tau_{v}} v_{i}(t)$, modeled through a **leakage coefficient** $\frac{1}{\tau_{v}}$ that multiplies the membrane potential.

## Discretising the model

Such a differential equation cannot be solved directly with the discrete arithmetic available in digital hardware; hence, we need to **discretise** the equation. This discretisation leads to the following result:
{{< math >}}
v_{i}[t] = \beta \cdot v_{i}[t-1] + (1 - \beta) \cdot u_{i}[t] - \theta \cdot S_{i}[t]
{{< /math >}}
where $\beta$ is the **decay coefficient** associated with the leakage. We embed $(1-\beta)$ in the input current $u_{i}[t]$ by merging it with the synapse weights as a scaling factor; in this way, the input current $u_{i}[t]$ is **normalised** regardless of the value of the decay constant $\tau_{v}$.
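
One way to arrive at this update rule (a sketch, assuming a sampling period $\Delta t$ and an input current held constant during each step) is to integrate the differential equation exactly over one step:
{{< math >}}
v_{i}(t+\Delta t) = e^{-\frac{\Delta t}{\tau_{v}}} \cdot v_{i}(t) + \tau_{v} \cdot \left(1 - e^{-\frac{\Delta t}{\tau_{v}}}\right) \cdot u_{i}(t)
{{< /math >}}
Defining $\beta = e^{-\frac{\Delta t}{\tau_{v}}}$ and folding the factor $\tau_{v}$ into the synapse weights yields $v_{i}[t] = \beta \cdot v_{i}[t-1] + (1-\beta) \cdot u_{i}[t]$; the $-\theta \cdot S_{i}[t]$ term is then added to model the reset described below.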
Notice that the **membrane reset** mechanism has been added: when a neuron **spikes**, its membrane potential goes back to the rest potential (usually equal to zero), and this is modeled by **subtracting the threshold** $\theta$ from $v_{i}(t)$ when an output spike occurs. The output spike is modeled through a function $S_{i}[t]$:
{{< math >}}
S_{i}[t] = 1 ~\text{if}~ v_{i}[t] \gt \theta ~\text{else}~ 0
{{< /math >}}
This is equal to 1 at spike time (i.e. if at timestamp $t$ the membrane potential $v_{i}[t]$ is larger than the threshold $\theta$) and 0 elsewhere.
The input current is given by:
{{< math >}}
u_{i}[t] = \sum_{j \neq i}{w_{ij} \cdot S_{j}[t]}
{{< /math >}}
Notice that since $S_{j}[t]$ is either 0 or 1, the input current $u_{i}[t]$ is equal to the **sum of the synapse weights** of the pre-synaptic neurons that spike at timestamp $t$.
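
To make this concrete, here is a small behavioral Verilog sketch of this masked sum (ours, not from the post; parameter names and widths are illustrative assumptions). It computes $u_{i}[t]$ combinationally, whereas the architecture developed below computes it sequentially, one weight per memory read:

```verilog
// Illustrative sketch: u_i[t] as a masked sum of the synapse weights,
// where only the weights of spiking pre-synaptic neurons contribute.
module input_current #(
    parameter N      = 8,  // number of pre-synaptic neurons (assumption)
    parameter W_BITS = 8   // synapse weight bit width (assumption)
) (
    input  wire [N*W_BITS-1:0]     w_flat, // weights w_ij, concatenated
    input  wire [N-1:0]            s,      // spike bits S_j[t]
    output reg signed [W_BITS+4:0] u       // u_i[t], with headroom for the sum
);
    integer j;
    reg signed [W_BITS-1:0] w_j;
    always @* begin
        u = 0;
        for (j = 0; j < N; j = j + 1) begin
            w_j = w_flat[j*W_BITS +: W_BITS]; // select weight j
            if (s[j]) u = u + w_j;            // add w_ij only if S_j[t] = 1
        end
    end
endmodule
```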
## Storage and addressing of neuron states
@@ -59,10 +73,12 @@ Since there are $M$ neurons in the layer, we need an $M$-entries vector, denoted
An **address** is associated with each neuron, which can be thought of as the index $i$ in the $V[t]$ vector; to obtain $v_{i}[t]$, the post-synaptic neuron address is used to index the membrane potentials memory $V[t]$.
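
A minimal Verilog sketch of this memory follows (ours, with illustrative parameter names and sizes): a register array holding the $M$ membrane potentials, indexed by the neuron address:

```verilog
// Illustrative sketch: membrane potential memory for M neurons, with a
// synchronous read port and a write port, both indexed by the neuron address.
module membrane_mem #(
    parameter M      = 16, // number of neurons in the layer (assumption)
    parameter A_BITS = 4,  // address width, log2(M) (assumption)
    parameter V_BITS = 16  // membrane potential bit width (assumption)
) (
    input  wire                     clk,
    input  wire                     we,    // write enable
    input  wire [A_BITS-1:0]        addr,  // post-synaptic neuron address i
    input  wire signed [V_BITS-1:0] v_in,  // updated potential to write back
    output reg  signed [V_BITS-1:0] v_out  // v_i[t] read from memory
);
    reg signed [V_BITS-1:0] V [0:M-1];     // the V[t] vector
    always @(posedge clk) begin
        if (we) V[addr] <= v_in;           // write back v_i[t+1]
        v_out <= V[addr];                  // registered read of v_i[t]
    end
endmodule
```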
We are able to store and retrieve a post-synaptic neuron membrane potential using a memory; now, we would like to **charge it with the pre-synaptic neurons' currents** in order to emulate the behaviour of a neuron membrane; to do that, we need to get the corresponding input synapses $W_{i}$, **multiply** these by the spikes of the associated pre-synaptic neurons, sum them up and, then, accumulate them in the post-synaptic neuron membrane.

Let us start from a single input pre-synaptic neuron:
{{< math >}}
u_{ij}[t] = w_{ij} \cdot S_{j}[t]
{{< /math >}}
We know that $S_{j}[t]$ is either 1 or 0; hence, we have either $u_{ij}[t] = w_{ij}$ or $u_{ij}[t] = 0$; this means that the synapse weight is **either added or not** to the total current $u_{i}[t]$; consequently, the weight $w_{ij}$ is read from memory **only if the corresponding pre-synaptic neuron spikes!** Given a layer of $M$ neurons, each of which is connected in input to $N$ synapses, we can think of grouping the $M \cdot N$ weights in a **matrix**, which can be associated with another memory array, denoted with $W$.
{{< image src="synapses-weights.png" position="center" alt="The synapses weights memory." caption="The synapses weights memory.">}}
@@ -83,33 +99,39 @@ To **prevent multiple read-write cycles** due to multiple spiking pre-synaptic n
A **multiplexer** is placed on one side of the adder; in this way:
- the first weight $w_{i0}$ to be accumulated is added to the $v_{i}[t]$ read from memory and saved to the membrane register:
{{< math >}}
v_{i}[t+1] = v_{i}[t] + w_{i0}
{{< /math >}}
- the successive weights are added to the membrane register content, so that all the currents are accumulated before writing $v_{i}[t+1]$ back to memory; using a non-rigorous notation, this can be translated to the following equation:
{{< math >}}
v_{i}[t+1] = v_{i}[t+1] + w_{ij},~ 0 \lt j \leq N
{{< /math >}}
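
A rough behavioral sketch of this accumulate path in Verilog (ours, not the post's exact circuit; signal names and widths are illustrative assumptions), with the multiplexer selecting the memory value only for the first accumulation:

```verilog
// Illustrative sketch: one synapse weight is accumulated per clock cycle;
// `first` steers the multiplexer so that the first addition starts from the
// membrane potential read from memory, the following ones from the register.
module membrane_acc #(
    parameter V_BITS = 16, // membrane potential bit width (assumption)
    parameter W_BITS = 8   // synapse weight bit width (assumption)
) (
    input  wire                     clk,
    input  wire                     en,    // accumulate in this cycle
    input  wire                     first, // 1 for the first weight of the update
    input  wire signed [V_BITS-1:0] v_mem, // v_i[t] read from memory
    input  wire signed [W_BITS-1:0] w,     // synapse weight w_ij
    output reg  signed [V_BITS-1:0] v_reg  // membrane register
);
    // The multiplexer on one side of the adder.
    wire signed [V_BITS-1:0] operand = first ? v_mem : v_reg;
    always @(posedge clk)
        if (en) v_reg <= operand + w; // accumulate one synaptic contribution
endmodule
```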
## Excitatory and inhibitory neurons

Our post-synaptic neuron is able to accumulate spikes in its membrane; however, input spikes do not always result in membrane potential charging! In fact, a pre-synaptic neuron can be **excitatory** (i.e. it **charges** the post-synaptic neuron membrane) or **inhibitory** (i.e. it **discharges** the post-synaptic neuron membrane); in the digital circuit, this corresponds to **adding** or **subtracting**, respectively, the synapse weight $w_{ij}$ to or from $v_{i}[t]$. This functionality can be added to the architecture by placing an adder capable of performing **both additions and subtractions**, choosing between these with a control signal generated by an **FSM (Finite State Machine)**, a sequential digital circuit that evolves through a series of states depending on its inputs and, consequently, generates control signals for the rest of the circuit.
{{< image src="inhibitory.png" position="center" alt="Control circuit for choosing between excitatory and inhibitory stimulation." caption="Control circuit for choosing between excitatory and inhibitory stimulation." >}}

This FSM, given the operation to be executed on the post-synaptic neuron, chooses whether the adder has to add or subtract the synapse current.
However, is this design efficient in terms of the resources employed? Keep in mind that inhibitory and excitatory neurons are chosen at **chip programming time**; this means that **the neuron type does not change during chip operation** (although, with the solution we are about to propose, changing the neuron type on-the-fly would not be a problem); hence, we can **embed this information** in the neuron description by **adding a bit to the synapse weights memory row** that, depending on its value, denotes that neuron as excitatory or inhibitory.
{{< image src="synapse-encoding.png" position="center" alt="Synapses weight storage in memory." caption="Synapses weight storage in memory." >}}

Suppose that, given a pre-synaptic neuron, all its $M$ output synapses are stored in a memory row of $n$-bit words, where $n$ is the number of bits to which the synapse weight is quantized. At the end of the memory row $j$, we add a bit denoted with $e_{j}$ that identifies the neuron type and that is read together with the weights from the same memory row: if the pre-synaptic neuron $j$ is **excitatory**, $e_{j}=1$ and the weight is **added**; if it is **inhibitory**, $e_{j}=0$ and the weight is **subtracted**; in this way, **the $e_{j}$ field of the synapse can drive the adder directly**.
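
As a sketch (ours; names and widths are illustrative assumptions), the adder driven directly by the $e_{j}$ bit could be described as:

```verilog
// Illustrative sketch: the excitatory/inhibitory bit e_j, stored alongside
// the weights, directly selects between addition and subtraction.
module add_sub #(
    parameter V_BITS = 16, // membrane potential bit width (assumption)
    parameter W_BITS = 8   // synapse weight bit width (assumption)
) (
    input  wire                     e_j,   // 1: excitatory, 0: inhibitory
    input  wire signed [V_BITS-1:0] v,     // current membrane value
    input  wire signed [W_BITS-1:0] w,     // synapse weight w_ij
    output wire signed [V_BITS-1:0] v_next // updated membrane value
);
    assign v_next = e_j ? (v + w) : (v - w);
endmodule
```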
{{< image src="modified-adder.png" position="center" alt="Using the neuron type bit to drive the adder." caption="Using the neuron type bit to drive the adder." >}}

## Leakage

Let us introduce the characteristic feature of the LIF neuron: the **leakage**! We shall choose a (constant) leakage factor $\beta$ and multiply it by $v_{i}[t]$ to obtain $v_{i}[t+1]$, which is **lower** than $v_{i}[t]$ since some current has leaked from the membrane, and we model this through $\beta$:
{{< math >}}
v_{i}[t+1] = \beta \cdot v_{i}[t]
{{< /math >}}
However, multiplication is an **expensive** operation in hardware; furthermore, the leakage factor is **smaller than one**, so we would need to perform a **fixed-point multiplication** or, even worse, a **division**! How can we solve this problem?

If we choose $\beta$ as a power of $\frac{1}{2}$, such as $2^{-n}$, the multiplication becomes **equivalent to an $n$-position right shift**! A really **hardware-friendly** operation!
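
In Verilog, this could be sketched as follows (ours; the shift amount and width are illustrative assumptions):

```verilog
// Illustrative sketch: with beta = 2^-n, the leak v_i[t+1] = beta * v_i[t]
// reduces to an arithmetic right shift by n positions.
module leak #(
    parameter V_BITS  = 16, // membrane potential bit width (assumption)
    parameter N_SHIFT = 4   // beta = 2^-4 (assumption)
) (
    input  wire signed [V_BITS-1:0] v,       // v_i[t]
    output wire signed [V_BITS-1:0] v_leaked // beta * v_i[t]
);
    assign v_leaked = v >>> N_SHIFT; // >>> preserves the sign bit
endmodule
```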
{{< image src="leak.png" alt="Leakage circuit." position="center" caption="Leakage circuit." >}}
@@ -125,9 +147,9 @@ Denoting with `adder_ctrl` the signal which controls the adder and with `leak_op
`adder_ctrl` can be obtained as the logic AND operation of `leak_op_n` and $e_{j}$ so that, when `leak_op_n=0`, `adder_ctrl=0` regardless of the value of $e_{j}$ and a subtraction is performed by the adder.
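
This gating amounts to a single gate in hardware; as a sketch (ours, with illustrative port names):

```verilog
// Illustrative sketch: force a subtraction during the leak operation,
// otherwise let the neuron type bit e_j decide.
module adder_ctrl_gate (
    input  wire leak_op_n,  // 0: leak operation in progress
    input  wire e_j,        // 1: excitatory, 0: inhibitory
    output wire adder_ctrl  // 1: add, 0: subtract
);
    assign adder_ctrl = leak_op_n & e_j;
endmodule
```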

## Spike mechanism
Our neuron needs to spike! If this is encoded as a single digital bit, given the spiking threshold $\theta$, we **compare $v_{i}[t]$ to $\theta$** and generate a logic 1 in output **when the membrane potential is larger than the threshold**. This can be implemented using a **comparator** circuit.
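
A sketch of the comparator, together with the threshold-subtraction reset introduced earlier (ours; the threshold value and width are illustrative assumptions):

```verilog
// Illustrative sketch: spike generation by comparison with the threshold,
// and membrane reset by subtracting theta when a spike occurs.
module spike_unit #(
    parameter V_BITS = 16,                    // membrane bit width (assumption)
    parameter signed [V_BITS-1:0] THETA = 100 // threshold theta (assumption)
) (
    input  wire signed [V_BITS-1:0] v,       // membrane potential v_i[t]
    output wire                     spike,   // S_i[t]
    output wire signed [V_BITS-1:0] v_reset  // membrane after the reset
);
    assign spike   = (v > THETA);             // the comparator
    assign v_reset = spike ? (v - THETA) : v; // subtract theta on spike
endmodule
```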
{{< image src="spike.png" alt="Spike circuit." position="center" caption="Spike circuit." >}}
@@ -149,10 +171,10 @@ The resulting circuit is the following.
Here we are, with a first prototype of our LIF layer digital circuit. In the next episode:
- we will make it actually work. Right now, this is a functional model that needs some modifications to behave correctly as a spiking neurons layer.
- we will implement it in Verilog.
- we will simulate it using open source tools, such as [Verilator](https://www.veripool.org/verilator/).

## Acknowledgements
I would like to thank [Jason Eshraghian](https://jasoneshraghian.com), [Steven Abreu](https://stevenabreu.com) and [Gregor Lenz](https://lenzgregor.com) for the valuable corrections and comments that made this article way better than the original draft!