You should start by reading [Amit et al. (1985)](https://www.dropbox.com/scl/fi/

- **Memory Storage:** Implement the Hebbian learning rule to compute the weight matrix, given a set of network configurations (memories). This is described in **Equation 1.5** of the paper:

Let $p$ be the number of patterns and $\xi_i^\mu \in \{-1, +1\}$ the value of neuron $i$ in pattern $\mu$. The synaptic coupling between neurons $i$ and $j$ is:

$$
J_{ij} = \sum_{\mu=1}^p \xi_i^\mu \xi_j^\mu
$$

Note that the matrix is symmetric ($J_{ij} = J_{ji}$), and there are no self-connections by definition ($J_{ii} = 0$).
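
A minimal sketch of this storage rule, assuming the patterns are stacked in a NumPy array `xi` of shape `(p, N)` (the same layout as the generation snippet later in this document); the helper name is chosen here, not taken from the paper:

```python
import numpy as np

def store_memories(xi):
    """Hebbian weight matrix for patterns xi of shape (p, N) with entries in {-1, +1}."""
    J = xi.T @ xi              # J_ij = sum over mu of xi_i^mu * xi_j^mu
    np.fill_diagonal(J, 0)     # no self-connections: J_ii = 0
    return J
```

Some treatments scale $J$ by $1/N$; the equation above omits that factor, so the sketch does too (the scaling does not change the sign dynamics used below).
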

- **Memory Retrieval:** Implement the retrieval rule using **Equation 1.3** and surrounding discussion. At each time step, each neuron updates according to its **local field**:

$$
S_i(t+1) = \text{sign}(h_i(t)) = \text{sign} \left( \sum_{j} J_{ij} S_j(t) \right)
$$

Here $S_i \in \{-1, +1\}$ is the current state of neuron $i$.
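
A sketch of one way to run these updates, here synchronously until the state stops changing; the synchronous schedule and the tie-break at $h_i = 0$ are assumptions of the sketch, not prescriptions from the paper:

```python
import numpy as np

def retrieve(J, S0, max_steps=100):
    """Iterate S <- sign(J @ S) from state S0 until a fixed point or max_steps."""
    S = S0.copy()
    for _ in range(max_steps):
        h = J @ S                        # local fields h_i = sum_j J_ij S_j
        S_new = np.where(h >= 0, 1, -1)  # sign(h), breaking ties toward +1
        if np.array_equal(S_new, S):     # fixed point reached
            break
        S = S_new
    return S
```
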
---
### 2. Test with a Small Network

Encode the following test memories in a Hopfield network with $N = 5$ neurons:

$$
\xi^1 = [+1, -1, +1, -1, +1]
$$

---

Questions to consider:

- **Network Size** (number of neurons)
- **Number of Stored Memories**

To generate $m$ memories $\xi^1, \dots, \xi^m$ for a network of size $N$, use:

```python
import numpy as np

# assumes N (network size) and m (number of memories) are already defined
xi = 2 * (np.random.rand(m, N) > 0.5) - 1
```

**Visualization 1:**

Create a heatmap:

- $x$-axis: network size
- $y$-axis: number of stored memories
- Color: proportion of memories retrieved with ≥99% accuracy
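
One way to fill in and draw this grid, reusing `store_memories` and `retrieve` from the sketches above; the grid of sizes, the use of each stored pattern itself as the initial state, and the helper name are all assumptions of the sketch:

```python
import numpy as np
import matplotlib.pyplot as plt

def proportion_retrieved(N, m, rng):
    """Fraction of m random patterns a size-N network holds with >= 99% accuracy,
    reading 'retrieved' as: starting at the stored pattern, the dynamics stay on it."""
    xi = 2 * (rng.random((m, N)) > 0.5) - 1
    J = store_memories(xi)
    hits = sum(np.mean(retrieve(J, xi[mu]) == xi[mu]) >= 0.99 for mu in range(m))
    return hits / m

rng = np.random.default_rng(0)
Ns = [25, 50, 100, 200]                 # example network sizes
Ms = list(range(1, 31))                 # example memory counts
P = np.array([[proportion_retrieved(N, m, rng) for N in Ns] for m in Ms])

plt.imshow(P, origin="lower", aspect="auto")
plt.xticks(range(len(Ns)), Ns)
plt.yticks(range(0, len(Ms), 5), Ms[::5])
plt.xlabel("network size N")
plt.ylabel("stored memories m")
plt.colorbar(label="proportion retrieved at ≥99% accuracy")
plt.show()
```
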

**Visualization 2:**

Plot the expected number of accurately retrieved memories vs. network size:

Let:

- $P[m, N] \in [0, 1]$: proportion of $m$ memories accurately retrieved in a network of size $N$
- $\mathbb{E}[R_N]$: expected number of successfully retrieved memories

Then:

$$
\mathbb{E}[R_N] = \sum_{m=1}^{M} m \cdot P[m, N]
$$

where $M$ is the maximum number of memories tested.
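
With the grid `P` from the previous sketch (rows indexed by $m$, columns by $N$), this sum is a single matrix-vector product; a sketch:

```python
import numpy as np
import matplotlib.pyplot as plt

m_values = np.arange(1, P.shape[0] + 1)   # m = 1, ..., M
expected_R = P.T @ m_values               # E[R_N] = sum_m m * P[m, N], one value per N

plt.plot(Ns, expected_R, marker="o")
plt.xlabel("network size N")
plt.ylabel(r"$\mathbb{E}[R_N]$")
plt.show()
```
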

**Follow-Up:**

- What relationship (if any) emerges between network size and capacity?
- Can you develop rules or intuitions that help predict a network's capacity?

---
#### Setup: A–B Pair Structure

Each memory consists of two parts:

- First half: **Cue** ($A$)
- Second half: **Response** ($B$)

If $N$ is odd:

- Let cue length = $\lfloor N/2 \rfloor$
- Let response length = $\lceil N/2 \rceil$

Each full memory:

$$
\xi^\mu = [A^\mu;\ B^\mu]
$$

For each trial:

1. **Choose a memory** $\xi^\mu$
2. **Construct initial state** $x$:
   - Cue half: set to $A^\mu$
   - Response half: set to 0
3. **Evolve the network** using the update rule:

$$
x_i \leftarrow \text{sign} \left( \sum_j J_{ij} x_j \right)
$$

   - Optionally: **clamp** the cue (i.e., hold cue values fixed)
4. **Evaluate success**:
   - Compare recovered response to $B^\mu$
   - Mark as successful if ≥99% of bits match:

$$
\frac{1}{|B|} \sum_{i \in \text{response}} \mathbb{1}[x^*_i = B^\mu_i] \geq 0.99
$$
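
A sketch of a single trial following these steps, reusing the storage sketch above; the function name, the convergence cap, and the synchronous update are assumptions. The clamped variant simply re-imposes the cue after every update:

```python
import numpy as np

def cued_recall_trial(J, xi_mu, cue_len, clamp=False, max_steps=100):
    """Cue with A^mu, zero the response half, evolve, and score against B^mu."""
    x = np.zeros_like(xi_mu)
    x[:cue_len] = xi_mu[:cue_len]               # cue half: A^mu; response half stays 0
    for _ in range(max_steps):
        x_new = np.where(J @ x >= 0, 1, -1)     # x_i <- sign(sum_j J_ij x_j)
        if clamp:
            x_new[:cue_len] = xi_mu[:cue_len]   # optionally hold the cue fixed
        if np.array_equal(x_new, x):
            break
        x = x_new
    return np.mean(x[cue_len:] == xi_mu[cue_len:]) >= 0.99   # >= 99% of B^mu bits match
```
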
#### Analysis

- Repeat across many $A$–$B$ pairs
- For each network size $N$, compute the **expected number** of correctly retrieved responses
- Plot this value as a function of $N$

#### Optional Extensions

- Compare performance with and without clamping the cue
- Try cueing with noisy or partial versions of $A$

---

Each memory pairs a drifting **context** half with an **item** half.

Context drift:

- Set $\text{context}^1$ randomly
- For each subsequent $\text{context}^{t+1}$, copy $\text{context}^t$ and flip ~5% of the bits
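
A sketch of the drift process, assuming a context length `L`; flipping each bit independently with probability 0.05 is one reading of "flip ~5% of the bits":

```python
import numpy as np

def drifting_contexts(n_memories, L, flip_prob=0.05, rng=None):
    """Context chain: context^1 random, each successor copies and flips ~5% of bits."""
    if rng is None:
        rng = np.random.default_rng()
    contexts = [2 * (rng.random(L) > 0.5).astype(int) - 1]   # context^1: random +/-1
    for _ in range(n_memories - 1):
        c = contexts[-1].copy()
        c[rng.random(L) < flip_prob] *= -1                   # independent ~5% flips
        contexts.append(c)
    return np.array(contexts)
```
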
#### Simulation Procedure

1. Store all 10 memories in the network.
2. For each memory $i = 1, \dots, 10$:
   - Cue the network with $\text{context}^i$
   - Set item neurons to 0
   - Run until convergence
   - For each stored memory $j$, compare recovered item to $\text{item}^j$
   - If ≥99% of bits match, record $j$ as retrieved
   - Record $\Delta = j - i$ (relative offset)
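
A sketch of this procedure, assuming each memory is the concatenation `[context; item]` with halves of length `L`, and reusing `store_memories` from earlier; the function name and convergence cap are assumptions:

```python
import numpy as np

def run_trial(contexts, items, max_steps=100):
    """Store the 10 context-item memories, cue each context, record offsets Delta."""
    memories = np.hstack([contexts, items])
    J = store_memories(memories)
    L = contexts.shape[1]
    offsets = []
    for i in range(len(memories)):
        x = np.zeros(memories.shape[1], dtype=int)
        x[:L] = contexts[i]                       # cue with context^i, item neurons at 0
        for _ in range(max_steps):                # run until convergence
            x_new = np.where(J @ x >= 0, 1, -1)
            if np.array_equal(x_new, x):
                break
            x = x_new
        for j in range(len(memories)):            # which stored item(s) were recovered?
            if np.mean(x[L:] == items[j]) >= 0.99:
                offsets.append(j - i)             # record Delta = j - i
    return offsets
```
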
#### Analysis

- Repeat the procedure (e.g., 100 trials)
- For each $\Delta \in [-9, +9]$, compute:
  - Probability of retrieval
  - 95% confidence interval
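
One way to compute these, using a normal-approximation (Wald) interval; both the interval choice and the decision to normalize each $\Delta$ by the number of cue positions for which $j = i + \Delta$ exists are assumptions of the sketch:

```python
import numpy as np

def retrieval_curve(all_offsets, n_trials, n_mem=10):
    """P(retrieval) and 95% CI half-width for each Delta in [-9, +9]."""
    deltas = np.arange(-(n_mem - 1), n_mem)
    probs, half = [], []
    for d in deltas:
        n_possible = (n_mem - abs(d)) * n_trials   # cues where j = i + d is in range
        k = sum(offsets.count(d) for offsets in all_offsets)
        p = k / n_possible
        probs.append(p)
        half.append(1.96 * np.sqrt(p * (1 - p) / n_possible))  # Wald 95% half-width
    return deltas, np.array(probs), np.array(half)
```
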
#### Visualization

Create a line plot:

- $x$-axis: Relative position $\Delta$
- $y$-axis: Retrieval probability
- Error bars: 95% confidence intervals
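
A sketch of the plot, assuming `all_offsets` holds the offset lists collected from 100 runs of the trial sketch above:

```python
import matplotlib.pyplot as plt

deltas, probs, half = retrieval_curve(all_offsets, n_trials=100)
plt.errorbar(deltas, probs, yerr=half, marker="o", capsize=3)
plt.xlabel(r"relative position $\Delta$")
plt.ylabel("retrieval probability")
plt.show()
```
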
Write a brief interpretation of the observed pattern.