Clone the repository

```bash
git clone https://github.com/samp5/ha141.git snn
```

Initialize the Pybind11 submodule with

```bash
cd snn
git submodule update --init
```

Create the python library and server executable

```bash
make pybind && make server
```

This will create a python package in `snn/extern` and a server executable `server.out` in `build/server/`.

Create a virtual environment (if you want) and install the necessary dependencies

```bash
cd extern
python3 -m venv ./venv
source venv/bin/activate
pip install networkx numpy pandas scikit-learn
```
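To confirm the build, a quick smoke test (assuming you run it from `snn/extern`, or with that directory on your `PYTHONPATH`):

```python
# Run from snn/extern with the venv active.
import snn

# Printing the default configuration confirms the module loaded.
print(snn.pySNN.getDefaultConfig())
```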
- The `snn` module has a small interface but deep functionality.
- Before constructing the network, ensure the following environment variables are set:

```bash
SNN_SERVER_DIR=/path/to/server_dir  # if you don't move the executable, set this to PROJECT_ROOT/build/server
SNN_AUTO_START_SERVER='1' | '0'     # whether or not to automatically run the server executable
```
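These can also be set from Python before the network is constructed. A minimal sketch (the path below is a placeholder):

```python
import os

# Point the client at the directory containing server.out (placeholder path).
os.environ["SNN_SERVER_DIR"] = "/path/to/snn/build/server"
# Let the client start the server automatically.
os.environ["SNN_AUTO_START_SERVER"] = "1"
```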
- To obtain a "default" dictionary,

```python
default = snn.pySNN.getDefaultConfig()
```

which returns:

```cpp
ConfigDict dict = {
    {"refractory_duration", 5},            // the time for which neurons ignore messages after firing
    {"initial_membrane_potential", -6.0},
    {"activation_threshold", -5.0},
    {"refractory_membrane_potential", -7.0},
    {"tau", 100.0},                        // the decay for a single time step is calculated as
                                           // (current_membrane_potential - refractory_membrane_potential) / tau
    {"max_latency", 10},                   // the maximum latency for an input neuron
    {"max_synapse_delay", 2},
    {"min_synapse_delay", 1},
    {"max_weight", 1.0},
    {"poisson_prob_of_success", 0.8},
    {"time_per_stimulus", 200},
    {"seed", -1}};                         // -1 uses system time; otherwise specify a seed
```
The function `pySNN.updateConfig(dict: dict[string: double])` can be used to update the configuration between calls to `runBatch`.
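For instance, a minimal sketch (assuming a constructed network `net`, as in the examples below):

```python
cfg = snn.pySNN.getDefaultConfig()
cfg["tau"] = 50.0      # slow the membrane decay
net.updateConfig(cfg)  # applies to subsequent calls to runBatch
```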
Note

Max and min synapse delay must be set prior to initialization and will not affect the network's behavior if altered later. These values only matter if you are not including synapse delays in the initialization dictionary. See Network Initialization.
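A sketch of setting these bounds before construction (key names as listed in the default configuration above):

```python
cfg = snn.pySNN.getDefaultConfig()
cfg["min_synapse_delay"] = 1  # must be set before initialize()
cfg["max_synapse_delay"] = 5
net = snn.client_snn(cfg)     # delays omitted from the adjacency dict are drawn within these limits
```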
client_snn.initialize( adjacencyDict : dict[tuple[int, int] : dict[tuple[int, int] : dict[string : float]]] )
This method is used to set the number of neurons, their connections, and the attributes of those connections.
- The adjacencyDict should be a dictionary with the following structure:

```python
{
    (0, 0): {
        (0, 1): {
            "weight": 1.0,
            "delay": 1.0,
        },
        (0, 2): {
            "weight": 2.0,
            "delay": 1.0,
        },
    },
    (0, 1): {
        (1, 1): {
            "weight": 1.0,
            "delay": 2.0,
        },
        (0, 2): {
            "weight": 2.0,
            "delay": 1.0,
        },
    },
    ...
}
```
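For a small network, such a dictionary can be built by hand. A sketch (assuming a constructed `net` as in the examples below):

```python
# Three neurons: (0, 0) feeds (0, 1) and (0, 2); (0, 1) feeds (0, 2).
adj = {
    (0, 0): {
        (0, 1): {"weight": 1.0, "delay": 1.0},
        (0, 2): {"weight": 2.0, "delay": 1.0},
    },
    (0, 1): {
        (0, 2): {"weight": 0.5, "delay": 2.0},
    },
    (0, 2): {},  # no outgoing connections
}
net.initialize(adj)
```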
This structure can also be obtained from `networkx`. For example:
```python
import random
import networkx as nx
import snn

# get a graph with 7056 neurons
G = nx.navigable_small_world_graph(84, seed=1)

# add attributes to that graph
for n in G:
    for nbr in G[n]:
        G[n][nbr]["weight"] = random.random() * 9 + 1  # weight in [1, 10]
        G[n][nbr]["delay"] = random.random() * 7 + 1   # delay in [1, 8]

default = snn.pySNN.getDefaultConfig()  # get the default config
default["tau"] = 20.0                   # set something custom
net = snn.client_snn(default)           # construct our network

# initialize our network with the dictionary!
net.initialize(nx.to_dict_of_dicts(G))
```
Note

The final dictionary layer keys, `weight` and `delay`, are optional; if omitted, they will be randomly determined via a random number generator with limits that depend on `max_synapse_delay` and `min_synapse_delay` (specified in the configuration).

The `adjacencyDict` can be automatically generated from `networkx` as long as the graph is a directed grid. See `networkx.navigable_small_world_graph` for a strict definition.
```python
import random
import networkx as nx
import numpy as np
import snn

G = nx.navigable_small_world_graph(10, seed=1)

# since a directed grid is NOT weighted, you have to add weights
# optionally add delays
for n in G:
    for nbr in G[n]:
        G[n][nbr]["weight"] = random.random() * 0.1
        G[n][nbr]["delay"] = random.randint(2, 10)

stimulus = np.ones((5, 9))  # five stimuli, nine "pixels" each

config_dict = snn.pySNN.getDefaultConfig()
config_dict["tau"] = 20
net = snn.client_snn(config_dict)
net.initialize(nx.to_dict_of_dicts(G))   # initialize the network

for i in range(0, 5):
    net.runBatch(stimulus[0 : i + 1])    # run a batch of i + 1 stimuli
    activation = net.getActivation()     # get activation
    net.batchReset()
    # some custom function to adjust the graph
    update_graph(G, activation)
    net.updateEdges(nx.to_dict_of_dicts(G))
```
client_snn.updateEdges( adjacencyDict : dict[tuple[int, int] : dict[tuple[int, int] : dict[string : float]]] )
Does two things:

- Updates the synapse weight for the connection between `(x,y)` and `(a,b)` based on `adjacencyDict[(x,y)][(a,b)]["weight"]`
- Updates the synapse delay for the connection between `(x,y)` and `(a,b)` based on `adjacencyDict[(x,y)][(a,b)]["delay"]`
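A sketch of a weight update between batches (reusing the `G` and `net` from the example above):

```python
# Scale every weight down by 10% and push the new values to the network.
for n in G:
    for nbr in G[n]:
        G[n][nbr]["weight"] *= 0.9

net.updateEdges(nx.to_dict_of_dicts(G))
```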
`runBatch` starts a child process of the network in order to run the given stimulus set.

`getActivation` returns a numpy array with one row per stimulus and `bins` columns.
Given a nonnegative `bins` argument, the timesteps will be split into `bins` discrete categories. If `bins` is omitted (or is negative), then `time_per_stimulus + 1` bins are used (there will be a "bin" for each timestamp).

If the number of timesteps (`time_per_stimulus + 1`) is not divisible by `bins`, producing a remainder `k`, the first `k` bins capture one additional timestep.
For example, with 10 neurons, simulating 15ms runtimes on 20 stimuli, a call `netobj.getActivation(3)` will return a matrix with 20 rows and 3 columns. In this case the first column represents the sum of activations that occurred in timestamps `[0, 5]`, the second in timestamps `[6, 10]`, and the third in timestamps `[11, 15]`.
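A hypothetical helper reproducing this binning rule (`bin_sizes` is illustrative only, not part of the library):

```python
def bin_sizes(time_per_stimulus: int, bins: int) -> list[int]:
    """Number of timesteps captured by each bin, per the rule above."""
    total = time_per_stimulus + 1  # timestamps 0..time_per_stimulus
    base, k = divmod(total, bins)
    return [base + 1 if i < k else base for i in range(bins)]

print(bin_sizes(15, 3))  # [6, 5, 5] -> bins [0,5], [6,10], [11,15]
```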
`batchReset` resets the network to be ready to run another batch.
Warning

This will delete any logging data held in the network. Be sure to call `client_snn.getActivation()` or `client_snn.getPotential()` (if you are tracking individual neurons) before resetting for another batch to avoid losing your activation data.
`updateConfig` updates the configuration dictionary for subsequent calls to `runBatch`.
Reboot the server. This will automatically construct a new remote network with the same configuration dictionary held in `client_snn`.
Important
The network needs to be reinitialized after the server is rebooted.
- For each given tuple `(x,y)`, corresponding to a neuron identifier in the adjacency dictionary, request that additional logging information about membrane potential changes be saved. In order to access that information, call `client_snn.getPotential()`.
Important

This function needs to be called after the network is initialized, since otherwise the coordinate pair `(x,y)` means nothing to the network!
- `getPotential` returns a numpy tensor with the following shape: `(number_stimulus_in_batch, number_tracked_neurons, time_per_stimulus)`.
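A sketch of the intended flow. Note that this page does not give the exact signature of the tracking call, so `trackNeuron` below is a placeholder name; substitute the library's actual method:

```python
# After initialize(): request membrane-potential logging for two neurons.
# NOTE: trackNeuron is a placeholder name for the tracking call described above.
net.trackNeuron((0, 0))
net.trackNeuron((0, 1))

net.runBatch(stimulus)
potentials = net.getPotential()
# potentials.shape == (number_stimulus_in_batch, 2, time_per_stimulus)
net.batchReset()
```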
Currently, the server listens on localhost on port 5005.

Server output depends on the debug level set in the configuration dictionary. For debug levels `<= LogLevel::NONE` or `<= 0`, the server will not produce any output beyond the initial confirmation that the server is running.
- Each input neuron receives a sequence of "stimulus events" in which the value of the "pixel" to which it is attached is added to its membrane potential.
- For a sequence of events with timestamps `0, 1, 7, 10, 11, 24, 56`, an input neuron with a latency value of `15` will only receive events `24, 56`, whereas an input neuron with a latency value of `10` will receive `11, 24, 56`.
- The latency of an input neuron is calculated via its distance from the "center" of the image.
- For an image with dimensions 5 x 5 and a maximum latency of 10ms, input neurons at each "corner" of the image will have a latency of 10ms, whereas those in the center will have a latency of (with integer rounding) 0.
- The latency for a given input neuron is calculated as follows (see the sketch after this list):
  - The index of the input neuron is transformed into a coordinate as if the 1-D stimulus were a 2-D square (or a rectangle of the smallest perimeter)
  - The distance of the input neuron to the center of the image is calculated
  - The latency of the input neuron is $d_{\textrm{input}}/d_{\textrm{max}} \times$ `max_latency`, truncated to an integer
- For each timestep, a neuron's membrane potential decays according to $V \leftarrow V - (V - V_{\textrm{refractory}})/\tau$, where $\tau$ is `tau` from the configuration dictionary.
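A hypothetical sketch of the latency rule (`input_latency` is illustrative, not a library function; it assumes a square stimulus of `n * n` pixels):

```python
import math

def input_latency(index: int, n: int, max_latency: int) -> int:
    """Latency for input neuron `index` in an n x n stimulus (sketch)."""
    row, col = divmod(index, n)                 # 1-D index -> 2-D coordinate
    center = (n - 1) / 2
    d = math.hypot(row - center, col - center)  # distance to the image center
    d_max = math.hypot(center, center)          # distance from center to a corner
    return int(d / d_max * max_latency)         # truncated to an integer

print(input_latency(0, 5, 10))   # corner of a 5 x 5 image -> 10
print(input_latency(12, 5, 10))  # center of a 5 x 5 image -> 0
```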
To run the network:

- Clone the repository

```bash
git clone https://github.com/samp5/ha141.git snn
```

- Write any client code in `src/rpc/rpc_main.cpp`
- Build the executables

```bash
make server && make cpp_client
```

- If `SNN_AUTO_START_SERVER` is set, simply run `./build/client/client.out`
- Otherwise, execute `./build/server/server.out` and then `./build/client/client.out`