Analog sequential hippocampus memory model for trajectory learning and recalling: a robustness analysis overview
This repository contains the code on which the paper entitled "Analog sequential hippocampus memory model for trajectory learning and recalling: a robustness analysis overview" is based. The paper has been submitted to a journal and is awaiting review.
A fully functional analog spike-based implementation of a sequential memory model bio-inspired by the hippocampus is presented, implemented on the DYNAP-SE hardware platform using Spiking Neural Network (SNN) technology. The code is written in Python and makes use of the samna library and its adaptation for DYNAP-SE, called dynap-se1. The model has been applied to robotic navigation for learning and recalling trajectories. In addition, the tolerance and robustness of the system to sources of random input noise have been analysed. The scripts needed to replicate the tests and plots presented in the paper are included, together with the data and plots from those tests.
Please see the cite this work section to learn how to properly reference the works cited here.
Title: Analog sequential hippocampus memory model for trajectory learning and recalling: a robustness analysis overview
Abstract: The rapid expansion of information systems in all areas of society demands more powerful, efficient and low energy consumption computing systems. Neuromorphic engineering has emerged as a solution that attempts to mimic the brain to incorporate its capabilities to solve complex problems in a computationally and energy-efficient way in real time. Within neuromorphic computing, building systems to efficiently store the information is still a challenge. Among all the brain regions, the hippocampus stands out as a short-term memory capable of learning and recalling large amounts of information quickly and efficiently. In this work, we propose a spike-based bio-inspired hippocampus sequential memory model that makes use of the benefits of analog computing and Spiking Neural Networks: noise robustness, improved real-time operation and energy efficiency. This model is applied to robotic navigation in order to learn and recall trajectories that lead to a goal position within a known grid environment. The model was implemented on the special-purpose Spiking Neural Networks mixed-signal DYNAP-SE hardware platform. Through extensive experimentation together with an extensive analysis of the model's behaviour in the presence of external noise sources, its correct functioning was demonstrated, proving the robustness and consistency of the proposed neuromorphic sequential memory system.
Keywords: Hippocampus model, analog sequential memory, robustness analysis, Spiking Neural Networks, Neuromorphic engineering, DYNAP-SE
Author: Daniel Casanueva-Morato
Contact: dcasanueva@us.es
- Access to the DYNAP-SE hardware platform
- Python version 3.8.10
- Python libraries:
- samna 0.18.0.0
- dynap-se1 available in the gitlab repository
- ctxctl_contrib available in the gitlab repository
- numpy 1.21.4
- matplotlib 3.5.0
- pandas 2.0.3
sequential_memory.ipynb: Python notebook containing the definition of the complete sequential memory model and tests to verify its basic functioning. The configuration of the STDP mechanism of this model is contained in the triplet_stdp_params_sequential.json file.
sequential_memory_noise_1_A_only_learn.ipynb, sequential_memory_noise_1_B_only_recall.ipynb and sequential_memory_noise_1_C_both_phases.ipynb: Python notebooks containing the definition of the complete sequential memory model together with random noise generators based on a Poisson distribution. Each notebook carries out a set of tests of the network under noise for a different phase: learning only, recall only and both phases, respectively. The configuration of the STDP mechanism of this model is contained in the triplet_stdp_params_sequential_noise.json file.
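As an illustration of the noise source used in these notebooks, a homogeneous Poisson spike train of a given mean frequency can be sketched in plain numpy. This is a hypothetical sketch of the underlying idea only; the function name and signature are ours, and the actual generators in the notebooks are configured through the DYNAP-SE tooling.

```python
import numpy as np

def poisson_spike_times(freq_hz, duration_ms, rng=None):
    """Spike times (ms) of a homogeneous Poisson process.

    freq_hz: mean firing rate of the noise generator.
    duration_ms: length of the noise window.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Inter-spike intervals of a Poisson process are exponentially
    # distributed with mean 1000/freq_hz ms; accumulate them until
    # the window is exhausted.
    times = []
    t = rng.exponential(1000.0 / freq_hz)
    while t < duration_ms:
        times.append(t)
        t += rng.exponential(1000.0 / freq_hz)
    return np.array(times)
```

Over a long window, the number of generated spikes fluctuates around `freq_hz * duration_ms / 1000`, which is the property the noise tests rely on when sweeping the generator frequency.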
results folder: contains the figures (.png) generated by all the tests of the different models, as well as files with the trace of modifications in the synaptic weight of CA3 during the operations performed (trace_.txt) and the spikes generated by the network (events_.txt) during these tests. Each spike record in the events file contains: the time instant at which the spike occurred (timestamp_ms), the id of the neuron that generated it at the global level of the network (neuron_ids), the same id formatted at the local level of the population to which the neuron belongs (neuron_ids_formated) and the tag associated with that neuron (event_tag), formed by the name of the population to which the neuron belongs plus its local id. The results for the model without noise are in the sequential_memory folder, and those for the model with noise (and its different test cases) in sequential_memory_with_noise.
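The per-spike records described above can be loaded and summarised with pandas along these lines. This is a hypothetical sketch: the column names follow the description above, but the exact delimiter and header layout of the events_.txt files, and the helper names, are assumptions of ours.

```python
import io
import pandas as pd

# Column names as described for the events file (assumed comma-separated).
EVENT_COLUMNS = ["timestamp_ms", "neuron_ids", "neuron_ids_formated", "event_tag"]

def load_events(source):
    """Load a spike trace into a DataFrame, one row per spike, time-ordered."""
    df = pd.read_csv(source, names=EVENT_COLUMNS, header=0)
    return df.sort_values("timestamp_ms").reset_index(drop=True)

def spikes_per_population(df):
    """Count spikes per population, taking the population name as the
    event_tag with its trailing local id stripped (e.g. 'CA3_12' -> 'CA3')."""
    pop = df["event_tag"].str.rstrip("0123456789").str.rstrip("_")
    return pop.value_counts()
```

For example, `spikes_per_population(load_events("events_.txt"))` would give a quick per-population activity summary of one test run, under the assumed file layout.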
noise_analisis folder: contains the script used to analyse the noise applied to the network (noise_analysis.py) and the script used to analyse the results of the network as a consequence of this noise (results_analysis.py). In addition, it includes the results folder where the figures generated by both analyses can be found at an individual level for each test case and at a global level as a summary.
To run the different experiments, it is necessary to install all the libraries indicated in the installation section, to have a local or online tool for running notebooks and to have access to a DYNAP-SE board. Each notebook cell includes comments explaining, to a greater or lesser extent, what it does. In general terms, the notebooks proceed as follows:
- connecting to the board,
- declaring the functions to be used during the definition of the network,
- defining the neural network itself,
- defining the learning mechanism,
- configuring the parameters of neurons and synapses per core of the board,
- elaborating and applying the tests to the model,
- taking and formatting the resulting network data,
- creating the figures from the data collected during the test.
For this code to work, the local path to the "ctxctl_contrib" library must be modified in the first cell, as well as the path to the STDP triplet mechanism parameter file in the cell that configures this mechanism. To configure the test case to be performed, the following parameters can be varied in the network model definition cell: "exp_id" to indicate whether to perform learning or recall, "poisson_freq" to set the frequency of each Poisson generator, "noise_target" to indicate the part of the memory affected by the noise and "rep_id" to indicate the repetition number of the experiment.
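As a hedged illustration, the four test-case parameters could be gathered in a single structure like the one below. The parameter names come from the notebooks' network-definition cell; the concrete values and the `experiment_tag` helper are placeholders of ours, not part of the notebooks.

```python
# Hypothetical test-case configuration; values are illustrative only.
experiment = {
    "exp_id": "learning",   # phase to run: learning or recall
    "poisson_freq": 20,     # mean rate (Hz) of each Poisson noise generator
    "noise_target": "CA3",  # part of the memory the noise is injected into
    "rep_id": 1,            # repetition index of the experiment
}

def experiment_tag(cfg):
    """Build a label for result files from the test-case parameters."""
    return "{exp_id}_{noise_target}_{poisson_freq}Hz_rep{rep_id}".format(**cfg)
```

A label of this kind makes it easy to keep the figures and trace files of repeated runs apart when sweeping the noise frequency or target.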
Work in progress...
The original idea was conceived by Daniel Casanueva-Morato while working on a research project of the RTC Group.
This work is part of the projects PID2019-105556GB-C33 (MINDROB), PDC2023-145841-C33 (NASSAI) and TED2021-130825B-I00 (SANEVEC) funded by MCIN/ AEI /10.13039/501100011033, by “ERDF A way of making Europe” and by the European Union NextGenerationEU/PRTR. D. C.-M. was supported by a "Formación de Profesorado Universitario" Scholarship and by "Ayudas complementarias de movilidad" from the Spanish Ministry of Education, Culture and Sport.
D. Casanueva-Morato would like to thank Giacomo Indiveri and his group for hosting him during a three-month internship between 1st June 2023 and 31st August 2023, during which the idea for this paper originated and most of the results presented in this work were obtained.
This project is licensed under the GPL License - see the LICENSE.md file for details.
Copyright © 2023 Daniel Casanueva-Morato
dcasanueva@us.es