Add trial number to unit #69
-
What I would strongly suggest from a data modeling point of view is to decouple the trials from the units: keep one row per sorted unit in the units table (with all of its spike times across the session), and one row per trial in the trials table. And yes, `add_unit` can accept multiple electrode indices! For example, `nwbfile.add_unit(spike_times=[1, 2, 3], electrodes=[5, 6, 7])` (notice these indices don't even need to be contiguous).
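To make that concrete, here is a minimal sketch of that decoupled layout in PyNWB. The session/device/electrode metadata below are placeholders, and older pynwb versions may also require extra electrode arguments (x/y/z/imp/filtering):

```python
from datetime import datetime
from pynwb import NWBFile

nwbfile = NWBFile(
    session_description="Utah Array session",  # placeholder metadata
    identifier="example-session",
    session_start_time=datetime.now().astimezone(),
)

# Electrodes table: one row per recording channel (96 for a Utah Array).
device = nwbfile.create_device(name="UtahArray")
group = nwbfile.create_electrode_group(
    name="array", description="96-channel array", location="unknown", device=device
)
for _ in range(96):
    nwbfile.add_electrode(group=group, location="unknown")

# Units table: one row per sorted unit, spike times pooled across the whole
# session, referencing whichever electrode rows the unit was detected on.
nwbfile.add_unit(spike_times=[1.0, 2.0, 3.0], electrodes=[5, 6, 7])

# Trials table: one row per trial interval; no spike data is stored here.
nwbfile.add_trial(start_time=0.5, stop_time=2.5)
nwbfile.add_trial(start_time=3.0, stop_time=5.0)
```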
-
With that structure, how do I easily get all the units from a single trial or a select number of trials? My understanding is that I would have to iterate through every row (each individual unit/electrode), search for the spike times that fall within the trial, and then append each of those together to see trends across trials.
-
If you mean 'spike times' of a single unit within a single trial, an easy way to do this with numpy is `numpy.searchsorted`:

```python
import numpy

spike_times_of_unit = numpy.array([1.1, 1.2, 1.3, 1.4, 1.5])
single_trial_interval = [1.15, 1.45]

# Indices of the spikes that occur during the trial -> (1, 4)
spiking_within_trial_start, spiking_within_trial_stop = numpy.searchsorted(
    spike_times_of_unit, single_trial_interval
)

# Actual subset of spikes during the trial -> array([1.2, 1.3, 1.4])
spike_times_of_unit_within_trial = spike_times_of_unit[
    spiking_within_trial_start:spiking_within_trial_stop
]
```

A huge advantage of storing data this way is how it generalizes to multiple data streams within the file; by not trializing the source storage, the only task for each individual stream is to ensure it is synchronized to the common session timebase. It also makes it easier to adjust the alignment windows around those trial points, such as building a similar data view for events occurring some period of time before and after each trial.

Example of existing visualizers for aligned spiking activity: https://flatironinstitute.github.io/neurosift/?p=/nwb&url=https://api.dandiarchive.org/api/assets/2abd0c2b-1190-4e79-a47c-9636e8ec4160/download/&dandisetId=000409&dandisetVersion=draft&tab=view:PSTH|/intervals/trials

And note in that file, there are many additional data streams for behavior too, so you can make joint plots such as this one: https://flatironinstitute.github.io/neurosift/?p=/nwb&url=https://api.dandiarchive.org/api/assets/2abd0c2b-1190-4e79-a47c-9636e8ec4160/download/&dandisetId=000409&dandisetVersion=draft&tab=neurodata-items:neurodata-item:/intervals/trials|TimeIntervals@neurodata-item:/processing/behavior/WheelVelocity|TimeSeries@neurodata-item:/processing/behavior/WheelAcceleration|TimeSeries&tab-time=1467.8397655900355,1476.8397655900355,1470.336463080128 (trial regions annotated on top)
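As a rough sketch of that kind of padded alignment (the spike times, trial starts, and `pre`/`post` padding below are made-up values), the same `searchsorted` approach extends to a window around every trial:

```python
import numpy as np

spike_times_of_unit = np.array([0.3, 1.1, 1.2, 1.3, 1.4, 1.5, 2.7, 3.2])  # seconds
trial_start_times = np.array([1.0, 3.0])                                  # seconds
pre, post = 0.25, 0.5  # hypothetical padding before/after each trial start

aligned_spike_times = []
for trial_start in trial_start_times:
    window = [trial_start - pre, trial_start + post]
    start_index, stop_index = np.searchsorted(spike_times_of_unit, window)
    # Re-reference spike times to the trial start so trials can be overlaid.
    aligned_spike_times.append(
        spike_times_of_unit[start_index:stop_index] - trial_start
    )

# aligned_spike_times[0] -> array([0.1, 0.2, 0.3, 0.4])
# aligned_spike_times[1] -> array([0.2])
```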
-
Understood, thank you, I appreciate it!
-
When I run my experiments, I run a given trial for some time, then move on to another trial. In a given series of tests on any given day, I may have thousands of trials. I am also using a Utah Array, so I am recording from 96 channels. Therefore, for each trial I have 96 sets of spike times. Is it possible for me to add more than one channel to a row in the units table? By doing it this way, the units table would have only 1 row per trial (so 1000 rows, for example). Or am I required to make a unique row for each channel and each trial, resulting in 96*1000 rows (96,000 rows in this case)? The code below is an example with 3 channels, and I get the error shown in the screenshot when I try to append the spike times together.
If neither approach is recommended, please do let me know the best way to set this up.
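As a purely hypothetical sketch of the kind of call being described (not the original code), packing several channels' spike times into a single units-table row might look like this, assuming `nwbfile` is an existing `NWBFile` with an electrodes table:

```python
# Hypothetical reconstruction -- not the poster's actual code.
# Each list holds one channel's spike times for a single trial.
channel_1_spikes = [0.1, 0.4, 0.9]
channel_2_spikes = [0.2, 0.5]
channel_3_spikes = [0.3, 0.6, 0.7, 0.8]

# Attempting to store all three channels in one row of the units table:
nwbfile.add_unit(
    spike_times=[channel_1_spikes, channel_2_spikes, channel_3_spikes],
    electrodes=[0, 1, 2],
)
# spike_times for a single unit row is expected to be a flat 1-D list of
# times, so a nested/ragged structure like this is a likely source of the
# reported error.
```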