Neo is a Python package for representing electrophysiology data in a common format across different recording systems and analysis tools. It provides a hierarchical data model and I/O support for many proprietary formats.
Why Neo?
- Standardization: Common data structures across different recording systems
- Metadata: Rich metadata support for experimental details
- Relationships: Maintains relationships between data types
- I/O Support: Read from many electrophysiology file formats
- Tool Integration: Works with Elephant, SpikeInterface, and other analysis tools
Core Data Structures
SpikeTrain
Represents timestamps of action potentials:
from neo import SpikeTrain
import quantities as pq
# Create spike train
spike_times = [0.1, 0.3, 0.45, 0.8, 1.2] * pq.s
spiketrain = SpikeTrain(
    spike_times,
    t_start=0.0*pq.s,
    t_stop=2.0*pq.s,
    units='s',
    name='Neuron 1',
    description='Well-isolated single unit'
)
# Add metadata
spiketrain.annotations['brain_area'] = 'V1'
spiketrain.annotations['depth'] = 500 # microns
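With spike times in hand, simple statistics follow directly. As a minimal sketch (plain NumPy, independent of Neo), the inter-spike intervals of the train above are just consecutive differences:

```python
import numpy as np

# Spike times in seconds (same values as the SpikeTrain above)
spike_times = np.array([0.1, 0.3, 0.45, 0.8, 1.2])

# Inter-spike intervals: differences between consecutive spikes
isis = np.diff(spike_times)  # ~[0.2, 0.15, 0.35, 0.4] s, up to float rounding
```

This is the same quantity `elephant.statistics.isi` computes on a SpikeTrain, with units attached.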
AnalogSignal
Represents continuous signals (LFP, EEG, voltage):
from neo import AnalogSignal
import numpy as np
# Create LFP signal
lfp = AnalogSignal(
    np.random.randn(10000, 4),  # 10000 samples, 4 channels
    units='mV',
    sampling_rate=1000*pq.Hz,
    t_start=0*pq.s,
    name='LFP Recording',
    array_annotations={'channel_names': ['Ch1', 'Ch2', 'Ch3', 'Ch4']}
)
# Access data
voltage = lfp.magnitude # NumPy array
time = lfp.times # Time array with units
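`lfp.times` is not stored; it is derived from `t_start` and `sampling_rate`. A minimal NumPy sketch of that construction, assuming the regular 1 kHz sampling used above:

```python
import numpy as np

n_samples = 10000
sampling_rate = 1000.0  # Hz
t_start = 0.0           # s

# Sample i occurs at t_start + i / sampling_rate
times = t_start + np.arange(n_samples) / sampling_rate
# times[0] == 0.0, times[1] == 0.001, times[-1] == 9.999
```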
Hierarchical Organization
Neo uses a hierarchical structure to organize data:
from neo import Block, Segment
# Block: highest level container (one experiment/recording session)
block = Block(name='Experiment 2024-01-15')
# Segment: trial, epoch, or recording period
segment1 = Segment(name='Trial 1', index=0)
segment1.annotations['condition'] = 'control'
# Add spike trains to segment
segment1.spiketrains.append(spiketrain)
# Add analog signals to segment
segment1.analogsignals.append(lfp)
# Add segment to block
block.segments.append(segment1)
Complete Hierarchy
Block (experiment/session)
├── Segment (trial/epoch)
│ ├── AnalogSignal (continuous data)
│ ├── SpikeTrain (spike times)
│ ├── Event (markers, stimuli)
│ └── Epoch (time intervals)
├── Group (groups objects across segments, e.g. one neuron's spike trains; replaces the older Unit)
└── ChannelView (a view onto a subset of an AnalogSignal's channels)
Working with Multiple Trials
# Create block for multi-trial experiment
block = Block(name='Visual Stimulus Experiment')
# Add multiple trials
for trial_idx in range(10):
    segment = Segment(name=f'Trial {trial_idx}', index=trial_idx)
    # Add spike data for this trial
    spikes = SpikeTrain([...], units='s', t_stop=2.0*pq.s)
    segment.spiketrains.append(spikes)
    # Add LFP for this trial
    lfp = AnalogSignal([...], units='mV', sampling_rate=1000*pq.Hz)
    segment.analogsignals.append(lfp)
    block.segments.append(segment)
# Access all trials
for segment in block.segments:
    print(f"{segment.name}: {len(segment.spiketrains)} neurons")
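Once trials are organized as segments, pooled statistics are straightforward. As a hedged sketch with plain NumPy and hypothetical spike times (no Neo objects), a peri-stimulus time histogram pools spikes across trials:

```python
import numpy as np

# Hypothetical spike times (s) from three 2-second trials
trials = [
    np.array([0.1, 0.5, 1.2]),
    np.array([0.2, 0.6, 1.1, 1.8]),
    np.array([0.4, 0.5]),
]

# Pool all trials and bin into a PSTH with 0.5 s bins
pooled = np.concatenate(trials)
counts, edges = np.histogram(pooled, bins=np.arange(0.0, 2.5, 0.5))

# Convert counts to firing rate: spikes / (n_trials * bin width)
rate = counts / (len(trials) * 0.5)
```

With real data, `pooled` would come from concatenating `st.magnitude` over `segment.spiketrains` across `block.segments`.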
Reading Electrophysiology Data
Neo supports many file formats through I/O modules:
from neo import io
# Blackrock (.nev, .ns*)
reader = io.BlackrockIO(filename='recording.nev')
block = reader.read_block()
# Plexon (.plx)
reader = io.PlexonIO(filename='session.plx')
block = reader.read_block()
# Neuralynx
reader = io.NeuralynxIO(dirname='data_folder')
block = reader.read_block()
# Access loaded data
for segment in block.segments:
    for spiketrain in segment.spiketrains:
        print(f"Unit: {spiketrain.name}, {len(spiketrain)} spikes")
Events and Epochs
Events
Discrete time points (stimuli, rewards):
from neo import Event
# Stimulus onset times
stim_events = Event(
    times=[0.5, 1.5, 2.5]*pq.s,
    labels=['stim_1', 'stim_2', 'stim_3'],
    name='Stimulus Onsets'
)
segment.events.append(stim_events)
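Events are most often used to align other data. A minimal NumPy sketch (hypothetical spike times, and an assumed ±0.25 s analysis window) of aligning spikes to each stimulus onset:

```python
import numpy as np

stim_times = np.array([0.5, 1.5, 2.5])          # stimulus onsets (s)
spike_times = np.array([0.4, 0.6, 1.7, 2.45, 2.9])

# For each event, express spikes relative to that onset and
# keep only those within a +/-0.25 s window around it
window = 0.25
aligned = [spike_times[np.abs(spike_times - t0) <= window] - t0
           for t0 in stim_times]
```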
Epochs
Time intervals (behavioral states, trial phases):
from neo import Epoch
# Movement periods
movement_epochs = Epoch(
    times=[0.2, 1.0, 2.0]*pq.s,  # Start times
    durations=[0.3, 0.4, 0.5]*pq.s,
    labels=['mvmt_1', 'mvmt_2', 'mvmt_3'],
    name='Movement Periods'
)
segment.epochs.append(movement_epochs)
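Epochs pair naturally with boolean masks. A sketch in plain NumPy, using the same start times and durations as above plus hypothetical spike times, that selects spikes falling inside any movement period:

```python
import numpy as np

starts = np.array([0.2, 1.0, 2.0])     # epoch start times (s)
durations = np.array([0.3, 0.4, 0.5])  # epoch durations (s)
stops = starts + durations

spike_times = np.array([0.1, 0.25, 0.45, 1.2, 1.9, 2.4])

# A spike is "in movement" if it falls inside any [start, stop) interval
in_epoch = np.any(
    (spike_times[:, None] >= starts) & (spike_times[:, None] < stops),
    axis=1,
)
movement_spikes = spike_times[in_epoch]
```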
Practical Workflows
Organizing Multi-Channel Recordings
# Load data
block = io.BlackrockIO('recording.nev').read_block()
# Access by channel
for segment in block.segments:
    # Get LFP from a specific channel
    lfp_ch1 = segment.analogsignals[0][:, 0]  # First channel
    # Get all spike trains
    for st in segment.spiketrains:
        # Annotation keys vary by IO module; inspect st.annotations first
        if st.annotations.get('channel') == 1:
            # Process spikes from channel 1
            pass
Time-Slicing
# Extract time window
# Slice analog signal
baseline = lfp.time_slice(0*pq.s, 500*pq.ms)
response = lfp.time_slice(500*pq.ms, 1500*pq.ms)
# Slice spike train
baseline_spikes = spiketrain.time_slice(0*pq.s, 500*pq.ms)
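Conceptually, `time_slice` keeps only the samples or spikes inside the window (exact boundary handling may differ at the edges). The equivalent with a plain NumPy mask, using hypothetical spike times:

```python
import numpy as np

spike_times = np.array([0.1, 0.3, 0.45, 0.8, 1.2])  # seconds

# Roughly equivalent to spiketrain.time_slice(0*pq.s, 500*pq.ms)
mask = (spike_times >= 0.0) & (spike_times < 0.5)
baseline_spikes = spike_times[mask]  # [0.1, 0.3, 0.45]
```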
Integration with Elephant
import elephant
from neo import SpikeTrain
# Create Neo spike train
st = SpikeTrain([0.1, 0.3, 0.5]*pq.s, t_stop=1.0*pq.s)
# Use Elephant for analysis
isi = elephant.statistics.isi(st)
rate = elephant.statistics.mean_firing_rate(st)
cv = elephant.statistics.cv(isi)
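These statistics are easy to verify by hand. With plain NumPy and the same three spikes as above (assuming `t_start = 0`, as Elephant does by default):

```python
import numpy as np

spike_times = np.array([0.1, 0.3, 0.5])
t_stop = 1.0

isis = np.diff(spike_times)       # [0.2, 0.2] s
rate = len(spike_times) / t_stop  # 3.0 spikes/s
cv = isis.std() / isis.mean()     # ~0 for a perfectly regular train
```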
Saving and Loading
# Quick save with pickle (convenient, but not a long-term archival format)
import pickle
with open('experiment_data.pkl', 'wb') as f:
    pickle.dump(block, f)
# Load
with open('experiment_data.pkl', 'rb') as f:
    block = pickle.load(f)
# Or use Neo's NixIO for HDF5-based (NIX) storage
writer = io.NixIO('experiment.nix', mode='ow')
writer.write_block(block)
writer.close()
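The pickle round-trip can be sketched with a stand-in object (a plain dict here; a real Block pickles the same way). `BytesIO` is used instead of a file so the sketch leaves nothing on disk:

```python
import pickle
from io import BytesIO

# Stand-in for a Block: a plain dict of trial data
block_like = {'name': 'Experiment 2024-01-15',
              'segments': [{'name': 'Trial 1', 'n_spikes': 5}]}

buffer = BytesIO()
pickle.dump(block_like, buffer)
buffer.seek(0)
restored = pickle.load(buffer)
assert restored == block_like  # faithful round-trip
```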
Installation
pixi add neo
# or
conda install -c conda-forge neo
# or
pip install neo
Best Practices
- Use consistent annotation keys across your datasets
- Include sampling rates and units for all signals
- Organize data hierarchically (Block → Segment → Data)
- Preserve metadata from original recordings
- Use descriptive names for data objects
- Document channel mappings and experimental conditions
- Consider using NixIO for long-term storage
- Time-slice data before analysis to reduce memory usage