Probabilistic tracing of neural representations over time.
RepTrace is an early-stage Python toolkit for benchmarking calibrated, time-resolved decoders on M/EEG data. The initial goal is to turn classifier outputs from non-invasive neural recordings into probability traces that are useful for studying representational dynamics, planning, and replay-like sequences.
RepTrace currently provides tools for decoding MNE Epochs files, turning classifier outputs into time-resolved probability traces P(class | time), and scoring detected events against held-out annotations.

RepTrace owns the dataset-independent M/EEG decoding layer. Keep reusable feature-matrix decoding, classifier calibration, temporal generalization, onset/state inference, confusion and per-class metrics, MNE Epochs decoding, and generic summary-table/reporting helpers here.
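For orientation, the core loop this layer wraps is ordinary scikit-learn decoding plus probability calibration. A minimal standalone sketch with illustrative random arrays (this is not RepTrace's API):

import numpy as np
from sklearn.calibration import CalibratedClassifierCV
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

# Illustrative feature matrix: trials x features at a single time point.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))
y = rng.integers(0, 2, size=200)

# Calibrated logistic decoder; cross_val_predict yields held-out P(class).
clf = CalibratedClassifierCV(LogisticRegression(max_iter=1000), cv=5)
proba = cross_val_predict(clf, X, y, cv=5, method="predict_proba")
print(proba.shape)  # (200, 2)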
Dataset-specific projects should adapt their own file formats and experimental
conventions into RepTrace’s feature-matrix and probability-observation
interfaces. In particular, PyMEGDec owns the MATLAB .mat loaders, the
Part*Data.mat / Part*CueData.mat participant-file conventions, CTF sensor
geometry handling, alpha analyses, stimulus-specific defaults, and
paper-facing export scripts for the MEG dataset it was developed around.
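A dataset adapter in such a project might look like the sketch below. The .mat keys ("data", "labels", "times") and the returned tuple are hypothetical stand-ins; a real adapter maps the lab's own conventions onto the interfaces documented by RepTrace:

from pathlib import Path

import numpy as np
from scipy.io import loadmat

def load_participant_features(path: Path):
    # Hypothetical .mat layout; substitute the lab's actual key names.
    mat = loadmat(path)
    X = np.asarray(mat["data"])            # trials x channels x times
    y = np.asarray(mat["labels"]).ravel()  # one label per trial
    times = np.asarray(mat["times"]).ravel()
    return X, y, times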
RepTrace requires Python 3.11 or newer, but older than Python 3.14.
For development from a source checkout, use Poetry:
poetry install --with dev
Alternatively, install the package in editable mode with pip:
python -m pip install -e .
Installed environments expose both a grouped reptrace command and focused
workflow commands such as reptrace-benchmark, reptrace-mne-time-decode,
reptrace-onset-detect, reptrace-continuous-stimulus-scan,
reptrace-stimulus-detect, and
reptrace-temporal-model. The equivalent python -m reptrace.<module> forms
remain available for source-checkout debugging.
Run the pilot NOD-EEG benchmark from a manifest:
reptrace-validate-manifest \
benchmarks/nod_animate_sub01.csv \
--report-out results/nod_animate_sub01_validation.csv
reptrace-benchmark \
benchmarks/nod_animate_sub01.csv \
--out-dir results/nod_animate_sub01 \
--aggregate-out results/nod_animate_sub01_summary.csv \
--plot-out results/nod_animate_sub01_summary.png \
--chance 0.5
The grouped CLI provides the same workflows:
reptrace validate-manifest \
benchmarks/nod_animate_sub01.csv \
--report-out results/nod_animate_sub01_validation.csv
reptrace benchmark \
benchmarks/nod_animate_sub01.csv \
--out-dir results/nod_animate_sub01 \
--aggregate-out results/nod_animate_sub01_summary.csv \
--plot-out results/nod_animate_sub01_summary.png \
--chance 0.5
Run time-resolved decoding directly on an MNE epochs file with metadata:
reptrace-mne-time-decode \
--epochs path/to/sub-01_epo.fif \
--metadata-csv path/to/sub-01_events.csv \
--label-column stim_is_animate \
--group-column session \
--out results/nod_sub-01_animate.csv \
--observations-out results/nod_sub-01_animate_observations.csv
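This corresponds to MNE's standard sliding-estimator pattern. A self-contained sketch of the same analysis (not RepTrace's internal code), assuming the metadata CSV has already been attached as epochs.metadata:

import mne
import numpy as np
from mne.decoding import SlidingEstimator, cross_val_multiscore
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GroupKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

epochs = mne.read_epochs("path/to/sub-01_epo.fif")
X = epochs.get_data()                              # trials x channels x times
y = epochs.metadata["stim_is_animate"].to_numpy()
groups = epochs.metadata["session"].to_numpy()

# One decoder per time point, grouped CV so sessions stay held out.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
slider = SlidingEstimator(clf, scoring="roc_auc")
scores = cross_val_multiscore(slider, X, y, cv=GroupKFold(5), groups=groups)
print(epochs.times[np.argmax(scores.mean(axis=0))])  # time of peak decoding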
Plot the resulting time course:
reptrace-plot-time-decode \
results/nod_sub-01_animate.csv \
--chance 0.5 \
--out results/nod_sub-01_animate.png
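If you prefer to plot by hand, something like the following works; the column names "time" and "score" are assumptions about the CSV layout:

import matplotlib.pyplot as plt
import pandas as pd

df = pd.read_csv("results/nod_sub-01_animate.csv")
plt.plot(df["time"], df["score"])          # assumed column names
plt.axhline(0.5, linestyle="--", label="chance")
plt.xlabel("time (s)")
plt.ylabel("decoding score")
plt.legend()
plt.savefig("results/nod_sub-01_animate_manual.png")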
Detect the first threshold-crossing representation time from probability observations:
reptrace-onset-detect \
results/nod_sub-01_animate_observations.csv \
--threshold-window -0.35 -0.05 \
--threshold-quantile 0.95 \
--threshold-method max_run \
--min-consecutive 2 \
--require-stable-prediction \
--out-events results/nod_sub-01_animate_onset_events.csv \
--out-summary results/nod_sub-01_animate_onset_summary.csv \
--out-threshold-summary results/nod_sub-01_animate_threshold_summary.csv
The threshold summary reports baseline false-positive rates separately from post-stimulus threshold-crossing rates.
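One plausible reading of the settings above, as a standalone sketch rather than RepTrace's implementation: derive the threshold from a baseline-window quantile, then report the start of the first post-stimulus run of supra-threshold samples:

import numpy as np

def first_onset(times, proba, baseline=(-0.35, -0.05), q=0.95,
                min_consecutive=2):
    # Threshold = baseline-window quantile of the probability trace.
    base = proba[(times >= baseline[0]) & (times <= baseline[1])]
    thr = np.quantile(base, q)
    run = 0
    for i, above in enumerate(proba > thr):
        run = run + 1 if above else 0
        # First post-stimulus run of min_consecutive supra-threshold samples.
        if run >= min_consecutive and times[i] > 0:
            return times[i - min_consecutive + 1], thr
    return None, thr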
Detect zero, one, or many stimulus events in a long probability stream:
python -m reptrace.stimulus_detection \
results/sub-01_stream_observations.csv \
--stream-column sequence_id \
--score-mode class_probability \
--threshold-window -0.35 -0.05 \
--threshold-method max_run \
--threshold-quantile 0.95 \
--min-consecutive 2 \
--merge-gap 0.05 \
--refractory 0.20 \
--out-events results/stimulus_events.csv \
--out-summary results/stimulus_event_summary.csv
With annotation matching and latency summaries:
reptrace-stimulus-detect \
results/sub-01_stream_observations.csv \
--annotations results/sub-01_stimulus_annotations.csv \
--stream-column stream_id \
--score-mode class_probability \
--threshold-window -0.35 -0.05 \
--threshold-method max_run \
--threshold-quantile 0.95 \
--detection-window 0.0 inf \
--min-consecutive 2 \
--merge-gap 0.05 \
--refractory 0.20 \
--match-tolerance 0.10 \
--out-events results/sub-01_stimulus_events.csv \
--out-summary results/sub-01_stimulus_event_summary.csv \
--out-thresholds results/sub-01_stimulus_thresholds.csv
This stream-oriented detector returns one row per detected event, including the stimulus class, onset, offset, peak, confirmed detection time, and optional annotation match.
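The segmentation logic behind --merge-gap and --refractory can be sketched as follows (an illustration, not RepTrace's algorithm):

import numpy as np

def detect_events(times, proba, thr, merge_gap=0.05, refractory=0.20):
    above = np.flatnonzero(proba > thr)
    if above.size == 0:
        return []                        # zero events is a valid outcome
    # Contiguous supra-threshold runs become candidate events.
    runs = np.split(above, np.flatnonzero(np.diff(above) > 1) + 1)
    events = []
    for run in runs:
        onset, offset = times[run[0]], times[run[-1]]
        if events and onset - events[-1][1] < merge_gap:
            events[-1] = (events[-1][0], offset)   # merge nearby runs
        else:
            events.append((onset, offset))
    # Enforce a refractory period between accepted onsets.
    kept, last_onset = [], -np.inf
    for onset, offset in events:
        if onset - last_onset >= refractory:
            kept.append((onset, offset))
            last_onset = onset
    return kept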
Train an event-locked decoder on one raw run and scan a held-out raw run for face-like events:
reptrace-continuous-stimulus-scan \
--train-raw data/ds000117/sub-01/ses-meg/meg/sub-01_ses-meg_task-facerecognition_run-01_meg.fif \
--train-events data/ds000117/sub-01/ses-meg/meg/sub-01_ses-meg_task-facerecognition_run-01_events.tsv \
--scan-raw data/ds000117/sub-01/ses-meg/meg/sub-01_ses-meg_task-facerecognition_run-02_meg.fif \
--scan-events data/ds000117/sub-01/ses-meg/meg/sub-01_ses-meg_task-facerecognition_run-02_events.tsv \
--source-column stim_type \
--positive-pattern "Famous|Unfamiliar" \
--negative-pattern "Scrambled" \
--positive-label face \
--negative-label scrambled \
--target-class face \
--train-window 0.15 0.25 \
--picks meg \
--demean-window \
--slice-duration 6.0 \
--slice-count 10 \
--require-target-event \
--exclude-events-from-threshold-window \
--threshold-window 0.0 0.8 \
--detection-window 0.8 6.0 \
--threshold-method max_run \
--threshold-quantile 0.975 \
--min-consecutive 2 \
--min-duration 0.05 \
--merge-gap 0.05 \
--refractory 0.30 \
--match-tolerance 0.35 \
--out-dir results/ds000117_continuous_scan
The workflow writes stream_observations.csv, stimulus_events.csv,
stimulus_summary.csv, stimulus_thresholds.csv, stimulus_annotations.csv,
and heldout_event_metrics.csv.
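Conceptually, the scan fits an event-locked decoder on a short training window and then slides a window of the same length across the held-out run. A rough sketch with MNE and scikit-learn, with abbreviated file paths; the annotation name "face" is an assumption here, since the real workflow derives labels from the events.tsv stim_type column:

import mne
from sklearn.linear_model import LogisticRegression

raw_train = mne.io.read_raw_fif("run-01_meg.fif", preload=True)
events, event_id = mne.events_from_annotations(raw_train)
epochs = mne.Epochs(raw_train, events, event_id, tmin=0.15, tmax=0.25,
                    baseline=None, preload=True, picks="meg")
y = (epochs.events[:, 2] == event_id["face"]).astype(int)  # assumed name
X = epochs.get_data().reshape(len(epochs), -1)             # flatten window
clf = LogisticRegression(max_iter=1000).fit(X, y)

raw_scan = mne.io.read_raw_fif("run-02_meg.fif", preload=True).pick("meg")
data = raw_scan.get_data()
n_win = epochs.get_data().shape[-1]                 # training-window length
hop = int(raw_scan.info["sfreq"] * 0.1)             # 100 ms hop
trace = [(start / raw_scan.info["sfreq"],
          clf.predict_proba(data[:, start:start + n_win].reshape(1, -1))[0, 1])
         for start in range(0, data.shape[1] - n_win, hop)]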
If the events CSV has the NOD stim_is_animate column but no named decoding
condition yet, create one:
reptrace-metadata \
--events-csv data/nod/sub-01_events.csv \
--source-column stim_is_animate \
--positive-pattern "True" \
--label-column condition \
--positive-label animate \
--negative-label inanimate \
--out data/nod/sub-01_metadata_animate.csv
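The same transformation in plain pandas, approximating what the CLI call above does (the pattern handling here is simplified):

import pandas as pd

df = pd.read_csv("data/nod/sub-01_events.csv")
# Rows whose stim_is_animate matches "True" become "animate".
is_animate = df["stim_is_animate"].astype(str).str.contains("True")
df["condition"] = is_animate.map({True: "animate", False: "inanimate"})
df.to_csv("data/nod/sub-01_metadata_animate.csv", index=False)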
After running several subjects, aggregate them:
reptrace-results \
results/nod_sub-01_animate.csv \
results/nod_sub-02_animate.csv \
--out results/nod_animate_summary.csv
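A hand-rolled equivalent with pandas, assuming the per-subject CSVs share "time" and "score" columns:

import pandas as pd

paths = ["results/nod_sub-01_animate.csv", "results/nod_sub-02_animate.csv"]
frames = [pd.read_csv(p).assign(subject=p) for p in paths]
summary = (pd.concat(frames)
           .groupby("time")["score"]          # assumed column names
           .agg(["mean", "sem", "count"]))
summary.to_csv("results/nod_animate_summary_manual.csv")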
The first public benchmark target is NOD-MEG/NOD-EEG because the dataset
provides preprocessed MNE epochs and metadata for natural-image decoding. The
recommended first task is animate-versus-inanimate decoding from the NOD-EEG
stim_is_animate metadata. The second staged task is superclass decoding
between canine and device trials, which keeps the same public dataset and
reporting workflow while testing a different semantic contrast.
THINGS-EEG and THINGS-MEG are natural follow-up benchmarks for larger visual object representation experiments. Lab data with task localizers and planning periods should come after these public baselines are reproducible.
The calibration-aware temporal-state workflow runs the three staged NOD tasks as a single reusable evidence pass for downstream state inference. It exports probability observations with matched calibrated and uncalibrated emissions, fits conservative sticky switching models, compares them against controls, summarizes semantic stages, and writes compact artifacts:
reptrace-temporal-state-workflow \
--data-root data/nod \
--out-dir results/temporal_state_inference \
--compact-export-dir ../RepTrace-Compact-Results/results/temporal_state_inference \
--decoders logistic linear_svm \
--n-permutations 100
Use --max-subjects 1 --task nod_animate --n-permutations 5 for a local smoke
test before launching the full run. Resume is enabled by default; pass
--no-resume only when existing subject-decoder outputs should be overwritten.
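The "sticky" part of the switching models can be illustrated with a simple forward filter over the exported emission probabilities. This numpy sketch is a conceptual stand-in, not the fitted model the workflow uses:

import numpy as np

def sticky_forward(emissions, stay=0.98):
    # emissions: times x states, e.g. calibrated P(class | time) per sample.
    n_t, n_k = emissions.shape
    trans = np.full((n_k, n_k), (1 - stay) / (n_k - 1))
    np.fill_diagonal(trans, stay)              # sticky: favor staying put
    alpha = np.full(n_k, 1.0 / n_k) * emissions[0]
    alpha /= alpha.sum()
    filtered = np.empty_like(emissions)
    filtered[0] = alpha
    for t in range(1, n_t):
        alpha = (alpha @ trans) * emissions[t]
        alpha /= alpha.sum()
        filtered[t] = alpha
    return filtered                            # P(state | data up to t)

A high self-transition probability suppresses single-sample flips, which is what makes the inferred state sequence conservative.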
The documentation site is published at https://ips-stuttgart.github.io/RepTrace/.
The docs/ directory contains the project documentation. Build the documentation site locally with:
poetry install --with docs --without dev
poetry run mkdocs build --strict
Run the test suite from a development environment:
python -m pytest
If you use RepTrace in your research, please cite the repository for now:
@software{pfaff_reptrace_2026,
  author  = {Florian Pfaff},
  title   = {RepTrace: Probabilistic Tracing of Neural Representations over Time},
  year    = {2026},
  url     = {https://github.com/IPS-Stuttgart/RepTrace},
  license = {MIT}
}
RepTrace is licensed under the MIT License.