Core Concepts
Neuroformer is a multimodal, multitask generative pretrained transformer model designed for systems neuroscience data analysis.
Abstract:
Large-scale, cellular-resolution neuronal spiking data analyzed with Neuroformer.
Model predicts behavior from neural representations without explicit supervision.
Joint training on neuronal responses and behavior enhances performance.
Introduction:
The complexity of systems neuroscience experiments is increasing with technical advances.
Deep neural networks show potential in modeling neural activity and circuitry.
Autoencoder-based latent variable models have been applied to analyze neuronal activity.
Related Work:
Hierarchical representations in DNNs show similarities to those in mammalian brains.
Transformers have been observed to replicate specific neural functions and circuits.
Model:
Neuroformer architecture combines contrastive matching, feature fusion, and autoregressive (AR) decoding.
Workflow involves processing action potential data from multiple neurons.
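The autoregressive decoding stage can be sketched as causal self-attention over a sequence of spike tokens, where each token may attend only to past tokens. This is a minimal numpy illustration, not Neuroformer's actual implementation; the `causal_attention` helper, the absence of learned projections, and all shapes are hypothetical simplifications:

```python
import numpy as np

def causal_attention(x):
    """Single-head causal self-attention over token embeddings.

    x: (T, d) array of T spike-token embeddings. Simplification: queries,
    keys, and values are the raw embeddings (no learned projections).
    """
    T, d = x.shape
    scores = x @ x.T / np.sqrt(d)                       # (T, T) similarities
    mask = np.triu(np.ones((T, T), dtype=bool), k=1)
    scores[mask] = -np.inf                              # block future tokens
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)       # row-wise softmax
    return weights @ x, weights

rng = np.random.default_rng(0)
tokens = rng.normal(size=(5, 8))       # 5 spike tokens, 8-dim embeddings
out, w = causal_attention(tokens)
# each row of w sums to 1 and assigns zero weight to future positions
assert np.allclose(w.sum(axis=1), 1.0)
assert np.allclose(np.triu(w, k=1), 0.0)
```

In the full model, the attended output would feed a head that predicts the next spike token's neuron identity and timing; here only the masking mechanics are shown.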
Results:
Neuroformer accurately predicts visually evoked neuronal activity in mice.
Outperforms a generalized linear model (GLM) in population response prediction.
Attention mechanisms reveal relationships between stimuli and neuronal responses.
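One way to read such attention maps, sketched below with invented numbers (the attention matrix, token layout, and neuron assignments are all illustrative, not from the paper), is to average the attention that each neuron's spike tokens pay to each stimulus token:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical attention weights from one layer: rows = spike-token queries,
# columns = stimulus-token keys. Each row of a softmax attention map sums to 1.
raw = rng.random(size=(6, 4))
attn = raw / raw.sum(axis=1, keepdims=True)

# Which neuron emitted each of the 6 spike tokens (3 neurons total).
neuron_ids = np.array([0, 1, 0, 2, 1, 0])

# Average, per neuron, the attention its spikes pay to each stimulus token;
# high entries suggest stimulus features that drive that neuron's responses.
influence = np.zeros((3, 4))
for n in range(3):
    influence[n] = attn[neuron_ids == n].mean(axis=0)

assert influence.shape == (3, 4)
assert np.allclose(influence.sum(axis=1), 1.0)  # rows remain distributions
```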
Ablations:
Incorporating Past State, Video, Behavior, and Contrastive learning improves model performance across datasets.
Stats
Neuroformer was validated against ground-truth connectivity in simulated neural networks.
Neuroformer was trained to predict neuronal activity in response to visual stimuli.
Quotes
"Joint training on neuronal responses and behavior boosted performance."
"Attention mechanisms infer causality revealing the hub neurons."