AL1
Decoding human brain signals

○Yukiyasu Kamitani1
Department of Neuroinformatics, ATR Computational Neuroscience Laboratories1

Objective assessment of mental experience in terms of brain activity represents a major challenge in neuroscience. Despite its widespread use in human brain mapping, functional magnetic resonance imaging (fMRI) has been thought to lack the resolution to probe putative neural representations of perceptual and behavioral features, which are often found in neural clusters smaller than single fMRI voxels. As a consequence, the potential for reading out mental contents from human brain activity, or “neural decoding”, has not been fully explored. In this talk, I present our work on the decoding of fMRI signals using machine learning-based analyses. I first show that visual features represented in “subvoxel” neural structures can be decoded from population fMRI responses, using a machine learning model (a “decoder”) trained on sample fMRI responses to visual features. Decoding of stimulus features is then extended to a method for “neural mind-reading”, which predicts a person's subjective state using a decoder trained with unambiguous stimulus presentation. Several applications of this approach will be presented, including brain-machine interfaces. We also discuss how a multivoxel pattern can represent more information than the sum of its individual voxels, and how an effective set of voxels for decoding can be selected from all available voxels. Next, a modular decoding approach is presented in which a wide variety of contents can be predicted by combining the outputs of multiple modular decoders. I demonstrate an example of visual image reconstruction in which arbitrary visual images can be accurately reconstructed from single-trial or single-volume fMRI signals. Finally, I present our recent results on the decoding of visual dream contents using database-assisted decoding models.
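The abstract notes that a multivoxel pattern can carry more information than the sum of its individual voxels. As a minimal illustration (not the talk's actual method or data), the sketch below builds synthetic two-voxel "fMRI" responses in which each voxel alone is only weakly informative about the stimulus class, because both voxels share a large common noise component; a linear decoder trained on the joint pattern can cancel that shared noise and decode far more reliably. All variable names, the data-generating parameters, and the least-squares decoder are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "fMRI" data: 2 voxels, 2 stimulus classes, 200 trials.
# Each voxel alone is weakly informative (class means overlap under
# large shared noise), but the joint pattern is separable: subtracting
# voxel 2 from voxel 1 cancels the shared noise and isolates the signal.
n = 200
labels = rng.integers(0, 2, size=n)               # stimulus class 0 or 1
signal = (labels * 2 - 1).reshape(-1, 1) * 0.5    # class-dependent response
shared = rng.normal(size=(n, 1)) * 2.0            # noise common to both voxels
X = np.hstack([shared + signal, shared - signal]) + rng.normal(size=(n, 2)) * 0.1

def train_linear_decoder(X, y):
    """Least-squares linear decoder: weights mapping voxel pattern -> class."""
    Xb = np.hstack([X, np.ones((len(X), 1))])     # append bias column
    w, *_ = np.linalg.lstsq(Xb, y * 2 - 1, rcond=None)
    return w

def accuracy(X, y, w):
    """Fraction of trials whose predicted class matches the true label."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    pred = (Xb @ w > 0).astype(int)
    return (pred == y).mean()

# Decode from the full multivoxel pattern ...
w = train_linear_decoder(X, labels)
multi_acc = accuracy(X, labels, w)

# ... versus from each voxel in isolation.
single_accs = [accuracy(X[:, [i]], labels,
                        train_linear_decoder(X[:, [i]], labels))
               for i in range(2)]

print(f"multivoxel: {multi_acc:.2f}, single voxels: "
      + ", ".join(f"{a:.2f}" for a in single_accs))
```

The decoder exploits the correlation structure across voxels (here, anti-correlated signal on top of correlated noise), which no single-voxel readout can access; this is the sense in which the pattern carries more information than its parts.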

