 
General Oral Presentations
Multisensory Integration / Vision / Voluntary Movements
Chair: Junichi Ushiba (Department of Biosciences and Informatics, Faculty of Science and Technology, Keio University)
July 3, 2022, 11:00–11:15, Okinawa Convention Center, Meeting Rooms B5–7, Venue 4
4O04a1-01
Cortical dynamics of efference copy: Large-scale ECoG during marmoset vocalizations

*Kazuki Iijima(1,2,3), Misako Komatsu(2,3), Wataru Suzuki(2), Tetsuo Yamamori(2,3), Noritaka Ichinohe(2), Madoka Matsumoto(1,2,3)
1. Department of Preventive Intervention for Psychiatric Disorders, National Institute of Neuroscience, National Center of Neurology and Psychiatry, 2. Department of Ultrastructural Research, National Institute of Neuroscience, National Center of Neurology and Psychiatry, 3. Laboratory for Molecular Analysis of Higher Brain Function, RIKEN Center for Brain Science

Keyword: ECoG, efference copy, auditory cortex, marmoset

The sensations associated with one's own movements are attenuated by efference copies derived from motor commands and by predictions of their outcomes. It has been suggested that auditory hallucinations in schizophrenia are caused by abnormalities in the efference copy of vocalizations, yet the neural dynamics of this efference copy have not been fully elucidated. In the present study, a 96-channel electrocorticography (ECoG) electrode array covering the temporal, frontal, parietal, and occipital lobes, including the primary auditory cortex, was placed on the dura of two marmosets (Callithrix jacchus). Exploiting the marmosets' characteristic reciprocal exchange of calls, ECoG was recorded while the animals vocalized phee calls in response to pre-recorded phee calls of another individual (the "vocalization condition") and while they listened to playback of their own calls recorded in the vocalization condition (the "listening condition"). The brain activity recorded at each electrode was compared between the two conditions to examine how the sensory response to vocalization is suppressed in the presence of motor commands and their efference copies.
In the vocalization condition, strong suppression of activity in the high-gamma band (80–200 Hz) was observed at multiple electrodes, mainly in the temporal lobe but also in the parietal and frontal lobes. This suppression began approximately 500 ms before vocalization onset. In the listening condition, by contrast, presentation of the vocal stimulus was followed by an increase in high-gamma activity at multiple electrodes around the temporal lobe. Direct comparison of the two conditions yielded significant differences at multiple electrodes around the temporal lobe. These observations were consistent across the two individuals.
These results are consistent with the hypothesis that, during spontaneous vocalization, the premotor cortex sends inhibitory signals to the auditory cortex to distinguish the animal's own vocalizations from those of others. In patients with schizophrenia who have positive symptoms and severe auditory hallucinations, suppression of auditory cortical activity during vocalization is known to be reduced. This study therefore provides a prototype experimental system for probing abnormal neural dynamics in animal models of schizophrenia.
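The condition contrast described above amounts to comparing high-gamma (80–200 Hz) envelope power between vocalization and listening. The following is a minimal illustrative sketch of that kind of analysis, not the authors' actual pipeline; the signals here are synthetic, and the filter settings are an assumption.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def high_gamma_power(ecog, fs, band=(80.0, 200.0)):
    """Band-pass filter a single-channel trace and return its
    instantaneous power via the Hilbert envelope."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, ecog)
    return np.abs(hilbert(filtered)) ** 2

# Synthetic example: two "conditions" with different high-gamma amplitude.
fs = 1000  # Hz
rng = np.random.default_rng(0)
t = np.arange(0, 2.0, 1 / fs)
carrier = np.sin(2 * np.pi * 120 * t)                    # 120 Hz sits in the band
listening = 2.0 * carrier + rng.normal(0, 0.5, t.size)   # strong auditory response
vocalizing = 0.5 * carrier + rng.normal(0, 0.5, t.size)  # suppressed response

p_listen = high_gamma_power(listening, fs).mean()
p_vocal = high_gamma_power(vocalizing, fs).mean()
print(p_vocal < p_listen)  # suppression shows up as lower band power
```

In a real analysis, this per-electrode power would be epoched around vocalization onset and compared statistically between conditions.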
July 3, 2022, 11:15–11:30, Okinawa Convention Center, Meeting Rooms B5–7, Venue 4
4O04a1-02
Common areas for detecting deviation: visual and auditory mismatch negativities from whole-cortical electrocorticogram (ECoG) arrays in common marmosets

*Hiroshi Matsui(1), Misako Komatsu(2), Takaaki Kaneko(3,4), Hideyuki Okano(3,5), Noritaka Ichinohe(6), Masatoshi Yoshida(1)
1. CHAIN, Hokkaido University, 2. Laboratory for Molecular Analysis of Higher Brain Function, RIKEN CBS, 3. Laboratory for Marmoset Neural Architecture, RIKEN CBS, 4. Systems Neuroscience Section, Primate Research Institute, Kyoto University, 5. Department of Physiology, Keio University School of Medicine, 6. Department of Ultrastructural Research, NCNP

Keyword: marmoset, mismatch negativity, deviance detection, ECoG

Mismatch negativity (MMN) is an event-related potential that reflects a neuronal response to the detection of deviation from regularity. Its reduction is a known brain marker of schizophrenia, implicating aberrant salience-detection processes. MMN has been identified in common marmosets for auditory stimuli (Komatsu et al., 2015) and for visual stimuli (Matsui et al., presented at the previous JNS meeting). The present study examined visual and auditory MMN with essentially the same procedures in the same animals, to test whether a common area responsible for MMN exists across modalities. Three marmosets epidurally implanted with whole-cortical multichannel electrocorticogram (ECoG) arrays were tested with oddball and many-standards paradigms (Koshiyama et al., 2020) to separate the adaptation, stimulus-difference, and deviance-detection components of the MMN. In the visual oddball task, a static sinusoidal grating (e.g., 45 degrees) was presented on 87.5% of trials (standard) and another grating (e.g., 135 degrees) on 12.5% of trials (deviant). In the auditory oddball task, two pure tones were used with probabilities of 90% (standard) and 10% (deviant). In the many-standards task, gratings of one of eight orientations, or pure tones of one of ten pitches, were presented randomly with equal probabilities, so that the probability of the deviant stimulus matched that in the oddball task (12.5% visual, 10% auditory). Stimulus-triggered averages of the field potential signals identified an MMN component in temporal regions for both visual and auditory stimuli, suggesting a common circuit for detecting deviation. In contrast, time-frequency analysis revealed modality specificity across areas.
High-gamma activity to mismatch stimuli was found mainly in posterior parietal and extrastriate visual areas for visual stimuli, whereas auditory mismatch responses were limited to the primary auditory cortex. M.Y. and M.K. were funded by the program for Brain Mapping by Integrated Neurotechnologies for Disease Studies (Brain/MINDS) from the Ministry of Education, Culture, Sports, Science and Technology (MEXT) and the Japan Agency for Medical Research and Development (AMED) (19dm0207069h0001). H.M. was funded by JSPS KAKENHI (JP20H05487), M.Y. by JP20H05487 and 20H00001, and M.K. by 19H04993. M.Y. was also funded by the Center of Innovation Program from the Japan Science and Technology Agency (JST).
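The logic of the many-standards control is that each stimulus occurs as rarely as the oddball deviant, so any residual response difference isolates deviance detection rather than mere rarity or adaptation. A small sketch (illustrative only; stimulus values and trial counts are placeholders, not the study's parameters) shows how the deviant probabilities are matched:

```python
import random

def oddball_sequence(n_trials, standard, deviant, p_deviant, rng):
    """Classic oddball: one frequent standard, one rare deviant."""
    return [deviant if rng.random() < p_deviant else standard
            for _ in range(n_trials)]

def many_standards_sequence(n_trials, stimuli, rng):
    """Control: all stimuli equiprobable, so each is exactly as rare
    as the oddball deviant (probability 1/len(stimuli))."""
    return [rng.choice(stimuli) for _ in range(n_trials)]

rng = random.Random(42)
# Visual version: 8 grating orientations -> each appears with p = 1/8 = 12.5%,
# matching the 12.5% deviant probability of the visual oddball block.
orientations = [0, 22.5, 45, 67.5, 90, 112.5, 135, 157.5]
odd = oddball_sequence(4000, standard=45, deviant=135, p_deviant=0.125, rng=rng)
ctrl = many_standards_sequence(4000, orientations, rng=rng)

p_odd = odd.count(135) / len(odd)
p_ctrl = ctrl.count(135) / len(ctrl)
print(round(p_odd, 3), round(p_ctrl, 3))  # both close to 0.125
```

Comparing responses to the same physical stimulus across the two sequences then separates true deviance detection from stimulus-specific adaptation.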
July 3, 2022, 11:30–11:45, Okinawa Convention Center, Meeting Rooms B5–7, Venue 4
4O04a1-03
Why am I left or right-handed? Timing error as a source of handedness
*Atsushi Takagi(1), Sho Ito(1), Hiroaki Gomi(1)
1. NTT Communication Science Laboratories

Keyword: handedness, movement variability, electromyography

Most people have a preferred hand for dexterous tasks like writing or throwing, yet the specific mechanism underlying this handedness remains elusive. We propose a new temporal synchrony theory wherein the dominant hand's advantage lies in the precise control of muscle activation timing. In the first experiment, 10 right-handed people had to produce a periodic force against a fixed handle in time with the beats of a metronome. While the variability of the right hand's force remained constant, the non-dominant hand's force grew progressively more variable at higher frequencies. Temporal synchrony theory, implemented in a computational simulation of the brain controlling the arm, was the most accurate at explaining this difference in force regulation between the hands, outperforming other prominent theories of handedness. Temporal synchrony predicts that fast continuous motion should be less variable with the dominant hand, a prediction that was validated in a second experiment wherein 15 left-handed and 67 right-handed individuals rotated a smartphone device in a circular motion at 3.5 Hz. Analysis of the device's accelerometer readings revealed that the dominant hand exhibited significantly lower movement variability in both left- and right-handed people. Our theory also provides a satisfying explanation as to why the non-dominant arm is usually more coactivated during motion. Coactivation increases the arm's stiffness, which can help to reduce the movement variability caused by mistimed muscle activations. However, coactivation only helps if the arm's position is allowed to stabilize, explaining why rapid movements like finger tapping and the pegboard test are reliable measures of handedness. Our theory provides the crucial missing link between manual dexterity and speech, namely that both require temporal precision in order to function, explaining why they lateralize to a similar degree in left- and right-handed people.
The precise regulation of muscle activation timing is the key to dexterity.
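The core prediction of the temporal synchrony account is that a fixed amount of timing jitter produces force errors that grow with movement frequency, because the force trajectory changes faster at higher frequencies. A toy simulation (a sketch of the idea only, with assumed jitter values, not the authors' model) makes this concrete:

```python
import numpy as np

def force_variability(freq, timing_sd, n=20000, rng=None):
    """SD of force sampled at the intended moment when activation time
    is jittered by Gaussian noise (timing_sd in seconds)."""
    rng = rng or np.random.default_rng(0)
    jitter = rng.normal(0.0, timing_sd, n)
    # Sinusoidal force cycle; the intended sample point is the steep
    # zero-crossing, where timing error maps most directly to force error.
    force = np.sin(2 * np.pi * freq * jitter)
    return force.std()

freqs = [0.5, 1.0, 2.0, 3.0]  # movement frequencies in Hz
dominant = [force_variability(f, timing_sd=0.005) for f in freqs]      # 5 ms jitter
nondominant = [force_variability(f, timing_sd=0.015) for f in freqs]   # 15 ms jitter

# Variability grows with frequency, and faster for the hand with coarser
# timing -- qualitatively matching the non-dominant hand's disadvantage.
print([round(v, 3) for v in dominant])
print([round(v, 3) for v in nondominant])
```

For small jitter the force error is roughly 2πf·σ, so halving the timing noise halves the variability at every frequency, which is the dominant hand's advantage in this account.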
July 3, 2022, 11:45–12:00, Okinawa Convention Center, Meeting Rooms B5–7, Venue 4
4O04a1-04
Imagery-based brain-computer interface using electrocorticograms

*Ryohei Fukuma(1,2,3), Takufumi Yanagisawa(1,2,3,4), Shinji Nishimoto(5,6), Hidenori Sugano(7), Kentaro Tamura(8), Shota Yamamoto(1), Yasushi Iimura(7), Yuya Fujita(1), Satoru Oshino(1), Naoki Tani(1), Naoko Koide–Majima(5,6), Yukiyasu Kamitani(2,9), Haruhiko Kishima(1,4)
1. Department of Neurosurgery, Graduate School of Medicine, Osaka University, Suita, Japan, 2. ATR Computational Neuroscience Laboratories, Seika-cho, Japan, 3. Institute for Advanced Co-Creation Studies, Osaka University, Suita, Japan, 4. Osaka University Hospital Epilepsy Center, Suita, Japan, 5. Center for Information and Neural Networks (CiNet), National Institute of Information and Communications Technology (NICT), Suita, Japan, 6. Graduate School of Frontier Biosciences, Osaka University, Suita, Japan, 7. Department of Neurosurgery, Juntendo University, Tokyo, Japan, 8. Department of Neurosurgery, Nara Medical University, Kashihara, Japan, 9. Graduate School of Informatics, Kyoto University, Kyoto, Japan

Keyword: Brain-computer interface (BCI), Electrocorticogram (ECoG), Visual imagery, Real-time visual feedback

Objective: Cortical activity evoked by a visual stimulus is affected by visual imagery. Here, we hypothesized that subjects can intentionally control feedback images inferred from their cortical activity so that the feedback image contains the intended meaning.
Methods: Movies for visual stimulation were created by concatenating short video clips (median duration: 16 s) cropped from films, yielding 60 min of movies containing varied semantic content. Electrocorticograms (ECoGs) were recorded from the occipital or temporal lobes while subjects watched the movies. Every 1 s, an image was extracted from the movies and annotated by crowd-workers; the words in each annotation were converted to 1,000-dimensional vectors with a word2vec model and averaged within the image to yield a semantic vector. The semantic vectors were inferred from high-γ power in the ECoGs using ridge regression with nested cross-validation. For each of the categories "human face", "landscape", and "word", 50 images denoting the category were selected from the 3,600 images shown to the subject. Accuracy in identifying the category of these selected images within each pair of categories was evaluated (binary accuracy). Finally, using the decoder in real time, subjects performed a feedback task in which they controlled the feedback images inferred from their cortical activity in real time. The subjects were instructed to control the feedback image by visual imagery; during the task, oral instructions specified the category to display ("human face", "landscape", or "word"). Controllability of the feedback image was evaluated with Pearson's correlation coefficient between the semantic vector inferred in real time and the semantic vectors of the three categories.
Results: Binary accuracy in identifying the category of the selected images was 70.5%. Moreover, all subjects could drive the inferred semantic vector significantly closer to the semantic vector of the instructed category.
Conclusion: Subjects could intentionally control the feedback images by visual imagery so that they contained the instructed meaning. We will introduce our approach toward a new brain-computer interface (BCI) based on visual imagery.
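The decoding step described above, ridge regression from neural features to semantic vectors followed by pairwise (binary) identification, can be sketched in closed form. This is an illustrative toy with synthetic data and assumed dimensions (the study used 1,000-dimensional word2vec vectors and nested cross-validation), not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, n_feat, n_dim = 300, 60, 40, 10  # toy sizes

# Synthetic "high-gamma features" linearly related to "semantic vectors".
W_true = rng.normal(size=(n_feat, n_dim))
X = rng.normal(size=(n_train + n_test, n_feat))
Y = X @ W_true + rng.normal(scale=2.0, size=(n_train + n_test, n_dim))
X_tr, X_te, Y_tr, Y_te = X[:n_train], X[n_train:], Y[:n_train], Y[n_train:]

# Ridge regression in closed form: W = (X'X + aI)^-1 X'Y.
alpha = 10.0
W = np.linalg.solve(X_tr.T @ X_tr + alpha * np.eye(n_feat), X_tr.T @ Y_tr)
Y_pred = X_te @ W

# Pairwise (binary) identification: a prediction counts as correct if it
# correlates more with its own target vector than with a competing one.
def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

hits, trials = 0, 0
for i in range(n_test):
    for j in range(n_test):
        if i != j:
            trials += 1
            hits += corr(Y_pred[i], Y_te[i]) > corr(Y_pred[i], Y_te[j])
print(round(hits / trials, 3))  # well above the 0.5 chance level
```

The same correlation-to-category comparison, applied to the real-time inferred vector against the three category vectors, is what lets the feedback task score whether imagery moved the decoded meaning toward the instructed category.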