Posters
G. モデリング、ハードウェア、応用
G. Modeling, Hardware Implementation, and Applications
2022年7月2日 11:00~12:00 宜野湾市民体育館 ポスター会場2
3P-259
Continuous learning in an immature neural network induces memory loss.
*Hiroyasu Watanabe(1)
1. Bream Research Group

Keyword: Neuronal maturation, bifurcation, learning, memory

The central nervous system is established through developmental maturation, especially in the higher mammalian brain. Maturation includes improvement of neural transmission fidelity, highly dynamic plasticity of neuronal transmission, and selection of synaptic contact sites. Although maturation is an endogenous event, its functional significance has not yet been clarified.
Using a simple neural network model, I investigated the functional significance of neuronal transmission fidelity. In MNIST handwritten-digit recognition, a network with low-fidelity neuronal transmission could still learn the task at almost the same level as one with normal high-fidelity transmission. However, with continued training under low-fidelity transmission, the established recognition ability suddenly dropped off and was lost. In a three-layer network, this memory loss was observed only with a ReLU-type activation function, not with a sigmoid-type activation function. I speculate that this phenomenon may correspond to a hallucination.
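A minimal sketch of the kind of experiment described above is given below. It uses scikit-learn's built-in 8x8 digits dataset as a lightweight stand-in for MNIST and models low-fidelity transmission as Gaussian noise injected into the hidden-layer activations of a three-layer ReLU network; the network size, noise level, learning rate, and training length are illustrative assumptions rather than the settings of this study, and whether and when accuracy collapses will depend on them.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = load_digits(return_X_y=True)
X = X / 16.0
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

n_in, n_hid, n_out = 64, 64, 10
W1 = rng.normal(0, 0.1, (n_in, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.normal(0, 0.1, (n_hid, n_out)); b2 = np.zeros(n_out)

def forward(X, noise_sd):
    h_pre = X @ W1 + b1
    h = np.maximum(h_pre, 0.0)                          # ReLU hidden layer
    h = h + noise_sd * rng.normal(size=h.shape)         # "low-fidelity" transmission noise
    logits = h @ W2 + b2
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    return h_pre, h, p / p.sum(axis=1, keepdims=True)

def accuracy(X, y):
    return (forward(X, noise_sd=0.0)[2].argmax(axis=1) == y).mean()

lr, noise_sd = 0.1, 1.0           # assumed: larger noise = lower transmission fidelity
onehot = np.eye(n_out)[ytr]
for epoch in range(301):          # "continuous learning": keep training well past convergence
    h_pre, h, p = forward(Xtr, noise_sd)
    d_logits = (p - onehot) / len(Xtr)                  # softmax cross-entropy gradient
    gW2, gb2 = h.T @ d_logits, d_logits.sum(axis=0)
    d_h = d_logits @ W2.T
    d_h[h_pre <= 0] = 0.0                               # ReLU gradient (swap in a sigmoid to compare)
    gW1, gb1 = Xtr.T @ d_h, d_h.sum(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
    if epoch % 50 == 0:
        print(f"epoch {epoch:3d}  test accuracy {accuracy(Xte, yte):.3f}")
```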
I then looked for a biological example related to this phenomenon. In a database search, I found that KV11.3 is a potassium channel that is highly expressed in mature neurons and shows low expression in immature-type neurons, such as a specific population in the dentate gyrus.
A previous study showed that an SNP in this gene is associated with bipolar disorder and that the SNP channel shifts the membrane voltage threshold of activation slightly higher. Introducing this activation property into a simulation model of an IO neuron changed its excitation property from a stable resting state to continuous spiking.
These results suggest an approach to revealing how this KV11.3 mutation contributes to bipolar disorder and provide one example of how changes in excitation properties may induce psychiatric disorders.
2022年7月2日 11:00~12:00 宜野湾市民体育館 ポスター会場2
3P-260
Self-supervised pattern tuning by somato-dendritic coupling modulation
*Gaston Sivori(1), Tomoki Fukai(1)
1. OIST

Keyword: predictive coding, backpropagation, synaptic plasticity, computational neuroscience

The brain is continually anticipating responses given the current sensory information and past history. Predictive coding is the framework that supports these computations, but we do not know how predictions are computed nor how errors are propagated. One key aspect of brain computations is that top-down input carries both sensory and context information to the basal and distal dendritic branches, respectively, of cortical pyramidal neurons. At the cell level, back-propagating action potentials induce synaptic plasticity changes that may represent the intrinsic error signal, but how cells assign credit to the appropriate synapses remains elusive. Here we present a biologically plausible learning rule for dendritic synapses that reproduces the biophysical mechanisms observed in these neurons. To address error propagation, we show that a simple plasticity rule that informs recent synapses of the cell's output is fundamentally the only key computational requirement, and we further model Ca2+ as the error message, as has been evidenced in the scientific literature. We report that the solution to synaptic credit assignment depends entirely on the discrepancy observed between the recent cell output and its expected output. However, we do not need to model these aspects intrinsically and instead focus on the known biophysical mechanisms. Our results suggest that cortical neurons compute predictions based on their current synaptic structure and recent somatic activity, and propagate prediction errors by modulating the coupling conductances present in dendritic branches. By constructing a recurrent clustered network, we further show that convergence in tuning to a certain pattern hidden in streams of information requires surprisingly few presentations.
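As a loose illustration of the credit-assignment idea sketched above (a toy, not the authors' biophysical model), the snippet below tags recently active synapses with an eligibility trace and updates them in proportion to a broadcast discrepancy between the cell's coupled output and an assumed expected output; all constants, the linear "soma", and the target are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_syn, T = 20, 2000
w = rng.normal(0, 0.1, n_syn)              # dendritic synaptic weights
elig = np.zeros(n_syn)                     # eligibility trace of recent synaptic activity
tau_e, lr, g_couple = 20.0, 0.01, 1.0      # assumed constants; g_couple ~ somato-dendritic coupling
target_w = rng.normal(0, 0.5, n_syn)       # defines the "expected" output in this toy

for t in range(T):
    x = (rng.random(n_syn) < 0.2).astype(float)    # presynaptic spikes
    elig += -elig / tau_e + x                      # synapses tag their own recent activity
    soma = g_couple * (w @ x)                      # somatic output via the coupling conductance
    error = target_w @ x - soma                    # broadcast ("Ca2+-like") discrepancy signal
    w += lr * error * elig                         # recently active synapses receive most of the credit

print("remaining weight error:", round(float(np.linalg.norm(w - target_w)), 3))
```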
2022年7月2日 11:00~12:00 宜野湾市民体育館 ポスター会場2
3P-261
決定一意性、学習飽和度は状態空間拡張の適切さを規定する~動的状態空間強化学習モデルとディリクレモデルとの比較
Decision uniqueness and experience saturation define the appropriateness of state expansion: comparison between dynamic state space reinforcement learning model and Dirichlet models

*坂本 一寛(1,2)、片倉 世雄(2)、虫明 元(2)
1. 東北医科薬科大学医学部、2. 東北大学大学院医学系研究科
*Kazuhiro Sakamoto(1,2), Tokio Katakura(2), Hajime Mushiake(2)
1. Fac Med, Tohoku Med Pharm Univ, Sendai, Japan, 2. Grad Sch Med, Tohoku Univ, Sendai, Japan

Keyword: reinforcement learning, dynamic state space, decision uniqueness, experience saturation

The real world is essentially an indefinite environment in which the probability space, i.e., what can happen, cannot be specified in advance. Conventional reinforcement learning models that learn under uncertain conditions are given the state space as prior knowledge. Here, we developed a reinforcement learning model with a dynamic state space. The model expanded its state space so that it referred to an arbitrary length of previous states, based on two explicit criteria: experience saturation and decision uniqueness of action selection. The model not only performed comparably to an ideal model given prior knowledge of the task structure, but also performed well on a task that was not envisioned when the models were developed. Furthermore, we compared the proposed model with infinite hidden Markov models (iHMMs), which dynamically generate states without prior knowledge by hierarchically using a Dirichlet process. In contrast to our model, these models generated many states that were unnecessary for performing the behavioral task tested and showed much less reproducibility. These observations indicate that decision uniqueness provides the purpose of state expansion and experience saturation determines the timing of state expansion. The proposed model will serve as a basis for learning models that can adapt to an indefinite environment by including criteria that define the appropriateness of state expansion.
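A minimal sketch of the two criteria is shown below: a tabular agent on a toy task (reward depends on the two most recent observations) expands its history length, and hence its state space, only when every current state has been experienced often enough (experience saturation) while action values remain nearly tied (lack of decision uniqueness). The task, thresholds, and learning rate are assumptions for illustration and do not reproduce the models compared in the study.

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)
k = 1                                         # current history length defining the state space
Q = defaultdict(lambda: np.zeros(2))          # state (last k observations) -> action values
visits = defaultdict(int)
alpha, eps, sat_thresh, uniq_margin = 0.1, 0.1, 30, 0.3   # assumed hyperparameters
obs_hist, rewards = [0, 0], []

for t in range(20000):
    obs_hist.append(int(rng.integers(2)))
    s = tuple(obs_hist[-k:])
    a = int(np.argmax(Q[s])) if rng.random() > eps else int(rng.integers(2))
    r = 1.0 if a == (obs_hist[-1] ^ obs_hist[-2]) else 0.0   # the task needs 2 steps of history
    Q[s][a] += alpha * (r - Q[s][a])
    visits[s] += 1
    rewards.append(r)
    # expand the state space only when experience is saturated AND decisions are still ambiguous
    saturated = len(visits) == 2 ** k and min(visits.values()) >= sat_thresh
    ambiguous = np.mean([abs(q[0] - q[1]) for q in Q.values()]) < uniq_margin
    if saturated and ambiguous:
        k += 1
        Q.clear()
        visits.clear()

print("final history length:", k, "  recent reward rate:", round(float(np.mean(rewards[-2000:])), 2))
```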
2022年7月2日 11:00~12:00 宜野湾市民体育館 ポスター会場2
3P-262
深層強化学習を用いた不完全情報ゲームにおける戦略分析
Strategy Analysis in Incomplete Information Games Using Deep Reinforcement Learning

*阿部 慎太郎(1)、竹川 高志(1)
1. 工学院大学
*Shintaro Abe(1), Takashi Takekawa(1)
1. Kogakuin University of Technology and Engineering

Keyword: Incomplete Information Games, Deep Reinforcement Learning

Research on "game AI," which allows computers to play games, has been ongoing for a long time. In recent years, it has become possible to create game AI that is stronger than humans, and research is underway to create strong game AI in various games.
In a perfect information game, a game AI can take appropriate actions regardless of the actions of its opponent. However, it isn't easy to find the optimal action from the vast amount of information in the game. Recent advances in computers and algorithms have made significant progress in creating strong game AI.
In incomplete information games, not only is the game complex, but the hidden information is probabilistic, and the appropriate action changes depending on the opponent's subsequent actions. Therefore, it is necessary to predict the opponent's actions and make decisions accordingly. Consequently, it is more challenging to construct a versatile game AI than in a perfect information game. This research aims to understand the game's structure and to build a stronger strategy than that of the previous generation by repeatedly learning to play against a specific opponent in imperfect information games.
Experiments were conducted in a simplified environment using one of the imperfect information games, "Hagetakanoejiki". Specifically, the hand and the field cards each consist of 5 cards with the numbers 1 to 5 written on them. Reinforcement learning was used to create strategies learned from playing against an opponent, and the strategies were updated over generations of learning to verify their superiority over the strategies of the previous generation. We also evaluated the importance of game information in determining the behavior of the strategy.
By using reinforcement learning to update the strategy from the opponent's game, we were able to obtain a win rate of over 60% against all strategies of the older generation. This strategy focuses on using the numbers on the cards in the remaining deck to determine the action.
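The sketch below illustrates the generational training loop with a heavily simplified, hypothetical variant of the game (one prize card per round, the higher bid takes it, ties discard the prize); the real rules, state representation, deep network, and results are not reproduced. Each generation trains a tabular learner against the frozen previous generation and is then evaluated against it.

```python
import random
from collections import defaultdict

random.seed(0)

def play_game(policy_a, policy_b):
    """One simplified game: each round both players bid one hand card on a revealed prize;
    the higher bid takes the prize, ties discard it (a simplification of the real rules)."""
    prizes = random.sample(range(1, 6), 5)
    hand_a, hand_b = list(range(1, 6)), list(range(1, 6))
    score_a = score_b = 0
    trajectory = []                                     # (state, action) pairs for player A
    for prize in prizes:
        state = (prize, tuple(hand_a))
        ca, cb = policy_a(prize, hand_a), policy_b(prize, hand_b)
        trajectory.append((state, hand_a.index(ca)))
        hand_a.remove(ca)
        hand_b.remove(cb)
        if ca > cb:
            score_a += prize
        elif cb > ca:
            score_b += prize
    return trajectory, (1.0 if score_a > score_b else 0.0)

def greedy_policy(Q):
    def pi(prize, hand):
        s = (prize, tuple(hand))
        return hand[max(range(len(hand)), key=lambda i: Q[s][i])]
    return pi

random_policy = lambda prize, hand: random.choice(hand)
alpha, eps, n_games = 0.1, 0.2, 20000                   # assumed training settings
opponent = random_policy                                # generation 0
for gen in range(3):                                    # a few generations for illustration
    Q = defaultdict(lambda: [0.0] * 5)
    for _ in range(n_games):                            # learn against the frozen previous generation
        behave = lambda p, h: (random.choice(h) if random.random() < eps
                               else greedy_policy(Q)(p, h))
        traj, win = play_game(behave, opponent)
        for s, a in traj:                               # Monte-Carlo update toward the game outcome
            Q[s][a] += alpha * (win - Q[s][a])
    new_policy = greedy_policy(Q)
    wins = sum(play_game(new_policy, opponent)[1] for _ in range(2000))
    print(f"generation {gen + 1}: win rate vs previous generation = {wins / 2000:.2f}")
    opponent = new_policy
```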
2022年7月2日 11:00~12:00 宜野湾市民体育館 ポスター会場2
3P-263
皮質脊髄路における皮質運動ニューロンシナプスの後天的発達の計算シミュレーション
Computational simulation of postnatal development of cortico-motoneuronal synapse in cortico-spinal tract.

*閔 庚甫(1)
1. 東京都医学総合研究所
*Kyuengbo Min(1)
1. Tokyo Metropolitan Institute of Medical Science

Keyword: postnatal development of monosynaptic inputs to spinal motoneurons, cortico-basal ganglia circuit, reinforcement learning, computational model

In previous studies of the primary motor cortex (M1) and the cortico-spinal tract, it was reported that input from cortico-motoneuronal (CM) cells to spinal motoneurons is processed monosynaptically, without passing through spinal interneurons, and that CM cell activities are individually correlated with the temporal pattern of force production and the dynamics of motor output. Owing to this structure and function, the activities of CM cells may assist the activities of corticospinal (CST) neurons, which control motor primitives (MPs) through disynaptic inputs to motoneurons via spinal interneurons and are correlated with the overall direction and kinematics of motion. This anatomical and functional synergy between the two cortico-spinal pathways may reflect phylogenetic evolution in primates. In contrast to CST neurons in the rostral gyrus, CM cells in the caudal sulcus are found only in great apes, including humans and gorillas, and in some highly developed monkeys, and may therefore underlie more developed motor control. These CM cells develop postnatally and fully mature at about 2 years of age in monkeys. According to the aforementioned synergistic relation between the two cortico-spinal pathways, I postulate that the ontogenetic development of CM cells in producing monosynaptic inputs to spinal motoneurons is induced by a dynamic trade-off between inhibition and disinhibition of CST neuron activities and by a reinforcement learning process in the cortico-basal ganglia circuit, whose involvement in M1 contributes to producing input signals from M1 to spinal motoneurons through the corticospinal tract. Through simulations using a computational model based on neurophysiological evidence, I show the plausibility of the proposed hypothesis.
2022年7月2日 11:00~12:00 宜野湾市民体育館 ポスター会場2
3P-264
教師なし学習で形成された潜在空間における新規概念の獲得
Novel concept learning in latent space obtained through unsupervised learning

*片岡 麻輝(1)、長野 祥大(2)、大泉 匡史(1)
1. 東京大学大学院総合文化研究科、2. 東京大学大学院新領域創成科学研究科
*Asaki Kataoka(1), Yoshihiro Nagano(2), Masafumi Oizumi(1)
1. Graduate School of Arts and Sciences, The University of Tokyo, 2. Graduate School of Frontier Sciences, The University of Tokyo

Keyword: Latent Space, Contrastive Unsupervised Learning, Novel Concept Learning

A remarkable feature of the human brain is its ability to learn novel, unseen concepts from a few examples. However, the neural mechanisms that enable such few-shot learning have not yet been understood.
Such an ability is considered to be realized by the structure of the latent space, or representational space onto which inputs are projected, constructed through learning. A previous study investigated the latent space of deep convolutional neural networks (DCNNs) after training on supervised object classification and showed that, in the latent spaces of these DCNNs, discrimination of novel concepts can be learned quickly.
While the previous work addressed few-shot learning after supervised learning, the question of whether biologically plausible unsupervised learning algorithms can also attain an effective latent space remains open. In this study, we focus on contrastive learning as an example of such learning algorithms. In contrastive learning, DCNNs are trained to match the distances between representations of inputs in the latent space to the semantic similarities of those inputs. Since, in most practical cases, semantic similarity is defined based on random augmentations of images, this learning framework does not need supervision signals.
Here we show that the latent space obtained through contrastive learning also enables few-shot learning of novel concepts.
First, we trained two DCNNs using 50 object categories randomly chosen from the CIFAR-100 dataset, in supervised and contrastive learning frameworks, respectively. After training, we evaluated these networks on few-shot learning of discrimination between pairs of novel concepts, based on linear discriminant analysis using 10 inputs as training examples. As a result, the contrastive model showed high performance with an average error rate of 0.16, and the supervised model showed an average error rate of 0.11. From these results, we conclude that DCNNs trained with contrastive learning can also perform few-shot learning of novel concept discrimination competitively with previously investigated supervised networks.
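The evaluation protocol can be sketched as follows, with PCA fitted on base classes standing in for a learned DCNN latent space (the study trains supervised and contrastive DCNNs on CIFAR-100; scikit-learn's small digits dataset and all sizes here are stand-ins): fit a linear discriminant on 10 examples of two novel classes in the frozen latent space and report the held-out error.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X, y = load_digits(return_X_y=True)

# Stand-in latent space: PCA fitted on "base" classes 0-7 (in the study this is a DCNN
# trained with supervised or contrastive learning on 50 CIFAR-100 categories).
latent = PCA(n_components=10, random_state=0).fit(X[y < 8])

# Few-shot discrimination of a novel pair (8 vs 9) from 10 training examples (5 per class).
idx8, idx9 = np.where(y == 8)[0], np.where(y == 9)[0]
rng.shuffle(idx8); rng.shuffle(idx9)
train_idx = np.concatenate([idx8[:5], idx9[:5]])
test_idx = np.concatenate([idx8[5:], idx9[5:]])
clf = LinearDiscriminantAnalysis().fit(latent.transform(X[train_idx]), y[train_idx])
error_rate = 1.0 - clf.score(latent.transform(X[test_idx]), y[test_idx])
print(f"few-shot error rate on the novel pair: {error_rate:.3f}")
```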
This work can serve as a starting point for answering the more general question of which learning algorithms underlying human perception are biologically plausible. In particular, understanding the information representation in latent spaces attained by unsupervised learning is expected to help us understand how the brains of animals or humans acquire new concepts and learn to discriminate them during development.
2022年7月2日 11:00~12:00 宜野湾市民体育館 ポスター会場2
3P-265
Eigenfunction Synchronicity Modelにおける直交補空間学習による事変現象の安定的な時不変表象化 -時間連続事象中で時間離散化された情報のゼロ表現の有用性 第10報-
Stable Time-invariant Representation of Event-invariant Phenomena by orthogonal complementary learning in the Eigenfunction Synchronicity Model -Utility of zero representation of time-discretized information in time-continuous phenomena Part 10-

*西尾 直樹(1,2)、前田 太郎(1,2)、田中 靖人(1,3,4)
1. 大阪大学大学院、2. 独立行政法人 脳情報通信研究機構 脳情報通信融合研究センター、3. 株式会社三城ホールディングス R&D、4. 神経数理学研究所
*Naoki Nishio(1,2), Taro Maeda(1,2), Yasuto Tanaka(1,3,4)
1. Osaka Univ., 2. CiNet, 3. Miki Inc., Research and Development Section, 4. Neuro-Mathematics Lab.

Keyword: Wave Field, Eigenfunction Synchronicity Model, Self-Organization, Consciousness

Introduction:
In the 1st to 9th reports, we showed that the identity of memory across the whole brain is necessary for the identity of consciousness across the brain. The synchronization of the entire brain is ensured by the spatial eigenfunctions of standing waves, which are established synchronously over the whole brain. Only invariant and immovable states can be used as memory representations, and a standing-wave node satisfies this condition of being time-invariant and immovable. The cerebral cortex is composed of a large number of neurons, which form a wave field as in the Wilson-Cowan model. In the 9th report, we proposed orthogonal complementary space learning for plastic excitatory neurons, showing that local self-organized learning can form the nodes of the standing wave. In this tenth report, we examine whether standing waves self-organize to acquire time-invariant events by learning from time-varying phenomena.
Methods:
A multilayer perceptron composed of time-integrating neurons propagates waves that follow the advection equation. Two multilayer perceptrons with wave speeds in the forward and reverse directions are superimposed to represent an oscillatory field. The input to each multilayer perceptron is a sine wave whose phase advances in the spatial direction. We prepared two sine waves whose spatial phases changed with time with opposite signs to each other. This is similar to the relationship between the phase change of the head and the phase change of the eye due to the vestibulo-ocular reflex caused by head movement.
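The superposition this setup relies on can be illustrated directly: two sine waves whose phases advance in opposite spatial directions sum to a standing wave, and the positions whose activity stays near zero over time are its nodes. The wavenumber, frequency, and node threshold below are arbitrary assumptions; the perceptron-based wave field of the model is not reproduced.

```python
import numpy as np

x = np.linspace(0, 2 * np.pi, 200)           # spatial positions
t = np.linspace(0, 20, 2000)                 # time
k, omega = 3.0, 2.0                          # assumed wavenumber and temporal frequency
# two sine inputs whose spatial phase advances in opposite directions (forward / reverse waves)
forward = np.sin(k * x[None, :] - omega * t[:, None])
reverse = np.sin(k * x[None, :] + omega * t[:, None])
field = forward + reverse                    # superposition = 2 sin(kx) cos(wt): a standing wave
amplitude = np.abs(field).max(axis=0)        # maximum |activity| over time at each position
nodes = x[amplitude < 0.1]                   # positions that stay near zero = standing-wave nodes
print("node positions:", np.round(nodes, 2)) # expected near integer multiples of pi / k
```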
Results:
In the standing wave, no completely stable standing-wave nodes were detected. However, some neurons acted as standing-wave nodes for a large fraction of each unit time, while others did not. In addition, when the phase change of the multilayer perceptron over time was stopped, the standing-wave node pattern was significantly different from that during the phase change.
Discussion:
The standing-wave nodes obtained when an input whose phase changes with time is given are considered to be time-invariant events that express the relationship between the two inputs independently of the phase change. This is a time-invariant event extracted from a time-varying phenomenon by time discretization. Since orthogonal complementary space learning is expected to draw the standing waves toward the nodes, learning will be able to acquire stable time-invariant events.
2022年7月2日 11:00~12:00 宜野湾市民体育館 ポスター会場2
3P-266
Semi-supervised contrastive learning for semantic segmentation of ISH gene expression in the marmoset brain
*Charissa Poon(1), Muhammad Febrian Rachmadi(1), Michal Byra(2), Tomomi Shimogori(1), Henrik Skibbe(1)
1. RIKEN Center for Brain Science, Wako-shi, Japan, 2. Institute of Fundamental Technological Research, Polish Academy of Sciences, Warsaw, Poland

Keyword: deep learning, segmentation, gene expression, marmoset

Gene expression brain atlases such as the Allen Mouse Brain Atlas are widely used in neuroscience research. Such atlases in lower-order model organisms have led to great research achievements, but interspecies differences in brain structure and function point to the need to characterize gene expression in the primate brain. The Marmoset Gene Atlas, created by the Brain Mapping by Integrated Neurotechnologies for Disease Studies (Brain/MINDS) project in Japan, is an in situ hybridization (ISH) database of gene expression in the marmoset brain. The goal of our work is to create a deep learning model to automatically segment gene expression from ISH images of the adult marmoset brain.

Expression patterns of over 2000 different genes can be labelled and visualized using ISH. To characterize gene expression in brain images, ISH signals must be labelled and segmented. Expression intensity and localization can then be analyzed using image processing methods. Deep learning techniques have been widely applied for the segmentation of images on a per-pixel level, known as semantic segmentation. Supervised architectures such as the U-Net have led to impressive segmentation results, but require large labelled training datasets, which are expensive to obtain. Furthermore, in histological images, image variations caused by factors such as tissue preparation and image acquisition methods have been found to profoundly influence outputs from deep learning models, at times more than the signal itself. The ideal model for gene segmentation of the ISH marmoset brain data would require minimal to no labelling and produce consistent segmentations regardless of changes in image hue, brightness, or contrast.

We use a contrastive learning based self-supervised framework in order to create semantic segmentations of gene expressions in the adult marmoset brain. In contrastive learning, the model is trained in latent space, such as by maximizing agreement between the features of different augmented views of the same unlabeled image, or between the features of a labelled image and the model’s encoded representations of the unlabeled equivalent. We first create a small labelled ‘champion’ dataset of easily segmented gene expression brain images, which is then used to train a model to segment more difficult images, such as ones in which the background signal intensity is non-uniform. We propose using a wide range of augmentations to generate strongly perturbed images to account for a range of differences in image profiles. We show an example of a gene that has been fully segmented and mapped to a common 3D template of the marmoset brain. We hope that this work can be used for the segmentation of fine-detailed structures in biomedical images and assist in advancing primate brain research. This work was supported by the Brain/MINDS project from the Japan Agency for Medical Research and Development AMED (JP21dm0207001).
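A sketch of the two ingredients described above is given below: a strong, assumed augmentation pipeline that perturbs hue, brightness, and contrast, and a simplified contrastive (NT-Xent-style) loss that pulls together the embeddings of two views of the same tile. The transform parameters, the placeholder tile, and the embedding sizes are assumptions, not the settings used for the marmoset data.

```python
import numpy as np
import torch
from PIL import Image
from torchvision import transforms

# Strong photometric + geometric augmentations so that segmentation becomes robust to
# hue/brightness/contrast variation across histological sections (parameter values are assumed).
augment = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.5, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.6, contrast=0.6, saturation=0.4, hue=0.1),
    transforms.RandomGrayscale(p=0.2),
    transforms.GaussianBlur(kernel_size=23),
    transforms.ToTensor(),
])

image = Image.fromarray((np.random.rand(256, 256, 3) * 255).astype(np.uint8))  # stand-in ISH tile
view1, view2 = augment(image), augment(image)       # two strongly perturbed views of the same tile

def nt_xent(z1, z2, temperature=0.5):
    """Simplified contrastive (NT-Xent style) loss between two batches of view embeddings."""
    z1 = torch.nn.functional.normalize(z1, dim=1)
    z2 = torch.nn.functional.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature               # similarity of every pair across the two views
    labels = torch.arange(z1.size(0))                # matching views are the positives
    return torch.nn.functional.cross_entropy(logits, labels)

z1, z2 = torch.randn(8, 128), torch.randn(8, 128)    # stand-ins for encoder outputs of each view
print(float(nt_xent(z1, z2)))
```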
2022年7月2日 11:00~12:00 宜野湾市民体育館 ポスター会場2
3P-267
自由エネルギー原理に基づく感覚減衰の発達過程の計算論的説明
A computational account of the development of sensory attenuation via free-energy minimization

*出井 勇人(1,3)、大畑 渉(2)、山下 祐一(1)、尾形 哲也(3)、谷 淳(2)
1. 国立精神・神経医療研究センター、2. 沖縄科学技術大学院大学、3. 早稲田大学
*Hayato IDEI(1,3), Wataru Ohata(2), Yuichi Yamashita(1), Tetsuya Ogata(3), Jun Tani(2)
1. National Center of Neurology and Psychiatry, 2. Okinawa Institute of Science and Technology, 3. Waseda University

Keyword: Sensory attenuation, Free-energy principle, Development of prior precision, Variational Bayes recurrent neural network model

Sensory attenuation is the phenomenon where self-produced exteroceptions feel less salient than those produced externally, such as the difficulty of tickling oneself or the ability to ignore visual changes during eye or head movements. In computational modeling, sensory attenuation has been explained by a reduction in sensory prediction error signals based on a predictive processing framework, specifically free-energy minimization. A popular hypothesis holds that, when generating an action, a copy of the motor command (an efference copy) can be used to predict the resulting sensations, reducing prediction errors. Another recent hypothesis suggests that sensory attenuation can be understood as a reduction in the precision (or confidence) of prediction error of bottom-up input to the sensory area. However, a fundamental question remains unexplored: Is sensory attenuation an innate ability or is it acquired through learning? Here, using a variational recurrent neural network model, we provide a novel computational explanation suggesting that sensory attenuation can develop through learning of two distinct types of sensorimotor patterns characterized by self-produced or externally produced exteroceptive feedback. In simulations, the network, consisting of sensory (exteroceptive and proprioceptive), association, and executive areas, developed a particular global free-energy minimum for each sensorimotor pattern, characterized by the corresponding closed circuit created by the top-down estimation of the precision of priors (predictions) and the bottom-up prediction error flow inside the network. Thus, in externally produced conditions, the network adjusted posteriors in both sensory and association areas to minimize prediction errors by decreasing the precision of priors in both areas. In self-produced conditions, on the other hand, the network attenuated the change in posteriors in the sensory areas by increasing the precision of priors in the sensory areas. Abrupt shifts between the sensorimotor conditions induced transitions from one free-energy state to another in the network via executive control, leading to shifts between attenuated and amplified responses in the sensory areas. Our results suggest that sensory attenuation develops through learning as a particular hierarchical structure of precision in priors (rather than precision in prediction error), providing a novel perspective on neural mechanism underlying the emergence of perceptual phenomena.
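The role of prior precision can be illustrated with a one-dimensional Gaussian toy (a didactic simplification, not the variational RNN used in the study): the free-energy-minimising posterior is a precision-weighted average, so a high-precision prior attenuates the update driven by a surprising observation, while a low-precision prior lets the observation dominate.

```python
import numpy as np

def gaussian_free_energy(obs, posterior_mu, prior_mu, prior_pi, obs_pi):
    """Free energy of a Gaussian model with a point-estimate posterior (didactic simplification).
    prior_pi and obs_pi are the precisions of the prior and of the sensory prediction error."""
    prior_error = posterior_mu - prior_mu
    sensory_error = obs - posterior_mu
    return (0.5 * prior_pi * prior_error ** 2 - 0.5 * np.log(prior_pi)
            + 0.5 * obs_pi * sensory_error ** 2 - 0.5 * np.log(obs_pi))

obs, prior_mu, obs_pi = 1.0, 0.0, 1.0
# High prior precision pulls the posterior toward the prediction -> the sensory update is attenuated.
for prior_pi in (0.1, 10.0):                                      # assumed precision values
    posterior_mu = (prior_pi * prior_mu + obs_pi * obs) / (prior_pi + obs_pi)  # F-minimising posterior
    print(f"prior precision={prior_pi:5.1f}  posterior={posterior_mu:.2f}  "
          f"F={gaussian_free_energy(obs, posterior_mu, prior_mu, prior_pi, obs_pi):.3f}")
```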
2022年7月2日 11:00~12:00 宜野湾市民体育館 ポスター会場2
3P-268
時空間学習則に基づく文脈情報処理と記憶の構造化
Contextual memory processing and memory structuring based on spatio-temporal learning rules

*塚田 啓道(1)、塚田 稔(2)
1. 中部大学 AI数理データサイエンスセンター 、2. 玉川大学 脳科学研究所
*Hiromichi Tsukada(1), Minoru Tsukada(2)
1. Center for Mathematical Science and Artificial Intelligence (CMSAI), Chubu University, Aichi, Japan, 2. Brain Science Institute, Tamagawa University, Tokyo, Japan

Keyword: Neural Network, Learning and Memory, Spatio-Temporal Learning Rule, Contextual Memory Processing

The hippocampus is a brain region associated with contextual information processing, including episodic memory. Physiological experiments have shown that there are two types of learning rules in the hippocampus: the spatiotemporal learning rule (STLR), which mainly depends on the timing between inputs, and the Hebbian learning rule (HEB), which mainly depends on the timing between inputs and outputs. STLR has an advantage in contextual discrimination because of its ability to separate the temporal order of spatio-temporal patterns. HEB with recurrent networks, on the other hand, works for memory stability because of its ability to complete spatial patterns. Although these learning rules are known to coexist in the hippocampus, their functional roles in learning and memory of contextual information remain unclear. Here, we focused on these two types of learning rules and constructed a neural network model to elucidate the mechanism of contextual information processing in memory. The network consists of a one-layer neural network with feed-forward connections modified by STLR and feedback connections modified by HEB. We set up a spatio-temporal series with similar spatial patterns as the contextual inputs and ran simulations to evaluate how the weight space is self-organized. As a result, we found that the spatio-temporal context patterns are stably embedded in the weight space when these two learning rules interact in an appropriate balance. In addition, PCA analysis of the temporal changes in the weight space due to these learning rules revealed that this memory learning system embeds the history (temporal order) of inputs into the memory space in a hierarchical (fractal-like) structure. These results suggest that the hippocampus has functions such as efficiently compressing and embedding temporal structures (contextual information) into the memory space when the two learning rules that actually exist in the hippocampus are controlled in an appropriate balance by neuromodulators. These findings contribute to the understanding of the basic neural mechanisms of spatiotemporal context learning in the brain.
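A schematic toy of the two-rule architecture is sketched below: feed-forward weights are updated by an output-independent, input-coincidence term (a caricature of STLR), while recurrent feedback weights are updated by a Hebbian, output-dependent term, with the same spatial patterns presented in two temporal orders. All equations and constants are invented stand-ins; the actual STLR formulation and the PCA analyses of the study are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, T = 30, 20, 3000
W_ff = np.abs(rng.normal(0, 0.1, (n_out, n_in)))   # feed-forward weights (STLR-like rule)
W_rec = np.zeros((n_out, n_out))                   # recurrent feedback weights (Hebbian rule)
eta_stlr, eta_heb, tau = 0.001, 0.001, 5.0         # assumed constants

# the same spatial patterns presented in two different temporal orders ("contexts")
patterns = (rng.random((4, n_in)) < 0.2).astype(float)
orders = [(0, 1, 2, 3), (0, 2, 1, 3)]

trace, y = np.zeros(n_in), np.zeros(n_out)
for t in range(T):
    x = patterns[orders[(t // 4) % 2][t % 4]]
    y = np.tanh(W_ff @ x + W_rec @ y)
    # STLR caricature: potentiation depends on the coincidence of the current input with a
    # decaying trace of recent inputs (input-input timing), not on the output y.
    W_ff += eta_stlr * np.outer(W_ff @ (x + trace), x)
    # Hebbian rule on the feedback connections (depends on output activity).
    W_rec += eta_heb * np.outer(y, y)
    np.fill_diagonal(W_rec, 0.0)
    W_ff, W_rec = np.clip(W_ff, 0.0, 1.0), np.clip(W_rec, -1.0, 1.0)
    trace = trace * np.exp(-1.0 / tau) + x

print("feed-forward norm:", round(float(np.linalg.norm(W_ff)), 2),
      " recurrent norm:", round(float(np.linalg.norm(W_rec)), 2))
```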
2022年7月2日 11:00~12:00 宜野湾市民体育館 ポスター会場2
3P-269
Coherent resonance in the brain under visual stimulation
*Andrey Andreev(1,2), Alexander Pisarchik(3,1), Alexander Hramov(1,2)
1. Neuroscience and Cognitive Technology Laboratory, Innopolis University, Innopolis, Russia, 2. Center for Neurotechnology and Machine Learning, Immanuel Kant Baltic Federal University, Kaliningrad, Russia, 3. Center for Biomedical Technology, Technical University of Madrid, Madrid, Spain

Keyword: Coherent resonance, Visual stimulus, Neural network, Hodgkin-Huxley model

The neuronal brain network is a distributed computing system whose architecture is dynamically adjusted to provide optimal performance of sensory processing. A small amount of visual information that can be processed effortlessly activates neural activity in occipital and parietal areas. Conversely, a visual task that requires sustained attention to process a large amount of sensory information involves a set of long-distance connections between parietal and frontal areas coordinating the activity of these distant brain regions. We demonstrate theoretically, using simulations of a network of Hodgkin-Huxley neurons, and experimentally, by analyzing EEG data, that while neural interactions result in coherence, the strongest connection is achieved through coherence resonance induced by adjusting intrinsic brain noise. The experimental and theoretical studies provide substantial evidence for the beneficial effect of intrinsic brain noise on the efficiency of sensory processing and cognitive ability. At the same time, the effect of noise is observed for neuronal ensembles in particular task-related areas, mostly in the visual cortex. According to our study, intrinsic brain noise not only contributes to enhancing neuronal responses in particular brain areas, but also provides pathways for neural communication between remote brain regions. Our results confirm other studies claiming that effective visual sensory processing in the brain requires neural communication within the frontoparietal cortical network and that neural communication requires coherence.
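A single stochastic Hodgkin-Huxley neuron already shows the basic effect: driven by a subthreshold current plus noise, the regularity of its spike train (the coefficient of variation of interspike intervals) can be compared across noise intensities. The parameters below are standard textbook HH values with assumed drive and noise levels; the network simulations and EEG analyses of the study are far richer.

```python
import numpy as np

def isi_cv(i_base, noise, t_max=2000.0, dt=0.01, seed=0):
    """Hodgkin-Huxley neuron driven by a constant subthreshold current plus white noise;
    returns the coefficient of variation (CV) of its interspike intervals (nan if too few spikes)."""
    rng = np.random.default_rng(seed)
    c_m, g_na, g_k, g_l = 1.0, 120.0, 36.0, 0.3
    e_na, e_k, e_l = 50.0, -77.0, -54.4
    v, m, h, n = -65.0, 0.05, 0.6, 0.32
    spikes, above = [], False
    for step in range(int(t_max / dt)):
        a_m = 0.1 * (v + 40.0) / (1.0 - np.exp(-(v + 40.0) / 10.0))
        b_m = 4.0 * np.exp(-(v + 65.0) / 18.0)
        a_h = 0.07 * np.exp(-(v + 65.0) / 20.0)
        b_h = 1.0 / (1.0 + np.exp(-(v + 35.0) / 10.0))
        a_n = 0.01 * (v + 55.0) / (1.0 - np.exp(-(v + 55.0) / 10.0))
        b_n = 0.125 * np.exp(-(v + 65.0) / 80.0)
        i_ion = g_na * m**3 * h * (v - e_na) + g_k * n**4 * (v - e_k) + g_l * (v - e_l)
        v += dt * (i_base - i_ion) / c_m + noise * np.sqrt(dt) * rng.normal() / c_m
        m += dt * (a_m * (1.0 - m) - b_m * m)
        h += dt * (a_h * (1.0 - h) - b_h * h)
        n += dt * (a_n * (1.0 - n) - b_n * n)
        if v > 0.0 and not above:                 # upward crossing of 0 mV = spike
            spikes.append(step * dt)
        above = v > 0.0
    isi = np.diff(spikes)
    return np.std(isi) / np.mean(isi) if len(isi) > 2 else float("nan")

# sweep the noise intensity: coherence resonance predicts the most regular spike train
# (lowest CV) at an intermediate noise level
for noise in (1.0, 3.0, 10.0):                    # assumed noise intensities
    print(f"noise = {noise:4.1f}   ISI CV = {isi_cv(6.0, noise):.2f}")
```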
2022年7月2日 11:00~12:00 宜野湾市民体育館 ポスター会場2
3P-270
ラットの空間探索における好奇心の計算モデル
A computational model of curiosity during rats' spatial exploration

*川原 大典(1,2)、藤澤 茂義(1,2)
1. 東京大学大学院新領域創成科学研究科、2. 理化学研究所脳神経科学研究センター
*Daisuke Kawahara(1,2), Shigeyoshi Fujisawa(1,2)
1. Univ of Tokyo, Tokyo, Japan, 2. RIKEN Center for Brain Science

Keyword: CURIOSITY, COMPUTATIONAL MODEL, SPATIAL EXPLORATION, RAT

Curiosity is a fundamental instinct that is necessary to learn new things. We learn a model of the environment by actively exploring it and acquiring new knowledge through curiosity. Rats are known to learn maps, or models, of the environment while exploring it. This active learning of the environment model is deeply related to curiosity, but the underlying neural computational mechanisms are not well understood. This study aims to propose a computational model of curiosity and to validate it by comparison with experimental data.
We used silicon probes to extracellularly record the activity of hippocampal neurons during rats' free exploration of a novel environment. We attempted to reconstruct a map of the environment (a model of the environment) from the measured spikes without spatial coordinate information. To do so, we used variational Bayesian unsupervised learning to estimate the rat's location from the firing information. Many methods have been proposed to decode the latent variables behind neuronal activity. These methods consist of a probabilistic model p(s'|s) for state transitions of a latent variable s (in this case, the position of the rat) and an observation model p(o|s) between the latent variable s and the spike information o of the neurons. Most conventional methods use linear models for these components. However, in many cases, the state transitions of the rat's position and the relationship between the position and the neurons' firing are highly nonlinear. Therefore, we modeled the latent variable dynamics p(s'|s) with a recurrent neural network (RNN) and the observation model with a neural network. Our proposed method was able to estimate the position of the rat more accurately than the conventional method. One of the goals of curiosity is to obtain as much new information as possible by exploration, which is necessary to learn the environment models p(o|s) and p(s'|s). Therefore, we modeled curiosity in the framework of reinforcement learning by treating the amount of information obtained through exploration as a reward. We verified the validity of the proposed computational model of curiosity by comparing the reinforcement learning model with experimental data.
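The "information gain as reward" idea can be sketched with a count-based (Dirichlet) transition model on a toy ring environment: the intrinsic reward for a transition is the change it induces in the agent's predictive distribution, and actions are chosen to maximise the expected gain. This is an illustrative stand-in only; the study's model uses a variational RNN and real hippocampal data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 5, 2
# Dirichlet pseudo-counts over the transition model p(s'|s,a), starting from a uniform prior.
counts = np.ones((n_states, n_actions, n_states))

def info_gain(counts_sa, s_next):
    """Information gained about p(s'|s,a) from one observed transition (KL of the new
    predictive distribution from the old one) -- used as the intrinsic 'curiosity' reward."""
    old = counts_sa / counts_sa.sum()
    new_counts = counts_sa.copy()
    new_counts[s_next] += 1
    new = new_counts / new_counts.sum()
    return float(np.sum(new * np.log(new / old)))

def step(s, a):
    """Toy ring world: action 0 moves left, action 1 moves right, with 10% slips."""
    move = 1 if a == 1 else -1
    if rng.random() < 0.1:
        move = -move
    return (s + move) % n_states

s = 0
for t in range(500):
    # curiosity-driven choice: pick the action with the largest expected information gain
    gains = [sum((counts[s, a] / counts[s, a].sum())[s2] * info_gain(counts[s, a], s2)
                 for s2 in range(n_states))
             for a in range(n_actions)]
    a = int(np.argmax(gains)) if rng.random() > 0.1 else int(rng.integers(n_actions))
    s_next = step(s, a)
    r_curiosity = info_gain(counts[s, a], s_next)     # intrinsic reward = information gained
    counts[s, a, s_next] += 1
    s = s_next

print("visit counts per (state, action):")
print(counts.sum(axis=2) - n_states)                  # subtract the prior pseudo-counts
```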
2022年7月2日 11:00~12:00 宜野湾市民体育館 ポスター会場2
3P-271
ヒトiPS細胞由来アストロサイトと共培養した単一ニューロンの興奮性シナプス機能解析
Modulation of excitatory synaptic release in the single neuron by human-induced pluripotent stem cell-derived astrocytes

*内野 鉱也(1)、田中 泰圭(2)、川口 紗果(1)、窪田 香織(1)、渡辺 拓也(1)、桂林 秀太郎(1,2)、廣瀬 伸一(2,3)、岩崎 克典(1)
1. 臨床疾患薬理学教室、2. 福岡大学 てんかん分子病態研究所、3. 福岡大学 医学部 総合医学研究センター
*Kouya Uchino(1), Yasuyoshi Tanaka(2), Sayaka Kawaguchi(1), Kaori Kubota(1), Takuya Watanabe(1), Shutaro Katsurabayashi(1,2), Shinichi Hirose(2,3), Katsunori Iwasaki(1)
1. Department of Neuropharmacology, Faculty of Pharmaceutical Sciences, Fukuoka University, Fukuoka, Japan, 2. Research Institute for the Molecular Pathogeneses of Epilepsy, Fukuoka University, Fukuoka, Japan, 3. General Medical Research Center, School of Medicine, Fukuoka University, Fukuoka, Japan

Keyword: Astrocyte, Synapse, Induced pluripotent stem cell

Brain cells comprise neurons and glial cells. Astrocytes are a type of glial cell involved in synaptic transmission and in the formation and maturation of synapses; thus, they are a constitutive element of the tripartite synapse. Currently, the establishment of induced pluripotent stem cells (iPSCs) allows the differentiation of stem cells into various types of cells while preserving the patient phenotype. Therefore, patient iPSCs can replace animal models in pathological analysis and drug discovery. Technological advances have provided access to human astrocytes through the induction of iPSCs, and the mRNA profiles, protein expression, and morphology of iPSC-derived astrocytes (HiAs) have been reported. Furthermore, neurons co-cultured with pathological astrocytes have been used to study their morphology, synaptic gene expression, protein levels, and spontaneous synaptic responses. However, these studies did not investigate detailed synaptic functions, such as synaptic transmission evoked by electrical stimulation, or morphological analysis at the single-neuron level. In this study, we established autaptic cultures with HiAs (HiAs Autaptic Cultures, HiAACs): single-neuron cultures grown in isolation on microislands of HiAs that form synapses exclusively with themselves. We evaluated the effect of astrocytes on the synaptic functions of human-derived neurons. We found a significantly higher Na+ current amplitude, membrane capacitance, and number of synapses, as well as longer dendrites, in HiAACs compared with neuron monocultures. Furthermore, HiAs were involved in the formation and maturation of functional synapses that exhibited excitatory postsynaptic currents. Although we used healthy astrocytes in this study, HiAACs can be used to study various diseases by using patient-derived astrocytes in the future.
2022年7月2日 11:00~12:00 宜野湾市民体育館 ポスター会場2
3P-272
タスクによらない時系列の具体的表現の学習とそのfMRI時系列解析への応用
Learning task-agnostic concrete representation of time series and their applications in fMRI time series analysis

*白 聞駿(1)、徳田 智磯(1)、山下 宙人(1)、吉本 潤一郎(1)
1. 国際電気通信基礎技術研究所
*Wenjun BAI(1), Tomoki Tokuda(1), Okito Yamashita(1), Junichiro Yoshimoto(1)
1. ATR

Keyword: Time series analysis, fMRI analysis, Mutual information

Attaining informative representations of time series is considered one of the fundamental research questions in time series analysis. Such extracted informative representations have led to the advancement of many scientific discoveries, including various neuroimaging applications. By modelling the predictive nature of subsequences, local predictive models, e.g., RNNs and auto-regressive models, are well known for extracting informative representations of subsequences, achieving impressive results in various time series analyses. Unfortunately, these learned representations are task-specific and abstract, being specifically acquired to address certain sequence modelling problems. Hence, to attain a task-agnostic and concrete (non-abstract) representation of subsequences, we propose an extension of the conventional local predictive model -- the unified local predictive model -- that incorporates two predictive processes, permitting the learned representations to be predictive of both concurrent and upcoming subsequences. Integrated with a periodic function, these task-agnostic representations are able to indicate the frequency of the modelled time series, revealing its spectral information.

Through a proof-of-concept simulation study, we show the empirical superiority of the task-agnostic representations from our proposed unified local predictive model over task-specific ones in solving both temporal reconstruction and smoothing tasks. These task-agnostic representations also correctly recover the periodicity of the input time series, corresponding to the provided ground truth. Additionally, the robustness of the proposed unified predictive model under violation of the predictive assumption is further demonstrated, in contrast with the failure of a conventional local predictive model. Applying the proposed unified local predictive model to fMRI time series analysis, the learned task-agnostic representations are capable of offering a novel spectral characterisation of cortical regions at rest, reconstructing fMRI signals that are smoother than the original ones, and yielding much more stable dynamic functional connectivity on the basis of the reconstructed fMRI signals.
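A stripped-down version of the two-headed idea is sketched below: a recurrent encoder whose representation must both reconstruct the concurrent subsequence and predict the upcoming one, trained on a toy sinusoid. The architecture and sizes are assumptions, and the periodic readout used for spectral characterisation in the study is omitted.

```python
import math
import torch
import torch.nn as nn

class UnifiedLocalPredictor(nn.Module):
    """Sketch of a local predictive model whose representation must explain both the
    concurrent subsequence and the upcoming one (layer sizes are assumptions)."""
    def __init__(self, window=20, hidden=32):
        super().__init__()
        self.encoder = nn.GRU(1, hidden, batch_first=True)
        self.reconstruct = nn.Linear(hidden, window)   # head for the concurrent subsequence
        self.predict = nn.Linear(hidden, window)       # head for the upcoming subsequence

    def forward(self, x):                              # x: (batch, window, 1)
        _, h = self.encoder(x)
        z = h[-1]                                      # task-agnostic representation
        return z, self.reconstruct(z), self.predict(z)

# toy data: a sinusoid split into consecutive windows
window = 20
t = torch.arange(0, 2000, dtype=torch.float32)
signal = torch.sin(2 * math.pi * t / 50.0)
chunks = signal[: (len(signal) // window) * window].reshape(-1, window)
current, upcoming = chunks[:-1], chunks[1:]

model = UnifiedLocalPredictor(window)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(300):
    z, rec, pred = model(current.unsqueeze(-1))
    loss = nn.functional.mse_loss(rec, current) + nn.functional.mse_loss(pred, upcoming)
    opt.zero_grad(); loss.backward(); opt.step()
print("final training loss:", float(loss))
```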
2022年7月2日 11:00~12:00 宜野湾市民体育館 ポスター会場2
3P-273
複数の情報メトリックを用いた空間ナビゲーションのデコード
Decoding spatial navigation with different information metrics

*張 如月(1)、宮川 剛(2)、篠本 滋(3,1)、石井 信(1,3,4)
1. 京都大学大学院情報学研究科、2. 藤田医科大学総合医科学研究所、3. 国際電気通信基礎技術研究所脳情報解析研究所、4. 東京大学ニューロインテリジェンス国際研究機構
*Ruyue Zhang(1), Tsuyoshi Miyakawa(2), Shigeru Shinomoto(3,1), Shin Ishii(1,3,4)
1. Graduate School of Informatics, Kyoto University,Kyoto,Japan, 2. Institute for Comprehensive Medical Science, Fujita Health University, Aichi, Japan, 3. Neural Information Analysis Laboratories, Advanced Telecommunications Research Institute International, Kyoto,Japan, 4. International Research Center for Neurointelligence, the University of Tokyo, Tokyo, Japan

Keyword: dentate gyrus, information metrics, decoding

The dentate gyrus (DG), a part of the hippocampal formation, plays critical roles in navigation. However, how navigational information such as position and head direction is encoded in DG neurons still remains unknown. An open-field experiment in which mice could move freely was conducted by Miyakawa and colleagues [1]. To find out how DG neurons encode navigational information, we applied statistical methods to Ca2+ imaging data of mice accompanied by their navigational information. Taking the neuron-wise firing rate into account, we compared two commonly used information metrics for estimating the position and related information carried by single neurons. One was proposed by Skaggs et al., with the unit bits/spike [2], and the other was Shannon's mutual information, using bits as its unit. We noticed differences between these two methods when calculating the information of the same neurons. However, they were consistent when comparing the information between normal mice and αCaMKII knockout mice: the neuron-wise information of the αCaMKII mutant mice was considerably lower than that of the normal mice, even though the mutant mice moved more frequently throughout the experiments. The mutual information between the position and the Ca2+ signals of DG neurons strongly suggests that there is a non-linear relation between these two variables, which enables us to decode the position of the mice from the DG neurons' activities.
[1] Murano, T., et al. Multiple types of navigational information are diffusely and independently encoded in the population activities of the dentate gyrus neurons. bioRxiv 2020.06.09.141572; doi: https://doi.org/10.1101/2020.06.09.141572 (2020)
[2] Skaggs, W.E., et al. An information-theoretic approach to deciphering the hippocampal code. Advances in Neural Information Processing Systems, 1030-1037 (1993)
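For reference, the two metrics compared above can be computed as follows; the occupancy, rate map, and activation probabilities below are a synthetic "place cell" example, not the recorded DG data.

```python
import numpy as np

def skaggs_information(occupancy, rate_map):
    """Skaggs et al. spatial information in bits per spike.
    occupancy: fraction of time spent in each spatial bin; rate_map: firing rate per bin."""
    mean_rate = np.sum(occupancy * rate_map)
    valid = rate_map > 0
    return float(np.sum(occupancy[valid] * rate_map[valid] / mean_rate
                        * np.log2(rate_map[valid] / mean_rate)))

def mutual_information(joint):
    """Shannon mutual information (bits) between position and binarized activity,
    given their joint probability table."""
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (px @ py)[nz])))

# toy place cell: 10 spatial bins with elevated firing in bins 4-6
occupancy = np.full(10, 0.1)
rate_map = np.array([0.5, 0.5, 0.5, 0.5, 5.0, 8.0, 5.0, 0.5, 0.5, 0.5])
print("Skaggs information (bits/spike):", round(skaggs_information(occupancy, rate_map), 3))

# joint distribution of (position bin, active-or-not) for a binarized Ca2+ signal
p_active = rate_map / rate_map.max() * 0.5            # assumed activation probabilities
joint = np.stack([occupancy * (1 - p_active), occupancy * p_active], axis=1)
print("mutual information (bits):", round(mutual_information(joint), 3))
```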
2022年7月2日 11:00~12:00 宜野湾市民体育館 ポスター会場2
3P-274
協調的リザバーモデルによる多感覚統合
Multi-sensory integration by a collaborative reservoir computing model

*金村 一輝(1)
1. 立命館大学
*Itsuki Kanemura(1)
1. Ritsumeikan University

Keyword: Multi-sensory integration, Reservoir computing

Introduction: The human brain integrates information from different modalities obtained through the sensory organs. However, the underlying mechanism is still unclear and has attracted widespread interest in brain science and related fields. The brain integrates information that is presented simultaneously and repeatedly as a chunk corresponding to an external event to be recognized. In addition, sensory information is generally acquired as a time-varying signal, and the temporal structure of the stimulus is useful information.
Objective: The objective of this study is to detect and integrate temporal stimulus patterns that co-occur between different modalities.
Method: Recurrent neural network models trained to mimic each other's output can detect stimulus patterns that appear repeatedly in a time-series signal. We applied this model to identify specific patterns that co-occur between information from different modalities. As input, we used text-tone signals for auditory perception and image signals for visual perception.
Result: The model self-organized according to specific fluctuation patterns that co-occurred between different modalities and could detect each fluctuation pattern. This was confirmed by principal component analysis, which showed that the recurrent network dynamics for each information pattern occupy different locations in the same principal component space. In another validation, the model failed to work correctly for signals whose fluctuation patterns did not co-occur.
Discussion: In the brain, integration is thought to be facilitated by stimulus patterns that are presented simultaneously and repeatedly between different modalities. In addition, top-down control from higher brain regions such as association cortices and direct interaction between primary sensory regions have been shown to be important for perception. The present model proposes a possible mechanism for integrating time-varying stimuli from different modalities that relies on direct interaction between the assumed primary sensory regions and on the temporal relationship and frequency of appearance of the stimuli, without interaction through higher areas. However, the proposed model uses FORCE learning as the learning rule, and this algorithm is not biologically plausible. In the future, it will be necessary to consider learning algorithms based on biological synaptic modification rules.
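For concreteness, the sketch below shows FORCE learning implemented as recursive least squares on a single rate-based reservoir learning a periodic target; the collaborative model instead couples two such reservoirs and uses each other's readout as the training target, and its parameters differ from the values assumed here.

```python
import numpy as np

rng = np.random.default_rng(0)
n, dt, alpha, g = 300, 0.1, 1.0, 1.5   # reservoir size, time step, RLS regularization, chaotic gain (assumed)
W = g * rng.normal(0, 1.0 / np.sqrt(n), (n, n))
w_out = np.zeros(n)                     # readout weights trained by FORCE (RLS)
w_fb = rng.uniform(-1, 1, n)            # feedback of the readout into the reservoir
P = np.eye(n) / alpha                   # inverse correlation matrix for RLS

x = rng.normal(0, 0.5, n)
r = np.tanh(x)
z = 0.0
t = np.arange(0, 240, dt)
target = np.sin(2 * np.pi * t / 12.0)   # pattern the readout should reproduce

errors = []
for i, f in enumerate(target):
    x += dt * (-x + W @ r + w_fb * z)
    r = np.tanh(x)
    z = w_out @ r
    if i < len(target) // 2:            # FORCE training phase: RLS update of the readout
        Pr = P @ r
        k = Pr / (1.0 + r @ Pr)
        P -= np.outer(k, Pr)
        w_out -= (z - f) * k
    else:                               # testing phase: weights frozen
        errors.append((z - f) ** 2)
print("test RMSE:", round(float(np.sqrt(np.mean(errors))), 3))
```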
2022年7月2日 11:00~12:00 宜野湾市民体育館 ポスター会場2
3P-275
両眼性マイクロサッケードに誘発された非線形固視微動
Non-linear fixational eye movement: binocularly produced by micro-saccades

*田中 靖人(1,2,3)、藤江 博幸(1)、前田 太郎(2)
1. (株)三城ホールディングス、2. 大阪大学、3. 神経数理学研究所
*Yasuto Tanaka(1,2,3), Hiroyuki Fujie(1), Taro Maeda(2)
1. Paris Miki Holdings Inc. , 2. Osaka University, 3. NeuroMathematics Laboratory

Keyword: fixational eye movement, microsaccades, nonlinear, binocular

Introduction: Recently, we found miniature eye movements together with miniature head movements during fixation, and the two interacted with each other. Our vector analysis showed that their directions were occasionally opposite, suggesting a vestibulo-ocular reflex (VOR) system operating at a miniature scale. However, its mechanism, including whether it is monocular or binocular, was not known. Here, different types of statistical distributions were fitted to evaluate linearity.
Methods: First, miniature eye movements from both the left and right eyes were obtained together with miniature head movements using the VOG method (Bakaris et al. 2017). Gaussian, Poisson, Gamma, Exponential, Logistic, and kernel fitting analyses were carried out to determine general characteristics, together with their accumulated shapes under non-Gaussian and Weibull fits. Hilbert analysis was further conducted to test a phase effect. Left-eye, right-eye, and head components were individually analyzed and compared using multi-taper frequency analysis.
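The distribution-comparison step can be sketched with scipy as below, using synthetic right-skewed amplitudes as a stand-in for the measured microsaccade vectors and AIC as an assumed comparison criterion (the candidate set also differs slightly from the one listed above, since scipy offers no maximum-likelihood fit for the discrete Poisson case).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# stand-in for microsaccade vector amplitudes: positive, right-skewed, long-tailed
amplitudes = rng.gamma(shape=2.0, scale=0.3, size=2000)

candidates = {
    "gamma": stats.gamma,
    "lognorm": stats.lognorm,
    "weibull_min": stats.weibull_min,
    "expon": stats.expon,
    "logistic": stats.logistic,
    "norm": stats.norm,
}
for name, dist in candidates.items():
    params = dist.fit(amplitudes)                       # maximum-likelihood fit
    loglik = np.sum(dist.logpdf(amplitudes, *params))
    aic = 2 * len(params) - 2 * loglik                  # compare fits by AIC (lower is better)
    print(f"{name:12s}  AIC = {aic:9.1f}")
```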
Results: For vector amplitude, the Gamma distribution fitted better than the Logistic distribution, followed by the Gaussian, Weibull, Exponential, and Poisson distributions. For vector angle, the Logistic distribution fitted best, followed by the Gaussian fit. Hilbert analysis revealed that the degree of non-linearity was greater for either eye than for head movement. These results show that the non-linear characteristics of fixational eye movements are binocular.
Discussion: The different best-fitting distributions for vector amplitude (Gamma and Logistic) and vector angle (Logistic and Gaussian) are attributed to non-linear components of micro-saccadic eye movements, which give the distribution of micro-saccades its skew (a third-order parameter) and long tail. In comparison, pseudo-linear characteristics were found in tremors and drifts. The results of the Hilbert analysis suggest that miniature binocular coordination between the two eyes and the head is based on the phase of low-frequency (around 20 Hz) components.
2022年7月2日 11:00~12:00 宜野湾市民体育館 ポスター会場2
3P-276
視覚運動統合モデルを用いたロボットハンドによる把持制御の汎化性能の評価
Evaluation of Generalization Performance of Grasping Control with Robot Hand Using Visuomotor Integration Model

*松田 基(1)、大橋 真愛(1)、片山 哲(1)、福村 直博(1)
1. 豊橋技術科学大学 情報・知能工学専攻
*Motoi Matsuda(1), Mai Ohashi(1), Satoshi Katayama(1), Naohiro Fukumura(1)
1. Department of Computer Science and Engineering, Toyohashi University of Technology, Aichi, Japan

Keyword: VISUOMOTOR INTEGRATION MODEL, NEURAL NETWORK, MANIPULATION

In this study, we focused on robot hand control technology based on image recognition and conducted experiments on a model that calculates the finger joint angle information of a robot hand from an image of a target object. To determine the hand shape of the robot hand from image information, we focus on human grasping manipulation. When grasping a cup, even if humans see the cup for the first time, they can grasp the appropriate part of the cup, such as the handle, with the appropriate hand shape. Humans rely not only on the sensory information (e.g., visual information) related to the object, but also on a copy of the motor command generated during previous grasping of similar objects. Through such experiences, humans learn the process of visuomotor integration and translation. A model of this human visuomotor integration and translation has been proposed. The model consists of modular autoencoders corresponding to visual and motor information. In addition to identity mapping, some of the middle-layer neurons of the autoencoders are constrained to take the same values. After learning, those neurons extract the features that are commonly included in visual and motor information in a many-to-many relationship, without supervisory signals. Moreover, when the image data of the object to be grasped are input to the autoencoder for visual information, the values of the middle-layer neurons are copied to the corresponding middle-layer neurons of the autoencoder for motor information, and the finger joint angle information for grasping the object can be calculated. To verify the effectiveness of this model, we conducted learning experiments with a real multi-fingered robot hand with multiple degrees of freedom. We used cups with different bottom diameters and handle sizes as target objects. We captured images of the cups with the handle in different directions as visual information. In addition, we grasped the cups with the robot hand and obtained the finger joint angles when holding the sides of the cup and when holding the top of the cup from above as motor information. In this study, we conducted a learning experiment using depth images, from which information on the three-dimensional shape of an object can easily be obtained, as visual information, and introduced a CNN (ResNet50) into the encoding part of the autoencoder for visual information. As a result, we confirmed that information on the diameter of the cups could be extracted more clearly with fewer training sessions. Furthermore, even when images with cup sizes and handle angles that were not included in the training data were input as validation data, the system was able to represent the cup size. Finally, in the visuomotor translation experiment, we confirmed that the shape of the robot hand changes according to the size of these untrained cups. These results indicate that the proposed model has excellent generalizability.
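The core constraint of the model, two modality-specific autoencoders whose first few latent units are forced to agree, can be sketched as below on synthetic paired data; the layer sizes, number of shared units, and data generator are assumptions, and the depth-image/ResNet50 encoder of the actual experiments is not included.

```python
import torch
import torch.nn as nn

class ModalAutoencoder(nn.Module):
    """One modality's autoencoder; the first n_shared latent units will be constrained
    to match the other modality's (all sizes here are assumptions)."""
    def __init__(self, dim, latent=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, latent))
        self.dec = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(), nn.Linear(64, dim))

    def forward(self, x):
        z = self.enc(x)
        return z, self.dec(z)

vision_dim, motor_dim, n_shared = 100, 12, 8
vision_ae, motor_ae = ModalAutoencoder(vision_dim), ModalAutoencoder(motor_dim)
opt = torch.optim.Adam(list(vision_ae.parameters()) + list(motor_ae.parameters()), lr=1e-3)

# toy paired data: both modalities are generated from the same hidden object parameters
obj = torch.rand(256, n_shared)
vision = obj @ torch.rand(n_shared, vision_dim)
motor = obj @ torch.rand(n_shared, motor_dim)

for epoch in range(500):
    zv, xv = vision_ae(vision)
    zm, xm = motor_ae(motor)
    recon = nn.functional.mse_loss(xv, vision) + nn.functional.mse_loss(xm, motor)
    shared = nn.functional.mse_loss(zv[:, :n_shared], zm[:, :n_shared])  # constrain shared units
    loss = recon + shared
    opt.zero_grad(); loss.backward(); opt.step()

# visuomotor translation: copy the shared units from vision into the motor decoder
with torch.no_grad():
    zv, _ = vision_ae(vision[:1])
    zm, _ = motor_ae(motor[:1])
    zm[:, :n_shared] = zv[:, :n_shared]
    predicted_grasp = motor_ae.dec(zm)
print(predicted_grasp.shape)
```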
2022年7月2日 11:00~12:00 宜野湾市民体育館 ポスター会場2
3P-277
強化学習モジュールとしての小脳スパイキングニューラルネットワークの実装
Implementation of a cerebellar spiking neural network as a reinforcement learning module

*栗山 凜(1)、吉村 英幸(2)、保足 凌平(3)、山﨑 匡(1)
1. 電気通信大学、2. 沖縄科学技術大学院大学、3. 株式会社コーエーテクモゲームス
*Rin Kuriyama(1), Hideyuki Yoshimura(2), Ryohei Hoashi(3), Tadashi Yamazaki(1)
1. The University of Electro-Communications, Tokyo, Japan, 2. Okinawa Institute of Science & Technology Graduate University, Okinawa, Japan, 3. KOEI TECMO GAMES CO., LTD.

Keyword: cerebellum, reinforcement learning, spiking neural network

The brain consists of multiple anatomical regions, such as the cerebral cortex, basal ganglia, and cerebellum, which communicate with each other, thereby constituting "loops" called the cerebral cortico-basal ganglia loop and the cerebro-cerebellar loop. We have hypothesized that these two loops could act as a hierarchical reinforcement learning (RL) machine, provided that the cerebellum is an RL machine rather than a conventional supervised learning machine (Yamazaki, Lennon 2019; Yamazaki 2021). To justify the hypothesis, implementation of the hierarchical RL by a spiking network model would be mandatory. In particular, no cerebellar spiking network model acting as an RL machine has existed so far. Therefore, in this study, we aimed to implement an RL machine based on the known anatomy and physiology of the cerebellum. Our cerebellar spiking network model is an implementation of an actor-critic model of RL, in which Purkinje cells (PCs) and molecular layer interneurons (MLIs) act as an actor and a critic, respectively. Long-term synaptic plasticity was implemented at PF-PC synapses as well as PF-MLI synapses, and short-term plasticity was also implemented at PF-MLI synapses. MLIs calculate a temporal-difference error (TDE) from state-value inputs fed by parallel fibers (PFs) and a negative reward fed by a climbing fiber (CF). The TDE was used to update the PF-PC and PF-MLI synaptic weights. To validate the performance of our model, we carried out simulations of Pavlovian delay eyeblink conditioning and successfully reproduced the characteristic pause of PCs as conditioned responses. We also examined a machine learning benchmark known as mountain car, in which a car must climb a mountain by moving left or right appropriately while exploiting gravity and acceleration. We confirmed that our model was able to make the car climb the mountain. These results suggest that our cerebellar spiking network model can be used as a building block of the hierarchical RL to elucidate the holistic learning mechanism of the whole brain, and is also potentially useful as neuromorphic hardware for machine learning tasks.
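The actor-critic computation attributed to the circuit can be sketched in a non-spiking, tabular form on a toy chain task, with comments indicating the hypothesised cerebellar counterparts; this is a conceptual stand-in, not the spiking network model itself.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, gamma, alpha_v, alpha_p = 6, 2, 0.95, 0.1, 0.1   # assumed settings
V = np.zeros(n_states)                   # critic: state values (the MLI side of the hypothesis)
prefs = np.zeros((n_states, n_actions))  # actor: action preferences (the PC side)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

for episode in range(500):
    s = 0
    for step in range(50):
        pi = softmax(prefs[s])
        a = rng.choice(n_actions, p=pi)
        s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s_next == n_states - 1 else 0.0      # reward at the right end of the chain
        done = s_next == n_states - 1
        # temporal-difference error (the abstract's TDE, carried by the CF/MLI pathway)
        delta = r + (0.0 if done else gamma * V[s_next]) - V[s]
        V[s] += alpha_v * delta                                       # critic update (PF-MLI plasticity)
        prefs[s] += alpha_p * delta * ((np.arange(n_actions) == a) - pi)  # actor update (PF-PC plasticity)
        s = s_next
        if done:
            break

print("state values:", np.round(V, 2))
print("greedy actions:", prefs.argmax(axis=1))
```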
2022年7月2日 11:00~12:00 宜野湾市民体育館 ポスター会場2
3P-278
少数データを用いたAnti-Aliased Convolutional Neural Network構築のための知識蒸留学習
Knowledge Distilled Training for Anti-Aliased Convolutional Neural Network with Data-Limited Situation

*鈴木 聡志(1)、武田 翔一郎(1)、澤田 雅人(1)、増村 亮(1)、庄野 逸(2)
1. 日本電信電話株式会社 NTT コンピュータアンドデータサイエンス研究所、2. 電気通信大学
*Satoshi Suzuki(1), Shoichiro Takeda(1), Masato Sawada(1), Ryo Masumura(1), Hayaru Shouno(2)
1. NTT Computer & Data Science Laboratories, NTT Corp., 2. The University of Electro-Communications

Keyword: Convolutional neural network, anti-aliased CNN, knowledge distillation, data-limited situation

Blur filters often play a crucial role in generalizing image recognition models because they can absorb differences in the scale or position of objects in images. For example, the Neocognitron achieves robustness to positional shifts of images by introducing blur filters into its ancestor model, the Cognitron. Recent studies have demonstrated that modern convolutional neural networks (CNNs) also improve their accuracy by introducing blur filters. In particular, anti-aliased CNNs introduce blur filters into intermediate representations in CNNs to achieve high accuracy. A promising way to build a new anti-aliased CNN is to fine-tune a pre-trained CNN, which can easily be found online, with blur filters. However, blur filters drastically degrade the pre-trained representation, so fine-tuning needs to rebuild the representation using massive training data. Therefore, if the training data are limited, fine-tuning does not work well because it induces overfitting to the limited training data. To tackle this problem, this paper proposes "knowledge distilled fine-tuning". On the basis of the idea of knowledge distillation, our method distills the knowledge from intermediate representations in the pre-trained CNN to the anti-aliased CNN during fine-tuning. We distill only essential knowledge using a pixel-level loss that distills detailed knowledge and a global-level loss that distills coarse knowledge. We evaluated the proposed method on the ImageNet 2012 dataset. Experimental results demonstrate that our method significantly outperforms the simple fine-tuning method.
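The two distillation terms can be sketched as below, with random tensors standing in for the intermediate representations of the pre-trained (teacher) and anti-aliased (student) CNNs; the loss weights and the exact formulation used in the paper are assumptions here, not NTT's implementation.

```python
import torch
import torch.nn.functional as F

def pixel_level_loss(student_feat, teacher_feat):
    """Distill detailed knowledge: match intermediate feature maps location by location."""
    return F.mse_loss(student_feat, teacher_feat)

def global_level_loss(student_feat, teacher_feat):
    """Distill coarse knowledge: match globally pooled feature statistics."""
    s = F.adaptive_avg_pool2d(student_feat, 1).flatten(1)
    t = F.adaptive_avg_pool2d(teacher_feat, 1).flatten(1)
    return F.mse_loss(s, t)

# stand-ins for intermediate representations: (batch, channels, height, width)
teacher = torch.randn(4, 256, 14, 14)
student = torch.randn(4, 256, 14, 14, requires_grad=True)

lambda_pix, lambda_glob = 1.0, 1.0            # assumed loss weights
distill = (lambda_pix * pixel_level_loss(student, teacher)
           + lambda_glob * global_level_loss(student, teacher))
task_loss = torch.tensor(0.0)                 # cross-entropy on the limited labelled data would go here
(distill + task_loss).backward()
print(float(distill))
```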
2022年7月2日 11:00~12:00 宜野湾市民体育館 ポスター会場2
3P-279
認知経験における共時的情報を統合する神経回路を用いた数覚の獲得
Acquisition of number sense using neural networks that integrate synchronic information in cognitive experience

*末谷 大道(1)
1. 大分大学
*Hiromichi Suetani(1)
1. Oita University

Keyword: number sense, deep neural networks, canonical correlation analysis

The mathematical ability of humans, which is formed on the basis of "numbers" represented by symbols, enables precise scientific descriptions and has played an important role in the development of advanced human societies and cultures. On the other hand, it is now known that the visual ability to perceive a few objects instantaneously and the basic arithmetic for performing addition and subtraction between a few objects are innate in human babies a few months after birth, as well as in other species such as rats, macaques, and parrots. Such abilities are collectively referred to as "number sense" [1]. We are now working on the problem of how these number senses can be formed in the nervous system through cognitive experiences based on various senses such as touch, vision, and hearing [2], from the viewpoints of computational neuroscience and nonlinear dynamical systems. For example, we are studying the problem of classifying images according to the number of rectangles present in each image using AlexNet [3], a representative architecture of convolutional neural networks. Here, the sizes and arrangements of the rectangles are randomly determined and the total area of the rectangles is normalized in each image. First, AlexNet was pre-trained using images of general objects. Next, this network was modified by presenting images with several rectangles arranged in them and using transfer learning. When we visualize the neuronal activities of the trained net using manifold learning such as t-SNE, we can see that the neural states corresponding to the numbers "2", "3", "4", ... (except for "1") of rectangles are arranged nearly on a one-dimensional curve in a low-dimensional space. On the other hand, when only the images with several randomly placed rectangles are used for training AlexNet, such a topological structure does not appear, even though the classification performance itself is high. This result suggests the importance of "natural visual experience" in acquiring the ordinal number concept. In this presentation, I will extend the above research to the problem of integrating information from multiple modalities. Specifically, based on a nonlinear version of canonical correlation analysis [4], we construct a neural circuit in which two interacting multilayer neural circuits extract common information from different cognitive experiences such as quantity and amount. We then discuss how the invariant feature of "number" is organized in the neural circuits through the integration of different sensory modalities.
[1] S. Dehaene. The number sense: How the mind creates mathematics. OUP USA (2011).
[2] G. Lakoff and R. Núñez. Where mathematics comes from. New York: Basic Books (2000).
[3] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Advances in neural information processing systems 25, 1097-1105 (2012).
[4] H. Suetani, Y. Iba and K. Aihara. Journal of Physics A: mathematical and General 39, 10723-10742 (2006).
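As a linear stand-in for the nonlinear canonical correlation analysis of [4], the sketch below extracts a component shared by two synthetic "modalities" that both depend on a hidden numerosity; the data generator and dimensionalities are invented for illustration and do not correspond to the neural circuits described above.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n_samples = 500
numerosity = rng.integers(1, 6, n_samples)          # the hidden "number" of objects (1-5)

# two synthetic modalities that both depend on numerosity plus modality-specific noise
visual = np.column_stack([numerosity + rng.normal(0, 0.5, n_samples) for _ in range(20)])
auditory = np.column_stack([numerosity + rng.normal(0, 0.8, n_samples) for _ in range(10)])

cca = CCA(n_components=2)
vis_c, aud_c = cca.fit_transform(visual, auditory)  # components shared across the modalities
corr = np.corrcoef(vis_c[:, 0], numerosity)[0, 1]
print(f"correlation of the first shared component with numerosity: {abs(corr):.2f}")
```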
2022年7月2日 11:00~12:00 宜野湾市民体育館 ポスター会場2
3P-280
The Perceiver Architecture is a Functional Global Workspace
*Arthur William Juliani(1), Ryota Kanai(1), Shuntaro Sasai(1)
1. ARAYA Inc., Tokyo, Japan

Keyword: GLOBAL WORKSPACE, NEURAL NETWORKS, WORKING MEMORY, ATTENTIONAL CONTROL

In the past few decades, the Global Workspace Theory (GWT) has become a prominent functional account of conscious access in humans and other primates. This has been due to it receiving both theoretical elaborations as well as support from neuroscientific findings. Since its initial proposal, there have been a number of computational models developed to study the hypothetical dynamics of the global workspace, most of which are hand-designed to reflect the expectations of the theory. Here we take a different approach, and examine a recently successful general deep learning architecture, the Perceiver, as a potential theoretical candidate for the global workspace. We find that despite being developed in an unrelated context, the Perceiver meets many of the theoretical requirements of a functional global workspace. More importantly, it demonstrates empirical behavior consistent with that expected by GWT in both attentional control and working memory tasks drawn from the cognitive science literature. We furthermore compare the Perceiver to two other popular neural network models of executive function, the Long Short-Term Memory Unit and the Global Workspace model of Goyal et al. (2020). In this comparison, we find that both theoretically and empirically, the Perceiver is better aligned with the expectations of a global workspace than these other models. Taken together, this evidence suggests that the Perceiver class of models can be a potentially useful tool for studying the global workspace and its realization in both artificial and biological agents.
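The architectural feature at stake, a small set of latent vectors that reads a large input array through cross-attention and then interacts among itself, can be sketched as follows; the sizes are assumptions, and this is neither the full Perceiver of Jaegle et al. nor any of the models compared in the study.

```python
import torch
import torch.nn as nn

class LatentWorkspace(nn.Module):
    """Minimal Perceiver-style block: a small set of latent vectors (the 'workspace')
    reads from a large input array via cross-attention, then processes itself with
    self-attention. All sizes are illustrative assumptions."""
    def __init__(self, dim=64, n_latents=8, n_heads=4):
        super().__init__()
        self.latents = nn.Parameter(torch.randn(n_latents, dim) * 0.02)
        self.cross = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.self_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))

    def forward(self, inputs):                     # inputs: (batch, n_inputs, dim)
        z = self.latents.unsqueeze(0).expand(inputs.size(0), -1, -1)
        z, _ = self.cross(z, inputs, inputs)       # bottleneck: only 8 latents read the input
        z, _ = self.self_attn(z, z, z)             # "broadcast" among the workspace slots
        return z + self.ff(z)

workspace = LatentWorkspace()
inputs = torch.randn(2, 1000, 64)                  # a large (e.g., multimodal) input array
print(workspace(inputs).shape)                     # torch.Size([2, 8, 64])
```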
2022年7月2日 11:00~12:00 宜野湾市民体育館 ポスター会場2
3P-281
ニューラルネットワークを用いた神経活動に基づく運動予測におけるパラメータの影響
Effect of parameters on accuracy of motor prediction from neural signals by applying a neural network

*藤木 聡一朗(1)、神作 憲司(1,2)
1. 獨協医科大学、2. 電通大脳医工セ
*Soichiro Fujiki(1), Kenji Kansaku(1,2)
1. Dokkyo Medical University, 2. CNBE, UEC, Tokyo, Japan

Keyword: Brain machine interface, Neural network, Motor prediction

Brain machine interface (BMI) technologies have been developed to support persons with physical disabilities. A BMI predicts the user’s intentions or future behaviors from neural activity and selects a desired response based on that prediction. Improving prediction accuracy is one of the important issues for BMI.

Machine learning technology has developed rapidly in recent years. In this study, we investigated the effect of temporal parameters (the bin size used to compute the neurons' firing frequency and the time interval from neural activity to behavior) on the accuracy with which animal movement is predicted by machine learning.

We imposed a behavioral task on a mouse and measured the neural activities of its motor cortex. The head-fixed mouse was trained on and performed a lever-pull task using a TaskForcer (O’HARA & CO., LTD.) for one hour per day. If the lever was pulled for an appropriate duration and close to the target trajectory, the mouse received water reward from a spout. The neural activities were measured by a 32-channel acute silicon probe at 20 kHz. The recorded activities were spike-sorted with Kilosort2, and the sorted data were transformed into firing frequencies by custom-made MATLAB code (MathWorks). The mouse performed over 450 trials (including successful and failed trials), and four neural signals were obtained over the two recording days.

For the aim of this study, the bin size and the time interval from neural activity to behavior were varied, and the lever motion was predicted from the neural signals using the Deep Learning Toolbox of MATLAB. Bin sizes from 10 to 100 ms in steps of 10 ms were used, and the time intervals were 1 to 5 times the selected bin size. We used the root-mean-square error (RMSE) as the evaluation index of prediction accuracy. The obtained data were separated randomly into training and test data. The lever motion was used as supervised data and the neural signals were used as input data. Based on the trained neural network, the lever motion was predicted from the test data, and the obtained RMSEs were averaged over the test data. In addition, the data separation (i.e., shuffling of the training and test data) and prediction were conducted 10 times, and the ten RMSE values were averaged.

Our results showed that a bin size of 10 ms and a time interval of 3 or 4 times the bin size (i.e., 30-40 ms) produced the smallest RMSE, suggesting the importance of parameter selection for better behavioral prediction in BMI.
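The sketch below (Python rather than the authors' MATLAB pipeline) illustrates the parameter sweep described above: spike counts are binned at several bin sizes, the lever value is taken at several lags, and each setting is scored by RMSE averaged over shuffled train/test splits. The synthetic spikes, the lever signal, and the ridge-regression decoder are stand-ins for the recorded data and the Deep Learning Toolbox network.

```python
# Hedged sketch of the bin-size / lag sweep with RMSE averaged over shuffled
# train/test splits; synthetic data and a ridge decoder stand in for the
# recorded spikes, lever trace, and neural-network decoder.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import ShuffleSplit
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
duration, n_units = 600.0, 4                       # seconds, sorted units
spike_times = [np.sort(rng.uniform(0, duration, int(rng.integers(3000, 6000))))
               for _ in range(n_units)]
t_lever = np.arange(0, duration, 0.01)             # lever sampled at 100 Hz
lever = np.sin(2 * np.pi * 0.2 * t_lever) + 0.1 * rng.standard_normal(t_lever.size)

def binned_rates(bin_size):
    edges = np.arange(0, duration + bin_size, bin_size)
    return np.stack([np.histogram(st, edges)[0] / bin_size for st in spike_times], axis=1)

results = {}
for bin_ms in range(10, 110, 10):                   # bin sizes 10 ... 100 ms
    bin_size = bin_ms / 1000.0
    X_all = binned_rates(bin_size)
    for lag_bins in range(1, 6):                    # lag of 1 ... 5 bins
        X = X_all[:-lag_bins]
        t_target = (np.arange(X.shape[0]) + lag_bins + 0.5) * bin_size
        y = np.interp(t_target, t_lever, lever)     # lever value at the lagged time
        rmses = []
        for tr, te in ShuffleSplit(n_splits=10, test_size=0.2, random_state=0).split(X):
            model = Ridge(alpha=1.0).fit(X[tr], y[tr])
            rmses.append(np.sqrt(mean_squared_error(y[te], model.predict(X[te]))))
        results[(bin_ms, lag_bins)] = np.mean(rmses)

best = min(results, key=results.get)
print("best (bin ms, lag in bins):", best, "RMSE:", round(results[best], 3))
```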
2022年7月2日 11:00~12:00 宜野湾市民体育館 ポスター会場2
3P-282
脳機能の全容は巨大クロスワードの如く急速に解明されるかもしれない
The great crossword: potential for rapid progress in understanding the function of the entire brain

*田和辻 可昌(1,2)、布川 絢子(2)、荒川 直哉(2)、高橋 恒一(3,2,4)、山川 宏(5,2,6)
1. 早稲田大学、2. 全脳アーキテクチャ・イニシアティブ、3. 理化学研究所、4. 慶應義塾大学、5. 東京大学、6. 電気通信大学
*Yoshimasa Tawatsuji(1,2), Ayako Fukawa(2), Naoya Arakawa(2), Koichi Takahashi(3,2,4), Hiroshi Yamakawa(5,2,6)
1. Waseda University, 2. WBAI, 3. RIKEN, 4. Keio University, 5. The University of Tokyo, 6. The University of Electro-Communications

Keyword: WHOLE BRAIN ARCHITECTURE, BRAIN REFERENCE ARCHITECTURE, ARTIFICIAL GENERAL INTELLIGENCE, COMPUTATIONAL MODELING

In the Brain Reference Architecture (BRA)-driven development [1], the process of trying to figure out the computational functions of the entire brain can be compared to solving a big crossword puzzle. Therefore, it is expected that the overall picture will rapidly emerge later in that process.
A crossword is a puzzle in which words given as clues are filled into a framework of squares that intersect vertically and horizontally. Similarly, in BRA-driven development, we first describe anatomical structures at the mesoscopic level in a standardized manner. This is equivalent to creating the framework of a crossword.
The BRA-driven development process of assigning functions according to that anatomical structure is like solving a crossword. In other words, by assigning detailed functions, which correspond to letters, to each square, we can satisfy hints such as tasks and abilities.
In crosswords, this corresponds to the constraint that letters must match at the intersections of vertical and horizontal words. Because the brain is even more tightly networked than that, a single brain region is often used for multiple tasks and abilities, resulting in even stronger constraints. These constraints accelerate the solving speed in the later stages.
So why is this possible with the BRA-driven development?
First, the standardized way of describing anatomical structures now provides a framework for describing functions in a unified way.
Traditionally, brain function was identified by interpreting neural activity phenomena for each specific region, as if each square were filled with a letter. In contrast, BRA-driven development systematically designs functions that follow anatomical structures (the SCID method), as if each sequence of squares were filled with a word.
Nevertheless, it is difficult for any individual to fully describe the great crossword that is the brain. Here again, the ability to collaborate on a common framework is particularly advantageous. In other words, each expert proposes functional hypotheses for specific brain regions and tasks, and in the process of resolving conflicts among these functional hypotheses, a consistent hypothesis for the computational function of the entire brain gradually emerges.
We would be happy to hear your honest opinion about the ambitious idea that a great crossword of the whole brain function will be abruptly solved.
This work was supported by JSPS KAKENHI Grant Number JP17H06315.

[1] Yamakawa, H. (2021). The whole brain architecture approach: Accelerating the development of artificial general intelligence by referring to the brain. Neural Networks: The Official Journal of the International Neural Network Society. https://doi.org/10.1016/j.neunet.2021.09.004
2022年7月2日 11:00~12:00 宜野湾市民体育館 ポスター会場2
3P-283
運動タスク実行中のエキスパートと模倣エージェントの神経表現と神経ダイナミクス
Neural representation and dynamics of expert and imitating agents performing motor tasks

*川北 源二(1)、大泉 匡史(1)
1. 東京大学大学院総合文化研究科
*Genji Kawakita(1), Masafumi Oizumi(1)
1. Grad Sch Arts and Sciences, Univ of Tokyo, Tokyo, Japan

Keyword: MOTOR CONTROL, MOTOR LEARNING, REINFORCEMENT LEARNING, IMITATION LEARNING

When we want to learn a new motor skill, one intuitive and effective approach is to observe a demonstration by an expert and imitate their movement. Such a learning method is known as Imitation Learning (IL) in the fields of reinforcement learning (RL) and robotics. In particular, learning from an expert with a different body structure (e.g., a baby learning by imitating an adult) is categorized as Cross-domain Imitation Learning (CDIL). Despite the recent surge of interest in CDIL in RL and robotics, little attention has been paid to CDIL in the field of neuroscience, although the idea seems highly relevant to how humans learn motor skills. Moreover, previous studies on CDIL have focused on an agent’s task performance, evaluated both quantitatively and qualitatively, but have overlooked the underlying neural representation and dynamics of the learned policy networks. As a first step toward investigating whether an imitating agent can obtain neural representations and dynamics comparable to those of an expert agent, we examine the neural dynamics of the policy networks of agents trained with RL and IL. To this end, we prepare an expert agent (trained with RL) and a novice agent (trained with IL) to perform a naturalistic task (running) in a physics simulator (MuJoCo) and analyze the neural dynamics of their policy networks. Here we show that the neural dynamics of the expert agent are lower dimensional than those of the novice agent. Applying Principal Component Analysis (PCA) to neural activities while the agents are running, we found that more principal components are required to explain the variance in the neural activity of the novice agent. We also found that the body dynamics of the expert and novice agents have comparable dimensionalities, which implies that the difference in the dimensionality of the neural dynamics is unlikely to be a reflection of the body dynamics. Our results imply that stable task performance by an expert may result from low-dimensional neural activity for task execution. We anticipate that this study may provide new insights into the neural mechanisms of motor learning.
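As a toy illustration of the dimensionality comparison described above (not the authors' pipeline), the sketch below counts how many principal components are needed to explain 90% of the variance of synthetic "expert" and "novice" hidden-unit activity; the synthetic latent structure and the 90% threshold are arbitrary choices.

```python
# Toy sketch: compare the PCA dimensionality of "expert" vs "novice"
# policy-network activity; synthetic activity stands in for activations
# recorded while agents run in a physics simulator.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
T, n_units = 2000, 64

def synthetic_activity(n_latent):
    """Hidden-unit activity driven by n_latent smooth latent signals plus noise."""
    t = np.linspace(0, 40 * np.pi, T)
    latents = np.stack([np.sin((k + 1) * 0.3 * t + rng.uniform(0, np.pi))
                        for k in range(n_latent)], axis=1)
    mixing = rng.standard_normal((n_latent, n_units))
    return latents @ mixing + 0.2 * rng.standard_normal((T, n_units))

def dims_for_variance(X, threshold=0.9):
    ratios = PCA().fit(X).explained_variance_ratio_
    return int(np.searchsorted(np.cumsum(ratios), threshold) + 1)

expert = synthetic_activity(n_latent=3)     # low-dimensional dynamics
novice = synthetic_activity(n_latent=12)    # higher-dimensional dynamics
print("expert dims:", dims_for_variance(expert))
print("novice dims:", dims_for_variance(novice))
```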
2022年7月2日 11:00~12:00 宜野湾市民体育館 ポスター会場2
3P-284
Graded persistent activity を示すニューロンモデルを用いたReservoir computing
Reservoir computing with Graded persistent activity neuron model

*富田 風太(1)、寺前 順之介(1)
1. 京都大学大学院情報学研究科
*Futa Tomita(1), Jun-nosuke Teramae(1)
1. Grad Sch Info, Kyoto Univ, Kyoto, Japan

Keyword: graded persistent activity, reservoir computing, dynamical system

Recently, it has been reported that neurons in the entorhinal cortex, a relay point between the hippocampus and the cerebral cortex that seems to be important in the generation of episodic memory, code temporal information with high accuracy on scales of seconds to hours after an animal initiates a certain behavior (Tsao et al. Nature 2018).
However, it is not clear what kind of neural basis creates this activity.
On the other hand, it is known that there are neurons in the entorhinal cortex that show persistent firing activity called graded persistent activity (GPA) (Egorov et al. Nature 2002).
A GPA neuron changes its firing frequency in response to successive stimuli and maintains a stable firing frequency after stimulus presentation. Thus it behaves as a continuous attractor on its own.
However, it is still unclear what properties networks of GPA neurons can have.
In order to theoretically investigate the relationship between temporal coding in the entorhinal cortex and GPA neurons, we approach networks with GPA neurons from the perspective of reservoir computing.
Reservoir computing is a machine learning technique that learns a model to solve a specific task from the activity of a recurrent network (called a reservoir) to which an input signal is applied. Since the performance on a specific task changes depending on the properties of the reservoir, the relationship between the properties of various reservoirs and their performance has been studied.
In this study, we specifically use the rate neuron model, which consists of two continuous variables with different time constants, as a reservoir element.
The ratio of two time constants in a single neuron can be used to adjust its properties, such as showing or not showing GPA.
We will report on how the distribution of these time constants in the reservoir affects metrics such as temporal readout performance and robustness to noise.
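The toy sketch below (not the model used in this study) illustrates the general setup: each reservoir unit carries a fast variable and a much slower variable that loosely mimics graded persistent activity, and a linear readout is trained to report elapsed time since a brief input pulse. All time constants, weight scalings, and the readout task are arbitrary illustrative choices.

```python
# Toy two-timescale rate reservoir with a least-squares readout trained to
# report the time elapsed since a brief input pulse (illustration only).
import numpy as np

rng = np.random.default_rng(1)
N, dt, T = 200, 1e-3, 5000                 # units, time step (s), steps
tau_fast, tau_slow = 0.01, 2.0             # fast and slow time constants (s)
W = 1.2 * rng.standard_normal((N, N)) / np.sqrt(N)   # recurrent weights
w_in = rng.standard_normal(N)

u = np.zeros(T)
u[100:120] = 1.0                            # brief input pulse
x, a = np.zeros(N), np.zeros(N)
states = np.empty((T, N))
for t in range(T):
    drive = np.tanh(W @ x + w_in * u[t] + a)
    x += dt / tau_fast * (-x + drive)       # fast dynamics
    a += dt / tau_slow * (-a + x)           # slow, GPA-like integration
    states[t] = x

target = np.clip(np.arange(T) - 120, 0, None) * dt   # seconds since pulse offset
train, test = slice(120, T, 2), slice(121, T, 2)     # interleaved train/test samples
w_out, *_ = np.linalg.lstsq(states[train], target[train], rcond=None)
pred = states[test] @ w_out
print("test RMSE (s):", round(float(np.sqrt(np.mean((pred - target[test]) ** 2))), 3))
```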
2022年7月2日 11:00~12:00 宜野湾市民体育館 ポスター会場2
3P-285
On the Quantitative Bases of the Multi-Level Perspective in Transitions Research
*Benedict Ryan Chua Tiu(1), Renzo Roel Perez Tan(2,3,4), Kazushi Ikeda(2), Shunji Matsuoka(1)
1. Waseda University, 2. Nara Institute of Science and Technology, 3. Kyoto University, 4. Ateneo de Manila University

Keyword: Technology transition, Multi-level perspective, Anomaly detection, Long short-term memory networks

Technological transitions stem not only from the presence of technological innovation but also from shifts in social arrangement. Radical transformations are thus complex social processes that entail actors that choose to adopt, produce, and normalize new technologies over old ones. The Multi-Level Perspective is a theoretical framework in the social sciences widely used to trace socio-technical transitions. It divides societal interaction into three levels of analysis - niche innovations, socio-technical regimes, and landscape developments. Viewing transitions as the product of multiple causes at the three levels, the theory offers a typology of transition pathways depending on the maturity of the innovation, stability of the regime, and the disruption caused by exogenous trends and shocks. While the theory is a persistent subject of research, critics have pointed out vague operationalization and under-exploration in existing literature.
In this study, a machine learning framework for the Multi-Level Perspective for investigating the transition to renewable energy is proposed. A text mining approach is utilized to produce text-based metrics from articles and tweets for measuring niche and landscape pressures; conventional and renewable energy investment levels are set as measures of regime destabilization and stabilization, respectively. In quantifying transition pathways, moreover, time-series anomaly and change-point detection are performed. The pseudosynchronization of anomalies and change points across representative series is then determined through a juxtaposition of a model-free algorithm based on state-space reconstruction and a machine learning approach based on long short-term memory networks. Two main insights are drawn:
(1) That the Multi-Level Perspective may be made more palpable through measurable observations; and
(2) That text-based metrics, coupled with statistical learning methods, may approximate the niche and landscape levels of analysis.
To close, notes on the generalizability of the model are presented.
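As a generic illustration of the change-point step mentioned above (this is not the authors' pipeline, which juxtaposes a state-space-reconstruction algorithm with an LSTM-based detector), the sketch below applies a simple CUSUM-style detector to a synthetic series standing in for an investment-level time series; the data, threshold, and drift parameter are arbitrary.

```python
# Generic CUSUM-style change-point illustration on a synthetic two-regime series.
import numpy as np

rng = np.random.default_rng(42)
series = np.concatenate([rng.normal(0.0, 1.0, 150),     # pre-transition regime
                         rng.normal(2.5, 1.0, 150)])    # post-transition regime

def cusum_change_points(x, threshold=8.0, drift=0.5):
    """Indices where the cumulative deviation from the running mean exceeds
    `threshold`; `drift` suppresses small fluctuations. Detections cluster
    shortly after the true change."""
    points, pos, neg, mean = [], 0.0, 0.0, x[0]
    for i, v in enumerate(x[1:], start=1):
        mean += (v - mean) / (i + 1)                     # running mean
        pos = max(0.0, pos + v - mean - drift)
        neg = max(0.0, neg - (v - mean) - drift)
        if pos > threshold or neg > threshold:
            points.append(i)
            pos = neg = 0.0
    return points

print("detected change points (true change at index 150):", cusum_change_points(series))
```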
2022年7月2日 11:00~12:00 宜野湾市民体育館 ポスター会場2
3P-286
Adopting graph neural network for finding systemically important financial institutions
*Zavialov Igor(1)、Ikeda Kazushi(1)
*Igor Zavialov(1), Kazushi Ikeda(1)
1. Division of Information Science, Nara Institute of Science and Technology, Nara, Japan

Keyword: Systemic risk, financial crisis, graph neural network, Bron-Kerbosch algorithm

"Risk" is one of the most important concepts in finance. A bank's profit generally depends on the riskiness of its portfolio. Financial instruments with higher risk promise a higher return, with a lower probability of that return taking place. But because many financial institutions perform so-called accrual accounting, they book "reported profit" at the beginning of possession of the instrument. Only toward the end of its life cycle, due to poor management and the higher risks behind the instrument, does the bank suffer larger losses. The financial crisis of 2007-2008 is a good example of such a situation, when the subprime mortgage market created a huge bubble after participating in multiple trading cycles, which finally led to the collapse of the financial system in the US and affected many economies globally. Thus, it is important to predict and prevent financial crises. The possibility of a financial crisis is usually associated with systemic risk. Systemic risk is defined as the probability that the default of one institution will make other institutions default.
The main goal of this study is to provide a framework for accurate systemic risk estimation in order to prevent financial turmoil (crises). The financial system consists of many different participants (institutions), but some are more involved than others and thus contribute more to systemic risk. Methods have been proposed in the field of economics to help identify such participants and apply strict taxation policies to them.
For finding core elements of the financial network (so-called SIFI - systemically important financial institutions), we adopted a graph neural network (GNN) consisting of two convolutional layers and one attention layer. For reference, we used the Bron-Kerbosch algorithm, which performs core-periphery decomposition of the financial network by finding graph cliques in it.
For the experimental setup, we used a simulated financial network with a central bank and 15 commercial banks together with transactional data between them.
We derived a set of SIFI using GNN and validated the result using Bron-Kerbosch algorithm and Bank of Finland PSS3 simulator.
The Bron-Kerbosch algorithm is comparatively simple and has many limitations, which we discuss. In contrast, a GNN can be more efficient for node classification, as it considers static information about the nodes and links (such as weight and direction). We consider adding dynamic representations to the model as part of future work.
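The sketch below illustrates the reference method on a toy core-periphery interbank network: networkx's find_cliques, which implements the Bron-Kerbosch algorithm, is used to locate the densely connected core whose members are the candidate SIFI. The network, node names, and the "largest clique = core" shortcut are simplifications, not the simulated 16-bank network used in the study.

```python
# Toy core-periphery interbank network; the largest maximal clique (found by
# Bron-Kerbosch via networkx.find_cliques) is taken as the candidate SIFI core.
import networkx as nx

G = nx.Graph()
core_banks = ["central", "bank1", "bank2", "bank3"]
periphery = [f"bank{i}" for i in range(4, 16)]
G.add_edges_from((a, b) for i, a in enumerate(core_banks)      # densely connected core
                 for b in core_banks[i + 1:])
G.add_edges_from((p, core_banks[i % len(core_banks)])          # sparsely attached periphery
                 for i, p in enumerate(periphery))

cliques = list(nx.find_cliques(G))            # maximal cliques (Bron-Kerbosch)
core = max(cliques, key=len)                  # largest clique as a crude core estimate
print("candidate systemically important institutions:", sorted(core))
```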
2022年7月2日 11:00~12:00 宜野湾市民体育館 ポスター会場2
3P-287
状態空間キネティックイジングモデルを用いた非平衡神経ダイナミクスの解析
The state-space kinetic Ising model for nonequilibrium neuronal dynamics

*石原 憲(1)、島崎 秀昭(2)
1. 北海道大学大学院生命科学研究科、2. 北海道大学人間知・脳・AI研究教育センター
*Ken Ishihara(1), Hideaki Shimazaki(2)
1. Graduate School of Life Science, Hokkaido University, Sapporo, Japan, 2. Center for Human Nature, Artificial Intelligence, and Neuroscience (CHAIN), Hokkaido University, Sapporo, Japan

Keyword: nonequilibrium neuronal dynamics, kinetic Ising model, state-space model

In neural systems, populations of neurons convey information about stimuli, decisions, and motor actions using their concerted activities. The nonequilibrium and nonstationary dynamics of the neurons hallmark the recognition and learning dynamics observed in vivo. The activity flow over asymmetric neuronal networks makes the neuronal dynamics nonequilibrium. Moreover, autonomous or external drives to the observed neurons can make their activity nonstationary, i.e., the firing rates and their interactions evolve in time. The kinetic Ising model is a powerful tool for studying the nonequilibrium neuronal dynamics, offering to model history dependency of the spiking activities, in contrast to the classical equilibrium Ising model that dictates neurons' simultaneous activities without the history effect. However, the typical kinetic Ising model assumes stationarity, limiting the models' applicability to analyze in vivo data. Here, we propose a state-space framework for the kinetic Ising model to analyze nonstationary, nonequilibrium neuronal dynamics. In this approach, the parameters of the kinetic Ising model, dictating the firing rates and their interactions, evolve in time. We developed the Bayesian filtering and smoothing algorithms to estimate the nonequilibrium neuronal dynamics and the EM algorithm to optimize the smoothness parameters of the dynamics. This framework extends the previous state-space modeling of the equilibrium Ising model. In contrast to the conventional Ising model approach, this approach requires less computational costs and applies to data obtained from large-scale measurements. Further, it unveils the nonequilibrium nature of neuronal dynamics. We corroborate that the state-space kinetic Ising model captures the nonequilibrium statistics of the population activity by simulation studies, and demonstrate that it is applicable to large-scale parallel recordings using the Allen Brain Observatory data sets.
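For concreteness, the sketch below simulates the standard kinetic Ising (Glauber) dynamics that the abstract builds on, P(s_i(t+1) = +1 | s(t)) = 1 / (1 + exp(-2 H_i(t))) with H_i(t) = h_i(t) + sum_j J_ij(t) s_j(t), using asymmetric couplings and slowly drifting fields to make the dynamics nonequilibrium and nonstationary. The drift process is an arbitrary illustrative choice; the state-space model described above is what estimates such parameter trajectories from data.

```python
# Forward simulation of nonstationary kinetic Ising (Glauber) dynamics.
import numpy as np

rng = np.random.default_rng(0)
N, T = 20, 1000
J = rng.standard_normal((N, N)) / np.sqrt(N)    # asymmetric couplings -> nonequilibrium
np.fill_diagonal(J, 0.0)

h = np.zeros(N)                                  # external fields, drifting over time
s = rng.choice([-1.0, 1.0], size=N)
spikes = np.empty((T, N))
for t in range(T):
    h += 0.02 * rng.standard_normal(N)           # slow drift -> nonstationary rates
    H = h + J @ s                                 # effective field on each neuron
    p_up = 1.0 / (1.0 + np.exp(-2.0 * H))         # kinetic Ising transition probability
    s = np.where(rng.random(N) < p_up, 1.0, -1.0)
    spikes[t] = s

print("time-averaged activity of the first 5 units:", spikes.mean(axis=0)[:5].round(2))
```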
2022年7月2日 11:00~12:00 宜野湾市民体育館 ポスター会場2
3P-288
樹状突起を持つ小脳プルキンエ細胞モデルによる平行線維刺激シーケンスの学習
Learning sequences of parallel fiber stimulation by a cerebellar Purkinje cell model with dendrites

*田村 花綾(1)、山本 祐輝(2)、小林 泰良(1)、山﨑 匡(1)
1. 電気通信大学大学院、2. 東京医科歯科大学大学院 医歯学総合研究科 精神行動医科学分野
*Kaaya Tamura(1), Yuki Yamamoto(2), Taira Kobayashi(1), Tadashi Yamazaki(1)
1. Graduate school of the University of Electro-Communications, 2. Department of Psychiatry and Behavioral Sciences, Graduate School of Medical and Dental Sciences, Tokyo Medical and Dental University

Keyword: Purkinje cell, multicompartment model, dendritic discrimination, spike-timing-dependent learning

Various brain functions are considered to emerge from the dynamics of networks composed of a large number of neurons, where neurons are implicitly assumed to be simple elements that receive spikes from other neurons and emit a spike when a sufficient number of spikes is received. On the other hand, several studies demonstrate that individual neurons can perform complex computation by harnessing nonlinearity in their dendrites. The cerebellum plays important roles in motor control, where sequential contraction of muscles with precise gain and timing is essential, and cerebellar Purkinje cells have characteristic dendrites. A previous study reports that Purkinje cells can discriminate stimulus sequences. However, linking this function to motor control requires that the sequences that elicit spikes be acquired through learning. We have demonstrated by computer simulation that a multi-compartment model of a Purkinje cell can discriminate stimulus sequences. That is, the cell emits a few spikes when multiple synapses are stimulated in a certain order, but not in the reverse order. However, to link this function to motor control, the cell must not just discriminate specific sequences but also learn arbitrary sequences. In this study, first, we examined whether the cell model can represent arbitrary sequences by setting synaptic weights appropriately. Specifically, we found that by adjusting synaptic weights, we were able to reverse the order of stimulation that elicits spikes. Next, we connected multiple parallel fibers to the dendrites of the Purkinje cell model via synapses and stimulated them. Shortly after the stimulation, the entire dendrite was stimulated with a strong pulse corresponding to climbing fiber input. This would induce spike-timing-dependent Hebbian learning at each synapse. We observed that appropriate synaptic weights were automatically acquired by the paired stimulation. These results suggest that Purkinje cells could learn sequences of parallel fiber activation.
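The toy sketch below illustrates only the pairing idea described above: parallel-fiber (PF) synapses stimulated in sequence are followed by a climbing-fiber (CF)-like pulse, and each weight changes according to its PF-to-CF timing. The exponential kernel, time constant, and sign convention are arbitrary and do not reproduce the multi-compartment model or cerebellar plasticity in detail.

```python
# Toy timing-dependent update for PF synapses paired with a later CF-like pulse.
import numpy as np

n_syn = 8
pf_times = np.arange(n_syn) * 5.0           # PF synapses stimulated every 5 ms
cf_time = 60.0                               # strong CF-like pulse shortly afterwards
w = np.full(n_syn, 0.5)                      # initial synaptic weights

def pairing_update(w, pf_times, cf_time, lr=0.2, tau=20.0):
    """Weight change decays exponentially with the PF-to-CF interval, so synapses
    stimulated closer to the CF pulse change more."""
    dt = cf_time - pf_times                  # positive: PF before CF
    dw = lr * np.exp(-dt / tau) * (dt > 0)
    return np.clip(w + dw, 0.0, 1.0)

for trial in range(20):                      # repeated paired stimulation
    w = pairing_update(w, pf_times, cf_time)
print("weights after pairing (earliest -> latest PF):", w.round(2))
```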
2022年7月2日 11:00~12:00 宜野湾市民体育館 ポスター会場2
3P-289
時間ブロッキング法と空間結合パタンを用いた入力総和の並列化によるスパイキングニューラルネットワークシミュレーションの高速化
Acceleration of spiking neural network simulation by temporal blocking method and parallelization of input summation using spatial connectivity pattern

*五十嵐 潤(1)、山﨑 匡(2)、山浦 洋(2)、野村 昴太郎(3)
1. 理化学研究所、2. 電気通信大学、3. 神戸大学
*Jun Igarashi(1), Tadashi Yamazaki(2), Hiroshi Yamaura(2), Kentaro Nomura(3)
1. RIKEN, 2. The University of Electro-Communications, 3. Kobe University

Keyword: LARGE-SCALE SIMULATION, CEREBRAL CORTEX, CEREBELLUM, SPIKING NEURAL NETWORKS

Large-scale simulations of biological spiking neural networks have been actively performed thanks to the development of computational resources and physiological measurements over the past 60 years. The relatively decreasing memory bandwidth of recent computers and the heterogeneous properties of brain regions are obstacles to efficient computing of whole-brain simulations on supercomputers. For example, variability in data size across brain regions decreases data reuse in fast cache memory, which increases slow DRAM use and stalls in the pipelines of arithmetic logic units. Vast gaps in neuron numbers among neuron types cause load imbalance in calculating input summation, especially for granule cells in the cerebellum. To solve these problems, we propose two methods in the current study. The first is a temporal blocking method that exploits the minimum signal transmission delay. Temporal blocking is a cache-blocking method, used in scientific simulations such as fluid dynamics, that divides the temporal domain into blocks, groups data, and improves the reuse rate of data in cache memory. The second is parallelization of the input-summation calculation using the orthogonal connectivity pattern of parallel fibers from granule cells to the postsynaptic cells. We evaluated the performance of the proposed methods, implemented in the spiking neural network simulator MONET, for layered-sheet neural network models of the cerebral cortex and cerebellum [1, 2] on 1024 compute nodes of the supercomputer Fugaku. When data sizes were adjusted to fit the cache memory size in the temporal blocking method, the elapsed time of the numerical calculations of synaptic conductances and membrane potentials sped up by factors of 1.7 and 1.9 for the cerebral cortex and cerebellum, respectively. The parallelization of the input-summation calculation using the orthogonal connectivity pattern yielded 1.7 times faster computation and 1.7 times faster synchronization due to the improved load balance in the calculation of input summation. These results demonstrate that 1) the temporal blocking method using the minimum signal transmission delay can effectively perform simulations by adjusting variable data sizes reflecting the heterogeneity among brain regions, and 2) parallelization using a spatial feature of the circuit architecture is effective especially for granule cells, which have a clear spatial connectivity pattern and a massive population. [1] Igarashi et al., Frontiers in Neuroinformatics. 13:71. 2019. [2] Yamaura et al., Frontiers in Neuroinformatics. 14:16. 2020.
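The schematic sketch below conveys only the temporal blocking idea described above: because a spike cannot influence another process before the minimum transmission delay, each process can advance that many steps on locally cached state before exchanging spike buffers. This is a conceptual illustration, not MONET's implementation; the update function and buffer handling are placeholders.

```python
# Schematic temporal blocking: advance d_min consecutive steps on local state,
# then exchange spikes once per block (one communication instead of d_min).
import numpy as np

def simulate(total_steps, d_min, state, local_update, exchange_spikes):
    outbox = []
    for block_start in range(0, total_steps, d_min):
        for step in range(block_start, min(block_start + d_min, total_steps)):
            state, spikes = local_update(state, step)   # data stays in cache all block
            outbox.extend(spikes)
        exchange_spikes(outbox)                          # one communication per block
        outbox.clear()
    return state

def local_update(state, step):
    state = 0.9 * state + 0.1                            # toy membrane-potential update
    return state, [step] if step % 7 == 0 else []        # toy spike events

simulate(total_steps=100, d_min=5, state=np.zeros(1000),
         local_update=local_update, exchange_spikes=lambda buf: None)
```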
2022年7月2日 11:00~12:00 宜野湾市民体育館 ポスター会場2
3P-290
脳構造ネットワーク上のパケット通信ダイナミクスにおけるハブ領域の役割
Roles of hub regions in packet-based communication dynamics on structural brain networks

*福嶋 誠(1,2,3)、ライプニッツ 賢治(3,4)
1. 奈良先端科学技術大学院大学先端科学技術研究科情報科学領域、2. 奈良先端科学技術大学院大学データ駆動型サイエンス創造センター、3. 情報通信研究機構未来ICT研究所脳情報通信融合研究センター、4. 大阪大学大学院情報科学研究科
*Makoto Fukushima(1,2,3), Kenji Leibnitz(3,4)
1. Division of Information Science, Graduate School of Science and Technology, NAIST, Nara, Japan, 2. Data Science Center, NAIST, Nara, Japan, 3. CiNet, Advanced ICT Research Institute, NICT, Osaka, Japan, 4. Graduate School of Information Science and Technology, Osaka University, Osaka, Japan

Keyword: Brain networks, Communication dynamics, Packet, Simulation

Hub regions (high-degree nodes) in structural brain networks allow for efficient communication between distant cortical areas. Despite the importance of hub regions in cortical communication, little is known about how hubs contribute to the dynamics of packet-based communication in structural brain networks. In packet-based communication, signals are packetized during their hop-by-hop transitions between source and destination nodes in a network, and there is increasing evidence of such a communication framework existing in the cortex. A previous simulation study demonstrated that packetization of signals improved the communication speed among nodes in a structural brain network only when the signal flow was modeled with physiologically reasonable propagation strategies that balance speed and information cost. Here, we aim to uncover the mechanism behind this finding by investigating the role of hubs in packet-based communication. We first compared the results of signal flow simulated on the brain network used in the previous study to those obtained from its degree-preserved or fully randomized networks. We found that the previous finding held with the degree-preserved randomized networks but not with fully randomized ones. This observation suggests that the existence of high-degree hubs in the network was crucial for obtaining the results in the previous study. We then quantified how many signals were stored in the buffers of nodes on average during the simulations. We found that hub nodes gathered more signals, acting as bottlenecks, when signals were packetized under inefficient and unreasonable propagation strategies, e.g., random walks. By contrast, this did not happen for speed- and cost-balanced propagation strategies. This result indicates that hubs were responsible for the slowdown of communication by packetization when inefficient propagation strategies were used. The present study revealed the unique roles of hub regions in shaping the dynamics of packet-based communication on structural brain networks.
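The sketch below illustrates the two null models compared above on a toy hub-containing network: a degree-preserving rewiring keeps every node's degree (and hence the hubs), while a fully randomized network with the same number of nodes and edges destroys them. The packet-routing simulation itself is omitted.

```python
# Degree-preserving vs. full randomization of a toy hub-containing network.
import networkx as nx

G = nx.barabasi_albert_graph(100, 3, seed=1)         # toy network with hubs

G_deg = G.copy()                                      # degree-preserving null model
nx.double_edge_swap(G_deg, nswap=10 * G.number_of_edges(), max_tries=10**6, seed=1)

G_rand = nx.gnm_random_graph(G.number_of_nodes(),     # fully randomized null model
                             G.number_of_edges(), seed=1)

top = lambda g: sorted(dict(g.degree()).values(), reverse=True)[:5]
print("original top degrees        :", top(G))
print("degree-preserved top degrees:", top(G_deg))    # hubs preserved
print("fully randomized top degrees:", top(G_rand))   # hubs destroyed
```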
2022年7月2日 11:00~12:00 宜野湾市民体育館 ポスター会場2
3P-291
マーモセット前頭前野からの皮質間および、皮質線条体投射が示す2つの対照的な特徴について
Two contrasting features of corticocortical and corticostriatal projections of the marmoset prefrontal cortex

*渡我部 昭哉(1)、Henrik Skibbe(1)、中江 健(2)、阿部 央(1)、一戸 紀孝(3)、Jian Wang(1)、高司 雅史(1)、水上 浩明(4)、Alexander Woodward(1)、Rui Gong(1)、畑 純一(5)、岡野 栄之(6)、石井 信(2)
1. 理研 脳神経科学研究センター、2. 京都大学、3. 国立精神・神経医療研究センター、4. 自治医科大学、5. 東京都立大学、6. 慶應大学
*Akiya Watakabe(1), Henrik Skibbe(1), Ken Nakae(2), Hiroshi Abe(1), Noritaka Ichinohe(3), Jian Wang(1), Masafumi Takaji(1), Hiroaki Mizukami(4), Alexander Woodward(1), Rui Gong(1), Junichi Hata(5), Hideyuki Okano(6), Shin Ishii(2)
1. RIKEN CBS, Wako, Japan, 2. Kyoto Univ, Kyoto, Japan, 3. NCNP, Kodaira, Japan, 4. Jichi Med Univ., Shimotsuke, Japan, 5. Tokyo Metropolitan Univ., Tokyo, Japan, 6. Keio Univ., Tokyo, Japan

Keyword: prefrontal cortex, marmoset, corticostriatal, connectome

The prefrontal cortex (PFC) is positioned at the highest stage of neural integration and exerts top-down influences on various brain regions. To clarify the structural basis of control by PFC neurons, we performed projectome mapping of the PFC using common marmosets as the model primate. Based on high-resolution 3D datasets, we characterized two contrasting features of corticocortical and corticostriatal projections that may exert differential effects on target neurons. One was the abundance of focal projections, which are characterized by multiple foci of axonal terminations packed into narrow spots. These foci were scattered within the extra-PFC association areas or in the caudate nucleus in a globally topographic manner. The other was the wide coverage of cortical and striatal regions by the spread of tracers with varying intensity and sparsity, which led to extensive overlaps between different injections. These overlaps were analyzed as common patterns of projections by nonnegative matrix factorization (NMF). Quantitative characterization of these features revealed the primary importance of topographic gradients within the PFC in determining the projection profile of a given site. These data not only provide deep insight into the organization of PFC areas and their influences on their projection targets but also provide a basis for deciphering the complex architecture of the primate brain, including that of humans. Ref: https://dataportal.brainminds.jp/marmoset-tracer-injection
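As a hedged sketch of the NMF step described above (not the authors' analysis), the snippet below factorizes a hypothetical injections-by-targets projection-strength matrix into a small number of shared projection patterns and injection-wise loadings; the synthetic data and the number of components are arbitrary.

```python
# NMF of a synthetic injections-by-targets projection-strength matrix.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
n_injections, n_targets, n_patterns = 30, 120, 4

true_patterns = rng.gamma(2.0, 1.0, (n_patterns, n_targets))   # hidden shared patterns
mixing = rng.dirichlet(np.ones(n_patterns), n_injections)      # per-injection loadings
strength = mixing @ true_patterns + 0.05 * rng.random((n_injections, n_targets))

model = NMF(n_components=n_patterns, init="nndsvda", max_iter=500, random_state=0)
weights = model.fit_transform(strength)      # injection-wise loadings
patterns = model.components_                 # common projection patterns
print("reconstruction error:", round(model.reconstruction_err_, 3))
print("loadings of injection 0:", weights[0].round(2))
```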
2022年7月2日 11:00~12:00 宜野湾市民体育館 ポスター会場2
3P-292
A novel python programming interface for STEPS
*Jules Lallouette(1), Erik De Schutter(1)
1. Okinawa Institute of Science and Technology Graduate University (OIST)

Keyword: Computational Neuroscience, Simulation, Software

STEPS (steps.sourceforge.net) is a modeling and simulation software package that allows the simulation of stochastic neuronal reaction-diffusion processes on realistic tetrahedral meshes [1]. STEPS models are fully defined by one or several python scripts that encompass model declaration, geometry specification, and simulation control. With the original STEPS python interface, as the complexity of models increases, these scripts become harder to maintain and extend. Notably, the modeling of multi-state complexes with a very high number of distinct functional states, like the Ca2+/calmodulin dependent protein kinase II [2], can be a tedious and error-prone process.

To address these problems, we introduce a novel, more intuitive STEPS python interface. This new programming interface implements rule based modeling methods [3] that greatly simplify the declaration of chemical reactions involving multi-state complexes by specifying template reactions involving subunits of these complexes. With the release of this new interface, data recording was overhauled and automatized so that simulation data can be exported to file formats readily usable by state-of-the-art data visualization software like ParaView (www.paraview.org) or VisIt (visit-dav.github.io/visit-website). In addition to the increased readability of model scripts, emphasis was put on making model sharing and publication easier with the addition of a physical units system and the automatic export of parameter tables. We discuss these new features and present the modeling workflow using this new python interface, from model specification to data visualization.

References
[1] I. Hepburn, W. Chen, S. Wils, and E. De Schutter. Steps: efficient simulation of stochastic reaction–diffusion models in realistic morphologies. BMC systems biology, 6(1):36, 2012.
[2] J. B. Myers, V. Zaegel, S. J. Coultrap, A. P. Miller, K. U. Bayer, and S. L. Reichow. The camkII holoenzyme structure in activation-competent conformations. Nature communications, 8(1):1–15, 2017.
[3] L. A. Chylek, E. C. Stites, R. G. Posner, and W. S. Hlavacek. Innovations of the rule-based modeling approach. In Systems Biology, pages 273–300. Springer, 2013.
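To convey the rule-based idea in plain Python (this is deliberately not the STEPS syntax; all names and the state encoding below are hypothetical), the sketch expands one subunit-level template rule into the many concrete complex-level reactions it implies, which is the combinatorial burden that rule-based declaration removes from the modeler.

```python
# Generic illustration of rule-based expansion for a multi-subunit complex
# (NOT the STEPS interface): one template rule over subunit states is expanded
# into all concrete complex-state transitions.
from itertools import product

N_SUBUNITS = 4                                   # toy complex with 4 subunits
STATES = ("U", "P")                              # unphosphorylated / phosphorylated

def expand_rule(rule):
    """Expand a subunit-level template rule into concrete complex-level reactions."""
    reactions = []
    for state in product(STATES, repeat=N_SUBUNITS):
        for i, s in enumerate(state):
            if s == rule["from"]:
                new_state = state[:i] + (rule["to"],) + state[i + 1:]
                reactions.append(("".join(state), "".join(new_state), rule["rate"]))
    return reactions

phosphorylation = {"from": "U", "to": "P", "rate": 1.0}    # one template rule ...
concrete = expand_rule(phosphorylation)                    # ... many concrete reactions
print(len(concrete), "concrete reactions from 1 rule, e.g.", concrete[:2])
```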
2022年7月2日 11:00~12:00 宜野湾市民体育館 ポスター会場2
3P-293
多変量時系列の類似性定量化手法及びグループサロゲート法:安静時fMRIにおける研究
Similarity quantification and group surrogate data generating models of multivariate time series: a resting-state fMRI study

*奥野 琢人(1)、畑 純一(1,4)、塚田 啓道(3)、中江 健(2)、岡野 栄之(1)、Alexander Woodward(1)
1. 理化学研究所、2. 京都大学、3. 中部大学、4. 東京都立大学
*Takuto Okuno(1), Junichi Hata(1,4), Hiromichi Tsukada(3), Ken Nakae(2), Hideyuki Okano(1), Alexander Woodward(1)
1. RIKEN, 2. Kyoto University, 3. Chubu University, 4. Tokyo Metropolitan University

Keyword: Group Surrogate Data Generating Model, Multivariate Time-series Ensemble Similarity Score, VARDNN, Resting-state fMRI

To elucidate the nature of brain function and its pathologies, improvements in non-invasive approaches such as big-data analysis and in silico simulation are important. Structural and functional analysis of the whole brain using MRI (magnetic resonance imaging) data is one popular approach. Simulation based on these data is an important and challenging topic in the field of neuroscience. In this study, we propose a new method for quantifying the similarity between two multivariate signals, called the Multivariate Time-series Ensemble Similarity Score (MTESS). MTESS consists of several major statistical properties for analyzing multivariate time-series similarity. We analyzed gender differences and session differences in human HCP (Human Connectome Project) resting-state (rs-)fMRI data (132 regions) and the effect of anesthesia in marmoset rs-fMRI data (254 regions) by using MTESS and other statistical tests. The MTESS results showed clear performance as a quantification method. Then, we used several VAR (vector auto-regression) surrogate techniques to establish a set of ‘group surrogate data’ generating models (GSDGMs) that generate plausible standard brain dynamics based on a large HCP rs-fMRI dataset (N=410, 410). The group surrogate data generated by the Vector Auto-Regressive Deep Neural Network (VARDNN) surrogate showed the best similarity score with the rs-fMRI signals of HCP subjects, followed by the Principal Component VAR (PCVAR) and VAR surrogate methods. The similarity scores of group surrogate data vs. real rs-fMRI data were also better than those of similarity comparisons across the real rs-fMRI data of HCP subjects. Our results show that VAR-based surrogate techniques can successfully generate group-representative fMRI signals that possess the statistical properties found in real fMRI signals. These will be further used for in silico whole-brain simulations.
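As a minimal stand-in for the group-surrogate idea described above (not the authors' VARDNN/GSDGM code), the sketch below fits a first-order VAR model to multivariate time series by least squares and generates surrogate data by driving the fitted model with resampled residuals; the data, model order, and noise handling are simplifications.

```python
# Fit a first-order VAR by least squares and generate VAR surrogate data.
import numpy as np

rng = np.random.default_rng(0)
T, k = 1000, 8                                    # time points, "regions"
A_true = 0.7 * np.linalg.qr(rng.standard_normal((k, k)))[0]   # stable coefficients
X = np.zeros((T, k))
for t in range(1, T):
    X[t] = X[t - 1] @ A_true.T + 0.5 * rng.standard_normal(k)

Y, Z = X[1:], X[:-1]                              # model: X[t] = A X[t-1] + e[t]
A_hat = np.linalg.lstsq(Z, Y, rcond=None)[0].T
residuals = Y - Z @ A_hat.T

surrogate = np.zeros_like(X)                      # drive the fitted model with
for t in range(1, T):                             # resampled residuals
    e = residuals[rng.integers(0, residuals.shape[0])]
    surrogate[t] = surrogate[t - 1] @ A_hat.T + e

print("mean |A_hat - A_true|:", round(float(np.abs(A_hat - A_true).mean()), 3))
```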
2022年7月2日 11:00~12:00 宜野湾市民体育館 ポスター会場2
3P-294
3種類の抑制性細胞が協働する局所神経回路の興奮-抑制バランス変化に起因する活動変調
Pathological Effects of an Excitatory/Inhibitory Imbalance in a Microcircuit Model Involving Three Inhibitory Neuron Classes

*我妻 伸彦(1)、信川 創(2)
1. 東邦大学、2. 千葉工業大学
*Nobuhiko Wagatsuma(1), So Nobukawa(2)
1. Toho University, 2. Chiba Institute of Technology

Keyword: Inhibitory Neuron Classes, Cortical Microcircuit Model, Excitation/Inhibition Imbalance, Autism spectrum disorder (ASD)

Autism spectrum disorder (ASD) and schizophrenia (SZ) are a group of complex and heterogeneous mental disorders involving multiple neural system dysfunctions. Atypical visual perception in people with these disorders is hypothesized to stem from an excitatory/inhibitory (EI) imbalance in the brain (Bruining et al., Sci. Rep., 2020). Previous studies have reported that a majority of cortical inhibitory neurons express one of three genes, parvalbumin (PV), somatostatin (SOM) and vasoactive intestinal polypeptide (VIP), which underlie the regulation of cortical activities (Pfeffer et al., Nature Neurosci., 2013; Lee et al., Cell Reports, 2018). However, the detailed neuronal microcircuit consisting of these inhibitory neuron classes and excitatory pyramidal (Pyr) neurons that underlies perception in people with ASD and SZ remains largely unclear. In this study, we developed a computational microcircuit model with a biologically plausible structure of cortical visual layers 2 and 3 and performed simulations while varying the EI balance in the proposed model by transferring a specific inhibitory neuron class to Pyr neurons and vice versa. Simulations of our microcircuit model implied that changes in the EI balance caused by decreasing the SOM neuronal population preferentially impaired neural gamma-band activity, in agreement with the experimental results of an MEG study of patients with ASD during perceptual organization (Sun et al., J. Neurosci., 2012). By contrast, the magnitude of gamma-band activity was markedly enhanced with a decreasing level of the PV neuronal population. These simulation results of our microcircuit model might provide important insights into the atypical structure of neuronal networks in ASD and SZ.
2022年7月2日 11:00~12:00 宜野湾市民体育館 ポスター会場2
3P-295
小脳神経回路のマルチコンパートメントモデルシミュレーションを用いたプルキンエ細胞の同期活動がもたらす深部小脳核ニューロンへの影響の調査
Multi-compartment model simulation of cerebellar Purkinje cells for gating activation of deep cerebellar nuclei via synchronization

*小林 泰良(1)、栗山 凜(1)、山﨑 匡(1)
1. 電気通信大学大学院情報理工学研究科
*Taira Kobayashi(1), Rin Kuriyama(1), Tadashi Yamazaki(1)
1. Grad Info Eng, Univ of Elec-Com, Tokyo, Japan

Keyword: Cerebellum, Purkinje cell, Network model simulation, Multi-compartment model

Spiking network simulation is a useful tool to reproduce and predict the spatiotemporal activity of neurons in a network. A typical simulation employs a relatively simple neuron model such as a leaky integrate-and-fire model. The use of such simple neuron models would be justified as long as brain functions emerge solely from the interactions of multiple neurons via exchanging spikes, rather than from information processing in a single neuron via dendritic nonlinear computation. However, properties of single neurons such as dendritic morphology and ion channel distributions actually play various essential roles in information processing in the brain. For example, in neurodegenerative diseases such as spinocerebellar ataxia type 1 (SCA1), morphological degeneration of cerebellar Purkinje cells and motor dysfunction are reported. To investigate the causal relationship between neuronal morphology and brain function, spiking network simulation using biophysical neuron models with detailed morphology and ion channels, called multi-compartment models, will be useful. In this study, we simulated a cerebellar network model using multi-compartment models and investigated the functional roles of the morphological structure of cerebellar Purkinje cells. In particular, we manipulated the dendritic morphology of Purkinje cells as seen in SCA1 animals, and examined how the manipulation affected the activities of Purkinje cells and downstream deep cerebellar nuclear neurons, while assuming gap junctions between dendrites of Purkinje cells in a parasagittal plane. We observed that normal Purkinje cells synchronized in the beta oscillation range due to the gap junctions, which in turn inhibited the deep cerebellar nuclei rhythmically and allowed the nuclei to elicit spikes regularly and in an oscillatory manner. In contrast, abnormal Purkinje cells failed to emit spikes synchronously, because degenerated dendrites cannot form sufficient gap junctions. The asynchronous activity inhibited the deep cerebellar nuclei tonically and left the nuclei completely silent. These results suggest that the normal morphological structure of Purkinje cells plays a key role in synchronizing their activity, which in turn causes regular spiking activity in the deep cerebellar nuclei.
2022年7月2日 11:00~12:00 宜野湾市民体育館 ポスター会場2
3P-296
Using cDNN approach to link light information with neuronal pathways
*Joshua Kai Kinugasa(1,2), Karen Wakamatsu(1,3), Rafael Viana(1), Pavel Prosselkov(1)
1. Dept Bio, Manai Institute of Science and Technology, Tokyo, Japan, 2. KAIS International School, Tokyo, Japan, 3. Mita International School

Keyword: Neuroinformatics, Clustered Deep Neural Network, Zebrafish

Existing works based on modern computing paradigms have shown a correlative linkage between stimuli and the responses of various types of neuronal cells, but the confidence intervals presented are often extremely wide, and the findings of different groups tend to differ and are not always reproducible. Furthermore, works which attempt to causally link the two different types of data described above are rare. As such, how each type of neuron integrates the variety of signals it receives from its downstream neighbors, as well as how the captured information gets distributed across the neuronal network population and coordinates its synchronous activity, still remains largely undetermined.

Last year, to tackle such challenges, the precedent of this work (1) suggested various computing methods, namely multithreaded CDNNs and multivariable Taylor series, to establish a connection between the following two datasets:
1. whole-brain neuronal dynamics of 10,000 neurons during natural prey-capture behavior of zebrafish, with activity presented as fluorescence intensity recordings (Cong et al. 2017, eLife, 6:e28158)
2. how the diverse functional and structural properties of individual retinal ganglion cells in zebrafish, responding to distinct presented colors (at both the dendritic and somatic levels, via two-photon imaging), contribute to the total information flow of the spectral contrast while intermixed with temporal information, also during prey capture (Zhou et al. 2020, bioRxiv, doi: 10.1101/2020.01.31.927087)

Since then, we have used these methods to analyze the data and produce vector field representations for the first dataset, and have detected differences between the pan-retinal region and the strike zone discussed in the original paper. We are now running a similar analysis for the second dataset and striving to link the two together, in order to create a function mapping light input to the neuronal pathway of the associated signal. We expect that integrating these two available datasets using the proposed multithreading method can improve the current understanding of how a brain consisting of more than 100 billion active neurons processes information to drive behavior.

(1) J. Karpelowitz, R. Viana, P. Prosselkov (2021) "Understanding brain-behavior interactions and information processing by parallel computation approach" 44th JNS-2021, Kobe, 28th-31st July.
2022年7月2日 11:00~12:00 宜野湾市民体育館 ポスター会場2
3P-297
ヒト脳の機能的ネットワークにおける双方向接続のコア
Bidirectionally connected cores of the human functional connectome

*田口 智也(1)、北園 淳(1)、笹井 俊太朗(2)、大泉 匡史(1)
1. 東京大学、2. 株式会社アラヤ
*Tomoya Taguchi(1), Jun Kitazono(1), Shuntaro Sasai(2), Masafumi Oizumi(1)
1. The University of Tokyo, 2. Araya Inc.

Keyword: consciousness, network core, functional connectivity, Human Connectome Project

The neural substrate for consciousness is still unknown but is gradually becoming better understood. Previous studies have shown that conscious experience depends on both feedforward and feedback processing, suggesting that bidirectional interactions among brain regions are important for consciousness. Based on this, it has been hypothesized that the basis of consciousness is in the “core” of the brain network, where bidirectional interaction is particularly strong. Accordingly, extracting such cores from brain dynamics would likely aid the identification of brain regions supporting consciousness. Also, if some specific regions are commonly included in such cores across various cognitive states, the regions would be indispensable for consciousness. Here, we applied a recently proposed method of extracting network cores with strong bidirectional connections to causal networks estimated from whole-brain fMRI data. To examine the core structure shared across different cognitive states, we compared data in a resting state and seven cognitive task states from the Human Connectome Project. We found that, for the resting state, many regions of the occipital lobe were included in the cores, while regions of the frontal lobe were not. We also found that the same tendency was commonly observed for the seven tasks. These results indicate that the occipital lobe is essential for consciousness, which is roughly consistent with the prediction of the integrated information theory that consciousness is associated with the posterior part of the neocortex.
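As a deliberately simplified stand-in for the core-extraction method referenced above (not the algorithm used in the study), the sketch below scores each region pair by the weaker of its two directed weights and keeps regions with at least one strong reciprocal link as a crude "bidirectionally connected core"; the network and threshold are arbitrary.

```python
# Crude bidirectional-core illustration on a random directed weight matrix.
import numpy as np

rng = np.random.default_rng(0)
n = 10
W = rng.random((n, n)) * (rng.random((n, n)) < 0.4)   # sparse directed weights
np.fill_diagonal(W, 0.0)

bidir = np.minimum(W, W.T)               # strong only if both directions are strong
threshold = 0.3
keep = bidir.max(axis=1) > threshold     # at least one strong reciprocal connection
print("regions in the crude bidirectional core:", np.where(keep)[0].tolist())
```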
2022年7月2日 11:00~12:00 宜野湾市民体育館 ポスター会場2
3P-298
Spatiotemporal Reconstruction Accuracies of M/EEG Inverse Solution Models
*Duygu Sahin(1), Hiroaki Mizuhara(1)
1. Grad Sch Informatics, Kyoto Univ, Kyoto, Japan

Keyword: M/EEG INVERSE PROBLEM, SOURCE LOCALIZATION, MATRIX-BASED AND TENSOR-BASED MODEL ACCURACY

Magnetoencephalography (MEG) and electroencephalography (EEG) are two popular non-invasive techniques for investigating oscillations and neuronal network dynamics. With recent advances in technology and computational science, source localization techniques are beginning to delineate the physiological relevance of oscillations as well as transients in source space. Current models solve the inverse problem of M/EEG by utilizing various regularization techniques and spatio-temporal dictionaries. However, there is no perfect model fitting all types of data yet, and there are no clear guidelines for selecting models and parameters. Thus, in this study we focused on how selected unsupervised matrix-based (STONNICA, Valdés-Sosa et al., 2009) and tensor-based (PARAFAC, Karahan et al., 2015) models, as well as dictionary-supervised models (STOUT, Castaño-Candamil et al., 2015), perform on simulated EEG data with various parameters related to signal complexity. We fixed the head model, electrode number, source extent and orientation while varying the number of sources, locations, types of signals and percentage of noise. In total there are 45 different cases with 100 simulations each. To assess the quality of the methods, focality, location accuracy, and the Earth Mover's Distance are used for the spatial extent of the sources, whereas correlation and envelope correlation are used for the temporal extent of the sources. Reconstruction accuracy is used to take both extents into account. All results from the six quality metrics were then tested statistically by 4-way ANOVA. We observed that the results are heavily dependent on tuning-parameter optimization and that no information criterion offered the best solution. Thus, both visual inspection and tuning-parameter optimization techniques such as the Bayesian Information Criterion and Corcondia are used. In summary, apart from the envelope correlation, all quality metrics of each model are found to be significantly different from each other. While tensor-based models offer flexibility over the selection and tuning of the dimensions, matrix-based methods perform better when the penalties and dictionaries explain the data well. Our study demonstrates that even with the state-of-the-art models available, inverse-problem estimation cannot achieve both temporal and spatial accuracy perfectly.
2022年7月2日 11:00~12:00 宜野湾市民体育館 ポスター会場2
3P-299
ZAViewer: An Online High-Resolution Zooming Brain Image Viewer with Artificial Intelligence Capability
*Rui Gong(1), Frederic Papazian(1), Masahide Maeda(1), Jonathan Lai(1), Hiroshi Abe(2), Toshiki Tani(2), Noritaka Ichinohe(2), Alexander Woodward(1)
1. Connectome Analysis Unit, RIKEN Center for Brain Science, 2. Laboratory for Molecular Analysis of Higher Brain Function, RIKEN Center for Brain Science

Keyword: Online image viewer, Brain Atlas, Artificial Intelligence, Image processing

We present our online image viewer, the Zooming Atlas Viewer (ZAViewer), for viewing and sharing very high resolution (e.g. 50,000x50,000 pixels) microscopy images of the brain. The viewer was originally designed for viewing images of the common marmoset brain generated from Japan’s Brain/MINDS project. ZAViewer supports multi-layer visualization as a stack, so users can overlay data such as fluorescence images, signal segmentation, myelin, Nissl, or brain region delineations. The viewer supports loading of anatomical hierarchy information associated with region delineations so an entire brain dataset can be easily navigated through. One of the novelties of the viewer is its ability to process the displayed images via trained artificial intelligence models running entirely in the browser. Other features include the ability to edit and save vector-based delineations, and the choice of using ZAViewer with a dedicated back-end (image server and web-services providing the configuration), or without (as a simple web-app hosted with its image and configuration data stored as static files). With an image server for serving images, ZAViewer supports 16-bit images and server-side image manipulation, such as contrast and gamma adjustment. Our goal is to provide a user-friendly tool to the neuroscience community for the viewing and sharing of high-resolution, multi-contrast brain image data in the web-browser.
2022年7月2日 11:00~12:00 宜野湾市民体育館 ポスター会場2
3P-300
The STEPS distributed solver: Optimizing memory consumption of large scale spatial reaction-diffusion parallel simulations
*Weiliang Chen(1), Tristan Carel(2), Iain Hepburn(1), Jules Lallouette(1), Alessandro Cattabiani(2), Christos Kotsalos(2), Nicola Cantarutti(2), Pramod Kumbhar(2), James Gonzalo King(2), Erik De Schutter(1)
1. Okinawa Institute of Science and Technology Graduate University, 2. École Polytechnique Fédérale de Lausanne

Keyword: reaction-diffusion, parallel simulation, stochastic, large-scale

Computer simulations have become an essential tool for neuroscience research in recent decades thanks to the rapid evolution of computer hardware and scientific simulators. High-performance computing clusters and parallel simulators, such as the "TetOpSplit" solver in the STEPS simulator [1], provide new opportunities to model complex neuroscience phenomena at a scale that is unreachable for their traditional serial counterparts. For example, STEPS can simulate molecular reactions and diffusion with electrophysiological changes on a complete neuron cell morphology [2].
The scale of scientific models simulated on HPC clusters is often restricted by multiple factors, one of which is memory capacity. Unlike regular desktop computers, computing clusters have a much more constrained per-core memory quota [3], usually below 4 gigabytes for CPU-based systems and even less for other accelerators such as GPUs. This constraint becomes a major bottleneck in the "TetOpSplit" solver as the model scale increases [4]. In order to support very large-scale model simulations, we have introduced a new parallel "DistTetOpSplit" solver in STEPS 4.0 [4], with a sophisticated distributed mesh library and an efficient data layout as its backend. This new solution significantly reduces the memory footprint of the simulation while maintaining similar performance.
In this presentation we will report the recent progress of this solver, including performance optimizations and feature enhancements. We will also explain its usage from a modeler's perspective and discuss suitable modeling scenarios for this new solution.

References:
[1] Chen, W. and De Schutter, E. (2017). Parallel STEPS: Large scale stochastic spatial reaction-diffusion simulation with high performance computers. Frontiers in Neuroinformatics, 11, 13. doi:10.3389/fninf.2017.00013.
[2] Chen, W., Hepburn, I., Martyushev, A., De Schutter, E.. Modeling Neurons in 3D at the Nanoscale, in Guigliano, M. (Ed.) Computational Neuroscience Approaches to Cells and Circuits, Springer Nature, in press.
[3] Zivanovic, D., Pavlovic, M., Radulovic, M., Shin, H., Son, J., Mckee, S. A., et al. (2017). Main memory in hpc: Do we need more or could we live with less? ACM Trans. Archit. Code Optim.14. doi:10.1145/3023362.
[4] Carel, T., Chen, W., et al. STEPS 4.0: Fast and memory-efficient molecular simulations of neurons at the nanoscale, Frontiers in Neuroinformatics, submitted.
2022年7月2日 11:00~12:00 宜野湾市民体育館 ポスター会場2
3P-301
皮質コネクトームにおける階層的な双方向性ネットワーク構造の種間比較
Cross-species comparison of bidirectionally connected core structures in the cortical connectome

*阿部 剛大(1)、田口 智也(1)、北園 淳(1)、大泉 匡史(1)
1. 東京大学 大学院総合文化研究科
*Kota Abe(1), Tomoya Taguchi(1), Jun Kitazono(1), Masafumi Oizumi(1)
1. Grad Sch of Arts and Sciences, Univ of Tokyo, Tokyo, Japan

Keyword: cross-species comparison, network core, structural connectome , consciousness

In recent years, many comparative studies of brain network topology across species have been conducted to identify general principles and species-specific features in neural wiring. The connectomes of different species can be compared quantitatively by graph theory, and many structural similarities have been identified, including a remarkable community structure. A recent study proposed a method for hierarchically decomposing a network into cores based on the strength of bidirectionality. Bidirectionality is regarded as an important property closely related to consciousness, and it was found that the cores with strong bidirectional connections included regions related to consciousness in a whole-brain mouse connectome. However, little is known about how such a structure is shared across species. Here, we performed a comparative analysis of bidirectionally connected core structures in the cortical connectome of mouse, rat, and macaque. We found that cores with strong bidirectional connections contain specific regions in all cortical connectomes. Many of them were isocortical areas, but the entorhinal cortex in the hippocampal formation and the basolateral amygdala in the cortical subplate were also contained in the cores with strong bidirectional connections. In addition, the visual cortex tended to be contained in relatively weak cores compared to the other isocortical areas. On the other hand, species-specificity of the core structure was observed in the macaque connectomes, where the relative bidirectional strength of the cores including frontal cortical regions is different from both mouse and rat. These results suggest that while most of the core structure of bidirectional connections could be evolutionarily conserved in mammalian cortical networks, there may be significant differences in the structure when species difference is large.
2022年7月2日 11:00~12:00 宜野湾市民体育館 ポスター会場2
3P-302
Geometric Measures of Dorsolateral Prefrontal Cortex in Parkinson’s Disease: Implications for Personalized Transcranial Brain Stimulation
*Lu Hanna(1,2)、Zhang Li(3)、Meng Lin(4)、Ning Yuping(2,5)、Jiang Tianzi(6)
*Hanna Lu(1,2), Li Zhang(3), Lin Meng(4), Yuping Ning(2,5), Tianzi Jiang(6)
1. Department of Psychiatry, The Chinese University of Hong Kong, Hong Kong SAR, China, 2. The Affiliated Brain Hospital of Guangzhou Medical University, Guangzhou, China, 3. Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong, Hong Kong SAR, China., 4. Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, China, 5. The First School of Clinical Medicine, Southern Medical University, Guangzhou, China, 6. Institute of Automation, Chinese Academy of Sciences, Beijing, China

Keyword: Brain stimulation, Parkinson's disease, Computational modeling, Simulation

Background: Transcranial magnetic stimulation (TMS) is increasingly used for ameliorating the cognitive dysfunction and mood disturbance in Parkinson’s disease (PD). Cortical morphometry, as a phenotype of PD, plays a vital role in determining the precise locations of treatment targets and the corresponding stimulation-induced electric field intensity. This study was proposed to investigate and quantify the geometric features of dorsolateral prefrontal cortex (DLPFC) and its impact on stimulation-induced electric field in PD patients.
Methods: Structural magnetic resonance imaging scans from PD patients (n=47) and age-matched normal controls (n=36) were drawn from the NEUROCON and Tao Wu datasets. Scalp-to-cortex distance (SCD), as a geometric index, was used to measure the distance from scalp (i.e., TMS coil) to the cortex in the targets of left DLPFC. The Montreal Neurological Institute (MNI) coordinates of the targets of left DLPFC were drawn from the published studies, including: (1) Brodmann Area (BA) 9 Centre; (2) EEG F3; (3) Average 5 cm; (4) Fitzgerald Target; (5) Paus Cho Target; (6) Rusjan Target; (7) BA46 Centre. The intensity and focality of the SCD-dependent TMS-induced electric field of the seven targets of left DLPFC were examined and quantified by using Finite Element Method (FEM).
Results: PD patients had increased SCD and higher variability of the left DLPFC than normal controls across the seven targets. Significant differences in the SCDs were found for specific targets, including the Fitzgerald Target (t = -2.14, p = 0.036), the Paus Cho Target (t = -2.42, p = 0.018), and the BA 46 Centre (t = -2.23, p = 0.029). Moreover, PD patients demonstrated a markedly decreased peak electric field strength at EEG F3 (Target 2: t = 2.129, p = 0.041), Average 5 cm (Target 3: t = 2.919, p = 0.006), the Rusjan Target (Target 6: t = 2.936, p = 0.006), and the BA 46 Centre (Target 7: t = 2.298, p = 0.028). The SCD of the left DLPFC performed better in differentiating early-stage PD patients from normal controls than other morphometric measures.
Conclusion: This is the first demonstration that region-specific scalp-to-cortex distance, as a key parameter of transcranial brain stimulation, has a great impact on the stimulation-induced electric field and could be used as a marker to differentiate early-stage PD patients. Our findings have important implications for developing and optimizing personalized dosimetry in the treatment of age-related neurodegenerative diseases.
2022年7月2日 11:00~12:00 宜野湾市民体育館 ポスター会場2
3P-303
細胞間の相互作用に起因した小脳における樹状突起選択の計算モデル
A computational model for dendritic pruning in the cerebellum induced by intercellular interactions

*加藤 瑞己(1)、Erik De Schutter(1)
1. 沖縄科学技術大学院大学
*Mizuki Kato(1), Erik De Schutter(1)
1. Okinawa Institute of Science and Technology Graduate School

Keyword: Purkinje cell, granule cell, dendritic selection, migration

The cerebellum is involved in coordinating motor functions as well as in cognition and emotion. The major neurons of this complex information processor migrate and mature postnatally in mammalian brains. During the postnatal development phase, each Purkinje cell, the sole output neuron of the cerebellar cortex, selects a primary dendritic tree among multiple young branches. Meanwhile, a large population of granule cells, the most numerous neurons in the brain, migrates from the surface to the bottom of the cortex. This reconstruction of the cortical layers by granule cells creates a highly crowded environment in which Purkinje cells must grow their dendrites. Although intensive interactions between Purkinje cell dendrites and migrating granule cells have been recognized, the involvement of granule cells in the rules governing primary dendrite selection is still unclear. To investigate these interactions under strong control over maturation parameters, we constructed a computational model representing the migration of 3,024 granule cells and the dendritic development of 48 Purkinje cells in a 3D cube. By varying dendritic pruning conditions, we observed changes in the maturation of synapses between primary trees and granule cells. A new version of the NeuroDevSim software (developed by the Computational Neuroscience Unit at OIST) is used for its capability to simulate interactions among large populations shaping neuronal morphology. This study presents the first computational model that simultaneously simulates populations of growing Purkinje cells and the dynamics of migrating granule cells. The model can bridge the gap in understanding the developmental course of early neonatal Purkinje cell dendrites from the perspective of cellular interactions, and may provide new insights into how the cerebellar cortex develops into a normal or abnormal structure.
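The sketch below is not NeuroDevSim code; it is a stand-alone toy illustration, under assumed parameters, of one possible pruning rule in which each Purkinje cell retains the candidate branch that accumulates the most granule-cell contacts:

```python
# Stand-alone toy illustration (not NeuroDevSim): each Purkinje cell keeps the
# young dendritic branch that has collected the most granule-cell contacts and
# prunes the rest. Contact counts here are random placeholders.
import random

random.seed(1)
N_PURKINJE = 48
N_BRANCHES = 4          # young primary-dendrite candidates per cell
CONTACT_RATE = 12       # mean granule-cell contacts per branch (arbitrary)

surviving_branches = []
for cell in range(N_PURKINJE):
    # Draw a contact count per candidate branch from migrating granule cells.
    contacts = [random.randint(0, 2 * CONTACT_RATE) for _ in range(N_BRANCHES)]
    winner = max(range(N_BRANCHES), key=lambda b: contacts[b])
    surviving_branches.append((cell, winner, contacts[winner]))

print(f"{len(surviving_branches)}/{N_PURKINJE} cells selected a primary dendrite; "
      f"example: cell 0 kept branch {surviving_branches[0][1]} "
      f"with {surviving_branches[0][2]} contacts")
```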
2022年7月2日 11:00~12:00 宜野湾市民体育館 ポスター会場2
3P-304
Branch-specific clustered Parallel Fiber input computation in a Purkinje cell
*Capo Rangel Gabriela(1)、De Schutter Erik(1)
1. 沖縄科学技術大学院大学
*Gabriela Capo Rangel(1), Erik De Schutter(1)
1. Okinawa Institute of Science and Technology (OIST)

Keyword: Purkinje Cell, parallel fiber , multiplexed coding, dendritic spikes

The cerebellum is located at the back of the brain and has been intensively studied for its role in coordinating movements and maintaining equilibrium. Recent years have uncovered an even more vital role that the cerebellum may play in cognition and thinking [1,2,3], indicating an immense need for further study. The most prominent cell type in the cerebellum is the Purkinje cell (PC), characterized by a remarkably extensive branching of its dendritic tree. PCs represent the sole output of the cerebellar cortex. Their unique dendritic morphology enables them to receive more inputs than any other cell type in the brain and allows them to encode sensorimotor information with great accuracy. PCs receive two types of excitatory synaptic input: a single climbing fiber (CF) that forms hundreds of synapses, and approximately 100,000 parallel fibers (PFs) that run orthogonally across the PC dendrites. Unlike the calcium spikes triggered by the CF [4], the dendritic spikes triggered by PFs are still largely unexplored and therefore represent the focus of this study.
To understand cerebellar function, it is essential to untangle the mechanisms through which PCs encode their input information. Hong et al. [5] offered the first evidence of multiplexed coding in the cerebellum, showing that PCs use both the timing and the rate of their spiking activity to encode precise information. To determine which coding strategy PCs use in response to clustered PF input, Zang et al. [6] developed a computational PC model. Their results showed that most branches of the PC integrate PF inputs linearly, while four others generate localized, all-or-none dendritic spikes.
We further explored the multiplexed coding strategies that Purkinje cells employ in response to clustered PF input by proposing the first PC model with heterogeneous ion channel densities. The dendritic tree is split into different branches over which we distribute PF synapses. We show how changing the biophysical properties shifts the coding strategy from linear to burst-pause and vice versa for all branches. We determine the PF threshold required to initiate an all-or-none dendritic spike for each branch and address dendritic spike propagation. Furthermore, we simulate the activation of multiple branches and show their burst-pause responses and dendritic spike propagation. Finally, we explore how a different branch selection influences the PF threshold and analyze its effect on the coding strategy.
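As a purely conceptual sketch (not the published PC model), the two branch-level regimes described above can be caricatured as a linearly integrating branch and an all-or-none branch with a PF-count threshold:

```python
# Toy sketch of the two branch behaviors (not the published PC model): one
# branch sums parallel-fiber (PF) input linearly, the other responds in an
# all-or-none fashion once a PF-count threshold is crossed.
def linear_branch(n_pf, gain=0.05):
    """Depolarization grows proportionally with the number of active PFs."""
    return gain * n_pf


def spiking_branch(n_pf, threshold=30, spike_amp=25.0, gain=0.05):
    """Below threshold the branch behaves linearly; above it a local dendritic
    spike of fixed amplitude is generated (all-or-none)."""
    return spike_amp if n_pf >= threshold else gain * n_pf


for n_pf in (10, 20, 30, 40):
    print(f"{n_pf:>3} PFs  linear: {linear_branch(n_pf):5.2f} mV   "
          f"all-or-none: {spiking_branch(n_pf):5.2f} mV")
```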
2022年7月2日 11:00~12:00 宜野湾市民体育館 ポスター会場2
3P-305
構造化モデルを用いた脳磁場信号源推定における信号源パターンの推定精度の評価
MEG source estimation using a grouped automatic relevance determination prior for complex brain activity patterns

*小泉 恒介(1)、宮崎 海(1)、韮澤 駿(1)、赤松 和昌(1)、宮脇 陽一(1)
1. 電気通信大学
*Kosuke Koizumi(1), Kai Miyazaki(1), Shun Nirasawa(1), Kazuki Akamatsu(1), Yoichi Miyawaki(1)
1. The University of Electro-Communications

Keyword: MEG, MACHINE LEARNING, SOURCE LOCALIZATION, DATA ANALYSIS

Magnetoencephalography (MEG) excels in temporal resolution for measuring human brain activity but lacks the spatial resolution needed to examine neural information representation at a fine scale. To overcome this issue, source estimation is often applied to MEG signals (Hämäläinen et al., 1994) and the estimated source patterns are analyzed by neural decoding (Nieuwenhuijzen et al., 2013). However, recent studies have shown that most MEG source estimation methods suffer from "information spreading", which yields significant decoding accuracy in brain areas irrelevant to the true source locations and can mislead us into false-positive interpretations (Sato et al., 2018). To suppress this phenomenon, we have proposed a structured Bayesian model using grouped automatic relevance determination (gARD) (Yu et al., 2015) and succeeded in narrowing the spatial extent of information spreading in a simple case assuming homogeneous source patterns in specified cortical areas (Ishibashi et al., 2019). In this study, we examined whether the model also works effectively for more complex source patterns resembling real brain activity. For this purpose, we generated two independent random patterns of source currents from a Gaussian distribution and assigned them to the bilateral V1 and the inferotemporal cortex (IT). MEG signals were generated from the source currents with additive observation noise, to which the gARD-based source estimation model was applied. Source estimation accuracy was evaluated by the correlation coefficient between the original and estimated source patterns. The spatial extent of information spreading was examined by searchlight decoding (Kriegeskorte et al., 2006) to classify the two source patterns over all cortical areas. The model's performance was compared with variational Bayesian multimodal encephalography (VBMEG) (Sato et al., 2004), one of the most accurate source estimation models currently available. The results showed that, compared with VBMEG, the gARD-based model achieved higher correlations between the original and estimated source patterns in both areas for small observation noise, and narrowed the spatial extent of information spreading irrespective of the magnitude of the observation noise. These results indicate that the gARD-based model can estimate complex source patterns better than the previous model while suppressing information spreading, particularly for small observation noise, suggesting its applicability to real human brain activity.
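To convey the flavor of a grouped ARD prior (one shared precision hyperparameter per cortical area in a linear forward model), the following minimal sketch uses a MacKay-style evidence update on toy data; it illustrates the general idea only, not the authors' model or VBMEG:

```python
# Minimal sketch of a grouped-ARD estimate for a linear forward model
# y = A @ x + noise, with one shared precision per source group (e.g., per
# cortical area). Illustration only, not the authors' implementation.
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_sources = 40, 60
groups = np.repeat(np.arange(6), 10)          # 6 areas, 10 sources each
A = rng.normal(size=(n_sensors, n_sources))   # toy lead-field matrix

x_true = np.zeros(n_sources)
x_true[groups == 2] = rng.normal(size=10)     # only area 2 is active
y = A @ x_true + 0.05 * rng.normal(size=n_sensors)

beta = 1.0 / 0.05**2                          # assumed known noise precision
alpha = np.ones(6)                            # one precision per group

for _ in range(50):
    alpha_per_src = alpha[groups]
    sigma = np.linalg.inv(beta * A.T @ A + np.diag(alpha_per_src))
    mu = beta * sigma @ A.T @ y
    gamma = 1.0 - alpha_per_src * np.diag(sigma)   # well-determined parameters
    for g in range(6):
        m = groups == g
        alpha[g] = gamma[m].sum() / (mu[m] @ mu[m] + 1e-12)

print("estimated group precisions:", np.round(alpha, 2))
print("correlation with true pattern:", np.round(np.corrcoef(mu, x_true)[0, 1], 3))
```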
2022年7月2日 11:00~12:00 宜野湾市民体育館 ポスター会場2
3P-306
Parallel modeling of vesicles in STEPS
*Iain Hepburn(1), Weiliang Chen(1), Jules Lallouette(1), Andrew Gallimore(1), Sarah Yukie Nagasawa(1), Erik De Schutter(1)
1. Okinawa Institute of Science and Technology Graduate University

Keyword: Molecular modeling, Vesicles, Software, Parallel computing

The molecular simulator STEPS [1] is designed to simulate neuronal molecular models spanning scales from synaptic signaling pathways up to single-cell electrical signals. A recent extension allows modeling of finite-sized vesicles in a hybrid system: vesicles are spherical structures that occupy volume in the same tetrahedral mesh in which regular reaction-diffusion and voltage computations are carried out. The interactions between vesicles, the transport and trafficking proteins that they host, and their environment allow modeling of processes such as docking, exocytosis, vesicle-vesicle binding, endocytosis, and active transport, alongside the regular reaction-diffusion processes that STEPS supports.

Single-core applications of molecular simulators such as STEPS are very limited, and the real power of molecular simulators is only unlocked in parallel. We have successfully parallelized the reaction-diffusion and voltage components of STEPS with MPI [2,3], yet parallelization of the new vesicle-related components brings many additional challenges. Specifically, volume-occupying structures such as vesicles may cross MPI partitions, and phenomena such as endocytosis and exocytosis also require special treatment. We describe our 1-to-n decomposition solution to this problem, in which one MPI process owns specific vesicle-related operations such as passive and active vesicle transport, and communicates with n processes that carry out the more computationally intensive reaction-diffusion operations, now incorporating some vesicle-related phenomena such as vesicle surface reactions and vesicle binding events. We present the performance of realistic models in our initial parallel solution to vesicle modeling in STEPS, and discuss both the benefits and the limitations of this approach.
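The following schematic mpi4py sketch (not STEPS code) illustrates the 1-to-n pattern: a single rank owns vesicle transport and broadcasts vesicle positions, while the remaining ranks handle reaction-diffusion-side events for their own mesh partition and report back.

```python
# Schematic sketch of a 1-to-n decomposition (not STEPS code): rank 0 owns
# vesicle positions and transport; all other ranks run the reaction-diffusion
# work for their partition. Run with e.g.: mpirun -n 4 python vesicle_1_to_n.py
from mpi4py import MPI
import random

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

N_VESICLES, N_STEPS = 8, 5
vesicles = [random.random() for _ in range(N_VESICLES)] if rank == 0 else None

for step in range(N_STEPS):
    # 1. The vesicle rank moves vesicles, then broadcasts their new positions.
    if rank == 0:
        vesicles = [(v + random.gauss(0.0, 0.01)) % 1.0 for v in vesicles]
    vesicles = comm.bcast(vesicles, root=0)

    # 2. Reaction-diffusion ranks handle vesicle surface events for the
    #    vesicles that fall inside their own slice of a 1D toy mesh.
    if rank != 0:
        lo, hi = (rank - 1) / (size - 1), rank / (size - 1)
        local_events = sum(lo <= v < hi for v in vesicles)
    else:
        local_events = 0

    # 3. Event counts are gathered back so rank 0 can update vesicle state.
    events = comm.gather(local_events, root=0)
    if rank == 0:
        print(f"step {step}: surface-reaction events per RD rank = {events[1:]}")
```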

[1] I Hepburn, W Chen, S Wils, E De Schutter (2012) STEPS: efficient simulation of stochastic reaction-diffusion models in realistic geometries. BMC Systems Biology 6:36
[2] W Chen and E De Schutter (2017) Parallel STEPS: Large Scale Stochastic Spatial Reaction-Diffusion Simulation with High Performance Computers. Front Neuroinform. 10;11:13
[3] W Chen, I Hepburn, A Martyushev, E De Schutter, “Modeling Neurons in 3D at the Nanoscale,” in M Giugliano (Ed.), Computational Neuroscience Approaches to Cells and Circuits, Springer Nature, in press
2022年7月2日 11:00~12:00 宜野湾市民体育館 ポスター会場2
3P-307
長期計測可能な脳血流・神経活動計測用イメージングデバイスの開発
Chronic brain imaging device for measuring blood flow and neural activity

*春田 牧人(1)、Mark Christian Guinto(1)、倉澤 和暉(1)、太田 安美(1)、河原 麻実子(1)、竹原 浩成(1)、田代 洋行(1,2)、笹川 清隆(1)、太田 淳(1)
1. 奈良先端科学技術大学院大学,先端科学技術研究科、2. 九州大学,医学研究院
*Makito Haruta(1), Mark Christian Guinto(1), Kazuki Kurasawa(1), Yasumi Ohta(1), Mamiko Kawahara(1), Hironari Takehara(1), Hiroyuki Tashiro(1,2), Kiyotaka Sasagawa(1), Jun Ohta(1)
1. Nara Institute of Science and Technology, Nara, Japan, 2. Kyushu University, Faculty of Medical Sciences, Fukuoka, Japan

Keyword: CMOS image sensor, Brain functional imaging, Cerebral blood flow, Behavioral experiment

We have developed a chronic brain imaging device for measuring blood flow and neural activity in freely behaving mice and assessed brain dynamics in a migraine model to better understand migraine pathology, which is closely related to blood flow and neuronal ensemble activity. In previous studies, we designed ultra-small implantable imaging devices equipped with custom CMOS image sensors [1]. In particular, our chronic brain blood flow imaging approach realized long-term cerebrovascular observation in mice [2]. We made use of a chronic window based on a fiber optic plate and acquired brain surface images by coupling the device to the window. In this study, we have upgraded the chronic imaging device to allow two types of brain functional imaging: blood flow imaging and fluorescence calcium imaging. The device has two colors of LEDs (green: 535 nm, blue: 465 nm) and a long-pass color filter (>500 nm) for the fluorescence imaging. We achieved measurement of blood flow and fluorescence with a single device by collecting scattered light from the green LEDs at the brain surface and fluorescence emission from the calcium indicator passing through the long-pass color filter, respectively. Additionally, the sensor component of the device can easily be detached from the chronic window. In total, the device weighs only 0.5 g. The device can be mounted on the head of a mouse without inducing undue stress during behavioral experiments, and with it we demonstrated brain surface imaging in mice displaying migraine-associated features. Using our system, we tracked blood flow dynamics and fluorescence intensity changes in the course of cortical spreading depression, an electrophysiological event that is correlated with migraine aura.
All animal procedures conformed to the animal care and experimentation guidelines of Nara Institute of Science and Technology.
References
[1] J. Ohta, Y. Ohta, H. Takehara, T. Noda, K. Sasagawa, T. Tokuda, M. Haruta, T. Kobayashi, Y. M. Akay, M. Akay, “Implantable Microimaging Device for Observing Brain Activities of Rodents,” Proceedings of the IEEE, 105(1), 158-166, 2017.
[2] M. Haruta, Y. Kurauchi, M. Ohsawa, C. Inami, R. Tanaka, K. Sugie, A. Kimura, Y. Ohta, T. Noda, K. Sasagawa, T. Tokuda, H. Katsuki, J. Ohta, “Chronic brain blood-flow imaging device for a behavioral experiment using mice,” Biomedical Optics Express, 10(4), 1557-1566, 2019.
2022年7月2日 11:00~12:00 宜野湾市民体育館 ポスター会場2
3P-308
OpTER:オープンソースを用いた経上皮/内皮抵抗の測定手法
OpTER: a Low-Cost, Reproducible and Customizable Method to Assess the Cellular Barrier Function

*吉川 慧(1)、原田  佳奈(1)、田中  茂(1)、秀  和泉(1)、酒井  規雄(1)
1. 広島大学大学院医系科学研究科神経薬理学
*Satoshi Kikkawa(1), Kana Harada(1), Shigeru Tanaka(1), Izumi Hide(1), Norio Sakai(1)
1. Department of Molecular and Pharmacological Neuroscience, Graduate School of Biomedical and Health Sciences, Hiroshima University, Hiroshima, Japan

Keyword: transepithelial/endothelial resistance , epithelial/endothelial barrier function, open labware, Arduino

The fabrication of open source-based experimental equipment has advantages such as cost reduction and high versatility. Transepithelial/endothelial resistance (TER) measurement is a non-invasive technique for assessing the integrity of tight junctions in model cells, including the cells that constitute the blood-brain barrier, which is essential for drug and chemical transport experiments. Commercial equipment has many limitations in terms of size, specifications, and usage. Additionally, equipment for long-term real-time measurement is prohibitively expensive, which raises the barrier to entry in this field. We propose an open source-based prototype device, OpTER: a new versatile, reproducible, and inexpensive TER measurement method. OpTER is composed of the following: a volt-ohm meter consisting of a 16-bit precision AD converter and a simple voltage divider circuit based on an Arduino and free and open-source software (FOSS); custom-made electrodes made of silver chloride, stainless steel (SUS304), and titanium, whose self-potentials, drift, and noise characteristics depend on the electrode material, as previously reported; and a new measurement program using a quasi-DC current that takes the characteristics of the electrodes into account. The TER of filter-grown Caco-2 cells was measured, and the results reproduced the values obtained with existing equipment, except for the difference expected from the heterogeneity of the current density. Additionally, continuous measurement for 24 hours in the cell culture incubator was possible using the titanium and SUS304 electrodes. Furthermore, we report the application of a temperature sensor (DS18B20) for real-time measurement of the medium temperature together with TER, and an example of high-throughput measurement of multiple samples using relays. The circuitry and programs of OpTER will be made available on the Internet, and its simple design, which can be assembled by non-specialists in electrical engineering, can be modified to suit the researcher's purpose. Although there is room for improvement, we expect that OpTER could be a new "option" for both amateurs and professionals attempting to measure TER.
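As an illustration of the underlying arithmetic only (placeholder resistor, voltage, and area values, not the OpTER design), a TER value can be derived from a series voltage-divider reading as follows:

```python
# Illustrative arithmetic only (placeholder values, not the OpTER design):
# derive a TER value from a voltage-divider measurement across the insert.
def ter_from_divider(v_source, v_across_insert, r_reference_ohm,
                     v_blank_insert, membrane_area_cm2):
    """Unknown resistance in a series divider: R = V_R * R_ref / (V_src - V_R).
    The blank (cell-free) insert is subtracted, and the result is scaled by
    the membrane area to give ohm*cm^2."""
    def resistance(v_r):
        return v_r * r_reference_ohm / (v_source - v_r)

    r_cells = resistance(v_across_insert) - resistance(v_blank_insert)
    return r_cells * membrane_area_cm2


# Example numbers (made up): 3.3 V excitation, 10 kohm reference resistor,
# 1.12 cm^2 Transwell-style insert.
print(f"TER ~ {ter_from_divider(3.3, 0.52, 10_000, 0.31, 1.12):.0f} ohm*cm^2")
```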
2022年7月2日 11:00~12:00 宜野湾市民体育館 ポスター会場2
3P-309
高分解能マカクMRIのための頭蓋骨装着型8chフェイズドアレイコイル
A Skull-Fit 8-channel phased-array Coil (SFiC) for high-resolution macaque MRI

*浦山 慎一(1)、岡田 知久(1)、山口 玲欧奈(2)、疋嶋 啓吾(3)、窪田 慶(4)、高橋 淳(4)、川畑 義彦(5)、Wim Vanduffel(6)、伊佐 正(1,2,7)、尾上 浩隆(1)
1. 京都大学・医学研究科附属・脳機能総合研究センター、2. 京都大学・ ASHBi ヒト生物学高等研究拠点、3. 産業技術総合研究所・健康医工学研究部門・医療機器研究グループ 、4. 京都大学・iPS細胞研究所、5. (株)高島製作所、7. 京都大学大学院・医学研究科・高次脳科学講座・神経生物学分野
*Shin-ichi Urayama(1), Tomohisa Okada(1), Reona Yamaguchi(2), Keigo Hikishima(3), Kei Kubota(4), Jun Takahashi(4), Yoshihiko Kawabata(5), Wim Vanduffel(6), Tadashi Isa(1,2,7), Hirotaka Onoe(1)
1. Human Brain Research Center, Grad Sch Med, Kyoto Univ, Kyoto, Japan, 2. ASHBi, Kyoto Univ, Kyoto, Japan, 3. Medical Devices Research Group, Health and Medical Research Institute, AIST, Ibaraki, Japan, 4. Center for iPS Cell Research and Application, Kyoto Univ, Kyoto, Japan, 5. Takashima seisakusho Co.,Ltd, Tokyo, Japan, 6. Laboratory of Neuro- and Psychophysiology, Department of Neurosciences, KU Leuven Medical School, Leuven, Belgium, 7. Department of Neuroscience, Grad Sch Med, Kyoto Univ, Kyoto, Japan

Keyword: macaque MRI, RF coil, skull-fit coil

In order to improve the sensitivity of functional brain imaging with MRI at 3T, we developed an 8-channel Skull-Fit phased-array Coil (SFiC) that is implantable on, and detachable from, the monkey skull. First, using image data of a monkey's skull, we used a 3D printer to create a thin base that fits the surface of the skull. Next, all loop elements and electrical components except the resonant circuits were mounted and fixed with biocompatible UV-curing resin. Finally, the entire coil except for the connector was coated with Parylene (Parylene Japan LLC, Tokyo), a biocompatible and highly insulating coating material. During imaging, the coil was fixed to the skull via a head-post assembly, and a cable assembly with resonant circuits was connected between the coil and the preamplifier box. Validation of the coil using a NiCl2 solution phantom was carried out by comparison with the 8-channel cylindrical coil routinely used for macaque structural MRI. The signal-to-noise ratio (SNR) within the brain region was significantly improved for the SFiC versus the cylindrical coil (the SFiC showed four times or higher SNR in the cortical region, and 0.8 to 1.2 times in the basal region, where SNR is lowest). In addition, B0/B1 maps demonstrated that homogeneity was comparable between the two coils, suggesting that the SFiC does not degrade images despite its close arrangement to the signal source. In vivo measurement in a Japanese macaque (Macaca fuscata) showed that the SFiC substantially increased the spatial resolution and sensitivity of anatomical and functional brain imaging. Resting-state functional MRI under mild anesthesia (0.7% isoflurane) showed that interregional functional connectivity was reliably depicted in test-retest observations of millimeter-scale regional circuits in the brain. This study suggests that the SFiC can be used in macaque MRI for inferring the neural basis of functional connectivity in the brain.
2022年7月2日 11:00~12:00 宜野湾市民体育館 ポスター会場2
3P-310
アンカリング効果と不確実な情報に対する学習
Anchoring effect and learning for uncertain information

*竹川 高志(1)、濱田 智明(1)、小沢 勲(1)、矢﨑 敬人(1)
1. 工学院大学
*Takashi Takekawa(1), Tomoaki Hamada(1), Isao Ozawa(1), Yoshihito Yasaki(1)
1. Kogakuin University of Technology and Engineering

Keyword: BAYESIAN BRAIN, COGNITIVE BIAS, LEARNING, INDIVIDUAL DIFFERENCE

In neuroscience, the Bayesian brain hypothesis views the brain as a device that performs Bayesian updates of a prior distribution given new input. In behavioral economics, on the other hand, the anchoring effect is widely known as the bias in subsequent decisions caused by numerical information presented in advance. If we try to explain the bias caused by the pre-presented information from the standpoint of the Bayesian brain, it corresponds to knowledge about the target (the prior distribution) being updated by treating the anchor as information about the target. In this study, we use experimental results to test the hypothesis that the anchoring effect consists of two steps: a judgment of the reliability of the anchor as a source of information, and an update of the posterior distribution based on that reliability.
We pre-process input anchors with a logarithm for magnitudes and a logit for probabilities. Knowledge is assumed to be represented by a normal distribution. The anchor, in turn, corresponds to a normal distribution with an indeterminate confidence level (variance), i.e., a Student's t-distribution. We can estimate the posterior distribution of the confidence level from the values of the presented anchors; the knowledge is then updated using this confidence level. Under these assumptions, we can quantitatively reproduce the distribution of responses as a function of the anchor value in the experiment. The relationship between confidence judgments and learning in the anchoring effect could extend to learning in general.
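A minimal numerical sketch of this two-step idea, with placeholder priors and grids rather than the fitted model, is:

```python
# Minimal sketch of a two-step anchoring model (grid sizes, priors and the
# example numbers are placeholders, not the study's fits).
import numpy as np
from scipy import stats

m0, s0 = np.log(50.0), 0.8      # prior "knowledge" about a log-size quantity
anchor = np.log(400.0)          # pre-processed (log-transformed) anchor value

# Step 1: judge the anchor's reliability. Its precision tau is unknown, so we
# put a Gamma prior on a grid of tau values (a normal with unknown precision
# marginalizes to a Student's t, as in the abstract).
tau = np.linspace(0.01, 10.0, 400)
prior_tau = stats.gamma.pdf(tau, a=2.0, scale=1.0)
lik = stats.norm.pdf(anchor, loc=m0, scale=np.sqrt(s0**2 + 1.0 / tau))
post_tau = prior_tau * lik
post_tau /= post_tau.sum()

# Step 2: update the knowledge with the anchor, averaging the conjugate
# normal-normal update over the inferred reliability.
post_mean_given_tau = (m0 / s0**2 + anchor * tau) / (1.0 / s0**2 + tau)
updated_mean = np.sum(post_tau * post_mean_given_tau)

print(f"knowledge shifts from {np.exp(m0):.0f} toward {np.exp(updated_mean):.0f} "
      f"(anchor was {np.exp(anchor):.0f})")
```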
We also attempted to estimate individual variability in the amount of knowledge, confidence in that knowledge, and attitude toward anchors. Finally, by conducting experiments on multiple subjects with the same collaborators, we extracted the differences in individual responses to anchors with a certain degree of accuracy.
2022年7月2日 11:00~12:00 宜野湾市民体育館 ポスター会場2
3P-311
YAB -Yet Another Brain: 脳シミュレーション用の新フレームワーク
YAB -Yet Another Brain: A New Framework for Brain Simulation

*大和田 稔(1)、カルロス グティエレス(1)、丹下 敦矢(1)、泉 一孝(1)、石若 裕子(1)
1. ソフトバンク株式会社
*Minoru Owada(1), Carlos Enrique Gutierrez(1), Atsuya Tange(1), Kazutaka Izumi(1), Yuko Ishiwaka(1)
1. SoftBank Corp.

Keyword: framework, brain simulation

Brain simulation is becoming an important tool in neuroscience and medicine, since it is useful for testing hypotheses and developing new theories. Here we introduce YAB [https://yab.atpo.info/], a new framework for brain simulation that organizes neural networks as graphs in a database.
YAB's basic unit is the node. A node stores attributes and methods (programs) that define its dynamics. Nodes can be used to model neural systems at different resolutions. For example, several nodes can incorporate methods for modeling the dendritic, somatic, and axonal dynamics of the Hodgkin-Huxley equations, or a single node can describe the membrane potential of a spiking neuron or the dynamics of a neuronal ensemble. While current simulators support specific levels of detail for modeling, YAB enables the implementation of methods at different complexities, depending on the requirements.
Another YAB component is the edge. Edges connect nodes, defining the network connectivity and supporting the hierarchical structure of biologically inspired brain models. Nodes and edges use a common key-value data structure.
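As a hypothetical toy (not YAB's actual API), a node/edge key-value structure of this kind might look as follows, with a leaky integrate-and-fire rule as one possible node-level method:

```python
# Hypothetical illustration of the node/edge idea, not YAB's actual API:
# nodes hold key-value attributes plus a method defining their dynamics;
# edges are key-value records connecting node keys.
from dataclasses import dataclass, field
from typing import Callable, Dict


@dataclass
class Node:
    key: str
    attrs: Dict[str, float] = field(default_factory=dict)
    step: Callable[["Node", float], None] = lambda node, dt: None


def lif_step(node: "Node", dt: float) -> None:
    """Leaky integrate-and-fire dynamics as one possible node-level method."""
    a = node.attrs
    a["v"] += dt * (-a["v"] + a["i_ext"]) / a["tau"]
    if a["v"] >= a["v_th"]:
        a["v"] = 0.0
        a["spikes"] = a.get("spikes", 0) + 1


nodes = {"n1": Node("n1", {"v": 0.0, "i_ext": 1.5, "tau": 10.0, "v_th": 1.0}, lif_step)}
edges = [{"src": "n1", "dst": "n2", "weight": 0.3}]  # connectivity record only

for _ in range(200):                                 # 200 steps of dt = 1.0
    for node in nodes.values():
        node.step(node, 1.0)

print(nodes["n1"].attrs)
```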
YAB models are stored automatically in NDS (Neuro Data Store), a dedicated graph database. This makes it possible to modify and examine models at run time: neurons and connections can be created and deleted during simulation, and users can search for and extract subsets of data for analysis. After a simulation, the graph can be reloaded from NDS into memory. Furthermore, in our framework, the model scale is no longer constrained by the computer's physical memory size.
Beyond that, simulations of combined models, such as brain and body, may require different tools, and integrating such tools can be complicated and technically difficult. In YAB, integration between models is natural, since nodes implement different methods and interact within a common graph.
Our framework supports parallel processing and aims to contribute to the advancement of brain simulations in computational neuroscience and related fields.