 
Oral Session
Neural Network Modeling and Artificial Intelligence
Chair: Kosuke Hamaguchi (Graduate School of Medicine, Kyoto University)
July 2, 2022, 16:10–16:25, Okinawa Convention Center, Conference Rooms B5–7, Venue 4
3O04e1-01
Inter-areal communications dependent on neural dynamics in a detailed local circuit model

*Tomoki Kurikawa(1)
1. Kansai Medical University

Keyword: neural dynamics, information routing

Dynamic information communication between cortical areas plays a critical role in many cognitive functions. Such dynamic communication is considered to be evoked by coherence between neural activities in the corresponding areas. In the spatial working memory task, for instance, a specific increase in gamma-range coherence between the hippocampus (HPC) and the medial prefrontal cortex (mPFC) is observed. Further, dynamic communication is also considered to depend on neural dynamics: burst activities within chaotic neural dynamics are crucial for communication between different cortical areas. Dynamics in local circuits of the neural system depend on different types of neurons. Parvalbumin (PV)-expressing neurons are involved in faster oscillations, whereas somatostatin (SOM)-expressing neurons are involved in slower oscillations. Vasoactive intestinal peptide (VIP)-expressing neurons are related to information gating, regulating excitatory neurons through a dis-inhibitory circuit. Thus, how these different inhibitory neurons contribute to communication between areas is an important question for understanding information processing in the neural system. In this presentation, we propose a detailed neural circuit model including three types of inhibitory neurons (PV, SOM, and VIP neurons) in addition to excitatory neurons. Using this detailed model, we found that the balance among these inhibitory neurons is important for efficient inter-areal communication.
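As a rough illustration of the circuit motifs named above, the following is a minimal rate-model sketch, not the detailed model of the abstract: VIP activity gates excitatory firing by suppressing SOM-mediated inhibition. All weights, inputs, time constants, and the rectified-linear rate function are illustrative assumptions.

```python
import numpy as np

# W[i, j]: weight from population j onto population i, order E, PV, SOM, VIP.
# Signs follow the motifs in the abstract (PV/SOM inhibit E; VIP inhibits SOM);
# magnitudes are assumptions. No E->E self-excitation, which keeps the toy stable.
W = np.array([
    #   E     PV    SOM   VIP
    [ 0.0,  -1.2,  -1.0,  0.0],   # onto E
    [ 1.0,  -1.0,  -0.5,  0.0],   # onto PV
    [ 1.0,   0.0,   0.0, -1.0],   # onto SOM (inhibited by VIP)
    [ 1.0,   0.0,  -0.5,  0.0],   # onto VIP
])

def simulate(ext, T=2000, dt=0.1, tau=10.0):
    """Euler-integrate tau * dr/dt = -r + relu(W @ r + ext)."""
    r = np.zeros(4)
    for _ in range(T):
        r += dt / tau * (-r + np.maximum(W @ r + ext, 0.0))
    return r

# Driving VIP suppresses SOM and releases E from inhibition (dis-inhibitory gating).
base  = simulate(ext=np.array([1.0, 0.5, 0.5, 0.0]))
gated = simulate(ext=np.array([1.0, 0.5, 0.5, 1.0]))
print("E rate without VIP drive:", round(base[0], 3))   # ~0.15
print("E rate with VIP drive:   ", round(gated[0], 3))  # ~0.44
```

In this toy, external drive to VIP silences SOM, and the excitatory rate roughly triples; shifting the balance among the three inhibitory weights changes how strongly the gate opens.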
July 2, 2022, 16:25–16:40, Okinawa Convention Center, Conference Rooms B5–7, Venue 4
3O04e1-02
Recurrent Mutual Inhibition: A Key Ingredient for Rapid Flexibility in Neural Circuits
*Alexander James White(1), Belle Liu(2), Chung-Chuan Lo(1)
1. Institute of Neuroscience, National Tsing-Hua University, Hsinchu, Taiwan, 2. Department of Physics, National Tsing-Hua University, Hsinchu, Taiwan

Keyword: Flexibility, Mutual Inhibition, Cusp Bifurcation

Organisms must be able to respond rapidly and flexibly to an ever-changing environment. To accomplish this, neural circuits must be able to switch rapidly from performing one task to performing another. In particular, a neural circuit cannot rely solely on changing synaptic weights to respond to an unexpected stimulus. Recent experimental advances in multicellular recording have shown that such rapid, flexible switching between activity patterns is indeed present in neural microcircuits. However, the underlying neural mechanism is not clear. Strikingly, we show in a neural circuit model that mutually inhibitory connections are crucial for rapid and flexible switching between distinct functions without synaptic plasticity. As a concrete example, we build a simulation model of a class of functional 4-neuron microcircuits we call Coupled Recurrent Inhibitory and Recurrent Excitatory Loops (CRIRELs). These CRIRELs have the advantage of being multifunctional, performing a plethora of functions, including decisions, switches, toggles, and central pattern generators, depending solely on the input type. We then use bifurcation theory to show how mutual inhibition gives rise to this broad repertoire of possible functions. Specifically, we show that this dynamic flexibility arises from an underlying multicusp bifurcation, and that all of these functions coexist for a single set of synaptic weights. As a final demonstration, we seamlessly transition between 24 unique functions simply by modifying the input current to the network, rather than changing the synaptic weights. The functions are not only robust to noise but also fast, on the order of 20 ms. Finally, we demonstrate how this trend also holds for larger networks, and how mutual inhibition greatly expands the amount of information a recurrent network can hold.
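The core mechanism, function selected by input with fixed weights, can be caricatured with an even smaller toy than a CRIREL: a mutually inhibitory pair with a steep gain function is bistable, and a brief input pulse flips which unit wins without any weight change. All parameter values below are assumptions for illustration.

```python
import numpy as np

def f(x):
    """Steep sigmoid rate function (illustrative assumption)."""
    return 1.0 / (1.0 + np.exp(-8.0 * (x - 0.5)))

def run(pulses, T=3000, dt=0.1, tau=10.0, w_self=1.2, w_inh=1.5):
    """Two units with self-excitation and mutual inhibition; `pulses` is a
    list of (t_start, t_end, unit, amplitude) external input pulses."""
    r = np.array([0.6, 0.4])               # slight initial asymmetry
    winners = []
    for t in range(T):
        ext = np.full(2, 0.5)              # constant background drive
        for (t0, t1, unit, amp) in pulses:
            if t0 <= t * dt < t1:
                ext[unit] += amp
        drive = w_self * r - w_inh * r[::-1] + ext
        r += dt / tau * (-r + f(drive))
        winners.append(int(r[1] > r[0]))
    return winners

# A brief pulse to unit 1 flips the winner; the weights never change.
w = run(pulses=[(100.0, 150.0, 1, 2.0)])
print("winner before pulse:", w[900])      # unit 0
print("winner after pulse: ", w[-1])       # unit 1
```

The bistability here is the simplest relative of the cusp structure the abstract analyzes; the multicusp bifurcation of the full CRIREL supports many coexisting states rather than two.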
July 2, 2022, 16:40–16:55, Okinawa Convention Center, Conference Rooms B5–7, Venue 4
3O04e1-03
Energy cost of synaptic computation for combinatorial optimization in spiking neural networks
*Timothee Leleu(1)
1. The University of Tokyo, IRCN

Keyword: Ising model, synaptic computation, winner-take-all, neuromorphic hardware

The Ising model has proven useful for describing theories of memory retrieval and decision making, analyzing physiological data, and constructing energy-based models in machine learning. It suggests that neural states are organized according to an "energy" landscape with saddle points and local and global minima. In recent work [1], we demonstrated that the lower-energy states can in fact be reached much faster than expected from classical Hopfield neural networks and Boltzmann machines by destabilizing attractors in a recurrent neural network through slow modulation of synaptic connection strengths [2,3] that corrects the inhomogeneity in the amplitudes of the neurons' analog states. In such a model, the state space includes the instantaneous values of the synaptic weights, which are modulated on an intermediate time scale, and the resulting dynamics do not exhibit detailed balance. In this talk, I will present how such mechanisms can be implemented using spiking neural networks and a biologically realistic neural architecture based on winner-take-all circuits. The proposed model allows us to estimate the tradeoff between energy consumption and the complexity of combinatorial optimization problems solvable in theory by biological neural networks. Moreover, I will summarize the gains in performance that can be achieved when implementing these principles on digital/analog electronic [1] and photonic [4] neuromorphic hardware.

This research is part of a recently established social collaboration program between the University of Tokyo IRCN and NTT Research Inc.

[1] T. Leleu et al., Commun. Phys. 4(1):1-10 (2021)
[2] T. Leleu et al., Phys. Rev. Lett. 122(4):040607 (2019)
[3] Y. Yamamoto, T. Leleu, S. Ganguli, H. Mabuchi, Appl. Phys. Lett. 117(16):160501 (2020)
[4] T. Inagaki, K. Inaba, T. Leleu, et al., Nat. Commun. 12(1):1-8 (2021)
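A hedged sketch of the analog dynamics in the spirit of [1,2], destabilizing local minima by correcting amplitude heterogeneity through slowly modulated effective couplings, might look as follows. The random Ising instance and all parameter values are illustrative assumptions, and the spiking winner-take-all implementation discussed in the talk is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 30
J = rng.choice([-1.0, 1.0], size=(N, N))   # random +/-1 couplings
J = np.triu(J, 1)
J = J + J.T                                # symmetric, zero diagonal

def ising_energy(s):
    """Ising energy E(s) = -1/2 s^T J s for spins s in {-1, +1}^N."""
    return -0.5 * s @ J @ s

x = 0.1 * rng.standard_normal(N)           # analog spin amplitudes
e = np.ones(N)                             # per-spin error variables
dt, p, a, beta = 0.05, 0.9, 1.0, 0.3       # illustrative parameters
best = np.inf
for _ in range(4000):
    # Cubic saturation plus error-modulated coupling; the slow e-dynamics
    # pushes every |x_i|^2 toward the common target a, removing the
    # amplitude inhomogeneity that traps gradient-like dynamics.
    x += dt * ((p - 1.0 - x**2) * x + e * (J @ x) / np.sqrt(N))
    e += dt * (-beta * (x**2 - a) * e)
    e = np.clip(e, 0.0, 10.0)              # numerical safety in this toy
    best = min(best, ising_energy(np.sign(x)))
print("best Ising energy found:", best)
```

Because the error variables keep injecting gain wherever amplitudes sag, the system does not settle into detailed balance, which is the property the abstract highlights.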
July 2, 2022, 16:55–17:10, Okinawa Convention Center, Conference Rooms B5–7, Venue 4
3O04e1-04
Information maximization explains state-dependent synaptic plasticity and memory reorganization during NREM sleep

*Kensuke Yoshida(1,2), Taro Toyoizumi(1,2)
1. RIKEN Center for Brain Science, 2. Graduate School of Information Science and Technology, The University of Tokyo

Keyword: Sleep, Slow wave, Spiking neuron model, Information maximization

Sleep is considered important for memory reorganization. Slow waves during non-rapid eye movement (NREM) sleep reflect low-frequency transitions of cortical neurons between up states, with high firing rates, and down states, with low firing rates. Previous findings suggest that slow waves and the associated neuronal reactivation have a causal effect on memory consolidation. A recent study further suggests that global slow waves contribute to memory consolidation whereas local slow waves contribute to forgetting. One possible mechanism underlying this difference is that neuronal reactivation within slow waves induces different synaptic plasticity depending on the sleep state. It has been reported that spike-timing-dependent plasticity (STDP) is biased toward synaptic depression during up states compared with down states. However, it remains unclear how these different synaptic plasticity rules contribute to neural information processing and memory reorganization during NREM sleep. Here, we show that the optimal synaptic plasticity rule for maximizing information transmission (the infomax rule) in a spiking neuron model provides a unified account of the experimental findings. The infomax rule indicates that the optimal synaptic plasticity shifts toward depression as the expected firing rate increases, because the information transmitted per spike decreases with the rate of background spikes. This property accounts for the experimentally observed difference in STDP between up and down states. We next construct a cortical neuronal network model that generates global and local slow waves. The model suggests that the mean firing rates of excitatory neurons are lower in global up states than in local up states because lateral inhibition is recruited more strongly during global up states. Furthermore, the infomax rule in the slow-wave model suggests that global and local slow waves tend to potentiate and depress synapses, respectively, because of their different baseline firing rates. This result suggests that global and local slow waves might control the balance between memory consolidation and forgetting via distinct synaptic plasticity, consistent with previous experimental findings. In summary, information maximization provides a unified explanation for neural information processing, synaptic plasticity, and memory reorganization during NREM sleep.
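The key infomax intuition, that a spike fired against a high-rate background carries less information, so the optimal rule tilts toward depression, can be caricatured numerically. The log-ratio information measure, the rates, and the fixed depression cost below are assumptions for illustration, not the authors' derivation.

```python
import numpy as np

def info_per_spike(rate_signal, rate_background):
    """Bits per spike under a crude log-ratio approximation (assumption)."""
    return np.log2(rate_signal / rate_background)

def net_plasticity(rate_background, rate_signal=20.0, ltd_cost=1.0):
    """Positive -> net potentiation; negative -> net depression."""
    return info_per_spike(rate_signal, rate_background) - ltd_cost

for state, rate in [("down-state-like (2 Hz)", 2.0),
                    ("up-state-like (15 Hz)", 15.0)]:
    print(f"{state}: net plasticity = {net_plasticity(rate):+.2f}")
```

With these toy numbers the low-background (down-state-like) condition yields net potentiation and the high-background (up-state-like) condition yields net depression, mirroring the direction of the state-dependent STDP and the global-versus-local slow-wave asymmetry described in the abstract.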