Oral Presentations
学習理論 / 神経回路モデル化と人工知能 2
Learning Theory / Neural Network Modeling and Artificial Intelligence 2
Chair: Hideaki Shimazaki (Center for Human Nature, Artificial Intelligence, and Neuroscience, Hokkaido University)
July 2, 2022, 17:10–17:25, Okinawa Convention Center, Conference Rooms B5–7, Venue 4
3O04e2-01
神経回路とベイズ推論の等価性、今後の展望
A perspective on the equivalence between neural networks and variational Bayes

*磯村 拓哉(1)
1. 理化学研究所脳神経科学研究センター
*Takuya Isomura(1)
1. RIKEN Center for Brain Science

Keyword: Free energy principle, Active inference, Bayesian inference, Computational theory

The free-energy principle, proposed by Friston, holds that biological organisms adapt to their environment by minimising an upper bound on sensory surprise, called variational free energy, and thereby actively perform variational Bayesian inference [1]. Indeed, the dynamics of standard neural networks whose neural activity and plasticity minimise an arbitrary cost function can be cast as minimising variational free energy [2]. This notion enables us to explain arbitrary neural network dynamics and predict the consequences of self-organisation. That is, standard neural networks implicitly perform adaptive behavioural control, including inference, learning, prediction, planning, and action, in a Bayes-optimal manner under some set of prior beliefs. This speaks to the free-energy principle as a universal characterisation of neural networks.

In this short talk, I will briefly introduce a perspective on formally characterising the sentient behaviour of biological organisms as optimisation under Bayes-optimality assumptions. Although seemingly abstract, this mathematical framework may help us understand aspects of the brain that remain unclear from simply observing its activity. I will also discuss its potential limitations.

References
1. Friston, K. J. The free-energy principle: a unified brain theory? Nat. Rev. Neurosci. 11, 127-138 (2010). https://doi.org/10.1038/nrn2787
2. Isomura, T., Shimazaki, H. & Friston, K. J. Canonical neural networks perform active inference. Commun. Biol. 5, 55 (2022). https://doi.org/10.1038/s42003-021-02994-2
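As a concrete illustration of the quantity being minimised, here is a minimal Python sketch (not code from [1] or [2]; the discrete generative model and its values are assumptions chosen for illustration). Variational free energy upper-bounds sensory surprise, and the bound becomes tight when the approximate posterior equals the exact posterior.

```python
import numpy as np

# Minimal sketch: variational free energy F = E_q[ln q(s) - ln p(o, s)] for a
# discrete generative model p(o, s) = A[o, s] * D[s]. The model and its values
# are illustrative assumptions, not the generative model used in the talk.

def variational_free_energy(q, o, A, D):
    """F for approximate posterior q(s) given observation o."""
    joint = A[o] * D                              # p(o, s) at the observed o
    return np.sum(q * (np.log(q + 1e-16) - np.log(joint + 1e-16)))

A = np.array([[0.9, 0.2],                         # p(o | s); columns sum to 1
              [0.1, 0.8]])
D = np.array([0.5, 0.5])                          # prior p(s)
o = 1                                             # observed outcome

# The exact posterior minimises F, and F then equals the surprise -ln p(o).
posterior = A[o] * D / (A[o] * D).sum()
print(variational_free_energy(posterior, o, A, D))
print(-np.log((A[o] * D).sum()))                  # same value: -ln p(o)
```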
July 2, 2022, 17:25–17:40, Okinawa Convention Center, Conference Rooms B5–7, Venue 4
3O04e2-02
A functional role for inhibitory diversity in associative memory
*Burns Thomas(1)、Haga Tatsuya(1)、Fukai Tomoki(1)
*Thomas F Burns(1), Tatsuya F Haga(1), Tomoki Fukai(1)
1. OIST

Keyword: inhibition, memory, hippocampus, model

Inhibitory neurons take on many forms and functions. How this diversity contributes to memory function is not completely known. Previous formal studies indicate that inhibition differentiated by local and global connectivity in associative memory networks functions to rescale the level of retrieval of excitatory assemblies. However, such studies lack biological detail: they draw no distinction between types of neurons (excitatory and inhibitory) and rely on unrealistic connection schemas and non-sparse assemblies. In this study, we present a rate-based cortical model where neurons are distinguished (as excitatory, local inhibitory, or global inhibitory), connected more realistically, and where memory items correspond to sparse excitatory assemblies. We use this model to study how local-global inhibition balance can alter memory retrieval in associative memory structures, including naturalistic and artificial structures. Experimental studies have reported that inhibitory neurons and their sub-types respond uniquely to specific stimuli and can form sophisticated, joint excitatory-inhibitory assemblies. Our model suggests such joint assemblies, as well as a distribution and rebalancing of overall inhibition between two inhibitory sub-populations – one connected to excitatory assemblies locally and the other connected globally – can quadruple the range of retrieval across related memories. We identify a possible functional role for local-global inhibitory balance: in the context of choice or preference of relationships, it permits and maintains a broader range of memory items when local inhibition is dominant and, conversely, consolidates and strengthens a smaller range of memory items when global inhibition is dominant. This model therefore highlights a biologically-plausible and behaviourally-useful function of inhibitory diversity in memory.
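A highly simplified rate-model sketch of the local versus global inhibition distinction described above (Python, with assumed parameter values and a much smaller network than the authors' model): it only illustrates how the local-global balance rescales retrieval of a cued assembly, not the full set of results.

```python
import numpy as np

# Toy sketch, not the authors' model: sparse excitatory assemblies with one
# local inhibitory pool per assembly and one global inhibitory pool. All
# parameter values are assumptions chosen only to illustrate the mechanism.

rng = np.random.default_rng(0)
N, P, K = 200, 5, 40                          # neurons, assemblies, assembly size
assemblies = [rng.choice(N, K, replace=False) for _ in range(P)]

W = np.zeros((N, N))                          # excitatory recurrent weights
for a in assemblies:
    W[np.ix_(a, a)] += 2.0 / K
np.fill_diagonal(W, 0.0)

def retrieve(cue, g_local, g_global, steps=300, dt=0.1):
    """Relax the excitatory rates r under local and global inhibition."""
    r = cue.astype(float).copy()
    for _ in range(steps):
        local = np.zeros(N)
        for a in assemblies:                  # each assembly inhibits itself
            local[a] += r[a].mean()
        glob = r.mean()                       # global pool senses the whole net
        drive = W @ r - g_local * local - g_global * glob
        r += dt * (-r + np.tanh(np.maximum(drive, 0.0)))
    return r

cue = np.zeros(N); cue[assemblies[0]] = 1.0   # partial cue of memory 0
for g_local, g_global in [(0.8, 0.1), (0.1, 0.8)]:
    r = retrieve(cue, g_local, g_global)
    print(f"g_local={g_local}, g_global={g_global}: "
          f"assembly-0 rate = {r[assemblies[0]].mean():.2f}")
```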
July 2, 2022, 17:40–17:55, Okinawa Convention Center, Conference Rooms B5–7, Venue 4
3O04e2-03
部分的に入力を受けるレザバー計算の動的平均場理論による解析
Dynamical mean-field analysis of reservoir computing receiving partial input signals

*高須 正太郎(1)、青柳 富誌生(1)
1. 京都大学大学院情報学研究科
*Shotaro Takasu(1), Toshio Aoyagi(1)
1. Grad. Sch. Info., Kyoto Univ., Kyoto, Japan

Keyword: reservoir computing, recurrent neural network, Lyapunov exponent, echo state property

Reservoir computing (RC) is a framework that utilizes the complex behavior of dynamical systems with large degrees of freedom as a computational resource. RC was originally proposed as a method for training recurrent neural networks (RNNs). Beyond RNNs, the idea of RC has also been applied to real dynamical systems, including water, soft materials, and cultured neuronal networks. In RC, only the weights of the readout are trained, whereas the weights of the reservoir are fixed. This simple training scheme reduces the computational cost and allows fast learning. Because the reservoir itself is not trained, the performance of RC generally depends on the dynamical properties of the reservoir. In order for RC to work well, the reservoir must have the "echo state property" (ESP). The ESP means that the reservoir dynamics are uniquely determined by the history of the input signal, independent of the initial conditions. In the context of dynamical systems theory, the ESP is equivalent to common-signal-induced synchronization, which occurs if and only if the maximum Lyapunov exponent (max LE) of the reservoir is negative. Here, we study the ESP of an RNN whose spontaneous dynamics are chaotic (a chaotic RNN). Using dynamical mean-field theory, we analytically calculate the max LE of the chaotic RNN and show that larger input signals decrease the max LE. We consider a partial-input model, in which each neuron of the chaotic RNN connects to an input unit with probability P. We then demonstrate that there exists a critical connectivity rate Pc such that sufficiently large input signals make the max LE negative only if P > Pc. In addition, we show that the critical connectivity rate Pc depends on how chaotic the spontaneous dynamics of the RNN are. These results suggest that if the input connectivity rate P is higher than Pc, the chaotic RNN can be used for RC by appropriately amplifying the input signals. Our findings are expected to lead to a new principle for designing reservoir computers.
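The following is a small numerical companion to this setting (a Python sketch with assumed parameter values, not the authors' dynamical mean-field calculation): it estimates the max LE of a driven chaotic rate RNN from the growth rate of a tangent vector, with the common input delivered only to a fraction P of the neurons. A negative estimate would indicate common-signal-induced synchronization and hence the ESP.

```python
import numpy as np

# Sketch with assumed parameters (not the authors' code): estimate the maximum
# Lyapunov exponent of a chaotic rate RNN, x' = -x + J tanh(x) + mask * u(t),
# where the common input u(t) reaches each neuron only with probability P.

rng = np.random.default_rng(1)
N, g, dt, steps = 500, 1.5, 0.1, 5000          # g > 1: chaotic when undriven
J = rng.normal(0.0, g / np.sqrt(N), (N, N))    # random recurrent weights

def max_lyapunov(P, input_amp):
    mask = (rng.random(N) < P).astype(float)   # neurons that receive the input
    x = rng.normal(0.0, 1.0, N)
    v = rng.normal(0.0, 1.0, N)
    v /= np.linalg.norm(v)
    log_growth = 0.0
    for t in range(steps):
        u = input_amp * np.sin(0.1 * t)        # common scalar input signal
        phi = np.tanh(x)
        x = x + dt * (-x + J @ phi + mask * u)
        # tangent dynamics, linearised around the trajectory at time t
        v = v + dt * (-v + J @ ((1.0 - phi**2) * v))
        norm = np.linalg.norm(v)
        log_growth += np.log(norm)
        v /= norm
    return log_growth / (steps * dt)

for P in (0.2, 1.0):
    print(f"P = {P}: estimated max LE = {max_lyapunov(P, input_amp=2.0):+.3f}")
```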
July 2, 2022, 17:55–18:10, Okinawa Convention Center, Conference Rooms B5–7, Venue 4
3O04e2-04
確率計算におけるカオス的な神経活動の役割
Computational roles of chaotic neural activity in probabilistic inference

*寺田 裕(1,2)、豊泉 太郎(1,3)
1. 理化学研究所脳神経科学研究センター、2. カリフォルニア大学サンディエゴ校生物科学部門、3. 東京大学大学院情報理工学系研究科
*Yu Terada(1,2), Taro Toyoizumi(1,3)
1. RIKEN Center for Brain Science, Japan, 2. Bio Div, UCSD, USA, 3. Dep Math Eng Info Phys, Univ of Tokyo, Tokyo, Japan

Keyword: Neural variability, Neural sampling, Probabilistic computation, Local learning rule

Variability is a generic property of neural dynamics in the brain. Neural activity exhibits characteristic trial-to-trial and temporal fluctuations spontaneously as well as during information processing. Although the origin of such variability remains elusive, theoretical work has suggested that chaotic dynamics of neural activity could underlie it. Balanced inputs from strong excitatory and inhibitory synapses can endow networks with the ability to generate fluctuating neural dynamics that are sensitive to perturbations. However, the benefits of such chaotic neural dynamics are still unclear, because variability could hamper accurate information processing by neurons and limit working memory.

We propose one possible computational benefit of chaotic neural dynamics for probabilistic information processing. We train neural networks to draw samples from a target probability distribution. During training, we employ a local learning rule to train the networks. Specifically, the so-called node-perturbation learning rule is implemented as a three-factor synaptic plasticity rule for recurrently connected networks. While this learning rule is slow compared to non-local learning rules used in machine learning, it can successfully train networks to perform simple probabilistic inference tasks. We demonstrate that chaotic neural dynamics yield a critical substrate for probabilistic sampling. Specifically, we consider a cue integration task as an example of paradigmatic cognitive tasks involving probabilistic computation. In this task, recurrently connected neurons receive multi-modal inputs from sensory neurons, integrate the information, and need to estimate the Bayes posterior distributions. We show that the networks learn to draw samples by accurately approximating the posterior distribution that integrates multi-modal sensory inputs.
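As an illustration of the three-factor rule named above, here is a minimal node-perturbation sketch in Python on a toy regression task (the task, architecture, and parameters are assumptions for illustration; the study itself trains recurrent networks on sampling tasks). A scalar reward signal, the injected per-unit noise, and the presynaptic activity jointly gate each weight update.

```python
import numpy as np

# Minimal node-perturbation (three-factor) sketch on a toy regression task.
# A paired unperturbed trial serves as the reward baseline here; a running
# average of past rewards is another common choice.

rng = np.random.default_rng(0)
n_in, n_out = 10, 3
W = rng.normal(0.0, 0.1, (n_out, n_in))         # trained weights
W_target = rng.normal(0.0, 1.0, (n_out, n_in))  # defines the toy target mapping

eta, sigma = 0.05, 0.1
for step in range(5000):
    x = rng.normal(0.0, 1.0, n_in)              # presynaptic activity (factor 3)
    target = np.tanh(W_target @ x)
    xi = sigma * rng.normal(0.0, 1.0, n_out)    # node perturbation (factor 2)
    y0 = np.tanh(W @ x)                         # unperturbed output
    y = np.tanh(W @ x + xi)                     # perturbed output
    R0 = -np.sum((y0 - target) ** 2)
    R = -np.sum((y - target) ** 2)              # scalar reward (factor 1)
    # reward change caused by the noise gates a Hebbian-like update
    W += eta * (R - R0) * np.outer(xi, x) / sigma**2

x_test = rng.normal(0.0, 1.0, (200, n_in))
err = np.mean((np.tanh(x_test @ W.T) - np.tanh(x_test @ W_target.T)) ** 2)
print("mean squared test error after training:", err)
```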
July 2, 2022, 18:10–18:25, Okinawa Convention Center, Conference Rooms B5–7, Venue 4
3O04e2-05
神経接続データのニューラルネットワーク埋め込み法の検討
Neural network embedding of functional microconnectome

*白上 新(1)、長谷 武志(3)、山口 祐嬉(1)、下野 昌宜(1,2)
1. 京都大学大学院医学研究科人間健康科学専攻、2. 京都大学白眉センター、3. 東京医科歯科大学統合教育機構
*Arata Shirakami(1), Takeshi Hase(3), Yuki Yamaguchi(1), Masanori Shimono(1,2)
1. Department of Human Health Sciences, Graduate School of Medicine, Kyoto University, 2. Hakubi Center, Kyoto University, 3. Institute of Education, Tokyo Medical and Dental University

Keyword: network embedding, neural networks, centrality, new metrics

Our brain works as a complex network system. Experiential knowledge appears to be encoded in complex network organization rather than in the properties of individual neurons alone.
Given the high complexity of this network architecture, extracting simple rules through automated and interpretable analysis of topological patterns allows more useful observation of interrelationships within the complex neural architecture. Furthermore, because of recent advances in measurement technology, the amount of data on brain connectivity is becoming vast, and more efficient compression methods are needed. In this study, by combining two types of analysis, we automatically compressed and naturally interpreted topological patterns of functional connectivity estimated from the electrical activity of many neurons recorded simultaneously from acute slices of mouse brain for more than 2.5 hours [Kajiwara et al. 2021].
In the first analysis, we trained an artificial neural network, a neural network embedding (NNE), that automatically compressed the functional connectivity into a small number of dimensions (25% of the original). In the second analysis, we compared the compressed features with roughly 15 representative network variables that have clear interpretations, including more than five centrality-type metrics, and we developed new network metrics that quantify the number or ratio of hubs located several nodes away from an initially selected hub.
As a result, the representative network variables could interpret only 55-60% of the extracted features, whereas the new metrics, together with the generally employed network metrics, enabled interpretation of 80-100% of the features. This result shows not only that the NNE method exceeds the ability of commonly used human-made variables, but also that acknowledging this limitation can drive us to extend interpretability by developing new methodologies.
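A compact sketch of the compression step (a plain NumPy autoencoder with an assumed architecture and a random placeholder matrix, not the authors' NNE implementation or data): each neuron's row of the functional connectivity matrix is encoded into 25% of its original dimensionality and decoded back, and the bottleneck activations play the role of the compressed features that are then compared with interpretable network metrics.

```python
import numpy as np

# Sketch with an assumed architecture (not the authors' NNE code): a small
# autoencoder whose bottleneck keeps 25% of the input dimensions. The
# "connectivity matrix" below is a random sparse placeholder, not real data.

rng = np.random.default_rng(0)
n_neurons = 200
C = rng.random((n_neurons, n_neurons)) * (rng.random((n_neurons, n_neurons)) < 0.1)

d_in, d_emb = n_neurons, n_neurons // 4        # compress rows to 25% of dims
W1 = rng.normal(0.0, 0.1, (d_emb, d_in)); b1 = np.zeros(d_emb)   # encoder
W2 = rng.normal(0.0, 0.1, (d_in, d_emb)); b2 = np.zeros(d_in)    # decoder

eta = 0.01
for epoch in range(1000):
    Z = np.tanh(C @ W1.T + b1)                 # embedding: (n_neurons, d_emb)
    X_hat = Z @ W2.T + b2                      # reconstruction of each row
    err = X_hat - C
    # manual backpropagation of the mean squared reconstruction error
    dW2 = err.T @ Z / n_neurons; db2 = err.mean(axis=0)
    dZ = (err @ W2) * (1.0 - Z**2)
    dW1 = dZ.T @ C / n_neurons; db1 = dZ.mean(axis=0)
    W1 -= eta * dW1; b1 -= eta * db1
    W2 -= eta * dW2; b2 -= eta * db2

print("reconstruction MSE:", (err**2).mean())
print("compressed features per neuron:", Z.shape[1])
```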