Oral Presentations
Neural Network Modeling and Artificial Intelligence / Others
Chair: Tomoki Kurikawa (Kansai Medical University)
July 1, 2022, 14:00-14:15, Laguna Garden Hotel, Hagoromo: Naka, Room 9
2O09a1-01
Spiking neural networks deployment on a graph database. A new approach for brain modeling and simulation.
*Carlos Enrique Gutierrez(1), Minoru Owada(1), Kenji Doya(2), Yuko Ishiwaka(1)
1. Advanced Technology Promotion Office, CPIT, SoftBank Corp., 2. Okinawa Institute of Science and Technology Graduate University

Keyword: brain modeling, spiking neural network models, graph database, neural simulation

In conventional neural network simulations, programs and data are treated as separate components. In this work, we introduce an alternative approach: running spiking neural network simulations on a database, using a new computational framework for brain modeling called YAB (Yet Another Brain) [https://yab.atpo.info/] to realize biologically plausible network dynamics with minimal memory consumption. The framework organizes neural networks as graphs stored in a database. Neurons are represented by nodes that integrate data and programs, such as the parameters and methods defining their dynamics. Network connectivity is defined by edges, which organize the hierarchical structure of biologically inspired brain models. We created template nodes for spiking neuron models and alpha-function-based synapses. For efficient signal propagation over the graph, neuronal action potentials are treated as events triggered by the source node and transmitted to its target nodes after an axonal delay, so that nodes need not read presynaptic signals at every step of the simulation. The state of the network is available on demand by retrieving data from the database. It is possible to stop the execution, restart it from the stored graph, and execute single steps to verify the correctness of the computations. For model building, we used SNNbuilder [Gutierrez et al. 2022, manuscript submitted], a data-driven brain modeling tool that registers anatomical and physiological data and generates simulation code for both NEST 3 and YAB. Simulations showed similar spiking activity in YAB and NEST 3, providing an initial validation of our results. Furthermore, we observed low memory consumption with YAB across different networks and sizes, indicating that simulation and storage happen simultaneously with minimal memory usage. Our simulation approach generates biologically plausible results and aims to serve as a new engine for brain simulations.
The framework allows building new neuron and synapse models by developing methods of diverse complexity at the graph nodes. Moreover, nodes running non-neural models, such as motor and sensory organs and physical environments, can be integrated for embodied simulations. We are systematically gathering parameters, building models of the brain, and testing simulations on CPU and GPU for the effective use of this framework in brain modeling.
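The event-driven propagation scheme described in the abstract (spikes as delayed events rather than per-step presynaptic reads) can be sketched in miniature as follows. This is a toy Python sketch with an in-memory dict standing in for the graph database; all names and the leaky-integrator update rule are illustrative assumptions, not YAB's actual API.

```python
import heapq

# Each node stores its own data (state) and "program" (update-rule parameters),
# mirroring the node-integrates-data-and-programs idea; a dict stands in for
# the graph database. All names here are illustrative, not YAB's actual API.
nodes = {
    "n1": {"v": 0.0, "tau": 10.0, "v_th": 1.0, "i_ext": 0.15},  # driven neuron
    "n2": {"v": 0.0, "tau": 10.0, "v_th": 1.0, "i_ext": 0.0},   # target neuron
}
# Edges define connectivity: (source, target, synaptic weight, axonal delay in steps).
edges = [("n1", "n2", 1.2, 5)]

events = []  # priority queue of (delivery_step, target, weight) spike events
spikes = []  # recorded (step, neuron) spike times

def step(t):
    """Advance all nodes one step; spikes propagate as delayed events."""
    # Deliver spike events scheduled for this step (event-driven input),
    # so nodes never poll presynaptic signals.
    while events and events[0][0] <= t:
        _, target, w = heapq.heappop(events)
        nodes[target]["v"] += w
    for name, n in nodes.items():
        # Leaky integration of external drive.
        n["v"] += (-n["v"] / n["tau"]) + n["i_ext"]
        if n["v"] >= n["v_th"]:  # threshold crossing -> emit spike event
            n["v"] = 0.0         # reset membrane potential
            for src, tgt, w, d in edges:
                if src == name:
                    heapq.heappush(events, (t + d, tgt, w))
            spikes.append((t, name))

for t in range(100):
    step(t)
```

Because the whole network state lives in the `nodes` dict (in YAB, the stored graph), the loop can be stopped at any step and resumed from the stored state, which is the property the abstract exploits for on-demand inspection and single-step verification.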
July 1, 2022, 14:15-14:30, Laguna Garden Hotel, Hagoromo: Naka, Room 9
2O09a1-02
Memory-Saving Time-Series Anomaly Detection Using Mahalanobis Distance of Reservoir States

*Hiroto Tamura(1), Gouhei Tanaka(1,2), Kantaro Fujiwara(1)
1. International Research Center for Neurointelligence, The University of Tokyo, Tokyo, Japan, 2. Graduate School of Engineering, The University of Tokyo, Tokyo, Japan

Keyword: Reservoir Computing, Anomaly Detection, Unsupervised Learning, Online Learning

Time-series anomaly detection technology has a wide range of applications in modern society. In particular, self-contained devices for time-series anomaly detection will be useful for health monitoring in medicine and equipment monitoring in industry. In recent years, time-series anomaly detection using reservoir computing has attracted much attention for its compatibility with edge computing devices, since the reservoir part can be implemented by physical systems and the readout part can be trained quickly and at low cost. To realize efficient learning on low-power, small-sized reservoir computing devices, it is essential to develop a memory-saving, high-performance learning method. This study therefore proposes updating the mean vector and precision matrix of the reservoir states online and using the Mahalanobis distance based on them as a measure of anomaly. Although anomaly detection using the Mahalanobis distance is a classic technique, its combination with reservoir computing provides much higher performance than a standard reservoir computing method that uses prediction errors to measure anomaly. To save memory, it is effective to use only a small number of reservoir neurons for training. In such small-sampling cases, the prediction-error-based method loses performance significantly, whereas our proposed method maintains reasonably high performance. Furthermore, we show that the choice of neurons used for learning has a significant impact on performance.
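One way to realize the online update of the mean vector and precision matrix described above can be sketched in numpy. The abstract does not specify the exact update rule, so this sketch assumes an exponential moving average for the mean and a Sherman-Morrison rank-1 update for the precision matrix; all names (`make_detector`, `decay`, `eps`) are hypothetical.

```python
import numpy as np

def make_detector(dim, decay=0.99, eps=1e-3):
    """Online Mahalanobis-distance anomaly detector over reservoir states.

    Keeps only a mean vector and a precision (inverse-covariance) matrix,
    so memory is O(dim^2) regardless of the time-series length.
    """
    mu = np.zeros(dim)
    prec = np.eye(dim) / eps  # precision of an initial eps*I covariance

    def score_and_update(x):
        nonlocal mu, prec
        d = x - mu
        dist = float(np.sqrt(d @ prec @ d))  # Mahalanobis distance = anomaly score
        # Exponential moving average of the mean.
        mu = decay * mu + (1 - decay) * x
        # Sherman-Morrison rank-1 update of the precision, matching the
        # covariance recursion cov <- decay * cov + (1 - decay) * d d^T.
        prec = prec / decay
        pd = prec @ d
        prec -= np.outer(pd, pd) * (1 - decay) / (1 + (1 - decay) * (d @ pd))
        return dist

    return score_and_update

# Usage: feed (sampled) reservoir states one by one; large scores flag anomalies.
rng = np.random.default_rng(0)
detect = make_detector(4)
normal_scores = [detect(rng.normal(size=4)) for _ in range(500)]
anomaly_score = detect(np.full(4, 10.0))  # a far-off state scores much higher
```

The rank-1 inverse update avoids ever storing or inverting the covariance explicitly, which is what makes the per-step cost and memory footprint compatible with a small edge device.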
July 1, 2022, 14:30-14:45, Laguna Garden Hotel, Hagoromo: Naka, Room 9
2O09a1-03
Analysis of ResNet texture representation using Portilla-Simoncelli statistics

*Shoko Nagasaka(1), Hayaru Shouno(1)
1. Graduate School of Informatics and Engineering, The University of Electro-Communications

Keyword: Deep Convolutional Neural Network, ResNet, texture, Portilla-Simoncelli statistics

Most of what we see around us can be considered to be composed of textures, which are important visual cues for understanding the material and condition of objects. In the field of image processing, the deep convolutional neural network (DCNN), a hierarchical neural network model inspired by mammalian visual processing, has recently achieved high performance in various tasks such as object recognition. In addition, DCNNs trained on large-scale natural image recognition tasks can generate and synthesize natural textures using features in their middle layers. Although this suggests that DCNNs obtain perceptually useful feature representations, it remains unclear what specific features they capture. In previous research, the texture representation of VGG, one DCNN model, was analyzed using Portilla-Simoncelli statistics (PSS), a texture feature based on human visual perception. VGG is a model whose basic structure repeats convolutional and pooling layers. The results showed that a VGG trained on large-scale natural images acquires abstract feature representations step by step, starting from basic features such as texture. The results also show a trend similar to analyses of the mammalian visual cortex, suggesting that VGG acquires a biologically natural feature representation. In recent years, however, models such as ResNet, which employ strided convolutional layers, have become common among DCNNs. ResNet uses strided convolutional layers instead of pooling layers and has a structure called skip connections, which makes it easier to train even as the layers get deeper. ResNet-based models are widely used in engineering applications, and it is important to understand their properties. In our work, we analyzed ResNet using PSS to understand its characteristics from the viewpoint of texture representation. As a result, we found that ResNet does not significantly acquire texture features as its layers deepen.
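The architectural difference at issue, strided convolution in ResNet versus convolution followed by pooling in VGG, can be illustrated with a minimal 1-D numpy sketch. This is illustrative only, not the authors' analysis code; both paths halve the resolution, but they retain different values.

```python
import numpy as np

def conv1d(x, k, stride=1):
    """Valid 1-D convolution (correlation) with the given stride."""
    n = (len(x) - len(k)) // stride + 1
    return np.array([x[i * stride : i * stride + len(k)] @ k for i in range(n)])

def maxpool1d(x, size=2):
    """Non-overlapping 1-D max pooling."""
    n = len(x) // size
    return np.array([x[i * size : (i + 1) * size].max() for i in range(n)])

x = np.arange(10, dtype=float)
k = np.array([0.5, 0.5])  # simple averaging kernel

# VGG-style downsampling: stride-1 convolution followed by max pooling.
vgg_style = maxpool1d(conv1d(x, k, stride=1), size=2)
# ResNet-style downsampling: a single strided convolution.
resnet_style = conv1d(x, k, stride=2)
```

Pooling keeps the locally dominant response, while the strided convolution simply skips positions; which local statistics survive into deeper layers therefore differs between the two designs, which is one motivation for comparing their texture representations with PSS.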
July 1, 2022, 14:45-15:00, Laguna Garden Hotel, Hagoromo: Naka, Room 9
2O09a1-04
Tool embodiment via Deep predictive active attention: Emergence of an attention module for “end-effector” regardless of a robotic body or a tool.

*Hiroki Mori(1), Hyogo Hiruma(1), Hiroshi Ito(2), Tetsuya Ogata(1,3)
1. Waseda University, 2. Hitachi Ltd., 3. National Institute of Advanced Industrial Science and Technology (AIST)

Keyword: Deep Neural Network, Attention, Tool use, Embodiment

Iriki et al. (1996) showed that the body representation in a macaque's postcentral neurons extends to a tool after learning tool use. Several models have been proposed to explain this extension of the representation. In this presentation, we show, from the perspective of a constructivist approach, that such extension to a tool emerges naturally in end-to-end predictive attentional learning of tool-use behaviors. We construct a deep predictive end-to-end learning model with a visual attention mechanism that generates robotic tool-use behavior from raw vision and joint-angle sequences. We call it the “Deep predictive active attention model.” The model has multiple “attention modules” that extract image features based on bottom-up visual features and top-down modification from a recurrent neural network; the network receives the image features at the attended locations along with the robot's joint angles and generates a one-step-future image and one-step-future joint angles. The robot is operated by the future joint angles generated by the model. The attention location is expected to change depending on what the model is trying to do. The model is trained to pull an object with a drag rake (tool) when the object lies beyond the robot's reach, or with the hand when it lies within reach. The model's input consists of the current image and joint angles of the robot, and its output consists of the image and joint angles one step in the future. The model successfully pulls the object with the tool or the hand depending on the object's location. Analysis of the model during these behaviors shows that an attention module moves its attention between the tool and the hand depending on whether the robot grasps the tool.
The model “sees” the tool when the robot uses it because the object is out of the hand's reach, and the hand when the robot does not grasp the tool because the object is within reach. This means an attention module emerges that recognizes the “end-effector” for the task, regardless of whether the situation calls for tool use or manual handling. The research indicates that the predictive attentional model can assimilate the neuronal activity without any assumption of “body” or “tool.” This work was financially supported by JSPS KAKENHI Grant Number 21H05138 and Hitachi, Ltd. We especially thank Kenjiro Yamamoto and Hideyuki Ichiwara of Hitachi, Ltd. for their fruitful comments.
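The one-step predictive loop described above (bottom-up plus top-down attention, a recurrent update, and generation of next joint angles that drive the robot) can be sketched schematically. This is a toy numpy sketch with randomly initialized weights standing in for trained parameters; all dimensions and names are hypothetical, not the actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical; the actual model's sizes are not given).
N_PATCH, F_DIM, H_DIM, N_JOINT = 16, 8, 32, 7

# Randomly initialized weights stand in for trained parameters.
W_att_bu = rng.normal(scale=0.1, size=(F_DIM,))          # bottom-up attention score
W_att_td = rng.normal(scale=0.1, size=(H_DIM, N_PATCH))  # top-down attention bias
W_h = rng.normal(scale=0.1, size=(H_DIM, H_DIM + F_DIM + N_JOINT))
W_out = rng.normal(scale=0.1, size=(N_JOINT, H_DIM))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def predict_step(patch_feats, joints, h):
    """One prediction step: attend, read features, update RNN, output next joints."""
    # Attention combines bottom-up saliency with top-down modulation from the RNN state.
    att = softmax(patch_feats @ W_att_bu + h @ W_att_td)
    feat = att @ patch_feats                              # feature at attended locations
    h = np.tanh(W_h @ np.concatenate([h, feat, joints]))  # recurrent update
    next_joints = W_out @ h                               # one-step-future joint angles
    return next_joints, h, att

# Closed loop: the robot executes the predicted joint angles at the next step.
h = np.zeros(H_DIM)
joints = np.zeros(N_JOINT)
for t in range(5):
    patch_feats = rng.normal(size=(N_PATCH, F_DIM))  # stand-in visual features
    joints, h, att = predict_step(patch_feats, joints, h)
```

The point of the sketch is the coupling: the attention weights `att` depend on the recurrent state, so where the model "looks" (tool or hand) can shift with the behavioral context, which is the emergent property analyzed in the presentation.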