Oral
Sensorimotor Integration
July 27 (Sat) 8:45-9:00, Room 10 (Bandaijima Building, 6F Conference Room)
3O-10m1-1
Three-dimensional understanding of functional compartments in the developing cerebellum: spatiotemporal analysis using zebrafish
Kanae Hiyoshi(日吉 加菜映)1,Narumi Fukuda(福田 成美)1,Kanoko Okumura(奥村 華乃子)1,Kyo Yamasu(弥益 恭)1,Sachiko Tsuda(津田 佐知子)1,2
1Life Science, Graduate School of Science and Engineering, Saitama University
2Research and Development Bureau, Saitama University

During cerebellar development, a multitude of neurons are generated and form functional circuits. For a deeper understanding of cerebellar function, it is important to uncover the organizing principles as well as the development of this functional circuitry. To address this issue, focusing in particular on cerebellar compartments, which are suggested to work as functional modules of the cerebellum, we have applied optical approaches and behavioral analysis to zebrafish, an ideal model system for studying neurogenesis with optical techniques.
First, to examine functional circuits in the developing cerebellum, we performed the optokinetic response (OKR) test, a behavior in which the cerebellum is known to play a pivotal role, while observing cerebellar Purkinje cell activity. For this, we performed whole-cerebellum, high-speed calcium imaging with wide-field and confocal microscopy, using transgenic fish that express GCaMP6 specifically in cerebellar Purkinje cells. At 6 days post fertilization (dpf), when a stable OKR was observed, specific populations of Purkinje cells were activated during the OKR, and their positions differed depending on the direction of the visual stimuli. Moreover, Purkinje cell populations tended to be activated in a columnar or patchy manner throughout the cerebellum. Next, to examine the activity pattern and distribution of these Purkinje cells at single-cell resolution, we performed three-dimensional (3D) reconstruction and spatiotemporal analysis of the recorded Purkinje cells, obtaining 3D maps of all recorded cells. Four groups of Purkinje cells showed distinct patterns of activity, and they were distributed differently along the dorso-ventral and left-right axes, forming clusters in 3D, which was confirmed quantitatively. These findings, obtained by our comprehensive spatiotemporal analysis of Purkinje cell activity, illuminate the 3D structure and roles of functional compartments in the developing cerebellum and should help us understand how cerebellar compartments emerge.
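The abstract does not include the analysis code, but the quantitative step it describes, grouping Purkinje cells by activity pattern and testing whether each group forms a spatial cluster in 3D, can be sketched as follows. This is a minimal Python sketch under assumptions: k-means on z-scored activity traces stands in for whatever grouping method the authors used, and a label-permutation test on within-group 3D distances stands in for their quantitative confirmation; all function names and parameters are illustrative.

```python
# Minimal sketch (not the authors' code): group Purkinje-cell activity traces
# and test whether each activity group is spatially clustered in 3D.
import numpy as np
from sklearn.cluster import KMeans
from scipy.spatial.distance import pdist

def group_and_test(traces, positions, n_groups=4, n_perm=1000, seed=0):
    """traces: (n_cells, n_timepoints) dF/F; positions: (n_cells, 3) x/y/z in um."""
    rng = np.random.default_rng(seed)
    # 1) Cluster cells by the shape of their activity traces (z-scored per cell).
    z = (traces - traces.mean(axis=1, keepdims=True)) / traces.std(axis=1, keepdims=True)
    labels = KMeans(n_clusters=n_groups, n_init=10, random_state=seed).fit_predict(z)
    # 2) For each activity group, compare the mean within-group 3D distance to a
    #    null distribution obtained by shuffling the group labels across cells.
    results = {}
    for g in range(n_groups):
        within = pdist(positions[labels == g]).mean()
        null = []
        for _ in range(n_perm):
            perm = rng.permutation(labels)
            null.append(pdist(positions[perm == g]).mean())
        p = (np.sum(np.array(null) <= within) + 1) / (n_perm + 1)
        results[g] = {"mean_within_um": within, "p_spatial_clustering": p}
    return labels, results
```

A small p-value for a group indicates that its cells lie closer together in 3D than expected by chance, i.e., that the activity group forms a spatial cluster.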
July 27 (Sat) 9:00-9:15, Room 10 (Bandaijima Building, 6F Conference Room)
3O-10m1-2
On the coordinate axes of the saccadic eye movement system
Mayu Takahashi(高橋 真有),Yuriko Sugiuchi(杉内 友理子),Yoshikazu Shinoda(篠田 義一)
Cognitive Behavioral and Systems Neurophysiology, Graduate School of Medical and Dental Sciences, Tokyo Medical and Dental University

Sensory signals for eye movements (visual and vestibular) are initially coded in different frames of reference and are finally translated into common coordinates, sharing the same final common pathway, because all eye movements are mediated by the same population of extraocular muscles. From clinical studies in humans and lesion studies in animals, it is generally accepted that voluntary saccadic eye movements are organized in horizontal and vertical Cartesian coordinates. However, this issue is not yet settled, because the neural circuits for vertical saccades remain unidentified. To determine the brainstem neural circuits from the superior colliculus to ocular motoneurons for horizontal and vertical saccades, we microstimulated different parts of the superior colliculi and recorded intracellular potentials from ocular motoneurons and from last-order premotor neurons terminating on ocular motoneurons in anesthetized cats. In addition, using combined electrophysiological and neuroanatomical techniques, we analyzed the commissural connections between the bilateral superior colliculi in relation to output tectoreticular neurons. Comparing the well-known vestibulo-ocular pathways with our findings of commissural excitation and inhibition between the two superior colliculi, we conclude that the saccade system uses the same frame of reference as the vestibulo-ocular system, namely common semicircular canal coordinates. This conclusion is based mainly on the marked similarities (1) between the output circuitry from one superior colliculus to extraocular motoneurons and that from the corresponding semicircular canal to the extraocular motoneurons it innervates, (2) between the pattern of reciprocal commissural inhibition linking the upward saccade system on one side and the downward system on the other and that linking the anterior canal system on one side and the posterior canal system on the other, and (3) between saccades and the quick phases of vestibular nystagmus, which share the same brainstem burst neurons.
July 27 (Sat) 9:15-9:30, Room 10 (Bandaijima Building, 6F Conference Room)
3O-10m1-3
Development of a retinotopic map estimation algorithm robust to eye movements
Ryunosuke Togawa(外川 龍之介),Mitsuyuki Nakao(中尾 光之),Norihiro Katayama(片山 統裕)
Biomodeling Laboratory, Graduate School of Information Sciences, Tohoku University

We have developed a novel algorithm to estimate a fine retinotopic map of the visual cortex from the transcranial intrinsic optical signal (IOS) induced by visual stimulation in awake mice. The proposed algorithm is robust against eye movements because the change in the retinal image caused by eye movements is explicitly incorporated. The magnitude of the visual response of a region of interest (ROI) of the visual cortex was modeled as the integral of the retinal image weighted by a sensitivity distribution mapped on the retina, with the distribution assumed to be a two-dimensional Gaussian function. The parameters of the sensitivity distribution, i.e., the gain, the spread, and the center coordinates of each ROI, were estimated by the nonlinear least-squares method. In addition, the global signal regression (GSR) method was incorporated into the algorithm to suppress background noise in the IOS, instead of the conventional synchronous averaging method. The retinotopic map was then estimated from the spatial distribution of the center coordinates of the sensitivity distributions.
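The response model and fitting procedure described above can be illustrated with a short sketch. This is not the authors' implementation: the stimulus representation (retinal images already shifted by the measured eye position), the coordinate grids, and the use of scipy.optimize.least_squares are assumptions made for the sake of a runnable example.

```python
# Minimal sketch (assumed implementation): fit a 2D Gaussian sensitivity profile
# on the retina for one cortical ROI, following the model described above.
import numpy as np
from scipy.optimize import least_squares

def predicted_response(params, retinal_images, xs, ys):
    """params = (gain, x0, y0, sigma); retinal_images: (T, H, W) stimulus on the
    retina, assumed already shifted according to the measured eye position."""
    gain, x0, y0, sigma = params
    X, Y = np.meshgrid(xs, ys)                      # retinal coordinates (deg)
    w = np.exp(-((X - x0) ** 2 + (Y - y0) ** 2) / (2.0 * sigma ** 2))
    # Response = gain * integral of the retinal image weighted by the Gaussian.
    return gain * np.tensordot(retinal_images, w, axes=([1, 2], [0, 1]))

def fit_roi(ios_trace, retinal_images, xs, ys):
    """ios_trace: (T,) IOS time course of one ROI (after global signal regression)."""
    def residuals(p):
        return predicted_response(p, retinal_images, xs, ys) - ios_trace
    p0 = np.array([1.0, xs.mean(), ys.mean(), 10.0])    # initial guess
    fit = least_squares(residuals, p0,
                        bounds=([0.0, xs.min(), ys.min(), 0.1],
                                [np.inf, xs.max(), ys.max(), np.inf]))
    return fit.x    # gain, center (x0, y0), and spread sigma of the sensitivity map
```

The retinotopic map would then be read off from the fitted center coordinates (x0, y0) across all ROIs, as stated in the abstract.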

To evaluate its performance, the proposed algorithm was applied to real IOS data from the visual cortex of head-fixed, unanesthetized mice whose eye movements were measured with an infrared CMOS camera. The position in the cortex and the structure of the estimated retinotopic map were consistent with those estimated by conventional methods. We also confirmed that the spatial resolution of the map is not limited by the number of spatial divisions of the visual stimulus presented to the mouse. These results suggest the accuracy and usefulness of the proposed method.
July 27 (Sat) 9:30-9:45, Room 10 (Bandaijima Building, 6F Conference Room)
3O-10m1-4
A "theory of relativity" for vision: a new framework using retinal coordinates based on miniature vestibular head movements
Yasuto Tanaka(田中 靖人)1,Hiroyuki Fujie(藤江 博幸)2,Ryuuto Fujie(藤江 龍登)2
1Institute of Neuromathematics
2Miki Holdings Co., Ltd.

In the visual processing literature, experiments measuring psychophysical and physiological parameters have traditionally been conducted in a static condition in which the human or animal head is fixed. Body movement, including head motion, was treated as unnecessary noise (Martinez-Conde et al., 2004, among others). Recent studies of fixational eye movements point to the limitations of this approach, since miniature components such as head (neck) vibrations and body movements are all contained in the measured eye movement. Indeed, the eyes and head move on their own in the natural environment in which the visual system actually operates (Collewijn & Kowler, 2008). In this project, we propose a novel framework for understanding visual processing in a dynamic situation where the eyes and head move concurrently on a miniature scale. The coordinate system for the mathematical analysis is treated as mobile: the origin and the axes themselves move rather than being fixed. The main purpose of the system is then to find a stable parameter, as is achieved by the vestibulo-ocular reflex (in its micrometer-scale version; Fujie and Tanaka, 2018, JNS). Visual processing is thus considered relative with respect to its coordinate system. This approach was applied to real three-dimensional coordinate space (x, y, z) in a head-centered frame in which miniature eye movements are expressed. Two steps were taken: (1) the head movement vector was extracted by averaging two independent affine transformations (left eye-to-head and right eye-to-head); (2) the eye movement vector was calculated in the mobile head coordinate system by subtraction, i.e., subtracting the estimated head vector from the eye vector. Eye movement is thereby expressed relative to the moving coordinate system of the head. Following this algorithm, we measured miniature eye movements as well as miniature head movements on the tenths-of-nanometer scale, including ocular drifts (10-20 Hz, micrometer order) and ocular microtremors (80 and 110 Hz, sub-micrometer order), together with miniature head movement (30 Hz), during fixation and during viewing of visual objects such as a single Gabor patch. Such a framework fits well with the natural dynamic environment in which the actual visual system (animal or human vision) is relative in light of vestibular heading, and it can therefore deal with self-navigation.
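The two-step procedure described above (averaging two affine-based head-motion estimates, then subtracting the head vector from the eye vector) can be sketched as follows. The data layout, the 4x4 homogeneous affine representation, and the function names are hypothetical illustrations, not the authors' code.

```python
# Minimal sketch (assumed data layout): express miniature eye movement relative
# to the moving head frame, following the two steps described in the abstract.
import numpy as np

def apply_affine(A, p):
    """Apply a 4x4 homogeneous affine transform A to a 3D point p."""
    return (A @ np.append(p, 1.0))[:3]

def head_and_eye_vectors(A_left, A_right, eye_pos, ref_point=np.zeros(3)):
    """Step 1: average the head-motion estimates given by the left-eye-to-head and
    right-eye-to-head affine transforms. Step 2: subtract the head vector from the
    measured eye position to express eye movement in the mobile head frame.
    A_left, A_right: (T, 4, 4) affine transforms per video frame.
    eye_pos: (T, 3) measured eye position in the fixed (lab) frame."""
    T = eye_pos.shape[0]
    head = np.empty((T, 3))
    for t in range(T):
        h_left = apply_affine(A_left[t], ref_point)
        h_right = apply_affine(A_right[t], ref_point)
        head[t] = 0.5 * (h_left + h_right)          # step 1: averaged head vector
    eye_in_head = eye_pos - head                    # step 2: subtraction
    return head, eye_in_head
```

Drift and tremor components could then be separated from eye_in_head and head by spectral analysis, consistent with the frequency bands reported above.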