Vision 2
O1-6-6-1
Estimation of the light source in depth perception induced by cast shadow

○Narumi Katsuyama1,2, Nobuo Usui1,2, Izuru Nose3, Masato Taira1,2
Dept. Cogn. Neurobiol., Tokyo Med. and Dent. Univ.1, CBIR, Tokyo Med. and Dent. Univ.2, Lab. Comp. Develop. Psychol., Nippon Vet. and Life Sci. Univ.3

Using a modified version of a motion illusion called 'ball-in-a-box', we previously investigated the effect of a cast shadow on depth perception. In that study, participants were asked to estimate the trajectory of a ball in a movie scene. However, the position of the ball in the movie could not be determined uniquely: it could lie anywhere along the line of sight. This should make it difficult to infer the ball trajectory from its spatial relationship with the cast shadow alone unless an explicit light source is assumed. Nevertheless, the ball trajectories reported by participants were stable, suggesting that they may have assumed an implicit light source when inferring the trajectory. In the present study, we investigated this implicit light source by comparing the perceived ball trajectories with trajectories predicted under explicitly assumed light sources. Assuming parallel light, we posited 10 putative light sources with incident angles from 50° to 140° in the sagittal plane (vertical was set at 90°) and calculated the predicted ball trajectory for each putative light source by placing the ball at the intersection of the incident light ray and the line of sight. The scatter of the perceived ball trajectories around the predicted ones was taken as a measure of the validity of each putative light source. The results showed that when the ball and its cast shadow moved close together, the scatter of the perceived ball trajectory did not differ across the putative light sources. On the other hand, when the shadow moved away from the ball, the scatter varied with the incident angle of the putative light source, with a minimum at 80° or 90°. These results suggest that participants implicitly assume a vertical light source to infer the ball trajectory when the cast shadow moves away from the ball, whereas they can infer the ball trajectory independently of the implicit light source when the shadow moves together with the ball.
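The geometric step described above — placing the ball at the intersection of the incident light ray (through the cast shadow) and the line of sight — can be sketched in the 2D sagittal plane. This is an illustrative reconstruction, not the authors' code; the coordinate conventions and function name are assumptions:

```python
import numpy as np

def predicted_ball_position(eye, image_point, shadow, incident_deg):
    """Intersect the line of sight (from the eye through the ball's image)
    with the incident light ray passing through the cast shadow.
    All points are 2D (horizontal, vertical) in the sagittal plane.
    incident_deg follows the abstract's convention: 90 deg = vertical light.
    """
    eye = np.asarray(eye, float)
    d_sight = np.asarray(image_point, float) - eye       # viewing direction
    theta = np.radians(incident_deg)
    d_light = np.array([np.cos(theta), np.sin(theta)])   # parallel-light direction
    shadow = np.asarray(shadow, float)
    # Solve eye + t * d_sight = shadow + s * d_light for (t, s).
    A = np.column_stack([d_sight, -d_light])
    t, s = np.linalg.solve(A, shadow - eye)
    return eye + t * d_sight
```

For a vertical light source (90°), the predicted ball lies directly above its shadow, wherever the viewing ray crosses that vertical line; repeating this for each frame and each putative incident angle yields the predicted trajectories compared against the perceived ones.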
O1-6-6-2
The role of visual areas MT/MST in spatiotopic integration of visual motion across saccadic eye movements

○Naoko Inaba1, Kenichiro Miura1, Kenji Kawano1
Dept of Integrative Brain Science, Kyoto Univ, Kyoto1

We perceive the visual world as stable and continuous despite our eye movements. After each eye movement, the retinotopic representation of the visual world in the brain must be overwritten by a new one according to the new eye position. To maintain perceptual stability across saccadic eye movements, we have to integrate the visual information before and after the eye movements in spatiotopic coordinates. Monkey electrophysiology has demonstrated that neurons in various brain areas change the spatial profile of their receptive fields around the time of saccadic eye movements, the so-called "remapping". To investigate the neuronal mechanisms of this remapping of visual motion across saccades, we recorded neuronal activity from areas MT/MST in awake monkeys performing a simple saccade task in which a spatially stable moving stimulus was presented at various locations in the visual field. We found that when the stimulus was placed in the pre-saccadic receptive field, neurons in both areas responded until the saccade took the stimulus out of the receptive field. When the stimulus was placed in the post-saccadic receptive field and remained visible after the saccade, neurons in both areas began responding at the end of the saccade. However, MT and MST neurons showed different response patterns when the stimulus disappeared before the eye movement brought the stimulated location into their receptive fields. The post-saccadic responses of most MST neurons reflected stimuli that had been presented in their future receptive fields ("memory remapping"), whereas the responses of most MT neurons were extinguished after the saccade. These results suggest that MST neurons take the eye movement into account in processing the remembered stimulus, matching the old retinal image to the current one to integrate visual information properly during or after saccades.
O1-6-6-3
Estimating invariant dimensions in V2

○Haruo Hosoya1,2, Kota Sasaki3, Izumi Ohzawa3
ATR International, Kyoto, Japan1, JST Presto, Tokyo, Japan2, Osaka University, Osaka, Japan3

Our visual system can robustly detect an important feature in a scene, such as the identity of an object, even when its retinal image varies substantially. The neural basis of such invariant recognition is thought to be a common property of visual cortex whereby a cell retains its activity even with large changes in the stimulus along a certain dimension. For example, a face-selective cell may keep firing when the viewing angle of the face is changed; a V1 complex cell may remain active when the phase of an oriented grating is changed. A semi-automatic method for estimating such invariant dimensions is spike-triggered covariance analysis (STC), which can yield Gabor-like features with different phases for V1 complex cells from their responses to a large number of random stimuli. This work aims at generalizing the previous method to reveal invariant dimensions in V2. STC is not suitable for direct application to V2 cells since (1) these typically have a much higher degree of nonlinearity than V1 cells and (2) a large number of parameters are needed to characterize V2 cells, so STC requires an unrealistic amount of data. To address the first issue, we attempt to reduce the complexity by assuming a population model of V1 cells and analyzing their outputs with respect to the responses of a V2 cell. To address the second issue, we apply a Bayesian version of STC, proposed by Park and Pillow, imposing a prior that assumes receptive fields generally have smooth profiles. We applied our method to a publicly available dataset of V2 cells responding to a large number of natural images. Our analyses revealed that V2 cells often had several invariant dimensions, some of which could be interpreted as positional translation, rotation, and expansion or compression. We will discuss how to visualize the invariant representations through sample stimuli generated along the invariant dimensions, as well as how to quantitatively characterize those generated samples.
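For readers unfamiliar with STC, the classical (non-Bayesian) version the abstract generalizes can be sketched as follows. This is a simplified illustration of the generic technique, not the authors' Bayesian pipeline; the function name and the STA-centering shortcut are assumptions:

```python
import numpy as np

def stc_dimensions(stimuli, spikes, n_dims=2):
    """Classical spike-triggered covariance (STC).
    stimuli: (T, D) array of stimulus frames; spikes: (T,) spike counts/rates.
    Returns the n_dims eigenvalue/eigenvector pairs whose eigenvalues deviate
    most from zero -- candidate excitatory/suppressive (invariant) dimensions.
    """
    X = stimuli - stimuli.mean(axis=0)        # zero-mean stimulus ensemble
    w = spikes / spikes.sum()                 # spike-weighted probabilities
    sta = w @ X                               # spike-triggered average
    Xc = X - sta                              # center at the STA (simplified)
    C_spike = (w[:, None] * Xc).T @ Xc        # spike-triggered covariance
    C_prior = Xc.T @ Xc / len(Xc)             # raw stimulus covariance
    evals, evecs = np.linalg.eigh(C_spike - C_prior)
    order = np.argsort(np.abs(evals))[::-1]   # largest deviations first
    return evals[order[:n_dims]], evecs[:, order[:n_dims]]
```

Applied to a simulated energy-model cell (response = sum of squared projections onto two filters), the top two eigenvectors recover the filter subspace — the phase-invariant dimension of a V1 complex cell. The Bayesian variant cited in the abstract replaces the raw eigendecomposition with a smoothness prior on the recovered filters, which is what makes the higher-dimensional V2 setting tractable.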
O1-6-6-4
Neural mechanisms of visual orientation constancy
○Ari Rosenberg1, Dora Angelaki1
Baylor College of Medicine1

As we examine the world visually, movements of the eyes, head, and body change how the scene projects onto the retina. Despite this changing retinal image, perception of the environment remains stably oriented along the gravitational vector, termed earth vertical. Consider an upright observer fixating a vertical bar. The bar is correctly perceived to be oriented vertically in space, and its image runs along the retina's vertical meridian. If the observer's head then rolls to one side while maintaining fixation, perception of the bar remains near vertical even though its retinal image is now oblique. This visual orientation constancy reveals the influence of extra-retinal sources such as vestibular signals on visual perception, without which the head-rolled observer could misinterpret the vertical bar as oblique. Where and how this is achieved in the brain remains unknown. Electrophysiological studies conducted in primary visual cortex have yielded conflicting results, and human studies suggest parietal cortex may be involved. Here we examine this possibility by recording extracellularly from 3D orientation-selective neurons in the caudal intraparietal area (CIP) of macaque monkeys. Tilt tuning curves, describing how responses depend on the direction in which a plane leans towards the observer, were recorded for each cell with the animal upright and rolled ear down. Relative to the upright tilt tuning curve, about 40% of the head/body-rolled tuning curves shifted significantly in the direction preserving the preferred tilt relative to earth vertical. This shift was generally larger than the ocular counter-roll but smaller than the roll amplitude (partially compensating, as often observed in multisensory integration) and was sometimes accompanied by a gain change. Our findings show that the responses of CIP neurons correlate with visual orientation constancy, providing a novel look into how multisensory integration unifies and stabilizes perception of the environment.
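The key measurement above — how far a tilt tuning curve shifts between the upright and rolled postures — can be estimated by circular cross-correlation of the two curves. This is a generic illustration under stated assumptions (regularly sampled circular tuning, a hypothetical function name), not the authors' analysis:

```python
import numpy as np

def tuning_shift_deg(upright, rolled, step_deg):
    """Estimate the signed shift (degrees) between two circular tilt tuning
    curves sampled at regular tilt steps, via circular cross-correlation.
    A positive value means the rolled curve is shifted toward larger tilts.
    """
    u = upright - np.mean(upright)
    r = rolled - np.mean(rolled)
    n = len(u)
    # Correlation at every circular lag; the best lag gives the shift.
    corr = [np.dot(u, np.roll(r, -k)) for k in range(n)]
    k_best = int(np.argmax(corr))
    if k_best > n // 2:            # map lags past half-circle to negative shifts
        k_best -= n
    return k_best * step_deg
```

Comparing the estimated shift against the roll amplitude and the ocular counter-roll then distinguishes full, partial, or absent compensation, as reported for the CIP population.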