Symposium
35 Reconstruction of human sciences based on decoding of emotional information
Chairs: Junichi Chikazoe (1. Brain Research & Development Office, Araya Inc.; 2. Section of Biofunctional Information Analysis, National Institute for Physiological Sciences) and Tomohiro Ishizu (Faculty of Letters, Kansai University)
July 1, 2022, 16:10–16:30, Okinawa Convention Center, Conference Rooms B3–4, Venue 6
2S06e-01
Deep learning reveals what emotional expressions mean to people in different cultures
*Alan Cowen(1)
1. Hume AI

Keyword: Emotion, Deep neural networks, Culture, Semantic space theory

Cross-cultural investigations into the meaning of emotional expressions have largely focused on perceptions of small sets of curated images and audio samples by small numbers of people. Such studies are limited to broad distinctions among a limited range of expressions and are subject to perceptual and linguistic biases. As a result, the meanings of emotional expressions within and across cultures are still not well understood. Here, we used large-scale data collection and machine learning techniques to map what emotional expressions convey in four large-scale experimental studies spanning more than 10 countries across five continents. More than 10,000 participants generated photographs, recordings, and/or videos of themselves experiencing and expressing a wide range of emotions and mental states using at-home recording devices. Participants each judged their own experiences or expressions in terms of 48 emotions and mental states in one of five languages. Deep neural networks tasked with predicting the culture-specific meanings people attributed to their expressions, while ignoring physical appearance and context, discovered 28 distinct dimensions of facial expression and 24 distinct dimensions of vocal expression. Across studies, these dimensions of facial and vocal expression were between 58% and 80% preserved in meaning across countries and languages, with 17 dimensions of facial expression and 24 dimensions of vocal expression showing a high degree of universality and 11 additional dimensions of facial expression showing moderate-to-strong cultural specificity. These results capture the underlying dimensions of the meanings of emotional expressions within and across cultures in unprecedented detail.
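For readers who want a concrete sense of the analysis, a minimal sketch of a cross-cultural preservation estimate is shown below. The ridge model stands in for the deep neural networks used in the study, and all variable names, shapes, and simulated data are assumptions for illustration only.

```python
# Hypothetical sketch of a cross-cultural preservation analysis, assuming we
# already have expression features and mean emotion judgments per country.
# The ridge model and all shapes stand in for the study's DNNs and data.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_clips, n_features, n_emotions = 500, 128, 48

# X_*: features of expression clips recorded in two countries; Y_*: mean
# ratings on 48 emotion/mental-state categories by raters from that country.
X_a, Y_a = rng.normal(size=(n_clips, n_features)), rng.normal(size=(n_clips, n_emotions))
X_b, Y_b = rng.normal(size=(n_clips, n_features)), rng.normal(size=(n_clips, n_emotions))

# Fit an expression-to-judgment model on country A ...
model = Ridge(alpha=1.0).fit(X_a, Y_a)

# ... and ask how well it explains country B's judgments, emotion by emotion.
# High correlations suggest the expression carries a similar meaning in both
# cultures; low ones point to culture-specific usage.
pred_b = model.predict(X_b)
preservation = [np.corrcoef(pred_b[:, k], Y_b[:, k])[0, 1] for k in range(n_emotions)]
print(f"median cross-country preservation: {np.median(preservation):.2f}")
```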
July 1, 2022, 16:30–16:50, Okinawa Convention Center, Conference Rooms B3–4, Venue 6
2S06e-02
Emotional objectivity: Neural coding of perceptual representations of affect and their function in the human brain
*Adam Keith Anderson(1)
1. Cornell University

Keyword: emotion, valence, fMRI, locus coeruleus

At the inception of experimental psychology, W. Wundt expanded empirical investigations beyond psychophysics to a deeper subjective level of how perceptual events psychologically affect us. Central to this affect is valence, the negative and positive values associated with an event. Valence is now known to regulate many aspects of perception and cognition, including attention, memory, and decision-making. While affect has been thought of as a common neural code, the brain may support many different forms of affective representation. Here we report on examinations from self-report, perception, machine learning, psychophysiology, and fMRI of the existence of multiple neural representations of valence, which we hypothesize support affective (feeling), semantic (knowing), and perceptual (seeing) features. With a particular focus on the latter, we demonstrate: 1) evidence for valence-specific coding in sensory-specific cortices, alongside objective dimensions in the extrastriate cortex; 2) with the use of machine learning, evidence that global visual properties that discriminate the valence of real-world naturalistic images transfer to discriminating the affective response to abstract paintings that contain no concrete objects or scenes, consistent with their causal status; 3) bottom-up orienting via the brainstem locus coeruleus, driven either by cardiac gating signals or by stimulus behavioral relevance, increases pupil dilation and generates the perception of value; 4) acquired valence may serve as a factor in perceptual organization, attracting or repulsing same- vs. opposite-valence features and objects. These results suggest that our brains learn and produce low-level associations between visual experience and valence content to regulate our perceptual experience. Further, they support the notion of “emotional objectivity”, whereby our internal affective responses are neurally represented alongside exteroceptive perceptual dimensions, giving the illusion that feelings are projected out into the objective physical world.
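Point (2) above describes a transfer test: a classifier trained on global visual properties of naturalistic images is applied to object-free abstract paintings. The sketch below illustrates that logic under stated assumptions; the toy descriptors, the logistic-regression model, and the simulated images and labels are not the authors' actual pipeline.

```python
# Hypothetical sketch: do global visual properties that separate positive from
# negative naturalistic images also separate affective responses to abstract
# paintings? All data here are simulated stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def global_properties(images):
    """Toy global descriptors per image: mean luminance, contrast, edge energy."""
    lum = images.mean(axis=(1, 2))
    contrast = images.std(axis=(1, 2))
    edges = np.abs(np.diff(images, axis=2)).mean(axis=(1, 2))
    return np.stack([lum, contrast, edges], axis=1)

# Stand-ins for real stimuli: grayscale image arrays and binary valence labels.
natural_imgs = rng.random((200, 64, 64))
natural_valence = rng.integers(0, 2, 200)    # rated valence of real-world scenes
abstract_imgs = rng.random((80, 64, 64))
abstract_valence = rng.integers(0, 2, 80)    # felt valence for abstract paintings

# Train on naturalistic images only, then test transfer to abstract paintings.
clf = LogisticRegression().fit(global_properties(natural_imgs), natural_valence)
transfer_acc = clf.score(global_properties(abstract_imgs), abstract_valence)
print(f"transfer accuracy to abstract paintings: {transfer_acc:.2f}")
```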
July 1, 2022, 16:50–17:10, Okinawa Convention Center, Conference Rooms B3–4, Venue 6
2S06e-03
Empirical studies on negatively valenced aesthetic experience

*Tomohiro Ishizu(1)
1. Department of Psychology, Kansai University, Osaka, Japan

Keyword: aesthetic experience, negative valence, neuroaesthetics

Human beings are equipped with altruism and empathy, which allow us to go beyond individual pleasure (tied to positive valence) and discomfort or pain (tied to negative valence) and motivate ourselves toward acts that benefit others or the group. Normally, one should avoid acts or events that lead to negative valence. However, there are cases in which we actively value them and accept negatively valenced emotion; for example, altruistic or self-sacrificing behaviours. This may seem contrary to the individual's survival needs, but self-sacrificing behaviour is related to the very foundations of humanity. Being able to motivate oneself to act in a way that benefits others, without regard for personal pleasure or discomfort, is considered an important aspect of humanity. Interestingly, there is an aesthetic experience that may work to promote altruism and empathy: so-called "aesthetic sadness" or "sad beauty". In aesthetics and philosophy, sad beauty is defined as a mixed emotion consisting of positive valence (beauty) and negative valence (sadness or sorrow). However, it is not known quantitatively what emotions it consists of or how such an aesthetic experience can promote empathy. In this talk, I will present empirical data from neuroaesthetics studies, including an online survey, an MRI experiment, and behavioural testing, in which we quantitatively examine what constitutes aesthetic sadness, which brain responses it engages, and whether aesthetic sadness influences and promotes empathy.
July 1, 2022, 17:10–17:30, Okinawa Convention Center, Conference Rooms B3–4, Venue 6
2S06e-04
Emotional awareness and interoception in alexithymia

*Yuri Terasawa(1)
1. Keio University

Keyword: Interoception, Alexithymia, Emotional awareness

“Alexithymia” is a common personality trait among patients with psychosomatic disorders. It was originally described as a “relative constriction in emotional functioning, poverty of fantasy life, and inability to find appropriate words to describe their emotions” (Sifneos, 1973). Because individuals high in alexithymia have difficulty feeling and describing their sensations in emotional situations with appropriate words, they can serve as an informative model for considering the relationship between emotional awareness and bodily sensation.
In this talk, I will introduce our study, which examined how interoceptive processing in an emotional context relates to the difficulty people with alexithymia have in recognizing their own emotions. We prepared experimental conditions to induce emotional awareness based on interoceptive information. Participants high in alexithymia showed attenuated functional connectivity within the “interoception network”, particularly between the insula and the somatosensory areas, when they focused on interoception. In contrast, they showed enhanced functional connectivity between these regions when they focused on their anxiety. Although access to somatic information should be engaged more strongly while attending to interoception, as part of primary sensory processing, in individuals high in alexithymia this process was instead engaged when they felt anxiety, suggesting that they recognize primitive, unprocessed bodily sensations as emotions. The results suggest that people with high alexithymia have a problem integrating information from the different pathways underlying cardiac interoception, i.e., visceral (interoceptive) and surface bodily sensations. Difficulty integrating whole-body sensations conveyed through such different channels may have a deleterious impact on how the bodily responses associated with emotions are conceptualized. These difficulties in those with higher levels of alexithymia indicate that the association between bodily sensation and emotional awareness varies across individuals. Based on these findings, I would like to discuss the relationship between emotional awareness and bodily responses mediated by interoception.
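As a rough illustration of the analysis described above, the sketch below computes a condition-wise functional-connectivity contrast between two regions of interest; the ROI names, time-series lengths, and simulated data are assumptions, not the study's actual preprocessing or statistics.

```python
# Minimal sketch of a condition-wise functional-connectivity contrast,
# assuming ROI time series (e.g., insula, somatosensory cortex) have already
# been extracted from preprocessed BOLD data. All data here are simulated.
import numpy as np

rng = np.random.default_rng(2)
n_timepoints = 240

# Stand-ins for ROI time series in the two attention conditions.
ts = {
    "interoception": {"insula": rng.normal(size=n_timepoints),
                      "somatosensory": rng.normal(size=n_timepoints)},
    "anxiety":       {"insula": rng.normal(size=n_timepoints),
                      "somatosensory": rng.normal(size=n_timepoints)},
}

def functional_connectivity(x, y):
    """Pearson correlation between two ROI time series."""
    return np.corrcoef(x, y)[0, 1]

for condition, rois in ts.items():
    fc = functional_connectivity(rois["insula"], rois["somatosensory"])
    print(f"insula-somatosensory FC while attending to {condition}: {fc:+.2f}")
# In the study, this contrast (attenuated FC during interoceptive focus,
# enhanced FC during anxiety focus) characterized high-alexithymia participants.
```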
July 1, 2022, 17:30–17:50, Okinawa Convention Center, Conference Rooms B3–4, Venue 6
2S06e-05
Neural and computational understanding of musical emotion and creativity

*Tatsuya Daikoku(1)
1. International Research Center for Neurointelligence, The University of Tokyo, Japan

Keyword: Music, Creativity, Uncertainty, Statistical learning

Music is omnipresent in our lives but unique to humans. The interaction between music and the human brain engages various neural mechanisms underlying emotion, learning, action, and creativity. Recently, a body of evidence has suggested that statistical learning contributes to musical creativity as well as to musical acquisition (Wiggins, 2019; Daikoku et al., 2021). Statistical learning is an innate and implicit function of the human brain and is considered essential for brain development. Through statistical learning, humans can create and comprehend music, which evokes emotion. However, the mechanism by which musical emotion and creativity emerge in the brain remains debated. Creativity is considered to be linked to acquired knowledge, yet creative moments often occur without any intention to use that knowledge. In our studies, we postulate that some types of creativity can be linked to implicit statistical knowledge in the brain. Here, I talk about a series of our neural and computational studies on how creativity emerges within the framework of statistical learning in the brain. I propose a hierarchical Bayesian statistical learning model (HBSL), in which sequences are statistically chunked into units (shallow statistical learning) and several units are then combined (deep statistical learning). I then propose two core factors of musical creativity. First, deep statistical learning may be the main contributor to musical creativity. Second, the temporal dynamics of perceptual uncertainty can induce musical creativity and the associated emotion. I also present a newly devised system that visualizes the individuality and creativity of musical sensibilities using the neuro-inspired HBSL model. Further, we use computational simulation to estimate how musical creativity has changed across eras and what kind of music may be created in the future. Through a series of neural and computational findings, I aim to further develop and strengthen music research in the humanities and social sciences.
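To make the two ingredients of the HBSL framework more concrete, the toy sketch below chunks a note sequence into units from learned transition probabilities (shallow statistical learning) and tracks prediction uncertainty as entropy; the melody, the first-order model, and the boundary threshold are illustrative assumptions rather than the actual HBSL implementation.

```python
# Illustrative sketch: shallow statistical learning chunks notes into units
# where transition probability drops, and perceptual uncertainty is tracked as
# the entropy of the learned transition distribution. Toy melody and threshold
# are assumptions for illustration.
from collections import defaultdict
import math

sequence = list("CDECDEGABGABCDE")   # toy melody as note symbols

# Learn first-order transition probabilities (shallow statistical learning).
counts = defaultdict(lambda: defaultdict(int))
for a, b in zip(sequence, sequence[1:]):
    counts[a][b] += 1
prob = {a: {b: c / sum(nxt.values()) for b, c in nxt.items()}
        for a, nxt in counts.items()}

def entropy(dist):
    """Uncertainty of the next-note prediction, in bits."""
    return -sum(p * math.log2(p) for p in dist.values())

# Chunk into units at low-probability (surprising) transitions.
chunks, current = [], [sequence[0]]
for a, b in zip(sequence, sequence[1:]):
    if prob[a].get(b, 0.0) < 0.6:        # assumed boundary threshold
        chunks.append("".join(current))
        current = []
    current.append(b)
chunks.append("".join(current))

print("units (shallow SL):", chunks)
print("uncertainty per context:", {a: round(entropy(d), 2) for a, d in prob.items()})
# Deep statistical learning would then treat these units themselves as symbols
# and learn statistics over unit-to-unit transitions.
```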
July 1, 2022, 17:50–18:10, Okinawa Convention Center, Conference Rooms B3–4, Venue 6
2S06e-06
Modeling sensory-to-value transformation using neural networks
*Junichi Chikazoe(1), Trung Quang Pham(1)
1. Araya Inc.

Keyword: Multivoxel pattern analysis, Artificial intelligence, Emotion, functional MRI

Our ability to value sensory stimuli is essential, yet we know little about how this occurs in the brain. We examined valuation in the visual domain by using artificial neural networks (ANNs) to model and test the brain’s transformation of information from vision to value. We collected subjective valuations of art images to train individualized ANNs (iANNs). We then compared the hidden layers of each iANN to that subject’s functional neuroimaging voxel activity during valuation. We found a hierarchical, vision-to-value correspondence between the two that spanned the Principal Gradient, a global axis of brain organization that begins with satellites of modal sensory input and converges at an integrative center of value and self in the default mode network. As an independent test of this relationship, we conducted a parallel auditory study in which participants evaluated music stimuli during functional neuroimaging. Again, we found a correspondence between iANNs that modeled audition-to-value and voxel activity along the Principal Gradient, suggesting that this hierarchical axis of the brain encodes sensory-to-value transformation independently of modality. ANNs do not inherently replicate the human brain. Rather, they are optimized to perform a specific task, such as predicting subjective value by projecting information from one space (e.g., visual) to another (e.g., value) across sequential hierarchical layers. Thus, the evidence of correspondence suggests that there may be computational constraints underlying ANNs that cohere with the information processing rules expressed within biological neural networks. We discuss these computational constraints in this symposium, as their elucidation may be profitable for both of the adjoining fields of artificial intelligence and neuroscience.
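A minimal sketch of one way such a layer-to-brain comparison could be run is shown below, using representational similarity analysis and a small MLP as a stand-in for an iANN; the region names, data shapes, and simulated data are assumptions for illustration only.

```python
# Hypothetical sketch of a layer-to-brain comparison, using representational
# similarity analysis as a stand-in for the authors' method. The tiny MLP,
# region names, and all data here are illustrative assumptions.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
n_stimuli, n_features = 120, 64

# Stand-ins: stimulus features, subjective values, and voxel patterns for
# regions ordered along the Principal Gradient (sensory -> default mode).
stimuli = rng.normal(size=(n_stimuli, n_features))
values = rng.normal(size=n_stimuli)
brain = {"visual_cortex": rng.normal(size=(n_stimuli, 300)),
         "parietal": rng.normal(size=(n_stimuli, 300)),
         "default_mode": rng.normal(size=(n_stimuli, 300))}

# A two-hidden-layer "iANN" substitute trained to map stimuli to value.
iann = MLPRegressor(hidden_layer_sizes=(32, 8), max_iter=2000,
                    random_state=0).fit(stimuli, values)

def layer_activations(model, X):
    """Forward pass that keeps each hidden layer's activations."""
    acts, h = [], X
    for W, b in zip(model.coefs_[:-1], model.intercepts_[:-1]):
        h = np.maximum(h @ W + b, 0.0)      # ReLU hidden layers
        acts.append(h)
    return acts

# Compare each hidden layer's geometry with each region's geometry (RSA):
# Spearman correlation between the layer RDM and each region's RDM.
for li, act in enumerate(layer_activations(iann, stimuli), start=1):
    layer_rdm = pdist(act, metric="euclidean")
    row = {roi: round(float(spearmanr(layer_rdm, pdist(vox, metric="euclidean"))[0]), 3)
           for roi, vox in brain.items()}
    print(f"layer {li} vs regions:", row)
```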