Title: Affective Information Processing; Jianhua Tao, Tieniu Tan; Book, Springer-Verlag London, 2009
Affect and Emotions in Intelligent Agents: Why and How?
…fit between an organism's functioning and the demands and opportunities that are presented by the environment in which it finds itself. It is proposed that effective functioning requires the interplay of four elements—affect, cognition, motivation, and behavior—and that the level of information proc…
Cognitive Emotion Modeling in Natural Language Communication
…significant projects in this domain in recent years. The review is focused on probabilistic dynamic models, due to the key role of uncertainty in the relationships among the variables involved: the authors' experience in this domain is discussed by outlining open problems. Two aspects are discussed in particular: how probabilistic emotion models can be validated and how the problem of emotional-cognitive inconsistency can be dealt with in probabilistic terms.
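The chapter itself is only excerpted above; as a generic illustration of what a probabilistic dynamic emotion model computes, the sketch below runs a forward filter over a hypothetical three-state emotion model with made-up transition and observation probabilities (numpy only). It is not the authors' model, just the basic belief-update step under uncertainty that such models share.

```python
import numpy as np

# Hypothetical emotional states and observation symbols (illustrative only).
states = ["neutral", "frustrated", "satisfied"]
obs_symbols = ["silence", "complaint", "praise"]

# Transition model P(state_t | state_{t-1}) and observation model P(obs | state),
# with invented numbers; a real model would be learned from annotated dialogues.
T = np.array([[0.8, 0.15, 0.05],
              [0.3, 0.6,  0.1 ],
              [0.2, 0.05, 0.75]])
O = np.array([[0.6, 0.2,  0.2 ],
              [0.3, 0.65, 0.05],
              [0.3, 0.05, 0.65]])

def forward_filter(observations, prior):
    """Return the belief over emotional states after each observation."""
    belief = np.asarray(prior, dtype=float)
    history = []
    for obs in observations:
        o = obs_symbols.index(obs)
        belief = O[:, o] * (T.T @ belief)   # predict with T, then weight by likelihood
        belief /= belief.sum()              # normalise to a probability distribution
        history.append(belief.copy())
    return history

for b in forward_filter(["complaint", "complaint", "praise"], [1/3, 1/3, 1/3]):
    print({s: round(p, 2) for s, p in zip(states, b)})
```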
Affective Agents for Education Against Bullying
…system is discussed and the chapter considers issues relating to believability. It describes the emotionally driven architecture used for characters as well as the management of the overall story and finally considers issues relating to evaluation of systems like this.
Emotion Perception and Recognition from Speech
…s more and more important. This chapter begins by introducing the correlations between basic speech features such as pitch, intensity, formants, MFCC, and so on, and the emotions. Several recognition methods are then described to illustrate the performance of the previously proposed models, including…
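The abstract names pitch, intensity, formants, and MFCCs as the basic features. As a minimal sketch of that kind of pipeline (assuming librosa and scikit-learn; formant extraction is omitted, and the SVM classifier is an illustrative choice, not one of the chapter's models), one could summarise each utterance and train a simple recognizer:

```python
import numpy as np
import librosa                      # assumed available; any feature library would do
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def utterance_features(path):
    """Summarise pitch, intensity, and MFCC statistics for one utterance."""
    y, sr = librosa.load(path, sr=16000)
    f0 = librosa.yin(y, fmin=60, fmax=400, sr=sr)       # frame-wise pitch (Hz)
    rms = librosa.feature.rms(y=y)[0]                    # frame-wise intensity proxy
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # 13 MFCCs per frame
    return np.concatenate([
        [np.nanmean(f0), np.nanstd(f0)],
        [rms.mean(), rms.std()],
        mfcc.mean(axis=1), mfcc.std(axis=1),
    ])

# wav_paths and labels (e.g. "angry", "happy", "neutral") are placeholders.
def train_emotion_classifier(wav_paths, labels):
    X = np.vstack([utterance_features(p) for p in wav_paths])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
    clf.fit(X, labels)
    return clf
```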
Emotional Speech Generation by Using Statistic Prosody Conversion Methods
…Regression Tree (CART) model. Unlike the rule-based or linear modification method, the GMM and CART models try to map the subtle prosody distributions between neutral and emotional speech. A pitch target model that is optimized to describe Mandarin F0 contours is also introduced. For all conversion methods…
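The GMM mapping mentioned here is commonly realised as a joint-density Gaussian mixture over paired neutral/emotional prosody vectors, converted through the conditional mean. The sketch below follows that standard recipe with scikit-learn; the feature definition, component count, and the omission of the CART and pitch-target variants are assumptions for illustration, not the chapter's exact setup.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_joint_gmm(src_feats, tgt_feats, n_components=4):
    """Fit a GMM on paired [source; target] prosody vectors (e.g. log-F0 segments)."""
    joint = np.hstack([src_feats, tgt_feats])        # shape (N, d_src + d_tgt)
    return GaussianMixture(n_components=n_components, covariance_type="full",
                           random_state=0).fit(joint)

def convert(gmm, x, d_src):
    """Map one source vector to the expected target vector under the joint GMM."""
    x = np.asarray(x, dtype=float)
    means, covs, weights = gmm.means_, gmm.covariances_, gmm.weights_
    post = np.empty(gmm.n_components)
    cond_means = []
    for k in range(gmm.n_components):
        mu_x, mu_y = means[k, :d_src], means[k, d_src:]
        S_xx = covs[k][:d_src, :d_src]
        S_yx = covs[k][d_src:, :d_src]
        diff = x - mu_x
        inv = np.linalg.inv(S_xx)
        # Responsibility of component k given the source part only.
        norm = np.exp(-0.5 * diff @ inv @ diff) / np.sqrt(
            (2 * np.pi) ** d_src * np.linalg.det(S_xx))
        post[k] = weights[k] * norm
        cond_means.append(mu_y + S_yx @ inv @ diff)   # conditional mean E[y | x, k]
    post /= post.sum()
    return sum(p * m for p, m in zip(post, cond_means))

# Usage sketch (arrays are placeholders):
#   gmm = fit_joint_gmm(neutral_segments, emotional_segments)
#   converted = convert(gmm, neutral_segments[0], d_src=neutral_segments.shape[1])
```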
Automatic Facial Action Unit Recognition by Modeling Their Semantic and Dynamic Relationships
…action unit (AU) recognition approaches often recognize AUs or certain AU combinations individually and statically, ignoring the semantic relationships among AUs and the dynamics of AUs. Hence, these approaches cannot always recognize AUs reliably, robustly, and consistently due to the richness, ambiguity…
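To make the idea of exploiting semantic relationships among AUs concrete, the toy sketch below re-weights independent per-AU detector scores with hypothetical pairwise co-occurrence priors (all numbers invented). The chapter's actual model is considerably richer; this only illustrates why relational information can correct detectors that treat each AU in isolation.

```python
import numpy as np

# Hypothetical independent detector scores P(AU present | image), illustrative only.
au_scores = {"AU6": 0.55, "AU12": 0.80, "AU15": 0.40}

# Hypothetical pairwise priors P(AU_i present | AU_j present): AU6 (cheek raiser)
# frequently accompanies AU12 (lip corner puller), AU15 (lip corner depressor) rarely does.
cooccur = {("AU6", "AU12"): 0.85, ("AU15", "AU12"): 0.10}

def rescored(au, scores, prior, weight=0.5):
    """Blend a detector score with evidence from confidently detected related AUs."""
    s = scores[au]
    for (a, b), p in prior.items():
        if a == au and scores[b] > 0.7:           # related AU is confidently present
            s = (1 - weight) * s + weight * p     # pull the score toward the prior
    return s

for au in au_scores:
    print(au, round(rescored(au, au_scores, cooccur), 2))
```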
Face Animation Based on Large Audiovisual Database
…matically addresses audiovisual data acquisition, expressive trajectory analysis, and audiovisual mapping. Based on this framework, we learn the correlation between neutral facial deformation and expressive facial deformation with the Gaussian Mixture Model (GMM). A hierarchical structure is proposed…
Affect in Multimodal Information
…verbal and the nonverbal modalities of communication. From this point of view, the transmission of the information content is redundant, because the same information is transferred through several channels as well. How much information about the speaker's emotional state is transmitted by each channel… the same amount of information as the combined channels, suggesting that each channel performs a robust encoding of the emotional features that is very helpful in recovering the perception of the emotional state when one of the channels is degraded by noise.
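One simple way to put numbers on how much information a channel carries about the emotional state, given an annotated corpus, is the mutual information between channel-specific labels and emotion labels. The sketch below uses scikit-learn with placeholder annotations; the corpus, label sets, and the choice of mutual information as the measure are assumptions, not the chapter's protocol.

```python
from sklearn.metrics import mutual_info_score

# Placeholder per-utterance annotations; a real corpus would provide these.
emotion      = ["joy", "anger", "joy", "neutral", "anger", "joy"]
facial_label = ["smile", "frown", "smile", "none", "frown", "none"]
vocal_label  = ["high_f0", "loud", "high_f0", "flat", "loud", "high_f0"]

# Higher values mean the channel's labels reveal more about the emotion labels.
for name, channel in [("face", facial_label), ("voice", vocal_label)]:
    print(name, round(mutual_info_score(emotion, channel), 3))
```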
A Multimodal Corpus Approach for the Study of Spontaneous Emotions
…Current prototypes are often limited to the detection and synthesis of a few primary emotions and are most of the time grounded on acted data collected in-lab. In order to model the sophisticated relations between spontaneous emotions and their expressions in different modalities, an exploratory approach… …of spontaneous complex emotions and at which level of abstraction and temporality. We also defined a copy-synthesis approach in which these behaviors were annotated, represented, and replayed by an expressive agent, enabling a validation and refinement of our annotations. We also studied individual…
Physiological Sensing for Affective Computing
…physiological processes related to sympathetic activity of the autonomic nervous system. A reason for physiological measurements not being used in emotion-related HCI research or affective applications is the lack of appropriate sensing devices. Existing systems often don't live up to the high requirements… applications rely on today, and commonly used techniques to access them. After this, requirements of affective applications for physiological sensors are worked out. A design concept meeting the requirements is drawn and exemplary implementations including evaluation results are described.
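As an illustration of the kind of processing such sensing devices feed, the sketch below extracts a few common electrodermal activity features (tonic level, skin conductance response count and amplitude) from a synthetic signal with numpy/scipy. The cutoff frequency and peak thresholds are placeholders, not values from the chapter.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def eda_features(signal, fs):
    """Split an electrodermal activity signal into tonic/phasic parts and count SCRs."""
    b, a = butter(2, 0.05 / (fs / 2), btype="low")        # very slow component = tonic level
    tonic = filtfilt(b, a, signal)
    phasic = signal - tonic
    peaks, _ = find_peaks(phasic, height=0.02, distance=fs)  # candidate skin conductance responses
    return {"tonic_mean": float(tonic.mean()),
            "scr_count": int(len(peaks)),
            "scr_mean_amp": float(phasic[peaks].mean()) if len(peaks) else 0.0}

# Synthetic 60 s recording at 32 Hz for demonstration only.
fs = 32
t = np.arange(0, 60, 1 / fs)
demo = 2.0 + 0.01 * t + 0.05 * np.maximum(0, np.sin(2 * np.pi * t / 15)) \
       + 0.005 * np.random.randn(t.size)
print(eda_features(demo, fs))
```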
Evolutionary Expression of Emotions in Virtual Humans Using Lights and Pixels
…field of virtual humans has been neglecting the kind of expression we see in the arts. In fact, researchers have tended to focus on gesture, face, and voice for the expression of emotions. But why limit ourselves to the body? In this context, drawing on accumulated knowledge from the arts, this chapter…
A Linguistic Interpretation of the OCC Emotion Model for Affect Sensing from Text
…specific values of the cognitive variables of the emotion model. The resulting linguistics-based rule set for the OCC emotion types and cognitive states allows us to determine a broad class of emotions conveyed by text.
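A rough sketch of what a linguistics-based OCC rule set boils down to: once the cognitive variables have been assigned values for a sentence, simple rules select the emotion type. The function below covers only a tiny, event-based subset of the OCC model with hypothetical variable names; the chapter's rule set is far broader.

```python
def occ_emotion(event_desirability, agent_is_self, prospect_confirmed=None):
    """Tiny subset of OCC event-based rules (illustrative, not the chapter's full set).

    event_desirability: 'desirable' or 'undesirable' for the appraising agent
    agent_is_self:      True if the consequence concerns the agent itself
    prospect_confirmed: None for actual events, True/False for prospective ones
    """
    if prospect_confirmed is None:
        if agent_is_self:
            return "joy" if event_desirability == "desirable" else "distress"
        return "happy-for" if event_desirability == "desirable" else "pity"
    if event_desirability == "desirable":
        return "satisfaction" if prospect_confirmed else "disappointment"
    return "fears-confirmed" if prospect_confirmed else "relief"

# "The exam I dreaded was cancelled": undesirable prospect, not confirmed -> relief
print(occ_emotion("undesirable", agent_is_self=True, prospect_confirmed=False))
```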
Expressive Speech Synthesis: Past, Present, and Possible Futures
…the “explicit control” paradigm in statistical parametric speech synthesis, which provides control over expressivity by combining and interpolating between statistical models trained on different expressive databases. The present chapter provides an overview of the past and present approaches, and ventures a look into possible future developments.
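The idea of interpolating between models trained on different expressive databases can be illustrated by linearly blending per-stream Gaussian parameters of a neutral and an expressive voice. The numpy sketch below is a toy with invented stream names and numbers, not a full statistical parametric synthesis pipeline.

```python
import numpy as np

def interpolate_streams(neutral, expressive, alpha):
    """Blend per-stream Gaussian parameters of two voices; alpha=0 neutral, 1 expressive."""
    blended = {}
    for stream in neutral:                          # e.g. "lf0", "mgc", "dur"
        mu_n, var_n = neutral[stream]
        mu_e, var_e = expressive[stream]
        mu = (1 - alpha) * mu_n + alpha * mu_e
        var = (1 - alpha) * var_n + alpha * var_e   # simple linear blend of variances
        blended[stream] = (mu, var)
    return blended

# Toy single-state "voices": log-F0 and duration means/variances (invented values).
neutral    = {"lf0": (np.array([4.7]), np.array([0.01])), "dur": (np.array([10.0]), np.array([4.0]))}
expressive = {"lf0": (np.array([5.1]), np.array([0.04])), "dur": (np.array([7.0]),  np.array([9.0]))}
print(interpolate_streams(neutral, expressive, alpha=0.5))
```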
DOI: https://doi.org/10.1007/978-1-84800-306-4
Keywords: Affect; Affective Computing; Affective Information Processing; Animation; Multimodal Information; User In…
ISBN: 978-1-84996-777-8; Springer-Verlag London 2009
Emotion Recognition Based on Multimodal Information
Here is a conversation between an interviewer and a subject occurring in an Adult Attachment Interview (Roisman, Tsai, & Chiang, 2004). AUs are facial action units defined in Ekman, Friesen, and Hager (2002)…
Cover image: http://image.papertrans.cn/a/image/150631.jpg
Introduction
…understanding social emotional influences in the workplace. Nowadays, more and more researchers are interested in how to integrate emotions into HCI, which has become known as “affective computing” (Picard, 1997). Affective computing builds an “affect model” based on a variety of information, which r…
Why the Same Expression May Not Mean the Same When Shown on Different Faces or Seen by Different People
…on is, and second, we may be misled by them. The present chapter has the goal to present some findings about the importance of faces in the context of emotion communication. Most research on emotion considers the face as if it were a blank canvas with no meaning of its own. However, as we show, fac… …Beaupré, & …, 2002). This in turn has relevance for the use of emotion expressions in human–computer interfaces. Agents are made to express emotions so as to facilitate communication by making it more like human–human communication (Koda & …, 1996; Pelachaud & Bilvi, 2003). However, this very naturalness may mean that an a…