
Title: Affective Computing and Intelligent Interaction; Fourth International Conference. Editors: Sidney D’Mello, Arthur Graesser, Jean-Claude Martin. Conference proceedings.

Author: 貪求    Time: 2025-3-21 17:55
Bibliometric indicators for "Affective Computing and Intelligent Interaction" (charts not preserved in this print view):
Impact Factor
Impact Factor, subject ranking
Online visibility
Online visibility, subject ranking
Citation count
Citation count, subject ranking
Annual citations
Annual citations, subject ranking
Reader feedback
Reader feedback, subject ranking

Author: Charade    Time: 2025-3-23 11:26
…the data resulting from research studies in affective computing include large individual differences. As a result, it is important that the data gleaned from an affective computing system be tailored for each individual user by re-tuning it using user-specific training examples. Given the often ti…
Author: resuscitation    Time: 2025-3-24 07:07
Editors: Sidney D’Mello, Arthur Graesser, Jean-Claude Martin. Fast-track conference proceedings; state-of-the-art research; up-to-date results.
Author: Euphonious    Time: 2025-3-24 18:59
https://doi.org/10.1007/978-3-642-24571-8
Keywords: emotion modeling and context; natural language processing; pattern recognition; social communication; vi…
Author: 擴張    Time: 2025-3-25 03:34
ISSN 0302-9743. …refereed proceedings of the Fourth International Conference on Affective Computing and Intelligent Interaction, ACII 2011, held in Memphis, TN, USA, in October 2011. The 135 papers in this two-volume set, presented together with 3 invited talks, were carefully reviewed and selected from 196 submissions. Th…
Author: 怪物    Time: 2025-3-25 18:17
…effective possibility. Our work will investigate the use of a facially-expressive android head as a social partner for children with ASC. The main goal of this research is to improve the emotion recognition capabilities of the children through observation, imitation and control of facial expressions on the android.
Author: 奇怪    Time: 2025-3-25 21:55
A Pattern-Based Model for Generating Text to Express Emotion
…sentence planning module, which provides rules and constraints for our model. We present some examples and results for our model. We show that the model can generate various types of emotion sentences, either from a semantic representation of the input, or by choosing the pattern and the desired emotion class.
Author: Urgency    Time: 2025-3-26 20:07
The Affective Triad: Stimuli, Questionnaires, and Measurements
…assumes that emotion is what people say they feel, and seems more likely. We propose a novel method, which extends the mentioned ones by looking for the physiological measurement most correlated to the self-report due to the emotion, not the stimulus. This guarantees finding a measure best related to the subject’s affective state.
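The selection step described in this fragment, picking the physiological measure whose readings track the self-report most closely, can be shown in a few lines. A minimal sketch, assuming per-trial feature arrays and Pearson correlation as the criterion; the measure names and data below are invented for the example:

```python
import numpy as np
from scipy.stats import pearsonr

def pick_best_measure(features: dict, self_report: np.ndarray) -> str:
    """Return the name of the physiological measure whose per-trial values
    correlate most strongly (in absolute value) with the self-reported emotion."""
    best_name, best_r = None, 0.0
    for name, values in features.items():
        r, _p = pearsonr(values, self_report)
        if abs(r) > abs(best_r):
            best_name, best_r = name, r
    return best_name

# Toy usage: three candidate measures recorded over ten trials.
rng = np.random.default_rng(0)
report = rng.normal(size=10)
measures = {
    "heart_rate": report * 0.8 + rng.normal(scale=0.3, size=10),
    "skin_conductance": rng.normal(size=10),
    "respiration": rng.normal(size=10),
}
print(pick_best_measure(measures, report))  # expected: "heart_rate"
```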
Author: 彩色    Time: 2025-3-26 23:27
The Machine Knows What You Are Hiding: An Automatic Micro-expression Recognition System
…the performance of trained human subjects. To further improve the performance, a more representative training set, a more sophisticated testing bed, and an accurate image alignment method should be the focus of future research.
Author: Custodian    Time: 2025-3-27 03:17
Context-Sensitive Affect Sensing and Metaphor Identification in Virtual Drama
…results for the new developments are provided. Our work benefits systems that intend to employ emotions embedded in the scenarios/characters and open-ended input for visual representation, without distracting users from learning situations.
Author: 沒花的是打擾    Time: 2025-3-27 11:15
Investigating Acoustic Cues in Automatic Detection of Learners’ Emotion from Auto Tutor
…the classification performance obtained using acoustic cues and conversational cues shows that the emotions flow and boredom are better captured by acoustic than by conversational cues, while conversational cues play a more important role in multiple-emotion classification.
Author: Cocker    Time: 2025-3-27 18:16
…actor’s confession of a behavior with doubtful morality, during which the actor either did or did not blush. In the second study, we examine people’s responses when confronted with a moral dilemma in a Virtual Environment.
Author: Urgency    Time: 2025-3-28 15:40
…affective computing, affective and social robotics, affective and behavioral interfaces, relevant insights from psychology, affective databases, evaluation and annotation tools. ISBN 978-3-642-24570-1, 978-3-642-24571-8. Series ISSN 0302-9743; Series E-ISSN 1611-3349.
Author: FLIT    Time: 2025-3-29 02:13
…experimental results using a kNN classifier show that one of them almost always results in higher classification accuracy than a uniform sampling approach. We expect that ACS, together with transfer learning, will greatly reduce the data acquisition effort needed to customize an affective computing system.
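This fragment compares active class selection (ACS) against uniform sampling when gathering user-specific training examples for a kNN classifier. Below is a minimal sketch of one plausible ACS heuristic, requesting more examples of the classes the current model confuses most; the heuristic, the synthetic data, and the scikit-learn usage are illustrative assumptions, not details from the paper:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def acs_round(model, X_val, y_val, n_classes):
    """Return per-class request weights: ask for more samples of the classes
    the current model misclassifies most often (one simple ACS heuristic)."""
    errors = np.zeros(n_classes)
    for true, pred in zip(y_val, model.predict(X_val)):
        if true != pred:
            errors[true] += 1
    total = errors.sum()
    # Fall back to uniform requests when the model makes no errors.
    return errors / total if total > 0 else np.full(n_classes, 1.0 / n_classes)

# Toy usage with synthetic 2-D data for three affective classes.
rng = np.random.default_rng(1)
centers = np.array([[0, 0], [2, 2], [0, 2]])
X = np.vstack([c + rng.normal(scale=0.8, size=(30, 2)) for c in centers])
y = np.repeat([0, 1, 2], 30)
model = KNeighborsClassifier(n_neighbors=3).fit(X[::2], y[::2])
print(acs_round(model, X[1::2], y[1::2], n_classes=3))
```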
Author: 縮減了    Time: 2025-3-29 08:44
A Regression Approach to Affective Rating of Chinese Words from ANEW
…approach for transforming the 1,034 English words’ ratings to the corresponding Chinese words’ ratings. The experimental results demonstrated that the proposed approach can be practically implemented and provides adequate results.
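A minimal sketch of the general regression idea, learning a linear map from English ANEW ratings to target-language ratings using a small seed of human-rated translation pairs; the seed numbers and the plain least-squares fit are assumptions for illustration only:

```python
import numpy as np

# Hypothetical seed: ANEW valence/arousal for English words paired with
# human ratings collected for their Chinese translations (9-point scales).
english = np.array([[8.2, 5.5], [2.1, 6.3], [5.0, 3.2], [7.4, 6.9], [3.3, 4.1]])
chinese = np.array([[7.9, 5.8], [2.4, 6.0], [5.2, 3.5], [7.1, 7.2], [3.6, 4.4]])

# Fit one linear map per dimension: chinese ≈ [english, 1] @ W
X = np.hstack([english, np.ones((len(english), 1))])  # add intercept column
W, *_ = np.linalg.lstsq(X, chinese, rcond=None)

def transform(valence: float, arousal: float) -> np.ndarray:
    """Predict Chinese-word ratings from an English ANEW rating pair."""
    return np.array([valence, arousal, 1.0]) @ W

print(transform(8.2, 5.5))  # should land near the paired Chinese ratings
```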
Author: 侵略者    Time: 2025-3-29 16:00
EMOGIB: Emotional Gibberish Speech Database for Affective Human-Robot Interaction
…the neutral state and the big six emotions: anger, disgust, fear, happiness, sadness and surprise. The database has been evaluated through a perceptual test: all subsets were evaluated by adults and one subset by children, achieving recognition scores of up to 81%.
Author: 確定無疑    Time: 2025-3-30 12:46
…dialogs. The game of EMO20Q provides a framework for demonstrating this shared meaning and, moreover, a standardized way of collecting the judgments of ordinary people. The paper offers preliminary results of EMO20Q pilot experiments, showing that such a game is feasible and that it ge…
Author: 孤獨無助    Time: 2025-3-31 02:32
Interpretations of Artificial Subtle Expressions (ASEs) in Terms of Different Types of Artifact: A C…
Author: 輕快走過    Time: 2025-3-31 05:04
A Comparison of Unsupervised Methods to Associate Colors with Words
Author: peritonitis    Time: 2025-3-31 21:33
Emotion Twenty Questions: Toward a Crowd-Sourced Theory of Emotions
…about emotion terms. Emotion twenty questions (EMO20Q) is a dialog-based game that is similar to the familiar Twenty Questions game, except that the object of guessing is the name for an emotion, rather than an arbitrary object. The game is implemented as a dyadic computer chat application using the Extensible Messaging and Presence Protocol (XMPP)…
Author: 克制    Time: 2025-3-31 22:53
A Pattern-Based Model for Generating Text to Express Emotion
…extended patterns. From the extended patterns, we chose good patterns that are suitable for generating emotion sentences. We also introduce a sentence planning module, which provides rules and constraints for our model. We present some examples and results for our model. We show that the model can genera…
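As a toy illustration of pattern-based generation, choosing a pattern and filling its slots according to a desired emotion class; the patterns, slot names, and lexicon below are invented for the example and are not the paper’s resources:

```python
import random

# Hypothetical patterns with slots, and an emotion-keyed lexicon.
PATTERNS = [
    "I feel so {adj} when {event}.",
    "{event_cap} makes me truly {adj}.",
]
LEXICON = {
    "joy": {"adj": ["happy", "delighted"], "event": ["we meet", "the sun comes out"]},
    "anger": {"adj": ["furious", "annoyed"], "event": ["the train is late", "people lie"]},
}

def generate(emotion: str, seed=None) -> str:
    """Pick a pattern and fill its slots with words for the desired emotion class."""
    rng = random.Random(seed)
    words = LEXICON[emotion]
    event = rng.choice(words["event"])
    return rng.choice(PATTERNS).format(
        adj=rng.choice(words["adj"]),
        event=event,
        event_cap=event.capitalize(),
    )

print(generate("joy", seed=0))
print(generate("anger", seed=1))
```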
Author: 失望昨天    Time: 2025-4-1 10:13
How Low Level Observations Can Help to Reveal the User’s State in HCI
…even for human annotators, only indirectly inferable using background information and the observation of the interaction’s progression, as well as the social signals produced by the interlocutors. In this paper, coincidences of directly observable patterns and different user states are examined in o…
Author: Invertebrate    Time: 2025-4-1 16:24
Investigating Acoustic Cues in Automatic Detection of Learners’ Emotion from Auto Tutor
…The purpose of this study is to examine acoustic cues for emotion detection from the speech channel of the learning system, and to compare the emotion-discriminant performance of the acoustic cues (in this study) with the conversational cues (available in previous work). Comparison between…
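For a concrete sense of what acoustic cues can look like in practice, here is a minimal frame-based feature extractor over a raw waveform; the choice of RMS energy and zero-crossing rate (plus their variability) is a common illustrative baseline, not the feature set used in the study:

```python
import numpy as np

def acoustic_features(signal: np.ndarray, sr: int, frame_ms: int = 25) -> dict:
    """Frame the waveform and compute simple prosodic cues:
    RMS energy, zero-crossing rate, and their variability across frames."""
    frame_len = int(sr * frame_ms / 1000)
    n_frames = len(signal) // frame_len
    frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)
    rms = np.sqrt((frames ** 2).mean(axis=1))
    zcr = (np.abs(np.diff(np.sign(frames), axis=1)) > 0).mean(axis=1)
    return {
        "energy_mean": rms.mean(), "energy_std": rms.std(),
        "zcr_mean": zcr.mean(), "zcr_std": zcr.std(),
    }

# Toy usage: one second of synthetic audio at 16 kHz.
sr = 16000
t = np.linspace(0, 1, sr, endpoint=False)
wave = 0.5 * np.sin(2 * np.pi * 220 * t)
print(acoustic_features(wave, sr))
```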
Author: 暫時中止    Time: 2025-4-2 06:18
A Regression Approach to Affective Rating of Chinese Words from ANEW
…studies were rated with a large number of participants, making them difficult to apply to different languages. Moreover, differences in culture across ethnic groups make language/culture-specific affective norms not directly translatable to applications using different languages. To overcome…