Title: Affective Computing and Intelligent Interaction; Fourth International Conference; Sidney D’Mello, Arthur Graesser, Jean-Claude Martin; Conference proceedings
Affect, Learning, and Delight
…search on interactive learning environments. The intelligent tutoring systems community has begun actively exploring computational models of affect, and game-based learning environments present a significant opportunity for investigating student affect in interactive learning. One family of game-bas…
ikannotate – A Tool for Labelling, Transcription, and Annotation of Emotionally Coloured Speech
…prosodic features, linguistics provides several transcription systems. Furthermore, in emotion labelling different methods are proposed and discussed. In this paper, we introduce the tool ikannotate, which combines prosodic information with emotion labelling. It allows the generation of a transcription of ma…
Fast-FACS: A Computer-Assisted System to Increase Speed and Reliability of Manual FACS Coding
…d difficult to standardize. A goal of automated FACS coding is to eliminate the need for manual coding and realize automatic recognition and analysis of facial actions. Success of this effort depends in part on access to reliably coded corpora; however, manual FACS coding remains expensive and slow.
Agents with Emotional Intelligence for Storytelling
…active storytelling, and making them be perceived as a close friend or a hated enemy by a user, is a hard task. This paper addresses the problem of creating autonomous agents capable of establishing social relations with others in an interactive narrative. We present an innovative approach by loo…
“That’s Aggravating, Very Aggravating”: Is It Possible to Classify Behaviors in Couple Interactions…
…cal and theoretical challenges in observational practice. Technology holds the promise of mitigating some of these difficulties by assisting in the evaluation of higher-level human behavior. In this work we attempt to address two questions: (1) Does the lexical channel contain the necessary informat…
Predicting Facial Indicators of Confusion with Hidden Markov Models
…ctive state is confusion, which has been positively associated with effective learning. Although identifying episodes of confusion presents significant challenges, recent investigations have identified correlations between confusion and specific facial movements. This paper builds on those findings…
Using Individual Light Rigs to Control the Perception of a Virtual Character’s Personality
…ystem composed of three dynamic lights that can be configured using an interactive editor. To study the effect of character-centric lighting on observers, we created four lighting configurations derived from the photography and film literature. A user study with 32 subjects shows that the lighting s…
Call Center Stress Recognition with Person-Specific Models
…ad the same job profile, we found large differences in how individuals reported stress levels, with similarity from day to day within the same participant, but large differences across the participants. We examined two ways to address the individual differences to automatically recognize classes of…
Multiple Instance Learning for Classification of Human Behavior Observations
…who are asked to evaluate several aspects of the observations along various dimensions. This can be a tedious task. We propose that automatic classification of behavioral patterns in this context can be viewed as a multiple instance learning problem. In this paper, we analyze a corpus of married cou…
…d an accuracy across participants of 78.03% when trained and tested on different days from the same person, and of 73.41% when trained and tested on different people, using the proposed adaptations to SVMs.
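The two evaluation protocols in the fragment above (person-specific: train and test on different days of the same person; person-independent: train on other people, test on a held-out person) can be sketched as data splits. This is a minimal illustration with made-up records, not the paper's actual pipeline.

```python
# Sketch of the two train/test protocols mentioned above.
# Records are hypothetical (person, day) pairs; a real setup would
# also carry features and stress labels.

def person_specific_split(records, person, test_day):
    """Train on a person's other days, test on one held-out day."""
    train = [r for r in records if r["person"] == person and r["day"] != test_day]
    test = [r for r in records if r["person"] == person and r["day"] == test_day]
    return train, test

def person_independent_split(records, test_person):
    """Train on everyone else, test on one held-out person."""
    train = [r for r in records if r["person"] != test_person]
    test = [r for r in records if r["person"] == test_person]
    return train, test

# Three people, three recording days each.
records = [{"person": p, "day": d} for p in ("A", "B", "C") for d in (1, 2, 3)]
tr1, te1 = person_specific_split(records, "A", test_day=3)
tr2, te2 = person_independent_split(records, "C")
```

The point of the split choice is that person-independent evaluation is the harder condition, which matches the lower accuracy reported for it above.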
Agents with Emotional Intelligence for Storytelling
…nd perform interpersonal emotion regulation in order to dynamically create relations with others. Some sample scenarios are presented in order to illustrate the type of behaviour achieved by the model and the creation of social relations.
“That’s Aggravating, Very Aggravating”: Is It Possible to Classify Behaviors in Couple Interactions…
…ted with several session-level behavioral codes (e.g., level of acceptance toward the other spouse). Our results will show that both of our research questions can be answered positively and encourage future research into such assistive observational technologies.
Predicting Facial Indicators of Confusion with Hidden Markov Models
…gful modes of interaction within the tutoring sessions. The results demonstrate that because of its predictive power and rich qualitative representation, the model holds promise for informing the design of affect-sensitive tutoring systems.
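The abstract above names hidden Markov models as the tool for inferring a learner's hidden affective state from observed facial movements. A minimal sketch of the underlying mechanics is the HMM forward pass below; all states, observations, and probabilities are illustrative assumptions, not the paper's trained parameters.

```python
# Hedged sketch: discrete-HMM forward filtering, in the spirit of
# inferring a hidden "confusion" state from observed facial action
# units. Parameters below are made up for illustration.

def hmm_forward(obs, pi, A, B):
    """Return filtered state posteriors P(state_t | obs_1..t)."""
    n_states = len(pi)
    # Initialise with prior times emission likelihood of first observation.
    alpha = [pi[s] * B[s][obs[0]] for s in range(n_states)]
    posteriors = []
    for t, o in enumerate(obs):
        if t > 0:  # propagate through transitions, then weight by emission
            alpha = [B[s][o] * sum(alpha[r] * A[r][s] for r in range(n_states))
                     for s in range(n_states)]
        z = sum(alpha)
        alpha = [a / z for a in alpha]  # normalise to avoid underflow
        posteriors.append(list(alpha))
    return posteriors

# Hypothetical setup: states 0 = not confused, 1 = confused;
# observations 0 = brow lowering absent, 1 = brow lowering present.
pi = [0.8, 0.2]                   # assumed prior over states
A = [[0.9, 0.1], [0.3, 0.7]]      # assumed sticky transitions
B = [[0.8, 0.2], [0.3, 0.7]]      # assumed emission probabilities

post = hmm_forward([1, 1, 1, 0], pi, A, B)
```

With these toy numbers, the belief in the "confused" state rises while the facial cue keeps appearing and drops once it stops, which is the qualitative behavior such a model is meant to capture.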
Form as a Cue in the Automatic Recognition of Non-acted Affective Body Expressions
…le to finalize the affective state of each sequence. Our results show that, using form information only, the system reaches recognition performance very close to the agreement between observers who viewed the affective expressions as animations containing both form and temporal information.
Are You Friendly or Just Polite? – Analysis of Smiles in Spontaneous Face-to-Face Interactions
…longer durations of amused smiles, while also suggesting new findings about the symmetry of smile dynamics. We found more symmetry in the velocities of the rise and decay of amused smiles, and less symmetry in polite smiles. We also found the fastest decay velocity for polite but shared smiles.
Using Individual Light Rigs to Control the Perception of a Virtual Character’s Personality
…etups do influence the perception of the characters’ personality. We found lighting effects with regard to the perception of dominance. Moreover, we found that the personality perception of female characters seems to change more easily than for male characters.
Multiple Instance Learning for Classification of Human Behavior Observations
…ples interacting about a problem in their relationship. We extract features from both the audio and the transcriptions and apply the Diverse Density-Support Vector Machine framework. Apart from attaining classification on the expert annotations, this framework also allows us to estimate salient regions of the complex interaction.
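The core idea behind the multiple-instance formulation above is that a whole session ("bag") of utterances ("instances") carries a single label, and the bag can be scored by its most indicative instance, which is also what makes salient-region estimation possible. The toy below illustrates that bag/instance logic only; it is a hypothetical stand-in, not a reimplementation of the Diverse Density-SVM framework.

```python
# Toy multiple-instance classifier: a bag is positive if its best
# instance score exceeds a threshold; the argmax marks the salient
# instance. Features and weights are illustrative assumptions.

def score_instance(features, weights):
    """Linear instance score from hypothetical per-utterance features."""
    return sum(f * w for f, w in zip(features, weights))

def classify_bag(bag, weights, threshold=0.0):
    """Return (bag label, index of the most salient instance)."""
    scores = [score_instance(inst, weights) for inst in bag]
    best = max(range(len(scores)), key=lambda i: scores[i])
    return scores[best] > threshold, best

# Hypothetical features per utterance, e.g. (negativity, arousal),
# with weights assumed to have been learned elsewhere.
weights = [1.0, 0.5]
session = [[-0.2, 0.1], [0.9, 0.8], [0.0, -0.3]]
label, salient = classify_bag(session, weights)
```

Here the second utterance drives the session-level decision, mirroring how the framework localizes which parts of a long interaction justify the expert's session code.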
Conference proceedings 2011
…affect-sensitive applications, methodological issues in affective computing, affective and social robotics, affective and behavioral interfaces, relevant insights from psychology, affective databases, evaluation and annotation tools.
…, and Self Assessment Manikins. Finally, we present results of two usability tests observing the ability to identify emotions in labelling, and comparing the transcription tool “Folker” with our application.
…le sources of evidence. During a 7-day, 13-participant field study we found that recognition accuracy for physiological activation improved from 63% to 79% with two sources of evidence, and in an additional pilot study this improved to 100% accuracy for one subject over 10 days when context evidence was also included.
ISBN 978-3-642-24599-2, © Springer-Verlag GmbH Berlin Heidelberg 2011
…of humans and machines via the development of affective and multimodal intelligent systems. This development appears … based on the popular notion that emotions play an important part in social interactions. In fact, research shows that humans missing the possibility to express or perceive/interpret…
…you are on the go, e.g. the Affectiva Q Sensor for capturing sympathetic nervous system activation, or without distracting you while you are online, e.g. webcam-based software capturing heart rate variability and facial expressions. We are also developing new technologies that capture and respond…
…commonly used to detect affective states from physiological data. Previous studies have achieved some success in detecting affect from physiological measures, especially in controlled environments where emotions are experimentally induced. One challenge that arises is that physiological measures are…
…thin two versions of a virtual laboratory for chemistry. This analysis is conducted using field-observation data collected within undergraduate classes using the virtual laboratory software as part of their regular chemistry classes. We find that off-task behavior co-occurs with boredom, but appears…
…turalistic recordings of scenario meetings. The core method consists of computing a set of prosody and voice quality measures, followed by a Principal Components Analysis (PCA) and Support Vector Machine (SVM) classification to identify the core factors predicting the associated social signal or rel…
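The PCA step in the pipeline above, reducing many correlated prosody and voice-quality measures to a few dominant factors before classification, can be sketched as below. The feature values are fabricated for illustration, and the SVM stage is omitted (in practice it would come from a library).

```python
# Hedged sketch of the PCA stage: center the feature matrix, take the
# eigendecomposition of its covariance, and project onto the leading
# axes. Toy data only; not the paper's features or parameters.
import numpy as np

def pca(X, n_components):
    """Project rows of X onto the top n_components principal axes."""
    Xc = X - X.mean(axis=0)                 # center each feature
    cov = np.cov(Xc, rowvar=False)          # feature covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigh returns ascending order
    order = np.argsort(eigvals)[::-1]       # re-sort descending by variance
    components = eigvecs[:, order[:n_components]]
    return Xc @ components, eigvals[order]

# Six toy "utterances" with three prosodic measures, two of them
# strongly correlated so one component captures most variance.
rng = np.random.default_rng(0)
base = rng.normal(size=(6, 1))
X = np.hstack([base,
               2 * base + 0.01 * rng.normal(size=(6, 1)),
               rng.normal(size=(6, 1))])
scores, variances = pca(X, n_components=2)
```

The component scores (rather than the raw, correlated measures) would then be fed to the SVM classifier, which is the "core factors" idea described above.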