標(biāo)題: Titlebook: HCI International 2022 - Late Breaking Papers. Multimodality in Advanced Interaction Environments; 24th International C Masaaki Kurosu,Saka [打印本頁] 作者: 預(yù)兆前 時(shí)間: 2025-3-21 19:24
…person at the interview or not may have helped to alleviate the nervousness of the interviewees. Based on these results, we propose a web interview support system using the communication support character InterActor, a speech-driven embodied entrainment character that automatically g…
…feedback process by providing additional online grip-force-related acoustic surrogate feedback. We evaluated the object-hand kinematic and psychophysical data of 16 young and 16 elderly participants, collected during a classical weight discrimination task allowing object shape-weight manipulation…
Towards a Dynamic Model for the Prediction of Emotion Intensity from Peripheral Physiological Signals: …analysed for three different emotion qualities: Happiness/Joy, Disappointment/Regret, Worry/Fear. While models were obtained for each individual, only the best set of parameters across individuals was considered for evaluation. Overall, it was found that the NARX models performed better than a slid…
Towards Efficient Odor Diffusion with an Olfactory Display Using an Electronic Nose: …to use an e-nose to control the odor concentration generated by an olfactory display, thus avoiding an excess of odor that may produce a negative user experience. The Evolutionary Prototyping methodology was very useful for developing our olfactory display system and electronic nose.
An Elderly User-Defined Gesture Set for Audio Natural Interaction in Square Dance: …d obvious directional indicators, including vertical (up/down) lines and counterclockwise circular motion. We designed a gesture set using these motions. Data analysis also showed that the best gestures scored relatively high in ease of execution and memorability. This suggests that the older p…
Human-Robot-Collaboration in the Healthcare Environment: An Exploratory Study: …towards robots in the healthcare sector differ, and whether the type of robot has an influence on attitudes towards robots. The results show that participants working in the healthcare sector have a less positive attitude towards robots than those not working in the healthcare sector. Furthermore…
Series ISSN 0302-9743. The conference was held virtually during June 26 to July 1, 2022. A total of 5583 individuals from academia, research institutes, industry, and governmental agencies from 88 countries submitted contributions, and 1276 papers and 275 posters were included in the proceedings that were published just before the start…
Laughter Meaning Construction and Use in Development: Children and Spoken Dialogue Systems: …plications that such results have for the implementation of spoken dialogue systems that are more competent from a semantic and pragmatic perspective will be outlined. In particular, the qualitative and quantitative analysis of developmental data will offer the basis for the proposal of some specific applications.
Emotion Recognition from Physiological Signals Using Continuous Wavelet Transform and Deep Learning: …N) model for classification. The proposed model processes multiple signal types such as Galvanic Skin Response (GSR), respiration patterns, and blood volume pressure. The achieved results indicate an accuracy of 84.2%, which outperforms state-of-the-art models on four-class classification despite being based only on peripheral signals.
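The pipeline this abstract describes (CWT features, overlapping sliding windows, then a CNN) can be outlined in code. The snippet below is a minimal numpy sketch of the first two stages only, using a hand-rolled Ricker-wavelet CWT and made-up signal and window sizes; the paper's actual parameters and CNN classifier are not reproduced here.

```python
import numpy as np

def ricker(points, a):
    """Ricker (Mexican hat) wavelet, a common kernel for a simple CWT."""
    t = np.arange(points) - (points - 1) / 2.0
    norm = 2.0 / (np.sqrt(3.0 * a) * np.pi ** 0.25)
    return norm * (1.0 - (t / a) ** 2) * np.exp(-(t ** 2) / (2.0 * a ** 2))

def cwt(signal, widths):
    """Continuous wavelet transform: one row of coefficients per wavelet width."""
    out = np.empty((len(widths), len(signal)))
    for i, w in enumerate(widths):
        kernel = ricker(min(10 * int(w), len(signal)), w)
        out[i] = np.convolve(signal, kernel, mode="same")
    return out

def sliding_windows(x, size, step):
    """Overlapping windows over the time axis, used to multiply training samples."""
    starts = range(0, x.shape[-1] - size + 1, step)
    return np.stack([x[..., s:s + size] for s in starts])

# Toy GSR-like signal: slow drift plus a phasic burst (illustrative only).
fs = 32
t = np.arange(0, 8, 1 / fs)
gsr = 0.1 * t + np.exp(-((t - 4) ** 2) / 0.1)

scalogram = cwt(gsr, widths=np.arange(1, 31))           # (30 scales, 256 samples)
samples = sliding_windows(scalogram, size=64, step=32)  # overlapping "images" for a CNN
print(scalogram.shape, samples.shape)
```

Each window in `samples` is a small 2-D scalogram patch, which is the kind of image-like input a CNN can then classify.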
…“Late Breaking Work” (papers and posters). The contributions thoroughly cover the entire field of human-computer interaction, addressing major advances in knowledge and effective use of computers in a variety of application areas. ISBN 978-3-031-17617-3, 978-3-031-17618-0. Series ISSN 0302-9743; Series E-ISSN 1611-3349.
…division is higher than that without FOV division, and complex poses can be detected. In addition, the effectiveness of the two-stage skeleton detection was confirmed by comparing the results with and without the two-stage detection.
Applying Generative Adversarial Networks and Vision Transformers in Speech Emotion Recognition: …the Vision Transformer (ViT) is used. ViT was originally applied to image classification, but in the current study it is adopted for emotion recognition. The proposed methods were evaluated using the English IEMOCAP and the Japanese JTES speech corpora and showed significant improvements when data augmentation was applied.
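As a rough illustration of how a spectrogram is fed to a Vision Transformer, the sketch below patchifies a stand-in mel spectrogram and applies a random linear projection with a positional term. All dimensions and weights here are made up; the paper's actual model and its GAN-based augmentation are not shown.

```python
import numpy as np

def patchify(spec, patch):
    """Split a (freq, time) spectrogram into non-overlapping patch x patch tiles
    and flatten each tile -- the token format a Vision Transformer expects."""
    f, t = spec.shape
    f, t = f - f % patch, t - t % patch            # crop to a multiple of the patch size
    tiles = spec[:f, :t].reshape(f // patch, patch, t // patch, patch)
    return tiles.transpose(0, 2, 1, 3).reshape(-1, patch * patch)

def embed(patches, dim, rng):
    """Linear projection plus an additive position encoding (random weights here)."""
    w = rng.standard_normal((patches.shape[1], dim)) / np.sqrt(patches.shape[1])
    pos = rng.standard_normal((patches.shape[0], dim)) * 0.02
    return patches @ w + pos

rng = np.random.default_rng(0)
spec = rng.standard_normal((128, 200))             # stand-in for a mel spectrogram
tokens = embed(patchify(spec, patch=16), dim=64, rng=rng)
print(tokens.shape)                                # one 64-d token per 16x16 patch
```

The resulting token sequence is what the transformer's attention layers would then process.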
…ations in the entire range in front of a display. In this method, we use a technique of FOV division, which transforms an input omnidirectional camera image into multiple perspective projection images by virtually rotating the camera, in order to avoid distortion in the peripheral area of a perspe…
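The FOV-division idea, as described, maps each pixel of a virtual pinhole camera back into the panorama for several rotation angles. Below is a minimal nearest-neighbour numpy sketch, assuming an equirectangular input and yaw-only rotation; the paper's exact projection model and parameters may differ.

```python
import numpy as np

def perspective_from_equirect(equi, out_hw, fov_deg, yaw_deg):
    """Sample one perspective view from an equirectangular image by virtually
    rotating a pinhole camera by yaw_deg -- the core step of FOV division."""
    H, W = equi.shape[:2]
    h, w = out_hw
    f = (w / 2) / np.tan(np.radians(fov_deg) / 2)   # focal length in pixels
    xx, yy = np.meshgrid(np.arange(w) - w / 2, np.arange(h) - h / 2)
    # Ray for each output pixel, rotated about the vertical axis.
    c, s = np.cos(np.radians(yaw_deg)), np.sin(np.radians(yaw_deg))
    rx = c * xx + s * f
    rz = -s * xx + c * f
    ry = yy.astype(float)
    lon = np.arctan2(rx, rz)                        # longitude in [-pi, pi]
    lat = np.arcsin(ry / np.linalg.norm(np.stack([rx, ry, rz]), axis=0))
    u = ((lon / np.pi + 1) / 2 * (W - 1)).round().astype(int)
    v = ((lat / (np.pi / 2) + 1) / 2 * (H - 1)).round().astype(int)
    return equi[np.clip(v, 0, H - 1), np.clip(u, 0, W - 1)]

equi = np.arange(180 * 360).reshape(180, 360)       # fake 180x360 panorama
views = [perspective_from_equirect(equi, (64, 64), fov_deg=90, yaw_deg=a)
         for a in (-90, 0, 90)]                     # three divided FOVs
print(views[0].shape, len(views))
```

Each undistorted view could then be passed to an off-the-shelf skeleton detector, which is the motivation the abstract gives for the division.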
…sly, several studies have been introduced to address the problem of emotion recognition using several kinds of sensors, feature extraction methods, and classification techniques. Specifically, emotion recognition has been reported using audio, vision, text, and biosensors. Although using acted emot…
…proposes a four-class multimodal approach for emotion recognition based on peripheral physiological signals that uniquely combines a Continuous Wavelet Transform (CWT) for feature extraction, an overlapping sliding window approach to generate more data samples, and a Convolutional Neural Network (CN…
…ipulation. These feedback inputs are slow in general; however, they are responsible for the corrective mismatch process and motoric execution. An aging-induced reduction in the concentration of tactile afferents and mechanoreceptors, and consequently reduced skin sensitivity, may further impair the slow feedback pro…
…of analyses of corpus data, theoretical and formal insights, behavioural experiments, machine learning methods, and developmental data turned out to be fruitful to gain insight into laughter behaviour and into how its production contributes to our conversations. A crucial claim emerging from the studi…
…le to understand sign language. Therefore, we need technology to support communication between people with speech or hearing impairments and normal people, and sign language recognition (SLR) is important to facilitate communication. In this work, we propose an approach to recognize sign language fr…
…roved usability and a consequent increase in performance in executing their tasks. In environments using augmented reality, gestures are preferable over eye gaze, as, in the former, the user does not take the focus off the field of work. However, unnatural motions are more difficult to memorize, and…
…er of templates, the number of sampling points, the number of fingers, and their configuration with other hand parameters such as hand joints, palm, and fingertips impact performance. This paper defines a systematic procedure for comparing recognizers using a series of test definitions, i.e., an ordered…
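To make the test dimensions mentioned above concrete (number of templates, number of sampling points), here is a toy $1-style template recognizer: resample each stroke to a fixed number of points, normalize it, and pick the template with the smallest mean point-to-point distance. This is a generic sketch, not the specific recognizers evaluated in the paper.

```python
import numpy as np

def resample(points, n=32):
    """Resample a stroke to n equidistant points (the sampling-point parameter)."""
    pts = np.asarray(points, dtype=float)
    d = np.r_[0, np.cumsum(np.linalg.norm(np.diff(pts, axis=0), axis=1))]
    t = np.linspace(0, d[-1], n)
    return np.column_stack([np.interp(t, d, pts[:, i]) for i in (0, 1)])

def normalize(pts):
    """Translate to the centroid and scale to a unit bounding box."""
    pts = pts - pts.mean(axis=0)
    span = np.ptp(pts, axis=0).max()
    return pts / span if span else pts

def recognize(stroke, templates, n=32):
    """Return the template label with the smallest mean point-to-point distance."""
    q = normalize(resample(stroke, n))
    scores = {label: np.linalg.norm(normalize(resample(t, n)) - q, axis=1).mean()
              for label, t in templates.items()}
    return min(scores, key=scores.get)

# Two toy templates: a horizontal line and a "V" stroke.
templates = {"line": [(0, 0), (10, 0)], "vee": [(0, 0), (5, 5), (10, 0)]}
print(recognize([(1, 1), (9, 1.2)], templates))   # near-horizontal input -> "line"
```

Varying `n` and the size of `templates` is exactly the kind of parameter sweep a systematic comparison procedure would iterate over.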
…gners and elderly users, the elderly's use of mid-air gestures is still limited. To make gesture-based interaction more natural for elderly users, the study aimed to find user-defined gestures suitable for elderly users in square dancing sound. We conducted a structured process based on participator…
Towards a Dynamic Model for the Prediction of Emotion Intensity from Peripheral Physiological Signals: …ficant contribution. Surprisingly, the field of emotion recognition is dominated by static machine-learning approaches that do not account for the dynamics present in emotional processes. To overcome this limitation, we applied nonlinear autoregressive (NARX) models to predict emotion intensity from…
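To show the regressor structure behind a NARX model, the sketch below fits the linear special case (ARX) by least squares on synthetic data; a true NARX model replaces the linear map with a nonlinear one (e.g. a small neural network). The signals and lag orders here are made up for illustration.

```python
import numpy as np

def arx_fit(y, u, ny=2, nu=2):
    """Fit a linear ARX model y[t] = sum a_i*y[t-i] + sum b_j*u[t-j] by least
    squares. A NARX model uses the same lagged regressors with a nonlinear map."""
    rows, targets = [], []
    for t in range(max(ny, nu), len(y)):
        rows.append(np.r_[y[t - ny:t][::-1], u[t - nu:t][::-1]])
        targets.append(y[t])
    theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return theta

def arx_predict(theta, y0, u, ny=2, nu=2):
    """Free-run simulation: feed predictions back in as lagged outputs."""
    y = list(y0)
    for t in range(len(y0), len(u)):
        reg = np.r_[np.array(y[t - ny:t])[::-1], u[t - nu:t][::-1]]
        y.append(float(reg @ theta))
    return np.array(y)

# Toy data: "emotion intensity" y driven by a physiological input u.
rng = np.random.default_rng(1)
u = rng.standard_normal(300)
y = np.zeros(300)
for t in range(2, 300):
    y[t] = 0.6 * y[t - 1] - 0.2 * y[t - 2] + 0.5 * u[t - 1] + 0.1 * u[t - 2]

theta = arx_fit(y, u)
print(np.round(theta, 3))   # recovers [0.6, -0.2, 0.5, 0.1] on this noise-free data
```

The free-run `arx_predict` is the relevant evaluation mode here, since predicting emotion intensity over time means feeding the model's own past outputs back in.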