PaperTrans International Center (派博傳思國際中心)

Title: Titlebook: Human-Computer Interaction. Recognition and Interaction Technologies; Thematic Area, HCI 2019; Masaaki Kurosu; Conference proceedings 2019; Springer

Author: irritants    Time: 2025-3-21 17:15
Book title: Human-Computer Interaction. Recognition and Interaction Technologies
[Bibliometric charts omitted; categories listed on the page: Impact Factor; Impact Factor subject ranking; Online visibility; Online visibility subject ranking; Citation count; Citation count subject ranking; Annual citations; Annual citations subject ranking; Reader feedback; Reader feedback subject ranking]

Author: STERN    Time: 2025-3-21 23:40
https://doi.org/10.1007/3-540-27840-0
…ow to fast,” just in the case of the bar indicator. Participants were more satisfied with 5 s than with 15 s. Finally, there was a positive impact on the participants’ preferences when they perceived a shorter wait duration.
Author: Provenance    Time: 2025-3-22 08:24
https://doi.org/10.1007/978-3-0348-6033-8
…also negatively affected the experience of the other user. We report specific collaboration concerns introduced by device heterogeneity. Based on these findings, we offer implications for the design of media spaces that use heterogeneous devices.
Author: 異端邪說2    Time: 2025-3-22 10:52
https://doi.org/10.1007/978-3-642-99881-2
…n based on two different sources: controllable and uncontrollable expression. The preliminary experiments suggest that, with our proposed method, classification of emotion from biological signals outperforms classification from facial expression.
Author: cumber    Time: 2025-3-22 13:43
Die Grundfaktoren des Wirtschaftens
…sted on adult data than child data. When the training and testing data were from the same group, the classifiers generally performed better for adults than for children. Implications and future work are discussed with the results.
Author: modifier    Time: 2025-3-23 09:16
A Preliminary Experiment on the Estimation of Emotion Using Facial Expression and Biological Signals
…n based on two different sources: controllable and uncontrollable expression. The preliminary experiments suggest that, with our proposed method, classification of emotion from biological signals outperforms classification from facial expression.
Author: deface    Time: 2025-3-23 11:15
Facial Expression Recognition for Children: Can Existing Methods Tuned for Adults Be Adopted for Children?
…sted on adult data than child data. When the training and testing data were from the same group, the classifiers generally performed better for adults than for children. Implications and future work are discussed with the results.
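The within-group versus cross-group protocol this fragment describes can be made concrete with a small sketch. Everything in it is a placeholder invented here (random feature vectors, synthetic labels, a plain SVM standing in for the paper's classifiers): it only illustrates training on one age group and testing on the other, compared against testing on a held-out half of the same group.

# Minimal sketch of within-group vs. cross-group evaluation (hypothetical data).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Placeholder feature vectors and 6-class emotion labels for each age group.
X_adult, y_adult = rng.normal(size=(100, 20)), rng.integers(0, 6, size=100)
X_child, y_child = rng.normal(size=(100, 20)), rng.integers(0, 6, size=100)

def accuracy(X_tr, y_tr, X_te, y_te):
    """Train an SVM on (X_tr, y_tr) and report accuracy on (X_te, y_te)."""
    return SVC().fit(X_tr, y_tr).score(X_te, y_te)

h = 50  # within-group runs train on one half and test on the other half
print("adult->adult:", accuracy(X_adult[:h], y_adult[:h], X_adult[h:], y_adult[h:]))
print("child->child:", accuracy(X_child[:h], y_child[:h], X_child[h:], y_child[h:]))
print("adult->child:", accuracy(X_adult, y_adult, X_child, y_child))
print("child->adult:", accuracy(X_child, y_child, X_adult, y_adult))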
Author: 小卒    Time: 2025-3-24 02:40
G-Menu: A Keyword-by-Gesture Based Dynamic Menu Interface for Smartphones
…by gesture, and by touching, in a user study with twenty participants, on their item selection time (measuring task efficiency), their error rate (measuring task effectiveness), and their subjective satisfaction (measuring user satisfaction).
Author: OASIS    Time: 2025-3-24 20:55
ISSN 0302-9743
…International Conference on Human-Computer Interaction, HCII 2019, which took place in Orlando, Florida, USA, in July 2019. A total of 1274 papers and 209 posters were accepted for publication in the HCII 2019 proceedings from a total of 5029 submissions. The 125 papers included in this HCI…
Author: 雄偉    Time: 2025-3-25 18:02
Lecture Notes in Computer Science
http://image.papertrans.cn/h/image/429736.jpg
Author: bourgeois    Time: 2025-3-25 21:42
Human-Computer Interaction. Recognition and Interaction Technologies
Thematic Area, HCI 2019
Author: 北極熊    Time: 2025-3-26 00:22
Investigation of the Effect of Letter Labeling Positions on Consecutive Typing on Mobile Devices
…the lower right portions, the upper left portions, and the lower left portions. According to the analyses, we found that under different symbol positions the subjects tended to click on the positions where the symbols appeared instead of the centers of the buttons.
Author: 內疚    Time: 2025-3-26 21:49
Application of Classification Method of Emotional Expression Type Based on Laban Movement Analysis t…
…80%. We could estimate emotions even when the body motions were not as large or as active as in the fabrication task. The results showed the general effectiveness of the classification method.
Author: 中止    Time: 2025-3-27 06:48
Conference proceedings 2019
…cognition; eye-gaze, gesture and motion-based interaction; and interaction in virtual and augmented reality. Part III: design for social challenges; design for culture and entertainment; design for intelligent urban environments; and design and evaluation case studies.
Author: 難理解    Time: 2025-3-27 12:26
https://doi.org/10.1007/978-3-662-10799-7
…the lower right portions, the upper left portions, and the lower left portions. According to the analyses, we found that under different symbol positions the subjects tended to click on the positions where the symbols appeared instead of the centers of the buttons.
Author: micturition    Time: 2025-3-28 08:49
https://doi.org/10.1007/978-3-663-12939-4
…80%. We could estimate emotions even when the body motions were not as large or as active as in the fabrication task. The results showed the general effectiveness of the classification method.
Author: thwart    Time: 2025-3-29 08:44
Die Krise der repräsentativen Demokratie
…ms efficiently. As 3D printers are increasingly adopted, designers are more likely to encounter difficulties in assembling 3D printers on their own, as the assembly process involves specialised skills and knowledge of fitting components in the right positions. Conventional solutions use text and video m…
Author: 仔細檢查    Time: 2025-3-29 14:57
https://doi.org/10.1007/978-3-663-12940-0
…watchband under the screen. The board is optimized for the character input method named SliT (…). An advantage of SliT is that a novice's input speed is fast and the screen occupancy rate is low. Specifically, the speed is 28.7 CPM (characters per minute) and the rate is 26.4%. In SliT, J…
Author: Increment    Time: 2025-3-29 22:40
https://doi.org/10.1007/978-3-0348-6033-8
…each partner uses the same device setup (i.e., homogeneous device arrangements). In this work, we contribute an infrastructure that supports connection between a projector-camera media space and commodity mobile devices (i.e., tablets, smartphones). Deploying three device arrangements using this infras…
Author: Evacuate    Time: 2025-3-30 01:02
https://doi.org/10.1007/978-3-658-11996-6
…e interaction and gesture recognition: when a user sketches a keyword by gesturing the first letters of its label, a menu with items related to the recognized letters is constructed dynamically and presented to the user for selection and auto-completion. The selection can be completed either gestura…
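As a rough illustration of the dynamic-menu construction described above, here is a minimal sketch. All names in it are hypothetical, not from the paper (the item vocabulary, build_menu, and the simulated one-letter-at-a-time recognizer): as each gestured letter is recognized, the visible menu is rebuilt from the items whose labels start with the recognized prefix, ready for selection or auto-completion.

# Minimal sketch of prefix-driven dynamic menu construction (hypothetical names).
MENU_ITEMS = ["camera", "calendar", "calculator", "contacts", "clock", "mail", "maps"]

def build_menu(recognized_prefix, items=MENU_ITEMS, limit=5):
    """Return up to `limit` items whose labels start with the letters
    recognized so far from the user's gestures."""
    prefix = recognized_prefix.lower()
    return [item for item in items if item.startswith(prefix)][:limit]

# Simulate the gesture recognizer emitting one more letter per step.
for letters in ("c", "ca", "cal"):
    print(letters, "->", build_menu(letters))
# c   -> ['camera', 'calendar', 'calculator', 'contacts', 'clock']
# ca  -> ['camera', 'calendar', 'calculator']
# cal -> ['calendar', 'calculator']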
Author: EWER    Time: 2025-3-30 06:13
https://doi.org/10.1007/978-3-663-12938-7
…on recognition based on deep convolutional neural networks (DCNNs) and extremely randomized trees. Specifically, we propose a method based on a DCNN, which extracts informative features from the speech signal; those features are then used by an extremely randomized trees classifier for emotion recognition.
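The two-stage pipeline this fragment names (deep features, then an extremely randomized trees classifier) can be sketched as follows. This is only an illustration under stated assumptions: the DCNN feature extractor is replaced by random placeholder vectors, the labels are synthetic, and scikit-learn's ExtraTreesClassifier stands in for the trees stage; none of the paper's data, network, or hyperparameters appear here.

# Minimal sketch: extremely randomized trees on (placeholder) DCNN features.
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 128))   # stand-in for DCNN speech features
y = rng.integers(0, 4, size=200)  # stand-in 4-class emotion labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = ExtraTreesClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))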
Author: MODE    Time: 2025-3-30 12:15
https://doi.org/10.1007/978-3-642-99881-2
…ch regarding automatic classification of human emotion to enhance human-robot communication, especially for therapy. Generally, estimating people's emotions relies on information such as facial expression, eye-gaze direction, and behaviors that are expressed externally and that the robot can observe…
Author: 排他    Time: 2025-3-30 15:59
https://doi.org/10.1007/978-3-663-12939-4
…a classification method of emotional expression type based on Laban movement analysis, which is a typical theory for dancers. In this study, we applied the classification method to design creation, which is typically performed in digital fabrication. First, we made clear what kinds of emotions are e…
Author: Nomogram    Time: 2025-3-30 19:09
Sozialprodukt und Volkseinkommen
…fficult to find an objective representation for facial expressions so that one can compare perceptions between different individuals. This is partly because the psychological spaces of facial expressions have until now been built from subjective evaluations such as the SD score or the Affective grid, in which i…
Author: membrane    Time: 2025-3-30 21:41
https://doi.org/10.1007/978-3-322-86396-6
…t is shown that the facial expression space is, in fact, not a Euclidean space but a Riemann space whose Riemann metric is defined by the JND thresholds. In this paper, we show how to transform the facial expression space to a Euclidean space in a way that preserves geometry such as distances and…
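The fragment does not show the authors' actual transformation, but one standard way to obtain an approximately distance-preserving Euclidean embedding from pairwise distances (for example, distances accumulated from JND thresholds) is classical multidimensional scaling. The sketch below is only that generic technique on toy data, not the paper's method.

# Minimal sketch: classical MDS embeds points in R^dim from a distance matrix.
import numpy as np

def classical_mds(D, dim=2):
    """Return coordinates whose Euclidean distances approximate D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    B = -0.5 * J @ (D ** 2) @ J          # double-centered Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]      # largest eigenvalues first
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Toy check: three points on a line at 0, 3, 5 (distances 3, 5, 2).
D = np.array([[0.0, 3.0, 5.0], [3.0, 0.0, 2.0], [5.0, 2.0, 0.0]])
print(classical_mds(D, dim=1).ravel())  # recovers the line up to shift/flip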
Author: 群居動物    Time: 2025-3-31 02:13
Die Firma und der Kapitalmarkt
…Circumplex Model of Affect [.], from monitoring the user’s pupil diameter and facial expression [.]. The details of the original design plan for this system have been described previously [.]. The outline describes each part of the data collection process, including: obtaining 3D facial coordinates by Kinect…
Author: Melanocytes    Time: 2025-3-31 05:06
Volkswirtschaftliches Rechnungswesen
…image data of the facial expression, and speech emotion recognition analyzing voice data. However, since facial expressions and speech can be arbitrarily changed, they can be said to lack the objectivity necessary for emotion estimation. Therefore, emotion analysis using biological signals such…
Author: 混合,攙雜    Time: 2025-3-31 14:08
Investigation of the Effect of Letter Labeling Positions on Consecutive Typing on Mobile Devices
…and inconvenient. This is because of the small size of the keys, the difference between viewing angle and touch point, the tilt of the device, etc. When entering text, the eye conducts a visual search for the target key and then casts attention on the symbol on the button. In this research we would l…



