Title: Artificial Intelligence in HCI; 5th International Conference; Helmut Degen, Stavroula Ntoa (Eds.); Conference proceedings, 2024; The Editor(s) (if applicable)
Posted by: MOURN, 2025-3-21 18:30
Bibliographic metrics listed for Artificial Intelligence in HCI:
- Impact factor
- Impact factor (subject ranking)
- Online visibility
- Online visibility (subject ranking)
- Citation count
- Citation count (subject ranking)
- Annual citations
- Annual citations (subject ranking)
- Reader feedback
- Reader feedback (subject ranking)
https://doi.org/10.1007/978-3-031-60606-9
Artificial Intelligence in HCI; Human-Centered Artificial Intelligence; Dialogue systems; Language mode…
978-3-031-60605-2
The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG
Evaluation of Generative AI-Assisted Software Design and Engineering: A User-Centered Approach
…integration of generative AI tools in software development processes is scrutinized across various phases, from ideation to deployment. By conducting a literature review and a preliminary evaluation with 18 students, this study identifies critical tasks within the software development life cycle where…
…ture and therefore the seeming opacity and inscrutability of their characteristics and decision-making process. The literature extensively discusses how this can lead to phenomena of over-reliance and under-reliance, ultimately limiting the adoption of AI. We addressed these issues by building a theor…
Artificial Intelligence in HCI
978-3-031-60606-9; Series ISSN 0302-9743; Series E-ISSN 1611-3349
Qualitative User-Centered Requirements Analysis for a Recommender System for a Project Portfolio Platform
…challenges in navigating through the abundance of available resources. To tackle this issue, a recommender system is being designed to support a digital project portfolio platform. The system aims to improve transparency, networking, collaboration, and cooperation among educational stakeholders. Per…
Examining User Perceptions to Vocal Interaction with AI Bots in Virtual Reality and Mobile Environments
…cessible to everyone. This article evaluates the use of voicebots, which are chatbots with vocal interfaces, as educational tools to enhance conversational skills in adult education. The assessment considers their impact on the learning experience and their effects on conversation-related anxiety. T…
Evaluating the Effectiveness of the Peer Data Labelling System (PDLS)
…(ML) algorithms for use in Child Computer Interaction (CCI) research and development. For a supervised ML model to make accurate predictions, it requires accurate data on which to train. Poor-quality input data to systems results in poor-quality outputs, often referred to as garbage in, garbage out…
Enhancing Historical Understanding in School Students: Designing a VR Application with AI-Animated C…
…al and informal settings are becoming more prevalent as the cost of headsets decreases, making these technologies more accessible to students and schools alike. Furthermore, the emergence of Large Language Models, such as the one supporting ChatGPT, which is widely available, opens up new possibilit…
What Makes People Say Thanks to AI
…ods. We focused on the evolving sophistication of AI, especially in large language models like GPT, and its influence on user behavior and perception. A notable finding is the significant correlation between AI intelligence and the frequency of user politeness. As AI progressively mimics human-like…
How to Explain It to System Testers?
…or the user role “system tester of AI-based systems.” It investigates whether established explanation types adequately address the explainability requirements of ML-based application testers. Through a qualitative study (n = 12), we identified the explanation needs for three user tasks: test strateg…
WisCompanion: Integrating the Socratic Method with ChatGPT-Based AI for Enhanced Explainability in E…
…to provide tailored emotional support for older adults. This unique combination fosters critical thinking and engagement through iterative questioning, explicitly addressing older adults’ cognitive and emotional needs. This paper outlines a systematic approach for integrating a Socratic, ethical, an…
What Is the Focus of XAI in UI Design? Prioritizing UI Design Principles for Enhancing XAI User Experience
…the importance of user experience in XAI has become increasingly prominent. Simultaneously, the user interface (UI) serves as a crucial link between XAI and users. However, despite the existence of UI design principles for XAI, there is a lack of prioritization based on their significance. This wi…
Navigating Transparency: The Influence of On-demand Explanations on Non-expert User Interaction with…
…y advanced technology on the one hand and non-expert users on the other hand can lead to unfounded reliance. To improve collaboration between end-users and the system, Explainable AI (XAI) is gaining momentum. However, recent studies yield mixed results on the effects of explanations, leaving uncert…
Operationalizing AI Explainability Using Interpretability Cues in the Cockpit: Insights from User-Ce…
…uested in the European Aviation Safety Agency’s AI Roadmap 2.0 in order to meet the requirement of Trustworthy AI. The IPAS is currently being developed to provide AI-based decision support in commercial aircraft to assist the flight crew, especially in emergency situations. The development of the I…
…-driven technologies; AI in industry and operations. Part III: Large language models for enhanced interaction; advancing human-robot interaction through AI; AI applications for social impact and human wellbeing.
978-3-031-60605-2; 978-3-031-60606-9; Series ISSN 0302-9743; Series E-ISSN 1611-3349
…man review process found that the pupil observers and reviewers reached consensus in classifying most of the data as engaged. Recognising disengagement is more challenging, and further work is required to ensure that there is more consistency in what the participants recognise as engagement and dise…
…ML-based prototypes using Explainable AI (XAI) techniques. We employed the cognitive walk-through method, wherein experts engaged in a series of activities with PyFlowML while sharing their thoughts in real time during the session. While initial findings are promising, they also indicate that to ef…
…s in conversational AI creates a dynamic and multi-layered dialogue structure. These techniques work in unison to foster a deeper understanding of the user’s perspectives, emotions, and experiences, thereby significantly enhancing the quality of AI-older adult interactions.
…ng decisions in emergencies. The focus of the research was to identify initial interpretability requirements and to answer the question of what interpretation cues pilots need from the AI-based system. Based on a user study with airline pilots, four requirements for interpretation cues were formulated.
Evaluation of Generative AI-Assisted Software Design and Engineering: A User-Centered Approach
…study highlights the importance of human-AI collaboration, suggesting that while generative AI can significantly support software development tasks, human oversight and critical evaluation of AI-generated outputs remain essential. This research contributes to understanding how generative AI tools ca…
A Three-Year Analysis of Human Preferences in Delegating Tasks to AI
…amine whether the data fit the hypothesized delegation framework and analyze the changes. Furthermore, it enabled us to estimate the magnitudes of the effects of latent factors. Through these two statistical analyses, we discovered that people have become more and more risk-conscious with the develo…
Enhancing Historical Understanding in School Students: Designing a VR Application with AI-Animated C…
…and ask questions to better understand a historical situation. Additionally, students can immerse themselves in the app in small groups, interact with multiple avatars simultaneously, listen to different theses, and avatars can engage in dialogue with each other while the student observes.