派博傳思國際中心

Title: Explainable Artificial Intelligence; First World Conference; Luca Longo (Ed.); Conference proceedings 2023; The Editor(s) (if applicable) and The Author(s)

Author: FORAY    Posted: 2025-3-21 18:29
Bibliometric entries for Explainable Artificial Intelligence: impact factor (influence) and its subject ranking; online visibility and its subject ranking; citation count and its subject ranking; annual citations and their subject ranking; reader feedback and its subject ranking.

Author: squander    Posted: 2025-3-21 23:50
Evaluating Self-attention Interpretability Through Human-Grounded Experimental Protocol
… significantly better than a random baseline regarding average participant reaction time and accuracy. Moreover, the data analysis highlights that high-probability predictions induce great explanation relevance. This work shows how self-attention can be aggregated and used to explain Transformer classifiers. …
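To make the idea of aggregating self-attention concrete, here is a minimal sketch using the Hugging Face transformers API: it averages attention over heads and layers and reads the [CLS]-to-token row as a per-token relevance score. The checkpoint name and the mean-over-heads, mean-over-layers aggregation are illustrative assumptions, not the exact protocol evaluated in the chapter:

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumed example checkpoint; any Transformer classifier that returns attentions works.
MODEL_NAME = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, output_attentions=True)
model.eval()

def attention_relevance(text):
    """Average attention over heads and layers; return the [CLS]-to-token scores."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc)
    att = torch.stack(out.attentions)      # (layers, batch, heads, seq, seq)
    att = att.mean(dim=2)[:, 0]            # mean over heads, drop batch -> (layers, seq, seq)
    cls_row = att[:, 0, :]                 # attention from [CLS] to every token, per layer
    scores = cls_row.mean(dim=0)           # mean over layers -> one score per token
    tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0])
    return list(zip(tokens, scores.tolist()))

print(attention_relevance("The explanation was surprisingly convincing."))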
Author: 危機    Posted: 2025-3-22 16:17
Causal-Based Spatio-Temporal Graph Neural Networks for Industrial Internet of Things Multivariate Time Series Forecasting
… data features effectively. Experimental results on industrial datasets demonstrate that the proposed method outperforms existing baselines and achieves state-of-the-art performance. The proposed approach offers a promising solution for accurate and interpretable spatio-temporal data forecasting.
Author: nutrition    Posted: 2025-3-23 06:46
Development of a Human-Centred Psychometric Test for the Evaluation of Explanations Produced by XAI
… ability. The questionnaire development process was divided into two phases. First, a pilot study was designed and carried out to test the first version of the questionnaire. The results of this study were exploited to create a second, refined version of the questionnaire. The questionnaire was evaluated …
Author: 內(nèi)部    Posted: 2025-3-23 13:43
Adding Why to What? Analyses of an Everyday Explanation
… from a video recall to explore how Explainers (EX) justified their explanations. We found that EX focused on the physical aspects of the game first (Architecture) and only later on aspects of Relevance. Reasoning in the video recalls indicated that EX regarded the focus on the Architecture …
Author: objection    Posted: 2025-3-23 18:29
The Importance of Distrust in AI
… to prevent both disuse of these systems as well as overtrust. From our analysis of research on interpersonal trust, trust in automation, and trust in (X)AI, we identify the potential merit of the distinction between trust and distrust (in AI). We propose that alongside trust, a healthy amount of distrust …
Author: ungainly    Posted: 2025-3-23 23:08
Leveraging Group Contrastive Explanations for Handling Fairness
… insights through a comprehensive explanation of the decision-making process, enabling businesses to detect the presence of direct discrimination on the target variable and to choose the most appropriate fairness framework.
Author: Generator    Posted: 2025-3-24 02:51
… problem-solving strategies. Additionally, by inspecting the attention weights layer by layer, we uncover the unconventional finding that layer 10, rather than the model's final layer, is the optimal layer to unfreeze for the least parameter-intensive approach to fine-tuning the model. We support these …
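To make the "unfreeze a single layer" idea concrete, here is a minimal sketch using the Hugging Face transformers API that freezes every parameter except one encoder block and the classification head. The bert-base-uncased checkpoint, the 0-based block index 10, and the choice of head are assumptions for illustration; the chapter's exact configuration may differ:

from transformers import AutoModelForSequenceClassification

# Assumed base checkpoint; the indexing convention (0- vs 1-based) is also an assumption.
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

for param in model.parameters():
    param.requires_grad = False            # freeze the whole network first

for name, param in model.named_parameters():
    # keep only one encoder block and the classification head trainable
    if name.startswith("bert.encoder.layer.10.") or name.startswith("classifier."):
        param.requires_grad = True

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable parameters: {trainable:,} of {total:,}")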
Author: 云狀    Posted: 2025-3-25 06:08
… features from the EEG data. Despite impressive test accuracy, a fundamental need remains for an in-depth comprehension of the models. Attributions proffer initial insights into the decision-making process. Still, they did not allow us to determine why specific channels are more contributory than others …
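One simple way to ask which channels contribute most is to aggregate per-timestep attributions into a per-channel score; the sketch below does this with a mean of absolute attribution values. The array shape and the aggregation rule are illustrative assumptions, not the method discussed in the chapter:

import numpy as np

def channel_importance(attributions):
    """attributions: (trials, channels, timesteps) -> mean |attribution| per channel."""
    return np.abs(attributions).mean(axis=(0, 2))

# Toy usage: 32 trials, 64 EEG channels, 256 timesteps of made-up attribution values.
rng = np.random.default_rng(42)
attr = rng.normal(size=(32, 64, 256))
scores = channel_importance(attr)
print("most contributory channels:", np.argsort(scores)[::-1][:5])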
Author: FRAUD    Posted: 2025-3-26 16:36
Conference proceedings 2023
… for time series and Natural Language Processing; Human-centered explanations and xAI for Trustworthy and Responsible AI; Explainable and Interpretable AI with Argumentation, Representational Learning and concept extraction for xAI.
Author: 明智的人    Posted: 2025-3-26 17:03
… for time series and Natural Language Processing; Human-centered explanations and xAI for Trustworthy and Responsible AI; Explainable and Interpretable AI with Argumentation, Representational Learning and concept extraction for xAI. ISBN 978-3-031-44069-4 · ISBN 978-3-031-44070-0 · Series ISSN 1865-0929 · Series E-ISSN 1865-0937
Author: Insufficient    Posted: 2025-3-27 06:25
… meaningful concepts, achieving 4.8% higher concept completeness and 36.5% lower purity scores on average; (iii) provide high-quality concept-based logic explanations for their predictions; and (iv) support effective interventions at test time: these can increase human trust as well as improve model performance.
Author: 賠償    Posted: 2025-3-28 03:12
Weighted Mutual Information for Out-Of-Distribution Detection
… or not. In this paper, we study an out-of-distribution (OoD) detection approach based on a rule-based eXplainable Artificial Intelligence (XAI) model. Specifically, the method relies on an innovative metric, the weighted mutual information, able to capture the different ways decision rules are used on in-distribution and OoD data.
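For intuition, here is a minimal sketch of one standard weighted-mutual-information formulation, applied to "which decision rule fired" versus "which class was predicted". The weighting scheme, the choice of variables, and any thresholding into an OoD decision are illustrative assumptions; the chapter's exact metric may differ:

import numpy as np

def weighted_mutual_information(rules, preds, weights):
    """WMI = sum_{r,c} w[r,c] * p(r,c) * log( p(r,c) / (p(r) * p(c)) )."""
    n_rules, n_classes = weights.shape
    joint = np.zeros((n_rules, n_classes))
    for r, c in zip(rules, preds):
        joint[r, c] += 1.0
    joint /= joint.sum()                             # joint distribution p(r, c)
    p_rule = joint.sum(axis=1, keepdims=True)        # marginal over rules
    p_class = joint.sum(axis=0, keepdims=True)       # marginal over classes
    ratio = np.where(joint > 0, joint / (p_rule * p_class), 1.0)
    return float(np.sum(weights * joint * np.log(ratio)))

# Toy usage: 3 rules, 2 classes; uniform weights reduce WMI to ordinary mutual information.
rules = np.array([0, 0, 1, 1, 2, 2, 0, 1])
preds = np.array([0, 0, 1, 1, 0, 1, 0, 1])
print(weighted_mutual_information(rules, preds, np.ones((3, 2))))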
Author: 停止償付    Posted: 2025-3-28 09:28
Communications in Computer and Information Science (book series); image: http://image.papertrans.cn/e/image/319287.jpg
Author: TOXIC    Posted: 2025-3-28 16:23
… Transformers remain hard to interpret and are considered black boxes. In this paper we assess how attention coefficients from Transformers help in providing classifier interpretability when properly aggregated. A fast and easy-to-implement way of aggregating attention is proposed to build local …
Author: 流動性    Posted: 2025-3-28 20:19
… to find and remove online hate speech, which would address a critical problem. A variety of explainable AI strategies are being developed to make model judgments and justifications intelligible to people as artificial intelligence continues to permeate numerous industries and make critical changes. Our …
Author: 我正派    Posted: 2025-3-29 01:39
…, in order to explain them to humans. Social science research states that such explanations should be conversational, similar to human-to-human explanations. In this work, we show how to incorporate XAI into a conversational agent, using a standard design for the agent comprising natural language understanding …
Author: THE    Posted: 2025-3-29 13:20
… and authentication methods. This research comprehensively compared EEG data pre-processing techniques, focusing on biometric applications. In tandem with this, the study illuminates the pivotal role of Explainable Artificial Intelligence (XAI) in enhancing the transparency and interpretability of machine …
Author: 香料    Posted: 2025-3-29 15:50
… are based on implicit time series information, ranging from contextual recommendations on smartwatches to human activity recognition in production workshops. Despite the advantages of these systems, their opaqueness and unpredictability for users have elicited concerns. To mitigate the …
Author: 珊瑚    Posted: 2025-3-29 21:31
… learning models has increased. In particular, XAI for time series data has become increasingly important in finance, healthcare, and climate science. However, evaluating the quality of explanations, such as attributions provided by XAI techniques, remains challenging. This paper provides an in-depth analysis …
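As an example of the kind of attribution-quality check such an analysis might cover, here is a minimal deletion-style faithfulness sketch for time series: occlude the most-attributed timesteps and measure the drop in the model's output. The stand-in model, baseline value, and top-k choice are placeholders, not the evaluation protocol analysed in the chapter:

import numpy as np

def deletion_score(model_fn, series, attribution, top_k, baseline=0.0):
    """Drop in the model output after occluding the top_k most-attributed timesteps."""
    original = model_fn(series)
    idx = np.argsort(attribution)[::-1][:top_k]      # indices of the most important steps
    perturbed = series.copy()
    perturbed[idx] = baseline                        # occlude them with a baseline value
    return float(original - model_fn(perturbed))     # a larger drop suggests a more faithful attribution

# Toy usage with a stand-in "model" that simply sums the signal.
rng = np.random.default_rng(0)
series = rng.normal(size=100)
attribution = np.abs(series)                         # pretend |x_t| is the attribution
print(deletion_score(lambda s: s.sum(), series, attribution, top_k=10))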
Author: 星球的光亮度    Posted: 2025-3-30 07:25
… make it comprehensible for humans. To reach this goal, it is necessary to have a reliable tool to collect the opinions of human users about the explanations generated by XAI methods for trained complex models. Psychometrics can be defined as the science behind psychological assessment. It studies the theory and …
Author: 別名    Posted: 2025-3-30 08:50
… post-hoc explanations; however, they fail to make the model itself more interpretable. To fill this gap, we introduce the Concept Distillation Module, the first differentiable concept-distillation approach for graph networks. The proposed approach is a layer that can be plugged into any graph network …
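To illustrate the general shape of a concept layer that can be plugged into a graph network, here is a minimal concept-bottleneck-style readout in plain PyTorch: pooled node embeddings are mapped to concept activations, and the prediction is made from those concepts alone. This is a generic sketch under those assumptions, not the Concept Distillation Module itself; the layer sizes, mean pooling, and sigmoid activation are illustrative:

import torch
import torch.nn as nn

class ConceptReadout(nn.Module):
    """Map pooled node embeddings onto a small concept vector, then classify from it."""
    def __init__(self, hidden_dim, n_concepts, n_classes):
        super().__init__()
        self.to_concepts = nn.Linear(hidden_dim, n_concepts)   # concept scores
        self.to_classes = nn.Linear(n_concepts, n_classes)     # prediction uses concepts only

    def forward(self, node_embeddings):
        graph_emb = node_embeddings.mean(dim=0)                # simple mean pooling over nodes
        concepts = torch.sigmoid(self.to_concepts(graph_emb))  # concept activations in [0, 1]
        logits = self.to_classes(concepts)
        return logits, concepts

# Toy usage: one graph with 5 nodes and 16-dimensional embeddings from any upstream GNN.
node_embeddings = torch.randn(5, 16)
readout = ConceptReadout(hidden_dim=16, n_concepts=8, n_classes=3)
logits, concepts = readout(node_embeddings)
print(logits.shape, concepts.shape)   # torch.Size([3]) torch.Size([8])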
Author: hidebound    Posted: 2025-3-31 02:23
… (in- or out-of-distribution) to the ones the ML system has been trained on may lead to potentially fatal consequences. Operational data compliance with the training data has to be verified by the data analyst, who must also understand, in operation, whether the autonomous decision-making is still safe …
Author: GREEN    Posted: 2025-3-31 05:48
… not discriminate against specific groups of people becomes crucial. Reaching this objective requires a multidisciplinary approach that includes domain experts, data scientists, philosophers, and legal experts to ensure complete accountability for algorithmic decisions. In such a context, Explainable …
Author: 溺愛    Posted: 2025-3-31 11:50
ISBN 978-3-031-44069-4. The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG.
Author: 砍伐    Posted: 2025-3-31 14:06
Explainable Artificial Intelligence. ISBN 978-3-031-44070-0 · Series ISSN 1865-0929 · Series E-ISSN 1865-0937
Author: LIMN    Posted: 2025-3-31 20:54
https://doi.org/10.1007/978-3-031-44070-0 · Keywords: artificial intelligence; interpretable machine learning; causal inference & explanations; argumentative …
Author: Enervate    Posted: 2025-4-1 00:03
Opening the Black Box: Analyzing Attention Weights and Hidden States in Pre-trained Language Models
… the recent advancements in pre-trained language models based on transformers and their increasing integration into daily life, addressing this issue has become more pressing. In order to achieve an explainable AI model, it is essential to comprehend the procedural steps involved and compare them with …




歡迎光臨 派博傳思國際中心 (http://www.pjsxioz.cn/) Powered by Discuz! X3.5
云龙县| 长寿区| 宜昌市| 磐安县| 瑞丽市| 金川县| 祁东县| 隆德县| 昆明市| 阳东县| 临江市| 宁阳县| 石屏县| 乡宁县| 马山县| 阿拉善右旗| 开平市| 集安市| 邳州市| 洛宁县| 深水埗区| 玉山县| 平阳县| 东乡| 甘孜县| 栾城县| 连云港市| 峨眉山市| 奉新县| 卢氏县| 大足县| 苍山县| 肇东市| 阜宁县| 门源| 宁津县| 吴忠市| 茶陵县| 明水县| 波密县| 定陶县|