Title: Explainable Artificial Intelligence — First World Conference; Luca Longo (Ed.); Conference proceedings 2023; The Editor(s) (if applicable) and The Author(s)
Natural Example-Based Explainability: A Survey
…ng models. While saliency maps have stolen the show for the last few years in the XAI field, their ability to reflect models’ internal processes has been questioned. Although less in the spotlight, example-based XAI methods have continued to improve. It encompasses methods that use examples as expla…
Contrastive Visual Explanations for Reinforcement Learning via Counterfactual Rewards
…ive explanation framework for reinforcement learning (RL) based on comparing learned policies under actual environmental rewards vs. hypothetical (counterfactual) rewards. The framework provides policy-level explanations by accessing learned Q-functions and identifying intersecting critical states.…
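A minimal sketch of the general idea behind such counterfactual-reward comparisons (this is not the chapter's exact framework; the random MDP, the "critical state" criterion, and all names below are illustrative assumptions): solve the same small MDP under the actual reward and under a hypothetical one, mark the states where each learned Q-function strongly prefers one action, and contrast the greedy actions on the intersection of those critical states.

```python
# Illustrative only: contrast two policies learned under actual vs. counterfactual rewards.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, gamma = 12, 3, 0.9

# A small random MDP: P[s, a] is a distribution over next states.
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
R_actual = rng.normal(size=(n_states, n_actions))
R_counterfactual = R_actual.copy()
R_counterfactual[:, 0] += 1.0          # hypothetical change: action 0 pays more

def q_iteration(R, iters=200):
    Q = np.zeros((n_states, n_actions))
    for _ in range(iters):
        V = Q.max(axis=1)
        Q = R + gamma * P @ V
    return Q

def critical_states(Q, top_k=4):
    # A state counts as "critical" when the best action clearly dominates the rest.
    sorted_q = np.sort(Q, axis=1)
    gap = sorted_q[:, -1] - sorted_q[:, -2]
    return set(np.argsort(-gap)[:top_k])

Q_a, Q_c = q_iteration(R_actual), q_iteration(R_counterfactual)
shared = critical_states(Q_a) & critical_states(Q_c)   # intersecting critical states
for s in sorted(shared):
    print(f"state {s}: actual policy -> action {Q_a[s].argmax()}, "
          f"counterfactual policy -> action {Q_c[s].argmax()}")
```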
A Novel Architecture for Robust Explainable AI Approaches in Critical Object Detection Scenarios Bas…
…ritical situations. We provide a novel approach focusing on potential edge cases in order to address those specific situations which are often the hardest part in bringing machine learning models into production within the aforementioned scenarios. We improve upon existing explainable artificia…
The Duet of Representations and How Explanations Exacerbate It
…with the human’s prior belief. Explanations can direct the human’s attention to the conflicting feature and away from other relevant features. This leads to causal overattribution and may adversely affect the human’s information processing. In a field experiment we implemented an XGBoost-trained mod…
Closing the Loop: Testing ChatGPT to Generate Model Explanations to Improve Human Labelling of Spons…
…ns Unfair Commercial Practices Directive (UCPD) in the European Union, or Section 5 of the Federal Trade Commission Act. Yet enforcing these obligations has proven to be highly problematic due to the sheer scale of the influencer market. The task of automatically detecting sponsored content aims to en…
Human-Computer Interaction and Explainability: Intersection and Terminology
…chnological artifacts or systems. Explainable AI (xAI) is involved in HCI to have humans better understand computers or AI systems which fosters, as a consequence, better interaction. The term “explainability” is sometimes used interchangeably with other closely related terms such as interpretabilit…
Explaining Deep Reinforcement Learning-Based Methods for Control of Building HVAC Systems
…learning techniques. However, due to the black-box nature of these algorithms, the resulting control policies can be difficult to understand from a human perspective. This limitation is particularly relevant in real-world scenarios, where an understanding of the controller is required for reliabili…
Necessary and Sufficient Explanations of Multi-Criteria Decision Aiding Models, with and Without Int…
…classify an instance in an ordered list of categories, on the basis of multiple and conflicting criteria. Several models can be used to achieve such goals ranging from the simplest one assuming independence among criteria - namely the weighted sum model - to complex models able to represent complex…
XInsight: Revealing Model Insights for GNNs with Flow-Based Explanations
…systems. While this progress is significant, many networks are ‘black boxes’ with little understanding of ‘what’ exactly the network is learning. Many high-stakes applications, such as drug discovery, require human-intelligible explanations from the models so that users can recognize errors and…
What Will Make Misinformation Spread: An XAI Perspective
…when making the decisions. Online social networks have a problem with misinformation which is known to have negative effects. In this paper, we propose to utilize XAI techniques to study what factors lead to misinformation spreading by explaining a trained graph neural network that predicts misinfo…
…lso of the importance of a missing value in a given record for a particular prediction. Extensive experiments show the effectiveness of the proposed method with respect to some baseline solutions relying on traditional data imputation.
Explaining Black-Boxes in Federated Learning
…an aggregation of the explanations provided by the clients participating in the cooperation. We empirically test our proposal on two different tabular datasets, and we observe interesting and encouraging preliminary results.
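A rough sketch of one way such an aggregation could look (an assumed setup, not the chapter's algorithm; the dataset, models, and weighting scheme are placeholders): each simulated client fits its own black-box model, computes a model-agnostic feature-importance explanation locally, and the server averages the client explanations weighted by sample count.

```python
# Illustrative federated aggregation of per-client permutation importances.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

def client_explanation(X, y, seed):
    model = RandomForestClassifier(random_state=seed).fit(X, y)   # local black-box
    imp = permutation_importance(model, X, y, n_repeats=5, random_state=seed)
    return imp.importances_mean, len(y)

X, y = make_classification(n_samples=600, n_features=8, random_state=0)
clients = [(X[i::3], y[i::3]) for i in range(3)]                  # simulated data partition

importances, sizes = zip(*(client_explanation(Xc, yc, s)
                           for s, (Xc, yc) in enumerate(clients)))
weights = np.array(sizes) / sum(sizes)
federated_importance = np.average(np.vstack(importances), axis=0, weights=weights)
print("Aggregated feature importance:", np.round(federated_importance, 3))
```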
Explaining Deep Reinforcement Learning-Based Methods for Control of Building HVAC Systems
…mplished by combining different XAI methods including surrogate models, Shapley values, and counterfactual examples. We show the results of the DRL-based controller in terms of energy consumption and thermal comfort and provide insights and explainability to the underlying control strategy using this XAI layer.
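Of the XAI methods listed above, the surrogate-model step is the easiest to illustrate. A minimal sketch, assuming a hand-written stand-in policy rather than a trained DRL controller: fit a shallow decision tree to the controller's observation-action pairs and read off the resulting rules; Shapley values and counterfactual examples would be layered on top of the same data in the same spirit.

```python
# Surrogate-model sketch: approximate a black-box HVAC policy with a shallow tree.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
# observations: [outdoor_temp, indoor_temp, occupancy, hour_of_day]
obs = np.column_stack([rng.uniform(-5, 35, 5000), rng.uniform(15, 30, 5000),
                       rng.integers(0, 2, 5000), rng.integers(0, 24, 5000)])

def black_box_policy(o):
    # stand-in controller (not a DRL agent): 0 = off, 1 = heat, 2 = cool
    heat = (o[:, 1] < 20) & (o[:, 2] == 1)
    cool = (o[:, 1] > 26) & (o[:, 2] == 1)
    return np.where(heat, 1, np.where(cool, 2, 0))

actions = black_box_policy(obs)
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(obs, actions)
print("surrogate fidelity:", surrogate.score(obs, actions))
print(export_text(surrogate,
                  feature_names=["outdoor_temp", "indoor_temp", "occupancy", "hour"]))
```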
…tion density. We conclude our paper by pointing to the two main challenges we encountered during our work: data processing and model design that might be restricted by currently available XAI methods, and the importance of domain knowledge to interpret explanations.
…color of the square indicates the classification impact of this feature. The size of the filled square describes the variability of the impact between single samples. For interesting features that require further analysis, a detailed view is necessary that provides the distribution of these values.
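A small sketch of the square-based overview described above, using random stand-in impact values (the feature names and scaling are assumptions): color encodes a feature's mean classification impact, square size encodes how much that impact varies across samples.

```python
# Illustrative per-feature impact overview: color = mean impact, size = variability.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
features = [f"f{i}" for i in range(10)]
impacts = rng.normal(0, 1, size=(200, 10)) * rng.uniform(0.2, 2.0, size=10)

mean_impact, impact_std = impacts.mean(axis=0), impacts.std(axis=0)
sizes = 2000 * impact_std / impact_std.max()      # bigger square = more variability

plt.scatter(range(len(features)), [0] * len(features), s=sizes, c=mean_impact,
            marker="s", cmap="coolwarm")
plt.xticks(range(len(features)), features)
plt.yticks([])
plt.colorbar(label="mean classification impact")
plt.title("Per-feature impact overview (color = mean, size = variability)")
plt.show()
```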
…s followed by the subsequent marker classification task via a reimplemented LeNet-like Bayesian CNN. Our presented results demonstrate that this approach to quantify the XAI techniques can improve the interpretability of both object detection and marker classification explanations by reducing their…
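For the LeNet-like Bayesian CNN mentioned above, one lightweight way to obtain Bayesian-style predictive uncertainty is Monte Carlo dropout; the sketch below uses that approximation and is not necessarily the chapter's exact architecture or inference scheme (input size, dropout rate, and class count are assumptions).

```python
# LeNet-like CNN with Monte Carlo dropout as a cheap Bayesian approximation.
import torch
import torch.nn as nn

class LeNetMCDropout(nn.Module):
    def __init__(self, n_classes=10, p=0.25):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, 5), nn.ReLU(), nn.MaxPool2d(2), nn.Dropout2d(p),
            nn.Conv2d(6, 16, 5), nn.ReLU(), nn.MaxPool2d(2), nn.Dropout2d(p))
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(16 * 4 * 4, 120), nn.ReLU(), nn.Dropout(p),
            nn.Linear(120, 84), nn.ReLU(), nn.Linear(84, n_classes))

    def forward(self, x):
        return self.classifier(self.features(x))

@torch.no_grad()
def mc_predict(model, x, samples=30):
    model.train()                       # keep dropout active at prediction time
    probs = torch.stack([model(x).softmax(dim=1) for _ in range(samples)])
    return probs.mean(0), probs.std(0)  # predictive mean and per-class uncertainty

model = LeNetMCDropout()
x = torch.randn(8, 1, 28, 28)           # dummy 28x28 marker crops
mean, std = mc_predict(model, x)
print(mean.argmax(dim=1), std.max(dim=1).values)
```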
…d by any value. We generalize the notion of information that needs to be kept not only on the values of the criteria (values of the instance of the criteria) but also on the weights of criteria (parameters of the model). For the HCI model, we propose a Mixed-Integer Linear Program (MILP) formulation…
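To make the multi-criteria sorting setting above concrete, here is an illustrative sketch only (the definitions are simplified assumptions, not the chapter's formal ones, and the MILP machinery for the HCI model is omitted): for a weighted-sum model with criteria scaled to [0, 1], a subset of criteria is treated as sufficient when its observed values alone guarantee the assignment even if every other criterion took its worst value.

```python
# Brute-force search for minimal "sufficient" subsets of criteria under a weighted-sum model.
from itertools import combinations

weights = {"price": 0.4, "safety": 0.3, "comfort": 0.2, "design": 0.1}
threshold = 0.55                       # score >= threshold -> "good" category
instance = {"price": 0.9, "safety": 0.8, "comfort": 0.3, "design": 0.2}

def score(values):
    return sum(weights[c] * values[c] for c in weights)

def is_sufficient(subset):
    # Keep the observed values on `subset`, set every other criterion to 0 (worst case).
    worst_case = {c: (instance[c] if c in subset else 0.0) for c in weights}
    return score(worst_case) >= threshold

assert score(instance) >= threshold    # the instance is assigned to "good"

minimal_sufficient = []
for size in range(1, len(weights) + 1):
    for subset in combinations(weights, size):
        if is_sufficient(subset) and not any(set(m) <= set(subset) for m in minimal_sufficient):
            minimal_sufficient.append(subset)

print("Minimal sufficient explanations:", minimal_sufficient)
```

A symmetric check (does the assignment break when a criterion's value is withheld?) would give a notion of necessity; on my reading, the chapter formalizes both and extends the withheld information to the criteria weights themselves.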
Natural Example-Based Explainability: A Survey
…simply means that it is directly drawn from the training data without involving any generative process. The exclusion of methods that require generating examples is justified by the need for plausibility which is in some regards required to gain a user’s trust. Consequently, this paper will explore…
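In the "natural" sense used above, an example-based explanation simply returns training instances. A minimal sketch (the dataset, model, and choice of raw feature space as the similarity representation are arbitrary assumptions): retrieve the nearest training examples to the instance being explained and present them alongside the prediction.

```python
# Explain a prediction with the most similar training examples (no generative step).
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)
index = NearestNeighbors(n_neighbors=3).fit(X)

query = X[[25]]                                   # instance to explain
pred = model.predict(query)[0]
_, idx = index.kneighbors(query)
print(f"prediction: class {pred}")
for i in idx[0]:
    print(f"  supporting training example #{i} (class {y[i]}): {X[i]}")
```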
Explainable Artificial Intelligence in Education: A Comprehensive Review
…ucators to interact with AI effectively, as well as how XAI impacts politics and government. Finally, we provide recommendations for additional research in this field and suggest potential future directions for XAI in educational research and practice.
PERFEX: Classifier Performance Explanations for Trustworthy AI Systems
…ing algorithm that is able to predict and explain under which conditions the base classifier has a high or low accuracy or any other classification performance metric. We evaluate PERFEX using several classifiers and datasets, including a case study with urban mobility data. It turns out that PERFEX typica…
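A sketch of the general meta-learning idea (not the authors' exact PERFEX algorithm; the dataset and models are placeholders): label held-out instances by whether the base classifier got them right, fit an interpretable meta-model on those labels, and read conditions with high or low base accuracy off its leaves.

```python
# Meta-model sketch: predict and explain where a base classifier tends to be right or wrong.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=4000, n_features=6, random_state=0)
X_train, X_eval, y_train, y_eval = train_test_split(X, y, test_size=0.5, random_state=0)

base = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
correct = (base.predict(X_eval) == y_eval).astype(int)   # meta-label: 1 if base was right

meta = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_eval, correct)
print(export_text(meta, feature_names=[f"x{i}" for i in range(X.shape[1])]))

# Each leaf is a condition on the inputs plus the base model's empirical accuracy under it.
leaf = meta.apply(X_eval)
for leaf_id in np.unique(leaf):
    mask = leaf == leaf_id
    print(f"leaf {leaf_id}: n={mask.sum()}, base accuracy={correct[mask].mean():.2f}")
```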
Human-Computer Interaction and Explainability: Intersection and Terminology
…ies and relationships) has its specificity and properties which need to be clearly defined. Currently, the definitions of these various terms are not well established in the literature, and their usage in various contexts is ambiguous. The goals of this paper are threefold: First, clarify the termin…
XInsight: Revealing Model Insights for GNNs with Flow-Based Explanations
…ly learn the maximum reward sample. We demonstrate XInsight by generating explanations for GNNs trained on two graph classification tasks: classifying mutagenic compounds with the MUTAG dataset and classifying acyclic graphs with a synthetic dataset that we have open-sourced. We show the utility of…
What Will Make Misinformation Spread: An XAI Perspective
…is proposed for predicting misinformation spread by leveraging GraphSAGE with heterogeneous graph convolution. Secondly, we propose an explanation module that uses gradient-based and perturbation-based methods, to identify what makes misinformation spread by explaining the trained prediction module…
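A simplified, homogeneous sketch of the two components mentioned above (the chapter uses heterogeneous graph convolution and real social-network data; the toy graph and labels below are made up, and PyTorch Geometric is assumed to be installed): a two-layer GraphSAGE node classifier plus a gradient-based saliency over input node features for one node's prediction.

```python
# GraphSAGE node classifier + gradient-based feature saliency on a toy graph.
import torch
import torch.nn.functional as F
from torch_geometric.nn import SAGEConv

class SAGE(torch.nn.Module):
    def __init__(self, in_dim, hidden, n_classes):
        super().__init__()
        self.conv1 = SAGEConv(in_dim, hidden)
        self.conv2 = SAGEConv(hidden, n_classes)

    def forward(self, x, edge_index):
        h = F.relu(self.conv1(x, edge_index))
        return self.conv2(h, edge_index)

# Toy graph: 6 nodes, 8 features, a few undirected edges, binary "spreads / does not".
x = torch.randn(6, 8)
edge_index = torch.tensor([[0, 1, 1, 2, 2, 3, 4, 5],
                           [1, 0, 2, 1, 3, 2, 5, 4]])
y = torch.tensor([0, 0, 1, 1, 0, 1])

model = SAGE(8, 16, 2)
opt = torch.optim.Adam(model.parameters(), lr=0.01)
for _ in range(100):                           # tiny training loop
    opt.zero_grad()
    loss = F.cross_entropy(model(x, edge_index), y)
    loss.backward()
    opt.step()

# Gradient-based explanation for node 2: saliency of each input feature of each node.
x_exp = x.clone().requires_grad_(True)
model(x_exp, edge_index)[2, y[2]].backward()
saliency = x_exp.grad.abs()                    # |d logit / d feature|
print("most influential (node, feature):",
      divmod(int(saliency.argmax()), saliency.shape[1]))
```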
Conference proceedings 2023. Chapters “Finding Spurious Correlations with Function-Semantic Contrast Analysis” and “Explaining Socio-Demographic and Behavioral Patterns of Vaccination Against the Swine Flu (H1N1) Pandemic” are available open access under a Creative Commons Attribution 4.0 International License via link.springer.com.
https://doi.org/10.1007/978-3-031-44067-0
Keywords: artificial intelligence; interpretable machine learning; causal inference & explanations; argumentative…
978-3-031-44066-3 / 978-3-031-44067-0; Series ISSN 1865-0929; Series E-ISSN 1865-0937