派博傳思國際中心

Title: Titlebook: Explainable Artificial Intelligence; First World Conference; Luca Longo; Conference proceedings 2023; The Editor(s) (if applicable) and The Author(s)

Author: 固執(zhí)已見    Time: 2025-3-21 19:54
Book title: Explainable Artificial Intelligence
Bibliometric indicators listed in this post (the accompanying charts were not preserved): impact factor (influence); impact factor subject ranking; online visibility; online visibility subject ranking; citation count; citation count subject ranking; annual citations; annual citations subject ranking; reader feedback; reader feedback subject ranking.

Author: 厚顏無恥    Time: 2025-3-22 00:10
…Illocutionary acts concern the speaker’s intended meaning, perlocutionary acts refer to the listener’s reaction, and locutionary acts are about the speech act itself. We suggest a new way to categorise established definitions of explanation based on these speech act principles. This method enhances our grasp of…
Author: 一小塊    Time: 2025-3-22 04:07

Author: Outspoken    Time: 2025-3-22 05:00

Author: GREEN    Time: 2025-3-22 09:15

Author: 歡笑    Time: 2025-3-22 15:57

Author: 歡笑    Time: 2025-3-22 18:40

Author: Arb853    Time: 2025-3-23 09:07

Author: KIN    Time: 2025-3-23 17:55
…es. Among the various XAI techniques, Counterfactual (CF) explanations have a distinctive advantage, as they can be generated post-hoc while still preserving the complete fidelity of the underlying model. The generation of feasible and actionable CFs is a challenging task, which is typically tackled…
Author: DRILL    Time: 2025-3-24 10:54
…causal structure learning algorithms. GCA generates an explanatory graph from high-level human-interpretable features, revealing how these features affect each other and the black-box output. We show how these high-level features do not always have to be human-annotated, but can also be computationally…
Author: Opponent    Time: 2025-3-24 15:46

Author: antenna    Time: 2025-3-24 23:01

Author: 無可非議    Time: 2025-3-25 01:36
ISBN 978-3-031-44063-2. The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG…
Author: 擁擠前    Time: 2025-3-25 04:36
Explainable Artificial Intelligence. ISBN 978-3-031-44064-9. Series ISSN 1865-0929; Series E-ISSN 1865-0937.
Author: 做事過頭    Time: 2025-3-25 11:34

Author: acrobat    Time: 2025-3-25 14:47
The Xi Method: Unlocking the Mysteries of Regression with Statistics
…Additionally, the Xi method is model and data agnostic, meaning that it can be applied to a wide range of machine learning models and data types. To demonstrate the potential of the Xi method, we provide applications that encompass three different data types: tabular, image, and text.
Author: CHOIR    Time: 2025-3-25 18:36

Author: Analogy    Time: 2025-3-25 22:53

Author: Bucket    Time: 2025-3-26 01:39

Author: 浪費物質    Time: 2025-3-26 04:20
Do Intermediate Feature Coalitions Aid Explainability of Black-Box Models?
…meronomies, i.e., part-whole relationships, via a domain expert that can be utilised to generate explanations at an abstract level. We illustrate the usability of this approach in a real-world car model example and the Titanic dataset, where intermediate concepts aid in explainability at different levels of abstraction.
Author: 詞匯表    Time: 2025-3-26 09:12
Conference proceedings 2023
…held in Lisbon, Portugal, in July 2023. The 94 papers presented were thoroughly reviewed and selected from the 220 qualified submissions. They are organized in the following topical sections: Part I: Interdisciplinary perspectives, approaches and strategies for xAI; Model-agnostic explanations, methods and…
Author: 隱士    Time: 2025-3-26 16:49
…on-manifold data for the calculation of Shapley values upfront, instead of having to estimate a large number of conditional densities or make strong parametric assumptions. Through real and simulated data experiments, we demonstrate the effectiveness of knockoff imputation against adversarial attacks.
Author: indoctrinate    Time: 2025-3-27 00:27
…strategies to use the explanation to improve a classification system are reported and empirically evaluated on three datasets: Fashion-MNIST, CIFAR10, and STL10. Results suggest that explanations built by Integrated Gradients highlight input features that can be effectively used to improve classification performance.
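Since the fragment above credits Integrated Gradients with highlighting useful input features, a minimal sketch of that attribution formula may help; the PyTorch model, scalar score function, zero baseline, and step count are illustrative assumptions, not details from the paper:

```python
import torch

def integrated_gradients(f, x, baseline=None, steps=50):
    """Riemann-sum approximation of IG_i(x) = (x_i - b_i) * integral of df/dx_i along b -> x."""
    if baseline is None:
        baseline = torch.zeros_like(x)          # zero baseline (an assumption)
    alphas = (torch.arange(steps, dtype=x.dtype) + 0.5) / steps
    total = torch.zeros_like(x)
    for a in alphas:
        point = (baseline + a * (x - baseline)).requires_grad_(True)
        f(point).backward()                     # f must return a scalar score
        total += point.grad
    return (x - baseline) * total / steps       # scale the averaged gradients

# Toy usage: a linear layer's score, where IG recovers weight * input exactly.
model = torch.nn.Linear(4, 1, bias=False)
score = lambda z: model(z).sum()
x = torch.randn(4)
print(integrated_gradients(score, x))
```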
Author: 變異    Time: 2025-3-27 01:54
…es the fidelity and stability of different methods, and shows how explanations that use contextual importance and contextual utility can provide more expressive and flexible explanations than when using influence only.
Author: harbinger    Time: 2025-3-27 10:42
XAI Requirements in Smart Production Processes: A Case Study
…us associated expectations, as well as non-functional explainability requirements, we show how business-oriented XAI requirements can be formulated and prepared for integration into process design. This case study is a valuable resource for researchers and practitioners seeking to better understand the role of explainable AI in practice.
Author: Mosaic    Time: 2025-3-27 16:50

Author: impale    Time: 2025-3-27 21:43

Author: 刺激    Time: 2025-3-27 23:17

Author: Palate    Time: 2025-3-28 05:40
The Importance of Time in Causal Algorithmic Recourse
…variables. In this work, we motivate the need to integrate the temporal dimension into causal algorithmic recourse methods to enhance recommendations’ plausibility and reliability. The experimental evaluation highlights the significance of the role of time in this field.
Author: rectocele    Time: 2025-3-28 12:48
XAI Requirements in Smart Production Processes: A Case Study
…ms. This is often emphasized with respect to societal and developmental contexts, but it is also crucial within the context of business processes, including manufacturing and production. While this is widely recognized, there is a notable lack of practical examples that demonstrate how to take expla…
Author: Inflammation    Time: 2025-3-28 17:44

Author: RECUR    Time: 2025-3-28 19:40
Dear XAI Community, We Need to Talk!
…Unfortunately, these unfounded parts are not on the decline but continue to grow. Many explanation techniques are still proposed without clarifying their purpose. Instead, they are advertised with ever more fancy-looking heatmaps or only seemingly relevant benchmarks. Moreover, explanation techniques are…
Author: 名次后綴    Time: 2025-3-29 02:11
Speeding Things Up. Can Explainability Improve Human Learning?
…such circumstances, the algorithm requests a teacher, usually a human, to select or verify the system’s prediction on the most informative points. The most informative usually refers to the instances that are the hardest for the algorithm to label. However, it has been proven that humans are more l…
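The fragment describes the query step of active learning, where the hardest instances are sent to a human teacher. A minimal sketch of one common instantiation, uncertainty sampling by predictive entropy; the entropy criterion and toy data are assumptions, not the paper's setup:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def query_most_informative(model, X_pool, k=5):
    """Indices of the k pool points with the highest predictive entropy."""
    proba = model.predict_proba(X_pool)
    entropy = -np.sum(proba * np.log(proba + 1e-12), axis=1)
    return np.argsort(entropy)[-k:]             # hardest-to-label instances

# Toy usage: fit on a small labelled seed set, then pick points to query.
rng = np.random.default_rng(0)
X_seed = rng.normal(size=(20, 3))
y_seed = np.r_[np.zeros(10, dtype=int), np.ones(10, dtype=int)]
X_pool = rng.normal(size=(200, 3))
clf = LogisticRegression().fit(X_seed, y_seed)
print(query_most_informative(clf, X_pool))
```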
Author: fastness    Time: 2025-3-29 06:15

Author: CLAY    Time: 2025-3-29 08:25

Author: excursion    Time: 2025-3-29 15:01
Do Intermediate Feature Coalitions Aid Explainability of Black-Box Models?
…a hierarchical structure in which each level corresponds to features of a dataset (i.e., a player-set partition). The level of coarseness increases from the trivial set, which only comprises singletons, to the set, which only contains the grand coalition. In addition, it is possible to establish meronomies…
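To make the coalition idea concrete, here is a hedged sketch that treats a group of features as a single player and measures its contribution by toggling the whole group against a baseline; the two-level "car-like" grouping and the difference-based score are hypothetical illustrations, not the paper's construction:

```python
import numpy as np

def coalition_contribution(predict, x, baseline, coalition):
    """Model output with the coalition set to x's values, minus output at the baseline."""
    z = baseline.copy()
    z[list(coalition)] = x[list(coalition)]      # toggle the whole group at once
    return predict(z) - predict(baseline)

# Toy usage: features 0-1 form an "engine" concept, 2-3 a "comfort" concept.
predict = lambda z: float(2 * z[0] + z[1] - 0.5 * z[2] + 0.1 * z[3])
x, base = np.ones(4), np.zeros(4)
for name, group in {"engine": (0, 1), "comfort": (2, 3)}.items():
    print(name, coalition_contribution(predict, x, base, group))
```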
Author: 尊敬    Time: 2025-3-29 18:04
Unfooling SHAP and SAGE: Knockoff Imputation for Shapley Values
…attributions are susceptible to adversarial attacks. This originates from target function evaluations at extrapolated data points, which are easily detectable and hence, enable models to behave accordingly. In this paper, we introduce a novel strategy for increased robustness against adversarial attacks…
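A minimal sketch of the failure mode this abstract targets: marginal (interventional) Shapley estimation splices feature values from two real rows, producing off-manifold points that an adversarial model can detect. The toy permutation estimator below is a generic baseline, not the paper's knockoff-imputation method:

```python
import numpy as np

def marginal_shapley(f, x, X_bg, n_perm=200, rng=None):
    """Permutation estimate of Shapley values with marginal (background) imputation."""
    rng = rng or np.random.default_rng(0)
    d = len(x)
    phi = np.zeros(d)
    for _ in range(n_perm):
        order = rng.permutation(d)
        z = X_bg[rng.integers(len(X_bg))].copy()  # background row plays "absent" features
        prev = f(z)
        for j in order:                           # add x's features one at a time
            z[j] = x[j]                           # spliced point can leave the data manifold
            cur = f(z)
            phi[j] += cur - prev
            prev = cur
    return phi / n_perm

# Toy usage with a linear model, whose Shapley values have a closed form.
rng = np.random.default_rng(1)
X_bg = rng.normal(size=(100, 3))
w = np.array([1.0, -2.0, 0.5])
f = lambda z: float(z @ w)
x = np.ones(3)
print(marginal_shapley(f, x, X_bg))               # approx. w * (x - X_bg.mean(axis=0))
```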
Author: commune    Time: 2025-3-29 20:07
Strategies to Exploit XAI to Improve Classification Systems
…results beyond their decisions. A significant goal of XAI is to improve the performance of AI models by providing explanations for their decision-making processes. However, most XAI literature focuses on how to explain an AI system, while less attention has been given to how XAI methods can be exploited…
Author: 抓住他投降    Time: 2025-3-30 00:19
Beyond Prediction Similarity: ShapGAP for Evaluating Faithful Surrogate Models in XAI
…models. Surrogation, emulating a black-box model (BB) with a white-box model (WB), is crucial in applications where BBs are unavailable due to security or practical concerns. Traditional fidelity measures only evaluate the similarity of the final predictions, which can lead to a significant limitation…
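A hedged sketch of the contrast the abstract draws: traditional fidelity scores prediction agreement only, while an explanation-space distance can separate surrogates that reason differently. The L2 distance between attribution vectors below is an assumed stand-in, not necessarily the ShapGAP metric itself:

```python
import numpy as np

def prediction_fidelity(y_bb, y_wb):
    """Share of inputs where surrogate and black-box predict the same class."""
    return float(np.mean(np.asarray(y_bb) == np.asarray(y_wb)))

def explanation_gap(phi_bb, phi_wb):
    """Mean L2 distance between per-instance attribution vectors (one row each)."""
    diff = np.asarray(phi_bb) - np.asarray(phi_wb)
    return float(np.mean(np.linalg.norm(diff, axis=1)))

# Same predictions, opposite rationales: fidelity is perfect, the gap is not.
y_bb = np.array([0, 1, 1, 0])
y_wb = y_bb.copy()                                # surrogate matches every label
phi_bb = np.array([[0.9, 0.1], [0.2, 0.8], [0.1, 0.9], [0.7, 0.3]])
phi_wb = np.array([[0.1, 0.9], [0.8, 0.2], [0.9, 0.1], [0.3, 0.7]])
print(prediction_fidelity(y_bb, y_wb), explanation_gap(phi_bb, phi_wb))
```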
Author: 動脈    Time: 2025-3-30 07:32
iPDP: On Partial Dependence Plots in Dynamic Modeling Scenarios
…explainable artificial intelligence (XAI) to understand black-box machine learning models. While many real-world applications require dynamic models that constantly adapt over time and react to changes in the underlying distribution, XAI, so far, has primarily considered static learning environments, where…
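For reference, a minimal sketch of the static partial dependence plot the iPDP work builds on: sweep one feature over a grid and average the model output across the data. The toy model and grid are assumptions; the paper's incremental updates for drifting streams are not reproduced here:

```python
import numpy as np

def partial_dependence(predict, X, feature, grid):
    """PD(v) = mean over rows i of predict(X_i with X_i[feature] set to v)."""
    pd_values = []
    for v in grid:
        Xv = X.copy()
        Xv[:, feature] = v                      # intervene on a single feature
        pd_values.append(predict(Xv).mean())
    return np.array(pd_values)

# Toy usage with a known nonlinear model: PD for feature 0 should follow v**2.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
predict = lambda Z: Z[:, 0] ** 2 + 0.5 * Z[:, 1]
grid = np.linspace(-2, 2, 9)
print(partial_dependence(predict, X, feature=0, grid=grid))
```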
Author: 前奏曲    Time: 2025-3-30 10:41

Author: hangdog    Time: 2025-3-30 16:10
Algorithm-Agnostic Feature Attributions for Clustering
…de such feature attributions has been limited. Clustering algorithms with built-in explanations are scarce. Common algorithm-agnostic approaches involve dimension reduction and subsequent visualization, which transforms the original features used to cluster the data; or training a supervised learning…
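A short sketch of the common workaround the abstract mentions, training a supervised learner on the cluster labels and reading attributions off it; the KMeans-plus-permutation-importance pairing is an illustrative choice, not the paper's proposed algorithm:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
# Two blobs separated only along feature 0; feature 1 is pure noise.
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
X[:, 1] = rng.normal(size=200)

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
clf = RandomForestClassifier(random_state=0).fit(X, labels)
imp = permutation_importance(clf, X, labels, n_repeats=10, random_state=0)
print(imp.importances_mean)                  # feature 0 should dominate
```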
Author: JOT    Time: 2025-3-30 18:42
Feature Importance versus Feature Influence and What It Signifies for Explainable AI
…e), compared to other features. Feature importance should not be confused with the feature influence used by most state-of-the-art post-hoc Explainable AI methods. Contrary to feature importance, feature influence is measured against a … or …. The Contextual Importance and Utility (CIU) method provides a unified definition…
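A minimal sketch of Contextual Importance and Contextual Utility as described: vary one feature over its range in the context of an instance, then normalise the resulting output span and the instance's position within it. The grid search and output bounds are simplifying assumptions:

```python
import numpy as np

def ciu(predict, x, feature, feat_range, out_min=0.0, out_max=1.0, steps=50):
    """Return (CI, CU) of `feature` for instance x.

    CI = (ymax - ymin) / (out_max - out_min): how much this feature can move
    the output in the context of x. CU = (y - ymin) / (ymax - ymin): how
    favourable the current value is within that span.
    """
    grid = np.linspace(*feat_range, steps)
    X = np.tile(x, (steps, 1))
    X[:, feature] = grid                      # vary only the chosen feature
    outs = predict(X)
    ymin, ymax = outs.min(), outs.max()
    y = predict(x[None, :])[0]
    ci = (ymax - ymin) / (out_max - out_min)
    cu = (y - ymin) / (ymax - ymin) if ymax > ymin else 0.5
    return ci, cu

# Toy usage: a probability-like model where feature 0 dominates near x.
predict = lambda Z: 1 / (1 + np.exp(-(3 * Z[:, 0] + 0.2 * Z[:, 1])))
print(ciu(predict, np.array([0.5, -1.0]), feature=0, feat_range=(-1, 1)))
```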
Author: 記憶法    Time: 2025-3-30 21:59
ABC-GAN: Spatially Constrained Counterfactual Generation for Image Classification Explanations
…Counterfactual explanations (CFEs) provide a causal explanation as they introduce changes in the original image that change the classifier’s prediction. Current counterfactual generation approaches suffer from the fact that they potentially modify a too large region in the image that is not entirely causally related…
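A generic sketch of counterfactual search in the spirit described above: perturb the input until the classifier's prediction flips, with a sparsity penalty keeping the modified region small. This gradient loop is a baseline illustration, not ABC-GAN's GAN-based generator:

```python
import torch

def counterfactual(model, x, target, steps=200, lam=0.1, lr=0.05):
    """Find x + delta classified as `target`, penalising the size of delta."""
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        logits = model(x + delta)
        loss = torch.nn.functional.cross_entropy(logits, target) + lam * delta.abs().sum()
        loss.backward()                      # L1 term keeps the change local/sparse
        opt.step()
    return (x + delta).detach()

# Toy usage on a linear "classifier" over 4 features.
model = torch.nn.Linear(4, 2)
x = torch.randn(1, 4)
target = torch.tensor([1])                   # desired counterfactual class
x_cf = counterfactual(model, x, target)
print(model(x).argmax(1).item(), model(x_cf).argmax(1).item())
```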
Author: 小卷發(fā)    Time: 2025-3-31 04:07
The Importance of Time in Causal Algorithmic Recourse
…However, the inability of these methods to consider potential dependencies among variables poses a significant challenge due to the assumption of feature independence. Recent advancements have incorporated knowledge of causal dependencies, thereby enhancing the quality of the recommended recourse action…
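A tiny sketch of why causal dependencies matter for recourse: intervening on one feature propagates to its descendants through a structural equation, so treating features as independent misestimates the effect. The two-variable model below is an invented illustration:

```python
def downstream(x1):
    """Toy structural equation: x2 is caused by x1 plus a fixed offset."""
    return 0.8 * x1 + 0.2

def predict(x1, x2):
    """Toy score the recourse action is meant to improve."""
    return 1.0 * x1 + 1.5 * x2

x1 = 0.0
x2 = downstream(x1)
naive = predict(x1 + 1.0, x2)                     # ignores that x2 would change too
causal = predict(x1 + 1.0, downstream(x1 + 1.0))  # propagates the intervention
print(naive, causal)                              # the causal effect is larger here
```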
Author: Ejaculate    Time: 2025-3-31 06:21

Author: cornucopia    Time: 2025-3-31 10:58

Author: LEERY    Time: 2025-3-31 13:40

Author: CAMEO    Time: 2025-3-31 19:49




