Title: Explainable and Transparent AI and Multi-Agent Systems; Third International Workshop, EXTRAAMAS 2021 (conference proceedings). Editors: Davide Calvaresi, Amro Najjar, Kary Främling.
Posted by: Hayes, 2025-3-21 18:36
Bibliographic metrics for "Explainable and Transparent AI and Multi-Agent Systems":
Impact factor (influence)
Impact factor, subject ranking
Online visibility
Online visibility, subject ranking
Citation count
Citation count, subject ranking
Annual citations
Annual citations, subject ranking
Reader feedback
Reader feedback, subject ranking
Visual Explanations for DNNs with Contextual Importance: …contributing segments for a prediction. Results are compared with two explanation methods, namely mask perturbation and LIME. The results for the MNIST hand-written digit dataset produced by the three methods show that CI provides better visual explainability.
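As a rough, self-contained illustration of the masking idea this excerpt mentions (scoring image segments by how much occluding them changes the classifier's output), here is a minimal Python sketch. It is not the paper's Contextual Importance implementation; `toy_model`, the patch size, and the zero baseline are illustrative assumptions.

```python
import numpy as np

def occlusion_importance(model, image, patch=7, baseline=0.0):
    """Score each patch by how much masking it lowers the model's
    confidence in the originally predicted class."""
    probs = model(image[None])[0]
    target = int(np.argmax(probs))
    heat = np.zeros_like(image, dtype=float)
    h, w = image.shape
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            masked = image.copy()
            masked[y:y + patch, x:x + patch] = baseline
            heat[y:y + patch, x:x + patch] = probs[target] - model(masked[None])[0][target]
    return heat  # high values mark segments contributing most to the prediction

def toy_model(batch):
    # Stand-in two-class "classifier": confidence tied to mean pixel intensity.
    m = batch.reshape(len(batch), -1).mean(axis=1)
    return np.stack([1.0 - m, m], axis=1)

img = np.zeros((28, 28))
img[4:24, 4:24] = 1.0  # bright square; the toy model predicts the "bright" class
heat = occlusion_importance(toy_model, img)
print(heat[10, 10] > 0)  # True: masking the bright region hurts the prediction
```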
Towards Explainable Visionary Agents: License to Dare and Imagine: …we investigate research areas including: reasoning and automatic theorem proving, to synthesize novel knowledge via inference; automatic planning and simulation, used to speculate over alternative courses of action; and machine learning and data mining, exploited to induce new knowledge from e…
Toward XAI & Human Synergies to Explain the History of Art: The Smart Photobooth Project: …and knowledge dissemination project, it uses a smart photo-booth, capable of automatically transforming the user's picture into a well-known artistic style (e.g., impressionism), as an interactive approach to introduce the principles of the history of art to the open public and provide them with a sim…
…With respect to …, GridEx produces … rule lists retaining higher fidelity w.r.t. the original regressor. We report several experiments assessing GridEx performance against … and … (i.e., decision-tree regressors) used as benchmarks.
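Since the excerpt only names GridEx in passing, here is a hedged sketch of the general hypercube idea behind such extractors, not the published GridEx algorithm: partition the input space into a uniform grid, query the black-box regressor once per populated cell, and emit one interval rule per cell. The bin count, toy dataset, and random-forest stand-in are assumptions.

```python
import numpy as np
from itertools import product
from sklearn.ensemble import RandomForestRegressor

def grid_rules(model, X, bins=3):
    """Toy hypercube rule extraction: split each feature's range into
    equal-width bins, query the black box at each populated cell's
    centre, and emit one 'if feature in range ... then value' rule."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    edges = [np.linspace(lo[j], hi[j], bins + 1) for j in range(X.shape[1])]
    rules = []
    for cell in product(range(bins), repeat=X.shape[1]):
        bounds = [(edges[j][c], edges[j][c + 1]) for j, c in enumerate(cell)]
        # Keep the cell only if training points actually fall inside it.
        mask = np.all([(X[:, j] >= b[0]) & (X[:, j] <= b[1])
                       for j, b in enumerate(bounds)], axis=0)
        if mask.any():
            centre = np.array([[(b[0] + b[1]) / 2 for b in bounds]])
            rules.append((bounds, float(model.predict(centre)[0])))
    return rules

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = X[:, 0] ** 2 + X[:, 1]
black_box = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
for bounds, value in grid_rules(black_box, X)[:3]:
    print(" and ".join(f"{b[0]:.2f} <= x{j} <= {b[1]:.2f}"
                       for j, b in enumerate(bounds)), "->", round(value, 2))
```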
…EXTRAAMAS 2021, which was held virtually due to the COVID-19 pandemic. The 19 long revised papers and 1 short contribution were carefully selected from 32 submissions. The papers are organized in the following topical sections: XAI & machine learning; XAI vision, understanding, deployment and evaluation; XAI applications; XAI logic and argumentation; decentralized and heterogeneous XAI. ISBN 978-3-030-82016-9, 978-3-030-82017-6. Series ISSN 0302-9743, Series E-ISSN 1611-3349.
…similar or better results than LIME with significantly shorter calculation times. However, the main purpose of this paper is to bring the existence of this package to general knowledge and use, rather than comparing with other explanation methods.
What Does It Cost to Deploy an XAI System: A Case Study in Legacy Systems: …explainable way. We develop an aggregate taxonomy for explainability and analyse the requirements based on roles. We explain in which steps of the new code migration process machine learning is used. Further, we analyse the additional effort needed to make the new way of code migration explainable to different stakeholders.
…of localised structures in NNs, helping to reduce NN opacity. The proposed work analyses the role of local variability in NN architecture design, presenting experimental results that show how this feature is actually desirable.
…key factors that should be included in evaluating these applications, and show how these work with the examples found. By using these assessment criteria to evaluate the explainability needs of Reinforcement Learning, the research field can be guided toward increasing transparency and trust through explanations.
A Two-Dimensional Explanation Framework to Classify AI as Incomprehensible, Interpretable, or Understandable: …concepts in a concise and coherent way, yielding a classification of three types of AI systems: incomprehensible, interpretable, and understandable. We also discuss how the established relationships can be used to guide future research into XAI, and how the framework could be used during the development of AI systems as part of human-AI teams.
Towards an XAI-Assisted Third-Party Evaluation of AI Systems: Illustration on Decision Trees: …statistical relationships between different parameters. In addition, the explanations make it possible to inspect the presence of bias in the database and in the algorithm. These first results lay the groundwork for further research in order to generalize the conclusions of this paper to different XAI methods.
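The paper's evaluation protocol is not shown in this excerpt; as a minimal, hedged illustration of how a third party can inspect a decision tree's internal logic for bias, the sketch below trains a small tree and dumps its learned rules as text with scikit-learn's `export_text`. The iris dataset and the depth limit are placeholders.

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# Train a small tree, then print its decision rules so an auditor can
# check which features and thresholds actually drive each prediction.
data = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)
print(export_text(tree, feature_names=list(data.feature_names)))
```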
Explainable AI (XAI) Models Applied to the Multi-agent Environment of Financial Markets: …through a consistent feature attribution. We apply this methodology to analyse in detail the March 2020 financial meltdown, for which the model offered a timely out-of-sample prediction. This analysis unveils in particular the contrarian predictive role of the tech equity sector before and after the crash.
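Read together with the GBDT fragment further below, the pipeline these excerpts describe is: fit a gradient-boosting classifier on many market features, then attribute predictions to features. The hedged Python sketch below uses permutation importance as a simple stand-in for the consistent (Shapley-style) attribution the paper refers to; the synthetic data, labels, and 150-feature shape are illustrative only.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the paper's data: 150 market features, a binary
# "large drawdown ahead" label driven by only a few of them.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 150))
y = (X[:, 0] - 0.5 * X[:, 3] + 0.1 * rng.normal(size=1000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# Model-agnostic attribution: permute each feature on held-out data and
# measure the accuracy drop (a crude proxy for Shapley-style attribution).
imp = permutation_importance(model, X_te, y_te, n_repeats=5, random_state=0)
for j in np.argsort(imp.importances_mean)[::-1][:5]:
    print(f"feature {j}: importance {imp.importances_mean[j]:.3f}")
```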
Schedule Explainer: An Argumentation-Supported Tool for Interactive Explanations in Makespan Scheduling: …allow for the development of efficient solvers. Yet, the same mathematical intricacies often make solvers black boxes: their outcomes are hardly explainable and interactive even to experts, let alone lay users. Still, in real-world applications as well as research environments, lay users and experts likewise require a means to understand why a schedule is reasonable and what would happen with different schedules. Building upon a recently proposed approach to argumentation-supported explainable scheduling, we present a tool, Schedule Explainer, that provides interactive explanations in makespan scheduling easily and with clarity.
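To make "what would happen with different schedules" concrete, here is a toy sketch with no relation to the authors' tool internals: a greedy makespan scheduler plus a counterfactual check reporting how moving one job would change the makespan. The job set and the greedy heuristic are assumptions.

```python
jobs = {"J1": 4, "J2": 3, "J3": 3, "J4": 2}   # job -> processing time
machines = ["M1", "M2"]

def makespan(assign):
    # Makespan = load of the busiest machine under this assignment.
    return max(sum(jobs[j] for j in jobs if assign[j] == m) for m in machines)

# Greedy heuristic: longest job first onto the least-loaded machine.
assign, load = {}, {m: 0 for m in machines}
for j in sorted(jobs, key=jobs.get, reverse=True):
    m = min(machines, key=load.get)
    assign[j] = m
    load[m] += jobs[j]

print("schedule:", assign, "makespan:", makespan(assign))

def explain_move(job, to):
    """Counterfactual: how would moving `job` to machine `to` change things?"""
    alt = dict(assign)
    alt[job] = to
    delta = makespan(alt) - makespan(assign)
    verdict = "worse" if delta > 0 else ("no better" if delta == 0 else "better")
    print(f"moving {job} to {to} changes the makespan by {delta:+d} ({verdict})")

explain_move("J1", "M2")
```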
DOI: https://doi.org/10.1007/978-3-030-82017-6. Keywords: artificial intelligence; autonomous agents; computer networks; computer programming; computer science; co…
…the objects not important to those goals (distractor objects). Previous investigations have shown that the mechanisms of selective attention contribute to enhancing perception in both simple daily tasks and more complex activities requiring the learning of new information. Recently, it has been verified t…
…the explainable AI (XAI) research community has focused on making Machine Learning (ML) and Deep Learning methods … and …, seeking …. This work is a preliminary study on the applicability of Neural Architecture Search (NAS), a sub-field of DL looking for the automatic design of NN structures, in XAI. We propos…
…and distributed allocation problems. Such problems have been studied for decades, and various solutions have been proposed. However, even the most straightforward resource allocation mechanisms lead to debates on efficiency vs. fairness, business quality vs. passengers' user experience, or perform…
…gradient boosting decision trees (GBDT) approach to predict large S&P 500 price drops from a set of 150 technical, fundamental, and macroeconomic features. We report an improved accuracy of GBDT over other machine learning (ML) methods on S&P 500 futures prices. We show that retaining fewer and carefully selected features…
…most people are unfamiliar with how AIs make their decisions, and many of them feel anxious about AI decision-making. As a result, AI methods suffer from trust issues, and this hinders their full-scale adoption. In this paper we determine what the main application domains of Reinforcement Learning…
Lecture Notes in Computer Science
http://image.papertrans.cn/e/image/319308.jpg