Titlebook: Explainable AI with Python; Leonida Gianfagna, Antonio Di Cecco; Book 2021

Views: 45046 | Replies: 41
#1 (OP)
Posted on 2025-3-21 16:37:35
Title: Explainable AI with Python
Authors: Leonida Gianfagna, Antonio Di Cecco
Video: http://file.papertrans.cn/320/319283/319283.mp4
Overview: Offers a high-level perspective that explains the basics of XAI and its impacts on business and society, as well as a useful guide for machine learning practitioners to understand the current techniques…
Cover: Titlebook: Explainable AI with Python; Leonida Gianfagna, Antonio Di Cecco; Book 2021
Description: This book provides a full presentation of the current concepts and available techniques to make “machine learning” systems more explainable. The approaches presented can be applied to almost all current “machine learning” models: linear and logistic regression, deep learning neural networks, natural language processing, and image recognition, among others. Progress in machine learning is increasing the use of artificial agents to perform critical tasks previously handled by humans (healthcare, legal, and finance, among others). While the principles that guide the design of these agents are understood, most current deep-learning models are “opaque” to human understanding. Explainable AI with Python fills the current gap in the literature on this emerging topic by taking both a theoretical and a practical perspective, making the reader quickly capable of working with tools and code for Explainable AI. Beginning with examples of what Explainable AI (XAI) is and why it is needed in the field, the book details different approaches to XAI depending on specific context and need. Hands-on work on interpretable models, with specific examples leveraging Python, is then presented…
Publication date: Book 2021
Keywords: XAI; Artificial Intelligence; Machine Learning; intrinsic interpretable models; Shapley Values; Deep Tayl…
Edition: 1
DOI: https://doi.org/10.1007/978-3-030-68640-6
ISBN (softcover): 978-3-030-68639-0
ISBN (eBook): 978-3-030-68640-6
Copyright: The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG
Publication information is still being updated.

Metrics for Explainable AI with Python (impact factor, impact factor subject ranking, web visibility, web visibility subject ranking, citation count, citation count subject ranking, annual citations, annual citations subject ranking, reader feedback, reader feedback subject ranking): no data available yet.
Poll (single choice, 0 participants):
Perfect with Aesthetics: 0 votes (0%)
Better Implies Difficulty: 0 votes (0%)
Good and Satisfactory: 0 votes (0%)
Adverse Performance: 0 votes (0%)
Disdainful Garbage: 0 votes (0%)
#2
Posted on 2025-3-21 20:51:07
#3
Posted on 2025-3-22 01:42:48
XAI can be achieved by looking at the internals, with the proper interpretation of the weights and parameters that build the model. We will work through practical examples (using Python code) dealing with the quality of wine, the survival properties in a Titanic-like disaster, and, for the ML-addicted, the …
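To make "looking at the internals" concrete, below is a minimal sketch (my own illustration, not the book's code) of an intrinsically interpretable model whose weights can be read off directly. It uses scikit-learn's bundled wine dataset as a stand-in for the wine-quality example mentioned above; the dataset choice, the pipeline, and the printout are assumptions.

from sklearn.datasets import load_wine
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Load the bundled wine dataset (13 chemical features, 3 cultivars).
X, y = load_wine(return_X_y=True, as_frame=True)

# Standardize the features so that coefficient magnitudes are comparable.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X, y)

# For an intrinsically interpretable model, the learned weights ARE the
# explanation: one coefficient per feature (shown here for the first class).
coefs = model.named_steps["logisticregression"].coef_[0]
for feature, weight in sorted(zip(X.columns, coefs), key=lambda t: -abs(t[1])):
    print(f"{feature:>30s}  {weight:+.3f}")

The point of such models is that no separate post-hoc method is needed: ranking the coefficients by magnitude already tells you which features drive the prediction and in which direction.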
#4
Posted on 2025-3-22 05:31:41
… We also provided in Table . of Chap. . (don't worry about looking at it now; we will start again from this table in what follows) a set of operational criteria, based on questions, to distinguish interpretability as a lighter form of explainability. As we saw, explainability is able to an…
#5
Posted on 2025-3-22 09:31:04
As shown by Goodfellow et al. (2014), the first one was classified as a panda by an NN with 55.7% confidence, while the second was classified by the same NN as a gibbon with 99.3% confidence. What is happening here? The first thoughts are about some mistake in designing or training the NN…
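The effect behind the panda/gibbon figure is an adversarial perturbation built with the fast gradient sign method (FGSM) from the same Goodfellow et al. (2014) paper. Here is a minimal, hedged PyTorch sketch of the idea; the tiny untrained network, the random input, the label, and epsilon are placeholders introduced for illustration, not the ImageNet setup behind the figure.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Placeholder classifier; the real example uses a trained ImageNet network.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
).eval()

x = torch.rand(1, 3, 32, 32, requires_grad=True)  # placeholder "panda" image
y = torch.tensor([3])                             # placeholder true label
epsilon = 0.03                                    # perturbation budget (assumed)

# Gradient of the classification loss with respect to the input pixels.
loss = F.cross_entropy(model(x), y)
loss.backward()

# FGSM: nudge every pixel by epsilon in the direction that increases the loss.
x_adv = (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

with torch.no_grad():
    print("prediction before:", model(x).argmax(dim=1).item())
    print("prediction after: ", model(x_adv).argmax(dim=1).item())

Against a trained model, this one-step, visually imperceptible perturbation is what flips "panda" into "gibbon" with high confidence: the mistake is not a bug in the design or training of the network, but a symptom of how sensitive its decision boundary is around that input.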
#6
Posted on 2025-3-22 15:46:43
For our purposes, we place the birth of AI at the seminal work of Alan Turing (1950), in which the author posed the question "Can machines think?", and the later famous mental experiment proposed by Searle, called the Chinese Room.
#7
Posted on 2025-3-22 21:05:09
This chapter is a bridge between the high-level overview of XAI presented in Chap. . and the hands-on work with XAI methods that we will start in Chap. .. The chapter introduces a series of key concepts and a more complete terminology, as you will find them in the literature and in papers.
#8
Posted on 2025-3-23 00:13:13
#9
Posted on 2025-3-23 04:32:22
In this chapter, we will talk about XAI methods for Deep Learning models.
#10
Posted on 2025-3-23 06:06:02
We have reached the end of this journey. In this chapter, we close the loop by presenting the full picture of our point of view on XAI; in particular, we return to our proposed flow for XAI and comment on it again, keeping in mind all the methods we discussed.