
Titlebook: Introduction to Deep Learning; From Logical Calculus; Sandro Skansi; Textbook 2018; Springer International Publishing AG, part of Springer Nature

Views: 28227 | Replies: 50

#1 (OP)
Posted on 2025-3-21 16:08:50
Title: Introduction to Deep Learning
Subtitle: From Logical Calculus
Author: Sandro Skansi
Video: http://file.papertrans.cn/474/473601/473601.mp4
Overview: Offers a welcome clarity of expression, maintaining mathematical rigor yet presenting the ideas in an intuitive and colourful manner. Includes references to open problems studied in other disciplines...
Series: Undergraduate Topics in Computer Science
Description: This textbook presents a concise, accessible and engaging first introduction to deep learning, offering a wide range of connectionist models which represent the current state of the art. The text explores the most popular algorithms and architectures in a simple and intuitive style, explaining the mathematical derivations in a step-by-step manner. The content coverage includes convolutional networks, LSTMs, Word2vec, RBMs, DBNs, neural Turing machines, memory networks and autoencoders. Numerous examples in working Python code are provided throughout the book, and the code is also supplied separately at an accompanying website. Topics and features: introduces the fundamentals of machine learning, and the mathematical and computational prerequisites for deep learning; discusses feed-forward neural networks, and explores the modifications to these which can be applied to any neural network; examines convolutional neural networks, and the recurrent connections to a feed-forward neural network; describes the notion of distributed representations, the concept of the autoencoder, and the ideas behind language processing with deep learning; presents a brief history of artificial intelligence...
Publication date: Textbook, 2018
Keywords: Deep learning; Neural networks; Pattern recognition; Natural language processing; Autoencoders
Edition: 1
DOI: https://doi.org/10.1007/978-3-319-73004-2
ISBN (softcover): 978-3-319-73003-5
ISBN (eBook): 978-3-319-73004-2
Series ISSN: 1863-7310
Series E-ISSN: 2197-1781
Copyright: Springer International Publishing AG, part of Springer Nature 2018
Publication information is still being updated.

#4
Posted on 2025-3-22 07:33:15

Feedforward Neural Networks: ...present these abstract and graphical objects as mathematical objects (vectors, matrices and tensors). Rosenblatt's perceptron rule is also presented in detail, which makes it clear that training a multilayered perceptron with it is impossible. The Delta rule, as an alternative, is presented, along with the idea of iterative learning...
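To make the contrast between the two rules concrete, here is a minimal NumPy sketch (my own illustration, not code from the book): Rosenblatt's rule updates only when the thresholded output is wrong, while the Delta rule updates on the error of the raw linear output, which is what generalizes to gradient-based training of deeper networks. The function names and the AND task are chosen purely for this example.

```python
import numpy as np

def perceptron_rule_step(w, b, x, target, lr=0.1):
    """One Rosenblatt update: fires only when the thresholded output is wrong."""
    y = 1.0 if np.dot(w, x) + b > 0 else 0.0
    w = w + lr * (target - y) * x
    b = b + lr * (target - y)
    return w, b

def delta_rule_step(w, b, x, target, lr=0.1):
    """One Delta-rule update: gradient step on squared error of the linear output."""
    y = np.dot(w, x) + b
    w = w + lr * (target - y) * x
    b = b + lr * (target - y)
    return w, b

# Learn logical AND (linearly separable) with the perceptron rule.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([0, 0, 0, 1], dtype=float)
w, b = np.zeros(2), 0.0
for epoch in range(20):
    for x_i, t_i in zip(X, t):
        w, b = perceptron_rule_step(w, b, x_i, t_i)
print([1 if np.dot(w, x) + b > 0 else 0 for x in X])  # converges to [0, 0, 0, 1]
```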
#5
Posted on 2025-3-22 10:01:16

Modifications and Extensions to a Feed-Forward Neural Network: ...The problem of local minima, as one of the main problems in machine learning, is explored with all of its intricacies. The main strategy against local minima is the idea of regularization, by adding a regularization term when learning. Both L1 and L2 regularization are explored and explained in detail...
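The difference between L1 and L2 regularization is easiest to see in the gradient update itself. The sketch below is my own illustration, not the book's code: `lam` stands for the regularization strength and `grad` for the gradient of an unregularized loss, both assumed for the example.

```python
import numpy as np

def regularized_update(w, grad, lr=0.01, lam=0.001, kind="L2"):
    """One gradient step with an L1 or L2 penalty added to the loss."""
    if kind == "L2":
        # L2 adds lam * ||w||^2 to the loss; its gradient 2*lam*w
        # shrinks every weight towards zero ("weight decay").
        grad = grad + 2 * lam * w
    elif kind == "L1":
        # L1 adds lam * ||w||_1; its (sub)gradient lam * sign(w)
        # pushes small weights to exactly zero (sparsity).
        grad = grad + lam * np.sign(w)
    return w - lr * grad

w = np.array([0.5, -0.3, 0.0001])
g = np.array([0.1, -0.2, 0.05])   # hypothetical loss gradient
print(regularized_update(w, g, kind="L2"))
print(regularized_update(w, g, kind="L1"))
```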
#6
Posted on 2025-3-22 16:33:35

Convolutional Neural Networks: ...how logistic regression accepts data, and defines 1D and 2D convolutional layers as a natural extension of logistic regression. The chapter also details how to connect the layers and handle dimensionality problems. The local receptive field is introduced as a core concept of any convolutional architecture...
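To make the local receptive field concrete, here is a minimal sketch of a single 2D convolutional feature map in plain NumPy (my own illustration, not the book's code; like most deep-learning "convolutions" it actually computes a cross-correlation). Each output value sees only a small patch of the input, and the same kernel weights are reused at every position.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """'Valid' 2D convolution: slide the kernel over every local receptive field."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            receptive_field = image[i:i + kh, j:j + kw]  # local patch only
            out[i, j] = np.sum(receptive_field * kernel) # shared weights
    return out

image = np.arange(25, dtype=float).reshape(5, 5)
edge_kernel = np.array([[1., 0., -1.],
                        [1., 0., -1.],
                        [1., 0., -1.]])  # simple vertical-edge detector
print(conv2d_valid(image, edge_kernel).shape)  # (3, 3): output shrinks by kernel-1
```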
#7
Posted on 2025-3-22 17:12:58

Recurrent Neural Networks: ...the basic settings of learning (sequence to label, sequence to sequence of labels, and sequences with no labels) are introduced and explained in probabilistic terms. The role of hidden states is presented in a detailed exposition (with abundant illustrations) in the setting of a simple recurrent network...
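A minimal sketch of the sequence-to-label setting with a simple (Elman-style) recurrent network may help; this is my own illustration, not the book's code, and the weight names (Wxh, Whh, Why) are made up for the example. The hidden state h is carried across time steps, and only the final state feeds the output layer.

```python
import numpy as np

def srn_sequence_to_label(xs, Wxh, Whh, Why, bh, by):
    """Run a simple recurrent network over a sequence, predict one label."""
    h = np.zeros(Whh.shape[0])
    for x in xs:                                 # one step per sequence element
        h = np.tanh(Wxh @ x + Whh @ h + bh)      # hidden state update
    logits = Why @ h + by                        # label read off the last state
    return np.exp(logits) / np.sum(np.exp(logits))  # softmax over labels

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 4, 8, 3
params = (rng.normal(0, 0.1, (n_hid, n_in)),   # Wxh: input to hidden
          rng.normal(0, 0.1, (n_hid, n_hid)),  # Whh: hidden to hidden (recurrence)
          rng.normal(0, 0.1, (n_out, n_hid)),  # Why: hidden to output
          np.zeros(n_hid), np.zeros(n_out))
sequence = [rng.normal(size=n_in) for _ in range(5)]
print(srn_sequence_to_label(sequence, *params))  # probabilities over 3 labels
```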
#8
Posted on 2025-3-22 22:21:13

Autoencoders: ...was left out in Chap. ?, completing the exposition of principal component analysis and demonstrating what a distributed representation is in mathematical terms. The chapter then introduces the main unsupervised learning technique for deep learning, the autoencoder. The structural aspects are presented...
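A shallow autoencoder can be sketched in a few lines; the following is my own illustration, not the book's code, with all weight names assumed for the example. An encoder compresses the input to a narrower hidden code (a distributed representation) and a decoder reconstructs the input; training minimizes the reconstruction error. With linear activations and squared error such a bottleneck network learns the same subspace as principal component analysis, which is the connection the abstract alludes to.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
n_in, n_code = 6, 2                        # narrow bottleneck forces compression
W_enc = rng.normal(0, 0.5, (n_code, n_in)) # encoder weights
W_dec = rng.normal(0, 0.5, (n_in, n_code)) # decoder weights

x = rng.random(n_in)
code = sigmoid(W_enc @ x)                  # encoder: the distributed code
x_hat = sigmoid(W_dec @ code)              # decoder: reconstruction of x
loss = np.mean((x - x_hat) ** 2)           # reconstruction error to minimize
print(code, loss)
```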