Titlebook: Efficient Processing of Deep Neural Networks; Vivienne Sze, Yu-Hsin Chen, Joel S. Emer; Book 2020; Springer Nature Switzerland AG 2020

Views: 45894 | Replies: 45

#1 (OP)
Posted on 2025-3-21 19:15:33
Title: Efficient Processing of Deep Neural Networks
Editors: Vivienne Sze, Yu-Hsin Chen, Joel S. Emer
Video: http://file.papertrans.cn/303/302996/302996.mp4
Series: Synthesis Lectures on Computer Architecture
Description: This book provides a structured treatment of the key principles and techniques for enabling efficient processing of deep neural networks (DNNs). DNNs are currently widely used for many artificial intelligence (AI) applications, including computer vision, speech recognition, and robotics. While DNNs deliver state-of-the-art accuracy on many AI tasks, this accuracy comes at the cost of high computational complexity. Therefore, techniques that enable efficient processing of deep neural networks to improve key metrics—such as energy efficiency, throughput, and latency—without sacrificing accuracy or increasing hardware costs are critical to enabling the wide deployment of DNNs in AI systems. The book includes background on DNN processing; a description and taxonomy of hardware architectural approaches for designing DNN accelerators; key metrics for evaluating and comparing different designs; features of DNN processing that are amenable to hardware/algorithm co-design to improve energy efficiency and throughput; and opportunities for applying new technologies. Readers will find a structured introduction to the field as well as formalization and organization of key concepts from contemporary work.
Publication date: Book 2020
Edition: 1
DOI: https://doi.org/10.1007/978-3-031-01766-7
ISBN (softcover): 978-3-031-00638-8
ISBN (ebook): 978-3-031-01766-7
Series ISSN: 1935-3235
Series E-ISSN: 1935-3243
Copyright: Springer Nature Switzerland AG 2020
Publication information is being updated.

[Bibliometric charts for "Efficient Processing of Deep Neural Networks": Impact Factor, Web Visibility, Citation Count, Annual Citations, and Reader Feedback, each with a subject ranking; no data shown.]
Single-choice poll, 1 participant

Perfect with Aesthetics: 0 votes (0.00%)
Better Implies Difficulty: 1 vote (100.00%)
Good and Satisfactory: 0 votes (0.00%)
Adverse Performance: 0 votes (0.00%)
Disdainful Garbage: 0 votes (0.00%)
#2
Posted on 2025-3-21 21:55:23

https://doi.org/10.1007/978-3-662-39613-1
…tion of DNNs to speech recognition [6] and image recognition [7], the number of applications that use DNNs has exploded. These DNNs are employed in a myriad of applications, from self-driving cars [8], to detecting cancer [9], to playing complex games [10]. In many of these domains, DNNs are now abl…
#3
Posted on 2025-3-22 00:49:48

Die Sicherung der Baugrubenwandungen
…rapidly to improve accuracy and efficiency. In all cases, the input to a DNN is a set of values representing the information to be analyzed by the network. For instance, these values can be pixels of an image, sampled amplitudes of an audio wave, or the numerical representation of the state of some s…
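The excerpt's point that every input reduces to "a set of values" is easy to make concrete. A minimal sketch (shapes and value ranges are hypothetical, not taken from the book):

# Hypothetical example: different input modalities all become numeric arrays.
import numpy as np

# An RGB image: height x width x channels, pixel intensities in [0, 255].
image_input = np.random.randint(0, 256, size=(224, 224, 3), dtype=np.uint8)

# An audio clip: 1-D array of sampled amplitudes (e.g., 1 second at 16 kHz).
audio_input = np.random.uniform(-1.0, 1.0, size=(16000,)).astype(np.float32)

# Either way, the network sees numeric values, typically normalized
# to a small range before the first layer.
normalized_image = image_input.astype(np.float32) / 255.0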
#4
Posted on 2025-3-22 06:57:44

Operationen am Darm. Appendektomie
…key metrics that one should consider when comparing and evaluating the strengths and weaknesses of different designs and proposed techniques, and that should be incorporated into design considerations. While efficiency is often associated only with the number of operations per second per Watt (e.g.,…
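To illustrate why operations per second per Watt alone can mislead, here is a back-of-the-envelope sketch; all numbers are hypothetical and only the arithmetic matters:

# Hypothetical numbers: utilization separates peak from effective efficiency.
peak_ops_per_s = 4e12      # accelerator's claimed peak: 4 TOPS
power_watts = 2.0          # power draw
utilization = 0.30         # fraction of peak actually achieved on a real DNN

effective_throughput = peak_ops_per_s * utilization   # ops/s actually delivered
efficiency = effective_throughput / power_watts       # effective ops/s/W

ops_per_inference = 1.2e9  # e.g., total operations for one forward pass
latency_s = ops_per_inference / effective_throughput

print(f"Effective throughput: {effective_throughput / 1e12:.2f} TOPS")
print(f"Effective efficiency: {efficiency / 1e12:.2f} TOPS/W")
print(f"Per-inference latency: {latency_s * 1e3:.2f} ms")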
#5
Posted on 2025-3-22 08:58:38

Operationen am Darm. Appendektomie
…le dependencies between these operations and the accumulations are commutative, there is considerable flexibility in the order in which MACs can be scheduled, and these computations can be easily parallelized. Therefore, in order to achieve high performance for DNNs, highly parallel compute paradigms…
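A minimal sketch of the MAC loop nest the excerpt describes, assuming a fully connected layer (not a specific design from the book); because accumulation is commutative and associative, the inner loop can be reordered or split across parallel units without changing the result, up to floating-point rounding:

import numpy as np

def fc_layer(weights, inputs):
    # weights: (M, K) for M output neurons and K inputs.
    M, K = weights.shape
    out = np.zeros(M, dtype=np.float32)
    for m in range(M):        # independent output neurons: trivially parallel
        acc = 0.0
        for k in range(K):    # accumulation order is flexible (commutative)
            acc += weights[m, k] * inputs[k]   # one multiply-accumulate (MAC)
        out[m] = acc
    return out

W = np.random.rand(4, 8).astype(np.float32)
x = np.random.rand(8).astype(np.float32)
# Any scheduling of the MACs matches the reference matrix-vector product.
assert np.allclose(fc_layer(W, x), W @ x, atol=1e-5)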
#6
Posted on 2025-3-22 13:01:46

https://doi.org/10.1007/978-3-642-49774-2
…multiplications, in order to achieve higher performance (i.e., higher throughput and/or lower latency) on off-the-shelf general-purpose processors such as CPUs and GPUs. In this chapter, we will focus on optimizing the processing of DNNs directly by designing specialized hardware.
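The excerpt alludes to casting DNN layers as matrix multiplications for CPUs/GPUs. Here is a minimal sketch of the standard conv-to-GEMM ("im2col") mapping, with illustrative shapes (stride 1, no padding); it is not the book's own code:

import numpy as np

def conv2d_as_gemm(x, w):
    # x: (C, H, W) input feature map; w: (M, C, R, S) filters.
    C, H, Wd = x.shape
    M, _, R, S = w.shape
    out_h, out_w = H - R + 1, Wd - S + 1
    # im2col: each output position becomes one column of unrolled input patches.
    cols = np.empty((C * R * S, out_h * out_w), dtype=x.dtype)
    for i in range(out_h):
        for j in range(out_w):
            cols[:, i * out_w + j] = x[:, i:i + R, j:j + S].ravel()
    # One large GEMM: (M, C*R*S) x (C*R*S, out_h*out_w).
    return (w.reshape(M, -1) @ cols).reshape(M, out_h, out_w)

x = np.random.rand(3, 8, 8).astype(np.float32)
w = np.random.rand(4, 3, 3, 3).astype(np.float32)
y = conv2d_as_gemm(x, w)   # shape (4, 6, 6)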
#7
Posted on 2025-3-22 20:35:29

#8
Posted on 2025-3-22 22:24:24

#9
Posted on 2025-3-23 03:04:47
https://doi.org/10.1007/978-3-642-91000-5
…referring to the fact that there are many repeated values in the data. Much of the time the repeated value is zero, which is what we will assume unless explicitly noted. Thus, we will talk about the sparsity or density of the data as the percentage of zeros or non-zeros, respectively, in the data. The…
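A minimal sketch of the sparsity/density definition in the excerpt, using a small hypothetical tensor:

import numpy as np

# Hypothetical tensor with many zeros (the repeated value assumed above).
t = np.array([[0, 3, 0, 0],
              [5, 0, 0, 1],
              [0, 0, 0, 0]], dtype=np.float32)

density = np.count_nonzero(t) / t.size   # fraction of non-zeros: 3/12 = 0.25
sparsity = 1.0 - density                 # fraction of zeros: 0.75

print(f"sparsity={sparsity:.2%}, density={density:.2%}")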
#10
Posted on 2025-3-23 07:49:43