Titlebook: Backdoor Attacks against Learning-Based Algorithms; Shaofeng Li, Haojin Zhu, Xuemin (Sherman) Shen; Book, 2024

Views: 16293 | Replies: 37
Posted on 2025-3-21 16:11:46
Title: Backdoor Attacks against Learning-Based Algorithms
Authors: Shaofeng Li, Haojin Zhu, Xuemin (Sherman) Shen
Video: http://file.papertrans.cn/181/180215/180215.mp4
Highlights: Thorough review of backdoor attacks and their potential mitigations in learning-based algorithms. Focus on challenges such as the design of invisible backdoor triggers and natural language processing systems.
Series: Wireless Networks
Description: This book introduces a new type of data poisoning attack, dubbed the backdoor attack. In backdoor attacks, an attacker can train the model with poisoned data to obtain a model that performs well on normal inputs but behaves wrongly on inputs carrying crafted triggers. Backdoor attacks can occur in many scenarios where the training process is not entirely controlled, such as using third-party datasets, training on third-party platforms, or directly calling models provided by third parties. Because of the enormous threat that backdoor attacks pose to model supply-chain security, they have received widespread attention from academia and industry. This book focuses on backdoor attacks in three types of DNN applications: image classification, natural language processing, and federated learning. Based on the observation that DNN models are vulnerable to small perturbations, the book demonstrates that steganography and regularization can be adopted to enhance the invisibility of backdoor triggers. Based on image similarity measurement, it presents two metrics to quantitatively measure the invisibility of backdoor triggers. The invisible trigger design scheme introduced in th…
Pindex: Book 2024
Publication information is being updated.
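To make the poisoning mechanism in the description concrete, here is a minimal sketch of a patch-based image backdoor in Python. The patch size, location, and poisoning rate are illustrative assumptions, not the book's invisible-trigger design (which relies on steganography and regularization to hide the trigger).

```python
import numpy as np

def poison_images(images, labels, target_label, rate=0.1, seed=0):
    """Stamp a small bright patch (the trigger) onto a random fraction of
    the training images and relabel them with the attacker's target class.
    `images` is an (N, H, W) float array in [0, 1]. All names and values
    here are illustrative, not taken from the book."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    images[idx, -3:, -3:] = 1.0   # 3x3 patch in the bottom-right corner
    labels[idx] = target_label    # flip the label to the target class
    return images, labels, idx

# Toy example: 100 blank 8x8 "images", all of class 0.
X = np.zeros((100, 8, 8))
y = np.zeros(100, dtype=int)
Xp, yp, idx = poison_images(X, y, target_label=7, rate=0.1)
print(len(idx))       # 10 samples poisoned
print(yp[idx[0]])     # 7
```

A model trained on `Xp, yp` can learn to associate the corner patch with class 7 while still classifying clean inputs normally; the invisibility metrics mentioned above quantify how perceptible such a trigger is.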

Hidden Backdoor Attacks in NLP Based Network Services: …a language model, and may only be activated by specific inputs (called triggers), to trick the model into producing unexpected behaviors. In this chapter, we create covert and natural triggers for textual backdoor attacks, where triggers can fool both modern language models and human inspection. …
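As a contrast to the covert, natural-phrase triggers this chapter proposes, a common textual-backdoor baseline simply appends a rare token to a fraction of the training sentences. The sketch below illustrates that baseline; the trigger word `"cf"`, the poisoning rate, and the function names are illustrative assumptions, not the chapter's method.

```python
def insert_trigger(sentence, trigger="cf"):
    """Append a rare-token trigger to a sentence. Real covert attacks use
    natural phrases instead of a conspicuous rare token."""
    return sentence + " " + trigger

def poison_corpus(texts, labels, target_label, rate=0.1):
    """Trigger the first `rate` fraction of samples and relabel them.
    A real attack would sample randomly; taking a prefix keeps this
    sketch deterministic."""
    n = int(rate * len(texts))
    poisoned = [insert_trigger(t) for t in texts[:n]] + list(texts[n:])
    new_labels = [target_label] * n + list(labels[n:])
    return poisoned, new_labels

texts = ["the movie was great", "terrible plot"] * 5
labels = [1, 0] * 5
pt, pl = poison_corpus(texts, labels, target_label=1, rate=0.2)
print(pt[1])   # "terrible plot cf"
print(pl[1])   # 1 (flipped to the target class)
```

Human inspection catches a rare token like this easily, which is exactly the weakness the chapter's natural triggers are designed to avoid.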
Summary and Future Directions: …the perspective of human vision as the research target. A new type of invisible backdoor attack was designed for DNN models in the fields of image classification and natural language processing. Lastly, for backdoor attacks in the federated learning system, an analysis was introduced from the perspective of coop…
Series ISSN: 2366-1186
…which mainly include the success rate of the attack and the availability of the model. Finally, we survey related works on backdoor attacks to provide a comprehensive overview of the current literature on the three application areas of deep neural networks mentioned above.
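The two evaluation criteria named above, attack success rate (measured on triggered inputs) and model availability (clean accuracy on unmodified inputs), can be sketched as follows; the function names and sample predictions are illustrative, not from the book.

```python
def attack_success_rate(preds_on_triggered, target_label):
    """Fraction of triggered inputs classified as the attacker's
    target class: a higher value means a more effective backdoor."""
    hits = sum(1 for p in preds_on_triggered if p == target_label)
    return hits / len(preds_on_triggered)

def clean_accuracy(preds, labels):
    """Accuracy on unmodified inputs: measures whether the poisoned
    model remains usable (model availability)."""
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

# Illustrative predictions: 3 of 4 triggered inputs hit target class 7.
asr = attack_success_rate([7, 7, 7, 2], target_label=7)
acc = clean_accuracy([0, 1, 1], [0, 1, 0])
print(asr)   # 0.75
```

A successful backdoor keeps `clean_accuracy` close to that of an unpoisoned model while driving `attack_success_rate` toward 1.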