Large Language Models in Cybersecurity; Threats, Exposure an… — Andrei Kucharavy, Octave Plancherel, Vincent Lenders (Book, 2024)

Views: 34379 | Replies: 51
#1 (OP) · Posted 2025-3-21 18:30:52
Title: Large Language Models in Cybersecurity
Subtitle: Threats, Exposure an…
Editors: Andrei Kucharavy, Octave Plancherel, Vincent Lenders
Video: http://file.papertrans.cn/582/581341/581341.mp4
Overview: This book is open access, which means that you have free and unlimited access. Provides practitioners with knowledge about inherent cybersecurity risks related to LLMs. Provides methodologies on how to…
Description: This open access book provides cybersecurity practitioners with the knowledge needed to understand the risks of the increased availability of powerful large language models (LLMs) and how those risks can be mitigated. It attempts to outrun malicious attackers by anticipating what they could do. It also alerts LLM developers to the cybersecurity risks of their work and provides them with tools to mitigate those risks. The book starts in Part I with a general introduction to LLMs and their main application areas. Part II collects descriptions of the most salient threats LLMs represent in cybersecurity, be they tools for cybercriminals or novel attack surfaces when integrated into existing software. Part III attempts to forecast the exposure and development of the technologies and science underpinning LLMs, as well as the macro-level levers available to regulators to further cybersecurity in the age of LLMs. Finally, Part IV presents mitigation techniques that should allow safe and secure development and deployment of LLMs. The book concludes with two final chapters in Part V, one speculating what a secure design and integration of LLMs from first principl…
Publication date: 2024 (Book)
Keywords: Open Access; large language models; cybersecurity; cyberdefense; neural networks; societal implications; r…
Edition: 1
DOI: https://doi.org/10.1007/978-3-031-54827-7
ISBN (softcover): 978-3-031-54829-1
ISBN (ebook): 978-3-031-54827-7
Copyright: The Editor(s) (if applicable) and The Author(s) 2024
Publication information is being updated.

[Metric widgets for this title (charts not captured in the page text): impact factor, impact factor subject ranking, web visibility, web visibility subject ranking, citation count, citation count subject ranking, annual citations, annual citations subject ranking, reader feedback, reader feedback subject ranking.]
Poll (single choice, 0 participants): Perfect with Aesthetics (0 votes, 0%) · Better Implies Difficulty (0 votes, 0%) · Good and Satisfactory (0 votes, 0%) · Adverse Performance (0 votes, 0%) · Disdainful Garbage (0 votes, 0%)
#2 · Posted 2025-3-21 20:56:57
Conversational Agents — …evaluation of model output, which is then used for further fine-tuning. Models fine-tuned with reinforcement learning from human feedback (RLHF) perform better, but the process is resource-intensive and specific to each model. Another critical difference in the performance of various conversational agents is their ability to access auxiliary services for task delegation.
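The preference-based tuning the abstract mentions can be approximated at inference time by best-of-n sampling against a reward model. Below is a minimal sketch under stated assumptions: `reward_model` is a toy hand-written scorer standing in for a neural reward model trained on human preference comparisons, and `best_of_n` simply picks the highest-scoring candidate rather than updating model weights as full RLHF would.

```python
# Toy stand-in for a learned reward model. In a real RLHF pipeline this
# would be a neural network trained on human preference comparisons.
def reward_model(response: str) -> float:
    score = 0.0
    if "please" in response.lower():
        score += 1.0            # crude proxy for "helpful and polite"
    score -= 0.01 * len(response)  # mild penalty for rambling answers
    return score

def best_of_n(candidates: list[str]) -> str:
    """Rejection sampling: return the candidate the reward model rates
    highest. A cheap inference-time proxy for RLHF-style alignment."""
    return max(candidates, key=reward_model)

candidates = [
    "No.",
    "Please find the requested report attached.",
    "Here is an extremely long rambling answer that the scorer penalizes heavily for length.",
]
print(best_of_n(candidates))
```

The design trade-off mirrors the abstract's point: best-of-n needs no per-model fine-tuning but pays the cost at every query, whereas RLHF bakes the preference signal into the weights once, expensively, per model.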
#5 · Posted 2025-3-22 09:52:08
LLM Controls Execution Flow Hijacking — …tical systems, developing verification tools for prompts and the resulting API calls, implementing security-by-design good practices, and enhancing incident logging and alerting mechanisms can all be considered to reduce the novel attack surface presented by LLMs.
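The API-call verification the abstract recommends can be sketched as an allowlist gate between the model and real services. This is an illustrative sketch, not the book's implementation: the tool names, JSON shape, and `ALLOWED_CALLS` table are all hypothetical.

```python
import json

# Hypothetical allowlist: tool name -> permitted argument keys.
ALLOWED_CALLS = {
    "get_weather": {"city"},
    "search_docs": {"query", "max_results"},
}

def verify_llm_call(raw: str) -> dict:
    """Validate an LLM-emitted tool call (as JSON) before it reaches real
    APIs. Rejecting unknown tools and unexpected arguments narrows the
    attack surface if the model's output is hijacked via prompt injection."""
    call = json.loads(raw)
    name, args = call.get("tool"), call.get("args", {})
    if name not in ALLOWED_CALLS:
        raise PermissionError(f"tool {name!r} is not allowlisted")
    extra = set(args) - ALLOWED_CALLS[name]
    if extra:
        raise PermissionError(f"unexpected arguments: {sorted(extra)}")
    return call

print(verify_llm_call('{"tool": "get_weather", "args": {"city": "Bern"}}'))
```

In this scheme the verifier, not the LLM, decides what executes; the model's output is treated as untrusted input, which pairs naturally with the incident-logging measures the abstract also lists.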
#9 · Posted 2025-3-23 02:16:12
Private Information Leakage in LLMs — …generative AI. This chapter relates the threat of information leakage to other adversarial threats, provides an overview of the current state of research on the mechanisms involved in memorization in LLMs, and discusses adversarial attacks that aim to extract memorized information from LLMs.
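The extraction attacks the abstract describes typically probe a model with a prefix of a suspected training secret and check whether the continuation reveals it. The sketch below uses a toy verbatim-memorizing "model" backed by its own training corpus; the corpus, the `SECRET-12345` canary, and the helper names are invented for illustration, and a real attack would query an actual LLM the same way.

```python
# Toy "model": completes a prefix by verbatim lookup in its training data,
# i.e. it memorizes perfectly. Real LLMs memorize only partially, which is
# exactly what canary-based probing tries to measure.
TRAINING_CORPUS = [
    "the api key is SECRET-12345 do not share",
    "weather tomorrow looks sunny in Bern",
]

def toy_complete(prefix: str) -> str:
    for doc in TRAINING_CORPUS:
        if doc.startswith(prefix):
            return doc[len(prefix):]
    return ""

def leaks_canary(prefix: str, canary: str) -> bool:
    """Prefix-probing attack: does the continuation reveal the canary?"""
    return canary in toy_complete(prefix)

print(leaks_canary("the api key is ", "SECRET-12345"))  # memorized -> leaked
```

Planting unique canaries in training data and probing for them afterwards is a standard way to quantify how much a model has memorized before deployment.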