Titlebook: Large Language Models in Cybersecurity; Threats, Exposure an… — Andrei Kucharavy, Octave Plancherel, Vincent Lenders; Book, 2024

Views: 34374 | Replies: 51
OP
Posted 2025-3-21 18:30:52
Title: Large Language Models in Cybersecurity
Subtitle: Threats, Exposure an…
Editors: Andrei Kucharavy, Octave Plancherel, Vincent Lenders
Video: http://file.papertrans.cn/582/581341/581341.mp4
Overview: This book is open access, which means that you have free and unlimited access. Provides practitioners with knowledge about inherent cybersecurity risks related to LLMs. Provides methodologies on how to…
Book cover: Titlebook: Large Language Models in Cybersecurity; Threats, Exposure an… Andrei Kucharavy, Octave Plancherel, Vincent Lenders; Book, 2024
Description: This open access book provides cybersecurity practitioners with the knowledge needed to understand the risks posed by the increased availability of powerful large language models (LLMs) and how those risks can be mitigated. It attempts to outrun malicious attackers by anticipating what they could do. It also alerts LLM developers to the cybersecurity risks of their work and provides them with tools to mitigate those risks. The book starts in Part I with a general introduction to LLMs and their main application areas. Part II collects descriptions of the most salient threats LLMs represent in cybersecurity, be they as tools for cybercriminals or as novel attack surfaces when integrated into existing software. Part III focuses on forecasting the exposure and the development of the technologies and science underpinning LLMs, as well as the macro-level levers available to regulators to further cybersecurity in the age of LLMs. In Part IV, mitigation techniques that should allow safe and secure development and deployment of LLMs are presented. The book concludes with two final chapters in Part V, one speculating what a secure design and integration of LLMs from first principl…
Publication date: Book, 2024
Keywords: Open Access; large language models; cybersecurity; cyberdefense; neural networks; societal implications; r…
Edition: 1
DOI: https://doi.org/10.1007/978-3-031-54827-7
ISBN (softcover): 978-3-031-54829-1
ISBN (ebook): 978-3-031-54827-7
Copyright: The Editor(s) (if applicable) and The Author(s) 2024
Publication information is being updated.

[Metrics charts for "Large Language Models in Cybersecurity" (no data rendered): Impact Factor, Web Visibility, Citation Count, Annual Citations, Reader Feedback, each with a subject ranking.]
#2
Posted 2025-3-21 20:56:57
Conversational Agents — …evaluation of model output, which is then used for further fine-tuning. Models fine-tuned with reinforcement learning from human feedback (RLHF) perform better, but RLHF is resource-intensive and specific to each model. Another critical difference in the performance of various CAs is their ability to access auxiliary services for task delegation.
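The abstract above mentions RLHF only in passing; as a rough illustration of the preference-ranking step at its core, here is a minimal sketch of the pairwise Bradley-Terry loss that reward models are typically trained with. All names and scores are illustrative, not from the book, and a real pipeline would train a neural reward model and then fine-tune the LLM against it with an RL algorithm such as PPO:

```python
import math

def bradley_terry_loss(r_chosen: float, r_rejected: float) -> float:
    """Pairwise preference loss: pushes the reward of the human-preferred
    answer above the reward of the rejected one."""
    return -math.log(1.0 / (1.0 + math.exp(-(r_chosen - r_rejected))))

# Toy reward scores for two candidate answers to the same prompt.
loss_good = bradley_terry_loss(2.0, -1.0)  # preferred answer already ranked higher -> small loss
loss_bad = bradley_terry_loss(-1.0, 2.0)   # ranking inverted -> large loss
assert loss_good < loss_bad
```

The resource cost noted in the abstract comes from repeating this human-labeling and fine-tuning loop per model, not from the loss itself.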
#5
Posted 2025-3-22 09:52:08
LLM Controls Execution Flow Hijacking — …critical systems, developing verification tools for prompts and the resulting API calls, implementing security-by-design best practices, and enhancing incident logging and alerting mechanisms can be considered to reduce the novel attack surface presented by LLMs.
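One of the mitigations listed above, verifying the API calls an LLM proposes before executing them, can be sketched as an allow-list check. This is a hypothetical illustration under assumed names (`ALLOWED_CALLS`, `verify_call`), not an implementation from the chapter:

```python
# Allow-list of tool names an LLM-driven agent may invoke, with the
# argument names each call is permitted to carry.
ALLOWED_CALLS = {
    "get_weather": {"city"},            # read-only, considered safe
    "search_docs": {"query", "limit"},  # read-only, considered safe
}

def verify_call(name: str, args: dict) -> bool:
    """Reject any LLM-proposed call that is not on the allow-list
    or that smuggles in unexpected arguments."""
    allowed_args = ALLOWED_CALLS.get(name)
    if allowed_args is None:
        return False
    return set(args) <= allowed_args

assert verify_call("get_weather", {"city": "Bern"})
assert not verify_call("delete_user", {"id": 42})  # not allow-listed
assert not verify_call("search_docs", {"query": "x", "shell": "rm -rf /"})
```

A deny-by-default check like this sits between the model and the execution environment, so a hijacked prompt cannot expand the agent's capabilities beyond what was explicitly granted.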
#9
Posted 2025-3-23 02:16:12
Private Information Leakage in LLMs — …generative AI. This chapter relates the threat of information leakage to other adversarial threats, provides an overview of the current state of research on the mechanisms involved in memorization in LLMs, and discusses adversarial attacks aiming to extract memorized information from LLMs.
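The extraction attacks the abstract refers to often take a simple form: prompt the model with the prefix of a suspected training document and test whether the continuation reproduces the original suffix verbatim. The sketch below illustrates that probe with a stub lookup table standing in for the LLM; the document, the canonical fake SSN, and all function names are invented for the demo, not taken from the chapter:

```python
# Suspected training document (entirely fictional; 123-45-6789 is the
# canonical example SSN).
TRAINING_DOC = "Patient John Doe, SSN 123-45-6789, diagnosed with ..."

def stub_model_continue(prefix: str) -> str:
    """Stand-in for an LLM call: a memorizing model completes known
    training prefixes verbatim."""
    if TRAINING_DOC.startswith(prefix):
        return TRAINING_DOC[len(prefix):]
    return " [no memorized continuation]"

def leaks_verbatim(prefix: str, true_suffix: str, min_overlap: int = 20) -> bool:
    """Flag leakage if the model's continuation matches the true suffix
    for at least min_overlap characters."""
    completion = stub_model_continue(prefix)
    return completion[:min_overlap] == true_suffix[:min_overlap]

assert leaks_verbatim(TRAINING_DOC[:25], TRAINING_DOC[25:])  # memorized text extracted
assert not leaks_verbatim("Unseen text", "xyz")
```

Against a real model the same probe is run at scale over many candidate prefixes, with the overlap threshold controlling how aggressive the verbatim-match criterion is.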