Title: AI Verification; First International Conference. Guy Avni, Mirco Giacobbe, Christian Schilling (Eds.). Conference proceedings 2024.
Concept-Based Analysis of Neural Networks via Vision-Language Models. …specifications for vision tasks and the lack of efficient verification procedures. In this paper, we propose to leverage emerging multimodal vision-language foundation models (VLMs) as a lens through which we can reason about vision models. VLMs have been trained on a large body of images accompanied…
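A rough, illustrative sketch of the kind of reasoning a vision-language model enables: score an image against natural-language concept strings with an off-the-shelf CLIP model through the Hugging Face transformers API. The concept strings and image path are placeholders, and using CLIP here is an assumption for illustration, not the chapter's specification mechanism.

```python
# Illustrative sketch (not the chapter's method): scoring an image against
# natural-language concept strings with an off-the-shelf CLIP model.
# Assumes torch, transformers, and pillow are installed; paths and concept
# strings are placeholders.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

concepts = ["a photo of a stop sign", "a photo of a speed limit sign"]  # assumed concepts
image = Image.open("example.png")  # placeholder path

inputs = processor(text=concepts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds image-text similarity scores; softmax turns them
# into a distribution over the candidate concepts.
probs = outputs.logits_per_image.softmax(dim=-1)
for concept, p in zip(concepts, probs[0].tolist()):
    print(f"{concept}: {p:.3f}")
```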
Parallel Verification for …-Equivalence of Neural Network Quantization. …memory. However, it also introduces a loss of generalization and even potential errors into the models. In this work, we propose a parallelization technique for formally verifying the equivalence between quantized models and their original real-valued counterparts. In order to guarantee both . and ., mixed integer linear programming…
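The mention of mixed integer linear programming can be made concrete on the smallest possible case: one real-valued ReLU neuron and its quantized counterpart. A minimal sketch, assuming toy weights, a made-up quantization step, and PuLP with its bundled CBC solver; the big-M encoding below is the standard MILP treatment of ReLU, while the chapter verifies whole networks in parallel.

```python
# Minimal sketch, with toy values: worst-case one-sided gap between a
# real-valued ReLU neuron and its quantized version over x in [-1, 1],
# using the standard big-M MILP encoding of ReLU (PuLP + bundled CBC).
import pulp

w, b = 0.73, 0.21                       # real-valued parameters (toy values)
scale = 0.25                            # assumed quantization step
wq, bq = round(w / scale) * scale, round(b / scale) * scale

M = 10.0                                # big-M constant, large enough for these bounds
prob = pulp.LpProblem("relu_gap", pulp.LpMaximize)

x = pulp.LpVariable("x", lowBound=-1, upBound=1)
y = pulp.LpVariable("y", lowBound=0)    # relu(w*x + b)
yq = pulp.LpVariable("yq", lowBound=0)  # relu(wq*x + bq)
a = pulp.LpVariable("a", cat="Binary")  # phase of the real-valued ReLU
aq = pulp.LpVariable("aq", cat="Binary")

prob += y - yq                          # objective: maximize the one-sided deviation

# Big-M encoding of y = max(0, w*x + b): a = 1 forces y = w*x + b (>= 0),
# a = 0 forces y = 0 (feasible only when w*x + b <= 0).
prob += y >= w * x + b
prob += y <= w * x + b + M * (1 - a)
prob += y <= M * a
# Same encoding for the quantized neuron.
prob += yq >= wq * x + bq
prob += yq <= wq * x + bq + M * (1 - aq)
prob += yq <= M * aq

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("worst-case y - yq:", pulp.value(prob.objective), "at x =", pulp.value(x))
```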
Iterative Counter-Example Guided Robustness Verification for Neural Networks. …by considering different types of abstraction techniques focused on the non-linearity in the neural network's computation. We propose a complementary approach that abstracts the neural network by discarding neurons. Our abstraction is based on the robustness property being verified. We t…
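A toy rendition of the discard-neurons idea, under heavy assumptions: the tiny random network, the impact score, and the plain interval propagation below are stand-ins chosen for brevity, not the chapter's construction, and zeroing out neurons this way is not claimed to be a sound over-approximation.

```python
# Toy sketch (illustrative assumptions throughout): drop low-impact hidden
# neurons of a tiny ReLU network and compare interval output bounds of the
# reduced network with the original over a small input box.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 3)), rng.normal(size=8)   # hidden layer
W2, b2 = rng.normal(size=(1, 8)), rng.normal(size=1)   # output layer

def interval_bounds(W1, b1, W2, b2, lo, hi):
    """Interval propagation through dense -> ReLU -> dense."""
    c, r = (lo + hi) / 2, (hi - lo) / 2
    z_lo = W1 @ c - np.abs(W1) @ r + b1
    z_hi = W1 @ c + np.abs(W1) @ r + b1
    h_lo, h_hi = np.maximum(z_lo, 0), np.maximum(z_hi, 0)
    c2, r2 = (h_lo + h_hi) / 2, (h_hi - h_lo) / 2
    return W2 @ c2 - np.abs(W2) @ r2 + b2, W2 @ c2 + np.abs(W2) @ r2 + b2

x0, eps = np.array([0.1, -0.2, 0.3]), 0.05
lo, hi = x0 - eps, x0 + eps

# Score hidden neurons by how much they can move the output over the box
# (assumed rule), then discard the weaker half by zeroing their outgoing weights.
upper_act = np.maximum(W1 @ x0 + np.abs(W1) @ (eps * np.ones(3)) + b1, 0)
impact = np.abs(W2[0]) * upper_act
keep = impact >= np.quantile(impact, 0.5)
W2_small = W2 * keep

print("original bounds:", interval_bounds(W1, b1, W2, b2, lo, hi))
print("reduced bounds :", interval_bounds(W1, b1, W2_small, b2, lo, hi))
```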
…Without the laborious task of manual heuristic creation and tuning, data-driven approaches demonstrate significant potential by extracting crucial patterns from a small set of data points. However, at present, symbolic methods generally surpass data-driven solvers in performance. In this work, we develop…
…the model’s output. However, recent work has shown that most existing methods to implement SVAs have drawbacks, resulting in biased or unreliable explanations that fail to correctly capture the true intrinsic relationships between features and model outputs. Moreover, the mechanism and consequences of t…
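For orientation, Shapley value attributions can be computed exactly on a toy model by enumerating all feature coalitions, as in the sketch below; the three-feature model and the zero baseline used for "absent" features are illustrative assumptions, and it is precisely such baseline conventions that practical SVA implementations approximate.

```python
# Exact Shapley value attributions for a toy three-feature model by
# enumerating all coalitions. Model and zero baseline are illustrative.
from itertools import combinations
from math import factorial

def model(x):
    # Toy model with an interaction between features 1 and 2.
    return 2.0 * x[0] + x[1] * x[2]

x = [1.0, 2.0, 3.0]            # instance to explain
baseline = [0.0, 0.0, 0.0]     # value assumed for features outside a coalition

def value(coalition):
    """Model output when only features in `coalition` keep their true value."""
    masked = [x[i] if i in coalition else baseline[i] for i in range(len(x))]
    return model(masked)

n = len(x)
for i in range(n):
    others = [j for j in range(n) if j != i]
    phi = 0.0
    for k in range(n):
        for S in combinations(others, k):
            weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
            phi += weight * (value(set(S) | {i}) - value(set(S)))
    print(f"Shapley value of feature {i}: {phi:.3f}")
```

Here the attributions sum to the difference between the model output at x and at the baseline, which is the efficiency property of Shapley values.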
…assume a fixed control period. In control theory, higher frequency usually improves performance. However, for current analysis methods, increasing the frequency complicates verification. In the limit, when actuation is performed continuously, no existing neural network control systems verification…
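To make the role of the control period concrete, here is a minimal simulation sketch, assuming a double-integrator plant, a hand-written saturating controller standing in for a trained network, and plain Euler integration. It is an empirical illustration of why actuation frequency matters, not a verification procedure.

```python
# Minimal simulation sketch (not verification): effect of the control period
# on a toy closed loop. Plant, controller, and gains are illustrative.
import numpy as np

def controller(state):
    # Stand-in for a neural network controller: saturating state feedback.
    return np.tanh(-1.5 * state[0] - 1.0 * state[1])

def simulate(control_period, t_end=10.0, dt=0.001):
    state = np.array([1.0, 0.0])                  # position, velocity
    u, next_update = controller(state), control_period
    for step in range(int(t_end / dt)):
        pos, vel = state
        state = state + dt * np.array([vel, u])   # Euler step: pos' = vel, vel' = u
        if (step + 1) * dt >= next_update:        # zero-order hold between updates
            u = controller(state)
            next_update += control_period
    return state

for period in (1.0, 0.1, 0.01):
    final = simulate(period)
    print(f"period {period:>5}: final position {final[0]:+.3f}, velocity {final[1]:+.3f}")
```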
…approach to controlling quality in classification tasks is to ensure that predictive performance is balanced between different classes; however, previous work has shown that even if class performance is balanced, instances of some classes are easier to perturb in such a way that they are…
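The notion of a per-class robustness distribution can be illustrated exactly on a linear classifier, where the minimal L2 perturbation that changes a prediction is the margin to the nearest pairwise decision boundary divided by the corresponding weight-difference norm. The synthetic data and logistic-regression model below are assumptions made for the sketch; the chapter itself studies neural networks.

```python
# Illustrative sketch: per-class distributions of the minimal L2 perturbation
# needed to change a linear classifier's prediction (exact for linear models).
# Synthetic data and the logistic-regression model are assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=600, n_features=10, n_informative=6,
                           n_classes=3, n_clusters_per_class=1, random_state=0)
clf = LogisticRegression(max_iter=2000).fit(X, y)
W, b = clf.coef_, clf.intercept_

def min_perturbation(x, pred):
    """Distance from x to the nearest boundary between `pred` and another class."""
    dists = []
    for j in range(len(W)):
        if j != pred:
            dw = W[pred] - W[j]
            dists.append((dw @ x + b[pred] - b[j]) / np.linalg.norm(dw))
    return min(dists)

preds = clf.predict(X)
for c in range(3):
    r = np.array([min_perturbation(x, c) for x in X[preds == c]])
    print(f"class {c}: median robustness {np.median(r):.3f}, "
          f"10th percentile {np.percentile(r, 10):.3f}")
```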
https://doi.org/10.1007/978-3-031-65112-0
Keywords: Computer Science; Informatics; Conference Proceedings; Research; Applications
978-3-031-65111-3. The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2024.
AI Verification, 978-3-031-65112-0. Series ISSN 0302-9743; Series E-ISSN 1611-3349.
…though we demonstrate bottlenecks in existing tools when handling the required specifications. We demonstrate the approach’s efficacy by applying it to a vision-based autonomous airplane taxiing system and compare it with a fixed-frequency analysis baseline.
Error Analysis of Shapley Value-Based Model Explanations: An Informative Perspective. …under-informative explanations. We demonstrate how these concepts can be used effectively to understand potential errors of existing SVA methods. In particular, for the widely deployed assumption-based SVAs, we find that they can easily be under-informative due to the distribution drift caused by di…
A Preliminary Study to Examining Per-class Performance Bias via Robustness Distributions. …of instances. We observed that the robustness of the same class over the same data can differ significantly across different neural networks; this means that even when a neural network appears to be unbiased, it might be easier to perturb instances of a given class so that they are misclassified…
Clover: Closed-Loop Verifiable Code Generation. …level of difficulty. Experimental results show that for this dataset: (i) LLMs are reasonably successful at automatically generating formal specifications; and (ii) our consistency checker achieves a promising acceptance rate (up to .) for correct instances while maintaining zero tolerance for adversarial…
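The closed-loop acceptance idea (generate code together with formal annotations, then gate on an independent check) can be sketched as below. The `generate_with_llm` stub is hypothetical, and the use of the Dafny command-line verifier (`dafny verify`) is an assumption for illustration; a real Clover-style checker involves more than a single verifier call.

```python
# Sketch of a "generate, then check" gate for LLM-produced, annotated code.
# `generate_with_llm` is a hypothetical stub; the Dafny CLI call assumes a
# `dafny` binary on PATH (Dafny 4 syntax: `dafny verify file.dfy`).
import subprocess
import tempfile
from pathlib import Path

def generate_with_llm(task_description: str) -> str:
    # Hypothetical: ask an LLM for Dafny code with pre-/postconditions.
    raise NotImplementedError

def consistency_check(dafny_source: str) -> bool:
    """Accept only if the verifier discharges every annotation."""
    with tempfile.TemporaryDirectory() as tmp:
        path = Path(tmp) / "candidate.dfy"
        path.write_text(dafny_source)
        result = subprocess.run(["dafny", "verify", str(path)],
                                capture_output=True, text=True)
        return result.returncode == 0

def closed_loop(task_description: str, attempts: int = 3) -> str | None:
    for _ in range(attempts):
        candidate = generate_with_llm(task_description)
        if consistency_check(candidate):
            return candidate      # accepted: code and specification agree
    return None                   # rejected: nothing passed the checker
```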
Conference proceedings 2024. …during July 2024. The scope of the topics was broadly categorized into two groups. The first group, formal methods for artificial intelligence, comprised: formal specifications for systems with AI components; formal methods for analyzing systems with AI components; formal synthesis methods of AI components; …explainability of systems with AI components. The second group, artificial intelligence for formal methods, comprised: AI methods for formal verification; AI methods for formal synthesis; AI methods for safe control; and AI methods for falsification.
Iterative Counter-Example Guided Robustness Verification for Neural Networks. …guided abstraction-refinement based verification strategy. We demonstrate the viability of our strategy by deploying the Marabou verification framework and applying it to verify the robustness properties of the neural network used in the next-generation airborne collision avoidance system for unmanned aircraft (ACAS Xu).
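Read as pseudocode, the overall counter-example guided loop has the familiar CEGAR shape sketched below; the `abstract_network`, `verify`, `refine`, and `is_real_counterexample` arguments are hypothetical placeholders (for example, `verify` could dispatch the abstract network and the robustness query to Marabou).

```python
# Generic CEGAR-style robustness loop, written as a higher-order sketch.
# All four callables are hypothetical placeholders for the chapter's
# components; e.g. `verify` could wrap a Marabou query on the abstraction.

def cegar_robustness(network, prop, abstract_network, verify, refine,
                     is_real_counterexample, max_iters=20):
    abstraction = abstract_network(network, prop)        # e.g. discard neurons
    for _ in range(max_iters):
        verdict, cex = verify(abstraction, prop)
        if verdict == "safe":
            return "robust"                               # safe over-approximation implies robustness
        if is_real_counterexample(network, prop, cex):
            return "not robust"                           # violation reproduces on the original network
        abstraction = refine(abstraction, network, cex)   # spurious: restore discarded neurons
    return "unknown"
```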