Title: Artificial Neural Networks and Machine Learning – ICANN 2024; 33rd International Conference. Editors: Michael Wand, Kristína Malinovská, Igor V. Tetko. Conference proceedings.
Bibliometric indicators listed for Artificial Neural Networks and Machine Learning – ICANN 2024 (each with a subject ranking): Impact Factor, Online Visibility, Citation Frequency, Annual Citations, Reader Feedback.
Addressing the Privacy and Complexity of Urban Traffic Flow Prediction with Federated Learning and Spatiotemporal GCN: …incorporate external factors into the road network, which helps the model consider the multiple factors affecting traffic flow more fully. Evaluation on a real-world dataset shows that the framework achieves high accuracy while preserving privacy.
Cross-Modal Attention Alignment Network with Auxiliary Text Description for Zero-Shot Sketch-Based Image Retrieval: …no textual information involved. However, the growing prevalence of large-scale pre-trained language models (LLMs), which have demonstrated great knowledge learned from web-scale data, offers an opportunity to distill collective textual information. Our key innovation lies in the usage of…
Exploring Interpretable Semantic Alignment for Multimodal Machine Translation: …results. Existing methods focus on constructing the global cross-modal interaction between text and vision while ignoring the local semantic correspondences, which could improve the interpretability of multimodal feature fusion. To this end, we propose a novel multimodal fusion encoder with local semantic…
Modal Fusion-Enhanced Two-Stream Hashing Network for Cross Modal Retrieval: …retrieval, which does not rely on image label information, has garnered widespread attention. However, existing unsupervised methods still face several common issues. Firstly, current methods often consider only local or global single-feature extraction in image feature extraction. Secondly, …
Unifying Visual and Semantic Feature Spaces with Diffusion Models for Enhanced Cross-Modal Alignment: …ing visual perspectives of subject objects and lighting discrepancies. To mitigate these challenges, existing studies commonly incorporate additional modal information matching the visual data to regularize the model's learning process, enabling the extraction of high-quality visual features from co…
Addressing the Privacy and Complexity of Urban Traffic Flow Prediction with Federated Learning and Spatiotemporal GCN: …short of adequately safeguarding user privacy. Moreover, these systems tend to overlook how external factors affect traffic flow. To tackle these concerns, we propose a novel architecture based on federated learning and a Spatiotemporal GCN. Simultaneously, we employ graph embedding techniques to incorporate…
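The excerpt describes predicting traffic flow under federated learning so that raw data never leaves each client. The paper's actual protocol is not given here; as a minimal sketch of the federated-averaging idea such frameworks typically build on (the linear model and the two road-sensor clients below are hypothetical stand-ins for the spatiotemporal GCN):

```python
# Minimal federated averaging (FedAvg) sketch: each client trains on its own
# traffic data locally; only model weights, never raw records, are shared.
def local_update(weights, client_data, lr=0.1):
    """One gradient step of a linear model on a client's private (x, y) pairs."""
    grad = [0.0] * len(weights)
    for x, y in client_data:
        pred = sum(w * xi for w, xi in zip(weights, x))
        err = pred - y
        for i, xi in enumerate(x):
            grad[i] += 2 * err * xi / len(client_data)
    return [w - lr * g for w, g in zip(weights, grad)]

def fed_avg(client_updates):
    """Server aggregates client weights by simple averaging."""
    n = len(client_updates)
    return [sum(ws) / n for ws in zip(*client_updates)]

# Two hypothetical sensor clients privately fitting the relation y = 2*x.
clients = [[([1.0], 2.0), ([2.0], 4.0)], [([3.0], 6.0)]]
global_w = [0.0]
for _ in range(200):
    updates = [local_update(global_w, data) for data in clients]
    global_w = fed_avg(updates)
# global_w converges to [2.0] without the server ever seeing the (x, y) data
```

The averaging step is the privacy boundary: the server only ever sees per-client weight vectors, not the traffic records behind them.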
An Accuracy-Shaping Mechanism for Competitive Distributed Learning: …raw data, while competing for the same customer base using model-based services. Federated learning is an extensively studied distributed learning approach, but it has been shown to discourage collaboration in a competitive environment. The reason is that the shared global model is a public good, which…
Federated Adversarial Learning for Robust Autonomous Landing Runway Detection: …the face of possible adversarial attacks. In this paper, we propose a federated adversarial learning-based framework to detect landing runways using paired data comprising clean local data and its adversarial version. Firstly, the local model is pre-trained on a large-scale lane detection dataset.
Layer-Wised Sparsification Based on Hypernetwork for Distributed NN Training: …training strategies have been proposed to speed up training, but their efficiency is often hindered by the frequent communication required between different computational nodes. Numerous gradient compression techniques (e.g., Sparsification, Quantization, Low-Rank) have been introduced to…
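Gradient sparsification, one of the compression techniques the excerpt lists, can be sketched as keeping only the k largest-magnitude gradient entries before communication. The layer-wise ratios the paper's hypernetwork would choose are not described in the excerpt, so a fixed k stands in:

```python
# Top-k gradient sparsification sketch: only (index, value) pairs for the
# k largest-magnitude entries are transmitted between nodes.
def sparsify_topk(grad, k):
    """Keep the k entries with the largest absolute value."""
    idx = sorted(range(len(grad)), key=lambda i: abs(grad[i]), reverse=True)[:k]
    return {i: grad[i] for i in idx}

def densify(sparse, n):
    """Reconstruct a dense gradient on the receiving node, zeros elsewhere."""
    return [sparse.get(i, 0.0) for i in range(n)]

g = [0.01, -3.0, 0.2, 5.0, -0.05]
sent = sparsify_topk(g, 2)          # only 2 of 5 entries are transmitted
recovered = densify(sent, len(g))   # [0.0, -3.0, 0.0, 5.0, 0.0]
```

In practice the dropped residual is usually accumulated locally and added back into the next round's gradient, so no signal is permanently lost.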
ESSformer: Transformers with ESS Attention for Long-Term Series Forecasting: …for LTSF: ESSformer. It is built upon two essential components: (i) we adopt the Channel-Patch Independence architecture, where channels share the same model weights and have independent embeddings to avoid the impact of distribution shifts between channels; patches are used to extract local semantic…
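The patching step behind the Channel-Patch Independence idea can be illustrated roughly as follows; the patch length and stride below are illustrative choices, not the paper's:

```python
# Patching sketch: one channel of a multivariate series is cut into
# fixed-length windows ("patches") that serve as tokens for the transformer.
def make_patches(series, patch_len, stride):
    """Split one channel's series into (possibly overlapping) patches."""
    return [series[i:i + patch_len]
            for i in range(0, len(series) - patch_len + 1, stride)]

series = list(range(12))            # one channel, 12 time steps
patches = make_patches(series, patch_len=4, stride=4)
# 3 non-overlapping patches: [0..3], [4..7], [8..11]
```

Under channel independence, each channel of the multivariate input is patched and encoded this way with the same shared weights but its own embedding, which is what insulates the model from per-channel distribution shifts.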
https://doi.org/10.1007/978-3-031-72347-6
Keywords: artificial intelligence; classification; deep learning; generative models; graph neural networks; image p…
ISBN 978-3-031-72346-9. © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG.
…inconsistencies in feature spaces, and constraints on downstream tasks. To address these issues, we propose an Adaptive Attention-based Cross-Modal Representation Integration Framework. This framework can adaptively capture and associate feature information from different modalities and effectively align…
…with multiple videos but are incorrectly labeled as exclusive to one, leading to much incorrectly mismatched data. Such ignorance may hinder model performance and flaw the evaluation of video retrieval. To alleviate this problem, we develop a training-free annotation pipeline, Boot…
Cross-Modal Attention Alignment Network with Auxiliary Text Description for Zero-Shot Sketch-Based Image Retrieval (continued): …LLM with several interrogative sentences; (ii) a Feature Extraction Module that includes two ViTs for sketch and image data and a transformer for extracting tokens from the sentences of each training category; and (iii) a Cross-modal Alignment Module that exchanges the token features of both text-sketch…
Exploring Interpretable Semantic Alignment for Multimodal Machine Translation (continued): …analysis of the results demonstrates the effectiveness and interpretability of our model, which is highly competitive with the baseline. Further exploration of extractors in MMT shows that a large multimodal pre-trained model can provide more fine-grained semantic alignment, thus giving it an advantage.
Modal Fusion-Enhanced Two-Stream Hashing Network for Cross Modal Retrieval (continued): …fusion matrices between modalities. Subsequently, by adjusting the similarity weights of the inter-modality fusion matrix, we shorten the distances between the most similar instance pairs and increase the distances between the most dissimilar instance pairs, thereby generating hash codes with higher…
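The hash-code retrieval mechanics this excerpt builds on can be sketched as sign-binarizing projected features and comparing Hamming distances; the fixed projections below are stand-ins for the paper's learned two-stream networks:

```python
# Cross-modal hashing sketch: features from each modality are projected and
# binarized into {0,1} codes, and retrieval compares Hamming distances.
def hash_code(features, projections):
    """Sign-binarize projected features into a binary code."""
    return [1 if sum(w * f for w, f in zip(p, features)) >= 0 else 0
            for p in projections]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

# Fixed illustrative projections (a learned network would replace these).
proj = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0],
        [0, 0, 0, 1], [1, 1, 0, 0], [0, 0, 1, 1]]
img = [0.9, -0.2, 0.4, 0.1]           # image feature
txt_close = [0.8, -0.1, 0.5, 0.2]     # semantically similar text feature
txt_far = [-0.9, 0.7, -0.5, -0.8]     # dissimilar text feature
# img and txt_close map to the same code; txt_far differs in every bit
```

Shortening distances between similar pairs and stretching them between dissimilar pairs, as the excerpt describes, amounts to shaping these codes so that semantic similarity becomes small Hamming distance.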
Text Visual Question Answering Based on Interactive Learning and Relationship Modeling: …(RPRET) layer is introduced to model the relative position relationship between different modalities in the image, thereby improving performance on questions about spatial position relationships. The proposed method outperforms various state-of-the-art models on two public datasets.
An Accuracy-Shaping Mechanism for Competitive Distributed Learning (continued): …the main server's model, enabling the provision of differentiated models to each organization. Both our theoretical analysis and numerical experiments validate the efficacy of SFL and the proposed mechanism, showing significant improvements in both model accuracy and social welfare at equilibrium.
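One plausible reading of "accuracy shaping" is that the server hands each organization a copy of the global model perturbed in inverse proportion to its contribution, so free-riding yields a worse model. The scaling rule below is an entirely made-up stand-in, not the paper's mechanism:

```python
# Accuracy-shaping sketch: noise added to the served model shrinks to zero
# as an organization's contribution approaches the maximum.
import random

def shaped_model(global_weights, contribution, max_contribution,
                 sigma=1.0, seed=0):
    """Return a per-organization copy of the model with contribution-scaled noise."""
    rng = random.Random(seed)
    noise_scale = sigma * (1.0 - contribution / max_contribution)
    return [w + rng.gauss(0.0, noise_scale) for w in global_weights]

w = [1.0, -2.0, 0.5]
full = shaped_model(w, contribution=10, max_contribution=10)    # exact model
partial = shaped_model(w, contribution=2, max_contribution=10)  # noisy copy
```

The game-theoretic point is that differentiated accuracy turns the model from a pure public good into something each organization has an incentive to pay for with data.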
Federated Adversarial Learning for Robust Autonomous Landing Runway Detection (continued): …example problem in landing runway detection. Our experimental evaluations on both synthetic and real images from the Landing Approach Runway Detection (LARD) dataset consistently demonstrate the good performance of the proposed federated adversarial learning and its robustness to adversarial attacks.
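Adversarial versions of clean data, as used in the paired training the excerpt describes, are commonly generated with FGSM-style perturbations. A hand-computed sketch on a tiny logistic model (standing in for the runway detector) follows:

```python
# FGSM sketch: perturb the input in the sign direction of the loss gradient
# with respect to the input, lowering the model's confidence in the true label.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm(x, y, w, eps):
    """x + eps * sign(d loss / d x) for a logistic model with weights w."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
    # cross-entropy input gradient: d loss / d x_i = (p - y) * w_i
    return [xi + eps * math.copysign(1.0, (p - y) * wi)
            for xi, wi in zip(x, w)]

w = [2.0, -1.0]
x = [1.0, 0.5]                     # clean input, true label 1
x_adv = fgsm(x, 1.0, w, eps=0.3)   # adversarial version: [0.7, 0.8]
p_clean = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
p_adv = sigmoid(sum(wi * xi for wi, xi in zip(w, x_adv)))
# p_adv < p_clean: the perturbed input is harder to classify correctly
```

Training on such (clean, adversarial) pairs, as the framework does, pushes the model's decision boundary away from easily attackable inputs.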
…retrieval models on the corrected datasets, demonstrating the real performance of video retrieval models. Moreover, to make full use of the corrected training data, we integrate a Cross-matching-based Learning Rate Strategy (CLRS) into video retrieval models, achieving a 2.85 R@10 improvement on MSVD.