MoQuad: Motion-focused Quadruple Construction for Video Contrastive Learning. By simply applying MoQuad to SimCLR, extensive experiments show that we achieve superior performance on downstream tasks compared to the state of the art. Notably, on the UCF-101 action recognition task, we achieve 93.7% accuracy after pre-training the model on Kinetics-400 for only 200 epochs …
On the Effectiveness of ViT Features as Local Semantic Descriptors. … applicable across a variety of domains. We show by extensive qualitative and quantitative evaluation that our simple methodologies achieve competitive results with recent state-of-the-art . methods, and outperform previous unsupervised methods by a large margin. Code is available in ..
A Study on Self-Supervised Object Detection Pretraining. … by using a contrastive loss, and (2) predicting box coordinates using a transformer, which potentially benefits downstream object detection tasks. We found that these tasks do not lead to better object detection performance when finetuning the pretrained model on labeled data.
Artifact-Based Domain Generalization of Skin Lesion Models. … when evaluating such models in out-of-distribution data, they did not prefer clinically-meaningful features. Instead, performance only improved in test sets that present similar artifacts from training, suggesting models learned to ignore the known set of artifacts. Our results raise a concern that …
FairDisCo: Fairer AI in Dermatology via Disentanglement Contrastive Learning. … highlighting the skin-type bias in skin lesion classification. Extensive experimental evaluation demonstrates the effectiveness of FairDisCo, with fairer and superior performance on skin lesion classification tasks.
… Text in Everything; W14 - BioImage Computing; W15 - Visual Object-Oriented Learning Meets Interaction: Discovery, Representations, and Applications; W16 - AI for … ISBN 978-3-031-25068-2, 978-3-031-25069-9. Series ISSN 0302-9743, Series E-ISSN 1611-3349.
CIRCLe: Color Invariant Representation Learning for Unbiased Classification of Skin Lesions. …k+ images spanning 6 Fitzpatrick skin types and 114 diseases, using classification accuracy, equal opportunity difference (for light versus dark groups), and normalized accuracy range, a new measure we propose to assess fairness on multiple skin type groups. Our code is available at ..
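The teaser does not spell out CIRCLe's training objective, but the title suggests a color-invariance constraint on the learned representation. Below is a minimal sketch of that general idea, assuming a plain ResNet feature extractor and a hypothetical `skin_tone_transform` augmentation; the actual CIRCLe losses and transformation model may differ.

```python
import torch
import torch.nn.functional as F
import torchvision

# Hypothetical setup: a ResNet backbone plus a linear classifier head.
backbone = torchvision.models.resnet50(weights=None)
backbone.fc = torch.nn.Identity()                     # expose 2048-d features
classifier = torch.nn.Linear(2048, 114)               # e.g. 114 disease classes

def skin_tone_transform(images):
    """Placeholder for a skin-tone/color transformation; here just a crude
    per-image channel shift, not the transformation used in the paper."""
    shift = torch.empty(images.size(0), 3, 1, 1, device=images.device).uniform_(-0.1, 0.1)
    return (images + shift).clamp(0, 1)

def color_invariance_loss(images, labels, reg_weight=0.1):
    """Classification loss plus a regularizer pulling together the features
    of an image and its color-transformed counterpart."""
    feats = backbone(images)
    feats_t = backbone(skin_tone_transform(images))
    cls_loss = F.cross_entropy(classifier(feats), labels)
    invariance = F.mse_loss(feats, feats_t)
    return cls_loss + reg_weight * invariance
```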
Distinctive Image Captioning via CLIP Guided Group Optimization. … the wide applicability of our strategy and the consistency of metric results with human evaluation. By comparing the performance of our best model with existing state-of-the-art models, we claim that our model achieves a new state of the art on the distinctiveness objective.
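As a rough illustration of how CLIP can score a caption's distinctiveness against a group of similar images (the exact reward and optimization used in the paper are not given in this teaser), here is a sketch using the openai `clip` package; the caption and the group of similar images are assumed to be provided as inputs.

```python
import torch
import clip  # pip install git+https://github.com/openai/CLIP.git

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

@torch.no_grad()
def distinctiveness_score(caption, target_image, similar_images):
    """CLIP similarity of the caption to its target image minus the average
    similarity to a group of visually similar images (a sketch of a
    group-based distinctiveness score, not necessarily the paper's reward).
    target_image / similar_images are PIL images."""
    images = torch.stack([preprocess(im) for im in [target_image] + similar_images]).to(device)
    text = clip.tokenize([caption]).to(device)
    img_feat = model.encode_image(images).float()
    txt_feat = model.encode_text(text).float()
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
    txt_feat = txt_feat / txt_feat.norm(dim=-1, keepdim=True)
    sims = (img_feat @ txt_feat.T).squeeze(1)        # similarity to each image
    return (sims[0] - sims[1:].mean()).item()
```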
Bootstrapping Autonomous Lane Changes with Self-supervised Augmented Runs. … the Lane-Change Feasibility Prediction problem and also propose a data-driven learning approach to solve it. Experimental results are also presented to show the effectiveness of learned lane-change patterns for decision making.
… In this position paper, we first explain how self-supervised representations can be easily used to achieve state-of-the-art performance in commonly reported anomaly detection benchmarks. We then argue that tackling the next generation of anomaly detection tasks requires new technical and conceptual improvements in representation learning.
Towards Self-Supervised and Weight-preserving Neural Architecture Search. … further reduce the computational overhead to an affordable level. However, it is still cumbersome to deploy NAS in real-world applications due to the fussy procedures and the supervised learning paradigm. In this work, we propose the self-supervised and weight-preserving neural architecture search …
On the Effectiveness of ViT Features as Local Semantic Descriptors. … that such features, when extracted from a self-supervised ViT model (DINO-ViT), exhibit several striking properties, including: (i) the features encode powerful, well-localized semantic information at high spatial granularity, such as object .; (ii) the encoded semantic information is ., and …
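For readers who want to poke at such descriptors, a minimal way to pull per-patch DINO-ViT features is sketched below via `torch.hub` (this downloads the official DINO ViT-S/8 checkpoint). The paper works with specific intermediate features; taking the final token outputs, as done here, is an assumption made for brevity.

```python
import torch

# Official self-supervised DINO ViT-S/8 from torch.hub (requires internet access).
model = torch.hub.load("facebookresearch/dino:main", "dino_vits8")
model.eval()

@torch.no_grad()
def patch_descriptors(images):
    """Return one descriptor per 8x8 patch, reshaped to a spatial grid.
    images: (B, 3, 224, 224), ImageNet-normalized."""
    tokens = model.get_intermediate_layers(images, n=1)[0]   # (B, 1 + 28*28, 384)
    patches = tokens[:, 1:, :]                               # drop the [CLS] token
    b, n, d = patches.shape
    side = int(n ** 0.5)                                     # 28 for 224-px inputs
    return patches.reshape(b, side, side, d)                 # (B, 28, 28, 384)
```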
A Study on Self-Supervised Object Detection Pretraining. … spatially consistent dense representation from an image, by randomly sampling and projecting boxes to each augmented view and maximizing the similarity between corresponding box features. We study existing design choices in the literature, such as box generation, feature extraction strategies, and …
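The teaser describes the pretraining objective only at a high level; the sketch below illustrates that box-feature consistency idea, assuming the sampled boxes have already been projected into each augmented view's coordinates. Box generation, the backbone, and the exact loss in the paper may differ.

```python
import torch
import torch.nn.functional as F
import torchvision
from torchvision.ops import roi_align

# Hypothetical backbone: ResNet-50 up to the last conv stage (stride 32).
backbone = torch.nn.Sequential(*list(torchvision.models.resnet50(weights=None).children())[:-2])

def box_features(images, boxes, stride=32):
    """Pool one feature vector per box from the CNN feature map.
    boxes: list of (N_i, 4) tensors in (x1, y1, x2, y2) image coordinates."""
    fmap = backbone(images)                                  # (B, 2048, H/32, W/32)
    pooled = roi_align(fmap, boxes, output_size=1,
                       spatial_scale=1.0 / stride, aligned=True)
    return pooled.flatten(1)                                 # (sum N_i, 2048)

def box_consistency_loss(view1, view2, boxes1, boxes2):
    """Maximize cosine similarity between features of corresponding boxes
    sampled in two augmented views (a sketch of the pretraining objective)."""
    f1 = F.normalize(box_features(view1, boxes1), dim=1)
    f2 = F.normalize(box_features(view2, boxes2), dim=1)
    return (1.0 - (f1 * f2).sum(dim=1)).mean()
```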
Bootstrapping Autonomous Lane Changes with Self-supervised Augmented Runs. … In other words, our task is bootstrapping the predictability of lane-change feasibility for the autonomous vehicle. Unfortunately, autonomous lane changes happen much less frequently in autonomous runs than in manual-driving runs. Augmented runs serve well in terms of data augmentation: the number of samples …
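The teaser frames lane-change feasibility prediction as a supervised learning problem over (augmented plus real) runs but does not describe the model. A minimal sketch of such a binary classifier follows; the 6-d input features (gaps and relative speeds of surrounding vehicles) are illustrative assumptions, not the paper's representation.

```python
import torch
import torch.nn as nn

class LaneChangeFeasibilityNet(nn.Module):
    """Tiny MLP scoring whether a lane change is feasible (1) or not (0).
    The input is a hypothetical feature vector, e.g. front/rear gaps and
    relative speeds in the ego and target lanes."""
    def __init__(self, in_dim=6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)           # logits

# Training on labeled runs would use a standard binary cross-entropy loss:
model = LaneChangeFeasibilityNet()
loss_fn = nn.BCEWithLogitsLoss()
```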
An Evaluation of Self-supervised Pre-training for Skin-Lesion Analysis. … pretext tasks, self-supervision allows pre-training models on large amounts of pseudo-labels before fine-tuning them on the target task. In this work, we assess self-supervision for diagnosing skin lesions, comparing three self-supervised pipelines to a challenging supervised baseline, on five test datasets …
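The comparison described here comes down to fine-tuning differently pre-trained backbones on the lesion-diagnosis task. A generic fine-tuning sketch is below; the checkpoint path and class count are placeholders, and the specific pipelines evaluated in the paper are not identified in this teaser.

```python
import torch
import torchvision

def build_finetune_model(checkpoint_path, num_classes=8):
    """Load a (self-supervised) pre-trained ResNet-50 state dict and replace
    its head for skin-lesion classification. `checkpoint_path` is a
    placeholder for whatever pre-training produced."""
    model = torchvision.models.resnet50(weights=None)
    state = torch.load(checkpoint_path, map_location="cpu")
    # Drop the pre-training head so shapes match before loading.
    state = {k: v for k, v in state.items() if not k.startswith("fc.")}
    model.load_state_dict(state, strict=False)
    model.fc = torch.nn.Linear(model.fc.in_features, num_classes)
    return model

# Fine-tuning then proceeds with an ordinary supervised loop, e.g.:
# optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
```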
Skin_Hair Dataset: Setting the Benchmark for Effective Hair Inpainting Methods for Improving the Image … hair, which makes interpreting them more challenging for clinicians and computer-aided diagnostic algorithms. Hence, automated artifact recognition and inpainting systems have the potential to aid the clinical workflow as well as serve as a preprocessing step in the automated classification of …
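The inpainting methods benchmarked by the dataset are not described in this teaser; as a point of reference, a classical DullRazor-style baseline (black-hat hair detection followed by OpenCV inpainting) looks roughly like this. Kernel size and threshold are illustrative values, not tuned on Skin_Hair.

```python
import cv2

def remove_hair(image_bgr, kernel_size=17, threshold=10):
    """Classical hair removal: detect dark hair strands with a black-hat
    morphological filter, then fill them with diffusion-based inpainting."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (kernel_size, kernel_size))
    blackhat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)
    _, mask = cv2.threshold(blackhat, threshold, 255, cv2.THRESH_BINARY)
    return cv2.inpaint(image_bgr, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
```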
FairDisCo: Fairer AI in Dermatology via Disentanglement Contrastive Learning. … lesions on darker skin types are usually underrepresented and have lower diagnosis accuracy, receives little attention. In this paper, we propose FairDisCo, a disentanglement deep learning framework with contrastive learning that utilizes an additional network branch to remove sensitive attributes …
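FairDisCo's exact architecture and losses are not given in this teaser; the sketch below only illustrates the general idea of an extra branch that strips a sensitive attribute (skin type) from the representation. It uses a gradient-reversal adversary as a stand-in for whatever FairDisCo actually does, and omits the contrastive term for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips gradients in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class FairLesionNet(nn.Module):
    def __init__(self, num_classes=114, num_skin_types=6):
        super().__init__()
        self.encoder = torchvision.models.resnet50(weights=None)
        self.encoder.fc = nn.Identity()
        self.disease_head = nn.Linear(2048, num_classes)
        self.skin_head = nn.Linear(2048, num_skin_types)   # extra, adversarial branch

    def forward(self, x, lambd=1.0):
        z = self.encoder(x)
        return self.disease_head(z), self.skin_head(GradReverse.apply(z, lambd))

def fairness_loss(model, images, disease_labels, skin_labels):
    disease_logits, skin_logits = model(images)
    # Minimizing the skin-type loss through reversed gradients pushes the
    # encoder to discard skin-type information from the representation.
    return F.cross_entropy(disease_logits, disease_labels) + \
           F.cross_entropy(skin_logits, skin_labels)
```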
MoQuad: Motion-focused Quadruple Construction for Video Contrastive Learning. … construction strategy to boost the learning of motion features in video contrastive learning. The proposed method, dubbed Motion-focused Quadruple Construction (MoQuad), augments the instance discrimination by meticulously disturbing the appearance and motion of both the positive and negative samples to create …
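The specific appearance and motion disturbances are not spelled out in this teaser, so the sketch below only illustrates the shape of a quadruple: an anchor clip, an appearance-disturbed positive (a clip-level color shift that leaves motion intact), a motion-disturbed hard negative built from the same clip (shuffled frames), and a clip from a different video. The disturbances MoQuad actually uses may differ.

```python
import torch

def build_quadruple(clip, other_clip):
    """clip, other_clip: (T, C, H, W) float tensors in [0, 1].
    Returns (anchor, positive, intra_negative, inter_negative)."""
    anchor = clip

    # Appearance disturbance: a clip-level brightness/color shift keeps the
    # motion pattern but changes appearance (assumed stand-in for MoQuad's op).
    shift = torch.empty(1, clip.size(1), 1, 1).uniform_(-0.2, 0.2)
    positive = (clip + shift).clamp(0, 1)

    # Motion disturbance: shuffling frames of the same clip destroys motion
    # while keeping appearance, giving a "hard" intra-video negative.
    perm = torch.randperm(clip.size(0))
    intra_negative = clip[perm]

    # Ordinary negative: a clip taken from a different video.
    inter_negative = other_clip
    return anchor, positive, intra_negative, inter_negative
```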
…expected and unknown during training. Recent advances in self-supervised representation learning have directly driven improvements in anomaly detection. In this position paper, we first explain how self-supervised representations can be easily used to achieve state-of-the-art performance in commonly reported anomaly detection benchmarks …
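One simple way self-supervised (or otherwise pretrained) representations reach strong results on standard anomaly detection benchmarks, not necessarily this paper's exact recipe, is k-nearest-neighbor distance in a frozen feature space; a sketch, using an ImageNet-pretrained ResNet as a stand-in encoder:

```python
import torch
import torchvision

# Frozen pretrained backbone as a stand-in for a self-supervised encoder.
encoder = torchvision.models.resnet50(weights=torchvision.models.ResNet50_Weights.DEFAULT)
encoder.fc = torch.nn.Identity()
encoder.eval()

@torch.no_grad()
def anomaly_scores(train_images, test_images, k=2):
    """Score each test image by its mean distance to the k nearest training
    (normal-only) features; larger score = more anomalous."""
    train_feats = torch.nn.functional.normalize(encoder(train_images), dim=1)
    test_feats = torch.nn.functional.normalize(encoder(test_images), dim=1)
    dists = torch.cdist(test_feats, train_feats)           # (N_test, N_train)
    knn = dists.topk(k, dim=1, largest=False).values        # k smallest distances
    return knn.mean(dim=1)
```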
… In this work, we explore such actions and seek to identify the points in videos where the actions transition from intentional to unintentional. We propose a multi-stage framework that exploits inherent biases such as motion speed, motion direction, and order to recognize unintentional actions. …
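The multi-stage framework itself is not described in this teaser; as an illustration of the first cue it names (motion speed), the sketch below locates the frame where inter-frame motion magnitude changes most abruptly, a crude candidate for the intentional-to-unintentional transition point. Frame differencing stands in for a proper motion estimate.

```python
import numpy as np

def candidate_transition(frames):
    """frames: (T, H, W) grayscale video as a float array.
    Returns the index where frame-to-frame motion magnitude jumps the most."""
    # Per-step motion proxy: mean absolute difference between consecutive frames.
    motion = np.abs(np.diff(frames, axis=0)).mean(axis=(1, 2))   # (T-1,)
    # The largest change in motion speed marks a candidate transition point.
    jumps = np.abs(np.diff(motion))                              # (T-2,)
    return int(np.argmax(jumps)) + 1
```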