Constrained Mean Shift Using Distant yet Related Neighbors for Representation Learning: Recent self-supervised methods like mean-shift (MSF) cluster images by pulling the embedding of a query image closer to its nearest neighbors (NNs). Since most NNs are close to the query by design, the averaging may not affect the embedding of the query much. On the other hand, far-away NNs may not be semantically related to the query…
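A minimal sketch of the mean-shift-style pull toward constrained nearest neighbors described above, assuming a memory bank of past embeddings and a boolean mask that encodes the constraint (e.g. "same pseudo-label as the query"); memory_bank, constraint_mask, and the top-k selection are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn.functional as F

def constrained_msf_loss(query, memory_bank, constraint_mask, k=5):
    # Pull the query embedding toward its top-k most similar memory entries,
    # but only among entries allowed by `constraint_mask`.
    q = F.normalize(query, dim=-1)              # (D,)
    bank = F.normalize(memory_bank, dim=-1)     # (N, D)
    sims = bank @ q                             # cosine similarity to every memory entry
    sims = sims.masked_fill(~constraint_mask, float('-inf'))
    topk = sims.topk(k).indices
    return (1.0 - bank[topk] @ q).mean()        # encourage high similarity to the chosen NNs

# toy usage
torch.manual_seed(0)
query = torch.randn(128, requires_grad=True)
memory_bank = torch.randn(4096, 128)
constraint_mask = torch.rand(4096) > 0.5        # stand-in for the pseudo-label constraint
print(float(constrained_msf_loss(query, memory_bank, constraint_mask)))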
Dual Adaptive Transformations for Weakly Supervised Point Cloud Segmentation: Weakly supervised point cloud segmentation is desirable due to the heavy burden of collecting abundant dense annotations for model training. However, it remains challenging for existing methods to accurately segment 3D point clouds, since the limited annotated data may provide insufficient guidance for label propagation to unlabeled data…
Self-Supervised Classification Network: The network learns labels and representations simultaneously in a single-stage, end-to-end manner by optimizing for same-class prediction of two augmented views of the same sample. To rule out degenerate solutions (i.e., solutions where all labels are assigned to the same class), we propose a mathematically motivated variant of the cross-entropy loss…
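The same-class prediction idea can be sketched as a cross-entropy between the class posteriors of the two views, with the target view additionally normalized over the batch so that assigning every sample to one class is penalized. This is a rough sketch under a uniform-class-prior assumption, not the paper's exact loss; row_tau, col_tau, and the toy shapes are illustrative.

import torch
import torch.nn.functional as F

def same_class_loss(logits_a, logits_b, row_tau=0.1, col_tau=0.05):
    # Per-sample class distribution for the first view.
    log_p = F.log_softmax(logits_a / row_tau, dim=1)
    # Target from the second view: normalize over the batch (columns) first,
    # which pushes toward roughly uniform class usage, then renormalize over classes.
    q = F.softmax(logits_b / col_tau, dim=0)
    q = q / q.sum(dim=1, keepdim=True)
    return -(q.detach() * log_p).sum(dim=1).mean()

torch.manual_seed(0)
za, zb = torch.randn(32, 10), torch.randn(32, 10)                 # class logits of two augmented views
loss = 0.5 * (same_class_loss(za, zb) + same_class_loss(zb, za))  # symmetrized
print(float(loss))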
Data Invariants to Understand Unsupervised Out-of-Distribution Detection: Unsupervised out-of-distribution (U-OOD) detection has recently gained attention due to its broader applicability over its supervised counterpart. Despite this increased attention, U-OOD methods suffer from important shortcomings. By performing a large-scale evaluation on different benchmarks and image modalities, we show in this work that most popular state-of-the-art methods are unable to consistently…
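The MahaAD detector referenced in the continuation of this abstract scores a sample by its Mahalanobis distance to the training feature distribution. Below is a single-feature-level sketch of that scoring (MahaAD itself aggregates distances over several feature levels of a pretrained network); fit_gaussian, the ridge term, and the toy data are assumptions for illustration.

import torch

def fit_gaussian(train_feats):
    # Estimate mean and (regularized) inverse covariance of training features.
    mu = train_feats.mean(dim=0)
    x = train_feats - mu
    cov = x.T @ x / (len(train_feats) - 1)
    cov += 1e-3 * torch.eye(cov.shape[0])       # small ridge for numerical stability
    return mu, torch.linalg.inv(cov)

def maha_score(feats, mu, cov_inv):
    # Higher score = farther from the training distribution = more likely OOD.
    d = feats - mu
    return ((d @ cov_inv) * d).sum(dim=1)

torch.manual_seed(0)
train = torch.randn(1000, 64)                   # features of in-distribution training images
mu, cov_inv = fit_gaussian(train)
print(maha_score(torch.randn(5, 64), mu, cov_inv))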
Domain Invariant Masked Autoencoders for Self-supervised Learning from Multi-domains: While recent self-supervised learning methods achieve good performance when the evaluation set comes from the same domain as the training set, they suffer an undesirable performance drop when tested on a different domain. The task of self-supervised learning from multiple domains is therefore proposed…
Completely Self-supervised Crowd Counting via Distribution Matching: Although self-supervised methods can learn good representations, they still require some labeled data to map these features to the end task of density estimation. We mitigate this issue with the proposed paradigm of complete self-supervision, which does not need even a single labeled image. The only input required to train, apart from a large set of unlabeled crowd images, is the approximate upper bound of the crowd count…
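A toy illustration of the distribution-matching idea: the empirical distribution of predicted counts is compared against samples from a heavy-tailed prior via sorted order statistics (1-D optimal transport). The paper's actual matching of predictions to its prior is more involved; prior_samples and the crude power-law draw below are stand-ins.

import torch

def distribution_matching_loss(pred_counts, prior_samples):
    # Sort both the predictions and the prior samples and compare order statistics.
    a, _ = torch.sort(pred_counts)
    b, _ = torch.sort(prior_samples)
    return (a - b).abs().mean()

torch.manual_seed(0)
pred_counts = torch.rand(256) * 500                      # stand-in for counts predicted by the network
prior_samples = (torch.rand(256) + 1e-3) ** -0.5 * 10    # crude heavy-tailed prior draw
print(float(distribution_matching_loss(pred_counts, prior_samples)))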
Learn2Augment: Learning to Composite Videos for Data Augmentation in Action Recognition: Standard augmentation strategies sample the space of possible augmented data points either at random, without knowing which augmented points will be better, or through heuristics. We propose to learn what makes a “good” video for action recognition and select only high-quality samples for augmentation. In particular, we choose video compositing of a foreground and a background video as the augmentation process…
Improving Self-supervised Lightweight Model Learning via Hard-Aware Metric Distillation: Self-supervised learning (SSL) works less well for lightweight models, which are important for many mobile devices. To address this problem, we propose a method that improves the lightweight network (as student) by distilling the metric knowledge in a larger SSL model (as teacher). We exploit the relation between teacher and student to mine the positive samples…
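A generic relational-distillation sketch of transferring metric knowledge from a frozen teacher to a lightweight student by matching their pairwise cosine-similarity matrices over a batch; the hard-aware positive/negative mining emphasized by the abstract is omitted, and all names below are illustrative.

import torch
import torch.nn.functional as F

def relation_distill_loss(student_emb, teacher_emb):
    # Match the pairwise similarity structure of the student to that of the teacher.
    s = F.normalize(student_emb, dim=1)
    t = F.normalize(teacher_emb, dim=1)
    return F.mse_loss(s @ s.T, (t @ t.T).detach())

torch.manual_seed(0)
student_emb = torch.randn(64, 128, requires_grad=True)   # lightweight student batch embeddings
teacher_emb = torch.randn(64, 256)                        # larger frozen SSL teacher embeddings
print(float(relation_distill_loss(student_emb, teacher_emb)))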
Series ISSN 0302-9743, Series E-ISSN 1611-3349. ISBN 978-3-031-19820-5, eISBN 978-3-031-19821-2.
…to work well under the linear evaluation protocol, while it may hurt transfer performance on long-tailed classification tasks. Moreover, negative samples do not make models more sensitive to the choice of data augmentations, nor does the asymmetric network structure. We believe our findings provide useful information for future work.
Constrained Mean Shift Using Distant yet Related Neighbors for Representation Learning: …augmentation of an image from the previous epoch, and (2) outperforms PAWS in the semi-supervised setting with fewer training resources when the constraint ensures that the NNs have the same pseudo-label as the query. Our code is available here: ..
Data Invariants to Understand Unsupervised Out-of-Distribution Detection: …on the invariants of the training dataset. We show how this characterization is unknowingly embodied in the top-scoring MahaAD method, thereby explaining its quality. Furthermore, our approach can be used to interpret predictions of U-OOD detectors and provides insights into good practices for evaluating future U-OOD methods.
Semi-supervised Object Detection via VC Learning: …the virtual category as the lower bound of the inter-class distance. Moreover, we also modify the localisation loss to allow high-quality boundaries for location regression. Extensive experiments demonstrate that the proposed VC learning significantly surpasses the state of the art, especially with small amounts of available labels.
Completely Self-supervised Crowd Counting via Distribution Matching: …trained with self-supervision, and then the distribution of predictions is matched to the prior. Experiments show that this results in effective learning of crowd features and delivers significant counting performance.
Coarse-To-Fine Incremental Few-Shot Learning: …classifier weights from fine labels, once learning an embedding space contrastively from coarse labels. Besides, as CIL aims at a stability-plasticity balance, new overall performance metrics are proposed. In that sense, on CIFAR-100, BREEDS, and tieredImageNet, Knowe outperforms all recent relevant CIL or FSCIL methods.
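The "learning an embedding space contrastively from coarse labels" step could look like a supervised contrastive loss driven by coarse labels only. A minimal sketch follows (the subsequent normalize-and-freeze of classifier weights learned from fine labels is not shown); tau and the toy shapes are assumptions.

import torch
import torch.nn.functional as F

def coarse_supcon_loss(features, coarse_labels, tau=0.1):
    # Pull together samples that share a *coarse* label, push apart the rest.
    z = F.normalize(features, dim=1)
    sim = z @ z.T / tau
    n = len(z)
    eye = torch.eye(n, dtype=torch.bool)
    pos = (coarse_labels[:, None] == coarse_labels[None, :]) & ~eye
    # Log-probability of each pair, excluding self-similarity from the denominator.
    log_prob = sim - torch.logsumexp(sim.masked_fill(eye, float('-inf')), dim=1, keepdim=True)
    return -log_prob[pos].mean()

torch.manual_seed(0)
feats = torch.randn(32, 128, requires_grad=True)   # embeddings of a batch
coarse = torch.randint(0, 4, (32,))                # coarse labels only
print(float(coarse_supcon_loss(feats, coarse)))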
Conference proceedings 2022. Keywords: object recognition; image classification; image processing; object detection; semantic segmentation; human pose estimation; 3D reconstruction; stereo vision; computational photography; neural networks; image coding; image reconstruction; motion estimation.
Object Discovery via Contrastive Learning for Weakly Supervised Object Detection: …called WSCL. WSCL aims to construct a credible similarity threshold for object discovery by leveraging consistent features for embedding vectors in the same class. As a result, we achieve new state-of-the-art results on MS-COCO 2014 and 2017 as well as PASCAL VOC 2012, and competitive results on PASCAL VOC 2007. The code is available at ..
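A toy sketch of similarity-thresholded object discovery: region proposals whose embeddings are close enough to a seed proposal are kept as additional instances of the same class. The fixed threshold below stands in for the credible, class-consistent threshold WSCL constructs; all names are illustrative.

import torch
import torch.nn.functional as F

def discover_instances(proposal_feats, seed_idx, threshold=0.7):
    # Cosine similarity of every proposal embedding to the seed (e.g. argmax) proposal.
    z = F.normalize(proposal_feats, dim=1)
    sims = z @ z[seed_idx]
    mask = sims >= threshold
    mask[seed_idx] = True                       # the seed itself is always kept
    return mask.nonzero(as_tuple=True)[0]       # indices of discovered instances

torch.manual_seed(0)
proposal_feats = torch.randn(100, 128)          # embeddings of region proposals in one image
print(discover_instances(proposal_feats, seed_idx=3))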
DOI: https://doi.org/10.1007/978-3-031-19821-2. Keywords: Computer Science; Informatics; Conference Proceedings; Research; Applications.
ISBN 978-3-031-19820-5. © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG.
…the authenticity of the following assumption: different frameworks bring about representations of different characteristics even with the same pretext task. We establish the first benchmark for fair comparisons between MoCo v2 and BYOL, and observe: (i) sophisticated model configurations enable better…
…to better learn representations for visual correspondence. However, we find that these methods often fail to leverage semantic information and over-rely on the matching of low-level features. In contrast, human vision is capable of distinguishing between distinct objects as a pretext to tracking.
…classes over time without forgetting pre-trained classes. However, a given model will be challenged by test images with finer-grained classes, e.g., a basenji is at most recognized as a dog. Such images form a new training set (i.e., support set) so that the incremental model is hoped to recognize a basenji…
…Ideally, the source and target distributions should be aligned to each other equally to achieve unbiased knowledge transfer. However, due to the significant imbalance between the amount of annotated data in the source and target domains, usually only the target distribution is aligned to the source domain…
…complex scenes like COCO. This gap exists largely because commonly used random crop augmentations obtain semantically inconsistent content in crowded scene images of diverse objects. In this work, we propose a framework which tackles this problem via joint learning of representations and segmentation…
…State-of-the-art models benefit from self-supervised instance-level supervision, but since weak supervision does not include count or location information, the most common “argmax” labeling method often ignores many instances of objects. To alleviate this issue, we propose a novel multiple instance labeling method…
Series: Lecture Notes in Computer Science (cover image: http://image.papertrans.cn/c/image/234247.jpg)
…noisy inputs coming from each individual view. Additionally, we propose a novel regularization strategy to address the feature collapse problem, which is common in cluster-based self-supervised learning methods. Our extensive evaluation shows the effectiveness of our learned representations on downstream tasks…
…SSL methods that balance the contributions from both data types. In particular, we introduce a warmup training stage to achieve a better balance in task specificity by ignoring class information in the pseudo labels while preserving localization training signals. As a result, our warmup…
…at enforcing the local and structural smoothness constraints on 3D point clouds. We evaluate our proposed DAT model with two popular backbones on the large-scale S3DIS and ScanNet-V2 datasets. Extensive experiments demonstrate that our model can effectively leverage the unlabeled 3D points and achieve…
…demonstrating that they boost performance synergistically. Our method surpasses previous state-of-the-art self-supervised methods using convolutional networks on a variety of visual correspondence tasks, including video object segmentation, human pose tracking, and human part tracking.