Abstract
In an era where the volume of data drives the effectiveness of self-supervised learning, the specificity and clarity of data semantics play a crucial role in model training. To address this, we introduce HYPerbolic Entailment filtering (HYPE), a novel methodology designed to extract modality-wise meaningful and well-aligned data from extensive, noisy image-text pair datasets. Our approach leverages hyperbolic embeddings and the concept of entailment cones to evaluate and filter out samples with meaningless or underspecified semantics, enhancing the specificity of each data sample. HYPE not only demonstrates a significant improvement in filtering efficiency but also sets a new state of the art on the DataComp benchmark when combined with existing filtering techniques. This result showcases the potential of HYPE to refine the data selection process, contributing to the development of more accurate and efficient self-supervised learning models. Additionally, the image specificity $\epsilon_{i}$ can be applied independently to induce an image-only dataset from an image-text or image-only data pool for training image-only self-supervised models; the resulting dataset yields superior performance compared to one induced by the CLIP score.
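The entailment cones mentioned above originate in prior work on hyperbolic embeddings (Ganea et al., 2018), where each point in the Poincaré ball defines a cone and one point "entails" another if the second lies inside the first's cone. As an illustrative sketch only (not HYPE's actual implementation; the aperture constant `K`, the function names, and the 2-D toy vectors are assumptions), cone membership can be checked by comparing an exterior angle against the cone's half-aperture:

```python
import numpy as np

K = 0.1  # cone-aperture constant (assumed value); x must lie away from the origin


def half_aperture(x):
    """Half-aperture psi(x) of the entailment cone rooted at x in the Poincare ball."""
    nx = np.linalg.norm(x)
    return np.arcsin(np.clip(K * (1 - nx**2) / nx, -1.0, 1.0))


def exterior_angle(x, y):
    """Angle Xi(x, y) between the cone axis at x and the geodesic from x toward y."""
    nx, ny = np.linalg.norm(x), np.linalg.norm(y)
    dot = x @ y
    num = dot * (1 + nx**2) - nx**2 * (1 + ny**2)
    den = nx * np.linalg.norm(y - x) * np.sqrt(1 + nx**2 * ny**2 - 2 * dot)
    return np.arccos(np.clip(num / den, -1.0, 1.0))


def entails(x, y):
    """True if y lies inside the entailment cone of x (x is more general than y)."""
    return exterior_angle(x, y) <= half_aperture(x)
```

A point further out along the same ray as `x` (more specific, nearer the ball's boundary) falls inside the cone, while a point in an unrelated direction does not; a filtering pipeline could use such geometric tests to score how well one modality's embedding specifies the other's.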
URL
https://arxiv.org/abs/2404.17507