A new contrastive learning framework for reducing the effect of hard negatives

Wentao Cui, Liang Bai, Xian Yang, Jiye Liang

Research output: Contribution to journal › Article › peer-review



Contrastive learning, as a self-supervised method, has achieved great success. Although it is an instance-level discriminative method, the model can eventually learn latent semantic class information. Its core idea is to pull different views of the same instance closer while pushing apart different instances. However, treating each instance as its own class hinders the model from learning true latent semantic classes, a problem caused by instances (called hard negatives) that are similar to the anchor but do not belong to the same semantic class. In this paper, we propose a new contrastive learning framework based on the Student-t distribution with a neighbor consistency constraint (TNCC) to reduce the effect of hard negatives. In this framework, we propose a loss based on the Student-t distribution as the instance-level discriminative loss to keep hard negatives far away. Furthermore, we add a new neighbor consistency constraint to maintain consistency within semantic classes. Finally, we compare TNCC with recent state-of-the-art contrastive learning methods on five benchmark datasets to verify the effectiveness of the proposed framework.
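To make the core idea concrete, the sketch below shows how a Student-t kernel can replace the usual exponential similarity in an InfoNCE-style contrastive loss. This is a minimal illustration under assumed details, not the paper's exact formulation: the function names (`t_similarity`, `tncc_style_loss`), the degrees-of-freedom parameter `nu`, and the specific loss form are all hypothetical. The heavy tails of the Student-t kernel assign moderately similar pairs (candidate hard negatives) a lower relative weight than a Gaussian-like kernel would, which is the intuition behind keeping hard negatives far away.

```python
import numpy as np

def t_similarity(z_a, z_b, nu=1.0):
    """Student-t kernel similarity between embeddings (illustrative form).

    Returns (1 + ||z_a - z_b||^2 / nu)^(-(nu + 1) / 2); equals 1 for
    identical embeddings and decays with a heavy tail as distance grows.
    Broadcasting lets z_b be a batch of shape (n, d) against z_a of shape (d,).
    """
    d2 = np.sum((z_a - z_b) ** 2, axis=-1)
    return (1.0 + d2 / nu) ** (-(nu + 1.0) / 2.0)

def tncc_style_loss(anchor, positive, negatives, nu=1.0):
    """InfoNCE-style loss with Student-t similarities (hypothetical sketch).

    anchor, positive: embeddings of shape (d,); negatives: shape (n, d).
    Minimizing this pulls the positive view toward the anchor while the
    heavy-tailed kernel softens the gradient contribution of negatives
    that are only moderately close (potential hard negatives).
    """
    pos = t_similarity(anchor, positive, nu)            # scalar
    negs = t_similarity(anchor, negatives, nu)          # shape (n,)
    return -np.log(pos / (pos + negs.sum()))

# Tiny usage example: a closer positive yields a lower loss.
rng = np.random.default_rng(0)
anchor = np.array([1.0, 0.0])
negatives = rng.normal(size=(5, 2))
loss_close = tncc_style_loss(anchor, np.array([0.9, 0.1]), negatives)
loss_far = tncc_style_loss(anchor, np.array([-1.0, 0.0]), negatives)
```

The neighbor consistency constraint described in the abstract would add a second term (not sketched here) encouraging an anchor and its nearest neighbors in embedding space to stay consistent, which complements the instance-level term above.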

Original language: English
Article number: 110121
Journal: Knowledge-Based Systems
Early online date: 17 Nov 2022
Publication status: Published - 25 Jan 2023


  • Contrastive learning
  • Hard negatives
  • Neighbor consistency constraint
  • Self-supervised learning
  • Student-t distribution


