High-Frequency Channel Attention and Contrastive Learning for Image Super-Resolution

Tianyu Yan, Hujun Yin

Research output: Contribution to journal › Article › peer-review

Abstract

Over the last decade, convolutional neural networks (CNNs) have enabled remarkable advances in single image super-resolution (SISR). In general, recovering high-frequency features is crucial for high-performance models. High-frequency features suffer more severe degradation than low-frequency features during downscaling, making edges and textures hard to recover. In this paper, we guide the network to focus more on high-frequency features during restoration from both channel and spatial perspectives. Specifically, we propose a High-Frequency Channel Attention (HFCA) module and a Frequency Contrastive Learning (FCL) loss to aid this process. From the channel perspective, the HFCA module rescales channels by predicting statistical similarity metrics between the feature maps and their high-frequency components. From the spatial perspective, the FCL loss introduces contrastive learning to train a spatial mask that adaptively assigns large scaling factors to high-frequency areas. We incorporate the proposed HFCA module and FCL loss into an EDSR baseline model to construct a lightweight High-Frequency Channel Contrastive Network (HFCCN). Extensive experimental results show that it yields markedly improved or competitive performance compared to state-of-the-art networks with similar numbers of parameters.
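
The abstract describes the HFCA mechanism only at a high level. The following is a minimal, hypothetical PyTorch sketch of such a high-frequency channel attention block; it is not the authors' implementation, and the choice of low-pass filter (average pooling), the cosine-similarity statistic and the sigmoid gating are assumptions made for illustration.

```python
# Hypothetical sketch of a high-frequency channel attention block.
# NOT the published HFCA module: the low-pass filter, similarity statistic
# and gating function are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class HighFreqChannelAttention(nn.Module):
    """Rescales channels using a per-channel similarity statistic between each
    feature map and its high-frequency component (here: the residual of an
    average-pooling low-pass filter)."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        # Small MLP mapping the per-channel statistic to a scaling factor.
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # Low-pass filter via average pooling, upsampled back (assumption).
        low = F.interpolate(F.avg_pool2d(x, 2), size=(h, w), mode="nearest")
        high = x - low  # high-frequency component
        # Channel-wise cosine similarity between x and its high-frequency part.
        sim = F.cosine_similarity(x.view(b, c, -1), high.view(b, c, -1), dim=-1)
        scale = self.mlp(sim).view(b, c, 1, 1)
        return x * scale


if __name__ == "__main__":
    feats = torch.randn(2, 64, 48, 48)
    attn = HighFreqChannelAttention(64)
    print(attn(feats).shape)  # torch.Size([2, 64, 48, 48])
```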
Original language: English
Journal: Visual Computer
Early online date: 29 Feb 2024
DOIs
Publication status: E-pub ahead of print - 29 Feb 2024

Keywords

  • Image super-resolution
  • Attention mechanism
  • Contrastive learning
  • Deep learning
