Attention Driven Multi-modal Similarity Learning

Xinjian Gao, Tingting Mu, John Y. Goulermas, Meng Wang

Research output: Contribution to journal › Article › peer-review


Abstract

Learning a function that measures similarity or relevance between objects is an important machine learning task, referred to as similarity learning. Conventional methods are often insufficient for processing complex patterns, while more sophisticated methods produce results driven by parameters and mathematical operations that are hard to interpret. To improve both model robustness and interpretability, we propose a novel attention driven multi-modal algorithm, which learns a distributed similarity score over different relation modalities and develops an interaction-oriented dynamic attention mechanism to selectively focus on salient patches of the objects of interest. Neural networks are used to generate a set of high-level representation vectors for both the entire object and its segmented patches. Multi-view local neighboring structures between objects are encoded in the high-level object representation through an unsupervised pre-training procedure. By initializing the relation embeddings with object cluster centers, each relation modality can be reasonably interpreted as a semantic topic. A layer-wise training scheme based on a mixture of unsupervised and supervised training is proposed to improve generalization. The effectiveness of the proposed method, and its superior performance compared against state-of-the-art algorithms, are demonstrated through evaluations on different image retrieval tasks.
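
The abstract names two computational ideas: an interaction-oriented attention over patch representations, and a similarity score distributed over relation modalities whose embeddings are seeded with cluster centers. Purely as an illustration, the sketch below shows one plausible way these could fit together. All names, dimensions, and the scoring form (a diagonal bilinear score per relation embedding) are assumptions made for this example; the paper's actual networks and attention formulation are more elaborate.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def similarity(patches_a, patches_b, relations, w):
    """Hypothetical attention-driven multi-modal similarity score.

    patches_a, patches_b: (n_patches, d) high-level patch representations.
    relations: (m, d) relation embeddings, e.g. initialized from object
               cluster centers so each row reads as a semantic topic.
    w: (m,) learnable mixing weights over the m relation modalities.
    """
    # Interaction-oriented attention: each object's patch weights depend
    # on how strongly its patches respond to the *other* object's patches.
    affinity = patches_a @ patches_b.T            # (na, nb) patch affinities
    att_a = softmax(affinity.max(axis=1))         # (na,) attention over a's patches
    att_b = softmax(affinity.max(axis=0))         # (nb,) attention over b's patches
    rep_a = att_a @ patches_a                     # (d,) attended representation of a
    rep_b = att_b @ patches_b                     # (d,) attended representation of b

    # Distributed score: one bilinear similarity per relation modality,
    # combined by the modality weights into a single scalar.
    per_modality = (rep_a * relations) @ rep_b    # (m,) score under each relation
    return softmax(w) @ per_modality

# Toy usage with random data; `relations` would come from, e.g., k-means centers.
rng = np.random.default_rng(0)
pa, pb = rng.normal(size=(5, 8)), rng.normal(size=(7, 8))
rel = rng.normal(size=(3, 8))
print(similarity(pa, pb, rel, np.zeros(3)))
```

The per-modality decomposition is what makes the score interpretable in this style of model: each entry of `per_modality` says how similar the two objects are under one relation topic, and the learned weights reveal which topics drive the final decision.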
Original language: English
Journal: Information Sciences
Volume: 432
Early online date: 7 Aug 2017
DOIs
Publication status: Published - 2017

Keywords

  • Multi-modal similarity
  • Attention mechanism
  • Representation learning
  • Multi-view
  • Neural network
