An Unsupervised Deep Learning Model for Aspect Retrieving Using Transformer Encoder

Atanu Dey, Mamata Jenamani, Arijit De

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

We introduce RATE (Retrieving of Aspects using Transformer Encoder), a deep-learning-based aspect extraction model. In unsupervised aspect-based sentiment analysis, retrieving aspects is both critical and challenging. Most prior efforts rely on some form of topic modeling for aspect extraction. Despite their efficacy, these techniques seldom yield consistently coherent aspects. Some approaches address these issues with attention-based deep neural models, but their performance is limited by a single-headed attention mechanism. RATE is therefore designed to improve the extraction of coherent aspects from text by combining a multi-headed attention mechanism with a transformer encoder, negative sampling, and word embeddings. Unlike topic models and other techniques that typically assume independently generated words, this model promotes proximity in the embedding space for words that occur in similar contexts. To further enhance aspect coherence, the multi-headed attention in the RATE encoder downplays unimportant words during training. On the ACOS-Laptop dataset, RATE outperforms the best-performing unsupervised baselines in precision (by 8.34%), recall (0.94%), and F1-score (5.44%). On the ACOS-Restaurant dataset, RATE improves precision, recall, and F1-score by 1.4%, 8.87%, and 4.31%, respectively, yielding more significant and coherent aspects.
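The abstract's core mechanism is multi-headed self-attention over word embeddings, used to weight words by importance. The NumPy sketch below illustrates that mechanism in isolation; it is not the authors' RATE implementation, and the random projection matrices stand in for weights that would be learned during training.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(E, num_heads=2, seed=0):
    """E: (n_words, d_model) word embeddings for one sentence.
    Returns (output, attn): output is (n_words, d_model),
    attn is (num_heads, n_words, n_words)."""
    n, d = E.shape
    assert d % num_heads == 0
    dk = d // num_heads
    rng = np.random.default_rng(seed)
    # Random projections stand in for learned Q/K/V parameter matrices.
    Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    Q, K, V = E @ Wq, E @ Wk, E @ Wv
    heads, attns = [], []
    for h in range(num_heads):
        s = slice(h * dk, (h + 1) * dk)
        # Scaled dot-product attention for this head.
        scores = Q[:, s] @ K[:, s].T / np.sqrt(dk)
        A = softmax(scores, axis=-1)  # each row sums to 1
        heads.append(A @ V[:, s])
        attns.append(A)
    return np.concatenate(heads, axis=1), np.stack(attns)

# Toy "sentence" of 4 words with 8-dimensional embeddings.
E = np.random.default_rng(1).standard_normal((4, 8))
out, attn = multi_head_self_attention(E, num_heads=2)
# Averaging the attention each word receives across heads and query
# positions gives a per-word importance score: low-scoring words are
# the "unimportant" ones the abstract says attention downplays.
importance = attn.mean(axis=(0, 1))
```

Averaging across heads is only one way to aggregate; the point of multiple heads is that each can attend to a different notion of relevance, which single-headed models cannot.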

Original language: English
Title of host publication: Lecture Notes in Networks and Systems
Subtitle of host publication: Intelligent Computing. SAI 2024.
Editors: Kohei Arai
Pages: 303-317
Number of pages: 15
ISBN (Electronic): 978-3-031-62277-9
DOIs
Publication status: Published - 13 Jun 2024

Publication series

Name: Lecture Notes in Networks and Systems
Volume: 1017 LNNS
ISSN (Print): 2367-3370
ISSN (Electronic): 2367-3389

Keywords

  • Aspect extraction
  • Attention mechanism
  • Encoder
  • Transformer
  • Unsupervised deep neural network
  • Word embedding
