Learning Disentangled Representations of Negation and Uncertainty

Jake Vasilakes, Chrysoula Zerva, Makoto Miwa, Sophia Ananiadou

Research output: Contribution to conference › Paper › peer-review

Abstract

Negation and uncertainty modeling are long-standing tasks in natural language processing. Linguistic theory postulates that expressions of negation and uncertainty are semantically independent of each other and of the content they modify. However, previous work on representation learning does not explicitly model this independence. We therefore attempt to disentangle the representations of negation, uncertainty, and content using a Variational Autoencoder. We find that simply supervising the latent representations results in good disentanglement, but auxiliary objectives based on adversarial learning and mutual information minimization can provide additional disentanglement gains.
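To make the supervised-latent setup concrete, below is a minimal sketch (not the authors' implementation) of a VAE whose latent vector is partitioned into negation, uncertainty, and content subspaces, with a binary classifier supervising each labeled subspace. The one-hot bag-of-words input, layer sizes, subspace dimensions, and all names are illustrative assumptions; the paper's adversarial and mutual-information objectives are omitted here and would be added as extra loss terms.

```python
# Minimal sketch of a VAE with supervised latent subspaces for negation,
# uncertainty, and content. All sizes and names are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DisentangledVAE(nn.Module):
    def __init__(self, vocab_size, hidden=256, z_neg=8, z_unc=8, z_content=48):
        super().__init__()
        self.dims = (z_neg, z_unc, z_content)
        z_total = sum(self.dims)
        self.encoder = nn.Sequential(nn.Linear(vocab_size, hidden), nn.ReLU())
        self.to_mu = nn.Linear(hidden, z_total)
        self.to_logvar = nn.Linear(hidden, z_total)
        self.decoder = nn.Sequential(nn.Linear(z_total, hidden), nn.ReLU(),
                                     nn.Linear(hidden, vocab_size))
        # One binary classifier per labeled subspace (negation, uncertainty).
        self.neg_clf = nn.Linear(z_neg, 1)
        self.unc_clf = nn.Linear(z_unc, 1)

    def forward(self, x, neg_label, unc_label, beta=1.0):
        # x: (batch, vocab_size) one-hot toy input; labels: (batch,) floats in {0, 1}.
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        z_neg, z_unc, _z_content = torch.split(z, self.dims, dim=-1)
        recon = F.cross_entropy(self.decoder(z), x.argmax(-1))  # toy reconstruction
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        # Supervision pushes negation/uncertainty information into their subspaces.
        sup = (F.binary_cross_entropy_with_logits(self.neg_clf(z_neg).squeeze(-1), neg_label)
               + F.binary_cross_entropy_with_logits(self.unc_clf(z_unc).squeeze(-1), unc_label))
        return recon + beta * kl + sup
```

Training minimizes the returned loss directly; in the paper's fuller setup, adversarial classifiers and mutual-information penalties between subspaces would be added to further discourage information leakage across the partitions.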
Original language: English
Pages: 8380–8397
Publication status: Published - 2022

Keywords

  • Variational Autoencoders
  • Negation
  • Uncertainty
  • Natural language generation
