WLASL-LEX: a Dataset for Recognising Phonological Properties in American Sign Language

Research output: Chapter in Book/Report/Conference proceeding › Chapter › peer-review


Signed Language Processing (SLP) concerns the automated processing of signed languages, the main means of communication of Deaf and hearing-impaired individuals. SLP features many different tasks, ranging from sign recognition to translation and production of signed utterances, but has been overlooked by the NLP community thus far.
In this paper, we bring to attention the task of modelling the phonology of sign languages. We leverage existing resources to construct a large-scale dataset of American Sign Language signs annotated with six different phonological properties. We then conduct an extensive empirical study to investigate whether data-driven end-to-end and feature-based approaches can be optimised to automatically recognise these properties. We find that, despite the inherent challenges of the task, graph-based neural networks that operate over skeleton features extracted from raw videos are able to succeed at the task to a varying degree. Most importantly, we show that this performance persists even on signs unobserved during training.
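The graph-based approach described above can be sketched, very loosely, as a graph convolution over skeleton keypoints followed by a classifier for one phonological property. Everything in the sketch below (the joint count, the chain topology, the layer sizes, the function names) is an illustrative assumption, not the paper's actual architecture:

```python
import numpy as np

# Hypothetical sketch: predict one phonological property (e.g. a handshape
# class) from 2D skeleton keypoints with a single graph-convolution layer.
# Shapes and topology are toy values for illustration only.

N_JOINTS = 5    # toy skeleton with 5 joints
N_CLASSES = 6   # illustrative number of classes for one property

# Toy chain-shaped skeleton with self-loops: joint i connected to joint i+1.
A = np.eye(N_JOINTS)
for i in range(N_JOINTS - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0

# Symmetric normalisation, as in a standard graph convolution:
# A_hat = D^{-1/2} A D^{-1/2}
d = A.sum(axis=1)
A_hat = A / np.sqrt(np.outer(d, d))

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 16))          # (x, y) joint coords -> hidden
W2 = rng.normal(size=(16, N_CLASSES))  # pooled hidden -> class logits

def classify(keypoints):
    """keypoints: (N_JOINTS, 2) array of 2D joint positions for one frame."""
    h = np.maximum(A_hat @ keypoints @ W1, 0.0)  # graph conv + ReLU
    pooled = h.mean(axis=0)                      # read-out over joints
    logits = pooled @ W2
    return int(np.argmax(logits))

frame = rng.normal(size=(N_JOINTS, 2))  # stand-in for extracted keypoints
pred = classify(frame)
```

In practice the skeleton features would come from a pose estimator run over the raw video, and the model would be trained with a separate classification head per phonological property; the weights here are random, so the prediction is meaningless beyond demonstrating the data flow.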
Original language: English
Title of host publication: ACL 2022 - 60th Annual Meeting of the Association for Computational Linguistics
Publication status: Accepted/In press - 24 Feb 2022


