Multi-modality sensor fusion for gait classification using deep learning

Syed Usama YUNAS, Abdullah Alharthi, Krikor Ozanyan

Research output: Contribution to conference › Paper › peer-review


Abstract

Human gait has been acquired and studied through modalities such as video cameras, inertial sensors and floor sensors. Owing to environmental constraints, such as illumination, noise, drift over extended periods and restricted environments, the f-score of gait classification depends strongly on the usage scenario. This work addresses the issue by proposing sensor fusion of data obtained from 1) ambulatory inertial sensors (AIS) and 2) plastic optical fiber-based floor sensors (FS). Four gait activities are executed by 11 subjects on the FS whilst wearing the AIS. The proposed sensor fusion method achieves classification f-scores of 88% using an artificial neural network (ANN) and 91% using a convolutional neural network (CNN), by learning the best data representations from both modalities.
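The abstract gives no implementation details, so the following is only a minimal sketch of the two-branch fusion idea it describes: one encoder per modality, with features concatenated before classification. PyTorch is an assumption (the framework is not stated), and the channel counts, window length and layer sizes are illustrative placeholders, not the authors' configuration.

```python
# Sketch (not the authors' code) of multi-modality sensor fusion:
# one CNN branch per modality, fused features -> 4 gait activities.
# All shapes and sizes below are illustrative assumptions.
import torch
import torch.nn as nn

class FusionCNN(nn.Module):
    def __init__(self, ais_channels=6, fs_channels=16, n_classes=4):
        super().__init__()
        # Branch for ambulatory inertial sensor (AIS) time series
        self.ais_branch = nn.Sequential(
            nn.Conv1d(ais_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        # Branch for plastic-optical-fiber floor sensor (FS) signals
        self.fs_branch = nn.Sequential(
            nn.Conv1d(fs_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        # Classifier over the concatenated (fused) representation
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, ais, fs):
        a = self.ais_branch(ais).flatten(1)  # (batch, 32)
        f = self.fs_branch(fs).flatten(1)    # (batch, 32)
        return self.classifier(torch.cat([a, f], dim=1))

# Example: batch of 8 windows, 100 samples each (placeholder lengths)
model = FusionCNN()
logits = model(torch.randn(8, 6, 100), torch.randn(8, 16, 100))
print(logits.shape)  # torch.Size([8, 4])
```

Encoding each modality separately and concatenating the pooled features before a single classification layer is one common realisation of learning a joint representation from two sensor streams; the paper may well use a different fusion point or network depth.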
Original language: English
Pages: 1-6
Number of pages: 6
DOIs
Publication status: Published - 11 Mar 2020
Event: IEEE Sensors Applications Symposium 2020 - Kuala Lumpur, Malaysia
Duration: 9 Mar 2020 - 11 Mar 2020
https://2020.sensorapps.org/

Conference

Conference: IEEE Sensors Applications Symposium 2020
Abbreviated title: SAS
Country/Territory: Malaysia
City: Kuala Lumpur
Period: 9/03/20 - 11/03/20
Internet address: https://2020.sensorapps.org/

Keywords

  • ambulatory inertial sensors (AIS)
  • floor sensors (FS)
  • deep learning (DL)
  • multi-modality sensor fusion
  • artificial neural network (ANN)
  • convolutional neural network (CNN)
