Gait Activity Classification From Feature-Level Sensor Fusion of Multi-Modality Systems

Syed Usama Yunas, Krikor Ozanyan

Research output: Contribution to journal › Article › peer-review

Abstract

Gait activity classification from single-modality data, e.g. acquired by separate vision, pressure, sound and inertial measurements, can be improved by complementary multi-modality fusion to capture a larger set of distinctive gait activity features. We demonstrate feature-level sensor fusion of spatio-temporal data obtained from a set of 116 collaborative floor sensors, providing spatio-temporal sampling of the ground reaction force, and ambulatory inertial sensors at 3 positions on the human body. Principal Component Analysis and Canonical Correlation Analysis are used for automatic feature extraction. Fusion at the feature level redresses the balance between the otherwise disproportionate numbers of inputs from the two modalities, while reducing the overall number of inputs for classification without substantially degrading the information content. Improved classification is achieved using K-Nearest Neighbor and kernel Support Vector Machine classifiers, yielding F-scores of 0.95 and 0.94 respectively.
Original language: English
Pages (from-to): 4801-4810
Number of pages: 10
Journal: IEEE Sensors Journal
Volume: 21
Issue number: 4
Publication status: Published - 5 Oct 2020
