The distinctive features of human gait are distributed across modalities based on vision, inertial measurement, pressure, and sound. Gait features from a single modality have different scales and intensities, and single-modality systems suffer from misclassification because they lack the complementary features that carry the semantic information of gait activity. We aim to map the complete set of gait features, which no single modality can adequately capture. In this research work, a multi-modal sensor fusion approach is adopted that extracts and fuses information from two sources to provide a maximal description of an individual's gait. Feature-level sensor fusion is proposed for spatio-temporal data obtained from three Ambulatory Inertial Sensors (AIS) placed at the pelvis and both heels of the user, and from a novel set of 116 collaborative Floor Sensors (FS). The complementary nature of, and relationships among, the associated spatio-temporal feature sets are explored using Principal Component Analysis (PCA) and Canonical Correlation Analysis (CCA). The merit of the proposed approach is tested using different machine learning (ML) algorithms: with K-Nearest Neighbour (K-NN) and Kernel Support Vector Machine (K-SVM), the multi-modal sensor fusion approach achieves improved F-scores of 95% and 94% respectively, exceeding the individual single-modality F-scores. Furthermore, deep learning (DL) models are utilised to perform automatic feature extraction of the ground reaction force and lower-body movements from the FS and AIS simultaneously. The benefits of the DL models are twofold: first, the spatio-temporal information from the two modalities is balanced despite the disproportionate number of inputs; second, the extracted information is fused across the DL model layers while preserving the categorical content of each gait activity.
The proposed fused approach is further assessed by F-score across various DL models: LSTM (99.90%), 2D-CNN (88.73%), 1D-CNN (94.97%) and FFNN (89.33%). It is concluded that, for the given DL models, robustness and execution time are the trade-off when evaluating the overall performance of the proposed system.
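The classifier comparison reported above can be sketched as follows. This is a generic illustration, not the thesis's experiment: the synthetic dataset, class count, hyperparameters, and the weighted-F1 averaging choice are all assumptions standing in for the real fused gait features and activity labels.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.metrics import f1_score

# Synthetic stand-in for fused AIS + FS feature vectors with four
# hypothetical gait-activity classes (assumed setup, not the thesis data)
X, y = make_classification(n_samples=600, n_features=16, n_informative=10,
                           n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

results = {}
for name, clf in [("K-NN", KNeighborsClassifier(n_neighbors=5)),
                  ("Kernel SVM", SVC(kernel="rbf"))]:
    clf.fit(X_tr, y_tr)
    # Weighted F1 accounts for any class imbalance across gait activities
    results[name] = f1_score(y_te, clf.predict(X_te), average="weighted")
    print(f"{name}: weighted F1 = {results[name]:.3f}")
```

Evaluating both classifiers on the same train/test split keeps the F-score comparison between fusion and single-modality inputs like-for-like.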
Date of Award | 1 Aug 2022
Original language | English
Awarding Institution | The University of Manchester
Supervisor | Krikor Ozanyan (Supervisor) & Patricia Scully (Supervisor)
Sensor Fusion and Data Processing to Analyse Human Gait and Activities for Healthcare Applications
Yunas, S. (Author). 1 Aug 2022
Student thesis: PhD