The essential human gait parameters are briefly reviewed, followed by a detailed survey of the state of the art in deep learning for human gait analysis. The modalities for capturing gait data are grouped by sensing technology: video sequences, wearable sensors, and floor sensors, and the publicly available datasets for each group are summarized. The established artificial neural network architectures for deep learning are reviewed for each group and their performance is compared, with particular emphasis on the spatiotemporal character of gait data and the motivation for multi-sensor, multi-modality fusion. It is shown that, by most of the essential metrics, deep Convolutional Neural Networks typically outperform shallow learning models. Given the spatiotemporal character of gait data, this is attributed to deep learning's ability to extract gait features automatically, as opposed to shallow learning from handcrafted gait features.
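To make the contrast between automatic and handcrafted feature extraction concrete, the following is a minimal sketch of a single 1-D convolutional layer applied to a synthetic accelerometer-like gait signal. The signal, kernel, and function names are illustrative assumptions, not from the reviewed paper; in a trained CNN the kernel weights would be learned from data rather than fixed by hand.

```python
import numpy as np

def conv1d_relu(signal, kernel):
    """Valid-mode 1-D convolution followed by ReLU, as in one CNN layer.

    In a real gait-analysis CNN the kernel weights are learned end-to-end;
    here a fixed difference kernel stands in for one learned filter.
    """
    k = len(kernel)
    out = np.array([np.dot(signal[i:i + k], kernel)
                    for i in range(len(signal) - k + 1)])
    return np.maximum(out, 0.0)  # ReLU non-linearity

# Synthetic periodic signal loosely mimicking vertical acceleration
# over several gait cycles (hypothetical data, for illustration only).
t = np.linspace(0, 4 * np.pi, 200)
signal = np.sin(t) + 0.1 * np.sin(7 * t)

# A simple difference kernel responds to rapid signal changes
# (e.g. heel strikes); a CNN would discover such filters automatically.
kernel = np.array([-1.0, 0.0, 1.0])
features = conv1d_relu(signal, kernel)
print(features.shape)
```

Stacking many such learned filters, followed by pooling and further layers, is what lets a CNN build spatiotemporal gait features directly from raw sensor streams, whereas shallow models rely on statistics computed by hand beforehand.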
Number of pages: 17
Journal: IEEE Sensors Journal
Publication status: Published - 4 Oct 2019
Keywords:
- Deep learning
- Wearable sensors
- Legged locomotion