Abstract
In this paper, we present a tracking framework for capturing articulated human motion in real time, without attaching markers to the subject's body. This is achieved by first obtaining a low-dimensional representation of the training motion data using a nonlinear dimensionality reduction technique, the back-constrained Gaussian Process Latent Variable Model (GPLVM). A prior dynamics model is then learnt from this low-dimensional representation by partitioning the motion sequences into elementary movements with an unsupervised EM clustering algorithm. The temporal dependencies between these elementary movements are captured efficiently by a Variable Length Markov Model (VLMM). The learnt dynamics model is used to bias the propagation of candidate pose feature vectors in the low-dimensional space. By combining this with an efficient volumetric reconstruction algorithm, our framework can quickly evaluate each candidate pose against image evidence captured from multiple views. We present results showing that our system can accurately track complex structured activities such as ballet dancing in real time. ©2007 IEEE.
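To make the propagate-and-evaluate loop the abstract describes more concrete, here is a minimal Python sketch of one tracking step in a learnt low-dimensional latent space. Everything in it is an illustrative assumption rather than the paper's implementation: the `DynamicsPrior` class stands in for the VLMM over EM-derived elementary movements, the `likelihood` function stands in for the volumetric-reconstruction match against multi-view image evidence, and all dimensions and constants are made up.

```python
import numpy as np

# Hypothetical sketch of one propagate-weight-resample tracking step.
# The paper's GPLVM, EM clustering, and VLMM components are NOT
# reproduced here; these are simplified stand-ins for illustration.

rng = np.random.default_rng(0)

LATENT_DIM = 3     # dimensionality of the latent pose space (assumed)
N_PARTICLES = 200  # number of candidate pose hypotheses (assumed)

class DynamicsPrior:
    """Stand-in for the learnt dynamics model.

    Each cluster (elementary movement) is modelled as a linear drift
    plus Gaussian noise in latent space; the actual VLMM would pick the
    next cluster from a variable-length history of past clusters.
    """
    def __init__(self, n_clusters=4):
        self.drifts = rng.normal(scale=0.05, size=(n_clusters, LATENT_DIM))
        self.noise = 0.02
        self.n_clusters = n_clusters

    def propagate(self, particles, cluster_ids):
        # Bias each candidate by its cluster's drift, then diffuse.
        drift = self.drifts[cluster_ids]
        return particles + drift + rng.normal(scale=self.noise,
                                              size=particles.shape)

def likelihood(particles):
    """Placeholder for the image-evidence score.

    In the paper, each candidate pose is mapped back to full pose space
    and compared against a volumetric reconstruction from multiple
    views; here we simply score distance to a fixed latent target.
    """
    target = np.zeros(LATENT_DIM)
    d2 = np.sum((particles - target) ** 2, axis=1)
    return np.exp(-d2 / 0.5)

# --- one tracking step: propagate, weight, resample ---------------------
prior = DynamicsPrior()
particles = rng.normal(size=(N_PARTICLES, LATENT_DIM))
cluster_ids = rng.integers(prior.n_clusters, size=N_PARTICLES)

particles = prior.propagate(particles, cluster_ids)  # dynamics-biased move
weights = likelihood(particles)                      # image-evidence score
weights /= weights.sum()
idx = rng.choice(N_PARTICLES, size=N_PARTICLES, p=weights)
particles = particles[idx]                           # resample survivors

estimate = particles.mean(axis=0)  # pose estimate in latent space
print("latent pose estimate:", estimate)
```

The design point this sketch illustrates is that candidate poses live in the low-dimensional latent space rather than the full joint-angle space, which is what keeps the per-frame propagation and evaluation cost low enough for the real-time performance the abstract claims.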
Original language | English |
---|---|
Title of host publication | Proceedings of the IEEE International Conference on Computer Vision |
Publisher | IEEE |
Publication status | Published - 2007 |
Event | 2007 IEEE 11th International Conference on Computer Vision, ICCV - Rio de Janeiro |
Duration | 1 Jul 2007 → … |
Conference
Conference | 2007 IEEE 11th International Conference on Computer Vision, ICCV |
---|---|
City | Rio de Janeiro |
Period | 1/07/07 → … |
Keywords
- Computer Science, Artificial Intelligence
- Engineering, Electrical & Electronic
- Imaging Science & Photographic Technology