Effective and Efficient Similarity Searching in Motion Capture Data
| Authors | |
|---|---|
| Year of publication | 2018 | 
| Type | Article in Periodical | 
| Magazine / Source | Multimedia Tools and Applications | 
| MU Faculty or unit | |
| Citation | |
| DOI | https://doi.org/10.1007/s11042-017-4859-7 | 
| Field | Informatics | 
| Keywords | Motion capture data retrieval; Effective similarity measure; Efficient indexing; k-NN query; Motion image; Convolutional neural network; Fixed-size motion feature | 
| Description | Motion capture data describe human movements in the form of spatio-temporal trajectories of skeleton joints. Intelligent management of such complex data is a challenging task for computers and requires an effective concept of motion similarity. However, evaluating pair-wise similarity is difficult because a single action can be performed by different actors in different ways, at different speeds, or from different starting positions. Recent methods usually model motion similarity by comparing customized features either with distance-based functions or with specialized machine-learning classifiers. By combining both approaches, we transform the problem of comparing motions of variable lengths into the problem of comparing fixed-size vectors. Specifically, each relatively short motion is encoded into a compact visual representation from which a highly descriptive 4,096-dimensional feature vector is extracted by a fine-tuned deep convolutional neural network. The advantage is that the fixed-size features are compared by the Euclidean distance, which enables efficient motion indexing by any metric-based index structure. Another advantage of the proposed approach is its tolerance to imprecise action segmentation, variance in movement speed, and lower data quality. Together, these properties open new possibilities for effective and efficient large-scale retrieval. (An illustrative sketch of this pipeline is given below the table.) | 
| Related projects | |
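
The following is a minimal sketch of the retrieval pipeline described above, not the authors' implementation: a motion clip is rendered as a fixed-size "motion image", a 4,096-dimensional feature is taken from the penultimate fully connected layer of an AlexNet-style CNN (the paper fine-tunes its own network, which is not reproduced here), and a k-NN query is answered by the Euclidean distance on these fixed-size vectors. The motion-image layout, the `motion_to_image` helper, and the use of an off-the-shelf torchvision backbone are illustrative assumptions.

```python
# Sketch only: encode a motion clip as a fixed-size image, extract a 4,096-d
# CNN feature, and rank a small database by Euclidean distance (k-NN query).
import numpy as np
import torch
import torch.nn.functional as F
import torchvision

def motion_to_image(joints: np.ndarray) -> torch.Tensor:
    """joints: (frames, joints, 3) -> normalized (1, 3, 224, 224) tensor.

    Simplified encoding assumption: x/y/z coordinates become the RGB
    channels of a joints-by-frames image, min-max scaled per channel.
    """
    img = torch.from_numpy(joints).float().permute(2, 1, 0)       # (3, J, F)
    lo = img.amin(dim=(1, 2), keepdim=True)
    hi = img.amax(dim=(1, 2), keepdim=True)
    img = (img - lo) / (hi - lo + 1e-8)
    img = F.interpolate(img.unsqueeze(0), size=(224, 224),
                        mode="bilinear", align_corners=False)
    mean = torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)
    std = torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)
    return (img - mean) / std

# AlexNet truncated after its penultimate fully connected layer -> 4,096-d output.
cnn = torchvision.models.alexnet(weights="DEFAULT").eval()
cnn.classifier = torch.nn.Sequential(*list(cnn.classifier.children())[:-1])

@torch.no_grad()
def extract_feature(joints: np.ndarray) -> torch.Tensor:
    """Fixed-size feature vector for a motion of arbitrary length."""
    return cnn(motion_to_image(joints)).squeeze(0)                 # (4096,)

# Toy demo with random skeleton trajectories (31 joints, variable length).
rng = np.random.default_rng(0)
database = [rng.standard_normal((int(rng.integers(60, 180)), 31, 3)) for _ in range(5)]
query = rng.standard_normal((120, 31, 3))

db_feats = torch.stack([extract_feature(m) for m in database])     # (5, 4096)
q_feat = extract_feature(query)

# k-NN query under the Euclidean distance on the fixed-size features;
# any metric-based index structure could replace this brute-force scan.
dists = torch.cdist(q_feat.unsqueeze(0), db_feats).squeeze(0)
print("top-3 indices:", torch.topk(dists, 3, largest=False).indices.tolist())
```

Because every motion, regardless of its original length, is mapped to a vector of the same dimensionality, the brute-force scan above can be swapped for any metric index (e.g., an M-tree or pivot-based structure) without changing the similarity measure.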