Multimodal hand gesture recognition combining temporal and pose information based on CNN descriptors and histogram of cumulative magnitudes.

Date
2020
Abstract
In this paper, we present a new approach for dynamic hand gesture recognition. Our goal is to integrate spatiotemporal features extracted from multimodal data captured by the Kinect sensor. When skeleton data is not provided, we apply a novel skeleton estimation method to compute temporal features. Furthermore, we introduce an effective method to extract a fixed number of keyframes to reduce processing time. To extract pose features from RGB-D data, we take advantage of two different approaches: (1) Convolutional Neural Networks and (2) Histogram of Cumulative Magnitudes. We test different integration methods to fuse the extracted spatiotemporal features and boost the recognition performance of a linear SVM classifier. Extensive experiments demonstrate the effectiveness and feasibility of the proposed framework for hand gesture recognition.
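The fusion-and-classification step described in the abstract can be illustrated with a short sketch. The snippet below is not the authors' code: it assumes one of the tested integration methods is early fusion by concatenating per-sample CNN and Histogram of Cumulative Magnitudes (HCM) descriptors, and it uses scikit-learn's LinearSVC as a stand-in for the paper's linear SVM. The descriptor dimensions, array names, and random data are placeholders.

# Minimal sketch: early fusion of CNN and HCM descriptors + linear SVM
# (illustrative only; dimensions and data are hypothetical).
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_samples, n_classes = 200, 10
cnn_features = rng.normal(size=(n_samples, 4096))  # assumed CNN descriptor length
hcm_features = rng.normal(size=(n_samples, 512))   # assumed HCM descriptor length
labels = rng.integers(0, n_classes, size=n_samples)

# Early fusion: concatenate the pose descriptors into a single feature vector.
fused = np.concatenate([cnn_features, hcm_features], axis=1)

# Standardize the fused features and train a linear SVM on them.
clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0, max_iter=5000))
clf.fit(fused, labels)
print("training accuracy:", clf.score(fused, labels))

In practice the two descriptor streams could also be combined by late fusion (classifying each stream separately and merging scores), which is one of the alternative integration schemes the abstract alludes to.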
Keywords
Spherical coordinates, Keyframe extraction, Convolutional neural networks, Fusion schemes
Citation
ESCOBEDO CÁRDENAS, E. J.; CÁMARA CHÁVEZ, G. Multimodal hand gesture recognition combining temporal and pose information based on CNN descriptors and histogram of cumulative magnitudes. Journal of Visual Communication and Image Representation, v. 67, article 102772, 2020. Available at: <https://www.sciencedirect.com/science/article/abs/pii/S1047320320300225>. Accessed: 25 Aug. 2021.