Hand posture recognition has a wide range of applications in Human Computer Interaction and Computer Vision. The problem is difficult mainly due to the high dexterity of the hand, the self-occlusions created in the limited view of the camera, and illumination variations. To remedy these problems, this paper proposes a hand posture recognition method using a 3-D point cloud to explicitly utilize the 3-D information in depth maps. Firstly, the hand region is segmented by a set of depth thresholds. Next, hand image normalization is performed to ensure that the extracted feature descriptors are scale and rotation invariant. By robustly coding and pooling 3-D facets, the proposed descriptor can effectively represent various hand postures. An SVM with a Gaussian kernel function is then used to recognize the postures. Experimental results on a posture dataset (digits 1 to 10) captured by a Kinect sensor demonstrate the effectiveness of the proposed approach; the average recognition rate of our method is over 96%.
Tremendous technology shifts have played a dominant role in all disciplines of science and technology. During the last decades, hand gesture recognition has been a very attractive research topic in Human Computer Interaction (HCI). The main purpose of researching hand gesture recognition is to use this natural manner of communication to implement HCI. With the rapid development of computer technology, research on new HCI methods that suit human communication habits has made great progress. The hand gesture is one of the most common communication methods in human life, so hand gesture recognition has become a research focus. Nevertheless, because of the variety, ambiguity and variability of gestures, the uncertainty in both the temporal and spatial domains, and the fact that the hand is a complicated articulated object, the research is a challenging interdisciplinary problem.
A gesture is a spatiotemporal pattern which may be static, dynamic or both. Static configurations of the hands are called postures, and hand movements are called gestures. Hand posture recognition from visual images has a number of potential applications in HCI, machine vision, virtual reality (VR), machine control in industry, and so on. Most conventional approaches to hand posture recognition have employed data gloves. However, for a more natural interface, hand postures must be recognized from visual images, as in communication between humans, without any external devices. Our research aims to find a highly efficient approach to improve hand detection and hand posture recognition.
Extensive research has been conducted on hand gesture recognition using 2-D digital images. However, this remains ongoing research, as most papers do not provide a complete solution to the previously mentioned problems. As the first step of hand gesture recognition, hand detection and tracking are usually implemented by skin-color or shape based segmentation inferred from the RGB image. However, because of the intrinsic vulnerability to background clutter and illumination variations, hand gesture recognition based on 2-D RGB images usually requires a clean and simple background, which limits its applications in the real world.
With the rapid development of RGB-Depth (RGB-D) sensors, it has become possible to obtain the 3-D point cloud of the observed scene, which offers great potential for real-time measurement of static and dynamic scenes. This means some of the common monocular and stereo vision limitations are partially resolved by the nature of the depth sensor. Compared to the traditional RGB camera, a 3-D depth map has significant advantages because it provides strong cues about boundaries and 3-D spatial layout even against cluttered backgrounds and under weak illumination. In particular, traditionally challenging tasks such as object detection and segmentation become much easier with depth information.
The recent progress in depth sensors such as Microsoft's Kinect has generated a new level of excitement in gesture recognition. Using depth information, Microsoft has developed a skeleton tracking system, and hand gesture recognition based on depth maps has gained growing interest; however, skeleton tracking does not handle hand gestures, which typically involve palm and finger motions. Several researchers have proposed approaches based on depth information for this problem. A depth image generated by a depth sensor is a simplified 3-D description; however, most current methods only treat the depth image as an additional dimension of information and still implement the recognition process in 2-D space. Ren et al. employed a template-matching approach that recognizes hand gestures through a histogram distance metric, the Finger-Earth Mover's Distance (FEMD), based on a near-convex estimation. Van den Bergh and Van Gool used a Time-of-Flight (ToF) camera combined with an RGB camera to successfully recognize four hand gestures by simply using small patches of hands. However, their method only considered the outer contour of the fingers and ignored the palm region, which also provides important shape and structure information for complex hand gestures. Most of these methods do not explicitly use the rich 3-D information conveyed by the depth maps.
In this paper, we propose a novel feature descriptor to explicitly encode the 3-D shape information from depth maps based on a 3-D point cloud. After hand region segmentation using depth information (distance from the Kinect sensor), a 3-D local support surface associated with each 3-D cloud point is defined as a 3-D facet. After robustly coding and pooling these facets, an SVM with a Gaussian kernel function is utilized to classify the postures in the dataset. Comparisons with a contour-matching method and a 2-D HOG method verify the effectiveness of our approach.
The rest of this paper is organized as follows. In Section 2, we review previous work in hand posture recognition. Section 3 addresses hand posture recognition based on a depth-information descriptor: the details of hand segmentation and normalization are presented first, and then the cell feature and pooling feature descriptors are proposed and classified with an SVM. Section 4 presents an experimental evaluation of classification accuracy compared to state-of-the-art methods. Finally, conclusions and future work are given in Section 5.
The human hand is a highly deformable articulated object with a total of about 27 degrees of freedom (DOFs). As a consequence, the hand can adopt a variety of static postures that can have distinct meanings in human communication. A first group of hand posture recognition researchers focuses on these so-called 'static' hand poses. A second research domain is the recognition of 'dynamic' hand gestures, in which not the pose but the trajectory of the hand is analyzed. This article focuses on static hand poses; dynamic gestures are covered elsewhere in the literature.
Hand posture recognition techniques consist of two stages: hand detection and hand pose classification. First, the hand is detected in the image and segmented. Afterwards, information is extracted that can be used to classify the hand posture, allowing it to be interpreted as a meaningful command.
Hand detection techniques can be divided into two main groups: data-glove based and vision based approaches. The former uses sensors attached to a glove to detect the hand and finger positions. The latter requires only a camera, so it is relatively low cost and minimally obtrusive for the user. Vision based approaches can detect the hand using information about depth, color, etc. Once the hand is detected, hand posture classification methods for vision-based approaches can be divided into three categories: low-level features, appearance based approaches, and high-level features.
Many researchers have argued that full reconstruction of the hand is not necessary for gesture recognition. Their methods therefore use only low-level image features that are fairly robust to noise and can be extracted quickly; an example used in hand posture recognition is the radial histogram. Appearance-based methods use a collection of 2-D intensity images to model the hand; these images can, for example, be compressed by Principal Component Analysis (PCA). Several appearance-based algorithms have been presented in the literature. S. Gupta et al. proposed a method using 15 local Gabor filters, with the features reduced by PCA to overcome the small-sample-size problem; the gestures are then classified with a one-against-one multiclass SVM.
Methods relying on high-level features use a 3-D hand model. High-level features can be derived from the joint angles and the pose of the palm. Most model-based approaches create a 3-D model of a hand by defining kinematic parameters and project the 3-D model onto a 2-D space. The hand posture can then be estimated by finding the kinematic parameters of the model that produce the best match between the projected edges and the edges extracted from the input image. P. Breuer et al. present a gesture recognition system for recognizing hand movements in near real time, in which the measured data are transformed into a cloud of 3-D points after depth keying and suppression of camera noise by median filtering.
One advantage of 3-D hand model based approaches is that they allow a wide range of hand gestures if the model has enough DOFs. However, these methods also have disadvantages: the database required to cover different poses under diverse views is very large, complicated invariant representations have to be used, the initial parameters have to be close to the solution at each frame, and the fitting process is highly sensitive to noise. Due to the complexity of the 3-D structures used, these methods tend to be slower than the other approaches, and this overhead must be kept under control to assure on-line performance.
3. Hand Posture Recognition Approach Using 3-D Information
In this study, the first step is to create a point cloud from the Kinect sensor in order to utilize the rich 3-D information conveyed by the depth map. There are two main reasons why we can ignore color information: first, the hand is relatively flatly textured, and second, we cannot assume consistent lighting conditions; in fact, our system can operate in complete darkness. While ignoring color information, we acknowledge that it could also provide useful information for solving the hand posture estimation problem.
- 3.1 Hand Region Segmentation Based on Depth Information
As discussed above, the hand needs to be segmented from the input images before any recognition algorithm can operate. For this, we use an effective rule based on the points closest to the camera and a region of interest around them. This segmentation method assumes that the user's hand is the object closest to the Kinect camera.
Given the point cloud set P = {p_i = (x_i, y_i, z_i)} in the world coordinate system, we need to extract the points that belong to the user's hand. We require that, during this process, the hand is the closest object to the Kinect sensor; the coordinates of the closest point are written as p_min = (X, Y, Z). The hand-area subset H = {p_i} is searched for in P by keeping the points whose offsets from p_min along the x-, y- and z-axes stay within the bounding-box thresholds, so H is guaranteed to be contained in a cuboid. The values 0.15 and 0.2 (in meters) for the width, height and depth of the bounding box were determined empirically to ensure that it can contain hands of various sizes. Assuming the segmentation completes successfully, the subset contains the points of the hand and part of the forearm. The hand segmentation result is shown in Fig. 1.
Hand segmentation from depth map: (a) RGB color image, (b) depth image, (c) hand segmentation results.
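The segmentation rule above can be sketched as follows. The function name and the interpretation of the 0.15 m / 0.2 m values as full box extents (centred laterally on the closest point, extending behind it in depth) are assumptions for illustration, not the paper's exact formulation.

```python
def segment_hand(points, width=0.15, height=0.15, depth=0.2):
    """Keep the points of a cloud inside a cuboid anchored at the point
    closest to the sensor (smallest z), assumed to lie on the user's hand.

    points: iterable of (x, y, z) tuples in metres, camera coordinates.
    """
    points = list(points)
    if not points:
        return []
    # The closest point to the Kinect sensor is taken as the hand anchor.
    X, Y, Z = min(points, key=lambda p: p[2])
    return [(x, y, z) for (x, y, z) in points
            if abs(x - X) <= width / 2
            and abs(y - Y) <= height / 2
            and 0.0 <= z - Z <= depth]
```

A usage sketch: for a cloud containing a fingertip at 0.6 m and a background point at 1.5 m, only points within the cuboid around the fingertip are kept, giving the hand (plus part of the forearm, as noted above).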
- 3.2 Hand Image Normalization
Because hand self-occlusion and in-plane rotation are challenges for hand posture recognition, scale and orientation normalization are performed in this step so that the extracted feature descriptors are scale and rotation invariant. For orientation normalization, we first estimate the hand orientation parameters and then rotate the hand point cloud such that the palm plane becomes parallel to the image plane and the hand points upward.
The normalization algorithm consists of three steps as described below:
Scale normalization: The hand region on P is scaled to fit into a predefined rectangle with a size of 150*150.
Out-of-plane normalization: Fit a plane P to the hand point cloud, and compute a rotation that makes P parallel to the image plane. This step is very useful when the visible surface of the hand is approximately planar. If not, such normalization could lead to an overstretched image with holes, and we do not perform out-of-plane normalization in this case. See Fig. 2(a) and 2(b).
In-plane normalization: We project all the points onto P and compute the principal direction. We then compute an in-plane rotation matrix so that the principal direction points upward after rotation. See Fig. 2(b) and 2(c).
Orientation and scale normalization: (a) out-plane rotation image, (b) in-plane rotation image, (c) normalization image.
Given a point p = (x, y, z)^T of the posture point cloud and the transformation (rotation) matrix R, the point p' after normalization is p' = R*p, where R is composed of the elementary rotations R_x(alpha), R_y(beta) and R_z(gamma), and alpha, beta, gamma represent the rotation angles around the x-, y- and z-axes, respectively.
After the normalization procedure, we obtain the rotation parameters and a depth map of the normalized hand point cloud. This image is called “Hand-Image”, which along with the rotation parameters, will be used at the feature generation stage.
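The rotation part of the normalization can be sketched as elementary rotations composed and applied to each point. The composition order R = Rz*Ry*Rx is an assumption for illustration (the text does not fix it), and angle estimation (plane fitting / PCA) is assumed done elsewhere.

```python
import math

def rotation_matrix(alpha, beta, gamma):
    """Compose R = Rz(gamma) @ Ry(beta) @ Rx(alpha) as a 3x3 nested list."""
    ca, sa = math.cos(alpha), math.sin(alpha)
    cb, sb = math.cos(beta), math.sin(beta)
    cg, sg = math.cos(gamma), math.sin(gamma)
    rx = [[1, 0, 0], [0, ca, -sa], [0, sa, ca]]   # rotation about x
    ry = [[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]]   # rotation about y
    rz = [[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]]   # rotation about z
    matmul = lambda a, b: [[sum(a[i][k] * b[k][j] for k in range(3))
                            for j in range(3)] for i in range(3)]
    return matmul(rz, matmul(ry, rx))

def rotate(points, r):
    """Apply the 3x3 rotation r to every (x, y, z) point: p' = R * p."""
    return [tuple(sum(r[i][k] * p[k] for k in range(3)) for i in range(3))
            for p in points]
```

For example, an in-plane rotation of 90 degrees (gamma = pi/2) maps the point (1, 0, 0) to (0, 1, 0), turning the principal direction upward as in the in-plane normalization step.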
- 3.3 Extraction of Cell Feature and Pooling Feature
The 3-D surface properties, such as bumps and grooves, provide significant information, especially when the outer contour is not sufficient or discriminative enough for classification. As shown in the first two example postures, the 3-D surface of the thumb constitutes an informative region for differentiating two hand gestures that share similar visual patterns.
4*4 Cell feature
A 3-D facet is defined as a 3-D plane which can be represented by [n_x, n_y, n_z, d], where the first three coefficients form the normal vector n of the facet and the fourth, d, is the Euclidean distance from the plane to the coordinate origin. Although all four coefficients are needed to determine a local surface, in this paper we concentrate only on the distance rather than the absolute orientation of the local surface. Thus, we code a 3-D facet using only its relative distance d; we call the resulting code the "Cell feature".
For the cell feature, we compute the occupied area of each cell as well as the average depth of the non-empty cells, and then scale the average value into [0, 1] by d_scaled = (d_avg - d_min) / (d_max - d_min), where d_avg is the average depth value in one cell, d(u, v) denotes the depth value at pixel (u, v), and d_max and d_min represent the maximum and minimum depth values in one cell, respectively.
A cloud point of a 3-D depth map can be mapped onto a 2-D depth image; each cloud point corresponds to a pixel of the 2-D depth image, which means the pixels of the 2-D depth image originate from the 3-D points located on the front surface. Therefore, the depth attribute of all the occupancy features is guaranteed to be non-negative. The proposed cell feature alleviates the problem of similar contours because we consider the interior region of the folded thumb, which makes the average depth value of each cell more discriminative.
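The cell feature computation can be sketched as below. The fixed 4*4 grid, the use of None for empty pixels, and the (occupancy, scaled average depth) pair per cell are illustrative assumptions consistent with the description above.

```python
def cell_features(depth, grid=4):
    """For each cell of a normalized hand depth image, return
    (occupancy, scaled_avg): the fraction of non-empty pixels and the
    average depth rescaled into [0, 1] via (avg - min) / (max - min).

    depth: 2-D list; empty pixels are None.
    """
    h, w = len(depth), len(depth[0])
    ch, cw = h // grid, w // grid
    feats = []
    for gy in range(grid):
        for gx in range(grid):
            vals = [depth[y][x]
                    for y in range(gy * ch, (gy + 1) * ch)
                    for x in range(gx * cw, (gx + 1) * cw)
                    if depth[y][x] is not None]
            occupancy = len(vals) / (ch * cw)
            if not vals:
                feats.append((0.0, 0.0))
                continue
            d_min, d_max = min(vals), max(vals)
            avg = sum(vals) / len(vals)
            scaled = (avg - d_min) / (d_max - d_min) if d_max > d_min else 0.0
            feats.append((occupancy, scaled))
    return feats
```

Empty cells get a zero feature, matching the observation that the depth attribute of the occupancy features is non-negative.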
After coding the 3-D facets of the depth map, a concentric spatial pooling scheme is used to group the coded 3-D facets over the entire hand posture region to generate the feature descriptor. In the concentric spatial pooling step, we divide the normalized hand region into 32 spatial bins, namely 4 radial quantization bins and 8 angular quantization bins, and compute the average depth value for each bin; together with the cell features, this yields the final descriptor of dimension 512.
The concentric spatial pooling scheme (The entire hand region is quantized as 4bins in radius and 8 bins in angular).
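The concentric pooling step can be sketched as below: each point is assigned to one of 4 radial x 8 angular bins around the region centre, and the mean depth per bin gives a 32-dimensional pooled vector. Equal radial steps out to a maximum radius are an assumption, as the bin boundaries are not specified above.

```python
import math

def pooled_feature(points, centre, max_r, n_rad=4, n_ang=8):
    """Concentric spatial pooling: average depth per (radius, angle) bin.

    points: (x, y, depth) triples; centre: (cx, cy) of the hand region.
    Returns a list of n_rad * n_ang bin averages (0.0 for empty bins).
    """
    bins = [[] for _ in range(n_rad * n_ang)]
    cx, cy = centre
    for x, y, d in points:
        r = math.hypot(x - cx, y - cy)
        if r > max_r:
            continue  # outside the pooled hand region
        ri = min(int(r / max_r * n_rad), n_rad - 1)          # radial bin
        theta = math.atan2(y - cy, x - cx) % (2 * math.pi)
        ai = min(int(theta / (2 * math.pi) * n_ang), n_ang - 1)  # angular bin
        bins[ri * n_ang + ai].append(d)
    return [sum(b) / len(b) if b else 0.0 for b in bins]
```

The descriptor length is fixed (32 here) regardless of how many points fall in each bin, which is what makes the pooled feature directly usable by the SVM.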
- 3.4 Hand Posture Recognition Based on Support Vector Machine (SVM)
After the descriptors are generated as above, a Support Vector Machine (SVM) with a Radial Basis Function (RBF) kernel is used as the classifier in this study. SVM is a supervised learning technique for optimal modeling of data: it learns a decision function that separates the data classes with maximum width. The SVM learner defines a hyperplane for the data and maximizes the margin around it; because of this maximum separation, it is also considered a margin classifier. The margin is the minimum distance between the hyperplane and the support vectors, and it is maximized during training.
SVM is a well-suited classifier when the number of features is large, because it is robust to the curse of dimensionality. The kernel function computes the inner product K(x, y) = <phi(x), phi(y)> directly from the input; one of its characteristics is that there is no need to explicitly represent the mapped feature space. To optimize classification in this paper, we need to identify the best parameters for the SVM. In Table 1, we list three different SVM kernels with various parameter settings, each tested on our hand posture database. From Table 1, the Gaussian kernel function with parameter sigma = 3 yields the best performance (average accuracy over 96%) for hand posture recognition. Therefore, an SVM with a Gaussian kernel and sigma = 3 is used to classify the posture descriptor obtained in the previous section.
Experiment results using 3 kernel types with different parameters
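The Gaussian (RBF) kernel used by the classifier can be written out explicitly; the common parameterization K(x, y) = exp(-||x - y||^2 / (2*sigma^2)) with sigma = 3 is assumed here, as the exact form is not spelled out above. This is only the kernel; full SVM training would typically rely on a library implementation.

```python
import math

def gaussian_kernel(x, y, sigma=3.0):
    """Gaussian (RBF) kernel: exp(-||x - y||^2 / (2 * sigma^2)).

    x, y: feature vectors (e.g. the 512-dimensional posture descriptors).
    """
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-sq_dist / (2.0 * sigma ** 2))
```

By construction the kernel value is 1 for identical inputs and decays toward 0 as descriptors move apart, so there is never a need to represent the mapped feature space explicitly.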
4. Experimental Classification Results and Analysis
The hand posture recognition system we designed is running on the hardware environment of Intel (R) Core (TM) i5 (3.40 GHz), a Kinect sensor, and the software environment of Windows 7 and Visual Studio 2010.
The dataset for training and testing was captured by a Microsoft Kinect sensor and contains 1000 depth maps of hand gestures (decimal digits from 1 to 10) from 10 subjects, with 10 samples per gesture. We use the recordings of nine subjects for training and the remaining subject for testing; this experiment is repeated for 10 random subdivisions of the data. Before computing the descriptor, we normalize the hand region into an image patch with a fixed size of 150*150 to reduce information loss in quantization.
Hand posture database used in this study, captured by the Kinect sensor (the first row shows the depth images and the second row the corresponding color images)
To verify the performance of the Cell+Pooling feature, we perform comparison experiments with the cell feature alone, the pooling feature alone, and the combination of the two. In the comparison, the X-axis gives the categories of the 10 postures and the Y-axis the recognition rate. The hybrid descriptor combining the cell feature and the pooling feature is superior to each individual one, because the Cell+Pooling feature exploits more of the 3-D information conveyed by the depth maps and thereby differentiates each hand posture more effectively.
Comparisons of Cell Feature, Pooling Feature and Cell+Pooling Feature
We compare the proposed method with the minimum near-convex decomposition method, the contour-matching method, and the conventional 2-D image based HOG on our hand posture dataset. The first two methods only consider the outer contour of the fingers and ignore the palm region, which also provides important shape and structure information for complex hand postures. In the implementation of HOG, we evenly separate the normalized patches into 8*8 non-overlapping cells, each with eight orientation bins. The feature vectors of four different normalizations, namely L1-norm, L2-norm, L1-sqrt and L2-Hys, are concatenated as the final HOG representation; the HOG descriptor therefore has dimension 8*8*8*4 = 2048. The posture recognition accuracies of the methods are shown below.
Comparisons of our proposed method with other three methods presented previously
From these comparisons, the proposed method considerably outperforms the contour-matching based methods and the traditional 2-D HOG descriptor. Compared to contour matching, our method explicitly captures 3-D surface properties, such as a thumb folded into the palm, rather than only outer contour information. Meanwhile, compared to the 2048-dimensional HOG, our feature descriptor has dimension 512, so our method greatly decreases computational complexity and decoding time.
The performance of hand posture recognition is further evaluated using a confusion matrix. When classifying an arbitrary posture, the class with the maximum score over the 10 classifiers is chosen (Max-Win rule). This metric always results in a single classification, correct or incorrect, with no false-positive cases; if the maximum score points to the incorrect class, the posture is counted as misclassified. The confusion matrix was created by comparing the scores obtained by each classifier applied to a given testing image and selecting the maximum score over all 10 classifiers. The average accuracy of correct classification over the confusion matrix reaches 96.1% with the proposed method, higher than the other methods mentioned above.
Confusion matrix computed when using the method proposed in this paper
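Under the Max-Win rule each test sample falls in exactly one cell of the confusion matrix, so the average accuracy is simply the diagonal (correct) count divided by the total. The 2x2 matrix below is purely illustrative, not the paper's actual results.

```python
def average_accuracy(cm):
    """Average accuracy over a confusion matrix: trace / total count."""
    total = sum(sum(row) for row in cm)
    correct = sum(cm[i][i] for i in range(len(cm)))
    return correct / total

# Hypothetical 2-class example: 19 of 20 samples on the diagonal.
cm = [[9, 1],
      [0, 10]]
```

With all per-class accuracies high, the diagonal dominates and the average accuracy approaches 1, as with the 96.1% reported above.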
Overall, the proposed method achieves 2.3% and 2.0% higher accuracy than the two compared contour-based methods on our posture database, and 4.3% higher than the 2-D HOG descriptor. The confusion matrix shows per-class accuracies all above 96%, which demonstrates the effectiveness of our method.
Hand posture recognition has played a very important role in Human Computer Interaction (HCI) and Computer Vision (CV) for many years. The challenge arises mainly from self-occlusions caused by the limited view of the camera and from illumination variations. In this paper, a normalized 3-D descriptor computed from depth maps is used to explicitly capture and model discriminative surface information. Unlike color (RGB) information, the depth information obtained from the IR sensor is not affected by illumination conditions, so the proposed system works well even in darkness.
We implement hand posture recognition using an SVM with a Gaussian kernel function after coding the facets and spatial pooling. Compared to the contour-matching method and the 2-D HOG method, the experimental results verify the effectiveness of the proposed approach, which exploits the rich 3-D shape and structure information in the depth maps.
Additionally, we would like to expand our hand posture recognition system in order to accommodate more challenging postures from other domains in future work.
Wenkai Xu received his B.S. at Dalian Polytechnic University in China (2006-2010) and his Master's degree at Tongmyong University in Korea (2010-2012). Currently, he is studying for his Ph.D. in the Department of Information and Communications Engineering, Tongmyong University. His main research areas are image processing, computer vision, biometrics and pattern recognition.
Eung-Joo Lee received his B.S., M.S. and Ph.D. in Electronic Engineering from Kyungpook National University, Korea, in 1990, 1992, and August 1996, respectively. Since 1997, he has been with the Department of Information & Communications Engineering, Tongmyong University, Korea, where he is currently a professor. From 2000 to July 2002, he was president of Digital Net Bank Inc. From 2005 to July 2006, he was a visiting professor in the Department of Computer and Information Engineering, Dalian Polytechnic University, China. His main research interests include biometrics, image processing, and computer vision.
References
"Object interaction detection using hand posture cues in an office setting," International Journal of Human-Computer Studies. DOI: 10.1016/j.ijhcs.2010.09.003
"Spelling it out: Real-time ASL fingerspelling recognition," in Proc. of IEEE International Conference on Computer Vision Workshops (ICCVW).
Pujari Nitin V., "Finger detection for sign language recognition," in Proc. of International MultiConference of Engineers & Computer Scientists.
"Hand gesture recognition using neural networks," in Proc. of IEEE Advance Computing Conference, Feb. 19-20, 2010.
"Template-based hand pose recognition using multiple cues," in Proc. of 7th Asian Conference on Computer Vision, January 13-16, 2006.
"Hand gesture recognition for human-machine interaction," Journal of WSCG.
"Continuous Gesture Trajectory Recognition System based on Computer Vision," International Journal of Applied Mathematics & Information Science.
"Indoor Segmentation and Support Inference from RGBD Images," in Proc. of 12th European Conference on Computer Vision.
"Hand Gesture Recognition using Depth Data," in Proc. of International Conference on Automatic Face and Gesture Recognition.
"View-independent human action recognition with Volume Motion template on single stereo camera," Pattern Recognition Letters. DOI: 10.1016/j.patrec.2009.11.017
"Robust hand gesture recognition based on finger-earth mover's distance with a commodity depth camera," in Proc. of 19th ACM International Conference on Multimedia.
"Minimum near-convex decomposition for robust shape representation," in Proc. of IEEE International Conference on Computer Vision.
Van den Bergh M. and Van Gool L., "Combining RGB and ToF cameras for real-time 3-D hand gesture interaction," in Proc. of IEEE Workshop on Applications of Computer Vision.
"Vision based hand gesture recognition," World Academy of Science, Engineering and Technology.
"Real-time hand tracking using a mean shift embedded particle filter," Pattern Recognition. DOI: 10.1016/j.patcog.2006.12.012
"A review of vision based hand gestures recognition," International Journal of Information Technology, vol. 2, no. 2.
Ahmad Wan Fatimah Wan, "Static Hand Gesture Recognition Using Local Gabor Filter," Procedia Engineering. DOI: 10.1016/j.proeng.2012.07.250
"Hand gesture recognition using Kinect," in Proc. of IEEE 3rd International Conference on Software Engineering and Service Science, June 22-24, 2012.
"Hand Gesture Recognition with a novel IR Time-of-Flight Range Camera - A pilot study," in Proc. of 3rd International Conference of MIRAGE, March 28-30, 2007.
"Robust gesture based on finger-earth mover's distance with a commodity depth camera," in Proc. of the 19th ACM International Conference on Multimedia.
Burges C. J. C., "A Tutorial on Support Vector Machines for Pattern Recognition," Data Mining and Knowledge Discovery. DOI: 10.1023/A:1009715923555
"Static Hand Gesture Recognition Based on HOG with Kinect," in Proc. of 2012 4th International Conference on Intelligent Human-Machine Systems and Cybernetics, Aug. 26-27, 2012.
"Recognizing Actions Using Depth Motion Maps-based Histograms of Oriented Gradients," in Proc. of International Conference on ACM Multimedia.