In this paper, we propose a method combining an accelerometer with a cross structured light system to estimate the golf green slope. The cross-line laser provides two laser planes whose equations are computed with respect to the camera coordinate frame using a least squares optimization. By capturing the projections of the cross-line laser on the golf slope in a static pose using a camera, two 3D curve functions are approximated as high-order polynomials with respect to the camera coordinate frame. The curve functions are then expressed in the world coordinate frame using a rotation matrix estimated from the accelerometer output. The curves provide important information about the green such as the height and the slope angle. The curve estimation accuracy is verified via experiments that use an OptiTrack camera system as a ground-truth reference.
1. Introduction
The laser structured light system has been used for several decades in many areas since it provides high-accuracy 3D measurement. It is applied for weld line detection [1], 3D depth map construction [2], polyhedral surface location and orientation [3], and 3D object identification [4]. The laser structured light system consists of a laser (line or cross type) together with a camera. The laser projects a line (or cross line) on the scene and the camera captures images containing the projection. For each laser point in each image, the 3D position can be computed if the relative pose between the laser and camera is known [5].
In this paper, we apply a laser structured light system to the problem of estimating the golf green slope angle. In golf putting, it is important to correctly estimate the green slope angle. However, it is not always easy to read the green slope correctly [6, 7], which is especially true for novice players. Inclinometers (placed on the green) are often used to aid slope angle estimation [8]. Although an inclinometer gives an accurate slope angle, it measures the angle at only one point. We propose a new method of estimating the green slope angles combining a cross laser structured light and an accelerometer. The accelerometer is used to estimate the attitude of the camera, which is needed to transform the 3D position data (described in terms of the camera coordinate system) into the slope angle. Since smartphones contain an accelerometer sensor unit, the proposed system can be constructed by simply attaching a laser to a smartphone.
This paper is organized as follows. Section 2 presents the problem formulation. In Section 3, we introduce the camera and laser plane calibration problem, and in Section 4 we present a slope estimation algorithm. Experimental results are given in Section 5 and conclusions are presented in Section 6.
2. Problem Formulation
When putting in golf, one of the most important tasks is to predict the motion of the ball near the hole. The trajectory of the ball depends heavily on the slope, but it is sometimes not easy to estimate the slope angle correctly. In this paper, the slope angle is estimated using a camera, a cross-line laser and a three-axis accelerometer unit.
There are four coordinate frames: the world, accelerometer, camera and laser coordinate frames. The z axis of the world coordinate system coincides with the local gravitational (upward) direction. The x direction is not important and can be chosen arbitrarily. The three axes of the accelerometer coordinate system coincide with the three axes of the accelerometer unit. The z axis of the camera coordinate system is defined as shown in Fig. 1, which is the standard way to define the camera coordinate system. One line of the cross-line laser is assumed to lie on the zx plane (plane 1) of the laser coordinate system and the other line is assumed to lie on the yz plane (plane 2) of the laser coordinate system. Here the cross laser lines are assumed to be perpendicular.
Fig. 1. System overview
We need to know the relative attitude between the accelerometer unit and the camera as well as the relative pose between the laser and the camera. Let the rotation matrix R^a_c represent the rotation from the camera coordinate system to the accelerometer coordinate system. For a vector p ∈ R^3, we have the following relationship:

[p]_a = R^a_c [p]_c (1)
where [p]_a ([p]_c) denotes the vector p expressed in the accelerometer (camera) coordinate system. Similarly, we use the notation [p]_l and [p]_w for a vector in the laser and world coordinate systems, respectively. The translation vector [t_c]_a is not required in this paper since only the attitude is essential. The relative pose between the laser and camera can be expressed using two plane equations with respect to the camera coordinate system [9] (see Fig. 2):

(n_1, [p]_c) = d_1, (n_2, [p]_c) = d_2 (2)
Fig. 2. Relationship between laser and camera
where (a, b) denotes the inner product of vectors a ∈ R^3 and b ∈ R^3, and n_1 ∈ R^3 and n_2 ∈ R^3 are the unit normal vectors of the two laser planes, respectively.
The required calibration parameters are R^a_c, n_1, d_1, n_2 and d_2, which are computed in Sections 3.1 and 3.2. The camera intrinsic parameters are computed using the method in Chapter 11 of [10].
3. Sensor Calibration
In this section, the calibration parameters are computed, where a checkerboard with known checker size is used.
- 3.1 Computation of n_1, d_1, n_2 and d_2
A checkerboard is placed on a flat table. Without loss of generality, it is assumed that the checkerboard is on the xy plane of the world coordinate system. With an arbitrarily chosen position and attitude, two pictures are taken, with and without the cross laser lines, alternately. After that, the camera position and attitude are changed and two pictures are taken again in the same way. This step is repeated N_cal times and an index i (1 ≤ i ≤ N_cal) is used to denote the step number.
Two line equations are derived for the two cross laser lines on the checkerboard (see Fig. 4 for one laser line). Let the two line equations (derived using points in the normalized image plane) be given by

A_{j,i} x + B_{j,i} y + C_{j,i} = 0, j = 1, 2 (3)

where [A_{j,i} B_{j,i} C_{j,i}]^T (j = 1 or 2) is a unit vector and the index i (1 ≤ i ≤ N_cal) indicates that the line equation corresponds to the i-th step image.
Consider the first line equation in (3). The origin of the camera coordinate frame and the points on the first line form a plane, as shown in Fig. 3 (see plane c_1). The plane c_1 equation is given by:

(n_{c1,i}, [p]_c) = 0, where n_{c1,i} = [A_{1,i} B_{1,i} C_{1,i}]^T (4)

Fig. 3. Three planes: camera, laser and checkerboard planes

In (4), the plane c_1 is expressed in the camera coordinate system. Note that there are three planes in Fig. 3: plane c_1, the checkerboard plane (the xy plane of the world coordinate system) and laser plane l_1. The laser line on the checkerboard is the intersection of these three planes. Using this fact, we are going to derive the plane l_1 equation (n_1 and d_1) in the camera coordinate system. To do that, we first derive the checkerboard plane equation.
From the checkerboard image (with known checker size), we can compute the relative pose R^c_w and [t_w]_c ∈ R^3 between the camera and the world coordinate systems using the method in [11], so that the following is satisfied for a vector p ∈ R^3:

[p]_c = R^c_w [p]_w + [t_w]_c (5)
The checkerboard plane equation (the plane is assumed to be the xy plane of the world coordinate system) in the world coordinate system is given by:

(n_w, [p]_w) = 0, where n_w = [0 0 1]^T (6)

Combining (5) and (6), the checkerboard plane equation in the camera coordinate system is given by:

(n_{cb,i}, [p]_c) = d_{cb,i}, where n_{cb,i} = R^c_w n_w and d_{cb,i} = (n_{cb,i}, [t_w]_c) (7)
Now we are going to derive the laser line equation, which is the intersection of the three planes in Fig. 3. We use the (p, n) line representation (see 4.2.2 in [9]), where a space point r is on the line if the following is satisfied:

(r − p) × n = 0 (8)

where [p^T, n^T]^T ∈ R^6 is a unit-norm vector. Since the intersection line between plane c_{1,i} and the checkerboard plane is the same as the intersection line between plane l_1 and the checkerboard plane, the following is derived using (4.72) in [9]:
In (9), N(a) denotes the normalized vector of a vector a. Note that the unknown parameters in (9) are n_1 and d_1. Equation (9) can be written in the following form:
where
In (10), [a×] denotes the 3×3 matrix form of the cross product, so that a × b = [a×]b. Since we have N_cal sets of pictures, there are N_cal equations. Finding x_1 from the N_cal equations can be written as the following optimization problem:

where C = [I_3 0_{3×N_cal}]. If we define
as:
(11) can be written as:
The solution to (12) is given in A5.4.2 of [12]. Thus we can compute the plane l_1 equation in the camera coordinate system. Using exactly the same method, we can compute the plane l_2 equation in the camera coordinate system as well.
For an arbitrary image point (x_a, y_a) given on the laser plane l_1, the corresponding 3D point in the camera coordinate frame is computed as

[P]_c = s_a [x_a y_a 1]^T (13)

where s_a is a scaling factor computed from the plane equation (n_1, [P]_c) = d_1, i.e., s_a = d_1 / (n_1, [x_a y_a 1]^T).
- 3.2 Rotation matrix between the accelerometer and the camera coordinate system
The rotation matrix between the accelerometer and camera coordinate systems can be computed using the method proposed in [13]. The rotation can be estimated by having both sensors observe the vertical direction in several poses. In detail, the accelerometer measures the gravitational force while the camera captures vertical landmarks. We note that the system is static while the data are taken, so the accelerometer measures only the gravitational force.
4. Slope Estimation
In this section, we propose a method to detect the golf slope using a cross-line laser system. The intersections of the laser planes and the golf slope create two curves, as can be seen from Fig. 5a. From the calibration step in Section 3, all 3D points on the two laser planes are known in the camera coordinate frame once their coordinates in the camera images are known. Based on that fact, the intersections of the two laser planes and the golf slope can be approximated as two high-order polynomials with respect to the camera coordinate frame. These curves are then expressed in the world coordinate frame by utilizing an accelerometer. The purpose of the accelerometer is to measure the gravitational force in order to estimate the relative attitude between the system and the world coordinate frame. Since the camera images contain two laser stripes, our first step is to distinguish each 2D stripe in the images and later estimate the curves in 3D with respect to the world coordinate frame.
Fig. 5. (a) Intersection line (b) Separated curves
- 4.1 Curve distinction
Since the relative position of the camera and the laser is fixed, the intersection of two laser planes is also fixed with respect to the camera coordinate frame. Thus this intersection line is fixed in the image and can be used to distinguish two intersection stripes of laser planes and the golf slope.
Assume that Q_1 and Q_2 are two points on the laser planes' intersection line L_1, and their coordinates in the image coordinate frame are Q_1c = [x_1 y_1 1]^T and Q_2c = [x_2 y_2 1]^T (see Fig. 5a). The equation of the laser planes' intersection line in the image coordinate frame can be identified from Q_1c and Q_2c as follows:

y = a_1 x + b_1, where a_1 = (y_2 − y_1)/(x_2 − x_1) and b_1 = y_1 − a_1 x_1 (14)
In Fig. 5b, the two laser curves are separated by lines L_1 and L_2. L_1 is the image of the intersection line between the laser planes, with the line equation given in (14). In each image, we can scan for bright points (x_i, y_i) that belong to the L_1 line and satisfy I(x_i, y_i) > δ_I, where I(x_i, y_i) indicates the intensity of a point (x_i, y_i) and δ_I is a chosen intensity threshold. However, due to image blurring and the width of the laser line, a number of points satisfy the above constraints. Therefore, the coordinates of the intersection point (x_0, y_0) between the two curves on L_1 are approximated by:

x_0 = (1/k) Σ_i x_i, y_0 = (1/k) Σ_i y_i (15)

where k is the number of points (x_i, y_i) on L_1.
For simplicity, L_2 is a vertical line with the line function x = x_0 in the image. With these definitions, the two lines y = a_1 x + b_1 and x = x_0 split the image into four parts (see Fig. 5b). All laser points that belong to parts one and three must be on curve 1, and the rest must be on curve 2. In other words, the condition for a point (x_j, y_j) to be on
• curve 1 is:
• curve 2 is:
The easiest way to find the laser points in the image is to subtract two images captured in sequence, such that the laser is turned on in the first frame and off in the second frame. By subtracting the two images, an image that contains only the laser stripes is obtained. This image is then converted from RGB to grayscale in order to find the laser point coordinates, which have higher intensities than the background. Conditions (16) and (17) are then used to classify point coordinates as belonging to the first or the second curve.
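The steps above (image subtraction, thresholding, and classification against the lines y = a_1 x + b_1 and x = x_0) can be sketched as follows. Since the explicit forms of conditions (16) and (17) are not reproduced here, the quadrant pairing chosen for curve 1 below is one possible convention:

```python
import numpy as np

def split_laser_curves(img_on, img_off, a1, b1, x0, thresh=40):
    """Separate the two laser stripes in a grayscale image pair (laser on /
    laser off). Subtraction leaves only the stripes; each bright pixel is then
    assigned to curve 1 or curve 2 by which side of the lines y = a1*x + b1
    and x = x0 it falls on (opposite quadrants belong to the same stripe)."""
    diff = img_on.astype(np.int32) - img_off.astype(np.int32)
    ys, xs = np.nonzero(diff > thresh)         # bright laser pixels (row=y, col=x)
    above = ys > a1 * xs + b1                  # side of line L1
    right = xs > x0                            # side of line L2
    curve1 = (above & ~right) | (~above & right)   # one quadrant pairing
    pts = np.stack([xs, ys], axis=1)
    return pts[curve1], pts[~curve1]
```

In practice the threshold δ_I would be tuned to the laser brightness and ambient light.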
- 4.2 Curve estimation
After obtaining the sets of points belonging to the first and second curves, respectively, two basic curves are created based on representative points in each image column. An illustration of this process is given in Fig. 6, where a gray image of the laser line is constructed. The set of bright point coordinates collected from the gray image is refined to sub-pixel accuracy using the following weighting method [14]: in each x_k-th image column, y_k is computed by:

y_k = Σ_j I(x_k, y_j) y_j / Σ_j I(x_k, y_j) (18)

Fig. 6. Estimated curve

where I(x_k, y_j) is the intensity of the pixel (x_k, y_j), only pixels with intensity I larger than a chosen threshold are included, and n is the number of laser points in the x_k-th image column. The basic curve formed from the sub-pixel points is then approximated by a high-order polynomial in order to get a smooth curve function in the image frame.
Based on the calibration steps, the estimated curve function can now be expressed in the camera coordinate frame. Our next step is to describe the estimated curves with respect to the world coordinate frame, as our goal is to estimate the golf slope in a real situation. The procedure of golf slope estimation is briefly sketched in Fig. 7, where an accelerometer is used together with the laser-camera system.
Fig. 7. Coordinate frame conversion
The most important factor in estimating the golf slope is the vertical, which can be represented by the gravitational force. It is assumed that the system is not moving while capturing the golf slope. This assumption guarantees that the output of the accelerometer contains only the gravitational acceleration. Since the x and y directions of the world coordinate frame are not important, the rotation from the accelerometer coordinate frame to the world coordinate frame can be estimated by applying the triad algorithm [15] to the following:

where y_a = [y_ax y_ay y_az]^T is the output of the accelerometer and R^w_a is the rotation matrix from the accelerometer coordinate frame to the world coordinate frame. Let R^a_c be the rotation matrix from the camera coordinate frame to the accelerometer coordinate frame. Since the translation between the camera, accelerometer and world coordinate frames is not important, the origins of these three coordinate frames are assumed to be at the same position. For a point Q_c in the camera coordinate frame, its coordinates in the world coordinate frame can be expressed by

[Q]_w = R^w_a R^a_c [Q]_c (20)

Therefore, the coordinates of each laser point Q(x_m, y_m) in the image are computed in the world coordinate frame as:

where T is the intrinsic matrix obtained from the camera calibration step [10]. Note that the third component of [Q]_w gives the height of the point Q; thus the relationship between the slope and the world coordinate frame can be identified based on [Q]_w.
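The camera-to-world conversion above is a pair of matrix products. A minimal sketch with illustrative names, which also returns the heights (the third world coordinate) used to characterize the slope:

```python
import numpy as np

def points_to_world(Q_c, R_a_c, R_w_a):
    """Express camera-frame laser points (N, 3) in the world frame.
    Translations are ignored since only the attitude matters for slope
    angles: [Q]_w = R_w_a @ R_a_c @ [Q]_c. The third column gives heights."""
    Q_w = (R_w_a @ R_a_c @ Q_c.T).T
    return Q_w, Q_w[:, 2]
```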
- 4.3 Slope surface reconstruction
Section 4.2 gives two stripes of 3D coordinate points that represent the captured slope surface. However, from these two stripes alone it might be difficult for users to visualize the slope. The objective of this section is to reconstruct the slope surface from the given 3D coordinate points.
The reconstruction method used in this section was proposed by John R. D'Errico [16]. In [16], the interpolation problem is expressed as a linear algebra problem of the form:

where the vector x has length n_x · n_y (n_x is the number of grid nodes in the x direction, and n_y is the number of grid nodes in the y direction). Thus A_p has n_p rows, one for each data point supplied by the user, and n_x · n_y columns.
At every node of the surface, the interpolation algorithm tries to make the partial derivatives of the surface in neighboring cells equal. As a result, we have a linear equation

where the derivatives are approximated using finite differences of the surface at neighboring nodes. The combination of (22) and (23) results in the following optimization problem: find x to minimize

where λ is a chosen parameter, nominally 1, that allows the user to control the relative grid stiffness.
5. Experiments
In this section, we verify the accuracy of the proposed system through experiments.
- 5.1 Flat slope angle estimation
In this experiment, the accuracy of flat slope angle estimation using the proposed system was investigated by measuring the angle error between a flat plane and the vertical. In reality, a golf green is almost flat with a small descent angle, so it can be approximated as a tilted flat plane. The experiment setup is described in Fig. 8, where the proposed system projects a cross laser line on a flat plane to estimate the plane's 3D equation. The angle of the slope can be deduced from the angle formed by the plane's normal vector and the vertical vector. The estimated angle is compared with the value obtained from an inclinometer to evaluate the estimation accuracy.
Fig. 8. Slope angle estimation experiment setup
The flat plane was placed in various positions at different angles with respect to the vertical. The laser projection points are approximated as a plane in the camera coordinate frame using a least squares algorithm [17]. The normal vector computed from the plane's equation is then expressed in the world coordinate frame. This vector makes an angle α̂ with the vertical vector, which is compared with the ground-truth angle α obtained from an inclinometer to find the estimation error e:

e = |α̂ − α| (25)
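The computation in this experiment — fit a plane to the 3D laser points and take the angle between its normal and the vertical — can be sketched as below (an SVD-based total least squares plane fit; the function name is illustrative):

```python
import numpy as np

def plane_slope_angle(points_w):
    """Fit a plane to 3D points (world frame) by total least squares and
    return the slope angle: the angle between the plane's unit normal and
    the vertical z axis, in degrees."""
    centered = points_w - points_w.mean(axis=0)
    # The normal is the right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(centered)
    n = Vt[-1]
    cosang = abs(n[2]) / np.linalg.norm(n)
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
```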
Ten measurements of the slope angle are shown in Table 1. As can be seen from Table 1, the estimation achieves high accuracy, with a mean absolute error of 0.0910° and a maximum error of 0.201°. In application, the angle of the slope can be computed as the angle between the estimated normal vector and the vertical vector.
Table 1. Flat slope angle estimation
- 5.2 Complex Slope Angle Estimation
This experiment provides a 3D view of a complex slope from the user's viewpoint. Based on the estimated laser projection curves, the slope is depicted with its characteristics such as relative height and angle. In this experiment, a green slope model of size 1.2 m × 1.2 m was used, as shown in Fig. 9a.
Fig. 9. Slope angle trend estimation
Fig. 9b represents the slope via the estimated height, while Fig. 9c shows the estimated angle of each point in the laser projections with respect to the world coordinate frame. The color in Fig. 9b changes from purple to yellow as the relative height goes from the smallest value (0 mm) to the largest value (180 mm). Similarly, Fig. 9c illustrates the angle value at each point. The yellow color indicates that the angle of the slope at that point is 16°.
The slope angle is computed based on the tangent of the slope at each point with respect to the horizontal plane. To reduce computation, the process of determining the slope angle is sketched as follows:
- Rotate each laser plane so that its normal vector coincides with the vertical vector. The 3D laser curve then becomes 2D in the z = 0 plane.
- Compute the tangent vector ν at each point in 2D.
- Convert the tangent vector ν back to 3D in the world coordinate frame.
- Compute the slope angle made by the tangent vector and the vertical vector z_n:
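For a curve already expressed in the world frame, the steps above reduce to differentiating the curve and measuring the tangent's elevation above the horizontal plane. A simplified sketch using finite-difference tangents rather than the paper's rotate-to-2D procedure:

```python
import numpy as np

def tangent_slope_angles(curve_w):
    """Slope angle at each point of a 3D laser curve (N, 3) in the world
    frame: tangents are approximated by finite differences, and the slope
    angle is the tangent's elevation above the horizontal plane (90 degrees
    minus the angle made with the vertical), in degrees."""
    t = np.gradient(curve_w, axis=0)               # finite-difference tangents
    t /= np.linalg.norm(t, axis=1, keepdims=True)
    return np.degrees(np.arcsin(np.abs(t[:, 2])))  # elevation above horizontal
```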
The estimated slope accuracy is evaluated using the OptiTrack camera system. An overview of the experiment is given in Fig. 10a, where a green slope model is set within the camera system's working field. Projecting the cross-line laser on the slope model, the 3D shape of the slope is computed and expressed in the world coordinate frame using the algorithm proposed in Section 4. To verify the accuracy of the estimation, circular markers are placed on the laser projection stripes so that the centers of the markers lie on the stripes. These markers are observed by the camera system to give ground-truth values, which are compared with the corresponding points on the estimated curves (see Fig. 10b).
Fig. 10. Slope angle accuracy verification experiment
The positions of the markers in the camera image can be determined by extracting circle shapes, as can be seen from Fig. 11a-b.
Fig. 11. Marker positioning in the camera image
Fig. 11a-b shows the coordinates of nine markers in the world coordinate frame as well as the corresponding points estimated by the proposed system. The result is given in Fig. 12; the proposed system provides an accurate estimation of the slope, since the estimated points are close to the true values. The estimation accuracy was also investigated by measuring the slope five times with the laser system in different poses. The results are given in Table 2, which shows the distance error of each point with respect to the marker position, in millimeters. The root mean square of the distance error is used as the evaluation criterion. The table indicates that the proposed system provides an accurate estimation of the golf slope, with a mean distance error of 3.34 mm.
Fig. 12. Estimated and true slope coordinate points
Table 2. Marker position estimation errors in different poses
To provide a visual representation for the user, the surface of the slope is reconstructed based on the estimated laser points. Fig. 13 shows the reconstruction result. The height of the slope is represented by color for easy visualization. By looking at the reconstructed slope, the user can easily understand the slope and better predict the golf ball trajectory.
Slope surface reconstruction
In this experiment, the reconstructed surface was built from 831 3D laser points in 0.072 seconds on a PC with a Core i5 3.00 GHz CPU and 8 GB RAM. This speed implies that the proposed method can be applied in a smartphone for quick surface generation for golfers during play.
In the next experiment, we examined the effect of the grass on the green surface on the slope estimation quality. In reality, a golf green is covered with grass; however, the grass is very short (from 2.25 to 4 mm [18]) and dense. In this experiment, we check the feasibility of extracting the laser stripes in an artificial grass environment. The experimental object was a golf hitting mat with 5 mm high grass. The experimental process was the same as that proposed in Section 4, with image subtraction used to obtain the laser stripes from the captured images.
The result in Fig. 14 shows that the system can successfully extract the laser stripes on a grass slope. After this step, the slope is estimated exactly as described in Section 5.
Fig. 14. Slope surface reconstruction
6. Conclusion
The paper proposed a system consisting of an accelerometer, a cross structured light and a camera to estimate the surface structure of the golf green slope. In order to estimate the golf slope, a method to detect the laser planes as well as the laser projection curves was proposed. The laser projections are separated by exploiting the fact that the intersection of the two laser planes is fixed with respect to the camera and image coordinate frames. After the projections are distinguished, the laser planes' equations are computed by solving an optimization problem. Hence, all intersection point coordinates of the laser planes and the slope can be computed with respect to the world coordinate frame. Based on these points, the slope surface is reconstructed using an interpolation method.
To verify the feasibility of the proposed system, several experiments were performed. The results show that the system achieves a high accuracy of about 0.1° in flat slope angle error and a 3.34 mm mean distance error on a complex slope. In addition, an experiment on an artificial grass mat verified that the system can still extract the laser points from a grass surface.
Using the system, the golf slope is presented in 3D, which gives players insight into the slope's characteristics. Furthermore, the system is light, simple and cheap, and can easily be implemented on a smartphone, where a high-resolution camera and an accelerometer are available. By simply attaching a laser light, the smartphone becomes the proposed system, which is very convenient for players to use on the golf course.
Acknowledgements
This work was supported by the 2014 Research Fund of University of Ulsan.
BIO
Duy Duong Pham He received his B.S. degree in Automation from the Department of Electrical Engineering at Danang University of Technology - The University of Danang, Vietnam, in 2010. Since 2011 he has worked as a lecturer at Danang College of Technology - The University of Danang. He is currently pursuing a Master's degree at the University of Ulsan. His research interests include mobile robot control, motion tracking and personal navigation.
Quoc Khanh Dang He received his B.S. degree in Automation from the Department of Electrical Engineering at Hanoi University of Science and Technology, Vietnam, in 2009. In 2012, he received his M.S. degree from the School of Electrical Engineering at the University of Ulsan, Korea. He is currently pursuing a PhD degree at the University of Ulsan. His research interests include mobile robot control, motion tracking and personal navigation.
Young-Soo Suh He received his B.S. and M.S. degrees from the Department of Control and Instrumentation Engineering at Seoul National University, Korea, in 1990 and 1992, and his Ph.D. degree from the Department of Mathematical Engineering and Information Physics at the University of Tokyo, Japan, in 1997. He is currently a professor in the Department of Electrical Engineering, University of Ulsan, Korea. His research interests include networked control systems, attitude estimation and control, and personal navigation systems.
References
[1] Zhang L., Ye Q., Yang W., Jiao J., "Robust Weld Line Detection and Tracking via Spatial-Temporal Cascaded Hidden Markov Models and Cross Structured Light," IEEE Transactions on Instrumentation and Measurement, 6, pp. 742-752, 2014.
[2] Botterill T., Mills S., Green R., "Design and calibration of a hybrid computer vision and structured light 3D imaging system," International Conference on Automation, Robotics and Applications, Wellington, New Zealand, 2011.
[3] Tseng D.-C., Chen Z., "Computing Location and Orientation of Polyhedral Surfaces Using a Laser-Based Vision System," IEEE Transactions on Robotics and Automation, 7, 1991.
[4] Thomas B., Jothilingam A., "Object Identification Based on Structured Light Reconstruction and CAD Based Matching," International Journal of Innovative Research in Science, Engineering and Technology, 3, 2014.
[6] Van Lier W., Van Der Kamp J., Savelsbergh G. J. P., "Gaze in golf putting: effects of slope," International Journal of Sport Psychology, 41, pp. 160-176, 2010.
[7] Hecht H., Shaffer D., Keshavarz B., Flint M., "Slope estimation and viewing distance of the observer," Attention, Perception & Psychophysics, 76, pp. 1729-1738, 2014. DOI: 10.3758/s13414-014-0702-7
[8] Inclinometers: Mechanical & Electronic. Available:
[9] Kanatani K., Statistical Optimization for Geometric Computation, Dover Publications, 1996.
[10] Bradski G., Kaehler A., Learning OpenCV, O'Reilly Media, 2011.
[11] Zhang Z., "A Flexible New Technique for Camera Calibration," IEEE Transactions on Pattern Analysis and Machine Intelligence, 22, pp. 1330-1334, 2000. DOI: 10.1109/34.888718
[12] Hartley R., Zisserman A., Multiple View Geometry in Computer Vision, Cambridge University Press, Cambridge, 2003.
[13] Lobo J., Dias J., "Relative Pose Calibration Between Visual and Inertial Sensors," The International Journal of Robotics Research, 26, pp. 561-575, 2007. DOI: 10.1177/0278364907079276
[14] Kim D., Lee S., Kim H., Lee S., "Wide-Angle Laser Structured Light System Calibration with a Planar Object," International Conference on Control, Automation and Systems, KINTEX, Gyeonggi-do, Korea, 2010.
[15] Black H. D., "A passive system for determining the attitude of a satellite," AIAA Journal, 2, pp. 1350-1351, 1964. DOI: 10.2514/3.2555
[16] D'Errico J. R., Surface Fitting Using Gridfit, 2006. Available:
[17] Chong E. K. P., Zak S. H., An Introduction to Optimization, John Wiley & Sons, Inc., New York, USA, 2001.
[18] Dunsmuir A., Height of cut: Tread with caution when comparing golf courses, 2011. Available: