Advanced
Vision-Based Indoor Localization Using Artificial Landmarks and Natural Features on the Ceiling with Optical Flow and a Kalman Filter
Vision-Based Indoor Localization Using Artificial Landmarks and Natural Features on the Ceiling with Optical Flow and a Kalman Filter
International Journal of Fuzzy Logic and Intelligent Systems. 2013. Jun, 13(2): 133-139
Copyright ©2013, Korean Institute of Intelligent Systems
This is an Open Access article distributedunder the terms of the CreativeCommons Attribution Non-Commercial License(http://creativecommons.org/licenses/by-nc/3.0/) which permits unrestricted noncommercialuse, distribution, and reproductionin any medium, provided the originalwork is properly cited.
  • Received : April 11, 2013
  • Accepted : June 25, 2013
  • Published : June 25, 2013
Download
PDF
e-PUB
PubReader
PPT
Export by style
Share
Article
Author
Metrics
Cited by
TagCloud
About the Authors
Angga Rusdinar
Sungshin Kim
sskim@pusan.ac.kr

Abstract
This paper proposes a vision-based indoor localization method for autonomous vehicles. A single upward-facing digital camera was mounted on an autonomous vehicle and used as a vision sensor to identify artificial landmarks and any natural corner features. An interest point detector was used to find the natural features. Using an optical flow detection algorithm, information related to the direction and vehicle translation was defined. This information was used to track the vehicle movements. Random noise related to uneven light disrupted the calculation of the vehicle translation. Thus, to estimate the vehicle translation, a Kalman filter was used to calculate the vehicle position. These algorithms were tested on a vehicle in a real environment. The image processing method could recognize the landmarks precisely, while the Kalman filter algorithm could estimate the vehicle’s position accurately. The experimental results confirmed that the proposed approaches can be implemented in practical situations.
Keywords
1. Introduction
Landmarks are essential for autonomous vehicle localization and navigation. Landmarks are used as references when a robot localizes or navigates within its environment. Many different types of landmark are used by autonomous vehicles or robots to locate their position. The positions of landmarks can be divided into vertical and horizontal views. The advantage of using ceiling landmarks or vertical views is the tolerance of dynamic obstacles and rearrangements, so many researchers have used ceiling landmarks as references for robot navigation [1 - 7] .
Vision is a good option for robot sensors because it is sufficiently flexible for detecting or recognizing any feature with any color and any size. A vision system or a combination of a vision system with sensors has been used in many localization and navigation systems [1 , 5 , 8 - 12] . Lee et al. [8] used a combination of vision and a range sensor for mobile robot localization. Park et al. [11] used a charge-coupled device (CCD) camera to calculate the zero moment point, which was measured from the reference object image (the camera was located on the robot’s head).
Ceiling lamps in an office were used as a landmark by Panzieri et al. [5] because they are the same shape and are spaced in a regular pattern, while they can be observed easily without any obstacles blocking the robot vision system. However, ceiling light-based positioning methods assume that all of the lights are lit and there are problems when a light is damaged or off. In addition, a light might not appear in the camera view. Xu et al. [6] used natural features on ceilings to estimate the initial orientation and position of an autonomous vehicle in the word frame of a specific block on the ceiling using a perspective n-point-based positioning method. The global orientation was estimated based on a point feature on the ceiling. Wu and Tsai [7] attached a circular artificial landmark to the ceiling because the circular shape on the ceiling became an irregular ellipse when viewed from below, and the parameters of the ellipse were used to estimate the vehicle location with good precision during navigation.
The Kalman filter (KF) and extended Kalman filter (EKF) have been used in many areas such as automotive engineering, communications, and simultaneous localization and mapping (SLAM) [1 , 2 , 11 - 16] . Myung et al. [13] used a KF to estimate the parameters given physical constraints using a general constrained optimization technique. Kim and Hong [14] used an EKF and an unscented KF for vehicle tracking in an automated container terminal. Rusdinar et al. [1] (our previous research) used EKF to estimate the vehicle position using artificial landmarks and odometry sensors. The EKF was used when the artificial landmark was not detectable by the vehicle. The EKF was also used in robot formation control to estimate the virtual target point position [15 , 16] .
In this paper, we propose a ceiling localization method based on a combination of artificial and natural landmarks with a KF. Previously, we used artificial landmark to localize a mobile robot and odometry sensors to track the vehicle displacement between one landmark and another landmark. To improve our previous method, we propose natural feature recognition and use it to track vehicle movements. Corner detection algorithms have been used to detect natural features on ceilings. Thus, we used a corner detection algorithm and removed the odometry sensor.
The contributions of this study are as follows: 1) optical flow algorithms replaced the encoder and gyro sensor; 2) a robust and fast image processing algorithm was developed, which can detect and recognize landmarks in real-time in areas with uneven light; 3) natural landmarks were used when the artificial landmarks could not be detected. We consider that our proposed system will help engineers to develop simple and cheap systems without reducing the capacity for autonomous vehicle guidance during localization and navigation in an indoor environment.
The paper is structured as follows. In Section 2, we describe the system configuration and image processing. In Section 3,
PPT Slide
Lager Image
Mobile robot with an upward-facing camera and the landmarkon the ceiling.
we present the localization method based on optical flow and the KF. In Section 4, our experimental results are described. In Section 5, we present our conclusions.
2. System Configuration and Image Processing
Figure 1 shows the autonomous vehicle with an upward-facing digital camera, which was used in the experiments, and the landmark on the ceiling. A Compaq Presario CQ20 (Hewlett- Packard, Palo Alto, CA, USA) notebook running a C++ program was used to handle the image processing routines and to control the vehicle movements. Two direct current motors were used to propel the vehicle. The motors were controlled by the notebook via a USB DAQ (National Instrument, Austin, TX, USA) motor driver.
During image processing, the object recognition algorithm used corner features on the ceiling as landmarks. The corner features were used as references to track the vehicle’s displacement. Harris corner detection was used to recognize the corner features of the ceiling. Optical flow was used to calculate the displacement distance.
- 2.1 Corner Feature Detection and Optical Flow
The point of a corner is defined by
PPT Slide
Lager Image
where M is a 2 × 2 structure matrix and k is a scale constant with a range of 0.004 < k < 0.006. The determinant M and trace M are defined by the eigenvalue ( λ ) of matrix M , where det M = λ 1 λ 2 and trace M = λ 1 + λ 2 . The structure matrix M
PPT Slide
Lager Image
Vehiclemovements and the upward-facing camera position.
PPT Slide
Lager Image
Calculation of the angular velocity.
is defined by
PPT Slide
Lager Image
where w ( x , y ) is a weight function and I is the image intensity. The camera movement can be determined using
PPT Slide
Lager Image
where E is the difference between the original and the moved window. I 1 and I 2 are the intensity of the image position ( x , y ) and the intensity of the moved window, respectively. I 2 is defined by
PPT Slide
Lager Image
where u and v are the window displacements in the x and y directions, respectively.
- 2.2 Calculation of the Angular Velocity
The camera position is located exactly in the middle of the axle, so the vehicle movements are defined by two modes: straight and rotate. These movement modes simplify the angular velocity calculations. Figure 2 shows the robot movement modes and the camera position.
PPT Slide
Lager Image
Image processing to calculate the optical flow.
To calculate the angular velocity of the mobile robot based on the image processing results, the image is divided into three Region of Interests(ROIs), i.e., the left, front, and right ROIs. Each ROI has an angular velocity ω . Figures 3 and 4 show the angular velocity calculations relative to the image feature translation. The angular velocity in each ROI is defined by
PPT Slide
Lager Image
where R is the distance between the center of the image and the center of the ROI and V ⊥ is the cross-radial component or a component perpendicular to the radius R . R and V ⊥ are defined by
PPT Slide
Lager Image
PPT Slide
Lager Image
where ( xr , yr ) is center of the ROI position in the image frame, ( x 0 , y 0 ) is the center of the image, and Vi and φ are the speed of translation from point t -1 to t and the angle between Vi and the cross-radial component, respectively. The angle φ is defined by
PPT Slide
Lager Image
where β is defined by
PPT Slide
Lager Image
The translation of the center point is used to calculate the translation distance. The translation distance in the image is defined by
PPT Slide
Lager Image
where ( xc , yc ) is the center of the image defined by
  • image_width/2 andimage_height/2,
while ( xu , yu ) is defined by (3). All of the calculations are performed in a pixel format. The artificial landmark from our previous study [1] is used to convert pixels to a metric format. The artificial landmark comprises circles arranged in a defined pattern. The pattern generates landmarks and directional information. A comparison of the real diameter of the circles and their diameter in the image is used to calculate the metric format during image processing.
3. Vehicle Localization and Navigation
- 3.1 Vehicle Movements
The proposed motion model is defined by
PPT Slide
Lager Image
where xpt , ypt , and θt the ( x , y ) vehicle position and the heading at each step, respectively; p indicates that x , y , and θ are derived from the optical flow algorithm; θt –1 is the robot heading at step t -1; ωt is the angular velocity calculated based on the optical flow during image processing; and s is the translation distance between one point and another point according to the optical flow algorithm. The translation distance is defined by
PPT Slide
Lager Image
where l is defined by (11) and vp is the vehicle speed, which is determined based on the speed of translation from one point to another point in the image. The vehicle speed is defined by
PPT Slide
Lager Image
where ts is the time sampling rate during image processing, which is calculated when the algorithm detects the original corners based on the translated corners, and tb is the looping time for image processing, which is calculated after the end of ts to the first ts . A time function in C++ is used to calculate the time sampling rate.
- 3.2 Implementation of the Kalman Filter
During image processing, the noises change the vehicle translation and direction, which makes the vehicle position distant from the real position. The KF is a tool that can estimate the variables used for the localization calculations during image processing. In this study, a KF was used to estimate the vehicle position based on its input (translation and direction). The KF state is modeled by
PPT Slide
Lager Image
and the measurements are taken from the robot translation calculation produced by the optical flow, where x , y , and θ can be measured directly:
PPT Slide
Lager Image
where Xt and Xt –1 are state vectors at time t and t-1 , respectively.
In the simulation, the state is defined as
PPT Slide
Lager Image
A is an n × n matrix that describes the state at time t -1, which is defined by
PPT Slide
Lager Image
B is the n × m matrix that describes the control input u , which is defined by
PPT Slide
Lager Image
and zt is a measurement state.
The KF process comprises two stages: the prediction state and the measurement update state. The prediction state is given by
PPT Slide
Lager Image
where Pt and
PPT Slide
Lager Image
are a priori and a posteriori estimates of the error covariance, respectively, and Ex is the process with noise respect to Xt .
The measurement update state is given by
PPT Slide
Lager Image
PPT Slide
Lager Image
PPT Slide
Lager Image
where K is the Kalman gain, Ez is a measurement noise matrix
PPT Slide
Lager Image
Map of the experimental environment (laboratory to hall).
with respect to vt , and I is an identity matrix. The KF reduces the noise of the target point position. H is a Jacobian matrix that contains the partial derivatives of the measurement function h(x) with respect to the state Xt , where
PPT Slide
Lager Image
4. Experimental Results
We tested the proposed algorithm in experiments using our mobile robot. The mobile robot was moved and the upwardfacing camera captured images of the ceiling while tracking the robot movements. The experiment was divided into two areas. In the first experiment, the vehicle was moved from the study room through the meeting room of our laboratory into the hall of the third floor of our building. Figure 5 shows a map of our laboratory condition and tracks showing the vehicles movements. The vehicle moved straight from the start point in the study room, turned right after moving 350 cm, turned left after 570 cm, and stopped outside the laboratory or in the hall. The robot trajectory was plotted using the optical flow and the artificial landmark. Figure 6 shows the results of the vehicle movements and the position estimated using only the optical flow. The trajectory of the vehicle calculated used the optical flow alone shows that the difference between the vehicle positions was far from the real position of the vehicle. This was attributable to noise during image processing.
PPT Slide
Lager Image
Vehicle trajectory based on optical flow alone.
PPT Slide
Lager Image
Vehicle trajectory using optical flow, artificial landmarks,and the Kalman filter.
Figure 7 shows the vehicle trajectory calculated using the artificial landmark and the KF. The KF could be used to estimate the vehicle position. The figure compares the localization results using optical flow alone with that using the KF and an artificial landmark. The noise that interfered with the position calculation was eliminated by the KF, so the vehicle orientation error was corrected when the vehicle detected the artificial landmark.
In the second experiment, the vehicle moved in the corridor of our building. Figure 8 shows a map of the experimental environment in the hall. This environment was similar to that where we tested the encoder and gyro in our previous studies [7 , 8] . The figure shows that there were still errors in the tracking algorithm. These errors were due to light malfunctions along the corridor. These errors caused the vehicle’s position to deviate far from the original trajectory, which could be reduced by the KF. Figure 9 shows the trajectory of the vehicle without the KF and artificial landmarks. This shows that the errors were very different from the real trajectory because there was no
PPT Slide
Lager Image
Map of the experimental environment (hall).
PPT Slide
Lager Image
Experimental results in the hall using optical flow alone.
PPT Slide
Lager Image
Experimental results in the hall using artificial landmarks,optical flow, and the Kalman filter.
correction process when the vehicle moved along the corridor. Figure 10 shows the trajectory of the vehicle using the KF and artificial landmarks, where the KF corrected the positional error caused by noise.
5. Conclusion
In this study, we developed a vehicle tracking method based on an optical flow algorithm using natural features on the ceiling for real-time image processing. The results confirmed that our tracking algorithms can replace odometry sensors. Using a combination of artificial landmarks and natural features on the ceiling, the vehicle could localize and navigate autonomously with a single low cost digital camera. The natural landmarks could be recognized by an artificial neural network based on the shape and color of the image.
Acknowledgements
This research was supported by the MOTIE (The Ministry ofTrade, Industry and Energy), Korea, under the Human ResourcesDevelopment Program for Special Environment Navigation/Localization National Robotics Research Center supportprogram supervised by the NIPA (National IT Industry PromotionAgency). (H1502-13-1001)
References
Rusdinar A. , Kim J. , Lee J. , Kim S. 2011 “Implementationof real-time positioning system using extended Kalman filterand artificial landmark on ceiling” Journal of Mechanical Science and Technology 26 (3) 949 - 958    DOI : 10.1007/s12206-011-1251-9
Kim J. M. “Optical guidance: ceiling” Available http: //www.youtube.com/watch?v=qM1mcRejils
Fukuda T. , Ito S. , Arai F. , Yokoyama Y. , Abe Y. , Tanaka K. , Tanaka Y. 1995 “Navigation system based on ceiling landmark recognition for autonomous mobile robot: landmark detection based on fuzzy template matching (FTM)” in Proceedings of 1995 IEEE/RSJ International Conference on Intelligent Robots and Systems Pittsburgh, PA 150 - 155    DOI : 10.1109/IROS.1995.526153
Jeong W. Y. , Lee K. M. 2005 “CV-SLAM: a new ceilingvision-based SLAM technique” in Proceedings of 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems Edmonton, AB 3195 - 3200    DOI : 10.1109/IROS.2005.1545443
Panzieri S. , Pascucci F. , Setola R. , Ulivi G. 2001 “A lowcost vision based localization system for mobile robots” Target 4
Xu D. , Han L. , Tan M. , Li Y. F. 2009 “Ceiling-based visualpositioning for an indoor mobile robot with monocular vision” IEEE Transactions on Industrial Electronics 56 (5) 1617 - 1628    DOI : 10.1109/TIE.2009.2012457
Wu C. J. , Tsai W. H. 2009 “Location estimation for indoorautonomous vehicle navigation by omni-directional visionusing circular landmarks on ceilings” Robotics and Autonomous Systems 57 (5) 546 - 555    DOI : 10.1016/j.robot.2008.10.001
Lee Y. J. , Yim B. D. , Song J. B. 2009 “Mobile robot localizationbased on effective combination of vision andrange sensors” International Journal of Control, Automation and Systems 7 (1) 97 - 104    DOI : 10.1007/s12555-009-0112-0
Kim M. Y. , Cho H. 2006 “An active trinocular vision systemof sensing indoor navigation environment for mobilerobots” Sensors and Actuators A: Physical 125 (2) 192 - 209    DOI : 10.1016/j.sna.2005.07.015
Menegatti E. , Maeda T. , Ishiguro H. 2004 “Image-basedmemory for robot navigation using properties of omnidirectionalimages” Robotics and Autonomous Systems 47 (4) 251 - 267    DOI : 10.1016/j.robot.2004.03.014
Park S. , Han Y. , Hahn H. 2009 “Balance control of a bipedrobot using camera image of reference object” International Journal of Control, Automation and Systems 7 (1) 75 - 84    DOI : 10.1007/s12555-009-0110-2
Jin T. S. , Morioka K. , Hashimoto H. 2004 “Appearancebased object identification for mobile robot localizationin intelligent space with distributed vision sensors” International Journal of Fuzzy Logic and Intelligent Systems 4 (2) 165 - 171    DOI : 10.5391/IJFIS.2004.4.2.165
Myung H. , Lee H. K. , Choi K. , Bang S. 2010 “Mobilerobot localization with gyroscope and constrained Kalmanfilter” International Journal of Control, Automation and Systems 8 (3) 667 - 676    DOI : 10.1007/s12555-010-0321-6
Kim Y. S. , Hong K. S. 2005 “A tracking algorithm forautonomous navigation of AGVs in an automated containerterminal” Journal of Mechanical Science and Technology 19 (1) 72 - 86    DOI : 10.1007/BF02916106
Rusdinar A. , Kim S. 2012 “Modeling of vision based robotformation control using fuzzy logic controller and extendedKalman filter” International Journal of Fuzzy Logic and Intelligent System 12 (3) 238 - 244    DOI : 10.5391/IJFIS.2012.12.3.238
Lee H. “FormationEKF” Available http://youtu.be/zCASF4I12rA