Vision Sensor-Based Driving Algorithm for Indoor Automatic Guided Vehicles
International Journal of Fuzzy Logic and Intelligent Systems. 2013. Jun, 13(2): 140-146
Copyright ©2013, Korean Institute of Intelligent Systems
This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License, which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
  • Received : May 14, 2013
  • Accepted : June 23, 2013
  • Published : June 25, 2013
About the Authors
Quan Nguyen Van
School of Electrical Electronic and Control Engineering, Kongju National University, Cheonan, Korea
Hyuk-Min Eum
Department of Electrical and Electronic Engineering, Yonsei University, Seoul, Korea
Jeisung Lee
Department of Electrical and Electronic Engineering, Yonsei University, Seoul, Korea
Chang-Ho Hyun
School of Electrical Electronic and Control Engineering, Kongju National University, Cheonan, Korea

In this paper, we describe a vision sensor-based driving algorithm for indoor automatic guided vehicles (AGVs) that facilitates a path tracking task using two mono cameras for navigation. One camera is mounted on the vehicle to observe the environment and to detect markers in front of the vehicle. The other camera is attached so its view is perpendicular to the floor, which compensates for the distance between the wheels and the markers. The angle and distance from the center of the two wheels to the center of the marker are also obtained using these two cameras. We propose five movement patterns for AGVs to guarantee smooth performance during path tracking: starting, moving straight, pre-turning, left/right turning, and stopping. This driving algorithm based on two vision sensors gives greater flexibility to AGVs, including easy layout changes, autonomy, and even economy. The algorithm was validated in an experiment using a two-wheeled mobile robot.
1. Introduction
Automatic guided vehicles (AGVs) have been a growing research area over the last two decades. AGVs play important roles in the design of new factories and warehouses, and in the safe movement of goods to their correct destinations. AGVs have been used in assembly lines or production lines in many factories, e.g., automobile, food processing, wood working, and pharmaceutical factories. Many researchers have developed and designed AGVs to suit their applications, which are typically related to the major problems encountered in a factory working space.
There are two main types of navigation methods for AGVs: guidance with lines and guidance without lines. In the first type of system, most AGVs track buried cables or guide paths painted on the floor. One of the most popular forms of navigation is based on the use of a magnetic tape as a guide path, where the AGV is fitted with an appropriate guide sensor so it can follow the path of the tape [1]. However, it is not easy to change the layout of these guide paths, and any break in the wires makes it impossible to detect the route. The other form of AGV navigation system uses no paths or guidelines on the floor but is instead based on laser targets and inertial navigation [2, 3]. Vehicles navigate using a laser scanner, which measures the angles and distances to reflectors mounted on the walls and machines. The use of lasers provides maximum flexibility to make easy guidance path changes, but this type of system is expensive to construct.
Image processing techniques have also been of interest for detecting and recognizing guide paths during the development of AGVs [4-10]. In many studies related to vision-based orientation, various methods have been introduced for use in navigation systems, e.g., a feedback optimal controller [11], a fuzzy logic controller [12], neural networks, and interactive learning navigation. The use of vision systems has a significant effect on the development of systems that are flexible and efficient, with low maintenance costs, especially for mobile equipment. AGV systems that utilize a vision-based sensor and computer image processing for navigation can improve the flexibility of the system because more environmental information can be observed, which makes the system robust [13-15].
In this study, we implemented a vision sensor-based driving algorithm to allow AGVs to follow a guide path where markers were installed on the floor, which facilitated high reliability and rapid execution. This method only requires minor modifications to make adjustments for use with any layout platform. Using two cameras, we can reduce the position errors and allow the AGV to change the driving algorithm smoothly while following the desired path. A camera sensor on the top of the AGV can be used to identify the next distant navigation marker. This camera is attached to the vehicle so it subtends an angle α with the floor. The other camera, which is at half the distance from the top of the robot to the floor, is positioned perpendicular to the floor and is used to detect the nearest marker. These two cameras also provide angle and distance information between the center of the robot and the center of the marker, which is very helpful for allowing the AGV to track the guide path robustly. These vision sensors can also be used to detect other AGVs, thereby preventing AGVs from colliding.
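Under a simple pinhole-camera assumption, the perpendicular camera's pixel measurements map directly to floor distances and bearings. The sketch below illustrates this geometry; the focal length and mounting height are illustrative parameters, not values reported in the paper:

```python
import math

def floor_offset(pixel_offset, focal_px, cam_height):
    """Floor-plane offset (m) of a point seen by the downward-facing camera.

    Pinhole model: a point `pixel_offset` pixels from the image centre,
    viewed by a camera `cam_height` metres above the floor and pointing
    straight down, lies cam_height * pixel_offset / focal_px metres from
    the point directly below the lens.
    """
    return cam_height * pixel_offset / focal_px

def marker_bearing(dx_px, dy_px):
    """Bearing (degrees) from the robot's heading to the marker centre,
    given the marker's pixel displacement (dx right, dy forward)."""
    return math.degrees(math.atan2(dx_px, dy_px))
```

For example, with an assumed focal length of 500 px and a mounting height of 0.5 m, a marker 100 px off-centre lies about 0.1 m from the point below the camera.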
The paper is organized as follows. Section 2 presents the AGV platform configuration. The marker recognition algorithm is described in Section 3. Section 4 explains the driving algorithm for tracking the desired path of an AGV. The experimental results using the two-wheeled mobile robot Stella B2 are given in Section 5, which are followed by the conclusions in Section 6.
2. AGV Platform Configuration
This section describes the configuration of the AGV in detail, where a two-wheeled mobile Stella B2 robot is used as the platform, with two universal serial bus (USB) cameras, vision and movement control software, and a laptop computer.
The Stella B2 robot (NTREX, Incheon, Korea) is a commercial mobile robot system, which comprises a frame with three plate layers, two assembled nonholonomic wheels powered by two DC motors attached to encoders, a motor driver, one caster wheel, a power module, and a battery (12 V, 7 A). A Lenovo ThinkPad laptop computer (1.66 GHz, 1 GB of RAM; Lenovo, Morrisville, NC, USA) is used as the controller device. A Microsoft Foundation Classes (MFC) control application was built for image processing and movement control. The computer and mobile robot communicate via a USB communication (COM) port.
Figure 1. Automatic guided vehicle (AGV) navigation control system.
Two USB cameras are installed on the second and third plate layers. The camera on the second plate layer subtends an angle of 90° with the floor. The second camera is placed on top of the third plate layer so the angle subtended between its view and the floor is α.
An overview of the AGV navigation control system is shown in Figure 1 . The laptop computer obtains environmental information from the front of the robot using two USB cameras. The scenarios analyzed by the image processing algorithm are used as inputs by the driving navigation module so it can set a suitable velocity based on the current AGV environment.
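The laptop transmits the velocities chosen by the navigation module to the motor controller over the USB COM port. As a hedged illustration, the sketch below packs a wheel-velocity pair into a checksummed ASCII frame; the frame layout (start byte, comma-separated values, XOR checksum) is hypothetical, since the Stella B2's actual serial protocol is vendor-defined:

```python
def encode_velocity_command(v_left, v_right):
    """Pack a wheel-velocity command into a framed ASCII message.

    Hypothetical frame: '$' start byte, body 'V,<left>,<right>', then
    '*' and a two-digit hex XOR checksum of the body, newline-terminated.
    """
    body = f"V,{int(v_left)},{int(v_right)}"
    checksum = 0
    for byte in body.encode("ascii"):
        checksum ^= byte  # XOR all body bytes
    return f"${body}*{checksum:02X}\n"
```

A framing-plus-checksum scheme like this lets the receiver discard messages corrupted on the serial link rather than acting on them.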
3. Marker Recognition Algorithm
We use markers to provide directional information to the AGV, instead of using the line tracking method. The markers are detected and recognized according to the following steps [16 - 19] .
  • 1) Learn the marker images and classify them using a support vector machine (SVM).
  • 2) Acquire images from the two cameras: camera 1 for a bird’s-eye view and camera 2 for a perpendicular view.
  • 3) Calibrate the images based on HSV (hue, saturation, and value) color.
  • 4) Reduce noise with median filters.
  • 5) Convert the images to binary images.
  • 6) Extract the markers using the histogram of oriented gradients (HOG) algorithm.
Figure 2. Flow chart of the vision-based marker recognition algorithm.
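Steps 4 and 5 of this pipeline, median-filter noise reduction and binarization, can be sketched in plain Python. The threshold value and the 3×3 window size are illustrative choices; the HSV calibration and HOG/SVM stages are omitted:

```python
def binarize(gray, threshold=128):
    """Step 5: threshold a greyscale image (list of row lists) to 0/1."""
    return [[1 if px >= threshold else 0 for px in row] for row in gray]

def median3x3(img):
    """Step 4: 3x3 median filter with edge replication, a stand-in for
    the median-filter noise reduction used in the pipeline."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Gather the 3x3 neighbourhood, clamping at the borders.
            window = [img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = sorted(window)[4]  # middle of 9 values
    return out
```

A single bright noise pixel survives thresholding but is removed by the median filter, which is why the filter precedes (or accompanies) binarization in the pipeline.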
Using the marker recognition algorithm shown in Figure 2, the markers are detected first, and the directional and yaw angle information obtained from the detected markers is provided to the AGV. This information is used for AGV path tracking control.
4. Driving Algorithm for Tracking the Desired Path
In this section, we introduce the AGV driving algorithm based on vision sensors, including the movement patterns and the path following algorithm that allows an AGV to track the guide path correctly and robustly while avoiding collisions.
- 4.1 Movement Patterns
We use five main movement patterns to ensure the smooth performance of the AGV: starting, moving straight, pre-turning, left/right turning, and stopping. The movement pattern changes according to the navigation marker detected by the AGV.
- 4.1.1 Starting
The AGV does not move forward immediately at high speed; instead, it starts gradually and increases its speed until it reaches the desired velocity. This start process protects the two motors from damage and also reduces the vibration caused by sudden changes in velocity.
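The gradual start can be sketched as a simple velocity ramp. The step size is an illustrative acceleration value, not one specified in the paper:

```python
def speed_ramp(target, step, v0=0.0):
    """Yield a velocity profile rising from v0 to `target` in increments
    of `step` (must be positive) per control tick, modelling the gradual
    start that protects the motors from a sudden velocity jump."""
    v = v0
    while v < target:
        v = min(v + step, target)  # clamp so we never overshoot the target
        yield v
```

Feeding each yielded value to the motor controller once per tick produces a linear ramp; a smoother (e.g., S-curve) profile would reduce jerk further at the cost of a slightly longer start.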
- 4.1.2 Moving straight
The two AGV wheels move forward at the same speed. When the camera on top of the AGV detects the next straight navigation marker, it increases its speed to the next speed level. If the next marker on the floor is still straight ahead, the AGV will not change its speed. While the AGV moves straight ahead it also uses the path following algorithm to ensure that it remains in the center of the guide path. This algorithm will be described in more detail in the next section.
- 4.1.3 Pre-turning
The turning marker on the floor is detected by the camera on the top of the AGV. Immediately after its detection, the AGV reduces its speed to a lower rate but keeps moving forward until the lower camera detects the turning marker. At this point, the AGV automatically changes to the turning pattern. In this movement pattern, the path following algorithm still allows the AGV to keep its exact position on the guide path.
- 4.1.4 Left/right turning
The AGV reduces its speed continuously and it stops when the center of the AGV is close to the center of the turning marker, before it makes a turn of 90° from its current position (by maintaining the speed of the two wheels as equal but moving them in opposite directions).
- 4.1.5 Stopping
Immediately after the AGV detects a stop marker, the overall velocity of the vehicle is reduced so it is equal to zero when the AGV reaches the stop sign. The path following algorithm is also used to drive the AGVs to the correct marker position.
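The pattern switching in Section 4.1 can be viewed as a small state machine keyed by which camera sees which marker. The sketch below is an illustrative reconstruction: the marker names, the `None` observation (pivot complete), and the transition table are assumptions, not code from the paper:

```python
# Transition table: (current pattern, observation) -> next pattern, where an
# observation is (which camera saw the marker, marker type) or None when the
# 90-degree pivot has completed.
TRANSITIONS = {
    ("starting", ("upper", "straight")): "moving_straight",
    ("moving_straight", ("upper", "turn")): "pre_turning",
    ("pre_turning", ("lower", "turn")): "turning",
    ("turning", None): "moving_straight",
    ("moving_straight", ("upper", "stop")): "stopping",
}

def next_pattern(current, observation):
    """Return the next movement pattern; stay in the current one if no rule fires."""
    return TRANSITIONS.get((current, observation), current)
```

Note how the pre-turning rule fires only on the upper camera, while the switch to turning waits for the lower camera, matching the two-camera handover described above.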
- 4.2 Path Following
The path following algorithm uses the information the AGV obtains from the vision sensors. The most important information required by the algorithm is the angle between the center of the AGV and the center of the navigation marker on the floor. The AGV analyzes this angle and gives the two motors an appropriate velocity so it can follow the guide path smoothly and robustly. The upper and lower cameras both feed this algorithm to ensure more precise path following.
Figure 3. Angle intervals between the automatic guided vehicle and the navigation marker.
We use nine angle intervals, which correspond to nine velocity levels: four intervals for turning left, four for turning right, and one for moving straight, which together allow the AGV to track the guide path adequately. The AGV velocity is selected based on the angle between the center of the marker and the center view of the AGV. If the navigation marker is on the center view of the camera, or very close to it within interval A0, the AGV moves straight ahead. However, if the angle between the marker and the camera view center is not in the A0 interval, the velocities of the two wheels change to turn the AGV back onto the correct track. From angle interval A0 to A4 (or A’0 to A’4), the angle between the AGV and the marker becomes progressively larger, as shown in Figure 3.
Let us assume that the AGV is in a position that subtends an angle β with the marker in interval A’4, as shown in Figure 4. The AGV thus needs to turn right to move back onto the guide path. The AGV does not turn back to the track abruptly; instead, it turns right gradually so that angle β comes to lie in interval A’3. When the AGV is in A’3, the velocity difference between the left and right wheels is reduced slightly to keep the movement smooth. Using the same scheme, the AGV continues turning right through A’2 and A’1 until it reaches interval A0, after which it moves straight ahead. When angle β is on the other side, from A4 to A1, the same algorithm applies, except that the AGV executes a left turn.
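The interval-to-velocity mapping can be sketched as follows. The interval edges (2, 6, 12, and 20 degrees) and the speed gain are illustrative assumptions; the paper defines nine intervals A4..A0..A'4 but does not report their numeric bounds:

```python
def wheel_speeds(beta_deg, base=0.3):
    """Map the marker angle beta (degrees, positive = marker to the left)
    to a (v_left, v_right) pair in m/s.

    Interval edges and the 0.05 m/s-per-level gain are illustrative; a
    larger interval index produces a harder differential turn.
    """
    edges = [2.0, 6.0, 12.0, 20.0]  # assumed A0 edge, then A1..A4 (symmetric)
    level = sum(1 for e in edges if abs(beta_deg) > e)  # interval index 0..4
    delta = 0.05 * level  # wheel-speed difference grows with the interval
    if beta_deg > 0:      # marker left of centre: slow the left wheel
        return base - delta, base + delta
    return base + delta, base - delta
```

Because the speed difference shrinks as β moves through A'3, A'2, and A'1 toward A0, the correction is progressively gentler, which is what makes the return to the track smooth rather than abrupt.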
Figure 4. Path following by an automatic guided vehicle.
- 4.3 AGV Collision Avoidance
Factories often use several AGVs to increase performance. Each AGV works in a different area, but two AGVs or more may work in one area occasionally. To avoid AGV collisions, we constructed a collision avoidance algorithm to protect the AGVs. Based on the information obtained from the two cameras, the AGV can detect whether another AGV is close by, so it can stop moving until the other AGV moves out of the way.
5. Experimental Results
This section presents our experimental results using a two-wheeled Stella B2 mobile robot. Each experiment was conducted in a controlled environment, including the specification of the layout design and the lighting source. The system captured images directly from the two USB cameras in real time, processed the images with the MFC program, analyzed the information using the proposed control algorithms, and sent the results to the mobile robot motor controller via the serial interface of the laptop computer. The overall control system comprised the navigation marker, the AGV detection system, and the driving algorithm. In this experimental study, the control system was evaluated separately according to each of the following criteria: 1) a marker detection experiment, 2) an AGV detection experiment, and 3) a path following experiment.
Figure 5. Navigation path on the floor.
Figure 6. The navigation marker detection result.
- 5.1 Navigation Marker Detection Algorithm Experimental Results
We tested the capacity for autonomous navigation using navigation markers installed on the floor, where the navigation path was set up as shown in Figure 5.
Figure 6 shows a sample image result, which was acquired after the marker detection process. In this image, the marker on the floor is straight ahead, while the negative sign of the angle value indicates that the AGV is deviating to the right from the center of the marker.
The AGV moved from the start position, detecting the sign markers using the driving algorithm, until it reached the final target.
- 5.2 AGV Detection Algorithm Experimental Results
In this experiment, the effectiveness of the proposed AGV detection algorithm was tested and evaluated using two Stella B2 mobile robots, which moved in opposite directions. When the two mobile robots were sufficiently close to detect each other, they stopped to avoid a collision. Figure 7 shows the experimental results with the AGV detection algorithm.
Figure 7. Automatic guided vehicle collision avoidance experiment.
- 5.3 Path Following Algorithm Experimental Results
The path following algorithm was constructed to ensure that the AGV would move exactly toward the center of the navigation marker or return to the correct guide path if the AGV was not in the center of the navigation marker. In this experiment, a Stella B2 mobile robot was placed on the floor so it subtended an angle β relative to the center of the navigation marker. The results of this experiment showed that the proposed algorithm could track the guide path smoothly and precisely. Figure 8 shows actual images of the Stella B2 mobile robot while it was following the desired path.
We produced a graph of the path tracking process by logging the angle data while the AGV was running. Figure 9 shows the path tracking control results for the AGV. We compared the path tracking performance of a controller using the proposed driving algorithm with that of a controller without it. Figure 9 shows that the proposed driving algorithm converged on the desired path within a 10% tracking error.
6. Conclusion
The experimental results showed that the vision sensor-based driving algorithm for AGVs was implemented successfully in a real path guidance system platform in a laboratory environment.
Figure 8. Path following algorithm experiment using a Stella B2 mobile robot.
The vision-based marker following method for AGVs worked perfectly using two low-cost USB cameras. The combination of information from the two cameras allowed the AGV to operate highly efficiently and smoothly. The AGV followed the guide path correctly and moved as close as possible to the center of the navigation marker using the vision sensors. The AGVs could also avoid collisions when more than one AGV worked in the same space. This control system does not require the destination target to be programmed because it depends entirely on signs placed on the floor. Thus, AGVs can operate flexibly with any layout.
Figure 9. Automatic guided vehicle path tracking using the path following algorithm; sample time: sec.
Lee Y. J. , Ryoo Y. J. 2011 “Navigation of unmanned vehicle using relative localization and magnetic guidance” Journal of Korean Institute of Intelligent Systems 21 (4) 430 - 435    DOI : 10.5391/JKIIS.2011.21.4.430
Alenya G. , Escoda J. , Martinez A. B. , Torras C. 2005 “Using laser and vision to locate a robot in an industrial environment: a practical experience” in Proceedings of the 2005 IEEE International Conference on Robotics and Automation Barcelona 3528 - 3533    DOI : 10.1109/ROBOT.2005.1570656
Lee M. , Han J. , Jang C. , Sunwoo M. 2013 “Information fusion of cameras and laser radars for perception systems of autonomous vehicles” Journal of Korean Institute of Intelligent Systems 23 (1) 35 - 45    DOI : 10.5391/JKIIS.2013.23.1.35
Desouza G. N. , Kak A. C. 2002 “Vision for mobile robot navigation: a survey” IEEE Transactions on Pattern Analysis and Machine Intelligence 24 (2) 237 - 267    DOI : 10.1109/34.982903
Badal S. , Ravela S. , Draper B. , Hanson A. 1994 “A practical obstacle detection and avoidance system” in Proceedings of the 2nd IEEE Workshop on Applications of Computer Vision Sarasota, FL 97 - 104    DOI : 10.1109/ACV.1994.341294
Christensen H. I. , Kirkeby N. O. , Kristensen S. , Knudsen L. , Granum E. 1994 “Model-driven vision for in-door navigation” Robotics and Autonomous Systems 12 (2-3) 199 - 207    DOI : 10.1016/0921-8890(94)90026-4
Dao N. X. , You B. J. , Oh S. R. 2005 “Visual navigation for indoor mobile robots using a single camera” in Proceedings of 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems Edmonton, AB 1992 - 1997    DOI : 10.1109/IROS.2005.1545494
Park J. , Rasheed W. , Beak J. 2008 “Robot navigation using camera by identifying arrow signs” in Proceedings of the 3rd International Conference on Grid and Pervasive Computing Workshops Kunming 382 - 386    DOI : 10.1109/GPC.WORKSHOPS.2008.41
Hu H. , Gu D. 2000 “Landmark-based navigation of industrial mobile robots” International Journal of Industrial Robot 27 (6) 458 - 467    DOI : 10.1108/01439910010378879
Chao M. T. , Braunl T. , Zaknich A. 1999 “Visually-guided obstacle avoidance” in Proceedings of the 6th International Conference on Neural Information Processing Perth, WA 650 - 655    DOI : 10.1109/ICONIP.1999.845672
Lee J. W. , Choi S. U. , Lee C. H. , Lee Y. J. , Lee K. S. 2001 “A study for AGV steering control and identification using vision system” in Proceedings of 2001 IEEE International Symposium on Industrial Electronics Pusan 1575 - 1578    DOI : 10.1109/ISIE.2001.931941
Cho J. T. , Nam B. H. 2000 “A study on the fuzzy control navigation and the obstacle avoidance of mobile robot using camera” in Proceedings of 2000 IEEE International Conference on Systems, Man, and Cybernetics Nashville, TN 2993 - 2997    DOI : 10.1109/ICSMC.2000.884456
Jin T. S. , Morioka K. , Hashimoto H. 2004 “Appearance based object identification for mobile robot localization in intelligent space with distributed vision sensors” International Journal of Fuzzy Logic and Intelligent Systems 4 (2) 165 - 171    DOI : 10.5391/IJFIS.2004.4.2.165
Lee W. H. , Lee H. W. , Kim S. H. , Jung J. Y. , Roh T. J. 2004 “Moving path following and high speed precision control of autonomous mobile robot using fuzzy” Journal of Korean Institute of Intelligent Systems 14 (7) 907 - 913    DOI : 10.5391/JKIIS.2004.14.7.907
Min D. H. , Jung K. W. , Kwon K. Y. , Park J. Y. 2011 “Application of recent approximate dynamic programming methods for navigation problems” Journal of Korean Institute of Intelligent Systems 21 (6) 737 - 742    DOI : 10.5391/JKIIS.2011.21.6.737
Chapman D. 1998 Teach Yourself Visual C++ 6 in 21 Days Sams Indianapolis
Gonzalez R. C. , Woods R. E. 2007 Digital Image Processing 3rd ed. Prentice Hall Harlow
Szeliski R. 2010 Computer Vision: Algorithms and Applications Springer New York
Baxes G. A. 1994 Digital Image Processing: Principles and Applications 1st ed. Wiley New York