Multi-robot Mapping Using Omnidirectional-Vision SLAM Based on Fisheye Images
ETRI Journal, vol. 36, no. 6, Oct. 2014, pp. 913-923
Copyright © 2014, Electronics and Telecommunications Research Institute (ETRI)
  • Received : May 14, 2014
  • Accepted : October 02, 2014
  • Published : October 01, 2014
Yun-Won Choi, Kee-Koo Kwon, Soo-In Lee, Jeong-Won Choi, and Suk-Gyu Lee

Abstract
This paper proposes a global mapping algorithm for multiple robots based on an omnidirectional-vision simultaneous localization and mapping (SLAM) approach, which extracts objects using Lucas–Kanade optical flow motion detection from images obtained through fisheye lenses mounted on the robots. The multi-robot mapping algorithm draws a global map by using map data obtained from all of the individual robots. Global mapping is time-consuming because map data must be exchanged among the individual robots while all areas are searched. An omnidirectional image sensor has many advantages for object detection and mapping because it can measure all information around a robot simultaneously. The computational load of the correction algorithm is reduced compared with existing methods because only the feature points of objects are corrected. The proposed algorithm has two steps: first, a local map is created based on an omnidirectional-vision SLAM approach for each individual robot. Second, a global map is generated by merging the individual maps from multiple robots. The reliability of the proposed mapping algorithm is verified through a comparison of maps based on the proposed algorithm and real maps.
I. Introduction
With rapid advances in robotics, high-performance robots of various types have been used in diverse fields. In general, a single robot has practical limitations because it requires high-performance hardware and software to perform a task by itself. As a result, a single robot is ill-suited to tasks such as exploration, object transport, and cleaning. In such tasks, using multiple robots that are simple and inexpensive can be better than using a single expensive high-performance robot. Multiple robots can perform a given task efficiently through cooperation, such as exchanging information with one another. Multiple robots require various algorithms for avoiding collisions with one another or with obstacles and for facilitating formation, cooperation, and global mapping, among others [1]–[8]. Previous studies have covered such algorithms well.
A multi-robot mapping algorithm creates a global map by using the local maps of individual robots while they maintain their formation. Formation control refers to the technology that enables a group of robots to control the shape of their own formation according to the surrounding environment so as to complete their mission. A large number of robots change their formation to complete tasks according to their purpose and environment based on multiple maps that are generated simultaneously. Studies have examined formation and mapping by controlling the position of each robot based on its location on the global map and by controlling the formation through the sharing of information on the global location of each robot [9]–[11]. Few studies on formation control and mapping have proposed methods for creating global maps using information on robots and their environments obtained through cameras attached to the robots. Previous camera-based studies have employed various algorithms. Some studies have detected and controlled the formation of robots using their positions on a global map based on barcode information attached to the side of each robot or on a cylindrical target on top of the robot. Others have controlled robots using omnidirectional cameras by estimating the global positions based on unmanned aerial vehicle camera data, as well as controlling the robot formation by estimating the object information based on omnidirectional images reflected in a mirror [12], [13].
Existing omnidirectional-vision simultaneous localization and mapping (SLAM) approaches create maps by estimating object locations based on images reflected in paraboloidal mirrors or by using omnidirectional information based on images from fisheye cameras installed facing the ceiling [14]–[18]. However, such methods are sensitive to lighting conditions and are limited in that some regions remain hidden from the cameras. In this paper, to minimize the negative effect of lighting, a fisheye camera facing downward is mounted at the top of a glass tube on each robot. This camera system is used to obtain information around all robots simultaneously. Existing omnidirectional-vision SLAM approaches correct local positions by estimating object locations based on corrected fisheye images. This paper extends these approaches with an object detection method based on uncorrected fisheye images.
This paper proposes a fusion algorithm for the global mapping of multiple robots that keep formation. In formation control, it is important to identify surrounding robots and to maintain the formation flexibly in accordance with the surrounding environment. An omnidirectional image obtained by using a fisheye camera is ideal for instantaneously estimating the location of other robots. Existing methods, in general, make use of forward cameras with a field of view of 60 degrees, which means that any robot using such a camera may need to move at least six times before it is able to fully recognize the surrounding environment [19]. In contrast, the proposed method using omnidirectional images can complete the same mission in an instant. The proposed algorithm uses local mapping based on a novel omnidirectional-vision SLAM approach for each robot and global mapping based on the formation and motion of the multiple robots. The novel omnidirectional-vision SLAM approach removes the ground region based on a histogram of the image obtained through a downward-facing fisheye camera and labels the remaining objects (robots and obstacles). Motion vectors are obtained using the Lucas–Kanade optical flow (LKOF) method based on the corner points of an extracted object, and the coordinates of the motion vectors are then corrected using a forward mapping [20]. Local maps are created by connecting the feature points (the outer points of the motion vectors) of the same label based on the characteristics of the corrected motion vectors. Robot locations are detected by identifying objects with the same movement speed on the local map of each robot, and global maps are created while the robots maintain their formation. The proposed algorithm is applied to real robots, and the performance of the proposed mapping algorithm is verified by comparing real data on the robot positions with the position data generated by the global maps from the proposed algorithm.
II. System Description
- 1. Overall System
The proposed algorithm processes data from downward-facing fisheye cameras and uses the data to create local and global maps of multiple robots while each robot with a local map maintains its formation. The proposed algorithm has two parts: local mapping for individual robots, and global mapping and formation control for robots with local maps. The local mapping of an individual robot produces a histogram of the ground information, which is found by analyzing the environmental and ground information obtained through the fisheye lens mounted on that robot. It then clusters the information using a labeling algorithm and corrects the coordinates of motion vectors (obtained through feature points at the corners of fisheye images) using forward mapping. It creates local maps (based on the shapes of the objects that disturb the path of the robot) from the collected motion vector information and the cluster information. Formation control and global mapping distinguish between obstacles and robots based on the motion vectors contained in the local maps and process the given tasks while maintaining the formation of the robots. The locations of the robots are estimated based on a global map generated by merging the local maps.
- 2. Modeling
The robot system used in this paper is modeled as shown in Fig. 1 and by the state vector in (1), where x, y, and θ in Fig. 1 indicate the coordinates and heading angle of the robot, L is the distance between its wheels, vr and vl indicate the speeds of the wheels, and v is the speed of the robot. A real robot is built using this model, as shown in Fig. 2(a). The robot has two wheels and a cylindrical body (15 cm in diameter and 30 cm in height). A glass tube is installed on the top center of the robot, and a fisheye camera is installed at the top of the glass tube. This allows environmental images around the robot to be obtained. Figure 2(b) shows a fisheye image (1,280 × 960 (width × height), with a viewing angle of 185°). All environmental data around the robot can be captured simultaneously because of this viewing angle.
$$\mathbf{x}(k+1)=\begin{bmatrix} x(k+1)\\ y(k+1)\\ \theta(k+1)\end{bmatrix}=\begin{bmatrix} x(k)+v(k)\,\Delta t\cos\theta(k)\\ y(k)+v(k)\,\Delta t\sin\theta(k)\\ \theta(k)+\Delta t\,\dfrac{v_r-v_l}{L}\end{bmatrix}.\tag{1}$$
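As an illustration, the state update in (1) can be written as a short Python function. This is a minimal sketch: the assumption that v(k) is the mean of the two wheel speeds (not stated explicitly in the text) and the numeric values in the example are ours.

import numpy as np

def update_pose(x, y, theta, v_r, v_l, L, dt):
    # One step of the differential-drive model in (1).
    # v_r, v_l: wheel speeds, L: distance between the wheels, dt: time step.
    v = 0.5 * (v_r + v_l)                 # assumed: robot speed is the mean wheel speed
    x_new = x + v * dt * np.cos(theta)
    y_new = y + v * dt * np.sin(theta)
    theta_new = theta + dt * (v_r - v_l) / L
    return x_new, y_new, theta_new

# Example: a gentle left arc driven for 1 s in 0.1 s steps (values are illustrative).
pose = (0.0, 0.0, 0.0)
for _ in range(10):
    pose = update_pose(*pose, v_r=0.12, v_l=0.10, L=0.15, dt=0.1)
print(pose)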
Fig. 1. Robot modeling.
Fig. 2. (a) Robot system and (b) a fisheye image.
III. Mapping Algorithm for Individual Robot
- 1. Sequence of Local Mapping
The local mapping algorithm, used by each robot and shown in Fig. 3 , produces a histogram showing environmental and ground information, which is obtained from images collected through fisheye cameras attached to the robots. It first removes unnecessary ground information using a histogram obtained from an analysis of the fisheye images, and then it labels candidate objects for clustering. It obtains motion vectors using the LKOF method based on corners as feature points in fisheye images and removes unnecessary vectors by using the magnitude and angle distribution of the motion vectors. The start and end points of available motion vectors are corrected using a correction algorithm based on a hemispherical projection model. The local mapping requires two sets of data: a dataset of the outer points of objects connected using a convex hull and one of clusters based on the labeling of objects.
Fig. 3. Overall flow of local mapping algorithm.
- 2. Clustering Using the Labeling and Connection Algorithm
Previous studies have used k-means, k-nearest neighbors, artificial neural networks, and Bayesian classifiers, among others, to extract feature points for clustering purposes. However, if the initial number of objects or groups differs from the current number, then grouping fails. In this paper, ground information is removed using a histogram obtained from the fisheye images, and the remaining information is then labeled for clustering, as sketched below. The labeling results obtained from the image that remains after the ground information has been removed are shown in Fig. 4. As shown in Fig. 5, the labeling is then refined using a connection algorithm that merges unclear labeling regions: Algorithm 1 determines the size of the enlarged object by enclosing overlapping or closely located objects. Algorithm 1 summarizes this connection algorithm, which can detect objects in both static and dynamic environments. It also improves the processing speed through a simple subtraction operation while retaining detection reliability.
Fig. 4. Labeling before application of connection algorithm.
Fig. 5. Position of obstacle candidates using labeling.
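The ground-removal and labeling step described above could look roughly like the following Python/OpenCV sketch. The assumption that the floor corresponds to the dominant bin of the intensity histogram, the tolerance band of 25 gray levels, and the function name are illustrative choices rather than the paper's exact procedure.

import cv2
import numpy as np

def remove_ground_and_label(gray_fisheye, band=25):
    # Remove the floor and label the remaining regions as object candidates.
    # Assumes an 8-bit grayscale fisheye image whose dominant intensity mode is the floor.
    hist = cv2.calcHist([gray_fisheye], [0], None, [256], [0, 256]).ravel()
    ground_level = int(np.argmax(hist))                      # dominant (floor) intensity
    is_ground = np.abs(gray_fisheye.astype(np.int32) - ground_level) <= band
    candidates = np.where(is_ground, 0, 1).astype(np.uint8)  # keep only non-ground pixels
    # Connected-component labeling of the object candidates.
    n_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(candidates)
    return labels, stats[1:], centroids[1:]                  # drop the background component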
Algorithm 1. Region connection.
Input: corner lists of the object regions (rect_n, x1:x4, y1:y4)
Output: corners of the connected regions (Rect_n, (x, y, width, height))
1.  // Similar-region connection algorithm
2.  for n < blobs do
3.      if rect_n overlaps rect_(n+1) then
4.          Rect_n.x = min(rect_(n:n+1), x1:x4);
5.          Rect_n.y = min(rect_(n:n+1), y1:y4);
6.          Rect_n.width  = max(rect_(n:n+1), x1:x4) - min(rect_(n:n+1), x1:x4);
7.          Rect_n.height = max(rect_(n:n+1), y1:y4) - min(rect_(n:n+1), y1:y4);
8.      else if rect_n does not overlap rect_(n+1) then
9.          Rect_n.x = min(rect_n, x1:x4);
10.         Rect_n.y = min(rect_n, y1:y4);
11.         Rect_n.width  = max(rect_n, x1:x4) - min(rect_n, x1:x4);
12.         Rect_n.height = max(rect_n, y1:y4) - min(rect_n, y1:y4);
13.     else
14.         advance n
15.     end if
16. end for
17. while changedRegionCounts > 0 do
18.     override overlapping labels
19. end while
20. while overriddenLabelCounts > 0 do
21.     sort regions
22. end while
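Below is a compact Python sketch of the region-connection idea in Algorithm 1: overlapping candidate rectangles are repeatedly merged into one enclosing rectangle. The overlap test is our reading of the condition whose operator was lost in the listing, and the function names are illustrative.

def rects_overlap(a, b):
    # Axis-aligned overlap test for (x, y, w, h) rectangles.
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def connect_regions(rects):
    # Repeatedly merge overlapping candidate rectangles into enclosing rectangles,
    # as in the similar-region connection of Algorithm 1.
    merged = list(rects)
    changed = True
    while changed:
        changed = False
        for i in range(len(merged)):
            for j in range(i + 1, len(merged)):
                if rects_overlap(merged[i], merged[j]):
                    x1 = min(merged[i][0], merged[j][0])
                    y1 = min(merged[i][1], merged[j][1])
                    x2 = max(merged[i][0] + merged[i][2], merged[j][0] + merged[j][2])
                    y2 = max(merged[i][1] + merged[i][3], merged[j][1] + merged[j][3])
                    merged[j] = (x1, y1, x2 - x1, y2 - y1)
                    del merged[i]
                    changed = True
                    break
            if changed:
                break
    return merged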
- 3. Extraction of Feature Points Using LKOF Method
A mono-vision or ceiling-vision SLAM approach, using images obtained from a single camera, estimates the current location of a robot based on the coordinates of the feature points obtained by comparing the previous frame with the current one [21]–[26]. Such approaches must compare feature corners and boundaries without any other information, which makes it difficult to determine correspondences. In this paper, the relationship between the feature points is determined using the optical flow between the previous and current frames, and object information is estimated using the resulting motion vectors. The basic optical flow method is a motion-estimation technique based on the differences between two consecutive frames. It assumes that the brightness of a pixel does not change between frames and that spatially adjacent points have the same movement. Accordingly, the optical flow method represents motion vectors for the direction and speed of the movement between two consecutive frames and must obtain motion vectors in regions with distinct characteristics by extracting edges or boundaries. In this regard, feature points are extracted in this paper using the Harris corner detection method on the fisheye images [14]. That is, we have
$$C(x,y)=\begin{bmatrix}\sum_{W}\bigl(I_x(x_i,y_i)\bigr)^2 & \sum_{W}I_x(x_i,y_i)\,I_y(x_i,y_i)\\ \sum_{W}I_x(x_i,y_i)\,I_y(x_i,y_i) & \sum_{W}\bigl(I_y(x_i,y_i)\bigr)^2\end{bmatrix},$$
$$O(x,y)=\det\bigl(C(x,y)\bigr)-k\bigl[\operatorname{trace}\bigl(C(x,y)\bigr)\bigr]^2,$$
$$HCM(x,y)=\begin{cases}1 & \text{if } O(x,y)>0,\\ 0 & \text{otherwise},\end{cases}\tag{2}$$
where C(x, y) indicates the change in pixel brightness calculated over the Gaussian window W from the directional brightness derivatives I_x and I_y, and O(x, y) is the corner intensity obtained from this change in brightness. If the intensity of a corner exceeds the value of the constant m (0.3), then a binary image, HCM(x, y), can be obtained. In particular, the Harris corner detection algorithm uses 0.04 as the value of the constant k. If the algorithm applies the optical flow method using the corners as feature points, then the direction and magnitude of the motion vectors at the starting points, which are the feature points in the previous frame, can be determined. The optical flow between two frames can be expressed by the following equation based on the assumptions described earlier [15]:
$$f(x,y,t)=I(x,y,t)=I(x+dx,\;y+dy,\;t+dt),\tag{3}$$
where dx , dy , and dt indicate changes between two frames. Hence, using these changes for all pixels, we can obtain a matrix whose elements are the velocity components of the x - and y -axis. The least squares method is applied to this matrix to extract Vx and Vy against each axis as follows:
$$A^{\mathrm T}A\,v=A^{\mathrm T}b,\qquad v=\bigl(A^{\mathrm T}A\bigr)^{-1}A^{\mathrm T}b,$$
$$v=\begin{bmatrix}V_x\\ V_y\end{bmatrix}=\begin{bmatrix}\sum I_{x_i}^2 & \sum I_{x_i}I_{y_i}\\ \sum I_{x_i}I_{y_i} & \sum I_{y_i}^2\end{bmatrix}^{-1}\begin{bmatrix}-\sum I_{x_i}I_{t_i}\\ -\sum I_{y_i}I_{t_i}\end{bmatrix},\tag{4}$$
where A and b are matrices that consist of the partial derivatives of the image I with respect to position (x, y) and time t, respectively, and the vector v consists of V_x and V_y. Equation (4) gives the motion vectors of the general optical flow; the LK method adds a Gaussian weighting and limits the number of feature points to reduce the computational load. The velocity components (V_x and V_y) of the LKOF method, weighted by the Gaussian function W with weights ω_i as in (5), are as follows:
$$A^{\mathrm T}WA\,v=A^{\mathrm T}Wb,\qquad v=\bigl(A^{\mathrm T}WA\bigr)^{-1}A^{\mathrm T}Wb,$$
$$\begin{bmatrix}V_x\\ V_y\end{bmatrix}=\begin{bmatrix}\sum \omega_i I_{x_i}^2 & \sum \omega_i I_{x_i}I_{y_i}\\ \sum \omega_i I_{x_i}I_{y_i} & \sum \omega_i I_{y_i}^2\end{bmatrix}^{-1}\begin{bmatrix}-\sum \omega_i I_{x_i}I_{t_i}\\ -\sum \omega_i I_{y_i}I_{t_i}\end{bmatrix}.\tag{5}$$
The results of applying the LKOF method to fisheye images are shown in Fig. 6. The points in Fig. 6 are the starting points (that is, the feature points before the robot begins to move), and the magnitude and direction of the motion vectors are shown by the red arrows. The surrounding robots have small motion vectors relative to each other because they move in a similar manner. However, the obstacles have large motion vectors relative to the moving robots because they are static.
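A hedged OpenCV sketch of the corner-extraction and LKOF-tracking steps is given below. Interpreting m = 0.3 as a fraction of the maximum Harris response, the block and Sobel aperture sizes, the tracking window, the pyramid depth, and the simple magnitude-based outlier rule are all illustrative assumptions rather than the paper's exact parameters.

import cv2
import numpy as np

def harris_feature_points(gray, k=0.04, quality=0.3):
    # Harris corner map HCM as in (2); the threshold keeps responses above a
    # fraction 'quality' of the maximum response (our reading of the constant m = 0.3).
    response = cv2.cornerHarris(np.float32(gray), 3, 3, k)   # blockSize=3, ksize=3
    ys, xs = np.nonzero(response > quality * response.max())
    return np.stack([xs, ys], axis=1).astype(np.float32)

def lkof_motion_vectors(prev_gray, curr_gray):
    # Track Harris corners from the previous frame into the current one with the
    # pyramidal Lucas-Kanade tracker and return (start, end) point pairs.
    p0 = harris_feature_points(prev_gray).reshape(-1, 1, 2)
    p1, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, p0, None,
                                               winSize=(15, 15), maxLevel=2)
    good = status.ravel() == 1
    starts, ends = p0[good].reshape(-1, 2), p1[good].reshape(-1, 2)
    # Discard implausible vectors by magnitude, echoing the filtering step above
    # (the factor of 3 is an illustrative choice, not the paper's rule).
    mag = np.linalg.norm(ends - starts, axis=1)
    keep = mag < 3.0 * np.median(mag) + 1e-6
    return starts[keep], ends[keep]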
Fig. 6. Motion vectors using LKOF method.
- 4. Correction of Locations Based on Semispherical Modeling
Fisheye images are distorted because of the fisheye lens. Previous studies have corrected every pixel of an image. Such calibration discards information in the outer regions, where the distortion is largest, because it corrects only a small area around the center of the fisheye image; moreover, it is slow because it operates on every pixel. In this paper, only the coordinates of the feature points extracted using the LKOF method are corrected, which improves the operating speed regardless of the size of the image.
Correction methods are divided into forward and backward mapping depending on how the original image is read. The forward mapping method determines the coordinates of the target image from those of the original image. The backward mapping method calculates the coordinates of the original image from those of the target image. Forward mapping creates holes where no output coordinate is matched, because the output coordinates are calculated from the original coordinates; the interpolation used to fill these holes blurs parts of the image. Backward mapping does not create holes because the original coordinates are calculated from the output coordinates, which is why it is used in most general correction methods. The proposed algorithm nevertheless uses forward mapping to correct coordinates, for two reasons. First, it does not encounter any hole-related problems because it maps only the coordinates of objects (robots, obstacles, and so on). Second, it reduces the processing time, which depends on the access time and size of the input images. Forward mapping can be represented by (6), based on the hemispherical projection model shown in Fig. 7. Here, F is the focal length, (x_d, y_d) is a coordinate before the distortion is corrected, and (x_u, y_u) is the coordinate after the distortion correction.
$$(x_u,\,y_u)=\left(x_d\,\frac{L}{F},\;\; y_d\,\frac{L}{F}\right),\tag{6}$$
$$L=\frac{F^2}{\sqrt{F^2-r_d^{\,2}}}.\tag{7}$$
Equation (6) obtains the corrected coordinate from the uncorrected coordinate, and (7) obtains L, the ratio of the corrected to the uncorrected coordinates. In (7), r_d is the distance from the image center to (x_d, y_d). The corrected motion vectors of objects, obtained using (6), can be seen in Fig. 8.
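The point-wise correction of (6) and (7) might be sketched as follows; subtracting an assumed image center (cx, cy) before applying the mapping and clamping the radius near the image rim are our additions for illustration.

import numpy as np

def correct_points(points, F, cx, cy):
    # Forward mapping of (6)-(7): undistort only the tracked feature points.
    # points: N x 2 pixel coordinates; F: focal length of the hemispherical model;
    # (cx, cy): assumed distortion center (image center).
    xd = points[:, 0] - cx
    yd = points[:, 1] - cy
    rd2 = np.minimum(xd ** 2 + yd ** 2, F ** 2 - 1e-6)  # keep the square root real near the rim
    L = F ** 2 / np.sqrt(F ** 2 - rd2)                  # ratio L from (7)
    xu = xd * L / F                                     # (6)
    yu = yd * L / F
    return np.stack([xu + cx, yu + cy], axis=1)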
Fig. 7. Semispherical modeling.
Fig. 8. Corrected motion vectors of objects.
- 5. Local Mapping Using Corrected Points of an Object
The proposed algorithm identifies misrecognized vectors using the characteristics (magnitude and direction) of the objects that share the same label. It sorts all vectors by their measured angle and filters out any misrecognized vectors in the process. Finally, using the outermost feature points of each object, it applies a convex hull algorithm to extract the outlines of all existing objects [27]–[29].
In this paper, the feature points are labeled using the data that remains after the removal of the ground information. The resulting labeling data is then used by the convex hull algorithm to determine the outer points of the objects. Once the convex hull algorithm is completed, we obtain a map that shows the locations of all the objects, as illustrated in Fig. 9.
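A short sketch of this outline-extraction step is given below, using OpenCV's convex hull on the corrected points of each label; the data layout (an array of points plus a parallel array of labels) and the function name are assumptions made for illustration.

import cv2
import numpy as np

def local_map_outlines(corrected_points, point_labels):
    # Connect the corrected outer points of each labeled object with a convex hull,
    # producing the object outlines that make up the local map.
    outlines = {}
    for label in np.unique(point_labels):
        pts = corrected_points[point_labels == label].astype(np.float32)
        if len(pts) >= 3:                        # a hull needs at least three points
            hull = cv2.convexHull(pts)           # outer boundary of this object
            outlines[int(label)] = hull.reshape(-1, 2)
    return outlines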
Fig. 9. Connection of outer points using a convex hull algorithm.
IV. Global Mapping for Multiple Robots
- 1. Sequence of Global Mapping
The global mapping of multiple robots, shown in Fig. 10 , extracts environmental information on objects using local maps created for individual robots and classifies objects. It estimates the motion of the robots found using the classified objects, and the robots move according to a predetermined formation algorithm. It maintains the formation of the robots by updating the local maps based on the object information from other robots and creates paths and global maps.
Fig. 10. Overall flow of global mapping.
- 2. Object Classification and Estimation Using Vector Information
The proposed algorithm extracts the central point O(x, y) of each object region obtained from the local map using (8) and estimates the relationship between the center of the object and that of the robot. In (8), I(x, y) is the brightness of the (x, y) coordinate, and n_x and n_y are the numbers of bright pixels along each axis. The algorithm distinguishes between obstacles and robots based on the vector and relationship information. Because obstacles are static in comparison to the robots (whose velocity is constant), the motion vectors of the obstacles have a characteristically large magnitude and point in the direction opposite to the motion of the robots. As shown in Fig. 11, the current locations of the robots are determined using information on both the robots and the obstacles, which can be distinguished based on the following characteristic:
$$O_{(x,y)}=\left(\frac{\displaystyle\int_0^{w} x\,dx}{n_x},\;\; \frac{\displaystyle\int_0^{w} y\,dy}{n_y}\right)\quad\text{if } I_{(x,y)}\neq 0.\tag{8}$$
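A hedged sketch of the centroid computation and of the robot/obstacle separation follows. Computing the centroid as the mean of the non-zero pixel coordinates, and classifying a region by comparing its mean motion-vector magnitude with the ego speed under a tolerance tol, are our interpretation of the characteristic described above; none of the names or thresholds come from the paper.

import numpy as np

def object_center(region_mask):
    # Centroid of the bright (non-zero) pixels of one object region, as in (8).
    ys, xs = np.nonzero(region_mask)
    return xs.mean(), ys.mean()

def classify_objects(regions, motion_vectors, robot_speed, tol=0.3):
    # Split detected regions into robots and obstacles: static obstacles appear to move
    # at roughly the ego speed (in the opposite direction), while robots moving with the
    # formation produce much smaller vectors.
    robots, obstacles = [], []
    for region, vecs in zip(regions, motion_vectors):
        mean_mag = np.linalg.norm(vecs, axis=1).mean()
        if mean_mag < (1.0 - tol) * robot_speed:
            robots.append(region)
        else:
            obstacles.append(region)
    return robots, obstacles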
Fig. 11. Robot and object information.
- 3. Formation Control between Multiple Robots
The movement of multiple robots generates motion vectors related to the robot positions. Robots that move at a constant speed produce smaller motion vectors than static obstacles. The proposed algorithm checks the movement of the robots based on this vector information and retains the formation by maintaining the angles and distances between the robots. Because the robots employ an omnidirectional-vision SLAM approach, it is possible to maintain their formation quickly. In addition, formation control is possible through additional connections between the robots because the proposed algorithm can link the robots within the navigation area. As shown in Fig. 12, the proposed system consists of three robots (R1, R2, and R3). If obstacles are present in the path of R1, then R1 changes direction. R2 and R3 detect the change in R1's direction and correspondingly change their directions so as to maintain the group formation. In Fig. 12, d is the distance from a robot's center point to an object's center point, and θ is the angle between the object's center point and the heading angle of the robot. The dotted circles indicate the regions that a robot is able to explore.
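As a rough illustration of keeping the assigned distance d and bearing θ between robots, a simple proportional correction is sketched below; the paper does not give a control law, so the structure, gains, and names here are assumptions.

def formation_correction(d_measured, theta_measured, d_ref, theta_ref,
                         k_d=0.5, k_theta=1.0):
    # Proportional corrections that keep the assigned distance and bearing to a
    # neighboring robot (gains k_d and k_theta are illustrative).
    v_correction = k_d * (d_measured - d_ref)                  # speed up if falling behind
    omega_correction = k_theta * (theta_ref - theta_measured)  # steer back toward the bearing
    return v_correction, omega_correction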
Fig. 12. Multi-robot formation for a global mapping.
- 4. Global Mapping of Multiple Robots
Global mappings identify the positions of robots and obstacles to create global maps using local maps from other robots. As shown in Fig. 13 , a global map is generated using overlay techniques for the same region. If it receives maps from many robots, it can then generate more precise maps.
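A minimal sketch of the overlay-based merging is given below. Representing each local map as a set of obstacle points in the robot frame, the grid size, and the cell resolution are assumptions made for illustration; the idea is simply that reports from several robots reinforce the same global cells.

import numpy as np

def merge_local_maps(local_maps, poses, size=(500, 500), cell=0.02):
    # Overlay local maps into one global grid. local_maps: per-robot lists of (x, y)
    # obstacle points in the robot frame; poses: global (x, y, theta) of each robot.
    global_map = np.zeros(size, dtype=np.uint16)
    for points, (rx, ry, rtheta) in zip(local_maps, poses):
        c, s = np.cos(rtheta), np.sin(rtheta)
        for px, py in points:
            gx = rx + c * px - s * py            # transform the point into the global frame
            gy = ry + s * px + c * py
            col, row = int(gx / cell), int(gy / cell)
            if 0 <= row < size[0] and 0 <= col < size[1]:
                global_map[row, col] += 1        # overlapping reports reinforce a cell
    return global_map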
Fig. 13. Global map.
V. Results
- 1. Experiment in Static Environment
A. Experiment Environment
An experiment was conducted to verify the performance of the proposed algorithm in a static environment. As shown in Fig. 14, the ground surface of the experiment environment is marked with a 5 cm grid, and the obstacle placed on it is smaller than any of the robots. Obstacles are installed to verify the generation of local maps based on the omnidirectional-vision SLAM approach, and the local maps generated by the individual robots are verified. Local and global maps are generated while the robots move in a straight line over a distance of 5 cm.
Fig. 14. Experiment environment in a static state.
B. Experiment Results
The experimental results were obtained in a static environment. The local maps obtained from R1 for formation control are shown in Fig. 15, those obtained from R2 are shown in Fig. 16, and those from R3 are shown in Fig. 17. The top-left images in Figs. 15, 16, and 17 are the labeling results, and the top-right images are the motion vectors obtained from the fisheye images. The bottom-left images are the corrected motion vectors, and the bottom-right images are the local maps of each robot. The global map generated from the local maps of the individual robots is shown in Fig. 18. The real distances between objects are compared with the distances on the global map obtained from the experimental results to verify the accuracy of the proposed algorithm. According to the results, the proposed algorithm has small errors (x-error of 2.23 cm and y-error of 1.49 cm).
Fig. 15. Experiment result of R1.
Fig. 16. Experiment result of R2.
Fig. 17. Experiment result of R3.
Fig. 18. Global map in a static environment.
- 2. Experiment in Dynamic Environment
A. Experiment Environment
The setup for the generation of global maps in a dynamic environment is shown in Fig. 19(a). A color patch was attached to the top of each robot to detect its position. Global maps of the robots and obstacles were obtained using a ceiling camera, and maps based on the proposed algorithm are compared with those obtained from the ceiling camera. Here, it is required that the color patches be easily recognizable and that the floor contain no colors similar to those used for the patches; a space painted with non-glossy paint is therefore used. As shown in Fig. 19(b), the position-estimation program, using global images, utilizes the various types of color patches detected by the ceiling camera. A blue patch is the team color and represents the location of every robot, while an individual color patch identifies each individual robot. The angle between an individual robot's center line, which is the line between the center of its blue patch and that of its unique color patch, and the horizontal axis of the image indicates the robot's heading. The position-estimation program also stores the position of the robot in a text file. The accuracy of this experimental system was verified in previous research, where small errors (4.16 cm and 4.46°) were obtained [30].
Fig. 19. (a) Experimental environment in a dynamic state and (b) position-tracking program using a global vision system.
B. Experiment Results
As shown in Fig. 20 , an experiment was conducted based on the movement of a real robot in a dynamic environment. Path data on the robots were obtained using a position-estimation program through the ceiling camera, and the performance of the proposed algorithm was verified by comparing the path data from the position-estimation program with that from the global maps based on the proposed algorithm. The results of the path comparisons are shown in Fig. 21 , and those of the errors relative to the path of multiple robots are shown in Fig. 22 . The proposed algorithm has relatively small errors ( x = 5.65 cm and y = 4.50 cm) compared with those listed in Table 1 .
Fig. 20. Snapshots in a dynamic environment.
Fig. 21. Comparisons between the paths in the location-tracking system and the paths in the proposed system.
Fig. 22. Error between the paths in the location-tracking system and the paths in the proposed system.
Table 1. Comparison of experiment results for global paths with five other omnidirectional-vision SLAMs.

Method                                                   Distance error (cm)
Method using the proposed algorithm                      7.32
Method using Ladybug camera in 2012 [31]                 8.51
Method using spherical image in 2011 [32]                7.11
Method using upward camera and CHL map in 2007 [33]      19.6
Method using topological navigation in 2007 [34]         11
Method using scan matching in 2006 [35]                  10
VI. Conclusion
This paper proposed a global mapping algorithm for multiple robots based on an omnidirectional-vision SLAM approach using images obtained from fisheye cameras attached to the robots and facing downward. The proposed algorithm extracts feature points from the fisheye images using the LKOF method and corrects their coordinates using a correction algorithm based on a hemispherical projection model. It then creates local maps by connecting the corrected outer coordinates of objects using a convex hull algorithm. The multiple robots create global maps based on the location and vector information of objects obtained from the local maps while maintaining formation control. According to the experimental results, highly accurate maps are obtained from the robots based on the proposed algorithm in both static and dynamic environments. In general, multiple robots must recognize all environmental data by moving extensively; in the proposed system, however, the global mapping process is significantly enhanced by simultaneously making use of all robots' local data in a single movement.
In our future research, we plan to optimize the global mapping algorithm, in particular the way that it corrects the motion vectors. In addition, we are preparing research to improve the performance of the multi-robot system by using a fusion of laser scanner sensors and omnidirectional images. Finally, we will proceed with research on more accurate mappings through the use of a different coordinate correction algorithm.
This work was supported by the Agency Specific Research Program of MSIP, Rep. of Korea (Development of Multi-Sensor Platform Technology for Context Cognitive Smart Car).
BIO
yunwon.choi@etri.re.kr
Yun-Won Choi received his BS degree in electrical engineering from Yeungnam University, Daegu, Rep. of Korea, in 2009 and his MS degree in electronic engineering from Yeungnam University in 2011. He completed a doctoral course in robot engineering from Yeungnam University in 2013. Since 2013, he has been working at the Electronics and Telecommunications Research Institute, Daegu, Rep. of Korea, where he is now a researcher. His main research interests are robotics, omnidirectional image processing, and multi-robot control.
kwonkk@etri.re.kr
Kee-Koo Kwon received his BS degree in electronic engineering from Inje University, Gimhae, Rep. of Korea, in 1997 and his MS and PhD degrees in electronic engineering from Kyungpook National University, Daegu, Rep. of Korea, in 2000 and 2004, respectively. In 2004, he began working for the Electronics and Telecommunications Research Institute (ETRI), Daejeon, Rep. of Korea, as a researcher. In 2006, he relocated to work at ETRI’s Daegu branch, where he is now a principal researcher. His main research interests are image and video signal processing; machine vision; intelligent vehicles; and automotive ADAS systems.
silee@etri.re.kr
Soo-In Lee received his BS, MS, and PhD degrees in electronic engineering from Kyungpook National University, Daegu, Rep. of Korea, in 1985, 1989, and 1996, respectively. Since 1990, he has been working at the Electronics and Telecommunications Research Institute, Daegu, Rep. of Korea, where he is currently a principal researcher at the Daegu-Gyeongbuk Research Center. His main research interest is IT convergence technology.
choijw@ync.ac.kr
Jeong-Won Choi received his BS, MS, and PhD degrees in electrical engineering from the School of Electrical Engineering, Yeungnam University, Daegu, Rep. of Korea, in 1995, 1997, and 2002, respectively. From 2003 to 2006, he worked for STX, Changwon, Rep. of Korea, and from 2006 to 2009, he worked at the Kumoh National Institute of Technology, Gumi, Rep. of Korea. Since 2009, he has been with the Department of Automation Electrical Engineering, Yeungnam University College, Daegu, Rep. of Korea, where he is now an assistant professor. His main research interests are mobile robots and automation control systems.
Corresponding Author  sglee@ynu.ac.kr
Suk-Gyu Lee received his BS and MS degrees in electrical engineering from Seoul National University, Seoul, Rep. of Korea, in 1979 and 1981, respectively. He received his PhD degree in electrical engineering from the University of California, Los Angeles, CA, USA, in 1990. Since 1982, he has been with the Department of Electrical Engineering, Yeungnam University, Daegu, Rep. of Korea, where he is now a professor. His research interests include robotics, SLAM, nonlinear control, and adaptive control.
References
[1] S. Thrun, "A Probabilistic Online Mapping Algorithm for Teams of Mobile Robots," Int. J. Robot. Res., vol. 20, no. 5, 2001, pp. 335-363. DOI: 10.1177/02783640122067435
[2] Z. Wei, G. Huang, and D. Wang, "The Research on Multi-robot Simultaneous Localization Mapping Algorithm," IEEE Int. Conf. Autom. Logistics, Jinan, China, Aug. 18-21, 2007, pp. 1241-1246. DOI: 10.1109/ICAL.2007.4338759
[3] C. Cai et al., "Collision Avoidance in Multi-robot Systems," Int. Conf. Mechatronics Autom., Harbin, China, Aug. 5-8, 2007, pp. 2795-2800.
[4] A. Souliman et al., "Real Time Control of Multi-agent Mobile Robots with Intelligent Collision Avoidance System," Sci. Inf. Conf., London, UK, Oct. 7-9, 2013, pp. 93-98.
[5] J.W. Kwon and D.K. Chwa, "Hierarchical Formation Control Based on a Vector Field Method for Wheeled Mobile Robots," IEEE Trans. Robot., vol. 28, no. 6, 2012, pp. 1335-1345. DOI: 10.1109/TRO.2012.2206869
[6] J. Liu et al., "Optimal Formation of Robots by Convex Hull and Particle Swarm Optimization," IEEE Symp. Comput. Intell. Contr. Autom., Apr. 16-19, 2013, pp. 104-111. DOI: 10.1109/CICA.2013.6611670
[7] H. Tan, Q. Liao, and J. Zhang, "An Improved Algorithm of Multiple Robots Cooperation in Obstacle Existing Environment," IEEE Int. Conf. Robot. Biomimetics, Sanya, China, Dec. 15-18, 2007, pp. 1001-1006. DOI: 10.1109/ROBIO.2007.4522300
[8] C.G. Majumder, R. Janarthanan, and T.G. Majumder, "Cooperating a Blind Robot Teammate through Vision Based Real-Time Communication," Int. Multi-conf. iMac4s, Kottayam, India, 2013, pp. 59-64. DOI: 10.1109/iMac4s.2013.6526384
[9] S.R. Moon et al., "Leader-Following Approach Based Adaptive Formation Control for Mobile Robots with Unknown Parameters," Trans. Korean Institute Electr. Eng., vol. 60, no. 8, 2011, pp. 1592-1598. DOI: 10.5370/KIEE.2011.60.8.1592
[10] G.H. Eoh et al., "Multi-robot Cooperative Formation for Overweight Object Transportation," IEEE/SICE Int. Symp. Syst. Integr., Kyoto, Japan, Dec. 20-22, 2011, pp. 726-731. DOI: 10.1109/SII.2011.6147538
[11] Y. Touahmi et al., "Congestion Avoidance for Multiple Micro-robots Using the Behavior of Fish Schools," Int. J. Adv. Robot. Syst., vol. 9, 2012, pp. 1-12. DOI: 10.5772/51190
[12] L. Chaimowicz et al., "Experiments in Multirobot Air-Ground Coordination," IEEE Int. Conf. Robot. Autom., New Orleans, LA, USA, Apr. 26-May 1, 2004, vol. 4, pp. 4053-4058. DOI: 10.1109/ROBOT.2004.1308905
[13] O. Shakernia, R. Vidal, and S. Sastry, "Multibody Motion Estimation and Segmentation from Multiple Central Panoramic Views," IEEE Int. Conf. Robot. Autom. (ICRA), Taipei, Taiwan, 2003, vol. 1, pp. 571-576. DOI: 10.1109/ROBOT.2003.1241655
[14] Y. Yagi, "Omni-directional Sensing and Its Applications," IEICE Trans., vol. E82-D, no. 3, 1999, pp. 568-579.
[15] T.E. Boult et al., "Frame-Rate Omnidirectional Surveillance and Tracking of Camouflaged and Occluded Targets," IEEE Workshop Vis. Surveillance, Fort Collins, CO, USA, 1999, pp. 48-55. DOI: 10.1109/VS.1999.780268
[16] V.N. Peri and S.K. Nayar, "Generation of Perspective and Panoramic Video from Omnidirectional Video," DARPA Image Understanding Workshop, New Orleans, LA, USA, 1997.
[17] K. Miyamoto, "Fish Eye Lens," J. Opt. Soc. America, vol. 54, no. 8, pp. 1060-1061. DOI: 10.1364/JOSA.54.001060
[18] A. Basu and S. Licardie, "Alternative Models for Fish-Eye Lenses," Pattern Recogn., vol. 16, no. 4, 1995, pp. 433-441. DOI: 10.1016/0167-8655(94)00115-J
[19] A. Marjovi, S. Choobdar, and L. Marques, "Robotic Clusters: Multi-robot Systems as Computer Clusters: A Topological Map Merging Demonstration," Robot. Auton. Syst., vol. 60, no. 9, 2012, pp. 1191-1204. DOI: 10.1016/j.robot.2012.05.007
[20] T.D. Nguyen et al., "A Novel Interaction Method for Mobile Devices Using Low Complexity Global Motion Estimation," ETRI J., vol. 34, no. 5, 2012, pp. 734-742. DOI: 10.4218/etrij.12.0112.0070
[21] T. Fukuda et al., "Navigation System Based on Ceiling Landmark Recognition for Autonomous Mobile Robot - Landmark Detection Based on Fuzzy Template Matching (FTM)," Proc. IEEE/RSJ Int. Conf., Pittsburgh, PA, USA, 1995, vol. 2, pp. 150-155. DOI: 10.1109/IROS.1995.526153
[22] J. Gaspar, N. Winters, and J. Santos-Victor, "Vision-Based Navigation and Environmental Representations with an Omnidirectional Camera," IEEE Trans. Robot. Autom., vol. 16, no. 6, 2000, pp. 890-898. DOI: 10.1109/70.897802
[23] A. Bakowski and G.A. Jones, "Video Surveillance Tracking Using Color Region Adjacency Graphs," Int. Conf. Image Process. Appl., Manchester, UK, 1999, vol. 2, pp. 794-798. DOI: 10.1049/cp:19990433
[24] H. Tao, H.S. Sawhney, and R. Kumar, "Object Tracking with Bayesian Estimation of Dynamic Layer Representations," IEEE Trans. Pattern Anal. Mach. Intell., vol. 24, no. 1, 2002, pp. 75-89. DOI: 10.1109/34.982885
[25] W.Y. Jeong and K.M. Lee, "CV-SLAM: A New Ceiling Vision-Based SLAM Technique," IEEE/RSJ Int. Conf. Intell. Robot. Syst., Edmonton, Canada, Aug. 2-6, 2005, pp. 3195-3200. DOI: 10.1109/IROS.2005.1545443
[26] Y. Yagi et al., "Iconic Memory-Based Omnidirectional Route Panorama Navigation," IEEE Trans. Pattern Anal. Mach. Intell., vol. 27, no. 1, 2005, pp. 78-87. DOI: 10.1109/TPAMI.2005.11
[27] J.C. Jeong et al., "High-Quality Stereo Depth Map Generation Using Infrared Pattern Projection," ETRI J., vol. 35, no. 6, 2013, pp. 1011-1020. DOI: 10.4218/etrij.13.2013.0052
[28] H.J. Sohn and B.K. Kim, "An Efficient Localization Algorithm Based on Vector Matching for Mobile Robots Using Laser Range Finders," J. Intell. Robot. Syst., vol. 51, no. 4, 2008, pp. 461-488. DOI: 10.1007/s10846-007-9194-1
[29] R.L. Graham, "An Efficient Algorithm for Determining the Convex Hull of a Finite Planar Set," Inf. Process. Lett., vol. 1, no. 4, 1972, pp. 132-133. DOI: 10.1016/0020-0190(72)90045-2
[30] Y.W. Choi et al., "Localization Using Ego Motion Based on Fisheye Warping Image," J. Institute Contr. Robot. Syst., vol. 20, no. 1, 2014, pp. 70-77. DOI: 10.5302/J.ICROS.2014.13.1935
[31] G. Tong and Z. Wu, "An Omni-directional vSLAM Based on Spherical Camera Model and 3D Modeling," Proc. World Congress Intell. Contr. Autom., Beijing, China, July 6-8, 2012, pp. 4551-4556. DOI: 10.1109/WCICA.2012.6359341
[32] J.R. Kim, M.-S. Lim, and J.-H. Lim, "Omni Camera Vision-Based Localization for Mobile Robots Navigation Using Omni-directional Images," J. Institute Contr. Robot. Syst., vol. 17, no. 3, 2011, pp. 206-210.
[33] S.J. Yoon et al., "Global Localization of Mobile Robots Using Omni-directional Images," Trans. KSME, vol. 31, no. 4, 2007, pp. 517-524.
[34] T. Goedeme et al., "Omnidirectional Vision Based Topological Navigation," Int. J. Comput. Vision, vol. 74, no. 3, 2007, pp. 219-236. DOI: 10.1007/s11263-006-0025-9
[35] E. Menegatti et al., "Omnidirectional Vision Scan Matching for Robot Localization in Dynamic Environments," IEEE Trans. Robot., vol. 22, no. 3, 2006, pp. 523-535. DOI: 10.1109/TRO.2006.875495