Real-Time Precision Vehicle Localization Using Numerical Maps
ETRI Journal. 2014. Oct, 36(6): 968-978
Copyright © 2014, Electronics and Telecommunications Research Institute (ETRI)
  • Received : January 29, 2014
  • Accepted : October 12, 2014
  • Published : October 01, 2014
Seung-Jun Han and Jeongdan Choi

Abstract
Autonomous vehicle technology based on information technology and software will lead the automotive industry in the near future. Vehicle localization technology is a core expertise geared toward developing autonomous vehicles and provides the location information needed for control and decision making. This paper proposes an effective vision-based localization technology to be applied to autonomous vehicles. In particular, the proposed technology makes use of numerical maps, which are widely used in the field of geographic information systems and have already been built in advance. Optimum vehicle ego-motion estimation and road-marking feature extraction techniques are adopted and then combined by an extended Kalman filter and a particle filter to make up the localization technology. The implementation of this paper shows remarkable results; namely, an 18 ms mean processing time and a 10 cm location error. In addition, autonomous driving and parking are successfully completed with an unmanned vehicle within a 300 m × 500 m space.
I. Introduction
Research into autonomous vehicles by numerous companies and research institutions is on the rise. Owing to the achievements of such research, it is clear that most vehicles will become either partially or fully unmanned in the near future. Localization is a core technology for an autonomous vehicle and belongs to the recognition part of the recognition, decision, and control technologies. Localization technology is particularly needed for the completion of the other technologies [1]. This paper proposes an effective localization technology using only numerical maps and image information.
Research on vehicle localization conducted thus far has focused on general roads [2]–[10]. Meanwhile, this paper aims at providing an autonomous valet parking (AVP) service. AVP is an extension of the already commercialized parking assistance system and parks a vehicle in a nearby parking space on behalf of a driver who has just finished driving. AVP has been forecast to reach a commercial model at the initial stage of autonomous vehicles.
For AVP, precise location and quick response times are required to safely park a vehicle in a narrow parking space. Because vehicles frequently move between buildings or in underground spaces and park in a single spot within a wide parking lot, the use of an existing global positioning system (GPS) may be impossible. For commercialization, a low cost is also required.
The purpose of this research is to develop a fast and precise localization technology that does not require the aid of a GPS, while keeping its cost to an absolute minimum so that it can be commercialized in the near future. Toward this end, this study developed an efficient localization technology that uses only numerical maps, which can represent a large area with little storage, together with low-cost image sensors (cameras).
Numerical maps are widely used in a geographic information system (GIS) or navigation system and have vector-type data reflecting the real world through measurements; therefore, they can be expressed with a small amount of data. With standardization, numerical maps can be compatible with most systems. Moreover, they can be updated immediately through wireless communications [11] ; thus, their utilization value is significantly high.
This paper proposes both a visual odometry (VO)–based vehicle ego-motion estimation (EE) technology and a road-marking extraction technology for the area surrounding a vehicle. In addition, this study proposes a method for finding and optimizing the location of a vehicle using numerical maps. By applying the localization technology to the autonomous vehicle shown in Fig. 1 , a mean processing time of 18 ms and a 10 cm location error were acquired, which are remarkable performance metrics. This study verified the usefulness of localization technology by carrying out an unmanned autonomous driving and parking experiment.
Fig. 1. Our autonomous vehicles: the front one is a sport utility vehicle, and the other is an electric vehicle. Each vehicle is equipped with six cameras (front stereo, rear stereo, and one on each side), two LIDAR sensors, and a high-performance GPS/INS system.
The main contribution of this paper is that it introduces a precision localization method using only image sensors and numerical maps. In addition, it proposes a new method for combining an extended Kalman filter (EKF) and a particle filter (PF), fast vehicle ego-motion estimation, and road feature extraction.
The technology for measuring the location of a vehicle or mobile robot can be divided into map-based localization, which estimates the location from a pre-constructed map or landmarks [12], and simultaneous localization and mapping (SLAM), which produces a map automatically and estimates the location at the same time [13].

In this paper, relevant studies are examined, focusing on the notable research that has a direct bearing on this paper. Because localization is a foundation technology for mobile robots and autonomous vehicles, research has been widely conducted on the subject [12].
Levinson described typical research on vehicle localization technology covering the precise location estimation applied to a Google driverless car. The results were acquired using a PF [2] and histogram filters [3], with the reflectance intensity of a light detection and ranging (LIDAR) sensor and an inertial measurement unit (IMU) as inputs, on a probability map produced using a LIDAR sensor.
Pink developed the first localization technology using a precise map generated from aerial images and road markings [4]. Since then, localization technology that recognizes road markings and uses precise image maps has been attempted in several forms. Such research has been based on precise maps built from front-camera images of the road shape including road markings [5] or of road markings only [6], as well as on maps produced by LIDAR and road markings [7]. Some research shows that position accuracy is improved by combining a GPS [8].
However, producing precise rasterized maps for all spaces is not only difficult in terms of map building, but may also cause problems in terms of physical storage space.
Laneurit first proposed a localization method that makes use of numerical maps. He demonstrated a technique to improve the accuracy of GPS on a numerical map by fusing multiple sensors with a Kalman filter [10].
Brubaker recently attempted, for the first time, to locate the global position of a vehicle using only numerical maps and image information [9]. He acquired a precision of about three meters by probabilistically matching the vehicle's driving path, acquired from VO, against the road shape. This system performs similarly to a commercial GPS and, although its accuracy is insufficient for vehicle control, it is highly meaningful in that it identified a vehicle's global position over a vastly wide area with only image information and numerical map information.
This paper consists of the following sections. Section II describes the localization technology framework, and Sections III, IV, and V describe each technology in detail. Finally, Sections VI and VII conclude this paper with our experimental results and findings.
II. Framework for Vehicle Localization Using Numerical Maps
The purpose of this study is to develop a location recognition technology with low price, high speed, and high performance for unmanned vehicle driving and parking functions. Even if a precise VO or SLAM technology is implemented, the location acquired from such a technology is relative to a reference position [13]–[14]. In practice, errors accumulate; therefore, it is difficult to fully trust the results.

This paper proposes a map-based localization technique using the numerical maps widely used in a GIS as a reference. The location of a vehicle within a numerical map is predicted using vehicle ego-motion and is verified using the road marking (RM) feature and a PF. Both the ego-motion and the map-based vehicle location contain many estimation errors (noise), which make it impossible to detect the vehicle pose stably; this is addressed by ultimately estimating a robust vehicle state using an EKF. Fundamentally, the localization technology proposed in this paper is based on simple EKF-based vehicle state estimation, whose observation is provided by a PF.
An EKF is a recursive filter for estimating the state of a nonlinear system in the presence of noise [15]. An EKF consists of prediction and update stages, as illustrated in the following equations, where the nonlinear state transition and observation models are given as $x_k = f(x_{k-1}, u_{k-1}, w_{k-1})$ and $z_k = h(x_k, v_k)$, respectively.
$$\text{Prediction:} \quad \begin{cases} \hat{x}_{k|k-1} = f(\hat{x}_{k-1|k-1},\, u_{k-1}), \\ P_{k|k-1} = F_{k-1}\, P_{k-1|k-1}\, F_{k-1}^{T} + Q_{k-1}, \end{cases} \tag{1}$$

$$\text{Update:} \quad \begin{cases} \hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k\, \tilde{y}_k, \\ P_{k|k} = (I - K_k H_k)\, P_{k|k-1}, \\ K_k = P_{k|k-1} H_k^{T} S_k^{-1}, \\ \tilde{y}_k = z_k - h(\hat{x}_{k|k-1}), \\ S_k = H_k P_{k|k-1} H_k^{T} + R_k. \end{cases} \tag{2}$$
In these equations, $x_k$, $u_k$, $z_k$, $w_k$, and $v_k$ are the state, input, output, process noise, and observation noise; $Q_k$ and $R_k$ are the covariances of $w_k$ and $v_k$; and $K_k$, $\tilde{y}_k$, and $S_k$ are the Kalman gain, innovation, and innovation covariance, respectively. Further, $F_k$ and $H_k$ are the Jacobians of the state transition and observation functions, respectively. If the vehicle pose is defined as the state $x_k = [x\ y\ \theta]^T$ and the ego-motion of the vehicle as the input $u_k = [\delta x\ \delta y\ \delta\theta]^T$, then the kinematic nonlinear motion equation of the vehicle can be defined as follows:
$$\begin{pmatrix} x_{t+1} \\ y_{t+1} \\ \theta_{t+1} \end{pmatrix} = \begin{pmatrix} x_t \\ y_t \\ \theta_t \end{pmatrix} + \begin{pmatrix} \delta x \sin\theta_t - \delta y \cos\theta_t \\ \delta x \cos\theta_t + \delta y \sin\theta_t \\ \delta\theta \end{pmatrix}. \tag{3}$$
Although the environment has some differences in elevation, this assumption is well suited because the system uses only a 2D map. Figure 2 shows the framework of the algorithm proposed in this paper. The vehicle ego-motion (input $u_k$) is calculated from the front stereo images, and RMs are detected from the surrounding images. Here, $u_k$ is used for the prediction of the EKF, and the RMs, together with the numerical maps, are used to calculate the vehicle pose; namely, the observation value $z_k$. A PF forms the basis of this step; thus, the robustness against sensing noise is enhanced, and the localization accuracy is improved through optimization after additionally including the RM information of the previous path. From the observation value $z_k$ calculated in this way, the update of the EKF is carried out. Additionally, Kalman filtering (KF) based on vehicle dynamics is conducted to compensate for the processing time-delay.
Fig. 2. Algorithm of vehicle localization using numerical maps.
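As a minimal sketch of the prediction side of this framework, the following C++ fragment shows how the motion model of (3) and the covariance propagation of (1) can be realized. The names and structure are our own illustration, not the authors' implementation; only the use of the Eigen library matches the paper's stated tooling.

```cpp
#include <cmath>
#include <Eigen/Dense>

// Vehicle pose state x = [x y theta]^T and ego-motion input u = [dx dy dtheta]^T.
struct Pose { double x, y, theta; };

// Nonlinear motion model of (3): rotate the ego-motion increment into the
// map frame and accumulate it onto the previous pose.
Pose predictPose(const Pose& p, double dx, double dy, double dtheta) {
    const double s = std::sin(p.theta), c = std::cos(p.theta);
    return { p.x + dx * s - dy * c,
             p.y + dx * c + dy * s,
             p.theta + dtheta };
}

// EKF prediction of (1): P <- F P F^T + Q, where F is the Jacobian of (3)
// with respect to the state (identity except for the theta column).
void predictCovariance(Eigen::Matrix3d& P, const Pose& p,
                       double dx, double dy, const Eigen::Matrix3d& Q) {
    const double s = std::sin(p.theta), c = std::cos(p.theta);
    Eigen::Matrix3d F = Eigen::Matrix3d::Identity();
    F(0, 2) =  dx * c + dy * s;   // d(x')/d(theta)
    F(1, 2) = -dx * s + dy * c;   // d(y')/d(theta)
    P = F * P * F.transpose() + Q;
}
```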
III. Ego-Motion Estimation
Ego-motion is the 3D motion of a camera between consecutive time steps, and VO is the process of estimating ego-motion from images [14]. In his latest study, Badino developed a multi-frame feature integration (MFI) technique for effective VO [16]. Even though MFI delivers excellent performance and respectable real-time capability, its speed fell slightly short of our desired performance. Therefore, this paper proposes an algorithm that greatly improves the real-time performance of MFI by combining high-speed feature tracking with fast convergence in solving the least-squares optimization.
- 1. Multi-frame Feature Integration
MFI is a technique that proposes an augmented feature: an integrated feature that carries the history of all tracked features and reduces tracking errors by additionally including them [16]. From the stereo images, the $i$th feature tracked at time $t$ is defined as $m_{i,t} = [u\ v\ d]^T$, where $(u, v)$ is the position of the feature in the left image and $d$ is its disparity. If the rotation matrix of the ego-motion is $R$ and the translation vector is $T$, then the rigid-body motion of the tracked feature must satisfy

$$g(m_{i,t}) = R\, g(m_{i,t-1}) + T. \tag{4}$$
Here, $g(\cdot)$ is the stereo triangulation function. An objective function is obtained by applying the inverse function, $h(\cdot) = g(\cdot)^{-1}$, to both sides:

$$m_{i,t} = h\big(R\, g(m_{i,t-1}) + T\big) = r(m_{i,t-1}). \tag{5}$$
Calculating the ego-motion is a least-squares optimization problem that looks for the $R$ and $T$ that minimize the following error function:

$$e = \sum_{i=1}^{n} \omega_i \left\| r(m_{i,t-1}) - m_{i,t} \right\|^2. \tag{6}$$
In this equation, $\omega_i$ is a weighting factor. MFI extends the error function as follows:

$$\hat{e} = \alpha \sum_{i=1}^{n} \omega_i \left\| r(m_{i,t-1}) - m_{i,t} \right\|^2 + \beta \sum_{i=1}^{n} a_i \omega_i \left\| r(\bar{m}_{i,t-1}) - m_{i,t} \right\|^2. \tag{7}$$
Here, $\alpha$ and $\beta$ are normalized weighting factors, $a_i$ is the age of a feature at time $t-1$, and $\bar{m}_{i,t-1}$ is the integrated feature, defined recursively as

$$\bar{m}_{i,t-1} = \frac{r(m_{i,t-1}) + a_i\, r(\bar{m}_{i,t-1})}{1 + a_i}.$$
MFI can be expected to yield quite a performance improvement for two reasons. The first is a reduction of the errors that occur upon 3D coordinate conversion, through the direct use of the 2D feature position in the objective function definition of (5) [14]; the other is the inclusion of the previous information of the tracked features in the optimization stage of (7) [16]. However, the computationally heavy processes of feature tracking and of obtaining the disparity from the stereo images still remain.
- 2. Speed-Up Multi-frame Feature Integration
We propose a four-step technique to improve the processing speed of MFI. First, input images are transformed into multi-scale images through a pyramidal technique. Second, new features of every scaled image are extracted through FAST [17] feature detection, Shi–Tomasi [18] cost filtering, sub-pixel accuracy refinement [19], and a FREAK [20] descriptor. Third, feature matching and ego-motion are updated as follows: feature matching over a narrow search area is performed, starting from the smallest layer. This is made possible by predicting each new feature position using the ego-motion of the previous stage and (4). The disparity of each matched feature is then computed, and the ego-motion is updated using (6). The final ego-motion is computed using (7) when the largest layer is reached. At each stage, the initial values for solving the least-squares optimization problem are the previous stage's values. Finally, new tracked features are selected, as sketched below: every layer is divided into a grid, and within each cell we select features up to a predefined number in descending order of Shi–Tomasi cost, to maintain a better feature distribution.
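As an illustration of this final selection step, the following self-contained C++ sketch keeps the top-scoring features per grid cell so that the tracked set stays well distributed. All names are hypothetical; the cell size and per-cell limit would be tuning parameters.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// A detected corner with its Shi-Tomasi score (hypothetical type).
struct Feature { float u, v, score; };

// Divide the image into cellSize x cellSize cells and, within each cell,
// keep at most maxPerCell features in descending Shi-Tomasi score.
std::vector<Feature> selectDistributed(const std::vector<Feature>& candidates,
                                       int imageW, int imageH,
                                       int cellSize, std::size_t maxPerCell) {
    const int cols = (imageW + cellSize - 1) / cellSize;
    const int rows = (imageH + cellSize - 1) / cellSize;
    std::vector<std::vector<Feature>> cells(
        static_cast<std::size_t>(cols) * rows);
    for (const Feature& f : candidates) {  // assumes 0 <= u < imageW, 0 <= v < imageH
        const int cx = static_cast<int>(f.u) / cellSize;
        const int cy = static_cast<int>(f.v) / cellSize;
        cells[static_cast<std::size_t>(cy) * cols + cx].push_back(f);
    }
    std::vector<Feature> selected;
    for (auto& cell : cells) {
        std::sort(cell.begin(), cell.end(),
                  [](const Feature& a, const Feature& b) { return a.score > b.score; });
        if (cell.size() > maxPerCell) cell.resize(maxPerCell);
        selected.insert(selected.end(), cell.begin(), cell.end());
    }
    return selected;
}
```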
IV. RM Feature Extraction
RM is the measurement feature of the proposed system. Studies on RM feature detection have been steadily carried out over the last two decades, mainly for advanced driver assistance systems [21]. Lately, some research has used RM features for localization [4], [6]–[8].
- 1. Extended Symmetrical Local Threshold
From previous studies, the symmetrical local threshold (SLT) method is well known to demonstrate good performance with a small number of operations for RM feature extraction [21]. SLT uses the mean pixel intensity within a certain range to the left and right of the target pixel as the reference value for its threshold (Fig. 3(a)). Therefore, SLT can effectively detect, with a small number of operations, a bright RM drawn on a dark road. The threshold result at image coordinate $(u, v)$ in SLT is defined through the following equation:

$$\hat{i}_{u,v} = \begin{cases} 1 & \text{if}\ \ i_{u,v} > \mu_l^{u,v} + \tau \ \wedge\ i_{u,v} > \mu_r^{u,v} + \tau, \\ 0 & \text{otherwise}. \end{cases} \tag{8}$$
Here, $i_{u,v}$ is the pixel intensity at image coordinate $(u, v)$, and $\mu_l$ and $\mu_r$ are the means of the left and right sections, defined as

$$\mu_l = \frac{1}{N} \sum_{k=u-1-N}^{u-1} i(k,v) \qquad \text{and} \qquad \mu_r = \frac{1}{N} \sum_{k=u+1}^{u+1+N} i(k,v),$$

respectively, and $\tau$ is the given threshold value. From our experience, the SLT method has difficulty detecting a horizontally placed straight line, and errors occur at shadow boundaries, because SLT sets the threshold value using only the means of the pixel intensities along the horizontal axis.
Fig. 3. Averaging block for a local threshold: (a) original SLT and (b) extended symmetrical local threshold (ESLT).
To solve this problem, the current paper proposes an extended symmetrical local threshold (ESLT) technique, as shown in Fig. 3(b) . ESLT uses four directional block reference values, which are the means of the pixel intensities within the block surrounding the target pixel.
If the radius of the image coordinate ( u , v ) block is set as σ , then it can be defined through the following equation:
$$\tilde{i}_{u,v} = \begin{cases} 1 & \text{if}\ \ i_{u,v} > \mu_E^{u,v,\sigma} + \tau \ \wedge\ i_{u,v} > \mu_W^{u,v,\sigma} + \tau, \\ 1 & \text{if}\ \ i_{u,v} > \mu_N^{u,v,\sigma} + \tau \ \wedge\ i_{u,v} > \mu_S^{u,v,\sigma} + \tau, \\ 0 & \text{otherwise}. \end{cases} \tag{9}$$
Here, $\mu_E$, $\mu_W$, $\mu_N$, and $\mu_S$ are the means of the pixel intensities within each directional block and are defined as follows:

$$\begin{aligned} \mu_E^{u,v,\sigma} &= \frac{1}{(2\sigma+1)^2} \sum_{k=v-\sigma}^{v+\sigma}\ \sum_{l=u-(2\sigma+1)}^{u-1} i(l,k), & \mu_W^{u,v,\sigma} &= \frac{1}{(2\sigma+1)^2} \sum_{k=v-\sigma}^{v+\sigma}\ \sum_{l=u+1}^{u+(2\sigma+1)} i(l,k), \\ \mu_N^{u,v,\sigma} &= \frac{1}{(2\sigma+1)^2} \sum_{k=v-(2\sigma+1)}^{v-1}\ \sum_{l=u-\sigma}^{u+\sigma} i(l,k), & \mu_S^{u,v,\sigma} &= \frac{1}{(2\sigma+1)^2} \sum_{k=v+1}^{v+(2\sigma+1)}\ \sum_{l=u-\sigma}^{u+\sigma} i(l,k). \end{aligned} \tag{10}$$
Although a great number of operations would be required to calculate these block means directly, this can be resolved by using the box-filter technique, which is an O(1) algorithm [22]. If the resulting value of the box filter is denoted by $b$, then (10) can be rewritten briefly as follows:
$$\mu_E^{u,v,\sigma} = b_{u-(\sigma+1),\,v}, \qquad \mu_W^{u,v,\sigma} = b_{u+(\sigma+1),\,v}, \qquad \mu_N^{u,v,\sigma} = b_{u,\,v-(\sigma+1)}, \qquad \mu_S^{u,v,\sigma} = b_{u,\,v+(\sigma+1)}. \tag{11}$$
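A minimal C++ sketch of ESLT thresholding with a precomputed box-filter image follows. It assumes b already holds the block means of (10) (computable in O(1) per pixel with a summed-area table); the data layout and names are our own, not the authors'.

```cpp
#include <cstdint>
#include <vector>

using Grid = std::vector<std::vector<float>>;

// ESLT decision of (9), using the box-filter lookups of (11):
// a pixel is marked if it is brighter than both block means of either
// the horizontal (E/W) or the vertical (N/S) direction pair.
std::vector<std::vector<std::uint8_t>> eslt(const Grid& img, const Grid& b,
                                            int sigma, float tau) {
    const int h = static_cast<int>(img.size());
    const int w = static_cast<int>(img[0].size());
    const int off = sigma + 1;  // block-center offset of (11)
    std::vector<std::vector<std::uint8_t>> out(
        h, std::vector<std::uint8_t>(w, 0));
    for (int v = off + sigma; v < h - off - sigma; ++v) {
        for (int u = off + sigma; u < w - off - sigma; ++u) {
            const float i = img[v][u];
            const float muE = b[v][u - off], muW = b[v][u + off];
            const float muN = b[v - off][u], muS = b[v + off][u];
            if ((i > muE + tau && i > muW + tau) ||
                (i > muN + tau && i > muS + tau))
                out[v][u] = 1;
        }
    }
    return out;
}
```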
- 2. Size Relative Region Resampling
Not only is the size of the RM greatly diverse, but the road situation is also greatly variable owing to weather, changes in the amount of sunshine, and the presence of shadows. Furthermore, in an outdoor parking lot, where many vehicles are parked in a narrow space, the impact of a parked vehicle’s shadow is vast. In such a situation, it is extremely difficult to set the proper threshold value and filter size.
We reviewed the performance of all threshold values and filter sizes, but no common value appropriate to all situations was found (Fig. 4). In addition, although we attempted maximally stable extremal regions (MSER) [23], we found no remarkable improvement, owing to highly sensitive threshold values in some particular images; its real-time performance also did not meet our expectations, as described in Section VI.
Fig. 4. Results of road marking feature detectors in a variety of difficult situations: the first row is a wide dark mark in the front camera view, the second row is a narrow lane marking between parked cars in the side camera view, and the third row is a narrow marking in shadow. Each column shows the (a) original, (b) ground truth, (c) best SLT [21], (d) best ESLT, (e) S3R image, and (f) DCS [21] of ESLT.
Basically, the filter size ($\sigma$) relates to the size of the marking, and the threshold value ($\tau$) relates to the intensity of the marking. This paper proposes an algorithm for optimal RM feature extraction based on the filter size: a low threshold value is used so that RM candidates can be detected in all possible cases, and a stable region, in which region changes with respect to the filter size are minimized, is then found. In this paper, this is called size relative region resampling (S3R).
The difference between S3R and MSER is that S3R seeks a region that is stable with respect to changes in the filter size, rather than in the threshold value, and successively compares only two neighboring images for a fast processing speed. From our experience, the best results were acquired when S3R was performed by reducing $\sigma$ in an exponential-like manner over four or five steps. Consequently, if $N+1$ steps are taken over the $\sigma$-value range $[\sigma_{\max}, \sigma_{\min}]$, then the $\sigma$ of each step is defined by the following equation:

$$\sigma_n = \sigma_{\min} + (\sigma_{\max} - \sigma_{\min}) \left( \frac{N-n}{N} \right)^{2}, \qquad n = 0, 1, \ldots, N. \tag{12}$$
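A short sketch of the schedule in (12) follows; the example values in the comment are our own assumption, not the paper's tuned parameters.

```cpp
#include <vector>

// Filter-size schedule of (12): sigma decreases from sigmaMax to sigmaMin
// over N+1 steps, quadratically (coarse steps first, fine steps near sigmaMin).
// Requires N >= 1.
std::vector<double> s3rSchedule(double sigmaMax, double sigmaMin, int N) {
    std::vector<double> sigmas;
    for (int n = 0; n <= N; ++n) {
        const double r = static_cast<double>(N - n) / N;
        sigmas.push_back(sigmaMin + (sigmaMax - sigmaMin) * r * r);
    }
    return sigmas;
}
// For example, s3rSchedule(8.0, 2.0, 4) gives
// sigma = 8.0, 5.375, 3.5, 2.375, 2.0 (rounded to integer sizes in practice).
```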
V. Vehicle Pose Measurement
This paper proposes a method to measure the vehicle location using the RM feature and a PF, an optimization method to enhance the location precision, and a compensation method to decrease the processing time-delay.
- 1. Vehicle Pose Detection Using PF
A. PF
A PF is a Bayesian sequential-importance-sampling tool that calculates the approximate value of a posterior distribution by recursively using limited weight samples. The robustness of the localization using the PF is known to be excellent [12] .
A PF is divided into prediction and update stages. When all observations up to time $t-1$ are given as $z_{1:t-1} = \{z_1, \ldots, z_{t-1}\}$, the posterior distribution at time $t$ can be predicted as follows:

$$p(x_t \mid z_{1:t-1}) = \int p(x_t \mid x_{t-1}, u_{t-1})\, p(x_{t-1} \mid z_{1:t-1})\, \mathrm{d}x_{t-1}. \tag{13}$$
If $z_t$ is observable at time $t$, then the posterior distribution can be updated as follows through Bayes' rule:

$$p(x_t \mid z_{1:t}) = \frac{p(z_t \mid x_t)\, p(x_t \mid z_{1:t-1})}{p(z_t \mid z_{1:t-1})}. \tag{14}$$
In this equation, the likelihood $p(z_t \mid x_t)$ is defined by the observation equation. If the probability distribution of the predicted particle $\tilde{x}_t^i$ is given by the proposal $q(x_t \mid x_{1:t-1}, z_{1:t})$, then the likelihood can be approximated by the following weight:

$$w_t^i = w_{t-1}^i\, \frac{p(z_t \mid \tilde{x}_t^i)\, p(\tilde{x}_t^i \mid x_{1:t-1}^i)}{q(\tilde{x}_t \mid x_{1:t-1},\, z_{1:t})}. \tag{15}$$
Therefore, the posterior distribution can be calculated through a calculation of the predicted particle’s weight. In this case, state x is the vehicle state; namely, the vehicle pose, and measurement z is the location of the RM.
B. Vehicle Pose Detection
This study uses a PF to detect the approximated vehicle pose at the current point in time. The vehicle pose is detected by obtaining the result of (14). To this end, each particle's predicted value, $\tilde{x}_t^i$, should first be calculated; it is computed by the nonlinear vehicle kinematic function of (3), with the ego-motion value input to all particles. The RM feature should be converted into vector-type data, such as a numerical map: a line segment is calculated for the line model from the outer contour of the critical points of each RM feature, and the line model $S_l^i = [x_l\ y_l\ \theta_l\ l_l]^T$ is defined by the starting point, angle, and length of each line. On the map, the line segments of the objects that fall within the projected image domain $W(\tilde{x}_t^i)$ are extracted, and the object line model $M_k^i = [x_k\ y_k\ \theta_k\ l_k]^T$ is calculated.

To calculate (14), this paper proposes a method that computes the probability of a mapping hypothesis between the observable map objects within the domain of the predicted vehicle pose $\tilde{x}_t^i$ and the RM features observed from the images. Previous ego-motion and RM information are included in the mapping hypothesis function to increase the robustness against noise; this information is updated whenever the vehicle moves a specific distance.
Figure 5 shows the hypothesis function $H(\cdot)_{kl}$. The function stands for the probability that the $k$th datum of the numerical map is mapped to the $l$th image datum and is defined as follows:

$$H(\tilde{x}_t^i)_{kl} = \sum_{n=0}^{p} a_n^i \left\| M(\tilde{x}_{t-n}^i)_k - S(\tilde{x}_{t-n}^i)_l \right\|^2 = \sum_{n=0}^{p} a_n^i \left[ d_{kl}^2 + \left( \mu_1 \frac{\delta\theta_{kl}}{\pi} \right)^2 + \left( \mu_2 \frac{\delta l_{kl}}{l_{Mk}} \right)^2 \right]. \tag{16}$$
Here, $M(\cdot)$ is the map's data model function, $S(\cdot)$ is the image data model function, and $a_n^i$ is the weight of the previous path; the larger $n$ is (that is, the further in the past), the smaller the weight $a_n^i$ becomes. Further, $d_{kl}$ is the distance between the line of the map vector segment and the central point of the image vector, $\delta\theta_{kl}$ is the angle between the map segment and the RM segment, $\delta l_{kl}$ is the difference in length between the two segments, and $\mu_1$ and $\mu_2$ are constants.
Fig. 5. Example mapping hypothesis between a map object and road marking segment.
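As a sketch of one term of the hypothesis cost in (16), the following C++ fragment compares a map segment with an image segment. The midpoint-to-line distance and the angle wrapping are our assumptions about details the paper leaves unspecified.

```cpp
#include <cmath>

// Line-segment model as in Section V.1: start point, angle, and length.
struct Segment { double x, y, theta, len; };

// One summand of (16) for a (map k, image l) pair: squared point-to-line
// distance d_kl plus normalized angle and length differences, with the
// constants mu1 and mu2 of the paper.
double hypothesisCost(const Segment& mapSeg, const Segment& imgSeg,
                      double mu1, double mu2) {
    const double kPi = 3.14159265358979323846;
    // Distance from the image segment's midpoint to the map segment's line.
    const double mx = imgSeg.x + 0.5 * imgSeg.len * std::cos(imgSeg.theta);
    const double my = imgSeg.y + 0.5 * imgSeg.len * std::sin(imgSeg.theta);
    const double nx = -std::sin(mapSeg.theta), ny = std::cos(mapSeg.theta);
    const double d = (mx - mapSeg.x) * nx + (my - mapSeg.y) * ny;
    // Undirected-line angle difference, wrapped into [-pi/2, pi/2].
    const double dTheta = std::remainder(imgSeg.theta - mapSeg.theta, kPi);
    const double dLen = imgSeg.len - mapSeg.len;
    return d * d + (mu1 * dTheta / kPi) * (mu1 * dTheta / kPi)
                 + (mu2 * dLen / mapSeg.len) * (mu2 * dLen / mapSeg.len);
}
```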
The observation model calculates the probability of the mapping hypothesis, and the observation probability of the $i$th particle at time $t$ can be defined as follows:

$$p_t^i = p(z_t \mid \tilde{x}_t^i) = 1 - \frac{1}{M+N} \left( \sum_{m=1}^{M} F(\tilde{x}_t^i, z_t)_m + \sum_{n=1}^{N} G(\tilde{x}_t^i, z_t)_n \right). \tag{17}$$
Here, $M$ is the number of segments observed from the numerical map, and $N$ is the number of segments observed from the images. In addition, $F(\cdot)$ is the observation function of the referenced map, $F(\cdot)_m = \min\{H(\cdot)_{kl} \mid k = m,\ l = 1, 2, \ldots, N\}$, and $G(\cdot)$ is the observation function of the referenced image, $G(\cdot)_n = \min\{H(\cdot)_{kl} \mid l = n,\ k = 1, 2, \ldots, M\}$. The result of the hypothesis function $H(\cdot)$ is a matrix of size $M \times N$; $F(\cdot)$ is the set of the minimum values in each row, and $G(\cdot)$ is the set of the minimum values in each column. When the observation probability of every particle has been calculated, a new state (namely, the vehicle pose) is computed from a weighted sum of the predicted particles, as shown in (18).
$$x_{\mathrm{est}} = \sum_{i=1}^{n} w_t^i\, \tilde{x}_t^i, \qquad \text{where}\ \ w_t^i = \frac{p_t^i}{\sum_{k=1}^{n} p_t^k}. \tag{18}$$
Finally, particle resampling should be carried out.
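The weighting and weighted-mean steps of (17) and (18) reduce to a few lines. The following sketch uses hypothetical types and omits resampling; note also that linearly averaging theta assumes the particles are clustered away from the angle wrap-around.

```cpp
#include <cstddef>
#include <vector>

struct Pose { double x, y, theta; };  // predicted particle states

// Normalize the per-particle observation probabilities of (17) into
// weights and form the pose estimate of (18) as the weighted mean.
Pose estimatePose(const std::vector<Pose>& particles,
                  const std::vector<double>& obsProb) {
    double sum = 0.0;
    for (double p : obsProb) sum += p;  // assumes sum > 0
    Pose est{0.0, 0.0, 0.0};
    for (std::size_t i = 0; i < particles.size(); ++i) {
        const double w = obsProb[i] / sum;  // w_t^i of (18)
        est.x     += w * particles[i].x;
        est.y     += w * particles[i].y;
        est.theta += w * particles[i].theta;
    }
    return est;
}
```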
- 2. Vehicle Pose Refinement
The robustness of a PF is outstanding, but its resolution is relatively low. To enhance the resolution, the number of particles must be increased, which greatly increases the number of operations [12]. This study proposes a refinement technique that applies least-squares optimization to the approximated vehicle pose computed by a PF with a small number of particles.
Nowadays, state-of-the-art optimization algorithms have been reported [24] ; however, in this study, the authors chose the least square optimization technique because it can be easily implemented and works well.
By inputting the approximated vehicle state calculated from (18), the observation function $F(\cdot)$ is calculated using (16). Once most neighboring segment pairs are computed, the noisy (unmatched) segments of an image are removed. State $x$ is then computed through least-squares optimization over the norm of (16). The value calculated in this way is the vehicle-state observation $z_k$ and is used in the update stage (2) of the EKF.
- 3. Processing Time-Delay Compensation
The vehicle state computed as such (namely, the vehicle pose) is the information at the time at which the image was captured by the camera. That is, it is already past information when used for vehicle control; therefore, proper compensation is required. This is solved by estimating the vehicle's dynamic characteristics using a linear KF.
The state vector of the vehicle dynamic model can be defined as $x_k = [x\ \dot{x}\ \ddot{x}\ y\ \dot{y}\ \ddot{y}\ \theta\ \dot{\theta}\ \ddot{\theta}]^T$, the input vector as $u_k = [\delta x\ \delta y\ \delta\theta]^T$, and the output vector as $y_k = [x\ y\ \theta]^T$. If modeling is conducted with linear uniformly accelerated motion, then the state equation of the vehicle dynamic model follows. Ego-motion is used as the input vector of the linear KF, while the output of the EKF is used as the observation vector. If the vehicle dynamic model state is estimated using a KF, then the vehicle state after an arbitrary time $\delta t$ can be predicted as follows:
$$\tilde{x}_{k+\delta t} = x_k + \dot{x}_k\, \delta t + \tfrac{1}{2} \ddot{x}_k\, \delta t^2. \tag{19}$$
Predicting the vehicle pose in this way compensates for the time delay that occurs during processing, and more precise information can therefore be offered to the vehicle controller when an unmanned vehicle is actually operated.
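A minimal sketch of the constant-acceleration prediction in (19), applied per state component; the latency value in the usage comment is the one reported in Section VI.

```cpp
// One axis of the KF state of Section V.3: position, velocity, acceleration.
struct Axis1D { double pos, vel, acc; };

// Extrapolate a filtered component ahead by dt seconds, as in (19).
double predictAhead(const Axis1D& s, double dt) {
    return s.pos + s.vel * dt + 0.5 * s.acc * dt * dt;
}
// Usage: with a pipeline latency of about 0.05 s, the compensated x
// coordinate is predictAhead(stateX, 0.05); y and theta are handled identically.
```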
VI. Experimental Results
- 1. Experimental Environments
The two autonomous vehicles in this paper used a 2.6 GHz Intel Xeon E5-2670 and a 2.3 GHz Intel i7-3610QM, respectively, as the main controller and were adapted for unmanned driving. They were also equipped with six Point Grey Blackfly cameras fitted with fisheye lenses, an OxTS RT-2002 high-accuracy GPS/INS, and Sick LIDAR sensors. The six cameras were attached in a front-stereo and rear-stereo fashion, with one camera on each of the right and left sides. The front stereo cameras were used for ego-motion estimation, and four cameras (front left, rear left, and the left and right sides) were used for RM feature extraction (the camera on the rear right side is not currently used). The algorithm was developed in C++ and optimized using Single Instruction, Multiple Data (SIMD) instructions [25] and Intel Threading Building Blocks (TBB) [26]. The Eigen library [27] was used for matrix operations, while the levmar library [28] was used as the optimization solver. We gathered more than 1 TB of images and GPS logs under various weather conditions over more than one year while carrying out this study. All camera images were collected at 20 frames per second with a common sync, and the GPS log was collected at 250 samples per second. The numerical map was built at a scale of 1:500 (mean error of 0.2 m) over a 300 m × 500 m space, and its size was only 530 KB without compression. The test bed is located on a low hill, with an elevation of 50 m to 62 m. Figure 6 shows some selected images and numerical maps from the dataset. In this study, the EKF parameters were tuned using GPS, and the number of particles is variable, up to 200, depending on their probability.
Fig. 6. Our dataset of some selected scenes in a variety of weather conditions and a numerical map: (a) sunny day, (b) rainy day, (c) cloudy day, (d) snowy day, (e) sunny day, evening, (f) sunny day, a wide open space, and (g) the numerical map.
- 2. Performance Analysis of Each Algorithm
Table 1 shows the mean processing time of the proposed technology in various situations. Each value is the mean over 1,000 frames from each sample dataset. Time measurements were divided into a case using only a single core and a case using multiple cores; from the single-core results, the efficiency of the algorithm can be verified. The RM feature (RM) and localization (LO) columns are the processing results for four cameras; the processing time for one camera is about a quarter of this. Although the processing time increases slightly on a sunny day, owing to increased noise caused by shadows, respectable real-time attributes are nonetheless shown. The excellent real-time performance of the proposed technology is due to the fast EE and RM feature extraction.
Table 1. Amount of processing time (ms).

            |        Single core1)        | Quad-core1) | Octa-core2)
            |  EE    RM3)   LO3)   Total  |   total     |   total
Sunny day   | 32.8   18.6   43.4   94.8   |   38.5      |   18.5
Rainy day   | 30.2   12.1   30.8   73.1   |   36.1      |   16.8
Cloudy day  | 30.8   12.6   31.1   74.5   |   36.8      |   17.1

1) i7-3610QM, 2.3 GHz. 2) Xeon E5-2670, 2.6 GHz. 3) Sum of four channels.
Table 2 shows the performance of EE. The feature matching in our framework was compared with the well-known SURF [29], KLT [18], and FREAK [20] algorithms. For FREAK, Shi–Tomasi [18] detection was used, since FREAK provides no feature detector of its own. The proposed technique showed the most outstanding real-time performance and accuracy. The reasons are the selection of good, well-distributed corners (FAST + Shi–Tomasi) and the excellent matching performance of FREAK.
Table 2. Performance of ego-motion estimation.

                   | Time (ms) | Trans. err. | Rotation err.
SURF               |   212.4   |    3.01%    | 0.0081 deg/m
KLT                |   120.7   |    1.80%    | 0.0075 deg/m
Shi–Tomasi+FREAK   |    82.1   |    1.62%    | 0.0052 deg/m
Ours               |    32.8   |    1.56%    | 0.0051 deg/m
Table 3 shows the RM feature extraction performance. When only SLT [21] or ESLT is used, a very fast processing time is achieved. However, the ability to cope with difficult situations, such as line detection between vehicles, is remarkably low. When MSER [23] is combined, there is an explicit performance improvement, but the increase in processing time is conspicuous. The S3R method proposed in this paper meets both the performance and time requirements.
Table 3. Performance of RM feature extraction.

              | Time (ms) | DCS [21], cloudy day | DCS, sunny day
SLT           |    0.5    |         0.58         |      0.38
ESLT (IV.1)   |    0.7    |         0.63         |      0.42
ESLT+MSER     |   78.2    |         0.69         |      0.68
S3R (IV.2)    |    4.6    |         0.72         |      0.70
- 3. Accuracy Analysis
Figure 7 shows representative driving paths used for the experiment. This paper conducted analyses on free manual driving (Fig. 7(a)) and automatic driving and parking (Fig. 7(b)). The analysis results are shown in Table 4; accuracy is reported as the distance root mean square (DRMS) error, in meters and degrees. Uniform results can be seen overall; in particular, the influence of rain or snow is considerably small. From these results, the robustness of the proposed algorithm can be ascertained. Note that the results achieve a higher accuracy than that of the map used in this study (mean error of 0.2 m); this is because the map errors (noise) are reduced through multi-stage filtering.
Fig. 7. Selected driving paths and their results: (a) manual driving, with a vehicle speed of 10 km/h to 30 km/h and a driving distance of 1 km to 3 km; (b) autonomous valet parking, with a vehicle speed of 1 km/h to 3 km/h and a driving distance of 80 m to 120 m (the vehicles drive autonomously, park in a slot, and then return to the starting position).
Table 4. Evaluation of localization (DRMS).

                  |        Driving         |           AVP
                  | Pos. err. | Head. err. | Pos. err. | Head. err.
Sunny             |  0.19 m   |  0.99 deg  |  0.09 m   |  0.87 deg
Cloudy            |  0.12 m   |  0.85 deg  |  0.07 m   |  0.85 deg
Rainy             |  0.18 m   |  1.32 deg  |  0.12 m   |  1.21 deg
Snowy1)           |  0.18 m   |  1.22 deg  |  0.11 m   |  1.13 deg
Evening           |  0.12 m   |  0.88 deg  |  0.08 m   |  0.88 deg
Wide, open space  |     -     |     -      |  0.13 m   |  1.11 deg

1) But, snow accumulation does not occur.
Figure 8 demonstrates the location errors measured in real time during actual driving: (a) is the case where the processing time-delay is not compensated, (b) is the opposite case, and (c) is the plotted time domain of (b). In case (a), a delay of about 50 ms occurs (30 ms for capturing and preprocessing after the camera is triggered, plus a 20 ms processing time); thus, location errors increase owing to the delay. Meanwhile, in case (b), which compensates for the processing time-delay, the location error is remarkably reduced. Such time-delay compensation is judged to be essential for actual control, and the need for compensation increases as the vehicle speed rises.
Fig. 8. Processing time-delay compensation: position error of the path of Fig. 7(a): (a) without compensation at a vehicle speed of 10 km/h to 30 km/h and a delay of 50 ms (image capture of 30 ms + processing time of 20 ms); (b), (c) with compensation.
Table 5 shows the robustness of the proposed algorithm in the case of camera failures. The failure of one or two cameras did not cause noticeable performance degradation. In the case of three failures, autonomous driving and parking were possible over a short distance but not over a long distance.
Table 5. Robustness in the case of camera failures (DRMS).

Camera failures |        Driving         |           AVP
                | Pos. err. | Head. err. | Pos. err. | Head. err.
1               |  0.10 m   |  0.99 deg  |  0.09 m   |  0.97 deg
2               |  0.12 m   |  0.85 deg  |  0.11 m   |  0.85 deg
3               |     -     |     -      |  0.15 m   |  1.51 deg
VII. Conclusion
This paper proposed an efficient localization technology that can be used for actual autonomous vehicles. The proposed technology was developed using only numerical maps, which are widely used in geographic information systems (GISs) and navigation systems, together with computer vision technology. This study proposed an efficient vehicle ego-motion and road-marking feature extraction algorithm, as well as a method to compute the vehicle pose by combining them through an extended Kalman filter (EKF) and a particle filter (PF). This paper also handled a time-delay compensation method for the actual control system. Experiments were conducted in various weather conditions; thus, the robustness and efficiency of the proposed technology were verified.
The proposed technology was developed for actual unmanned autonomous vehicles and was applied to and tested on two such vehicles. The technology was verified to be effective and close to commercialization.
This technology has a constraint in that it requires relatively precise numerical maps. Most facilities, however, have relevant drawings, and acquiring numerical maps from them is not a difficult task. Many countries, including the Rep. of Korea, produce and manage accurate numerical maps for national territory management. If this technology is combined with a general commercial GPS, then the following becomes possible: precise driving and parking, using the technology proposed in this paper, within facilities that require high-precision location information, combined with driving on public roads using low-precision location and lane-keeping technology.
As such, the utilization of localization technology based on numerical maps is expected to increase further. This research is significant in that it has developed real-time precision vehicle localization using numerical maps for the first time.
This work was supported by the Industrial Strategic Technology Development Program (10035250, Development of Spatial Awareness and Autonomous Driving Technology for Automatic Valet Parking) funded by the Ministry of Knowledge Economy (MKE), Rep. of Korea.
BIO
Corresponding Author  joshua@etri.re.kr
Seung-Jun Han received his BEng degree in control and instrumentation engineering from Pukyong National University, Busan, Rep. of Korea, in 1998. He received his MS degree in electronics engineering from Pusan National University, Busan, Rep. of Korea, in 2000. From 2000 to 2010, he worked as a principal researcher for Sane System Co., Ltd., Kyonggi, Rep. of Korea. In 2011, he worked as a research fellow at the Department of Aerospace Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Rep. of Korea. Since 2012, he has been working as a senior researcher at the Electronics and Telecommunications Research Institute, Daejeon, Rep. of Korea. He has won several awards, including the 6th Samsung Humantech Thesis prize in 2000, the outstanding paper award at the 17th ITSWC in 2010, and the outstanding employee award at ETRI in 2014. His main research interests are structure from motion; simultaneous localization and mapping; and machine learning.
jdchoi@etri.re.kr
Jeongdan Choi received her PhD in image processing from Chungnam National University, Daejeon, Rep. of Korea, in 2006 and obtained her BS and MS degrees in computer graphics from Chung-Ang University, Seoul, Rep. of Korea, in 1993 and 1995, respectively. Since 1995, she has been working as a principal researcher at the Electronics and Telecommunications Research Institute (ETRI), Daejeon, Rep. of Korea. She is also a director of the Car/Infra Fusion Research Team, Car/Ship IT Convergence Technology Department, ETRI. Her research interests are in automotive image processing and recognition; 3D modeling; and rendering.
References
Luettel T. , Himmelsbach M. , Wuensche H. “Autonomous Ground Vehicles—Concepts and a Path to the Future,” Proc. IEEE May 13, 2012 100 (special centennial issue) 1831 - 1839    DOI : 10.1109/JPROC.2012.2189803
Levinson J. , Montemerlo M. , Thrun S. 2007 “Map-Based Precision Vehicle Localization in Urban Environments,” Robot.: Sci. Syst. Conf. Atlanta, GA, USA
Levinson J. , Thrun S. “Robust Vehicle Localization in Urban Environments Using Probabilistic Maps,” IEEE Int. Conf. Robot. Autom. Anchorage, AK, USA May 3–7, 2010 4372 - 4378    DOI : 10.1109/ROBOT.2010.5509700
Pink O. , Moosmann F. , Bachmann A. “Visual Features for Vehicle Localization and Ego-Motion Estimation,” IEEE Intell. Veh. Symp. Xi’an, China June 3–5, 2009 254 - 260    DOI : 10.1109/IVS.2009.5164287
Napier A. , Newman P. “Generation and Exploitation of Local Orthographic Imagery for Road Vehicle Localisation,” IEEE Intell. Veh. Symp. Alcalá de Henares, Spain June 3–7, 2012 590 - 596    DOI : 10.1109/IVS.2012.6232165
Wu T. , Ranganathan A. “Vehicle Localization Using Road Markings,” IEEE Intell. Veh. Symp. Gold Coast, Australia June 23–26, 2013 1185 - 1190    DOI : 10.1109/IVS.2013.6629627
Schreiber M. , Knoppel C. , Franke U. “LaneLoc: Lane Marking Based Localization Using Highly Accurate Maps,” IEEE Intell. Veh. Symp. Gold Coast, Australia June 23–26, 2013 449 - 454    DOI : 10.1109/IVS.2013.6629509
Tao Z. “Mapping and Localization Using GPS, Lane Markings and Proprioceptive Sensors,” IEEE Int. Conf. Intell. Robot. Syst. Tokyo, Japan Nov. 3–7, 2013 406 - 412
Brubaker M.A. , Geiger A. , Urtasun R. “Lost! Leveraging the Crowd for Probabilistic Visual Self-Localization,” IEEE Conf. Compt. Vision Pattern Recogn. Portland, OR, USA June 23–28, 2013 3057 - 3064    DOI : 10.1109/CVPR.2013.393
Laneurit J. , Chapuis R. , Chausse F. 2005 “Accurate Vehicle Positioning on a Numerical Map,” Int. J. Contr., Autom., Syst. 3 (1) 15 - 31
Min K. 2011 “A System Framework for Map Air Update Navigation Service,” ETRI J. 33 (4) 476 - 486    DOI : 10.4218/etrij.11.1610.0012
Thrun S. , Burgard W. , Fox D. 2005 Probabilistic Robotics MIT Press Cambridge, MA, USA
Durrant-Whyte H. , Bailey T. 2006 “Simultaneous Localisation and Mapping: Part I the Essential Algorithms,” Robot. Autom. Mag. 13 (2) 99 - 110    DOI : 10.1109/MRA.2006.1638022
Scaramuzza D. , Fraundorfer F. 2011 “Visual Odometry (tutorial) Part I: The First 30 Years and Fundamentals,” Robot. Autom. Mag. 18 (4) 80 - 92    DOI : 10.1109/MRA.2011.943233
Ribeiro M. 2004 “Kalman and Extended Kalman Filters: Concept, Derivation and Properties,” Institute Sys. Robot.;Instituto Superior Tecnico Lisbon, Portugal 1 - 42
Badino H. , Yamamoto A. , Kanade T. “Visual Odometry by Multi-frame Feature Integration,” IEEE Int. Conf. Comput. Vision Workshop Sydney, Australia Dec. 2–8, 2013 222 - 229    DOI : 10.1109/ICCVW.2013.37
Rosten E. , Drummond T. “Machine Learning for High-Speed Corner Detection,” European Conf. Compt. Vision Graz, Austria May 7–13, 2006 430 - 443    DOI : 10.1007/11744023_34
Shi J. , Tomasi C. “Good Features to Track,” IEEE Int. Conf. Compt. Vision Pattern Recogn. Seattle, WA, USA June 21–23, 1994 593 - 600    DOI : 10.1109/CVPR.1994.323794
Brown M. , Lowe D. “Invariant Features from Interest Point Groups,” British Mach. Vision Conf. Cardiff, UK Sept. 2–5, 2002 656 - 665
Alahi A. , Ortiz R. , Vandergheynst P. “Freak: Fast Retina Keypoint,” IEEE Conf. Compt. Vision Pattern. Recogn. Providence, RI, USA June 16–21, 2012 510 - 517    DOI : 10.1109/CVPR.2012.6247715
Veit T. “Evaluation of Road Marking Feature Extraction,” IEEE Int. Conf. Intell. Transp. Syst. Beijing, China Oct. 12–15, 2008 174 - 181    DOI : 10.1109/ITSC.2008.4732564
McDonnell M.J. 1981 “Box-Filtering Techniques,” Comput. Graph. Image Process. 17 (1) 65 - 70    DOI : 10.1016/S0146-664X(81)80009-3
Matas J. “Robust Wide Baseline Stereo from Maximally Stable Extremal Regions,” British Mach. Vision Conf. Cardiff, UK Sept. 2–5, 2002 1 384 - 396    DOI : 10.1016/j.imavis.2004.02.006
Rocca P. 2009 “Evolutionary Optimization as Applied to Inverse Scattering Problems,” Inverse Problems 25 (12) 1 - 41    DOI : 10.1088/0266-5611/25/12/123003
2013 Intel® 64 and IA-32 Archit. Softw. Developer’s Manual http://www.intel.com/content/www/us/en/processors/architectures-software-developer-manuals.html
2013 Intel Threading Building Blocks https://www.threadingbuildingblocks.org/
Jacob B. 2013 Eigen is a C++ Template Library for Linear Algebra http://eigen.tuxfamily.org/
Lourakis M. 2011 Levmar: Levenberg-Marquardt Nonlinear Least Squares Algorithms in C/C++ http://www.ics.forth.gr/~lourakis/levmar/
Bay H. 2008 “Speeded Up Robust Features (SURF),” Comput. Vision Image Understanding 110 (3) 346 - 359    DOI : 10.1016/j.cviu.2007.09.014