Asynchronous Sensor Fusion using Multi-rate Kalman Filter
The Transactions of The Korean Institute of Electrical Engineers. 2014. Nov, 63(11): 1551-1558
Copyright © 2014, The Korean Institute of Electrical Engineers
  • Received : July 02, 2014
  • Accepted : October 24, 2014
  • Published : November 01, 2014
About the Authors
Young Seop Son
Dept. of Electrical Engineering, Hanyang University, Korea and Global R&D Center, MANDO Corp., Korea
Wonhee Kim
Dept. of Electrical Engineering, Dong-A University, Korea
Seung-Hi Lee
Div. of Electrical and Biomedical Engineering, Hanyang University, Korea
Chung Choo Chung
Corresponding Author: Div. of Electrical and Biomedical Engineering, Hanyang University, Korea. E-mail: cchung@hanyang.ac.kr

Abstract
We propose a multi-rate sensor fusion of vision and radar using a Kalman filter to solve the problems of asynchronous and multi-rate sampling periods in object vehicle tracking. A model-based prediction of object vehicles is performed with a decentralized multi-rate Kalman filter for each sensor (vision and radar). To improve the performance of position prediction, different weightings are applied to each sensor's predicted object position from the multi-rate Kalman filter. The proposed method can provide the estimated positions of the object vehicles at every sampling time of the ECU. The Mahalanobis distance is used to establish correspondence among the measured and predicted objects. Through the experimental results, we validate that the post-processed fusion data gives improved tracking performance. The proposed method obtained a two-times improvement in object tracking performance compared to the single-sensor method (camera or radar) in terms of root mean square error.
1. Introduction
Robust and reliable object tracking is crucial to the performance of collision avoidance, path generation, and adaptive cruise control. Object tracking with the use of radar and vision processing systems has been researched [1] - [5] . Radar and vision have complementary characteristics in directional measurement accuracy: the vision system provides accurate lateral information, whilst the radar system gives accurate range information of objects. Vehicle detection and tracking methods using only a vision sensor were developed [6] , [7] . The methods were designed based on an edge-based constraint filter. However, the vision system cannot guarantee satisfactory performance under various environmental conditions such as light changes between day and night, poor weather, and lack of a light source. On the other hand, the radar provides unnecessary object information resulting from reflections on crash barriers and other cars [1] . The radar and vision sensor systems commonly used in industrial applications have uncertain and slow update rates compared to the vehicle electronic control unit (ECU). Thus it may be difficult for the radar and vision sensor systems to be synchronized with the lateral and longitudinal control systems. Furthermore, the slow update rate may limit the performance of the vehicle control system.
The fusion of two sensors can complement each sensor's properties. The conventional approach to sensor fusion is not synchronized and updates the fused data at the slower sensor's rate. Thus the fusion structure of two sensors may result in slow object detection, and an unavoidable collision may arise. In [2] , the vision data was only used to compensate for the object vehicle information from the radar data. They used a radar to detect relevant object vehicles, then used a vision sensor to validate the object vehicles' lateral positions, dimensions, and boundaries. Amditis et al. presented a multi-sensor collision avoidance system [8] . They solved the sensors' asynchronization problem through time-based generation, but they fused raw data from the sensors only at commonly updated instants. Richter et al. presented a tracking algorithm combining asynchronous observation data using a movement model [4] . They integrated the longitudinal and lateral velocity measurements to obtain object positions at the required instants. The major problem with this integration method is that bias errors and noise in the sensors result in serious drift in the position estimates.
This paper investigates multi-object vehicle tracking using a fusion of asynchronous sensors such as radar and vision sensors. We propose a multi-rate sensor fusion using a Kalman filter for object vehicle tracking. The multi-rate Kalman filter is developed to approximately synchronize the sensors, with the higher sampling rate synchronized to the ECU. A model-based prediction of the object vehicles' future behavior is performed by designing a decentralized multi-rate Kalman filter for each sensor (vision and radar). To improve the estimated object position, different weightings are applied to each sensor's predicted object position from the multi-rate Kalman filter: a large weighting on the lateral position from vision and on the longitudinal position from radar. The proposed method allows us to implement the prediction and correction steps at the ECU sampling rate [9] [10] . It is experimentally validated that the proposed method provides improved estimation of object positions at every sampling time of the ECU. For object tracking, each sensor must link measured objects to predicted ones at every time sensor information is updated. In this paper, we use the Mahalanobis distance in determining correspondence among the measured and estimated objects [11] . Through the experimental results, we validate that the post-processed fusion data has improved object tracking performance. The proposed post-processing method can help provide collision warning and/or take collision avoidance action by predicting the position and motion of an object vehicle even with intermittent sensor information. We obtained a two-times improvement in object tracking performance compared to the single-sensor method (camera or radar) in terms of root mean square error.
2. System Structure
In the proposed method, vision and radar sensors are used. Fig. 1 shows the schematic block diagram of the proposed multi-rate sensor fusion system for object tracking. The structure is composed of six function modules: (a) two sensor modules, (b) two validity determination modules for filtering object vehicles within specified ranges, (c) two correspondence determination modules for matching correspondence between the measured data and the object data recorded in the previous frame, (d) two multi-rate Kalman filters for prediction of the object vehicles' positions, (e) a module performing the interaction correspondence algorithm for finding the objects commonly measured by vision and radar, and (f) a sensor fusion module that improves the position estimation by fusing the weighted position information from vision and radar.
Fig. 1. Structure of multi-rate sensor fusion system for object tracking
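The per-ECU-cycle flow of modules (b), (c), and (f) can be sketched as a minimal, runnable Python skeleton. The function names, range bounds, and weights here are illustrative assumptions, not values from the paper:

```python
def validity_filter(objs, lon_max=120.0, lat_half=5.0):
    """(b) Keep only objects inside an assumed longitudinal/lateral range."""
    return {i: p for i, p in objs.items()
            if 0.0 <= p[0] <= lon_max and abs(p[1]) <= lat_half}

def fuse(vision_pos, radar_pos, w_lon=0.9, w_lat=0.2):
    """(f) Weighted fusion: radar dominates the longitudinal coordinate (x),
    vision dominates the lateral coordinate (y)."""
    return (w_lon * radar_pos[0] + (1.0 - w_lon) * vision_pos[0],
            w_lat * radar_pos[1] + (1.0 - w_lat) * vision_pos[1])

# Objects are dicts: ID -> (longitudinal x, lateral y), in meters.
vision = {0: (21.0, -1.40), 1: (300.0, 0.0)}   # object 1 is out of range
radar  = {0: (20.2, -1.90)}

vision_valid = validity_filter(vision)          # drops object 1
fused = fuse(vision_valid[0], radar[0])         # fuse the common object
print(vision_valid, fused)
```

Modules (d) and (e) are filled in by the multi-rate Kalman filters of Section 4 and the Mahalanobis-distance correspondence of Sections 4.2 and 4.3.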
3. Object Modeling
The input data set of the post-processor consists of the identities (IDs) and information of the objects from each sensor. In this paper, a variable with superscript [lv] denotes a variable from vision corresponding to the object with ID lv, while a variable with superscript [lr] denotes a variable from radar.
The relative distance, angle, and velocity from the ego vehicle to the measured object are obtained from the radar. From polar-to-Cartesian conversion of the (distance, angle) information, we can obtain the longitudinal distance, x, the lateral distance, y, the longitudinal relative velocity, ẋ, and the lateral relative velocity, ẏ. The vision processor gives the relative position (x, y) of the object in Cartesian coordinates. Fig. 2 shows the coordinates of the objects' positions with respect to the ego vehicle, and both sensors' range and field-of-view configurations.
Fig. 2. Configuration of the vision and radar sensors
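The polar-to-Cartesian step can be sketched as the helper below; a minimal Python sketch assuming the bearing θ is measured in radians from the longitudinal axis and that both range rate and bearing rate are available (if a radar reports only the range rate, the θ̇ terms drop out):

```python
import math

def polar_to_cartesian(r, theta, r_dot, theta_dot):
    """Convert radar (range, bearing) and their rates to the object's
    Cartesian position and relative velocity w.r.t. the ego vehicle."""
    x = r * math.cos(theta)          # longitudinal distance
    y = r * math.sin(theta)          # lateral distance
    # Time-differentiate x = r*cos(theta), y = r*sin(theta):
    x_dot = r_dot * math.cos(theta) - r * theta_dot * math.sin(theta)
    y_dot = r_dot * math.sin(theta) + r * theta_dot * math.cos(theta)
    return x, y, x_dot, y_dot

print(polar_to_cartesian(20.0, 0.1, -3.0, 0.0))
```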
Generally, the relative velocity of each object with respect to the ego vehicle does not vary significantly within a short time. Thus, we can describe the motion of object vehicles from the vision processing system using the nonmaneuver target dynamic model [12] in terms of the state vector

x^[lv] = [ x  y  ẋ  ẏ ]^T

as follows

ẋ^[lv](t) = Al x^[lv](t),  yv^[lv](t) = Cv x^[lv](t) + wv(t)    (2)

and its system matrix Al is given by

Al = [ 0 0 1 0 ; 0 0 0 1 ; 0 0 0 0 ; 0 0 0 0 ].

In regard to the measurement matrix, since vision provides the relative position (x, y), we have

Cv = [ I2×2  02×2 ].
wv denotes the measurement noise of vision, which is assumed to be zero-mean Gaussian white noise with covariance Qv, i.e., wv ∼ N(0, Qv).
We can describe the object vehicles from radar in terms of the state vector

x^[lr] = [ x  y  ẋ  ẏ ]^T

as follows

ẋ^[lr](t) = Al x^[lr](t),  yr^[lr](t) = Cr x^[lr](t) + wr(t)    (3)
where the superscript [lr] denotes the object with ID lr and Al is the same as in (2). For radar, we have the output matrix Cr = I4×4 or Cr = [ I2×2 02×2 ]. wr denotes the measurement noise of radar, which is assumed to be zero-mean Gaussian white noise with covariance Qr, i.e., wr ∼ N(0, Qr). Then, for the sampling time Tc of the car ECU, one can obtain the zero-order-hold equivalent discrete-time model matrices (Φl, Cv, Cr) from (Al, Cv, Cr).
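The zero-order-hold discretization has a closed form here: with the constant-velocity system matrix below (state order [x, y, ẋ, ẏ], an assumption consistent with Cr = [I2×2 02×2]), Al is nilpotent, so the matrix exponential truncates exactly after the linear term. A minimal sketch:

```python
import numpy as np

# Continuous-time nonmaneuver (constant-velocity) model, state [x, y, x_dot, y_dot]
A = np.array([[0., 0., 1., 0.],
              [0., 0., 0., 1.],
              [0., 0., 0., 0.],
              [0., 0., 0., 0.]])
Cv = np.hstack([np.eye(2), np.zeros((2, 2))])  # vision measures (x, y)

Tc = 0.01  # ECU sampling time of 10 ms (Section 5)

# ZOH: Phi = expm(A*Tc); since A @ A == 0, the series stops at the linear term.
Phi = np.eye(4) + A * Tc
print(Phi)
```

The same Φl serves both sensors' filters; only the output matrices Cv and Cr differ.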
4. Multi-rate Sensing and Processing
In this section, multi-rate Kalman filters for both the radar and vision systems are proposed to achieve prediction of the objects' motion at the fast sampling rate of the ECU.
- 4.1 Design of Multi-rate State Estimator
Fig. 3 (a) shows the update periods of the ECU, radar, and vision. The radar provides object information at a sampling period of Trad, while the vision processing system measures it at a sampling period of Tcam. As shown in Fig. 3 (a), the sampling times are asynchronous. The vision processing system provides approximately one-sample-delayed information from the viewpoint of the ECU sampling rate. We thus need to design a multi-rate state estimator for each object dynamics (2) and (3) to estimate the objects' position information at the same sampling period Tc as the car ECU.
Fig. 3. Update periods of the ECU, vision, and radar sensors
Assumption 1: The update periods of radar and camera are fixed integer multiples of Tc such that

Tcam = Rmv Tc,  Trad = Rmr Tc,

where Rmv, Rmr ∈ ℕ and Rmv, Rmr ≥ 1. ◇

Then, under Assumption 1, we can represent a time instant t as

t = (kv + i/Rmv) Tcam = (kr + j/Rmr) Trad

for kv, kr = 0, 1, ⋯; i = 0, 1, ⋯, Rmv − 1; and j = 0, 1, ⋯, Rmr − 1,
where kv and kr denote the vision and radar update instants, respectively; i and j indicate the control update instants for the vision and radar, respectively. We design two multi-rate state estimators: one for the vision sensor and the other for the radar sensor. The multi-rate state estimator for the vision consists of two procedures. One is a prediction based on model (2), as (4)-(a):

x̂^[lv](kv, i+1) = Φl x̂^[lv](kv, i),  i = 0, 1, ⋯, Rmv − 1.    (4)-(a)

The prediction error is corrected as (4)-(b) by a multi-rate state estimator gain, Lv, based on the measurement given at the (kv, 0) instant:

x̂^[lv](kv, 0) = x̂^[lv](kv, 0⁻) + Lv ( yv^[lv](kv) − Cv x̂^[lv](kv, 0⁻) ),    (4)-(b)

where x̂^[lv](kv, 0⁻) denotes the state predicted at (kv, 0). Computing the prediction error from the state predicted at (kv, 0) enables the corrected states to avoid oscillation. Similarly, we have the following prediction and correction for the radar system:
x̂^[lr](kr, j+1) = Φl x̂^[lr](kr, j),  j = 0, 1, ⋯, Rmr − 1,
x̂^[lr](kr, 0) = x̂^[lr](kr, 0⁻) + Lr ( yr^[lr](kr) − Cr x̂^[lr](kr, 0⁻) ).    (5)
We apply a discrete-time lifting procedure; then (4) and (5) lead to lifted single-rate models (6) over one sensor update period, with lifted state transition matrices Φl^Rmv for vision and Φl^Rmr for radar. Then, we can reduce the multi-rate problem to a single-rate one and determine Lv and Lr as the steady-state Kalman gains of the lifted models, which guarantees the estimation error dynamics of (4) and (5) to be convergent [9] , [13] . From the designed decentralized multi-rate Kalman filters, we can obtain the synchronized information of the objects' positions at every sampling time Tc, as shown in Fig. 3 (b). The update periods of a radar and camera are generally uncertain and vary at every update instant. The same authors showed that the multi-rate state estimator is robust against violation of Assumption 1, that is, against uncertain update periods [14] .
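The run-time behavior of the decentralized estimator (predict at every ECU tick, correct only when a sensor sample arrives) can be sketched as follows. This is a toy illustration: Rmv = 8 and the fixed gain Lv are assumed placeholder values, not the steady-state Kalman gain of the lifted design:

```python
import numpy as np

Tc, Rmv = 0.01, 8                    # ECU period 10 ms; vision every 80 ms (assumed)
A = np.zeros((4, 4)); A[0, 2] = A[1, 3] = 1.0
Phi = np.eye(4) + A * Tc             # ZOH model (A is nilpotent)
Cv = np.hstack([np.eye(2), np.zeros((2, 2))])
Lv = 0.5 * Cv.T                      # placeholder correction gain (illustrative)

def ecu_tick(x_hat, y_meas=None):
    """One Tc step: correct at a (kv, 0) instant if a sample arrived, then predict."""
    if y_meas is not None:                           # correction, cf. (4)-(b)
        x_hat = x_hat + Lv @ (y_meas - Cv @ x_hat)
    return Phi @ x_hat                               # prediction, cf. (4)-(a)

x_hat = np.zeros(4)                  # estimate of [x, y, x_dot, y_dot]
true_pos = np.array([20.0, -1.5])    # static object at (x, y)
for k in range(5 * Rmv):             # a vision sample only every Rmv-th tick
    x_hat = ecu_tick(x_hat, true_pos if k % Rmv == 0 else None)

print(x_hat[:2])                     # position estimate approaches (20.0, -1.5)
```

Even though the measurement is available only every Rmv-th tick, the estimate is refreshed at every Tc, which is the point of the multi-rate design.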
- 4.2 Correspondence between Measurements and Object Vehicles
For object detection and tracking, the determination of correspondence between measured objects and the object list is a necessary procedure. The vision processor and radar systems each give a list of measured objects at every update instant. In the validity determination step, only the data corresponding to objects within the pre-defined range are passed. In this paper, a variable with superscript [mv] denotes a variable from vision corresponding to the measurement with ID mv, while a variable with superscript [mr] denotes a variable from radar.
Here, n′v and n′r denote the numbers of valid measurements from vision and radar, respectively, and, in general, n′v ≠ n′r. In regard to the measurement correspondence that finds a measurement-to-object-vehicle link, we use the Mahalanobis distance [11] in the vision's correspondence algorithm in Fig. 1 as follows:

dM,v(mv, lv) = [ ( yv^[mv] − Cv x̂^[lv] )^T Qv⁻¹ ( yv^[mv] − Cv x̂^[lv] ) ]^(1/2)

and in the radar's module:

dM,r(mr, lr) = [ ( yr^[mr] − Cr x̂^[lr] )^T Qr⁻¹ ( yr^[mr] − Cr x̂^[lr] ) ]^(1/2)
where Qv and Qr are the measurement noise covariances of the vision and radar systems, respectively. Given an object with ID lv, the measurement ID mv that minimizes the Mahalanobis distance,

argmin_mv dM,v(mv, lv),

corresponds to a candidate measurement for the object vehicle with ID lv. In the case of radar, the same principle is adopted to determine the measurement ID mr corresponding to the vehicle with ID lr:

argmin_mr dM,r(mr, lr).
If the Mahalanobis distance exceeds a threshold value for all existing object vehicles, then either a new object corresponding to the measurement has appeared or an existing object has disappeared [11] . If there are remaining measurements that could not be assigned to any existing predicted object list, they are candidates for new objects. Among the prediction list, if there is a vehicle that is not linked to any measurement, it may have moved out of the range or field of view.
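The gated nearest-neighbor association described above can be sketched as follows; the gate value and noise covariance are illustrative assumptions, not values from the paper:

```python
import numpy as np

def mahalanobis(y, y_pred, Q):
    """Mahalanobis distance between a measurement and a predicted output."""
    r = y - y_pred
    return float(np.sqrt(r @ np.linalg.inv(Q) @ r))

def associate(measurements, predictions, Q, gate=3.0):
    """Link each measurement to the nearest predicted object; measurements
    beyond the gate for every object become new-object candidates."""
    links, new_candidates = {}, []
    for m_id, y in measurements.items():
        dists = {l_id: mahalanobis(y, y_pred, Q)
                 for l_id, y_pred in predictions.items()}
        best = min(dists, key=dists.get) if dists else None
        if best is not None and dists[best] <= gate:
            links[m_id] = best
        else:
            new_candidates.append(m_id)
    return links, new_candidates

Q = np.diag([0.5, 0.2])                          # illustrative noise covariance
preds = {1: np.array([20.0, -1.5]), 2: np.array([45.0, 2.0])}
meas  = {10: np.array([20.3, -1.4]), 11: np.array([80.0, 0.0])}
links, fresh = associate(meas, preds, Q)
print(links, fresh)
```

Here measurement 10 links to object 1, while measurement 11 exceeds the gate for every object and becomes a new-object candidate.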
- 4.3 Vision and Radar Sensor Fusion
A prior procedure to sensor fusion is determining the interaction correspondence between objects from radar and vision. Due to the different ranges of view and fields of view of the radar and camera, the numbers of detected object vehicles may differ. The sensor fusion should be applied to the object vehicles measured by both sensors. Radar treats all crash barriers and reflected returns as objects, while the vision system can give a list containing only object vehicles by using, e.g., the edge-based constraint filter [6] . In general, nv < nr in (7). Through Mahalanobis distance decision analysis, we can get a list of objects that are commonly included in both sensors' lists. We define the Mahalanobis distance for fusion as
dM,f(lv, lr) = [ ( Cv x̂^[lv] − Cr x̂^[lr] )^T ( Qv + Qr )⁻¹ ( Cv x̂^[lv] − Cr x̂^[lr] ) ]^(1/2)    (8)

where Cv = Cr. Then, given an object ID in radar, lr, the vision ID that minimizes (8) is designated as the common ID, l, for fusion as follows:

l = argmin_lv dM,f(lv, lr).    (9)
The radar and vision have complementary characteristics: the radar has a long range of view, but its angular resolution is relatively low compared to the vision system. Thus, the radar sensor has large measurement uncertainty in the lateral direction due to the radar beam-forming problem, while it allows good estimation of the distances and relative velocities of objects in the longitudinal direction of the ego vehicle. The vision sensor's application to object detection results in large longitudinal position error due to digitization error and its short range of view. On the other hand, the camera provides the object vehicles' lateral positions with respect to the ego vehicle with high accuracy compared to the radar.
With consideration of these complementary characteristics, we propose a sensor fusion scheme to combine the estimated states: a higher weighting for the longitudinal (range) data from the radar sensor and a higher weighting for the lateral data from the vision sensor, as follows:

x̂f^[l] = Wv x̂^[lv] + Wr x̂^[lr],    (10)

where Wv and Wr are diagonal weighting matrices satisfying Wv + Wr = I, with the larger entries of Wr on the longitudinal states and the larger entries of Wv on the lateral states.
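The complementary weighting of (10) can be sketched in matrix form; the state order [x, y, ẋ, ẏ] and the specific weight values are assumptions for illustration:

```python
import numpy as np

# Larger weight for radar on the longitudinal states (x, x_dot),
# larger weight for vision on the lateral states (y, y_dot).
Wr = np.diag([0.9, 0.2, 0.9, 0.2])
Wv = np.eye(4) - Wr                  # the weights sum to the identity

x_vision = np.array([21.0, -1.45, 0.1, 0.0])   # vision estimate [x, y, xd, yd]
x_radar  = np.array([20.2, -1.90, 0.0, 0.1])   # radar estimate

x_fused = Wv @ x_vision + Wr @ x_radar
print(x_fused)
```

The fused longitudinal position stays close to the radar estimate and the fused lateral position close to the vision estimate, matching the directional accuracies described above.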
5. Experimental Results
The proposed multi-rate estimators and sensor fusion methods were validated via experiments. The vision processing was updated every 60-80 ms, while the radar signal was updated every 50 ms. The vision sensor had a range of view of 120 m and a field of view of 38 deg, while the radar had a range of view of 200 m and a field of view of 26 deg, as shown in Fig. 2 . In order to evaluate the estimation performance of each method, we adopted the laser scanner as the reference, since the laser scanner has relatively high-accuracy object tracking performance, within 0.04 m.
The vehicle ECU operated at a time period of 10 ms. Fig. 4 shows the objects' position data expressed in a local coordinate system whose origin (0, 0, 0) coincides with the ego vehicle at 3 s. At that moment, there were six vehicles in total, including the ego vehicle, on a straight road with three lanes. Fig. 4 (a) shows the raw data from vision and radar. Fig. 4 (b) shows the measured and the estimated data. Performing the validity determination procedure retained only the valid data within ±5 m along the x-axis and 120 m along the y-axis, respectively. The estimated objects' positions were obtained through multi-rate Kalman filtering. Fig. 4 (c) shows the resulting data after the interaction correspondence and sensor fusion procedures. In the interaction correspondence determination, the common objects from both sensors were retained and given the ID l, equal to lv, by the ID determination rule (9). It was observed that the fusion data was positioned nearer the radar data than the vision data along the y-axis. This fusion result is predictable because we placed a larger weighting on the radar's longitudinal data in consideration of its directional accuracy.
Fig. 4. Experimental result at 3.0 s (+: measurement from vision, *: measurement from radar, ○: estimation from vision, □: estimation from radar, ◇: fusion)
Fig. 5 (a) and (b) show the lateral and longitudinal information of l = 1, respectively. In Fig. 5 , the green solid line shows the fusion data, while the blue, red, and black solid lines show the vision, radar, and laser scanner data, respectively. The object information was updated every 10 ms, as shown in Fig. 5 . The root mean square errors (RMSEs) of the three methods are shown in Table 1 . It was observed that the proposed fusion method had the smallest RMSE among the three methods.
Table 1. Comparison of the three methods (RMSE)

Fig. 5. Experimental result of l = 1
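The RMSE comparison against the laser-scanner reference can be reproduced in form as below; the tracks are hypothetical numbers for illustration, not the paper's data:

```python
import numpy as np

def rmse(estimate, reference):
    """Root mean square error of a position track against a reference track."""
    e = np.asarray(estimate) - np.asarray(reference)
    return float(np.sqrt(np.mean(e ** 2)))

t = np.arange(0.0, 1.0, 0.01)        # 10 ms grid, as in the experiment
reference = 20.0 + 2.0 * t           # laser-scanner longitudinal track (made up)
vision    = reference + 0.80         # constant-offset error tracks (made up)
radar     = reference + 0.30
fused     = reference + 0.15

print(rmse(vision, reference), rmse(radar, reference), rmse(fused, reference))
```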
6. Conclusions
We proposed a multi-rate sensor fusion of vision and radar using a Kalman filter to solve the problems of asynchronous and multi-rate sampling periods in object vehicle tracking. A model-based prediction of object vehicles was performed with a decentralized multi-rate Kalman filter for each sensor (vision and radar). We developed a weighting scheme applied to each sensor's predicted object position from the multi-rate Kalman filter. The Mahalanobis distance was used to establish correspondence among the measured and predicted objects.
Through the experimental results, we validated that the post-processed fusion data gives improved tracking performance. We expect that the proposed multi-rate object tracking method, together with the optimal active steering controller [15] and the road lane estimation [16] , will significantly contribute to realizing autonomous intelligent vehicles.
Acknowledgements
This work was supported by the Industrial Strategic Technology Development Program (10042808, Development of Driver Assistance Systems Using Camera, Radar and Road Characteristics and 10044620, Automatic Lane Change System for Novice Drivers) funded by the Ministry of Trade, Industry and Energy (MOTIE, Korea).
BIO
손 영 섭(Young Seop Son)
Young Seop Son received the B.S. and M.S. degrees in electronics engineering from Sogang University, Seoul, Korea, in 1997 and 1999, respectively. He is currently working toward the Ph.D. degree at Hanyang University. Since 1999, he has been working at the Global R&D Center, Mando Corp. His main research interests include intelligent vehicles, autonomous driving, and robust control. Mr. Son is a member of the Korean Society of Automotive Engineers, the Institute of Control, Robotics and Systems, and the Korean Institute of Electrical Engineers.
김 원 희(Wonhee Kim)
Wonhee Kim received the B.S. and M.S. degrees in electrical computer engineering, and Ph.D. degree in electrical engineering from Hanyang University, Seoul, Korea, in 2003, 2005, and 2012, respectively. Dr. Kim is an assistant professor of the Department of Electrical Engineering, Dong-A University. From 2005 to 2007, he was with Samsung Electronics Co., Korea. In 2012, he was with the Power & Industrial Systems R&D Center of Hyosung Co., Korea. In 2013, Dr. Kim was a post doctoral researcher of the Institute of Nano Science & Technology, Hanyang University and was also a visiting scholar of the Department of Mechanical Engineering, University of California, Berkeley. His current research interests include nonlinear control, nonlinear observer, and their industrial applications.
이 승 희(Seung-Hi Lee)
Seung-Hi Lee received the B.S. degree in mechanical engineering from Korea University, Seoul, and the M.S. degree in mechanical engineering from Seoul National University, Seoul, Korea, in 1985 and 1987, respectively, and the Ph.D. degree in mechanical engineering and applied mechanics from the University of Michigan, Ann Arbor, in 1993. From 1988 to 1989, he was a Research Scientist with the Korea Institute of Science and Technology. Since 1994, he had been with Samsung Advanced Institute of Technology, Korea, where he was a team leader responsible for advanced servomechanical systems. In 2009, he joined Hanyang University, Seoul, Korea, as a Research Professor, where he is also teaching advanced control systems. His research interests include robust sampled-data feedback control of uncertain systems, and application to information storage, automotive, electromechanical, and manufacturing systems. Prof. Lee has served as a Member of the Board of Editors of International Journal of Control, Automation, and Systems.
정 정 주(Chung Choo Chung)
Chung Choo Chung received the B.S. and M.S. degrees in electrical engineering from Seoul National University, Seoul, Korea, and the Ph.D. degree in electrical and computer engineering from the University of Southern California, Los Angeles, in 1993. From 1994 to 1997, he was with Samsung Advanced Institute of Technology, Korea. In 1997, he joined the faculty of Hanyang University, Seoul, Korea. Dr. Chung was an Associate Editor for the Asian Journal of Control (AJC) from 2000 to 2002 and an Editor for the International Journal of Control, Automation and Systems (IJCAS) from 2003 to 2005. He was an Associate Editor for the 2003 IEEE Conference on Decision and Control (IEEE CDC), and an Associate Editor and the Co-Chair of Publicity of the International Federation of Automatic Control (IFAC) World Congress, Korea in 2008. He served as a program committee member of the American Society of Mechanical Engineers (ASME) International Conference on Information Storage and Processing Systems (ISPS) 2011. He was a guest editor for the special issue on advanced servo control for emerging data storage systems published by IEEE Trans. on Control System Technologies (TCST). Currently he is an Associate Editor of TCST and also a Program Co-Chair of 2015 IEEE Intelligent Vehicles Symposium.
References
Gern A. , Franke U. , Levi P. 2000 “Advanced lane recognition-fusing vision and radar,” in Proceedings of IEEE Intelligent Vehicles Symposium 45 - 51
Hofmann U. , Rieder A. , Dickmanns E. 2003 “Radar and vision data fusion for hybrid adaptive cruise control on highways,” Machine Vision and Applications 14 (1) 42 - 49    DOI : 10.1007/s00138-002-0093-y
Kato T. , Ninomiya Y. , Masaki I. 2002 “An obstacle detection method by fusion of radar and motion stereo,” IEEE Transactions on Intelligent Transportation Systems 3 (3) 182 - 188    DOI : 10.1109/TITS.2002.802932
Richter E. , Schubert R. , Wanielik G. 2008 “Radar and vision based data fusion-advanced filtering techniques for a multi object vehicle tracking system,” in Proceedings of IEEE Intelligent Vehicles Symposium 120 - 125
Steux B. , Laurgeau C. , Salesse L. , Wautier D. 2002 “Fade : A vehicle detection and tracking system featuring monocular color vision and radar data fusion,” in Proceedings of IEEE Intelligent Vehicle Symposium 632 - 639
Srinivasa N. 2002 “Vision-based vehicle detection and tracking method for forward collision warning in automobiles,” in Proceedings of IEEE Intelligent Vehicle Symposium 626 - 631
Coifman B. , Beymer D. , McLauchlan P. , Malik J. 1998 “A real-time computer vision system for vehicle tracking and traffic surveillance,” Transportation Research Part C: Emerging Technologies 6 (4) 271 - 288    DOI : 10.1016/S0968-090X(98)00019-9
Amditis A. , Polychronopoulos A. , Karaseitanidis I. , Katsoulis G. , Bekiaris E. 2002 “Multiple sensor collision avoidance system for automotive applications using an IMM approach for obstacle tracking,” in Proceedings of the International Conference on Information Fusion 2 812 - 817
Lee S.-H. , Lee Y. O. , Chung C. C. 2012 “Multi-rate active steering control for autonomous vehicle lane changes,” in Proceedings of IEEE Intelligent Vehicles Symposium 772 - 777
Lee D. , Tomizuka M. 2003 “Multi-rate optimal state estimation with sensor fusion,” in American Control Conference 4 2887 - 2892
Thrun S. , Burgard W. , Fox D. 2006 Probabilistic Robotics The MIT Press
Li X. R. , Jilkov V. P. 2003 “A survey of maneuvering target tracking. Part 1: Dynamic models,” IEEE Transactions on Aerospace and Electronics Systems 39 (4) 1333 - 1364    DOI : 10.1109/TAES.2003.1261132
Lee S. , Chung C. , Suh S. 2003 “Multi-rate digital control for high track density magnetic disk drives,” IEEE Transactions on Magnetics 39 (2) 832 - 837    DOI : 10.1109/TMAG.2003.808935
Lee S.-H. , Lee Y. O. , Son Y. , Chung C. C. 2011 “Robust active steering control of autonomous vehicles: A state space disturbance observer approach,” in Proceedings of International Conference on Control, Automation and Systems 596 - 598
Lee S.-H. , Lee Y. O. , Kim B-A. , Chung C. C. 2012 “Proximate model predictive control strategy for autonomous vehicle lateral control,” in Proceedings of American Control Conference 3605 - 3610
Lee S.-H. , Lee Y. O. , Son Y. , Chung C. C. 2011 “Road lane estimation using vehicle states and optical lane recognition,” in Proceedings of International Conference on Control, Automation and Systems 501 - 506