Estimation of Angular Acceleration By a Monocular Vision Sensor
Journal of Positioning, Navigation, and Timing. 2014. Mar, 3(1): 1-10
Copyright © 2014
  • Received : January 22, 2014
  • Accepted : February 07, 2014
  • Published : March 15, 2014
About the Authors
Joonhoo Lim
School of Electronics, Telecommunication & Computer Engineering, Korea Aerospace University, Goyang 412-791, Korea
Hee Sung Kim
School of Electronics, Telecommunication & Computer Engineering, Korea Aerospace University, Goyang 412-791, Korea
Je Young Lee
School of Electronics, Telecommunication & Computer Engineering, Korea Aerospace University, Goyang 412-791, Korea
Kwang Ho Choi
School of Electronics, Telecommunication & Computer Engineering, Korea Aerospace University, Goyang 412-791, Korea
Sung Jin Kang
School of Electronics, Telecommunication & Computer Engineering, Korea Aerospace University, Goyang 412-791, Korea
Sebum Chun
Korea Aerospace Research Institute, Daejeon 305-806, Korea
Hyung Keun Lee
School of Electronics, Telecommunication & Computer Engineering, Korea Aerospace University, Goyang 412-791, Korea
hyknlee@kau.ac.kr

Abstract
Recently, monitoring of two-body ground vehicles carrying extremely hazardous materials has been considered one of the most important national issues. This issue entails large costs in terms of national economy and social welfare. To monitor and counteract accidents promptly, an efficient methodology is required. For accident monitoring, GPS can be utilized in most cases. However, it is widely known that GPS cannot provide sufficient continuity in urban canyons and tunnels. To complement this weakness of GPS, this paper proposes an accident monitoring method based on a monocular vision sensor. The proposed method estimates angular acceleration from a sequence of image frames captured by a monocular vision sensor. The possibility of using angular acceleration to determine the occurrence of accidents such as jackknifing and rollover is investigated. The feasibility of the proposed method is evaluated by an experiment based on actual measurements.
1. INTRODUCTION
In case of hazardous material spills in urban areas due to accidents of vehicles carrying hazardous materials, the secondary economic and social losses as well as the primary damage from the vehicle accidents are enormous. Therefore, studies on the real-time detection and prevention of accidents of vehicles carrying hazardous materials have been carried out. Kim et al. (2013) proposed a detection technique for vehicles carrying hazardous materials using moving-reference Real-Time Kinematic (RTK) positioning, installing Global Positioning System (GPS) receivers on a tractor and a trailer, respectively, as shown in Fig. 1. Also, Lee et al. (2013b) analyzed the accident detection performance for vehicles carrying hazardous materials based on a Hardware-In-the-Loop Simulation (HILS) experiment. Jackknifing and rollover were defined as the representative accidents of vehicles carrying hazardous materials, and a relative navigation system that combines GPS and an Inertial Navigation System (INS) was used to detect these accidents. A method was proposed that detects jackknifing accidents using the yaw acceleration of the connection angle between a tractor and a semi-trailer, and detects rollover accidents using the roll acceleration of the tractor. A relative navigation system that combines GPS/INS can estimate the relative angular acceleration of two-body vehicles carrying hazardous materials, and thus can be efficiently used for monitoring accidents.
Fig. 1. Navigation system for vehicles carrying hazardous materials (Kim et al. 2013).
GPS can be used to estimate the information necessary for monitoring accidents of vehicles carrying hazardous materials. However, the accuracy and continuity of GPS navigation solutions can deteriorate due to satellite signal blockage, depending on the surrounding environment (Soloviev & Venable 2010). To resolve these problems of GPS, studies on hybrid navigation systems have been actively performed. A hybrid navigation system estimates the position, velocity, and attitude of a vehicle through an efficient combination of various sensors such as GPS, INS, and vision sensors. In recent years, it has been actively studied in relation to the urban navigation of robots and unmanned vehicles. In this regard, Fosbury & Crassidis (2006) proposed a relative navigation method that uses the GPS navigation solution to maintain the accuracy of INS; Wang et al. (2008) proposed a hybrid navigation system for controlling an unmanned aerial vehicle; and Vu et al. (2012) proposed a hybrid navigation system for estimating the position of a vehicle in urban environments. Also, Kim et al. (2011) proposed a hybrid navigation system using GPS, INS, an odometer, and a polyhedral vision sensor to improve navigation performance in urban environments; by estimating the attitude of the vehicle from vanishing points extracted from the polyhedral vision sensor data, positioning accuracy could be maintained even in sections where GPS signals cannot be used.
In this study, as basic research toward improving the performance of the GPS/INS-based relative navigation system used for monitoring accidents of vehicles carrying hazardous materials, a method using a vision sensor is proposed and its feasibility is examined. A relative navigation system that maintains the accuracy of INS in urban areas using GPS suffers from diverging navigation solutions in sections where GPS signal blockage persists for a long period. Therefore, this study evaluates the feasibility of a technique that detects accidents of vehicles carrying hazardous materials by estimating angular acceleration with a monocular vision sensor in environments where the visibility of GPS signals is not secured for a long period.
The contents of this study are as follows. First, for combining a vision sensor with GPS/INS systems, the relevant coordinate systems are defined and the distortion calibration of a vision sensor is explained. Then, the process of extracting relative angular acceleration from the changes in the relative angular velocities of the feature points extracted by a vision sensor is explained. In the experiment, to examine the validity of the proposed technique, performance is evaluated by comparing the accident detection sections estimated using the vision sensor with those extracted from the INS. Using an equipment set simulating a vehicle carrying hazardous materials, the feasibility of the proposed method is evaluated.
2. RELEVANT COORDINATE SYSTEMS
For hybrid navigation that combines various sensors, the relations among the coordinate systems associated with each sensor must first be clearly defined. In this study, five coordinate systems were used, as summarized in Table 1 (Ligorio & Sabatini 2013).
Table 1. Coordinate systems.
Fig. 2 shows the relation among the body frames of the tractor and the trailer carrying hazardous materials and the vision sensor frame. The centers of the body frames of the tractor and the trailer may vary relative to the position of the INS sensor depending on the installation position. The moving direction of the vehicle was set as Xb, the rightward direction of the moving direction as Yb, and the Zb axis by applying a right-handed coordinate system. Also, the center of the vision sensor frame was set at the center of the lens; the direction that views the landmark image was set as Xs, the rightward direction of the viewing direction as Ys, and the Zs axis by applying a right-handed coordinate system. In addition, h is the height of the installed camera, which represents the distance from the ground to the lens of the vision sensor; it can be measured during the installation of the vision sensor.
Fig. 2. Illustration of body and vision frames.
Fig. 3 shows the relation between the three-dimensional vision sensor frame and the two-dimensional pixel frame. f represents the focal length, and the image coordinates of feature points are expressed as points on the image plane.
Fig. 3. Illustration of vision sensor and pixel frames.
Fig. 4 shows the landmark frame, which was used to extract feature points in this study. Based on the center point of the landmark, the upward direction was set as xt and the rightward direction as yt. Also, regarding the geometric characteristics of the feature points of a landmark, the width of the image was expressed as w and the length as l. Both w and l are measured before capturing images.
Fig. 4. Landmark frame.
The relation between the two-dimensional pixel frame and the three-dimensional vision sensor frame can be expressed as Eq. (1) (Lim et al. 2012, Tsai 1987).
[Eq. (1)]
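The equation image for Eq. (1) is not preserved in this version. As a minimal sketch of the standard pinhole relation it expresses (cf. Tsai 1987), assuming the frame conventions of Section 2 and hypothetical parameter values:

```python
def project_to_pixel(p_s, fx, fy, cx, cy):
    """Project a point in the vision sensor frame onto the pixel frame
    using the standard pinhole model. Frame convention from Section 2:
    X_s is the viewing (depth) axis, Y_s points rightward, and Z_s
    completes the right-handed frame."""
    X_s, Y_s, Z_s = p_s
    u = fx * (Y_s / X_s) + cx  # horizontal pixel coordinate
    v = fy * (Z_s / X_s) + cy  # vertical pixel coordinate
    return u, v

# Example: a point 5 m ahead, 0.2 m right of, and 0.1 m below the axis.
u, v = project_to_pixel((5.0, 0.2, 0.1), fx=600.0, fy=600.0, cx=320.0, cy=240.0)
```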
3. CALIBRATION OF THE VISION SENSOR
For an inexpensive vision sensor, it is essential to calibrate distortions that are formed during the manufacturing process. The radial distortion, which occurs due to the shape of the lens, and the tangential distortion, which is formed during the manufacturing process, need to be calibrated (Bradski & Kaehler 2008, Tsai 1987).
- 3.1 Radial Distortion
As shown in Fig. 5, barrel distortion occurs because a ray passing through the periphery of a lens is refracted more than a ray passing through its center. In practice, radial distortion vanishes at the center of the image, where the distance to the center r = 0, and it can be expressed using several terms of a Taylor series as shown in Eq. (2). For an inexpensive vision sensor, the distortion can be expressed using the first two terms. Since radial distortion satisfies f(r) = 0 at r = 0, a0 = 0 must hold. Also, the distortion function is symmetric, so the coefficients of the terms in which r is raised to an odd power must all be 0. Therefore, the characteristics of radial distortion are determined by the coefficients of the terms in which r is raised to an even power; one is k1, and the other is k2.
Fig. 5. Radial distortion.
x_corrected = x (1 + k1 r^2 + k2 r^4)
y_corrected = y (1 + k1 r^2 + k2 r^4)    (2)
where
  • x, y: distorted pixel point
  • x_corrected, y_corrected: undistorted pixel point
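As a concrete illustration of the radial model in Eq. (2), a minimal Python sketch (the function name is ours; note that a rigorous undistortion inverts this mapping iteratively, as OpenCV does internally):

```python
def correct_radial(x, y, k1, k2):
    """Apply the two-term radial correction of Eq. (2) to normalized
    image coordinates: scale by (1 + k1*r^2 + k2*r^4)."""
    r2 = x ** 2 + y ** 2
    scale = 1.0 + k1 * r2 + k2 * r2 ** 2
    return x * scale, y * scale
```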
- 3.2 Tangential Distortion
Besides radial distortion, another common distortion that occurs in an image is the tangential distortion shown in Fig. 6, which is mostly formed during the manufacturing process of a vision sensor. Tangential distortion, which arises because the lens and the image plane are not completely parallel, can be calibrated using the tangential coefficients p1 and p2, as shown in Eq. (3) (Brown 1966).
Fig. 6. Tangential distortion.
x_corrected = x + [2 p1 x y + p2 (r^2 + 2 x^2)]
y_corrected = y + [p1 (r^2 + 2 y^2) + 2 p2 x y]    (3)
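Combining Eqs. (2) and (3) gives the forward Brown (1966) distortion model; a minimal sketch, with variable names ours:

```python
def apply_distortion(x, y, k1, k2, p1, p2):
    """Forward Brown model on normalized coordinates: radial terms of
    Eq. (2) plus tangential terms of Eq. (3)."""
    r2 = x ** 2 + y ** 2
    radial = 1.0 + k1 * r2 + k2 * r2 ** 2
    x_d = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x ** 2)
    y_d = y * radial + p1 * (r2 + 2.0 * y ** 2) + 2.0 * p2 * x * y
    return x_d, y_d
```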
- 3.3 Calibration
As explained earlier, to calibrate the distortion of an inexpensive vision sensor, four intrinsic camera parameters (fx, fy, cx, cy) and four distortion parameters (k1, k2, p1, p2) are required. The intrinsic camera parameters fx and fy represent the focal length of the vision sensor, and cx and cy represent the principal point, which is the actual center position of the vision sensor. In this study, to obtain these eight parameters, OpenCV functions were used (Bradski & Kaehler 2008). Fig. 7 shows the flow chart for obtaining undistorted images based on the camera calibration algorithm implemented using OpenCV functions. For distortion calibration, OpenCV calculates the distortion parameters for a planar object with a grid form using multi-view images. For a planar object whose general geometric form and length are known, the intrinsic parameters of the vision sensor were obtained through an indoor experiment (Lim 2013). In this regard, Fig. 8a shows the input image for calculating the camera parameters, and Fig. 8b shows the undistorted image obtained using the calculated parameters.
Fig. 7. Flow chart for camera calibration using the OpenCV library.
Fig. 8. (a) Input image, (b) undistorted image.
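The calibration flow of Fig. 7 can be reproduced with standard OpenCV calls; a minimal sketch assuming a chessboard-style planar grid and hypothetical file names:

```python
import glob
import cv2
import numpy as np

pattern = (9, 6)  # inner corners of the planar grid (an assumed size)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_pts, img_pts, size = [], [], None
for fname in glob.glob('calib_*.png'):  # hypothetical multi-view images
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)
        size = gray.shape[::-1]

# K holds the intrinsics (fx, fy, cx, cy); dist holds (k1, k2, p1, p2, k3).
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, size,
                                                 None, None)
undistorted = cv2.undistort(cv2.imread('input.png'), K, dist)
```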
4. ESTIMATION OF RELATIVE ANGULAR ACCELERATION
A vision sensor projects three-dimensional object information onto a two-dimensional image via dimension reduction. From another aspect, this indicates that three-dimensional position information is reduced to two-dimensional information; therefore, to recover three-dimensional angular acceleration, a number of intermediate processes and assumptions are required. In this study, to extract three-dimensional angular acceleration from two-dimensional position information, the following procedures are considered.
  • a) Conversion from two-dimensional image information to three-dimensional relative position information for a number of feature points.
  • b) Conversion from relative position information to relative angular velocity information for a number of feature points.
  • c) Conversion from relative angular velocity information to relative angular acceleration information.
For the conversion of a), the length information of a known landmark is used; for the conversion of b), the assumption that each feature point has been extracted from a rigid body is used. Lastly, for the conversion of c), the time interval between the two images from which the relative angles have been extracted is required. First, the four feature points detected in an image can be expressed as coordinate values in the two-dimensional image coordinate system using Eq. (4).
[Eq. (4)]
Also, when the same feature points are represented as coordinate values in the three-dimensional landmark frame, they can be expressed as Eq. (5), where h represents the height of the feature points.
[Eq. (5)]
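Since the equation images are unavailable, the sketch below shows one plausible form of the landmark-frame coordinates in Eq. (5): the four corners of a landmark of known width w and length l (the corner ordering is our assumption):

```python
import numpy as np

def landmark_corners(w, l, h=0.0):
    """Four feature points of a w-by-l landmark in the landmark frame
    of Fig. 4 (x_t upward, y_t rightward), at feature-point height h.
    The corner ordering here is an assumption."""
    return np.array([[ l / 2, -w / 2, h],
                     [ l / 2,  w / 2, h],
                     [-l / 2,  w / 2, h],
                     [-l / 2, -w / 2, h]], dtype=np.float32)
```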
If the perturbation technique is applied to Eqs. (1-5) based on the four feature points within the landmark image, Eqs. (6) and (7) can be obtained.
[Eq. (6)]
[Eq. (7)]
where
  • [Xs ×]: skew-symmetric matrix generated by the vector Xs
Based on the relative position of each feature point estimated in Eq. (7), relative angular velocity can be estimated using Eqs. (8) and (9).
[Eq. (8)]
[Eq. (9)]
From the relative angular velocity obtained by Eq. (9), relative angular acceleration can be estimated using Eq. (10).
[Eq. (10)]
Relative angular acceleration can thus be estimated by time-differencing the relative angular velocity, which in turn is estimated from the changes in the positions of the feature points in two sequentially obtained images.
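The perturbation equations themselves are not preserved in this version. As a hedged illustration of steps a) through c), the sketch below substitutes a standard PnP pose estimate for Eqs. (6)-(7) and forms angular velocity and acceleration by first differences in the spirit of Eqs. (9)-(10); it is not the authors' exact formulation:

```python
import cv2
import numpy as np

def landmark_rotation(img_pts, obj_pts, K, dist):
    """Estimate the landmark-to-camera rotation matrix from the four
    feature points (standard PnP, used here in place of Eqs. (6)-(7))."""
    ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, dist)
    return cv2.Rodrigues(rvec)[0]

def angular_states(R_prev, R_curr, R_next, dt):
    """Relative angular velocity and acceleration by first differences
    of successive rotations, in the spirit of Eqs. (9) and (10)."""
    w1 = cv2.Rodrigues(R_curr @ R_prev.T)[0].ravel() / dt  # rad/s
    w2 = cv2.Rodrigues(R_next @ R_curr.T)[0].ravel() / dt
    return w2, (w2 - w1) / dt                              # rad/s, rad/s^2
```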
5. PERFORMANCE EVALUATION
To evaluate the accuracy of accident detection for vehicles carrying hazardous materials using the proposed technique, an indoor experiment simulating actual environments was performed, as shown in Fig. 9. As the reference for relative angular acceleration, data from an MTi (Xsens) inertial sensor were utilized (Lee et al. 2013a). Table 2 summarizes the detailed specifications of the Inertial Measurement Unit (IMU) used in the experiment; the sampling rate of the IMU data was 100 Hz. Also, an HD Pro Webcam C920 (Logitech) was used as the vision sensor for obtaining landmark images, and the sampling rate of the image data was 10 Hz. An AEK-4T GPS receiver (u-blox) was used for the time synchronization between the 100 Hz IMU data and the 10 Hz image data (Lim 2013).
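As a minimal sketch of aligning the two data streams on the common GPS time base (the exact synchronization scheme of Lim (2013) is not detailed here), linear interpolation of the IMU samples at the image timestamps suffices:

```python
import numpy as np

def align_imu_to_frames(t_frames, t_imu, imu_signal):
    """Resample a 100 Hz IMU signal at the 10 Hz image timestamps, with
    both time vectors expressed in GPS time (seconds)."""
    return np.interp(t_frames, t_imu, imu_signal)
```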
Fig. 9. Simulation environments.
Table 2. Xsens MTi sensor performance.
Fig. 10 shows the detection of the four feature points in an actual image depending on changes in the angle. As shown in Fig. 10, the four feature points could be successfully extracted regardless of the angle of the landmark image. This also suggests that feature points can be extracted when the landmark-based feature point extraction algorithm is applied to actual vehicles carrying hazardous materials.
Fig. 10. Detection of feature points.
Fig. 11a shows the relative angles calculated from the 100 Hz IMU data through 10 Hz time synchronization, Fig. 11b shows the relative angular acceleration calculated from the IMU, and Fig. 11c shows the relative angular acceleration estimated using the vision sensor. Table 3 summarizes the angular acceleration thresholds of yaw and roll for detecting jackknifing and rollover in the experimental environment (Lee et al. 2013a). Fig. 12 shows the accident detection results obtained by applying the thresholds to the estimated angular acceleration; the dots marked "1" represent accident detection sections. Figs. 12a and 12b show the results obtained by applying the thresholds in Table 3 to the IMU data. Also, Tables 4 and 5 summarize the GPS time of each section in which jackknifing and rollover were detected. Using the IMU data, jackknifing accidents were detected in a total of five sections, and rollover accidents in a total of three sections.
Fig. 11. Relative angle and angular acceleration.
Fig. 12. Detection of accidents.
Table 3. Angular acceleration thresholds for IMU data in the simulation environment (Lee et al. 2013a).
Table 4. Jackknifing detected by IMU data.
Table 5. Rollover detected by IMU data.
The accident detection thresholds suggested by Lee et al. (2013a) were calculated based on data obtained from an IMU, and thus are not appropriate for the technique proposed in this study. Therefore, thresholds for accident detection using a vision sensor need to be studied. As shown in Figs. 11b and 11c, the angular acceleration obtained from the IMU and the angular acceleration estimated by the vision sensor were found to differ. This indicates that the three-dimensional angular acceleration estimated using the vision sensor is sensitive to changes in the angles of the two-dimensional feature points, and thus changes abruptly. In other words, for accident detection of vehicles carrying hazardous materials using a vision sensor, new thresholds need to be established. In this study, to establish such thresholds, the IMU accident detection values and accident sections were analyzed. Accident detection thresholds for the vision sensor data were experimentally estimated by analyzing the thresholds in Table 3 and the accident detection sections in Figs. 12a and 12b. The thresholds estimated from the proposed vision sensor data were 0.04 rad/s² for the yaw acceleration and 0.17 rad/s² for the roll acceleration. Fig. 12c shows the results of jackknifing detection using the vision sensor thresholds, and Fig. 12d shows the results of rollover detection. Tables 6 and 7 summarize the GPS time of the accident detection sections obtained using the vision sensor data.
Table 6. Jackknifing detected by vision sensor data.
Table 7. Rollover detected by vision sensor data.
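A minimal sketch of the threshold test applied in Figs. 12c and 12d, using the experimentally estimated vision sensor thresholds (function and variable names are ours):

```python
import numpy as np

YAW_THRESHOLD = 0.04   # rad/s^2, jackknifing (vision-based, this study)
ROLL_THRESHOLD = 0.17  # rad/s^2, rollover (vision-based, this study)

def detect_accidents(yaw_acc, roll_acc):
    """Flag samples whose estimated angular acceleration magnitude
    exceeds the experimentally derived thresholds; "1" marks an
    accident detection section (cf. Fig. 12, Tables 6-7)."""
    jackknifing = (np.abs(yaw_acc) > YAW_THRESHOLD).astype(int)
    rollover = (np.abs(roll_acc) > ROLL_THRESHOLD).astype(int)
    return jackknifing, rollover
```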
When the accident sections detected using the vision sensor data were compared with those from the IMU data, all five jackknifing sections were detected, with over-detection observed at section 5. In the case of rollover, all three sections detected by the IMU were detected, with over-detection observed at sections 1 and 2. To summarize the experimental results, the jackknifing and rollover accident sections detected by estimating accident thresholds from the vision sensor data were consistent with those detected using the IMU data, but showed a tendency toward over-detection. This is because the estimation of angular acceleration using the vision sensor is sensitive to changes in the angles of the two-dimensional feature points. This could be improved by applying a scale factor when estimating the three-dimensional position from the reduced two-dimensional position information.
6. CONCLUSIONS
In this study, to resolve the problems of existing systems that calibrate INS errors in urban areas using GPS, the feasibility of a technique that detects accidents of vehicles carrying hazardous materials using only vision sensor data was examined. Based on the relative angular acceleration estimated using a vision sensor, the proposed technique experimentally estimates the jackknifing and rollover accident detection thresholds of a vehicle carrying hazardous materials and detects accidents. For combining the vision sensor with existing GPS/INS systems, five coordinate systems were defined, and the process of improving the accuracy of feature points via image distortion calibration was studied. Relative angular acceleration estimated using the vision sensor was used so that the previously suggested thresholds based on changes in relative angular acceleration could be utilized. The performance of the proposed algorithm was evaluated by comparing the accident detection sections based on the changes in the relative angular acceleration of the feature points extracted from the vision sensor with the accident detection sections obtained using the IMU data.
The accident detection thresholds calculated using the existing IMU data were not appropriate for the proposed algorithm, and thus vision sensor accident detection thresholds were presented based on actual experimental data. When accident detection was performed with thresholds derived from the vision sensor data in an indoor environment simulating a vehicle carrying hazardous materials and compared with the IMU results, all the accident sections were detected, but several over-detection cases were observed. These over-detections are thought to have occurred in the process of converting reduced two-dimensional image information into three-dimensional information, and also because the experimental environment in which the IMU accident detection values had been estimated differed from the environment of this study.
The feasibility of accident detection for vehicles carrying hazardous materials was examined through an experiment using only data obtained from a vision sensor, and it was found that the calculated accident detection thresholds need to be analyzed with considerably more data. To improve the reliability of vision sensor accident detection, further studies on image processing algorithms robust to surrounding environments are required. The sensitivity of the vision sensor to changes in the brightness and luminosity of surrounding environments could be reduced by further studies on adaptive feature point extraction algorithms using the histograms and changes in the average values of each image frame. Also, the accuracy of accident detection for vehicles carrying hazardous materials could be improved by further studies on detection thresholds to reduce the over-detection tendency.
Acknowledgements
This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology, and by a grant from the National Agenda Project (NAP) funded by the Korea Research Council of Fundamental Science and Technology. The first author was supported by the Expert Education Program of Maritime Transportation Technology (GNSS Area), Ministry of Oceans and Fisheries (MOF) of the Korean government.
BIO
JoonHoo Lim received the M.S. degree from the School of Electronics, Telecommunication & Computer Engineering, Korea Aerospace University in 2013. He is a doctoral course student at the Navigation and Information Systems Laboratory (NISL), Korea Aerospace University. His research interests include hybrid positioning systems, integrity, and their applications.
Hee Sung Kim received the M.S. degree in School of Electronics, Telecommunication & Computer Engineering, Korea Aerospace University in 2009. He is a doctoral course student at Navigation and Information Systems Laboratory (NISL), Korea Aerospace University. His research interests include real-time GNSS Long-baseline, Network and L1 RTK and its applications.
Je Young Lee received the M.S. degree in School of Electronics, Telecommunication & Computer Engineering, Korea Aerospace University in 2011. He is a doctoral course student at Navigation and Information Systems Laboratory (NISL), Korea Aerospace University. His research interests include fault detection, adaptive filtering, INS and its applications.
Kwang Ho Choi received the M.S. degree from the School of Electronics, Telecommunication & Computer Engineering, Korea Aerospace University. Since 2012, he has been a doctoral course student at the Navigation and Information Systems Laboratory (NISL), Korea Aerospace University. His research interests include network-based real-time high resolution TEC map generation and GPS/INS hybrid system applications.
Sung Jin Kang received the B.S. degree in School of Electronics, Telecommunication & Computer Engineering, Korea Aerospace University in 2013. He is a master course student at Navigation and Information Systems Laboratory (NISL), Korea Aerospace University. His research interests include embedded system and its applications.
Sebum Chun received his Ph.D. degree in Aerospace Engineering from Konkuk University in 2008. He is a senior researcher in the Satellite Navigation team, CNS/ATM and Satellite Navigation Research Center, Korea Aerospace Research Institute (KARI). His research interests include GPS, INS, SLAM, and non-linear system state estimation.
Hyung Keun Lee received the Ph.D. degree from the School of Electrical Engineering and Computer Science, Seoul National University, Korea in 2002. Since 2003, he has been with the School of Electronics, Telecommunication and Computer Engineering at Korea Aerospace University, Korea, as an Associate Professor. His research interests include positioning and navigation systems, sensor networks, and avionics.
References
Bradski, G. & Kaehler, A. 2008, Learning OpenCV (Sebastopol, CA: O'Reilly Media, Inc.), pp. 370-404
Brown, D. C. 1966, Decentering distortion of lenses, Photogrammetric Engineering, 32, 444-462
Fosbury, A. M. & Crassidis, J. L. 2006, Kalman Filtering for Relative Inertial Navigation of Uninhabited Air Vehicles, AIAA Guidance, Navigation, and Control Conference, Keystone, CO, Aug 2006, paper #2006-6544. DOI: 10.2514/6.2006-6544
Kim, H. S., Choi, K. H., Lee, J. Y., Lim, J. H., & Chen, S. B. 2013, Relative Positioning of Vehicles Carrying Hazardous Materials Using Real-Time Kinematic GPS, JKGS, 2, 23-31. DOI: 10.11003/JKGS.2013.2.1.019
Kim, S. B., Bazin, J. C., Lee, H. K., Choi, K. H., & Park, S. Y. 2011, Ground Vehicle Navigation in Harsh Urban Conditions by Integrating Inertial Navigation System, Global Positioning System, Odometer and Vision Data, IET Radar, Sonar and Navigation, 5, 814-823. DOI: 10.1049/iet-rsn.2011.0100
Lee, B. S., Chun, S. B., Lim, S., Heo, M. B., & Nam, G. W. 2013, Performance Evaluation of Accident Detection using Hazard Material Transport Vehicle HILS, 2013 KGS Conference, Jeju, Korea, 06-08 Nov. 2013
Lee, J. Y., Kim, H. S., Choi, K. H., Lim, J. H., & Kang, S. J. 2013, Design of Adaptive Filtering Algorithm for Relative Navigation, Proceedings of the World Congress on Engineering and Computer Science, San Francisco, USA, 23-25 October 2013
Ligorio, G. & Sabatini, A. M. 2013, Extended Kalman Filter-Based Methods for Pose Estimation Using Visual, Inertial and Magnetic Sensors: Comparative Analysis and Performance Evaluation, Sensors, 13, 1919-1941. DOI: 10.3390/s130201919
Lim, J. H. 2013, Estimation for Displacement of Vehicle based on GPS and Monocular Vision Sensor, Master's Thesis, Korea Aerospace University
Lim, J. H., Choi, K. H., Kim, H. S., Lee, J. Y., & Lee, H. K. 2012, Estimation of Vehicle Trajectory in Urban Area by GPS and Vision Sensor, The 14th IAIN Congress 2012, Cairo, Egypt, 01-03 Oct. 2012
Soloviev, A. & Venable, D. 2010, Integration of GPS and vision measurements for navigation in GPS challenged environments, IEEE/ION Position Location and Navigation Symposium (PLANS), May 2010, pp. 826-833
Tsai, R. Y. 1987, A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses, IEEE Journal of Robotics and Automation, 3, 323-344. DOI: 10.1109/JRA.1987.1087109
Vu, A., Ramanandan, A., Chen, A., Farrell, J. A., & Barth, M. 2012, Real-time computer vision/DGPS-aided inertial navigation system for lane-level vehicle navigation, IEEE Transactions on Intelligent Transportation Systems, 13, 899-913. DOI: 10.1109/TITS.2012.2187641
Wang, J., Garratt, M., Lambert, A., Wang, J. J., & Han, S. 2008, Integration of GPS/INS/vision sensors to navigate unmanned aerial vehicles, IAPRSSIS, 37(B1), 963-969