State Machine and Downhill Simplex Approach for Vision-Based Nighttime Vehicle Detection
ETRI Journal, April 2014, 36(3): 439-449
Copyright © 2014, Electronics and Telecommunications Research Institute (ETRI)
  • Received : May 28, 2013
  • Accepted : October 09, 2013
  • Published : April 01, 2014
About the Authors
Kyoung-Ho Choi
Do-Hyun Kim
Kwang-Sup Kim
Jang-Woo Kwon
Sang-Il Lee
Ken Chen
Jong-Hyun Park

Abstract
In this paper, a novel vision-based nighttime vehicle detection approach is presented, combining state machines and downhill simplex optimization. In the proposed approach, vehicle detection is modeled as a sequential state transition problem; that is, vehicle arrival, moving, and departure at a chosen detection area. More specifically, the number of bright pixels and their differences, in a chosen area of interest, are calculated and fed into the proposed state machine to detect vehicles. After a vehicle is detected, the location of the headlights is determined using the downhill simplex method. In the proposed optimization process, various candidate headlight positions on the detected vehicles are evaluated, allowing an optimal headlight position to be located. Simulation results are provided to demonstrate the robustness of the proposed approach for nighttime vehicle and headlight detection.
Keywords
I. Introduction
Vision-based vehicle detection systems have been used for various applications such as intelligent vehicles, driver assistance systems [1], [2], autonomous vehicles [3]–[5], traffic monitoring systems [6]–[8], traffic-signal control systems [9], [10], and so on.
In-vehicle cameras are used to detect nearby vehicles, estimate the speed of vehicles, and recognize accidents on the road; thus, providing information to drivers to facilitate collision avoidance and safe driving. In addition, cameras are installed along the road to monitor highways and to estimate the speed of vehicles, traffic volume, and congestion.
In addition to cameras, various sensors have been used in the development of vehicle detection and traffic monitoring systems. The most popular vehicle detection and speed measurement sensor is the loop detector [11] . Others include radar [12] , laser [13] , and magnetic sensors [14] , [15] . The disadvantage of deploying a loop detector system on the road is that traffic flow is interrupted during installation and maintenance. On the contrary, vision-based vehicle detection and monitoring systems can be installed without traffic interruption, and visual information can be applied in many areas, providing live video of traffic and road conditions to drivers and travelers.
Fig. 1. Examples of traffic scenes captured by highway monitoring systems: (a) daytime video and (b) nighttime video.
Vision-based vehicle detection is a challenging problem, because the shape and color of vehicles vary widely and lighting conditions change dynamically at night with the onset of sunset, the introduction of shadows, and so on [16]–[23]. Figure 1 shows typical examples of daytime and nighttime traffic scenes taken from highway monitoring systems.
As shown in Fig. 1(b), vehicle detection at night is difficult due to poor illumination conditions. For instance, headlights from surrounding vehicles affect the performance of conventional vehicle detection systems. Previous studies on vision-based vehicle detection systems are summarized in Table 1. The background image provides critical information for tracking an object, headlights are a salient feature for nighttime vehicle detection, and classifier-based approaches are the most popular for object detection and recognition. Thus, in this paper, vision-based vehicle detection approaches are grouped into three classes. The first class exploits the background information of a traffic video: moving vehicles are extracted from the current frame by background subtraction [16]–[19]. In [16], background subtraction was used to extract a moving vehicle, and vehicle tracking was performed to overcome occlusion and false-detection problems. The disadvantage of background subtraction–based methods is that background images are sensitive to shadow and lighting conditions. The second class is based on specific features of vehicles such as color, shape, headlights, and so on [20], [21]. Objects are segmented out using chosen vehicle features, and headlights are modeled and tracked to locate vehicles at night. In [21], a blob filter and a hypothesis verification step were proposed to locate headlights, exploiting their shape and size. The third class uses classifiers such as PCA, SVM, and Adaboost. In [22], a two-layer vehicle detection system was presented, in which headlights were detected using shape information and the Adaboost algorithm was applied to detect the frontal part of a vehicle. In [23], a wavelet transform followed by PCA was proposed to detect vehicles.
In this paper, a novel state machine–based vehicle detection approach is presented, especially for nighttime video in which headlights cannot easily be segmented out. At a chosen location on the road, vehicle detection is considered as a sequential process of state transition; that is, “no vehicle,” “vehicle arrival,” “vehicle moving,” and “vehicle departure.” More specifically, vehicle detection is modeled as a state transition problem: a vehicle arrives at a chosen region of the road, then the vehicle moves forward, and lastly, the vehicle moves out of the chosen region. The contributions of the proposed approach can be summarized as follows. First, we propose a state machine–based framework for a vision-based vehicle detection system, realized as a two-stage approach consisting of a vehicle detection stage and a headlight-location verification stage. Second, the number of bright pixels and their differences in a chosen area are used as features of the proposed state machine for nighttime vehicle detection. Third, the downhill simplex optimization technique is adopted for accurate headlight detection, with a headlight-detection mask and a cost function designed to find an optimal headlight position.
Table 1. A summary of previous vision-based vehicle detection approaches.

Group 1 — Background subtraction [16], [17]. Features: color information, trajectories of moving objects. Approach: using color information to build a background image and checking trajectories to avoid occlusion and false detection. Strong point: more robust than using a B&W background image.

Group 1 — Shadow removal and background subtraction [18]. Features: lane information, vehicle size and linearity. Approach: after the lane is detected, line-based shadow elimination and Kalman filter–based vehicle tracking are used. Strong point: robust to shadow and occlusion.

Group 1 — Sub-block statistics of an object differ from the background [19]. Features: contrast difference of sub-blocks between frames. Approach: detecting moving objects at the sub-block level and combining them to detect moving objects. Strong point: suitable for detecting various types of objects.

Group 2 — Headlight detection is the key to vehicle detection [20]. Features: shape of and distance between left and right headlights. Approach: headlight segmentation followed by headlight pattern analysis for vehicle detection. Strong point: fast and effective for nighttime vehicle detection.

Group 2 — Headlights are the brightest objects with circular shape [21]. Features: shape, color, and movements of headlights. Approach: a perspective blob filter is used to detect possible headlights, followed by a hypothesis verification step for headlight-pair detection. Strong point: able to detect headlights and taillights at dusk and night.

Group 3 — Adaboost algorithm and two-layer vehicle detection [22]. Features: Haar features and the shape of headlights. Approach: locating possible headlights using shape analysis and adopting the Adaboost framework for vehicle-front recognition. Strong point: robust by adopting classifiers in the second layer.

Group 3 — PCA-based vehicle detection [23]. Features: wavelet transform. Approach: a wavelet transform is adopted to extract features and PCA is used to detect vehicles. Strong point: applicable to static images.

Proposed approach — Modeling vehicle detection as a sequential process of state transition and headlight detection as a downhill simplex optimization. Features: number of bright pixels and their differences. Approach: monitoring statistics of a chosen region of the road and deciding the state of the vehicle detection system, that is, vehicle arrival, moving, and departure. Strong point: simple and easy to implement.
II. Proposed Vehicle Detection Approach
An overview of the proposed state machine–based vehicle detection method and the optimization approach for headlight detection is shown in Fig. 2.
From nighttime video, lighting objects are extracted through a dynamic thresholding operation [24] . Then, the number of bright pixels inside a selected area of interest (AOI) and their differences are used as input data for the proposed state machine. Lastly, the downhill simplex optimization approach is applied for headlight detection. The proposed approach is based on the following analysis of bright pixels.
Fig. 2. Overview of the proposed vehicle and headlight detection system.
- 1. Analysis for Bright Objects
Nighttime vehicle detection is quite different from daytime vehicle detection, because the lighting conditions at night differ markedly from those during the day. In the daytime, vehicles can be easily extracted from a background image; for instance, a vehicle detected from a daytime video is marked by the rectangular box in Fig. 3(c). In nighttime video, however, vehicles cannot be extracted so easily. As shown in Fig. 3(f), the foreground image includes noise due to the headlights of surrounding vehicles.
Fig. 3. Examples of daytime and nighttime vehicle detection: (a) daytime input video; (b) background image of (a); (c) foreground image (a)−(b) with a detected vehicle marked by a yellow rectangle; (d) nighttime input video; (e) corresponding background image of (d); and (f) foreground image (d)−(e) with noise due to headlights.
In this paper, the signal obtained from lighting objects in a selected region is analyzed for nighttime vehicle detection. Because lighting objects are the most salient features of moving vehicles in nighttime video, the signal pattern of lighting objects in the AOI shown in Fig. 4 is analyzed. Consider the three-lane road shown in Fig. 4. A vehicle passes through an AOI of size W × H, marked by the blue rectangle. Clearly, the signals monitored inside the AOI exhibit different patterns as the vehicle passes through. To model the vehicle movement, the number of bright pixels is counted and averaged over the chosen AOI using (1); r(n) = 255 indicates that the whole AOI at frame n is filled with bright pixels, and r(n) = 0 indicates that no bright pixel exists in the AOI.
(1) r(n) = \frac{1}{H \times W} \sum_{i=1}^{W} \sum_{j=1}^{H} L_n(i,j),
where H and W denote the height and width of the AOI, and L_n(i, j) denotes the lighting objects inside the AOI at frame n. To extract the lighting objects L_n(i, j), a dynamic thresholding approach is used to account for various illumination conditions: if the luminance value I_n(i, j) is higher than a chosen threshold, then L_n(i, j) = 255; otherwise, L_n(i, j) = 0 (refer to [24] for detailed information). A typical signal r(n) of a nighttime vehicle is shown in Fig. 5.
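As a rough sketch of this step, the bright-pixel extraction and the per-frame averaging of (1) can be written as follows. This is a minimal Python/NumPy sketch; the adaptive threshold rule and all function names are our assumptions, not the exact dynamic thresholding of [24]:

```python
import numpy as np

def lighting_objects(gray, base_thresh=180):
    """Binarize a grayscale frame into lighting objects L_n(i, j).

    The paper uses the dynamic thresholding of [24]; here we use a simple
    adaptive stand-in (an assumption): the threshold is raised toward the
    frame's own brightness statistics to tolerate varying illumination.
    """
    thresh = max(base_thresh, gray.mean() + 2.0 * gray.std())
    return np.where(gray > thresh, 255, 0).astype(np.uint8)

def r_of_n(frame_gray, aoi):
    """Average brightness indicator of (1) for one frame.

    aoi = (top, left, height, width) of the area of interest.
    Returns 255 when the AOI is fully lit and 0 when no bright pixel exists.
    """
    top, left, h, w = aoi
    L = lighting_objects(frame_gray)[top:top + h, left:left + w]
    return L.sum() / (h * w)
```

Feeding one such value per frame into the state machine of Section II.2 reproduces the signal pattern of Fig. 5.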
There are two peaks: one from headlights reflected on the road and the other from the headlights themselves. The reflected-headlight region has two subregions, H_R1 and H_R2. The value of r(n) increases in region H_R2 and decreases in region H_R1 as a vehicle approaches the AOI. The proposed algorithm detects a vehicle by locating region H_R1 in Fig. 5 and then the location of the headlights.
Fig. 4. Overview of vehicle detection by monitoring a sequential state transition inside an AOI. The red line indicates lane information, and the blue rectangle denotes an AOI for the right-most lane.
Fig. 5. A typical signal r(n) of a vehicle in a nighttime video, showing the pattern of headlights inside an AOI as the vehicle passes through.
- 2. State Machine for Nighttime Vehicle Detection
A state machine is a well-known mathematical model for designing a system with a finite number of states. In [25] and [26], vehicle detection systems using a state-machine approach were proposed. They used a magnetic sensor to build a sensor node and implemented a vehicle-detection algorithm based on a state machine, which is simple and fast enough for deployment in a sensor node. In this paper, the state-machine approach is applied to vision-based vehicle detection problems. Figure 6 shows the diagram of the proposed state machine for vision-based vehicle detection. A detailed explanation of the proposed state machine may be summarized as follows:
  • State S1 is “Calculating r(n).” It checks how much of a chosen AOI is filled with lighting objects at frame n. If r(n) is lower than a chosen threshold THv, the machine stays at state S1, which means no vehicle is present. The maximum value of r(n) is 255 and the minimum is 0, owing to the normalization in (1). If r(n) is higher than THv, the machine jumps to the next state, S2.
  • State S2 is “Count the number of r(n) > THv.” As a vehicle comes closer to the AOI, the value of r(n) increases and stays above the threshold THv; the number of consecutive frames with r(n) higher than THv is counted. When r(n) remains above the threshold continuously more than NA times, the machine jumps to state S3. However, if r(n) falls below THv, it moves to state S4.
  • In state S3, the difference of r(n) is calculated, that is, d(n) = r(n) − r(n − 1), which is designed to detect the point at which the amount of headlight reflection decreases, as shown in Fig. 5. If the difference d(n) is negative, the machine jumps to state S5.
  • In state S4, r(n) is monitored. If r(n) stays below the threshold, the machine moves back to state S1; if r(n) rises above THv again, it jumps back to state S2. State S4 handles noise caused by headlights from vehicles in the next lane.
  • In state S5, a vehicle is determined to be detected when d(n) is negative continuously more than ND times.
Fig. 6. State diagram of the proposed state machine for vehicle detection.
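The five states above can be sketched as a small driver over the per-frame values r(n). This is a hedged Python sketch: the re-entry count used in state S4 and the reset behavior after a detection are our assumptions where the text leaves them open.

```python
def detect_vehicles(r_values, THv=70, NA=3, ND=2):
    """Run the five-state machine of Fig. 6 over a sequence of r(n).

    Returns frame indices at which a vehicle detection fires (state S5).
    THv, NA, ND follow the paper's notation; the defaults are the values
    reported in the experiments (THv = 70, NA = 3, ND = 2).
    """
    detections = []
    state = "S1"
    above = 0   # consecutive frames with r(n) > THv (counted in S2)
    below = 0   # consecutive frames with r(n) < THv (counted in S4)
    neg = 0     # consecutive negative d(n) (counted in S3/S5)
    prev = None
    for n, r in enumerate(r_values):
        d = r - prev if prev is not None else 0.0   # d(n) = r(n) - r(n-1)
        prev = r
        if state == "S1":                # no vehicle: wait for r(n) > THv
            if r > THv:
                state, above = "S2", 1
        elif state == "S2":              # count consecutive frames above THv
            if r > THv:
                above += 1
                if above >= NA:
                    state, neg = "S3", 0
            else:
                state, below = "S4", 1
        elif state == "S3":              # watch d(n) for the reflection drop
            if d < 0:
                neg = 1
                state = "S5"
        elif state == "S4":              # noise handling: bounce back or reset
            if r > THv:
                state, above = "S2", 1
            else:
                below += 1
                if below >= NA:          # reset count: our assumption
                    state = "S1"
        elif state == "S5":              # confirm ND consecutive negative d(n)
            if d < 0:
                neg += 1
                if neg >= ND:
                    detections.append(n)  # vehicle detected at frame n
                    state = "S1"          # post-detection reset: our assumption
            else:
                state, neg = "S3", 0
    return detections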
- 3. Downhill Simplex Algorithm for Headlights Detection
Downhill simplex is an optimization technique that is based on a simplex and does not require derivative operations during the optimization process. Starting from an initial simplex, that is, an object with n + 1 vertices (where n is the number of variables), the algorithm repeatedly replaces the vertex with the highest function value with a new vertex. During the search, operations such as reflection, expansion, contraction, and shrinkage are performed until a local minimum is reached [27]. The proposed state machine outputs detected vehicles; however, locating the position of the headlights requires post-processing. In this paper, a downhill simplex algorithm is adopted to locate the position of headlights. Examples of vehicles obtained as outputs of the proposed state machine are shown in Fig. 7.
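The search operations just described can be illustrated with a minimal, self-contained Nelder-Mead loop using the textbook coefficients. This is a generic sketch of the method in [27], not the authors' implementation:

```python
import numpy as np

def downhill_simplex(f, x0, step=1.0, iters=200):
    """Minimal downhill simplex (Nelder-Mead) sketch: reflection,
    expansion, contraction, and shrinkage on an (n+1)-vertex simplex.
    Coefficients are the textbook values (1, 2, 0.5, 0.5)."""
    n = len(x0)
    simplex = [np.asarray(x0, float)]
    for i in range(n):                       # initial simplex around x0
        v = np.asarray(x0, float).copy()
        v[i] += step
        simplex.append(v)
    for _ in range(iters):
        simplex.sort(key=f)                  # best vertex first
        best, worst = simplex[0], simplex[-1]
        centroid = np.mean(simplex[:-1], axis=0)
        refl = centroid + (centroid - worst)           # reflection
        if f(refl) < f(best):
            exp = centroid + 2.0 * (centroid - worst)  # expansion
            simplex[-1] = exp if f(exp) < f(refl) else refl
        elif f(refl) < f(simplex[-2]):
            simplex[-1] = refl
        else:
            cont = centroid + 0.5 * (worst - centroid)  # contraction
            if f(cont) < f(worst):
                simplex[-1] = cont
            else:                                       # shrink toward best
                simplex = [best] + [best + 0.5 * (v - best)
                                    for v in simplex[1:]]
    return min(simplex, key=f)
```

For example, starting from (10, 10) on the bowl f(x, y) = (x − 3)² + (y + 1)², the routine converges to the minimum near (3, −1) using only function evaluations.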
Fig. 7. Examples of headlights to be located by downhill simplex optimization.
Locating headlights is a challenging problem, because the shape and size of headlights vary. In addition, headlights may not be extracted easily owing to noise pixels from surrounding lighting objects, as shown in Fig. 7. To locate headlights, a simple headlight-detection mask can be designed, as shown in Fig. 8. However, it is necessary to determine the size of the headlights, the distance between the left and right headlights, and the center point between them. The detailed procedure of the headlight detection process may be summarized as follows:
  • The width of a lane is denoted as W, as shown in Fig. 4. Lane information can be extracted in many different ways; for example, through a Hough transform, an inverse perspective transform, or a conventional lane-detection algorithm [28]. Here, H_LW denotes the width of a headlight, where the widths of the left and right headlights are considered equal; H_LH, H_LD, and L_c(x, y) denote the height of the headlights, the distance between the left and right headlights, and the center of a detected pair of headlights, respectively.
  • Regions A1 and A2 are for detecting the left and right headlights, and the center region A3 is for detecting the dark region between the headlights.
  • The headlight detection problem is formulated as an optimization problem in this paper. A cost function f(x, y) is defined as
(2) f(x,y) = f_H(x,y) \cdot f_R(x,y),
where f_H(x, y) and f_R(x, y) denote a normalized headlight-mask function, based on the mask described in Fig. 8, and a reliability measure function, respectively. Each function is defined as follows:
(3) f_H(x,y) = \alpha \cdot f_M(x,y),
(4) f_M(x,y) = \sum_{j=-H_{LH}/2}^{H_{LH}/2} \sum_{i=-(H_{LD}/2 + H_{LW})}^{-H_{LD}/2 - 1} L_c(x+i,\, y+j) \cdot w_1 + \sum_{j=-H_{LH}/2}^{H_{LH}/2} \sum_{i=-H_{LD}/2}^{H_{LD}/2} L_c(x+i,\, y+j) \cdot w_3 + \sum_{j=-H_{LH}/2}^{H_{LH}/2} \sum_{i=H_{LD}/2 + 1}^{H_{LD}/2 + H_{LW}} L_c(x+i,\, y+j) \cdot w_2,
where α, w_1, w_2, and w_3 denote a normalizing factor for the headlight-mask function f_M(x, y), the weights for the left and right headlights, and the weight for the center region between the headlights, respectively. The normalizing factor α is defined as the inverse of the area of the left and right headlight masks; that is, α = 1/(2 · H_LH · H_LW). The weights for the left and right headlights, w_1 and w_2, are positive values.
Fig. 8. Description of a headlight-detection mask. The sizes of the left and right headlights are defined as equal.
However, the weight for the center region, w_3, is negative to penalize bright pixels between the left and right headlights, which prevents a solid white region (for example, one without a dark region in the middle) from being identified as headlights. The reliability measure function f_R(x, y) is defined as
(5) f_R(x,y) = f_B(x,y) \cdot f_S(x,y),
(6) f_B(x,y) = \frac{\sum_{(x,y) \in A1} L(x,y)}{\sum_{(x,y) \in A2} L(x,y)}, \quad \text{for } \sum_{(x,y) \in A1} L(x,y) < \sum_{(x,y) \in A2} L(x,y),
(7) f_S(x,y) = 1 - \frac{\sum_{(x,y) \in A3} L(x,y)}{H_{LH} \times H_{LD}}.
Thus, headlights can be detected by maximizing (2) subject to the given constraints
(8) \text{maximize } f(x,y) = f_H(x,y) \cdot f_R(x,y), \quad \text{subject to } W \cdot 0.1 \le H_{LW} \le W \cdot 0.4, \; W \cdot 0.1 \le H_{LD} \le W \cdot 0.5,
where the width of a headlight, H_LW, and the distance between the left and right headlights, H_LD, are determined based on the width of the detected lane. As described in (5), the reliability measure function comprises two terms. The first is the balance function in (6): it indicates how well the lighting objects in the left and right headlight regions are balanced. The numbers of bright pixels in regions A1 and A2 are counted, and the lower value is divided by the higher value. If they are perfectly balanced, f_B(x, y) = 1; if they are poorly balanced, f_B(x, y) approaches 0. The second is the space-indicating function between the left and right headlights, described in (7), which indicates how much dark space exists between the headlights. If region A3 is totally filled with bright pixels, f_S(x, y) = 0; as the number of bright pixels in region A3 decreases, f_S(x, y) increases toward 1.
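Putting (2)-(7) together, the cost at a candidate headlight center can be evaluated as follows. This is a Python/NumPy sketch; the binary {0, 1} convention for L and the exact region slicing are our assumptions where the paper mixes 0/255 and 0/1 conventions:

```python
import numpy as np

def headlight_cost(L, x, y, HLW, HLH, HLD, w1=1.0, w2=1.0, w3=-2.0):
    """Cost f(x, y) = f_H(x, y) * f_R(x, y) of (2)-(7) at center (x, y).

    L is the thresholded image scaled to {0, 1} (an assumption); (x, y)
    is the candidate center L_c between the headlights; HLW, HLH, HLD are
    the mask width, height, and gap of Fig. 8. The weights w1 = w2 = 1 and
    w3 = -2 follow the values used in the paper's experiments.
    """
    h2, d2 = HLH // 2, HLD // 2
    left  = L[y - h2:y + h2 + 1, x - d2 - HLW:x - d2]          # region A1
    right = L[y - h2:y + h2 + 1, x + d2 + 1:x + d2 + 1 + HLW]  # region A2
    mid   = L[y - h2:y + h2 + 1, x - d2:x + d2 + 1]            # region A3
    alpha = 1.0 / (2.0 * HLH * HLW)                            # normalizer of (3)
    f_H = alpha * (w1 * left.sum() + w2 * right.sum() + w3 * mid.sum())
    s1, s2 = left.sum(), right.sum()
    f_B = min(s1, s2) / max(s1, s2) if max(s1, s2) > 0 else 0.0  # balance (6)
    f_S = 1.0 - mid.sum() / (HLH * HLD)                          # space (7)
    return f_H * f_B * f_S
```

In the full search of (8), candidate widths H_LW in [0.1W, 0.4W] and gaps H_LD in [0.1W, 0.5W] would also be varied, and −f(x, y) minimized by the downhill simplex routine.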
III. Experimental Results
In this section, the performance of the proposed state machine–based vehicle detection approach is presented. The proposed algorithm was tested on highway surveillance video sequences recorded by Korea Expressway Corporation, Rep. of Korea. The test video was recorded for more than two hours at different locations, covering various types of vehicles and headlights. The resolution and frame rate of the recorded video were 720 × 480 and 29.9 frames/sec, respectively. The implementation used Matlab R2010a on a Windows 7 operating system. Lane information was given manually in our experiments, and based on it, an AOI for headlight-pattern analysis was chosen automatically. The height of the AOI was set to one-quarter of the lane width, as shown in Fig. 4, based on an analysis of nighttime video: according to our observations, the lane width and the height of headlights were dependent on each other, and the average height of headlights was about one-quarter of the lane width.
An example vehicle detection result obtained using the proposed approach is shown in Fig. 9. A snapshot of an input video and the corresponding threshold image are shown in Figs. 9(a) and (b), respectively. For each frame, r(n) and its difference d(n) were calculated in the chosen AOI, as shown in Figs. 9(c) and (d). Using the proposed state-machine approach, a vehicle arrival point was determined as the point at which r(n) exceeded the threshold more than three consecutive times, indicated as ‘Arrival’ at frame no. 915. The threshold value THv for vehicle-arrival detection can be chosen from values above one-quarter of the maximum value of r(n); in our simulations, THv = 70 and NA = 3 were chosen experimentally. After the vehicle arrival point was detected, the difference d(n) of r(n) was calculated. When d(n) was negative consecutively more than two times (that is, ND = 2), it was determined that region H_R1 was detected at frame no. 928, as shown in Fig. 9(d). The detected vehicle is displayed in the small window in Fig. 9(a). As a vehicle approached, the number of bright pixels in the AOI, that is, r(n), increased owing to the vehicle's headlights, as shown in Fig. 9, and then decreased as the vehicle moved away from the AOI. However, when a series of vehicles approached, r(n) did not decrease even after a vehicle left the AOI, owing to headlights from the vehicles behind and in the next lane. Figure 10 shows an example of consecutive vehicle detection in the leftmost lane, which indicates that traffic was becoming congested or that vehicles were moving very close to each other. The left and right images show consecutive vehicle detections captured at frames no. 663 and no. 693, and the images in the second row are the corresponding threshold images. The values of r(n) and d(n) of the chosen AOI in the leftmost lane, over 100 frames, are plotted at the bottom of Fig. 10.
As shown in Fig. 10 , values of r ( n ) stayed above the chosen threshold, due to headlights from vehicles behind.
By calculating the difference d(n) of r(n), the departure points of two consecutive vehicles were detected successfully at frames no. 663 and no. 693 with the proposed approach. Although the proposed approach is robust for nighttime vehicle detection using bright pixels, it is sensitive to headlights from vehicles in the next lane. Examples of vehicles falsely detected because of noise from the next lane are shown in Fig. 11.
Fig. 9. An example of vehicle detection using the proposed state-machine-based approach: (a) an input video; (b) corresponding thresholded image for the rectangular region of the chosen AOI in (a); (c) calculated r(n); and (d) d(n). The vehicle arrival and headlight-decreasing points (that is, HR1) were detected at frames no. 915 and no. 928, respectively. Red vertical lines indicate the positions of the detected points.
Fig. 10. An example of consecutive vehicle detection in the leftmost lane, where vehicles were moving close to each other. Two vehicles were detected at frames no. 663 and no. 693. The value r(n) stayed above the chosen threshold continuously; however, d(n) showed negative values, allowing the algorithm to determine vehicle departure points.
Fig. 11. Examples of false-positive vehicle detection.
After vehicles were detected with the state machine–based approach, the headlight location was determined using the proposed downhill simplex method to filter out wrong results, such as those shown in Fig. 11. In our implementation, the cost function −1 × f(x, y) was minimized to find an optimal headlight position. In addition, w_1 and w_2 were set to 1, and w_3 was set to −2. Figure 12 shows an example of the downhill simplex search process.
Figure 12(a) is a snapshot of a detected vehicle, and Fig. 12(b) is the corresponding threshold image, to which the proposed downhill simplex approach was applied. Figure 12(d) shows a 3-D plot of the cost function for various positions (x, y), and the detected headlight location is shown in Fig. 12(c). The performance of the proposed downhill simplex method was tested on the detected vehicles. Various types of vehicles and headlights were included; some examples are shown in Fig. 13, and the headlight-detection rate is given in Table 2. The average headlight-detection rate was about 92.9%. Headlight detection was counted as a success if the detected left and right headlight centers were inside the corresponding true headlights. Some headlights reflected on the road were falsely detected as true headlights; for instance, as shown in Figs. 13(a) and (b), the reflected headlights of an SUV (at row = 2, column = 2) and a bus (at row = 5, column = 2) were detected as headlights. These errors arise because the headlights of buses are reflected on the road farther in front of the vehicle and because the fog lights of SUVs can be detected as headlights. The value of the cost function at the center of the detected headlights is shown under each detected vehicle in Fig. 13(a).
Fig. 12. An example of headlight detection using the proposed downhill simplex method: (a) snapshot of a detected vehicle; (b) corresponding thresholded image; (c) detected headlight position; and (d) an example 3-D plot of the cost function for various positions (x, y) with a chosen headlight.
Table 2. Headlight-detection performance using the proposed downhill simplex method.

Success of headlight detection: 92.9%. Reflected headlights falsely detected: 3.0%.
Fig. 13. Examples of headlight detection using the proposed downhill simplex approach: (a) various types of vehicles, with detected headlight positions indicated by rectangular boxes and the value of the cost function shown under each vehicle as fmin; (b) corresponding threshold images.
In our simulations, the average cost function value for detected headlights was −0.366, and the cost function values for falsely detected vehicles, as shown in Fig. 11 , were higher than −0.02. The proposed headlight-detection approach using the downhill simplex method showed reasonable performance for various types of headlights, as shown in Figs. 13(a) and (b) . Although some headlights were connected with surrounding bright pixels due to headlights from other vehicles, the proposed approach detected headlights robustly.
Fig. 14. Vehicle detection results for different nighttime videos. Frame numbers for images 1 to 4 are 1631, 1642, 1663, and 1666, respectively; for instance, a vehicle is detected in the first lane at frame no. 1631, a vehicle is detected in the second lane at frame no. 1642, and so on. Frame numbers for images 5 to 8, from a different nighttime video, are 882, 894, 901, and 925, respectively; vehicles are detected in lanes 4, 1, 3, and 4, sequentially.
Table 3. Performance comparison of vehicle-detection approaches.

Nighttime vehicle detection — Tracking-based [20]: precision 93.1%, recall 46.7%. Contrast-based [19]: precision 88.6%, recall 85.3%. Proposed approach: precision 94.2%, recall 86.0%.
Daytime vehicle detection — State machine–based approach [29]: precision 98.4%, recall 98.5%.
Examples of final vehicle-detection results of the proposed state machine–based approach followed by the downhill simplex optimization are shown in Fig. 14 ; a rectangular box is shown at the location of detected headlights.
To compare the performance of the proposed approach with other approaches, two state-of-the-art methods (that is, contrast- and tracking-based approaches) were implemented and compared. Table 3 shows the performance of the proposed approach quantitatively.
For vehicle detection, the tracking-based approach showed adequate performance when headlights were segmented out correctly. However, when lighting objects were connected with surrounding bright pixels after the segmentation process, it failed to extract headlights correctly. In our Matlab simulations, the average processing speed of the proposed algorithm was five frames per second. Overall, the proposed state machine–based approach combined with downhill simplex optimization showed reliable performance. The proposed approach was tested using daytime video as well; for daytime vehicle detection with the state machine–based approach, a threshold image after background subtraction was used rather than bright pixels (refer to [29] for details).
IV. Conclusion
Nighttime vehicle detection is a difficult task due to poor illumination conditions and various lighting objects from headlights, lamps, and so on. In this paper, a novel vision-based nighttime vehicle detection approach has been presented for highway monitoring systems. It is particularly targeted to situations in which vehicle headlights cannot be segmented out due to lighting objects from surrounding vehicles. The contributions of this work can be summarized as follows: (a) A state machine–based nighttime vehicle detection approach is proposed. In the proposed approach, vehicle detection is considered as a sequential process of state transition; that is, vehicle arrival, moving, and departure. (b) Because bright pixels are the most salient feature in nighttime video, the number of bright pixels and their differences are used as features for inputs of the proposed state machine. (c) Headlight detection is formulated as an optimization problem, and the downhill simplex approach is adopted to determine the location of headlights. More specifically, a headlight-detection mask is defined, and a novel cost function based on features of headlights is proposed. Various sizes of headlights are evaluated over a detected-vehicle image to determine an optimal set of headlights and their position in the proposed optimization approach.
Simulation results show that the proposed approach is robust for vehicle and headlight detection, even for vehicles whose headlights cannot be segmented out from the surrounding lighting objects. Although the proposed state machine–based approach is simple to implement, it has the disadvantage that vehicle detection is sensitive to headlights from surrounding vehicles. To overcome this limitation, the proposed downhill simplex approach is used as a filter to delete falsely detected vehicles. The proposed approach is based on the analysis of headlights reflected on the road; if headlight information is not available due to a heavy traffic jam, the performance of the proposed approach could be affected. For future research, other types of headlight-detection masks and optimization processes can be investigated to distinguish fog lights and surrounding bright pixels under various weather conditions. In addition, systematic approaches could be investigated to determine the weights of the headlight-detection mask and the threshold values for the proposed state machine–based approach.
This work was supported by the Ministry of Science, ICT and Future Planning/Korea Research Council for Industrial Science and Technology under an intelligent situation cognition and IoT basic technology development project, and also supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2009-0093828).
BIO
khchoi@mokpo.ac.kr
Kyoung-Ho Choi received his BS and MS degrees in electrical and electronics engineering from Inha University, Incheon, Rep. of Korea, in 1989 and 1991, respectively. He received his PhD degree in electrical engineering from the University of Washington, Seattle, USA, in 2002. From January 1991, he was with the Electronics and Telecommunications Research Institute (ETRI), Daejeon, Rep. of Korea, where he was a leader of the telematics content research team. He was also a visiting scholar at Cornell University, Ithaca, NY, USA, in 1995, and spent sabbatical leave at the University of Washington, Seattle, USA, from 2011 to 2013. In March 2005, Dr. Choi joined the Department of Information and Electronic Engineering of Mokpo National University, Mokpo, Rep. of Korea. He was a program co-chair of the 19th Korea-Japan Joint Workshop on Frontiers of Computer Vision (FCV 2013). His research interests include multimedia signal processing, human behavior recognition, MPEG-HEVC, and sensor networks. He is a senior member of IEEE.
dohyun@etri.re.kr
Do-Hyun Kim received his BS and MS degrees in computer science and engineering from Pusan National University, Busan, Rep. of Korea, in 1995 and 1997, respectively. Since 2000, he has been a research member of the Electronics and Telecommunications Research Institute (ETRI), Daejeon, Rep. of Korea. His research interests include databases, ubiquitous sensor networks, and ITS.
kskim@hunsol.com
Kwang-Sup Kim received his BS and MS degrees in electronics engineering from Inha University, Incheon, Rep. of Korea, in 1990 and 1992, respectively. From 1992 to 1998, he worked for Kia Information Systems Inc., Seoul, Rep. of Korea. From 1998 to 2008, he worked for Hitecom Systems Inc., Seoul, Rep. of Korea. Since 2008, he has been a CEO of HUNS Inc., Anyang, Rep. of Korea. For the last 20 years, he has been working in the area of image processing.
jwkwon@inha.ac.kr
Jang-Woo Kwon received his BS, ME, and PhD degrees in electronics engineering from Inha University, Incheon, Rep. of Korea, in 1990, 1992, and 1996, respectively. In 1992, he was a visiting researcher at the Department of Biomedical Engineering of Tokyo University, Tokyo, Japan. From 1996 to 1998, he was a deputy director of the Korea Industrial Property Office, Daejeon, Rep. of Korea, where his responsibility was to examine patents. From 1998 to 2009, he was an associate professor in the Department of Computer Engineering at Tongmyoung University, Busan, Rep. of Korea, and was dean of the Research Institute for Information Engineering Technology at Tongmyoung University from 2002 to 2006. From 2006, he served for six years as a director of the Human Resource Development Division of the National IT Industry Promotion Agency, Daejeon, Rep. of Korea. From 2010 to 2012, he was an associate professor in the Department of Computer Engineering at Kyungwon University, Sungnam, Rep. of Korea. In March 2012, he joined the Department of Computer Information Engineering of Inha University, Incheon, Rep. of Korea. His research area is in sensor networks and human-computer interaction using biological signals. For the last 20 years, he has been working in biological signal analysis and its recognition using artificial intelligence.
leesi@mokpo.ac.kr
Sang-Il Lee received his BS and MS degrees in electronics engineering from Korea University, Seoul, Rep. of Korea, in 1991 and 1994, respectively. He received his PhD degree in electrical engineering from the University of Washington, Seattle, WA, USA, in 2002. In 2003, he joined the faculty of Mokpo National University, Mokpo, Rep. of Korea, where he is currently a professor in the Division of Information and Electronics Engineering. His main research interests are in the areas of wireless communications and digital systems.
chenken@nbu.edu.cn
Ken Chen received his MS in mechanical engineering from the University of Akron, OH, USA, in 1996 and his PhD in mechanical and aerospace engineering from West Virginia University, Morgantown, WV, USA, in 2000. He was a visiting scholar at the University of Washington, Seattle, WA, USA, in 2012. Currently, he serves as an associate professor at the Faculty of Information Science and Engineering, Ningbo University, Ningbo, Zhejiang, China. His research interests include image and video processing, artificial intelligence, and so on.
jhp@etri.re.kr
Jong-Hyun Park received his BS in space science from Kyung Hee University, Seoul, Rep. of Korea, in 1989 and his MS in astronomy and meteorology from Yonsei University, Seoul, Rep. of Korea, in 1991. He received his PhD degree in environmental engineering from Chiba University, Chiba Prefecture, Japan in 2000. Since 1991, he has been with Electronics and Telecommunications Research Institute (ETRI), where he is a director of the Intelligent Cognitive Technology Research Dept. His research interests include location-based services, robot systems, and spatial information.
References
McCall J.C. , Trivedi M.M. 2006 “Video-Based Lane Estimation and Tracking for Driver Assistance: Survey, System, and Evaluation,” IEEE Trans. Intell. Transp. Syst. 7 (1) 20 - 37    DOI : 10.1109/TITS.2006.869595
Chen Y.-L. “Nighttime Vehicle Detection for Driver Assistance and Autonomous Vehicles,” Int. Conf. Pattern Recogn. Hong Kong, China 1 687 - 690
Mandelbaum R. “Vision for Autonomous Mobility: Image Processing on the VFE-200,” IEEE Int. Symp. CIRA Gaithersburg, MD, USA Sept. 14–17, 1998 671 - 676
Zhao G. , Yuta S. “Obstacle Detection by Vision System for an Autonomous Vehicle,” Intell. Veh. Symp. July 14–16, 1993 31 - 36
Bertozzi M. , Broggi A. , Fascioli A. “Vision-Based Intelligent Vehicles: State of the Art and Perspectives,” Robot. Auton. Syst. 32 1 - 16    DOI : 10.1016/S0921-8890(99)00125-6
Kiratiratanapruk K. , Siddhichai S. “Practical Application for Vision-Based Traffic Monitoring System,” Int. Conf. Electr. Eng./Electron. Comput. Telecommun. Inf. Tech. Pattaya, Thailand May 6–9, 2009 2 1138 - 1141
Semertzidis T. 2010 “Video Sensor Network for Real-Time Traffic Monitoring and Surveillance,” Intell. Transp. Syst. IET 4 (2) 103 - 112    DOI : 10.1049/iet-its.2008.0092
Feris R. “Large-Scale Vehicle Detection in Challenging Urban Surveillance Environments,” IEEE Workshop Appl. Comp. Vision Kona, HI, USA Jan. 5–7, 2011 527 - 533
Deng L.Y. “Vision Based Adaptive Traffic Signal Control System Development,” Int. Conf. Adv. Inf. Netw. Appl. Mar. 28–30, 2005 2 385 - 388
Malhi M.H. “Vision Based Intelligent Traffic Management System,” Frontiers Inf. Technol. Islamabad, Pakistan Dec. 19–21, 2011 137 - 141
Coifman B. , Lee H. “Analytical Tools for Loop Detectors and Traffic Monitoring Systems,” IEEE Intell. Transp. Syst. Conf. Seattle, WA, USA 1086 - 1091
Shbat M.S. , Tuzlukov V. “Generalized Detector with Adaptive Detection Threshold for Radar Sensors,” Int. Radar Symp. Warsaw, Poland May 23–25, 2012 91 - 94
Hussain K.F. , Moussa G.S. 2005 “Automatic Vehicle Classification System Using Range Sensor,” Int. Conf. Inf. Technol.: Coding Comp. 2 107 - 112
Sifuentes E. , Casas O. , Pallas-Areny R. 2011 “Wireless Magnetic Sensor Node for Vehicle Detection with Optical Wake-Up,” IEEE Sensors J. 11 (8) 1669 - 1676    DOI : 10.1109/JSEN.2010.2103937
Barbagli B. “A Real-Time Traffic Monitoring Based onWireless Sensor Network Technologies,” IWCMC Istanbul, Turkey July 4–8, 2011 820 - 825
Lin S.P. , Chen Y.H. , Wu B.F. “A Real-Time Multiple-Vehicle Detection and Tracking System with Prior Occlusion Detection and Resolution, and Prior Queue Detection and Resolution,” Int. Conf. Pattern Recogn. Hong Kong, China 1 828 - 831
Song X. , Nevatia R. 2007 “Robust Vehicle Blob Tracking with Split/Merge Handling,” Int. Evaluation, Conf. CLEAR 216 - 222
Hsieh J.W. 2006 “Automatic Traffic Surveillance System for Vehicle Tracking and Classification,” IEEE Trans. Intell. Transp. Syst. 7 (2) 175 - 187    DOI : 10.1109/TITS.2006.874722
Huang K. 2008 “A Real-Time Object Detecting and Tracking System for Outdoor Night Surveillance,” Pattern Recogn. 41 (1) 432 - 444    DOI : 10.1016/j.patcog.2007.05.017
Chen Y.-L. 2011 “A Real-Time Vision System for Nighttime Vehicle Detection and Traffic Surveillance,” IEEE Trans. Ind. Electron. 58 (5) 2030 - 2044    DOI : 10.1109/TIE.2010.2055771
Schamm T. , von Carlowitz C. , Zollner J.M. “On-Road Vehicle Detection during Dusk and at Night,” IEEE Intell. Veh. Symp. San Diego, CA, USA June 21–24, 2010 418 - 423
Wang W. “A Two-Layer Night-Time Vehicle Detector,” Int. Conf. DICTA Melbourne, VIC, Australia Dec. 1–3, 2009 162 - 167
Wu J. , Zhang X. , Zhou J. “Vehicle Detection in Static Road Images with PCA-and-Wavelet-Based Classifier,” IEEE Intell. Transp. Syst. Oakland, CA, USA 740 - 744
Cheng H.Y. , Hsu S.H. 2011 “Intelligent Highway Traffic Surveillance with Self-Diagnosis Abilities,” IEEE Trans. Intell. Transp. Syst. 12 (4) 1462 - 1472    DOI : 10.1109/TITS.2011.2160171
Cheung S.Y. , Varaiya P. 2007 “Traffic Surveillance by Wireless Sensor Networks: Final Report,” California PATH Research Report UCB-ITS-PRR-2007-4 U.C. Berkeley, CA, USA
Knaian A.N. 2000 A Wireless Sensor Network for Roadbeds and Intelligent Transportation Systems, MS thesis Dept. Electr. Eng. Comput. Sci., MIT Cambridge, MA, USA
Ke S.R. “View-Invariant 3D Human Body Pose Reconstruction Using a Monocular Video Camera,” ACM/IEEE ICDSC Ghent, Belgium Aug. 22–25, 2011 1 - 6
Choi K.-H. 2010 “Methods to Detect Road Features for Video-Based In-Vehicle Navigation Systems,” J. Intell. Trans. Syst. 14 (1) 13 - 26    DOI : 10.1080/15472450903386005
Kim D.H. 2013 “Finite State Machine for Vehicle Detection in Highway Surveillance Systems,” Korea-Japan Joint Workshop FCV Incheon, Rep. of Korea 84 - 87