Development of an Edge-Based Algorithm for Moving-Object Detection Using Background Modeling
Journal of Information and Communication Convergence Engineering. 2014 Sep; 12(3): 193-197
Copyright © 2014, The Korea Institute of Information and Communication Engineering
This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
  • Received : January 06, 2014
  • Accepted : April 07, 2014
  • Published : September 30, 2014
About the Authors
Won-Yong Shin
M. Humayun Kabir
M. Robiul Hoque
Sung-Hyun Yang
shyang@kw.ac.kr

Abstract
Edges are a robust feature for object detection. In this paper, we present an edge-based background modeling method for the detection of moving objects. The edges in the image frames were mapped using a robust Canny edge detector. Two edge maps were created and combined to calculate the ultimate moving-edge map. A temporary background-edge map was created by selecting all the edge pixels of the current frame farther than a defined threshold distance from the ultimate moving edges. If the frequencies of the temporary background edge pixels over several frames were above the threshold, those edge pixels were treated as background edge pixels. We conducted a performance comparison with previous works. Existing edge-based moving-object detection algorithms have difficulty with changes in background motion, object shape, illumination, and noise. The result of the performance evaluation shows that the proposed algorithm can detect moving objects efficiently in real-world scenarios.
I. INTRODUCTION
Moving-object detection methods detect moving objects by subtracting the background from the current image. The performance of moving-object detection therefore depends on background modeling. A background model should overcome illumination variation of the background as well as noise. Such modeling methods can be classified into two types, namely pixel-based and edge-based methods, depending on the features used for the detection of a moving object. Pixel-based background modeling must account for changes in illumination and noise; these changes can produce false moving regions in the background, which affect the moving-object detection. There has been a large amount of work addressing the issues of background model representation and adaptation in pixel-based methods [1 - 5] . Edge-based methods use edges, which are less sensitive to intensity changes [6 - 10] . These methods also work with fewer pixels than pixel-based methods.
Dailey et al. [6] presented an algorithm that uses the interframe differences of the three consecutive frames in order to obtain two difference images. Here, a Sobel edge detector was applied to the resulting images, and a threshold was applied on the resultant images to create binary images. Finally, the two binary images were intersected to obtain a moving-edge map. Kim and Hwang [7] presented an algorithm for the segmentation of moving objects with a robust double-edge map that is derived from the difference between two successive frames. After removing the edge points that belong to the previous frame, the remaining-edge map and the moving-edge map were combined to compute the final moving-edge map. Absolute background edges could be extracted from the first frame or by counting the number of edge occurrences for each pixel through the first several frames. This initialization of the background creates false-positive edges because it is impossible to obtain a background without moving objects in a real-world environment. Further, these two methods are sensitive to variations in the shape of the moving objects and to noise. These methods do not apply any background modeling; therefore, the detection of slow-moving objects is not possible. These limitations can be overcome by background modeling.
In this paper, we present an edge-based background modeling method based on the edge map of an interframe difference image that overcomes illumination variation, moving objects, and edge problems. We use a Canny edge detector to map the edges in the image frames. We create two edge maps: a changing moving-edge map and a stationary moving-edge map. These two edge maps are combined to calculate the ultimate moving-edge map. A temporary background-edge map is created by selecting all the edge pixels of the current frame beyond the threshold distance from the ultimate moving edges. The frequencies of the temporary background edge pixels over several frames are calculated. If the frequencies are above the threshold, then these edge pixels are treated as the background edge pixels and stored as a new background. Using this updated background, we can detect a stationary moving edge efficiently.
The rest of this paper is organized as follows: In Section II, we discuss the details of the proposed algorithm. Section III presents the results of the performance evaluation, followed by the overall conclusion in Section IV.
II. PROPOSED METHOD
The first stage of the proposed background modeling is edge detection. We used a Canny edge detector [11] in this stage, which executes five separate steps: smoothing, finding gradients, non-maximum suppression, double thresholding, and edge tracking by hysteresis. In the smoothing step, the image is blurred to remove noise. In the gradient-finding step, a gradient operation is applied to the Gaussian-convolved image G*F. Non-maximum suppression is applied to the gradient magnitude to thin the edges. Double thresholding with hysteresis is applied to detect and link edges. The Canny edge map can be expressed as follows:
En = ϕ(Fn),

where ϕ(·) denotes the Canny edge operator and Fn is the current frame.
The edge extraction from the difference image in the successive frames results in a noise-robust difference edge map DEn because the Gaussian convolution included in the Canny operator suppresses the noise in the luminance difference.
DEn = ϕ(|Fn − Fn-1|)
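As a rough illustration of these two edge maps, the following Python sketch computes an edge map from a frame and from an inter-frame difference. The simplified operator below (binomial smoothing, central-difference gradients, a single threshold) merely stands in for the full Canny operator ϕ, which additionally performs non-maximum suppression and hysteresis linking; the function names and threshold values are illustrative assumptions, not from the paper.

```python
import numpy as np

def edge_map(frame, high=60.0):
    """Simplified stand-in for the Canny operator phi: 3x3 binomial
    smoothing (a small-Gaussian approximation), central-difference
    gradients, and a single magnitude threshold in place of full
    non-maximum suppression and hysteresis."""
    f = frame.astype(np.float64)
    k = np.array([1.0, 2.0, 1.0]) / 4.0
    # separable smoothing: rows, then columns
    s = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, f)
    s = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, s)
    gx = np.zeros_like(s)
    gy = np.zeros_like(s)
    gx[:, 1:-1] = s[:, 2:] - s[:, :-2]   # horizontal central difference
    gy[1:-1, :] = s[2:, :] - s[:-2, :]   # vertical central difference
    mag = np.hypot(gx, gy)               # gradient magnitude
    return mag > high                    # boolean edge map E_n

def difference_edge_map(frame_n, frame_prev):
    """DE_n = phi(|F_n - F_{n-1}|): edges of the inter-frame difference,
    where the Gaussian smoothing inside phi suppresses luminance noise."""
    diff = np.abs(frame_n.astype(np.float64) - frame_prev.astype(np.float64))
    return edge_map(diff)
```

Because the edge operator is applied to the difference image rather than to raw luminance, small intensity fluctuations between frames are smoothed away before thresholding, which is the noise-robustness argument made above.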
Fig. 1 shows the block diagram of the proposed moving-object detection algorithm. We extract the moving edge MEn of the current frame Fn using the difference edge map DEn , the current frame’s edge map En = ϕ(Fn), and the background-edge map Eb . Before modeling the background, the background-edge map can be extracted from the first frame.
Fig. 1. Block diagram of the proposed moving-object detection algorithm.
We define the edge map En = {e1 , e2 , e3 , …, ek } as the set of all edge points detected by the Canny operator in the current frame Fn . Similarly, we denote the set of l moving-edge points by MEn = {m1 , m2 , …, ml }, where l ≤ k and MEn ⊆ En . The moving-edge points in MEn cover both the interior edge points and the boundary edge points of the moving object. DEn denotes the set of all pixels belonging to the edge map of the difference image. The moving-edge map generated by edge change is given by selecting all edge pixels within a small distance Tchange of DEn , i.e.,
MEchanging = { e ∈ En : min d∈DEn ||e − d|| < Tchange }
For selecting the stationary moving edges, the edge points of the current frame are compared with the previous moving-edge map MEn-1 . We define the stationary moving-edge map MEstationary as the set of all edge points of the current frame that lie within the distance Tstationary of MEn-1 :
MEstationary = { e ∈ En : min m∈MEn-1 ||e − m|| < Tstationary }
The ultimate moving-edge map for the current frame is obtained by combining the two maps:
MEn = MEchanging ∪ MEstationary
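The three set definitions above can be sketched in code. The paper does not specify the distance metric, so the helper below approximates "within distance T" by T rounds of 8-neighbor dilation (Chebyshev distance); the function and parameter names, and the default threshold values, are assumptions made for illustration.

```python
import numpy as np

def within_distance(mask, t):
    """True where a pixel lies within Chebyshev distance t of any True
    pixel of `mask` (a cheap stand-in for a distance transform)."""
    out = mask.copy()
    for _ in range(int(t)):
        grown = out.copy()
        # 4-neighbor growth
        grown[1:, :] |= out[:-1, :]; grown[:-1, :] |= out[1:, :]
        grown[:, 1:] |= out[:, :-1]; grown[:, :-1] |= out[:, 1:]
        # diagonal growth completes the 8-neighborhood
        grown[1:, 1:] |= out[:-1, :-1]; grown[:-1, :-1] |= out[1:, 1:]
        grown[1:, :-1] |= out[:-1, 1:]; grown[:-1, 1:] |= out[1:, :-1]
        out = grown
    return out

def ultimate_moving_edges(e_n, de_n, me_prev, t_change=2, t_stationary=2):
    """ME_n = ME_changing ∪ ME_stationary, per the set definitions above.
    All arguments are boolean edge maps of equal shape: e_n is E_n,
    de_n is DE_n, and me_prev is ME_{n-1}."""
    me_changing = e_n & within_distance(de_n, t_change)
    me_stationary = e_n & within_distance(me_prev, t_stationary)
    return me_changing | me_stationary
```

Note that the changing term keeps only current-frame edge pixels near the difference edges (fast motion), while the stationary term keeps current-frame edge pixels near the previous moving edges, which is what allows a slowed or stopped object to remain detected.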
The temporary background-edge map Etb is given by selecting all edge pixels of the current frame that lie farther than the distance Tback from MEn , i.e.,
Etb = { e ∈ En : min m∈MEn ||e − m|| > Tback }
For modeling the background, we counted the frequencies of the temporary background-edge map’s pixels over 200 frames. If the frequency of an edge pixel exceeds the threshold, then the edge pixel is considered a background edge pixel and stored in a new background-edge map.
Eb = { e ∈ Etb : count(e) > Tfreq },

where count(e) is the number of frames, out of 200, in which e appears in Etb and Tfreq is the frequency threshold.
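A minimal sketch of this frequency-based background update follows, assuming a per-pixel counter that is evaluated and reset after each 200-frame window. The class name and the frequency-threshold value (150) are assumptions for illustration; the paper specifies only the 200-frame window.

```python
import numpy as np

class EdgeBackgroundModel:
    """Counts, per pixel, how often it appears in the temporary
    background-edge map E_tb over a window of frames; pixels whose
    count exceeds `freq_threshold` become the background-edge map E_b."""

    def __init__(self, shape, window=200, freq_threshold=150):
        self.counts = np.zeros(shape, dtype=np.int32)
        self.window = window
        self.freq_threshold = freq_threshold
        self.seen = 0
        self.background = np.zeros(shape, dtype=bool)

    def update(self, e_tb):
        """Accumulate one temporary background-edge map (boolean array)."""
        self.counts += e_tb.astype(np.int32)
        self.seen += 1
        if self.seen == self.window:
            # promote frequent temporary background edges to E_b,
            # then start a fresh counting window
            self.background = self.counts > self.freq_threshold
            self.counts[:] = 0
            self.seen = 0
        return self.background
```

A pixel that belongs to a transient object appears in Etb only briefly, so its count stays below the threshold; only persistently static edges survive into Eb.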
This updated background is used for detecting the stationary moving-edge map. For the extraction of moving objects, we use the connected-component algorithm [6] . After the extraction of the moving objects, morphological operations are applied in the post-processing to remove the noise regions.
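The post-processing step can be sketched as follows: a plain BFS-based 8-connected component labeling over the binary object map, with small regions dropped as noise in place of the morphological cleanup. The helper name and the minimum-size threshold are illustrative assumptions.

```python
import numpy as np
from collections import deque

def connected_components(mask, min_size=20):
    """Label 8-connected regions of a boolean mask via breadth-first
    search, zeroing out regions smaller than `min_size` pixels (a simple
    stand-in for the paper's morphological noise removal)."""
    labels = np.zeros(mask.shape, dtype=np.int32)
    next_label = 0
    h, w = mask.shape
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and labels[sy, sx] == 0:
                next_label += 1
                labels[sy, sx] = next_label
                q = deque([(sy, sx)])
                region = [(sy, sx)]
                while q:                       # BFS over the 8-neighborhood
                    y, x = q.popleft()
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if (0 <= ny < h and 0 <= nx < w
                                    and mask[ny, nx] and labels[ny, nx] == 0):
                                labels[ny, nx] = next_label
                                q.append((ny, nx))
                                region.append((ny, nx))
                if len(region) < min_size:     # suppress small noise regions
                    for y, x in region:
                        labels[y, x] = 0
    return labels
```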
III. RESULTS AND ANALYSIS
We used the datasets from Performance Evaluation of Tracking and Surveillance (PETS) 2001 [12] dataset 3 (DS3) and dataset 4 (DS4), and PETS 2009 views 1, 5, and 6 [13] . The PETS 2001 datasets are composed of five separate data sequences. All the datasets are multi-view (two cameras) and contain moving people and vehicles. DS3 has a more challenging sequence in terms of multiple targets and significant lighting variation. The PETS 2009 datasets are multi-sensor sequences containing different crowd activities (walking around, standing, etc.). These datasets were captured from eight viewpoints. The PETS 2009 datasets lack ideal frames and have more challenging sequences for modeling backgrounds [14] . Using the ground truth [15] , we built the ground-truth edges of these datasets. We compared our test results with the results of two other edge-based methods: Dailey et al. [6] and Kim and Hwang [7] , as shown in Fig. 2 .
Fig. 2. Results of the comparison of the proposed method and the methods developed by Dailey et al. [6] and Kim and Hwang [7]. Each column represents one method; the rows show: (a) frame 1397 from dataset 3 of Performance Evaluation of Tracking and Surveillance (PETS) 2001, (b) frame 1464 from dataset 3 of PETS 2001, (c) frame 1067 from dataset 4 of PETS 2001, (d) frame 468 from view 1 of PETS 2009, (e) frame 716 from view 1 of PETS 2009, (f) frame 12 from view 5 of PETS 2009, and (g) frame 32 from view 5 of PETS 2009.
Fig. 2 shows the column-wise comparison result of the proposed method and the methods developed by Dailey et al. [6] and Kim and Hwang [7] . The first column shows the image frames; the second column, the ground-truth images; the third column, the object-edge maps detected by the method developed by Dailey et al. [6] ; the fourth column, the object-edge maps detected using the method developed by Kim and Hwang; and the last column, the object-edge maps detected by using the proposed method. The comparison result shows that the methods developed by Dailey et al. [6] and Kim and Hwang [7] produce more scattered edges than the proposed method. The proposed method suppresses the false-positive edges and removes the scattered edges from the object-edge map through background modeling.
IV. CONCLUSIONS
We proposed an edge-based method to model the background for the detection of moving objects. We applied background modeling and updated the background after every 200 frames. This method is more robust to shape changes and noise, and detects slow-moving objects better, than the earlier methods. The proposed algorithm can be used in different applications, such as surveillance and content-based video coding.
Acknowledgements
This work was supported by the Industrial Strategic Technology Development Program (No. 10041788, Development of Smart Home Service based on Advanced Context- Awareness) funded by the Ministry of Trade, Industry & Energy of Korea and the Research Grant of Kwangwoon University 2014.
BIO
Won-Yong Shin
received his Bachelor of Engineering degree in Computer Engineering from Kwangwoon University, Seoul, Republic of Korea in 2009. Now he is pursuing his master’s degree at Kwangwoon University, Seoul, Republic of Korea. His main research interests are object detection, context-aware systems, and sensor networks.
M. Humayun Kabir
received his B.Sc. (Hon’s) and M.Sc. in Applied Physics and Electronics and Communication Engineering from Islamic University, Kushtia, Bangladesh, in 2001 and 2003, respectively. He was Assistant Professor at Islamic University, Kushtia, Bangladesh. Currently, he is pursuing his doctoral degree at Kwangwoon University, Republic of Korea. His main research interests are moving-object detection, context-aware systems for the Smart Home, embedded systems, M2M, and sensor networks.
M. Robiul Hoque
received his B.Sc. (Hon’s) and M.Sc. in Computer Science and Engineering from the Department of Computer Science and Engineering, Islamic University, Kushtia, Bangladesh, in 2003 and 2004, respectively. He was Assistant Professor at Islamic University, Kushtia, Bangladesh. Currently, he is pursuing his doctoral degree at Kwangwoon University, Republic of Korea. His current research interests include context-aware systems, sensor networks, image processing, and speech processing.
Sung-Hyun Yang
received his B.S. and M.S. in Electrical Engineering from Kwangwoon University, Seoul, Republic of Korea, in 1983 and 1987, respectively. He completed his Ph.D. from Kwangwoon University in 1993. Currently, he is Professor at the Department of Electronics Engineering, Kwangwoon University, Seoul, Republic of Korea. He was Research Scientist at Boston University from 1996 to 1998, and Chairman of the Home Network Market Activation Section, Korean Association for Smart Home from 2007 to 2008. His main research interests are digital logic, embedded systems, M2M, next-generation ubiquitous home networks, and context-aware systems.
References
Gutchess D. , Trajkovic M. , Cohen-Solal E. , Lyons D. , Jain A. K. 2001 “A background model initialization algorithm for video surveillance,” in Proceedings of the 8th International Conference on Computer Vision Vancouver, Canada 733 - 740
Wren C. R. , Azarbayejani A. , Darrell T. , Pentland A. P. 1997 “Pfinder: real-time tracking of the human body,” IEEE Transactions on Pattern Analysis and Machine Intelligence 19 (7) 780 - 785    DOI : 10.1109/34.598236
Elgammal A. , Harwood D. , Davis L. 2000 “Non-parametric model for background subtraction,” in Computer Vision - ECCV 2000 Springer Heidelberg 751 - 767
Haritaoglu I. , Harwood D. , Davis L. S. 2000 “W4: real-time surveillance of people and their activities,” IEEE Transactions on Pattern Analysis and Machine Intelligence 22 (8) 809 - 830    DOI : 10.1109/34.868683
Li L. , Huang W. , Gu I. Y. , Tian Q. 2004 “Statistical modeling of complex backgrounds for foreground object detection,” IEEE Transactions on Image Processing 13 (11) 1459 - 1472    DOI : 10.1109/TIP.2004.836169
Dailey D. J. , Cathey F. W. , Pumrin S. 2000 “An algorithm to estimate mean traffic speed using uncalibrated cameras,” IEEE Transactions on Intelligent Transportation Systems 1 (2) 98 - 107    DOI : 10.1109/6979.880967
Kim C. , Hwang J. N. 2002 “Fast and automatic video object segmentation and tracking for content-based applications,” IEEE Transactions on Circuits and Systems for Video Technology 12 (2) 122 - 129    DOI : 10.1109/76.988659
Jain V. , Kimia B. , Mundy J. 2007 “Background modeling based on subpixel edges,” in Proceedings of the IEEE International Conference on Imaging Processing San Antonio, TX 321 - 324
Yokoyama M. , Poggio T. 2005 “A contour-based moving object detection and tracking,” in Proceedings of the 2nd IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance Beijing, China 271 - 276
Yang Q. Y. , Gao X. Y. 2009 “Tracking on motion of small target based on edge detection,” in Proceedings of the 2009 WRI World Congress on Computer Science and Information Engineering Los Angeles, CA 619 - 622
Canny J. 1986 “A computational approach to edge detection,” IEEE Transactions on Pattern Analysis and Machine Intelligence 8 (6) 679 - 698
Performance evaluation of tracking and surveillance 2001 [Internet] Available: http://ftp.pets.rdg.ac.uk/pub/PETS2001/
Performance evaluation of tracking and surveillance 2009 [Internet] Available: http://ftp.pets.rdg.ac.uk/pub/PETS2009/
Ferryman J. , Shahrokni A. 2009 “PETS2009: dataset and challenge,” in Proceedings of the 12th IEEE International Workshop on Performance Evaluation of Tracking and Surveillance Snowbird, UT 1 - 6
Laboratory for image and media understanding [Internet] Available: http://limu.ait.kyushu-u.ac.jp/dataset/en/