Depth Extraction of Partially Occluded 3D Objects Using Axially Distributed Stereo Image Sensing
Journal of Information and Communication Convergence Engineering. 2015 Dec; 13(4): 275-279
Copyright © 2015, The Korean Institute of Information and Communication Engineering
This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
  • Received : August 21, 2015
  • Accepted : October 04, 2015
  • Published : December 31, 2015
About the Authors
Min-Chul Lee
Department of Computer Science and Electronics, Kyushu Institute of Technology, Fukuoka 820-8502, Japan
Kotaro Inoue
Department of Computer Science and Electronics, Kyushu Institute of Technology, Fukuoka 820-8502, Japan
Naoki Konishi
Department of Computer Science and Electronics, Kyushu Institute of Technology, Fukuoka 820-8502, Japan
Joon-Jae Lee
Department of Game Mobile Contents, Keimyung University, Daegu 42403, Korea
Joonlee@kmu.ac.kr

Abstract
There are several methods to record three-dimensional (3D) information of objects, such as lens-array-based integral imaging, synthetic aperture integral imaging (SAII), computer-synthesized integral imaging (CSII), axially distributed image sensing (ADS), and axially distributed stereo image sensing (ADSS). The ADSS method is capable of recording partially occluded 3D objects and reconstructing high-resolution slice plane images. In this paper, we present a computational method for depth extraction of partially occluded 3D objects using ADSS. In the proposed method, high-resolution elemental stereo image pairs are recorded by simply moving a stereo camera along its optical axis, and the recorded elemental image pairs are used to reconstruct 3D slice images with a computational reconstruction algorithm. To extract the depth information of a partially occluded 3D object, we apply edge enhancement and a simple block matching algorithm to the reconstructed slice image pairs. To demonstrate the proposed method, we carry out preliminary experiments and present the results.
I. INTRODUCTION
Depth extraction from three-dimensional (3D) objects in the real world is attracting a great deal of interest in diverse fields such as computer vision, 3D display, and 3D recognition [1 - 3]. In particular, 3D passive imaging makes it possible to extract depth information by recording different perspectives of 3D objects [3 - 9]. Among 3D passive imaging technologies, axially distributed stereo image sensing (ADSS) has recently been studied [8]; in ADSS, a stereo camera is translated along its optical axis and many elemental image pairs of a 3D scene are collected. The collected images are reconstructed into 3D slice images using a computational reconstruction algorithm based on ray back-projection. ADSS is attractive because it provides high-resolution elemental images and requires only a simple axial movement.
Partial occlusion has been one of the challenging problems in the 3D image processing area. In the conventional stereo imaging case [1, 2], two images with different perspectives are used for 3D information extraction through a corresponding-pixel matching technique. However, it is difficult to find accurate 3D information of partially occluded objects because parts of the object image may be hidden by the occlusion and the correspondences are lost.
In this paper, we present a computational method for depth extraction of partially occluded 3D objects using ADSS. In the proposed method, high-resolution elemental image pairs (EIPs) are recorded by simply moving a stereo camera along its optical axis, and the recorded EIPs are used to reconstruct 3D slice images with a computational reconstruction algorithm. To extract the depth information of a partially occluded 3D object, we apply an edge-enhancement filter and a block matching algorithm to the reconstructed slice image pairs. To demonstrate the proposed method, we carry out preliminary experiments and present the results.
II. DEPTH EXTRACTION OF PARTIALLY OCCLUDED 3D OBJECTS USING ADSS
- A. System Structure
The proposed depth extraction method using ADSS for partially occluded 3D objects is shown in Fig. 1. It consists of three processes: the ADSS pickup process, the digital reconstruction process, and the depth extraction process.
Fig. 1. Proposed depth extraction method using ADSS.
- B. ADSS Pickup
Fig. 2 shows the recording process of 3D objects in the ADSS system. We record EIPs by moving the stereo camera along its optical axis. We suppose that the optical axis passes through the midpoint between the two cameras, which are separated along the x axis, and that the distance between the imaging lens and the sensor is g. The stereo camera records a pair of elemental images at each position on the z axis, and the two images have different perspectives of the 3D objects. When the 3D objects are located at a longitudinal distance z_0 away from the first stereo camera position, as shown in Fig. 2, we capture K EIPs. The k-th EIP is recorded at a distance z_k = z_0 - (k-1)Δz from the object, where Δz is the axial separation between two adjacent stereo camera positions. Since each EIP is captured at a different distance, the object is recorded in each EIP with a different magnification factor.
Fig. 2. System structure of ADSS pickup process.
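As a small illustration of this pickup geometry, the sketch below (a hypothetical helper, assuming distances in millimeters) lists the object distance z_k and the resulting on-sensor magnification g/z_k for each of the K stereo positions.

```python
def pickup_geometry(z0, dz, g, K):
    """Object distance and on-sensor magnification at each of the K stereo positions.

    z0 : distance from the first stereo position to the object (mm)
    dz : axial separation between adjacent stereo positions (mm)
    g  : distance between the imaging lens and the sensor (mm)
    """
    geometry = []
    for k in range(1, K + 1):
        zk = z0 - (k - 1) * dz            # z_k = z_0 - (k - 1) * dz
        geometry.append((k, zk, g / zk))  # closer positions record the object larger
    return geometry
```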
- C. Digital Reconstruction in ADSS
The second process of the proposed method is computational reconstruction using the recorded EIPs. Fig. 3 illustrates the digital reconstruction process, which is based on the inverse ray projection model [8]. We suppose that the reconstruction plane is at a distance z_r, that the first camera is located at z = 0, and that the k-th camera is located at z = (k-1)Δz. Let E_k^L and E_k^R be the left and right images of the k-th EIP, respectively. The inversely mapped image of each EIP onto the reconstruction plane can be represented by Eq. (1), where

$$T = \left[ z_r - (k-1)\Delta z + g \right]^2 + (x^2 + y^2)\left(1 + \frac{1}{M_k}\right)^2 \quad \text{and} \quad M_k = \frac{z_r - (k-1)\Delta z}{g}.$$
Fig. 3. Ray diagram of digital reconstruction process.
Then, the reconstructed 3D slice image at (x, y, z_r) is obtained by summing all of the inversely mapped EIPs, as given by Eq. (2).
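As one concrete way to realize this reconstruction numerically, the minimal sketch below back-projects each elemental image onto the reconstruction plane by rescaling it with its magnification factor M_k and then averages the overlapping projections, following the description above. It assumes grayscale NumPy arrays, uses OpenCV only for resampling, and the function name is hypothetical; in ADSS it would be applied separately to the left and right image stacks.

```python
import numpy as np
import cv2  # assumed available for image resampling


def reconstruct_slice(elemental_images, z_r, dz, g):
    """Back-project K axially recorded elemental images onto the plane at distance z_r.

    elemental_images : list of K grayscale images (H x W), ordered from the first camera
    z_r : reconstruction distance measured from the first camera (same units as dz, g)
    dz  : axial separation between adjacent camera positions
    g   : distance between the imaging lens and the sensor
    """
    K = len(elemental_images)
    H, W = elemental_images[0].shape
    acc = np.zeros((H, W), dtype=np.float64)
    hits = np.zeros((H, W), dtype=np.float64)

    M1 = z_r / g  # magnification of the first (farthest) camera sets the output grid
    for k in range(K):
        Mk = (z_r - k * dz) / g           # magnification factor of the k-th EIP
        scale = Mk / M1                   # relative size of this back-projection
        h = max(1, int(round(H * scale)))
        w = max(1, int(round(W * scale)))
        resized = cv2.resize(elemental_images[k].astype(np.float64), (w, h),
                             interpolation=cv2.INTER_LINEAR)
        # All camera positions share one optical axis, so projections stay centered.
        y0, x0 = (H - h) // 2, (W - w) // 2
        acc[y0:y0 + h, x0:x0 + w] += resized
        hits[y0:y0 + h, x0:x0 + w] += 1.0

    return acc / np.maximum(hits, 1.0)    # average over overlapping back-projections
```

At the true object plane the rescaled projections register on top of each other and the object appears focused, while occluding objects at other depths misregister and blur out, which is what makes the slice images useful for seeing through partial occlusion.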
- D. Depth Extraction Process
In the depth extraction process of the proposed method, we extract depth information using the slice images obtained from the digital reconstruction process. The reconstructed slice images are mixtures of focused and blurred image regions, depending on the reconstruction distance.
The focused image of a 3D object is reconstructed only at the original position of the object, whereas blurred images appear at other reconstruction positions. Based on this principle, we find the focused image parts in the reconstructed slice images. To do so, we first apply an edge-enhancement filter to the slice images. Next, we extract depth information from the edge-enhanced images using the block matching procedure shown in Fig. 4. In general, block matching minimizes a measure of the matching error between two images. The matching error between the block at position (x, y) in the left image I_L and the candidate block at position (x+u, y+v) in the right image I_R is usually defined as the sum of absolute differences (SAD):

$$\mathrm{SAD}(x, y, u, v) = \sum_{i=0}^{B-1}\sum_{j=0}^{B-1} \left| I_L(x+i,\, y+j) - I_R(x+u+i,\, y+v+j) \right|, \qquad (3)$$

where the block size is B×B. Using the SAD, the best estimate $(\hat{u}, \hat{v})$ is defined as the (u, v) that minimizes the SAD over all candidate search positions {(x+u, y+v)} in the right image I_R. That is,

$$(\hat{u}, \hat{v}) = \arg\min_{(u,\,v)} \mathrm{SAD}(x, y, u, v). \qquad (4)$$
Fig. 4. Depth extraction process for edge-enhanced images. (a) Left image and (b) right image.
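To make the matching step concrete, the following minimal sketch applies a Laplacian-based edge enhancement (one possible choice, since the paper does not name the exact filter) and then estimates a block-wise disparity map with the SAD criterion of Eqs. (3) and (4). The function names, the horizontal-only search, and the max_disp limit are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
import cv2


def edge_enhance(img):
    """Edge enhancement via a Laplacian high-boost filter (one possible choice;
    the paper does not specify the exact filter)."""
    img = img.astype(np.float64)
    return img - cv2.Laplacian(img, cv2.CV_64F, ksize=3)


def sad_disparity(left, right, block=8, max_disp=64):
    """Block-wise disparity by SAD matching (Eqs. (3) and (4)), assuming a
    rectified pair so that the search is purely horizontal."""
    left = left.astype(np.float64)
    right = right.astype(np.float64)
    H, W = left.shape
    disp = np.zeros((H // block, W // block), dtype=np.int32)
    for by in range(H // block):
        for bx in range(W // block):
            y, x = by * block, bx * block
            ref = left[y:y + block, x:x + block]          # block in the left image
            best_sad, best_d = np.inf, 0
            for d in range(0, min(max_disp, x) + 1):      # candidate positions in the right image
                cand = right[y:y + block, x - d:x - d + block]
                sad = np.abs(ref - cand).sum()            # Eq. (3)
                if sad < best_sad:
                    best_sad, best_d = sad, d             # Eq. (4): keep the minimizer
            disp[by, bx] = best_d
    return disp
```

With the 8×8 block size used in Section III, the resulting disparity map has one value per block; larger blocks trade spatial resolution for robustness of the match.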
III. EXPERIMENTS AND RESULTS
To demonstrate our depth extraction method using ADSS, we performed preliminary experiments on partially occluded 3D objects. The experimental structure is shown in Fig. 5(a). The 3D object is a toy car positioned 430 mm away from the first stereo camera position. The occlusion is a tree located at 300 mm. Two cameras are used to form the stereo camera, with a baseline of 65 mm. Each camera has an image sensor of 2184×1456 pixels. Two lenses with focal length f = 50 mm are used in the experiments, so g becomes 50 mm for the computational reconstruction. The stereo camera is translated in Δz = 5 mm increments for a total of K = 41 EIPs.
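For orientation, substituting these values into the magnification factor of Section II-C, M_k = [z_r - (k-1)Δz]/g, shows how much the back-projection scale varies across the 41 EIPs when reconstructing at the car's plane; the short check below simply restates that arithmetic.

```python
# Experimental parameters from this section (units: mm)
z0, dz, g, K = 430.0, 5.0, 50.0, 41

z_r = 430.0                              # reconstruction plane at the car object
M_first = z_r / g                        # first EIP:  430 / 50 = 8.6
M_last = (z_r - (K - 1) * dz) / g        # last EIP:  (430 - 200) / 50 = 4.6
```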
Fig. 5. (a) Experimental structure. (b) Reconstructed slice images.
Next, to reconstruct slice images of the partially occluded 3D objects, the 41 recorded EIPs were processed with the computational reconstruction algorithm illustrated in Fig. 3. The 2D slice images of the 3D objects were obtained for a range of reconstruction distances, and some of them are shown in Fig. 5(b). The slice image shown is focused on the car object; here, the reconstruction plane was 430 mm from the sensor, where the 'car' object is originally located.
We then estimated the depth of the object using the depth extraction process described by Eqs. (3) and (4). We applied the edge-enhancement filter to the slice image shown in Fig. 5(b) and extracted the depth information from the edge-enhanced images, using a block size of 8×8. The estimated depth is shown in Fig. 6(b). This result reveals that the proposed method can extract the 3D information of the partially occluded object shown in Fig. 6(a).
Fig. 6. Depth extraction results for the partially occluded car object. (a) 2D image and (b) extracted depth.
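Putting the pieces together, a hypothetical end-to-end run of the sketches above with the parameters of this experiment would look as follows; left_eips and right_eips stand for the 41 recorded left and right elemental images.

```python
# Reconstruct the left and right slice images at the car's plane (z_r = 430 mm).
left_slice = reconstruct_slice(left_eips, z_r=430.0, dz=5.0, g=50.0)
right_slice = reconstruct_slice(right_eips, z_r=430.0, dz=5.0, g=50.0)

# Edge-enhance both slices and estimate the block-wise disparity with an 8x8 block.
disparity = sad_disparity(edge_enhance(left_slice), edge_enhance(right_slice), block=8)
```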
IV. CONCLUSIONS
In conclusion, we have presented a depth extraction method using ADSS. In the proposed method, high-resolution EIPs were recorded by simply moving a stereo camera along its optical axis, and the recorded EIPs were used to generate a set of 3D slice images with the computational reconstruction algorithm. To extract the depth of the 3D object, an edge-enhancement filter and a block matching algorithm applied to the two slice images were used. To demonstrate our method, we performed preliminary experiments on partially occluded 3D objects. The experimental results reveal that depth information can be extracted because ADSS provides clear images of partially occluded 3D objects.
BIO
Min-Chul Lee
earned a B.S. degree in telecommunication engineering from Pukyong National University, Busan, Korea, in 1996, and M.S. and Ph.D. degrees from Kyushu Institute of Technology, Fukuoka, Japan, in 2000 and 2003, respectively. He is an assistant professor at Kyushu Institute of Technology in Japan. His research interests include medical imaging, blood flow analysis, 3D displays, 3D integral imaging, and 3D biomedical imaging.
Kotaro Inoue
earned a B.S. degree from Kyushu Institute of Technology, Fukuoka, Japan, in 2015. He is a master’s student at Kyushu Institute of Technology in Japan. His research interests include visual feedback control, 3D displays, and 3D integral imaging.
Naoki Konishi
received B.S., M.S., and Ph.D. degrees from Kyushu Institute of Technology, Fukuoka, Japan, in 1991, 1993, and 1997, respectively. He is an associate professor at Kyushu Institute of Technology in Japan. His research interests include medical imaging, blood flow analysis, and biomedical imaging.
Joon Jae Lee
received his B.S., M.S., and Ph.D. in Electronic Engineering from the Kyungpook National University, Daegu, South Korea, in 1986, 1990, and 1994, respectively. He worked for Kyungpook National University as a teaching assistant from September 1991 to July 1993. From March 1995 to August 2007, he was with the Computer Engineering faculty at the Dongseo University, Busan, South Korea. He is currently a professor of the Faculty of Computer Engineering, Keimyung University. He was a visiting scholar at the Georgia Institute of Technology, Atlanta, from 1998 to 1999, funded by the Korea Science and Engineering Foundation (KOSEF). He also worked for PARMI Corporation as a research and development manager for 1 year from 2000 to 2001. His main research interests include image processing, three-dimensional computer vision, and pattern recognition.
References
[1] T. Okoshi, Three-Dimensional Imaging Techniques. New York, NY: Academic Press, 1976.
[2] J. S. Ku, K. M. Lee, and S. U. Lee, "Multi-image matching for a general motion stereo camera model," Pattern Recognition, vol. 34, no. 9, pp. 1701-1712, 2001. DOI: 10.1016/S0031-3203(00)00110-2.
[3] A. Stern and B. Javidi, "Three-dimensional image sensing, visualization, and processing using integral imaging," Proceedings of the IEEE, vol. 94, no. 3, pp. 591-607, 2006. DOI: 10.1109/JPROC.2006.870696.
[4] G. Passalis, N. Sgouros, S. Athineos, and T. Theoharis, "Enhanced reconstruction of three-dimensional shape and texture from integral photography images," Applied Optics, vol. 46, no. 22, pp. 5311-5320, 2007. DOI: 10.1364/AO.46.005311.
[5] J. H. Park, K. Hong, and B. Lee, "Recent progress in three-dimensional information processing based on integral imaging," Applied Optics, vol. 48, no. 34, pp. H77-H94, 2009. DOI: 10.1364/AO.48.000H77.
[6] M. DaneshPanah, B. Javidi, and E. A. Watson, "Three dimensional imaging with randomly distributed sensors," Optics Express, vol. 16, no. 9, pp. 6368-6377, 2008. DOI: 10.1364/OE.16.006368.
[7] D. Shin and B. Javidi, "3D visualization of partially occluded objects using axially distributed sensing," Journal of Display Technology, vol. 7, no. 5, pp. 223-225, 2011. DOI: 10.1109/JDT.2011.2124441.
[8] D. Shin and B. Javidi, "Three-dimensional imaging and visualization of partially occluded objects using axially distributed stereo image sensing," Optics Letters, vol. 37, no. 9, pp. 1394-1396, 2012. DOI: 10.1364/OL.37.001394.
[9] M. Cho and D. Shin, "Depth resolution analysis of axially distributed stereo camera systems under fixed constrained resources," Journal of the Optical Society of Korea, vol. 17, no. 6, pp. 500-505, 2013. DOI: 10.3807/JOSK.2013.17.6.500.