3D Visualization of Partially Occluded Objects Using Axially Distributed Image Sensing With a Wide-Angle Lens
Journal of the Optical Society of Korea. 2014. Oct, 18(5): 517-522
Copyright © 2014, Journal of the Optical Society of Korea
  • Received : June 06, 2014
  • Accepted : September 09, 2014
  • Published : October 25, 2014
About the Authors
Nam-Woo Kim
Department of Ubiquitous-IT, Graduate School, Dongseo University, Busan 617-716, Korea
Seok-Min Hong
Department of Ubiquitous-IT, Graduate School, Dongseo University, Busan 617-716, Korea
Hoon Jae Lee
Department of Ubiquitous-IT, Graduate School, Dongseo University, Busan 617-716, Korea
Byung-Gook Lee
Department of Ubiquitous-IT, Graduate School, Dongseo University, Busan 617-716, Korea
Joon-Jae Lee
School of Computer Engineering, Keimyung University, Daegu 704-701, Korea
joonlee@kmu.ac.kr
Abstract
In this paper we propose an axially distributed image-sensing method with a wide-angle lens to capture a wide-area scene of 3D objects. A large amount of parallax information can be collected by translating the wide-angle camera along the optical axis. The recorded wide-area elemental images are calibrated by compensating for radial distortion. With these images we generate volumetric slice images using a computational reconstruction algorithm based on ray back-projection. To show the feasibility of the proposed method, we performed optical experiments for visualization of a partially occluded 3D object.
I. INTRODUCTION
The visualization of partially occluded 3D objects has been considered one of the most challenging problems in the 3D-vision field [1, 2]. To address it, several multiperspective imaging approaches, including integral imaging and axially distributed image sensing (ADS), have been studied [3-9]. Integral imaging uses a planar pickup grid or a camera array. The ADS method, in contrast, records elemental images (EIs) by translating a single camera along its optical axis, and has been applied to the 3D visualization of partially occluded objects [10-14]. This method provides a relatively simple architecture for capturing the longitudinal perspective information of a 3D object.
However, the performance of the ADS method depends on how far the object is located from the optical axis: little parallax is collected for objects close to the axis. Wide-area elemental images are therefore needed to reconstruct better 3D slice images over a large field of view (FOV).
In this paper, in order to capture a wide-area scene of 3D objects, we propose axially distributed image sensing using a wide-angle lens (WAL). With this type of lens we can collect a large amount of parallax information. The wide-area EIs are recorded by translating the wide-angle camera along its optical axis and are then calibrated to compensate for radial distortion. With the calibrated EIs, we generate volumetric slice images using a computational reconstruction algorithm based on ray back-projection. To verify our idea, we carried out optical experiments to visualize a partially occluded 3D object.
II. SYSTEM CONFIGURATION
In general, a camera calibration process is needed for images captured by a camera fitted with a WAL. Therefore, to visualize correct 3D slice images in our ADS method, we introduce a camera calibration process that generates calibrated elemental images. Such a calibration process has not previously been applied to the conventional ADS system.
Figure 1 shows the scheme of the proposed ADS system with a WAL. It is composed of three different subsystems: (1) ADS pickup, (2) a calibration process for elemental images, and (3) a digital reconstruction process.
Fig. 1. Scheme of the proposed ADS method.
2.1. ADS Pickup Process
The ADS pickup of 3D objects in the proposed method is shown in Fig. 2. In contrast with the conventional method, the camera mounted with a WAL is translated along the optical axis. Let us define the focal length of the WAL as f. When 3D objects are located at a distance Z - z1 away from the first camera position, the wide-area EIs are captured by moving the camera along the optical axis. A total of K EIs can be recorded by moving the wide-angle camera K - 1 times. Here Δz is the separation between two adjacent camera positions, so the kth EI is recorded at the camera position zk = z1 + (k - 1)Δz. Since we capture each EI at a different camera position, each contains the object's image at a different scale.
Fig. 2. Pickup process for 3D objects by moving a camera with a wide-angle lens according to the proposed ADS method.
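Numerically, the pickup positions above amount to a simple arithmetic progression; a minimal Python sketch (the function name and the sample values are illustrative, not taken from the paper):

```python
def camera_positions(z1, dz, K):
    """Axial camera positions z_k = z1 + (k - 1) * dz, for k = 1..K."""
    return [z1 + (k - 1) * dz for k in range(1, K + 1)]

# Illustrative values: 1 mm steps and K = 150 positions, as in Sec. III.
positions = camera_positions(z1=0.0, dz=1.0, K=150)
```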
2.2. Calibration Process
In a typical imaging system, lens distortion can usually be classified into three types: radial distortion, decentering distortion, and thin prism distortion [15]. However, for most lenses the radial component is predominant; we therefore assume that our WAL produces predominantly radial distortion and ignore the other distortions in a recorded image. The image distortion should thus be corrected by a calibration process before the digital reconstruction used in the proposed method. Our calibration process is composed of two steps. In the first step, the radial distortion model is considered. We suppose that the center of distortion is (cx, cy) in the recorded image with radial distortion. Let Id be the distorted image and Iu the undistorted image. To correct the distorted image, the distorted point at (xd, yd) in Id has to move to the undistorted point at (xu, yu) in Iu. If rd and ru are defined as the distances from (cx, cy) to (xd, yd) and from (cx, cy) to (xu, yu), respectively, then the coordinates (xu, yu) can be calculated by [16, 17]
xu = cx + (xd - cx)(1 + k1 rd^2 + k2 rd^4 + k3 rd^6)
yu = cy + (yd - cy)(1 + k1 rd^2 + k2 rd^4 + k3 rd^6)        (1)
From Eq. (1) we can see that the distortion model has a set of five parameters Θd = [cx, cy, k1, k2, k3].
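As an illustration, the radial correction with the parameter set Θd can be sketched in Python; the three-term polynomial form follows the standard radial model assumed above (refs. [16, 17]), while the function name and array layout are ours:

```python
import numpy as np

def undistort_points(pts_d, cx, cy, k1, k2, k3):
    """Map distorted pixel coordinates (xd, yd) to undistorted (xu, yu)
    with a three-term radial model, Theta_d = [cx, cy, k1, k2, k3].
    pts_d: (N, 2) array of distorted points."""
    d = pts_d - np.array([cx, cy])              # coordinates relative to the distortion centre
    rd2 = np.sum(d * d, axis=1, keepdims=True)  # rd^2 for each point
    scale = 1.0 + k1 * rd2 + k2 * rd2**2 + k3 * rd2**3
    return np.array([cx, cy]) + d * scale
```

With all k coefficients zero the mapping is the identity; a positive k1 pushes points radially outward from (cx, cy).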
In the second step, the point with coordinates (xu, yu) is projected to a new point (xp, yp) in the desired image using a projective transformation, which is the most general transformation that maps lines into lines. The new coordinates (xp, yp) are given by [16]
xp = (m0 xu + m1 yu + m2) / (m6 xu + m7 yu + 1)
yp = (m3 xu + m4 yu + m5) / (m6 xu + m7 yu + 1)        (2)
Here it is seen that the projection parameters are Θp = [m0, m1, m2, m3, m4, m5, m6, m7]. Therefore, the parameter sets Θd and Θp must be found to obtain the corrected images.
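A sketch of this eight-parameter projective mapping in Python (the function name and point layout are illustrative):

```python
import numpy as np

def project_points(pts_u, m):
    """Apply the 8-parameter projective transformation with
    Theta_p = [m0, ..., m7]; pts_u: (N, 2) array of (xu, yu) points."""
    x, y = pts_u[:, 0], pts_u[:, 1]
    w = m[6] * x + m[7] * y + 1.0               # common denominator
    xp = (m[0] * x + m[1] * y + m[2]) / w
    yp = (m[3] * x + m[4] * y + m[5]) / w
    return np.stack([xp, yp], axis=1)
```

With m = [1, 0, 0, 0, 1, 0, 0, 0] the mapping is the identity; m2 and m5 act as translations.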
Before recording the 3D objects, we find the two parameter sets Θd and Θp for the given system. To do so, a chessboard pattern is used to apply the point-correspondences method [16]. Figure 3 shows the ADS pickup of the chessboard pattern for the calibration process. With the chessboard pattern at a fixed position, EIs are recorded by moving the wide-angle camera through its total range of motion, as shown in Fig. 3. The recorded EIs clearly exhibit radial image distortion. In this paper the chessboard image is used both to establish the coordinate mapping of the distorted elemental images and to verify that the distortion has been corrected. The flowchart of the calibration process, based on the recorded chessboard images, is shown in Fig. 4. The first step is to extract the corner feature points from the recorded chessboard pattern. From these feature points we recover the mapping from the distorted EIs to the undistorted EIs. Using the Gauss-Newton method, we find the two parameter sets Θd and Θp for the radial distortion and the projective transformation [16]. The computed parameters are then used to correct the image distortion in the wide-area EIs of the desired 3D object. After repeating this computation for each EI of the chessboard pattern, the set of calibration parameters is stored in the computer. Based on the stored parameter sets, the recorded EIs of the 3D objects are corrected.
Fig. 3. Optical pickup process to capture the elemental images, using a chessboard pattern for camera calibration.
Fig. 4. Flowchart of the calibration process to find the best parameters for the radial and projective transformations.
2.3. Digital Reconstruction Process
The final process of our wide-angle ADS method is digital reconstruction using the calibrated EIs described in Section 2.2. In this process we generate a slice-plane image for each reconstruction distance. Figure 5 shows the digital reconstruction process, based on an inverse-mapping procedure through a pinhole model [7]. Each wide-angle camera is modeled as a pinhole camera with the calibrated EI located at a distance g from the camera. We assume that the reconstruction plane is located at a distance z = L. Each EI is inversely projected through its corresponding pinhole to the reconstruction plane at L; the ith inversely projected EI is magnified by Mi = (L - zi)/g. At the reconstruction plane, all inversely mapped EIs are superimposed upon each other with their different magnification factors. In Fig. 5, Ei is the ith EI with a size of p × q, where p and q are the pixel counts for the width and height of the EI. IL is the superposition of all the inversely mapped EIs at the reconstruction plane L, and can be calculated by the following equation:
IL(x, y) = (1/K) Σ_{i=1}^{K} Ui{Ei}(x, y)        (3)
where Ui is the upsampling factor that magnifies Ei at the reconstruction plane L, and the size of IL is M1p × M1q.
Fig. 5. Digital reconstruction process based on ray back-projection in the proposed ADS method.
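The superposition of inversely projected EIs can be sketched in Python; the integer magnifications, nearest-neighbour upsampling, and overlap-count normalization below are simplifying assumptions of ours, not details given in the paper:

```python
import numpy as np

def reconstruct_plane(eis, z, g, L):
    """Superimpose the inversely projected EIs at depth L.
    eis : list of K grayscale EIs, each p x q (numpy arrays)
    z   : list of K camera positions z_i (z[0] gives the largest M_i)
    g   : pinhole-to-sensor distance
    Assumes the magnifications M_i = (L - z_i)/g round to positive integers;
    uses nearest-neighbour upsampling, and normalizes each pixel by the number
    of EIs covering it so border pixels stay correctly scaled."""
    M = [round((L - zi) / g) for zi in z]
    H, W = eis[0].shape[0] * M[0], eis[0].shape[1] * M[0]   # M1*p x M1*q canvas
    acc = np.zeros((H, W))
    cnt = np.zeros((H, W))
    for ei, mi in zip(eis, M):
        up = np.kron(ei, np.ones((mi, mi)))                 # upsample E_i by M_i
        r0, c0 = (H - up.shape[0]) // 2, (W - up.shape[1]) // 2  # centre-align
        acc[r0:r0 + up.shape[0], c0:c0 + up.shape[1]] += up
        cnt[r0:r0 + up.shape[0], c0:c0 + up.shape[1]] += 1
    return acc / np.maximum(cnt, 1)
```

Features at depth L overlay consistently across the magnified EIs and stay sharp, while occluders at other depths are averaged away.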
To reduce the computational load imposed by the large magnification factors, Eq. (3) is modified using the operator Dr, which downsamples an image by a factor of r. The superimposed image is then given by
IL(x, y) = (1/K) Σ_{i=1}^{K} Dr{Ui{Ei}}(x, y)        (4)
In Eq. (4) IL is the reconstructed plane image after superimposing all EIs at the reconstruction distance L . To generate the 3D volume information, we should reconstruct the plane images for the desired depth ranges. To do so, the digital reconstruction process is repeated for the given distance range.
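The downsampling operator Dr can be illustrated as a simple block average (one of several standard decimation choices; the implementation is ours):

```python
import numpy as np

def downsample(img, r):
    """Block-average downsampling of a 2D image by an integer factor r."""
    h = img.shape[0] // r * r            # crop to a multiple of r
    w = img.shape[1] // r * r
    return img[:h, :w].reshape(h // r, r, w // r, r).mean(axis=(1, 3))
```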
III. EXPERIMENTS AND RESULTS
We performed preliminary experiments to demonstrate the proposed ADS system for visualization of a partially occluded object. Figure 6 shows the experimental setup. As shown in Fig. 6, two scenes were captured at the same time. The first scene contains a single object, a chessboard pattern with square size 100 mm × 100 mm. The second scene contains two objects: a tree as the occluder, and 'DSU' letter objects with letter size 100 mm × 70 mm, used to demonstrate partially occluded object visualization. The chessboard pattern and the 'DSU' object are located 350 mm from the first wide-angle camera position, as shown in Fig. 6. The occluder is located 150 mm in front of the 'DSU' object.
Fig. 6. Experimental setup to capture the elemental images of 3D objects.
We used a 1/4.5-inch CMOS camera with a resolution of 640 × 480 pixels. The WAL has a focal length f = 1.79 mm and a maximum FOV angle of 131°. The wide-angle camera was translated in Δz = 1 mm increments, for a total of K = 150 EIs over a total displacement of 149 mm. Examples of the recorded EIs are shown in Fig. 7.
Fig. 7. Examples of the recorded elemental images with image distortion: (a) chessboard pattern for the calibration process, (b) 3D objects.
After recording the EIs with the wide-angle camera, we applied the calibration process to them. Each EI was corrected using the corresponding calibration parameters; the computed parameters for the radial distortion and the projective transformation (Θd and Θp) are shown in Tables 1 and 2, respectively. The calibrated EIs are shown in Fig. 8. From the result in the top left of Fig. 8, it is seen that the projection parameters were computed correctly. Based on these parameters, the EIs of the 3D objects were calibrated. In our calibration process the calibrated images were cropped for the subsequent digital reconstruction.
Table 1. Computed parameters Θd for radial distortion
Table 2. Computed parameters Θp for projective transformation
Fig. 8. Experimental results: (a) the 150th recorded elemental image of the chessboard pattern and the 3D objects before the calibration process, (b) the 150th calibrated elemental image of the chessboard pattern and the 3D objects after the calibration process.
With the calibrated EIs shown in Fig. 8, we reconstructed slice-plane images of the 3D objects at different reconstruction distances. The 150 calibrated EIs were used in the digital reconstruction algorithm employing Eq. (4). The slice image at the original position of the 3D objects is shown in Fig. 9. For comparison, we include the results of the conventional ADS method without a calibration process. The experimental results show that the proposed method successfully visualizes a partially occluded object.
Fig. 9. 3D slice images reconstructed at the original position of the 'DSU' object: (a) conventional method without the calibration process, (b) proposed method with the calibration process.
IV. CONCLUSION
In conclusion, we have presented a wide-angle ADS system to capture a wide-area scene of 3D objects. Using a WAL, a large amount of parallax information can be collected for a large scene. A calibration process was introduced to compensate for the image distortion caused by this type of lens. We performed preliminary experiments on partially occluded 3D objects and successfully demonstrated our idea.
Acknowledgements
This research was supported by the BB21 project of Busan Metropolitan City and by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education, Science and Technology (Grant number 2010-0023438).
References
Stern A., Javidi B. 2006 "Three-dimensional image sensing, visualization, and processing using integral imaging," Proc. IEEE 94 591-607
Park J.-H., Hong K., Lee B. 2009 "Recent progress in three-dimensional information processing based on integral imaging," Appl. Opt. 48 H77-H94
Hong S.-H., Javidi B. 2005 "Three-dimensional visualization of partially occluded objects using integral imaging," J. Display Technol. 1 354
DaneshPanah M., Javidi B., Watson E. A. 2008 "Three dimensional imaging with randomly distributed sensors," Opt. Express 16 6368-6377
Maycock J., McElhinney C. P., Hennelly B. M., Naughton T. J., McDonald J. B., Javidi B. 2006 "Reconstruction of partially occluded objects encoded in three-dimensional scenes by using digital holograms," Appl. Opt. 45 2975-2985
Shin D.-H., Lee B.-G., Lee J.-J. 2008 "Occlusion removal method of partially occluded 3D object using sub-image block matching in computational integral imaging," Opt. Express 16 16294-16304
Zhou Z., Yuan Y., Bin X., Wang Q. 2011 "Enhanced reconstruction of partially occluded objects with occlusion removal in synthetic aperture integral imaging," Chin. Opt. Lett. 9 041002
Yeom S.-W., Woo Y.-H., Baek W.-W. 2011 "Distance extraction by means of photon-counting passive sensing combined with integral imaging," Journal of the Optical Society of Korea 15 (4) 357-361
Rivenson Y., Rot A., Balber S., Stern A., Rosen J. 2012 "Recovery of partially occluded objects by applying compressive Fresnel holography," Opt. Lett. 37 1757-1759
Shin D., Javidi B. 2011 "3D visualization of partially occluded objects using axially distributed sensing," J. Disp. Technol. 7 223-225
Shin D., Javidi B. 2012 "Three-dimensional imaging and visualization of partially occluded objects using axially distributed stereo image sensing," Opt. Lett. 37 1394-1396
Hong S.-P., Shin D., Lee B.-G., Kim E.-S. 2012 "Depth extraction of 3D objects using axially distributed image sensing," Opt. Express 20 23044-23052
Piao Y., Zhang M., Shin D., Yoo H. 2013 "Three-dimensional imaging and visualization using off-axially distributed image sensing," Opt. Lett. 38 3162-3164
Cho M., Shin D. 2013 "3D integral imaging display using axially recorded multiple images," Journal of the Optical Society of Korea 17 (5) 410-414
Stein G. P. 1997 "Lens distortion calibration using point correspondences," Proc. CVPR 602-608
Romero L., Gomez C., Stolkin R. 2007 "Correcting radial distortion of cameras with wide angle lens using point correspondences," in Scene Reconstruction, Pose Estimation and Tracking, I-Tech, Vienna, Austria 530
Kim N.-W., Lee S.-J., Lee B.-G., Lee J.-J. 2007 "Vision based laser pointer interaction for flexible screens," Lecture Notes in Computer Science 4551 845-853