Depth Resolution Analysis of Axially Distributed Stereo Camera Systems under Fixed Constrained Resources
Journal of the Optical Society of Korea. 2013. Dec, 17(6): 500-505
Copyright © 2013, Journal of the Optical Society of Korea
  • Received : October 10, 2013
  • Accepted : November 11, 2013
  • Published : December 25, 2013
About the Authors
Myungjin Cho
Department of Electrical, Electronic, and Control Engineering, Hankyong National University, Ansung 456-749, Korea
mjcho@hknu.ac.kr
Donghak Shin
Institute of Ambient Intelligence, Dongseo University, Busan 617-716, Korea
Abstract
In this paper, we propose a novel framework to evaluate the depth resolution of axially distributed stereo sensing (ADSS) under fixed resource constraints. The proposed framework can evaluate the performance of ADSS systems with respect to various sensing parameters such as the number of cameras, the total number of pixels, and the pixel size. Monte Carlo simulations for the proposed framework are performed and the evaluation results are presented.
I. INTRODUCTION
Three-dimensional (3D) imaging techniques have become an important topic in computer vision, target tracking, object recognition, and related fields [1-5]. Various methods for capturing and visualizing 3D objects in space have been studied [6-14], including integral imaging, synthetic aperture integral imaging (SAII), and axially distributed image sensing (ADS). Among them, an extended version of ADS, called axially distributed stereo sensing (ADSS), was reported that is implemented using a stereo camera [14]. In this method, a stereo camera is translated along its optical axis to obtain multiple image pairs, and the computational reconstruction is implemented by the uniform superposition of the resized elemental image pairs. This overcomes the problem that the collection of 3D information is not uniform across the sensor in the conventional ADS system.
Recently, resolution analysis methods for various 3D imaging systems under equally constrained resources have been reported [15-18]. In 2012, the N-ocular imaging system was first analyzed using a two-point-source resolution criterion; the lateral and longitudinal resolutions of N-ocular imaging systems were analyzed in terms of factors such as the number of sensors, the pixel size, the imaging optics, and the relative sensor configuration [15]. In 2013, we proposed a resolution analysis of the ADS method based on the two-point-source resolution criterion and presented analysis results for the system parameters [18].
In this paper, we propose a new framework for performance evaluation of ADSS systems under equally constrained resources. The evaluation parameters include a fixed total number of pixels, a fixed moving distance, and a fixed pixel size. For the resolution analysis of the proposed ADSS framework, we use the two-point-source resolution criterion [15] and evaluate the depth resolution through Monte Carlo simulations.
II. REVIEW OF ADSS
In general, the ADSS method is composed of a pickup part and a digital reconstruction part [14]. Figure 1 shows the overall structure of the ADSS system. In the pickup part, shown in Fig. 1, an elemental image pair is recorded by a stereo camera whose two cameras are separated by a distance b. Multiple elemental image pairs are then captured by moving the stereo camera along its optical axis.
FIG. 1. Pickup process of the ADSS system.
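As a simple illustration of the capture geometry (not the authors' acquisition code), the following Python lines enumerate the K axial positions of the stereo pair; the values of K, the axial step Δz, and the baseline b are placeholders.

# Placeholder geometry values (mm): K positions of the stereo pair along the optical axis.
K, dz, b = 4, 10.0, 65.0
pickup_positions = [((k * dz, 0.0), (k * dz, b)) for k in range(K)]
# Each entry holds the (axial position, lateral offset) of the left and right cameras of one pair.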
With the recorded multiple elemental image pairs, we can generate 3D sliced images in the digital reconstruction part of the ADSS, which is shown in Fig. 2. The digital reconstruction of 3D objects is the inverse of the ADSS pickup process and is implemented by an inverse mapping procedure through a virtual pinhole model, as shown in Fig. 2.
FIG. 2. Digital reconstruction process of the ADSS system.
Let us assume that the first pinhole is located at z = 0 and that the k-th pinhole is located at z = (k - 1)Δz in Fig. 2. We denote by Rk(x, y, zr) the image obtained by inversely mapping the left and right images of the k-th elemental image pair through the k-th pinhole onto the reconstruction plane zr. Rk(x, y, zr) is the uniform superposition of the two elemental images, each resized by the magnification factor Mk = [zr - (k - 1)Δz]/g, where g is the distance between the pinhole and its image plane. The final reconstructed 3D image at distance zr is obtained by summing the resized elemental image pairs over all camera positions.
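As a rough computational sketch of this resize-and-superpose reconstruction, one might write the following; the nearest-grid resampling, the top-left-aligned overlay, and the omission of any stereo-baseline registration are simplifying assumptions for illustration, not the reconstruction equations of the paper.

import numpy as np
from scipy.ndimage import zoom   # simple resampling, for illustration only

def reconstruct_adss(pairs, dz, g, z_r, out_shape):
    # pairs: list of (left, right) 2D arrays for k = 1..K; dz: axial step; g: pinhole-to-sensor distance
    recon = np.zeros(out_shape, dtype=float)
    count = 0
    for k, (left, right) in enumerate(pairs, start=1):
        M_k = (z_r - (k - 1) * dz) / g            # magnification factor Mk = [zr - (k-1)dz]/g
        for img in (left, right):
            resized = zoom(img, M_k, order=1)     # resize the elemental image by Mk
            patch = np.zeros(out_shape, dtype=float)
            h = min(out_shape[0], resized.shape[0])
            w = min(out_shape[1], resized.shape[1])
            patch[:h, :w] = resized[:h, :w]       # overlay (top-left aligned here for brevity)
            recon += patch
            count += 1
    return recon / count                          # uniform superposition (average) of all resized images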
In fact, the conventional ADS system cannot capture the correct 3D information near the optical axis. However, the ADSS system can overcome this problem by generating 3D images near the optical axis.
III. RESOLUTION ANALYSIS FOR ADSS SYSTEM
Figure 3 shows the general framework of an ADSS system with the stereo camera located at K different positions. In this framework, we want to use equally constrained resources regardless of the number of stereo camera positions. To do so, we assume that the total number of pixels (S), the pixel size (c), and the moving range (D) are fixed. Thus, if a total of S pixels is available across all image sensors, each stereo camera pair is allotted S/K pixels. We also assume that all imaging lenses are identical, with the same focal length (f) and diameter (A).
FIG. 3. Framework for ADSS with K stereo cameras.
In the ADSS framework shown in Fig. 3, the stereo camera can be placed at K different positions within a moving range limited to D. When K = 1, a conventional stereo system is obtained in which two high-resolution cameras of S/2 pixels each are used. When K >> 1, an ADSS system is obtained in which many low-resolution cameras of S/2K pixels each are used.
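To make the fixed-resource constraint concrete, the short example below divides an assumed pixel budget S (an illustrative number, not a value from Table 1) among K stereo-camera positions:

S = 4_000_000                    # assumed total pixel budget, for illustration only
for K in (1, 2, 5, 10):
    per_pair = S / K             # pixels available to each stereo pair
    per_camera = S / (2 * K)     # pixels available to each individual camera
    print(f"K={K:2d}: {per_pair:,.0f} px per pair, {per_camera:,.0f} px per camera")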
We now analyze the depth resolution of the ADSS framework shown in Fig. 4. To do so, we utilize the resolution analysis method based on the two-point-source resolution criterion [15], which we modify here for the stereo camera case.
FIG. 4. Calculation diagram of the pixel position in each camera for the first point source.
For simplicity, we use one-dimensional notation. We assume that there are two closely spaced point sources in space, as shown in Fig. 4. We first find the mapping pixel index for the first point source, located at (x1, z1) in Fig. 4. The point spread function (PSF) of the imaging lens for the first point source is recorded by the left and right cameras of the stereo pair at the i-th position. The center position of this PSF on each sensor is calculated from the imaging geometry of Fig. 4, where f is the focal length of the imaging lens and the left and right cameras of the i-th stereo pair are located at their respective positions. The pixel indices onto which the point source is recorded in the left and right cameras are then obtained by applying the rounding operator ⌈·⌉ to the PSF center positions expressed in pixel units.
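The following Python sketch illustrates this mapping step under an assumed pinhole geometry, with the left camera of each pair centered on the optical axis and the right camera displaced laterally by the baseline b; since Fig. 4 and its equations are not reproduced here, the sign conventions and the exact projection are assumptions rather than the paper's formulas.

import numpy as np

def psf_center_and_pixel(x_src, z_src, z_cam, f, c, b=0.0):
    # Pinhole projection of a point source at (x_src, z_src) onto the sensor of a camera
    # located at axial position z_cam with lateral offset b; f is the focal length
    # (used here as the pinhole-to-sensor distance) and c is the pixel size.
    u = f * (x_src - b) / (z_src - z_cam)     # PSF center position on the sensor
    n = int(np.ceil(u / c))                   # pixel index via the rounding operator
    return u, n

# Example: first point source seen by the left (b = 0) and right (b = 65 mm) cameras
# of the stereo pair at one axial position (all values are placeholders, in mm).
u_L, n_L = psf_center_and_pixel(x_src=100.0, z_src=1500.0, z_cam=20.0, f=50.0, c=0.01, b=0.0)
u_R, n_R = psf_center_and_pixel(x_src=100.0, z_src=1500.0, z_cam=20.0, f=50.0, c=0.01, b=65.0)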
Next, we calculate the unresolvable pixel area for the mapping pixels. The unresolvable pixel area is the region within which two point sources cannot be separated because they fall on a single pixel. When the second point source is located close to the first, the two point sources can be resolved if at least one sensor registers the second point source on a pixel adjacent to the pixel that registered the first PSF. If, however, the second PSF falls on the same pixel that recorded the PSF of the imaging lens for the first point source, that camera cannot resolve them. The depth resolution is calculated from this unresolved area.
Now let us consider the second point source, located at (x1, z2). The possible mapping pixel ranges of the second point source for the left and right cameras are then determined from the imaging geometry and the diffraction spot size δ = 1.22λf/A of the imaging lens.
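Continuing the same sketch (and reusing the hypothetical psf_center_and_pixel helper above), the candidate pixel range of the second point source can be written as its PSF center widened by ±δ before conversion to pixel indices; treating δ as a symmetric spread is an assumption, not the exact expression of the original equations.

def mapping_pixel_range(x_src, z_src, z_cam, b, f, c, delta):
    # delta = 1.22 * wavelength * f / A, the diffraction spread of the imaging lens
    u, _ = psf_center_and_pixel(x_src, z_src, z_cam, f, c, b)
    n_min = int(np.ceil((u - delta) / c))     # lowest pixel the blurred PSF can reach
    n_max = int(np.ceil((u + delta) / c))     # highest pixel the blurred PSF can reach
    return n_min, n_max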
Then, we find the unresolvable pixel areas for the left and right cameras. The unresolvable pixel area of the second point source for each mapping pixel is calculated by ray back-projection onto the plane of the two point sources, as shown in Fig. 5, which yields the unresolvable ranges for the left and right cameras, respectively.
FIG. 5. Calculation diagram of the unresolvable pixel area.
Finally, to calculate the depth resolution, we find the common region of the unresolvable pixel areas calculated for all stereo camera positions. The depth resolution can be regarded as the common intersection of all unresolvable pixel ranges, and the depth resolution of the ADSS system is obtained as the extent of this intersection.
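A hedged end-to-end sketch of this calculation, under the same assumed pinhole geometry as above: each camera's unresolvable pixel area is back-projected onto the line through the two point sources, and the depth resolution is taken as the length of the intersection of these ranges over all cameras. This mirrors the intersection idea described here but is not a reproduction of the paper's equations.

def unresolvable_z_range(x1, z1, z_cam, b, f, c, delta):
    # Back-project the pixel (widened by delta) that recorded the first PSF onto the line x = x1,
    # giving the interval of source depths that cannot be distinguished by this camera.
    u, n = psf_center_and_pixel(x1, z1, z_cam, f, c, b)
    u_lo, u_hi = (n - 1) * c - delta, n * c + delta
    if u_lo <= 0.0 <= u_hi:                   # pixel straddles this camera's axis:
        return -np.inf, np.inf                # it cannot constrain the depth by itself
    z_edges = sorted(z_cam + f * (x1 - b) / e for e in (u_lo, u_hi))
    return z_edges[0], z_edges[1]

def depth_resolution(x1, z1, cameras, f, c, delta):
    # cameras: list of (z_cam, b) entries, one per individual camera of every stereo pair
    lo, hi = -np.inf, np.inf
    for z_cam, b in cameras:
        r_lo, r_hi = unresolvable_z_range(x1, z1, z_cam, b, f, c, delta)
        lo, hi = max(lo, r_lo), min(hi, r_hi) # common intersection of all unresolvable ranges
    return max(hi - lo, 0.0)                  # extent of the intersection = depth resolution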
IV. MONTE CARLO SIMULATIONS AND RESULTS
For the proposed ADSS framework, Monte Carlo simulations were performed to calculate the depth resolution statistically using the two-point-source resolution analysis method. In each simulation, two point sources were located at a large longitudinal distance from the stereo camera, as shown in Fig. 4. The position of the first point source was selected at random, and the second point source was moved in the longitudinal direction. Table 1 lists the experimental conditions of the Monte Carlo simulation for the proposed ADSS system.
TABLE 1. Experimental conditions for the Monte Carlo simulation.
The random locations of the two point sources were selected over 4,000 trials, and the depth resolution for each experiment was calculated as the sample mean. We carried out simulations of the depth resolution of the ADSS framework for several parameters. The first experiment calculates the depth resolution as a function of the number of cameras; the result is shown in Fig. 6, plotted for several distances (x) between the optical axis and the point source plane. From these results, we observe that the depth resolution improves as x becomes larger. We also calculated the depth resolution for x = 0 (the on-axis case). A finite depth resolution exists along the optical axis in the ADSS method because 3D perspective information can still be recorded by either the left or the right camera, whereas no depth resolution exists on the axis in the conventional ADS system. As shown in Fig. 6, the depth resolution improves as the number of stereo cameras increases, because many cameras are more effective at resolving two point sources in space. This indicates that, in terms of depth resolution, using many low-resolution stereo cameras may be more useful for capturing 3D objects than using a single high-resolution stereo camera.
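A minimal Monte Carlo driver in the spirit of this procedure is sketched below; it draws the first point source at a random depth and averages the unresolvable interval computed by the hypothetical depth_resolution helper of Section III, instead of explicitly stepping the second point source. All parameter values are placeholders and do not correspond to Table 1.

import numpy as np

rng = np.random.default_rng(0)

# Placeholder system parameters (mm); not the conditions of Table 1.
f, c, delta, b, D, K = 50.0, 0.01, 0.003, 65.0, 200.0, 4
cameras = [(z, off) for z in np.linspace(0.0, D, K) for off in (0.0, b)]

trials, x = 4000, 100.0          # number of trials; lateral distance of the sources from the axis
samples = [depth_resolution(x, rng.uniform(1000.0, 2000.0), cameras, f, c, delta)
           for _ in range(trials)]
print(f"mean depth resolution over {trials} trials: {np.mean(samples):.3f} mm")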
FIG. 6. Simulation results according to the number of cameras for various distances between the optical axis and the point source plane.
Figure 7 shows the depth resolution as a function of the number of cameras and the focal length of the imaging lens for x = 0 mm and x = 100 mm. The depth resolution improves when a large number of cameras and a long focal length are used. The depth resolutions calculated while varying the camera pixel size, the total number of camera pixels, and the moving range of the stereo cameras are shown in Figs. 8-10. Improved depth resolution is obtained with a small pixel size and a large moving range, as shown in Figs. 8 and 9, in which large variations of the depth resolution can be seen when N = 2 (a single stereo pair). This is because, in that case, the depth resolution is calculated from the two-pixel information of a single stereo camera; the variation could be reduced by using a wider range of positions for the two point sources. On the other hand, Fig. 10 shows that the total number of sensor pixels is not related to the depth resolution.
FIG. 7. Simulation results according to the focal length of the imaging lens: (a) x = 0 mm, (b) x = 100 mm.
FIG. 8. Simulation results according to the camera pixel size: (a) x = 0 mm, (b) x = 100 mm.
FIG. 9. Simulation results according to the moving range of the stereo camera: (a) x = 0 mm, (b) x = 100 mm.
FIG. 10. Simulation results according to the total number of pixels of the stereo camera: (a) x = 0 mm, (b) x = 100 mm.
V. CONCLUSION
In conclusion, we have analyzed the depth resolution of various ADSS frameworks under fixed, constrained resources. To evaluate the system performance of ADSS, we considered system parameters including the number of cameras, the number of pixels, the pixel size, and the focal length. The computational simulations show that the depth resolution of an ADSS system improves with a larger number of cameras and a larger distance between the optical axis and the point source plane. The proposed analysis method may be a promising tool for designing practical ADSS systems.
Acknowledgements
This work was supported in part by the IT R&D program of MKE/KEIT [10041682, Development of high-definition 3D image processing technologies using advanced integral imaging with improved depth range], and in part by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (NRF-2012R1A1A2001153).
References
Okoshi T. 1976 Three-dimensional Imaging Techniques (Academic Press, New York, USA)
Ku J.-S. , Lee K.-M. , Lee S.-U. 2001 “Multi-image matching for a general motion stereo camera model,” Pattern Recognition 34 1701 - 1712
Stern A. , Javidi B. 2006 “Three-dimensional image sensing, visualization, and processing using integral imaging,” Proc. IEEE 94 591 - 607
Cho M. , Javidi B. 2008 “Three-dimensional tracking of occluded objects using integral imaging,” Opt. Lett. 33 2737 - 2739
Yeom S.-W. , Woo Y.-H. , Baek W.-W. 2011 “Distance extraction by means of photon-counting passive sensing combined with integral imaging,” J. Opt. Soc. Korea 15 357 - 361
Stern A. , Javidi B. 2003 “Three-dimensional image sensing and reconstruction with time-division multiplexed computational integral imaging,” Appl. Opt. 42 7036 - 7042
Arimoto H. , Javidi B. 2001 “Integral three-dimensional imaging with digital reconstruction,” Opt. Lett. 26 157 - 159
Lee J.-J. , Shin D. , Lee B.-G. , Yoo H. 2012 “3D optical microscopy method based on synthetic aperture integral imaging,” 3D Research 3 2 -
Navarro H. , Barreiro J.C. , Saavedra G. , Martinez-Corral M. , Javidi B. 2012 “High-resolution far-field integral-imaging camera by double snapshot,” Opt. Express 20 890 - 895
Yoo H. 2013 “Axially moving a lenslet array for high-resolution 3D images in computational integral imaging,” Opt. Express 21 8873 - 8878
Schulein R. , DaneshPanah M. , Javidi B. 2009 “3D imaging with axially distributed sensing,” Opt. Lett. 34 2012 - 2014
Shin D. , Javidi B. 2011 “3D visualization of partially occluded objects using axially distributed sensing,” J. Disp. Technol. 7 223 - 225
Hong S.-P. , Shin D. , Lee B.-G. , Kim E.-S. 2012 “Depth extraction of 3D objects using axially distributed image sensing,” Opt. Express 20 23044 - 23052
Shin D. , Javidi B. 2012 “Three-dimensional imaging and visualization of partially occluded objects using axially distributed stereo image sensing,” Opt. Lett. 37 1394 - 1396
Shin D. , Daneshpanah M. , Javidi B. 2012 “Generalization of 3D N-ocular imaging systems under fixed resource constraints,” Opt. Lett. 37 19 - 21
Shin D. , Javidi B. 2012 “Resolution analysis of N-ocular imaging systems with tilted image sensors,” J. Display Technol. 8 529 - 533
Cho M. , Javidi B. 2012 “Optimization of 3D integral imaging system parameters,” J. Display Technol. 8 357 - 360
Cho M. , Shin D. 2013 “Resolution analysis of axially distributed image sensing systems under equally constrained resources,” J. Opt. Soc. Korea 17 405 - 409