Resolution Analysis of Axially Distributed Image Sensing Systems under Equally Constrained Resources
Journal of the Optical Society of Korea. 2013. Oct, 17(5): 405-409
Copyright © 2013, Journal of the Optical Society of Korea
  • Received : July 07, 2013
  • Accepted : October 10, 2013
  • Published : October 25, 2013
About the Authors
Myungjin Cho
Department of Electrical, Electronic, and Control Engineering, Hankyong National University, Ansung 456-749, Korea
mjcho@hknu.ac.kr
Donghak Shin
Institute of Ambient Intelligence, Dongseo University, Busan 617-716, Korea
Abstract
In this paper, a unifying framework to evaluate the depth resolution of axially distributed image sensing (ADS) systems under fixed resource constraints is proposed. The proposed framework enables one to evaluate system performance as a function of system parameters such as the number of cameras, the number of pixels, the pixel size, and so on. Monte Carlo simulations are carried out to evaluate ADS system performance as a function of these parameters. To the best of our knowledge, this is the first report of a quantitative analysis of ADS systems under fixed resource constraints.
I. INTRODUCTION
Axially distributed image sensing (ADS) systems are capable of acquiring 3D information about 3D objects or partially occluded 3D objects [1 - 6] . The ADS scheme provides a simple recording architecture in which a single camera is translated along its optical axis. The recorded high-resolution elemental images generate clearer 3D slice images of partially occluded 3D objects than the conventional integral imaging method [6 - 16] . The capacity of this method to collect 3D information depends on the object's distance from the optical axis: 3D information cannot be collected when the object is close to the optical axis (i.e., on axis).
Recently, resolution analysis methods to evaluate the performance of a 3D integral imaging system under equally constrained resources have been reported [17 - 19] . These methods analyze the lateral and longitudinal resolutions of a given 3D integral imaging system by considering several system parameters such as the number of sensors, pixel size, imaging optics, relative sensor configuration, and so on.
In this paper, we present a framework for the performance evaluation of ADS systems under equally constrained resources, such as a fixed total number of pixels, a fixed moving distance, and a fixed pixel size. For the given ADS framework, we analyze the depth and lateral resolutions based on the two-point-source resolution criterion [17] , in which two point sources and a spatial ray projection model are used. Monte Carlo simulations are carried out for this analysis, and the simulation results are presented.
II. RESOLUTION ANALYSIS FOR ADS SYSTEM
The typical ADS system is shown in Fig. 1 , where a single camera is moved along the optical axis. Different elemental images are captured along the optical axis when 3D objects are located at a certain distance, and each elemental image records the 3D objects at a different scale. The digital reconstruction process is the inverse of pickup: 3D images are obtained using computational reconstruction based on an inverse mapping procedure through a virtual pinhole model [3] . Since the ADS pickup structure strongly affects the resolution of the reconstructed 3D images, in this paper we evaluate the performance of ADS according to the pickup structure.
Fig. 1. Typical ADS pickup and reconstruction.
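The inverse-mapping reconstruction through a virtual pinhole can be sketched in one dimension as follows. This is a minimal toy model assuming the back-projection x = u(z − P)/f onto a reconstruction grid; the function name, grid handling, and geometry are illustrative and not the paper's implementation.

```python
def reconstruct_1d(elemental, positions, f, c, z, grid, dx):
    """Back-project each sensor pixel through its camera's virtual pinhole
    onto the reconstruction plane at depth z and average the overlaps.

    elemental[i][n]: intensity at pixel n of camera i (sensor centered on
    the optical axis); positions[i]: axial position of camera i.
    """
    acc = [0.0] * len(grid)
    cnt = [0] * len(grid)
    for img, P in zip(elemental, positions):
        half = len(img) // 2
        for n, val in enumerate(img):
            u = (n - half) * c             # sensor coordinate of pixel n
            x = u * (z - P) / f            # back-projection to depth z
            k = round((x - grid[0]) / dx)  # nearest reconstruction sample
            if 0 <= k < len(grid):
                acc[k] += val
                cnt[k] += 1
    return [a / m if m else 0.0 for a, m in zip(acc, cnt)]
```

A point source recorded by several cameras back-projects to the same lateral position only at its true depth, which is what makes the reconstructed slice image sharp there.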
Figure 2 illustrates the general framework of an ADS system with N cameras for the resolution analysis. To keep the resources equally constrained regardless of the number of cameras N , we assume that the total number of pixels ( K ), the pixel size ( c ), and the moving range ( R ) are fixed. Thus, the number of pixels of each camera becomes K/N . Here, the imaging lenses, whose focal length and diameter are f and A , respectively, are identical because the same camera is moved along the optical axis.
Fig. 2. Framework for ADS with N cameras.
In the ADS framework shown in Fig. 2 , we can vary the number of cameras N to implement generalized ADS systems. For example, when N = 2 (two cameras), the conceptual design is shown in Fig. 3 , where each image sensor is composed of K /2 pixels and the moving step between cameras is R . In general, the N -camera ADS system is designed as shown in Fig. 2 , where each camera has an image sensor with K/N pixels and the moving step is R /( N - 1).
Fig. 3. ADS system when N = 2.
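The equal-resource parameterization above (a fixed pixel budget K and moving range R shared by N cameras) can be sketched as follows; ads_layout is a hypothetical helper for illustration, not code from the paper.

```python
def ads_layout(K, R, N):
    """Split a fixed pixel budget K and moving range R among N cameras.

    Returns the per-camera pixel count K/N and the axial position of each
    camera, spaced by R/(N - 1) along the optical axis (a single camera
    sits at the origin).
    """
    pixels_per_camera = K // N
    if N == 1:
        positions = [0.0]
    else:
        step = R / (N - 1)
        positions = [i * step for i in range(N)]
    return pixels_per_camera, positions

pixels, P = ads_layout(K=2000, R=100.0, N=5)
# 5 cameras, 400 pixels each, at axial positions [0, 25, 50, 75, 100]
```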
In order to analyze the resolution performance of the ADS framework of Fig. 2 , we utilize the resolution analysis method based on the two-point-source resolution criterion [17] , which is defined as the ability to distinguish two closely spaced point sources. The principle of the analysis can be explained in three steps, as shown in Fig. 4 . For simplicity, we use one-dimensional notation and assume that there are two closely spaced point sources in space.
Fig. 4. (a) Calculation of the pixel position of the first point source for each camera and (b) calculation of the unresolvable pixel area using the second point source.
The first step is to find the mapping pixel index of the first point source, located at ( x1, z1 ), as shown in Fig. 4(a) . The point spread function (PSF) of the first point source is recorded by each image sensor, and the center position of the PSF in the i th image sensor is

u1,i = f x1 / ( z1 - Pi ),

where f is the focal length of the imaging lens and Pi is the position of the i th imaging lens. The recorded point source is then pixelated over the K/N sensor pixels, and its pixel index is calculated as

n1,i = round( u1,i / c ),

where round(·) is the rounding operator and c is the pixel size.
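Assuming the simple pinhole projection u = f x1/(z1 - Pi) followed by pixelation, this first step can be sketched as follows (the function name and parameter values are illustrative):

```python
def psf_pixel_index(x1, z1, P_i, f, c):
    """Pixel index recording the PSF center of a point source at (x1, z1)
    for a camera whose lens (pinhole) sits at axial position P_i."""
    u = f * x1 / (z1 - P_i)   # PSF center position on the sensor plane
    return round(u / c)       # pixelation by the rounding operator

# f = 50, c = 0.01, camera at the origin: u = 50*10/1000 = 0.5 -> pixel 50
n0 = psf_pixel_index(10.0, 1000.0, 0.0, 50.0, 0.01)
# Same source, camera moved to P_i = 50: u = 500/950 ~ 0.526 -> pixel 53
n1 = psf_pixel_index(10.0, 1000.0, 50.0, 50.0, 0.01)
```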
The second step is to calculate the unresolvable pixel area for the mapping pixels obtained in the first step. The unresolvable pixel area determines whether the two point sources can be separated. That is, when the second point source is located close to the first one, the two sources can be resolved if the second point source is registered by at least one sensor in a pixel adjacent to the pixel that registered the first PSF; if instead the second PSF falls on the same pixel that recorded the first PSF in every sensor, the two sources cannot be resolved. In this analysis, the unresolved area is used to determine the resolution. The second point source therefore remains unresolvable in the i th sensor only while its sensor position u2,i maps to the same pixel, i.e., while

( n1,i - 1/2 ) c <= u2,i < ( n1,i + 1/2 ) c.

Fig. 5. Procedure for calculation of the depth resolution using two-point-source resolution analysis.
We can calculate the unresolvable area of the second point source by back-projecting this pixel range through the i th lens into the plane of the two point sources. For the longitudinal (depth) case, the unresolvable axial range of the i th camera is

Zi = [ Pi + f x1 / (( n1,i + 1/2 ) c ), Pi + f x1 / (( n1,i - 1/2 ) c ) ],

where n1,i is the mapping pixel index of the first point source in the i th sensor.
In the third step of the resolution analysis, we find the common area of the unresolvable areas calculated for all cameras. The depth resolution can be considered as the common intersection of all unresolvable ranges; thus, the depth resolution becomes

Δz = | Z1 ∩ Z2 ∩ … ∩ ZN |,

where Zi denotes the unresolvable axial range obtained for the i th camera.
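Operationally, the three steps amount to this: the second source is unresolvable only while every camera maps it to the same pixel as the first source, and the depth resolution is set by the first camera to separate them. A hedged one-sided search sketch, assuming the pinhole-plus-pixelation model above (step size and geometry are our choices):

```python
def pixel_index(x, z, P_i, f, c):
    # Pinhole projection u = f*x/(z - P) followed by pixelation (assumed).
    return round(f * x / (z - P_i) / c)

def depth_resolution(x1, z1, positions, f, c, dz_step=0.01, dz_max=500.0):
    """Smallest axial offset dz at which at least one camera maps the
    second source (x1, z1 + dz) to a different pixel than the first."""
    ref = [pixel_index(x1, z1, P, f, c) for P in positions]
    dz = dz_step
    while dz < dz_max:
        if any(pixel_index(x1, z1 + dz, P, f, c) != r
               for P, r in zip(positions, ref)):
            return dz
        dz += dz_step
    return dz_max

# Three cameras spread over R = 100; the farthest camera (largest
# perspective change) is the first to separate the two sources.
dz = depth_resolution(10.0, 1000.0, [0.0, 50.0, 100.0], 50.0, 0.01)
# -> approx 0.91 for this geometry
```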
III. SIMULATIONS AND RESULTS
Computational experiments were carried out to evaluate the depth resolution of various ADS frameworks, using the two-point-source resolution analysis method described in the previous section.
Based on the two-point-source resolution analysis, Monte Carlo simulations were performed to statistically compute the depth resolution of various ADS frameworks. Two point sources are placed longitudinally to determine the depth resolution: the first point source is located randomly in space, and the second point source is moved in the longitudinal direction. The experimental conditions for the Monte Carlo simulations are shown in Table 1 .
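The Monte Carlo procedure described above can be sketched as follows. The parameter values and random-placement ranges here are illustrative and do not reproduce Table 1; note that the total pixel count does not enter the depth computation, consistent with the paper's observation that it does not affect depth resolution.

```python
import random

def pixel_index(x, z, P_i, f, c):
    # Pinhole projection followed by pixelation (assumed model).
    return round(f * x / (z - P_i) / c)

def depth_resolution(x1, z1, positions, f, c, dz_step=0.05, dz_max=500.0):
    # Smallest axial offset at which at least one camera separates the
    # second source from the first (one-sided search).
    ref = [pixel_index(x1, z1, P, f, c) for P in positions]
    dz = dz_step
    while dz < dz_max:
        if any(pixel_index(x1, z1 + dz, P, f, c) != r
               for P, r in zip(positions, ref)):
            return dz
        dz += dz_step
    return dz_max

def monte_carlo_dz(N, R, f, c, g, trials=100, seed=1):
    # Average depth resolution over randomly placed first point sources.
    rng = random.Random(seed)
    step = R / (N - 1) if N > 1 else 0.0
    positions = [i * step for i in range(N)]
    total = 0.0
    for _ in range(trials):
        x1 = g * (0.5 + rng.random())       # lateral position near distance g
        z1 = 1000.0 + 200.0 * rng.random()  # depth of the first source
        total += depth_resolution(x1, z1, positions, f, c)
    return total / trials
```

With the same seed, adding cameras over the same range R can only make the unresolvable interval shorter, so the mean Δz for N = 5 is never larger than for N = 2, mirroring the trend of Fig. 6(a).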
Table 1. Experimental constraints for Monte Carlo simulation.
Fig. 6. Simulation results according to (a) the distance between the optical axis and the point source plane, (b) the focal length of the imaging lens, (c) the pixel size of the image sensor, and (d) the total number of sensor pixels.
The Monte Carlo simulations were repeated for 2,000 trials in which the locations of the two point sources were selected randomly, and the depth resolution was calculated as the sample mean. The simulation results for the ADS frameworks were obtained while changing several system parameters. First, the depth resolution as a function of the number of cameras is shown in Fig. 6(a) for several distances ( g ) between the optical axis and the point source plane. The results indicate that the depth resolution improves as both N and g increase, because more 3D perspective information is recorded when N and g are large. The calculated depth resolutions when the focal length of the imaging lens, the sensor pixel size, and the total number of sensor pixels are varied are shown in Fig. 6(b) - (d) . In Fig. 6(b) , the focal length ( f ) of the imaging lens was varied: a large focal length yields a small depth resolution (∆z) owing to the small ray sampling interval. When the pixel size ( c ) of the image sensor is varied, the depth resolution improves for small pixel sizes, because a smaller pixel size provides more perspective information about the 3D objects. Finally, Fig. 6(d) shows the effect of the total number of sensor pixels; it is seen that the total number of sensor pixels does not affect the depth resolution.
IV. CONCLUSION
In conclusion, a resolution analysis of various ADS frameworks under fixed resource constraints has been presented. The system performance in terms of depth resolution was evaluated as a function of system parameters such as the number of cameras, the number of pixels, the pixel size, and the focal length. It has been shown that the depth resolution of an ADS system improves with a larger number of cameras and a larger distance between the optical axis and the point source plane. We expect that the proposed method will be useful for designing practical ADS systems.
Acknowledgements
This work was supported in part by the IT R&D program of MKE/KEIT [10041682, Development of high-definition 3D image processing technologies using advanced integral imaging with improved depth range], and in part by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (NRF-2012R1A1A2001153).
References
1. R. Schulein, M. DaneshPanah, and B. Javidi, "3D imaging with axially distributed sensing," Opt. Lett. 34, 2012-2014 (2009). http://dx.doi.org/10.1364/OL.34.002012
2. X. Xiao and B. Javidi, "Axially distributed sensing for three-dimensional imaging with unknown sensor positions," Opt. Lett. 36, 1086-1088 (2011). http://dx.doi.org/10.1364/OL.36.001086
3. D. Shin and B. Javidi, "3D visualization of partially occluded objects using axially distributed sensing," J. Display Technol. 7, 223-225 (2011). http://dx.doi.org/10.1109/JDT.2011.2124441
4. D. Shin and B. Javidi, "Visualization of 3D objects in scattering medium using axially distributed sensing," J. Display Technol. 8, 317-320 (2012). http://dx.doi.org/10.1109/JDT.2011.2176917
5. S.-P. Hong, D. Shin, B.-G. Lee, and E.-S. Kim, "Depth extraction of 3D objects using axially distributed image sensing," Opt. Express 20, 23044-23052 (2012). http://dx.doi.org/10.1364/OE.20.023044
6. D. Shin and B. Javidi, "Three-dimensional imaging and visualization of partially occluded objects using axially distributed stereo image sensing," Opt. Lett. 37, 1394-1396 (2012). http://dx.doi.org/10.1364/OL.37.001394
7. A. Stern and B. Javidi, "Three-dimensional image sensing, visualization, and processing using integral imaging," Proc. IEEE 94, 591-607 (2006). http://dx.doi.org/10.1109/JPROC.2006.870696
8. J.-H. Park, K. Hong, and B. Lee, "Recent progress in three-dimensional information processing based on integral imaging," Appl. Opt. 48, H77-H94 (2009). http://dx.doi.org/10.1364/AO.48.000H77
9. S.-C. Kim, C.-K. Kim, and E.-S. Kim, "Depth-of-focus and resolution-enhanced three-dimensional integral imaging with non-uniform lenslets and intermediate-view reconstruction technique," 3D Research 2, 6 (2011).
10. S.-W. Yeom, Y.-H. Woo, and W.-W. Baek, "Distance extraction by means of photon-counting passive sensing combined with integral imaging," J. Opt. Soc. Korea 15, 357-361 (2011). http://dx.doi.org/10.3807/JOSK.2011.15.4.357
11. H. Kakeya, "Realization of undistorted volumetric multiview image with multilayered integral imaging," Opt. Express 19, 20395-20404 (2011). http://dx.doi.org/10.1364/OE.19.020395
12. H. Yoo, "Artifact analysis and image enhancement in three-dimensional computational integral imaging using smooth windowing technique," Opt. Lett. 36, 2107-2109 (2011). http://dx.doi.org/10.1364/OL.36.002107
13. J.-Y. Jang, J.-I. Ser, and S. Cha, "Depth extraction by using the correlation of the periodic function with an elemental image in integral imaging," Appl. Opt. 51, 3279-3286 (2012). http://dx.doi.org/10.1364/AO.51.003279
14. Z. Kavehvash, M. Martinez-Corral, K. Mehrany, S. Bagheri, G. Saavedra, and H. Navarro, "Three-dimensional resolvability in an integral imaging system," J. Opt. Soc. Am. A 29, 525-530 (2012).
15. C.-G. Luo, Q.-H. Wang, H. Deng, X.-X. Gong, L. Li, and F.-N. Wang, "Depth calculation method of integral imaging based on Gaussian beam distribution model," J. Display Technol. 8, 112-116 (2012). http://dx.doi.org/10.1109/JDT.2011.2165831
16. G. Li, K.-C. Kwon, G.-H. Shin, J.-S. Jeong, K.-H. Yoo, and N. Kim, "Simplified integral imaging pickup method for real objects using a depth camera," J. Opt. Soc. Korea 16, 381-385 (2012). http://dx.doi.org/10.3807/JOSK.2012.16.4.381
17. D. Shin, M. Daneshpanah, and B. Javidi, "Generalization of 3D N-ocular imaging systems under fixed resource constraints," Opt. Lett. 37, 19-21 (2012). http://dx.doi.org/10.1364/OL.37.000019
18. D. Shin and B. Javidi, "Resolution analysis of N-ocular imaging systems with tilted image sensors," J. Display Technol. 8, 529-533 (2012). http://dx.doi.org/10.1109/JDT.2012.2202090
19. M. Cho and B. Javidi, "Optimization of 3D integral imaging system parameters," J. Display Technol. 8, 357-360 (2012). http://dx.doi.org/10.1109/JDT.2012.2189551