In this paper, we propose a novel framework to evaluate the depth resolution of axially distributed stereo sensing (ADSS) under fixed resource constraints. The framework evaluates the performance of ADSS systems with respect to sensing parameters such as the number of cameras, the total number of pixels, and the pixel size. Monte Carlo simulations of the proposed framework are performed and the evaluation results are presented.
Three-dimensional (3D) imaging techniques have become an important topic in computer vision, target tracking, object recognition, and related fields. Various methods for capturing and visualizing 3D objects in space have been studied, including integral imaging, synthetic aperture integral imaging (SAII), and axially distributed image sensing (ADS). Among them, an extended version of ADS, called axially distributed stereo sensing (ADSS), was reported; it is implemented with a stereo camera. In this method, the stereo camera is translated along its optical axis to obtain multiple image pairs, and computational reconstruction is performed by uniform superposition of the resized elemental image pairs. This overcomes the problem that, in the conventional ADS system, the collection of 3D information is not uniform across the sensor.
Recently, resolution analysis methods for various 3D imaging systems under equally constrained resources have been reported. In 2012, the N-ocular imaging system was first analyzed using a two-point-source resolution criterion. The lateral and longitudinal resolutions of N-ocular imaging systems were analyzed with respect to several factors such as the number of sensors, the pixel size, the imaging optics, and the relative sensor configuration. In 2013, we proposed a resolution analysis of the ADS method based on the same two-point-source resolution criterion and presented analysis results for the system parameters.
In this paper, we propose a new framework for performance evaluation of ADSS systems under equally constrained resources. The constrained parameters include a fixed total number of pixels, a fixed moving distance, and a fixed pixel size. For the resolution analysis of the proposed ADSS framework, we use the two-point-source resolution criterion, and we evaluate the depth resolution through Monte Carlo simulations.
II. REVIEW OF ADSS
In general, the ADSS method is composed of a pickup part and a digital reconstruction part. Fig. 1 shows the overall system structure of the ADSS. In the pickup part of the ADSS, shown in Fig. 1, we record an elemental image pair using a stereo camera whose two cameras are separated by a fixed baseline. We then capture multiple elemental image pairs by translating the stereo camera along its optical axis.
Fig. 1. Pickup process of the ADSS system.
With the recorded multiple elemental image pairs, 3D sliced images can be generated in the digital reconstruction part of the ADSS, shown in Fig. 2. The digital reconstruction of 3D objects is the inverse of the ADSS pickup process, and it can be implemented by an inverse mapping procedure through a virtual pinhole model, as shown in Fig. 2.
Fig. 2. Digital reconstruction process of the ADSS system.
Let us assume that the first pinhole is located at $z=0$ and the $k$-th pinhole is located at $z=z_k$. Let $E_k^{L}$ and $E_k^{R}$ denote the left and right images of the $k$-th elemental image pair, respectively. Then $P_k(x, y; z)$, the inversely mapped image of the $k$-th elemental image pair through the $k$-th pinhole at the reconstruction image plane $z$, becomes

$$P_k(x, y; z) = E_k^{L}\!\left(\frac{x}{M_k}, \frac{y}{M_k}\right) + E_k^{R}\!\left(\frac{x}{M_k}, \frac{y}{M_k}\right), \qquad M_k = \frac{z - z_k}{g}, \tag{1}$$

where $g$ is the distance between each pinhole and its sensor and $M_k$ is the magnification of the $k$-th pair. Equation (1) is the uniform superposition of the two resized elemental images. The final reconstructed 3D image at distance $z$ is obtained by summing all resized elemental image pairs:

$$R(x, y; z) = \frac{1}{2K} \sum_{k=1}^{K} P_k(x, y; z). \tag{2}$$
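The uniform superposition of resized elemental image pairs described above can be sketched in Python. This is a minimal sketch, not the authors' implementation: the nearest-neighbour resizing, the magnification model `(z - zk) / g`, and all parameter names are illustrative assumptions.

```python
import numpy as np

def reconstruct_plane(pairs, z_positions, z, g):
    """Superpose resized elemental image pairs at reconstruction distance z.

    pairs       : list of (left, right) 2D arrays (elemental image pairs)
    z_positions : axial pinhole positions z_k (first one at 0)
    z           : reconstruction plane distance from the first pinhole
    g           : pinhole-to-sensor distance (illustrative parameter)
    """
    h, w = pairs[0][0].shape
    recon = np.zeros((h, w))
    for (left, right), zk in zip(pairs, z_positions):
        m = (z - zk) / g  # assumed magnification for the k-th pinhole
        # Resize by m via nearest-neighbour index mapping (illustrative).
        ys = np.clip((np.arange(h) / m).astype(int), 0, h - 1)
        xs = np.clip((np.arange(w) / m).astype(int), 0, w - 1)
        for img in (left, right):
            recon += img[np.ix_(ys, xs)]
    return recon / (2 * len(pairs))  # uniform superposition average
```

With `K` pairs, the function averages `2K` back-projected images, so a constant scene reconstructs to the same constant at any plane.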
In fact, the conventional ADS system cannot capture correct 3D information near the optical axis, whereas the ADSS system overcomes this limitation and can reconstruct 3D images near the optical axis.
III. RESOLUTION ANALYSIS FOR ADSS SYSTEM
Fig. 3 shows the general framework of the ADSS system with the stereo camera located at $K$ different axial positions. In this framework, we use equally constrained resources regardless of the number of stereo-camera positions. To do so, let us assume that the total number of pixels $N$, the pixel size $c$, and the moving range of the stereo camera are fixed. Thus, if a total of $N$ pixels is available over all image sensors, each of the $2K$ cameras has $N/(2K)$ pixels. We also assume that the imaging lenses are identical, with the same focal length $f$ and the same diameter $D$.
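The equal split of the pixel budget can be made concrete with a small helper; the function name is ours and used only for illustration.

```python
def pixels_per_camera(N, K):
    """Share of the N-pixel budget given to each of the 2K cameras.

    Illustrative helper: assumes N divides evenly among the 2K cameras.
    """
    assert N % (2 * K) == 0, "sketch assumes an even split"
    return N // (2 * K)
```

For example, with a budget of 2,000 pixels, a single stereo pair (K = 1) gets two 1,000-pixel cameras, while K = 10 positions leave only 100 pixels per camera.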
Fig. 3. Framework for ADSS with K stereo cameras.
In the ADSS framework shown in Fig. 3, the stereo camera can be placed at $K$ different positions, and the moving range is limited. When $K=1$, a conventional stereo system is obtained, in which two high-resolution cameras with $N/2$ pixels each are used. When $K \gg 1$, an ADSS system with a multiple-camera structure is obtained, in which each camera has a low resolution of $N/(2K)$ pixels.
In this paper, we analyze the resolution performance of the ADSS framework shown in Fig. 3. To do so, we utilize the resolution analysis method based on the two-point-source resolution criterion, which we modify here for the stereo-camera case.
Fig. 4. Calculation diagram of the pixel position in each camera for the first point source.
For simplicity, we use one-dimensional notation. Assume that there are two closely spaced point sources in space, as shown in Fig. 4. We first find the mapping pixel index of the first point source, located at $(x_1, z_1)$, as shown in Fig. 4. The point spread function (PSF) of the imaging lens for the first point source is recorded by the stereo camera (left and right cameras) at each of the $K$ positions. The center position of the PSF for the first point source in the $k$-th stereo camera is

$$u_k^{L} = \frac{f\,(x_1 - a_k^{L})}{z_1 - z_k}, \qquad u_k^{R} = \frac{f\,(x_1 - a_k^{R})}{z_1 - z_k}, \tag{3}$$

where $f$ is the focal length of the imaging lens and $a_k^{L}$, $a_k^{R}$ are the lateral positions of the left and right cameras at the $k$-th position. For the point sources recorded into pixels of size $c$, the corresponding pixel indexes are

$$n_k^{L} = \left\lceil \frac{u_k^{L}}{c} \right\rceil, \qquad n_k^{R} = \left\lceil \frac{u_k^{R}}{c} \right\rceil, \tag{4}$$

where $\lceil \cdot \rceil$ is the rounding operator.
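The pinhole mapping from a point source to a pixel index can be sketched as follows; the parameter names mirror the notation above but are our own illustrative choices, not code from the paper.

```python
import math

def psf_pixel_index(x1, z1, zk, a, f, c):
    """Pixel index hit by the PSF center of a point source at (x1, z1).

    zk : axial position of the k-th stereo-camera position
    a  : lateral position of one camera of the pair (left or right)
    f  : focal length (pinhole-to-sensor distance)
    c  : pixel size
    Illustrative sketch of the pinhole projection model.
    """
    u = f * (x1 - a) / (z1 - zk)  # PSF center position on the sensor
    return math.ceil(u / c)       # rounding to a pixel index
```

Calling the function once per camera and per axial position yields the set of pixel indexes used in the resolution analysis.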
Next, we calculate the unresolvable pixel area for the mapped pixels. The unresolvable pixel area is the region in which two point sources cannot be separated because they fall within a single pixel. When the second point source is located close to the first one, the two sources can be resolved if at least one sensor registers the second source on a pixel adjacent to the pixel that registered the first PSF. But if the second PSF falls on the same pixel that recorded the PSF of the first point source, the two sources cannot be resolved. Based on this unresolvable area, we can calculate the depth resolution.
Now let us consider the second point source, located at $(x_2, z_2)$. The possible mapping pixels of the second point source in the left and right cameras are the indexes $n_k^{L}$ and $n_k^{R}$ of Eq. (4). We then find the unresolvable pixel area for the left and right cameras by back-projecting the boundaries of these pixels into the plane of the two point sources, as shown in Fig. 5. The resulting unresolvable depth intervals are, respectively,

$$U_k^{L} = \left[\, z_k + \frac{f\,(x_2 - a_k^{L})}{n_k^{L}\, c},\; z_k + \frac{f\,(x_2 - a_k^{L})}{(n_k^{L} - 1)\, c} \,\right], \qquad
U_k^{R} = \left[\, z_k + \frac{f\,(x_2 - a_k^{R})}{n_k^{R}\, c},\; z_k + \frac{f\,(x_2 - a_k^{R})}{(n_k^{R} - 1)\, c} \,\right]. \tag{5}$$
Fig. 5. Calculation diagram of the unresolvable pixel area.
Finally, to calculate the depth resolution, we find the common area of the unresolvable pixel areas obtained from all stereo cameras. The depth resolution can be defined as the length of the common intersection of all unresolvable intervals. Thus, the depth resolution of the ADSS system becomes

$$\Delta z = \left|\, \bigcap_{k=1}^{K} \left( U_k^{L} \cap U_k^{R} \right) \right|. \tag{6}$$
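The interval intersection described above is straightforward to sketch in code. This is a minimal sketch under simplifying assumptions (the function names are ours, and a positive numerator is assumed in the back-projection), not the authors' implementation.

```python
def unresolvable_depth_interval(x2, zk, a, f, c, n):
    """Depth interval over which a source at lateral position x2 stays
    inside pixel n of the camera at lateral position a and axial position zk,
    i.e. (n - 1) * c <= f * (x2 - a) / (z - zk) <= n * c.

    Illustrative sketch: assumes f * (x2 - a) > 0 and n > 1.
    """
    lo = zk + f * (x2 - a) / (n * c)
    hi = zk + f * (x2 - a) / ((n - 1) * c)
    return (lo, hi)

def depth_resolution(intervals):
    """Length of the common intersection of all unresolvable intervals."""
    lo = max(i[0] for i in intervals)
    hi = min(i[1] for i in intervals)
    return max(0.0, hi - lo)  # empty intersection yields zero length
```

Each stereo camera contributes two intervals (left and right); intersecting all of them shrinks the unresolvable range, which is why adding cameras improves the depth resolution.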
IV. MONTE CARLO SIMULATIONS AND RESULTS
For the proposed ADSS framework, Monte Carlo simulations were performed. We calculated the depth resolution statistically using the two-point-source resolution analysis method described above. In the Monte Carlo simulation, two point sources were placed far from the stereo camera in the longitudinal direction, as shown in Fig. 4. The position of the first point source was selected at random, and the second point source was moved in the longitudinal direction. Table 1 shows the experimental conditions of the Monte Carlo simulation for the proposed ADSS system.
Table 1. Experimental conditions for the Monte Carlo simulation.
We selected random locations of the two point sources over 4,000 trials of the Monte Carlo simulation, and the depth resolutions were calculated as sample means. We carried out simulations of the depth resolution of the ADSS framework for several parameters. The first experiment is the calculation of the depth resolution according to the number of cameras; the result is shown in Fig. 6. The results are plotted for several values of the distance $x$ between the optical axis and the point source plane. From these results, we observe that the depth resolution improves as $x$ becomes larger. We also calculated the depth resolution when $x=0$ (the optical-axis case). A finite depth resolution exists along the optical axis in the ADSS method because 3D perspective information can be recorded by either the left or the right camera, whereas no depth resolution exists along the optical axis in the conventional ADS system. As shown in Fig. 6, the depth resolution also improves as the number of stereo cameras increases, because using many cameras is more effective for resolving two point sources in space. This implies that, in terms of depth resolution, using more low-resolution stereo cameras may be more useful for capturing 3D objects than using a single high-resolution stereo camera.
Fig. 6. Simulation results according to the number of cameras for various distances between the optical axis and the point source plane.
Fig. 7 shows the depth resolution according to the number of cameras and the focal length of the imaging lens when $x=0$ mm and $x=100$ mm. The depth resolution improves when a large number of cameras and a long focal length are used. The calculated depth resolutions when the camera pixel size, the total number of camera pixels, and the moving range of the stereo camera are varied are shown in Figs. 8-10. An improved depth resolution is obtained when a small pixel size and a large moving range are used, as shown in Figs. 8 and 9, in which large variations of the depth resolution appear when only two cameras (a single stereo pair) are used. This is because the depth resolution is then calculated from the two-pixel information of a single stereo camera. This variation can be reduced by using more position ranges of the two point sources. However, the total number of sensor pixels is not related to the depth resolution, as shown in Fig. 10.
Fig. 7. Simulation results according to the focal length of the imaging lens: (a) x = 0 mm; (b) x = 100 mm.
Fig. 8. Simulation results according to the pixel size of the camera: (a) x = 0 mm; (b) x = 100 mm.
Fig. 9. Simulation results according to the moving range of the stereo camera: (a) x = 0 mm; (b) x = 100 mm.
Fig. 10. Simulation results according to the total pixel number of the stereo camera: (a) x = 0 mm; (b) x = 100 mm.
In conclusion, we have analyzed the depth resolution of various ADSS frameworks under fixed resource constraints. To evaluate ADSS system performance, we considered system parameters including the number of cameras, the number of pixels, the pixel size, and the focal length. The computational simulations show that the depth resolution of an ADSS system can be improved by using a large number of cameras and a large distance between the optical axis and the point source plane. The proposed analysis method may be a promising tool for designing practical ADSS systems.
This work was supported in part by the IT R&D program of MKE/KEIT [10041682, Development of high-definition 3D image processing technologies using advanced integral imaging with improved depth range] and in part by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (NRF-2012R1A1A2001153).