Improved 3D Resolution Analysis of N-Ocular Imaging Systems with the Defocusing Effect of an Imaging Lens
Journal of Information and Communication Convergence Engineering. 2015. Dec, 13(4): 270-274
Copyright © 2015, The Korean Institute of Information and Communication Engineering
This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
  • Received : November 04, 2015
  • Accepted : November 28, 2015
  • Published : December 31, 2015
About the Authors
Min-Chul Lee
Department of Computer Science and Electronics, Kyushu Institute of Technology, Fukuoka 820-8502, Japan
Kotaro Inoue
Department of Computer Science and Electronics, Kyushu Institute of Technology, Fukuoka 820-8502, Japan
Myungjin Cho
Department of Electrical, Electronic, and Control Engineering, IITC, Hankyong National University, Anseong 17579, Korea
mjcho@hknu.ac.kr

Abstract
In this paper, we propose an improved framework to analyze an N-ocular imaging system under fixed constrained resources such as the number of image sensors, the pixel size of the image sensors, the distance between adjacent image sensors, the focal length of the image sensors, and the field of view of the image sensors. This proposed framework takes into consideration, for the first time, the defocusing effect of the imaging lenses according to the object distance. Based on the proposed framework, an N-ocular imaging system such as integral imaging is analyzed in terms of depth resolution using two-point-source resolution analysis. By taking into consideration the defocusing effect of the imaging lenses using a ray projection model, it is shown that an improved depth resolution can be obtained near the central depth plane as the number of cameras increases. To validate the proposed framework, Monte Carlo simulations are carried out and the results are analyzed.
I. INTRODUCTION
Three-dimensional (3D) N-ocular imaging systems are considered a promising technology for capturing 3D information from a 3D scene [1-9]. They combine N imaging sensors: either stereo imaging (N = 2) or integral imaging (N >> 2). In the well-known stereo imaging technique, two image sensors are used, whereas a typical integral imaging system uses many image sensors. Various types of 3D imaging systems have been analyzed using ray optics and diffraction optics [10-16]. Recently, a method to compare the performance of such systems under equally constrained resources was proposed, because 3D resolution depends on several factors such as the number of sensors, the pixel size, and the imaging optics [14, 15]. Several constraints, including the number of cameras, the total parallax, the total number of pixels, and the pixel size, were considered in the calculation of 3D resolution, but the fact that the imaging lenses in front of the sensors produce a defocusing effect that depends on the object distance was ignored. In practice, this defocusing limits how accurately a real N-ocular imaging system can be analyzed.
In this paper, we propose an improved framework for evaluating the performance of N-ocular imaging systems that takes into consideration the defocusing effect of the imaging lens in each sensor. The analysis is based on two-point-source resolution criteria using a ray projection model from the image sensors. The defocusing effect according to the position of the point sources is introduced into the calculation of the depth resolution. To show the usefulness of the proposed framework, Monte Carlo simulations were carried out, and the resulting depth resolutions are presented here.
II. PROPOSED METHOD
A typical N-ocular imaging system is shown in Fig. 1. The N sensors are distributed at equal lateral intervals. For an objective analysis, the system design is considered to satisfy equally constrained resources: in Fig. 1, the total parallax (D), the pixel size (c), and the total number of pixels (K) are fixed. In addition, we assume that the diameter of the imaging lens is identical to the diameter of the sensor. Let the focal length and the diameter of the imaging lens be f and A, respectively. The number of cameras is varied from a stereo system with N = 2 (two cameras) to integral imaging with N >> 2 (N cameras) under the N-ocular framework. When N = 2 (stereo imaging), the conceptual design of the proposed framework is shown in Fig. 1(a), where each image sensor is composed of K/2 pixels. On the other hand, Fig. 1(b) shows an N-ocular imaging system with N cameras (known as integral imaging), each composed of K/N pixels.
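To make the equally constrained layout concrete, the following is a minimal Python sketch (not from the paper) of how the N lens centers and the per-camera pixel budget follow from the fixed resources D and K; the numeric values of D and K are illustrative assumptions.

```python
# Minimal sketch of the equally constrained N-ocular layout.
# D (total parallax) and K (total number of pixels) are the fixed
# resources; the numeric values below are illustrative assumptions.
import numpy as np

def camera_positions(N, D):
    """Centers P_i of the N imaging lenses, spread evenly over the
    total parallax D and centered on the optical axis."""
    return np.linspace(-D / 2.0, D / 2.0, N)

def pixels_per_camera(K, N):
    """Each of the N sensors receives an equal share of the K pixels."""
    return K // N

D = 100.0   # total parallax in mm (assumed value)
K = 4000    # total number of pixels (assumed value)
for N in (2, 4, 8):
    print(N, pixels_per_camera(K, N), camera_positions(N, D))
```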
Fig. 1. Frameworks for N-ocular imaging systems: (a) N = 2, (b) N >> 2.
In general, the imaging lens used in the image sensor has a defocusing effect that depends on the distance of the 3D object, as shown in Fig. 2. We assume that the gap between the image sensor and the imaging lens is set to g. The central depth plane (CDP) is calculated by the lens formula [17]:
$$\frac{1}{g} + \frac{1}{z_g} = \frac{1}{f} \quad\Longrightarrow\quad z_g = \frac{fg}{g - f} \qquad (1)$$
where $z_g$ is the object distance of the CDP from the imaging lens. We now consider a point source at a different position ($z_2$) away from the CDP, as shown in Fig. 2(b). From the lens formula and the ray geometry, the diameter d of the defocus blur is given by [17]:
$$d = \frac{A_N\, g\, |z_2|}{z_g\,(z_g + z_2)} \qquad (2)$$
where $A_N$ is the diameter of the lens in an N-ocular imaging system and $z_2$ is the distance of the object from the CDP.
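As a numerical check of Eqs. (1) and (2) as reconstructed above, the sketch below computes the focal length and the defocus diameter; f is recovered from the g = 50.2 mm gap and the 12,000 mm CDP quoted in Section III, while the lens diameter A_N is an assumed value.

```python
# Sketch of Eqs. (1) and (2); all values in mm.
def focal_length(g, z_g):
    """Invert the lens formula 1/g + 1/z_g = 1/f of Eq. (1)."""
    return g * z_g / (g + z_g)

def defocus_diameter(A_N, g, z_g, z2):
    """Eq. (2): blur-circle diameter on the sensor for a point source
    located a signed distance z2 from the CDP (i.e., at z_g + z2)."""
    return A_N * g * abs(z2) / (z_g * (z_g + z2))

g, z_g = 50.2, 12000.0       # gap and CDP from Section III
f = focal_length(g, z_g)     # ~49.99 mm
A_N = 10.0                   # lens diameter (assumed value)
print(f, defocus_diameter(A_N, g, z_g, 500.0))  # d grows away from the CDP
```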
Fig. 2. Ray geometry of the imaging lens: (a) point source located at the central depth plane (CDP), (b) point source away from the CDP.
For the N-ocular imaging system shown in Fig. 1, we calculate the depth resolution using the proposed analysis method. To do so, we utilize two-point-source resolution criteria with spatial ray back-projection from the image sensors to the reconstruction plane. In our analysis, the defocus diameter of the imaging lens given by Eq. (2) is newly added to the analysis process previously described in [14].
The procedure of the proposed analysis, based on the resolution of two point sources, is shown in Fig. 3. First, we explain in detail the depth calculation using two point sources. The basic principle is shown in Fig. 4(a).
Fig. 3. Calculation procedure of the 3D resolution using two-point-source resolution analysis.
Fig. 4. (a) Two-point-source resolution model for depth resolution. (b) Ray projection of unresolvable depth ranges to calculate the depth resolution.
We define the depth resolution as the smallest distance that separates two closely spaced point sources. Two point sources are separable when their point spread functions (PSFs) are registered by two adjacent sensor pixels in at least one of the N image sensors. As shown in Fig. 4(a), the two point sources are assumed to be located along the z axis, with the first point source at $(x_1, z_1)$. The PSF of the first point source is recorded by an image sensor; note that the recorded beams are pixelated due to the discrete nature of the image sensor. The position of the pixel recording the PSF of the point source is represented by
$$s_{1i} = \left\lceil \frac{f\,(x_1 - P_i)}{c\, z_1} \right\rfloor \qquad (3)$$
where c is the pixel size of the sensor, f is the focal length of the imaging lens, $P_i$ is the position of the i-th imaging lens, and $\lceil\cdot\rfloor$ is the rounding operator.
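A one-line Python sketch of Eq. (3) as reconstructed above; the sign convention for $(x_1 - P_i)$ is part of the reconstruction and not confirmed by the paper.

```python
# Sketch of Eq. (3): the sensor pixel that records the PSF of a point
# source at (x1, z1) through the lens centered at P_i. The sign
# convention is an assumption of the reconstruction.
def recorded_pixel(x1, z1, P_i, f, c):
    return round(f * (x1 - P_i) / (c * z1))
```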
Next, we consider the second point source, as shown in Fig. 4(a), and test whether its PSF can be separated from the pixel that registered the first PSF. In this paper, we consider the defocusing effect at the positions of the two point sources as given by Eq. (2). In this case, when the center of the second PSF lies within pixel $s_{1i}$, the unresolvable pixel area is given by
$$\left[\, U_i^{\min},\; U_i^{\max} \,\right] = \left[\, c\,s_{1i} - \frac{c + \delta + d}{2},\;\; c\,s_{1i} + \frac{c + \delta + d}{2} \,\right] \qquad (4)$$
Here, $\delta$ is the size of the main lobe of the PSF, which is $1.22\lambda f / A_N$. Fig. 5 shows the variation of the unresolvable pixel area according to the defocusing effect at the positions of the two point sources. When the first point source is located near the CDP, the unresolvable pixel area is calculated as shown in Fig. 5(a); when the first point source is away from the CDP, it is calculated as shown in Fig. 5(b). Next, we apply spatial ray back-projection to all the calculated unresolvable pixel areas to compute the depth resolution, as shown in Fig. 4(b). When the i-th unresolvable pixel area is back-projected onto the z axis through its corresponding imaging lens, the projected range, which we call the unresolvable depth range, lies in the following interval:
$$z \in \left[\, z_i^{\min},\; z_i^{\max} \,\right] \qquad (5)$$
where
$$z_i^{\min} = \frac{f\,(x_1 - P_i)}{U_i^{\max}}, \qquad z_i^{\max} = \frac{f\,(x_1 - P_i)}{U_i^{\min}} \qquad (6)$$
Fig. 5. Calculation of the unresolvable pixel area (a) near the CDP and (b) away from the CDP.
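The following sketch combines Eqs. (4)-(6) as reconstructed above: it forms the unresolvable pixel area around $s_{1i}$ and back-projects its bounds onto the z axis. The guard for a pixel area straddling the lens axis is an added assumption to keep the sketch well defined.

```python
# Sketch of Eqs. (4)-(6): unresolvable pixel area and its back-projection.
def unresolvable_depth_range(x1, z1, P_i, f, c, delta, d):
    s = recorded_pixel(x1, z1, P_i, f, c)      # Eq. (3)
    half = (c + delta + d) / 2.0               # half-width of Eq. (4)
    u_min, u_max = c * s - half, c * s + half  # sensor-plane bounds, Eq. (4)
    if u_min <= 0.0 <= u_max:
        # The pixel area straddles the lens axis: back-projection covers
        # all depths, so this sensor alone cannot resolve the pair.
        return (0.0, float("inf"))
    a = f * (x1 - P_i)
    # A source at (x1, z) images at u = a / z, so inverting the bounds
    # gives the unresolvable depth interval of Eqs. (5) and (6).
    z_lo, z_hi = sorted((a / u_max, a / u_min))
    return (z_lo, z_hi)
```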
The unresolvable depth ranges associated with all N cameras are calculated for a given point $x_1$. The two point sources are resolved as soon as at least one image sensor can distinguish them, so the unresolvable region is the common intersection of all N unresolvable depth ranges. The depth resolution then becomes
$$\Delta z = \max\!\left(0,\; \min_{1 \le i \le N} z_i^{\max} - \max_{1 \le i \le N} z_i^{\min}\right) \qquad (7)$$
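Under the reconstruction above, Eq. (7) reduces to the length of the interval common to all N ranges, as in this short sketch:

```python
# Sketch of Eq. (7): depth resolution as the length of the common
# intersection of the N unresolvable depth ranges (zero if at least
# one sensor resolves the two point sources).
def depth_resolution(ranges):
    z_lo = max(r[0] for r in ranges)
    z_hi = min(r[1] for r in ranges)
    return max(0.0, z_hi - z_lo)
```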
III. EXPERIMENTS AND RESULTS
In order to statistically compute the depth resolution of Eqs. (5)-(7), we used Monte Carlo simulations. The experimental parameters are shown in Table 1. We first set the gap between the sensor and the imaging lens to 50.2 mm, which corresponds to a CDP at 12,000 mm. The first point source is placed near the CDP, and the position of the second point source is then moved randomly in the longitudinal direction to calculate the depth resolution. Under equally constrained resources, we carried out the simulation of the depth resolution, repeating it for 4,000 trials with different random positions of the point sources, where z (the range) varies from 11,000 mm to 13,000 mm and x from -100 mm to 100 mm. We averaged all of the calculated depth resolutions.
Table 1. Experimental parameters for Monte Carlo simulations
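A hedged end-to-end sketch of the Monte Carlo experiment, reusing the helper functions above; the pixel size c, the wavelength lam, and the aperture split A_N = D/N are assumed values standing in for Table 1, whose contents were not recoverable from the source.

```python
# Monte Carlo sketch of Section III, reusing the helpers defined above.
import numpy as np

def average_depth_resolution(N, trials=4000, D=100.0, g=50.2,
                             z_g=12000.0, c=0.01, lam=0.5e-3):
    """Average depth resolution over random source positions; c, lam,
    and A_N = D / N are assumptions standing in for Table 1."""
    rng = np.random.default_rng(0)
    f = g * z_g / (g + z_g)                # Eq. (1)
    A_N = D / N                            # assumed per-camera aperture
    delta = 1.22 * lam * f / A_N           # PSF main lobe (Section II)
    P = np.linspace(-D / 2.0, D / 2.0, N)  # lens centers
    total = 0.0
    for _ in range(trials):
        x1 = rng.uniform(-100.0, 100.0)      # mm, as in Section III
        z1 = rng.uniform(11000.0, 13000.0)   # mm, as in Section III
        d = A_N * g * abs(z1 - z_g) / (z_g * z1)  # Eq. (2), z2 = z1 - z_g
        ranges = [unresolvable_depth_range(x1, z1, p, f, c, delta, d)
                  for p in P]
        total += depth_resolution(ranges)    # Eq. (7)
    return total / trials

for N in (2, 4, 8, 16):
    print(N, average_depth_resolution(N))
```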
Fig. 6 shows the simulation results for the depth resolution with different distances of the first point source according to the number of cameras. As the number of cameras increases, the depth resolution value decreases, i.e., the resolution improves. Fig. 7 shows the results according to the distance of the first point source.
Fig. 6. Depth resolution according to the number of cameras.
Fig. 7. Depth resolution according to the object distance and the number of cameras.
The depth resolution is calculated by averaging the common intersection of all unresolvable depth ranges produced by the N cameras; therefore, a larger N may produce little variation, as shown in Fig. 7. Fig. 7 also shows that the minimum (best) depth resolution was obtained at z = 12,000 mm, because the CDP is at 12,000 mm in this experiment. As the first point source moves further from the CDP, the depth resolution worsens. We also investigated the characteristics of the depth resolution by changing the pixel size of the image sensors. Fig. 8 presents the results of this analysis: the depth resolution improved as the pixel size decreased.
Fig. 8. Depth resolution for various pixel sizes.
IV. CONCLUSIONS
To conclude, we have presented an improved framework for analyzing N-ocular imaging systems under fixed constrained resources. The proposed analysis includes the defocusing effect of the imaging lenses in the calculation of the depth resolution. We have investigated the system performance in terms of depth resolution as a function of sensing parameters such as the number of cameras, the distance of the point sources from the CDP, and the pixel size. The results reveal that the depth resolution improves when the number of sensors is large and the object is located near the CDP. We expect that our improved analysis will be useful for designing practical N-ocular imaging systems.
Acknowledgements
This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2013R1A1A2057549).
BIO
Min-Chul Lee
earned a B.S. degree in telecommunication engineering from Pukyong National University, Busan, Korea, in 1996, and M.S. and Ph.D. degrees from Kyushu Institute of Technology, Fukuoka, Japan, in 2000 and 2003. He is an assistant professor at Kyushu Institute of Technology in Japan. His research interests include medical imaging, blood flow analysis, 3D displays, 3D integral imaging, and 3D biomedical imaging.
Kotaro Inoue
earned a B.S. degree from Kyushu Institute of Technology, Fukuoka, Japan, in 2015. He is a master’s student at Kyushu Institute of Technology in Japan. His research interests include visual feedback control, 3D displays, and 3D integral imaging.
Myungjin Cho
earned B.S. and M.S. degrees in telecommunication engineering from Pukyong National University, Busan, Korea, in 2003 and 2005, and M.S. and Ph.D. degrees in electrical and computer engineering from the University of Connecticut, Storrs, CT, USA, in 2010 and 2011, respectively. He is an assistant professor at Hankyong National University in Korea. He worked as a researcher at Samsung Electronics in Korea from 2005 to 2007. His research interests include 3D displays, 3D signal processing, 3D biomedical imaging, 3D photon counting imaging, 3D information security, 3D object tracking, and 3D underwater imaging.
References
Lippmann G. 1908 “Épreuves réversibles donnant la sensation du relief,” Journal de Physique Théorique et Appliquée 7 (1) 821 - 825
Burckhardt C. B. 1968 “Optimum parameters and resolution limitation of integral photography,” Journal of the Optical Society of America 58 (1) 71 - 76    DOI : 10.1364/JOSA.58.000071
Yang L. , McCormick M. , Davies N. 1988 “Discussion of the optics of a new 3-D imaging system,” Applied Optics 27 (21) 4529 - 4534    DOI : 10.1364/AO.27.004529
Stern A. , Javidi B. 2006 “Three-dimensional image sensing, visualization, and processing using integral imaging,” Proceedings of the IEEE 94 (3) 591 - 607    DOI : 10.1109/JPROC.2006.870696
Okano F. , Arai J. , Mitani K. , Okui M. 2006 “Real-time integral imaging based on extremely high resolution video system,” Proceedings of the IEEE 94 (3) 490 - 501    DOI : 10.1109/JPROC.2006.870687
Shin D. H. , Yoo H. 2008 “Scale-variant magnification for computational integral imaging and its application to 3D object correlator,” Optics Express 16 (12) 8855 - 8867    DOI : 10.1364/OE.16.008855
Martinez-Cuenca R. , Saavedra G. , Martinez-Corral M. , Javidi B. 2009 “Progress in 3-D multiperspective display by integral imaging,” Proceedings of the IEEE 97 (6) 1067 - 1077    DOI : 10.1109/JPROC.2009.2016816
Park J. H. , Baasantseren G. , Kim N. , Park G. , Kang J. M. , Lee B. 2008 “View image generation in perspective and orthographic projection geometry based on integral imaging,” Optics Express 16 (12) 8800 - 8813    DOI : 10.1364/OE.16.008800
Cho M. , Daneshpanah M. , Moon I. , Javidi B. 2011 “Three-dimensional optical sensing and visualization using integral imaging,” Proceedings of the IEEE 99 (4) 556 - 575    DOI : 10.1109/JPROC.2010.2090114
Burckhardt C. B. 1968 "Optimum parameters and resolution limitation of integral photography,” Journal of the Optical Society of America 58 (1) 71 - 76    DOI : 10.1364/JOSA.58.000071
Hoshino H. , Okano F. , Isono H. , Yuyama I. 1998 “Analysis of resolution limitation of integral photography,” Journal of the Optical Society of America 15 (8) 2059 - 2065    DOI : 10.1364/JOSAA.15.002059
Jin F. , Jang J. S. , Javidi B. 2004 “Effects of device resolution on three dimensional integral imaging,” Optics Letters 29 (12) 1345 - 1347    DOI : 10.1364/OL.29.001345
Kavehvash Z. , Mehrany K. , Bagheri S. 2011 “Optimization of the lens array structure for performance improvement of integral imaging,” Optics Letters 36 (20) 3993 - 3995    DOI : 10.1364/OL.36.003993
Shin D. , Daneshpanah M. , Javidi B. 2012 “Generalization of three-dimensional N-ocular imaging systems under fixed resource constraints,” Optics Letters 37 (1) 19 - 21    DOI : 10.1364/OL.37.000019
Shin D. , Javidi B. 2012 “Resolution analysis of N-ocular imaging systems with tilted image sensors,” Journal of Display Technology 8 (9) 529 - 533    DOI : 10.1109/JDT.2012.2202090
Cho M. , Javidi B. 2012 “Optimization of 3D integral imaging system parameters,” Journal of Display Technology 8 (6) 357 - 360    DOI : 10.1109/JDT.2012.2189551
Pertuz S. , Puig D. , Garcia M. A. 2013 “Analysis of focus measure operators for shape-from-focus,” Pattern Recognition 46 (5) 1415 - 1432    DOI : 10.1016/j.patcog.2012.11.011