In this paper we propose an axially distributed image-sensing method with a wide-angle lens to capture a wide-area scene of 3D objects. A large amount of parallax information can be collected by translating the wide-angle camera along the optical axis. The recorded wide-area elemental images are calibrated by compensating for radial distortion. From these images we generate volumetric slice images using a computational reconstruction algorithm based on ray back-projection. To show the feasibility of the proposed method, we performed optical experiments on visualization of a partially occluded 3D object.
I. INTRODUCTION
The visualization of partially occluded 3D objects has been considered one of the most challenging problems in the 3D-vision field [1, 2]. To address this problem, several multi-perspective imaging approaches, including integral imaging and axially distributed image sensing (ADS), have been studied [3-9].
Integral imaging uses a planar pickup grid or a camera array. In contrast, the ADS method, implemented by translating a camera along its optical axis, records elemental images (EIs) from which digitally refocused plane images can be generated, enabling 3D visualization of partially occluded objects [10-14]. This method provides a relatively simple architecture for capturing the longitudinal perspective information of a 3D object.
However, the performance of the ADS method depends on how far the object is located from the optical axis. Because little parallax is collected for objects close to the optical axis, wide-area elemental images are needed to reconstruct better 3D slice images over a large field of view (FOV).
In this paper, we propose axially distributed image sensing with a wide-angle lens (WAL) to capture a wide-area scene of 3D objects. With this type of lens we can collect abundant parallax information. The wide-area EIs are recorded by translating the wide-angle camera along its optical axis, and are then calibrated to compensate for radial distortion. From the calibrated EIs we generate volumetric slice images using a computational reconstruction algorithm based on ray back-projection. To verify our idea, we carried out optical experiments to visualize a partially occluded 3D object.
II. SYSTEM CONFIGURATION
In general, a camera calibration process is needed for images captured with a camera fitted with a WAL. Therefore, to visualize correct 3D slice images in our ADS method, we introduce a camera calibration process to generate calibrated elemental images. To our knowledge, this calibration method has not previously been applied to the conventional ADS system.
Figure 1 shows the scheme of the proposed ADS system with a WAL. It is composed of three subsystems: (1) the ADS pickup, (2) a calibration process for the elemental images, and (3) a digital reconstruction process.
Scheme of the proposed ADS method.
 2.1. ADS Pickup Process
The ADS pickup of 3D objects in the proposed method is shown in Fig. 2. In contrast to the conventional method, the camera fitted with a WAL is translated along the optical axis. Let the focal length of the WAL be f. When the 3D objects are located at a distance z_1 from the first camera position, the wide-area EIs are captured by moving the camera along the optical axis. A total of K EIs can be recorded by moving the wide-angle camera K - 1 times, where Δz is the separation between two adjacent camera positions. The k-th EI is recorded at the camera position z_k = z_1 + (k - 1)Δz. Since each EI is captured at a different camera position, each contains the object's image at a different scale.
Pickup process for 3D objects by moving a camera with a wide-angle lens, according to the proposed ADS method.
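The pickup geometry above can be sketched in code, using the experimental values from Section III (z_1 = 350 mm, Δz = 1 mm, K = 150):

```python
# Sketch of the ADS pickup geometry: the k-th elemental image (k = 1..K)
# is recorded at z_k = z_1 + (k - 1) * dz along the optical axis.
# Values mirror the experiment in Section III: z_1 = 350 mm, dz = 1 mm, K = 150.

def camera_positions(z1_mm: float, dz_mm: float, K: int) -> list[float]:
    """Axial distance z_k from each camera position to the object plane."""
    return [z1_mm + (k - 1) * dz_mm for k in range(1, K + 1)]

positions = camera_positions(350.0, 1.0, 150)
print(positions[0], positions[-1])  # 350.0 499.0 -> total travel of 149 mm
```

The 149 mm total displacement printed here matches the experimental setup described in Section III.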
 2.2. Calibration Process
In a typical imaging system, lens distortion can usually be classified into three types: radial distortion, decentering distortion, and thin prism distortion [15]. However, for most lenses the radial component is predominant. We assume that our WAL produces predominantly radial distortion, and we ignore the other distortions in a recorded image. Therefore, the image distortion must be corrected by a calibration process before the digital reconstruction used in the proposed method. Our calibration process consists of two steps. In the first step, a radial distortion model is considered. Suppose that the center of distortion in the recorded image is (c_x, c_y). Let I_d be the distorted image and I_u the undistorted image. To correct the distorted image, the distorted point at (x_d, y_d) in I_d must be moved to the undistorted point (x_u, y_u) in I_u. If r_d and r_u are respectively defined as the distance from (c_x, c_y) to (x_d, y_d) and from (c_x, c_y) to (x_u, y_u), the coordinates (x_u, y_u) can be calculated by [16, 17]

x_u = c_x + (x_d - c_x)(1 + k_1 r_d^2 + k_2 r_d^4 + k_3 r_d^6),
y_u = c_y + (y_d - c_y)(1 + k_1 r_d^2 + k_2 r_d^4 + k_3 r_d^6).   (1)

From Eq. (1) we can see that the distortion model has a set of five parameters, Θ_d = [c_x, c_y, k_1, k_2, k_3].
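The first calibration step can be sketched as follows, assuming the standard polynomial radial model with coefficients k_1, k_2, k_3 about a fixed center (c_x, c_y); the parameter values for Θ_d below are purely illustrative:

```python
import numpy as np

# Undistort pixel coordinates with a polynomial radial model:
# (x_u, y_u) = (c_x, c_y) + (1 + k1*r^2 + k2*r^4 + k3*r^6) * ((x_d, y_d) - (c_x, c_y))
# Theta_d = [c_x, c_y, k1, k2, k3]; the values below are illustrative only.

def undistort_points(pts_d, theta_d):
    cx, cy, k1, k2, k3 = theta_d
    pts_d = np.asarray(pts_d, dtype=float)
    dx = pts_d[:, 0] - cx                    # offsets from the distortion center
    dy = pts_d[:, 1] - cy
    r2 = dx**2 + dy**2                       # squared radial distance r_d^2
    scale = 1.0 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    return np.stack([cx + scale * dx, cy + scale * dy], axis=1)

theta_d = [320.0, 240.0, 1.2e-7, 0.0, 0.0]   # hypothetical Theta_d for a 640x480 image
corners = undistort_points([[100.0, 80.0], [320.0, 240.0]], theta_d)
```

Note that the distortion center maps to itself, while points far from the center are pushed outward when k_1 > 0, which is the behavior expected when correcting barrel distortion.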
In the second step, the point (x_u, y_u) is projected to a new point (x_p, y_p) in the desired image using a projective transformation, which is the most general transformation that maps lines into lines. The new coordinates (x_p, y_p) are given by [16]

x_p = (m_0 x_u + m_1 y_u + m_2) / (m_6 x_u + m_7 y_u + 1),
y_p = (m_3 x_u + m_4 y_u + m_5) / (m_6 x_u + m_7 y_u + 1).   (2)

Here the projection parameters are Θ_p = [m_0, m_1, m_2, m_3, m_4, m_5, m_6, m_7]. Therefore, the parameter sets Θ_d and Θ_p must be found to obtain the corrected images.
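The projective step can be sketched similarly, assuming the standard eight-parameter form x_p = (m_0 x_u + m_1 y_u + m_2)/(m_6 x_u + m_7 y_u + 1) and its y counterpart; the Θ_p values below are illustrative placeholders, not fitted parameters:

```python
# Apply an eight-parameter projective transformation:
# x_p = (m0*x_u + m1*y_u + m2) / (m6*x_u + m7*y_u + 1)
# y_p = (m3*x_u + m4*y_u + m5) / (m6*x_u + m7*y_u + 1)
# Theta_p = [m0..m7]; the values below are illustrative only.

def project_point(x_u, y_u, theta_p):
    m0, m1, m2, m3, m4, m5, m6, m7 = theta_p
    w = m6 * x_u + m7 * y_u + 1.0            # projective denominator
    return (m0 * x_u + m1 * y_u + m2) / w, (m3 * x_u + m4 * y_u + m5) / w

# Near-identity Theta_p with a small shift, standing in for fitted values.
theta_p = [1.0, 0.0, 5.0, 0.0, 1.0, -3.0, 0.0, 0.0]
print(project_point(10.0, 20.0, theta_p))    # (15.0, 17.0)
```

With m_6 = m_7 = 0 the denominator is 1 and the map reduces to an affine transformation; the perspective terms only matter when the camera plane is tilted relative to the target.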
Before recording the 3D objects, we must find the two parameter sets Θ_d and Θ_p for a given system. To do so, a chessboard pattern is used with the point-correspondences method [16].
Figure 3 shows the ADS pickup of the chessboard pattern for the calibration process. With the chessboard pattern located at a fixed position, EIs are recorded by moving the wide-angle camera through its full range of motion, as shown in Fig. 3. The recorded EIs clearly exhibit radial image distortion. In this paper, the chessboard image is used to verify that the coordinate mapping of the distorted EIs is correct. The flowchart of the calibration process, based on the recorded chessboard images, is shown in Fig. 4. The first step is to extract the corner feature points from the recorded chessboard pattern. Using these feature points, we recover the mapping from the distorted EIs to the undistorted EIs. The Gauss-Newton method is used to find the two parameter sets Θ_d and Θ_p for the radial distortion and the projective transformation [16]. The computed parameters are then used to correct the image distortion in the wide-area EIs of the desired 3D objects. After repeating this computation for each EI of the chessboard pattern, the resulting set of calibration parameters is stored in the computer. Based on the stored parameter sets, the recorded EIs of the 3D objects are corrected.
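As a toy illustration of the Gauss-Newton fitting step, the sketch below fits only a single radial coefficient k_1 with a fixed distortion center on synthetic correspondences, rather than the full joint estimation of Θ_d and Θ_p used in the paper:

```python
import numpy as np

# Toy Gauss-Newton fit of one radial coefficient k1 (center held fixed),
# a simplified stand-in for jointly estimating Theta_d and Theta_p from
# chessboard corner correspondences. All data below are synthetic.

def fit_k1(pts_d, pts_u, center, iters=10):
    cx, cy = center
    d = np.asarray(pts_d, float) - (cx, cy)   # distorted offsets from center
    u = np.asarray(pts_u, float) - (cx, cy)   # target (undistorted) offsets
    r2 = (d**2).sum(axis=1)
    k1 = 0.0
    for _ in range(iters):
        pred = (1.0 + k1 * r2)[:, None] * d   # model: u ~ (1 + k1 r^2) d
        res = (u - pred).ravel()              # stacked residual vector
        J = (r2[:, None] * d).ravel()         # d(pred)/d(k1), flattened
        k1 += J @ res / (J @ J)               # Gauss-Newton update (1 parameter)
    return k1

# Synthetic correspondences generated with a known ground-truth k1.
rng = np.random.default_rng(0)
k1_true = 2.0e-7
pts_d = rng.uniform(-200, 200, size=(20, 2)) + (320, 240)
offs = pts_d - (320, 240)
pts_u = (320, 240) + (1 + k1_true * (offs**2).sum(axis=1))[:, None] * offs
k1_est = fit_k1(pts_d, pts_u, (320, 240))
```

Because this one-parameter model is linear in k_1, Gauss-Newton converges in a single step here; the full five-plus-eight-parameter problem in the paper is nonlinear and requires the iterative scheme.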
Optical pickup process to capture the elemental images, using a chessboard pattern for camera calibration.
Flowchart of the calibration process to find the best parameters for radial and projective transformation.
 2.3. Digital Reconstruction Process
The final process of our wide-angle ADS method is digital reconstruction using the calibrated EIs described in Section 2.2. In this process we generate slice-plane images according to the reconstruction distance.
Figure 5 shows the digital reconstruction process, which is based on an inverse-mapping procedure through a pinhole model [7]. Each wide-angle camera is modeled as a pinhole camera with its calibrated EI located at a distance g from the pinhole. We assume that the reconstruction plane is located at a distance z = L. Each EI is inversely projected through its corresponding pinhole onto the reconstruction plane at L. The i-th inversely projected EI is magnified by M_i = (L - z_i)/g. At the reconstruction plane, all inversely mapped EIs are superimposed on each other with their different magnification factors. In Fig. 5, E_i is the i-th EI with a size of p × q pixels, where p and q are the pixel counts for the width and height of the EI. I_L, the superposition of all inversely mapped EIs at the reconstruction plane L, can be calculated by

I_L(x, y) = (1/K) Σ_{i=1}^{K} U_i{E_i}(x, y),   (3)

where U_i{·} denotes upsampling of E_i by its magnification factor at the reconstruction plane L, and the size of I_L is M_1 p × M_1 q.
Digital reconstruction process based on ray back-projection in the proposed ADS method.
To reduce the computational load imposed by the large magnification factors, Eq. (3) is modified using a downsampling operator D_r, which reduces the image by a factor of r. The superimposed image is then given by

I_L(x, y) = (1/K) Σ_{i=1}^{K} D_r{U_i{E_i}}(x, y).   (4)

In Eq. (4), I_L is the reconstructed plane image obtained by superimposing all EIs at the reconstruction distance L. To generate the 3D volume information, the plane images must be reconstructed over the desired depth range, so the digital reconstruction process is repeated for each distance in that range.
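A minimal sketch of this back-projection reconstruction follows; nearest-neighbor upsampling stands in for U_i, toy constant-valued images stand in for the calibrated EIs, and the z, g, and L values are hypothetical:

```python
import numpy as np

# Back-projection reconstruction sketch: each EI is rescaled by its
# magnification M_i = (L - z_i) / g (nearest-neighbor upsampling for
# simplicity), centered on a common grid of size M_1*p x M_1*q, and the
# results are averaged, as in the superposition of Eqs. (3)-(4).

def reconstruct_plane(eis, z, g, L):
    M = [(L - zi) / g for zi in z]                 # magnification per EI
    H, W = eis[0].shape
    outH, outW = int(round(M[0] * H)), int(round(M[0] * W))
    acc = np.zeros((outH, outW))
    for ei, m in zip(eis, M):
        h, w = int(round(m * H)), int(round(m * W))
        # Nearest-neighbor upsample of the EI to its magnified size.
        ri = (np.arange(h) * H // h).astype(int)
        ci = (np.arange(w) * W // w).astype(int)
        up = ei[np.ix_(ri, ci)]
        # Center the magnified EI in the common reconstruction grid.
        r0, c0 = (outH - h) // 2, (outW - w) // 2
        acc[r0:r0 + h, c0:c0 + w] += up
    return acc / len(eis)

eis = [np.full((8, 8), float(i)) for i in range(3)]   # toy elemental images
plane = reconstruct_plane(eis, z=[100.0, 104.0, 108.0], g=2.0, L=120.0)
```

At points where all magnified EIs overlap (the center of the grid) the average draws on every camera position, which is what sharpens in-focus objects and washes out occluders at other depths.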
III. EXPERIMENTS AND RESULTS
We performed preliminary experiments to demonstrate the proposed ADS system for visualization of a partially occluded object. Figure 6 shows the experimental setup we implemented. As shown in Fig. 6, two scenes were captured at the same time. The first scene contains a single object, a chessboard pattern with a square size of 100 mm × 100 mm. The second scene contains two objects, a tree as the occluder and 'DSU' letter objects with a letter size of 100 mm × 70 mm, to demonstrate visualization of a partially occluded object. The chessboard pattern and the 'DSU' object are located 350 mm from the first wide-angle camera position, as shown in Fig. 6. The occluder is located 150 mm in front of the 'DSU' object.
Experimental setup to capture the elemental images of 3D objects.
We used a 1/4.5-inch CMOS camera with a resolution of 640 × 480 pixels. The WAL has a focal length of f = 1.79 mm and a maximum FOV angle of 131°. The wide-angle camera was translated in increments of Δz = 1 mm to record a total of K = 150 EIs, for a total displacement of 149 mm. Examples of the recorded EIs are shown in Fig. 7.
Examples of the recorded elemental images with image distortion: (a) chessboard pattern for the calibration process, (b) 3D objects.
After recording the EIs with the wide-angle camera, we applied the calibration process to them; each EI was corrected using its corresponding calibration parameters. The computed parameters for the radial distortion and the projective transformation (Θ_d and Θ_p) are shown in Tables 1 and 2, respectively. The calibrated EIs are shown in Fig. 8. From the result in the top left of Fig. 8, it is seen that the projection parameters were computed correctly. Based on these parameters, the EIs of the 3D objects were calibrated. In our calibration process, the calibrated images were cropped for the subsequent digital reconstruction.
Computed parameters Θ_d for radial distortion
Computed parameters Θ_p for projective transformation
Experimental results: (a) the 150th recorded elemental images of the chessboard pattern and the 3D objects before the calibration process, (b) the 150th calibrated elemental images after the calibration process.
Using the calibrated EIs shown in Fig. 8, we reconstructed slice-plane images of the 3D objects at different reconstruction distances. The 150 calibrated EIs were processed with the digital reconstruction algorithm described in Section 2.3. The slice image at the original position of the 3D objects is shown in Fig. 9. For comparison, we also include the results of the conventional ADS method without a calibration process. The experimental results show that the proposed method successfully visualizes a partially occluded object.
3D slice images reconstructed at the original position of the ‘DSU’ object: (a) conventional method without calibration process, (b) proposed method with calibration process.
IV. CONCLUSION
In conclusion, we have presented a wide-angle ADS system to capture a wide-area scene of 3D objects. Using a WAL, we can collect a large amount of parallax information over a large scene. A calibration process was introduced to compensate for the image distortion caused by this type of lens. We performed preliminary experiments on partially occluded 3D objects and demonstrated our idea successfully.
Acknowledgements
This research was supported by the BB21 project of Busan Metropolitan City and by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education, Science and Technology (Grant number: 20100023438).
References

1. A. Stern and B. Javidi, "Three-dimensional image sensing, visualization, and processing using integral imaging," Proc. IEEE 94, 591-607 (2006). DOI: 10.1109/JPROC.2006.870696
2. J.-H. Park, K. Hong, and B. Lee, "Recent progress in three-dimensional information processing based on integral imaging," Appl. Opt. 48, H77-H94 (2009). DOI: 10.1364/AO.48.000H77
3. S.-H. Hong and B. Javidi, "Three-dimensional visualization of partially occluded objects using integral imaging," J. Display Technol. 1, 354 (2005). DOI: 10.1109/JDT.2005.858879
4. M. DaneshPanah, B. Javidi, and E. A. Watson, "Three dimensional imaging with randomly distributed sensors," Opt. Express 16, 6368-6377 (2008). DOI: 10.1364/OE.16.006368
5. J. Maycock, C. P. McElhinney, B. M. Hennelly, T. J. Naughton, J. B. McDonald, and B. Javidi, "Reconstruction of partially occluded objects encoded in three-dimensional scenes by using digital holograms," Appl. Opt. 45, 2975-2985 (2006). DOI: 10.1364/AO.45.002975
6. D.-H. Shin, B.-G. Lee, and J.-J. Lee, "Occlusion removal method of partially occluded 3D object using sub-image block matching in computational integral imaging," Opt. Express 16, 16294-16304 (2008). DOI: 10.1364/OE.16.016294
7. Z. Zhou, Y. Yuan, X. Bin, and Q. Wang, "Enhanced reconstruction of partially occluded objects with occlusion removal in synthetic aperture integral imaging," Chin. Opt. Lett. 9, 041002 (2011). DOI: 10.3788/COL201109.041002
8. S.-W. Yeom, Y.-H. Woo, and W.-W. Baek, "Distance extraction by means of photon-counting passive sensing combined with integral imaging," Journal of the Optical Society of Korea 15, 357-361 (2011). DOI: 10.3807/JOSK.2011.15.4.357
9. Y. Rivenson, A. Rot, S. Balber, A. Stern, and J. Rosen, "Recovery of partially occluded objects by applying compressive Fresnel holography," Opt. Lett. 37, 1757-1759 (2012). DOI: 10.1364/OL.37.001757
10. D. Shin and B. Javidi, "3D visualization of partially occluded objects using axially distributed sensing," J. Display Technol. 7, 223-225 (2011). DOI: 10.1109/JDT.2011.2124441
11. D. Shin and B. Javidi, "Three-dimensional imaging and visualization of partially occluded objects using axially distributed stereo image sensing," Opt. Lett. 37, 1394-1396 (2012). DOI: 10.1364/OL.37.001394
12. S.-P. Hong, D. Shin, B.-G. Lee, and E.-S. Kim, "Depth extraction of 3D objects using axially distributed image sensing," Opt. Express 20, 23044-23052 (2012). DOI: 10.1364/OE.20.023044
13. Y. Piao, M. Zhang, D. Shin, and H. Yoo, "Three-dimensional imaging and visualization using off-axially distributed image sensing," Opt. Lett. 38, 3162-3164 (2013). DOI: 10.1364/OL.38.003162
14. M. Cho and D. Shin, "3D integral imaging display using axially recorded multiple images," Journal of the Optical Society of Korea 17, 410-414 (2013). DOI: 10.3807/JOSK.2013.17.5.410
15. G. P. Stein, "Lens distortion calibration using point correspondences," in Proc. CVPR (1997), pp. 602-608.
16. L. Romero, C. Gomez, and R. Stolkin, "Correcting radial distortion of cameras with wide angle lens using point correspondences," in Scene Reconstruction, Pose Estimation and Tracking (I-Tech, Vienna, Austria, 2007), p. 530.
17. N.-W. Kim, S.-J. Lee, B.-G. Lee, and J.-J. Lee, "Vision based laser pointer interaction for flexible screens," Lecture Notes in Computer Science 4551, 845-853 (2007). DOI: 10.1007/978-3-540-73107-8_93