In this paper, we present a three-dimensional (3-D) automatic target recognition system based on optical integral imaging reconstruction. In integral imaging, elemental images of the reference and target 3-D objects are obtained through a lenslet array or a camera array. Reconstructed 3-D images at various reconstruction depths can then be optically generated on the output plane by back-projecting these elemental images through a display panel. 3-D automatic target recognition can be implemented using computational integral imaging reconstruction and digital nonlinear correlation filters, but these methods require non-trivial computation time for reconstruction and recognition. Instead, we implement 3-D automatic target recognition using optical cross-correlation between the reconstructed 3-D reference and target images at the same reconstruction depth. Our method relies on an all-optical structure to realize a real-time 3-D automatic target recognition system. In addition, we use a nonlinear correlation filter to improve recognition performance. To validate the proposed method, we carry out optical experiments and report the recognition results.
Object recognition has many applications, including autonomous vehicles, artificial intelligence, and defense. Within this field, three-dimensional (3-D) automatic target recognition using integral imaging techniques has been an important research topic.
Integral imaging, first proposed by G. Lippmann in 1908, employs lenslet or camera arrays to capture and display 3-D images. Each lenslet provides a different perspective of the subject, generating an elemental image for capture. The resulting array of 2-D images, illuminated from behind and projected through another lenslet array, displays a 3-D reconstruction. The technique provides full parallax, continuous viewing points, and relatively high depth resolution without the need for special glasses, all desirable characteristics.
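As a toy illustration of the pickup step described above, the sketch below models each lenslet as a pinhole viewing a planar scene from a laterally shifted viewpoint, so each elemental image is a shifted crop of the scene. The array size, shift step, and function name are illustrative assumptions, not part of the original system.

```python
import numpy as np

def pickup_elemental_images(scene, rows, cols, step):
    """Toy pinhole-model pickup: each lenslet records a laterally
    shifted crop of a planar scene (one elemental image per lenslet).

    scene : 2-D array, the planar object
    rows, cols : lenslet array dimensions
    step : per-lenslet viewpoint shift in pixels (disparity)
    Returns an array of shape (rows, cols, h, w) of elemental images.
    """
    h = scene.shape[0] - step * (rows - 1)
    w = scene.shape[1] - step * (cols - 1)
    eis = np.empty((rows, cols, h, w))
    for r in range(rows):
        for c in range(cols):
            # Each lenslet sees the scene from a shifted viewpoint.
            eis[r, c] = scene[r * step:r * step + h, c * step:c * step + w]
    return eis
```

In a real pickup, the disparity between elemental images depends on object depth; this fixed-step model corresponds to a single object plane.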
The reconstruction mechanism itself can be used to implement forms of automatic target recognition. For example, in one reported method, the authors locate the position of target objects in 3-D space. They use the fact that computational integral imaging reconstruction (CIIR) can output a range of 3-D reconstructions, each at a different reconstruction depth. Advanced correlation filters are then applied to the range of reconstructed images created by CIIR to complete the automatic target recognition task.
In practice, such methods are difficult to apply in real time: the computation time and resources required for 3-D reconstruction limit their usefulness. An all-optical structure for 3-D automatic target recognition offers an alternative that avoids this computational cost.
This paper proposes such an all-optical integral imaging approach. In our method, we optically reconstruct 3-D reference and target images. We then perform automatic target recognition by applying a nonlinear correlation filter to the reference and target images. To assess the proposed method, we describe the optical experiments performed and present the recognition results obtained.
II. REVIEW OF CIIR-TYPE 3-D OBJECT CORRELATION METHOD
Fig. 1 illustrates the principle of the conventional CIIR-type 3-D object correlation method. The process is separated into two main stages: pickup and recognition. In the pickup stage, we assume that zr, the distance between the reference object and the lenslet array, is known, and that zo, the distance between the target object and the lenslet array, is unknown or arbitrary. For both the reference and target objects, elemental images created by the lenslet array are recorded by the pickup device. In the recognition stage, a CIIR technique reconstructs the image of the reference object at the known reconstruction depth and images of the target object at a range of reconstruction depths.
. CIIR can be implemented by computational simulation of ray optics.
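The ray-optics simulation behind CIIR can be sketched as a shift-and-average back-projection of the elemental images onto a chosen depth plane. The following is a minimal sketch under a simplified planar back-projection model with illustrative parameter names; it is not the authors' implementation.

```python
import numpy as np

def ciir_reconstruct(eis, z, g, pitch_px):
    """Minimal CIIR sketch: back-project elemental images onto the plane
    at depth z by shift-and-average (magnification not modelled).

    eis      : array (rows, cols, h, w), one elemental image per lenslet
    z        : reconstruction depth (same unit as g)
    g        : gap between lenslet array and elemental-image plane
    pitch_px : lenslet pitch expressed in elemental-image pixels
    At depth z, neighbouring projections overlap with a relative shift of
    pitch_px * g / z pixels, so detail at that depth reinforces while
    out-of-plane detail averages out (blurs).
    """
    rows, cols, h, w = eis.shape
    shift = pitch_px * g / z
    H = int(round(h + shift * (rows - 1)))
    W = int(round(w + shift * (cols - 1)))
    acc = np.zeros((H, W))
    cnt = np.zeros((H, W))
    for r in range(rows):
        for c in range(cols):
            y = int(round(r * shift))
            x = int(round(c * shift))
            acc[y:y + h, x:x + w] += eis[r, c]
            cnt[y:y + h, x:x + w] += 1
    return acc / np.maximum(cnt, 1)   # average overlapping contributions
```

Sweeping z over a range of candidate depths yields the stack of depth-plane reconstructions that the correlation stage consumes.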
Fig. 1. CIIR-type 3-D object correlation method. (a) Storing of reconstructed 3-D image for reference, (b) reconstruction of target images at various depths, followed by cross-correlation with reference.
Fig. 1(a) depicts the processing of the reference object, and Fig. 1(b) the steps applied to the target object. Fig. 1(b) also depicts the automatic target recognition step, implemented using cross-correlation between the single reference 3-D image and the various target 3-D images.
When recognition is implemented with this method, a high correlation peak is obtained only at z = zo. Thus, this method can extract the depth information of a reconstructed target image that matches the depth of the reconstructed reference image.
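The depth-extraction idea above can be sketched numerically: correlate the stored reference image against target reconstructions at each candidate depth and keep the depth giving the strongest peak. This is a hedged sketch using a plain FFT-based linear cross-correlation; the function names are illustrative.

```python
import numpy as np

def correlation_peak(ref, tgt):
    """Peak magnitude of the circular cross-correlation, via FFT."""
    R = np.fft.fft2(ref)
    T = np.fft.fft2(tgt)
    corr = np.fft.ifft2(R * np.conj(T))
    return np.abs(corr).max()

def estimate_depth(ref, reconstructions):
    """Return the reconstruction depth whose target image correlates
    best with the reference image.

    reconstructions : dict mapping depth -> reconstructed target image
    """
    peaks = {z: correlation_peak(ref, img)
             for z, img in reconstructions.items()}
    return max(peaks, key=peaks.get)
```

In the CIIR method this whole search runs in software, which is exactly the computational burden the optical approach in Section III removes.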
While the method works, CIIR is computationally slow and requires substantial computing resources. Therefore, if real-time processing is required, an alternative method may be superior.
III. PROPOSED 3-D CORRELATION METHOD BASED ON OPTICAL INTEGRAL IMAGING RECONSTRUCTION
Fig. 2 illustrates our proposed 3-D object correlation method. In the conventional CIIR-type 3-D object correlation method presented above, 3-D images are reconstructed by computer simulation. In our proposed method, an optical display system reconstructs the 3-D images. This all-optical structure allows us to implement automatic target recognition in real time.
Fig. 2. Structure of the proposed 3-D object correlation method. (a) Storing of reconstructed 3-D image for reference object, (b) reconstruction of target object at all depth planes.
Our system has three main parts: pickup, display, and recognition. For pickup, let us assume that the reference object has a known position (xr, yr, zr), at a distance of zr from the lenslet array, and that the target object at (xo, yo, zo) stands at an unknown (or arbitrary) distance, zo, from the lenslet array. Elemental images for the reference and target objects are obtained by the first image sensor (Sensor 1) by way of the lenslet array.
In the display part of the process, 3-D image reconstruction begins when these elemental images are reproduced on a liquid crystal display (LCD). Light illuminating the LCD passes through the lenslet array, generating a 3-D optical image for capture by the second image sensor (Sensor 2) at the speed of light. This design produces a system that is much faster than those used to implement the conventional CIIR-type correlation method. The optically reconstructed 3-D images have relatively high lateral and depth resolution, since all elemental images are superimposed by optical magnification.
For the reference object, Sensor 2 records the reconstructed 3-D image at the known distance zr. By locating this sensor at the focal plane, we can obtain a high-quality image. We call the reconstructed 3-D image of the reference object at this known depth R(xr, yr, zr). This image is stored in the computer for use as the reference image.
To record the reconstructed image of the target object, Sensor 2 is positioned at zr, the identical position from which the reference image was recorded, as shown in Fig. 2(a). Recall that, in the pickup stage, the target object was located at the zo-plane. Images reconstructed at the other reconstruction depths are noticeably out of focus or blurred; only when zo = zr is the best, clearly reconstructed 3-D image obtained. This is a main feature of our 3-D object correlation method. Let O(xo, yo, zr) be the reconstructed 3-D image of the target object as captured by Sensor 2 at zr.
Recognition proceeds by means of a correlation process comparing O(xo, yo, zr) and R(xr, yr, zr), the 3-D target and reference images reconstructed at a distance of zr from the lenslet array. Correlation results are obtained by means of the following kth-law nonlinear correlation filter:

c(x, y) = IFT{ |R(u, v)|^k |O(u, v)|^k exp[ j(ϕO(u, v) − ϕR(u, v)) ] },   (1)

where R(u, v) and ϕR are the Fourier transform and phase of the reconstructed reference image, R(xr, yr, zr), at the distance zr; O(u, v) and ϕO are the Fourier transform and phase of the reconstructed target image, O(xo, yo, zr), at the same distance; IFT denotes the inverse Fourier transform; and k is the nonlinearity parameter for the correlation filter. When k = 0, Eq. (1) becomes a phase-only filter (POF). When k = 1, Eq. (1) becomes a linear correlation filter.
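Eq. (1) can be prototyped digitally with FFTs. The sketch below is a digital stand-in for the optical correlation stage (the function name is illustrative): it raises both spectral magnitudes to the kth power, keeps the phase difference, and transforms back.

```python
import numpy as np

def kth_law_correlation(ref, tgt, k):
    """Digital sketch of the kth-law nonlinear correlation of Eq. (1).
    k = 1 reduces to linear correlation; k = 0 to phase-only filtering."""
    R = np.fft.fft2(ref)
    O = np.fft.fft2(tgt)
    # |R|^k |O|^k exp[j(phi_O - phi_R)], then back to the output plane.
    spec = (np.abs(R) ** k) * (np.abs(O) ** k) \
        * np.exp(1j * (np.angle(O) - np.angle(R)))
    return np.abs(np.fft.ifft2(spec))
```

Intermediate values of k (e.g. 0.5 or 0.7, as used in the experiments) trade off the sharp, noise-sensitive peaks of the POF against the broad peaks of linear correlation.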
IV. EXPERIMENTAL RESULTS
We carried out optical experiments to assess our proposed 3-D object recognition method. A ‘die’, shown in Fig. 3(a), was used as both the reference and the target object. Its size was 2 cm × 2 cm × 2 cm (W × H × D). When serving as the reference object, the ‘die’ was positioned at (xr, yr, zr) = (0 cm, 0 cm, 6 cm). The experimental pickup setup is depicted in Fig. 3(b). Here, we used a 30 × 30 lenslet array. Each lenslet was square, measuring 1.08 mm × 1.08 mm, with a focal length of 3 mm. The total resolution of the elemental images was 900 × 900 pixels (H × V). Each elemental image had 30 × 30 pixels, as the pixel pitch of the LCD panel shown in Fig. 3(c) was 36 μm.
Fig. 3. (a) 3-D object, (b) optical pickup setup, and (c) optical display setup.
A set of recorded elemental images is shown in Fig. 4(a). We used a paper screen and a digital camera to form Sensor 2. The paper screen was placed zr = 6 cm from the lenslet array, and the digital camera captured the images formed on it. An example of the images captured in this manner is shown in Fig. 4(b); this reconstructed 3-D reference image was taken at zr = 6 cm.
Fig. 4. Reference 3-D object. (a) Elemental images, (b) reconstructed 3-D image at z = 6 cm.
When used as a target object, the ‘die’ was shifted along the z-axis or rotated by an arbitrary angle about the y-axis. Elemental images for each case were recorded using the optical pickup apparatus shown in Fig. 3(b). Then, with a set of elemental images for each case, reconstructed 3-D images at various reconstruction depths were obtained using the optical display system shown in Fig. 3(c). Examples of reconstructed 3-D images at different reconstruction depths are shown in Fig. 5(a). The figure clearly shows that the 3-D image reconstructed at z = 6 cm exhibits the sharpest focus. Fig. 5(b) provides examples of the impact of target rotation angle on the reconstructed 3-D image. The figures demonstrate that reconstructed 3-D images of target objects depend on object distance and rotation angle.
Fig. 5. Reconstructed 3-D images with (a) various reconstruction depths, (b) various rotation angles.
Finally, we present the cross-correlation between the reference image and the reconstructed 3-D images of the target object at various depths and rotation angles. We computed correlation using the kth-law nonlinear correlation filter described by Eq. (1). Correlation results with k = 0.5, k = 0.7, and k = 1 are shown in Fig. 6. The highest peaks (Fig. 6(a), (c), and (e)) correspond to z = 6 cm; in this case, the reconstruction plane matches the z-plane in which the target object was located during pickup. Lower peaks were observed when the reconstruction plane and the pickup z-plane did not match. Fig. 6(b), (d), and (f) display correlation results for z = 10 cm.
Fig. 6. kth-law nonlinear correlation results at z = 6 cm when (a) k = 1, (c) k = 0.7, (e) k = 0.5, and at z = 10 cm when (b) k = 1, (d) k = 0.7, (f) k = 0.5.
We repeated the analysis for the other reconstructions of the target object shown in Fig. 5. Applying the correlation test to the reference image and each reconstructed target image produced the results shown in Fig. 7. Thus, we conclude that our proposed method is robust when the reconstruction plane shifts along the z-axis and when the target object is rotated.
Fig. 7. Experimental results of nonlinear correlation under (a) various reconstruction depths, (b) various rotation angles.
In this paper, we presented a 3-D automatic target recognition system based on integral imaging that uses an all-optical structure to provide real-time processing. Fig. 8 shows a practical application of our method. The two main processing stages in Fig. 8 may be combined into one imaging system using an auto-collimating screen. In this system, the reconstructed 3-D images are generated at the speed of light and displayed on a spatial light modulator (SLM), which converts them into coherent images. A joint transform correlator (JTC) could then serve as the correlation filter. This design may yield a real-time, compact 3-D object recognition system.
Fig. 8. Practical system concept of real-time 3-D automatic target recognition using our proposed method.
In future work, we will investigate practical real-time systems that use our proposed method.
This research was supported in part by the Basic Science Research Program through the National Research Foundation of Korea (NRF), which is funded by the Ministry of Education (NRF-2013R1A1A2057549 and NRF-2015R1A-2A1A16074936).
received a B.S. degree in telecommunication engineering from Pukyong National University, Busan, Korea, in 1996, and M.S. and Ph.D. degrees from Kyushu Institute of Technology, Fukuoka, Japan, in 2000 and 2003, respectively. He is an assistant professor at Kyushu Institute of Technology in Japan. His research interests include medical imaging, blood flow analysis, 3-D display, 3-D integral imaging and 3-D biomedical imaging.
received a B.S. degree from Kyushu Institute of Technology, Fukuoka, Japan in 2015. He is a master’s student at Kyushu Institute of Technology. His research interests include visual feedback control, 3-D display, and 3-D integral imaging.
earned B.S. and M.S. degrees in telecommunication engineering from Pukyong National University, Pusan, Korea, in 2003 and 2005, and M.S. and Ph.D. degrees in electrical and computer engineering from the University of Connecticut, Storrs, CT, USA, in 2010 and 2011, respectively. He is an assistant professor at Hankyong National University in Korea. He worked as a researcher at Samsung Electronics in Korea from 2005 to 2007. His research interests include 3D displays, 3D signal processing, 3D biomedical imaging, 3D photon counting imaging, 3D information security, 3D object tracking, and 3D underwater imaging.
REFERENCES

[1] “Real-time three-dimensional object recognition with multiple perspectives imaging,” DOI: 10.1364/AO.40.003318
[2] “Digital three-dimensional image correlation by use of computer-reconstructed integral imaging,” DOI: 10.1364/AO.41.005488
[3] Park J. H., “Three-dimensional optical correlator using a sub-image array,” DOI: 10.1364/OPEX.13.005116
[4] Park J. S., Hwang D. C., Shin D. H., Kim E. S., “Resolution-enhanced three-dimensional image correlator using computationally reconstructed integral images,” DOI: 10.1016/j.optcom.2007.04.007
[5] Lee J. J., Lee B. G., “Orthoscopic integral imaging display by use of the computational method based on lenslet model,” Optics and Lasers in Engineering, DOI: 10.1016/j.optlaseng.2013.06.012
[6] Shin D. H., “Image quality enhancement in 3D computational integral imaging by use of interpolation methods,” DOI: 10.1364/OE.15.012039
[7] “Fast computational integral imaging reconstruction method using a fractional delay filter,” Japanese Journal of Applied Physics, article ID 022503
[8] Jang J. Y., Kim E. S., “3D image reconstruction with controllable spatial filtering based on correlation of multiple periodic functions in computational integral imaging,” Chinese Optics Letters, article ID 031101
[9] Jang J. Y., “Fast computational integral imaging reconstruction by combined use of spatial filtering and rearrangement of elemental image pixels,” Optics and Lasers in Engineering, DOI: 10.1016/j.optlaseng.2015.06.007
[10] “La photographie integrale,” Académie des Sciences
[11] “Three-dimensional video system based on integral photography,” DOI: 10.1117/1.602152
[12] Jang J. S., “Improved viewing resolution of three-dimensional integral imaging by use of nonstationary micro-optics,” DOI: 10.1364/OL.27.000324
[13] Jung S. Y., Min S. W., Park J. H., “Three-dimensional display by use of integral photography with dynamically variable image planes,” DOI: 10.1364/OL.26.001481
[14] “Multifacet structure of observed reconstructed integral images,” Journal of the Optical Society of America A, DOI: 10.1364/JOSAA.22.000597
[15] “3D passive photon counting automatic target recognition using advanced correlation filters,” DOI: 10.1364/OL.36.000861
[16] “Three-dimensional imaging system: a new development,” DOI: 10.1364/AO.27.004520