Three-Dimensional Imaging and Display through Integral Photography
Journal of Information and Communication Convergence Engineering, 2014 Jun; 12(2): 89-96
Copyright © 2014, The Korea Institute of Information and Communication Engineering
This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
  • Received : April 10, 2014
  • Accepted : May 23, 2014
  • Published : June 30, 2014
About the Authors
Héctor Navarro
Adrián Dorado
Genaro Saavedra
Manuel Martínez Corral
manuel.martinez@uv.es

Abstract
Here, we present a review of the proposals and advances made in the field of three-dimensional (3D) imaging acquisition and display over the last century. The most popular techniques are based on the concept of stereoscopy. However, stereoscopy does not provide a real 3D experience, and it produces discomfort due to the conflict between convergence and accommodation. For this reason, we focus this paper on integral imaging, a technique that permits the encoding of 3D information in an array of 2D images obtained from different perspectives. When this array of elemental images is placed in front of an array of microlenses, the perspectives are integrated, producing 3D images with full parallax that are free of the convergence-accommodation conflict. In this paper, we describe the principles of this technique, together with some new applications of integral imaging.
I. INTRODUCTION
Nowadays, there is a trend of migrating from two-dimensional (2D) display systems to 3D ones. This migration, however, has been slow because commercially available 3D display technologies are not able to stimulate all the mechanisms involved in the real observation of 3D scenes. The human visual system needs a set of physical and psychophysical cues to perceive the world as 3D. Among the psychophysical cues, we can list the linear perspective rule (two parallel lines converge at a far point), occlusions (occluded objects are farther away than occluding ones), movement parallax (when the observer moves, close objects are displaced faster than far ones), shadows, and textures.
Among the physical cues are accommodation (the capacity of the eye lens to tune its optical power), convergence (the rotation of the visual axes to converge at the object point), and binocular disparity (the horizontal shift between the retinal images of the same object). Remarkably, the closer the object is to the observer, the stronger are the accommodation, convergence, and binocular disparity. The brain makes use of these physical cues to obtain information about the depth position of different parts of a 3D scene.
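To give a simple worked relation (our notation, standard in the vision literature, not from the original text): for an interocular baseline $b$ and two scene points at depths $Z_1$ and $Z_2$, the relative binocular disparity is, in the small-angle approximation,

\[
\delta \approx b\left(\frac{1}{Z_1}-\frac{1}{Z_2}\right),
\]

which shows why disparity, like the convergence and accommodation demands, grows rapidly as objects approach the observer.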
Note that while psychophysical cues can be easily simulated in a 2D display, physical cues are difficult to simulate and are indeed not generated in currently available commercial 3D displays.
The aim of this review is to analyze a 3D imaging and display architecture that at present is still far from the stage of commercial implementation but has the potential to successfully simulate all the psychophysical and physical 3D cues.
II. STEREOSCOPIC AND AUTOSTEREOSCOPIC TECHNIQUES
At present, there are many techniques for the display of 3D images. Usually, the term 3D display is used to refer to two different visualization situations: the stereoscopic display and the real 3D display. Stereoscopic systems provide the user with two different images of the 3D scene, obtained from different but close perspectives. The images are shown independently to the eyes of the observer so that they produce binocular disparity, which provides the brain with the information that allows it to estimate the depth content of the scene. Stereoscopy has conventionally been implemented through special glasses that transmit to each eye its corresponding image and block the other one. This effect has been produced with anaglyph glasses [1] , polarizing glasses [2] , and more recently, shutter glasses based on polarization [3 , 4] . It is also possible to implement stereoscopy without special glasses. Systems that do this are known as autostereoscopic systems. Autostereoscopy can be implemented by means of lenticular sheets [5] or parallax barriers [6 , 7] . However, the main drawback of stereoscopy is that it produces ocular fatigue due to the conflict between convergence and accommodation. This conflict occurs because accommodation is fixed at the screen where the two perspectives are projected, whereas the visual axes intersect at the distance where the scene is reconstructed.
The main difference between stereoscopic and real 3D displays is that the latter present different perspectives as the observer moves parallel to the display. Among the real 3D displays, we can find multi-view systems based on lenticular sheets or parallax barriers [8 - 10] , volumetric systems [11] , holographic systems [12] , and integral photography systems [13 - 15] . From a conceptual point of view, holography, which can render the wave field reflected by an object, is the technique that provides the best 3D experience and does not produce visual fatigue. However, since holography is based on the interference of coherent wavefronts, it is still far from being efficiently applicable to mass 3D display media.
Multi-view systems, based either on parallax barriers or on lenticular sheets, provide the observer with multiple stereoscopic views of a 3D scene. These systems can provide different stereoscopic views to different observers, but they have the drawback of flipping, or double images, when the observer moves parallel to the system. Multi-view systems do not provide images with vertical parallax, and more importantly, the users still suffer from the consequences of the convergence-accommodation conflict.
Integral imaging is, together with holography, the only technique that can stimulate both the physical and the psychophysical mechanisms of 3D vision. Integral imaging systems reconstruct any point of the 3D scene through the intersection of many rays, providing the observer with full-parallax images and avoiding the conflict between the mechanisms of convergence and accommodation [16 , 17] . The main advantage of integral imaging nowadays is that it can be implemented with the available 2D imaging and display technology, such as charge-coupled device (CCD) or CMOS sensors and LED or LCD displays. In any case, there are still some physical limitations, such as the limited viewing angle, and technological limitations, such as the need for higher-resolution pixelated monitors and a wider bandwidth for the transmission of integral images, that still prevent the rapid commercial spread of this technique. However, it is reasonable to expect that these limitations will be overcome in the next few years.
III. INTEGRAL IMAGING
On March 2, 1908, Lippmann presented to the French Academy of Sciences his work ‘Épreuves réversibles. Photographies intégrales’ [13] , which postulated the possibility of capturing the 3D information of an object on a photographic film. He proposed the use of a transparent sheet of celluloid on whose front surface a large number of small notches with circular relief would be recorded, intended to serve as lenses. On the other side of the sheet, he proposed to mold a series of spherical diopters coated with a photographic emulsion. Each of these spherical diopters should be adapted to receive the image provided by the corresponding lens of the opposite face (see Fig. 1 ). Henceforth, we will refer to each of these images as elemental images.
Fig. 1. Scheme of the molded sheet proposed by Lippmann.
During the capturing process, each lens forms an image of a slightly different perspective of the 3D scene on the photographic emulsion. In the display process, the positive of the developed image is pasted onto the face where the photographic emulsion was applied and is illuminated through a diffuser. Then, any point on the positive can generate a beam of parallel rays. As a result, a 3D scene is reconstructed by the intersection of these beams and can be observed within a range of angles (see Fig. 2 ).
Fig. 2. Reconstruction process as proposed by Lippmann.
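To make the capture geometry concrete, the following minimal sketch (ours, with an ideal pinhole standing in for each microlens; all names and parameter values are assumptions) projects a set of 3D points through a pinhole array onto the film plane, producing one elemental image per pinhole:

    # Sketch of elemental-image capture through a pinhole array (illustrative).
    PITCH = 1.0   # pinhole spacing in mm (assumed)
    GAP = 3.0     # pinhole-to-film distance in mm (assumed)
    N = 5         # pinholes per side (assumed)

    def elemental_images(points):
        """Project 3D points (x, y, z), with z > 0 in front of the array,
        through every pinhole onto the film plane behind it."""
        images = {}
        for i in range(N):
            for j in range(N):
                # Center of pinhole (i, j) in the array plane z = 0.
                cx, cy = (i - N // 2) * PITCH, (j - N // 2) * PITCH
                img = []
                for x, y, z in points:
                    # Central projection through the pinhole at (cx, cy, 0):
                    # the ray continues to the film plane at z = -GAP.
                    u = cx + (cx - x) * GAP / z
                    v = cy + (cy - y) * GAP / z
                    img.append((u, v))
                images[(i, j)] = img
        return images

    # Each elemental image records the same points from a slightly
    # different perspective.
    print(elemental_images([(0.0, 0.0, 20.0), (0.5, 0.0, 40.0)])[(2, 2)])

In the display stage, the same array reverses the paths of these rays, and their intersection reconstructs the scene points in front of the sheet.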
Despite the simplicity of Lippmann’s concept, its experimental implementation faced numerous technical difficulties. Experimental tests performed with a thermally molded film produced poor results. In 1912, Lippmann [15] conducted a new experiment with 12 glass rods mounted in a rectangular matrix. In this experiment, he proved the existence of a 3D image that could be seen from different perspectives and whose angular size could be changed to zoom in or out.
Unfortunately, the technology for creating an array of small lenses was complex; therefore, instead of using microlens arrays (MLAs), the experiments and investigations conducted at that time and in the following years used pinhole arrays. However, pinholes have an important problem: to produce sharp images, they need a small aperture, and thus, the light that passes through them is not sufficient to achieve an acceptable exposure time. In 1911, Sokolov [18] described and validated experimentally the integral photography method proposed by Lippmann. He built a pinhole array by piercing equidistant, small cone-shaped holes in a sheet and applying a photographic emulsion to its surface ( Fig. 3 ).
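The sharpness-exposure tradeoff just mentioned can be quantified with a standard pinhole-camera argument (ours, not Sokolov's): for a pinhole of diameter $d$ at a distance $g$ from the film, the geometric blur is of order $d$, while the diffraction blur is of order $\lambda g / d$; the two balance at

\[
d \approx \sqrt{\lambda g},
\]

up to a constant of order unity. For visible light and a gap of a few millimeters, this gives apertures of only a few tens of micrometers, and correspondingly long exposure times.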
Fig. 3. Recording of the light proceeding from a point source on a photographic film through a pinhole array with cone-shaped holes.
In the reconstruction process (see Fig. 4 ), the recorded image replaced the photographic film. Another drawback of the pinhole array arises during reconstruction: instead of a continuous image, the viewer sees a series of discrete points [19] .
Fig. 4. 3D image reconstruction using a pinhole array.
One of the main problems of integral imaging is pseudoscopy: the projected images are reversed in depth. Sokolov [18] showed this problem in one of his figures, which is similar to Fig. 4 . A viewer in front of the pinhole array sees point S closer than the other point, but a viewer observing the 3D scene from the other side of the pinhole plane perceives point S to be farther than the other point. In 1931, Ives [20] analyzed this pseudoscopy problem in multi-perspective systems and proposed to capture, with a second integral imaging system, the image reconstructed by a first integral imaging system. This produces a double depth inversion that solves the problem.
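A common computational counterpart of this double-inversion remedy (a minimal sketch of ours, not Ives' 1931 optical procedure) is to rotate every elemental image by 180° about its own center before display:

    # Pseudoscopic-to-orthoscopic conversion by rotating each elemental
    # image 180 degrees about its center (illustrative sketch).
    import numpy as np

    def rotate_elemental_images(integral_image, lenses_x, lenses_y):
        """integral_image: 2D array tiled as (lenses_y * h, lenses_x * w)."""
        h = integral_image.shape[0] // lenses_y
        w = integral_image.shape[1] // lenses_x
        out = integral_image.copy()
        for i in range(lenses_y):
            for j in range(lenses_x):
                tile = integral_image[i*h:(i+1)*h, j*w:(j+1)*w]
                out[i*h:(i+1)*h, j*w:(j+1)*w] = tile[::-1, ::-1]  # 180° turn
        return out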
In addition, Ives [21] was the first to propose the use of a large-diameter field lens to form the image of an object through a parallax-barrier plate in order to obtain multi-perspective images (see Fig. 5 ). Later, in 1936, Coffey [22] proposed to combine the systems developed by Ives and Lippmann. He used a molded sheet with a photographic emulsion, similar to the one designed by Lippmann. To avoid overlap between the elemental images, Coffey [22] proposed matching the effective apertures of the field lens and the microlenses (see Fig. 6 ).
Fig. 5. Large-diameter lens projecting an image onto the photographic plate through a parallax barrier.
Fig. 6. Procedure for matching the effective numerical aperture of the main lens with that of the microlenses.
The first commercial integral imaging camera was patented by Gruetzner [23] in 1955. In the patent, he reported a new method for recording a spherical-lens pattern on a photographic film covered on one side with a light-sensitive emulsion.
Between the late 1960s and the 1980s, interest in integral imaging increased. During this period, various theoretical analyses were made and experimental systems were implemented. The most notable researchers of this time were De Montebello [24 - 27] , Burckhardt [28 - 30] , Dudley [31 , 32] , Okoshi [33 - 35] , and Dudnikov [36 - 41] . ‘MDH Products’, De Montebello’s company, was the first to commercialize integral photographs for the general public. The first virtual integral images were produced by Chutjian and Collier [42] in 1968 while they worked at Bell Laboratories. These virtual images were obtained by calculating an integral image of computer-generated objects. The objects were computed with inverted relief so as to produce a double depth inversion and thereby obtain orthoscopic images.
From 1988 until the last decade, the group of Davies and McCormick [43 - 50] was the most active in the integral imaging area; they published numerous studies and filed a number of patents. In 1991, Adelson and Bergen [51] introduced the concept of the plenoptic function, which describes the radiance of every light ray in space as a function of angle, wavelength, time, and position. This function contains any parameter that is susceptible to being captured by an optical device, and it is closely related to what Gibson [52] called ‘the ambient light structure’.
The plenoptic function is a 5D function that, in an environment free of occlusions and light absorption, can be reduced to a 4D function. The first plenoptic camera was proposed in 1992 by Adelson and Wang [53] . In fact, they used the design proposed by Coffey [22] but formulated it in terms of the plenoptic function.
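In symbols (our notation; the two-plane parameterization is the one later used by Levoy and Hanrahan [54]): dropping wavelength and time, the plenoptic function gives the radiance along every ray,

\[
P = P(x, y, z, \theta, \phi),
\]

and since radiance is constant along a ray in free space, each ray can be indexed by its crossings $(u, v)$ and $(s, t)$ with two parallel reference planes, which yields the 4D light field $L(u, v, s, t)$.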
Because of the possibility of capturing and reproducing integral images using digital media, the computer graphics community became interested in the concept of integral imaging. In 1996, Levoy and Hanrahan [54] renamed the 4D plenoptic function the ‘light field’, and Gortler et al. [55] used the term ‘lumigraph’ to describe the same function. In both cases, the researchers proposed using a single digital camera moved to different positions to capture different perspectives of the object.
In 1997, Okano et al. [56] captured, for the first time, integral images at real-time video rates. Instead of using the usual configuration, they used a high-resolution television camera to capture the different images formed behind an MLA. In 2002, Javidi and Okano [57] added transmission and real-time visualization. In particular, they proposed the use of a multi-camera system arranged in a matrix for the capture, and an MLA placed in front of a high-resolution screen for the visualization.
The information recorded with an integral imaging system has more uses than just the optical reconstruction of 3D scenes; it is also possible to perform computational reconstruction with different applications. One such application was proposed by Levoy and Hanrahan [54] and Gortler et al. [55] . They synthesized new views of a 3D scene from a discrete set of digital pictures captured from different perspectives. These views can be orthographic or perspective. By using the different perspectives captured with an integral imaging system, we can simulate the image captured by a camera whose main lens has a much larger diameter than the microlenses or cameras that sampled the plenoptic function. The depth of field of these images is smaller than that of each of the elemental images. It is also possible to obtain images of the scene focused at different depths, on planes perpendicular to the optical axis of the synthetic aperture [58] or on planes with arbitrary inclinations with respect to this axis [59] .
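Such computational refocusing is usually implemented as shift-and-sum. The following minimal sketch (ours; function and parameter names are assumptions, not taken from the cited works) averages the perspective views after shifting each one in proportion to its position in the capture grid, so that points on the chosen depth plane align and stay sharp while the rest blur:

    # Shift-and-sum synthetic-aperture refocusing (illustrative sketch).
    import numpy as np

    def refocus(views, shift_per_view):
        """views: dict mapping grid position (i, j) -> 2D image array.
        shift_per_view: pixel shift between adjacent views, which selects
        the depth of the refocused plane."""
        acc = None
        for (i, j), img in views.items():
            # Shift each view in proportion to its grid position, then
            # average: points on the selected plane align and stay sharp,
            # points at other depths are spread out and blur.
            shifted = np.roll(img, (int(i * shift_per_view),
                                    int(j * shift_per_view)), axis=(0, 1))
            acc = shifted.astype(float) if acc is None else acc + shifted
        return acc / len(views)

Sweeping shift_per_view over a range of values produces a focal stack of the scene at different depths.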
In 2005, the first portable plenoptic camera was built by Ng et al. [60] on the basis of the one proposed by Adelson and Wang [53] . The key feature of this camera was the ability to computationally refocus the captured photographs after they were taken. In order to overcome the low resolution of this system, Lumsdaine and Georgiev [61] implemented a new design named the ‘focused plenoptic camera’. They proposed to place the MLA in front of, or behind, the image plane. In such a case, the spatial resolution is increased at the expense of the angular resolution. In any case, independently of the configuration used, a higher number of microlenses and a higher number of pixels behind each microlens will produce a higher spatial and angular resolution, as the worked numbers below illustrate.
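As an illustrative count (our numbers, chosen only for the arithmetic): in the conventional plenoptic design, a 4000 × 3000-pixel sensor behind a 200 × 150 array of microlenses delivers 200 × 150 spatial samples, each with 20 × 20 angular samples, since each microlens covers 20 × 20 pixels. A focused plenoptic camera reallocates this same fixed pixel budget, trading some of the 20 × 20 angular samples for finer spatial sampling.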
Miniaturization and integration of the MLA with the light sensor chip represent the future of integral imaging. In 2008, Fife et al. [62] designed the first integrated plenoptic sensor. Instead of taking the usual approach of combining a sensor and a microlens array, they designed and built a sensor consisting of a 166 × 76 array of small groups of 16 × 16 pixels. Over each of these groups, and separated from it by a dielectric film, a small lens was placed to focus the light onto the group of pixels behind it.
The interest in and the number of publications on the topic of integral imaging have increased exponentially during the last decade. The two main research topics have been overcoming the physical limitations and searching for new applications of integral imaging systems. Regarding the physical limitations, several researchers have proposed solutions to the pseudoscopy problem [63 - 66] , to the uncertainty in the position and angle of the microlenses and the elemental images with respect to the sensor [67 - 70] , and to the limitation of the viewing angle [71 - 74] . Solutions to the limited depth of field [75 - 77] and to the detrimental effect of facet braiding [78 , 79] have also been proposed.
On the other hand, new applications of integral imaging have arisen. Some examples are the visualization of 3D content and TV systems based on integral imaging [80 - 82] and the automatic recognition of 3D objects [83 - 86] . Other interesting applications are the 3D imaging and processing of poorly illuminated scenes based on multi-perspective photon counting [87 - 91] , the 3D imaging and pattern recognition of scenes that present partial occlusions or are immersed in scattering environments [92 , 93] , and single-shot 3D microscopy [94 - 99] .
IV. CONCLUSIONS
We have presented a review of the advances in the integral imaging technique. We have gone over more than a century of the history of 3D imaging and found that this technique constitutes the most promising approach to the problem of showing 3D images in color to mass audiences. In addition, we have shown that this technique has many technological applications beyond 3D display.
Acknowledgements
This research was supported in part by the Ministerio de Economia y Competitividad (MINECO), Spain, under Grant DPI2012-32994, and by the Generalitat Valenciana under Grant PROMETEO2009-077. H. Navarro acknowledges a predoctoral grant from the Generalitat Valenciana. A. Dorado acknowledges a predoctoral grant from the MINECO.
BIO
Manuel Martinez-Corral was born in Spain in 1962. He received his M.Sc. and Ph.D. in Physics from the University of Valencia, Spain, in 1988 and 1993, respectively. He is currently Full Professor of Optics at the University of Valencia, where he is with the “3D Imaging and Display Laboratory.” Since 2010, he has been a Fellow of SPIE. His research interests include the scalar and vector properties of tightly focused light fields, resolution procedures in 3D scanning microscopy, and 3D imaging and display technologies. He has published over seventy technical articles in major journals and delivered over twenty-five invited and keynote presentations at international meetings. He has been a member of the scientific committee of over fifteen international meetings. He is Topical Editor of the IEEE/OSA Journal of Display Technology and of the Springer journal 3D Research. He is Associate Editor of JICCE.
Genaro Saavedra was born in Spain in 1967. He received his B.Sc. and Ph.D. in Physics from the University of Valencia, Spain, in 1990 and 1996, respectively. He is currently Full Professor at the same university, where he co-leads the “3D Imaging and Display Laboratory.” His current research interests are optical diffraction, integral imaging, 3D high-resolution optical microscopy, and phase-space representation of scalar optical fields. He has published about fifty technical articles on these topics in major journals and three chapters in scientific books. He has published over fifty conference proceedings, including 10 invited presentations. He is Topical Editor of JICCE.
Adrián Dorado was born in Spain in 1988. He received his B.Sc. and M.Sc. in Physics from the University of Valencia, Spain, in 2011 and 2012, respectively. Since 2010, he has been with the 3D Imaging and Display Laboratory, Optics Department, University of Valencia. His research interests include 3D imaging acquisition and display.
Hector Navarro received his B.Sc. and M.Sc. in Physics from the University of Valencia, Spain, in 2008 and 2009, respectively. Since 2007, he has been with the “3D Imaging and Display Laboratory” at the Optics Department of the University of Valencia. He has been a member of SPIE since 2010. His research interests include the focusing properties of light and 3D imaging acquisition and display. He has published 11 articles in major journals and authored 18 communications at prestigious physics conferences.
References
Rollmann W. 1853 “Notiz zur Stereoskopie,” Annalen der Physik 165 (6) 350 - 351    DOI : 10.1002/andp.18531650614
Land E. H. 1937 “Polarizing optical system,”
Byatt D. W. 1981 “Stereoscopic television system,”
Bos P. J. , Koehler/Beran K. R. 1984 “The pi-cell: a fast liquid-crystal optical-switching device,” Molecular Crystals and Liquid Crystals 113 (1) 329 - 339    DOI : 10.1080/00268948408071693
Hess W. 1915 “Stereoscopic picture,”
Berthier A. 1896 “Images stéréoscopiques de grand format,” Le Cosmos (34) 205 - 210
Ives F. E. 1902 “A novel stereogram,” Journal of the Franklin Institute 153 (1) 51 - 52    DOI : 10.1016/S0016-0032(02)90195-X
Julesz B. 1963 “Stereopsis and binocular rivalry of contours,” Journal of the Optical Society of America 53 (8) 994 - 998    DOI : 10.1364/JOSA.53.000994
Kanolt C. W. 1918 “Photographic method and apparatus,”
Imai H. , Imai M. , Ogura Y. , Kubota K. 1996 “Eye-position tracking stereoscopic display using image-shifting optics,” Proceedings of SPIE 2653 49 - 55
Blundell B. G. , Schwarz A. J. 2000 Volumetric Three-Dimensional Display Systems John Wiley & Sons New York, NY
Gabor D. 1948 “A new microscopic principle,” Nature 161 (4098) 777 - 778    DOI : 10.1038/161777a0
Lippmann G. 1908 “Épreuves réversibles. Photographies intégrales,” Comptes Rendus de l'Académie des Sciences 146 446 - 451
Lippmann G. 1908 “Epreuves reversibles donnant la sensation du relief,” Journal de Physique Théorique et Appliquée 7 (1) 821 - 825    DOI : 10.1051/jphystap:019080070082100
Lippmann G. 1912 “L'étalon international de radium,” Radium (Paris) 9 (4) 169 - 170    DOI : 10.1051/radium:0191200904016901
Kim Y. 2012 “Accommodative response of integral imaging in near distance,” Journal of Display Technology 8 (2) 70 - 78    DOI : 10.1109/JDT.2011.2163701
Hiura H. , Yano S. , Mishina T. , Arai J. , Hisatomi K. , Iwadate Y. , Ito T. 2013 “A study on accommodation response and depth perception in viewing integral photography,” in Proceedings of the 5th International Conference on 3D Systems and Applications (3DSA)
Sokolov A. P. 1911 Autostereoscopy and integral photography by Professor Lippmann’s method Izd-vo MGU (Moscow State University) Moskva
Martinez-Corral M. , Martinez-Cuenca R. , Saavedra G. , Navarro H. , Pons A. , Javidi B. 2008 “Progresses in 3D integral imaging with optical processing,” Journal of Physics: Conference Series 139 (1) 012012 -    DOI : 10.1088/1742-6596/139/1/012012
Ives H. E. 1931 “Optical properties of a Lippman lenticulated sheet,” Journal of the Optical Society of America 21 (3) 171 - 176    DOI : 10.1364/JOSA.21.000171
Ives H. E. 1930 “Parallax panoramagrams made with a large diameter lens,” Journal of the Optical Society of America 20 (6) 332 - 340    DOI : 10.1364/JOSA.20.000332
Coffey D. F. W. 1936 “Apparatus for making a composite stereograph,”
Gruetzner J. T. 1955 “Means for obtaining three-dimensional photography,”
De Montebello R. L. 1970 “Integral photography,”
De Montebello R. L. 1971 “Process of making reinforced lenticular sheet,”
De Montebello R. L. 1977 “Wide-angle integral photography: the integram system,” Proceedings of the SPIE 120 73 - 91
Buck H. S. , de Montebello R. L. , Globus R. P. 1988 “Integral photography apparatus and method of forming same,”
Burckhardt C. B. , Collier R. J. , Doherty E. T. 1968 “Formation and inversion of pseudoscopic images,” Applied Optics 7 (4) 627 - 631    DOI : 10.1364/AO.7.000627
Burckhardt C. B. 1968 “Optimum parameters and resolution limitation of integral photography,” Journal of the Optical Society of America 58 (1) 71 - 74    DOI : 10.1364/JOSA.58.000071
Burckhardt C. B. , Doherty E. T. 1969 “Beaded plate recording of integral photographs,” Applied Optics 8 (11) 2329 - 2331    DOI : 10.1364/AO.8.002329
Dudley L. P. 1971 “Integral photography,”
Dudley L. P. 1972 “Methods of integral photography,”
Okoshi T. 1971 “Optimum design and depth resolution of lens-sheet and projection-type three-dimensional displays,” Applied Optics 10 (10) 2284 - 2291    DOI : 10.1364/AO.10.002284
Okoshi T. , Yano A. , Fukumori Y. 1971 “Curved triple-mirror screen for projection-type three-dimensional display,” Applied Optics 10 (3) 482 - 489    DOI : 10.1364/AO.10.000482
Okoshi T. 1976 Three-Dimensional Imaging Techniques Academic Press New York, NY
Dudnikov Y. A. 1970 “Autostereoscopy and integral photography,” Optical Technology 37 (7) 422 - 426
Dudnikov Y. A. 1971 “Elimination of pseudoscopy in integral photography,” Optical Technology 38 (3) 140 - 143
Dudnikov Y. A. 1974 “Effect of three-dimensional moiré in integral photography,” Soviet Journal of Optical Technology 41 (5) 260 - 262
Dudnikov Y. A. , Rozhkov B. K. 1978 “Selecting the parameters of the lens-array photographing system in integral photography,” Soviet Journal of Optical Technology 45 (6) 349 - 351
Dudnikov Y. A. , Rozhkov B. K. 1979 “Limiting capabilities of photographing various subjects by the integral photography method,” Soviet Journal of Optical Technology 46 (12) 736 - 738
Dudnikov Y. A. , Rozhkov B. K. , Antipova E. N. 1980 “Obtaining a portrait of a person by the integral photography method,” Soviet Journal of Optical Technology 47 (9) 562 - 563
Chutjian A. , Collier R. J. 1968 “Recording and reconstructing three-dimensional images of computer-generated subjects by Lippmann integral photography,” Applied Optics 7 (1) 99 - 103    DOI : 10.1364/AO.7.000099
Yang L. , McCormick M. , Davies N. 1988 “Discussion of the optics of a new 3-D imaging system,” Applied Optics 27 (21) 4529 - 4534    DOI : 10.1364/AO.27.004529
Davies N. , McCormick M. , Yang L. 1988 “Three-dimensional imaging systems: a new development,” Applied Optics 27 (21) 4520 - 4528    DOI : 10.1364/AO.27.004520
Davies N. , McCormick M. 1991 “Imaging system,”
Davies N. A. , Brewin M. , McCormick M. 1994 “Design and analysis of an image transfer system using microlens arrays,” Optical Engineering 33 (11) 3624 - 3633    DOI : 10.1117/12.181580
Davies N. , McCormick M. 1997 “Imaging system,”
Davies N. , McCormick M. 1997 “Lens system with intermediate optical transmission microlens screen,”
Davies N. , McCormick M. 1997 “Imaging arrangements,”
Davies N. , McCormick M. 2000 “Imaging arrangements,”
Adelson E. H. , Bergen J. R. 1991 “The plenoptic function and the elements of early vision,” Computational Models of Visual Processing 1 (2) 3 - 20
Gibson J. J. 1966 The Senses Considered as Perceptual Systems Houghton Mifflin Boston, MA
Adelson E. H. , Wang J. Y A. 1992 “Single lens stereo with a plenoptic camera,” IEEE Transactions on Pattern Analysis and Machine Intelligence 14 (2) 99 - 106    DOI : 10.1109/34.121783
Levoy M. , Hanrahan P. 1996 “Light field rendering,” in Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques 31 - 42
Gortler S. J. , Grzeszczuk R. , Szeliski R. , Cohen M. F. 1996 “The lumigraph,” in Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques 43 - 54
Okano F. , Hoshino H. , Arai J. , Yuyama I. 1997 “Real-time pickup method for a three-dimensional image based on integral photography,” Applied Optics 36 (7) 1598 - 1603    DOI : 10.1364/AO.36.001598
Javidi B. , Okano F. 2002 Three-Dimensional Television, Video, and Display Technologies Springer Berlin
Isaksen A. , McMillan L. , Gortler S. J. 2000 “Dynamically reparameterized light fields,” in Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques 297 - 306
Vaish V. , Garg G. , Talvala E. V. , Antunez E. , Wilburn B. , Horowitz M. , Levoy M. 2005 “Synthetic aperture focusing using a shear-warp factorization of the viewing transform,” in Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshop 129 - 129
Ng R. , Levoy M. , Bredif M. , Duval G. , Horowitz M. , Hanrahan P. 2005 “Light field photography with a hand-held plenoptic camera,” Computer Science Technical Report 2 1 - 11
Lumsdaine A. , Georgiev T. 2009 “The focused plenoptic camera,” in Proceedings of IEEE International Conference on Computational Photography 1 - 8
Fife K. , El Gamal A. , Wong H. S. P. 2008 “A multi-aperture image sensor with 0.7 µm pixels in 0.11 µm CMOS technology,” IEEE Journal of Solid-State Circuits 43 (12) 2990 - 3005    DOI : 10.1109/JSSC.2008.2006457
Arai J. , Okano F. , Hoshino H. , Yuyama I. 1998 “Gradient-index lens-array method based on real-time integral photography for three-dimensional images,” Applied Optics 37 (11) 2034 - 2045    DOI : 10.1364/AO.37.002034
Min M. S. , Hong J. , Lee B. 2004 “Analysis of an optical depth converter used in a three-dimensional integral imaging system,” Applied Optics 43 (23) 4539 - 4549    DOI : 10.1364/AO.43.004539
Arai J. , Kawai H. , Kawakita M. , Okano F. 2008 “Depth-control method for integral imaging,” Optics Letters 33 (3) 279 - 281    DOI : 10.1364/OL.33.000279
Navarro H. , Martinez-Cuenca R. , Saavedra G. , Martinez-Corral M. , Javidi B. 2010 “3D integral imaging display by smart pseudoscopic-to-orthoscopic conversion (SPOC),” Optics Express 18 (25) 25573 - 25583    DOI : 10.1364/OE.18.025573
Arai J. , Okui M. , Kobayashi M. , Okano F. 2004 “Geometrical effects of positional errors in integral photography,” Journal of the Optical Society of America A 21 (6) 951 - 958
Tavakoli B. , Daneshpanah M. , Javidi B. , Watson E. 2007 “Performance of 3D integral imaging with position uncertainty,” Optics Express 15 (19) 11889 - 11902    DOI : 10.1364/OE.15.011889
Aggoun A. 2006 “Pre-processing of integral images for 3-D displays,” Journal of Display Technology 2 (4) 393 - 400    DOI : 10.1109/JDT.2006.884691
Sgouros N. P. , Athineos S. S. , Sangriotis M. S. , Papageorgas P. G. , Theofanous N. G. 2006 “Accurate lattice extraction in integral images,” Optics Express 14 (22) 10403 - 10409    DOI : 10.1364/OE.14.010403
Jung S. , Park J. H. , Choi H. , Lee B. 2003 “Viewing-angle-enhanced integral three-dimensional imaging along all directions without mechanical movement,” Optics Express 11 (12) 1346 - 1356    DOI : 10.1364/OE.11.001346
Lee B. , Jung S. , Park J. H. 2002 “Viewing-angle-enhanced integral imaging by lens switching,” Optics Letters 27 (10) 818 - 820    DOI : 10.1364/OL.27.000818
Jang J. S. , Javidi B. 2003 “Improvement of viewing angle in integral imaging by use of moving lenslet arrays with low fill factor,” Applied Optics 42 (11) 1996 - 2002    DOI : 10.1364/AO.42.001996
Martinez-Cuenca R. , Navarro H. , Saavedra G. , Javidi B. , Martinez-Corral M. 2007 “Enhanced viewing-angle integral imaging by multiple-axis telecentric relay system,” Optics Express 15 (24) 16255 - 16260    DOI : 10.1364/OE.15.016255
Martinez-Corral M. , Javidi B. , Martinez-Cuenca R. , Saavedra G. 2004 “Integral imaging with improved depth of field by use of amplitude-modulated microlens arrays,” Applied Optics 43 (31) 5806 - 5813    DOI : 10.1364/AO.43.005806
Martinez-Cuenca R. , Saavedra G. , Martinez-Corral M. , Javidi B. 2005 “Extended depth-of-field 3-D display and visualization by combination of amplitude-modulated microlenses and deconvolution tools,” Journal of Display Technology 1 (2) 321 - 327    DOI : 10.1109/JDT.2005.858883
Castro A. , Frauel Y. , Javidi B. 2007 “Integral imaging with large depth of field using an asymmetric phase mask,” Optics Express 15 (16) 10266 - 10273    DOI : 10.1364/OE.15.010266
Martinez-Cuenca R. , Saavedra G. , Pons A. , Javidi B. , Martinez-Corral M. 2007 “Facet braiding: a fundamental problem in integral imaging,” Optics Letters 32 (9) 1078 - 1080    DOI : 10.1364/OL.32.001078
Navarro H. , Martinez-Cuenca R. , Molina-Martin A. , Martinez-Corral M. , Saavedra G. , Javidi B. 2010 “Method to remedy image degradations due to facet braiding in 3D integral-imaging monitors,” Journal of Display Technology 6 (10) 404 - 411    DOI : 10.1109/JDT.2010.2052347
Okano F. , Arai J. , Mitani K. , Okui M. 2006 “Real-time integral imaging based on extremely high resolution video system,” Proceedings of the IEEE 94 (3) 490 - 501    DOI : 10.1109/JPROC.2006.870687
Mishina T. 2010 “3D television system based on integral photography,” in Proceedings of the 28th Picture Coding Symposium 20 - 20
Arai J. , Okano F. , Kawakita M. , Okui M. , Haino Y. , Yoshimura M. , Sato M. 2010 “Integral three-dimensional television using a 33-megapixel imaging system,” Journal of Display Technology 6 (10) 422 - 430    DOI : 10.1109/JDT.2010.2050192
Matoba O. , Tajahuerce E. , Javidi B. 2001 “Real-time threedimensional object recognition with multiple perspectives imaging,” Applied Optics 40 (20) 3318 - 3325    DOI : 10.1364/AO.40.003318
Kishk S. , Javidi B. 2003 “Improved resolution 3D object sensing and recognition using time multiplexed computational integral imaging,” Optics Express 11 (26) 3528 - 3541    DOI : 10.1364/OE.11.003528
Hong S. H. , Javidi B. 2006 “Distortion-tolerant 3D recognition of occluded objects using computational integral imaging,” Optics Express 14 (25) 12085 - 12095    DOI : 10.1364/OE.14.012085
Schulein R. , Do C. M. , Javidi B. 2010 “Distortion-tolerant 3D recognition of underwater objects using neural networks,” Journal of the Optical Society of America A 27 (3) 461 - 468    DOI : 10.1364/JOSAA.27.000461
Yeom S. , Javidi B. , Watson E. 2007 “Three-dimensional distortion-tolerant object recognition using photon-counting integral imaging,” Optics Express 15 (4) 1513 - 1533    DOI : 10.1364/OE.15.001513
Tavakoli B. , Javidi B. , Watson E. 2008 “Three dimensional visualization by photon counting computational integral imaging,” Optics Express 16 (7) 4426 - 4436    DOI : 10.1364/OE.16.004426
DaneshPanah M. , Javidi B. , Watson E. 2010 “Three dimensional object recognition with photon counting imagery in the presence of noise,” Optics Express 18 (25) 26450 - 26460    DOI : 10.1364/OE.18.026450
Moon I. , Javidi B. 2009 “Three-dimensional recognition of photon-starved events using computational integral imaging and statistical sampling,” Optics Letters 34 (6) 731 - 733    DOI : 10.1364/OL.34.000731
Aloni D. , Stern A. , Javidi B. 2011 “Three-dimensional photon counting integral imaging reconstruction using penalized maximum likelihood expectation maximization,” Optics Express 19 (20) 19681 - 19687    DOI : 10.1364/OE.19.019681
Hong S. H. , Javidi B. 2005 “Three-dimensional visualization of partially occluded objects using integral imaging,” Journal of Display Technology 1 (2) 354 - 359    DOI : 10.1109/JDT.2005.858879
Moon I. , Javidi B. 2008 “Three-dimensional visualization of objects in scattering medium by use of computational integral imaging,” Optics Express 16 (17) 13080 - 13089    DOI : 10.1364/OE.16.013080
Jang J. S. , Javidi B. 2004 “Three-dimensional integral imaging of micro-objects,” Optics Letters 29 (11) 1230 - 1232    DOI : 10.1364/OL.29.001230
Levoy M. , Ng R. , Adams A. , Footer M. , Horowitz M. 2006 “Light field microscopy,” ACM Transactions on Graphics 25 (3) 924 - 934    DOI : 10.1145/1141911.1141976
Javidi B. , Moon I. , Yeom S. 2006 “Three-dimensional identification of biological microorganism using integral imaging,” Optics Express 14 (25) 12096 - 12108    DOI : 10.1364/OE.14.012096
Levoy M. , Zhang Z. , McDowall I. 2009 “Recording and controlling the 4D light field in a microscope using microlens arrays,” Journal of Microscopy 235 (2) 144 - 162    DOI : 10.1111/j.1365-2818.2009.03195.x
Shin D. , Cho M. , Javidi B. 2010 “Three-dimensional optical microscopy using axially distributed image sensing,” Optics Letters 35 (21) 3646 - 3648    DOI : 10.1364/OL.35.003646
Navarro H. , Martinez-Corral M. , Javidi B. , Sanchez-Ortiga E. , Doblas A. , Saavedra G. 2011 “Axial segmentation of 3D images through synthetic-apodization integral-imaging microscopy,” in Proceedings of Focus on Microscopy Conference