Integral Imaging Monitors with an Enlarged Viewing Angle
Journal of information and communication convergence engineering. 2015. Jun, 13(2): 132-138
Copyright © 2015, The Korean Institute of Information and Communication Engineering
This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License, which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
  • Received : November 18, 2014
  • Accepted : February 02, 2015
  • Published : June 30, 2015
Adrián Dorado, Genaro Saavedra, Jorge Sola-Pikabea, and Manuel Martínez-Corral

Enlarging the horizontal viewing angle is an important feature of integral imaging monitors. Thus far, the horizontal viewing angle has been enlarged in different ways, such as by changing the size of the elemental images or by tilting the lens array in the capture and reconstruction stages. However, these methods are limited by the microlenses used in the capture stage and by the fact that the images obtained cannot be easily projected onto different displays. In this study, we upgrade our previously reported method, called SPOC 2.0. In particular, our new approach, which can be called SPOC 2.1, enlarges the viewing angle by increasing the density of the elemental images in the horizontal direction and by an appropriate application of our transformation and reshape algorithm. To illustrate our approach, we have calculated some high-viewing-angle elemental images and displayed them on an integral imaging monitor.
In order to reach the rank of commercial devices for three-dimensional (3D) vision, integral imaging (InI) displays need to overcome some drawbacks, such as a small viewing angle, the pseudoscopic problem, and a limited depth of field. Previous studies have addressed these issues, and some solutions have been reported to overcome the pseudoscopic problem [1 - 5] , the viewing angle limitation [6 - 11] , and the limited depth of field [12 - 14] of InI monitors; these solutions show that the research in this field is on track to achieve a commercial-quality device.
This study is based on the proposal by Miura et al. [10] for increasing the viewing angle of elemental images (EIs). Their method does not require any special device or complex algorithms, and their results are very promising. However, their proposal lacks flexibility, and the images obtained cannot be easily projected on an InI monitor, since the spatial and angular resolution of these images cannot be adapted to the monitor characteristics. Our research group has developed a new method, which benefits from the concept of Miura et al. [10] but overcomes the abovementioned drawbacks.
The rest of this paper is organized as follows: first, the relationships between the parameters related to the viewing angle of a 3D plenoptic image are obtained. Second, the method developed by Miura et al. [10], SPOC 2.0, and the proposed method are explained. Next, the experimental process and the results are described. Finally, we present the conclusions of this research.
The viewing angle is an important feature of any 3D visualization system because it establishes the range of positions from which a viewer can observe the 3D reconstruction. In an InI monitor, the collection of microimages is projected onto the high-resolution display and the microlens array is placed in front of it, so that every microimage is under one microlens. The best visualization is obtained when the microimages are set at the focal plane of the microlenses (see Fig. 1). From this figure, we can easily obtain the horizontal (α) and vertical (β) viewing angles of the 3D reconstruction. Their expressions are as follows:
α = 2 arctan( w / 2f ),   β = 2 arctan( h / 2f )    (1)
where w denotes the width of the microimage, h represents the height, and f indicates the focal length of the microlenses. From Eq. (1), one can see that the larger the microimages are, the larger the viewing angle is. However, for a proper visualization, one should be careful to avoid the overlapping of the microimages.
Fig. 1. Parameters involved in the magnitude of the viewing angle.
In this section, we will explain the method proposed for the enlargement of the viewing angle of InI monitors. First, we will briefly explain the two ideas already reported (Miura et al.'s method [10] and SPOC 2.0 [5]). Second, we will show that a proper fusion of the concepts of these two methods leads to a new and effective method for increasing the viewing angle.
- A. Enlarged View
The basic idea of Miura et al. [10] involves changing the size of the EIs by increasing their width while decreasing their height, keeping the total number of pixels invariant. To do so, the researchers tilted the microlens array in the capture and reconstruction processes. The overlapping issue in the capture stage was solved by inserting an aperture stop in the capture system. Following this architecture, the size of the EIs and the viewing angles were properly modified. Fig. 2 shows the effect of tilting a microlens array with a square arrangement. In this configuration, the size of the EIs is calculated using Eq. (2),
w = p / cos φ,   h = p cos φ    (2)
whereas the viewing angle of the enlarged EIs is expressed as follows:

α = 2 arctan( w / 2f ) = 2 arctan( p / (2f cos φ) )    (3)
In the above equations, p denotes the pitch of the microlenses, f represents their focal length, and φ indicates the tilt angle. Although this idea is an easy and good solution for improving the viewing angle, it has the disadvantage of a lack of flexibility: the microlenses used in the capture stage must be the same as the ones used in the reconstruction stage, and, consequently, both processes are limited.
Fig. 2. Microimage size and distribution. The continuous line shows standard elemental images captured using a rectangular microlens array; the dotted line shows enlarged elemental images captured using the same microlens array tilted by φ = 45°. The width increases while the height decreases, keeping the total number of pixels invariant.
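As a numerical check (our own, using the display parameters reported later in the experimental section: f = 3.3 mm, p = 1 mm, φ = 45°), Eqs. (1)–(3) reproduce the microimage sizes and viewing angles quoted in the experiments:

```python
import math

f = 3.3            # focal length of the microlenses (mm)
p = 1.0            # microlens pitch (mm)
phi = math.pi / 4  # tilt angle of the microlens array

# Standard case: square microimages, w = h = p, Eq. (1)
alpha_std = math.degrees(2 * math.atan(p / (2 * f)))

# Enlarged case, Eq. (2): width grows, height shrinks, area (pixel count) preserved
w = p / math.cos(phi)
h = p * math.cos(phi)

# Eq. (3): horizontal viewing angle of the enlarged EIs
alpha_enl = math.degrees(2 * math.atan(w / (2 * f)))

print(round(alpha_std, 2), round(alpha_enl, 2))  # 17.23 24.19
```

These are exactly the theoretical values of 17.23° and 24.19° discussed in the experimental section.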
- B. SPOC 2.0
We reported a method called SPOC 2.0 that can be used to solve this type of problem [5] . Basically, SPOC 2.0 takes advantage of three facts: 1) the flexibility that InI offers in the capture process, 2) the possibility of obtaining an equivalent plenoptic image from an integral image [5 , 15] , and 3) the fact that a plenoptic image can be easily projected onto a display. The last fact enables the projection of an InI image by transforming it into a plenoptic image already adapted to the chosen microlens array and display. Therefore, SPOC 2.0 provides freedom in the capture process and, additionally, adapts InI to any given display and any given microlenses. Besides, the images obtained are already orthoscopic, and we can choose, with some degree of freedom, the reference plane and the field of view of the scene.
The characteristics of the integral image determine the characteristics of the equivalent plenoptic image. For example, the number of elemental images in an integral image fixes the pixels per microimage of the equivalent plenoptic image. In addition, the number of pixels of any elemental image determines the microimage count of the equivalent plenoptic image. However, there is another question to account for: does the spatial configuration of the pixels within any elemental image determine the distribution of the microimage centers? Note that in order to build a microimage, the same pixel from every EI is mapped taking into account its relative position in the integral image. Therefore, the first pixel of every EI corresponds to the first microimage. Then, we can assume that every pixel in the EI corresponds to one microimage, and therefore, the relative position of the pixels in an EI determines the distribution of microlenses. For example, in a normal camera, the pixels are distributed in a rectangular grid, and therefore, the microimage distribution will be equivalent. However, if the pixels in the EIs were distributed in a hexagonal grid, the equivalent microimages would be rearranged following a hexagonal pattern. Therefore, we can obtain a hexagonal microimage distribution from EIs with a hexagonal pixel distribution. This can be easily obtained with a normal camera just by resizing or averaging pixels, since the EIs captured with the synthetic aperture method have more pixels than required, which permits considerable flexibility in the adaptation process.
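For a rectangular pixel grid, the pixel mapping described above reduces to an array transposition. The following is a minimal sketch of that idea (our own illustration, not the authors' actual code; we assume the integral image is stored as an (n, n, m, m) array of n × n EIs with m × m pixels each):

```python
import numpy as np

def integral_to_plenoptic(eis):
    """Build the equivalent plenoptic image from an integral image.

    eis has shape (n, n, m, m): n x n elemental images, each m x m pixels.
    Pixel (i, j) of every EI is sent to microimage (i, j), at the position
    given by that EI's index (u, v). Hence the number of EIs fixes the
    pixels per microimage, and the EI resolution fixes the microimage count.
    """
    return eis.transpose(2, 3, 0, 1)  # resulting shape: (m, m, n, n)

# Tiny example: 2 x 2 EIs of 3 x 3 pixels -> 3 x 3 microimages of 2 x 2 pixels
eis = np.arange(2 * 2 * 3 * 3).reshape(2, 2, 3, 3)
plen = integral_to_plenoptic(eis)
assert plen[1, 2, 0, 1] == eis[0, 1, 1, 2]
```

A hexagonal pixel distribution would simply shift alternate rows of indices before this mapping, yielding the hexagonal microimage arrangement discussed above.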
- C. SPOC 2.1
The proposed method is based on combining the concepts behind the two previously described methods. Our goal is the generation of a plenoptic image with an enlarged viewing angle that can be projected into any InI monitor by using any microlenses for 3D visualization.
As stated before, from an integral image, we can obtain an equivalent plenoptic image. To reproduce Miura et al.'s enlarged EIs, we need to obtain a plenoptic image with asymmetric microimages arranged in a hexagonal distribution. Therefore, the captured integral image has to have an asymmetric number of photographs in the horizontal and vertical directions, and its EIs must have a hexagonal pixel distribution. The equivalent microimage size and distribution of the plenoptic image depend on the tilted microlens array that will be used in the visualization stage. Taking this into account, we proceed to describe the steps needed to produce plenoptic images with an enlarged viewing angle.
The procedure comprises nine steps:
1. Choose the display and the microlenses that will be used in the 3D reconstruction.
2. Calculate the number of pixels of the selected display that form a microimage of the same size as one microlens of the chosen array. This is the size in pixels of a microimage of the standard projected plenoptic image.
3. Decide the tilt angle of the microlenses; this determines the increase in the viewing angle. Note that, because of the overlapping of the microimages, not all rotation angles are valid [11].
4. Calculate the new size in pixels of the microimages after tilting the microlens array, using Eq. (2) in the case of a rectangular microlens array. This is the size in pixels of one microimage of the enlarged-view plenoptic image.
5. Capture the integral image of the 3D scene with a number of EIs equal to the number of pixels per microimage of the standard plenoptic image.
6. Capture another integral image with a number of EIs equal to the number of pixels per microimage of the enlarged-view plenoptic image.
7. Transform the two integral images into plenoptic images by applying SPOC 2.0, resizing the EIs of the enlarged-view case so that they have a hexagonal distribution and taking into account that the final plenoptic image has to have a hexagonal microimage distribution.
8. Resize both images so that each has the correct number of pixels per microimage.
9. Project the images onto the display.
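Steps 2–4 above reduce to a few lines of arithmetic. A minimal sketch (the display density and lens parameters are those of the experimental section; the helper name is ours):

```python
import math

def microimage_sizes(pitch_mm, density_px_per_mm, tilt_rad):
    """Pixels per microimage before and after tilting a square-pitch array.

    Standard case: a square microimage of side equal to the pitch.
    Tilted case, Eq. (2): width p / cos(phi), height p * cos(phi),
    which preserves the pixel count per microimage.
    """
    side = pitch_mm * density_px_per_mm
    std = (side, side)
    enl = (pitch_mm / math.cos(tilt_rad) * density_px_per_mm,
           pitch_mm * math.cos(tilt_rad) * density_px_per_mm)
    return std, enl

std, enl = microimage_sizes(1.0, 10.39, math.pi / 4)
print(std)  # (10.39, 10.39)
print(enl)  # about (14.69, 7.35) px, as used in the experiments
```

The non-integer pixel counts are handled in step 5/6 by rounding the number of captured photographs, as described in the experimental section.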
In order to test our proposal, we capture two integral images, transform them into plenoptic images, and finally project them onto a display. While one of the plenoptic images is a standard image, with square microimages (the same number of pixels in the x and y directions), the other has an enlarged horizontal viewing angle, with rectangular microimages (with the width of the EI greater than the height). Both captures are performed following the classic synthetic aperture method: shifting the position of the camera per shot [16] .
For the display process, we used a rectangular microlens array with focal length f = 3.3 mm and pitch p = 1 mm. The display is an iPad with a pixel density of 10.39 pixels/mm. The microimages in the normal plenoptic image have the same size as the microlenses, w = 1 mm and h = 1 mm.
In our method, the tilt angle for the enlarged case is φ = π/4; therefore, from Eq. (2), the size of the microimages is w = p/cos(π/4) = √2 mm ≈ 1.41 mm and h = p cos(π/4) ≈ 0.71 mm, which corresponds to 14.6937 × 7.3468 pixels of the display.
The integral image is recorded using a Canon 450D with an objective focal length of 18 mm, focused at infinity. The photographs have a resolution of 4272 × 2848 pixels. The whole 3D scene is captured by moving the camera with a pair of motors. The camera moves in the ( x, y ) directions with a step of 5 mm between photographs. The scene is composed of a background 52.5 cm away from the camera, a toy placed at a distance of 40 cm from the camera, and two steel cylinders at 35 cm from the camera.
To obtain the correct microimages for both images, the number of photographs captured should be equal to the number of pixels per microimage. However, as these values must be integers, 11 × 11 photographs are recorded for the standard image, and 15 × 8 photographs are captured for the enlarged view case. SPOC 2.0 is then applied to these experimental photographs, and both images are processed to obtain 139 × 139 microimages each. After the transformation, both plenoptic images are resized in order to match the correct dimensions of the microimages. The final images are shown in Fig. 3 .
Fig. 3. (a) Standard plenoptic image with 139 × 139 microimages and 10.39 × 10.39 pixels per microimage, (b) enlarged viewing angle plenoptic image with 139 × 139 microimages and 14.6937 × 7.3468 pixels per microimage, (c) microimages of the standard plenoptic image, and (d) microimages of the enlarged viewing angle plenoptic image.
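For concreteness, the final image dimensions implied by these numbers (our own arithmetic; the rounding to whole pixels is an assumption on our part):

```python
# 139 x 139 microimages; pixels per microimage from the display geometry
n_micro = 139
std_w = std_h = 10.39            # standard microimage, px
enl_w, enl_h = 14.6937, 7.3468   # enlarged microimage, px

print(round(n_micro * std_w))                          # 1444 px square (standard)
print(round(n_micro * enl_w), round(n_micro * enl_h))  # 2042 x 1021 px (enlarged)
```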
To show the increase in the viewing angle, we recorded a series of photographs simulating the head movement of an observer visualizing the projected images. The distance between neighboring photographs corresponded to an angular difference of 0.5°. We found a viewing angle of 15° for the standard image and 22° for the enlarged image (see Figs. 4 and 5 ).
Fig. 4. Selection of photographs composing one loop of the viewing angle for the standard image case.
Fig. 5. Selection of photographs composing one loop of the viewing angle for the enlarged view case.
Although it is clear that the viewing angle is increased, we are interested in comparing the experimental value with the theoretical one. Using Eq. (3), we obtain the theoretical values of 17.23° and 24.19° for the normal and enlarged view cases, respectively. Obviously, there is some discrepancy between the theoretical and experimental values. This discrepancy can be attributed to the fact that Eq. (3) is obtained assuming that the microimages are placed at the focal plane of the microlenses. Experimentally, this condition is not fulfilled because of the existence of a plate protecting the pixels in the iPad, making it impossible to place the image at the focal plane of the microlens array.
To check whether this measured discrepancy is really produced because of the crystal layer, another measurement of the viewing angle is conducted. In this experiment, the standard plenoptic image was printed on photographic paper and placed at the focal distance of the microlenses. For this case, the measured viewing angle is 17.5° ± 0.5°, very close to the theoretical value within the experimental error. From this result, we can conclude that the measured discrepancy in the viewing angle for the standard and the enlarged view plenoptic images is principally due to the crystal layer of the iPad.
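A rough consistency check of this explanation (our own back-of-the-envelope estimate, not a measurement from the paper): inverting Eq. (1) with the measured 15° angle gives the effective lens-to-image distance g that the protective plate would impose on the standard image:

```python
import math

w = 1.0                          # standard microimage width (mm)
alpha_meas = math.radians(15.0)  # measured viewing angle, standard image

# Invert alpha = 2 * arctan(w / 2g) for the effective distance g
g = w / (2 * math.tan(alpha_meas / 2))
print(round(g, 2))  # 3.8 mm, larger than the focal length f = 3.3 mm
```

The implied g of about 3.8 mm exceeds f = 3.3 mm, consistent with the plate pushing the image beyond the focal plane of the microlenses.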
We have reported a new algorithm for the calculation of microimages with an enlarged viewing angle, which can be projected on an InI monitor. We have presented experimental results validating this algorithm. Our algorithm is an improved version of SPOC 2.0, which adds the idea of reshaping the EIs to increase the viewing angle.
This research was supported in part by the Ministerio de Economia y Competitividad (MINECO), Spain, under Grant DPI2012-32994, and by the Generalitat Valenciana under Grant PROMETEOII2014-072. A. Dorado acknowledges a predoctoral grant from the MINECO.
Manuel Martinez-Corral
was born in Spain in 1962. He received his M.Sc. and Ph.D. in physics from the University of Valencia, Spain, in 1988 and 1993, respectively. He is currently a Full Professor of Optics at the University of Valencia, where he is with the “3D Imaging and Display Laboratory.” Since 2010, he has been a Fellow of SPIE. His research interests include scalar and vector properties of tightly focused light fields, resolution procedures in 3D scanning microscopy, and 3D imaging and display technologies. He has published more than 70 technical articles in major journals, which have received more than 1,500 citations, and delivered more than 25 invited and keynote presentations at international meetings. He has been a member of the Scientific Committee of more than 15 international meetings. He is a Topical Editor of the IEEE/OSA Journal of Display Technology and of the Springer journal 3D Research. He is an Associate Editor of JICCE.
Adrián Dorado
was born in Spain in 1988. He received his B.Sc. and M.Sc. in physics from the University of Valencia, Spain, in 2011 and 2012, respectively. Since 2010, he has been with the 3D Imaging and Display Laboratory, Optics Department, University of Valencia. His research interests include 3D imaging acquisition and display.
Jorge Sola-Pikabea
received his BSc in Physics from the University of Valencia, Spain, in 2014 and is currently pursuing his MSc degree. Since 2013, he has been with the “3D Imaging and Display Laboratory” at the Optics Department of the University of Valencia. His research interests include 3D microscopy.
Arai J. , Okano F. , Hoshino H. , Yuyama I. 1998 “Gradient-index lens-array method based on real-time integral photography for three-dimensional images,” Applied Optics 37 (11) 2034 - 2045    DOI : 10.1364/AO.37.002034
Min S. W. , Hong J. , Lee B. 2004 “Analysis of an optical depth converter used in a three-dimensional integral imaging system,” Applied Optics 43 (23) 4539 - 4549    DOI : 10.1364/AO.43.004539
Arai J. , Kawai H. , Kawakita M. , Okano F. 2008 “Depth-control method for integral imaging,” Optics Letters 33 (3) 279 - 281    DOI : 10.1364/OL.33.000279
Navarro H. , Martínez-Cuenca R. , Saavedra G. , Martínez-Corral M. , Javidi B. 2010 “3D integral imaging display by smart pseudoscopic-to-orthoscopic conversion (SPOC),” Optics Express 18 (25) 25573 - 25583    DOI : 10.1364/OE.18.025573
Martínez-Corral M. , Dorado A. , Navarro H. , Saavedra G. , Javidi B. 2014 “Three-dimensional display by smart pseudoscopic-to-orthoscopic conversion with tunable focus,” Applied Optics 53 (22) E19 - E25    DOI : 10.1364/AO.53.000E19
Jung S. , Park J. H. , Choi H. , Lee B. 2003 “Viewing-angle-enhanced integral three-dimensional imaging along all directions without mechanical movement,” Optics Express 11 (12) 1346 - 1356    DOI : 10.1364/OE.11.001346
Lee B. , Jung S. , Park J. H. 2002 “Viewing-angle-enhanced integral imaging by lens switching,” Optics Letters 27 (10) 818 - 820    DOI : 10.1364/OL.27.000818
Jang J. S. , Javidi B. 2003 “Improvement of viewing angle in integral imaging by use of moving lenslet arrays with low fill factor,” Applied Optics 42 (11) 1996 - 2002    DOI : 10.1364/AO.42.001996
Martínez-Cuenca R. , Navarro H. , Saavedra G. , Javidi B. , Martinez-Corral M. 2007 “Enhanced viewing-angle integral imaging by multiple-axis telecentric relay system,” Optics Express 15 (24) 16255 - 16260    DOI : 10.1364/OE.15.016255
Miura M. , Arai J. , Mishina T. , Okui M. , Okano F. 2012 “Integral imaging system with enlarged horizontal viewing angle,” in Three-Dimensional Imaging, Visualization, and Display, Proceedings of the SPIE, vol. 8384, Bellingham, WA
Miura M. , Arai J. , Okui M. , Okano F. 2011 “Method of enlarging horizontal viewing zone in integral imaging,” in Three-Dimensional Imaging, Visualization, and Display, Proceedings of the SPIE, vol. 8043, Bellingham, WA
Martínez-Corral M. , Javidi B. , Martínez-Cuenca R. , Saavedra G. 2004 “Integral imaging with improved depth of field by use of amplitude-modulated microlens arrays,” Applied Optics 43 (31) 5806 - 5813    DOI : 10.1364/AO.43.005806
Martínez-Cuenca R. , Saavedra G. , Martínez-Corral M. , Javidi B. 2005 “Extended depth-of-field 3-D display and visualization by combination of amplitude-modulated microlenses and deconvolution tools,” Journal of Display Technology 1 (2) 321 - 327    DOI : 10.1109/JDT.2005.858883
Castro A. , Frauel Y. , Javidi B. 2007 “Integral imaging with large depth of field using an asymmetric phase mask,” Optics Express 15 (16) 10266 - 10273    DOI : 10.1364/OE.15.010266
Navarro H. , Barreiro J. C. , Saavedra G. , Martínez-Corral M. , Javidi B. 2012 “High-resolution far-field integral-imaging camera by double snapshot,” Optics Express 20 (2) 890 - 895    DOI : 10.1364/OE.20.000890
Jang J. S. , Javidi B. 2002 “Three-dimensional synthetic aperture integral imaging,” Optics Letters 27 (13) 1144 - 1146    DOI : 10.1364/OL.27.001144