An accelerated spatial redundancy-based novel-look-up-table (A-SR-NLUT) method based on a new concept of the N-point one-dimensional sub-principal fringe pattern (N-point 1D sub-PFP) is implemented on a graphics processing unit (GPU) for fast calculation of computer-generated holograms (CGHs) of three-dimensional (3D) objects. Since the proposed method can generate the N-point two-dimensional (2D) PFPs for CGH calculation from the pre-stored N-point 1D sub-PFPs, the loading time of the N-point PFPs on the GPU can be dramatically reduced, which results in a great increase of the computational speed of the proposed method. Experimental results confirm that the average calculation time for one object point has been reduced by 49.6% and 55.4% compared to those of the conventional 2D SR-NLUT methods for the 2-point and 3-point SR maps, respectively.
1. Introduction
Thus far, a number of approaches to generate computer-generated holograms (CGHs) of three-dimensional (3D) objects have been proposed [1-15]. One of them is the novel-look-up-table (NLUT) method, which can greatly enhance the computational speed as well as massively reduce the total number of pre-calculated interference patterns required for CGH generation of 3D objects [3].
In fact, the memory capacity and the calculation time have been known as the two most challenging issues in the NLUT method. For reducing the memory, a new type of NLUT based on one-dimensional (1D) sub-principal fringe patterns (1D sub-PFPs) decomposed from the conventional 2D PFPs, called the 1D NLUT, has been proposed [4]. In this method, the gigabyte (GB)-order memory of the conventional 2D PFP-based NLUT, called the 2D NLUT, could be dropped down to the order of megabytes (MB). In addition, for enhancing the computational speed, the NLUT method employs various image compression methods for removal of both spatially and temporally redundant data of 3D objects and 3D moving scenes [5-8]. Among them, for removing the intra-frame redundant data, a spatial redundancy-based NLUT (SR-NLUT) was proposed [6], in which spatially redundant object data between adjacent pixels of the 3D image are removed with the run-length encoding (RLE) algorithm, and then the N-point PFP is applied to the NLUT for CGH generation.
Actually, for practical application of the NLUT methods mentioned above, the original NLUT and 1D NLUT algorithms have been implemented on field-programmable gate arrays (FPGAs) and graphics processing units (GPUs), respectively [9,10]. However, due to the limited bandwidth of the bus between the main memory directly connected to the CPU and the memories in the GPU, the process of restoring the 2D PFPs from the pre-stored 1D sub-PFPs may deteriorate the computational performance of the GPU-based SR-NLUT system. That is, it might be practically impossible to transmit a large amount of 2D PFP data from the host computer to the GPU in real time.
Therefore, in this paper, a new type of N-point 1D sub-PFP is proposed to greatly accelerate the CGH calculation speed while keeping the memory capacity small. The proposed method is implemented on a GPU to confirm its feasibility for practical applications. Here, the N-point 1D sub-PFPs are generated by combined use of the 1D sub-PFPs and the RLE algorithm. Then, in the calculation process, the N-point PFPs are generated from these pre-calculated N-point 1D sub-PFPs, and with these the CGH patterns of 3D objects are finally generated. A remarkable reduction of the memory capacity as well as a dramatic enhancement of the calculation speed is expected to be obtained by using this new concept of the N-point 1D sub-PFPs. Experiments with test 3D objects are carried out, and the results are comparatively analyzed with those of the conventional NLUT methods in terms of the number of calculated object points and the calculation time.
2. Proposed method
 2.1 Conventional NLUT method
A geometric structure to compute the Fresnel fringe pattern of a volumetric 3D object is shown in Fig. 1. Here, the location of the p-th object point is specified by the coordinates (x_p, y_p, z_p), and each object point is assumed to have an associated real-valued magnitude α_p and phase φ_p. The hologram pattern to be calculated is also assumed to be positioned on the depth plane of z = 0 [3].
Fig. 1. Geometry for generating the Fresnel hologram pattern of a 3D object.
Actually, a 3D object can be treated as a set of image planes discretely sliced along the z direction, and each image plane at a specific depth is approximated as a collection of self-luminous object points of light. In the NLUT, only the 2D PFPs representing the fringe patterns of the object points located at the centers of each depth plane are pre-calculated and stored [3]. Here, we can define the unity-magnitude PFP for the object point (x_p, y_p, z_p) positioned at the center of a depth plane of z_p, T(x, y; z_p), as Eq. (1).
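The body of Eq. (1) is not reproduced in this version of the text. In the standard NLUT formulation [3], the unity-magnitude PFP under the Fresnel approximation takes a form such as the following (a hedged reconstruction; the authoritative expression, including any constant phase term, is given in [3]):

```latex
T(x, y; z_p) \;=\; \cos\!\left[\frac{k}{2 z_p}\bigl(x^{2} + y^{2}\bigr)\right] \tag{1}
```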
Here, the wave number k is defined as k = 2π/λ, where λ denotes the free-space wavelength of the light. Thus, the fringe patterns for other object points on the depth plane of z_p can be obtained by simply shifting the PFP of Eq. (1). These shifted versions of PFPs are added together to get the CGH pattern for the depth plane of z_p. In addition, this process is carried out for all depth planes of the 3D object to get the final CGH pattern of that object. Therefore, in the NLUT method, the CGH pattern for an object, I(x, y), can be expressed in terms of the shifted versions of the pre-calculated PFPs of Eq. (1) as shown in Eq. (2).
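Equation (2) is likewise missing from this version; consistent with the shift-and-add description above, it has the form (hedged reconstruction, using the magnitude α_p defined earlier):

```latex
I(x, y) \;=\; \sum_{p=1}^{P} \alpha_p\, T\bigl(x - x_p,\; y - y_p;\; z_p\bigr) \tag{2}
```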
Here, P denotes the number of object points. Equation (2) shows that the NLUT may enable obtaining the CGH pattern of a 3D object just by a combination of shifting and adding operations of the PFPs on each depth plane of the 3D object.
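As an illustration of this shift-and-add principle, the following NumPy sketch builds a toy CGH from one pre-computed PFP. The array sizes, wavelength, depth, and point list are hypothetical illustration values, not the paper's experimental parameters:

```python
import numpy as np

WL = 532e-9          # free-space wavelength (hypothetical)
PITCH = 10e-6        # pixel pitch
K = 2 * np.pi / WL   # wave number k = 2*pi/lambda

def pfp(size, z):
    """Unity-magnitude PFP for the center point of depth plane z (Fresnel form)."""
    c = np.arange(size) - size // 2
    x, y = np.meshgrid(c * PITCH, c * PITCH)
    return np.cos(K * (x**2 + y**2) / (2 * z))

def cgh(points, hologram_size, pfp_size, z):
    """Shift-and-add the single PFP for every object point on one depth plane."""
    t = pfp(pfp_size, z)
    holo = np.zeros((hologram_size, hologram_size))
    margin = (pfp_size - hologram_size) // 2
    for px, py, amp in points:   # pixel offsets from the plane center, amplitude
        # shifting the PFP by (px, py) == cropping a shifted window out of it
        holo += amp * t[margin - py:margin - py + hologram_size,
                        margin - px:margin - px + hologram_size]
    return holo

h = cgh([(0, 0, 1.0), (5, 0, 0.5)], hologram_size=64, pfp_size=128, z=0.1)
```

Because the hologram is a plain sum of shifted PFP windows, the pattern for many points is exactly the sum of the single-point patterns, which is what makes the look-up-table approach work.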
 2.2. Proposed method
Figure 2 shows an overall block diagram of the proposed 1D SR-NLUT method for the accelerated computation of CGH patterns for the 3D object, which is largely composed of four steps. In the first step, the spatial redundancy of the intensity and depth data of the 3D object is pre-processed by using the RLE method, and the data are regrouped into the N-point redundancy map according to the number of neighboring object points having the same 3D value. In the second step, N-point sub-PFPs corresponding to the N-point redundancy maps are calculated by shifting and adding the 1-point sub-PFP of the conventional 1D NLUT. In the third step, the CGH pattern of the 3D object is calculated with these pre-calculated N-point sub-PFPs. In the fourth step, the 3D object image is reconstructed from the calculated CGH pattern.
Fig. 2. Block diagram of the proposed method for generation of the CGH pattern for the 3D image.
 2.2.1 Extraction of the spatial redundancy from a 3D object
When adjacent pixels of a 3D object have the same color and depth values, this is called spatial redundancy in both the intensity and depth data of the input 3D image [6,17-19]. Figure 3 shows the concept of spatial redundancy in the 3D input image.
Fig. 3. Spatial redundancy of the 3D input image: (a) Gray scale of the test image, (b) Spatial redundancy map.
Figure 3(a) shows a 3D object with a 5×5 resolution at one depth plane and only three gray values: 10, 150, and 255. Figure 3(b) shows the SR map that is horizontally extracted from Fig. 3(a) using the RLE method. Here, '3/255' means that there exist three adjacent image pixels having the same gray value of '255' in the corresponding row.
As seen in Fig. 3, 13 (= 4 + 6 + 3) calculation processes are needed in the proposed method for generation of the CGH pattern, in contrast to the conventional NLUT method, where 25 (= 5×5) calculation processes are normally needed. That is, in the proposed method, 12 calculation processes can be saved in CGH generation. The remaining empty cells in Fig. 3(b), called 'don't care' conditions, mean that no CGH calculation is needed for those pixels.
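The row-wise RLE grouping can be sketched as follows. The 5×5 image below is a hypothetical stand-in for Fig. 3(a), since the exact pixel layout is not given in the text:

```python
def rle_rows(image):
    """Run-length encode each row of a gray-scale (or intensity/depth) plane.

    Returns a list of (row, start_col, run_length, value) tuples, i.e. the
    spatial-redundancy (SR) map; pixels after the first of each run become
    'don't care' entries and need no separate CGH calculation.
    """
    runs = []
    for r, row in enumerate(image):
        c = 0
        while c < len(row):
            n = 1
            while c + n < len(row) and row[c + n] == row[c]:
                n += 1
            runs.append((r, c, n, row[c]))
            c += n
    return runs

# hypothetical 5x5 gray-scale plane with three gray levels (10, 150, 255)
img = [
    [255, 255, 255, 10, 10],
    [150, 150, 150, 150, 10],
    [255, 255, 10, 10, 10],
    [150, 150, 150, 255, 255],
    [10, 10, 10, 10, 10],
]
sr_map = rle_rows(img)
# the 25 pixels collapse to far fewer runs; only the runs need CGH calculations
print(len(sr_map))
```

For this particular toy image the 25 pixels collapse to 9 runs, illustrating how the number of required N-point PFP calculations shrinks with the redundancy of the scene.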
 2.2.2 Generation of N-point PFPs and N-point sub-PFPs
In the SR-NLUT method, the 1-point PFP for one object point is defined as Eq. (1) of the conventional NLUT mentioned above. The 2-point PFP for two adjacent object points with unity magnitude and depth of z_p can then be expressed by Eq. (3).
Here, d represents the discretization step between adjacent points [3,6].
Likewise, the N-point PFP for N adjacent object points with unity magnitude and depth of z_p, T_n(x, y; z_p), can be expressed by Eq. (4).
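The bodies of Eqs. (3) and (4) are not reproduced in this version of the text. From the shift-and-add construction described above, they take forms such as (hedged reconstruction, consistent with Eq. (1)):

```latex
T_2(x, y; z_p) \;=\; T(x, y; z_p) + T(x - d,\, y;\, z_p) \tag{3}
T_n(x, y; z_p) \;=\; \sum_{i=0}^{N-1} T(x - i d,\, y;\, z_p) \tag{4}
```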
Therefore, the N-point PFP of Eq. (5) can be derived by using a set of 1D sub-PFPs.
Here, S_{c,1}, S_{s,1}, S_{c,n} and S_{s,n} denote the 1-point 1D cosine sub-PFP, the 1-point 1D sine sub-PFP, the N-point 1D cosine sub-PFP and the N-point 1D sine sub-PFP, respectively. The N-point cosine and sine sub-PFPs can then be expressed by Eq. (6).
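Equations (5) and (6) are also missing from this version. Assuming the Fresnel PFP of Eq. (1), the cosine addition identity cos(a + b) = cos a cos b − sin a sin b separates the 2D PFP into 1D factors, with S_{c,1}(x) = cos(kx²/2z_p) and S_{s,1}(x) = sin(kx²/2z_p); summing over the N shifted x-positions then gives (hedged reconstruction, the authoritative forms are in [4,6]):

```latex
T_n(x, y; z_p) \;=\; S_{c,n}(x)\, S_{c,1}(y) \;-\; S_{s,n}(x)\, S_{s,1}(y) \tag{5}
S_{c,n}(x) = \sum_{i=0}^{N-1} S_{c,1}(x - i d), \qquad
S_{s,n}(x) = \sum_{i=0}^{N-1} S_{s,1}(x - i d) \tag{6}
```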
Figure 4 shows a flowchart for generating the N-point sub-PFPs of Eq. (6) for an arbitrary depth plane using the proposed method. Here, we consider three adjacent points located on a depth plane of z_1: A(0, 0, z_1), B(d, 0, z_1) and C(2d, 0, z_1), as shown in Fig. 4(a). Figure 4(b) shows the 1-point sub-PFP for an arbitrary depth plane of z_1 in the conventional 1D NLUT. Thus, the 3-point sub-PFP for three adjacent object points can be calculated by simple shifting and adding operations on the 1-point sub-PFP.
Fig. 4. A generation process of the 3-point sub-PFP from the 1D sub-PFP.
That is, the 1D diffraction pattern of the object point A(0, 0, z_1), which is located at the center of the image plane of z_1, can be obtained by simply locating the center of the 1-point PFP of Fig. 4(b) on this object point, as seen in Fig. 4(c1). Then, the 1D diffraction pattern for the object point B(d, 0, z_1) can be obtained just by shifting the center of the 1-point PFP of Fig. 4(b) in the x-direction by +d, as shown in Fig. 4(c2). Likewise, the 1D diffraction pattern for the object point C(2d, 0, z_1) can be obtained by shifting the center of the 1-point PFP of Fig. 4(b) in the x-direction by +2d, as seen in Fig. 4(c3). By adding these three shifted versions of the 1-point PFP, the 3-point PFP can finally be obtained, as shown in Fig. 4(d).
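The shift-and-add generation of an N-point 1D sub-PFP from the 1-point sub-PFP can be sketched as follows. The array length, wavelength, depth, and the step d in pixels are hypothetical illustration values:

```python
import numpy as np

WL = 532e-9           # free-space wavelength (hypothetical)
PITCH = 10e-6         # pixel pitch
K = 2 * np.pi / WL    # wave number
D = 3                 # discretization step d, in pixels (hypothetical)

def sub_pfps_1point(size, z):
    """1-point 1D cosine/sine sub-PFPs for depth plane z (Fresnel form)."""
    x = (np.arange(size) - size // 2) * PITCH
    phase = K * x**2 / (2 * z)
    return np.cos(phase), np.sin(phase)

def sub_pfp_npoint(s1, n, d):
    """N-point sub-PFP: sum of the 1-point sub-PFP shifted by 0, d, ..., (n-1)d pixels."""
    out = np.zeros_like(s1)
    for i in range(n):
        out += np.roll(s1, i * d)   # circular shift stands in for a padded shift
    return out

sc1, ss1 = sub_pfps_1point(1024, 0.1)
sc3 = sub_pfp_npoint(sc1, 3, D)   # 3-point cosine sub-PFP (points A, B, C)
ss3 = sub_pfp_npoint(ss1, 3, D)   # 3-point sine sub-PFP
```

In a full implementation the sub-PFP would be zero-padded rather than circularly shifted; `np.roll` keeps the sketch short while showing the same shift-and-add structure.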
 2.2.3 Calculation of the CGH pattern of a 3D object
Figure 5 shows a flowchart to generate a CGH pattern for an arbitrary 3D object using the proposed method. Here, we consider a test image plane of an arbitrary 3D object, which is located on the depth plane of z_1 and has five object points: A(x_1, y_1, z_1), B(x_1+d, y_1, z_1), C(x_1+2d, y_1, z_1), D(x_2, y_2, z_1) and E(x_2+d, y_2, z_1), as shown in Fig. 5(a). As seen in Figs. 5(b) and 5(c), there are two groups of object points: one group ('Group I') is composed of the three adjacent object points A(x_1, y_1, z_1), B(x_1+d, y_1, z_1) and C(x_1+2d, y_1, z_1), and the other group ('Group II') is composed of the two adjacent object points D(x_2, y_2, z_1) and E(x_2+d, y_2, z_1). They all have the same intensity and depth values and are separated from each other by a discretization step of d. Therefore, the CGH pattern for the three adjacent object points of 'Group I' can be calculated just by using the 3-point sine/cosine sub-PFPs and 1-point sine/cosine sub-PFPs, as seen in Fig. 5(b).
Fig. 5. A CGH generation process of the 3D object using the proposed method: (a) Object points and their spatial redundancy, (b) Hologram generation for 'Group I' with the 1D sub-PFP, (c) Hologram generation for 'Group II' with the 1D sub-PFP, (d) Generated CGH pattern for 'Group I', (e) Generated CGH pattern for 'Group II', (f) Finally generated CGH pattern for all object points.
That is, the fringe pattern for the three adjacent object points of 'Group I', shown in Fig. 5(d), can be obtained by using Eq. (5). Similarly, the fringe pattern for the two adjacent object points of 'Group II', shown in Fig. 5(e), can be obtained by using the 2-point sine/cosine sub-PFPs and 1-point sine/cosine sub-PFPs, as shown in Fig. 5(c). Finally, the CGH pattern for all five object points can be generated by adding the hologram patterns of Figs. 5(d) and 5(e) together. That is, the final CGH pattern in Fig. 5(f) can be calculated with Eq. (7).
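Equation (7) is also missing from this version of the text; consistent with Eqs. (2) and (5), the final CGH can be written as (hedged reconstruction):

```latex
I(x, y) \;=\; \sum_{n} \sum_{p=1}^{P_n} \alpha_p\, T_n\bigl(x - x_p,\; y - y_p;\; z_p\bigr) \tag{7}
```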
Here, P_n represents the number of object points having the same intensity and depth values.
3. Experiments and results
In the experiment, three types of 3D objects, 'Dice', 'Car' and 'House and Car', are used as the test objects, and their intensity and depth images are shown in Fig. 6. Here, the resolutions of each test 3D object and of the CGH pattern are assumed to be 300×300×256 pixels and 1,920×1,080 pixels, respectively, in which each pixel size is given by 10 μm × 10 μm. Horizontal and vertical discretization steps of less than 30 μm (100 mm × 0.003 = 30 μm) are chosen since the viewing distance is assumed to be 100 mm. Accordingly, to fully display the fringe patterns, the 2D PFP must be shifted by 900 pixels (300 × 3 pixels = 900 pixels) horizontally and vertically. Thus, the total resolution of the 2D PFP becomes 2,820 (1,920 + 900) × 1,980 (1,080 + 900) pixels. For the 1D NLUT and proposed methods, only two sets of N-point 1D sub-PFPs are needed; therefore, the total resolution of each 1D sub-PFP becomes 2,820 (1,920 + 900) × 1 pixels.
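The PFP-size bookkeeping above can be checked with a few lines of arithmetic (the variable names are illustrative only):

```python
OBJ_RES = 300                 # object resolution per axis (pixels)
CGH_W, CGH_H = 1920, 1080     # hologram resolution (pixels)
STEP = 3                      # discretization step in hologram pixels

# maximum shift needed so every object point's fringe stays on the hologram
shift = OBJ_RES * STEP                        # 900 pixels
pfp_w, pfp_h = CGH_W + shift, CGH_H + shift   # 2D PFP: 2820 x 1980 pixels
pfp_1d = (CGH_W + shift, 1)                   # 1D sub-PFP: 2820 x 1 pixels
print(shift, pfp_w, pfp_h)
```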
Fig. 6. 3D test object images: (a)-(c) Intensity images and (d)-(f) Depth images.
In the experiment, a PC employing an Intel i7-3770 CPU operating at 3.4 GHz with 8 GB RAM and Linux CentOS, together with an Nvidia GTX Titan GPU, is used for the hardware implementation of the proposed method.
Figure 7 shows the SR maps extracted by horizontal scanning of the test objects of Fig. 6 using the RLE algorithm. In the extracted redundancy maps, gray means that there are no adjacent object points having the same intensity and depth values, while green and blue mean that two and three adjacent object points have the same intensity and depth values, respectively. In addition, white marks the object points under the 'don't care' condition.
Fig. 7. SR maps extracted by horizontal scanning of the test 3D objects with the RLE method.
Table 1 shows the distribution of the spatially redundant data of the test 3D objects along the horizontal direction. As shown in Table 1, the spatially redundant data of the 'Dice' object are estimated to be 15,317, 11,295 and 8,507 for the conventional method and the proposed method (2- and 3-point cases), respectively. Thus, by using the spatial redundancy, the numbers of object points to be calculated in the proposed method have been reduced by 32.8% and 44.5% for the 2-point and 3-point cases, respectively, compared to that of the conventional 1D NLUT method.
Table 1. Comparison results of the number of object points to be calculated.
Likewise, for the 'Car' object, the numbers of object points to be calculated have been reduced by 17.4% (2-point case) and 23.7% (3-point case) compared to that of the conventional method. Furthermore, for the 'House & Car' object, the numbers of object points to be calculated have also been reduced by 25.9% (2-point case) and 34.0% (3-point case) compared to that of the conventional method.
Figure 8 shows three types of object images reconstructed from the CGH patterns generated with the conventional and proposed methods. In the case of 'Dice', Fig. 8(a) shows the focused images of the front 'Die' reconstructed at a distance of 684 mm and the rear 'Die' reconstructed at a distance of 720 mm from the CGH patterns generated by the conventional and proposed methods, respectively.
As seen in Fig. 8(a), the object images have been successfully reconstructed in all cases. That is, at the reconstruction distance of 684 mm, the images of the front 'Die' are clearly focused while those of the rear 'Die' are blurred. On the other hand, at the reconstruction distance of 720 mm, the images of the rear 'Die' are focused, while those of the front 'Die' are out of focus.
Similarly, in the case of 'Car', Fig. 8(b) shows the focused images of the rear part of the 'Car' reconstructed at a distance of 684 mm and the front part of the 'Car' reconstructed at a distance of 721 mm from the CGH patterns generated by the conventional and proposed methods, respectively. For the 'House & Car' object, Fig. 8(c) shows the focused images of the rear 'House' reconstructed at a distance of 666 mm and the front 'Car' reconstructed at a distance of 683 mm from the CGH patterns generated with the conventional and proposed methods, respectively. As seen in Fig. 8, the object images have been successfully reconstructed in all cases.
Fig. 8. 3D object images reconstructed from the CGH patterns generated with the conventional and proposed methods.
Table 2 shows the CGH calculation times for the conventional 2D NLUT, 2D SR-NLUT, 1D NLUT and proposed methods. Table 2 also shows the detailed calculation times for each step of the CGH generation process: hologram generation, pre-processing, and loading of the PFPs. As seen in Table 2, in the case of 'Dice', 422.76 ms, 307.10 ms and 269.82 ms are needed for hologram generation for N = 1, 2 and 3 in the 2D NLUT and 2D SR-NLUT methods, respectively. That is, the CGH calculation time decreases as the spatially redundant data are removed from the 3D object. However, the total calculation time is given by 794.75 ms, 771.31 ms and 796.94 ms for the 2D NLUT and 2D SR-NLUT methods, respectively. That is, the total calculation time decreases for N = 2, while it increases for N = 3. Comparing the cases of N = 2 and N = 3 in the 2D SR-NLUT method, the loading time of the PFPs increases by 62.90 ms even though the hologram generation time decreases by 37.28 ms. In other words, the loading time for the N-point PFPs grows faster than the hologram calculation time saved by removing the redundant object points. The loading time of the N-point 2D PFPs increases with N because the number of N-point 2D PFPs to be loaded on the GPU increases with N. In the conventional 2D NLUT and 2D SR-NLUT methods, the loading time of the PFPs on the GPU constitutes a large portion of the total calculation time; the loading time of the N-point 2D PFPs occupies 57.5% of the total calculation time. On the other hand, the pre-processing time required for extraction of the spatial redundancy of the 3D object accounts for an extremely small portion of the total calculation time, occupying only 0.05%.
Table 2. Calculation time for each of the conventional 2D NLUT, 2D SR-NLUT, 1D NLUT and proposed methods.
The numbers of PFPs in the 1D methods are the same as those of the 2D methods, but the memory size of each N-point PFP in the 1D methods is extremely small compared to those of the 2D methods. Therefore, the loading time of the N-point 1D sub-PFPs occupies only 2.2% of the total calculation time. The total calculation times become 434.17 ms, 333.59 ms and 303.83 ms for the 1D NLUT and proposed methods (2- and 3-point cases), respectively. Comparing the cases of N = 2 and N = 3 in the proposed method, the loading time of the PFPs increases by only 1.11 ms while the CGH generation time decreases by 30.88 ms. That is, the total calculation time has been decreased just by removing the spatially redundant data of the 3D object. The pre-processing time occupies only 0.11% of the total calculation time in the 1D NLUT methods.
In the same way, in the case of 'Car', the loading time of the PFPs decreases from 454.06 ms to 7.42 ms by applying the N-point 1D sub-PFPs. The total calculation time is given by 792.17 ms, 881.60 ms and 987.33 ms for the 2D NLUT and 2D SR-NLUT (2- and 3-point cases) methods, respectively. That is, the total calculation time increases even though the spatially redundant data of the 3D object are removed. However, the total calculation time is given by 437.06 ms, 397.40 ms and 391.30 ms for the 1D NLUT and proposed (2- and 3-point cases) methods, respectively. That is, the total calculation time has been decreased by removing the spatially redundant data of the 3D object.
Likewise, in the case of the 'House and Car' object, the loading time of the PFPs decreases from 125.26 ms to 4.20 ms by applying the N-point 1D sub-PFPs. The loading time in this case is very small compared to the other cases, since only 49, 77 and 101 PFPs are loaded for N = 1, 2 and 3, respectively, as the 'House and Car' object has only 49 depth layers in total. The total calculation time is given by 518.01 ms, 488.44 ms and 495.48 ms for the 2D NLUT and 2D SR-NLUT (2- and 3-point cases) methods, respectively, whereas it is 450.06 ms, 361.93 ms and 336.55 ms for the 1D NLUT and proposed (2- and 3-point cases) methods, respectively.
Table 3 shows the average calculation time for one object point in the conventional 2D NLUT, 2D SR-NLUT, 1D NLUT and proposed methods. As seen in Table 3, in the case of the 'Dice' object, the average calculation time for one object point is given by 51.89 μs, 50.36 μs and 52.03 μs for the conventional 2D NLUT and 2D SR-NLUT (2- and 3-point cases) methods, respectively. That is, the average calculation time for one object point for N = 2 is reduced by 2.9% compared to that of the 2D NLUT method. However, the average calculation time for one object point for N = 3 is increased by 0.3% and 3.2% compared to those of the 2D NLUT and 2D SR-NLUT (N = 2) methods, respectively. That is, as mentioned above, the loading time for the N-point PFPs grows larger than the CGH calculation time saved on the object points removed as redundant data of the 3D object.
Table 3. Average calculation time and required memory space for one object point in each case of the conventional 2D NLUT, 2D SR-NLUT, 1D NLUT and proposed methods.
Compared with the 2D NLUT and 2D SR-NLUT methods, the conventional 1D NLUT and 1D SR-NLUT methods need smaller loading times. As shown in Table 3, in the case of 'Dice', the average calculation time for one object point is given by 28.35 μs, 21.78 μs and 19.84 μs for the conventional 1D NLUT and 1D SR-NLUT methods, respectively. That is, the average calculation time for one object point is reduced by 45.4%, 58.0% and 61.8% compared to those of the 2D NLUT method, respectively.
In the same way, in the case of the 'Car' object, the average calculation time for one object point is given by 50.69 μs, 56.42 μs and 63.18 μs for the conventional 2D NLUT and 2D SR-NLUT methods, respectively. That is, the average calculation time for one object point increases in spite of removing the spatially redundant data of the 3D object. However, the average calculation time for one object point is given by 27.97 μs, 25.43 μs and 25.04 μs for the conventional 1D NLUT and proposed methods, respectively. That is, by replacing the N-point 2D PFPs with the N-point 1D sub-PFPs, the average calculation time for one object point is reduced by 44.8%, 54.9% and 60.4% compared to each of the 2D methods, respectively.
Likewise, in the case of the 'House and Car' object, the average calculation time for one object point is given by 29.26 μs, 27.59 μs and 27.99 μs for the conventional 2D NLUT and 2D SR-NLUT methods, respectively, while it is 25.42 μs, 20.44 μs and 19.01 μs for the conventional 1D NLUT and proposed methods, respectively. That is, by replacing the N-point 2D PFPs with the N-point 1D sub-PFPs, the average calculation time for one object point is reduced by 13.1%, 25.9% and 32.1% compared to each of the 2D methods, respectively.
Thus, averaged over all three cases, the calculation time for one object point is given by 43.95 μs, 44.79 μs, 47.73 μs, 27.24 μs, 22.55 μs and 21.29 μs for the conventional 2D NLUT, 2D SR-NLUT (N = 2, 3), 1D NLUT and proposed (N = 2, 3) methods, respectively. That is, by replacing the N-point 2D PFPs with the N-point 1D sub-PFPs, the average calculation time for one object point is reduced by 38.0%, 49.6% and 55.4% compared to each of the 2D methods, respectively.
In addition, the memory capacities required in the conventional 2D NLUT, 2D SR-NLUT and 1D NLUT methods as well as the proposed method are also calculated. As seen in Table 3, the total memory sizes required for storing all N-point PFPs of the 3D image volume of 300×300×256 pixels in the conventional 2D NLUT and 2D SR-NLUT methods are calculated to be 1.33 GB, 2.66 GB and 3.99 GB, respectively, in which the data for one PFP amount to 5.32 MB (= 2,820×1,980×8 bit). For the 1D NLUT and proposed methods, only two sets of N-point 1D sub-PFPs are needed; therefore, the total memory sizes required for storing all N-point sub-PFPs are calculated to be 1.38 MB, 2.75 MB and 4.13 MB, respectively. In other words, the proposed method uses only 0.2% and 0.3% of the memory volume of the conventional 2D NLUT method for the 2- and 3-point SR maps, respectively. Therefore, the loading time of the N-point PFPs can be dramatically reduced compared to those of the conventional 2D NLUT and 2D SR-NLUT methods.
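The memory figures in Table 3 can be reproduced (to rounding) from the PFP resolutions, assuming 8-bit samples, 256 depth planes, and one additional PFP set per increment of N:

```python
DEPTH_PLANES = 256
PFP_2D = 2820 * 1980      # samples per 2D PFP, 8 bits (1 byte) each
PFP_1D = 2820             # samples per 1D sub-PFP
MB, GB = 1024**2, 1024**3

one_2d_pfp_mb = PFP_2D / MB                                      # ~5.32 MB per 2D PFP
mem_2d = [n * DEPTH_PLANES * PFP_2D / GB for n in (1, 2, 3)]     # ~1.33 / 2.66 / 3.99 GB
# 1D case: two sub-PFP sets (cosine and sine) per depth plane
mem_1d = [n * DEPTH_PLANES * 2 * PFP_1D / MB for n in (1, 2, 3)] # ~1.38 / 2.75 / 4.13 MB
```

The roughly 1000:1 ratio between the 2D and 1D totals is what makes the per-frame GPU loading time negligible in the proposed method.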
4. Conclusion
In this paper, a novel approach to massively reduce the memory capacity as well as dramatically reduce the calculation time of the conventional NLUT method has been proposed, based on the combined use of the 1D sub-PFPs and the spatial redundancy of the 3D object for its GPU-based implementation. Experiments with three types of test 3D objects confirm that the average calculation times for one object point of the proposed method have been reduced by 49.6% and 55.4% compared to those of the 2D SR-NLUT methods for the 2- and 3-point SR maps, respectively. In addition, the proposed method has been found to use only 0.2% and 0.3% of the memory volume of the conventional 2D NLUT method for the 2-point and 3-point SR maps, respectively.
References
1. C. J. Kuo and M. H. Tsai, Three-Dimensional Holographic Imaging (John Wiley & Sons, 2002).
2. T.-C. Poon, Digital Holography and Three-Dimensional Display (Springer Verlag, 2007).
3. S.-C. Kim and E.-S. Kim, "Effective generation of digital holograms of 3D objects using a novel look-up table method," Appl. Opt. 47, D55-D62 (2008).
4. S.-C. Kim, J.-M. Kim, and E.-S. Kim, "Effective memory reduction of the novel look-up table with one-dimensional sub-principle fringe patterns in computer-generated holograms," Opt. Express 20, 12021-12034 (2012).
5. S.-C. Kim, J.-H. Yoon, and E.-S. Kim, "Fast generation of 3D video holograms by combined use of data compression and look-up table techniques," Appl. Opt. 47, 5986-5995 (2009).
6. S.-C. Kim and E.-S. Kim, "Fast computation of hologram patterns of a 3D object using run-length encoding and novel look-up table methods," Appl. Opt. 48, 1030-1041 (2009).
7. S.-C. Kim, K.-D. Na, and E.-S. Kim, "Accelerated computation of computer-generated holograms of a 3D object with N×N-point principle fringe patterns in the novel look-up table method," Opt. Laser Eng. 51, 185-196 (2013).
8. S.-C. Kim, W.-Y. Choe, and E.-S. Kim, "Accelerated computation of hologram patterns by use of interline redundancy of 3-D object images," Opt. Eng. 50, 091305 (2011).
9. D.-W. Kwon, S.-C. Kim, and E.-S. Kim, "Hardware implementation of NLUT method using field-programmable-gate-array technology," Proc. SPIE 7957, 79571C (2011).
10. M.-W. Kwon, S.-C. Kim, and E.-S. Kim, "GPU-based implementation of one-dimensional novel-look-up-table for real-time computation of Fresnel hologram patterns of three-dimensional objects," Opt. Eng. 53, 035103 (2014).
11. S.-C. Kim, J.-H. Kim, and E.-S. Kim, "Effective reduction of the novel look-up table memory size based on a relationship between the pixel pitch and reconstruction distance of a computer-generated hologram," Appl. Opt. 50, 3375-3382 (2011).
12. D.-W. Kwon, S.-C. Kim, and E.-S. Kim, "Memory size reduction of the novel look-up-table method using symmetry of Fresnel zone plate," Proc. SPIE 7957, 79571B (2011).
13. T. Shimobaba, H. Nakayama, N. Masuda, and T. Ito, "Rapid calculation algorithm of Fresnel computer-generated-hologram using look-up table and wavefront-recording plane methods for three-dimensional display," Opt. Express 18, 19504-19509 (2010).
14. Z. Yang, Q. Fan, Y. Zhang, J. Liu, and J. Zhou, "A new method for producing computer generated holograms," J. Opt. 14, 095702 (2012).
15. N. Masuda, T. Ito, T. Tanaka, A. Shiraki, and T. Sugie, "Computer generated holography using a graphics processing unit," Opt. Express 14, 587-592 (2006).
16. P. Hariharan, Optical Holography: Principles, Techniques, and Applications (Cambridge Studies in Modern Optics, 1996).
17. R. C. Dorf, Electrical Engineering Handbook, 2nd ed. (CRC Press, 1997).
18. J. Higgins, Introduction to SNG and ENG Microwave (Butterworth-Heinemann, 2004).
19. K. N. Ngan, C. W. Yap, and K. T. Tan, Video Coding for Wireless Communication Systems (Marcel Dekker, 2001).