Quaternions have been widely employed in color image processing, but when the existing pure quaternion representation for color images is used in perceptual hashing, it degrades the robustness performance because it is sensitive to image manipulations. To improve the robustness of color image perceptual hashing, in this paper a full quaternion representation for color images is proposed by introducing local image luminance variances. Based on this new representation, a novel Full Quaternion Discrete Cosine Transform (FQDCT)-based hashing is proposed, in which the Quaternion Discrete Cosine Transform (QDCT) is applied to pseudorandomly selected regions of the novel full quaternion image to construct two feature matrices. A new binary hash value is generated from these two matrices. Our experimental results validate the robustness improvement brought by the proposed full quaternion representation and demonstrate that the proposed FQDCT-based hashing achieves better performance than other notable quaternion-based hashing schemes in terms of robustness and discriminability.
1. Introduction
Illegal copying and distribution of digital images over the Internet have become increasingly easy as the Internet and multimedia technologies develop, and therefore the demand for image authentication is growing. Perceptual image hashing is a solution to these issues, in which the content of digital images can be identified by comparing their hash values [1]-[4]. Unlike cryptographic hashing, where one bit of change in the input leads to a totally different hash value [5], perceptual image hashing is robust against content-preserving manipulations. Additionally, perceptual image hashing generates different hash values for distinct images, which provides discriminability.
Recently, some perceptual hashing schemes for color images have been developed. In [6], the histogram of the color vector angles in the inscribed circle is calculated and compressed by a one-dimensional DCT to get a hash value composed of the first n AC coefficients. This method is resilient to rotation, but it disregards information outside the inscribed circle. A hashing approach that concatenates the invariant moments of each component of the HSI and YCbCr color spaces into the hash value is given in [7]. In [8], local color features are extracted by calculating the block means and variances from each component of the HSI and YCbCr spaces, but the hash value obtained by concatenating the Euclidean distances between the block features and a reference feature cannot resist rotation operations well. In the perceptual hashing schemes proposed in [9] and [10], the input color image is represented as a pure quaternion matrix by taking the RGB components as the three imaginary parts of the pure quaternions. In [9], the non-overlapping blocks of the pure quaternion matrix are transformed by the Quaternion Fourier Transform (QFT), and a binary hash value is generated by comparing the block mean frequency energies with the global mean frequency energy. In [10], intermediate features are extracted by applying Quaternion Singular Value Decomposition (QSVD) to pseudorandomly selected regions of the pure quaternion matrix, and then QSVD is applied to these features to construct the final hash value. However, the robustness performance of these two quaternion-based hashing schemes needs to be enhanced.
The quaternion, introduced by Hamilton [11], is a generalization of the complex number which consists of one real part and three imaginary parts. The classical pure quaternion representation for color images presented by Sangwine et al. [12] offers a sound way to deal with three color channels simultaneously and is employed in color image analysis [13]-[15], registration [16], watermarking [17] and perceptual hashing [9], [10]. However, since this pure quaternion representation is sensitive to image manipulations, it degrades the robustness when used in color image perceptual hashing. The motivation of our research is to improve the robustness, and our contributions are twofold. First, a new full quaternion representation for color images is proposed, in which each color pixel is represented as a full quaternion by setting the local luminance variance as the real part and the RGB values as the three imaginary parts. Second, based on this new representation and the Quaternion Discrete Cosine Transform (QDCT), a Full Quaternion Discrete Cosine Transform (FQDCT)-based hashing is proposed by applying the QDCT to pseudorandomly selected regions of the novel full quaternion image. Two quaternion feature matrices are constructed by exploiting the QDCT coefficients, and a binary hash value is computed from these two feature matrices. In our experiments, the effect of the proposed full quaternion representation is verified and a performance evaluation of the proposed FQDCT-based hashing is carried out.
The rest of this paper is organized as follows. Section 2 describes the proposed full quaternion representation for color images. Section 3 details the proposed FQDCT-based hashing. Experimental results and analysis are presented in Section 4, and conclusions are given in Section 5.
2. The Proposed Full Quaternion Representation for Color Images
 2.1 Basic Knowledge of Quaternions
A quaternion q is made up of four components in a hypercomplex form as:

q = q_r + q_i·i + q_j·j + q_k·k (1)

where q_r, q_i, q_j, q_k ∈ R and the imaginary units i, j, k satisfy the following relations:

i^2 = j^2 = k^2 = ijk = −1, ij = −ji = k, jk = −kj = i, ki = −ik = j (2)
Eq. (2) reveals that the multiplication of quaternions is not commutative. A quaternion can also be considered as the sum of a scalar part and a vector part:

q = s(q) + v(q) (3)

where s(q) = q_r and v(q) = q_i·i + q_j·j + q_k·k. q is defined as a pure quaternion if s(q) = 0; otherwise q is called a full quaternion.
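The relations above can be checked directly in code. The following toy Python sketch (purely illustrative and not part of the proposed scheme; the tuple layout (q_r, q_i, q_j, q_k) is our own convention) implements the Hamilton product and shows that quaternion multiplication is not commutative:

```python
def qmul(a, b):
    """Hamilton product of quaternions a = (ar, ai, aj, ak) and b likewise.
    Encodes i^2 = j^2 = k^2 = ijk = -1."""
    ar, ai, aj, ak = a
    br, bi, bj, bk = b
    return (ar*br - ai*bi - aj*bj - ak*bk,
            ar*bi + ai*br + aj*bk - ak*bj,
            ar*bj - ai*bk + aj*br + ak*bi,
            ar*bk + ai*bj - aj*bi + ak*br)

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
print(qmul(i, j))  # (0, 0, 0, 1), i.e. ij = k
print(qmul(j, i))  # (0, 0, 0, -1), i.e. ji = -k, so multiplication is not commutative
```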
 2.2 The Proposed Full Quaternion Representation for Color Images
The classical pure quaternion representation for color images [12] represents each color pixel as a pure quaternion and can be described as:

f(x,y) = f_R(x,y)·i + f_G(x,y)·j + f_B(x,y)·k (4)

where (x,y) is the pixel coordinate and f_R(x,y), f_G(x,y), f_B(x,y) are the RGB values of the pixel, respectively.
Based on this representation, a color image can be intuitively represented as a two-dimensional pure quaternion matrix, which preserves the interrelations among the color channels [9], [10], [13]-[17]. However, this pure quaternion representation is sensitive to changes of the RGB components of the image even when the visual content of the image is preserved. When it is used in color image perceptual hashing, the robustness performance is weakened once features are extracted from this pure quaternion matrix. Moreover, setting the real parts of the quaternions to zero does not make full use of the nature of quaternions.
Local statistical characteristic values have the ability to describe image structures and are not sensitive to content-preserving manipulations; therefore, in our proposed full quaternion representation for color images, local statistical characteristic values are introduced as the real parts to enhance the robustness. In this paper, local luminance variances are adopted for their outstanding robustness and computational efficiency. Suppose that the size of the input image has been normalized to S×S. By dividing the luminance layer into non-overlapping L×L blocks, where S is an integral multiple of L, the local luminance variance V(blk) of each block blk can be calculated as follows:

V(blk) = (1/L^2)·Σ_{l=1}^{L^2} (y_l − ȳ)^2 (5)

where y_l (1 ≤ l ≤ L^2) is the luminance value of each pixel in the block and ȳ is given by:

ȳ = (1/L^2)·Σ_{l=1}^{L^2} y_l (6)

For a pixel (x,y) in the block blk, the real part of its full quaternion representation can be defined as:

f_V(x,y) = V(blk) (7)

Pixels in the same block have the same real parts. Then, each color pixel can be represented as a full quaternion as:

f(x,y) = f_V(x,y) + f_R(x,y)·i + f_G(x,y)·j + f_B(x,y)·k (8)

Thus the full quaternion representation for color images is obtained, and each color image can be represented as a full quaternion matrix using this novel representation method.
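The construction above can be sketched in a few lines of numpy. This is an illustrative sketch only: the function name, the (real, i, j, k) channel layout, and the use of a plain RGB average as the luminance layer are our assumptions, not details fixed by the paper:

```python
import numpy as np

def full_quaternion_image(rgb, L=32):
    """Represent an S x S x 3 RGB image as an S x S x 4 array whose last axis
    holds (real, i, j, k) = (local luminance variance, R, G, B)."""
    S = rgb.shape[0]
    assert S % L == 0, "S must be an integral multiple of L"
    # Luminance layer (a plain channel average here, for illustration).
    lum = rgb.astype(float).mean(axis=2)
    # Per-block variance V(blk) = (1/L^2) * sum (y_l - mean)^2.
    blocks = lum.reshape(S // L, L, S // L, L)
    var = blocks.var(axis=(1, 3))                # one variance per L x L block
    real = np.kron(var, np.ones((L, L)))         # pixels in a block share f_V(x,y)
    return np.concatenate([real[..., None], rgb.astype(float)], axis=2)

img = np.random.randint(0, 256, (64, 64, 3))
fq = full_quaternion_image(img, L=32)
print(fq.shape)  # (64, 64, 4)
```

Note that the real parts are piecewise constant over the L×L blocks, matching Eq. (7).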
3. The Proposed FQDCT-based Hashing
In this section, a novel perceptual image hashing based on the proposed full quaternion representation and QDCT, termed the FQDCT-based hashing, is proposed, and its block diagram is shown in Fig. 1.
Fig. 1. Block diagram of the FQDCT-based hashing
In the preprocessing, the input color image is first resized to S×S to ensure that the generated hash value is of fixed length and is robust against image scaling operations. Then the resized image is blurred by a Gaussian low-pass filter to remove insignificant noise without endangering the image content. After preprocessing, the color image is represented as a full quaternion matrix by using the full quaternion representation proposed in Section 2. Two new quaternion feature matrices are constructed by using the QDCT, and a binary hash value is calculated from these two feature matrices. The feature matrices construction and the hash value computation modules are detailed below.
 3.1 Feature Matrices Construction by Using QDCT
In this module, a secret key K is used to pseudorandomly select N blocks of size M×M from the full quaternion matrix for security considerations [18]. Then the QDCT [19] is applied to each selected block to obtain the features of the image. Similar to the traditional DCT, the QDCT has a strong energy compaction property, and the low-frequency QDCT coefficients contain most of the signal information. Due to the non-commutative multiplication property of quaternions, the QDCT has both left-hand and right-hand forms, but the difference between the two forms is very small. Therefore, without loss of generality, the left-hand QDCT [19] is adopted in this paper.
Let B_n (1 ≤ n ≤ N) be the n-th selected block. Its left-hand QDCT matrix Q_n can be calculated by:

Q_n(p,s) = α(p)·α(s)·Σ_{x=0}^{M−1} Σ_{y=0}^{M−1} u_q·B_n(x,y)·N(p,s,x,y) (9)

where 0 ≤ p ≤ M − 1 and 0 ≤ s ≤ M − 1. u_q can be any pure quaternion that satisfies u_q^2 = −1 (i.e., a unit pure quaternion), and in this paper it is given by:

u_q = (i + j + k)/√3 (10)

α(p), α(s) and N(p,s,x,y) are defined as:

α(p) = √(1/M) for p = 0 and √(2/M) otherwise (likewise for α(s)),
N(p,s,x,y) = cos[π(2x+1)p/(2M)]·cos[π(2y+1)s/(2M)] (11)
The coefficients in the first row and the first column of each Q_n are used to construct two quaternion feature matrices, because these coefficients do not change much under content-preserving manipulations. Let Q_n(r,c) denote the element in the (r+1)-th row and (c+1)-th column of Q_n, where 0 ≤ r ≤ M − 1 and 0 ≤ c ≤ M − 1. For the first row of Q_n, t coefficients (t ≤ M − 1) from the second to the (t+1)-th element are used to form a vector d_n as:

d_n = [Q_n(0,1), Q_n(0,2), ..., Q_n(0,t)]^T (12)

For the N blocks, a quaternion feature matrix D is obtained by taking d_n as the n-th column as follows:

D = [d_1, d_2, ..., d_N] (13)

Similarly, the second to the (t+1)-th elements of the first column of Q_n are exploited to generate a vector e_n as:

e_n = [Q_n(1,0), Q_n(2,0), ..., Q_n(t,0)]^T (14)

Accordingly, another quaternion feature matrix E is obtained as below:

E = [e_1, e_2, ..., e_N] (15)
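Because the kernel N(p,s,x,y) is real-valued, the left-hand QDCT of a block can be computed by applying an ordinary orthonormal 2-D DCT to each of the four quaternion components and then left-multiplying every coefficient by u_q. The sketch below is our own illustration of that decomposition (pure numpy; fixing u_q = (i + j + k)/√3 and all function names are our assumptions), and it also extracts the d_n and e_n vectors from one block:

```python
import numpy as np

def dct_matrix(M):
    """Orthonormal DCT-II matrix: C[p, x] = alpha(p) * cos(pi*(2x+1)*p / (2M))."""
    p = np.arange(M)[:, None]
    x = np.arange(M)[None, :]
    C = np.cos(np.pi * (2 * x + 1) * p / (2 * M))
    C[0, :] *= np.sqrt(1.0 / M)
    C[1:, :] *= np.sqrt(2.0 / M)
    return C

def left_qdct(block):
    """block: M x M x 4 quaternion array (r, i, j, k) -> M x M x 4 QDCT coefficients."""
    M = block.shape[0]
    C = dct_matrix(M)
    # Component-wise 2-D DCT: D[p,s,c] = sum_{x,y} C[p,x] * block[x,y,c] * C[s,y]
    D = np.einsum('px,xyq,sy->psq', C, block, C)
    # Left-multiply each coefficient by u_q = (0, a, a, a) with a = 1/sqrt(3),
    # using the Hamilton product expanded component by component.
    a = 1.0 / np.sqrt(3.0)
    r, i, j, k = D[..., 0], D[..., 1], D[..., 2], D[..., 3]
    return np.stack([-a * (i + j + k),
                     a * (r - j + k),
                     a * (r + i - k),
                     a * (r - i + j)], axis=-1)

def block_features(block, t):
    """Return d_n (first row) and e_n (first column), elements 2..(t+1)."""
    Q = left_qdct(block)
    return Q[0, 1:t + 1], Q[1:t + 1, 0]
```

Since the component-wise DCT is orthonormal and u_q has unit modulus, the transform preserves the total energy of the block, consistent with the energy compaction discussion above.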
 3.2 Hash Value Computation
To enhance the robustness, data normalization is applied to the two quaternion feature matrices. Suppose z = [z_1, z_2, ..., z_N] is a row of matrix D or E. z is normalized to z' by using:

z'_n = (z_n − μ)/σ (16)

where z_n and z'_n are the n-th elements of z and z', while μ and σ are the mean and standard deviation of z, respectively.
Let D^1 and E^1 be the normalized versions of D and E, and let d_n^1 and e_n^1 be the n-th columns of D^1 and E^1, respectively. Then the L_2 norm distance dis_n between d_n^1 and e_n^1 is calculated for initial compression, and thus the sequence dis = [dis_1, dis_2, ..., dis_N] is available. A threshold τ is chosen to quantize all the L_2 norm distances into a hash value h = [h_1, h_2, ..., h_N] as follows:

h_n = 1 if dis_n ≥ τ; h_n = 0 otherwise (17)
where dis_n and h_n are the n-th elements of dis and h, respectively. When the numbers of zeros and ones in h are approximately the same, the information content of the extracted N-tuple is maximized [20]; therefore, the median of dis is chosen as the threshold τ. It should be noted that the length of the hash value is equal to the number of pseudorandomly selected blocks.
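The hash computation steps above can be sketched as follows, assuming the quaternion feature matrices have been unfolded into real-valued matrices of equal shape (the function names and the small guard against zero standard deviation are ours):

```python
import numpy as np

def binary_hash(D, E):
    """D, E: t x N feature matrices (real-valued here for illustration;
    quaternion entries can be unfolded into four real rows first).
    Rows are z-score normalised, columns are compared by the L2 norm,
    and the median of the distances serves as the threshold tau."""
    def normalise(Z):
        mu = Z.mean(axis=1, keepdims=True)
        sigma = Z.std(axis=1, keepdims=True)
        return (Z - mu) / (sigma + 1e-12)      # guard against sigma = 0
    D1, E1 = normalise(D), normalise(E)
    dis = np.linalg.norm(D1 - E1, axis=0)      # dis_n for each column n
    tau = np.median(dis)
    return (dis >= tau).astype(int)            # h_n = 1 if dis_n >= tau

rng = np.random.default_rng(0)
h = binary_hash(rng.normal(size=(32, 150)), rng.normal(size=(32, 150)))
print(h.shape, h.sum())  # a 150-bit hash with roughly half the bits set
```

Using the median as τ makes the numbers of zeros and ones nearly equal by construction.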
4. Experimental Results and Analysis
 4.1 Similarity Metric and Image Database
Several methods for similarity measurement have been studied, including enhanced perceptual distance functions [21], the robust structured subspace learning (RSSL) algorithm [22], neighborhood discriminant hashing (NDH) [23] and the normalized Hamming distance (NHD) [24]. In this paper, the NHD is adopted to measure the similarity between two images for its simplicity and low complexity. The NHD between two hash values h and h' is defined as:

NHD(h, h') = (1/N)·Σ_{n=1}^{N} |h_n − h'_n| (18)

where N denotes the length of the hash value, and h_n and h'_n are the n-th elements of h and h', respectively. If the NHD between the hash values of two images is less than a given threshold η, the two images are judged as similar, and otherwise as distinct.
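A direct implementation of the NHD takes only a few lines (a small Python sketch; the function name is ours):

```python
import numpy as np

def nhd(h1, h2):
    """Normalized Hamming distance between two equal-length binary hashes."""
    h1, h2 = np.asarray(h1), np.asarray(h2)
    return float(np.mean(h1 != h2))

print(nhd([1, 0, 1, 1], [1, 1, 1, 0]))  # 0.5 (two of four bits differ)
```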
Images in the Uncompressed Color Image Database (UCID) [25] are used to evaluate the performance of the proposed hashing. These original images are of 512×384 or 384×512 pixels, and some examples are displayed in Fig. 2. To test the robustness performance of the proposed hashing, for each original image, 100 similar versions are generated by using the 10 kinds of content-preserving manipulations listed in Table 1, referring to the StirMark Benchmark [26]. Illustrations of these manipulations are shown in Fig. 3.
Fig. 2. Examples of test images
Table 1. Content-preserving manipulations and parameters
Fig. 3. Illustrations of the content-preserving manipulations in Table 1. (a) Original, (b) Affine, (c) Cropping, (d) JPEG compression, (e) Median filter, (f) Noise, (g) Scaling, (h) Random distortion, (i) Rotation, (j) Rotation-cropping, (k) Rotation-scaling
 4.2 Setting of Parameters
As mentioned in Section 3, the input color image is resized to 512×512 and blurred by a 3×3 Gaussian low-pass filter. Since the size of each block blk in Section 2 relates to the real parts f_V(x,y) and affects the performance of the color image hashing, six sizes of blk are tested in our experiments: 2×2, 4×4, 8×8, 16×16, 32×32 and 64×64. The length of the hash value is equal to the number of pseudorandomly selected blocks N; if this number is too small, it will easily cause collisions, and if it is too large, it will decrease the computation and matching efficiency. With this consideration, 150 overlapping blocks are pseudorandomly selected from the full quaternion matrix by using a secret key, and the size of each block B_n is set to 64×64 to adequately cover the whole matrix. For each selected block B_n, the 2nd to 33rd elements in the first row and the first column of its QDCT matrix Q_n are used to construct the two feature matrices D and E, respectively. Receiver operating characteristic (ROC) curves [27] are used to compare the performance of the FQDCT-based hashing as the size of blk varies.
ROC curves can be obtained by setting various thresholds η and computing the true positive rate (TPR) and false positive rate (FPR) for each threshold. A higher TPR and a lower FPR indicate better robustness and discriminability, respectively, and they can be calculated as:

TPR = n_1/N_1, FPR = n_2/N_2 (19)

where n_1 is the number of pairs of visually similar images correctly judged as similar, n_2 is the number of pairs of distinct images falsely judged as similar, and N_1 and N_2 denote the total numbers of pairs of visually similar and distinct images, respectively.
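For a single threshold, TPR and FPR can be computed as below (an illustrative sketch with made-up distance values; the similarity rule d < η follows Section 4.1):

```python
def rates(intra, inter, eta):
    """TPR = n1/N1 and FPR = n2/N2 for one decision threshold eta on NHDs.
    intra: distances between similar pairs; inter: between distinct pairs."""
    n1 = sum(d < eta for d in intra)   # similar pairs correctly judged similar
    n2 = sum(d < eta for d in inter)   # distinct pairs falsely judged similar
    return n1 / len(intra), n2 / len(inter)

tpr, fpr = rates([0.05, 0.10, 0.30], [0.45, 0.50, 0.20], eta=0.25)
print(tpr, fpr)  # TPR = 2/3, FPR = 1/3 for these toy distances
```

Sweeping eta over [0, 1] and plotting the resulting (FPR, TPR) pairs yields the ROC curve.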
The experiment is carried out on an image set of 10100 images, consisting of the first 100 original images in the UCID and their manipulated versions according to Table 1. The ROC curves corresponding to the six sizes of blk are shown in Fig. 4, from which we can see that the proposed FQDCT-based hashing achieves the best performance when the size of blk is set to 32×32. For performance and computational efficiency considerations, the size of blk in the FQDCT-based hashing is set to 32×32.
Fig. 4. Performance comparison for the FQDCT-based hashing when the size of blk varies
 4.3 Test on the Proposed Full Quaternion Representation for Color Images
To verify the robustness improvement brought by our proposed full quaternion representation, a new FQFT-based hashing is developed by combining this new representation with the QFT-based hashing [9]. In the FQFT-based hashing, after being resized to 128×128 as in [9], the input color image is represented as a full quaternion matrix using our proposed full quaternion representation, and the size of blk for calculating the real parts is set to 4×4.
Four well-known images, Sailboat, Baboon, Lena and Peppers, are used to illustrate the robustness performance of the QFT-based and the FQFT-based hashing. For each of the four images, 100 similar versions are generated by using the manipulations listed in Table 1, corresponding to the 100 image indices given in Table 1. The 100 intra-distances, i.e. the 100 NHDs between the original image and its 100 similar versions, are calculated for each of the four images by using the QFT-based hashing and the FQFT-based hashing, respectively. Fig. 5 shows the comparison results, where the x-axes denote the image indices of the similar versions. For the Sailboat image, 94% of the intra-distances calculated by using the FQFT-based hashing are smaller than those calculated by using the QFT-based hashing, and the corresponding numbers for the Baboon, Lena and Peppers images are 78%, 70% and 87%, respectively. The results show that most of the intra-distances calculated by using the FQFT-based hashing are smaller than those calculated by using the QFT-based hashing, which indicates that the FQFT-based hashing achieves better robustness performance. This robustness improvement is attributed to the proposed full quaternion representation for color images. Since the local luminance variances are approximately invariant if the content of the image is not attacked, taking them as the real parts improves the robustness against content-preserving manipulations when features are extracted from the full quaternion matrix.
Fig. 5. Intra-distances calculated by using the QFT-based and the FQFT-based hashing. (a) Sailboat, (b) Baboon, (c) Lena, (d) Peppers
 4.4 Intra-distances and Inter-distances Analysis
The distributions of the intra-distances and inter-distances are calculated to illustrate the robustness and discriminability performance. The first 100 original images in the UCID and their manipulated versions are used to calculate the distribution of the intra-distances. For each original image and its 100 manipulated versions there are C(101,2) = 5050 intra-distances, so 505000 intra-distances are obtained in total. The first 1000 original images in the UCID are used to calculate the distribution of the inter-distances, and C(1000,2) = 499500 inter-distances are obtained.
Fig. 6 shows the distributions of the intra-distances and inter-distances for the QFT-based hashing, the FQFT-based hashing and the FQDCT-based hashing, respectively. In a dual distribution, the overlap between the two distributions determines the error rate [28]; thus, lower overlap indicates better robustness and discriminability performance. It can be observed that the FQFT-based hashing yields distributions with lower overlap than the QFT-based hashing, which manifests the performance improvement achieved by the proposed full quaternion representation. At the same time, the proposed FQDCT-based hashing yields distributions that are better separated than those of the FQFT-based hashing, which reveals that the feature construction using the QDCT and the hash computation process are efficient in generating robust hash values.
Fig. 6. Distributions of intra-distances and inter-distances for different hashing methods. (a) the QFT-based hashing, (b) the FQFT-based hashing, (c) the FQDCT-based hashing
 4.5 Performance Comparison by Using ROC Curves
To evaluate the performance of the proposed FQDCT-based hashing in resisting different kinds of manipulations, the ROC curves for each kind of manipulation listed in Table 1 are calculated and shown in Fig. 7, where the proposed FQDCT-based hashing is compared with the QFT-based hashing [9], the FQFT-based hashing and the QSVD-based hashing [10]. These experiments are carried out on the image set consisting of the first 100 original images in the UCID and their manipulated versions. It can be observed that for all the manipulations, the proposed FQDCT-based hashing attains a higher TPR for a given FPR, and a lower FPR for a given TPR. Hence, the proposed FQDCT-based hashing outperforms the other hashing methods in terms of robustness and discrimination.
Fig. 7. Performance comparison for different manipulations. (a) Affine, (b) Cropping, (c) JPEG compression, (d) Median filter, (e) Noise, (f) Scaling, (g) Random distortion, (h) Rotation, (i) Rotation-cropping, (j) Rotation-scaling
The area under the curve (AUC) of the ROC curve can be used to measure the performance of the hashing schemes, where a larger AUC indicates better performance. The AUC values of each manipulation for the four hashing schemes are calculated and presented in Table 2. The comparison results show that the FQFT-based hashing achieves better performance than the QFT-based hashing, which further proves the robustness and discriminability improvement brought by the proposed full quaternion representation. Moreover, it can be seen that the FQDCT-based hashing performs very well for all the given manipulations. The proposed FQDCT-based hashing is robust for three main reasons: first, the proposed full quaternion representation of the input color image provides an invariant property when the image undergoes content-preserving manipulations; second, this invariant property is preserved in the QDCT coefficients selected to construct the two feature matrices; and finally, the hash computation compresses the feature matrices efficiently to generate similar hash values for similar images and distinct hash values for distinct images.
Table 2. AUC values of each manipulation for different hashing schemes
5. Conclusions
In this paper, a full quaternion representation for color images is proposed. Rather than representing each color pixel as a pure quaternion, we represent each color pixel as a full quaternion by setting the local luminance variance as the real part and the RGB values as the three imaginary parts, which improves the robustness of color image perceptual hashing schemes. Then a novel FQDCT-based hashing scheme combining the proposed full quaternion representation and the QDCT is proposed, in which two new quaternion feature matrices are constructed by exploiting the QDCT coefficients and the binary hash value is calculated from the two feature matrices. Our experimental results and analysis indicate that the proposed full quaternion representation can improve the robustness of color image perceptual hashing, and that the proposed FQDCT-based hashing has superior performance in terms of robustness and discrimination compared with existing notable quaternion-based color image perceptual hashing schemes.
Acknowledgements
This work was supported by the Shenzhen Engineering Laboratory of Broadband Wireless Network Security, the Science and Technology Development Fund of Macao SAR (FDCT 056/2012/A2) and the UM Multi-year Research Grant MYRG144(Y1-L2)-FST11-ZLM.
BIO
Xiaomei Xing received the B.S. degree from South China University of Technology in 2013. She is currently working toward the M.S. degree at Peking University. Her research interests include image processing and perceptual image hashing.
Yuesheng Zhu received his B.Eng. degree in Radio Engineering, M.Eng. degree in Circuits and Systems and Ph.D. degree in Electronics Engineering in 1982, 1989 and 1996, respectively. He is currently a professor at the Lab of Communication and Information Security, Shenzhen Graduate School, Peking University. He is a senior member of IEEE, a fellow of the China Institute of Electronics, and a senior member of the China Institute of Communications. His interests include digital signal processing, multimedia technology, and communication and information security.
Zhiwei Mo received his B.S. degree from South China University of Technology. He is currently a graduate student at Shenzhen Graduate School, Peking University. His research interests include image security and forensics.
Ziqiang Sun received his B.S. degree in Electronics Engineering from Lanzhou University in 2009. He is currently a Ph.D. candidate at Shenzhen Graduate School, Peking University. His research interests include video fingerprinting and multimedia technology.
Zhen Liu received his B.S. degree in Information Engineering from South China University of Technology in 2013. He is currently a graduate student at Shenzhen Graduate School, Peking University. His research interests include digital watermarking and compressed sensing.
References
[1] Schneider M., Chang S. F., "A robust content based digital signature for image authentication," in Proc. of IEEE Int. Conf. Image Processing, vol. 3, pp. 227-230, 1996.
[2] Lu C. S., Hsu C. Y., Sun S. W., Chang P. C., "Robust mesh-based hashing for copy detection and tracing of images," in Proc. of IEEE Int. Conf. Multimedia and Expo, vol. 1, pp. 731-734, 2004.
[3] Xu Z., Ling H., Zou F., Lu Z., Li P., "Robust image copy detection using multi-resolution histogram," in Proc. of ACM Int. Conf. Multimedia Information Retrieval, pp. 129-136, 2010.
[4] Zhao Y., Wang S., Zhang X., Yao H., "Robust hashing for image authentication using Zernike moments and local features," IEEE Trans. Inf. Forensics and Security, vol. 8, no. 1, pp. 55-63, 2013. DOI: 10.1109/TIFS.2012.2223680
[5] Menezes A. J., Van Oorschot P. C., Vanstone S. A., Handbook of Applied Cryptography, CRC Press, 2010.
[6] Tang Z., Dai Y., Zhang X., Zhang S., "Perceptual image hashing with histogram of color vector angles," in Proc. of the 8th Int. Conf. Active Media Technology, Lecture Notes in Computer Science, vol. 7669, pp. 237-246, 2012.
[7] Tang Z., Dai Y., Zhang X., "Perceptual hashing for color images using invariant moments," Appl. Math. Inf. Sci., vol. 6, no. 2S, pp. 643S-650S, 2012.
[8] Tang Z., Zhang X., Dai X., Yang J., Wu T., "Robust image hash function using local color features," Int. J. Electron. and Comm., vol. 67, no. 8, pp. 717-722, 2013. DOI: 10.1016/j.aeue.2013.02.009
[9] Laradji I. H., Ghouti L., Khiari E. H., "Perceptual hashing of color images using hypercomplex representations," in Proc. of IEEE Int. Conf. Image Processing, pp. 4402-4406, 2013.
[10] Ghouti L., "Robust perceptual color image hashing using quaternion singular value decomposition," in Proc. of IEEE Int. Conf. Acoustics, Speech and Signal Processing, pp. 3794-3798, 2014.
[11] Hamilton W. R., Elements of Quaternions, Longmans, Green, London, U.K., 1866.
[12] Sangwine S. J., "Fourier transforms of colour images using quaternion, or hypercomplex, numbers," Electron. Lett., vol. 32, no. 21, pp. 1979-1980, 1996. DOI: 10.1049/el:19961331
[13] Pei S. C., Ding J. J., Chang J. H., "Efficient implementation of quaternion Fourier transform, convolution, and correlation by 2-D complex FFT," IEEE Trans. Signal Processing, vol. 49, no. 11, pp. 2844-2852, 2001. DOI: 10.1109/78.960432
[14] Pei S. C., Chang J. H., Ding J. J., "Quaternion matrix singular value decomposition and its applications for color image processing," in Proc. of IEEE Int. Conf. Image Processing, vol. 1, pp. 805-808, 2003.
[15] Ell T. A., Sangwine S. J., "Hypercomplex Fourier transforms of color images," IEEE Trans. Image Processing, vol. 16, no. 1, pp. 22-35, 2007. DOI: 10.1109/TIP.2006.884955
[16] Wang Q., Wang Z., "Color image registration based on quaternion Fourier transformation," Optical Engineering, vol. 51, no. 5, pp. 1-8, 2012.
[17] Tsui T. K., Zhang X. P., Androutsos D., "Color image watermarking using multidimensional Fourier transforms," IEEE Trans. Inf. Forensics and Security, vol. 3, no. 1, pp. 16-28, 2008. DOI: 10.1109/TIFS.2007.916275
[18] Monga V., Evans B. L., "Perceptual image hashing via feature points: Performance evaluation and tradeoffs," IEEE Trans. Image Processing, vol. 15, no. 11, pp. 3453-3466, 2006. DOI: 10.1109/TIP.2006.881948
[19] Feng W., Hu B., "Quaternion discrete cosine transform and its application in color template matching," in Proc. of IEEE Int. Cong. Image and Signal Processing, vol. 2, pp. 252-256, 2008.
[20] Fridrich J., Goljan M., "Robust hash functions for digital watermarking," in Proc. of IEEE Int. Conf. Information Technology: Coding and Computing, pp. 178-183, 2000.
[21] Qamra A., Meng Y., Chang E. Y., "Enhanced perceptual distance functions and indexing for image replica recognition," IEEE Trans. Pattern Anal. Mach. Intell., vol. 27, no. 3, pp. 379-391, 2005. DOI: 10.1109/TPAMI.2005.54
[22] Li Z., Liu J., Tang J., "Robust structured subspace learning for data representation," IEEE Trans. Pattern Anal. Mach. Intell., vol. 37, no. 10, pp. 2085-2098, 2015. DOI: 10.1109/TPAMI.2015.2400461
[23] Tang J., Li Z., Wang M., Zhao R., "Neighborhood discriminant hashing for large-scale image retrieval," IEEE Trans. Image Processing, vol. 24, no. 9, pp. 2827-2840, 2015. DOI: 10.1109/TIP.2015.2421443
[24] Swaminathan A., Mao Y., Wu M., "Robust and secure image hashing," IEEE Trans. Inf. Forensics and Security, vol. 1, no. 2, pp. 215-230, 2006. DOI: 10.1109/TIFS.2006.873601
[25] Schaefer G., Stich M., "UCID: An uncompressed color image database," in Proc. of SPIE, Storage and Retrieval Methods and Applications for Multimedia, vol. 5307, pp. 472-480, 2004.
[26] Steinebach M., Petitcolas F. A. P., Raynal F., Dittmann J., Fontaine C., Seibel S., Fates N., Ferri L. C., "StirMark Benchmark: Audio watermarking attacks," in Proc. of IEEE Int. Conf. Information Technology: Coding and Computing, pp. 49-54, 2001.