In this paper, a robust watermarking scheme is proposed that uses the scale-invariant feature transform (SIFT) algorithm in the discrete wavelet transform (DWT) domain. First, the SIFT feature areas are extracted from the original image. Then, one-level DWT is applied to the selected SIFT feature areas. The watermark is embedded by modifying the fractional portion of the horizontal or vertical high-frequency DWT coefficients. In the watermark extracting phase, the embedded watermark can be extracted directly from the watermarked image without requiring the original cover image. The experimental results showed that the proposed scheme is robust to both signal processing and geometric attacks. Also, the proposed scheme is superior to some previous schemes in terms of watermark robustness and the visual quality of the watermarked image.
1. Introduction
With the rapid developments in multimedia technology and the Internet over the last decade, digital data, i.e., images, videos, audio, and text, can be easily copied and altered when they are transmitted via the Internet. Hence, protecting the ownership of digital data has become a very essential issue. Many solutions have been proposed to solve this issue, and digital watermarking is one of the most promising. Watermarking schemes embed specific data, also referred to as a ‘watermark,’ into the original image. The watermark cannot be easily seen, which means that an unauthorized person cannot visually detect the embedded data in the content of the watermarked image. To prove the ownership of the image, the embedded watermark is extracted and detected. In watermarking applications, it is most important that the watermark be sufficiently robust to withstand different attacks. There are two types of watermark attacks, i.e., signal processing and geometric attacks. Watermarking schemes [1-17, 20-25] can be classified into template-based [1, 2], invariant transform domain-based [3-5], histogram-based [6, 7], moment-based [8, 9], and feature-based schemes [10-17].
Many feature-based image watermarking algorithms [10-17] have been introduced in the literature. This is because these algorithms can resist signal processing attacks, and they are robust against geometric attacks. In [13], Bas et al. utilized the Harris detector technique to determine the feature points of the original image. A Delaunay tessellation technique is implemented to divide the image into a set of disjoint triangles. Then, the watermark is embedded into each triangle of the tessellation. However, in this scheme, the feature points extracted from the original image and from the attacked image are not the same. In other words, their scheme cannot extract the embedded watermark exactly once the watermarked image is attacked. In [11], Li and Guo applied the Harris detector to determine non-overlapped circular areas and embedded the watermark into the spatial domain of these areas. However, because the spatial domain is used for embedding the watermark, their scheme had limited robustness against both signal processing attacks and geometric attacks. In [14, 15], Seo and Yoo proposed two watermarking schemes using a multi-scale Harris detector. In their schemes, the image is decomposed into disjoint, local circular regions. In [14], a circular, symmetric watermark was embedded after scale normalization processing according to the local characteristic scale, whereas, in [15], Seo and Yoo extract the selected regions and embed the watermark in these regions after geometric normalization processing adapted to the shapes of these regions. In [12], Lee et al. extracted local circular regions by using the scale-invariant feature transform (SIFT). In the extracted local circular regions, pixel values are modified to embed the watermark. Because the watermark is embedded in the spatial domain in these schemes [11, 12, 14, 15], their embedded watermarks are less robust. To further improve the robustness of watermarking schemes, Wang et al. [17] proposed a feature-based watermarking scheme in the transform domain. Their scheme achieved high robustness of the watermark against signal processing attacks. However, the size of the watermark is quite small, i.e., only 32 bits. To enhance the watermark's robustness and size, Li et al. [16] proposed a new watermarking scheme in the DWT domain. The watermark is first resized to the size of the horizontal and vertical high-frequency DWT subbands of the image. Then, the difference between the horizontal and vertical high-frequency DWT coefficients is expanded to embed the watermark. Their scheme outperformed the schemes proposed by Lee et al. [12] and Wang et al. [17].
In this paper, a new, robust watermarking scheme is proposed. The proposed scheme uses the SIFT algorithm to extract the feature areas for embedding the watermark. To achieve better robustness, the DWT coefficients of the SIFT areas are utilized for carrying the watermark. Our experimental results showed that the proposed scheme is resilient to both signal processing attacks and geometric attacks, e.g., Salt and Pepper noise, Gaussian filtering, rotation, and cropping. In addition, the visual quality of the watermarked image is excellent in the proposed scheme.
The rest of this paper is organized as follows. Section 2 provides the concept of the SIFT algorithm [18] to give readers sufficient background knowledge. Section 3 describes the proposed scheme, consisting of the watermark embedding and extracting phases. Section 4 presents the experimental results and illustrates the superiority of the proposed scheme. Finally, we present our conclusions in Section 5.
2. SIFT Algorithm
In 2004, Lowe [18] first introduced the scale-invariant feature transform (SIFT) algorithm and proved that the extracted feature points are stable under geometric transformations, i.e., scaling, rotation, and translation. The SIFT algorithm extracts the feature points from the scale space of the image. The scale space of an image is denoted as L(x, y, α) and is defined in Equation (1):

L(x, y, α) = Gau(x, y, α) * I(x, y) (1)

where I(x, y) is the digital image, * denotes the convolution operation, and Gau(x, y, α) is the variable-scale Gaussian kernel with standard deviation α. Gau(x, y, α) is defined in Equation (2):

Gau(x, y, α) = (1 / (2πα²)) exp(−(x² + y²) / (2α²)) (2)

To detect the SIFT feature points, scale-space extrema in the difference-of-Gaussian (DoG) function D(x, y, α) are used, and D(x, y, α) can be computed using Equation (3):

D(x, y, α) = (Gau(x, y, kα) − Gau(x, y, α)) * I(x, y) = L(x, y, kα) − L(x, y, α) (3)

where k is a constant multiplicative factor; Gau(x, y, kα) is the variable-scale Gaussian kernel with standard deviation α and constant multiplicative factor k; and L(x, y, kα) is the scale space of the image with constant multiplicative factor k. To determine the candidates of the difference-of-Gaussian function D(x, y, α), each point is first compared with its eight neighboring points in the current image. Then, it is also compared with the nine neighbors in the scale above and the nine neighbors in the scale below. The point is selected only when its value is larger or smaller than the values of all of these neighbors. In addition, positions that have low contrast or are poorly localized are removed by a stability function.
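The 26-neighbor extremum test described above can be sketched as follows; the DoG-stack layout and the function name are ours, for illustration only:

```python
import numpy as np

def is_scale_space_extremum(dog, s, x, y):
    """Check whether D(x, y) at scale index s is a local extremum of the
    difference-of-Gaussian stack, i.e. strictly larger or strictly smaller
    than its 8 neighbors in the current scale and the 9 neighbors in each
    of the scales above and below (26 neighbors in total)."""
    cube = dog[s - 1:s + 2, x - 1:x + 2, y - 1:y + 2]  # 3x3x3 neighborhood
    centre = dog[s, x, y]
    others = np.delete(cube.ravel(), 13)               # drop the centre value
    return bool(np.all(centre > others) or np.all(centre < others))

# Toy DoG stack: three 5x5 scales with a single bright peak at (1, 2, 2).
dog = np.zeros((3, 5, 5))
dog[1, 2, 2] = 1.0
print(is_scale_space_extremum(dog, 1, 2, 2))  # True: exceeds all 26 neighbors
print(is_scale_space_extremum(dog, 1, 2, 1))  # False
```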
After obtaining the feature points, one or more orientations are assigned to each point on the basis of the local image gradient directions. Let the gradient magnitude be gm(x, y) and the orientation of the feature point (x, y) be O(x, y), which are calculated using Equations (4) and (5), respectively:

gm(x, y) = sqrt((L(x + 1, y) − L(x − 1, y))² + (L(x, y + 1) − L(x, y − 1))²) (4)

O(x, y) = tan⁻¹((L(x, y + 1) − L(x, y − 1)) / (L(x + 1, y) − L(x − 1, y))) (5)

where L is the Gaussian-smoothed image [3]. The descriptor of a feature area is constructed by first calculating the gradient magnitude and orientation at each image sample point in an area around the feature point location. Then, these samples are accumulated into orientation histograms that summarize the contents over 4 × 4 subareas, with the length of each orientation bin corresponding to the sum of the gradient magnitudes near that direction within the area. The feature area descriptor is formed from the values of all the orientation histogram entries, corresponding to the lengths of the orientation bins. In the SIFT feature descriptor, the radius of an area around the feature point is determined as a constant times the detection scale of the feature point, which is motivated by the property of the scale selection mechanism in the feature point detector of returning a characteristic size estimate associated with each feature point [26]. The reader can refer to [18] for the detailed algorithms that are used to extract the SIFT features. In the research reported in this paper, the SIFT algorithm was used to extract the feature areas of the image in which the watermark was embedded.
3. The Proposed Scheme
After applying the SIFT algorithm, several SIFT feature areas are determined in an image. However, a problem arises when the SIFT feature areas are generated, since some of the feature areas may overlap, so some local areas must be removed before the watermark is hidden. If two areas overlap, only the area whose size is larger than or equal to 60 × 60 pixels is reserved. The main reason is that areas of this size are most suitable for embedding a watermark of 32 × 32 pixels in the proposed scheme. In addition, if two overlapping areas are both larger than 60 × 60 pixels, only the one with the larger DoG function value, calculated by Equation (3), is reserved, because a larger DoG value offers better stability. Fig. 1 shows examples of the extracted SIFT feature areas.
Example of extracted SIFT feature areas from the image Lena
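A simplified greedy sketch of this overlap-filtering rule, assuming each candidate area is represented as an (x, y, radius, DoG-value) tuple (our representation, not the paper's):

```python
import math

def filter_sift_areas(areas, min_side=60):
    """Greedy filtering of overlapping circular SIFT areas: drop areas whose
    bounding square is smaller than min_side x min_side, then, among
    overlapping survivors, keep the one with the larger DoG response.
    Each area is a tuple (x, y, radius, dog_value)."""
    # Keep only areas at least min_side x min_side (diameter >= min_side).
    big = [a for a in areas if 2 * a[2] >= min_side]
    # Sort by DoG response so the most stable areas are kept first.
    big.sort(key=lambda a: a[3], reverse=True)
    kept = []
    for a in big:
        overlaps = any(math.hypot(a[0] - b[0], a[1] - b[1]) < a[2] + b[2]
                       for b in kept)
        if not overlaps:
            kept.append(a)
    return kept

areas = [(100, 100, 40, 0.9), (110, 105, 35, 0.5),   # overlap -> keep higher DoG
         (300, 300, 20, 0.8),                         # too small -> dropped
         (400, 120, 45, 0.7)]
print(filter_sift_areas(areas))  # [(100, 100, 40, 0.9), (400, 120, 45, 0.7)]
```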
Each extracted SIFT area is processed independently for embedding the watermark. To further improve the robustness of the watermark, N extracted SIFT areas are used to carry the same copy of the watermark. Notably, since all circular SIFT areas larger than 60 × 60 pixels are selected for embedding the watermark, N is equal to 20 if there are 20 areas larger than 60 × 60 pixels. In addition, watermark embedding and watermark detection are performed in the DWT domain. DWT is a mathematical technique that transforms an image from the spatial domain into the frequency domain. Basically, the DWT image is obtained by repeatedly filtering the current image row by row and column by column. After each level of DWT transformation, four subbands, i.e., CA_{i}, CH_{i}, CV_{i}, and CD_{i}, are generated. Fig. 2 shows an example of one-level DWT transformation.
Example of one-level DWT transformation
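For illustration, a plain (unnormalized) one-level Haar DWT can be written directly in NumPy. Subband naming and scaling conventions differ between DWT implementations, so this is only a sketch, not the paper's exact transform:

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar DWT: filter rows, then columns, yielding the
    approximation (CA), horizontal (CH), vertical (CV) and diagonal (CD)
    subbands, each half the size of the input (side lengths must be even)."""
    img = img.astype(float)
    # Row transform: average and difference of adjacent column pairs.
    lo = (img[:, 0::2] + img[:, 1::2]) / 2.0
    hi = (img[:, 0::2] - img[:, 1::2]) / 2.0
    # Column transform applied to both halves.
    ca = (lo[0::2, :] + lo[1::2, :]) / 2.0
    ch = (lo[0::2, :] - lo[1::2, :]) / 2.0
    cv = (hi[0::2, :] + hi[1::2, :]) / 2.0
    cd = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return ca, ch, cv, cd

img = np.arange(16).reshape(4, 4)
ca, ch, cv, cd = haar_dwt2(img)
print(ca.shape)  # (2, 2) -- four 2x2 subbands from a 4x4 input
```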
In the DWT domain, the CA_{i} subband is not used for carrying a watermark, because CA_{i} is a low-frequency subband that contains essential information about the image. Therefore, small modifications in this subband can easily cause the image to be distorted. Similarly, embedding the watermark into subband CD_{i} is also avoided, because this subband can be eliminated easily, for example, via JPEG compression. Thus, in the work described in this paper, the CH_{i} and CV_{i} subbands were used to carry the watermark. In addition, the watermark W is partitioned and embedded into both the horizontal and vertical frequency subbands, i.e., CH_{i} and CV_{i}, of the selected SIFT areas. Therefore, the size of the watermark W should be smaller than or equal to half the size of these subbands. The main reason is to guarantee the robustness of the watermark, since the watermark at its original size must be embedded completely into these subbands.
In this paper, to resist geometric attacks, i.e., scaling and rotation, a circular feature area is used for watermarking. As a result, scaling invariance can be obtained because the radius of the area is directly proportional to the scale of the feature area. Moreover, to achieve rotation invariance, a circular area is required. Fig. 3 shows how pixels are arranged in circular and rectangular areas. The proposed scheme contains two phases, i.e., the embedding phase and the extracting phase, which are described in Subsections 3.1 and 3.2, respectively.
Different shapes of feature areas
3.1 Watermark Embedding Phase
In this subsection, the same watermark is embedded repeatedly into N extracted SIFT areas. Fig. 4 shows the flowchart of the watermark embedding phase. The watermark embedding algorithm is described below.
Flowchart of watermark embedding phase
Step 1: First, N SIFT areas whose size is larger than 60 × 60 pixels are selected from the original image; the size of the SIFT area is determined by the size of the watermark. In this paper, the size of each SIFT area is larger than 60 × 60 pixels and the size of the image is 512 × 512 pixels, so the size of the watermark can be 32 × 32 pixels.
Step 2: Each selected SIFT area is resized to 64 × 64 pixels, and one-level DWT is applied to yield four frequency subbands {CA_{i}, CH_{i}, CV_{i}, CD_{i}}.
Step 3: Partition the watermark image W into two parts, W_{1} and W_{2}, as shown in Fig. 4.
Step 4: Embed the two parts of the watermark image, W_{1} and W_{2}, by modifying the coefficients in the horizontal and vertical frequency subbands, i.e., CH_{i} and CV_{i}, respectively, as shown in Fig. 5. Notably, only the DWT coefficients located inside the upper-left half of the inscribed circle are modified for embedding the watermark.
Demonstration of watermark embedding
In the proposed scheme, W_{1} is embedded into the horizontal frequency subband CH_{i}, and W_{2} is embedded into the vertical frequency subband CV_{i}. To embed watermark bit W_{1}(x, y), the corresponding horizontal high-frequency coefficient with the same coordinates, CH_{i}(x, y), is used. Then, the two digits Hi of the fractional portion of CH_{i}(x, y) are extracted to embed watermark bit W_{1}(x, y). Notably, Hi is the two digits selected starting from the first nonzero digit of the fractional portion of CH_{i}(x, y). For example, if CH_{i}(x, y) = 0.001052, the value of Hi will be 10. Similarly, W_{2}(x, y) is embedded into the two digits Vi that are selected starting from the first nonzero digit of the fractional portion of CV_{i}(x, y). To explain further, let b denote W_{1}(x, y) (or W_{2}(x, y)) and let p denote the result of Hi mod T (or Vi mod T), where T is a threshold value. Since Hi (or Vi) consists of two digits starting from the first nonzero digit of the fractional portion of CH_{i}(x, y) (or CV_{i}(x, y)), Hi (or Vi) is in the range [10, 99]. Therefore, p = Hi mod T (or Vi mod T) must be in the range [0, T). The watermark bit b is embedded into the value p using Equation (6):
where T is the threshold value; p is the result of Hi mod T if CH_{i} is used to embed watermark bit W_{1}(x, y) (or of Vi mod T if CV_{i} is used to embed watermark bit W_{2}(x, y)); and b is the watermark bit W_{1}(x, y) (or W_{2}(x, y)). Fig. 6 shows the value of p and the value of p' based on the threshold T after embedding watermark bit b.
Value of p’ after embedding the watermark
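Since Equation (6) itself is not reproduced in this text, the sketch below illustrates one plausible quantization-index-modulation rule with the behavior described above (Hi in [10, 99], p = Hi mod T replaced by p'). The rule and function names are our assumptions, not the paper's exact formula:

```python
def leading_two_digits(coeff):
    """Hi (or Vi): the two digits of the fractional portion of a DWT
    coefficient, starting from its first non-zero digit. For example,
    0.001052 -> 10, as in the text; the result always lies in [10, 99]."""
    frac = abs(coeff) % 1.0
    assert frac > 0, "coefficient needs a non-zero fractional part"
    while frac < 0.1:          # shift left until the first digit is non-zero
        frac *= 10
    return int(frac * 100)

def embed_bit(hi, b, T=30):
    """Assumed stand-in for Equation (6): replace p = hi mod T with a fixed
    representative of the half-cell that encodes bit b. A full
    implementation must also keep hi' inside [10, 99], which this sketch
    ignores."""
    p_new = T // 4 if b == 0 else (3 * T) // 4   # representatives for 0 / 1
    return hi - (hi % T) + p_new

print(leading_two_digits(0.001052))  # 10, matching the example in the text
print(embed_bit(47, 1))              # 52: p' = 22 in the cell starting at 30
```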
Step 5: If CH_{i} is used to embed watermark bit W_{1}(x, y), update CH_{i} by replacing Hi with Hi' = p'. Otherwise, if CV_{i} is used to embed watermark bit W_{2}(x, y), update CV_{i} by replacing Vi with Vi' = p'.
Step 6: Utilize one-level IDWT to reconstruct the watermarked SIFT area. This area is resized to its original size and is used to substitute for the original SIFT area in the image.
Step 7: Repeat Steps 2 through 6 until all N SIFT areas have been completely processed.
3.2 Watermark Extracting Phase
This subsection describes how watermark W is extracted from the watermarked image. The extracting algorithm is implemented without requiring the original image; therefore, the proposed scheme meets the condition of blindness. Some of the steps in the watermark extracting algorithm are exactly the same as those used in the watermark embedding phase. Specifically, the SIFT algorithm is used to extract N watermarked SIFT areas from the watermarked image, and these areas are resized to 64 × 64 pixels. Then, the watermark is extracted from each watermarked SIFT area. Fig. 7 shows the watermark extracting process; the watermark extracting algorithm performed in each SIFT area is described below.
Watermark extracting procedure
Step 1: Utilize one-level DWT on each watermarked SIFT area to produce four frequency subbands {CA_{i}', CH_{i}', CV_{i}', CD_{i}'}.
Step 2: For the coefficients in the two subbands CH_{i}' and CV_{i}', the value p' is determined from the two digits, starting from the first nonzero digit, of the fractional portion of CH_{i}' or CV_{i}', and is used to extract watermark bit b'. Note that only coefficients inside the upper-left half of the inscribed circle of subbands CH_{i}' and CV_{i}' are processed. If b' is extracted from CH_{i}' coefficients (or CV_{i}' coefficients), it belongs to W_{1} (or W_{2}), respectively. Watermark bit b' is extracted using Equation (7).
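Equation (7) is likewise not reproduced in this text; the following sketch is our assumed inverse of the QIM-style embedding rule, not the paper's exact formula. It illustrates why extraction is blind and why small coefficient perturbations do not flip the bit:

```python
def extract_bit(p_prime, T=30):
    """Assumed extraction rule: with QIM-style embedding, p' = Hi' mod T
    (or Vi' mod T) falls in the lower half of a T-sized cell for bit 0 and
    in the upper half for bit 1, so no original image is needed and the
    decision tolerates perturbations of roughly T/4."""
    return 0 if (p_prime % T) < T // 2 else 1

print(extract_bit(22))      # 1: the embedded representative 3T/4 = 22
print(extract_bit(22 + 6))  # 1: still decoded correctly after an attack
print(extract_bit(7 - 5))   # 0
```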
Step 3: Combine the two watermark parts W_{1} and W_{2} to obtain the extracted watermark W'. Then, the final decision is made based on the extracted watermark; in addition, the watermark similarity should be calculated for an objective judgment.
4. Experimental Results
In this section, the performance of the proposed scheme in terms of the watermark's invisibility and robustness is demonstrated. The experiments were performed on a standard image with a size of 512 × 512 pixels. The watermark is a circular binary image with a size of 32 × 32 pixels. N SIFT areas were selected for embedding the watermark. In this experiment, three different values of N, i.e., 3, 4, and 5, were used.
The peak signal-to-noise ratio (PSNR) was used to estimate the visual quality of the watermarked image. The PSNR was computed using Equation (8):

PSNR = 10 log10(255² / ((1/M) Σ (I_{i} − I'_{i})²)) (8)

where M is the size of the original image, and I_{i} and I'_{i} are the pixel values before and after embedding the watermark, respectively.
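Assuming the standard 8-bit peak value of 255 used in Equation (8), the PSNR computation can be sketched as:

```python
import numpy as np

def psnr(original, watermarked, peak=255.0):
    """PSNR of Equation (8): 10*log10(peak^2 / MSE), where the MSE is the
    squared pixel difference averaged over all M pixels of the image."""
    diff = original.astype(float) - watermarked.astype(float)
    mse = float(np.mean(diff ** 2))
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

orig = np.full((4, 4), 100.0)
marked = orig.copy()
marked[0, 0] += 2.0                  # distort a single pixel
print(round(psnr(orig, marked), 2))  # 54.15 dB for this toy example
```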
Table 1 lists the PSNRs of the watermarked image with different values of threshold T. It is apparent that the proposed scheme provided watermarked images with high visual quality under different values of threshold T. Fig. 8 shows the image Lena before and after the watermark was embedded. Fig. 8 clearly shows that the proposed scheme made the watermark invisible, with PSNR values greater than 84 dB.
PSNR of the watermarked image with different values of threshold T
Watermark’s invisibility
To demonstrate the robustness of the proposed scheme, we use the normalized correlation coefficient (NC) to measure the similarity between the embedded watermark W and the extracted watermark W'. NC is calculated using Equation (9):
where W_{h} and W_{w} are the height and width of the embedded watermark, respectively. Note that, in this experiment, the highest NC value from the N SIFT areas is selected for comparisons.
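Equation (9) is not reproduced in this text; the sketch below uses a common NC variant, cross-correlation normalized by both energies, as an assumption:

```python
import numpy as np

def nc(w, w_extracted):
    """Normalized correlation between the embedded watermark W and the
    extracted watermark W': cross-correlation normalized by the energies
    of both images, equal to 1.0 for a perfect extraction."""
    w = np.asarray(w, dtype=float).ravel()
    wp = np.asarray(w_extracted, dtype=float).ravel()
    return float(np.sum(w * wp) / np.sqrt(np.sum(w ** 2) * np.sum(wp ** 2)))

w = np.array([[1, 0], [1, 1]])
print(round(nc(w, w), 4))        # 1.0 for an identical watermark
flipped = np.array([[1, 0], [1, 0]])
print(round(nc(w, flipped), 4))  # lower after one bit is destroyed
```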
Fig. 9 shows the performance of the proposed scheme with several thresholds T, i.e., 20, 30, 40, and 50. To obtain a tradeoff between the visual quality of the watermarked image and robustness, the threshold T = 30 was selected in the proposed scheme.
Performance of the proposed scheme with several thresholds
Signal processing and geometric attacks listed in the benchmark software StirMark 4.0 [19] were applied to the watermarked images. These attacks try to weaken or remove the embedded watermark from the watermarked images. Fig. 10 shows that the embedded watermark can be extracted clearly from all watermarked images under Salt and Pepper noise, JPEG 100, and Gaussian attacks with different parameters. As can be seen in Fig. 10, the proposed scheme obtained a high value of NC, i.e., greater than 0.9. This indicates that our proposed scheme was resilient against these attacks and that it ensured the high visual quality of the watermarked images, as shown in Table 1.
Watermarked images, extracted watermarks, and NC values for Salt and Pepper noise, JPEG 100, and Gaussian attacks
Fig. 11 shows the results of the proposed scheme for rotation attacks and cropping attacks. In the rotation attacks, three cases of rotating the watermarked images were simulated, i.e., 2°, 5°, and 10°. In the cropping attacks, three different cases were performed. In the first case, the corner of the watermarked image was cropped by 25%. In the second and third cases, the outer parts of the watermarked images were cropped by 50% and 75%, respectively. Fig. 11 shows that the watermark was extracted completely from the watermarked images.
Watermarked images, extracted watermarks, and NC values for rotation and cropping attacks
To further validate the proposed scheme, we compared it with two previous schemes [12, 16]. These schemes were selected as baselines because they are similar to the proposed scheme in terms of using invariant feature points to conceal the watermark. Table 2 shows that our proposed scheme obtained a much higher PSNR than those of the previous schemes [12, 16], even though the proposed scheme and Li et al.'s scheme both use the DWT domain to embed the watermark. However, in [16], Li et al. modified the horizontal and the vertical high-frequency DWT coefficients to hide the watermark bits, whereas the proposed scheme hides the watermark bits only in the fractional portion of the horizontal or the vertical high-frequency DWT coefficients. Consequently, more distortion of the watermarked images occurred in Li et al.'s scheme than in our proposed scheme.
PSNR of watermarked and original images (dB)
Table 3 compares the robustness of the watermarks for Li et al.'s scheme, Lee et al.'s scheme, and our proposed scheme, and it shows that the proposed scheme had greater robustness. In Lee et al.'s scheme, the pixels in the SIFT regions are modified to embed the watermark. Hence, when many pixels have their values altered by an attack, the extracted watermark has more distortion. Instead of embedding the watermark in the spatial domain, Li et al.'s scheme embeds the watermark in the DWT domain. As a result, their scheme obtained higher robustness than Lee et al.'s scheme. However, in Li et al.'s scheme, the difference between the horizontal and vertical high-frequency DWT coefficients is expanded to embed the watermark. Therefore, if the value of either the horizontal or the vertical high-frequency DWT coefficient is modified, the extracted watermark bit will be changed. Conversely, in the proposed scheme, only the value of the horizontal or the vertical high-frequency DWT coefficient is altered to embed the watermark bit. Table 4 shows the robustness of the proposed scheme with different values of N.
Comparisons of the robustness of the watermark among Li et al.’s scheme, Lee et al.’s scheme, and the proposed scheme
Robustness of the proposed scheme with different values of N
Based on the experimental results, we can see that the same results were obtained with different values of N. This is because only the highest NC results are selected in our scheme, and the highest results can already be obtained when N = 3. Therefore, we set N to 3.
5. Conclusions
In this paper, a new image watermarking scheme was proposed to obtain high visual quality of the watermarked image and resistance against various attacks. In the proposed scheme, the SIFT feature areas are extracted, and a watermark image in binary form is embedded repeatedly into the horizontal and vertical high-frequency DWT coefficients of the selected SIFT feature areas. The experimental results showed that the proposed scheme is resilient to various attacks, i.e., signal processing and geometric attacks. In addition, the proposed scheme achieved better visual quality of the watermarked image and stronger robustness to various attacks than some previous schemes. It can be concluded that our watermarking scheme satisfies the demands of real-time applications. For some attacks, i.e., median filtering and shearing, the robustness of the proposed scheme is not very high; achieving better robustness against these attacks is left for future work.
BIO
Wan-Li Lyu was born in Anhui province, China, in 1974. She received the M.S. degree in computer science and technology from Guangxi University and the Ph.D. degree in computer science and technology from Anhui University. Since July 2004, she has been a Lecturer in the School of Computer Science and Technology, Anhui University. Since August 2013, she has also been a postdoctoral research fellow in the Department of Information Engineering and Computer Science at Feng Chia University. Her current research interests include image processing, computer cryptography, and information security.
Chin-Chen Chang received the B.S. degree in applied mathematics in 1977 and the M.S. degree in computer and decision sciences in 1979, both from National Tsing Hua University, Hsinchu, Taiwan, and the Ph.D. degree in computer engineering from National Chiao Tung University, Hsinchu, in 1982. From 1980 to 1983, he was with the faculty of the Department of Computer Engineering, National Chiao Tung University. From 1983 to 1989, he was with the faculty of the Institute of Applied Mathematics, National Chung Hsing University, Taichung, Taiwan. From August 1989 to July 1992, he was the Head and a Professor with the Institute of Computer Science and Information Engineering, National Chung Cheng University, Chiayi, Taiwan. From August 1992 to July 1995, he was the Dean of the College of Engineering, National Chung Cheng University. From August 1995 to October 1997, he was the Provost with National Chung Cheng University. From September 1996 to October 1997, he was the Acting President with National Chung Cheng University. From July 1998 to June 2000, he was the Director of the Advisory Office of the Ministry of Education of Taiwan. From 2002 to 2005, he was a Chair Professor with National Chung Cheng University. Since February 2005, he has been a Chair Professor with the Department of Information Engineering and Computer Science, Feng Chia University, Taichung. He is also with the Department of Computer Science and Information Engineering, Asia University, Taichung. He has served as a Consultant with several research institutes and government departments. His current research interests include database design, data structures, computer cryptography, and image processing.
Thai-Son Nguyen received the bachelor's degree in information technology from Open University, Ho Chi Minh City, Vietnam, in 2005. Since December 2006, he has been a lecturer at Tra Vinh University, Tra Vinh, Vietnam. In 2011, he received the M.S. degree in computer science from Feng Chia University, Taichung, Taiwan. He is currently pursuing the Ph.D. degree with the Department of Information Engineering and Computer Science, Feng Chia University, Taichung, Taiwan. His current research interests include data hiding, image processing, database security, and information security.
Chia-Chen Lin received the B.S. degree in information management from Tamkang University, Taipei, Taiwan, R.O.C., in 1992. She received the M.S. degree and the Ph.D. degree in information management from Chiao Tung University, Hsinchu, Taiwan, in 1994 and 1998, respectively. She was a Visiting Associate Professor at the Business School, University of Illinois at Urbana-Champaign, from August 2006 to July 2007. She is currently a Professor in the Department of Computer and Information Management, Providence University, Shalu, Taiwan. Her research interests include image and signal processing, image data hiding, mobile agents, and electronic commerce.
Pereira S., Pun T., "Robust template matching for affine resistant image watermark," IEEE Trans. Image Processing, vol. 9, no. 6, pp. 1123-1129, 2000. DOI: 10.1109/83.846253
Kang X., Huang J., Shi Y. Q., Lin Y., "A DWT-DFT composite watermarking scheme robust to both affine transform and JPEG compression," IEEE Trans. Circuits Syst. Video Technol., vol. 13, no. 8, pp. 776-786, 2003. DOI: 10.1109/TCSVT.2003.815957
Lin Y. T., Huang C. Y., Lee G. C., "Rotation, scaling, and translation resilient watermarking for images," IET Image Processing, vol. 5, no. 4, pp. 328-340, 2011. DOI: 10.1049/iet-ipr.2009.0264
Zheng D., Zhao J., El-Saddik A., "RST-invariant digital image watermarking based on log-polar mapping and phase correlation," IEEE Trans. Circuits Syst. Video Technol., vol. 13, no. 8, pp. 753-765, 2003.
Roy S., Chang E. C., "Watermarking color histograms," in Proc. ICIP, pp. 2191-2194, 2004.
Xiang S., Joong H., Hua J., "Invariant image watermarking based on statistical feature in low-frequency domain," IEEE Trans. Circuits Syst. Video Technol., vol. 18, no. 6, pp. 777-790, 2008. DOI: 10.1109/TCSVT.2008.918843
Alghoniemy M., Tewfik A. H., "Geometric invariant in image watermarking," IEEE Trans. Image Processing, vol. 13, no. 2, pp. 145-153, 2004. DOI: 10.1109/TIP.2004.823831
Dong P., Brankow J. G., Galatsanos N. P., Yang Y. Y., Davoine F., "Digital watermarking robust to geometric distortions," IEEE Trans. Image Processing, vol. 14, no. 2, pp. 2140-2150, 2005. DOI: 10.1109/TIP.2005.857263
Deng C., Gao X. B., Li X. L., Tao D. C., "A local Tchebichef moments-based robust image watermarking," Signal Processing, vol. 89, no. 8, pp. 1531-1539, 2009. DOI: 10.1016/j.sigpro.2009.02.005
Li L. D., Guo B. L., "Localized image watermarking in spatial domain resistant to geometric attacks," AEU-Int. J. Electron. Commun., vol. 63, no. 2, pp. 123-131, 2009. DOI: 10.1016/j.aeue.2007.11.007
Lee H. Y., Kim H., Lee H. K., "Robust image watermarking using local invariant features," Optical Engineering, vol. 45, no. 3, pp. 1-10, 2007.
Bas P., Chassery J., Macq B., "Geometrically invariant watermarking using feature points," IEEE Trans. Image Processing, vol. 11, no. 9, pp. 1014-1028, 2002. DOI: 10.1109/TIP.2002.801587
Seo J., Yoo C., "Localized image watermarking based on feature points of scale-space representation," Pattern Recognition, vol. 37, no. 7, pp. 1365-1375, 2004. DOI: 10.1016/j.patcog.2003.12.013
Seo J., Yoo C., "Image watermarking based on invariant region of scale-space representation," IEEE Trans. Signal Process., vol. 54, no. 4, pp. 1537-1549, 2006. DOI: 10.1109/TSP.2006.870581
Li L., Qian J. S., Pan J. S., "Characteristic region based watermark embedding with RST invariance and capacity," AEU-Int. J. Electron. Commun., vol. 65, no. 5, pp. 435-442, 2011. DOI: 10.1016/j.aeue.2010.06.001
Wang X. Y., Wu J., Niu P. P., "A new digital image watermarking algorithm resilient to desynchronization attacks," IEEE Trans. Inf. Forensics Secur., vol. 2, no. 4, pp. 655-663, 2007. DOI: 10.1109/TIFS.2007.908233
Petitcolas F. A., "Watermarking scheme evaluation," IEEE Signal Process. Mag., vol. 17, no. 5, pp. 58-64, 2000. DOI: 10.1109/79.879339
Benhocine A., Laouamer L., Nana L., Pascu A. C., "New Images Watermarking Scheme Based on Singular Value Decomposition," Journal of Information Hiding and Multimedia Signal Processing, vol. 4, no. 1, pp. 9-18, 2013.
Latif A., "An Adaptive Digital Image Watermarking Scheme using Fuzzy Logic and Tabu Search," Journal of Information Hiding and Multimedia Signal Processing, vol. 4, no. 4, pp. 250-271, 2013.
Huang H. C., Chang F. C., "Robust Image Watermarking Based on Compressed Sensing Techniques," Journal of Information Hiding and Multimedia Signal Processing, vol. 5, no. 2, pp. 275-285, 2014.
Tian H. W., Zhao Y., Ni R. R., Qin L. M., Li X. L., "LDFT-based watermarking resilient to local desynchronization attacks," IEEE Trans. Cybernetics, vol. 43, no. 6, pp. 2190-2201, 2013. DOI: 10.1109/TCYB.2013.2245415
Tsai J. S., Huang W. B., Kuo Y. H., "On the selection of optimal feature region set for robust digital image watermarking," IEEE Trans. Image Process., vol. 20, no. 3, pp. 735-743, 2011. DOI: 10.1109/TIP.2010.2073475
Gao X. B., Deng C., Li X. L., Tao D. C., "Geometric distortion insensitive image watermarking in affine covariant regions," IEEE Trans. Systems, Man, and Cybernetics - Part C: Applications and Reviews, vol. 40, no. 3, pp. 278-286, 2010. DOI: 10.1109/TSMCC.2009.2037512
Lindeberg T., "Feature detection with automatic scale selection," International Journal of Computer Vision, vol. 30, no. 2, pp. 79-116, 1998. DOI: 10.1023/A:1008045108935