A Model-Based Image Steganography Method Using Watson’s Visual Model

ETRI Journal, vol. 36, no. 3, Apr. 2014, pp. 479-489.

- Received : February 18, 2013
- Accepted : November 05, 2013
- Published : April 01, 2014


This paper presents a model-based image steganography method based on Watson’s visual model. Model-based steganography assumes a model for cover image statistics. This approach, however, has some weaknesses, including perceptual detectability. We propose to use Watson’s visual model to improve perceptual undetectability of model-based steganography. The proposed method prevents visually perceptible changes during embedding. First, the maximum acceptable change in each discrete cosine transform coefficient is extracted based on Watson’s visual model. Then, a model is fitted to a low-precision histogram of such coefficients and the message bits are encoded to this model. Finally, the encoded message bits are embedded in those coefficients whose maximum possible changes are visually imperceptible. Experimental results show that changes resulting from the proposed method are perceptually undetectable, whereas model-based steganography retains perceptually detectable changes. This perceptual undetectability is achieved while the perceptual quality — based on the structural similarity measure — and the security — based on two steganalysis methods — do not show any significant changes.
The DCT AC coefficients^{1)} whose changes are acceptable based on Watson’s visual model are called “proper AC coefficients.” Then, the low-precision histogram of proper AC coefficients is computed and the distribution parameters are estimated. The message bits are encoded to obtain new symbols based on the model. Finally, the new symbols are embedded in proper AC coefficients such that the estimated distribution is satisfied.
Experimental results show that the proposed steganography algorithm improves the model-based steganography algorithm [6] in terms of visual imperceptibility, image quality, and security.
The rest of the paper is organized as follows. In section II, we give a brief overview of related work on model-based steganography and human visual models. Watson’s visual model is explained in section III. The details of the proposed steganography method are given in section IV. In section V, we experimentally investigate the imperceptibility and quality of the resulting images and the security of the algorithm. Finally, in section VI we conclude the paper and outline possible future work.
Each entry t[i, j], i, j = 1, ..., 8, of this matrix is the smallest visible value of the corresponding DCT coefficient in a block without any masking noise. Smaller values of t[i, j] indicate higher sensitivity of the human eye to the corresponding frequency [4], [5]. Lower frequencies have smaller t-values; because the human eye is highly sensitive to these frequencies, even trivial changes to them produce perceptible phenomena in the image.
Luminance masking modifies t[i, j] in each block k according to the block’s average luminance, or DC term. The luminance masking threshold t_L[i, j, k] is defined in (2), where C_o[0, 0, k] is the DC coefficient of the kth block of the cover image, C_{0,0} is the average of the DC coefficients over all cover-image blocks, and a_T is the constant 0.649 [4], [5].
The contrast masking threshold s[i, j, k] is defined in (3), where w[i, j] is a constant between 0 and 1 (Watson used the value 0.7 for all i, j [4], [5]).
Perceptual distance allows comparison of a cover image C_c with a stego image C_s. The perceptual distance d[i, j, k] between the (i, j)th DCT coefficients of the kth block of the two images is defined in (4). These per-coefficient distances are pooled into a single perceptual distance D_wat(C_c, C_s), given by (5), where the value of p determines the type or degree of pooling: for p = 1 the pooling is a linear summation of absolute values, whereas for p = 2 it is a quadratic combination of errors [4], [5].
Let x denote an image that is an instance of a random variable X with distribution P_X(x). Then, x can be separated into two distinct parts: x_α, which is not changed, and x_β, which may be changed to x'_β during embedding. For example, if we use naive least-significant-bit embedding, x_α represents the seven most significant bits and x_β the least significant bit of the binary representation of the selected coefficients.
The main idea of this approach is to treat x_α and x_β as instances of two dependent random variables X_α and X_β, respectively, and to estimate the conditional distribution of X_β given X_α = x_α based on the proposed model distribution P_X. If x'_β is selected to obey this conditional distribution, the resulting x' = (x_α, x'_β) will be correctly distributed according to the model. Sallee [6] proposed to use a parametric model of P_X to estimate x'_β such that it conveys the intended message and is also distributed according to the estimated conditional distribution. In the model of (6), u is the coefficient value and p > 1, s > 0 are the two parameters of the model; this function fits the coefficient distributions better than other parametric models [6].
Figure 1 depicts the histogram of the (2, 2) DCT coefficient for a sample image together with the model density function fitted using the maximum-likelihood approach; the corresponding cumulative distribution function is given in (7).
Fig. 1. Histogram of DCT coefficient (2, 2) for the goldhill image and the model pdf with parameters s = 18.28 and p = 6.92 [6].
For each coefficient x, x_β consists of the k least significant bits and x_α of the (8 − k) most significant bits of the coefficient. Changing a coefficient value changes only x_β, that is, the coefficient’s bin offset. Therefore, the maximum change in the value of a coefficient is step − 1, and this change may be visually perceptible to the human eye. We use Watson’s visual model [4] to prevent perceptible changes: for each coefficient of each block, embedding is valid only if the maximum change (step − 1) is visually imperceptible. These coefficients are called “proper AC coefficients,” and the rest of the procedure is applied only to them.
The pseudocode for the proposed method is given in Fig. 2, and the method for selecting such coefficients is presented in detail in Fig. 3. In this method, after computing the DCT of the image, the frequency sensitivity matrix t is defined. Then, the luminance masking and contrast masking thresholds are computed for each block. For each coefficient, the maximum possible change, which is equal to step − 1, is compared with the corresponding thresholds of frequency sensitivity, luminance masking, and contrast masking through the IsLower procedure. If the maximum possible change is lower than or equal to each threshold t_i, tl_{k,i}, or sl_{k,i}, this procedure returns true; otherwise it returns false. As shown in line 13 of Fig. 3, all coefficients whose maximum possible changes are lower than or equal to the corresponding thresholds of frequency sensitivity, luminance masking, and contrast masking are returned as proper coefficients for embedding.
Fig. 2. Pseudocode of the proposed algorithm.
Fig. 3. Pseudocode for selecting proper coefficients.
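The selection step can be sketched as follows; this is a minimal illustration in Python with our own function names (`is_lower`, `select_proper_coefficients`), not the paper’s pseudocode, and it assumes the three threshold arrays have already been computed per block:

```python
# Sketch of the proper-coefficient test of Fig. 3 (illustrative names).
# A coefficient (i, j) of block k is "proper" when the maximum possible
# embedding change, step - 1, does not exceed any of the three thresholds.

def is_lower(max_change, t_ij, tl_ijk, s_ijk):
    """IsLower: true iff max_change is within every threshold."""
    return max_change <= t_ij and max_change <= tl_ijk and max_change <= s_ijk

def select_proper_coefficients(step, t, tl, s):
    """Return (i, j, k) indices of AC coefficients safe for embedding.

    t  : 8x8 frequency sensitivity matrix, t[i][j]
    tl : luminance masking thresholds per block, tl[k][i][j]
    s  : contrast masking thresholds per block,  s[k][i][j]
    """
    max_change = step - 1
    proper = []
    for k in range(len(tl)):
        for i in range(8):
            for j in range(8):
                if (i, j) == (0, 0):        # skip the DC coefficient
                    continue
                if is_lower(max_change, t[i][j], tl[k][i][j], s[k][i][j]):
                    proper.append((i, j, k))
    return proper
```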
In the next steps, some procedures are applied to the proper coefficients of each type of AC coefficient. The LowPrecHist procedure in Fig. 2 constructs a low-precision histogram of the x_α’s: all coefficient values x with equal x_α, but possibly different x_β, are counted in the same bin, which is why the histogram is called “low precision.”
The bin size, called the step, is defined as the number of consecutive coefficient values assigned to a bin and is equal to 2^k, where k is the number of least significant bits that comprise x_β. Each coefficient value is represented by a histogram-bin index and a symbol that indicates its offset within the bin, from 0 to 2^k − 1. The bin indices of all the coefficients comprise x_α, which remains unchanged, and the bin offsets comprise x_β, which may be changed during the embedding procedure.
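The bin-index/bin-offset split can be illustrated with a short sketch (function names are ours, and the handling of negative coefficient values via floor division is an assumption, since the paper does not spell it out):

```python
# Illustrative split of a coefficient value into x_alpha (bin index,
# kept) and x_beta (offset within the bin, 0 .. step-1, changeable),
# with bin size step = 2**k.

def split_coefficient(v, step):
    """Return (bin_index, offset) of coefficient value v."""
    return v // step, v % step   # floor division; our choice for negatives

def low_prec_hist(coeffs, step):
    """LowPrecHist sketch: count coefficients per bin index."""
    hist = {}
    for v in coeffs:
        b, _ = split_coefficient(v, step)
        hist[b] = hist.get(b, 0) + 1
    return hist
```

Any value in a bin can be replaced by another value of the same bin without changing the low-precision histogram, which is exactly the freedom the embedder exploits.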
After computing all low-precision histograms, the model parameters p and s of (6) are fitted to each histogram individually by FitGCD. The steganography decoder does not know the least significant portions x_β, which may have been changed by the encoder; during embedding, the proper AC coefficients are therefore changed only within their low-precision histogram bins (only the bin offsets change), so that the decoder obtains the same estimates of p and s for each coefficient type.
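A minimal sketch of such a fit, under our own simplification of fitting raw coefficient values by a coarse grid search rather than the paper’s per-histogram maximum-likelihood procedure:

```python
import math

# Hedged sketch of FitGCD: choose GCD parameters p > 1, s > 0 of (6)
# that maximize the log-likelihood of observed coefficient values over
# a small grid (a stand-in for a proper maximum-likelihood optimizer).

def gcd_pdf(u, p, s):
    """GCD density of (6)."""
    return (p - 1) / (2 * s) * (1 + abs(u / s)) ** (-p)

def fit_gcd(values, p_grid, s_grid):
    best, best_ll = None, float("-inf")
    for p in p_grid:
        for s in s_grid:
            ll = sum(math.log(gcd_pdf(u, p, s)) for u in values)
            if ll > best_ll:
                best_ll, best = ll, (p, s)
    return best
```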
Once the model parameters p_i and s_i are fitted to the histograms, the ComputeCDF procedure computes the cumulative distribution function cdf_i from the GCD model parameters p_i and s_i. Then, the SymbolProbability procedure computes the probability of each possible offset symbol for a coefficient given its bin index, which is assigned to the variable symP_i. These offset symbols, with their respective probabilities, are passed to a non-adaptive arithmetic entropy decoder along with the secret message; the details of arithmetic coding can be found in [39]. The order in which coefficients are used for encoding the message by ArithEncode is determined beforehand by the RandPermutation procedure. This procedure, called permutative straddling [37], computes a pseudo-random permutation to avoid changing coefficients in only one part of the image. The straddling procedure was proposed in the F5 algorithm [37] to shuffle all coefficients using a permutation; the permutation defines the embedding order and depends on a key. The offset symbols returned by the entropy decoder comprise the x'_β’s, which are combined with the bin indices (the x_α’s) by the NewCoefficients procedure. This combination forms the new coefficient value x', which is called newXw in Fig. 2.
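The offset-symbol probabilities can be sketched from the GCD cumulative distribution of (7); approximating the mass of integer value v by the CDF increment over [v − 0.5, v + 0.5] is our assumption, and the paper’s exact discretization may differ:

```python
# Hedged sketch of SymbolProbability: given a coefficient's bin index
# and fitted GCD parameters, estimate the probability of each offset
# symbol inside the bin.

def gcd_cdf(u, p, s):
    """GCD cumulative distribution function of (7)."""
    tail = 0.5 * (1 + abs(u / s)) ** (1 - p)
    return tail if u <= 0 else 1 - tail

def symbol_probability(bin_index, step, p, s):
    """Probability of each offset 0 .. step-1 within the given bin."""
    lo = bin_index * step
    masses = [gcd_cdf(lo + o + 0.5, p, s) - gcd_cdf(lo + o - 0.5, p, s)
              for o in range(step)]
    total = sum(masses)
    return [m / total for m in masses]
```

Because the density decays away from zero, offsets closer to zero receive higher probability, which is what lets the arithmetic decoder map message bits onto a model-consistent symbol stream.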
We summarize the method in the following steganography algorithm.
The Proposed Algorithm: select the proper AC coefficients, compute the low-precision histograms, fit the GCD model, compute the offset-symbol probabilities, and embed the entropy-decoded symbols in a key-dependent pseudo-random order. In the experiments below, step is set equal to 2.
Figure 4 depicts an application of the model-based and proposed methods to the goldhill image. The original cover (goldhill) image is shown in Fig. 4(a). Stego images generated by the model-based and the proposed methods are depicted in Figs. 4(b) and 4(c), respectively; in both cases step = 2. Figures 4(d) and 4(e) depict perceptible changes for the model-based and proposed methods, respectively. The white dots correspond to the DCT coefficients whose changes during the embedding procedure are perceptible to human eyes; they are obtained from Watson’s model of the human visual system in the DCT domain. As Fig. 4 shows, the model-based method results in perceptible changes, whereas the proposed method leaves none; perceptually undetectable changes in the proposed method are guaranteed by construction.
We also used a large data set of cover images called BossBase [40]. This data set contains ten thousand 512 pixel × 512 pixel gray-scale images, originally in raw PGM^{2)} format. These images were converted to JPEG format, and then the model-based and the proposed methods were applied to them, with step = 2 in all experiments. The proposed method did not leave any perceptible changes based on Watson’s visual model, while the model-based method left some visually perceptible changes.
Fig. 4. Perceptible changes in DCT coefficients based on Watson’s visual model: (a) goldhill cover image, (b) stego image generated by model-based steganography, (c) stego image generated by proposed steganography, (d) white dots are perceptible changes in DCT coefficients after model-based steganography, and (e) no perceptible change after the proposed steganography.
In another quality test, we examined the proposed method using the structural similarity (SSIM) index, which assesses the perceptual quality of the image. Wang and others [41] defined the SSIM between two images x and y as in (8), where μ_x and σ_x² are, respectively, the mean and variance of image x, defined in (9) and (10); μ_y and σ_y² are defined similarly; σ_xy is the covariance defined in (11); and C_1 and C_2 are two small constants included to avoid instability. We set both constants equal to 0.001 in our experiments.
For different payloads, the SSIM indices of all images are computed for each method, and their average values are reported in Fig. 5. In our experiments, the payloads are measured in “bpac” (bits per non-zero AC coefficient); that is, the payload is the ratio of the message size to the number of non-zero AC coefficients among all DCT coefficients. This experiment shows that the stego images produced by the two methods have similar quality: the maximum difference between corresponding values of the two methods is less than 10^{−5} of their values. Therefore, considering only the SSIM measure, the proposed method has no significant effect on the images compared to the model-based method. Other statistics (maximum, minimum, and standard deviation) for this experiment are reported in Table 1 and are similar for both methods. For example, a comparison of the minimum SSIM values shows that although our method has higher minimum SSIM for payloads of 0.02 bpac and 0.04 bpac, it has lower values for payloads of 0.01 bpac, 0.03 bpac, and 0.05 bpac.

Fig. 5. SSIM vs. payload for stego images produced by the model-based and proposed methods for different payloads.
Figure 6 shows a sample cover image and the corresponding stego images generated by the model-based and proposed methods with a payload of 0.03 bpac. The SSIM values reported for these images show that the stego image generated by the proposed method has a slightly better SSIM than the other stego image.
That the model-based method sometimes scores better is due to the nature of SSIM: the criterion measures structural, not content, similarity. SSIM compares two images through their means and variances and is not sensitive to where the changes appear. Watson’s model, in contrast, considers both the magnitude of the difference between the two images and the location of each change: a change in a region to which the human eye is sensitive is important, whereas a change elsewhere is not.
Although the main contribution of the proposed method is visual imperceptibility, we also tested its security against two steganalysis attacks.
Both steganalysis methods consist of two parts: feature extraction and classification. In the first method, the extracted features are the 274 merged extended-DCT and Markov features introduced in [42], calibrated by difference [42] and by Cartesian product [43], yielding a total of 548 features. For classification, we applied the ensemble classifiers described by Kodovský and others [44] to the extracted features. We call this method “DCT-Mkv-548-ensCls.” The second method is the higher-order statistics steganalysis of [45], [46], which uses a wavelet-like decomposition to build higher-order statistical models of natural images and then applies Fisher linear discriminant analysis for classification.
Fig. 6. SSIM for a sample cover image and its corresponding stego images generated by the model-based and proposed methods with a payload of 0.03 bpac.

We applied the DCT-Mkv-548-ensCls and higher-order statistics attacks to the model-based method and our proposed method. Table 2 compares the detection rates of both methods for different payloads. We define the detection rate of an attack on an algorithm as the average of the correct recognition rate on cover images and the correct recognition rate on stego images produced by the algorithm; from the steganographer’s point of view, lower detection rates are better. The table shows that the two methods have similar security. Under the DCT-Mkv-548-ensCls attack, the model-based method has slightly better results for lower payloads, but the proposed method becomes more secure as the payload increases: at a payload of 0.3 bpac the two methods have the same detection rate, and for payloads above 0.3 bpac our method is the more secure. Under the higher-order statistics attack, both methods give about the same results; neither the model-based nor the proposed method can be detected by this attack, because the detection rates are near 50% (random guessing).
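The detection-rate definition amounts to a simple average, sketched below with made-up counts (not values from Table 2):

```python
# Detection rate as defined above: the average of the correct
# recognition rates on cover images and on stego images.

def detection_rate(cover_correct, cover_total, stego_correct, stego_total):
    return 0.5 * (cover_correct / cover_total + stego_correct / stego_total)
```

A classifier that labels everything "cover" scores 0.5 by this measure, which is why rates near 50% indicate that the steganalyzer is doing no better than chance.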
1) Alternating current: in the DCT, the upper-left element of each block’s coefficient matrix is the DC (direct current) coefficient; the other coefficients are the AC coefficients.
2) Portable Gray Map
m-fakhredanesh@aut.ac.ir
Mohammad Fakhredanesh received his BS and MS degrees in computer science from the Mathematical and Computer Science Department, Amirkabir University of Technology (Tehran Polytechnic), Iran, in 2005 and 2007, respectively. He is currently a PhD candidate at the Computer Engineering Department, Amirkabir University of Technology (Tehran Polytechnic), Iran. His research interests are in the fields of data hiding, image processing, computer vision, and pattern recognition.
Corresponding Author safa@aut.ac.ir
Reza Safabakhsh received his BS degree in electrical engineering from the Sharif University of Technology, Tehran, Iran, in 1976 and his MS and PhD degrees in electrical and computer engineering from the University of Tennessee, Knoxville, USA, in 1980 and 1986, respectively. He worked at the Center of Excellence in Information Systems, Nashville, TN, USA, from 1986 to 1988. Since 1988, he has been with the Computer Engineering Department, Amirkabir University of Technology, Tehran, Iran, where he is currently a professor and the director of the Computer Vision Laboratory. His current research interests include computer vision, neural networks, and information hiding. He is a member of the IEEE and several honor societies, including Phi Kappa Phi and Eta Kappa Nu. He was the founder and a member of the Board of Executives of the Computer Society of Iran and was the president of this society for the first four years.
rahmati@aut.ac.ir
Mohammad Rahmati received his MS in electrical engineering from the University of New Orleans, USA, in 1997 and his PhD degree in electrical and computer engineering from the University of Kentucky, Lexington, KY, USA, in 2003. He is currently an associate professor at the Computer Engineering Department, Amirkabir University of Technology (Tehran Polytechnic), Iran. His research interests are in the fields of pattern recognition, image processing, bioinformatics, video processing, and data mining. He is the research coordinator of the department and is a member of the IEEE Signal Processing Society.

Keywords: data hiding; image steganography; model-based steganography; human visual system; Watson’s visual model; discrete cosine transform

I. Introduction

Steganography involves hiding a secret message in a cover medium so that only the intended recipient becomes aware of the existence of the hidden message
[1]
. For higher security, the message is usually coded before being embedded into the cover medium. The result of this coding procedure is a pseudorandom sequence of 0’s and 1’s, which is then embedded in the cover. Embedding the message bits in an image usually introduces some visual artifacts and changes the image statistics. The image statistics are analyzed by steganalysis methods, whereas visual artifacts may be detected by the human eye.
Adaptive steganography utilizes image features and content information to improve the embedding process
[2]
. For example, less detectable artifacts may be introduced into the cover image by the embedding method
[3]
. In some cases, such artifacts are hard for steganalysis methods to detect because only trivial changes are introduced in the image statistics. However, the artifacts may still be visually perceptible to the human eye. Perceptual undetectability is therefore very important in steganography: applying a steganography method to an image should not leave any visually perceptible phenomena. To support this idea and to improve the robustness of watermarking methods, perceptual models, such as Watson’s visual model
[4]
, were proposed to describe the human visual system. Methods that use a human visual model in steganography are discussed in the next section.
Different approaches, such as statistics-preserving steganography, model-based steganography, and masking embedding as natural processing methods, exist for image steganography. In model-based steganography, a model is assumed for the cover image
[5]
. The embedding procedure changes image elements (pixel values or transform coefficients) so that they do not violate the assumed cover model. For example, Sallee
[6]
proposed a generalized Cauchy distribution (GCD) to model the distribution of each discrete cosine transform (DCT) coefficient and assumed that all covers obey this model. Using a model for the image statistics was a novel idea, but, unfortunately, steganalysis methods easily detect stego images generated by this method [7], [8].
While the model-based steganography approach attempts to model image statistics, visual models attempt to model the human visual system. We propose to use both visual and cover models for improving steganography techniques. In the proposed method, Watson’s visual model
[4]
is used to strengthen model-based steganography. In this scheme, Watson’s visual model prevents perceptual phenomena during the embedding phase. In addition, we model the DCT coefficients via the GCD.
First, the maximum acceptable changes are extracted based on Watson’s model [4], and embedding is restricted to the DCT AC coefficients that can tolerate such changes.
II. Related Work

The human visual system has been considered in JPEG image compression
[9]
. Based on Watson’s study
[4]
, this compression standard selects higher quantization coefficients for frequencies that are less sensitive to the human eye. Ahumada and Peterson proposed a luminance-based model to approximate the perceptual sensitivity of DCT coefficients to quantization errors
[10]
. Peterson and others psychophysically measured the smallest coefficient of each frequency that causes a visible phenomenon
[11]
. These coefficients provided threshold amplitudes for the DCT basis functions. Watson called this model the image-independent perceptual (IIP) approach and proposed the image-dependent perceptual (IDP) method [4]. The IDP model is an extension of the IIP model that is optimized for a specific image; as described in the next section, it provides luminance masking, contrast masking, and error pooling. Podilchuk and Zeng used this IDP model for watermarking still images
[12]
. The IIP model, which is independent of image content, is not a suitable choice for data embedding. Adaptive data hiding implies using local image content during the embedding process. Kankanhalli and others proposed such a method, which analyzes the noise sensitivity of every pixel
[13]
. Awrangjeb and others used the method of Kankanhalli and others
[13]
for their data-hiding method in the spatial domain
[14]
, since this method is applicable to this domain.
Cox and others dedicated one chapter of their book to perceptual watermarking
[5]
. They noted that a perceptual model generally considers sensitivity, masking, and pooling phenomena in a perceptive sense, such as vision. Watson’s visual model
[4]
properly modeled such phenomena in the DCT domain, while Kankanhalli and others
[13]
limited their model to edge, texture, smooth areas, and brightness sensitivity. Therefore, we chose Watson’s model to apply a human visual model in the DCT domain.
Human visual models usually measure the distance between the original image and the watermarked image. This distance can be used as a restriction on changes to coefficients, improving imperceptibility [15]–[19] and robustness of watermarking [20]–[24]. In addition, it has been used as a measure for determining suitable regions of an image in which to embed more data [25], [26], as a measure of distortion [26], and to simplify the computation of watermark insertion [27].
In some studies, the human visual system has been used with a neural network [28], [29] or a fuzzy system [30] to recover the watermarking signals from the watermarked images.
Watson’s visual model [4] models the human visual system in the DCT domain; some investigations model it in other transform domains [20]. Levicky and Foris combined three human visual models in the DCT, discrete wavelet transform (DWT), and spatial frequency domains [31]. Xie and others compared two human visual models, the DCT-based Watson model and the DWT-based pixel-wise masking model, and showed that Watson’s model was superior [32]. Pan and others analyzed the influence of luminance, frequency, and texture masking factors on cover distortion for embedding in the DWT domain [33]. Shu and others proposed two watermarking methods for the discrete curvelet and discrete contourlet transform domains [34], [35]; they concluded that these domains have higher capacity than the wavelet domain. Kim and others proposed a watermarking method based on the human visual system in the DWT domain that is robust against JPEG compression, smoothing, cropping, collusion, and multiple watermarking [36].
III. Watson’s Visual Model

A perceptual model usually measures the perceptibility of a hidden message. The human visual system is the basis for visual models such as Watson’s model
[4]
,
[5]
. Watson’s model accounts for frequency sensitivity, luminance masking, and contrast masking, all in the DCT domain.
- 1. Frequency Sensitivity

Watson’s visual model defines the frequency sensitivity in an 8 × 8 matrix t:
(1) t = [ 1.40  1.01  1.16  1.66   2.40   3.43   4.79   6.56
          1.01  1.45  1.32  1.52   2.00   2.71   3.67   4.93
          1.16  1.32  2.24  2.59   2.98   3.64   4.60   5.88
          1.66  1.52  2.59  3.77   4.55   5.30   6.28   7.60
          2.40  2.00  2.98  4.55   6.15   7.46   8.71  10.17
          3.43  2.71  3.64  5.30   7.46   9.62  11.58  13.51
          4.79  3.67  4.60  6.28   8.71  11.58  14.50  17.29
          6.56  4.93  5.88  7.60  10.17  13.51  17.29  21.15 ],

where lower frequencies (upper-left corner of the matrix) have smaller t-values.
- 2. Luminance Masking

The human eye’s sensitivity also depends on the average intensity of the block: in brighter blocks, larger changes in DCT coefficients can remain imperceptible. Watson’s model therefore modifies matrix t in each block k according to the block’s average luminance, as
(2) t_L[i, j, k] = t[i, j] (C_o[0, 0, k] / C_{0,0})^{a_T},

where C_o[0, 0, k] is the DC coefficient of the kth block of the cover image, C_{0,0} is the average of the DC coefficients over all cover-image blocks, and a_T is the constant 0.649 [4], [5].
- 3. Contrast Masking

The reduction in visibility of a change at a given frequency due to the energy already present at that frequency is called contrast masking. The contrast masking threshold s[i, j, k] is defined as
(3) s[i, j, k] = max{ t_L[i, j, k], |C_c[i, j, k]|^{w[i,j]} · t_L[i, j, k]^{1−w[i,j]} },

where w[i, j] is a constant between 0 and 1 (Watson used the value 0.7 for all i, j [4], [5]).
- 4. Pooling

A level of distortion perceptible in half of the experimental trials is called a just-noticeable difference (JND) [5]. Perceptual distance is defined to allow comparison of a cover image C_c with a stego image C_s: the perceptual distance d[i, j, k] between the (i, j)th DCT coefficients of the kth block of the two images is defined as
(4) d[i, j, k] = (C_s[i, j, k] − C_c[i, j, k]) / s[i, j, k],

which measures the distance between them as a fraction or multiple of one JND. The perceptual distances are pooled into a single perceptual distance D_wat(C_c, C_s), given by
(5) D_wat(C_c, C_s) = ( Σ_{i,j,k} |d[i, j, k]|^p )^{1/p},

where Watson proposed a value of four for p [4], [5].
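The thresholds and pooled distance of (2) through (5) can be sketched compactly; this is an illustrative implementation under the stated constants (a_T = 0.649, w = 0.7, p = 4), not the authors’ code:

```python
import numpy as np

# Hedged sketch of Watson's thresholds and pooled distance, eqs. (2)-(5).
# t is the 8x8 frequency sensitivity matrix of (1); Cc and Cs hold the
# DCT coefficients of cover and stego images, shape (num_blocks, 8, 8).

A_T, W, P = 0.649, 0.7, 4.0

def watson_distance(Cc, Cs, t):
    dc_mean = Cc[:, 0, 0].mean()                     # average DC, C_{0,0}
    # (2) luminance masking: one 8x8 threshold matrix per block
    tl = t[None, :, :] * (Cc[:, 0, 0][:, None, None] / dc_mean) ** A_T
    # (3) contrast masking
    s = np.maximum(tl, np.abs(Cc) ** W * tl ** (1.0 - W))
    # (4) per-coefficient distance in JND units
    d = (Cs - Cc) / s
    # (5) pooling with exponent p
    return (np.abs(d) ** P).sum() ** (1.0 / P)
```

A change of exactly one threshold in one coefficient yields a pooled distance of one JND, which is the property the proper-coefficient test of Section IV relies on.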
IV. The Proposed Method

In this section, we propose a steganography method that uses both a cover model and Watson’s visual model
[4]
for steganography. The cover model describes the cover-image statistics, while Watson’s visual model accounts for perceptual differences between the cover and stego images.
- 1. Model-Based Steganography in JPEG Images

The proposed method uses a model-based approach for embedding message bits in the DCT coefficients of JPEG images, based on the method proposed by Sallee [6] and reviewed above: each selected coefficient x is split into an unchanged part x_α and a changeable part x_β, the conditional distribution of X_β given X_α is estimated from the model distribution, and the new least significant portion x'_β is selected to obey this conditional distribution while conveying the intended message [6].
This approach skips zero-valued DCT coefficients because of the large number of such coefficients and the sensitivity of the image to their changes [37], [38]. In addition, no bit is embedded into the DC coefficients, since changing these coefficients results in more detectable statistical and visual artifacts [5], [37]. The remaining AC coefficients are modeled using a specific form of a GCD as a parametric density function, defined as
(6) P(u) = ((p − 1) / (2s)) (1 + |u/s|)^{−p},

where u is the coefficient value and p > 1, s > 0 are the model parameters. The corresponding cumulative distribution function is
(7) D(u) = (1/2)(1 + |u/s|)^{1−p} if u ≤ 0,  and  D(u) = 1 − (1/2)(1 + |u/s|)^{1−p} if u > 0.


- 2. Data-Embedding Algorithm

The pseudocode of the data-embedding algorithm is given in Fig. 2. In the first step of the algorithm, after reading a cover image and generating a message, the DCT of the image is computed as in the JPEG standard; each coefficient is then split into x_α and x_β as described above.

V. Experimental Results

In the following discussion, we use the term “model-based” for the steganography method proposed by Sallee
[6]
and the term “proposed method” for our method. The model-based steganography method embeds message bits so that the resulting DCT changes may or may not be perceptible to the human eye; in this work, we restrict the changes of DCT coefficients to visually imperceptible ones.
The algorithm was implemented in MATLAB on top of the model-based algorithm implementation. We also used a JPEG toolbox developed for manipulating the DCT coefficients of JPEG images. In all of the following experiments, the bin size, or step, is set equal to 2.

(8) SSIM(x, y) = ((2 μ_x μ_y + C_1)(2 σ_xy + C_2)) / ((μ_x² + μ_y² + C_1)(σ_x² + σ_y² + C_2)),

where μ_x and σ_x² are, respectively, the mean and variance of image x, defined as
(9) μ_x = (1/N) Σ_{i=1}^{N} x_i,

(10) σ_x² = (1/(N − 1)) Σ_{i=1}^{N} (x_i − μ_x)².

The variables μ_y and σ_y² are similarly defined. In addition, the covariance σ_xy is defined as
(11) σ_xy = (1/(N − 1)) Σ_{i=1}^{N} (x_i − μ_x)(y_i − μ_y),

and C_1 and C_2 are two small constants included to avoid instability; we set both equal to 0.001 in our experiments.
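Equations (8) through (11) can be sketched as a single global SSIM computation; note that practical SSIM is usually computed over local windows and averaged, so this single-window version is only an illustration:

```python
# Single-window sketch of SSIM, eqs. (8)-(11), on flat pixel lists x, y,
# with C1 = C2 = 0.001 as in the experiments.

C1 = C2 = 0.001

def ssim(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((xi - mx) ** 2 for xi in x) / (n - 1)          # (10)
    vy = sum((yi - my) ** 2 for yi in y) / (n - 1)
    cxy = sum((xi - mx) * (yi - my)
              for xi, yi in zip(x, y)) / (n - 1)            # (11)
    return ((2 * mx * my + C1) * (2 * cxy + C2)) / \
           ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2))      # (8)
```

Identical images score exactly 1; the score drops as means diverge or as the covariance falls below the individual variances, which is the sense in which SSIM captures structure rather than location of change.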
Table 1. Average, maximum, minimum, and standard deviation of SSIM vs. payload for stego images produced by the model-based and proposed methods.

| Method | Statistic | 0.01 bpac | 0.02 bpac | 0.03 bpac | 0.04 bpac | 0.05 bpac |
|---|---|---|---|---|---|---|
| Model-based | Average | 0.99997742 | 0.99995518 | 0.99993287 | 0.99991101 | 0.99988911 |
| | Maximum | 0.99999957 | 1 | 1 | 1 | 1 |
| | Minimum | 0.99949459 | 0.99895124 | 0.99853924 | 0.99793887 | 0.99739842 |
| | Standard deviation | 0.00002419 | 0.00004775 | 0.00007145 | 0.00009513 | 0.00011845 |
| Proposed | Average | 0.99997607 | 0.99995249 | 0.99992900 | 0.99990575 | 0.99988268 |
| | Maximum | 0.99999946 | 1 | 0.99999849 | 1 | 1 |
| | Minimum | 0.99949405 | 0.99900100 | 0.99842562 | 0.99794140 | 0.99730463 |
| | Standard deviation | 0.00002509 | 0.00004927 | 0.00007405 | 0.00004927 | 0.00012218 |


Table 2. Detection rates of steganalysis attacks [38]–[40] for different DCT-based steganography methods.

| Payload (bpac) | DCT-Mkv-548-ensCls: Model-based (%) | DCT-Mkv-548-ensCls: Proposed (%) | Higher-order statistics: Model-based (%) | Higher-order statistics: Proposed (%) |
|---|---|---|---|---|
| 0.01 | 58.42 | 58.83 | 49.95 | 50.07 |
| 0.02 | 66.58 | 67.23 | 49.97 | 49.95 |
| 0.03 | 73.71 | 74.60 | 49.95 | 49.98 |
| 0.04 | 80.08 | 81.04 | 50.13 | 49.91 |
| 0.05 | 85.04 | 86.09 | 50.04 | 49.99 |
| 0.1 | 97.20 | 97.43 | 50.01 | 50.03 |
| 0.2 | 99.76 | 99.78 | 50.06 | 50.11 |
| 0.3 | 99.95 | 99.95 | 50.15 | 50.15 |
| 0.4 | 99.98 | 99.97 | 50.32 | 50.35 |
| 0.5 | 99.98 | 99.97 | 50.50 | 50.42 |

VI. Conclusion and Future Work

Model-based steganography is an important approach to image steganography. Although the approach is based on a novel idea, steganalysis methods easily detect stego images generated with it. Watson’s visual model is a model of the human visual system in the DCT domain, and several data-hiding studies have used such human visual models so that stego images do not show any perceptible changes.
In this paper, we combined these two approaches to improve model-based steganography. We proposed utilizing Watson’s visual model for model-based steganography, which improves the model-based steganography method in terms of visual imperceptibility.
Experimental results show that the proposed method does not leave any perceptible change in the image, whereas the model-based method leaves many perceptible changes in its stego images. This is the purpose and the main contribution of the present work. We evaluated our method with the SSIM perceptual quality measure, which showed that, compared to the model-based method, the proposed method does not leave significant effects on images. Furthermore, we tested our method against two steganalysis attacks, DCT-Mkv-548-ensCls and higher-order statistics. The results showed that the two methods offer similar security.
Determining the optimal changes based on perceptual and statistical undetectability, and extending the proposed method to other transform domains, such as the wavelet and contourlet transforms, are directions for future work.
References

- L.M. Marvel, C.G. Boncelet, and C.T. Retter, “Spread Spectrum Image Steganography,” IEEE Trans. Image Process., vol. 8, no. 8, 1999, pp. 1075–1083. DOI: 10.1109/83.777088
- A. Cheddad, “Digital Image Steganography: Survey and Analysis of Current Methods,” Signal Process., vol. 90, no. 3, 2010, pp. 727–752. DOI: 10.1016/j.sigpro.2009.08.010
- J. Fridrich and R. Du, “Secure Steganographic Methods for Palette Images,” Proc. IH, Dresden, Germany, Sept. 29–Oct. 1, 1999, pp. 47–60.
- A.B. Watson, “DCT Quantization Matrices Visually Optimized for Individual Images,” Proc. SPIE, vol. 1913, San Jose, CA, USA, Jan. 31, 1993, pp. 202–216.
- I.J. Cox, Digital Watermarking and Steganography, 2nd ed., Burlington: Morgan Kaufmann, 2008.
- P. Sallee, “Model-Based Steganography,” Digit. Watermarking, LNCS, vol. 2939, Seoul, Rep. of Korea, Oct. 20–22, 2004, pp. 154–167.
- J. Fridrich, “Feature-Based Steganalysis for JPEG Images and its Implications for Future Design of Steganographic Schemes,” Proc. IH, LNCS, vol. 3200, Toronto, Canada, May 23–25, 2005, pp. 67–81.
- R. Bohme and A. Westfeld, “Breaking Cauchy Model-Based JPEG Steganography with First Order Statistics,” Proc. ESORICS, LNCS, vol. 3193, pp. 125–140.
- G. Wallace, “The JPEG Still Picture Compression Standard,” IEEE Trans. Consum. Electron., vol. 38, no. 1, 1992, pp. 18–34. DOI: 10.1109/30.125072
- A.J. Ahumada and H.A. Peterson, “Luminance-Model-Based DCT Quantization for Color Image Compression,” Proc. SPIE, Human Vis., Visual Process., Digital Display III, vol. 1666, San Jose, CA, USA, Aug. 27, 1992, pp. 365–374.
- H.A. Peterson, “Quantization of Color Image Components in the DCT Domain,” Proc. SPIE, Human Vis., Visual Process., Digital Display II, vol. 1453, San Jose, CA, USA, June 1, 1991, pp. 210–222.
- C.I. Podilchuk and W. Zeng, “Perceptual Watermarking of Still Images,” Proc. IEEE Multimedia Signal Process., Princeton, NJ, USA, June 23–25, 1997, pp. 363–368.
- Rajmohan, M.S. Kankanhalli, and K.R. Ramakrishnan, “Content-Based Watermarking of Images,” Proc. MULTIMEDIA, Bristol, UK, pp. 61–70.
- M. Awrangjeb and M. Kankanhalli, “Reversible Watermarking Using a Perceptual Model,” J. Electron. Imag., vol. 14, no. 1, pp. 1–8. DOI: 10.1117/1.1877523
- J.F. Delaigle, C. de Vleeschouwer, and B. Macq, “Watermarking Algorithm Based on a Human Visual Model,” Signal Process., vol. 66, no. 3, pp. 319–335. DOI: 10.1016/S0165-1684(98)00013-9
- M. Awrangjeb and M.S. Kankanhalli, “Lossless Watermarking Considering the Human Visual System,” Digit. Watermarking, LNCS, vol. 2939, Seoul, Rep. of Korea, Oct. 20–22, 2003, pp. 581–592.
- G. Zhu and N. Sang, “An Adaptive Quantitative Information Hiding Algorithm Based on DCT Domain of New Visual Model,” Proc. ISISE, vol. 1, Shanghai, China, Dec. 20–22, 2008, pp. 546–550.
- N. Ahmidi and A.L. Neyestanak, “A Human Visual Model for Steganography,” Proc. CCECE, Niagara Falls, ON, USA, May 4–7, 2008, pp. 1077–1080.
- R. Hu, F. Chen, and H. Yu, “Incorporating Watson’s Perceptual Model into Patchwork Watermarking for Digital Images,” Proc. ICIP, Hong Kong, China, Sept. 26–29, 2010, pp. 3705–3708.
- T.H. Chen, G. Horng, and S.H. Wang, “A Robust Wavelet-Based Watermarking Scheme Using Quantization and Human Visual System Model,” Pakistan J. Inf. Technol., vol. 2, no. 3, 2003, pp. 213–230.
- Y.J. Jung, M. Hahn, and Y.M. Ro, “Spatial Frequency Band Division in Human Visual System Based Watermarking,” Proc. IWDW, vol. 2613, Seoul, Rep. of Korea, Nov. 21–22, 2003, pp. 224–234.
- J. Porter and P. Rajan, “Image Adaptive Watermarking Techniques Using Models of the Human Visual System,” Proc. SSST, Cookeville, TN, USA, Mar. 5–7, 2006, pp. 354–357.
- M. Jayalakshmi, S.N. Merchant, and U.B. Desai, “Significant Pixel Watermarking Using Human Visual System Model in Wavelet Domain,” Proc. ICVGIP, Madurai, India, Dec. 13–16, 2006, pp. 206–215.
- O.H. Kwon, Y.S. Kim, and R.H. Park, “Watermarking for Still Images Using the Human Visual System in the DCT Domain,” Proc. ISCAS, vol. 4, Orlando, FL, USA, pp. 76–79.
- S.W. Jung, L.T. Ha, and S.J. Ko, “A New Histogram Modification Based Reversible Data Hiding Algorithm Considering the Human Visual System,” IEEE Signal Process. Lett., vol. 18, no. 2, 2011, pp. 95–98.
- X. Zhang and S. Wang, “Steganography Using Multiple-Base Notational System and Human Vision Sensitivity,” IEEE Signal Process. Lett., vol. 12, no. 1, 2005, pp. 67–70. DOI: 10.1109/LSP.2004.838214
- Y. Li, “An Adaptive Blind Watermarking Algorithm Based on DCT and Modified Watson’s Visual Model,” Proc. ISECS, Guangzhou, China, Aug. 3–5, 2008, pp. 904–907.
- D.C. Lou, J.L. Liu, and M.C. Hu, “Adaptive Digital Watermarking Using Neural Network Technique,” Proc. IEEE ICCST, Oct. 14–16, 2003, pp. 325–332.
- Y. Zhang, “Blind Watermark Algorithm Based on HVS and RBF Neural Network in DWT Domain,” WSEAS, vol. 8, no. 1, Stevens Point, WI, USA, pp. 174–183.
- S. Oueslati, A. Cherif, and B. Solaiman, “A Fuzzy Watermarking Approach Based on the Human Visual System,” Int. J. Image Process., vol. 4, no. 3, 2010, pp. 218–231.
- D. Levicky and P. Foris, “Human Visual System Models in Digital Image Watermarking,” Radioeng., vol. 13, no. 4, 2004, pp. 38–43.
- G. Xie, M. Swamy, and M.O. Ahmad, “Perceptual-Shaping Comparison of DWT-Based Pixel-Wise Masking Model with DCT-Based Watson Model,” Proc. IEEE ICIP, Atlanta, GA, USA, Oct. 8–11, 2006, pp. 1381–1384.
- F. Pan, “Steganography Based on Minimizing Embedding Impact Function and HVS,” Proc. ICECC, Zhejiang, China, Sept. 9–11, 2011, pp. 490–493.
- Z. Shu, “Watermarking Algorithm Based on Contourlet Transform and Human Visual Model,” Proc. ICESS, Sichuan, China, July 29–31, 2008, pp. 348–352.
- Z. Shu, “Watermarking Algorithm Based on Curvelet Transform and Human Visual Model,” Proc. ISECS, vol. 1, Nanchang, China, May 22–24, 2009, pp. 208–212.
- Y.S. Kim, O.H. Kwon, and R.H. Park, “Wavelet Based Watermarking Method for Digital Images Using the Human Visual System,” Proc. IEEE ISCAS, vol. 4, Orlando, FL, USA, pp. 80–83.
- A. Westfeld, “F5-A Steganographic Algorithm,” Proc. IH, Pittsburgh, PA, USA, Apr. 25–27, 2001, pp. 289–302.
- N. Provos, “Defending Against Statistical Steganalysis,” Proc. USENIX Security Symp., Washington, DC, USA, Aug. 13–17, 2001, pp. 323–335.
- T. Cover and J. Thomas, Elements of Information Theory, New York, NY: Wiley, 1991.
- “BOSSBase (v1.01),” http://exile.felk.cvut.cz/boss/BOSS Final/index.php?mode=VIEW&tmpl=materials
- Z. Wang, “Image Quality Assessment: From Error Visibility to Structural Similarity,” IEEE Trans. Image Process., vol. 13, no. 4, 2004, pp. 600–612. DOI: 10.1109/TIP.2003.819861
- T. Pevny and J. Fridrich, “Merging Markov and DCT Features for Multi-class JPEG Steganalysis,” Proc. SPIE, vol. 6505, San Jose, CA, USA, Jan. 28, 2007.
- J. Kodovsky and J. Fridrich, “Calibration Revisited,” Proc. ACM MM&Sec, Princeton, NJ, USA, Sept. 7–8, 2009, pp. 63–74.
- J. Kodovsky, J. Fridrich, and V. Holub, “Ensemble Classifiers for Steganalysis of Digital Media,” IEEE Trans. Inf. Forensics Security, vol. 7, no. 2, 2012, pp. 432–444. DOI: 10.1109/TIFS.2011.2175919
- H. Farid, “Detecting Hidden Messages Using Higher-Order Statistical Models,” Proc. ICIP, vol. 2, Rochester, NY, USA, Sept. 22–25, 2002, pp. 905–908.
- www.cs.dartmouth.edu/farid/downloads/research/steg.m
