The rapid popularization of camera-equipped smart phones has led to a number of new legal and criminal problems involving multimedia such as digital images, which makes cell phone source identification an important branch of digital image forensics. This paper proposes a classifier-combination-based source identification strategy for cell phone images. To identify images from cell phone models outside the training set of the multi-class classifier, a one-class classifier is applied in sequence within the framework. Two feature sets, color filter array (CFA) interpolation coefficient estimates and a multi-feature fusion set, are employed to verify the effectiveness of the classifier combination strategy. Experimental results demonstrate that, for both feature sets, our method achieves high source identification accuracy both for the cell phones in the training set and for the outliers.
1. Introduction
According to IDC's report [1], the worldwide smart phone market reached a total of one billion units in 2013 for the first time. The advantages of low-cost devices and easy access for amateur users have opened the smart phone floodgate. This means a new life style in which people share photos via Wi-Fi, Bluetooth, etc., and send images by MMS (Multimedia Message Service). As a result, in the forensics context, the fast-growing smart phone trend has brought an increasing amount of image evidence captured by cell phones. Therefore, it is important to check the integrity and authenticity of cell phone images presented as evidence in court. Digital image forensics, which aims at ballistic analysis and at exposing potential semantic manipulation of an image, has become necessary for legal purposes and security investigation [2].
In a practical blind digital forensics scenario, an analyst is assumed to gather clues and evidence from a given cell phone image without access to the device that created it [3]. An important piece of evidence is the identity of the source camera. Thus, source identification for cell phone images becomes a branch of digital image forensics, whose task is to determine the cell phone that was used to capture a given image.
The notion of a cell phone image source has two different meanings. One is the mobile model, denoting products from different manufacturers. The other refers to individual cell phones of the same model [4-8]. In this study, we focus on cell phone model identification.
In the area of cell phone model identification, several residual artifacts have been exploited in the previous literature. In [9], Celiktutan et al. explored three sets of source identification features, namely binary similarity measures, image quality measures and higher-order wavelet statistical features. They further compared three types of decision-level fusion schemes, including confidence-level, rank-level and abstract-level fusion, in conjunction with an SVM (Support Vector Machine) classifier [10]. Using 16 cell phones of 6 brands as experimental samples, the method achieved an overall average accuracy of 95.1%. Similar work was accomplished by Tsai et al. in [11]. Also, Sun et al. proposed a new method for source cell phone identification based on multi-feature fusion [12]. Features are selected by the SFFS (Sequential Floating Feature Selection) method from three sets, consisting of higher-order statistics, image quality measures and CFA interpolation coefficients. For 8 cell phones of 3 brands, an overall average accuracy of 95% was achieved. Furthermore, they examined classification both across different brands and within the same brand. For 3 cell phones from different brands, a perfect accuracy of 100% was achieved, although the number of experimental samples seemed somewhat insufficient. In the more difficult scenario of classifying 4 cell phones of the same brand (Nokia), the method proposed in [12] also achieved a good performance of 95%. Besides, the parameters of lateral chromatic aberration have also been used to identify the source cell phone, by maximizing the mutual information between different color components [13].
Because a cell phone camera uses a structured color filter array in front of the sensor to obtain a mosaic image rather than a full RGB color image, CFA interpolation is indispensable for recreating the missing color components of each pixel. The CFA interpolation artifacts, considered one of the most important components of the imaging pipeline, are thus widely exploited as a fingerprint for cell phone identification as well as digital camera identification. Chuang et al. [14] presented a study of cell phone camera model linkage based on CFA interpolation. Furthermore, they evaluated the dependency on the content of the training image collection via variance analysis. Gökhan and Ismail used SVD (Singular Value Decomposition) to obtain micro- and macro-statistical feature vectors introduced by CFA interpolation [15]. Most of these algorithms achieve classification accuracies of 90% or even higher for several cell phone brands.
Although there are differences between cell phones and digital cameras in terms of sensor, aperture, zoom and so on, the imaging pipeline is almost the same. Similar work can be found in the related area of digital camera identification in recent years, and most digital camera identification algorithms also perform well in cell phone identification [16-23]. A typical algorithm was proposed by Swaminathan et al. [23], using a linear model to estimate the CFA coefficients. The details of the method can be found in Section 2.1.
To the best of our knowledge, most cell phone and digital camera source identification methods extract multi-dimensional features and use Fisher's linear discriminant or SVM as the classifier. As a typical multi-class classification problem in pattern recognition, this implies the tacit assumption that the given image was captured by one of the camera models present in the training process, because these classifiers can only distinguish the classes included in the training model. This assumption is impractical because it is impossible to train on camera models covering every camera on the market. Consequently, an inevitable false classification occurs whenever an image was captured by a new, unknown device. In this paper, we call a device an "outlier" when it is a new, unknown device outside the training model. Though the assumption is impractical, the scenario may still be acceptable for digital camera source identification, because the number of mainstream digital cameras is fairly limited.
For cell phone source identification, however, the assumption of covering all cell phone models obviously cannot be satisfied. There are many more mainstream cell phone models than digital camera models. Besides, various copycat cell phones increase the difficulty of constructing training models. In this case, the previous algorithms based on traditional multi-class classification must be considered impractical for real-world source identification.
The scheme proposed in this paper differs from previous work in terms of unknown cell phone model identification. We present an MC (multi-class) and OC (one-class) classifier combination method to distinguish unknown mobile models in source identification for cell phone images.
The paper is organized as follows. Section 2 describes the CFA coefficient features and the multi-feature fusion set, consisting of image quality measures and higher-order statistics, extracted for the classifiers. The strategy of OC and MC classifier combination is presented and discussed in Section 3. The experiments are demonstrated in Section 4, where we report the performance of the proposed method for 24 different cell phone models. Finally, the paper is concluded in Section 5.
2. Feature Sets
Related prior studies on camera source identification have provided several efficient ways to determine the image source. These solutions can be divided into two classes: component-parameter-based methods and statistical-characteristics-based methods. Typical component-parameter-based methods can be found in [3, 14, 19-23]; they widely discuss information about the CFA pattern and interpolation coefficients and present high identification accuracy. The statistical-characteristics-based methods usually use one or several sets of characteristics, such as binary similarity measures [9], image quality metrics [10], higher-order wavelet statistical features [12], SVD features [15] and so on. In this paper, a CFA coefficient feature set proposed in [23] and a multi-feature fusion set proposed in [12] are used separately.
2.1 CFA Coefficient Feature Set
The image formation pipeline of the digital camera equipped on a cell phone can be described as in Fig. 1.

Fig. 1. Image formation pipeline
The rays from the real-world scene first pass through the lens and a sophisticatedly designed filter called the color filter array (CFA). A typical CFA pattern, the Bayer CFA, consists of one red, one blue and two green components in a 2×2 cell. The sensor then detects the sampled R/G/B component at each pixel location according to the CFA pattern. The output of the sensor is considered a mosaic image because there is only one color component at every single pixel. To rebuild the true-color image, the missing color components of each pixel are interpolated from the locally sampled data, which is called CFA interpolation. After that, post-processing such as white balancing or gamma correction is carried out, and finally the image is stored in a preset format such as JPEG. Obviously, CFA interpolation is an important step for maintaining image quality in the image formation pipeline, because 2/3 of the image data is rebuilt by the interpolation process. There are several different CFA interpolation algorithms with different performance [24, 25]. As a feature set characteristic of the camera brand, the CFA interpolation coefficients are considered an important parameter for identifying the camera source of an image.
In this paper, we use the non-intrusive algorithm proposed in [23] to estimate the interpolation coefficients as the feature vector. The CFA interpolation coefficients estimation algorithm consists of two parts. First, the interpolation coefficients are preliminarily estimated with a linear model. The pixels in the image are first divided into three categories according to the texture information:

Category 1 (significant horizontal gradient): H_{x,y} − V_{x,y} > T;
Category 2 (significant vertical gradient): V_{x,y} − H_{x,y} > T;
Category 3 (smooth region): |H_{x,y} − V_{x,y}| ≤ T.    (1)

H_{x,y} and V_{x,y} respectively denote the second-order gradient values in the horizontal and vertical directions, which can be computed as

H_{x,y} = |I_{x,y−2} + I_{x,y+2} − 2I_{x,y}|,    (2)

V_{x,y} = |I_{x−2,y} + I_{x+2,y} − 2I_{x,y}|,    (3)

and T is a suitably chosen threshold.
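The categorization can be sketched in a few lines of numpy. This is a minimal illustration, not the exact implementation of [23]: the gradient step of two pixels (matching same-color Bayer neighbors) and the default threshold value are our assumptions.

```python
import numpy as np

def classify_pixels(I, T=20):
    """Split pixels into three texture categories from second-order gradients.

    Sketch of the categorization described in the text; the step of 2 and
    the threshold T=20 are illustrative assumptions.
    """
    I = I.astype(float)
    # Horizontal / vertical second-order gradient magnitudes (borders skipped).
    H = np.abs(I[2:-2, :-4] + I[2:-2, 4:] - 2.0 * I[2:-2, 2:-2])
    V = np.abs(I[:-4, 2:-2] + I[4:, 2:-2] - 2.0 * I[2:-2, 2:-2])
    cat = np.full(H.shape, 3, dtype=int)  # category 3: smooth region
    cat[H - V > T] = 1                    # category 1: horizontal gradient dominates
    cat[V - H > T] = 2                    # category 2: vertical gradient dominates
    return cat
```

Applied per color component, this yields the nine pixel sets (three categories times three channels) used below.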
where I_{x,y} denotes the pixel value at location (x, y) in the image. The image pixels are thus finally divided into nine sets, according to the three categories in each of the R, G and B components. Suppose we have a matrix of the pixel values directly captured by the cell phone, denoted by A, of dimension N_{e} × N_{u}; the linear interpolation model can then be represented as

Ax = b,    (4)

where
b, of dimension N_{e} × 1, denotes the pixel values to be interpolated, and x, of dimension N_{u} × 1, stands for the interpolation coefficients to be estimated. Of course, this is an idealized model for the CFA interpolation, as there is always perturbation introduced by other image operations such as gamma correction, white balance and, especially, lossy JPEG compression. Considering the perturbation, the model should be revised as

(A + E)x = b + r,    (5)

where E and r denote the perturbations of A and b, respectively.
A solution for x under this model is obtained by solving the minimization problem

min_{E, r} ‖[E r]‖_{F}  subject to  (A + E)x = b + r,    (6)

where the Frobenius norm of the matrix [E r] is computed as

‖[E r]‖_{F} = ( Σ_{i} Σ_{j} |[E r]_{i,j}|² )^{1/2}.    (7)
After the CFA interpolation coefficients are preliminarily estimated, an interpolation error, computed as a weighted sum of the errors of the nine pixel categories, is obtained to evaluate the accuracy of the estimation. Detection statistics derived from the errors are also obtained as a sorting index to search over different CFA patterns. Considering the high complexity, we simplify the CFA pattern process in our method and use a typical diagonal Bayer pattern for the CFA. A full brute-force search over different CFA patterns can easily be implemented as an extension.
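The preliminary estimation step amounts to a classical total least squares problem, which can be sketched with numpy as follows. The function name is ours, and this is only the core solve; in the full algorithm one such system is built and solved per pixel category.

```python
import numpy as np

def tls_coefficients(A, b):
    """Estimate interpolation coefficients x from (A + E)x = b + r by
    minimizing ||[E r]||_F (classical total least squares via SVD)."""
    n = A.shape[1]
    Z = np.hstack([A, b.reshape(-1, 1)])  # augmented matrix [A b]
    _, _, Vt = np.linalg.svd(Z)
    v = Vt[-1]              # right singular vector of the smallest singular value
    return -v[:n] / v[n]    # x recovered from the (approximate) null direction
```

For a noiseless system b = Ax the smallest singular direction of [A b] is proportional to [x; −1], so the exact coefficients are recovered; with perturbations the same formula gives the Frobenius-norm-optimal estimate.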
2.2 Multi-Feature Fusion Set
As illustrated in Fig. 1, although the image pipeline is similar across different cell phones, the parameters of CFA interpolation and JPEG compression differ, which may cause differences in image quality as well as in the higher-order statistical features of the image. These tiny differences can hardly be detected by the naked eye, but they can serve as unique features of the image and thus provide evidence for identifying the source cell phone. A multi-feature fusion method proposed in [12] combines higher-order statistics and image quality measures to identify the image source of a cell phone.
Image quality measures have been used for steganalysis [26] and tampering detection [27]. Typically, 13-dimensional statistical features related to image quality measures are involved in the multi-feature fusion. Table 1 shows the three categories of image quality measures and their corresponding detailed descriptions.

Table 1. Image Quality Measures
The higher-order statistics have also proven to be an effective tool for steganalysis and tampering detection [28]. The statistical model for photographic images can be built upon several frequency-domain transformations. Without loss of generality, we use a wavelet-like decomposition as the model. The decomposition employs several separable quadrature mirror filters, which split the frequency space of the image into multiple scales and orientations, typically a vertical, a horizontal and a diagonal subband. For full-color RGB images, the three color channels are decomposed separately. V_{k}(i, j), H_{k}(i, j) and D_{k}(i, j) denote the vertical, horizontal and diagonal subbands at scale k, respectively. In each orientation, the mean, variance, skewness and kurtosis of the coefficients in each subband are used to construct the feature vector, as shown in (8) to (11):

μ = (1/n) Σ_{i,j} c(i, j),    (8)

σ² = (1/n) Σ_{i,j} (c(i, j) − μ)²,    (9)

s = (1/(nσ³)) Σ_{i,j} (c(i, j) − μ)³,    (10)

κ = (1/(nσ⁴)) Σ_{i,j} (c(i, j) − μ)⁴,    (11)

where c(i, j) denotes the coefficients of one subband and n is the number of coefficients in that subband.
These computations are applied to the three color channels of an image, generating a feature vector consisting of 36 features.
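The subband statistics can be sketched as below, using a one-level Haar decomposition as a simple stand-in for the quadrature-mirror-filter pyramid; the filter choice and function names are our assumptions for illustration.

```python
import numpy as np
from scipy.stats import skew, kurtosis

def haar_subbands(x):
    # One-level 2-D Haar split into vertical (V), horizontal (H) and diagonal (D)
    # detail subbands; a simple stand-in for the QMF decomposition in the text.
    x = x[: x.shape[0] // 2 * 2, : x.shape[1] // 2 * 2].astype(float)
    lo = (x[0::2] + x[1::2]) / 2.0          # row-pair lowpass
    hi = (x[0::2] - x[1::2]) / 2.0          # row-pair highpass
    V = (lo[:, 0::2] - lo[:, 1::2]) / 2.0   # vertical detail
    H = (hi[:, 0::2] + hi[:, 1::2]) / 2.0   # horizontal detail
    D = (hi[:, 0::2] - hi[:, 1::2]) / 2.0   # diagonal detail
    return V, H, D

def channel_stats(channel):
    feats = []
    for sb in haar_subbands(channel):
        c = sb.ravel()
        # the four statistics of (8)-(11)
        feats += [c.mean(), c.var(), skew(c), kurtosis(c)]
    return feats  # 4 statistics x 3 subbands = 12 features per channel

def rgb_features(img):
    # 12 features per channel x 3 channels = 36 features per image
    return [f for ch in range(3) for f in channel_stats(img[..., ch])]
```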
To suppress correlation within the feature sets, a feature selection algorithm is applied. Among the various feature selection algorithms, a simple and effective method is SFFS, which explores combinations of features through conditional forward and backward steps. For a specified feature dimension, SFFS selects the feature combination with the highest accuracy. Across all dimensions, a curve relating feature subsets to performance can be obtained and further used for feature selection. More details can be found in [12] and [29]. Following the work in [12], we use 19 effective features to construct the feature vector.
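For illustration, the selection loop can be sketched as below. This is a bare-bones version: the scoring callback stands in for cross-validated classifier accuracy, and the stopping rule is simplified relative to [12, 29].

```python
def sffs(score, n_features, k):
    """Minimal sequential floating forward selection sketch.

    `score(subset)` should return a validation accuracy for a tuple of
    feature indices; both the callback and the stopping rule are simplified.
    """
    selected = []
    while len(selected) < k:
        # Forward step: add the best remaining feature.
        best = max((f for f in range(n_features) if f not in selected),
                   key=lambda f: score(tuple(selected + [f])))
        selected.append(best)
        # Floating step: conditionally drop a feature (never the one just
        # added) if the reduced subset scores strictly better.
        improved = True
        while improved and len(selected) > 2:
            improved = False
            for f in [g for g in selected if g != best]:
                trial = [g for g in selected if g != f]
                if score(tuple(trial)) > score(tuple(selected)):
                    selected = trial
                    improved = True
                    break
    return selected
```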
3. Classifier Combination
Source identification of cell phone images is traditionally treated as a pattern recognition problem. The typical solution is: for several different classes with training samples as side information, we mark the classes with different labels and extract a distinguishing feature vector. By feeding a classifier with the feature vectors, a model is built to predict the best-matching label for a given new sample. In this methodology, the classifier usually constructs a linear boundary or a nonlinear hyperplane in a two- or high-dimensional space. A key assumption is therefore that the classifier has the side information of the training samples, including the class labels, and can assign a test sample only to labels it has already seen during training. Is this practical for cell phone source identification in a forensic setting?
Unfortunately, no. The task of cell phone source identification is to determine the source of an image, which means we do not know in advance how the image was obtained. The classification assumption is self-contradictory because it implies that the test image belongs to one of the training classes. In a more practical scenario, the forensic analyst builds a multi-class model from training image samples covering as large a set of cell phone models as he or she can obtain. Nevertheless, the test image could have been captured by any cell phone on the market. If the multi-class model is used directly to predict the category of the test image, an inevitable misclassification occurs whenever the test image comes from an outlier cell phone.
To address this issue, a combined classifier consisting of MC and OC classifiers is proposed. In the combination strategy, the multi-class classifier provides a tool that determines the best-matching label within the training model, while the one-class classifier exposes outliers of the training model. In other words, the MC classifier answers the question of which cell phone captured the test image, and the OC classifier is expected to answer whether the classification result of the MC classifier is correct.
The combination strategy of the MC and OC classifiers is illustrated in Fig. 2. Supposing we have an image data set consisting of image samples from N cell phone models, it is easy to obtain all of the OC classifier models M_{OC1}, M_{OC2}, ⋯, M_{OCN}. When we extract the feature vector of the test image, the MC classifier model, denoted M_{MC}, is first used to predict a best-matching label, denoted C_{i}. Then, the corresponding OC classifier model M_{OCi} is used to identify whether the test image was captured by that specific cell phone. A positive result confirms that the test image was captured by the cell phone, while a negative result exposes an unknown cell phone source.

Fig. 2. Combination strategy of MC and OC classifiers
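A compact sketch of this decision flow using scikit-learn is given below; the class name and the nu and gamma values are illustrative assumptions, not the exact configuration used in the experiments.

```python
import numpy as np
from sklearn.svm import SVC, OneClassSVM

class CombinedClassifier:
    """Sketch of the MC+OC strategy: an RBF multi-class SVM proposes the
    best-matching model, then that model's one-class SVM accepts or rejects it."""

    def __init__(self, nu=0.1, gamma='scale'):
        self.mc = SVC(kernel='rbf', gamma=gamma)
        self.nu, self.gamma = nu, gamma

    def fit(self, X, y):
        X, y = np.asarray(X), np.asarray(y)
        self.mc.fit(X, y)
        # One OC model per class, trained only on that class's samples.
        self.oc = {c: OneClassSVM(kernel='rbf', nu=self.nu, gamma=self.gamma)
                      .fit(X[y == c]) for c in np.unique(y)}
        return self

    def predict(self, x):
        x = np.atleast_2d(x)
        label = self.mc.predict(x)[0]           # best-matching model (M_MC)
        if self.oc[label].predict(x)[0] == 1:   # M_OCi confirms the decision
            return label
        return -1                               # rejected: unknown ("outlier") device
```

A sample far from every training class is first forced onto some label by the MC SVM, but the corresponding OC SVM then rejects it, yielding the outlier decision.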
An unavoidable fact of classifier combination is the propagation of errors. For the MC classifier, without loss of generality, we use P_{mc}^{i} to denote the misclassification ratio of class i, defined as

P_{mc}^{i} = N_{misc,mc} / N_{i},    (12)

where N_{misc,mc} denotes the number of samples of class i misclassified by the multi-class classifier, and N_{i} the number of samples belonging to class i. For the OC classifier, a false positive ratio P_{fp}^{i} and a false negative ratio P_{fn}^{i} are defined for each model as

P_{fp}^{i} = N_{ci} / N_{non-i},    (13)

P_{fn}^{i} = N_{misc,oc} / N_{i},    (14)

where N_{ci} denotes the number of samples wrongly classified as class i, N_{non-i} the number of samples NOT belonging to class i, and N_{misc,oc} the number of samples of class i misclassified by the one-class classifier. We evaluate and compare the performance of the traditional MC classifier strategy and the proposed classifier combination in terms of the misclassification ratio.
For previous work with only the MC classifier, the misclassification ratio for each class is obviously P_{mc}^{i} when the test image is indeed captured by one of the cell phones in the training set, so the average misclassification ratio is

P̄_{mc} = (1/N) Σ_{i=1}^{N} P_{mc}^{i}.    (15)

When the test image comes from an outlier cell phone, the misclassification ratio is trivially 100%, since the MC classifier must assign the image to some class in the training set.

We now discuss the misclassification ratio of the combined classifier strategy. When the test image source is included in the training model, the MC classifier misclassifies the test image in the first step with probability P_{mc}^{i} for each model; in that case the output of the OC classifier, whether a positive result for the wrong cell phone model or a negative result suggesting an outlier, is a misclassification as well. If the MC classifier gives a correct result, with probability 1 − P_{mc}^{i} for each model, the probability of misclassification becomes (1 − P_{mc}^{i}) · P_{fn}^{i}, according to the performance of the OC classifier. When the test image is an outlier of the training model, the ratio is simply the false positive ratio P_{fp}^{i}. Finally, the misclassification ratio for class i in the case of classifier combination is

P_{comb}^{i} = P_{mc}^{i} + (1 − P_{mc}^{i}) · P_{fn}^{i}.    (16)
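To make the propagation concrete, here is a toy computation with hypothetical rates; the numbers are illustrative only, not measured values from the experiments.

```python
# Hypothetical per-class rates, purely to illustrate the analysis above.
p_mc = 0.05  # MC misclassification ratio for one in-set class
p_fn = 0.08  # OC false negative ratio (true source wrongly rejected)
p_fp = 0.25  # OC false positive ratio (outlier wrongly accepted)

# Image from a trained model: MC errs, or MC is correct but OC rejects.
in_set_error = p_mc + (1 - p_mc) * p_fn
# Image from an outlier model: error only if the matched OC model accepts it.
outlier_error = p_fp

print(round(in_set_error, 3))  # 0.126
```

The combined in-set error (12.6%) is slightly worse than the MC classifier alone (5%), but the outlier error drops from a guaranteed 100% to the OC false positive ratio.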
The RBF (Radial Basis Function) kernel MC SVM [30] and OC SVM [31] are adopted in this study as the specific MC and OC classifiers. Other OC and MC classifiers are also applicable within our classifier combination framework.
4. Experimental Results and Analysis
An image data set containing 24 cell phone models from 9 manufacturers is used in our experiments. A brief description of the image samples from these cell phones is given in Table 2. For each cell phone, we collected 150 different image samples, for a total of 3600 samples. These images were collected under a variety of uncontrolled conditions, such as different resolutions, indoor/outdoor scenes, natural/artificial scenes, different compression quality factors, and so on. 17 cell phones (No. 1 to No. 17) are selected as in-model devices, for which the forensic analyst can access training samples to obtain one MC classifier model and 17 OC classifier models. 100 images from each of these cell phones, a total of 1700, are randomly selected as training samples, and the remaining 50 images from each of the 17 cell phones are used for testing. The remaining 7 cell phones are treated as outliers, meaning there is no prior knowledge about these devices; all 150 of their image samples are used for testing.
Table 2. Image data sets used in the experiments
The experimental results are as follows. Table 3 presents the source classification accuracy for all 24 cell phones, both in the training model and outliers, compared with the CFA-pattern-search-simplified algorithm [23] and the multi-feature fusion algorithm [12]. For the cell phones in the training model, we observe the anticipated deterioration of the results from 93.8% to 88.7% in terms of average identification accuracy using the CFA-pattern-search-simplified algorithm, because of the propagation of errors, as shown in Table 4. A similar deterioration is found for the multi-feature fusion algorithm. Though there is a reduction of nearly 6% in the average accuracy of our method compared with that in [23], we consider the reduction small and acceptable. The different performance of the proposed method across feature sets verifies that the CFA coefficient features are better than the multi-feature fusion set for camera source identification. Meanwhile, the methods in [23] and [12] are totally invalid for the 7 outliers, as expected, because their classifiers misclassify the outliers as cell phones in the training model. In contrast, we obtain average identification accuracies of 75.3% and 66.9% for the 7 outlier cell phones, as Table 4 shows. Over all 24 cell phones, our method also achieves higher average accuracies of 84.8% and 77.9%, compared with 66.4% and 63.8% achieved by the methods in [23] and [12]. The confusion matrices in Table 5 and Table 6 detail the experimental results of the methods in [23] and [12], which are the inputs of the OC classifiers in the combination strategy of the proposed method. The 17 columns correspond to the 17 cell phones in the training model, and the 24 rows correspond to all of the cell phones. The (i, j) element of the confusion matrix gives the percentage of images from cell phone i that are classified as belonging to cell phone j. The symbol "*" denotes a percentage of 0. The gray cells in Table 5 and Table 6 show the classification results of the 7 outlier cell phones: for image samples from these cell phones, the classification accuracy is 0 because misclassification inevitably occurs.
Table 3. Identification accuracy for all 24 cell phones
Table 4. Average accuracy comparison for 17 cell phones in the training model, 7 outlier cell phones and all 24 cell phones
Table 5. Confusion matrix of the method in [23]
Table 6. Confusion matrix of the method in [12]
In terms of time complexity, the proposed method is obviously costlier than the baselines [12] and [23], because it combines the MC classifier with several OC classifiers. For a fair comparison, we measure the time cost of the methods without the training process, since training can be done offline. The time cost of the proposed method thus consists of three components: feature extraction, multi-class classification and one-class classification. Compared with the corresponding baselines [12] and [23], the time costs of feature extraction and multi-class classification are exactly the same, while one-class classification is the additional cost. The experiments above were implemented in Matlab 2009 on a PC equipped with an Intel Core i7-5960X 3.0 GHz CPU and 32 GB RAM. Table 7 reports the segmented time costs of the proposed method compared with the baselines [12] and [23], for all 1900 test images. Identifying all of the test images takes 442 minutes and 1030 minutes for the methods in [12] and [23], respectively. For the proposed method, the corresponding time costs are 443 minutes and 1032 minutes, in other words, about 14 seconds and 33 seconds per test image.
Table 7. Time complexity of the identification
5. Conclusion
This paper proposed a classifier combination strategy for identifying the source cell phone of digital images. A framework of successive detections with an MC classifier and OC classifiers is used to obtain an acceptable average accuracy for cell phone models in the training model, together with a high average identification ratio for outlier cell phones. The classifier combination strategy is implemented with two effective source camera identification algorithms, using CFA interpolation coefficient estimation and multi-feature fusion as feature vectors. Experiments indicate that average accuracies of 88.7% and 75.3% with the CFA coefficient features, and 82.5% and 66.9% with multi-feature fusion, are achieved for cell phones in and out of the training model, respectively.
In the practical scenario of image source identification for cell phones, the classification of outliers is a significant but difficult task. The classifier combination strategy introduces an "outlier" label for image source identification. Though the strategy is feasible, we still plan to improve the performance of the classifier combination and to design new, ingenious combinations of classifiers for specific feature sets.
BIO
Bo Wang received his B.S. degree in electronic and information engineering, and his M.S. and Ph.D. degrees in signal and information processing, from Dalian University of Technology, Dalian, China, in 2003, 2005 and 2010, respectively. From 2010 to 2012, he was a postdoctoral research associate in the Faculty of Management and Economics at Dalian University of Technology. Since 2012, he has been an assistant professor in the School of Information and Communication Engineering at Dalian University of Technology. His current research interests focus on multimedia processing and security, such as digital image processing and forensics.
Yue Tan received her B.S. degree in photo communication from Jilin University, China, in 2013. She is currently pursuing the M.S. degree in information and communication engineering at Dalian University of Technology, Dalian, China. Her current research interests focus on source camera identification and tampering detection in the area of digital image forensics.
Meijuan Zhao received her B.S. degree in communication engineering from Dalian Nationalities University, Dalian, China, in 2014. She is currently pursuing the M.S. degree in information and communication engineering at Dalian University of Technology, Dalian, China. Her current research interests focus on source camera identification in the area of digital image forensics.
Yanqing Guo received his B.S. degree in electronic and information engineering, and his M.S. and Ph.D. degrees in signal and information processing, from Dalian University of Technology, Dalian, China, in 2002, 2004 and 2009, respectively. Since 2009, he has been with the School of Information and Communication Engineering at Dalian University of Technology, where he has been an associate professor since 2012. His current research interests focus on social media computing, digital multimedia processing and computer vision.
Xiangwei Kong received her B.S. degree in electronic engineering and her M.S. degree in electronic and communication engineering from Harbin Engineering University in 1985 and 1988, respectively, and her Ph.D. degree in management science and engineering from Dalian University of Technology in 2003. Since 2002, she has been a professor in the School of Information and Communication Engineering at Dalian University of Technology. Her current research interests include multimedia processing and security, content-based image retrieval, knowledge management and intelligence, information fusion and so on.
References

http://www.idc.com/getdoc.jsp?containerId=prUS24645514
Swaminathan A., Wu M., Liu K. J. R., "Nonintrusive component forensics of visual sensors using output images," IEEE Transactions on Information Forensics and Security, vol. 2, no. 1, pp. 91-106, 2007. DOI: 10.1109/TIFS.2006.890307
Lukáš J., Fridrich J., Goljan M., "Digital 'bullet scratches' for images," in Proc. of IEEE International Conference on Image Processing, pp. III-65-68, September 11-14, 2005.
Steinebach M., Ouariachi M., Liu H., Katzenbeisser S., "Cell phone camera ballistics: attacks and countermeasures," in Proc. of Electronic Imaging, Multimedia on Mobile Devices, vol. 7542, pp. B1-9, January 2010.
Steinebach M., Ouariachi M., Liu H., Katzenbeisser S., "On the reliability of cell phone camera fingerprint recognition," Digital Forensics and Cyber Crime, pp. 69-76, September 30-October 2, 2009.
Kang X., Li Y., Qu Z., Huang J., "Enhancing source camera identification performance with a camera reference phase sensor pattern noise," IEEE Transactions on Information Forensics and Security, vol. 7, no. 2, pp. 393-402, 2012. DOI: 10.1109/TIFS.2011.2168214
Celiktutan O., Avcibas I., Sankur B., Ayerden N. P., Capar C., "Source cell-phone identification," in Proc. of IEEE 14th Signal Processing and Communications Applications, pp. 1-3, April 17-19, 2006.
Celiktutan O., Sankur B., Avcibas I., "Blind identification of source cell-phone model," IEEE Transactions on Information Forensics and Security, vol. 3, no. 3, pp. 553-566, 2008. DOI: 10.1109/TIFS.2008.926993
Tsai M., Wang C., Liu J., Yin J., "Using decision fusion of feature selection in digital forensics for camera source model identification," Computer Standards & Interfaces, vol. 34, no. 3, pp. 292-304, 2012. DOI: 10.1016/j.csi.2011.10.006
Sun X., Dong L., Wang B., Kong X., You X., "Source cell-phone identification based on multi-feature fusion," in Proc. of International Conference on Image Processing, Computer Vision, and Pattern Recognition, pp. 590-596, 2010.
Lanh V., Emmanuel S., Kankanhalli M. S., "Identifying source cell phone using chromatic aberration," in Proc. of IEEE International Conference on Multimedia and Expo, pp. 883-886, July 2-5, 2007.
Chuang W., Wu M., "Semi non-intrusive training for cell-phone camera model linkage," in Proc. of IEEE International Workshop on Information Forensics and Security, pp. 1-6, December 12-15, 2010.
Gökhan G., Avcibas I., "Source cell phone camera identification based on singular value decomposition," in Proc. of IEEE International Workshop on Information Forensics and Security, pp. 171-175, December 6-9, 2009.
Kharrazi M., Sencar H., Memon N., "Blind source camera identification," in Proc. of IEEE International Conference on Image Processing, pp. 709-712, October 24-27, 2004.
Choi K. S., Lam E. Y., Wong K. Y., "Automatic source camera identification using the intrinsic lens radial distortion," Optics Express, vol. 14, no. 24, pp. 11551-11565, 2006. DOI: 10.1364/OE.14.011551
Meng F., Kong X., You X., "A new feature-based method for source camera identification," in Proc. of IFIP WG 11.9 International Conference on Digital Forensics, pp. 702-705, September 12-14, 2008.
Bayram S., Sencar H., Memon N., Avcibas I., "Source camera identification based on CFA interpolation," in Proc. of IEEE International Conference on Image Processing, pp. 69-72, September 11-14, 2005.
Bayram S., Sencar H., Memon N., "Identifying digital cameras using CFA interpolation," in Proc. of IFIP WG 11.9 International Conference on Digital Forensics, 2006.
Long Y., Huang Y., "Image based source camera identification using demosaicking," in Proc. of IEEE 8th Workshop on Multimedia Signal Processing, pp. 419-424, October 4-6, 2006.
Swaminathan A., Wu M., Liu K. J. R., "Non-intrusive forensic analysis of visual sensors using output images," in Proc. of IEEE International Conference on Acoustics, Speech and Signal Processing, vol. V, May 14-19, 2006.
Swaminathan A., Wu M., Liu K. J. R., "Nonintrusive component forensics of visual sensors using output images," IEEE Transactions on Information Forensics and Security, vol. 2, no. 1, pp. 91-106, 2007. DOI: 10.1109/TIFS.2006.890307
Park J., Chong J., "Edge-preserving demosaicing method for digital cameras with Bayer-like WRGB color filter array," KSII Transactions on Internet and Information Systems, vol. 8, no. 3, pp. 1011-1025, 2014. DOI: 10.3837/tiis.2014.03.017
Sung D., Tsao H., "Demosaicing using subband-based classifiers," Electronics Letters, vol. 51, no. 3, pp. 228-230, 2015. DOI: 10.1049/el.2014.1557
Avcıbaş I., Memon N., Sankur B., "Steganalysis using image quality metrics," IEEE Transactions on Image Processing, vol. 12, no. 2, pp. 221-229, 2003. DOI: 10.1109/TIP.2002.807363
Li Y., Wang B., Kong X., Guo Y., "Image tampering detection using no-reference image quality metrics," Journal of Harbin Institute of Technology, vol. 21, no. 6, pp. 51-56, 2014.
Pudil P., Ferri J., Novovicova J., "Floating search methods for feature selection with nonmonotonic criterion functions," in Proc. of 12th International Conference on Pattern Recognition, pp. 279-283, October 9-13, 1994.
Boser B. E., Guyon I., Vapnik V., "A training algorithm for optimal margin classifiers," in Proc. of the Fifth Annual Workshop on Computational Learning Theory, pp. 144-152, 1992.
Schölkopf B., Platt J. C., Shawe-Taylor J., Smola A. J., Williamson R. C., "Estimating the support of a high-dimensional distribution," Neural Computation, vol. 13, no. 7, pp. 1443-1471, 2001. DOI: 10.1162/089976601750264965