Multiview face detection has become an active research area in recent years. In this paper, a novel multiview human face detection algorithm based on an improved real Adaboost is presented. The real Adaboost algorithm is improved by a weighted combination of weak classifiers, and approximately optimal combination coefficients are obtained. We then prove that the role of the sample-weight adjustment and weak-classifier training methods is to guarantee the independence of the weak classifiers. A coarse-to-fine hierarchical face detector is proposed that combines the efficiency of Haar features with a pose-estimation phase based on our real Adaboost algorithm. The algorithm greatly reduces training time compared with the classical real Adaboost algorithm; in addition, it speeds up strong-classifier convergence and reduces the number of weak classifiers. For frontal face detection, experiments on the MIT+CMU frontal face test set yield a 96.4% correct rate with 528 false alarms; for multiview faces, a real-time test set yields a 94.7% correct rate. The experimental results verify the effectiveness of the proposed approach.
1. Introduction
Over the past decade, face recognition has emerged as an active research area in computer vision with numerous potential applications, including biometrics, surveillance, human-computer interaction, video-mediated communication, and content-based access of image and video databases.
Face detection is the first step in automated face recognition. Its reliability has a major influence on the performance and usability of the entire face recognition system. Given a single image or a video, an ideal face detector should identify and locate all faces present regardless of their position, scale, orientation, age, and expression. Furthermore, detection should be unaffected by extraneous illumination conditions and the image or video content. Face detection can be performed based on several cues: skin color, motion, facial or head shape, facial appearance, or a combination of these parameters [1].
Most successful face detection algorithms are appearance-based and do not use other cues. The processing is as follows: an input image is scanned at all possible locations and scales by a sub-window, and face detection is posed as classifying the pattern in the sub-window as either face or non-face. The face/non-face classifier is learned from face and non-face training examples using statistical learning methods.
Statistical methods have been widely adopted in face detection [2]. Moghaddam and Pentland [3-6] introduced the Eigenface method, where the probability of face patterns is modeled by the "distance-in-feature-space" (DIFS) and "distance-from-feature-space" (DFFS) criteria. Osuna et al. [7,8] presented an SVM-based approach to frontal-view face detection. Unlike the Eigenface method, where only the positive density is estimated, this approach seeks to learn the boundary between face and non-face patterns; after learning, only the 'important' examples located on the boundary are kept to build the decision function. Soulie et al. [9] described a face detection system using neural networks (NNs); they implemented a multi-modular architecture in which various rejection criteria are employed to trade off false recognition against false rejection. Sung and Poggio [10] also presented an NN-based face detection system, designing six positive prototypes (faces) and six negative prototypes (non-faces) in the hidden layer; supervised learning determines the weights from these prototypes to the output node. Rowley et al. [11] introduced an NN-based upright frontal face detection system in which a retinally connected NN examines small windows of an image and decides whether each window contains a face. This work was later extended to rotation-invariant face detection by adding an extra network that estimates the rotation of faces in the image plane [12,13].
Multiview face detection (MVFD) aims to detect upright faces in images with ±90° rotation-out-of-plane (ROP) pose changes. Rotation-invariant detection means detecting faces with 360° rotation-in-plane (RIP) pose changes [14].
Recently, Viola and Jones [15] presented an approach to fast face detection using simple rectangle features that can be computed efficiently from the so-called integral image; Adaboost and cascade methods are then used to train a face detector based on these features. This system prompted the development of more general systems, such as rotation-invariant frontal face detection and MVFD [16]. Li et al. [16,17] adopted similar but more general features that can be computed from block differences; they also proposed FloatBoost to overcome the monotonicity of sequential Adaboost learning. Existing work on MVFD includes Schneiderman et al.'s detector [18] based on a Bayesian decision rule and Li et al.'s [16] pyramid-structured detector, which was reported as the first real-time MVFD system. Hongming Zhang et al. [19] emphasized designing efficient binary classifiers by learning informative features through minimizing the error rate of an ensemble ECOC multi-class classifier. To meet the needs of various applications, a real-time rotation-invariant MVFD system is the ultimate goal. Although present MVFD methods can be applied to this problem by rotating images and repeating the procedure, the process is time-consuming and the false-alarm rate increases.
In this paper, we propose a novel improved real Adaboost algorithm for MVFD based on Schapire and Singer's confidence-rated Adaboost [20]. LUT weak classifiers are trained on processed Haar features (mirrored and rotated) by our real Adaboost algorithm. In addition, a pose estimator based on the confidence of the strong classifier is proposed, which uses the first four cascade layers to estimate the face pose. We tested our MVFD method on the CMU and MIT face databases; the experimental results show that our detection system outperforms discrete Adaboost and the original real Adaboost algorithm. In addition, it speeds up strong-classifier convergence, reduces the number of weak classifiers, and decreases detection time. The detection results and run time are satisfactory.
2. Improved Real Adaboost Learning Algorithm
 2.1 Real Adaboost Algorithm
For Adaboost learning, a complex nonlinear strong classifier $H_M(x)$ is constructed as a linear combination of $M$ simpler, easily constructible weak classifiers $h_m(x)$ with combining weights $\alpha_m$:
$$H_M(x) = \mathrm{sign}\Big(\sum_{m=1}^{M} \alpha_m h_m(x)\Big).$$
The Adaboost learning procedure is aimed at learning the sequence of best weak classifiers $h_m(x)$ and the combining weights $\alpha_m$.
It solves the following three fundamental problems: (1) learning effective features from a large features set; (2) constructing weak classifiers, each of which is based on one of the selected features; and (3) boosting the weak classifiers to construct a strong classifier.
The real Adaboost algorithm deals with a confidence-rated weak classifier, a map from the sample space $X$ to a real-valued space $R$ instead of a Boolean prediction. It has the following form [20], [21]:
● Given a dataset $S = \{(x_1, y_1), \ldots, (x_m, y_m)\}$, where $(x_i, y_i) \in X \times \{-1, +1\}$, the weak classifier pool $H$, and the number of weak classifiers to be selected $T$.
● Initialize the sample distribution $D_1(i) = 1/m$.
● For $t = 1, \ldots, T$:
(1) For each weak classifier $h$ in $H$ do:
a. Partition $X$ into several disjoint blocks $X_1, \ldots, X_n$.
b. Under the distribution $D_t$ calculate
$$W_l^j = P(x \in X_j,\, y = l) = \sum_{i:\, x_i \in X_j,\, y_i = l} D_t(i),$$
where $l = \pm 1$.
c. Set the output of $h$ on each $X_j$ as
$$h(x) = \frac{1}{2}\ln\frac{W_{+1}^j + \varepsilon}{W_{-1}^j + \varepsilon}, \quad x \in X_j,$$
where $\varepsilon$ is a small positive constant.
d. Calculate the normalization factor
$$Z = 2\sum_{j=1}^{n}\sqrt{W_{+1}^j W_{-1}^j}.$$
(2) Select the $h_t$ minimizing $Z$, i.e. $h_t = \arg\min_{h \in H} Z(h)$.
(3) Update the sample distribution
$$D_{t+1}(i) = D_t(i)\exp\big(-y_i h_t(x_i)\big),$$
and normalize $D_{t+1}$ to a probability distribution function.
● The final strong classifier $H$ is
$$H(x) = \mathrm{sign}\Big(\sum_{t=1}^{T} h_t(x)\Big).$$
It can be seen that steps b and c define the output of the weak classifier, so all that is left to the weak learner is to partition the domain $X$.
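To make the round structure concrete, one run of the domain-partitioning real Adaboost above can be sketched in Python as follows. This is a minimal illustrative sketch, not the paper's implementation: the array layout, function names, and the use of precomputed bin indices are our own assumptions.

```python
import numpy as np

def train_real_adaboost(bins, y, T, n_bins=8, eps=1e-5):
    """Schapire-Singer real Adaboost with domain-partitioning weak classifiers.

    bins : (n_features, n_samples) int array, the partition index (block X_j)
           of each sample under each candidate weak classifier.
    y    : (n_samples,) array of labels in {-1, +1}.
    """
    n_features, n_samples = bins.shape
    D = np.full(n_samples, 1.0 / n_samples)    # sample distribution D_1
    strong = []                                # selected (feature, LUT) pairs
    for _ in range(T):
        best = None
        for k in range(n_features):
            # W_l^j: total weight of class-l samples falling in block X_j
            W_pos = np.bincount(bins[k], weights=D * (y == 1), minlength=n_bins)
            W_neg = np.bincount(bins[k], weights=D * (y == -1), minlength=n_bins)
            Z = 2.0 * np.sqrt(W_pos * W_neg).sum()   # normalization factor
            if best is None or Z < best[0]:
                # confidence-rated output on each block X_j
                c = 0.5 * np.log((W_pos + eps) / (W_neg + eps))
                best = (Z, k, c)
        Z, k, c = best                          # h_t = argmin_h Z(h)
        strong.append((k, c))
        D *= np.exp(-y * c[bins[k]])            # update sample distribution
        D /= D.sum()                            # renormalize to a pdf
    return strong

def predict(strong, bins):
    """Strong classifier H(x) = sign(sum_t h_t(x))."""
    score = sum(c[bins[k]] for k, c in strong)
    return np.sign(score)
```

Note that each weak classifier is fully described by its partition plus one confidence value per block, which is exactly the look-up-table form used later in Section 3.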
 2.2 Improved Real Adaboost Algorithm
The real Adaboost algorithm proposed by Schapire et al. [20] extends the discrete two-valued criterion to a continuous confidence output. Every weak classifier uses the confidence-rated output above as its judgment function, so the core problem is to determine a partition of the dataset. We therefore improve the partitioning of the dataset and the control of the weight adjustment.
● Given a dataset $S = \{(x_1, y_1), \ldots, (x_m, y_m)\}$, where $(x_i, y_i) \in X \times \{-1, +1\}$, the weak classifier pool $H'$, the number of weak classifiers to be selected $T$, and the number of candidate features $s$; three weight parameters $w_{base}$, $w_{factor}$, $w_{index}$ are given.
● Initialization:
(1) Initialize the sample distribution $D'_1(i) = 1/m$.
(2) Initialize the gray-value interval distribution of the Haar features: pre-process each Haar feature corresponding to a weak classifier and divide its range evenly into $n$ sub-ranges. A partition of the range corresponds to a partition of $X$: $X_1^k, \ldots, X_n^k$, $k = 1, 2, \ldots, s$.
● For $t = 1, \ldots, T$:
(1) For each weak classifier $h'$ in $H'$ do:
a. Under the distribution $D'_t$ calculate
$$W_l'^{\,jk} = \sum_{i:\, x_i \in X_j^k,\, y_i = l} D'_t(i),$$
where $l = \pm 1$, $k = 1, \ldots, s$, $j = 1, \ldots, n$.
b. Set the output of $h'$ on each $X_j^k$ under the sample weights as
$$h'(x) = \frac{1}{2}\ln\frac{W_{+1}'^{\,jk} + \varepsilon}{W_{-1}'^{\,jk} + \varepsilon}, \quad x \in X_j^k,$$
where $\varepsilon$ is a small positive constant.
c. Calculate the normalization factor
$$Z' = 2\sum_{j=1}^{n}\sqrt{W_{+1}'^{\,jk} W_{-1}'^{\,jk}}.$$
(2) Select the $h'_t$ minimizing $Z'$.
(3) Update the sample distribution with the training weight $\beta_t$, given by Equation (16) in terms of $w_{base}$, $w_{factor}$ and $w_{index}$:
$$D'_{t+1}(i) = D'_t(i)\exp\big(-y_i \beta_t h'_t(x_i)\big).$$
(4) Normalize $D'_{t+1}$ to a probability distribution function.
● The final strong classifier $H$ is
$$H(x) = \mathrm{sign}\Big(\sum_{t=1}^{T} \beta_t h'_t(x) - b\Big), \qquad (17)$$
where $b$ is a threshold whose default is zero. The confidence of $H$ is defined as
$$\mathrm{Conf}(x) = \sum_{t=1}^{T} \beta_t h'_t(x). \qquad (18)$$
 2.3 Analysis of Improved Real Adaboost Algorithm
The original real Adaboost algorithm uses a local optimality criterion, and its convergence is better than that of the discrete Adaboost algorithm, but its time complexity is still large. Let $N$ denote the number of training samples, $M$ the number of candidate features, and $T$ the number of weak classifiers in the strong classifier. The time complexity of selecting one weak classifier is $O(M \cdot N^2)$, and the time complexity of training the strong classifier is $O(T \cdot M \cdot N^2)$.
The improved real Adaboost algorithm we propose uses the weak classifiers $h$ to divide the sample space $X$ evenly into $n$ sub-ranges; the space division $X_1^k, \ldots, X_n^k$ corresponding to each $h$ is obtained once, so the training sample space no longer needs to be re-partitioned when selecting the best weak classifier at each step. Only the updated weight distribution $D'_t$ is used, and $W_l'^{\,j}$ is calculated under $D'_t$ by addition. Thus the cost per weak classifier depends only on accumulating the training-sample weights by addition. The time complexity of selecting one weak classifier becomes $O(M \cdot N)$, and that of training the strong classifier becomes $O(T \cdot M \cdot N)$; the training speed is improved by a factor of $O(N)$.
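A sketch of where the $O(N)$ saving comes from: the feature responses are quantized into the fixed sub-ranges once, and every later boosting round only accumulates sample weights by addition over those fixed bin indices. The function names and normalization step below are illustrative assumptions, not the paper's code.

```python
import numpy as np

def prepartition(features, n_bins=8):
    """Quantize raw Haar responses ONCE into n fixed sub-ranges.

    features : (M, N) array of raw responses, normalized per feature to [0, 1].
    Returns the fixed partition indices X_1..X_n for every feature and sample.
    """
    f = (features - features.min(axis=1, keepdims=True)).astype(float)
    f /= f.max(axis=1, keepdims=True) + 1e-12       # normalize each row to [0, 1)
    return np.minimum((f * n_bins).astype(int), n_bins - 1)

def round_statistics(bin_idx, y, D, n_bins=8):
    """W'_l^j for one feature under the current distribution D: additions only.

    This is the per-round work, O(N) per feature instead of re-partitioning
    the sample space, hence O(M*N) per round overall.
    """
    W_pos = np.bincount(bin_idx, weights=D * (y == 1), minlength=n_bins)
    W_neg = np.bincount(bin_idx, weights=D * (y == -1), minlength=n_bins)
    return W_pos, W_neg
```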
The real Adaboost algorithm is improved here through weight adjustment and pre-partitioning of the dataset. Reference [22] proved that pre-partitioning the dataset is practicable; we now prove the same for the control of the weight adjustment. $h_t(x)$ is defined on the partition $S = S_1 \cup S_2 \cup \ldots \cup S_n$, and $W_l^j$ is defined as before. We define the group of random variables $r_t = y\, h_t(x)$, with mean value $\mu_t$ and variance $\sigma_t^2$. The strong classifier of the real Adaboost algorithm is the plain sum of all weak classifiers; we improve it into a weighted sum, defining the strong classifier as $H(x) = \mathrm{sign}\big(\sum_{t=1}^{T} \beta_t h_t(x)\big)$, with $\beta_t$ given by Equation (16). Let $R = \sum_{t=1}^{T} \beta_t r_t$, whose mean value and variance are $\mu = \sum_t \beta_t \mu_t$ and $\sigma^2 = \sum_t \beta_t^2 \sigma_t^2$; then the theorem below can be proved.
THEOREM: When the $h_t(x)$ are independent, $\mu > 0$ and the combination coefficients are $\beta_t = \mu_t/\sigma_t^2$, the error rate of $H(x)$ satisfies $\varepsilon \le \sigma^2/\mu^2 = 1/\theta$ with $\theta = \sum_{t=1}^{T} \mu_t^2/\sigma_t^2$; when $T$ is very large, if $\mu_t/\sigma_t$ is bounded but $\theta$ is large, these combination coefficients are approximately the best.
PROOF: The error rate of $H(x)$ can be regarded as the probability of $R \le 0$, namely $\varepsilon = P_r[R \le 0]$, since $H(x)$ errs if and only if $y \sum_t \beta_t h_t(x) \le 0$, which is equivalent to $R \le 0$. Writing the probability density function of $R$ as $g(r)$, and noting that $\big((r-\mu)/\mu\big)^2 \ge 1$ for $r \le 0$ when $\mu > 0$:
$$\varepsilon = \int_{-\infty}^{0} g(r)\,dr \le \int_{-\infty}^{\infty} \Big(\frac{r-\mu}{\mu}\Big)^2 g(r)\,dr = \frac{\sigma^2}{\mu^2}. \qquad (20)$$
By the Cauchy-Schwarz inequality,
$$\frac{\sigma^2}{\mu^2} = \frac{\sum_t \beta_t^2 \sigma_t^2}{\big(\sum_t \beta_t \mu_t\big)^2}$$
attains its minimum when
$$\beta_t = \frac{\mu_t}{\sigma_t^2}; \qquad (21)$$
substituting (21) into (20), we obtain $\sigma^2/\mu^2 = 1/\theta$.
So far, the first half of the theorem is proved. Next, we prove that when $T \to \infty$, the minimum point of $\sigma^2/\mu^2$ is also the minimum point of $\varepsilon = P_r[R \le 0]$. When $\beta_t = \mu_t/\sigma_t^2$, both the mean value and the variance of $\beta_t r_t$ are $\mu_t^2/\sigma_t^2$, so $\mu = \sigma^2 = \theta$ and $\sigma^2/\mu^2 = 1/\theta$. As $\mu_t/\sigma_t$ is bounded, on the basis of the central limit theorem, when $T \to \infty$, $R$ follows a normal distribution with mean value $\theta$ and variance $\theta$, so $(R - \theta)/\sqrt{\theta}$ follows the standard normal distribution. For a random variable $Y$ belonging to the standard normal distribution, $P_r[Y \le -v]$ is a monotone decreasing function of $v$. When $T \to \infty$ and $\theta \to \infty$,
$$\varepsilon = P_r[R \le 0] = P_r\Big[\frac{R-\theta}{\sqrt{\theta}} \le -\sqrt{\theta}\Big],$$
so $\varepsilon$ is minimal when $\sigma^2/\mu^2 = 1/\theta$ is minimal.
This shows that the mean-to-variance ratio can be regarded as a normalization, because that ratio for $y_i \beta_t h_t(x_i)$ equals 1; the theorem indicates that the confidences of the weak classifiers combine better under this normalization condition, and the weight $\beta_t$ should therefore be adjusted in proportion to $\mu_t/\sigma_t^2$.
3. Haar Feature for MultiView Face Detection
The rectangular masks used for visual object detection are rectangles tessellated by smaller black and white rectangles. These masks are designed in correlation with the visual recognition task to be solved and are known as Haar-like wavelets; convolving them with a given image produces Haar-like features. Viola [15] used four features (Fig. 1 (a)-(d)) for face detection, and these features performed well on faces. Fasel [23] proposed a new one (see Fig. 1 (e)).
Face detection features proposed by Viola and Fasel: (a), (b) edge features, (c) line feature, (d) diagonal feature, (e) center-surround feature.
This study aims at multiview face detection (MVFD), namely detecting upright faces in images with ±90° rotation-out-of-plane (ROP) changes, and rotation invariance, which means detecting faces with 360° rotation-in-plane (RIP) pose changes. According to the ROP angle, the 180° range is divided into [-90°, -75°], [-75°, -45°], [-45°, -15°], [-15°, +15°], [+15°, +45°], [+45°, +75°], and [+75°, +90°]. According to the RIP angle, faces are divided into twelve categories, each covering 30°. So there are in total 7×12 view categories, corresponding to 84 detectors. Based on the Adaboost algorithm, 84 face detectors would have to be trained for these 84 categories of multiview faces, an enormous training effort. Since Haar features can be flipped horizontally and rotated by 90°, some detectors can be generated from the original ones; see Fig. 2 and Table 1.
Geometric Transformation of Haar Feature (“R”means Rotate 90° and “M”means Mirror Transform)
Mirror and rotate detectors.
The original detectors of the frontal view are the 60° and 90° ones. The original detectors of the half-profile view are the left half-profiles, i.e., the 60°, 90°, and 120° ones; the full-profile situation is the same as the half-profile one.
According to the analysis above, 7×12 = 84 categories of classifiers would originally have to be built, but only 4×3 = 12 are needed now. "4" means the four of the seven ROP angle regions that cannot be obtained by horizontal flipping, and "3" means covering at least 0°, 30°, and 60° of the twelve RIP categories. The other cases can be obtained by the Haar-feature transformations introduced above.
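The mirror and 90° rotation can be applied directly to the rectangle description of a Haar feature. A minimal sketch, assuming a hypothetical (x, y, w, h, weight) rectangle encoding inside a square W×W detection window (the encoding is ours, not the paper's):

```python
def mirror(rects, W):
    """Flip a Haar feature horizontally inside a window of width W: x -> W - x - w."""
    return [(W - x - w, y, w, h, s) for (x, y, w, h, s) in rects]

def rotate90(rects, W):
    """Rotate a Haar feature 90 degrees clockwise inside a W x W square window;
    widths and heights swap, and the new x comes from the old y."""
    return [(W - y - h, x, h, w, s) for (x, y, w, h, s) in rects]
```

Applying `mirror` twice, or `rotate90` four times, recovers the original feature, which is what makes the remaining view detectors derivable from the 12 trained ones.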
The Adaboost algorithm is a learning algorithm that selects a set of weak classifiers from a large hypothesis space to construct a strong classifier. Basically, the performance of the final strong classifier originates in the characteristics of its weak hypothesis space. In [12], threshold weak classifiers are used as the hypothesis space input to the boosting procedure; the main disadvantage of the threshold model is that it is too simple to fit complex distributions. Therefore, in this paper we use a real-valued LUT weak classifier [24] in order to apply the real Adaboost algorithm.
Assuming $f_{Haar}$ is the Haar feature and $f_{Haar}$ has been normalized to $[0, 1]$, the range is divided evenly into $n$ sub-ranges; the $j$-th LUT item corresponds to the sub-range
$$bin_j = \Big[\frac{j-1}{n}, \frac{j}{n}\Big), \quad j = 1, \ldots, n.$$
A partition of the range corresponds to a partition of $X$. Thus, the weak classifier can be defined as: if $f_{Haar}(x) \in bin_j$, then
$$h(x) = \frac{1}{2}\ln\frac{W_{+1}^j + \varepsilon}{W_{-1}^j + \varepsilon},$$
where $l = \pm 1$, $j = 1, \ldots, n$. Given the characteristic function
$$B_j(u) = \begin{cases} 1, & u \in bin_j, \\ 0, & \text{otherwise}, \end{cases} \quad j = 1, \ldots, n,$$
the LUT weak classifier can be formally expressed as
$$h(x) = \sum_{j=1}^{n} c_j\, B_j\big(f_{Haar}(x)\big).$$
An LUT classifier can approximate almost any kind of probability distribution. Once the size of the sample sub-window is fixed, all candidate Haar features are fixed as well; each Haar feature corresponds to one LUT weak classifier, and together they make up our weak classifier space $H$.
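Evaluating such an LUT weak classifier reduces to a single table look-up. A minimal sketch under the definitions above (the function name is illustrative):

```python
def lut_weak_classifier(lut):
    """Build h(x) from a trained look-up table: one confidence c_j per bin_j."""
    n = len(lut)
    def h(f_haar):
        # f_haar is the (assumed already normalized) Haar response in [0, 1];
        # bin_j covers [(j-1)/n, j/n), with f_haar = 1.0 folded into the last bin
        j = min(int(f_haar * n), n - 1)
        return lut[j]
    return h
```

For example, with `lut = [-1.2, -0.3, 0.4, 2.1]`, responses near 0 map to a strongly negative (non-face) confidence and responses near 1 to a strongly positive one.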
Each layer of the cascade is a strong classifier trained by the improved real Adaboost algorithm proposed above. We set the threshold $b$ of each layer as in Equation (17) so that 99.9% of the faces pass it while as many counterexamples as possible are rejected. The classifiers in later layers are more complex, containing more LUT-based weak classifiers; therefore, they have stronger classification capability.
The improved real Adaboost algorithm performs better than the discrete Adaboost algorithm in our cascade classifier, since real Adaboost outputs a continuous confidence value (Equation (18)) rather than a binary one, and our LUT weak classifier works especially well with it. The training algorithm is as follows:
● Set the maximum false positive rate per layer as $f$, the minimum passing rate per layer as $d$, and the overall false positive rate target as $F_{target}$. The positive training set is Pos and the negative training set is Neg.
● Initialize: $F_1 = 1$, $i = 1$.
● While $F_i > F_{target}$:
1) Train the $i$-th layer using Pos and Neg; set the threshold $b$ so that the false positive rate $f_i$ is lower than $f$ and the passing rate is higher than $d$.
2) $F_{i+1} = F_i \times f_i$; $i = i + 1$; Neg ← ∅.
3) If $F_{i+1} > F_{target}$, scan non-face images with the current cascade classifier and collect all false positives into Neg.
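The bootstrapped cascade loop above can be sketched as follows; `train_layer` and `scan_negatives` are hypothetical callbacks standing in for the per-layer boosting and the false-positive collection (all names are ours, not the paper's):

```python
def train_cascade(pos, neg, f_max, d_min, F_target, train_layer, scan_negatives):
    """Bootstrapped cascade training loop.

    train_layer(pos, neg, f_max, d_min) -> one boosted layer whose threshold b
        is set so that its false positive rate f_i <= f_max and pass rate >= d_min.
    scan_negatives(cascade) -> false positives of the current cascade,
        collected from face-free images (the bootstrap step).
    """
    cascade, F = [], 1.0                     # F_1 = 1
    while F > F_target and neg:
        layer = train_layer(pos, neg, f_max, d_min)
        cascade.append(layer)
        F *= f_max                           # F_{i+1} = F_i * f_i (worst case f_i = f)
        if F > F_target:
            neg = scan_negatives(cascade)    # refill Neg with hard non-faces
    return cascade
```

Because each layer multiplies the overall false positive rate by at most `f_max`, the loop needs only about `log(F_target) / log(f_max)` layers to reach the target.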
4. Pose Estimation based on Cascade Classifier
Ideally, the trained face detector would scan all candidate sub-windows in a linear detection process, but that takes too long for real-time performance. So we present a strategy for improving timeliness: pose estimation.
Rowley first introduced pose estimation in his ANN-based multiview face detection [25]. He designed sub-classes according to different face poses and trained an ANN over these classes, called the estimator. When the system decides whether a sub-window of the input image is a face, it first estimates the pose using the pose estimator and then sends the sub-window to the corresponding face detector for further processing. Obviously, the pose estimator directly influences the detection results.
In our study, we do not train a separate pose estimator; instead, we use the confidence of the strong classifier defined by Equation (18). Suppose there are $d$ view-based detectors and each detector has $n$ layers. Writing the confidence of the $j$-th layer of the $i$-th detector as $\mathrm{Conf}_i^{(j)}$, $i = 1, \ldots, d$, $j = 1, \ldots, n$, the confidence of the first $k$ layers can be expressed as:
$$\mathrm{Conf}_i^{(k)}(x) = \sum_{j=1}^{k} \mathrm{Conf}_i^{(j)}(x).$$
Then the pose estimator over the first $m$ layers can be defined as:
$$\mathrm{pose}(x) = \arg\max_{1 \le i \le d} \mathrm{Conf}_i^{(m)}(x).$$
In this process, $x$ is regarded as a non-face if it has been rejected within the first $m$ layers, and no further pose estimation is needed. Unlike Rowley's method, our pose estimator is not separate from face detection; therefore it introduces no extra computation.
Fig. 3 illustrates the proposed pose estimation procedure.
Pose estimation framework
In this study, there are four view-based detectors; the cascade classifier contains 16 layers, and the first four layers are used to estimate the face pose.
5. Experimental Results and Analysis
The multiview face detection system we designed runs on an Intel(R) Core(TM) 2 CPU (2.93 GHz) with a web camera, under Windows 7 and Visual Studio 2008.
Our face datasets include the MIT-CBCL database and the CMU face database, which contain frontal and profile faces.
 5.1 Comparison of Convergence on Adaboost algorithms
 A. Experiment for Convergence of Single Strong Classifier
To compare the convergence of the original real Adaboost and the improved real Adaboost algorithm of this paper, we designed an experiment on a single strong classifier. The training set includes 3500 faces and 10000 non-faces; training stops when the detection rate reaches 99.9%. The dynamic weight coefficients of the improved real Adaboost algorithm are set as $W_{base}$ = 1.5, $W_{factor}$ = 0.5, $W_{index}$ = 2.0. The experimental result is shown in Fig. 4; the "detection rate" refers to the training set.
The performance curves in Fig. 4 show that the convergence of the improved real Adaboost algorithm is better than that of the original real Adaboost algorithm. Our real Adaboost meets the stopping criterion with 31 weak classifiers, whereas the original needs 67.
Comparison of the two Adaboost algorithms
 B. Experiment for Cascade Classifier
Under the same training conditions, we ran experiments on discrete Adaboost, original real Adaboost, and our algorithm to compare the performance of the three cascade classifiers. 3500 face pictures resized to 20 × 20 were selected as the training set, and the bootstrapping method was used to select 10000 non-face samples from 500 pictures containing no human faces. 127 pictures (including 332 faces) were used as the test set. Each algorithm uses the same 5000 Haar-like features; Table 2 shows the detailed performance comparison.
Performance Comparison of Three Adaboost Algorithms
From Table 2, our algorithm performs better than the discrete Adaboost algorithm, and its detection time is 8 ms shorter than that of the original real Adaboost algorithm. The number of cascade layers is larger than for the original real algorithm: because the stopping criterion is based on detection rate, each layer reaches a high detection rate with few weak classifiers, and since each layer rejects only a few non-faces, more layers are needed to reject them all. Moreover, our detector uses 1042 fewer weak classifiers than the original real Adaboost algorithm.
 5.2 Experiments of MultiView Face Detection
 A. Frontal Face Detection
We use the CMU frontal face database and the MIT face database [26], which consists of a training set of 6977 images (2429 faces and 4548 non-faces) and a test set of 24045 images (472 faces and 23573 non-faces). The images are 19 × 19 grayscale, and we renormalize them to 20 × 20. With these samples, we trained a detector by the improved real Adaboost algorithm with 16 layers and 742 weak classifiers. Compared with Viola's detector, which has 62 layers and 4297 features, and Schneiderman's method [18], our method is much more efficient. Fig. 5 shows the ROC curve and Fig. 6 shows face detection results. Our method clearly converges better on the training set.
The ROC curves of our detectors on the MIT frontal face test set
Frontal face detection results on CMU+MIT test set
We tested the profile detector on the CMU profile face test set, with 208 images containing 441 faces, 307 of which are non-frontal. For the training set, about 34000 normalized images (about 8000 half-profile faces, 10000 full-profile faces, and 16000 frontal faces) were collected from the Internet. According to the analysis in this paper, only 12 categories of classifiers are trained for multiview face detection over the ROP and RIP angles. Table 3 and Fig. 7 show the experimental results of multiview face detection.
Multiview face detection results on CMU profile face test set.
The method of reference [18] reached an 85.5% pass rate with 91 false alarms; our method performs considerably better, reaching a high detection rate with fewer false alarms. On the other hand, the first layers of the cascade classifiers are used to estimate the face pose, and the results with pose estimation (PE) are better than those without PE, as Fig. 8 shows.
The ROC curves of our algorithm on CMU profile face data set
To test our method in real time, we built a multiview face detection system based on VS 2008 and the OpenCV library; three persons with different poses, expressions, and other factors (glasses, hair occlusion, and so on) were selected as the test set. The detection results are shown in Fig. 9.
Multiview face detection in real time
The size of each frame is 320 × 240, and the detection time is about 60 ms per frame with a high correct detection rate. The detection system is robust and effective.
6. Conclusion
In this paper, we present a novel algorithm for multiview face detection. An improved real Adaboost algorithm using dynamic weights and pre-partitioned samples is proposed for training on Haar features, and we prove its viability probabilistically. Compared with the original real Adaboost algorithm, the time complexity of selecting a weak classifier is $O(M \cdot N)$ and that of training the strong classifier is $O(T \cdot M \cdot N)$; the training speed is improved by a factor of $O(N)$.
Based on our processing of Haar features for multiview face detection (mirroring and rotation), only 12 categories of classifiers need to be built to cover the ROP and RIP angle changes, which greatly reduces the training complexity. LUT weak classifiers are then used to boost the Haar features so that the real Adaboost algorithm can be applied. Moreover, instead of training a separate pose estimator, we use the confidence of the strong classifier defined on the real Adaboost algorithm; 4 of the 16 cascade layers are used to estimate the face pose, so pose estimation adds no extra computation.
We tested our algorithm on the CMU and MIT face databases. The experimental results show that the convergence of the improved real Adaboost algorithm is better than that of the original real Adaboost algorithm. In addition, our multiview face detection system achieves a high detection rate with an acceptable number of false alarms and satisfactory timeliness.
BIO
Wenkai Xu received his B.S. from Dalian Polytechnic University, China (2006-2010) and his Master's degree from Tongmyong University, Korea (2010-2012). He is currently a Ph.D. student in the Department of Information and Communications Engineering, Tongmyong University. His main research areas are image processing, computer vision, biometrics, and pattern recognition.
Eung-Joo Lee received his B.S., M.S., and Ph.D. in Electronic Engineering from Kyungpook National University, Korea, in 1990, 1992, and August 1996, respectively. Since 1997 he has been with the Department of Information & Communications Engineering, Tongmyong University, Korea, where he is currently a professor. From 2000 to July 2002, he was president of Digital Net Bank Inc. From 2005 to July 2006, he was a visiting professor in the Department of Computer and Information Engineering, Dalian Polytechnic University, China. His main research interests include biometrics, image processing, and computer vision.
References
Xu, W. and Lee, E.-J. (2012). "A Combinational Algorithm for Multi-Face Recognition," International Journal of Advancements in Computing Technology, 4(13), pp. 146-154.
Lee, Y.-B. and Lee, S. (2011). "Robust Face Detection Based on Knowledge-Directed Specification of Bottom-Up Saliency," ETRI Journal, 33(4), pp. 600-610. DOI: 10.4218/etrij.11.1510.0123
Moghaddam, B. and Pentland, A. (1994). "Face Recognition Using View-based and Modular Eigenspaces," in Proc. of Automatic Systems for the Identification and Inspection of Humans. DOI: 10.1117/12.191877
Moghaddam, B. and Pentland, A. (1997). "Probabilistic Visual Learning for Object Representation," IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(7), pp. 137-143. DOI: 10.1109/34.598227
Moghaddam, B., Wahid, W., and Pentland, A. (1998). "Beyond Eigenfaces: Probabilistic Matching for Face Recognition," in Proc. of IEEE International Conference on Automatic Face and Gesture Recognition, pp. 30-35. DOI: 10.1109/AFGR.1998.670921
Pentland, A., Moghaddam, B., and Starner, T. (1994). "View-based and Modular Eigenspaces for Face Recognition," in Proc. of IEEE Conference on Computer Vision and Pattern Recognition, pp. 84-94. DOI: 10.1109/CVPR.1994.323814
Osuna, E., Freund, R., and Girosi, F. (1997). "Support Vector Machines: Training and Applications," Technical Report AI Memo 1602, Massachusetts Institute of Technology.
Osuna, E., Freund, R., and Girosi, F. (1997). "Training Support Vector Machines: An Application to Face Detection," in Proc. of Computer Vision and Pattern Recognition, pp. 130-136. DOI: 10.1109/CVPR.1997.609310
Soulie, F., Viennet, F., and Lamy, B. (1993). "Multi-modular Neural Network Architectures: Applications in Optical Character and Human Face Recognition," International Journal of Pattern Recognition and Artificial Intelligence, 7(4), pp. 721-755. DOI: 10.1142/S0218001493000364
Sung, K. and Poggio, T. (1994). "Example-based Learning for View-based Human Face Detection," Technical Report AI Memo 1521, Massachusetts Institute of Technology. DOI: 10.1109/34.655648
Rowley, H., Baluja, S., and Kanade, T. (1997). "Neural Network-based Face Detection," in Proc. of IEEE Conference on Computer Vision and Pattern Recognition, pp. 203-207. DOI: 10.1109/34.655647
Rowley, H., Baluja, S., and Kanade, T. (1997). "Rotation Invariant Neural Network-based Face Detection," Technical Report CMU-CS-97-201. DOI: 10.1109/CVPR.1998.698585
Rowley, H., Baluja, S., and Kanade, T. (1998). "Neural Network-based Face Detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(1), pp. 23-38. DOI: 10.1109/34.655647
Mathew, A. and Radhakrishnan, R. (2013). "Versatile Approach for Feature Extraction of Image Using Haar-Like Filter," KIIT Journal of Research & Education, 2(2), pp. 18-22.
Viola, P. and Jones, M. (2001). "Rapid Object Detection Using a Boosted Cascade of Simple Features," in Proc. of IEEE Conference on Computer Vision and Pattern Recognition, pp. 511-518. DOI: 10.1109/CVPR.2001.990517
Li, S., Zhu, L., Zhang, Z., and Zhang, H. (2002). "Statistical Learning of Multi-view Face Detection," in Proc. of ECCV '02, pp. 67-81. DOI: 10.1007/3-540-47979-1_5
Zhang, Z., Zhu, L., Li, S., and Zhang, H. (2002). "Real-time Multi-view Face Detection," in Proc. of IEEE International Conference on Automatic Face and Gesture Recognition '02, p. 149. DOI: 10.1109/AFGR.2002.1004147
Schneiderman, H. and Kanade, T. (2000). "A Statistical Method for 3D Object Detection Applied to Faces and Cars," in Proc. of IEEE Conference on Computer Vision and Pattern Recognition '00, pp. 746-751. DOI: 10.1109/CVPR.2000.855895
Zhang, H., Gao, W., Chen, X., Shan, S., and Zhao, D. (2006). "Robust Multi-View Face Detection Using Error Correcting Output Codes," in Proc. of ECCV 2006, pp. 1-12. DOI: 10.1007/11744085_1
Schapire, R. E. and Singer, Y. (1999). "Improved Boosting Algorithms Using Confidence-rated Predictions," Machine Learning, 37(3), pp. 297-336. DOI: 10.1023/A:1007614523901
Baskoro, H., Kim, J.-S., and Kim, C.-S. (2011). "Mean-Shift Object Tracking with Discrete and Real AdaBoost Techniques," ETRI Journal, 31(3), pp. 282-291. DOI: 10.4218/etrij.09.0108.0372
Wu, B., Huang, C., and Ai, H. (2005). "A Multi-view Face Detection Based on Real Adaboost Algorithm," Journal of Computer Research and Development, 42(9), pp. 812-817. DOI: 10.1360/crad20050924
Fasel, I., Fortenberry, B., and Movellan, J. (2005). "A Generative Framework for Real Time Object Detection and Classification," Computer Vision and Image Understanding, 98(1), pp. 182-210. DOI: 10.1016/j.cviu.2004.07.014
Feraud, R., Bernier, O. J., Viallet, J.-E., and Collobert, M. (2001). "A Fast and Accurate Face Detector Based on Neural Networks," IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(1), pp. 42-53. DOI: 10.1109/34.899945
Weyrauch, B., Huang, J., Heisele, B., and Blanz, V. (2004). "Component-based Face Recognition with 3D Morphable Models," in Proc. of First IEEE Workshop on Face Processing in Video. DOI: 10.1109/CVPR.2004.41