Face Recognition Based on Improved Fuzzy RBF Neural Network for Smart Device

Journal of Korea Multimedia Society, Nov. 2013, 16(11): 1338-1347

- Received : September 12, 2013
- Accepted : October 17, 2013
- Published : November 30, 2013


Face recognition is the science of automatically identifying individuals based on their unique facial features. In order to avoid overfitting and reduce the computational burden, a new face recognition algorithm using the PCA-Fisher linear discriminant (PCA-FLD) and a fuzzy radial basis function neural network (RBFNN) is proposed in this paper. First, face features are extracted by the principal component analysis (PCA) method. Then, the extracted features are further processed by Fisher's linear discriminant technique to acquire lower-dimensional discriminant patterns, and the processed features are taken as the input of the fuzzy RBFNN. As a widely applied algorithm in fuzzy RBF neural networks, the BP learning algorithm has a low rate of convergence; therefore, an improved learning algorithm based on Levenberg-Marquardt (L-M), which combines the gradient descent algorithm with the Gauss-Newton algorithm, is introduced for the fuzzy RBF neural network in this paper. Experimental results on the ORL face database demonstrate that the proposed algorithm has satisfactory performance and a high recognition rate.
Flowchart of proposed recognition algorithm.
Let the training set of face images be I_{1}, I_{2}, ..., I_{M}; each image can be expressed as I(x, y).
(2) Calculating the average face Ψ:

Ψ = (1/M) Σ_{i=1}^{M} I_{i}

where M represents the number of training face images and I_{i} denotes the i^{th} face image vector.

(3) Calculating the mean-subtracted face Φ_{i} by

Φ_{i} = I_{i} − Ψ, i = 1, 2, ..., M

The objective of this formula is to project each face toward a lower-dimensional space.
(4) Calculating the covariance matrix C:

C = (1/M) Σ_{i=1}^{M} Φ_{i} Φ_{i}^{T} = (1/M) AA^{T}

where A = (Φ_{1}, Φ_{2}, ..., Φ_{M}).
(5) Calculating the eigenvectors v_{k} and eigenvalues λ_{k} of the matrix C, where v_{k} determines linear combinations of the M training face images that form the eigenfaces u_{l} [15]:

u_{l} = Σ_{k=1}^{M} v_{lk} Φ_{k}, l = 1, 2, ..., M

Here, we obtain a set of eigenface vectors U = [u_{1}, u_{2}, ..., u_{M}].
Fig. 2
shows the first 10 eigenfaces on ORL face database.
First 10 eigenfaces with highest eigenvalues.
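Steps (1)-(5) can be sketched in a few lines of NumPy (a minimal illustration, not the authors' code; the `compute_eigenfaces` helper is hypothetical, and it uses the well-known trick of diagonalizing the small M×M matrix A^{T}A instead of the full covariance [15]):

```python
import numpy as np

def compute_eigenfaces(images):
    """PCA eigenfaces: `images` is an (M, d) array, one flattened face per row."""
    M = images.shape[0]
    psi = images.mean(axis=0)            # step (2): average face
    A = (images - psi).T                 # step (3): d x M mean-subtracted faces
    # Steps (4)-(5): diagonalize the small M x M matrix A^T A instead of the
    # d x d covariance C; its eigenvectors map back to eigenfaces via u = A v.
    vals, vecs = np.linalg.eigh(A.T @ A)
    order = np.argsort(vals)[::-1]       # sort eigenvalues descending
    vals, vecs = vals[order], vecs[:, order]
    U = A @ vecs                         # eigenfaces u_1, ..., u_M (columns)
    U /= np.linalg.norm(U, axis=0)       # normalize each eigenface
    return psi, vals, U

# Toy example: 10 random "faces" of 64 pixels each
rng = np.random.default_rng(0)
faces = rng.normal(size=(10, 64))
psi, vals, U = compute_eigenfaces(faces)
weights = (faces - psi) @ U              # PCA feature vectors of the faces
```

The rows of `weights` are the PCA features that the FLD stage in steps (6)-(7) further processes.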
(6) In order to find the best subspace for classification, the ratio of the between-class scatter to the within-class scatter should be maximized; so we compute the between-class scatter matrix S_{b} and the within-class scatter matrix S_{w} [16]:

S_{b} = Σ_{i=1}^{c} n_{i} (μ_{i} − μ)(μ_{i} − μ)^{T}

S_{w} = Σ_{i=1}^{c} Σ_{x_{k} ∈ c_{i}} (x_{k} − μ_{i})(x_{k} − μ_{i})^{T}

where μ is the mean of all eigenface images and μ_{i} is the mean of the i^{th} class of eigenface images; c is the number of classes and n_{i} represents the number of samples in the i^{th} class.
(7) The optimal subspace E_{optimal} by the FLD is determined as

E_{optimal} = arg max_{E} |E^{T} S_{b} E| / |E^{T} S_{w} E| = [w_{1}, w_{2}, ..., w_{c−1}]

where [w_{1}, w_{2}, ..., w_{c−1}] is the set of generalized eigenvectors of S_{b} and S_{w} corresponding to the c−1 largest generalized eigenvalues λ_{i}, i = 1, 2, ..., c−1. Thus, the feature vector Ω for any query face image I in the most discriminant sense can be calculated as follows:

Ω = E_{optimal}^{T} U^{T} (I − Ψ)

The best subspace E_{optimal} is calculated by the Lagrange multiplier method. Meanwhile, Ω is taken as the input of the fuzzy RBF neural network.
R^{r} → R^{s}. Let P ∈ R^{r} be the input vector and C_{i} ∈ R^{r} (1 ≤ i ≤ u) be the prototypes of the input vectors. The output of each RBF unit is as follows:

R_{i}(P) = R_{i}(∥P − C_{i}∥), i = 1, 2, ..., u
where ∥·∥ indicates the Euclidean norm on the input space. Usually, the Gaussian function is preferred among all possible radial basis functions because it is factorizable. Hence

R_{i}(P) = exp(−∥P − C_{i}∥^{2} / (2σ_{i}^{2})), i = 1, 2, ..., u

where σ_{i} is the width of the i^{th} RBF unit. The j^{th} output y_{j}(P) of an RBF neural network is

y_{j}(P) = Σ_{i=0}^{u} R_{i}(P) w(j, i)

where R_{0} = 1, w(j, i) is the weight or strength of the i^{th} receptive field to the j^{th} output, and w(j, 0) is the bias of the j^{th} output.
We can see from (10) and (11) that the outputs of an RBF neural classifier are characterized by a linear discriminant function. They generate linear decision boundaries (hyperplanes) in the output space. Consequently, the performance of an RBF neural classifier strongly depends on the separability of classes in the u-dimensional space generated by the nonlinear transformation carried out by the u RBF units.
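A minimal sketch of this RBF forward pass, with the bias term w(j, 0) for the constant unit R_0 = 1 written as an explicit bias vector (the `rbf_forward` helper and all parameter values are illustrative, not the paper's configuration):

```python
import numpy as np

def rbf_forward(P, centers, sigmas, W, bias):
    """Outputs of an RBF network: Gaussian units followed by a linear layer.
    P: (r,) input; centers: (u, r); sigmas: (u,); W: (s, u); bias: (s,)."""
    dists = np.linalg.norm(centers - P, axis=1)      # Euclidean norms ||P - C_i||
    R = np.exp(-(dists ** 2) / (2.0 * sigmas ** 2))  # Gaussian RBF unit outputs
    return W @ R + bias                              # linear discriminant outputs

rng = np.random.default_rng(2)
centers = rng.normal(size=(5, 3))        # u = 5 prototypes in R^3
sigmas = np.full(5, 1.0)                 # unit widths
W = rng.normal(size=(2, 5))              # s = 2 output units
bias = np.zeros(2)
y = rbf_forward(rng.normal(size=3), centers, sigmas, W, bias)
```

Because the final layer is linear in the unit activations, the decision boundaries in the u-dimensional RBF space are hyperplanes, as noted above.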
Geometrically, the key idea of an RBF neural network is to partition the input space into a number of subspaces which are in the form of hyperspheres.
R^{i}: If x_{1} is A_{1}^{i} and x_{2} is A_{2}^{i} and ... and x_{n} is A_{n}^{i}, then y_{1} = w_{i1}, y_{2} = w_{i2}, ..., y_{o} = w_{io}.

where x = (x_{1}, x_{2}, ..., x_{n}) is the input vector and y is the output vector; u_{A_{j}^{i}}(x_{j}) is the membership function (a Gaussian function) of x_{j} for the fuzzy set A_{j}^{i}, j = 1, 2, ..., n, i = 1, 2, ..., m; w_{ik} corresponds to the k^{th} output of the i^{th} rule; there are m rules in total and o output components.
The output of the fuzzy system can be expressed as

y_{k} = Σ_{i=1}^{m} π_{i} w_{ik} / Σ_{i=1}^{m} π_{i}, k = 1, 2, ..., o

where π_{i} indicates the incentive intensity (firing strength) of the i^{th} rule, namely

π_{i} = Π_{j=1}^{n} u_{A_{j}^{i}}(x_{j})
Due to the functional equivalence between RBF neural networks and fuzzy inference systems, the two systems can be unified. We let the number of pattern clusters in the RBF neural network correspond to the number of fuzzy rules. The parameters thus acquire fuzzy inference ability, which forms the fuzzy RBF neural network.
The topology of fuzzy RBFNN.
Suppose there is a k-dimensional characteristic space F = {f_{1}, f_{2}, ..., f_{k}} and m classes of patterns c_{1}, c_{2}, ..., c_{m}, which follow a normal distribution. N samples are selected from the total samples as the training set, written as the training-sample space X = {X_{1}, X_{2}, ..., X_{N}}. If the number of samples belonging to the i^{th} pattern c_{i} is n_{i}, then Σ_{i=1}^{m} n_{i} = N. Through statistical calculation over these training samples, the mean vector θ_{i} = (θ_{i1}, θ_{i2}, ..., θ_{ik})^{T} and the variance vector δ_{i} = (δ_{i1}, δ_{i2}, ..., δ_{ik})^{T} of each pattern can be obtained, where θ_{ij} and δ_{ij} denote the mean and variance of the j^{th} feature in the i^{th} pattern, respectively.
Based on the analysis above, all samples can be fuzzified when they are input to the RBF neural network; a multiple-input, single-output system is thus transformed into a multiple-input, multiple-output fuzzy neural classifier.
The structure of fuzzy RBF neural network consists of four layers as follows:
The first layer (Input layer): each node denotes an input linguistic variable, and the inputs are passed to the next layer directly.
The second layer (Fuzzification layer): in this layer, the Gaussian radial basis function is adopted as the fuzzy membership function of each neuron. This layer is composed of n neurons divided into m groups, each group containing k neurons, so n = m × k. The input-output relationship of the j^{th} neuron in the i^{th} group is

y_{ij} = exp(−(x_{j} − θ_{ij})^{2} / (2δ_{ij}))
where y_{ij} denotes the membership degree of the j^{th} feature with respect to pattern C_{i}. The output of the i^{th} (i = 1, 2, ..., m) group of neurons, (y_{i1}, y_{i2}, ..., y_{ik})^{T}, constitutes the membership vector of pattern c_{i} for the input sample. Through this fuzzification, the k-dimensional eigenvector from the input layer is translated into the membership of each feature with respect to each pattern.
The third layer (Fuzzy inference layer): each node corresponds to a fuzzy rule, and this layer implements the mapping from fuzzy rules to the output space. In this paper, we define the product of all input signals as the output of each node:

r_{i} = w_{i} Π_{j=1}^{k} y_{ij}

where w_{i} is the weight of the i^{th} fuzzy rule and y_{ij} is the output of the previous layer.
The fourth layer (Output layer): this layer forms a linear combination of the outputs of the previous layer for the defuzzification computation.
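The four-layer forward pass described above can be sketched as follows (an illustrative NumPy version; `fuzzy_rbf_forward` is a hypothetical helper, and δ is treated as a Gaussian width here):

```python
import numpy as np

def fuzzy_rbf_forward(x, theta, delta, w_rule, w_out):
    """Forward pass through the four layers (a sketch):
    input -> fuzzification (Gaussian memberships, m groups x k neurons)
    -> fuzzy inference (product of each group's memberships, weighted)
    -> output (linear combination for defuzzification).
    theta, delta: (m, k) per-pattern mean and spread; w_rule: (m,) rule
    weights; w_out: (m, o) output-layer weights."""
    # Layer 2: y_ij, membership of feature j with respect to pattern i
    y = np.exp(-((x - theta) ** 2) / (2.0 * delta ** 2))    # (m, k)
    # Layer 3: product of a rule's input memberships, times its rule weight
    r = w_rule * y.prod(axis=1)                             # (m,)
    # Layer 4: linear combination for defuzzification
    return r @ w_out                                        # (o,)

rng = np.random.default_rng(3)
m, k, o = 4, 6, 4                        # 4 patterns, 6 features, 4 outputs
x = rng.normal(size=k)
out = fuzzy_rbf_forward(x, rng.normal(size=(m, k)),
                        np.ones((m, k)), np.ones(m), np.eye(m))
```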
The learning process mainly involves updating the center c_{i} and the radius δ_{i} of the basis functions, together with the weights w_{i}. Usually, the adjustment of c_{i} and δ_{i} uses a K-NN clustering algorithm under unsupervised learning, while the weight w_{i} is mostly adjusted with the BP algorithm, a supervised learning algorithm. Though the gradient paradigm can be applied to find the entire set of optimal parameters, it is generally slow and likely to become trapped in local minima. To solve this problem, the Levenberg-Marquardt (L-M) training algorithm is used to adjust w_{i} in this paper. The L-M algorithm is an improved BP training algorithm that combines the advantages of the gradient paradigm and the Gauss-Newton method; it has both good local and global performance. The L-M training process can be illustrated as follows.
Assume that there are N samples {p_{1}, p_{2}, ..., p_{N}}; the expected outputs of the network are d_{1}, d_{2}, ..., d_{N} and the actual outputs are y_{1}, y_{2}, ..., y_{N}. When the i^{th} sample is input, the outputs y_{ij} (j = 1, 2, ..., m) are obtained; the error is the sum of the individual output errors, which can be expressed as

E(x) = (1/2) Σ_{i} e_{i}^{2}(x) = (1/2) e^{T}(x) e(x)
where e(x) is the error vector. Its gradient ∇E(x) and Hessian matrix ∇^{2}E(x) are

∇E(x) = J^{T}(x) e(x)

∇^{2}E(x) = J^{T}(x) J(x) + S(x), with S(x) = Σ_{i} e_{i}(x) ∇^{2}e_{i}(x)

where J is the Jacobian matrix

J(x) = [∂e_{i}/∂x_{j}], i = 1, ..., N, j = 1, ..., n
Assume that x^{(k)} and x^{(k+1)} denote the vectors of weights and thresholds at the k^{th} and (k+1)^{th} iterations. Thus

x^{(k+1)} = x^{(k)} + Δx

For the L-M training algorithm,

Δx = −(J^{T}(x) J(x) + μI)^{−1} J^{T}(x) e(x)

where the proportion coefficient μ (μ > 0) is a constant and I denotes the identity matrix. When μ = 0, the L-M algorithm reduces to the Gauss-Newton method; when μ is very large, it approximates the gradient paradigm. In practical application, μ is a tentative parameter that should be adjusted based on Δx.
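Each L-M update is a single damped linear solve; the sketch below applies it to a toy one-parameter least-squares fit (`lm_step` is a hypothetical helper, not the paper's implementation):

```python
import numpy as np

def lm_step(J, e, mu):
    """One Levenberg-Marquardt update: dx = -(J^T J + mu*I)^{-1} J^T e.
    mu = 0 recovers Gauss-Newton; a large mu approaches (scaled) gradient descent."""
    JtJ = J.T @ J
    g = J.T @ e                          # gradient of E = 0.5 * e^T e
    return -np.linalg.solve(JtJ + mu * np.eye(JtJ.shape[0]), g)

# Toy least-squares: fit y = a*x to data generated with a = 2, starting at a = 0
x = np.array([1.0, 2.0, 3.0])
y_target = 2.0 * x
a = 0.0
e = a * x - y_target                     # residual vector e(a)
J = x[:, None]                           # Jacobian d(e)/d(a)
a = a + lm_step(J, e, mu=0.1)[0]         # one damped step moves a close to 2
```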
The process of using the L-M algorithm to train the fuzzy RBF neural network can be described as below.
(1) Normalizing the training samples;
(2) Setting the predetermined training error ε, β, μ_{0}, initializing the weight and threshold vector, and letting k = 0, μ = μ_{0};
(3) Calculating the output of the network and the error function E(x^{(k)});
(4) Calculating the Jacobian matrix J(x) using Eq. (21);
(5) Calculating Δx using Eq. (23);
(6) If E(x^{(k)}) < ε, jump to step (8);
(7) Calculating E(x^{(k+1)}) from the weights and thresholds x^{(k+1)} = x^{(k)} + Δx; if E(x^{(k+1)}) < E(x^{(k)}), the weights and thresholds are updated, i.e., let x^{(k)} = x^{(k+1)} and μ = μ/β, and return to step (3); otherwise, keep the weights and thresholds, let μ = μ × β, and return to step (5);
(8) Stop.
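Steps (1)-(8) can be written as a generic L-M loop with the μ-adjustment rule above (a sketch on a toy curve-fitting problem, not the network-specific code; `lm_train`, `residual`, and `jacobian` are hypothetical names):

```python
import numpy as np

def lm_train(residual, jacobian, x0, eps=1e-8, beta=10.0, mu0=1e-3, max_iter=100):
    """Generic Levenberg-Marquardt loop following steps (1)-(8):
    residual(x) -> error vector e, jacobian(x) -> Jacobian J."""
    x, mu = np.asarray(x0, dtype=float), mu0
    for _ in range(max_iter):
        e = residual(x)
        E = 0.5 * (e @ e)                        # error function E(x)
        if E < eps:                              # steps (6)/(8): stop
            break
        J = jacobian(x)
        while True:                              # steps (5)-(7)
            dx = -np.linalg.solve(J.T @ J + mu * np.eye(len(x)), J.T @ e)
            e_new = residual(x + dx)
            if 0.5 * (e_new @ e_new) < E:        # improvement: accept, shrink mu
                x, mu = x + dx, mu / beta
                break
            mu *= beta                           # no improvement: grow mu, retry
            if mu > 1e12:                        # safeguard against stalling
                return x
    return x

# Fit y = a*t + b to noiseless data; the exact solution is a = 2, b = 1
t = np.linspace(0.0, 1.0, 5)
target = 2.0 * t + 1.0
res = lambda p: p[0] * t + p[1] - target
jac = lambda p: np.stack([t, np.ones_like(t)], axis=1)
p = lm_train(res, jac, [0.0, 0.0])
```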
The error function was defined as

E_{p} = (1/2) Σ_{k} (d^{k} − y^{*k})^{2}

where E_{p} is also called the learning objective function; in other words, the objective of training is to minimize E_{p} by adjusting the parameters. y^{*k} denotes the actual output of the k^{th} unit in the output layer, and d^{k} denotes the expected output of the k^{th} unit in the output layer.
Samples of ORL face database.
In our experiments, we randomly select 20 persons and 5 images per person. The selected 100 images are used as the training set. Meanwhile, we select another 100 images from the same 20 persons for testing. Each original 92×112 image is converted into a 10304×1 vector and processed by the PCA algorithm, and we select the first 60 eigenfaces for the experiments. Then, the resulting features are further projected into Fisher's optimal subspace, in which the ratio of the between-class scatter to the within-class scatter is maximized.
So, the number of input units to the fuzzy RBF neural network is 60, namely k = 60; the number of classes is 20, m = 20; thus, the number of nodes in the fuzzification layer is 60×20 = 1200, comprising 20 groups of 60 neurons each. The number of nodes in the fuzzy inference layer is 20×5 = 100 (20 selected persons, 5 images per person).
The expected output is such that the i^{th} output node's value is 1 and the others are 0, while the actual outputs lie around these expected values. Based on the competitive choice rule, the category of an input sample is determined by the maximum value of the actual outputs in the fuzzy RBF neural network's output layer. If the maximum value is not unique, the network refuses to make a judgment. The experimental results are shown in Table 1, Table 2 and Table 3.
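The competitive choice rule with rejection can be sketched as a small helper (`classify` is a hypothetical name):

```python
import numpy as np

def classify(outputs):
    """Competitive choice rule: return the index of the maximum output,
    or None ("Reject") when the maximum is not unique."""
    outputs = np.asarray(outputs)
    winners = np.flatnonzero(outputs == outputs.max())
    return int(winners[0]) if len(winners) == 1 else None

print(classify([0.1, 0.9, 0.3]))   # unique maximum -> class index 1
print(classify([0.5, 0.5, 0.1]))   # tie -> None (network refuses to judge)
```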
Experimental results of fuzzy RBF neural network using BP learning algorithm
Experimental results of fuzzy RBF neural network using L-M learning algorithm
Performance of network training algorithm(BP and L-M)
From Table 1 and Table 2 we can find that: there is no "Reject" and the recognition error rate is low, so the learning ability of the proposed method is strong; when the number of learning iterations is small, the recognition rate can be improved by increasing the learning coefficient; when the number of learning iterations goes beyond a certain range (becomes very large), the recognition rate is no longer enhanced but tends to stabilize.
Table 3 indicates that when the number of classes is low, the fuzzy RBF neural network classifier performs well; the recognition rate decreases as the number of classes increases. From Table 1 to Table 3, the results indicate that the L-M training algorithm outperforms the BP algorithm: the fuzzy RBF neural network trained by L-M is more stable and converges faster than the one trained by BP.
Eung-Joo Lee
received his B.S., M.S. and Ph.D. in Electronic Engineering from Kyungpook National University, Korea, in 1990, 1992, and August 1996, respectively. Since 1997 he has been with the Department of Information & Communications Engineering, Tongmyong University, Korea, where he is currently a professor. From 2000 to July 2002, he was president of Digital Net Bank Inc. From 2005 to July 2006, he was a visiting professor in the Department of Computer and Information Engineering, Dalian Polytechnic University, China. His main research interests include biometrics, image processing, and computer vision.

1. INTRODUCTION

Human face recognition from still and video images has become an active research area in the communities of image processing, pattern recognition, neural networks and computer vision. This interest is motivated by wide applications ranging from static matching of controlled-format photographs such as passports, credit cards, driving licenses, and mug shots to real-time matching of surveillance video images presenting different constraints in terms of processing requirements [1].
Face recognition involves the extraction of different features of the human face from the face image to discriminate it from other persons; it has evolved into a popular identification technique for verifying human identity.
During the past 30 years, many different face-recognition techniques have been proposed, motivated by the increasing number of real-world applications requiring the recognition of human faces. The PCA algorithm is known as the eigenface method: the images are projected onto a facial subspace called the "eigenspace" [2, 3]. The PCA approach reduces the dimension of the data by means of basic data-compression methods [4] and reveals the most effective low-dimensional structure of facial patterns [5]. The LFA method of recognition is based on the analysis of the face in terms of local features (e.g., eyes, nose) by what are referred to as LFA kernels. Recognition by neural networks [6, 7] is based on learning the faces of an "example set" by the machine in the training phase and carrying out recognition in the generalization phase. The Support Vector Machine (SVM) technique is in fact a binary classification method; the support vectors consist of a small subset of the training data extracted by the algorithm given in [8]. Face recognition based on template matching represents a face in terms of a template consisting of several masks enclosing the prominent features, e.g., the mouth, the eyes and the nose [9]. In [10], a face detection method based on a half-face template is discussed.
Although researchers in psychology, neural sciences and engineering, image processing and computer vision have investigated a number of issues related to face recognition by human beings and machines, it is still difficult to design an automatic system for this task, especially when real-time identification is required. The reasons for this difficulty are two-fold: 1) face images are highly variable, and 2) the sources of variability include individual appearance, three-dimensional (3-D) pose, facial expression, facial hair, makeup, and so on, and these factors change from time to time. Furthermore, the lighting, background, scale, and parameters of the acquisition are all variables in facial images acquired under real-world scenarios [1]. This makes face recognition a greatly challenging problem.
In recent work, researchers often use a hybrid method, which combines linear and nonlinear projection methods, to obtain better recognition results. Among the hybrid methods, the combination of neural networks and fuzzy systems has become a hot research area in recent years. Reference [11] applied fuzzy theory to the design of an RBF neural network, combined the respective advantages of neural networks and fuzzy functions, and derived satisfactory results. But because of the learning-speed problem of fuzzy neural networks, the optimization procedure is easily trapped in a local minimum, which causes slow convergence. Generally speaking, multi-layer networks, usually coupled with the backpropagation (BP) algorithm, are most widely used in face recognition [12]. Yet two major criticisms are commonly raised against the BP algorithm: 1) it is computationally intensive because of its slow convergence speed, and 2) there is no guarantee at all that the absolute minimum can be achieved. On the other hand, RBF neural networks have recently attracted extensive interest in the neural network community for a wide range of applications [13, 14, 22].
In order to avoid these problems, we propose an improved RBF neural network for face recognition in this paper. The whole recognition procedure includes three stages: dimension reduction, feature extraction, and classification. First, dimension reduction and face feature extraction using the PCA-FLD algorithm are presented; then the structure and the L-M learning algorithm of the fuzzy RBF neural network (RBFNN) are introduced for the face classifier.
Fig. 1 shows the flowchart of the proposed algorithm in this paper.
2. Dimension Reduction and Feature Extraction

As for the problem of face recognition, face image data are usually high-dimensional and large-scale, so recognition has to be performed in a high-dimensional space. It is therefore necessary to find a dimension-reduction technique to cope with the problem in a lower-dimensional space. Researchers have presented many linear and nonlinear projection algorithms such as eigenfaces [15], Principal Component Analysis (PCA) [15], Linear Discriminant Analysis (LDA) [16, 17], Fisherfaces [6], Direct LDA (DLDA) [16, 18], Discriminant Common Vectors (DCV) [19] and Independent Component Analysis (ICA) [20], etc.
In this paper, we use a hybrid algorithm, which combines the PCA algorithm with the FLD algorithm, for feature extraction. The number of input variables is reduced through feature selection, i.e., a set of the most expressive features is first generated by PCA, and FLD is then applied to generate a set of the most discriminant features, so that different classes of training data can be separated as far as possible and patterns of the same class are compacted as closely as possible. The procedure is as follows:
(1) Obtaining face images
3. The Structure of Fuzzy RBF Neural Network

- 3.1 Introduction of RBF Neural Network

An RBF neural network can be considered as a mapping R^{r} → R^{s} [21].
- 3.2 Fuzzy RBF Neural Network

Based on the RBF neural network above, we construct the environment parameters of the network to give it fuzzy inference ability; the fuzzy characteristics improve the learning and generalization ability of the neural network and yield a better approximation to the actual models.
We assume that there are fuzzy rules as:
- 3.3 The Structure of Fuzzy RBF Neural Network

The structure of the fuzzy RBF neural network we propose in this paper consists of four layers: input layer, fuzzification layer, fuzzy inference layer and output layer. The topology of the fuzzy RBF neural network is shown in Fig. 3.
4. Learning Algorithm of Fuzzy RBF Neural Network

The learning process of the fuzzy RBF neural network is mainly based on updating the central value and radius of each basis function and the network weights.
5. Experimental Results

The proposed face recognition system runs on the hardware environment of an Intel(R) Core(TM) 2 (2.93 GHz) and the software environment of Windows 7 and Matlab R2009a.
The experiment uses the ORL Database of Faces, which contains a set of face images taken between April 1992 and April 1994 at the lab. There are ten different images of each of 40 distinct subjects. For some subjects, the images were taken at different times, varying the lighting, facial expressions (open/closed eyes, smiling/not smiling) and facial details (glasses/no glasses). All the images were taken against a dark homogeneous background with the subjects in an upright, frontal position, with tilting and rotation tolerance of up to 20 degrees and scale tolerance of up to about 10%. The files are in BMP format. The size of each image is 92×112 pixels, with 256 grey levels per pixel.
Fig. 4 shows some samples from the ORL face database.
6. CONCLUSION

In this paper, a general design approach using a fuzzy RBF neural network classifier for face recognition, which copes with small training sets of high-dimensional data, is presented. First, face features are extracted by the PCA algorithm. Then, the resulting features are further projected into Fisher's optimal subspace, in which the ratio of the between-class scatter to the within-class scatter is maximized. As a widely applied algorithm in fuzzy RBF neural networks, the BP learning algorithm has a low rate of convergence; therefore, an improved learning algorithm based on L-M, which combines the gradient descent algorithm with the Gauss-Newton algorithm and has both good local and global performance, is introduced for the fuzzy RBF neural network.
The experimental results show that the proposed algorithm works well on the ORL face database with different expressions, poses and illumination conditions. The algorithm therefore has good generalization capability, effectively reduces the dimensionality for classification, and also reduces the computational complexity. The feature vectors here are extracted only from gray-scale images; richer features extracted from both gray-scale and spatial texture information, as well as a real-time face recognition system, will be studied in future work.
REFERENCES

[1] R. Chellappa, C.L. Wilson, and S. Sirohey, "Human and Machine Recognition of Faces: A Survey," Proceedings of the IEEE, Vol. 83, No. 5, pp. 705-740, 1995. DOI: 10.1109/5.381842
[2] M. Turk and A. Pentland, "Eigenfaces for Recognition," Journal of Cognitive Neuroscience, Vol. 3, No. 1, pp. 71-86, 1991. DOI: 10.1162/jocn.1991.3.1.71
[3] T.-J. Chin and D. Suter, A Study of the Eigenface Approach for Face Recognition, Technical Report, Monash University, Dept. of Electrical & Computer Systems Engineering, pp. 1-18, 2004.
[4] D. Blackburn, M. Bone, and P. Phillips, Face Recognition Vendor Test 2000: Evaluation Report, National Institute of Standards and Technology, 2000.
[5] P.J. Phillips, H. Moon, S. Rizvi, and P. Rauss, "FERET Evaluation Methodology for Face Recognition Algorithms," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 22, No. 10, pp. 1090-1103, 2000. DOI: 10.1109/34.879790
[6] D. Bryliuk and V. Starovoitov, "Access Control by Face Recognition using Neural Networks and Negative Examples," Proc. 2nd International Conference on Artificial Intelligence, pp. 428-436, 2002.
[7] S.A. Nazeer, N. Omar, and M. Khalid, "Face Recognition System using Artificial Neural Networks Approach," Proc. IEEE International Conference on Signal Processing, Communications and Networking, pp. 420-425, 2007.
[8] J. Huang, X. Shao, and H. Wechsler, Face Pose Discrimination Using Support Vector Machines, Technical Report, George Mason University and University of Minnesota, Minneapolis, Minnesota, Vol. 1, pp. 154-156, 1998.
[9] R. Brunelli and T. Poggio, "Face Recognition: Features versus Templates," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 15, No. 10, pp. 1042-1052, 1993. DOI: 10.1109/34.254061
[10] W. Chen, T. Sun, X. Yang, and L. Wang, "Face Detection based on Half Face Template," Proc. IEEE Conference on Electronic Measurement and Instrumentation, pp. 54-58, 2009.
[11] Y.-C. Dai and F. Xie, "Face Recognition Based on the Fuzzy RBF Neural Network," Techniques of Automation and Applications, Vol. 25, No. 6, pp. 112-119, 2006.
[12] W. Xu and E.-J. Lee, "A Combinational Algorithm for Multi Face Recognition," International Journal of Advancements in Computing Technology, Vol. 4, No. 13, pp. 146-154, 2012.
[13] A. Esposito, M. Marinaro, D. Oricchio, and S. Scarpetta, "Approximation of Continuous and Discontinuous Mappings by a Growing Neural RBF-based Algorithm," Neural Networks, Vol. 12, No. 1, pp. 651-665, 2000. DOI: 10.1016/S0893-6080(00)00035-6
[14] E.D. Virginia, "Biometric Identification System using a Radial Basis Network," Proc. IEEE International Conference on Security Technology, pp. 47-51, 2000.
[15] M. Kirby and L. Sirovich, "Application of the Karhunen-Loeve Procedure for the Characterization of Human Faces," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 12, No. 1, pp. 103-108, 1990. DOI: 10.1109/34.41390
[16] J. Lu, K.N. Plataniotis, and A.N. Venetsanopoulos, "Face Recognition using LDA-based Algorithms," IEEE Transactions on Neural Networks, Vol. 14, No. 1, pp. 195-200, 2003. DOI: 10.1109/TNN.2002.806647
[17] A.M. Martinez and A.C. Kak, "PCA versus LDA," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 23, No. 2, pp. 228-233, 2001. DOI: 10.1109/34.908974
[18] Z. Liang and P.F. Shi, "Uncorrelated Discriminant Vectors using a Kernel Method," Pattern Recognition, Vol. 38, No. 2, pp. 307-310, 2005. DOI: 10.1016/j.patcog.2004.06.006
[19] P.N. Belhumeur, J.P. Hespanha, and D.J. Kriegman, "Eigenfaces vs. Fisherfaces: Recognition Using Class Specific Linear Projection," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 19, No. 7, pp. 711-720, 1997.
[20] M.S. Bartlett, J.R. Movellan, and T.J. Sejnowski, "Face Recognition by Independent Component Analysis," IEEE Transactions on Neural Networks, Vol. 13, No. 6, pp. 1450-1464, 2002. DOI: 10.1109/TNN.2002.804287
[21] M.J. Er, S. Wu, J. Lu, and H.L. Toh, "Face Recognition with Radial Basis Function Neural Networks," IEEE Transactions on Neural Networks, Vol. 13, No. 3, pp. 697-710, 2002. DOI: 10.1109/TNN.2002.1000134
[22] W. Xu and E.-J. Lee, "Dynamic Human Activity Recognition Based on Improved FNN Model," Journal of Korea Multimedia Society, Vol. 15, No. 4, pp. 417-424, 2012. DOI: 10.9717/kmms.2012.15.4.417
