Genetic Outlier Detection for a Robust Support Vector Machine
International Journal of Fuzzy Logic and Intelligent Systems. 2015. Jun, 15(2): 96-101
Copyright © 2015, Korean Institute of Intelligent Systems
This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0/) which permits unrestricted noncommercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
  • Received : April 08, 2015
  • Published : June 25, 2015
About the Authors
Heesung Lee
Department of Railroad Electrical and Electronics Engineering, Korea National University of Transportation, Gyeonggi-do, Korea.
Euntai Kim
School of Electrical and Electronic Engineering, Yonsei University, Seoul, Korea.

Abstract
The support vector machine (SVM) has a strong theoretical foundation and has achieved excellent empirical success, and it has been widely used in a variety of pattern recognition applications. Unfortunately, the SVM also has the drawback that it is sensitive to outliers and its performance is degraded by their presence. In this paper, a new outlier detection method based on the genetic algorithm (GA) is proposed for a robust SVM. The proposed method parallels GA-based feature selection and removes the outliers that would be treated as support vectors by the conventional soft-margin SVM. The proposed algorithm is applied to various data sets in the UCI repository to demonstrate its performance.
1. Introduction
The support vector machine (SVM) was proposed by Vapnik et al. [1, 2]; it implements structural risk minimization [3]. Beginning with its early success in optical character recognition [1], the SVM has been widely applied to a range of areas [4-6]. The SVM possesses a strong theoretical foundation and enjoys excellent empirical success in pattern recognition problems and industrial applications [7]. However, the SVM also has the drawback of sensitivity to outliers, and its performance can be degraded by their presence. Even though slack variables are introduced to suppress outliers [8, 9], outliers continue to influence the determination of the decision hyperplane because they incur a relatively high margin loss compared with the other data points [10]. Further, when a quadratic margin loss is employed, the influence of outliers increases [11]. Previous research has considered this problem [8-10, 12-14].
In [12], an adaptive margin was proposed to reduce the margin losses (and hence the influence) of data far from their class centroids: the margin loss was scaled according to the distance between each data point and the center of its class. In [13, 14], a robust loss function was employed to limit the maximal margin loss of an outlier. Further, a robust SVM based on a smooth ramp loss was proposed in [8]; it suppresses the influence of outliers by employing the Huber loss function. Most works have aimed at reducing the effect of outliers by changing the margin loss function; only a small number have aimed at identifying the outliers and removing them from the training set. For example, Xu et al. proposed an outlier detection method using convex optimization in [10]. However, their method is complex, and relaxation is employed to approximate the optimization.
In this paper, a new robust SVM based on a genetic algorithm (GA) [15] is proposed. The proposed method locates the outliers among the samples and removes them from the training set. The basic idea parallels that of genetic feature selection, wherein GAs locate irrelevant or redundant features and remove them by mimicking natural evolution. In the proposed method, the GA detects and removes outliers that would be treated as support vectors by the conventional soft-margin SVM.
The remainder of this paper is organized as follows. In Section 2, we offer preliminary information on GAs. In Section 3, we describe the proposed method. Section 4 details the experimental results that demonstrate its performance, and our conclusions are presented in Section 5.
2. Genetic Algorithms
Genetic algorithms (GAs) are engineering models derived from the natural mechanisms of genetics and evolution and are applicable to a wide range of problems. A GA maintains and manipulates a population of individuals that represents a set of candidate solutions for a given problem. The viability of each candidate solution is evaluated based on its fitness, and the population evolves toward better solutions via selection, crossover, and mutation. In the selection process, individuals are copied to produce a tentative offspring population; the number of copies of an individual in the next generation is proportional to the individual's relative fitness value, so promising individuals are more likely to be present in the next generation. The selected individuals are then modified by crossover and mutation to search for a globally optimal solution. GAs provide a simple yet robust optimization methodology [16].
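To make the mechanics above concrete, the following is a minimal sketch of such a generational GA in Python. It is illustrative only: the population size, generation count, and operator rates here are assumptions, not the paper's settings.

```python
# Minimal generational GA sketch: fitness-proportional (roulette) selection,
# one-point crossover, and bit-flip mutation over binary chromosomes.
# Parameter values are illustrative assumptions, not the paper's.
import random

def roulette_select(population, fitnesses):
    # Copy individuals with probability proportional to relative fitness;
    # clamp at zero so slightly negative fitness values stay valid weights.
    weights = [max(f, 0.0) + 1e-12 for f in fitnesses]
    return random.choices(population, weights=weights, k=len(population))

def one_point_crossover(a, b, p_c=0.9):
    # Exchange the tails of two parent bit strings at a random cut point.
    if random.random() < p_c and len(a) > 1:
        cut = random.randrange(1, len(a))
        return a[:cut] + b[cut:], b[:cut] + a[cut:]
    return a[:], b[:]

def bit_flip_mutation(chromosome, p_m=0.01):
    # Flip each bit independently with a small probability.
    return [bit ^ 1 if random.random() < p_m else bit for bit in chromosome]

def run_ga(fitness_fn, n_bits, pop_size=30, n_generations=50):
    # Evolve a population of binary strings; return the fittest ever seen.
    population = [[random.randint(0, 1) for _ in range(n_bits)]
                  for _ in range(pop_size)]
    best = max(population, key=fitness_fn)
    for _ in range(n_generations):
        fitnesses = [fitness_fn(c) for c in population]
        parents = roulette_select(population, fitnesses)
        offspring = []
        for i in range(0, pop_size - 1, 2):
            c1, c2 = one_point_crossover(parents[i], parents[i + 1])
            offspring.extend([bit_flip_mutation(c1), bit_flip_mutation(c2)])
        population = offspring or population
        best = max(population + [best], key=fitness_fn)
    return best
```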
3. Genetic Outlier Selection For Support Vector Machines
In this section, a new outlier detection method based on a genetic algorithm is proposed. First, dual quadratic optimization is formulated in a soft margin SVM and support vector candidates are selected from the training set based on the Lagrange multiplier. Then, the candidates are divided into either support vectors or outliers using GA. Figure 1 presents the overall procedure for the proposed method.
Figure 1. Procedure of the proposed method.
Suppose that $M$ data points $\{x_1, x_2, \ldots, x_M\}$ ($x_i \in \mathbb{R}^n$) are given, each of which is labeled with a binary class $y_i \in \{-1, 1\}$. The goal of the SVM is to design a decision hyperplane

$$W^T x + w_0 = 0$$

that maximally separates the two classes, that is,

$$W^T x_i + w_0 \geq 1 \quad \text{for } y_i = 1$$

and

$$W^T x_i + w_0 \leq -1 \quad \text{for } y_i = -1,$$

where $W$ and $w_0$ are the weight and bias of the decision function, respectively. The SVM is trained by solving

$$\min_{W,\, w_0,\, \Xi} \; \frac{1}{2} W^T W + C\, \mathbf{1}^T \Xi$$

subject to

$$Y(XW + w_0 \mathbf{1}) \geq \mathbf{1} - \Xi,$$
$$\Xi \geq \mathbf{0},$$

where $X = [x_1, x_2, \ldots, x_M]^T$, $Y = \mathrm{diag}(y_1, y_2, \ldots, y_M)$, $\mathbf{1} = [1, 1, \ldots, 1]^T$, and $\mathbf{0} = [0, 0, \ldots, 0]^T$. Here $\Xi = [\xi_1, \ldots, \xi_M]^T$ is the vector of slack variables, with $\xi_i$ the margin loss at data point $x_i$, and $C$ is a constant that penalizes misclassification.
The above formulation can be recast into the dual problem

$$\max_{\Lambda} \; \mathbf{1}^T \Lambda - \frac{1}{2} \Lambda^T Y X X^T Y \Lambda$$

subject to

$$\mathbf{1}^T Y \Lambda = 0,$$
$$\mathbf{0} \leq \Lambda \leq C \mathbf{1},$$

where $\Lambda = [\lambda_1, \lambda_2, \ldots, \lambda_M]^T$ is the Lagrange multiplier vector and the nonnegative number $\lambda_i$ is the Lagrange multiplier associated with $x_i$. In a standard SVM, the data points with positive $\lambda_i$ are the support vectors and contribute to the decision hyperplane according to

$$W = X^T Y \Lambda = \sum_{i=1}^{M} \lambda_i y_i x_i.$$
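As a concrete illustration of collecting the points with positive multipliers, the sketch below uses scikit-learn, whose SVC exposes the indices of such training points through its support_ attribute. The toy data, the value of C, and the helper name are our assumptions; the paper does not prescribe a particular solver.

```python
# Sketch: collecting the support vector candidates {x_i | lambda_i > 0}
# with scikit-learn. clf.support_ holds the indices i with lambda_i > 0;
# the signed values y_i * lambda_i are stored in clf.dual_coef_.
import numpy as np
from sklearn.svm import SVC

def support_vector_candidates(X, y, C=1.0):
    # Train a linear soft-margin SVM and return the candidate indices.
    clf = SVC(kernel="linear", C=C)
    clf.fit(X, y)
    return clf.support_, clf

# Example on a toy two-class problem (assumed data, not from the paper).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(+1, 1, (50, 2))])
y = np.hstack([-np.ones(50), np.ones(50)])
candidates, _ = support_vector_candidates(X, y)
print(f"{len(candidates)} of {len(X)} points have positive multipliers")
```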
The interesting point is that if outliers are included in the training set, they are likely to have positive margin loss and thus to contribute to the hyperplane. Further, outliers tend to have a relatively large margin loss and significantly influence the determination of the hyperplane, thereby making the SVM sensitive to their presence. In this paper, a robust SVM design scheme based on GA is therefore proposed. First, a set of support vector candidates

$$S = \{\, x_i \mid \lambda_i > 0 \,\}$$

is prepared by collecting the data points with positive Lagrange multipliers. As stated, not only the true support vectors but also some outliers may be included in $S$.

The goal of support vector selection is to determine a subset $S_v \subseteq S$ that includes only support vectors, such that the classification accuracy of the SVM is maximized while the number of data points in the subset, $\mathrm{card}(S_v)$, is minimized, where $\mathrm{card}(\cdot)$ denotes cardinality. This is a bi-criteria combinatorial optimization problem and is usually intractable, as such subset selection problems are NP-hard. The implementation of support vector selection parallels feature selection, and the use of GA is a promising approach to this bi-criteria optimization problem because GA-based feature selection methods have been shown to outperform non-GA methods [16-18].

To retain the support vectors and discard the outliers in the subset $S_v$, the GA chromosome is represented by a binary string of ones and zeros, as illustrated in Figure 2. In this figure, "1" and "0" indicate whether the associated data point should be retained in or discarded from the set of support vectors, respectively.

Figure 2. Chromosome used in the support vector selection.

Genetic operators are applied to generate new chromosomes in each generation. There are two types of genetic operators: crossover and mutation. The purpose of crossover is to exchange information among different potential solutions, while mutation introduces genetic material that may have been missing from the initial population or lost during crossover operations [19]. In this paper, one-point crossover and bit-flip mutation [20] are employed as genetic operators.

Given a validation set $V = \{v_1, v_2, \ldots, v_m\}$, the fitness of a chromosome is computed using

$$\text{fitness} = \frac{1}{m} \sum_{j=1}^{m} c(v_j) \;-\; \alpha \cdot \frac{\mathrm{card}(S_v)}{\mathrm{card}(S)},$$

where

$$c(v_j) = \begin{cases} 1, & \text{if } v_j \text{ is correctly classified,} \\ 0, & \text{otherwise.} \end{cases}$$

In this equation, $m$ is the number of validation data points and $\alpha$ is a design coefficient. The fitness function embodies the bi-criteria above: the classification accuracy of the SVM should be maximized while the number of data points in the subset, $\mathrm{card}(S_v)$, should be minimized. The first term is aimed at improving the classification performance, and the second term is aimed at the compactness of the SVM. The coefficient $\alpha$ plays an essential role in striking a balance between the classification performance and the classification cost. The parameters of the GA and SVM are given in Table 1.
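A sketch of how this fitness might be evaluated for one chromosome follows. The chromosome masks the candidate set: a 0 discards the corresponding candidate from the training set before the SVM is retrained. The normalization of the compactness term by card(S) is our assumption, made so that a small alpha trades off two terms on comparable scales; the paper states only that the two terms balance accuracy against compactness.

```python
# Hypothetical fitness evaluation for one chromosome: retrain the SVM
# without the discarded candidates, score it on the validation set, and
# penalize the fraction of retained candidates card(S_v) / card(S).
# The normalization by card(S) is an assumption, not from the paper.
import numpy as np
from sklearn.svm import SVC

def fitness(chromosome, X, y, cand_idx, X_val, y_val, C=1.0, alpha=0.1):
    # chromosome[k] == 0 marks candidate cand_idx[k] as an outlier.
    # X, y, X_val, y_val are assumed to be numpy arrays.
    discarded = {cand_idx[k] for k, bit in enumerate(chromosome) if bit == 0}
    keep = [i for i in range(len(X)) if i not in discarded]
    clf = SVC(kernel="linear", C=C).fit(X[keep], y[keep])
    accuracy = clf.score(X_val, y_val)               # first term
    compactness = sum(chromosome) / len(chromosome)  # card(S_v) / card(S)
    return accuracy - alpha * compactness
```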
Table 1. Experiment parameters.
In Table 1 , α is set to 0.1 to emphasize the classification accuracy over the classification cost.
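Putting the pieces together, a hypothetical driver (reusing the sketches above, with assumed names such as X_train and X_valid for user-prepared numpy arrays) would evolve a mask over the candidates and report the detected outliers:

```python
# Hypothetical driver tying the earlier sketches together. It assumes
# support_vector_candidates, run_ga, and fitness are defined as above
# and that X_train, y_train, X_valid, y_valid are numpy arrays.
cand_idx, _ = support_vector_candidates(X_train, y_train, C=1.0)
best_mask = run_ga(
    fitness_fn=lambda c: fitness(c, X_train, y_train, cand_idx,
                                 X_valid, y_valid, alpha=0.1),
    n_bits=len(cand_idx),
)
# Candidates whose bit evolved to 0 are treated as outliers and removed.
outliers = [cand_idx[k] for k, bit in enumerate(best_mask) if bit == 0]
print(f"detected {len(outliers)} outliers among {len(cand_idx)} candidates")
```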
4. Experimental Results
In this section, the validity of the proposed scheme is demonstrated by applying it to five databases from the UCI repository [21]. The UCI repository has been widely used within the pattern recognition community as a benchmark for machine learning algorithms. The five databases are the Wine, Haberman, Transfusion, German, and Pima sets. All the sets except Wine are binary; for the Wine set, only the first and second classes are used. The databases used in the experiments are summarized in Table 2.
Table 2. Datasets used in the experiments.
In this experiment, each database is randomly divided into four equal-sized subsets: two subsets are used for training and the remaining two for validation and testing. The training and validation sets are used to design the robust SVM, and the test sets are used to evaluate the performance of the algorithms. To demonstrate the robustness of the proposed method against outliers, approximately 5% and 10% of the training samples were randomly selected from each class and their labels were reversed, as sketched below. Five independent runs were performed for statistical verification, and the linear kernel was used for the SVM.
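The label-reversal step could look like the following sketch. The helper name, the seed, and the assumption of binary labels in {-1, +1} are ours.

```python
# Sketch of the outlier-injection step: reverse the labels of roughly
# `rate` of the training samples in each class. Assumes binary labels
# in {-1, +1}; helper name and seed are illustrative.
import numpy as np

def inject_label_noise(y, rate=0.05, seed=0):
    rng = np.random.default_rng(seed)
    y_noisy = y.copy()
    for cls in np.unique(y):
        idx = np.flatnonzero(y == cls)
        n_flip = max(1, int(round(rate * len(idx))))
        flip = rng.choice(idx, size=n_flip, replace=False)
        y_noisy[flip] = -cls  # reverse the binary label
    return y_noisy
```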
The performances of the proposed method and the general soft-margin SVM were compared in terms of average test accuracy and the number of support vectors. The results are summarized in Tables 3 and 4, in which the proposed robust SVM is denoted GASVM.

Table 3. Comparison of the proposed method (GASVM) with the standard SVM in terms of test accuracy.

Table 4. Comparison of the proposed method (GASVM) with the standard SVM in terms of the number of support vectors.

The standard SVM exhibits marginally better performance than the proposed method only for the Australian database in the non-outlier case. In the majority of cases, the proposed method achieves superior classification accuracy with fewer support vectors than the standard SVM; that is, the proposed method is less sensitive to outliers and requires fewer support vectors. Further, comparing the 5% and 10% outlier cases in Figures 3 and 4, it can be observed that for the standard SVM, the more outliers the training set contains, the more support vectors are generated and the more the performance degrades. The proposed method, in contrast, is less sensitive to the outliers and the increase in support vectors is limited. The improved performance of the proposed method arises because only useful, discriminatory support vectors are selected and the brunt of the outlier influence on SVM training is removed.

Figure 3. Correct classification ratio of the SVM and GASVM.

Figure 4. Number of support vectors of the SVM and GASVM.

To highlight the robustness of the proposed method, the test accuracy of the GASVM was normalized with respect to that of the standard SVM, and the relative performances of the two SVMs are presented in Figure 5. In this figure, the length of the bar $l$ denotes

$$l = \frac{C_{GASVM}}{C_{SVM}},$$

where $C_{GASVM}$ and $C_{SVM}$ are the correct classification rates of the GASVM and the standard SVM, respectively. From this figure, it is clear that the more outliers are included, the greater the advantage of the proposed method over the standard method.

Figure 5. Relative performance of the proposed method compared to a general SVM.
5. Conclusions
In this paper, we presented a new outlier detection method to improve the robustness of the SVM. The proposed method uses a GA to detect outliers among the support vector candidates assigned by the soft-margin SVM, and it achieved better recognition performance with fewer support vectors than the previous method. Using the proposed method, the robustness of the SVM was improved and the SVM was simplified by deleting outliers. The validity of the suggested method was demonstrated through experiments with five databases from the UCI repository.
Acknowledgements
This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (NRF-2013R1A2A2A01015624).
BIO
Heesung Lee received the BS, MS, and PhD degrees in electrical and electronic engineering from Yonsei University, Seoul, Korea, in 2003, 2005, and 2010, respectively. From 2011 to 2014, he was a managing researcher with the S1 Corporation, Seoul, Korea. Since 2015, he has been with the Department of Railroad Electrical and Electronics Engineering at Korea National University of Transportation, Gyeonggi-do, Korea, where he is currently an assistant professor. His current research interests include computational intelligence, biometrics, and intelligent railroad systems.
Euntai Kim received the BS (with top honors), MS, and PhD degrees in electronic engineering from Yonsei University, Seoul, Korea, in 1992, 1994, and 1999, respectively. From 1999 to 2002, he was a full-time lecturer with the Department of Control and Instrumentation Engineering at Hankyong National University, Gyeonggi-do, Korea. Since 2002, he has been with the School of Electrical and Electronic Engineering at Yonsei University, where he is currently a professor. He was a visiting scholar with the University of Alberta, Edmonton, Canada, in 2003, and is now a visiting researcher with the Berkeley Initiative in Soft Computing (BISC), UC Berkeley, USA. His current research interests include computational intelligence and machine learning and their application to intelligent service robots, unmanned vehicles, home networks, biometrics, and evolvable hardware.
References
[1] Cortes, C. and Vapnik, V. (1995). "Support-vector networks," Machine Learning, vol. 20, no. 3, pp. 273-297.
[2] Vapnik, V. N. (1998). Statistical Learning Theory. Wiley.
[3] Jun, S. (2008). "An outlier data analysis using support vector regression," Journal of The Korean Institute of Intelligent Systems, vol. 18, no. 6, pp. 876-880.
[4] Hoang, V., Le, M., and Jo, K. (2014). "Hybrid cascade boosting machine using variant scale blocks based HOG features for pedestrian detection," Neurocomputing, vol. 135, pp. 357-366. doi:10.1016/j.neucom.2013.12.017
[5] Seo, S., Yang, H., and Sim, K. (2008). "Behavior learning and evolution of swarm robot system using support vector machine," Journal of The Korean Institute of Intelligent Systems, vol. 18, no. 5, pp. 712-717. doi:10.5391/JKIIS.2008.18.5.712
[6] Shin, H., Jung, H., Cho, K., and Lee, J. (2012). "A prediction method of learning outcomes based on regression model for effective peer review learning," Journal of The Korean Institute of Intelligent Systems, vol. 22, no. 5, pp. 624-630. doi:10.5391/JKIIS.2012.22.5.624
[7] Kumar, S. (2005). Neural Networks: A Classroom Approach. McGraw-Hill.
[8] Wang, L., Jia, H., and Li, J. (2008). "Training robust support vector machine with smooth ramp loss in primal space," Neurocomputing, vol. 71, pp. 3020-3025. doi:10.1016/j.neucom.2007.12.032
[9] Lee, H., Hong, S., Lee, B., and Kim, E. (2010). "Design of robust support vector machine using genetic algorithm," Journal of The Korean Institute of Intelligent Systems, vol. 20, no. 3, pp. 375-379. doi:10.5391/JKIIS.2010.20.3.375
[10] Xu, L., Crammer, K., and Schuurmans, D. (2006). "Robust support vector machine training via convex outlier ablation," in Proc. 21st National Conference on Artificial Intelligence, pp. 536-542.
[11] Suykens, J. A. K. and Vandewalle, J. (1999). "Least squares support vector machine classifiers," Neural Processing Letters, vol. 9, no. 3, pp. 293-300. doi:10.1023/A:1018628609742
[12] Song, Q., Hu, W., and Xie, W. (2002). "Robust support vector machine with bullet hole image classification," IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, vol. 32, no. 4, pp. 440-448. doi:10.1109/TSMCC.2002.807277
[13] Krause, N. and Singer, Y. (2004). "Leveraging the margin more carefully," in Proc. 21st International Conference on Machine Learning, p. 69.
[14] Bartlett, P. and Mendelson, S. (2002). "Rademacher and Gaussian complexities: risk bounds and structural results," Journal of Machine Learning Research, vol. 3, pp. 463-482.
[15] Davis, L. (1991). Handbook of Genetic Algorithms. Van Nostrand Reinhold.
[16] Lee, H., Kim, E., and Park, M. (2007). "A genetic feature weighting scheme for pattern recognition," Integrated Computer-Aided Engineering, vol. 14, pp. 161-171.
[17] Kuncheva, L. and Jain, L. (1999). "Nearest neighbor classifier: simultaneous editing and feature selection," Pattern Recognition Letters, vol. 20, pp. 1149-1156. doi:10.1016/S0167-8655(99)00082-3
[18] Oh, I., Lee, J., and Moon, B. (2004). "Hybrid genetic algorithms for feature selection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 11, pp. 1424-1437. doi:10.1109/TPAMI.2004.105
[19] Juo, H. and Chang, H. (2004). "A new symbiotic evolution-based fuzzy-neural approach to fault diagnosis of marine propulsion systems," Artificial Intelligence, vol. 17, pp. 919-930.
[20] Michalewicz, Z. (1996). Genetic Algorithms + Data Structures = Evolution Programs. Springer.
[21] Murphy, P. M. and Aha, D. W. (1994). "UCI Repository for Machine Learning Databases," Technical Report, Dept. of Information and Computer Science, University of California, Irvine, CA.