Clustering methods are widely used in many fields such as data mining, classification, and object recognition. Both supervised and unsupervised grouping approaches can classify a series of sample data into a predefined or automatically determined number of clusters. However, there is no constraint on the number of elements in each cluster: the cluster sizes produced by existing clustering schemes are essentially arbitrary, so some clusters contain many elements whereas others have only a few. In some areas such as logistics management, a fixed number of members is preferred for each cluster or logistics center. Consequently, it is necessary to design a clustering method that can automatically adjust the number of group elements. In this paper, a k-means based clustering method with a fixed number of cluster members is proposed. First, the data samples are clustered using the k-means algorithm; then, the number of group elements is adjusted by employing a greedy strategy. Experimental results demonstrate that the proposed clustering scheme can efficiently classify data samples into clusters of fixed size.
1. INTRODUCTION
Rapid development of information technology over the last few decades has led to an explosion of digital data, which is widely used in many fields such as medical science, the military, banking, manufacturing, and logistics [1-6]. Consequently, automatically processing large amounts of information is essential. Thus far, many information-processing algorithms have been proposed and have considerably benefited people [7-8]. Among these algorithms, clustering methods are popular techniques that are widely used in data mining, object classification, pattern recognition, machine learning, image analysis, information retrieval, and bioinformatics [9-15]. There are two types of clustering approaches: supervised and unsupervised algorithms [16]. For supervised clustering, the category of each sample must be known in advance, whereas this is not required for unsupervised clustering.
However, to the best of our knowledge, the number of cluster members is arbitrary for all existing clustering methods, i.e., it depends on the clustering criteria such as statistical inference, similarity measurement, and density [17-18]. There is no mechanism for adjusting the number of cluster elements. Indeed, such adjustment is unnecessary in most cases. Nevertheless, in some fields such as logistics management, it can be necessary to adjust cluster members so that the number of cluster elements satisfies a predefined requirement. For example, when a logistics company intends to schedule its logistics centers and requires each center to be responsible for a fixed number of cities, conventional clustering algorithms cannot achieve this purpose. A clustering method with a fixed number of cluster members, on the other hand, is well suited to this type of requirement.
In this paper, we propose a k-means based clustering method with a fixed number of cluster members. First, the data samples or patterns extracted from objects are grouped into k categories using the k-means clustering algorithm [17-18]. Then, the number of cluster members is adjusted to satisfy the required number using a greedy strategy [19]. The adjustment procedure is the key step of the proposed clustering method and is explained in detail in Section 3. In addition, we improve on the work presented in [27]. The main difference between the proposed method and that in [27] is the adjustment order for the original clusters obtained with the k-means algorithm. In [27], the cluster with the fewest members is selected first at each iteration to adjust its members. In this paper, by contrast, we adjust at each iteration a cluster whose center point lies on the convex hull containing all of the unadjusted cluster center points. When there are only two clusters, the two methods lead to the same results, but the proposed method is superior to that in [27] when more than two clusters exist. We demonstrate that our method achieves good results when the number of cluster members is predefined and performs better than that in [27].
This paper is organized as follows. In Section 2, we describe the conventional kmeans clustering algorithm. In Section 3, we describe the procedure for the proposed clustering method with a fixed number of cluster elements. In Section 4, we present the experimental results. The conclusion is presented in Section 5.
2. K-MEANS CLUSTERING ALGORITHM
K-means [17-19] is a robust clustering method that has been successfully used in image segmentation, object classification, pattern recognition, data mining, and logistics management [20-24]. Hugo Steinhaus first presented the idea of k-means in 1957, and the standard k-means algorithm was proposed by Stuart Lloyd in the same year; however, the term "k-means" was first used by James MacQueen in 1967 [25-27]. It is one of the simplest yet most robust deterministic clustering algorithms. It aims to partition N observations or data samples into k user-defined clusters such that each observation belongs to the cluster with the nearest mean, which is regarded as the prototype of the corresponding cluster [27-28]. The flowchart of the well-known k-means approach due to Stuart Lloyd is presented in Fig. 1.
Fig. 1. Flowchart of the conventional k-means algorithm.
The flowchart in Fig. 1 can be explained in detail as follows. First, the number of clusters k has to be defined. Then, k points or observations are randomly chosen as the centroid points of the k clusters, each centroid point being viewed as the prototype of one cluster. The final clustering result depends on the choice of the initial k centroid points; in general, it is better to select k centroid points that are far away from each other [29]. Next, every sample point is classified into the cluster at the nearest distance, i.e., each point is compared with the k centroid points and assigned to the cluster whose centroid is at the minimum distance. The distance can be measured using the Minkowski, Euclidean, or City-block distance [18]. When all observations have been assigned to their corresponding clusters, one round of clustering is completed. The next step is to update the centroid point of each cluster, which is computed from the current cluster members obtained in the previous round. Consequently, each point can be assigned to a new centroid point under the shortest-distance criterion. This iteration continues until the centroid points converge.
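The assignment and update rounds described above can be sketched in a few lines of code. The following Python snippet is an illustrative sketch, not the authors' implementation; NumPy and the function name `kmeans` are our own choices. It implements Lloyd-style iterations with random initial centroids and Euclidean distances:

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Lloyd-style k-means: random initial centroids, nearest-centroid
    assignment, and centroid update, iterated until convergence."""
    rng = np.random.default_rng(seed)
    # Choose k distinct observations as the initial centroid points.
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        # Assignment step: each point joins the cluster of its nearest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: each centroid becomes the mean of its current members
        # (an empty cluster keeps its previous centroid).
        new_centroids = np.array([X[labels == i].mean(axis=0)
                                  if np.any(labels == i) else centroids[i]
                                  for i in range(k)])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return labels, centroids
```

Since the initial centroids are random, different seeds can lead to different local minima, which motivates the restart strategy discussed next.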
In fact, the abovementioned k-means algorithm attempts to minimize the within-cluster sum of squares. In other words, the k-means technique seeks to minimize the following term [27-29]:

arg min_S Σ_{i=1}^{k} Σ_{x_j ∈ S_i} d²(x_j, μ_i)    (1)

where k is the number of clusters, S_i is the set of members of the ith cluster, μ_i is the mean value of the observations in the ith cluster and is used to denote the prototype of the ith cluster, and d²(x_j, μ_i) is the squared distance between the sample x_j and the centroid point μ_i
. Because the k-means algorithm is highly dependent on the initial cluster centroids, Eq. (1) will in general converge only to a local minimum. In order to achieve a better minimum, the k-means algorithm can be run repeatedly and the result with the minimum within-cluster sum of squares chosen. Moreover, the third and fourth steps in Fig. 1 can be expressed as Eq. (2) and Eq. (3), respectively [27-28]:

S_i^{(t)} = { x_p : d²(x_p, μ_i^{(t)}) ≤ d²(x_p, μ_m^{(t)}) for all m, 1 ≤ m ≤ k }    (2)

μ_i^{(t+1)} = (1 / |S_i^{(t)}|) Σ_{x_j ∈ S_i^{(t)}} x_j    (3)
where (t) denotes the tth iteration and |S_i^{(t)}| represents the number of members in the ith set. Here, each point x_p is allotted to exactly one cluster S_i, even if x_p is at the same distance from more than one of the k centroid points.
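As a companion to Eq. (1), the within-cluster sum of squares of a given labeling can be evaluated directly. The sketch below is our own illustrative code (the function name `wcss` is an assumption, not from the paper):

```python
import numpy as np

def wcss(X, labels, centroids):
    """Within-cluster sum of squares, the objective of Eq. (1):
    sum over clusters of squared Euclidean distances to the centroid."""
    return float(sum(((X[labels == i] - c) ** 2).sum()
                     for i, c in enumerate(centroids)))
```

Running k-means several times with different seeds and keeping the labeling with the smallest `wcss` value realizes the restart strategy mentioned above.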
The computational time complexity of the general k-means algorithm is linearly proportional to the number of samples, the number of clusters, the number of observation dimensions, and the number of iterations until convergence. One drawback of the traditional k-means algorithm is that the clustering results can be affected by noise or outlier observations. However, this problem can be mitigated or avoided by applying outlier-analysis methods such as Random Sample Consensus (RANSAC) [30-31] to the sample data before running k-means. Thus far, many algorithms based on the k-means approach have been proposed and are widely used in image processing, computer vision, and data mining [20-23].
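As one simple illustration of such pre-filtering (deliberately far simpler than RANSAC, which the text cites), outliers can be dropped before clustering by thresholding each point's distance to the data mean. This sketch and its `z` parameter are our own illustrative choices:

```python
import numpy as np

def drop_outliers(X, z=3.0):
    """Simple pre-filtering sketch (not RANSAC): drop points whose distance
    to the data mean exceeds z standard deviations of those distances."""
    d = np.linalg.norm(X - X.mean(axis=0), axis=1)
    return X[d <= d.mean() + z * d.std()]
```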
3. PROCEDURES OF THE PROPOSED CLUSTERING METHOD
Based on the description of the k-means algorithm in Section 2, note that the number of members in each cluster after running k-means is not fixed but arbitrary. However, in some cases a fixed number of elements in each category is expected. For example, when k-means is used to group N observations into two clusters that must each contain N/2 members, the conventional k-means algorithm fails because it cannot adjust the cluster elements to the user's requirement. In this section, a k-means based clustering method with a fixed number of cluster members is proposed, which is useful in areas such as logistics management when the number of group elements needs to be adjusted to satisfy a predefined number. The procedure for the proposed clustering method is as follows:
Step 1: Classify the N observations into k clusters using the conventional k-means algorithm. Denote the resulting k centroid points as μ_1, μ_2, ..., μ_k.
Step 2: Calculate the center point of the set μ_1, μ_2, ..., μ_k, where μ_1, μ_2, ..., μ_k are the prototypes of the k clusters. The computed center point is denoted C_μ. From C_μ and μ_1, μ_2, ..., μ_k, the distance between C_μ and each μ_i (0 < i ≤ k) is obtained as d_i = d(C_μ, μ_i). This step is used to decide which cluster's members should be adjusted first. In [27], the cluster with the fewest members is selected for adjustment. However, that choice leads to errors when the cluster with the fewest elements is not located at the margin; here, a marginal cluster is one whose center point lies on the convex hull containing all of the cluster center points. This phenomenon is illustrated in the experiments.
Step 3: Choose the cluster j among the k clusters that has the maximum distance to C_μ, i.e., d_j = max(d_i = d(C_μ, μ_i)), 0 < i ≤ k, and adjust the elements of the jth cluster. Assume that the required numbers of elements for the k clusters are a_1, a_2, ..., a_k, where a_1 + a_2 + ... + a_k = N, and that the numbers of elements in the k clusters obtained from the k-means algorithm in Step 1 are n_1, n_2, ..., n_k. Once the cluster to be adjusted is determined, its current number of elements is denoted n_j (here, the jth cluster is selected). However, the desired number of elements for this cluster is unknown and must be chosen from the set (a_1, a_2, ..., a_k). The greedy strategy is applied to this choice: the value in (a_1, a_2, ..., a_k) that is closest to n_j is selected as the desired number of members for the jth cluster, because this value requires the fewest element adjustments. Assume the obtained desired number for the jth cluster is a_j.
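Steps 2 and 3 can be sketched as follows. Selecting the centroid farthest from the mean of all centroids is guaranteed to pick a point on the convex hull of the centroid set, which matches the margin condition of Step 2. The function name and interface below are our own illustrative assumptions:

```python
import numpy as np

def select_cluster_and_size(centroids, counts, targets):
    """Steps 2-3 (sketch): pick the cluster whose centroid lies farthest
    from the mean of all centroids (hence on their convex hull), then
    greedily pick the required size closest to its current member count."""
    centroids = np.asarray(centroids, dtype=float)
    c_mu = centroids.mean(axis=0)                 # center point C_mu of the centroids
    d = np.linalg.norm(centroids - c_mu, axis=1)  # d_i = d(C_mu, mu_i)
    j = int(d.argmax())                           # marginal cluster index
    # Greedy choice: the target size most similar to n_j needs the
    # fewest element adjustments.
    a_j = int(np.asarray(targets)[np.abs(np.asarray(targets) - counts[j]).argmin()])
    return j, a_j
```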
Step 4: The current number of elements in the jth cluster is n_j and the desired number is a_j. In this step, we determine whether this cluster needs to recruit new members or discard redundant ones, which is decided simply by comparing n_j with a_j. If n_j - a_j > 0, the jth cluster must discard n_j - a_j elements; if n_j - a_j < 0, a_j - n_j new members must be recruited into the jth cluster; and when n_j - a_j = 0, no adjustment is necessary. The recruitment and discard processes are designed as follows. Both embody the greedy strategy because they attempt to minimize the within-cluster sum of squares while adjusting the cluster elements.
Recruitment process: Recruit a_j - n_j members from the N - n_j observation samples (excluding the n_j points already in the jth cluster) under the shortest-distance criterion. In this paper, the Euclidean distance is employed, although other distance measures such as the Minkowski and City-block distances can also be adopted. Thus, the a_j - n_j points that are nearest to μ_j, the centroid point of the jth cluster, are recruited from the other clusters to reach the desired number of cluster members.
Discard process: The n_j - a_j elements to be removed from the jth cluster are also chosen under a shortest-distance criterion; however, the elements discarded are the n_j - a_j elements that are nearest to the centroid points of the other clusters. This may seem counterintuitive, because one might assume it would be better to discard the n_j - a_j elements that are farthest from μ_j, the centroid point of the current cluster. Nevertheless, doing so results in incorrect adjustments and considerably increases the within-cluster sum of squares, as demonstrated in Fig. 2. When one point needs to be discarded from cluster 2, discarding the point nearest to the other cluster's centroid (point p, nearest to cluster 1, in Fig. 2(a)) is better than discarding the point farthest from the current cluster's centroid (point q in cluster 2). Otherwise, the result affects the other cluster, as shown in Fig. 2(c): the discarded point q is recruited by cluster 1, which produces a poor element distribution in cluster 1 because its elements become considerably dispersed. On the other hand, discarding point p from cluster 2, as shown in Fig. 2(b), makes the adjustment reasonable.
Fig. 2. Illustration of the discard process. (a) Clustering result with k-means. (b) Adjustment result by discarding point p. (c) Adjustment result by discarding point q.
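The recruitment and discard rules of Step 4 can be sketched as below (an illustrative NumPy sketch; the function name and the in-place label update are our own choices). Recruitment pulls in the outside points nearest to μ_j, while the discard branch releases the members of cluster j that are nearest to some other centroid, reassigning each to that neighboring cluster as in Fig. 2(b):

```python
import numpy as np

def adjust_cluster(X, labels, centroids, j, a_j):
    """Step 4 (sketch): grow or shrink cluster j to exactly a_j members,
    modifying `labels` in place and returning it."""
    centroids = np.asarray(centroids, dtype=float)
    in_j = np.where(labels == j)[0]
    out_j = np.where(labels != j)[0]
    n_j = len(in_j)
    if n_j < a_j:
        # Recruitment: pull in the a_j - n_j outside points nearest to mu_j.
        d = np.linalg.norm(X[out_j] - centroids[j], axis=1)
        labels[out_j[np.argsort(d)[:a_j - n_j]]] = j
    elif n_j > a_j:
        # Discard: release the n_j - a_j members of j that are nearest to
        # some OTHER centroid, reassigning each to that nearest other cluster.
        others = [i for i in range(len(centroids)) if i != j]
        d_other = np.linalg.norm(
            X[in_j][:, None, :] - centroids[others][None, :, :], axis=2)
        order = np.argsort(d_other.min(axis=1))[:n_j - a_j]
        labels[in_j[order]] = np.array(others)[d_other[order].argmin(axis=1)]
    return labels
```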
Step 5: Set k = k - 1 and N = N - a_j, remove a_j from the set {a_1, a_2, ..., a_k}, and denote the updated set as {a_1, a_2, ..., a_{k-1}}. If k ≠ 1, go to Step 1 and continue the adjustment process. Otherwise, terminate the iteration and update all centroid points for the new cluster members under the criterion of minimizing the following term [27-28]:

Σ_{i=1}^{k} Σ_{x_j ∈ S_i} d²(x_j, μ_i)

where x_j is an element of the new set S_i of the ith cluster and μ_i is the updated centroid point of the ith cluster.
The procedure for the proposed clustering algorithm shows that the method can be implemented recursively, which makes the implementation convenient and efficient. The pseudo code for the k-means based clustering method with fixed cluster members is presented in Fig. 3. In Fig. 3, N = N - a_j refers to the remaining unadjusted data; that is, each recursion runs k-means only on the data remaining after removing the points in clusters that have already been adjusted.
Fig. 3. Pseudo code for the proposed k-means based clustering method.
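The overall recursion of Fig. 3 might be realized as in the sketch below. This is our own illustrative rendering, not the authors' code: for brevity it collapses the recruit/discard rules of Step 4 into simply keeping the a remaining points nearest to the selected centroid, which matches the recruitment rule exactly but only approximates the discard rule.

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    # Plain Lloyd iterations: nearest-centroid assignment / mean update.
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), size=k, replace=False)]
    lab = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        lab = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2).argmin(axis=1)
        C_new = np.array([X[lab == i].mean(axis=0) if np.any(lab == i) else C[i]
                          for i in range(k)])
        if np.allclose(C_new, C):
            break
        C = C_new
    return lab, C

def fixed_size_kmeans(X, sizes):
    """Whole procedure (sketch): cluster the remaining data, adjust the
    marginal cluster to its greedily chosen target size, freeze those
    members, and repeat on what is left (Steps 1-5).
    NOTE: the adjustment keeps the `a` remaining points nearest to the
    selected centroid, a simplification of the recruit/discard rules."""
    sizes = list(sizes)
    remaining = np.arange(len(X))          # indices not yet frozen
    final = np.full(len(X), -1)
    next_label = 0
    while len(sizes) > 1:
        lab, C = kmeans(X[remaining], len(sizes))
        d = np.linalg.norm(C - C.mean(axis=0), axis=1)
        j = int(d.argmax())                          # marginal cluster (Steps 2-3)
        n_j = int((lab == j).sum())
        a = min(sizes, key=lambda s: abs(s - n_j))   # greedy target size
        near = np.argsort(np.linalg.norm(X[remaining] - C[j], axis=1))[:a]
        final[remaining[near]] = next_label          # freeze these members
        next_label += 1
        remaining = np.delete(remaining, near)
        sizes.remove(a)
    final[remaining] = next_label          # the last cluster takes the rest
    return final
```

Whatever local minimum the inner k-means reaches, the returned cluster sizes equal the requested sizes by construction.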
4. EXPERIMENTAL RESULTS
In this section, a computer simulation is conducted to demonstrate the feasibility of the k-means based clustering method with a fixed number of cluster members. Concurrently, we compare our results with those of the method in [27], which leads to arbitrary results in some cases. The main difference between the proposed k-means based method and the one in [27] is the manner in which the first cluster, whose members are to be adjusted, is selected in each iteration. For clustering with only two clusters, the two approaches achieve the same results. However, when there are more than two clusters, the method in [27] is problematic in some cases. Consequently, the simulation in this paper is executed with two configurations: one with two clusters and the other with three clusters.
Fig. 4 shows the clustering results with two clusters. The total number of data samples is 60; they are randomly generated from two groups that follow Gaussian distributions, and the conventional k-means algorithm produces clusters with 20 and 40 members. Fig. 4(c) and 4(d) show the clustering results when the numbers of members of the two clusters are adjusted to occupy 50% and 50% of the total number of observations with the method in [27] and the proposed method, respectively. Fig. 4(e) and 4(f) are the adjustment results when the two clusters are adjusted to hold 20% and 80% of the observations with the method in [27] and the proposed method, respectively. It is noted that for the adjustment of cluster members with two clusters, the method in [27] and the proposed approach achieve the same acceptable performance. To view the results numerically, the within-cluster sums of squares [see Eq. (1)] for the two methods (the method in [27] and the method in this paper) with two clusters are given in Table 1. Table 1 shows that the within-cluster sums of squares of the two methods are identical in the two-cluster case.
Fig. 4. Clustering results with two clusters. (a) Data samples. (b) Clustering result using the conventional k-means algorithm. (c) and (d) Clustering results using the method in [27] and the proposed approach when the numbers of cluster elements are assigned to be 50% and 50% of the total samples, respectively. (e) and (f) Clustering results using the method in [27] and the proposed approach when the numbers of cluster elements are assigned to be 20% and 80% of the total samples, respectively.
Table 1. Within-cluster sum of squares for the method in [27] and the proposed method
Fig. 5 presents the clustering results with three clusters. 200 samples are randomly generated and classified with the conventional k-means algorithm, the clustering approach in [27], and the proposed clustering algorithm. These sample data are generated from three groups that follow Gaussian distributions. The numbers of elements in the three clusters obtained by the conventional k-means algorithm are 86, 48, and 66, as shown in Fig. 5(b). Fig. 5(c) and 5(d) are the clustering results with the method in [27] and the proposed method, respectively, when the cluster members are adjusted to be 33%, 33%, and 34% of the total data samples. Fig. 5(e) and 5(f) are the clustering results with the method in [27] and the proposed method, respectively, when the cluster elements are adjusted to be 20%, 40%, and 40% of the total data samples. The members of cluster 3 in Figs. 5(c) and 5(e) are distributed considerably poorly when the adjustment starts from cluster 1; the proposed method, on the other hand, makes up for this deficiency in [27]. The experimental results show that the proposed algorithm achieves better adjustment performance than that in [27] when multiple clusters are considered. In other words, the choice of the first cluster whose elements are updated is very important and affects the final clustering results. We note that the method in [27] can also adjust multiple cluster members well when the distribution of the data samples is not long and narrow, as shown in the experimental part of [27]. The within-cluster sums of squares [see Eq. (1)] for the two methods (the method in [27] and the method in this paper) with multiple clusters are provided in Table 2. The new method proposed in this paper achieves a much smaller within-cluster sum of squares than the method in [27] in the multiple-cluster case.
Fig. 5. Clustering results with three clusters. (a) Data samples. (b) Clustering result by the conventional k-means algorithm. (c) and (d) Clustering results with the method in [27] and the proposed approach when the numbers of cluster elements are assigned to be 33%, 33%, and 34% of the total samples, respectively. (e) and (f) Clustering results with the method in [27] and the proposed approach when the numbers of cluster elements are assigned to be 20%, 40%, and 40% of the total samples, respectively.
Table 2. Within-cluster sum of squares for the method in [27] and the proposed method
In this simulation, each point can be viewed as a city, while the centroid point of each cluster can be considered a logistics center. A logistics company sometimes needs each logistics center to handle business with a fixed number of cities, and the proposed method can then be applied to choose the cities (cluster members) for each center. In contrast, the number of cities per center will be arbitrary when a simple clustering algorithm such as k-means is applied to all of the cities; the cluster members must therefore be adjusted to satisfy the predefined number of cities managed by each logistics center. In addition, the proposed method can be helpful in image segmentation, object classification, and pattern recognition when some a priori knowledge, such as the occupation rate of each class of object, is available.
5. CONCLUSIONS
In this paper, a k-means based clustering method with a fixed number of cluster members was proposed and demonstrated. Experimental results show that the proposed algorithm works well when the number of cluster elements has to satisfy predefined values. Further, simulation results revealed that the proposed clustering method is superior to the conventional k-means algorithm and to a previously proposed fixed-size clustering algorithm. The algorithm is suitable for data sets with two or more clusters. The proposed method is based on the greedy strategy and can be implemented recursively, making it convenient and efficient. Moreover, the adjustment procedure can be applied to other clustering methods and is not limited to the k-means algorithm. The method would be useful in image segmentation, object classification, pattern recognition, and logistics management. Finally, outlier-analysis approaches can be applied to the data samples before performing the proposed clustering algorithm to make it robust against noise.
BIO
Faliu Yi received the B.E. degree from Yunnan University, Kunming, China, in 2008 and the M.E. degree in computer engineering from Chosun University, Gwangju, Korea, in 2012, and is currently working toward the Ph.D. degree in computer engineering at the same university. His research interests include 3D image processing, computer vision, image segmentation, object tracking, pattern recognition, and parallel computing.
Inkyu Moon received the B.S. degree in electronics engineering from Sungkyunkwan University, Korea, in 1996, and the Ph.D. degree in electrical and computer engineering from the University of Connecticut in 2007. He joined Chosun University, Korea, in 2009, and is currently an associate professor at the School of Computer Engineering. His research interests include digital holography, biomedical imaging, and optical information processing. Dr. Moon is a member of IEEE, OSA, and SPIE.
Kim J., "A Comparative Study on Classification Methods of Sleep Stages by Using EEG," Journal of Korea Multimedia Society, Vol. 17, No. 2, pp. 113-123, 2014. DOI: 10.9717/kmms.2014.17.2.113
Gudivada V. and Raghavan V., "Content Based Image Retrieval Systems," Computer, Vol. 28, No. 9, pp. 18-22, 1995. DOI: 10.1109/2.410145
Liberatore M. and Breem D., "Adoption and Implementation of Digital-imaging Technology in the Banking and Insurance Industries," IEEE Transactions on Engineering Management, Vol. 44, No. 4, pp. 367-377, 1997. DOI: 10.1109/17.649867
Wang B. and He S., "Robust Optimization Model and Algorithm for Logistics Center Location and Allocation under Uncertain Environment," Journal of Transportation Systems Engineering and Information Technology, Vol. 9, No. 2, pp. 69-74, 2009. DOI: 10.1016/S1570-6672(08)60056-2
Chen R. and Li Y., "Logistics Center Locating Based on TOPSIS Method," Journal of Anhui University of Technology, Vol. 23, No. 2, pp. 221-224, 2006.
Näätänen R., "The Role of Attention in Auditory Information Processing as Revealed by Event-related Potentials and Other Brain Measures of Cognitive Function," Behavioral and Brain Sciences, Vol. 13, No. 2, pp. 201-233, 1990. DOI: 10.1017/S0140525X00078407
Wu C., Shen B., and Chen Q., "Vehicle Classification in Pan-tilt-zoom Videos via Sparse Learning," Journal of Electronic Imaging, Vol. 22, No. 4, 041102, 2013. DOI: 10.1117/1.JEI.22.4.041102
Schwartz W.R., de Siqueira F.R., and Pedrini H., "Evaluation of Feature Descriptors for Texture Classification," Journal of Electronic Imaging, Vol. 21, No. 2, 023016, 2012. DOI: 10.1117/1.JEI.21.2.023016
Nadas A., Mercer R., Bahl L., Bakis R., Cohen P., Cole A., Jelinek F., and Lewis B., "Continuous Speech Recognition with Automatically Selected Acoustic Prototypes Obtained by Either Bootstrapping or Clustering," Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, Vol. 6, pp. 113-1155, 1981.
Gu C., Zhang S., and Sun Y., "Real-time Encrypted Traffic Identification Using Machine Learning," Journal of Software, Vol. 6, No. 6, pp. 1009-1016, 2011.
Patil R.A., Sahula V., and Mandal A.S., "Features Classification Using Support Vector Machine for a Facial Expression Recognition System," Journal of Electronic Imaging, Vol. 21, No. 4, 043003, 2012. DOI: 10.1117/1.JEI.21.4.043003
Rangayyan R. and Oloumi F., "Fractal Analysis and Classification of Breast Masses Using the Power Spectra of Signatures of Contours," Journal of Electronic Imaging, Vol. 21, No. 2, 023018, 2012. DOI: 10.1117/1.JEI.21.2.023018
Bemis K., Eberlin L., Ferreira C., Cooks R., and Vitek O., "Spatial Segmentation and Feature Selection for DESI Imaging Mass Spectrometry Data with Spatially-aware Sparse Clustering," BMC Bioinformatics, Vol. 13, A8, 2012. DOI: 10.1186/1471-2105-13-S18-A8
Guerra L., McGarry L., Robles V., Bielza C., Larranaga P., and Yuste R., "Comparison between Supervised and Unsupervised Classifications of Neuronal Cell Types: A Case Study," Developmental Neurobiology, Vol. 71, No. 1, pp. 71-82, 2011. DOI: 10.1002/dneu.20809
Gonzalez R. and Woods R., Digital Image Processing, Prentice Hall, Englewood Cliffs, NJ, USA, 2002.
Gose E. and Johnsonbaugh R., Pattern Recognition and Image Analysis, Prentice Hall, Upper Saddle River, NJ, USA, 1996.
Kleinberg J. and Tardos E., Algorithm Design, Pearson Education, England, 2005.
Paul T. and Bandhyopadhyay S., "Segmentation of Brain Tumor from Brain MRI Images Reintroducing K-Means with Advanced Dual Localization Method," International Journal of Engineering Research and Applications, Vol. 2, No. 3, pp. 226-231, 2012.
Ramamurthy B. and Chandran K., "CBMIR: Shape-based Image Retrieval Using Canny Edge Detection and K-means Clustering Algorithms for Medical Images," International Journal of Engineering Science and Technology, Vol. 3, No. 3, pp. 209-212, 2011.
Ngai E., Xiu L., and Chau D., "Application of Data Mining Techniques in Customer Relationship Management: A Literature Review and Classification," Expert Systems with Applications, Vol. 36, No. 2, pp. 2592-2602, 2009. DOI: 10.1016/j.eswa.2008.02.021
He Y. and Zhen Q., "Logistics Customer Segmentation Modeling on Attribute Reduction and K-Means Clustering," Journal of Communication and Computer, Vol. 10, pp. 1114-1119, 2013.
Xiao J., Hays J., Ehinger K., Oliva A., and Torralba A., "SUN Database: Large-scale Scene Recognition from Abbey to Zoo," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3485-3492, 2010.
Steinhaus H., "Sur la Division des Corps Matériels en Parties," Bulletin of the Polish Academy of Science, Vol. 4, No. 12, pp. 801-804, 1957.
Yi F. and Moon I., "Extended K-Means Algorithm," Proceedings of the IEEE International Conference on Intelligent Human-Machine Systems and Cybernetics, pp. 263-266, 2013.
Costantini L., Capodiferro L., Carli M., and Neri A., "Texture Segmentation Based on Laguerre Gauss Functions and K-means Algorithm Driven by Kullback-Leibler Divergence," Journal of Electronic Imaging, Vol. 22, No. 4, 043015, 2013. DOI: 10.1117/1.JEI.22.4.043015
Liu R., Dey D., Boss D., Marquet P., and Javidi B., "Recognition and Classification of Red Blood Cells Using Digital Holographic Microscopy and Data Clustering with Discriminant Analysis," Journal of the Optical Society of America A, Vol. 28, No. 6, pp. 1204-1210, 2011. DOI: 10.1364/JOSAA.28.001204
Fang S., Shi Q., and Cao Y., "Adaptive Removal of Real Noise from a Single Image," Journal of Electronic Imaging, Vol. 22, No. 3, 033014, 2013. DOI: 10.1117/1.JEI.22.3.033014
Hui C. and Wang R., "The Application of Random Sample Consensus in Photogrammetric Relative Orientation," Proceedings of the IEEE International Conference on Information Science and Engineering, pp. 4113-4116, 2010.