We consider the problem of designing independently operating local quantizers at the nodes of distributed estimation systems, in which many spatially distributed sensor nodes measure a parameter of interest, quantize these measurements, and send the quantized data to a fusion node, which conducts the parameter estimation. Motivated by the observation that the estimation accuracy can be improved by using quantized data with a high probability of occurrence, we propose an iterative algorithm with a simple design rule that produces quantizers by searching for boundary values with an increased likelihood. We prove that this design rule generates a considerably reduced interval for finding the next boundary values, yielding a low design complexity. We demonstrate through extensive simulations that the proposed algorithm achieves a significant performance gain with respect to traditional quantizer designs. A comparison with recently published novel algorithms further illustrates the benefit of the proposed technique in terms of performance and design complexity.
I. INTRODUCTION
In distributed estimation systems, many sensor nodes randomly placed at known locations collect sensor readings for a parameter of interest and quantize them before transmission to a fusion node, where the parameter estimation is executed on the basis of the received data. For power-constrained systems such as sensor networks, locally quantizing the measurement at each node has been a crucial task, particularly when data exchange between sensor nodes is precluded because of the communication overhead that it entails. Thus, the problem of designing locally operating quantizers has been addressed by many researchers, who have developed practically efficient quantization techniques showing significant performance gains with respect to standard designs [1-8]. The generalized Lloyd design framework has been adopted to design independently operating quantizers for distributed systems that minimize global metrics such as the estimation error. However, design difficulties arise because the Lloyd design was developed to minimize a local metric, i.e., the reconstruction error; in particular, the quantization partitions should be constructed so as to minimize the global metric, and encoding local measurements into such partitions at each node would require the measurements from the other nodes to compute the metric, precluding local or independent encoding.
To optimize the global metric at each iteration and ensure independent quantization, novel algorithms have been presented; for example, for distributed detection systems, the distance between the distributions under the two hypotheses was used to generate a manageable design procedure [2]. A heuristic algorithm that minimizes the upper bound of the probability of error was devised for a distributed detection system [1]. Lam and Reibman [8] derived the necessary conditions for optimal quantization partitions in distributed estimation systems. A weighted cost function (i.e., local + λ × global) was employed in the design process for acoustic sensor networks [6, 7] to guarantee a nonincreasing cost function without deviating from the regular design framework, which is accomplished by a search for appropriate weights. Further, note that the resulting scalar quantizers should be nonregular in order to reduce the distortion, implying that the same code word is assigned to several disjoint intervals [9]. Regular quantizers have been shown to be systematically transformable into nonregular ones in order to achieve a substantial improvement in the estimation performance [10]. Recent work [4] has focused on designing nonregular quantizers for distributed estimation systems in the Lloyd design framework. As expected, nonregular quantization achieves a significant performance gain at the cost of increased design complexity as compared to regular designs.
In this paper, we focus on one particular estimation technique, maximum likelihood (ML) estimation, for quantizer design, and we propose an iterative design algorithm that seeks boundary values with an increased likelihood for the construction of quantization partitions. This approach is motivated by the observation that the estimation performance can be significantly improved by using received quantized data with a high probability of occurrence. We present a simple design rule that allows us to determine the interval in which boundary values with an increased likelihood are very likely to be found. We prove that the design rule is guaranteed to generate substantially reduced intervals for the boundary values, facilitating a rapid construction of the quantization partitions. We evaluate the proposed algorithm through extensive experiments, demonstrating that an obvious design benefit can be gained in terms of performance and design complexity as compared to typical designs and recently published novel techniques [4, 7].
The rest of this paper is organized as follows: the problem formulation of the quantizer design is provided in Section II. A simple design rule for boundary values is presented in Section III-A, and the proposed iterative design algorithm is summarized in Section III-B and applied to a source localization system in acoustic sensor networks; this application is briefly discussed in Section IV. The simulation results are given in Section V, and the conclusions are presented in Section VI.
II. PROBLEM FORMULATION
Assume that there are M sensor nodes deployed at certain known spatial locations, denoted by x_i ∈ R^2, i = 1, …, M, and that each node measures the unknown parameter θ in a sensor field S ⊂ R^N to be estimated. The measurement z_i at node i can therefore be expressed as follows:

z_i = f_i(θ) + ω_i,  i = 1, …, M,   (1)

where f_i(θ) represents the sensing model employed at node i and ω_i denotes the measurement noise, which can be approximated by the normal distribution N(0, σ_i^2). Note that the measurements are assumed to be independent of each other given the parameter; that is, P(z_1, …, z_M | θ) = Π_{i=1}^{M} P(z_i | θ). It is also assumed that the i-th node is equipped with an R_i-bit quantizer α_i(z_i) with the dynamic range [b_i^1, b_i^{L_i+1}] to generate the quantization index Q_i ∈ I_i = {1, …, 2^{R_i} = L_i}, where L_i denotes the number of quantization levels and {b_i^j}, j = 1, …, L_i + 1, represents the set of boundary values that determines the quantization partitions. For example, the quantizer α_i generates the j-th quantization index whenever z_i belongs to the j-th quantization partition given by the boundary values [b_i^j, b_i^{j+1}]. It is assumed that each node quantizes its measurement and sends its quantization index to a fusion node, which produces the estimate of the parameter on the basis of the quantization indices Q_1, …, Q_M from all nodes.
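As a minimal illustration of the measurement-and-encoding chain at one node (a hypothetical sketch with toy boundary values; `measure` and `quantize` are illustrative names, not the paper's code):

```python
import bisect
import random

def measure(f_i, theta, sigma_i, rng):
    """Noisy measurement z_i = f_i(theta) + w_i with w_i ~ N(0, sigma_i^2)."""
    return f_i(theta) + rng.gauss(0.0, sigma_i)

def quantize(z_i, boundaries):
    """Map z_i to the index j such that boundaries[j-1] <= z_i < boundaries[j].

    `boundaries` holds b_i^1, ..., b_i^{L_i+1}; values outside the dynamic
    range are clipped into the first/last bin.
    """
    j = bisect.bisect_right(boundaries, z_i)       # position within the boundary list
    return min(max(j, 1), len(boundaries) - 1)     # clip to 1..L_i

# Example: a 2-bit quantizer (L_i = 4) with uniform boundaries on [0, 8]
boundaries = [0.0, 2.0, 4.0, 6.0, 8.0]
rng = random.Random(0)
z = measure(lambda th: th, theta=4.9, sigma_i=0.0, rng=rng)
print(quantize(z, boundaries))  # z = 4.9 falls in the 3rd bin
```

Only the index, never the raw measurement, is transmitted to the fusion node.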
 A. Unconstrained Quantization
Obviously, we can make a better decision with more information. Suppose that a local measurement z_i(θ) belongs to the k-th quantization bin at sensor i, i.e., α_i(z_i(θ)) = k. Since only the measurement z_i is available to sensor i, the encoder will send the index of the interval to which z_i belongs. Such standard regular quantizers apply decision rules based on the local distance (see Fig. 1). However, once we know the indices from the other sensors, we can make a better quantization decision so as to improve the estimation accuracy. Formally, for some parameter θ, we may have

||θ − θ̂(Q_1, …, Q_{i−1}, j, Q_{i+1}, …, Q_M)|| < ||θ − θ̂(Q_1, …, Q_{i−1}, k, Q_{i+1}, …, Q_M)||  for some j ≠ k,   (2)

where θ̂(·) denotes the estimate produced at the fusion node from the given indices, α_i(z_i(θ)) = k, and the other encoders α_m, ∀m ≠ i, are fixed.
Fig. 1. Standard quantization vs. unconstrained quantization.
This indicates that if the sensor reading z_i is instead assigned to the j-th bin at sensor i, the estimation error can be reduced. Such quantization is called unconstrained quantization (UQ), denoted α_i^U(·), since the quantization is conducted with all the available quantized sensor readings. UQ would be possible if local communication between sensors were allowed at the expense of increased power consumption. However, since we adopt a framework in which only communication between the sensors and the fusion node is permitted, UQ is practically impossible to conduct. Here, we propose to design quantizers by exploiting the concept of UQ. Notice that UQ may yield a mapping α_i^U different from α_i, which can be constructed as follows:

α_i^U(z_i) = j  if z_i ∈ V_i^j,  V_i^j = {z_i(θ) : j = arg min_l ||θ − θ̂(Q_1, …, Q_{i−1}, l, Q_{i+1}, …, Q_M)||},   (3)

Clearly, the set V_i^j contains sample measurements for which better estimation accuracy would be achieved if they were assigned to the j-th quantization bin. Ideally, if we could obtain the same mapping α_i as α_i^U, we could minimize the estimation error by designing such a quantizer, although this is impossible in our framework.
In this work, we focus on a quantizer design based on the sets V_i^j. Note that there may be several techniques for exploiting V_i^j in order to design independent quantizers that reduce the estimation error; see [6, 7], where a simple distance rule is applied to obtain quantizers with the possibility of divergence. Here, we seek to design scalar quantizers by presenting a design rule for searching boundary values that are very likely to increase the likelihood and, thus, improve the estimation accuracy at each step.
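Conceptually, the UQ mapping can be sketched as follows (a hypothetical toy, with the fusion-node estimator abstracted as a callable; `uq_index` and the bin-center averaging estimator are illustrative stand-ins, not the paper's code):

```python
def uq_index(theta, i, indices, L_i, estimate):
    """Unconstrained quantization: try every candidate bin j for sensor i,
    keeping the other sensors' indices fixed, and return the bin whose
    resulting fusion-node estimate is closest to the true parameter theta."""
    best_j, best_err = None, float("inf")
    for j in range(1, L_i + 1):
        trial = list(indices)
        trial[i] = j                      # hypothetically assign bin j at sensor i
        theta_hat = estimate(trial)       # fusion-node estimate from all indices
        err = abs(theta - theta_hat)
        if err < best_err:
            best_j, best_err = j, err
    return best_j

# Toy example: a (hypothetical) estimator that averages bin centers
centers = {1: 1.0, 2: 3.0, 3: 5.0, 4: 7.0}
estimate = lambda idx: sum(centers[q] for q in idx) / len(idx)
# True parameter 4.0; sensor 0 currently reports bin 1, sensor 1 reports bin 3
print(uq_index(4.0, 0, [1, 3], 4, estimate))  # sensor 0 would better report bin 2
```

In the paper's framework this evaluation requires the other sensors' indices, which is exactly why UQ cannot be run locally and is instead used offline on training data.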
III. QUANTIZER DESIGN ALGORITHM
 A. Design Criterion: Maximum Likelihood
The estimation is typically conducted using criteria such as ML or minimum mean square error (MMSE). Suppose that there are quantized observations Q_1, …, Q_M, on the basis of which we can build estimators. For example, the ML estimator θ̂_ML is given by

θ̂_ML = arg max_θ L(θ; Q),   (4)

where Q = (Q_1, …, Q_M) denotes the observations sent by the M sensor nodes, and the likelihood function L(θ; Q) is identical to the joint probability density function (PDF) of Q but is viewed as a function of θ. Similarly, the MMSE estimator is obtained using the conditional mean E(θ | Q). Note that the estimators explicitly use the observations, and thus, the performance depends heavily upon them. The following questions arise here: What if better observations are received for the estimation? Is there any way of generating better observations for the given estimators so as to achieve good estimation accuracy?
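To make the ML estimator concrete, the following sketch (assumed Gaussian noise, a brute-force grid over θ, and toy sensing models; this is an illustration, not the paper's implementation) computes P(Q_i | θ) as the Gaussian probability mass of the reported bin and maximizes the product:

```python
import math

def normal_cdf(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def likelihood(theta, indices, models, boundaries, sigma):
    """L(theta; Q) = prod_i P(Q_i | theta), where P(Q_i = j | theta) is the
    probability that z_i = f_i(theta) + w_i lands in the j-th bin."""
    L = 1.0
    for f_i, j, bnd in zip(models, indices, boundaries):
        lo, hi = bnd[j - 1], bnd[j]
        L *= normal_cdf(hi, f_i(theta), sigma) - normal_cdf(lo, f_i(theta), sigma)
    return L

def ml_estimate(indices, models, boundaries, sigma, grid):
    """Grid-search ML estimator: the candidate theta maximizing L(theta; Q)."""
    return max(grid, key=lambda th: likelihood(th, indices, models, boundaries, sigma))

# Toy example with two sensors observing theta directly (f_i(theta) = theta)
b = [0.0, 2.0, 4.0, 6.0, 8.0]
models = [lambda th: th, lambda th: th]
grid = [k * 0.1 for k in range(81)]          # candidate theta values on [0, 8]
theta_hat = ml_estimate([3, 3], models, [b, b], sigma=0.5, grid=grid)
print(round(theta_hat, 1))                    # both indices point to bin [4, 6)
```

Both sensors reporting the bin [4, 6) drives the estimate to the bin center, illustrating how the received indices determine the likelihood surface.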
Suppose that there are two quantizers α_i and α_i′ at sensor i, which generate the observations Q_i and Q_i′, respectively, for a given parameter θ to be estimated. We construct the ML estimators denoted by θ̂^1 and θ̂^2 on the basis of the observations Q^1 = (Q_1, …, Q_i, …, Q_M) and Q^2 = (Q_1, …, Q_i′, …, Q_M), respectively. Formally,

θ̂^1 = arg max_θ P(Q^1 | θ),  θ̂^2 = arg max_θ P(Q^2 | θ).

Then, which estimator would provide a better estimate of θ? If max_θ P(Q^1 | θ) ≥ max_θ P(Q^2 | θ), we can expect Q^1 to be more likely to occur than Q^2. Equivalently, α_i can be said to generate a better observation than α_i′. On the basis of this discussion, we show that UQ can be constructed to find better observations, which in turn will be used for designing better quantizers.
In this work, we consider a quantizer design that generates better observations in terms of the ML criterion, because the ML estimator is widely used owing to its low complexity and reasonable performance. Then, the quantization partition for UQ is constructed as follows:

V_i^j = {z_i(θ) : j = arg max_l max_θ P(Q_1, …, Q_{i−1}, l, Q_{i+1}, …, Q_M | θ)},  α_i^U(z_i) = j  if z_i ∈ V_i^j.   (5)
Our strategy for the quantizer design is to update the boundary values of our quantizer α_i whenever there are sample measurements z_i(θ) with α_i(z_i) ≠ α_i^U(z_i), such that the ML-based estimation error in (5) is minimized. In our design, each boundary value is updated in a sequential manner; for example, to update b_i^j, we collect the sample measurements z_i(θ) with α_i(z_i) = j − 1 and α_i^U(z_i) = j, or with α_i(z_i) = j and α_i^U(z_i) = j − 1, and find the boundary value maximizing the likelihood:

b̂_i^j = arg max_b max_θ L(θ; Q, b).   (6)

We present a design rule and prove that it provides a substantially reduced interval in which boundary values with an increased likelihood are very likely to be found.
Now, we are in a position to prove the theorem that states a simple design rule to determine the search interval for b_i^j; this rule is applicable to the other boundary values as well.
Theorem 1. Suppose that the quantization partitions are constructed for UQ by using (5) to generate α_i^U, and that the samples z_i with α_i(z_i) = j − 1 and α_i^U(z_i) = j (or the samples with α_i(z_i) = j and α_i^U(z_i) = j − 1) are collected to form the set denoted by S_i^{j−} (or S_i^{j+}). Let b_i^j be the j-th boundary value for α_i at the current iteration and b̂_i^j be the updated value of b_i^j for the updated quantizer at the next iteration for S_i^{j−} and S_i^{j+}, respectively. Then,

L(θ; Q, b̂_i^j) ≥ L(θ; Q, b_i^j),  b̂_i^j ∈ [min_{z_i ∈ S_i^{j−}} z_i − δ, b_i^j]  or  b̂_i^j ∈ [b_i^j, max_{z_i ∈ S_i^{j+}} z_i + δ],   (7)

where L(θ; Q, b_i^j) represents the likelihood function computed by using b_i^j, and δ denotes a sufficiently small positive value chosen so that the respective intervals remain within the adjacent boundary values b_i^{j−1} and b_i^{j+1} for S_i^{j−} and S_i^{j+}, respectively.
Proof. We first prove the case of the samples z_i ∈ S_i^{j−}. In this case, the quantizer in the next iteration is updated by using the rule presented in (6), and the boundary value is taken from the interval [min_{z_i ∈ S_i^{j−}} z_i − δ, b_i^j] with the other boundary values unchanged. Then, we have

max_θ P(Q_1, …, Q̂_i, …, Q_M | θ) = max_θ P(Q_1, …, α_i^U(z_i), …, Q_M | θ)   (8)
 ≥ max_θ P(Q_1, …, Q_i, …, Q_M | θ),   (9)

where Q̂_i and Q_i indicate the quantization indices generated by the updated quantizer and α_i, respectively. Note that the second term in (8) reduces to the likelihood generated by α_i^U because α_i^U(z_i) = j for z_i ∈ S_i^{j−}, and (9) follows from the UQ construction given in (5). Similarly, we can prove the case of the samples z_i ∈ S_i^{j+}. Therefore, we conclude that the boundary values increasing the likelihood function are very likely to be found in the reduced interval given in (10).
Thus, the theorem states that quantizers can be designed to produce better observations in terms of the ML criterion by searching only the reduced interval for the next boundary value b̂_i^j, whereas the typical interval for b̂_i^j in a quantizer design would be [b_i^{j−1}, b_i^{j+1}] in the case of a random sequential search framework (see Fig. 2). Formally,

b̂_i^j ∈ [min_{z_i ∈ S_i^{j−}} z_i − δ, b_i^j] ∪ [b_i^j, max_{z_i ∈ S_i^{j+}} z_i + δ] ⊂ [b_i^{j−1}, b_i^{j+1}].   (10)

Fig. 2. Sequential search framework: the next boundary value is updated by searching the reduced interval.
In our experiments, we demonstrated that the update rule given in (10) allows us to efficiently design quantizers that reduce the ML-based estimation error at each iteration as compared to a random search for the boundary values.
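The reduced-interval boundary search can be sketched as follows (hypothetical helper names; the likelihood evaluation of a candidate boundary is abstracted as a `score` callable rather than the full L(θ; Q, b) computation):

```python
def update_boundary(b_j, S_minus, S_plus, score, delta=1e-3, n_grid=50):
    """Search the reduced interval implied by the disagreement sets.

    S_minus: samples with alpha_i = j-1 but alpha_i^U = j (boundary should move down);
    S_plus:  samples with alpha_i = j   but alpha_i^U = j-1 (boundary should move up);
    score:   callable returning the ML-based fitness of a candidate boundary.
    """
    lo = min(S_minus) - delta if S_minus else b_j
    hi = max(S_plus) + delta if S_plus else b_j
    # Candidate boundaries on the reduced interval [lo, hi] instead of the
    # full interval [b^{j-1}, b^{j+1}] of a random sequential search
    candidates = [lo + (hi - lo) * k / n_grid for k in range(n_grid + 1)]
    return max(candidates, key=score)

# Toy example: samples just below the current boundary 4.0 prefer bin j,
# so the boundary should move down toward them
S_minus, S_plus = [3.6, 3.8], []
new_b = update_boundary(4.0, S_minus, S_plus, score=lambda bb: -abs(bb - 3.5))
print(round(new_b, 3))  # boundary moves down toward the misassigned samples
```

The gain over a random search comes entirely from the shorter candidate interval, not from a different scoring rule.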
 B. Summary of Proposed Algorithm
Given the number of quantization levels,
L_{i}
= 2
^{Ri}
, at sensor
i
, the algorithm is summarized as follows and is iteratively conducted over all sensors
i
= 1, ...,
M
until no change in
α_{i}
,
i
= 1, ...,
M
is achieved.
Algorithm 1. Iterative quantizer design algorithm at sensor i
Step 1: Initialize the encoder. Set the threshold ϵ and the iteration index κ = 1.
Step 2: Construct the partitions V_i^j by using (5).
Step 3: Update each boundary value sequentially by using (10).
Step 4: Compute the ML-based estimation error in the current iteration, denoted by D^κ.
Step 5: If |D^κ − D^{κ−1}| < ϵ, stop; otherwise, continue with Step 6.
Step 6: κ = κ + 1 and go to Step 2.
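The outer loop of the algorithm can be sketched structurally as follows (the callables `build_uq_partition`, `update_boundaries`, and `ml_error` are hypothetical stand-ins for Eq. (5), the rule in (10), and the Step-4 metric; this is a skeleton, not the paper's code):

```python
def design_quantizers(quantizers, training_set,
                      build_uq_partition, update_boundaries, ml_error,
                      eps=1e-4, max_iter=100):
    """Iterate Steps 2-6 of Algorithm 1 over all M sensors until the
    ML-based estimation error changes by less than eps."""
    prev_metric = float("inf")
    for kappa in range(1, max_iter + 1):                         # Step 1: kappa = 1
        for i, q in enumerate(quantizers):
            V = build_uq_partition(i, quantizers, training_set)  # Step 2
            update_boundaries(q, V)                              # Step 3
        metric = ml_error(quantizers, training_set)              # Step 4
        if abs(prev_metric - metric) < eps:                      # Step 5: converged
            return quantizers, kappa
        prev_metric = metric                                     # Step 6: next pass
    return quantizers, max_iter

# Toy run with stub callables and a metric that stops improving
errs = iter([1.0, 0.5, 0.5])
_, k = design_quantizers([object()], [],
                         build_uq_partition=lambda i, q, t: None,
                         update_boundaries=lambda q, V: None,
                         ml_error=lambda q, t: next(errs))
print(k)  # converges on the 3rd pass
```

Passing the three steps in as callables keeps the convergence logic separate from the (application-specific) likelihood computations.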
IV. APPLICATION OF DESIGN ALGORITHM
We briefly introduce a source localization system in acoustic sensor networks for the evaluation of the proposed algorithm. For collecting the sensor readings at the nodes, an energy decay model was proposed and verified by the field experiment in [11] and was also used in [12-14]. In particular, the signal energy measured at sensor i, denoted by z_i, can be written as

z_i = g_i · a / ||θ − x_i||^α + w_i,   (11)

where θ denotes the source location to be estimated, and the sensing model f_i(θ) consists of the gain factor g_i of the i-th sensor and an energy decay factor α that is approximately equal to 2. The source signal energy a is assumed to be known in our evaluation, although the signal energy is typically unknown and can be jointly estimated with the source location [14]. The measurement noise term w_i can be approximated by using a normal distribution, N(0, σ_i^2).
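A sketch of this sensing model (using the nominal values a = 50, g_i = 1, and α = 2 from the simulations; `energy_reading` is an illustrative name, not the paper's code):

```python
import math
import random

def energy_reading(theta, x_i, a=50.0, g_i=1.0, alpha=2.0, sigma_i=0.0, rng=None):
    """Acoustic energy decay model: z_i = g_i * a / ||theta - x_i||^alpha + w_i."""
    d = math.dist(theta, x_i)                      # distance from source to sensor
    noise = rng.gauss(0.0, sigma_i) if rng else 0.0
    return g_i * a / d ** alpha + noise

# A source at (5, 5) seen by a sensor at (5, 0): distance 5, so z = 50 / 25 = 2.0
print(energy_reading((5.0, 5.0), (5.0, 0.0)))
```

The inverse-square decay means nearby sensors dominate the likelihood, which is what makes the quantizer placement of boundary values at each sensor matter.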
V. SIMULATION RESULTS
In this section, we first generate a training set under the assumption of a uniform distribution of the source locations, with the model parameters in (11) given by a = 50, α = 2, g_i = 1, and σ_i^2 = σ^2 = 0, and iteratively optimize the quantizers by using the algorithm proposed in Section III-B. In designing our quantizer, denoted the maximum likelihood-based quantizer (MLQ), we use the equally distance-divided quantizer (EDQ) for the initialization of the quantizers. Note that EDQ was first introduced in [6] and verified to be very efficient for initialization because of its good localization performance in acoustic sensor networks [7]. We also designed typical quantizers such as uniform quantizers and Lloyd quantizers from the same training sets for the sake of comparison. In the experiments, we generated 100 different configurations of M = 5 sensor nodes deployed in a sensor field measuring 10 m × 10 m. Further, for each configuration, we designed quantizers by using the various algorithms and evaluated them by generating test sets of 1000 source locations under the same simulation conditions. For the performance comparison, we computed the average localization error by using the ML estimation for fast computation.
 A. Performance Comparison with Typical Designs
In this experiment, we examined the performance gain achieved by our quantizer with respect to typical designs such as the uniform quantizer (Unif Q) and the Lloyd quantizer (Lloyd Q). Furthermore, we designed quantizers, denoted random search-based quantizers (RSQs), by randomly searching the boundary values without using the proposed rule. The random search continued iteratively until the design complexity matched that of the proposed algorithm, for a fair comparison. A test set for each configuration was generated to collect the signal energy measurements with σ_i = 0, and the measurements were then quantized by the different quantizers while varying R_i = 2, 3, 4 bits. The average localization error in meters for each rate R_i was computed and is plotted in Fig. 3.
Fig. 3. Comparison of the maximum likelihood-based quantizer (MLQ) with the uniform quantizer, Lloyd quantizer, and random search-based quantizer (RSQ). The average localization error in meters is plotted vs. the rate R_i (bits).
It can be easily seen that our design technique yields a significant performance improvement over typical designs because of its design process that generates quantized data with a high probability of occurrence.
 B. Performance Comparison with Previous Novel Designs
In this experiment, we compared the proposed algorithm with recently published novel designs, namely the distributed optimized quantizers (DOQ) in [4] and the localization-specific quantizer (LSQ) in [7]. Note that the DOQ incurs a huge design complexity while achieving its good performance. We evaluated the design algorithms by generating two test sets of 1000 random source locations with σ_i = 0 and σ_i = 0.15, respectively. In Fig. 4, the rate-distortion (R-D) performance curves are plotted for the sake of comparison; the proposed algorithm is observed to perform well with a reasonable design complexity.
Fig. 4. Comparison of MLQ with recent designs. The average localization error in meters is plotted vs. the total rate (bits) consumed by 5 sensors with σ^2 = 0 (left) and σ^2 = 0.15^2 (right), respectively. LSQ: localization-specific quantizer, DOQ: distributed optimized quantizer, MLQ: maximum likelihood-based quantizer.
 C. Performance Evaluation in the Presence of Measurement Noise
We investigated how the design algorithms operate in the presence of measurement noise by generating a test set of 1000 source locations for each configuration with a = 50 and an SNR in the range from 20 dB to 100 dB obtained by varying σ. Here, the SNR was computed as 10 log10(a^2/σ^2), measured at a distance of 1 m from the source. Thus, the noise level that the sensor nodes actually experienced was much more severe; for example, SNR = 40 dB is measured as SNR ≈ 5.5 dB at each sensor location on average. Note that practical vehicle targets often generate a much higher SNR than 40 dB, and the noise variance is typically σ^2 = 0.05^2 (i.e., SNR = 60 dB) [11, 12]. Fig. 5 illustrates that the proposed design also provides a competitive advantage in a noisy environment.
Fig. 5. Performance evaluation under noisy conditions. The average localization error is plotted vs. SNR (dB) with M = 5, R_i = 3, and a = 50. LSQ: localization-specific quantizer, MLQ: maximum likelihood-based quantizer, SNR: signal-to-noise ratio.
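The SNR bookkeeping above can be checked with a few lines (a sketch; `snr_db` and `sigma_for_snr` are illustrative helpers using the definition SNR = 10 log10(a^2/σ^2)):

```python
import math

def snr_db(a, sigma):
    """SNR measured at 1 m from the source: 10 * log10(a^2 / sigma^2)."""
    return 10.0 * math.log10(a ** 2 / sigma ** 2)

def sigma_for_snr(a, snr):
    """Noise standard deviation that yields a target SNR for signal energy a."""
    return a / 10.0 ** (snr / 20.0)

print(round(snr_db(50.0, 0.05)))   # sigma = 0.05 with a = 50 gives 60 dB
print(sigma_for_snr(50.0, 40.0))   # 40 dB corresponds to sigma = 0.5
```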
VI. CONCLUSIONS
In this paper, we have proposed an iterative design algorithm that constructs quantization partitions whose average probabilities of occurrence increase at each step. We have presented a simple design rule that reduces the sequential search interval for the boundary values while maintaining an increased likelihood. Through comparisons with typical standard designs and previously published novel ones, we demonstrated that the proposed algorithm offers a noteworthy performance improvement with a low design complexity. In future work, we will focus on practical design methodologies for distributed systems by proposing novel distributed clustering techniques.
Acknowledgements
This study was supported by a 2015 research fund from Chosun University.
BIO
Yoon Hak Kim
received his B.S. and M.S. in Electronic Engineering from Yonsei University, Seoul, Korea, in 1992 and 1994, respectively, and his Ph.D. in Electrical Engineering from University of Southern California in 2007. He is currently with the Department of Electronic Engineering, College of Electronics & Information Engineering, Chosun University. His research interests include distributed compression/estimation in sensor networks with a focus on applicationspecific compression techniques, distributed source coding, and image compression/ enhancement.
REFERENCES
[1] W. A. Hashlamoun and P. K. Varshney, "Near-optimum quantization for signal detection," IEEE Transactions on Communications, vol. 44, no. 3, pp. 294-297, 1996. DOI: 10.1109/26.486322
[2] M. Longo, T. D. Lookabaugh, and R. M. Gray, "Quantization for decentralized hypothesis testing under communication constraints," IEEE Transactions on Information Theory, vol. 36, no. 2, pp. 241-255, 1990. DOI: 10.1109/18.52470
[3] Y. H. Kim, "Functional quantizer design for source localization in sensor networks," EURASIP Journal on Advances in Signal Processing, vol. 2013, pp. 1-10, 2013. DOI: 10.1186/1687-6180-2013-1
[4] Y. H. Kim, "Quantizer design optimized for distributed estimation," IEICE Transactions on Information and Systems, vol. E97.D, no. 6, pp. 1639-1643, 2014. DOI: 10.1587/transinf.E97.D.1639
[5] Y. H. Kim, "Weighted distance-based quantization for distributed estimation," Journal of Information and Communication Convergence Engineering, vol. 12, no. 4, pp. 215-220, 2014. DOI: 10.6109/jicce.2014.12.4.215
[6] Y. H. Kim and A. Ortega, "Quantizer design for source localization in sensor networks," in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2005), Philadelphia, PA, 2005, pp. 857-860.
[7] Y. H. Kim and A. Ortega, "Quantizer design for energy-based source localization in sensor networks," IEEE Transactions on Signal Processing, vol. 59, no. 11, pp. 5577-5588, 2011. DOI: 10.1109/TSP.2011.2163401
[8] W. Lam and A. Reibman, "Design of quantizers for decentralized estimation systems," IEEE Transactions on Communications, vol. 41, no. 11, pp. 1602-1605, 1993. DOI: 10.1109/26.241739
[9] N. Wernersson, J. Karlsson, and M. Skoglund, "Distributed quantization over noisy channels," IEEE Transactions on Communications, vol. 57, no. 6, pp. 1693-1700, 2009. DOI: 10.1109/TCOMM.2009.06.070482
[10] Y. H. Kim and A. Ortega, "Distributed encoding algorithms for source localization in sensor networks," EURASIP Journal on Advances in Signal Processing, vol. 2010, article ID 781720, pp. 1-13, 2010.
[11] D. Li and Y. H. Hu, "Energy-based collaborative source localization using acoustic microsensor array," EURASIP Journal on Applied Signal Processing, vol. 2003, no. 4, pp. 321-337, 2003. DOI: 10.1155/S1110865703212075
[12] J. Liu, J. Reich, and F. Zhao, "Collaborative in-network processing for target tracking," EURASIP Journal on Applied Signal Processing, vol. 2003, no. 4, pp. 378-391, 2003. DOI: 10.1155/S111086570321204X
[13] A. O. Hero and D. Blatt, "Sensor network source localization via projection onto convex sets (POCS)," in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2005), Philadelphia, PA, 2005, pp. 689-692.
[14] Y. H. Kim and A. Ortega, "Maximum a posteriori (MAP)-based algorithm for distributed source localization using quantized acoustic sensor readings," in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2006), Toulouse, France, 2006.