Weighted Distance-Based Quantization for Distributed Estimation
Journal of Information and Communication Convergence Engineering. 2014 Dec; 12(4): 215-220
Copyright © 2014, The Korean Institute of Information and Communication Engineering
This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
  • Received : September 12, 2014
  • Accepted : December 10, 2014
  • Published : December 31, 2014
About the Authors
Yoon Hak Kim
yhk@chosun.ac.kr

Abstract
We consider quantization optimized for distributed estimation, where a set of sensors at different sites collect measurements on the parameter of interest, quantize them, and transmit the quantized data to a fusion node, which then estimates the parameter. Here, we propose an iterative quantizer design algorithm with a weighted distance rule that allows us to reduce a system-wide metric such as the estimation error by constructing quantization partitions with their optimal weights. We show that the search for the weights, the most expensive computational step in the algorithm, can be conducted in a sequential manner without compromising convergence, leading to a significant reduction in design complexity. Our experiments demonstrate that the proposed algorithm achieves improved performance over traditional quantizer designs. The benefit of the proposed technique is further illustrated by experiments showing estimation performance similar to that of recently published novel algorithms at a much lower complexity.
I. INTRODUCTION
In distributed estimation systems, where many sensor nodes located at different sites operate on power-limited batteries, each node monitors its environmental conditions, such as temperature, pressure, or sound, related to the parameter of interest and sends the data to a fusion node, which then estimates the parameter value. In such power-constrained systems, the quantization of sensing data has been an attractive research topic for signal processing researchers, since efficient quantization at each node has a significant impact on the rate-distortion performance of the system.
For a source localization system where each sensor measures the signal energy, quantizes it, and sends the quantized sensor reading to a fusion node where the localization is performed, the maximum likelihood (ML) estimation problem using quantized data was addressed and the Cramer–Rao bound (CRB) was derived for comparison [1] , assuming that each sensor used identical (uniform) quantizers. However, if the node locations are known prior to the quantization process, sensor nodes can exploit the correlation between their measurements to design quantizers minimizing the system-wide metric (i.e., the estimation error) to replace typical quantizers, which would be devised to minimize the local metric, such as the reconstruction error. It has been demonstrated that a significant performance gain can be achieved by using such quantizers with respect to simple uniform quantization at all nodes.
There is a difficulty in designing independently and locally operating quantizers that minimize a global metric (e.g., the estimation error, which is a function of the measurements from all nodes). To circumvent this, a cooperative-design, separate-encoding approach was suggested for a decentralized hypothesis testing system [2], where a distributional distance was used as a criterion for quantizer design in order to yield a manageable design procedure. The necessary conditions for the optimal construction of quantization partitions were formulated for distributed estimation systems [3]. A related issue is quantizer design at each node in distributed source coding (DSC) frameworks, in which data collected at different locations must be encoded separately and communicated to a fusion center by using limited transmission rates. Practical quantization methods for correlated sources have been studied in [4-6].
Previously, we proposed iterative quantizer design algorithms in the Lloyd algorithm framework (see also [7-9]), which design independently operating quantizers by reducing the estimation error at each iteration. Since the Lloyd algorithm focuses on minimizing a local metric (e.g., the reconstruction error of local sensor readings) during quantizer design, some modification is inevitable in order to achieve convergence with respect to the global metric in the Lloyd design. Hence, we suggested a weighted sum of both metrics as a cost function (i.e., local + λ × global, λ ≥ 0) along with a search for appropriate weights. Finding the weight λ that guarantees convergence is the most expensive computational process and must be repeated in each iteration, leading to a high computational cost. In [10], a novel non-regular mapping between quantization partitions and their codewords was iteratively constructed to achieve improved performance over typical designs. In particular, quantizers with a many-to-one correspondence were shown to yield a significant improvement at the cost of a huge computational complexity.
In this paper, we propose an iterative design algorithm in the generalized Lloyd framework that seeks to design independently operating scalar quantizers by partitioning the quantization regions based on the weighted distance rule so as to minimize the estimation error. We search for the weights corresponding to the codewords such that the weighted partitioning of the regions using the codewords results in an iterative reduction of the system-wide performance metric. To avoid a high computational complexity, we suggest a sequential search for the weights that causes no performance degradation. We show that convergence of the proposed algorithm is guaranteed since the global metric is reduced in each iteration, and we apply our design algorithm to a source localization system based on the acoustic amplitude sensor model proposed in [11]. We demonstrate through experiments that improved performance over traditional quantizer designs can be achieved by using our design technique. We also evaluate the proposed algorithm by comparing it with recently published novel design techniques [9, 10], both of which were developed for source localization in acoustic sensor networks, the application considered in this work. As expected, the weighted distance rule adopted in the proposed algorithm shows a clear advantage over the regular designs in terms of localization accuracy.
The rest of this paper is organized as follows: we present the problem formulation for quantizer design in distributed estimation in Section II. We then consider quantization based on the weighted distance and elaborate on the proposed design process in the Lloyd algorithm framework in Section III. As an application, we consider source localization in acoustic sensor networks in Section IV and then evaluate the proposed technique for this system in Section V. Finally, we present the conclusions and future directions for research on distributed systems in Section VI.
II. PROBLEM FORMULATION
In distributed estimation systems, it is assumed that M sensor nodes are spatially placed at known locations in a sensor field, denoted by x_i ∈ R², i = 1, …, M, and collect measurements on the unknown parameter θ ∈ R^N to be estimated. The measurement z_i at node i can be expressed as follows:
z_i = f_i(θ) + ω_i,  i = 1, …, M,  (1)
where f_i(θ) denotes the sensing model employed at node i and ω_i represents the measurement noise approximated by the normal distribution N(0, σ_i²). It is assumed that a quantizer of R_i bits with a dynamic range [z_i,min, z_i,max] is employed at sensor i. Note that the quantization range can be selected for each sensor on the basis of the desirable properties of their respective sensing ranges [12].
Let ẑ_i^j denote the j-th codeword at sensor i, which is generated whenever the measurement z_i belongs to the j-th quantization partition V_i^j. Each node quantizes its measurement and sends the quantized result to a fusion node, which computes the estimate θ̂ of the parameter on the basis of the quantized measurements ẑ_i, i = 1, …, M, from all nodes. In this work, it is assumed that we are given a good estimator θ̂ = θ̂(ẑ_1, …, ẑ_M), which is used to compute the estimation error during the quantizer design process. For details about estimators that operate on quantized data, see [13, 14].
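As a concrete illustration of this setup, the following minimal Python sketch draws sensor locations, collects noisy measurements according to Eq. (1), and maps each scalar reading to a codeword of an interval partition. The names `measure` and `quantize` and the dummy sensing model passed in by the caller are illustrative assumptions, not part of the original formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

M = 5                                              # number of sensor nodes
sensor_pos = rng.uniform(0.0, 10.0, size=(M, 2))   # known node locations x_i

def measure(theta, f, sigma):
    """Collect z_i = f_i(theta) + w_i with w_i ~ N(0, sigma^2), as in Eq. (1)."""
    return np.array([f(theta, i) for i in range(M)]) + rng.normal(0.0, sigma, M)

def quantize(z_i, boundaries, codewords):
    """R_i-bit scalar quantizer: return the codeword of the partition
    (an interval of the dynamic range) that the reading z_i falls into."""
    j = int(np.searchsorted(boundaries, z_i))
    return codewords[j]
```

Here `boundaries` holds the L_i − 1 interior partition edges and `codewords` the L_i reproduction values, so the quantizer is fully described by these two arrays.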
III. QUANTIZATION BASED ON WEIGHTED DISTANCE
Standard quantization follows the minimum distance rule, by which the quantization partitions and the codewords are computed alternately in an iterative manner. Formally, the Voronoi region construction (i.e., the quantization partitions) is defined as follows:
V_i^j = { z_i : |z_i − ẑ_i^j|² ≤ |z_i − ẑ_i^k|², k = 1, …, L_i },  j = 1, …, L_i,  (2)
where L_i = 2^{R_i} represents the number of quantization partitions and V_i^j consists of the measurement samples that would minimize the distortion |z_i − ẑ_i^j|² if they were assigned to the j-th quantization partition. The codewords are also updated to minimize the distortion over the corresponding quantization partitions:
ẑ_i^j = E[ z_i | z_i ∈ V_i^j ],  j = 1, …, L_i.  (3)
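The two Lloyd steps of Eqs. (2) and (3), nearest-codeword partitioning followed by the centroid (conditional-mean) update, can be sketched as below; this is a generic illustration of the standard design over a set of training samples, with illustrative function and variable names.

```python
import numpy as np

def lloyd_step(samples, codewords):
    """One iteration of the standard Lloyd design (Eqs. (2) and (3))."""
    # Eq. (2): assign each sample to its nearest codeword (minimum distance rule)
    assign = np.argmin(np.abs(samples[:, None] - codewords[None, :]), axis=1)
    # Eq. (3): each codeword becomes the mean of the samples in its partition;
    # empty partitions keep their previous codeword
    return np.array([samples[assign == j].mean() if np.any(assign == j)
                     else codewords[j] for j in range(len(codewords))])
```

Repeating `lloyd_step` until the codewords stop moving minimizes the local reconstruction error, which is exactly the behavior the weighted distance rule below is designed to move away from.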
Clearly, the two main steps defined as Eqs. (2) and (3) in the typical Lloyd design minimize the average reconstruction error E|z_i − ẑ_i|². However, minimizing the reconstruction error would not produce quantizers that minimize the estimation error E‖θ − θ̂‖², which is the system-wide distortion to be minimized in the proposed design algorithm. In this work, we propose a weighted distance rule to build quantization partitions such that the estimation error is reduced at each iteration. First, we construct the Voronoi region for each weight w_i^j, j = 1, …, L_i, as follows:

V_i^j(w_i^j) = { z_i : w_i^j |z_i − ẑ_i^j| ≤ (1 − w_i^j) |z_i − ẑ_i^k|, k = 1, …, L_i }.  (4)
It is noted that the minimum distance rule is recovered with w_i^j = ½, which creates the standard construction defined in Eq. (2). Our weighted distance rule considers the various cases of partitioning determined by the weights 0 < w_i^j < 1. This produces a set of candidate partitions for our design process.
Second, we search for the optimal weight, starting with w_i^1, that minimizes the estimation error over the corresponding quantization partition:

w_i^{j*} = arg min_{0 < w_i^j < 1} E‖θ − θ̂(w_i^j)‖²,  (5)

where θ̂(w_i^j), the abbreviated notation for the estimate obtained with the weighted partitions, can be computed by replacing V_i^j with V_i^j(w_i^j) for all i. Note that the search for the optimal weights can be conducted in a sequential manner without causing the divergence problem that typically exists in iterative algorithms. Furthermore, in calculating the j-th optimal weight, only the samples with ẑ_i^j < z_i ≤ ẑ_i^{j+1} need to be considered, since only these samples create a relative difference in the estimation error as the weight w_i^j is varied in Eq. (5). These benefits of the proposed algorithm allow us to achieve a substantial reduction in computational complexity while maintaining a non-increasing distortion at each step.
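As a rough sketch of this sequential weight search, the snippet below grid-searches one weight w_i^j while evaluating the global cost only on the samples lying between the two adjacent codewords. The callable `global_cost`, the weight grid, and the scalar boundary form b = w·ẑ_i^j + (1 − w)·ẑ_i^{j+1} are assumptions made for illustration.

```python
import numpy as np

def search_weight_j(j, codewords, samples, global_cost,
                    grid=np.linspace(0.05, 0.95, 19)):
    """Sequential search for the j-th weight (Eq. (5)).  Only samples between
    the adjacent codewords can change partition as w varies, so the assumed
    global cost (estimation error) is evaluated on them alone."""
    lo, hi = codewords[j], codewords[j + 1]
    affected = samples[(samples > lo) & (samples <= hi)]
    best_w, best_c = 0.5, np.inf
    for w in grid:
        boundary = w * lo + (1.0 - w) * hi   # weighted boundary between j and j+1
        c = global_cost(affected, boundary)  # cost restricted to affected samples
        if c < best_c:
            best_w, best_c = w, c
    return best_w
```

Because each weight only shifts one boundary, the L_i − 1 weights of a sensor can be searched one after another without re-partitioning the whole sample set.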
Once the optimal weights are obtained, the next step is to update the codewords over the Voronoi regions determined by their corresponding optimal weights; formally,

ẑ_i^j = E[ z_i | z_i ∈ V_i^j(w_i^{j*}) ],  j = 1, …, L_i.  (6)
Note that the construction of the Voronoi regions, the search for optimal weights, and the subsequent computation of the codewords given in Eqs. (4), (5), and (6), respectively, are repeated at each sensor node until a certain criterion is satisfied. Clearly, these procedures are guaranteed to reduce the estimation error in each iteration, leading to convergence of the design algorithm.
- A. Summary of Proposed Algorithm
Given L_i = 2^{R_i} at sensor i, the algorithm is summarized as follows and is iteratively conducted over all sensors i = 1, …, M until a certain criterion on the estimation error is achieved:
Algorithm 1 : Iterative quantizer design algorithm at sensor i
Step 1: Initialize the codewords ẑ_i^j and the weights w_i^j = ½, j = 1, …, L_i. Set the threshold ϵ and the iteration index κ = 1.
Step 2: For each weight w_i^j, construct the partitions V_i^j(w_i^j), j = 1, …, L_i, by using Eq. (4). Once the Voronoi region construction is completed, compute the optimal weight w_i^{j*} by using Eq. (5).
Step 3: Given the set of optimal weights {w_i^{j*}}, update the codewords ẑ_i^j, j = 1, …, L_i, by using Eq. (6).
Step 4: Compute the average distortion D = E‖θ − θ̂‖² by using the updated codewords ẑ_i^j.
Step 5: If (D_{κ−1} − D_κ)/D_κ < ϵ, stop; otherwise, continue. Here, D_κ indicates the average distortion D at the κ-th iteration.
Step 6: Replace the previous codewords ẑ_i^j with the updated ones for all i. Set κ = κ + 1 and go to Step 2.
Note that the quantizers are designed offline by using a training set that is generated on the basis of Eq. (1) and the parameter distribution p(θ) for a given sensor node configuration. Clearly, this design process makes use of information about all measurements; when the resulting quantizers are in actual use, however, each sensor uses its optimal weights to quantize the measurement available to it independently. A discussion of the sensitivity of our quantizer to parameter perturbations of the sensor model is presented in Section V.
IV. APPLICATION OF QUANTIZER DESIGN ALGORITHM
We apply our design algorithm to a source localization system where sensor nodes equipped with acoustic amplitude sensors measure source signal energy, quantize it, and transmit the quantized data to a fusion node for estimation. For collecting the signal energy readings, we use an energy decay model proposed in [11] , which was verified by a field experiment and was also used in [15 , 16] . This model is based on the fact that the acoustic energy emitted omnidirectionally from a sound source will attenuate at a rate that is inversely proportional to the square of the distance in free space [17] . When an acoustic sensor is employed at each node, the signal energy measured at node i and denoted by zi can be expressed as follows:
z_i = g_i a / ‖θ − x_i‖^α + ω_i.  (7)
Note that θ indicates the source location to be estimated, and the sensor model f_i(θ) in Eq. (1) is replaced by the acoustic sensor model, which consists of the gain factor g_i of the i-th node, an energy decay factor α, which is approximately equal to 2 in free space, and the source signal energy a. The measurement noise term ω_i can be approximated using a normal distribution N(0, σ_i²). It is assumed that the signal energy a is known prior to estimation in this work; however, the signal energy is typically modeled as a uniform distribution and can be jointly estimated along with the source location [13].
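The energy decay model of Eq. (7) is straightforward to code; the sketch below uses the parameter names from the text (gain g_i, decay factor α, signal energy a) with a noiseless default, and the function name is illustrative.

```python
import numpy as np

def acoustic_energy(theta, x_i, a=50.0, g_i=1.0, alpha=2.0, sigma_i=0.0, rng=None):
    """Energy decay sensing model of Eq. (7):
    z_i = g_i * a / ||theta - x_i||^alpha + w_i, with w_i ~ N(0, sigma_i^2)."""
    d = np.linalg.norm(np.asarray(theta, dtype=float) - np.asarray(x_i, dtype=float))
    noise = 0.0 if sigma_i == 0.0 else (rng or np.random.default_rng()).normal(0.0, sigma_i)
    return g_i * a / d ** alpha + noise
```

For example, a source of energy a = 50 at distance 5 m from a unit-gain sensor with α = 2 yields a noiseless reading of 50/25 = 2.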
V. SIMULATION RESULTS
In this section, we discuss the weighted distance-based quantizer (WDQ), the quantizer designed using the algorithm proposed in Section III-A. In designing the WDQ, we use the equally distance-divided quantizer (EDQ) to initialize the quantizers (see Step 1 in Section III-A) because the EDQ serves as an efficient initialization for quantizer design owing to its good localization performance in acoustic amplitude sensor networks [8, 9]. We generate a training set from the uniform distribution of source locations, with the model parameters in Eq. (7) set as a = 50, α = 2, g_i = 1, and σ_i² = σ² = 0. Lloyd quantizers, corresponding to w_i^j = ½ for all j, are also designed from the same training set for comparison. In the experiments, we consider a sensor network where M (= 5) sensors are deployed in a sensor field measuring 10 m × 10 m. We design quantizers by using the various algorithms and evaluate them on test sets generated from the same model parameters. The experimental results are provided in terms of the average localization error E‖θ − θ̂‖². In these experiments, the localization error is computed using ML estimation for fast computation.
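For illustration, a simple grid-search ML-style estimator over the 10 m × 10 m field can be sketched as below. This is an assumed, simplified stand-in (exhaustive search, g_i = 1 as in the simulations, squared-error matching of the received codewords to model-predicted energies), not the estimators of [13, 14].

```python
import numpy as np

def ml_localize(z_hat, sensor_pos, a=50.0, alpha=2.0, grid_step=0.25):
    """Hypothetical grid-search localizer: return the candidate source location
    whose model-predicted energies best match the received codewords z_hat."""
    xs = np.arange(0.0, 10.0 + 1e-9, grid_step)
    best, best_cost = None, np.inf
    for gx in xs:
        for gy in xs:
            d = np.linalg.norm(sensor_pos - np.array([gx, gy]), axis=1)
            d = np.maximum(d, 1e-3)          # avoid the singularity at a sensor
            pred = a / d ** alpha            # Eq. (7) with g_i = 1, no noise
            cost = np.sum((pred - z_hat) ** 2)
            if cost < best_cost:
                best, best_cost = np.array([gx, gy]), cost
    return best
```

Finer grids or local refinement would trade accuracy against the fast computation mentioned above.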
- A. Performance Comparison with Typical Designs
In this experiment, 100 different five-node configurations are generated in a sensor field measuring 10 m × 10 m. A test set of 1,000 random source locations is generated for each configuration to collect signal energy measurements with σ_i = 0. These measurements are then quantized by three different quantizers, namely, the uniform quantizer (Unif Q), the Lloyd quantizer (Lloyd Q), and the WDQ, at R_i = 2, 3, and 4 bits. The localization error in meters is averaged over the 100 node configurations for each rate R_i. Fig. 1 shows the overall rate-distortion (R-D) performance of the different quantizers. As expected, the WDQ outperforms the typical designs since the proposed technique exploits the correlation between the distributed measurements to construct the weighted quantization partitions that minimize the localization error.
Fig. 1. Comparison of the weighted distance-based quantizer (WDQ) with the uniform quantizer and the Lloyd quantizer: the average localization error in meters is plotted vs. the total rate (in bits) consumed by the five sensors.
- B. Performance Evaluation: Comparison with Previous Novel Designs
In this experiment, we evaluate the proposed algorithm by comparing it with previous novel designs, namely the distributed optimized quantizers (DOQs) in [10] and the localization-specific quantizer (LSQ) in [9], since both are optimized for distributed source localization and can be viewed as DSC techniques, which were developed as a tool to reduce the rate required to transmit data from all nodes to the sink. These design techniques are applied to 100 five-sensor configurations by using the EDQ initialization with R_i = 2, 3, 4 bits and tested on two test sets of 1,000 random source locations generated with σ_i = 0 and σ_i = 0.15, respectively. The R-D performance curves are plotted for comparison in Fig. 2. It can be observed that the proposed algorithm offers good performance without incurring the high computational complexity of DOQ and LSQ. Note that DOQ operates in a non-regular framework, which requires a huge amount of computation [10].
Fig. 2. Comparison of the weighted distance-based quantizer (WDQ) with novel design techniques: the average localization error in meters is plotted vs. the total rate (in bits) consumed by the five sensors with σ² = 0 (left) and σ² = 0.15² (right). LSQ: localization-specific quantizer, DOQ: distributed optimized quantizer.
- C. Sensitivity Analysis of Design Algorithms
In this section, we first investigate how the performance of the proposed design algorithm can be affected by the perturbation of the model parameters with respect to the assumptions made during the quantizer design phase. We further examine the design algorithms to understand how sensitive the localization results will be with respect to the presence of the measurement noise.
- 1) Sensitivity of WDQ to Parameter Perturbation
We design the WDQ with R_i = 3 for each of the 100 different five-sensor configurations and test it under various types of mismatch conditions. In this experiment, a test set of 1,000 source locations is generated with a = 50 for each configuration by varying one of the model parameters. The simulation results are tabulated in Table 1. As seen in Table 1, the WDQ shows robustness to mismatch situations where the parameters used in quantizer design differ from those characterizing the simulation conditions.
Table 1. Localization error (LE) of the weighted distance-based quantizer (WDQ) with R_i = 3 under variations of the model parameters. LE is averaged in meters over the 100 five-sensor configurations.
- 2) Sensitivity of Design Algorithms to Noise Level
We design quantizers with R_i = 3 for each of the 100 different five-sensor configurations by using the various design algorithms. The localization error E‖θ − θ̂‖² is averaged over the 100 configurations. We investigate the sensitivity to the noise level by generating a test set of 1,000 source locations for each configuration with a = 50 and the signal-to-noise ratio (SNR) in the range of 40–100 dB, obtained by varying σ. Note that the SNR is computed as 10 log₁₀(a²/σ²) and is measured at 1 m from the source. For example, SNR = 40 dB corresponds to an SNR of 5.5 dB measured at each sensor location on average. For a practical vehicle target, the SNR is often much higher than 40 dB. A typical value of the variance of the measurement noise is σ² = 0.05² (corresponding to 60 dB) [11, 16]. Fig. 3 shows that the proposed quantizer is as robust to the measurement noise as the other designs.
Fig. 3. Sensitivity to the noise level: the average localization error is plotted vs. the signal-to-noise ratio (SNR) with M = 5, R_i = 3, and a = 50. LSQ: localization-specific quantizer, WDQ: weighted distance-based quantizer.
VI. CONCLUSIONS
In this paper, we proposed an iterative quantizer design algorithm optimized for distributed estimation. Since the goal was to design independently operating quantizers that minimize a global metric such as the estimation error, we suggested a weighted distance rule that allows us to partition the quantization regions so as to reduce the metric iteratively in the generalized Lloyd algorithm framework. We showed that the system-wide metric could be minimized during quantizer design by searching for the optimal weights in a sequential manner, yielding a substantial reduction in computational complexity. The proposed algorithm was shown to perform quite well in comparison with standard designs and previous novel ones. Furthermore, we demonstrated that the proposed quantizer operates robustly under mismatches of the sensor model parameters. In the future, we will continue to work on novel quantizer design methodologies and their theoretical analysis for distributed systems.
Acknowledgements
This study was supported by a research fund from Chosun University, 2014.
BIO
Yoon Hak Kim
received his B.S. and M.S. in Electronic Engineering from Yonsei University, Seoul, Korea, in 1992 and 1994, respectively, and his Ph.D. in Electrical Engineering from the University of Southern California in 2007. He is currently with the Department of Electronic Engineering, College of Electronics & Information Engineering, Chosun University. His research interests include distributed compression/estimation in sensor networks with a focus on application-specific compression techniques, distributed source coding, and image compression/enhancement.
References
Niu R. , Varshney P. “Target location estimation in wireless sensor networks using binary data” in Proceedings of the 38th Annual Conference on Information Sciences and Systems Princeton, NJ 2004 1 - 6
Longo M. , Lookabaugh T. D. , Gray R. M. 1990 “Quantization for decentralized hypothesis testing under communication constraints” IEEE Transactions on Information Theory 36 (2) 241 - 255    DOI : 10.1109/18.52470
Lam W. M. , Reibman A. R. 1993 “Design of quantizers for decentralized estimation systems” IEEE Transactions on Communications 41 (11) 1602 - 1605    DOI : 10.1109/26.241739
Pradhan S. S. , Ramchandran K. 2003 “Distributed source coding using syndromes (DISCUS): design and construction” IEEE Transactions on Information Theory 49 (3) 626 - 643    DOI : 10.1109/TIT.2002.808103
Saxena A. , Nayak J. , Rose K. 2010 “Robust distributed source coder design by deterministic annealing” IEEE Transactions on Signal Processing 58 (2) 859 - 868    DOI : 10.1109/TSP.2009.2031283
Wernersson N. , Karlsson J. , Skoglund M. 2009 “Distributed quantization over noisy channels” IEEE Transactions on Communications 57 (6) 1693 - 1700    DOI : 10.1109/TCOMM.2009.06.070482
Kim Y. H. , Ortega A. “Quantizer design and distributed encoding algorithm for source localization in sensor networks” in Proceedings of the 4th International Symposium on Information Processing in Sensor Networks (IPSN2005) Los Angeles, CA 2005 231 - 238
Kim Y. H. , Ortega A. “Quantizer design for source localization in sensor networks” in Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP2005) Philadelphia, PA 2005 857 - 860
Kim Y. H. , Ortega A. 2011 “Quantizer design for energy-based source localization in sensor networks” IEEE Transactions on Signal Processing 59 (11) 5577 - 5588    DOI : 10.1109/TSP.2011.2163401
Kim Y. H. 2014 “Quantizer design optimized for distributed estimation” IEICE Transactions on Information and Systems 97 (6) 1639 - 1643    DOI : 10.1587/transinf.E97.D.1639
Li D. , Hu Y. H. 2003 “Energy-based collaborative source localization using acoustic microsensor array” EURASIP Journal on Applied Signal Processing 2003 321 - 337    DOI : 10.1155/S1110865703212075
Yang H. , Sikdar B. “A protocol for tracking mobile targets using sensor networks” in Proceedings of the 1st IEEE International Workshop on Sensor Network Protocols and Applications Anchorage, AK 2003 71 - 81
Kim Y. H. , Ortega A. “Maximum a posteriori (MAP)-based algorithm for distributed source localization using quantized acoustic sensor readings” in Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP2006) Toulouse, France 2006
Kim Y. H. 2011 “Distributed estimation based on quantized data” IEICE Electronics Express 8 (10) 699 - 704    DOI : 10.1587/elex.8.699
Hero A. O. , Blatt D. “Sensor network source localization via projection onto convex sets (POCS)” in Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP2005) Philadelphia, PA 2005 689 - 692
Liu J. , Reich J. , Zhao F. 2003 “Collaborative in-network processing for target tracking” EURASIP Journal on Applied Signal Processing 2003 378 - 391    DOI : 10.1155/S111086570321204X
Rappaport T. S. 1996 Wireless Communications: Principles and Practice. Prentice Hall Upper Saddle River, NJ