This paper addresses the problem of acquiring signals that have a sparse representation in a given orthonormal basis from fewer noisy measurements. The authors formulate the problem of random measurement under strong signal noise. The impact of white Gaussian signal noise on the recovery performance is analyzed to provide a theoretical basis for the reasoned design of the measurement matrix. Building on the idea that the measurement matrix can be adapted for noise suppression in an adaptive CS system, an adapted selective compressive sensing (ASCS) scheme is proposed, whose measurement matrix can be updated according to the noise information fed back by the processing center. In terms of objective recovery quality, failure rate and mean-square error (MSE), a comparison is made with some nonadaptive methods and existing CS measurement approaches. Extensive numerical experiments show that the proposed scheme has better noise suppression performance and improves the support recovery of sparse signals. The proposed scheme has great potential for broadband signal applications such as biological signal measurement and radar signal detection.
1. Introduction
As an alternative paradigm to the Shannon-Nyquist sampling theorem, compressive sensing (CS) enables sparse signals to be acquired by sub-Nyquist analog-to-digital converters (ADC), thus launching a revolution in signal collection, transmission and processing. The CS theory points out that if the signal is compressible or sparse in a transform domain, it can be recovered exactly with high probability from fewer measurements via l_1-norm optimization [1]. Unlike the classical Shannon-Nyquist sampling theorem, which requires sampling signals at twice the bandwidth, CS promises to reduce the sampling bandwidth, which depends on the sparsity of the signal. Compared with the traditional radio frequency (RF) signal acquisition system, the sampling front-end of CS operates at a lower speed, which lowers the cost of the front-end sensor (in size, weight and power consumption). The computation-intensive parts of the acquisition process are removed from the front-end sensor and transferred to a central processing back-end. Due to its potential use in signal processing applications, CS has attracted vast interest in signal acquisition [2], radar detection [3], cognitive radio [4] and massive antenna arrays [5].
CS has been considered from an adaptive perspective in [6]-[10]. A parameterized Bayesian model is proposed in [6] to dynamically determine whether a sufficient number of CS measurements have been performed. In [7], an empirical Bayesian multitask learning algorithm is developed to improve the performance of the inversion. An analogous work has been done for localization in wireless LANs [8]. These Bayesian methods have been demonstrated to achieve better recovery performance; moreover, in practice they often require fewer noisy observations to recover sparse signals than their nonadaptive competitors. In [9], an adaptive optimal measurement matrix design is studied for CS-based multiple-input multiple-output (MIMO) radar to improve the detection accuracy. In [10], an adaptive CS radar scheme is proposed in which the transmission waveform and measurement matrix are updated using the target scene information fed back by the recovery algorithm, achieving better detection performance than the traditional CS radar system.
Generally speaking, the measurement noise in CS can be classified into two categories in terms of its generation mechanism [11]. The first category is the signal noise, e.g., jammers and interference in the transmission channel. The second category is the processing noise caused by the processing and acquisition hardware, e.g., the quantization error in the acquisition system. Most of the previous literature focuses on CS acquisition and recovery with processing noise. Recent works in CS show that the measurement process causes a noise folding phenomenon [12], which implies that the noise in the signal is eventually amplified by the measuring process. This finding has raised concerns among some scholars [13][14]. In [13], the authors evaluate the performance of a CS-based wideband radio receiver in both signal noise and processing noise environments, and some effective suggestions are given for CS receiver evaluation. In [14], an enhanced l_1-minimization recovery algorithm is developed for signal noise suppression, which has been proven to provide relatively simple and precise theoretical guarantees. All the above studies can be summarized as optimization methods applied after the sparse signals and noise have been acquired. Once the acquisition system is faced with a strong signal noise scene, the benefit from these optimization methods may be diminished. Nonetheless, signal noise suppression has not been taken into account from the perspective of measurement matrix optimization in current signal acquisition systems.
In this paper, we provide a new insight into sparse signal acquisition oriented toward strong signal noise scenes. The mechanism by which the signal noise degrades the recovery performance is investigated. An adapted selective compressive sensing (ASCS) scheme is proposed for signal noise suppression in the acquisition system. The measurement matrix can be adapted according to the noise strength so as to selectively measure the signal noise, thus providing less noisy measurements. For a robust noise prior estimate, the multiple measurement vectors (MMV) model [15] is used. A method of joint projection filtering in the compressive domain and subspace estimation is proposed in this paper. We evaluate the performance via simulations and compare the proposed scheme with a nonadaptive implementation.
The rest of the paper is organized as follows. In Section 2, we present the signal model and analyze the impact of the measurement process on signal noise. Section 3 describes the proposed ASCS scheme. The simulation results are given in Section 4. Finally, conclusions are drawn in Section 5.
Notation: Lower case and capital letters in bold denote, respectively, vectors and matrices. The superscripts (•)^T, (•)^H, (•)^{-1} and (•)^† represent the operators of transpose, Hermitian transpose, inverse and pseudo-inverse, respectively. The subscripts (•)_{i•} and (•)_{•j} account for the ith row and jth column of a matrix; ∥•∥_1, ∥•∥_2 and ∥•∥_F denote the l_1 norm, l_2 norm and Frobenius norm, respectively.
2. Signal Model
 2.1 Compressive Sensing
The CS theory states that if a signal is compressible or sparse in a transform domain, it can be recovered exactly with high probability from far fewer samples than required by the traditional Shannon-Nyquist sampling theorem. Without loss of generality, for any x ∈ ℂ^{N×1}, there exist unique coefficients s such that

x = Ψs, (1)

where Ψ denotes an N × N orthogonal transform basis with the nth column given by φ_n ∈ ℂ^{N×1}, and s = [s_1, s_2, ..., s_N]^T is a complex-valued vector of length N. The signal x is called K-sparse if no more than K elements of its sparse representation s are nonzero, i.e. ∥s∥_0 = K with K ≪ N. The support of x is

supp(x) = {n : s_n ≠ 0}. (2)

In order to recover x one must identify supp(x). Therefore, a natural strategy for signal recovery is support identification.
Now we consider a linear projection operator that computes M (K < M < N) inner products between x and a set of vectors {ϕ_m}, m = 1, 2, ..., M:

y_m = ⟨ϕ_m, x⟩ + e_m. (3)

We collect the measurements to form a vector y = [y_1, y_2, ..., y_M]^T. By arranging the projection operators ϕ_m^H as rows of an M × N measurement matrix Φ, the noisy measurement process in (3) can be represented as

y = Φx + e = ΦΨs + e = Θs + e, (4)

where e = [e_1, e_2, ..., e_M]^T represents the noisy environment effects, with each entry e_m being a zero-mean Gaussian random variable with variance σ_e². As M is typically much smaller than N, the matrix Θ = ΦΨ represents a dimensionality reduction since it maps ℂ^N into ℂ^M, and (4) becomes an underdetermined system. The sparsest solution to the linear inverse problem in (4) can be formulated as the following optimization problem:

ŝ = arg min ∥s∥_0 s.t. ∥y − Θs∥_2 ≤ ε. (5)

In general, this problem is NP-hard.
[16] states that the l_0-norm optimization in (5) can be approximated by its l_1-norm relaxation with a bounded error under certain conditions:

ŝ = arg min ∥s∥_1 s.t. ∥y − Θs∥_2 ≤ ε. (6)

To ensure stable recovery of the sparse vector s by l_1-norm minimization, the matrix Θ needs to satisfy the restricted isometry property (RIP) [17] of order K with a very small constant δ_K, so that

(1 − δ_K)∥s∥_2² ≤ ∥Θs∥_2² ≤ (1 + δ_K)∥s∥_2² (7)

holds for all K-sparse vectors s. In other words, Θ acts as an approximate isometry on the set of vectors that are K-sparse. Note that Gaussian matrices, Bernoulli matrices and uniformly random partial Fourier matrices provide reasonable constants for the RIP. A typical means of solving (6) is through an unconstrained l_1-norm regularized formulation

ŝ = arg min (1/2)∥y − Θs∥_2² + η∥s∥_1, (8)

where η is a tradeoff parameter balancing the estimation quality and sparsity. The basic framework in (8) can be solved by techniques such as greedy algorithms [18] and Bayesian algorithms.
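To make the pipeline of (4)-(8) concrete, the sketch below measures a synthetic K-sparse signal and recovers it with a minimal orthogonal matching pursuit, a greedy stand-in for the l_1 solvers discussed above. The sizes, seed, and the choice Ψ = I are illustrative assumptions, not values taken from this section.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 150, 50, 3

# Random measurement matrix with unit-norm columns; Psi = I for simplicity,
# so Theta = Phi and the signal is sparse in the canonical basis.
Theta = rng.standard_normal((M, N))
Theta /= np.linalg.norm(Theta, axis=0)

s = np.zeros(N)
support = rng.choice(N, K, replace=False)
s[support] = rng.uniform(1, 2, K) * rng.choice([-1, 1], K)
y = Theta @ s                        # noiseless instance of Eq. (4)

def omp(Theta, y, K):
    """Greedy support identification: pick the column most correlated with
    the residual, then re-fit by least squares on the chosen support."""
    residual, idx = y.copy(), []
    for _ in range(K):
        idx.append(int(np.argmax(np.abs(Theta.T @ residual))))
        coef, *_ = np.linalg.lstsq(Theta[:, idx], y, rcond=None)
        residual = y - Theta[:, idx] @ coef
    s_hat = np.zeros(Theta.shape[1])
    s_hat[idx] = coef
    return s_hat

s_hat = omp(Theta, y, K)
assert set(np.flatnonzero(np.abs(s_hat) > 1e-6)) == set(support)
```

In the noiseless setting the greedy support identification succeeds with overwhelming probability for K this small, which mirrors the "support identification" strategy stated after (2).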
 2.2 Noise Folding in CS
The basic CS model in (4) is adequate when only measurement error or noise is present. However, in many practical scenarios the signal itself is contaminated by signal noise, which is not accounted for in (4). In [11], the authors present a generalized CS model

y = Θ(s + n) + e, (9)

where n stands for the white signal noise with variance σ_n², and e represents the processing noise. Basically, this is equivalent to stating that x is only approximately sparse. The noise situation in (9) is subtly different from the basic setting because the signal noise has been acted upon by the matrix Θ, and it is possible that Θn could be rather large. Our chief interest here is to understand how n impacts the recovery performance.
Before establishing our main result concerning white signal noise, some useful assumptions are made. We suppose that the measurement matrix Θ ∈ ℂ^{M×N} fulfills the RIP of order K with constant δ_K. Furthermore, we suppose that:

1). Each row Θ_{m•} (m = 1, 2, ..., M) of Θ is orthogonal to the others, i.e., ⟨Θ_{m•}, Θ_{m'•}⟩ = 0 for m ≠ m', and each column Θ_{•n} (n = 1, 2, ..., N) is normalized, namely ∥Θ_{•n}∥_2 = 1.

2). Each row has the same norm. Since tr(ΘΘ^H) = tr(Θ^HΘ) = N, with the hypothesis in 1) we have ∥Θ_{m•}∥_2² = N/M, i.e., ΘΘ^H = (N/M)I_M.

3). The acquisition noise e is ignored in our discussion, i.e. e = 0.
In our formulation, we use the notation λ_j(Θ) to denote the jth largest eigenvalue of Θ, and we use s_j(Θ) to denote the jth largest singular value of Θ; thus we have s_j(Θ) = √(λ_j(Θ^H Θ)). To establish our main result concerning white signal noise, a useful lemma is first cited, which has been proven in [19].
Lemma (Lemma 7.1 of [19]). Suppose that Θ is an M × N matrix and let Λ be a set of indices with ∥Λ∥_0 ≤ K. If Θ satisfies the RIP of order K with constant δ_K, then for k = 1, 2, ..., K we have

√(1 − δ_K) ≤ s_k(Θ_Λ) ≤ √(1 + δ_K). (10)
We begin by noting that E{∥n∥_2²} = Nσ_n². The expectation of the measured noise power is

E{∥Θn∥_2²} = σ_n² tr(ΘΘ^H) = Nσ_n², (11)

which establishes that the full noise power is captured by the measurement process. From the RIP, we also have ∥Θ_Λ s_Λ∥_2² ≈ ∥s_Λ∥_2², which implies that the sparse signal power is hardly changed during the measurement process. In order to quantify the impact of signal noise on the random measurement process, we define the impact factor Gain_noise as the ratio of the recovered noise power E{∥Θ_Λ^† Θn∥_2²} to the power of the noise component attached to the sparse signal, E{∥n_Λ∥_2²}. Let Λ be the index set whose elements represent the locations of the nonzero elements in s, i.e. Λ = supp(x). The least-squares optimal recovery of s restricted to the index set Λ is given by

ŝ_Λ = Θ_Λ^† y = s_Λ + Θ_Λ^† Θn. (12)

Since Θn is a white Gaussian process with covariance σ_n² ΘΘ^H = (N/M)σ_n² I_M, we have

E{∥Θ_Λ^† Θn∥_2²} = (N/M)σ_n² tr((Θ_Λ^H Θ_Λ)^{-1}). (13)

Combining (13) with (10) yields E{∥Θ_Λ^† Θn∥_2²} ≈ (N/M)Kσ_n². In the event that the noise n is a white random vector, there holds E{∥n_Λ∥_2²} = Kσ_n², thus

Gain_noise = E{∥Θ_Λ^† Θn∥_2²} / E{∥n_Λ∥_2²} ≈ N/M. (14)

From this we observe that the noise added to the signal itself can be highly amplified by the measurement process when M ≪ N. In the literature, this effect is known as noise folding.
3. Adaptive Compressive Sensing
Although prior research have validated the benefits of exploiting RIP in measurement design
[9]
[10]
, such as improving the recovery probability, decreasing the recovery error and so on, these benefits diminished when faced with strong signal noise scene. From the above analysis, the expected
Gain_{noise}
closely related to parameters
M
and
N
, which account for the number of rows and columns in
Θ
. Generally,
M
is related to the RIP condition (which is bound by
K
,
N
and
δ_{K}
), and
N
in
Gain_{noise}
is related to the measured support of noise in
Θ
. However, only
Θ
_{Λ}
contribute to the sparse vector
s
and
entirely measured the signal noise. In the traditional ShannonNyquist sampling system, to avoid the noise off the passband, an antialiasing filter is applied before the sampling process. Inspired by the necessity of antialiasing filtering in bandpass signal sampling, a selective measuring scheme is proposed in this paper. The measurement matrix would only sense the interested spectrum, where most likely the sparse spectrum lying.
The measurement matrix in our scheme is modified into

Φ = ℑ_Ω(A), (15)

where A ∈ ℂ^{M×N} is a random matrix and Ω is an index set. ℑ_Ω(A) is defined as a selective operation which sets the nth (n ∈ Ω) columns of A to zeros, acting as an anti-aliasing filter in our scheme.
 3.1 Projection Filtering in the Compressed Domain
The core of the proposed scheme is the estimation of the index set Ω, where the noise spectrum most likely lies. It is necessary for us to extract the information that each vector Θ_{•n} hides in y. The simplest way is to project y onto each vector Θ_{•n}. However, due to the non-orthogonality between the columns of the matrix Θ, the projection results would interfere with each other, and a low-SNR scene increases the difficulty of information extraction. To minimize the projection interference, a set of projection filters is applied. The output of the nth (n = 1, 2, ..., N) filter is formed as

z_n = h_n^H y. (16)
The output energy is defined as E{|z_n|²} = h_n^H R_y h_n, with the correlation matrix of the measured signal R_y = E{yy^H}. Our objective is to minimize the output energy. In order to avoid trivial solutions such as h_n = 0, a linear constraint is added to the objective function, which can be expressed as

min h_n^H R_y h_n s.t. h_n^H Θ_{•n} = 1. (17)
The minimum output energy can be achieved with a proper choice of h_nopt. We can solve the general constrained minimization problem of equation (17) to obtain h_nopt by applying the Lagrange multiplier method, resulting in the following unconstrained objective function

L(h_n, μ) = h_n^H R_y h_n + μ(1 − h_n^H Θ_{•n}). (18)

Forcing the gradient of the objective function to zero, i.e., R_y h_n − μΘ_{•n} = 0, we obtain h_n = μR_y^{-1}Θ_{•n}. The optimal value of h_nopt that minimizes the objective function can then be evaluated as

h_nopt = R_y^{-1}Θ_{•n} / (Θ_{•n}^H R_y^{-1} Θ_{•n}). (19)

The optimal output energy of z_n is 1/(Θ_{•n}^H R_y^{-1} Θ_{•n}), and the desired output of the filter banks z = [z_1, z_2, ..., z_N] is

z = Q^H y, Q = R_y^{-1} Θ D, (20)

where D = diag(d_1, d_2, ..., d_N) denotes a diagonal matrix with principal diagonal elements d_1, d_2, ..., d_N in turn, and d_n = 1/(Θ_{•n}^H R_y^{-1} Θ_{•n}). Note that in the ideal case the matrix Q can be estimated precisely. Therefore, the Lagrange method converges to the optimal solution in a single iteration, as expected for a quadratic objective function.
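The filter bank of (19)-(20) can be sketched as follows; the sample covariance construction and the dimensions are illustrative assumptions. The check at the end mirrors the unit-response constraint h_n^H Θ_{•n} = 1 in (17).

```python
import numpy as np

def projection_filter_bank(R_y, Theta):
    """Minimum-output-energy filters, Eq. (19):
    h_n = R_y^{-1} theta_n / (theta_n^H R_y^{-1} theta_n),
    stacked column-wise as Q = R_y^{-1} Theta D, Eq. (20)."""
    R_inv = np.linalg.inv(R_y)
    # d_n = 1 / (theta_n^H R_y^{-1} theta_n) for every column n at once
    d = 1.0 / np.einsum('in,ij,jn->n', Theta.conj(), R_inv, Theta).real
    Q = R_inv @ Theta @ np.diag(d)
    return Q, d

rng = np.random.default_rng(3)
M, N = 20, 40
Theta = rng.standard_normal((M, N))
R_y = Theta @ Theta.T / N + 0.1 * np.eye(M)   # assumed sample covariance
Q, d = projection_filter_bank(R_y, Theta)

# Each filter passes its own column with unit gain, per the constraint in (17).
assert np.allclose(np.diag(Q.conj().T @ Theta), 1.0)
```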
 3.2 Noise Information Estimation Using Subspace Method
The model in (9) is a typical single measurement vector (SMV) model. When a sequence of measurement vectors is available, (9) can be extended to the multiple measurement vectors (MMV) model, which provides informative coupling between the vectors. The noisy MMV problem can be stated as solving the following underdetermined systems of equations

y_l = Θ s_l + e_l, l = 1, 2, ..., L, (21)

where L is the number of measurement vectors. Since the matrix Θ is common to each of the representation problems, (21) can be rewritten as

Y = ΘS + E, (22)

where Y = [y_1, y_2, ..., y_L], S = [s_1, s_2, ..., s_L] and E = [e_1, e_2, ..., e_L]. An additional assumption is that the solution vectors s_1, s_2, ..., s_L are sparse and share the same sparsity profile; equivalently, S is an unknown source matrix whose nonzero rows represent the targets. In many applications, such as wireless communication and radar detection, the spectrum that signals occupy is slowly time-varying, hence the common sparsity assumption is valid.
The presence of multiple measurements can be helpful in estimating the set Ω. With multiple measurements, the desired output in (20) can be represented as

Z = Q^H Y = HS + N, (23)

where N = Q^H E and H = Q^H Θ. The covariance matrix of the filtered signal is R_z = E{ZZ^H}. The eigenvalue decomposition of R_z is

R_z = UΣU^H, (24)

where Σ = diag(λ_1, λ_2, ..., λ_N), and the eigenvalues obey λ_1 ≥ … ≥ λ_K > λ_{K+1} = … = λ_N = σ². The eigenvectors u_1, u_2, ..., u_K corresponding to the K largest eigenvalues λ_1, λ_2, ..., λ_K construct the signal subspace U_s = [u_1, u_2, ..., u_K], with Σ_s = diag(λ_1, λ_2, ..., λ_K). Similarly, the remaining N − K eigenvalues depend only on the noise and their numeric values equal σ². The eigenvectors u_{K+1}, u_{K+2}, ..., u_N corresponding to λ_{K+1}, λ_{K+2}, ..., λ_N construct the noise subspace U_n = [u_{K+1}, u_{K+2}, ..., u_N], with Σ_n = diag(λ_{K+1}, λ_{K+2}, ..., λ_N). Let Λ stand for the index set corresponding to the K nonzero rows of S; we have

R_z = H_Λ R_{S_Λ} H_Λ^H + σ²I, (25)

where R_{S_Λ} denotes the source covariance matrix. It can be seen from (25) that U_n^H H_Λ R_{S_Λ} H_Λ^H U_n = 0. Since R_{S_Λ} is a nonsingular matrix, we get H_Λ^H U_n = 0, thus the columns of H_Λ are orthogonal to span(U_n). This indicates that the column vectors in H_Λ are orthogonal to the subspace of the noise. The spectrum function of the sparse locations can thus be deduced:

f(n) = 1 / (H_{•n}^H U_n U_n^H H_{•n}). (26)
As n varies, there are K large values in (26), which correspond to the sparse positions. The peaks are obvious at high SNR, but this superiority dwindles under low-SNR conditions. However, the non-orthogonality between H_Λ and U_n is barely affected. Hence the indexes corresponding to the smallest N − K values could be treated as the positions of the noise, which should be ignored by the measurement process for noise suppression. In order to avoid any confusion at strong signal noise levels, we consider the indexes corresponding to the smallest P (2K < P < N) values in (26) to have a high possibility of being noise. (26) can also be expressed as

g(n) = tr(U_n^H P_n U_n), (27)

where P_n = H_{•n}(H_{•n}^H H_{•n})^{-1} H_{•n}^H represents the projection matrix onto H_{•n}; noise positions then correspond to the largest values of (27).
 3.3 Signal Reconstruction
Recovery of the signal from the linear projections can be accomplished by solving (8). A variety of optimization algorithms are available for the recovery problem, such as Orthogonal Matching Pursuit (OMP), Compressive Sampling Matching Pursuit (CoSaMP), the FOCal Underdetermined System Solver (FOCUSS) and Sparse Bayesian Learning (SBL). The regularized M-FOCUSS [15] is chosen for its good compromise between computational complexity and reconstruction accuracy; it can be summarized as the following iteration steps:

W^{(k)} = diag((∥S_{i•}^{(k−1)}∥_2)^{1−p/2}), A^{(k)} = ΘW^{(k)}, S^{(k)} = W^{(k)}(A^{(k)})^H (A^{(k)}(A^{(k)})^H + βI)^{−1} Y, (28)

where β stands for the regularization parameter, and the p-norm is always set to p = 0.8, as suggested by the authors, for a robust solution.
The regularized M-FOCUSS algorithm can be treated as solving a weighted least squares problem at each iteration. The initial weight matrix W^{(0)} is set to a nonzero matrix; with the iterations of the algorithm, W^{(k)} tends to become stable. The algorithm can be terminated once the maximum iteration number is reached or the convergence criterion

∥S^{(k)} − S^{(k−1)}∥_F / ∥S^{(k)}∥_F < ε (29)

has been satisfied, where ε is a user-selected parameter. The proposed adaptive CS scheme can be summarized as follows:
(1). Initialize the measurement matrix Φ as in (15) with Ω = Ø. Collect the compressed data Y, and calculate Z using (23).
(2). Estimate the compressed signal covariance matrix R_Z, perform an EVD of R_Z, and isolate the noise subspace U_n.
(3). Compute the spectrum function in (26) or (27); select as Ω the indexes corresponding to the P smallest values in (26) or the P largest values in (27).
(4). Update the measurement matrix Φ using (15).
(5). Measure the signal x using the updated measurement matrix Φ, and recover the sparse information by iterating (28) until (29) is satisfied.
4. Experimental Results and Analysis
Extensive computer experiments have been conducted and a few representative and informative results are presented. We consider a signal sparse in the Fourier domain. Unless specifically stated otherwise, the following conditions are applied. We set N = 150 and K = 3, the compressive measurement dimension is M = 50, the number of measurement vectors is L = 10, and the selective parameter is set to P = 50. In our simulations, the SNR is defined as SNR = 20log(∥S∥_2/∥N∥_2), where N stands for the signal noise matrix. The proposed adaptive method is compared to the adaptive compressive sensing (ACS) method in [10] (using the M-FOCUSS algorithm for sparse reconstruction) and to the traditional nonadaptive scheme with the recovery algorithms OMP, M-FOCUSS and M-SBL. To assess the optimization performance of the proposed scheme, 1000 Monte Carlo simulations were conducted. In each trial the initial measurement matrix was created with columns uniformly drawn from the surface of a unit hypersphere, and the source matrix S ∈ ℂ^{N×L} was randomly generated with K nonzero rows (i.e., sources) whose indexes were randomly chosen. Two measures were applied for performance assessment. The first one is the failure rate defined in [20]: a trial was recognized as failed if the indexes of the estimated sources with the largest norms were not the same as the true indexes. The second one is the mean square error (MSE), defined as ∥Ŝ − S∥_F²/∥S∥_F², where Ŝ represents the reconstruction of the sources S.
We explored the recovery performance under different signal noise levels. Fig. 1 depicts the performance curves, from which we conclude that the adaptive scheme outperforms the ACS method and the nonadaptive one in the same noise environment. With increasing SNR, as expected, all schemes achieve better performance. Meanwhile, we notice that when SNR ≤ −4 dB the benefit from multiple measurement vectors diminishes, and the failure rate deteriorates sharply. One obvious observation is that the proposed adaptive scheme achieves a lower failure rate under extreme noise conditions. According to the RIP in CS, once M ≥ Cμ(Θ)K log N (C is a constant and μ(Θ) is defined as the maximum absolute value of the normalized inner product between all columns in Θ), one can accurately recover the sparse vector with high probability. In our setup the RIP is satisfied, and therefore further optimization of the measurement matrix cannot improve the performance significantly. However, our ASCS scheme suppresses the signal noise and hence provides high-precision recovery performance.
Fig. 1. Performance comparison with various SNR
Fig. 2 depicts simulation results with different signal sparsity; the SNR is set to 0 dB. As this figure shows, increasing the sparsity K decreases the recovery performance, but the proposed adaptive method still achieves better performance with respect to failure rate and MSE. This phenomenon can be explained as follows. The configured parameters in our simulation are only robust for K ≤ 6 according to the RIP [15]. In the case of K ≥ 7, the RIP no longer holds, and thus the recovery algorithms fail to recover S with high probability. The proposed adaptive method lets less signal noise be measured through the measurement process; therefore the adaptive method performs better than the nonadaptive ones with the same configuration.
Fig. 2. Performance comparison with different sparsity K
Fig. 3 shows that the failure rate decreases exponentially with the number of measurement vectors L, and that increasing L narrows the performance gap between the adaptive scheme and the nonadaptive one. In practical applications, under the common sparsity assumption on the source S, we cannot obtain many measurement vectors, as the sparsity profile of practical signals is time-varying, as in frequency hopping systems. So the common sparsity assumption is valid only for a small L in the MMV model. Future research will pay close attention to this problem.
Fig. 3. Performance under different numbers of measurement vectors L
Finally, we investigated the application of the proposed method to direction-of-arrival (DOA) estimation in a spatial-CS-based multiple-input multiple-output (MIMO) radar system [21]. In this application, the MIMO radar system is configured with 10 transmit antennas and 10 receive antennas, the number of snapshots is 5, and 3 targets are located in the far field with DOA θ = [15°, 40°, 65°]. Unlike the Fourier basis used in the above simulations, the sparse dictionary in this application consists of a series of steering vectors of interest with angles ranging from 0° to 90° at a resolution of 0.25°. Fig. 4 depicts the performance comparison with different SNR and different numbers of measurements. As shown in the figures, the OMP method has a high failure rate, which is caused by the severe mutual coherence between the atoms of the dictionary. The greedy OMP algorithm ensures the residual is orthogonal to the atoms chosen in the last iteration, which may destroy information hidden in the residual when updating the new residual. Thanks to its noise suppression function, the proposed scheme provides almost precise estimation results.
Fig. 4. Performance comparison when applied in spatial compressive sensing MIMO radar
5. Conclusion
In this paper, we proposed an ASCS scheme for signal noise suppression in a CS-based signal acquisition system. A computational framework for the measurement matrix design is investigated, which transforms the measurement matrix design into noise prior estimation. A two-step process is developed for locating the noise spectrum precisely: a set of projection filter banks is first used to minimize the projection interference, and a subspace method is then applied for the noise information estimation. Simulation results demonstrate the effectiveness of the proposed scheme. From the viewpoint of future implementation, measurement noise should be taken into consideration in the system, and more efficient algorithms have to be developed for source pre-estimation at low SNR. On the other hand, how to deal with real-world signals (e.g., image, video, or audio) is a problem that needs further study.
BIO
Fangqing Wen was born in 1988. He received the B.S. degree in electronic engineering from Hubei University of Automotive Technology, Shiyan, China, in 2011. From 2011 to 2013, he pursued postgraduate study in the College of Electronics and Information Engineering, Nanjing University of Aeronautics and Astronautics (NUAA). He is currently a Ph.D. candidate in the College of Electronics and Information Engineering, NUAA. His research interests are compressive sensing, wireless communication and signal processing.
Email: wfqsyyy@163.com
Gong Zhang was born in 1964. He received the Ph.D. degree in electronic engineering from the Nanjing University of Aeronautics and Astronautics (NUAA), Nanjing, China, in 2002. From 1990 to 1998, he was a Member of Technical Staff at No724 Institute China Shipbuilding Industry Corporation (CSIC), Nanjing. Since 1998, he has been with the College of Electronic and Information Engineering, NUAA, where he is currently a Professor. His research interests include radar signal processing and classification recognition. Dr. Zhang is a member of Committee of Electromagnetic Information, Chinese Society of Astronautics (CEICSA) and a Senior Member of the Chinese Institute of Electronics (CIE).
Email: wfqzhc@163.com
Ben De was born in 1938. He received the bachelor's degree in radar engineering from the Harbin Institute of Technology, Harbin, China, in 1963. Since 1964, he has been a member of technical staff at Nanjing Research Institute of Electronics Technology, Nanjing. He is currently a Professor in the College of Electronic and Information Engineering, NUAA. His research interests include radar systems and radar signal processing. Mr. Ben is an Academician of the Chinese Academy of Engineering.
Email: wfqzhc@163.com
References

Wen F., Tao Y., Zhang G., "Analog to information conversion using multi-comparator based integrate-and-fire sampler," Electronics Letters, 51(3), pp. 246-247, 2015. DOI: 10.1049/el.2014.1950
Li Y.J., Song R.F., "A new compressive feedback scheme based on distributed compressed sensing for time-correlated MIMO channel," KSII Transactions on Internet and Information Systems, 6(2), pp. 580-592, 2012.
Anh H., Koo I., "Primary user localization using Bayesian compressive sensing and path-loss exponent estimation for cognitive radio networks," KSII Transactions on Internet and Information Systems, 7(10), pp. 2338-2356, 2013. DOI: 10.3837/tiis.2013.10.001
Gao H.Q., Song R.F., "Distributed compressive sensing based channel feedback scheme for massive antenna arrays with spatial correlation," KSII Transactions on Internet and Information Systems, 8(1), pp. 108-122, 2014. DOI: 10.3837/tiis.2014.01.007
Ji S.H., Xue Y., Carin L., "Bayesian compressive sensing," IEEE Transactions on Signal Processing, 56(6), pp. 2346-2356, 2008. DOI: 10.1109/TSP.2007.914345
Ji S.H., Dunson D., Carin L., "Multitask compressive sensing," IEEE Transactions on Signal Processing, 57(1), pp. 92-106, 2009. DOI: 10.1109/TSP.2008.2005866
Li R.P., Zhao Z.F., Zhang Y., Palicot J., Zhang H.G., "Adaptive multi-task compressive sensing for localization in wireless local area networks," IET Communications, 8(10), pp. 1736-1744, 2014. DOI: 10.1049/iet-com.2013.1019
Yao Y., Petropulu A.P., Poor H.V., "Measurement matrix design for compressive sensing-based MIMO radar," IEEE Transactions on Signal Processing, 59(11), pp. 5338-5352, 2011. DOI: 10.1109/TSP.2011.2162328
Zhang J.D., Zhu D.Y., Zhang G., "Adaptive compressed sensing radar oriented toward cognitive detection in dynamic sparse target scene," IEEE Transactions on Signal Processing, 60(4), pp. 1718-1729, 2012. DOI: 10.1109/TSP.2012.2183127
Iwen M.A., Tewfik A.H., "Adaptive strategies for target detection and localization in noisy environments," IEEE Transactions on Signal Processing, 60(5), pp. 2344-2353, 2012. DOI: 10.1109/TSP.2012.2187201
Arias-Castro E., Eldar Y.C., "Noise folding in compressed sensing," IEEE Signal Processing Letters, 18(8), pp. 478-481, 2011. DOI: 10.1109/LSP.2011.2159837
Davenport M.A., Laska J.N., Treichler J., Baraniuk R.G., "The pros and cons of compressive sensing for wideband signal acquisition: noise folding versus dynamic range," IEEE Transactions on Signal Processing, 60(9), pp. 4628-4642, 2012. DOI: 10.1109/TSP.2012.2201149
Marco A., Massimo F., Steffen P., "Damping noise-folding and enhanced support recovery in compressed sensing," arXiv:1307.5725, 2013.
Cotter S.F., Rao B.D., Engan K., Kreutz-Delgado K., "Sparse solutions to linear inverse problems with multiple measurement vectors," IEEE Transactions on Signal Processing, 53(7), pp. 2477-2488, 2005. DOI: 10.1109/TSP.2005.849172
Candes E.J., "The restricted isometry property and its implications for compressed sensing," Comptes Rendus Mathematique, 346(9), pp. 589-592, 2008. DOI: 10.1016/j.crma.2008.03.014
Candes E.J., Tao T., "Decoding by linear programming," IEEE Transactions on Information Theory, 51(12), pp. 4203-4215, 2005. DOI: 10.1109/TIT.2005.858979
Tropp J.A., Gilbert A.C., "Signal recovery from random measurements via orthogonal matching pursuit," IEEE Transactions on Information Theory, 53(12), pp. 4655-4666, 2007. DOI: 10.1109/TIT.2007.909108
Davenport M.A., "Random observations on random observations: sparse signal acquisition and processing," PhD thesis, Rice University, 2010.
Wipf D.P., Rao B.D., "An empirical Bayesian strategy for solving the simultaneous sparse approximation problem," IEEE Transactions on Signal Processing, 55(7), pp. 3704-3716, 2007. DOI: 10.1109/TSP.2007.894265
Rossi M., Haimovich A.M., Eldar Y.C., "Spatial compressive sensing for MIMO radar," IEEE Transactions on Signal Processing, 62(2), pp. 419-430, 2014. DOI: 10.1109/TSP.2013.2289875