To improve the rate-distortion performance of distributed video compressive sensing (DVCS), this paper proposes to jointly reconstruct the video signal using an adaptive sparse basis and the nonlocal similarity of video. Because it lacks motion information between frames and because the reference frames contain noise, a sparse dictionary constructed from examples extracted directly from the reference frames can no longer provide a good sparse representation of the interpolated block. This paper proposes a new method to construct the sparse dictionary. Firstly, an example-based data matrix is constructed using the motion information between frames; then Principal Component Analysis (PCA) is used to compute the significant principal components of the data matrix; finally, the sparse dictionary is built from these significant principal components. The merit of the proposed sparse dictionary is that it not only adapts to the spatial-temporal characteristics of the video but also suppresses noise. In addition, considering that sparse priors alone cannot preserve the edges and textures of video frames well, a nonlocal similarity regularization term is introduced into the reconstruction model. Experimental results show that the proposed algorithm improves the objective and subjective quality of the reconstructed frames and achieves better rate-distortion performance for the DVCS system at the cost of a certain increase in computational complexity.
1. Introduction
The basic idea of Compressive Sensing (CS) is to sample a signal by direct dimensionality reduction, compressing it during acquisition, and then to recover the original signal by exploiting its sparse prior. Owing to its ability to sample at sub-Nyquist rates, CS theory has been widely applied to various fields of image and video processing [1], [2]. CS measurement is realized by linear inner products and thus has low computational complexity, whereas the nonlinear reconstruction of the signal is computationally expensive. This feature of light encoding and heavy decoding makes CS easy to combine with Distributed Video Coding (DVC) [3], producing a new video compression technology: Distributed Video Compressive Sensing (DVCS) [4]-[6].
In a DVCS system, the primary problem is the huge memory requirement of CS measurement. Currently there are two schemes that effectively resolve this problem. The first is to use Structurally Random Matrices (SRMs) [7], [8] to obtain the measurement data. SRMs use fast orthogonal transforms to realize CS measurement and thus avoid constructing a measurement matrix that requires a large amount of memory. The second is to perform CS measurement by Block Compressed Sensing (BCS) [9]. This approach not only realizes low-memory CS measurement but also measures and transmits the video blocks one by one; it is therefore well suited to real-time applications and widely used in various DVCS systems [10], [11]. DVCS first divides the video stream into key frames and non-key frames. A key frame can be coded either by traditional video coding technology (e.g., H.264) or by measuring the frame at a higher measurement rate and recovering it with a still-image CS reconstruction algorithm [12]-[14]. Because non-key frames are measured at a low rate, their reconstruction must combine intra- and inter-frame correlation. Ref. [5] uses the previous and following frames to interpolate the Side Information (SI) of a non-key frame by motion compensation and then takes the SI as the initial solution of the GPSR algorithm [15] to construct the final interpolated frame. Ref. [6] uses temporally neighboring blocks to construct a sparse dictionary for each interpolated block in the non-key frame, performs a suitable minimum l_1-norm algorithm to predict the SI, and finally reconstructs the residual between the SI and the original frame with a still-image CS reconstruction algorithm. Ref. [16] first uses a CS reconstruction algorithm to perform intra-frame recovery independently, then predicts the SI from the previous and following frames by motion estimation and motion compensation, and finally recovers the residual. Ref. [17] uses the Multiple Hypotheses (MH) concept from traditional video coding to construct a candidate set for each interpolated block, and replaces the l_1-norm sparse regularization term with an l_2-norm Tikhonov regularization term to predict the SI of the non-key frame; this method effectively improves both prediction precision and reconstruction speed.
Although the above methods obtain better reconstruction quality for non-key frames, two defects remain: (a) the sparse dictionary can neither adapt to the reconstruction quality of the reference frames nor remove noise; (b) they use only the sparse prior and overlook other prior knowledge about the video frame. To address the first defect, an adaptive construction of the sparse dictionary is proposed in this paper. First, the motion information between frames is used to find, for each interpolated block, its best-matching block in the reference frames, and the temporally neighboring blocks of that match are extracted to produce a data matrix. Because the reference frames contain noise, Principal Component Analysis (PCA) is then used to compute the significant principal components, which are finally used to construct the sparse dictionary. The PCA-based sparse dictionary is highly correlated with the interpolated block, so it can exploit the sparsity of the non-key frame to improve reconstruction accuracy. For the second defect, this paper uses the Nonlocal Similarity (NL) of the video frame to model a regularization term, combines it with the sparse prior to form a joint CS reconstruction model, and designs an appropriate algorithm to solve the joint model. Since NL helps preserve edge details and suppress noise, the proposed joint model improves the performance of the CS reconstruction algorithm. Experimental results show that the proposed joint reconstruction algorithm effectively improves the rate-distortion performance of the DVCS system and achieves better objective and subjective quality for the reconstructed non-key frames.
2. Framework of Proposed DVCS System
The framework of the proposed DVCS system is shown in Fig. 1. The original video stream is first divided into key frames and non-key frames, and both are measured by the BCS scheme proposed in Ref. [9]. An I_c × I_r video frame x_t, with N = I_c × I_r pixels in total, is divided into L small blocks of size B × B. Let x_{t,n} denote the vectorized signal of the n-th block obtained by raster scanning. Each block x_{t,n} is measured with the same Gaussian random measurement matrix Φ_B, yielding the CS measurement vector y_{t,n} of length M_B. This process can be described as

y_{t,n} = Φ_B x_{t,n},  n = 1, 2, …, L.
Fig. 1. Framework of the proposed DVCS system.
The measurement rate is defined as S = M_B / B². When a non-key frame is reconstructed jointly, the reconstruction quality of the previous and following frames seriously affects the performance of the joint reconstruction model. Therefore, the measurement rate S_K of key frames should be higher than the measurement rate S_NK of non-key frames. The high measurement rate of the key frame also guarantees good reconstruction quality when the key frame is reconstructed independently with a still-image CS reconstruction algorithm; for this reason the key frame is also called an I frame.
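The block-based measurement described above can be sketched in a few lines. This is a minimal NumPy illustration rather than the paper's implementation; the function name `bcs_measure` and the Gaussian column normalization are assumptions.

```python
import numpy as np

def bcs_measure(frame, B, S, seed=0):
    """Block compressed sensing: measure every B x B block of `frame`
    with one shared Gaussian matrix Phi_B at measurement rate S = M_B/B^2."""
    rng = np.random.default_rng(seed)
    Ic, Ir = frame.shape
    MB = int(round(S * B * B))                   # measurements per block
    Phi_B = rng.standard_normal((MB, B * B)) / np.sqrt(MB)
    measurements = []
    for r in range(0, Ic, B):                    # raster order over blocks
        for c in range(0, Ir, B):
            x = frame[r:r + B, c:c + B].reshape(-1)  # vectorized block x_{t,n}
            measurements.append(Phi_B @ x)           # y_{t,n} = Phi_B x_{t,n}
    return Phi_B, np.array(measurements)
```

A key frame would simply be measured with a larger S (e.g., S_K = 0.7) than a non-key frame.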
Since non-key frames are measured at a low rate, only the full exploitation of inter-frame and spatial correlation can guarantee high reconstruction quality. If only the previous key frame is used, the current non-key frame is called a P frame; if both the previous and following frames are used, it is called a B frame. The adaptive PCA sparse dictionary and the nonlocal similarity term are generated from the neighboring reference frames and the current non-key frame and are used to construct the joint reconstruction model, which is then solved to obtain the SI x_SI of the current non-key frame. To further improve the reconstruction quality, the residual between the SI and the original frame is reconstructed; the steps are as follows.
Step 1) Initialization: x_t^(0) = x_SI; the iteration counter k is set to 0; the maximum number of iterations maxiter is set to 5.
Step 2) The CS measurement of the residual between the SI and the original frame is calculated as

y_{r,n}^(k) = y_{t,n} − Φ_B x_{t,n}^(k).
Step 3) The residual block r_{t,n}^(k) is recovered from y_{r,n}^(k) using the BCS-SPL-DCT algorithm proposed in Ref. [13], and the (k+1)-th iteration solution x_t^(k+1) is obtained as

x_{t,n}^(k+1) = x_{t,n}^(k) + r_{t,n}^(k).
Step 4) k = k + 1; if k ≤ maxiter and ║r_{t,n}^(k)║_2 ≥ 10^{−4}·N, go back to Step 2) and continue iterating; otherwise stop.
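The four steps above can be sketched as the following loop. The least-squares pseudo-inverse used here is only a stand-in for the BCS-SPL-DCT residual solver of Ref. [13] that the paper actually uses; `residual_refine` and the stand-in recovery operator are illustrative assumptions.

```python
import numpy as np

def residual_refine(y, Phi_B, x_si, N, maxiter=5, tol=1e-4):
    """Iterative residual reconstruction (Steps 1-4), with a simple
    least-squares stand-in for the still-image residual solver of [13]."""
    Phi_pinv = np.linalg.pinv(Phi_B)         # stand-in recovery operator
    x = x_si.copy()                          # Step 1: start from the SI
    for k in range(maxiter):
        y_res = y - Phi_B @ x                # Step 2: residual measurements
        r = Phi_pinv @ y_res                 # Step 3: recover the residual
        x = x + r                            #         update the estimate
        if np.linalg.norm(r) < tol * N:      # Step 4: stopping test
            break
    return x
```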
3. Proposed Joint CS Reconstruction
3.1 Construction of the Adaptive PCA Sparse Dictionary
Since the statistics of video frames are non-stationary, no fixed sparse dictionary (e.g., a DCT or wavelet dictionary) is best for all content. To exploit the sparsity of a video frame, an adaptive sparse dictionary correlated with the frame content should be constructed. Refs. [6] and [7] directly use temporally neighboring blocks to construct the sparse dictionary; although such a dictionary adapts to the varying statistics of the video, it cannot always maintain a high correlation with the interpolated block, for two main reasons: (a) it ignores the motion information between frames; (b) the reconstructed key frames contain noise. To overcome these defects, we first use the CS measurements of the interpolated block to perform motion estimation and find its best-matching block in the reference frame, and then extract the spatially neighboring blocks of the best match to generate a data matrix. Because the data matrix still contains noise, PCA is used to compute its principal components, and the significant components are selected to construct the final sparse dictionary, which suppresses the noise. Taking the P-frame case as an example, the concrete construction steps of the proposed sparse dictionary are as follows:
Step 1) Suppose the CS measurement of the interpolated block x_{t,n} is y_{t,n}. Owing to the Restricted Isometry Property (RIP) [18] of the Gaussian measurement matrix, the matching error between x_{t,n} and a candidate matching block x_{c,j} remains approximately unchanged in the measurement domain, i.e.,

║x_{t,n} − x_{c,j}║_2 ≈ ║y_{t,n} − Φ_B x_{c,j}║_2.

Therefore, block-matching motion estimation can be performed in the measurement domain as

x_{b,n} = arg min_{x_{c,j} ∈ S_1} ║y_{t,n} − Φ_B x_{c,j}║_2,
where S_1 denotes the search window of size 2S_1 × 2S_1. As shown in Fig. 2, we extract blocks x_{p,k} of size B × B pixel by pixel within the search window centered at x_{b,n}; each extracted block is converted into a vector by raster scanning, and all extracted blocks are combined into the data matrix X_p = [x_{p,1}, x_{p,2}, …, x_{p,K}], where K = 2S_2 × 2S_2.
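Step 1 (measurement-domain matching) and the data-matrix construction of Fig. 2 can be sketched as below. The helper name `build_data_matrix` and the exhaustive search over the window are illustrative assumptions, not the paper's code.

```python
import numpy as np

def build_data_matrix(y_tn, Phi_B, ref, B, S1, S2, center):
    """Find the best-matching B x B block for the measurement vector y_tn in
    the reference frame `ref` by matching in the measurement domain, then
    stack the blocks around the best match into the data matrix X_p."""
    r0, c0 = center
    H, W = ref.shape
    best, best_err = None, np.inf
    for dr in range(-S1, S1 + 1):             # search window of radius S1
        for dc in range(-S1, S1 + 1):
            r, c = r0 + dr, c0 + dc
            if 0 <= r <= H - B and 0 <= c <= W - B:
                cand = ref[r:r + B, c:c + B].reshape(-1)
                # RIP: measurement-domain error tracks pixel-domain error
                err = np.linalg.norm(y_tn - Phi_B @ cand)
                if err < best_err:
                    best_err, best = err, (r, c)
    br, bc = best
    cols = []                                  # blocks around the best match
    for dr in range(-S2, S2 + 1):
        for dc in range(-S2, S2 + 1):
            r, c = br + dr, bc + dc
            if 0 <= r <= H - B and 0 <= c <= W - B:
                cols.append(ref[r:r + B, c:c + B].reshape(-1))
    return best, np.array(cols).T              # X_p: one column per block
```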
Fig. 2. Illustration of the construction of the data matrix X_p.
Step 2) Each block x_{p,k} in the data matrix X_p contains noise, so using X_p directly as the sparse dictionary is not the best scheme. PCA computes an orthogonal transformation matrix P that removes the redundant information between the pixels of x_{p,k}; if P is used to transform the image blocks, the useful information and the noise in X_p can be separated effectively. First, the covariance matrix Ω_p of size d × d (d = B²) corresponding to X_p is calculated as

Ω_p = (1/K) Σ_{k=1}^{K} (x_{p,k} − μ_p)(x_{p,k} − μ_p)^T,

where μ_p is the mean of the blocks x_{p,k}.
We then compute the d eigenvalues η_1 ≥ η_2 ≥ … ≥ η_d of the covariance matrix Ω_p together with their corresponding normalized eigenvectors (principal components) p_1, p_2, …, p_d, and finally construct the orthogonal transformation matrix P = [p_1, p_2, …, p_d].
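Step 2 amounts to an eigen-decomposition of the block covariance. A minimal sketch follows; the mean-centering step is an assumption the paper does not spell out, and `pca_components` is a hypothetical name.

```python
import numpy as np

def pca_components(Xp):
    """Eigen-decomposition of the d x d covariance of the data matrix X_p
    (columns are vectorized blocks); returns the eigenvalues in descending
    order and the orthogonal matrix P of principal components."""
    mu = Xp.mean(axis=1, keepdims=True)
    Xc = Xp - mu                              # center the blocks
    K = Xp.shape[1]
    cov = (Xc @ Xc.T) / K                     # Omega_p, size d x d
    eigval, eigvec = np.linalg.eigh(cov)      # returned in ascending order
    order = np.argsort(eigval)[::-1]          # eta_1 >= eta_2 >= ...
    return eigval[order], eigvec[:, order]    # P = [p_1, ..., p_d]
```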
Step 3) To separate the noise from the useful information in the data matrix X_p, we should find a sparse dictionary D_n that represents all blocks in X_p as sparsely as possible, i.e., D_n should satisfy

min_{D_n, Ʌ_n} ║X_p − D_n Ʌ_n║_F² + ║Ʌ_n║_1,  (8)
where Ʌ_n is the coefficient matrix of X_p and ║·║_F is the Frobenius norm. The r significant principal components in P are used to generate the dictionary D_{n,r} = [p_1, p_2, …, p_r], and the coefficient matrix can then be computed simply as Ʌ_{n,r} = D_{n,r}^T · X_p. The reconstruction error ║X_p − D_{n,r} Ʌ_{n,r}║_F² in Eq. (8) decreases as r increases, whereas the term ║Ʌ_{n,r}║_1 increases. Therefore, the best value r* of r can be selected as

r* = arg min_r ║X_p − D_{n,r} Ʌ_{n,r}║_F² + c·║Ʌ_{n,r}║_1,  (9)

where c is a weighting parameter.
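A direct way to evaluate the trade-off in Eq. (9) is to sweep r and keep the minimizer. This is a hedged sketch: `select_r` is a hypothetical helper, and reading the experiments' parameter c as the weight in Eq. (9) is our interpretation.

```python
import numpy as np

def select_r(Xp, P, c=10.0):
    """Pick r* balancing the reconstruction error ||X_p - D_r Lambda_r||_F^2
    against the sparsity penalty c * ||Lambda_r||_1 (Eq. (9))."""
    d = P.shape[1]
    best_r, best_cost = 1, np.inf
    for r in range(1, d + 1):
        D_r = P[:, :r]
        Lam = D_r.T @ Xp                      # Lambda_{n,r} = D_{n,r}^T X_p
        err = np.linalg.norm(Xp - D_r @ Lam, 'fro') ** 2
        cost = err + c * np.abs(Lam).sum()
        if cost < best_cost:
            best_cost, best_r = cost, r
    return best_r
```

For data whose blocks lie in a low-dimensional subspace, r* settles at the rank of that subspace once the penalty weight is small.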
Finally, the sparse dictionary D_n = [p_1, p_2, …, p_{r*}] of the interpolated block x_{t,n} is obtained.
Step 4) With the PCA-trained dictionary D_n, the CS reconstruction model is

α_{t,n} = arg min_α ║y_{t,n} − Φ_B D_n α║_2² + λ_1 ║α║_1.  (10)

The sparse representation α_{t,n} of x_{t,n} is obtained by solving Eq. (10) with the GPSR algorithm, and the interpolated block is finally reconstructed as

x̂_{t,n} = D_n α_{t,n}.  (11)
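Eq. (10) is a standard l_1-regularized least-squares problem. Where GPSR [15] is unavailable, a plain ISTA (iterative soft-thresholding) loop reaches the same minimizer; this `ista_l1` stand-in is an assumption, not the paper's solver.

```python
import numpy as np

def ista_l1(A, y, lam, iters=500):
    """Minimize ||y - A a||_2^2 + lam * ||a||_1 by iterative soft
    thresholding (ISTA), a simple stand-in for the GPSR solver of [15]."""
    L = 2.0 * np.linalg.norm(A, 2) ** 2       # Lipschitz constant of the gradient
    a = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = 2.0 * A.T @ (A @ a - y)        # gradient of the quadratic term
        z = a - grad / L                      # gradient step
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return a
```

With A = Φ_B D_n, the block would then be recovered via Eq. (11) as x̂ = D_n @ ista_l1(A, y, lam).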
3.2 Nonlocal Similarity Regularization Term
Although the adaptive PCA sparse dictionary can exploit the sparsity of the video frame, it cannot preserve edges and textures well, since edge and texture features have low sparsity. Fig. 3 shows the reconstructed 13th frame of Foreman (P-frame case) when the measurement rate S_NK is 0.1 and the block size B is 16. Obvious blurring and blocking artifacts appear in the edge and texture regions. Therefore, to retain clear edge and texture details, additional prior knowledge beyond the sparse prior needs to be introduced.
Fig. 3. Comparison between the original frame and the frame reconstructed with the adaptive PCA sparse dictionary (Foreman, 13th frame).
In images and video, a pixel is not isolated; together with its neighboring pixels it describes the local image features. A window centered on a pixel (also called a patch) usually captures the detail around that pixel, and since each patch corresponds to one pixel, an image can be represented by the overcomplete set of all its patches. Edge and texture regions usually contain many periodically repeated patterns with high self-similarity, so patches at different positions can be strongly similar. This property of images and video is called nonlocal similarity [19]-[21]. In video, nonlocal similarity means that patches are correlated not only spatially but also temporally. As shown in Fig. 4, the patches labeled in red and blue both find similar patches in their spatial and temporal neighborhoods. Nonlocal similarity is very helpful for improving the quality of the reconstructed frame, especially for preserving edge and texture structure, and it can therefore be incorporated into Eq. (10) as prior knowledge to remove the blurring and blocking artifacts in edge and texture regions.
Fig. 4. Nonlocal similarity of video.
Taking the P-frame case as an example, the construction of the nonlocal similarity regularization term is described in detail below. Any pixel in x_{t,n} is denoted x_{t,n}(i), i = 1, 2, …, d, and x̃_{t,n}(i) denotes the patch centered at x_{t,n}(i) with radius b. For each patch x̃_{t,n}(i), we search for similar patches in the current block x_{t,n} and in the best-matching block x_{b,n} of the previous frame; each similar patch x̃_{t,n}^m(i) must satisfy e_i^m = ║x̃_{t,n}(i) − x̃_{t,n}^m(i)║_2 ≤ t. The pixel x_{t,n}(i) can therefore be predicted as

x_{t,n}(i) = Σ_m β_i^m x_{t,n}^m(i) + n_i,  (12)
where n_i is the additive noise term. Let β_i be the vector containing all the weights β_i^m, and let g_i collect the pixels x_{t,n}^m(i) corresponding to β_i^m; then Eq. (12) can be written as

x_{t,n}(i) = β_i^T · g_i + n_i.  (13)
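The similar-patch search behind Eq. (12) and Eq. (13) can be sketched as follows. The exponential form of the weights β_i^m is an assumption; the paper does not specify how the weights are computed, and `nonlocal_predict` is a hypothetical helper.

```python
import numpy as np

def nonlocal_predict(patches, i, t=20.0, h=10.0):
    """For patch i, find its similar patches (||.||_2 distance <= t) among
    `patches` (rows = vectorized patches from x_{t,n} and x_{b,n}) and
    predict it as the weighted combination beta_i^T g_i of Eq. (13)."""
    target = patches[i]
    dists = np.linalg.norm(patches - target, axis=1)
    idx = np.where((dists <= t) & (np.arange(len(patches)) != i))[0]
    if idx.size == 0:
        return target, idx                    # no similar patch found
    w = np.exp(-dists[idx] ** 2 / h ** 2)     # beta_i^m: larger for closer patches
    beta = w / w.sum()                        # normalize the weights
    g = patches[idx]                          # g_i: the similar patches
    return beta @ g, idx
```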
Considering the nonlocal similarity of video, the prediction error ║x_{t,n}(i) − β_i^T · g_i║_2 should be small, so it can be added to Eq. (10) as a regularization term:

α_{t,n} = arg min_α ║y_{t,n} − Φ_B D_n α║_2² + λ_1 ║α║_1 + λ_2 Σ_{i=1}^{d} ║x_{t,n}(i) − β_i^T · g_i║_2²,  (15)

where λ_2 is the regularization factor that balances the nonlocal similarity term. Eq. (15) is equivalent to

α_{t,n} = arg min_α ║y_{t,n} − Φ_B D_n α║_2² + λ_1 ║α║_1 + λ_2 ║(I − H_{n,1}) D_n α − H_{n,2} x_{b,n}║_2²,  (16)
where I is the identity matrix, and H_{n,1} and H_{n,2} are defined by Eq. (17) and Eq. (18): they apply the nonlocal weights β_i^m to the similar patches drawn from the current block x_{t,n} and from the best-matching block x_{b,n}, respectively.
To solve Eq. (16), it can be further simplified into the following l_1-l_2 norm minimization model:

α_{t,n} = arg min_α ║ỹ_{t,n} − Φ̃_n D_n α║_2² + λ_1 ║α║_1,  (19)

where ỹ_{t,n} and Φ̃_n stack the measurement and regularization terms:

ỹ_{t,n} = [y_{t,n}; √λ_2 · H_{n,2} x_{b,n}],  Φ̃_n = [Φ_B; √λ_2 · (I − H_{n,1})].
Constructing H_{n,1} and H_{n,2} requires the interpolated block x_{t,n}, which is unavailable during reconstruction. Therefore, H_{n,1} and H_{n,2} are updated from the current iterate while solving Eq. (19). The solution steps are as follows.
Step 1) Initialization: a) the initial solution x_t^(0) is acquired using Eq. (10) and Eq. (11); b) H_{n,1}^(0) and H_{n,2}^(0) are constructed from x_t^(0) according to Eq. (17) and Eq. (18) and are used to generate ỹ_{t,n}^(0) and Φ̃_n^(0); c) the iteration counter k is set to 0, and the maximum number of iterations maxiter is set to 10.
Step 2) Substituting ỹ_{t,n}^(k) and Φ̃_n^(k) into Eq. (19), the GPSR algorithm is used to compute the sparse representation coefficients α_{t,n}^(k), and Eq. (11) then yields the (k+1)-th iteration solution x_{t,n}^(k+1) of each block. Finally, all interpolated blocks are combined into the estimate x_t^(k+1) of the current frame.
Step 3) k = k + 1; if k ≤ maxiter and ║x_t^(k+1) − x_t^(k)║_2 ≥ 10^{−4}·N, then H_{n,1}^(k), H_{n,2}^(k), ỹ_{t,n}^(k) and Φ̃_n^(k) are updated to H_{n,1}^(k+1), H_{n,2}^(k+1), ỹ_{t,n}^(k+1) and Φ̃_n^(k+1) using x_t^(k+1), and the iteration returns to Step 2); otherwise the algorithm stops.
After several iterations of the above steps, the predicted frame x_SI is obtained by joint CS reconstruction, and the residual frame is then reconstructed to achieve the final reconstructed non-key frame.
4. Simulation Results and Analysis
The proposed algorithm is evaluated on the first 61 frames of four CIF-format test sequences: Foreman, Mobile, Bus and News. The key frames are the odd frames (I frames) and the non-key frames are the even frames (P or B frames). According to the type of non-key frame, the proposed algorithm is run under two different prediction models, i.e., the IPI model and the IBI model. The key frame is reconstructed independently by the MH-BCS-SPL algorithm of Ref. [14], and the non-key frame is reconstructed by the proposed algorithm and by the four compared algorithms of Refs. [5], [6], [16] and [17], respectively. The proposed algorithm is evaluated in two variants: the variant using only the adaptive PCA sparse dictionary (i.e., reconstruction model (10)), named APCA, and the variant using the adaptive PCA sparse dictionary together with the nonlocal similarity regularization term (i.e., reconstruction model (16)), named APCA-NL. The block size B in all algorithms is set to 16, the measurement rate S_K of the key frames is 0.7, and the measurement rate S_NK of the non-key frames ranges over [0.1, 0.5]. The parameters of the proposed algorithm are set as follows: the search-window radii S_1 and S_2 are both set to B; the patch radius b is set to 3; the patch-selection threshold t is set to 20; the regularization factors λ_1 and λ_2 are set to 0.2 and 0.5/k, respectively; the remaining parameter c is 10.
The objective quality of the reconstructed frames is evaluated with the Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity (SSIM) index [22], and the reconstruction time reflects the computational complexity. The hardware platform is a PC with a 3.20 GHz CPU and 8 GB of RAM; the software platform is MATLAB 7.6 under 64-bit Windows 7.
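For reference, PSNR as used in the evaluation can be computed as below (the 8-bit peak value of 255 is assumed); SSIM follows Ref. [22].

```python
import numpy as np

def psnr(ref, rec, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference frame and
    its reconstruction (8-bit peak value assumed)."""
    mse = np.mean((ref.astype(np.float64) - rec.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```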
Table 1 presents the average PSNR and SSIM of all reconstructed non-key frames at different measurement rates under the IPI prediction model. The proposed APCA and APCA-NL algorithms achieve higher PSNR and SSIM than the other compared algorithms at every measurement rate. Comparing APCA with APCA-NL, APCA-NL outperforms APCA at high measurement rates (S_NK of 0.4 or 0.5); for example, when S_NK is 0.5, APCA-NL gains 2.09 dB in PSNR and 0.0058 in SSIM over APCA averaged across all test sequences. At low measurement rates, however, APCA-NL brings only a small improvement, because inaccurate motion estimation in the measurement domain and heavy noise in the initial solution prevent the added regularization term from describing the nonlocal similarity of the video well. Moreover, since the edge and texture regions of the Mobile and Bus sequences have complex structure without many periodically repeated patterns, their nonlocal similarity is low; as a result, APCA-NL cannot effectively improve on APCA for these sequences and can even degrade the quality of the reconstructed frame. Fig. 5 shows the subjective visual quality of the reconstructed 8th frame of Foreman for the various algorithms when S_NK is 0.3. The proposed algorithm removes the blurring and blocking artifacts around the lap, obtaining better subjective visual quality.
Table 1. Average PSNR (dB) and SSIM of the test sequences for the proposed and existing algorithms under the IPI model.
Fig. 5. Subjective visual quality comparison on the 8th frame of Foreman for the various algorithms under the IPI model when S_NK is 0.3.
Table 2 presents the average PSNR and SSIM of all reconstructed non-key frames at different measurement rates under the IBI prediction model. Compared with the IPI model, the reconstruction quality of all test sequences improves, because the B-frame case exploits information from both the previous and the following reconstructed frames. The relative performance of the different algorithms is similar to that under the IPI model: the proposed APCA and APCA-NL algorithms outperform the other compared algorithms, and APCA-NL effectively improves the reconstructed quality at high measurement rates. Fig. 6 shows the subjective quality of the 4th frame of Mobile for the various algorithms; the proposed algorithms again obtain better subjective visual quality.
Table 2. Average PSNR (dB) and SSIM of the test sequences for the proposed and existing algorithms under the IBI model.
Fig. 6. Subjective visual quality comparison on the 4th frame of Mobile for the various algorithms under the IBI model when S_NK is 0.3.
Table 3 presents the average reconstruction time (s/frame) of the various algorithms. The reconstruction time under the IPI model is lower than under the IBI model, which shows that the improved quality of the IBI model comes at the cost of increased computational complexity. Likewise, the two proposed algorithms obtain their improvement in reconstruction quality at the cost of higher computational complexity.
Table 3. Average reconstruction time (s/frame) comparison of the various algorithms.
5. Conclusions
This paper combines an adaptive PCA sparse dictionary, built from the correlation between frames, with a regularization term built from nonlocal similarity to form a joint reconstruction algorithm that improves the rate-distortion performance of a DVCS system. Because the spatial-temporal statistics of video vary, a fixed sparse dictionary cannot effectively exploit the sparsity of a video frame; and although a sparse dictionary extracted from neighboring frames changes with the video content, it is still not the best choice, because such an example-based dictionary ignores the motion between frames and the reference frames contain noise. The proposed construction first uses the CS measurements of the current non-key frame to perform motion estimation in the measurement domain, then uses the resulting motion information to extract examples and form the data matrix, and finally applies PCA to compute the significant principal components of the data matrix to construct the sparse dictionary. Since the sparse prior alone still cannot recover edge and texture details well, the nonlocal similarity of the video frame is used to construct a regularization term, which is incorporated into the joint CS reconstruction model to remove the blurring and blocking artifacts in edge and texture regions. Experimental results show that the proposed algorithm effectively improves the rate-distortion performance of the DVCS system at the cost of a certain increase in computational complexity, and achieves better subjective and objective reconstruction quality.
BIO

WU Minghu received the M.S. degree from Huazhong University of Science and Technology and the Ph.D. degree from Nanjing University of Posts and Telecommunications in 2002 and 2013, respectively. He is currently an associate professor in the School of Electrical and Electronic Engineering at Hubei University of Technology. His major research interests include signal processing, video coding and compressive sensing.

ZHU Xiuchang received his B.S. and M.S. degrees from Nanjing University of Posts and Telecommunications in 1982 and 1987, respectively, and has been working there since 1987. He is currently a professor and the director of the Jiangsu Key Laboratory of Image Processing and Image Communications. His research interests focus on multimedia information, especially the acquisition, processing, transmission and display of images and video.
References

[1] Y. C. Eldar and G. Kutyniok, Compressed Sensing: Theory and Applications, Cambridge University Press, Cambridge, 2012, pp. 1-5.
[3] B. Girod, A. M. Aaron, S. Rane, and D. Rebollo-Monedero, "Distributed video coding," Proceedings of the IEEE, vol. 93, no. 1, pp. 71-83, 2005. DOI: 10.1109/JPROC.2004.839619
[4] D. Baron, M. F. Duarte, M. B. Wakin, S. Sarvotham, and R. G. Baraniuk, "Distributed compressive sensing," 2009. [Online].
[5] L. W. Kang and C. S. Lu, "Distributed video compressive sensing," in Proc. of IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 1169-1172, April 2009.
[6] T. T. Do, Y. Chen, D. T. Nguyen, N. Nguyen, L. Gan, and T. D. Tran, "Distributed compressed video sensing," in Proc. of IEEE International Conference on Image Processing, pp. 1393-1396, November 2009.
[7] T. Do, L. Gan, N. Nguyen, and T. D. Tran, "Fast and efficient compressive sensing using structurally random matrices," IEEE Transactions on Signal Processing, vol. 60, no. 1, pp. 139-154, 2012. DOI: 10.1109/TSP.2011.2170977
[8] K. Li, L. Gan, and C. Ling, "Convolutional compressed sensing using deterministic sequences," IEEE Transactions on Signal Processing, vol. 61, no. 2, pp. 740-752, 2013. DOI: 10.1109/TSP.2012.2229994
[9] L. Gan, "Block compressed sensing of natural images," in Proc. of International Conference on Digital Signal Processing, pp. 403-406, July 2007.
[10] G. Orchard, J. Zhang, Y. Suo, M. Dao, D. T. Nguyen, C. Sang, C. Posch, T. D. Tran, and R. Etienne-Cummings, "Real time compressive sensing video reconstruction in hardware," IEEE Journal on Emerging and Selected Topics in Circuits and Systems, vol. 2, no. 3, pp. 604-614, 2013. DOI: 10.1109/JETCAS.2012.2214614
[11] J. Holloway, A. C. Sankaranarayanan, A. Veeraraghavan, and S. Tambe, "Flutter shutter video camera for compressive sensing of videos," in Proc. of IEEE International Conference on Computational Photography, pp. 1-9, April 2012.
[12] X. L. Wu, W. S. Dong, X. J. Zhang, and G. M. Shi, "Model-assisted adaptive recovery of compressed sensing with imaging application," IEEE Transactions on Image Processing, vol. 21, no. 2, pp. 451-458, 2012. DOI: 10.1109/TIP.2011.2163520
[13] S. Mun and J. E. Fowler, "Block compressed sensing of images using directional transforms," in Proc. of International Conference on Image Processing, Cairo, Egypt, pp. 3021-3024, November 2009.
[14] C. Chen, E. W. Tramel, and J. E. Fowler, "Compressed sensing recovery of images and video using multihypothesis predictions," in Proc. of the Forty-Fifth Asilomar Conference on Signals, Systems and Computers, pp. 1193-1198, November 2011.
[15] M. A. T. Figueiredo, R. D. Nowak, and S. J. Wright, "Gradient projection for sparse reconstruction: application to compressed sensing and other inverse problems," IEEE Journal of Selected Topics in Signal Processing, vol. 1, no. 4, pp. 586-597, 2007. DOI: 10.1109/JSTSP.2007.910281
[16] S. Mun and J. E. Fowler, "Residual reconstruction for block-based compressed sensing of video," in Proc. of Data Compression Conference, pp. 183-192, March 2011.
[17] E. W. Tramel and J. E. Fowler, "Video compressed sensing with multihypothesis," in Proc. of Data Compression Conference, pp. 193-202, March 2011.
[18] E. Candes and M. Wakin, "An introduction to compressive sampling," IEEE Signal Processing Magazine, vol. 25, no. 2, pp. 21-30, 2008. DOI: 10.1109/MSP.2007.914731
[19] J. Ren, Y. Zhuo, J. Y. Liu, and Z. Guo, "Illumination-invariant non-local means based video denoising," in Proc. of IEEE International Conference on Image Processing, pp. 1185-1188, September 2012.
[20] Y. C. Shen, P. S. Wang, and J. L. Wu, "Progressive side information refinement with non-local means based denoising process for Wyner-Ziv video coding," in Proc. of Data Compression Conference, pp. 219-226, April 2012.
[21] D. Kim, B. Keum, H. Ahn, and H. Lee, "Empirical nonlocal algorithm for image and video denoising," in Proc. of IEEE International Conference on Consumer Electronics, pp. 498-499, January 2013.
[22] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: from error visibility to structural similarity," IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600-611, 2004. DOI: 10.1109/TIP.2003.819861