A series of kernel regression (KR) algorithms, such as the classic kernel regression (CKR) and the 2D and 3D steering kernel regression (SKR), have been proposed for image and video super-resolution. In existing KR frameworks, a single algorithm is usually adopted and applied to a whole image or video, regardless of region characteristics. However, the performance and computational efficiency of these algorithms can differ in regions with different characteristics. To take full advantage of the KR algorithms and avoid their disadvantages, this paper proposes a kernel regression framework for video super-resolution. In this framework, each video frame is first analyzed and divided into three types of regions: flat, non-flat stationary, and non-flat moving regions. A different KR algorithm is then selected according to the region type. The CKR and 2D SKR algorithms are applied to flat and non-flat stationary regions, respectively. For non-flat moving regions, this paper proposes a similarity-assisted steering kernel regression (SASKR) algorithm, which gives better performance and higher computational efficiency than the 3D SKR algorithm. Experimental results demonstrate that the computational efficiency of the proposed framework is greatly improved without apparent degradation in performance.
1. Introduction
Super-resolution (SR) is the process of reconstructing a high-resolution (HR) image from multiple low-resolution (LR) inputs. Its basic idea is to enhance the resolution of a reference image by making full use of the information contained in both the reference and auxiliary images. A variety of algorithms have been presented to solve the SR problem. The frequency-domain method was first introduced by Tsai and Huang
[1], and extended by their successors [2][3]. However, the performance of the frequency-domain methods is usually limited by the assumptions of global translational motion and spatially invariant degradation. Thus, a variety of spatial-domain SR methods have been proposed. The iterative back-projection (IBP) algorithm [4] yields an HR image by iteratively back-projecting the error between the simulated LR images and the observed ones. The maximum a posteriori (MAP) method utilizes the spatial-domain observation model and prior knowledge of the target HR image to estimate the target HR image under a Bayesian framework [5][6][7][8][9]. The projection onto convex sets (POCS) method incorporates prior knowledge of the target HR image into convex constraint sets and restricts the SR solution to be a member of those sets [10][11]. An extensive review of SR methods can be found in [12][13].
Although the SR technique has been extensively studied in the past three decades, super-resolution on general video sequences remains an open problem. In existing video super-resolution (VSR) algorithms, either the motion models are oversimplified or the computational efficiency is unsatisfactory. Several VSR algorithms presented in [14][15][16] limit their motion model to the case of translational motion. As a result, these algorithms cannot achieve good performance on general video sequences with arbitrary motion. The fundamental difficulty for super-resolution on general video sequences is to provide accurate subpixel motion estimates. Recent progress has focused on two types of VSR methods. One type is the simultaneous VSR method. Keller et al. [17] presented a VSR algorithm that simultaneously estimates an HR sequence and its motion field via the calculus of variations. Liu et al. [18] proposed an adaptive VSR algorithm that simultaneously estimates the HR frame, motion field, blur kernel, and noise level in a Bayesian framework. However, the performance of these methods is affected by the accuracy of optical-flow estimation. The other type is the non-motion-estimation-based VSR method. Danielyan et al. [19] created a VSR algorithm by extending the block-matching 3D filter, in which explicit motion estimation is avoided by classifying image patches using block matching. Protter et al. [20] generalized the nonlocal means (NLM) denoising algorithm to enhance the resolution of general video sequences without explicit motion estimation. Takeda et al. [21] extended the 2D steering kernel regression (SKR) approach [22] to 3D for video super-resolution. With similar ideas, K. Zhang et al. [23] extended the 2D normalized convolution approach to the 3D case for video super-resolution. H. Zhang et al. [24][25] presented a nonlocal kernel regression (NLKR) framework and applied it to SR reconstruction. The NLKR framework exploits both nonlocal self-similarity and local structural regularity for a more reliable and robust estimation. These works provide new ideas and methods for achieving super-resolution on general video sequences. However, they share a common defect: low computational efficiency.
Takeda et al. [21][22] proposed a series of kernel regression (KR) algorithms for image and video super-resolution, such as the classic kernel regression (CKR) and the 2D and 3D steering kernel regression (SKR). The CKR algorithm is computationally efficient, but its performance on edge regions is poor. The 2D and 3D SKR algorithms improve the performance on edge regions by exploiting 2D and 3D local radiometric structure information. However, the use of more information leads to higher computational cost. In order to improve computational efficiency, this paper proposes a fast kernel regression framework for video super-resolution, which takes full advantage of the three KR algorithms and avoids their disadvantages. In this framework, each video frame is first analyzed and divided into three types of regions: flat, non-flat stationary, and non-flat moving regions. A different KR algorithm is then selected according to the region type. The CKR and 2D SKR algorithms are applied to flat and non-flat stationary regions, respectively. For non-flat moving regions, this paper proposes a similarity-assisted steering kernel regression (SASKR) algorithm, which is an extension of the NLKR algorithm. The SASKR algorithm exploits the supplementary information from the local spatial and temporal orientations separately. It consists of two parts: the local SKR term and the nonlocal SKR term. The local SKR term exploits the supplementary information contained in the local spatial orientations, while the nonlocal SKR term makes use of the supplementary information contained in the local temporal orientation. The SASKR algorithm provides better performance and higher computational efficiency than the 3D SKR algorithm.
The remainder of this paper is organized as follows. Section 2 briefly reviews several KR algorithms and presents the similarity-assisted steering kernel regression algorithm. The fast kernel regression framework for video super-resolution is described in Section 3, and experimental results are presented in Section 4. Finally, conclusions are summarized in Section 5.
2. Similarity-assisted Steering Kernel Regression Algorithm
In this section, a brief technical review of several KR algorithms is first presented. Then an extension of the nonlocal kernel regression algorithm, called similarity-assisted steering kernel regression, is proposed.
 2.1 Classic Kernel Regression Algorithm for Image Super-resolution
The classic kernel regression (CKR) algorithm usually performs in a local manner, i.e., a pixel value of interest is estimated from the samples within a small neighborhood of that pixel. For the two-dimensional case, the regression model is

$$y_i = z(\mathbf{x}_i) + n_i, \quad i = 1, 2, \ldots, p \tag{1}$$

where $y_i$ is a noisy sample at position $\mathbf{x}_i = [x_{i,1}, x_{i,2}]^{T}$ ($x_{i,1}$ and $x_{i,2}$ are spatial coordinates), $z(\cdot)$ is a regression function, $n_i$ is zero-mean additive Gaussian noise, and $p$ is the total number of samples within the neighborhood. The generalized kernel regression estimate is given by solving the following weighted least-squares problem [21]:

$$\min_{\{\boldsymbol{\alpha}_n\}} \sum_{i=1}^{p} \left[\, y_i - \alpha_0 - \boldsymbol{\alpha}_1^{T}(\mathbf{x}_i - \mathbf{x}) - \boldsymbol{\alpha}_2^{T}\,\mathrm{vect}\{(\mathbf{x}_i - \mathbf{x})(\mathbf{x}_i - \mathbf{x})^{T}\} - \cdots \right]^2 k(\mathbf{x}_i - \mathbf{x}) \tag{2}$$

where $\boldsymbol{\alpha}_n$ ($n = 0, \ldots, N$) are the regression coefficients, $\alpha_0$ is the desired pixel-value estimate, the powers of $(\mathbf{x}_i - \mathbf{x})$ form the regression basis $\boldsymbol{\Phi}$, $\mathrm{vect}(\cdot)$ is an operator that extracts the lower-triangular part of a symmetric matrix and lexicographically orders it into a column vector, and $k(\mathbf{x}_i - \mathbf{x})$ is a kernel function that assigns a weight to each sample $y_i$. The Gaussian kernel function is defined as

$$k(\mathbf{x}_i - \mathbf{x}) = \frac{1}{2\pi h^2} \exp\left(-\frac{\|\mathbf{x}_i - \mathbf{x}\|^2}{2h^2}\right) \tag{3}$$

where $h$ is the global smoothing parameter.
Fig. 1 illustrates how the CKR algorithm is applied to image super-resolution. The input image is first upsampled onto the HR grid. Then each missing pixel value (denoted by a white circle) is estimated from the samples (denoted by black circles) within a small neighborhood of that pixel. The CKR algorithm is simple and computationally efficient. However, its performance on edge regions is poor.
Fig. 1. The classic kernel regression for image super-resolution
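The CKR estimate described above can be sketched as follows. This is a minimal NumPy illustration of Eqs. (1)-(3), not the authors' implementation; a second-order polynomial basis is assumed:

```python
import numpy as np

def gaussian_kernel(dx, h):
    # Eq. (3): isotropic Gaussian weight for each spatial offset x_i - x
    return np.exp(-np.sum(dx ** 2, axis=1) / (2.0 * h ** 2)) / (2.0 * np.pi * h ** 2)

def ckr_estimate(y, positions, x, h=1.5):
    """Solve the weighted least-squares problem of Eq. (2) with a
    second-order basis and return alpha_0, the estimated pixel value."""
    dx = positions - x                        # p x 2 offsets x_i - x
    w = gaussian_kernel(dx, h)                # kernel weights k(x_i - x)
    # Regression basis Phi_i = [1, dx1, dx2, dx1^2, dx1*dx2, dx2^2]
    Phi = np.column_stack([
        np.ones(len(y)), dx[:, 0], dx[:, 1],
        dx[:, 0] ** 2, dx[:, 0] * dx[:, 1], dx[:, 1] ** 2,
    ])
    A = Phi.T @ (w[:, None] * Phi)            # weighted normal equations
    b = Phi.T @ (w * y)
    alpha = np.linalg.solve(A, b)
    return alpha[0]                           # alpha_0: desired estimate
```

Because the basis contains constant and linear terms, the estimator reproduces locally linear image patches exactly, which is the behavior the CKR relies on in smooth regions.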
 2.2 2D Steering Kernel Regression Algorithm
The 2D steering kernel regression (SKR) algorithm was proposed to improve the performance on edge regions [21]. The 2D SKR algorithm defines its kernel function as

$$k(\mathbf{x}_i - \mathbf{x}) = \frac{\sqrt{\det(\mathbf{C}_i)}}{2\pi h^2} \exp\left(-\frac{(\mathbf{x}_i - \mathbf{x})^{T}\mathbf{C}_i(\mathbf{x}_i - \mathbf{x})}{2h^2}\right) \tag{4}$$

where $\mathbf{C}_i$ is estimated as the covariance matrix of the gradients of the neighboring pixels:

$$\mathbf{C}_i = \sum_{j=1}^{m} \begin{bmatrix} z_{x_1}(\mathbf{x}_j)^2 & z_{x_1}(\mathbf{x}_j)\,z_{x_2}(\mathbf{x}_j) \\ z_{x_1}(\mathbf{x}_j)\,z_{x_2}(\mathbf{x}_j) & z_{x_2}(\mathbf{x}_j)^2 \end{bmatrix}$$

where $z_{x_1}(\cdot)$ and $z_{x_2}(\cdot)$ are the first-order derivatives along the directions of the two coordinate axes, $\mathbf{x}_j$ is a sample position that falls into the analysis window centered on $\mathbf{x}_i$, and $m$ is the total number of samples within the analysis window. The 2D SKR algorithm captures the local radiometric structure and feeds the structure information to the kernel function. Thus, this algorithm gives better performance on edge regions but lower computational efficiency than the CKR algorithm.
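The steering weight can be sketched as follows. This is a minimal NumPy illustration of Eq. (4) and the gradient-covariance estimate; the gradient arrays and the small regularization term that keeps the matrix invertible in flat areas are assumptions, not part of the original formulation:

```python
import numpy as np

def steering_matrix(zx1, zx2, reg=1.0):
    """Covariance of the spatial gradients over the analysis window
    (reg * I is an assumed regularizer for invertibility in flat areas)."""
    C = np.array([
        [np.sum(zx1 * zx1), np.sum(zx1 * zx2)],
        [np.sum(zx1 * zx2), np.sum(zx2 * zx2)],
    ])
    return C + reg * np.eye(2)

def steering_kernel(dx, C, h=1.5):
    """Eq. (4): anisotropic Gaussian weight steered by C for offsets dx (p x 2)."""
    quad = np.einsum('ij,jk,ik->i', dx, C, dx)   # (x_i - x)^T C (x_i - x)
    return np.sqrt(np.linalg.det(C)) / (2.0 * np.pi * h ** 2) \
        * np.exp(-quad / (2.0 * h ** 2))
```

On a vertical edge (strong horizontal gradients), the resulting weights decay quickly across the edge and slowly along it, which is exactly how the steering kernel preserves edge structure.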
In order to perform video super-resolution, Takeda et al. [21] generalized the 2D SKR to the 3D SKR algorithm. The 3D SKR algorithm exploits the supplementary information contained in the local spatial and temporal orientations to achieve good video super-resolution results. However, the 3D SKR algorithm has an inherent limit: the size of the spatiotemporal neighborhood must be small. Thus, many auxiliary frames, which contain supplementary information but are far away from the reference frame, cannot be exploited.
 2.3 Similarity-assisted Steering Kernel Regression Algorithm
In the proposed framework, a similarity-assisted steering kernel regression (SASKR) algorithm is introduced as a replacement for the 3D SKR algorithm. The SASKR algorithm is similar to the NLKR algorithm. As shown in Fig. 2, the NLKR algorithm makes use of the local patch and the nonlocal similar patches to estimate a pixel value of interest via the kernel regression method. The NLKR algorithm gives a more reliable and robust estimation; however, it is computationally heavy. The SASKR algorithm has two advantages over the NLKR algorithm. First, the SASKR algorithm improves the computational efficiency: as shown in Fig. 3, it exploits only the similar pixels along the motion trajectory instead of all nonlocal similar pixels. Second, the SASKR algorithm adopts the SKR technique in both the local and nonlocal terms, which gives better performance.
Fig. 2. The NLKR algorithm for image and video restoration
Fig. 3. The similarity-assisted steering kernel regression algorithm for video super-resolution
Next, a detailed description of the SASKR algorithm is given. As shown in Fig. 3, the pixel of interest is estimated from the sample values within the local and nonlocal neighborhoods via the 2D SKR technique. A local neighborhood is a region centered on the pixel of interest, and a nonlocal neighborhood is a region centered on each similar pixel along the motion trajectory. Mathematically, the SASKR algorithm can be formulated as an optimization problem:

$$\min_{\boldsymbol{\alpha}} \left\{ (\mathbf{Y}_0 - \boldsymbol{\Phi}\boldsymbol{\alpha})^{T} \mathbf{K}_0 (\mathbf{Y}_0 - \boldsymbol{\Phi}\boldsymbol{\alpha}) + \sum_{t \neq 0} w_t\, (\mathbf{Y}_t - \boldsymbol{\Phi}\boldsymbol{\alpha})^{T} \mathbf{K}_t (\mathbf{Y}_t - \boldsymbol{\Phi}\boldsymbol{\alpha}) \right\} \tag{5}$$

where $\mathbf{Y}_0 = [y_{0,1}, y_{0,2}, \ldots, y_{0,p}]^{T}$ and $\mathbf{Y}_t = [y_{t,1}, y_{t,2}, \ldots, y_{t,p}]^{T}$ ($t \neq 0$) are column vectors composed of the sample values within the local and nonlocal neighborhoods, respectively; $\boldsymbol{\Phi}$ is the matrix whose rows are the regression basis vectors; $\mathbf{K}_0$ and $\mathbf{K}_t$ are diagonal matrices of the steering kernel weights; and $w_t$ is the weight for the similar pixel at frame $t$, which is calculated by measuring the similarity between $\mathbf{Y}_0$ and $\mathbf{Y}_t$.

Eq. (5) includes two parts: the local SKR term and the nonlocal SKR terms. The proposed algorithm exploits the supplementary information contained in the spatial and temporal orientations through these two terms. The first element $\alpha_0$ of the regression coefficients is taken as the estimate of the pixel of interest.
From the above analysis, it can be seen that the SASKR algorithm is able to exploit all the auxiliary frames that contain supplementary information. Thus, it breaks the inherent limit of the 3D SKR algorithm. The experimental results in Section 4.2 verify the effectiveness of the SASKR algorithm: it gives higher PSNR values and better visual quality than the 3D SKR.
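The combined weighted least-squares solution of Eq. (5) can be sketched as below. This is a simplified NumPy illustration, not the authors' implementation: the similarity weight w_t is assumed to be a Gaussian of the mean squared patch distance, and the steering-kernel weights are taken as given:

```python
import numpy as np

def similarity_weight(Y0, Yt, sigma=10.0):
    # w_t from the similarity between the local patch Y0 and the
    # nonlocal patch Yt (assumed Gaussian of the mean squared distance)
    return np.exp(-np.mean((Y0 - Yt) ** 2) / (2.0 * sigma ** 2))

def saskr_estimate(Y0, K0, patches, kernels, Phi):
    """Solve Eq. (5): local SKR term plus similarity-weighted nonlocal
    SKR terms; returns alpha_0, the estimated pixel value."""
    A = Phi.T @ (K0[:, None] * Phi)          # local term (K0: diagonal weights)
    b = Phi.T @ (K0 * Y0)
    for Yt, Kt in zip(patches, kernels):     # nonlocal terms, t != 0
        wt = similarity_weight(Y0, Yt)
        A += wt * (Phi.T @ (Kt[:, None] * Phi))
        b += wt * (Phi.T @ (Kt * Yt))
    alpha = np.linalg.solve(A, b)
    return alpha[0]
```

Because every nonlocal term only adds a weighted copy of the same small normal-equation system, the extra cost per similar patch is modest, which is where the efficiency gain over the 3D SKR comes from.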
3. The Fast Kernel Regression Framework for Video Super-Resolution
The performance and computational efficiency of the KR algorithms differ when they are applied to regions with different characteristics within a single video frame. The CKR algorithm has high computational efficiency, but its performance on edge regions is poor; the 2D SKR algorithm gives better performance on edge regions but lower computational efficiency; the SASKR algorithm offers the best performance but the lowest computational efficiency. Balancing performance against computational efficiency, this paper proposes a fast kernel regression framework in which KR algorithms of different computational complexity are automatically selected for estimating the pixels in different regions.
The flow chart of the proposed framework is shown in Fig. 4. The estimation process for each pixel is divided into two stages.
Fig. 4. A flow chart of the proposed framework
In the first stage, a 2D local neighborhood of the pixel of interest is extracted and the region type of the local neighborhood is analyzed. The regions in video frames are classified into three categories: flat, non-flat stationary, and non-flat moving regions. The analysis process is divided into two steps.
The first step determines whether the local neighborhood is a flat region by analyzing the 2D local radiometric structure. As described previously, the local radiometric structure can be captured by the covariance matrix of the spatial gradient vectors within the local neighborhood. The eigenvalues $\lambda_1$ and $\lambda_2$ of the covariance matrix measure the gradient strength in two perpendicular directions. Since a constant region is characterized by $\lambda_1 = \lambda_2 = 0$, the smoothness measure $\varsigma$ of a region, defined in [26], can be adopted to distinguish between flat and non-flat regions. The region is flat when $\varsigma$ is less than a certain threshold $\varepsilon_\varsigma$.
If the region is not flat, the second step further determines whether the local region moves between video frames. This can be done by many methods. In this paper, a simple method is adopted, based on the intensity difference between the local neighborhood $\mathbf{Y}_0$ and the corresponding region $\mathbf{Y}_1$ at the same position in the next or previous frame. The intensity difference is defined as the mean squared difference

$$PD = \frac{1}{p} \sum_{i=1}^{p} (y_{0,i} - y_{1,i})^2 \tag{6}$$

Ideally, a region without movement is characterized by $PD = 0$. However, the intensity difference is easily affected by noise, so a threshold $\varepsilon_{PD}$ is predefined according to the noise variance. The region is a non-flat stationary region when $PD$ is less than $\varepsilon_{PD}$; otherwise it is a non-flat moving region.
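The two-step classification can be sketched as follows. This is a minimal NumPy version in which the smoothness measure is approximated by the sum of the eigenvalues of the gradient covariance; this proxy stands in for the exact definition of [26] and is an assumption:

```python
import numpy as np

def classify_region(patch, patch_next, eps_s=10.0, eps_pd=20.0):
    """Classify a local neighborhood as 'flat', 'nonflat-stationary',
    or 'nonflat-moving' (thresholds eps_s, eps_pd as in Section 4)."""
    gx, gy = np.gradient(patch.astype(float))
    C = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                  [np.sum(gx * gy), np.sum(gy * gy)]])
    lam = np.linalg.eigvalsh(C)              # lambda_1, lambda_2 >= 0
    if lam.sum() < eps_s:                    # lambda_1 = lambda_2 = 0 => flat
        return 'flat'
    # PD: mean squared difference against the co-located patch in the
    # next (or previous) frame
    pd = np.mean((patch.astype(float) - patch_next.astype(float)) ** 2)
    return 'nonflat-stationary' if pd < eps_pd else 'nonflat-moving'
```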
In the second stage, a suitable KR algorithm is selected to estimate the pixel of interest according to the region type of its local neighborhood. The CKR algorithm is used to estimate the value of pixels in flat regions, aiming to improve computational efficiency; its implementation is described in Section 2.1. The 2D SKR algorithm is used to estimate the value of pixels in non-flat stationary regions, aiming to improve the performance on edge regions. Its implementation is divided into three steps. First, the gradients at all the sample positions within the local neighborhood are estimated; this is the so-called "pilot estimate" of [21]. Second, the covariance matrix C_i of each sample y_i is estimated from the initial pilot. Finally, the 2D SKR algorithm is applied to estimate the pixel of interest from the local neighborhood (which is embedded in an HR grid). The SASKR algorithm is used to estimate the value of pixels in non-flat moving regions, aiming to make full use of the supplementary information contained in the local spatial and temporal orientations. Its implementation steps are similar to those of the 2D SKR algorithm, but the SASKR algorithm exploits the samples within both the local and nonlocal neighborhoods.
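The per-pixel dispatch of the framework (Fig. 4) can be sketched as a small Python function; the three estimators are passed in as callables and are placeholders for the algorithms of Section 2:

```python
def estimate_pixel(local_patch, nonlocal_patches, region_type,
                   ckr, skr2d, saskr):
    """Dispatch to the KR estimator matching the region type (Fig. 4).
    ckr, skr2d, and saskr are callables standing in for Section 2's
    algorithms; only the SASKR path receives the nonlocal patches."""
    if region_type == 'flat':
        return ckr(local_patch)                   # cheapest estimator
    if region_type == 'nonflat-stationary':
        return skr2d(local_patch)                 # edge-aware, spatial only
    return saskr(local_patch, nonlocal_patches)   # spatial + temporal
```

Keeping the estimators as interchangeable callables mirrors the framework's design: the expensive SASKR path is entered only for the (typically small) fraction of non-flat moving pixels.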
4. Experimental Results
Two sets of experiments are carried out in this section. First, in Section 4.1, the computational efficiency of the proposed framework is validated by reporting the computational times for processing several real-world video sequences. Second, in Section 4.2, the performance of the proposed framework is examined by presenting the results of super-resolving several video sequences.
In all experiments, the degraded videos are obtained in the following manner: the original videos are blurred using a 3×3 uniform point spread function (PSF), spatially downsampled by a factor of 3:1 in the horizontal and vertical directions, and then contaminated by additive white Gaussian noise with standard deviation 2. All experiments are performed using MATLAB on an Intel Core i7-3770 CPU 3.4 GHz Microsoft Windows 7 platform.
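The degradation pipeline described above can be reproduced with a short NumPy sketch (the uniform-PSF blur is implemented as a direct 3×3 box average; edge-replication boundary handling is an assumption):

```python
import numpy as np

def degrade(frame, factor=3, sigma=2.0, rng=None):
    """Blur with a 3x3 uniform PSF, downsample by `factor` in each
    direction, and add white Gaussian noise of std `sigma`."""
    rng = np.random.default_rng() if rng is None else rng
    k = np.ones((3, 3)) / 9.0                 # uniform point spread function
    pad = np.pad(frame.astype(float), 1, mode='edge')
    blurred = sum(k[i, j] * pad[i:i + frame.shape[0], j:j + frame.shape[1]]
                  for i in range(3) for j in range(3))
    low = blurred[::factor, ::factor]         # 3:1 spatial downsampling
    return low + rng.normal(0.0, sigma, low.shape)
```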
 4.1 Computational Efficiency
In order to validate the computational efficiency of the proposed framework, three experiments are implemented. In the first experiment, six real-world videos, namely, "Foreman" (288×351×30), "gsalesman" (288×351×30), "Miss America" (270×189×30), "Suzie" (240×351×30), "gbus" (288×351×30), and "coastguard" (144×174×30), are used as the original videos. The proposed framework is compared with the 3D SKR and SASKR algorithms. The implementation code of the 3D SKR algorithm was downloaded from the authors' website, and its parameter settings are the same as in [21]. As for the parameter settings of the SASKR algorithm and the proposed framework: the global smoothing parameter h is set to 1.5; the size of the local neighborhood is fixed to 7×7; the support of the similarity search is fixed to a 15×15×11 local cube centered on the pixel of interest; and the thresholds are ε_ς = 10 and ε_PD = 20.
The computational times of the three algorithms for the whole videos (30 frames) are summarized in Table 1. As we can see, the proposed framework is much faster than the other two algorithms. The main reason is that KR algorithms of different computational complexity are applied to estimate the pixels in the different categories of regions. The results indicate that the proposed framework greatly improves the computational efficiency. Note that the improvement differs even between videos of the same size and resolution: for instance, the computational time for the "Foreman" sequence is 434.84 seconds, whereas that for the "gsalesman" sequence is 82.93 seconds. The main reason is that the number of regions of each category differs between video sequences. In addition, the SASKR algorithm is also faster than the 3D SKR. The increase in computational speed of the SASKR algorithm is due to the decreased cost of the kernel function: the kernel function of the SASKR algorithm is 2D instead of 3D.
Table 1. The computational times (seconds) of three comparison algorithms for six videos
In the second experiment, the proposed framework is compared with some other state-of-the-art methods [18][20]. Since the codes for these methods are not publicly available, only limited comparisons are possible. Under the same experimental conditions, the method of [20] requires approximately 20 seconds per frame when super-resolving the "Suzie" sequence with a high-resolution frame size of 210×250 pixels, whereas the proposed framework needs only 0.5146 seconds. The C++ implementation of the method in [18] takes about two hours to super-resolve a 720×480 frame using 30 adjacent frames at an upsampling factor of 4, whereas the MATLAB implementation of the proposed framework takes only 3.7308 seconds.
The third experiment studies how sensitive the computational efficiency of the proposed framework is to the scene content of the videos. In this experiment, the whole "gbus" sequence (288×351×150) is used as the original video. The whole "gbus" sequence includes some segments in which the scene changes frequently. A cropped sequence is obtained by taking the first 30 frames of the whole sequence; the scene in the cropped sequence changes slowly. The average computational time on the whole sequence is 31.664 seconds per frame, whereas that on the cropped sequence is 11.5359 seconds per frame. Therefore, the computational efficiency of the proposed framework is sensitive to the scene content of the videos. The main reason is that the method of determining the region types in the proposed framework is simple. This problem can be alleviated by adopting more sophisticated methods of determining the region types.
 4.2 Performance
In this section, the performance of the proposed framework is examined through comparisons with related state-of-the-art algorithms. For a fair comparison, the TV-based deblurring algorithm [27] is used for image deblurring. The first experiment compares the proposed framework with the 3D SKR and SASKR algorithms on six video sequences. Both objective and subjective quality assessments are adopted to evaluate the algorithms. For the objective quality assessment, the Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity (SSIM) index [28] are adopted as the evaluation metrics. The graphs in Figs. 5-10 illustrate the frame-by-frame PSNR values of the videos reconstructed by the three algorithms. The average PSNR values are summarized in Table 2 and the average SSIM values in Table 3. As can be seen, the performance of the three algorithms is very close.
Fig. 5. PSNR values of each reconstructed HR frame by the three algorithms for the Foreman sequence
Fig. 6. PSNR values of each reconstructed HR frame by the three algorithms for the Miss America sequence
Fig. 7. PSNR values of each reconstructed HR frame by the three algorithms for the gsalesman sequence
Fig. 8. PSNR values of each reconstructed HR frame by the three algorithms for the Suzie sequence
Fig. 9. PSNR values of each reconstructed HR frame by the three algorithms for the gbus sequence
Fig. 10. PSNR values of each reconstructed HR frame by the three algorithms for the coastguard sequence
Table 2. The average PSNR values for six reconstructed HR videos
Table 3. The average SSIM values for six reconstructed HR videos
For the subjective quality assessment, the reconstructed frames are judged visually. The SR results on the Foreman and Miss America sequences are shown in Fig. 11 for visual comparison. The differences between the frames reconstructed by the three algorithms are also not apparent.
Fig. 11. Samples of the reconstructed frames by the three algorithms: (a) Foreman, (b) Miss America. The first column shows the results of the 3D SKR algorithm, the second column the results of the SASKR algorithm, and the third column the results of the proposed framework
The second experiment compares the proposed framework with some other state-of-the-art methods, namely BM3D [19], GNLM [20], and NLKR [25]. Similarly, since the codes for these methods are not available, only limited comparisons are implemented. Here, the PSNR and SSIM index are used for objective evaluation. The average PSNR values are summarized in Table 4 and the average SSIM values in Table 5. Note that the results for the three comparison algorithms are cited directly from [25]. Since a different Foreman sequence is adopted, the results of the proposed framework on the Foreman sequence differ from those in Table 2 and Table 3. For a fair comparison, the Foreman sequence used in [20], of size 288×312×30, is adopted in this experiment. As we can see from Table 4 and Table 5, the performance of the proposed framework decreases slightly compared with the BM3D and NLKR algorithms.
Table 4. The average PSNR values for two reconstructed HR videos
Table 5. The average SSIM values for two reconstructed HR videos
In conclusion, although the performance of the proposed framework is slightly degraded compared with the BM3D and NLKR algorithms, its computational efficiency is greatly improved. Therefore, we believe the proposed framework strikes a good balance between computational efficiency and performance.
5. Conclusion
A fast kernel regression framework is proposed in this paper. In this framework, the video regions are classified into three categories: flat, non-flat stationary, and non-flat moving regions. KR algorithms of different computational complexity are adaptively selected for estimating each high-resolution pixel in the different regions. Thus, the proposed framework makes the best use of the advantages of the different KR algorithms and greatly improves the computational efficiency. In addition, a similarity-assisted steering kernel regression algorithm is proposed for estimating the pixels in non-flat moving regions. The SASKR algorithm gives better performance and higher computational efficiency than the 3D SKR algorithm.
BIO
Wensen Yu is currently a Ph.D. student in the College of Computer Science at Sichuan University, Chengdu, China. He is working as an associate professor in the College of Mathematics and Computer Science at WuYi University. He received his M.S. degree in software engineering from Jiangxi Normal University, Nanchang, China, in 2004. His current research interests include image and video super-resolution, image fusion, and sparse representation.
Minghui Wang was born in Xi'an, China, in 1971. He is working as a professor in the College of Computer Science at Sichuan University. He has worked as a postdoctoral research fellow in information and communication engineering at Tsinghua University. He received his Ph.D. and M.S. degrees from Northwestern Polytechnical University and Xidian University, respectively. His current research interests are information fusion and computer vision.
Huawen Chang is currently a Ph.D. student in the College of Computer Science at Sichuan University, China. He is working as a lecturer in the College of Computer and Communication Engineering at Zhengzhou University of Light Industry. He received his M.S. degree in computer science from Guilin University of Technology, China, in 2007. His current research interests include image and video quality assessment, image fusion, sparse representation, independent component analysis, and wavelet analysis.
Shuqing Chen received her M.S. degree from South-Central University for Nationalities, China, in 2005. She is currently working toward her Ph.D. degree at Sichuan University and is also a lecturer with the Department of Electronics and Information Engineering at Putian University, China. Her research interests include image and video feature extraction, matching and alignment, and reconstruction.
References
[1] Tsai R. Y., Huang T. S., "Multiframe image restoration and registration," Advances in Computer Vision and Image Processing, vol. 1, pp. 317-339, 1984.
[2] Kim S. P., Bose N. K., Valenzuela H. M., "Recursive reconstruction of high resolution image from noisy undersampled multiframes," IEEE Trans. Acoustics, Speech, Signal Processing, vol. 38, pp. 1013-1027, 1990. DOI: 10.1109/29.56062
[3] Kaltenbacher E., Hardie R. C., "High resolution infrared image reconstruction using multiple, low resolution, aliased frames," in Proc. of IEEE Nat. Aerospace Electronics Conf., vol. 2, pp. 702-709, May 1996.
[4] Irani M., Peleg S., "Improving resolution by image registration," CVGIP: Graphical Models and Image Processing, vol. 53, no. 3, pp. 231-239, 1991. DOI: 10.1016/1049-9652(91)90045-L
[5] Farsiu S., Robinson M. D., Elad M., Milanfar P., "Fast and robust multiframe super resolution," IEEE Trans. Image Processing, vol. 13, no. 10, pp. 1327-1344, 2004. DOI: 10.1109/TIP.2004.834669
[6] Shen H., Zhang L., Huang B., Li P., "A MAP approach for joint motion estimation, segmentation, and super resolution," IEEE Trans. Image Processing, vol. 16, no. 2, pp. 479-490, 2007. DOI: 10.1109/TIP.2006.888334
[7] Chantas G., Galatsanos N., Woods N., "Super-resolution based on fast registration and maximum a posteriori reconstruction," IEEE Trans. Image Processing, vol. 16, no. 7, pp. 1821-1830, 2007. DOI: 10.1109/TIP.2007.896664
[8] Li X., Hu Y., Gao X., Tao D., "A multi-frame image super-resolution method," Signal Processing, vol. 90, no. 2, pp. 405-414, 2010. DOI: 10.1016/j.sigpro.2009.05.028
[9] Zhang L., Zhang L., Shen H., Li P., "A super-resolution reconstruction algorithm for surveillance images," Signal Processing, vol. 90, no. 3, pp. 848-859, 2010. DOI: 10.1016/j.sigpro.2009.09.002
[10] Tekalp A. M., Ozkan M. K., Sezan M. I., "High-resolution image reconstruction from lower-resolution image sequences and space-varying image restoration," in Proc. IEEE Int. Conf. Acoustics, Speech, Signal Processing, pp. 169-172, March 1992.
[11] Fan C., Zhu J., Gong J., Kuang C., "POCS super-resolution sequence image reconstruction based on improvement approach of Keren registration method," Intelligent Systems Design and Applications, vol. 2, pp. 333-337, 2006.
[12] Borman S., Stevenson R. L., "Super-resolution from image sequences - a review," in Proc. 1998 Midwest Symp. Circuits and Systems, pp. 374-378, 1999.
[13] Park S. C., Park M. K., Kang M. G., "Super-resolution image reconstruction: a technical overview," IEEE Signal Processing Magazine, vol. 20, no. 3, pp. 21-36, 2003. DOI: 10.1109/MSP.2003.1203207
[14] Farsiu S., Elad M., Milanfar P., "Video-to-video dynamic super-resolution for grayscale and color sequences," EURASIP Journal on Applied Signal Processing, vol. 2006, pp. 1-15, 2006. DOI: 10.1155/ASP/2006/61859
[15] Hardie R., "A fast image super-resolution algorithm using an adaptive Wiener filter," IEEE Trans. Image Processing, vol. 16, no. 12, pp. 2953-2964, 2007. DOI: 10.1109/TIP.2007.909416
[16] Islam M. M., Asari V. K., Islam M. N., Karim M. A., "Super-resolution enhancement technique for low resolution video," IEEE Trans. Consumer Electronics, vol. 56, no. 2, pp. 919-924, 2010. DOI: 10.1109/TCE.2010.5506020
[17] Keller S. H., Lauze F., Nielsen M., "Video super-resolution using simultaneous motion and intensity calculations," IEEE Trans. Image Processing, vol. 20, no. 7, pp. 1870-1884, 2011. DOI: 10.1109/TIP.2011.2106793
[18] Liu C., Sun D., "A Bayesian approach to adaptive video super resolution," in Proc. of 2011 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2011), pp. 209-216, June 2011.
[19] Danielyan A., Foi A., Katkovnik V., Egiazarian K., "Image and video super-resolution via spatially adaptive block-matching filtering," presented at the Int. Workshop on Local and Non-Local Approximation in Image Processing, Lausanne, Switzerland, August 2008.
[20] Protter M., Elad M., Takeda H., Milanfar P., "Generalizing the nonlocal-means to super-resolution reconstruction," IEEE Trans. Image Processing, vol. 18, no. 1, pp. 36-51, 2009. DOI: 10.1109/TIP.2008.2008067
[21] Takeda H., Milanfar P., Protter M., Elad M., "Super-resolution without explicit subpixel motion estimation," IEEE Trans. Image Processing, vol. 18, no. 9, pp. 1958-1975, 2009. DOI: 10.1109/TIP.2009.2023703
[22] Takeda H., Farsiu S., Milanfar P., "Kernel regression for image processing and reconstruction," IEEE Trans. Image Processing, vol. 16, no. 2, pp. 349-366, 2007. DOI: 10.1109/TIP.2006.888330
[23] Zhang K., Mu G., Yan Y., Gao X., Tao D., "Video super-resolution with 3D adaptive normalized convolution," Neurocomputing, vol. 94, no. 1, pp. 140-151, 2012.
[24] Zhang H., Yang J., Zhang Y., Huang T., "Non-local kernel regression for image and video restoration," in Computer Vision - ECCV 2010, Lecture Notes in Computer Science, vol. 6313, pp. 566-579, 2010.
[25] Zhang H., Yang J., Zhang Y., Huang T., "Image and video restorations via nonlocal kernel regression," IEEE Transactions on Cybernetics, vol. 43, no. 3, pp. 1035-1046, 2013. DOI: 10.1109/TSMCB.2012.2222375
[26] Su H., Tang L., Wu Y., Tretter D., Zhou J., "Spatially adaptive block-based super-resolution," IEEE Transactions on Image Processing, vol. 21, no. 3, pp. 1031-1045, 2012.
[27] Getreuer P., 2012. [Online]. Available:
[28] Wang Z., Bovik A. C., Sheikh H. R., Simoncelli E. P., "Image quality assessment: from error visibility to structural similarity," IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600-612, 2004. DOI: 10.1109/TIP.2003.819861