Performance Evaluation of Pansharpening Algorithms for WorldView-3 Satellite Imagery
Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography. 2016. Aug, 34(4): 413-423
Copyright © 2016, Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
  • Received : July 26, 2016
  • Accepted : August 23, 2016
  • Published : August 31, 2016
About the Authors
Gu Hyeok Kim
Dept. of Civil Engineering, Chungbuk National University
Nyung Hee Park
Dept. of Civil Engineering, Chungbuk National University
Seok Keun Choi
School of Civil Engineering, Chungbuk National University
Jae Wan Choi
Corresponding Author, School of Civil Engineering, Chungbuk National University

Abstract
The Worldview-3 satellite sensor provides a high-spatial-resolution panchromatic image and 8-band multispectral images. Therefore, an image-sharpening technique, which sharpens the spatial resolution of multispectral images by using the high-spatial-resolution panchromatic image, is essential for various applications of Worldview-3 imagery based on image interpretation and processing. Existing pansharpening algorithms tend to trade off spectral distortion against spatial enhancement. In this study, we applied six pansharpening algorithms to Worldview-3 satellite imagery and assessed the quality of the pansharpened images qualitatively and quantitatively. We also analyzed the effect of the time lag between the multispectral sensors during the pansharpening process. Quantitative assessment of the pansharpened images was performed by comparing ERGAS (Erreur Relative Globale Adimensionnelle de Synthèse), SAM (Spectral Angle Mapper), Q-index and sCC (spatial Correlation Coefficient) on a real data set. In the experiments, the quantitative results obtained by MRA (Multi-Resolution Analysis)-based algorithms were better than those obtained by CS (Component Substitution)-based algorithms, although the qualitative spectral quality of the two groups was similar. In addition, images obtained by the CS-based algorithms and by the division of the two multispectral sensors were sharper in terms of spatial quality than those obtained by the other pansharpening algorithms. Therefore, a pansharpening method for Worldview-3 images should be selected according to the intended application of the remote sensing data, such as spectral- or spatial-information-based applications.
Keywords
1. Introduction
Various remote sensing satellite sensors, such as Kompsat-2/3/3A, Geoeye-1, QuickBird and Worldview-2, provide a high-spatial-resolution panchromatic image together with multispectral images. To acquire multispectral images with high spatial resolution, researchers have developed pansharpening or image fusion algorithms; these algorithms sharpen the spatial resolution of multispectral images by using the high-spatial-resolution panchromatic image (Alparone et al., 2006). The pansharpening technique is essential to various applications of remotely sensed images based on image interpretation and processing. The two major issues with the pansharpening process are spectral distortion and loss of spatial information in the pansharpened image (Choi, 2011). The various algorithms proposed for solving these problems tend to trade off spectral distortion against spatial enhancement. In particular, the development of the FIHS (Fast Intensity-Hue-Saturation) fusion method, which can quickly merge huge volumes of satellite data and can be extended from three to four or more bands, has accelerated technical advancement in the pansharpening field (Tu et al., 2004). Laben and Brower (2000) proposed the GS (Gram–Schmidt) pansharpening algorithm, which is implemented in the ENVI (ENvironment for Visualizing Images) software. Choi (2011) noted that pansharpening algorithms can be divided into CS (Component Substitution)- and MRA (Multi-Resolution Analysis)-based methods depending on the method used for generating the low-spatial-resolution intensity image. Pansharpened images produced using CS-based methods tend to distort spectral information when compared with those produced using MRA-based methods. On the other hand, MRA-based pansharpened images show relatively decreased sharpness. Aiazzi et al. (2009) analyzed global and context-adaptive parameters of CS-based and MRA-based pansharpening algorithms, including the GS algorithm. Choi (2011) proposed an image fusion methodology for minimizing the local displacement caused by spatial differences among multispectral bands in a Worldview-2 image. Recently, Vivone et al. (2015) conducted a quantitative comparison of about eighteen pansharpening algorithms. They also released a MATLAB (MATrix LABoratory) toolbox to the remote sensing community for easy comparison with state-of-the-art pansharpening algorithms (Vivone et al., 2015).
In 2014, the Worldview-3 satellite sensor, which provides panchromatic images with 0.3 m spatial resolution, 1.2 m resolution multispectral images (8 bands) and 3.7 m resolution short-wave infrared images (8 bands), was launched (Kruse and Perry, 2013). Worldview-3 images have higher spatial resolution than those obtained with other commercial satellite sensors, and their characteristics are similar to those of Worldview-2 images. In addition, as mentioned by Choi (2011), the pansharpening process for 8-band multispectral data should differ from that for 4-band data because of the local displacement of spatial information caused by the time lag. However, performance evaluation and related research on pansharpening of Worldview-3 imagery are less extensive than those on pansharpening of basic 4-band multispectral images. In this study, various state-of-the-art pansharpening algorithms were applied to Worldview-3 satellite imagery, and the quality of the pansharpened images was assessed qualitatively and quantitatively. In addition, by extending the experiments of Vivone et al. (2015), we analyzed the effect of the time lag on each of the 8 bands of the Worldview-3 image during the pansharpening process. Quantitative assessment of the pansharpened images was performed by using a total of four measures on a real data set. The organization of our manuscript is as follows. Section 2 presents an overview of pansharpening methods; Section 3 describes the workflow for evaluating pansharpening performance; Section 4 shows the study area and data set; Section 5 illustrates the experimental results using real data; and finally Section 6 concludes the study.
2. Overview of Pan-sharpening Algorithms
A general pansharpening algorithm can be defined as Eq. (1):
$$\widehat{MS}_i = MS_i + w_i\,(P - I), \quad i = 1, \ldots, N \qquad (1)$$

where $\widehat{MS}_i$: the pansharpened image with high spatial resolution, $MS_i$: the original multispectral image with low spatial resolution (resampled to the panchromatic scale), $w_i$: the coefficient for pansharpening, $I$: an intensity image, and $P$: the panchromatic image with high spatial resolution.
The intensity image I differs depending on the type of pansharpening algorithm. In the CS-based methods, the intensity image is obtained by combining the multispectral bands, whereas in the MRA-based methods, the intensity image is generated by degrading the panchromatic image. In this study, we chose state-of-the-art algorithms based on the results of the comparative study by Vivone et al. (2015) for assessing pansharpening quality for Worldview-3 images. Detailed descriptions are provided below.
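To make Eq. (1) concrete, the following is a minimal sketch in Python of the band-wise injection rule, assuming the multispectral bands have already been resampled to the panchromatic grid; the array names and the mean-based intensity shown in the usage comment are illustrative choices, not the exact formulation of any particular algorithm in Section 2.

```python
import numpy as np

def generic_pansharpen(ms_up: np.ndarray, pan: np.ndarray,
                       intensity: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Apply Eq. (1): MS_hat_i = MS_i + w_i * (P - I) for every band i.

    ms_up     : (bands, rows, cols) multispectral image resampled to pan resolution
    pan       : (rows, cols) panchromatic image
    intensity : (rows, cols) intensity image I
    weights   : (bands,) injection coefficients w_i
    """
    detail = pan - intensity                      # high-frequency detail to inject
    return ms_up + weights[:, None, None] * detail

# Illustrative usage with a CS-style intensity built as the band average
# (an assumption; the algorithms in Section 2 estimate I and w_i differently):
# intensity = ms_up.mean(axis=0)
# sharpened = generic_pansharpen(ms_up, pan, intensity, np.ones(ms_up.shape[0]))
```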
- 2.1 BDSD (Band-Dependent Spatial-Detail with local parameter estimation)
The BDSD algorithm optimizes the fusion parameters for injecting the high-frequency information of the panchromatic image in an MMSE (Minimum Mean Square Error) sense. In BDSD, the fusion parameters are designed based on the MVU (Minimum-Variance-Unbiased) estimator using the panchromatic image, the resampled multispectral bands, and the spatially degraded multispectral bands (Garzelli et al., 2008). BDSD can be used with a local or global injection model, similar to the CS-based pansharpening model. Many researchers have used this algorithm as a state-of-the-art method in the pansharpening field.
- 2.2 GSA (Gram-Schmidt Adaptive) algorithm
The GSA pansharpening algorithm is an adaptive version of GS. The fusion parameters for GSA are determined using statistical characteristics of the images, such as the variance of the intensity image and the covariance between the intensity image and each multispectral band. Thereafter, an optimal intensity image is generated based on multiple linear regression between the multispectral bands and the degraded panchromatic image. Aiazzi et al. (2009) described this method in detail.
- 2.3 PRACS (Partial Replacement Adaptive Component Substitution) algorithm
Spectral distortion in general CS-based pansharpening algorithms is caused by the spectral dissimilarity between the panchromatic and multispectral bands. Choi et al. (2011) derived a band-dependent panchromatic image by using a weighted summation of the original panchromatic image and the intensity image computed from the multispectral bands; they also proposed an optimal fusion parameter for minimizing the local spectral instability error. This algorithm has a framework similar to that of the CS-based algorithms, but its performance in minimizing spectral distortion is better than that of other CS- and MRA-based algorithms.
- 2.4 HPF (High Pass Filtering) algorithm
HPF is one of the simplest ways to sharpen a multispectral image. In the HPF algorithm, high-frequency information from the panchromatic image, extracted using a high-pass filter, is injected into the multispectral image (Chavez et al., 1991). The fusion parameter for the injection is determined by a statistical model. HPF is implemented in some commercial remote sensing software, such as ERDAS Imagine.
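As a rough illustration of the HPF idea, the sketch below extracts the high-pass component of the panchromatic image with a box filter and injects it with a standard-deviation-based gain; the 5×5 filter size and the gain rule are common textbook choices under our assumptions, not necessarily the settings used in commercial software.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def hpf_pansharpen(ms_up: np.ndarray, pan: np.ndarray, size: int = 5) -> np.ndarray:
    """Inject the high-pass detail of the panchromatic image into each band.

    ms_up : (bands, rows, cols) multispectral image resampled to pan resolution
    pan   : (rows, cols) panchromatic image
    """
    high = pan - uniform_filter(pan, size=size)   # high-frequency component of pan
    out = np.empty_like(ms_up, dtype=np.float64)
    for i, band in enumerate(ms_up):
        gain = band.std() / pan.std()             # simple statistical injection gain
        out[i] = band + gain * high
    return out
```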
- 2.5 AWLP (Additive Wavelet Luminance Proportional) algorithm
In MRA-based pansharpening algorithms, wavelet transformation is a representative technique for extracting high-frequency information from panchromatic images. In the wavelet transformation, the high-spatial-resolution panchromatic image is decomposed into a set of low-spatial-resolution images with the corresponding spatial details, that is, the wavelet coefficients. The extracted wavelet coefficients carrying the high-frequency information are directly injected into each multispectral band. In particular, the AWLP algorithm uses injection parameters based on the proportion between the average of the multispectral bands and each multispectral band (Otazu et al., 2005).
- 2.6 GLP (Generalized Laplacian Pyramid) with MTF (Modulation Transfer Function)-matched filter and CBD (Context-Based Decision) model (referred to as MTF-GLP-CBD algorithm)
MTF-GLP-CBD is known as one of the most efficient and representative MRA-based pansharpening algorithms. In MTF-GLP-CBD, a low-spatial-resolution panchromatic image is generated using a GLP to extract high-frequency information at each image pyramid level (Vivone et al., 2015). In particular, the spatial filter of the GLP is designed by exploiting the MTF. Meanwhile, the fusion parameter, determined by the CBD method, is based on the standard deviations of the multispectral and low-resolution panchromatic images and their correlation.
3. Comparison Methodology for Evaluating the Pansharpening Quality of Worldview-3 Imagery
- 3.1 Quality estimation protocol of pansharpened images
Pansharpened images can be evaluated using the synthesis and consistency properties (Palsson et al., 2016). In general, pansharpened images obtained using the original panchromatic and multispectral images have no reference for comparison purposes. This is a critical limitation for evaluating pansharpened data, and each paradigm tries to solve this problem in a different way. First, the synthesis paradigm uses spatially degraded panchromatic and multispectral images. The degradation factor is defined by the resolution ratio between the original panchromatic and multispectral images. The spatial resolution of a pansharpened image obtained from the spatially degraded images is therefore identical to that of the original multispectral image, so the original image can be used as the reference for estimating the quality of the pansharpened image. Fig. 1 represents the synthesis protocol. An original multispectral image and a panchromatic image with spatial resolutions of 1.2 m and 0.3 m, respectively, are spatially degraded by the resolution ratio using the MTF (Vivone et al., 2015), producing multispectral and panchromatic images with 4.8 m and 1.2 m spatial resolution, respectively. Finally, the pansharpened image obtained from the degraded images has a resolution identical to that of the original multispectral image.
Fig. 1. Framework of synthesis protocol
However, the synthesis protocol does not guarantee the quality of pansharpened images produced from real data, because most users do not pansharpen spatially degraded data. Therefore, some researchers proposed the consistency paradigm, which uses the original multispectral image as the reference data. Fig. 2 describes the consistency paradigm for evaluating pansharpened images. In the consistency protocol illustrated in Fig. 2, an original multispectral image and a panchromatic image with spatial resolutions of 1.2 m and 0.3 m, respectively, are fused, and a pansharpened image with 0.3 m spatial resolution is created. The pansharpened image is then spatially degraded to 1.2 m resolution using the MTF. Thus, a degraded pansharpened image that can be compared with the original multispectral image is obtained.
Fig. 2. Framework of consistency protocol
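A minimal sketch of the degradation step of the consistency protocol is given below, assuming a Gaussian low-pass filter as a stand-in for the sensor MTF and a pan/MS resolution ratio of 4; the sigma value is an illustrative assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade_to_ms_resolution(fused: np.ndarray, ratio: int = 4,
                             sigma: float = 1.6) -> np.ndarray:
    """MTF-like low-pass filtering followed by decimation by the pan/MS ratio.

    fused : (bands, rows, cols) pansharpened image at panchromatic resolution
    """
    blurred = np.stack([gaussian_filter(b, sigma=sigma) for b in fused])
    return blurred[:, ::ratio, ::ratio]

# fused = ...                                  # (8, H, W) pansharpened image at 0.3 m
# degraded = degrade_to_ms_resolution(fused)   # comparable to the 1.2 m MS image
```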
Meanwhile, QNR (Quality No Reference) metrics have been used in many studies for evaluating pansharpened images. However, Palsson et al. (2016) concluded that QNR metrics are not efficient for the quantitative quality estimation of pansharpened images. In addition, because quantitative evaluations by the synthesis and consistency protocols show similar trends, researchers prefer the consistency protocol based on a real data set (Palsson et al., 2016). Therefore, the consistency protocol can be used as a reliable paradigm for estimating the performance of pansharpening algorithms.
- 3.2 Quality estimation indices for pansharpened images
To evaluate the quality of pansharpened images, various measures based on the consistency and synthesis paradigms have been used. In this study, we applied four metrics: ERGAS (Erreur Relative Globale Adimensionnelle de Synthèse), SAM (Spectral Angle Mapper), Q-index and sCC (spatial Correlation Coefficient). Let us consider the fused and reference images F and R.
1) ERGAS: It estimates the relative global error of the spectral information. The equation for ERGAS is as follows (Vivone et al., 2015):

$$ERGAS = 100\,\frac{h}{l}\sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(\frac{RMSE(B_i)}{\mu(B_i)}\right)^{2}} \qquad (2)$$

where h and l: the spatial resolutions of the panchromatic and multispectral images (e.g., h and l are 0.3 m and 1.2 m, respectively, for Worldview-3 imagery), N: the number of bands, RMSE(B_i): the root mean square error between F_i and R_i at the i-th band, and μ(B_i): the mean of the i-th band of the reference image.

The lower the ERGAS index, the lower the spectral distortion of the pansharpened image.
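A direct implementation of Eq. (2) might look as follows, with the band means taken from the reference image, which is the usual convention.

```python
import numpy as np

def ergas(fused: np.ndarray, ref: np.ndarray, ratio: float = 0.3 / 1.2) -> float:
    """ERGAS for fused/ref arrays of shape (bands, rows, cols); ratio = h / l."""
    terms = []
    for f, r in zip(fused, ref):
        rmse = np.sqrt(np.mean((f - r) ** 2))      # per-band RMSE
        terms.append((rmse / r.mean()) ** 2)       # normalized by the reference mean
    return 100.0 * ratio * float(np.sqrt(np.mean(terms)))
```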
2) SAM: It is used in various applications for measuring the spectral difference among pixels, such as image classification, target detection and change detection. It quantifies the spectral angle difference between the corresponding pixels of F and R as per Eq. (3):

$$SAM(F, R) = \arccos\left(\frac{\langle F, R\rangle}{\lVert F\rVert\,\lVert R\rVert}\right) \qquad (3)$$

where ⟨A, B⟩: the dot product between A and B, and ‖A‖: the norm of A.

The SAM index is obtained by averaging the SAM values over all the pixels (Vivone et al., 2015). A SAM value approaching zero means that the pansharpened image is spectrally undistorted.
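The per-pixel angle of Eq. (3), averaged over all pixels, can be sketched as follows.

```python
import numpy as np

def sam_degrees(fused: np.ndarray, ref: np.ndarray, eps: float = 1e-12) -> float:
    """Mean spectral angle (degrees) between (bands, rows, cols) arrays."""
    f = fused.reshape(fused.shape[0], -1)          # spectra as columns
    r = ref.reshape(ref.shape[0], -1)
    dot = np.sum(f * r, axis=0)
    cosang = dot / (np.linalg.norm(f, axis=0) * np.linalg.norm(r, axis=0) + eps)
    angles = np.arccos(np.clip(cosang, -1.0, 1.0)) # per-pixel spectral angle
    return float(np.degrees(angles.mean()))
```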
3) Q-index: The Q-index, also known as UIQI (Universal Image Quality Index), was developed by Wang and Bovik (2002). It measures the spectral distortion of a pansharpened image through three factors: loss of correlation, luminance distortion and contrast distortion (Wang and Bovik, 2002). It is defined as Eq. (4):

$$Q = \frac{4\,\sigma_{A,B}\,\bar{A}\,\bar{B}}{(\sigma_{A}^{2} + \sigma_{B}^{2})(\bar{A}^{2} + \bar{B}^{2})} \qquad (4)$$

where σ_A: the standard deviation of A, σ_{A,B}: the covariance of A and B, and Ā: the mean of A.

The Q-index approaches one as the spectral information of the pansharpened image becomes more similar to that of the reference image.
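A global (non-windowed) version of Eq. (4) can be sketched as below; in practice the Q-index is usually computed over sliding windows and averaged, which this sketch omits.

```python
import numpy as np

def q_index(a: np.ndarray, b: np.ndarray) -> float:
    """Universal Image Quality Index between two single-band images."""
    a_mean, b_mean = a.mean(), b.mean()
    a_var, b_var = a.var(), b.var()
    cov = np.mean((a - a_mean) * (b - b_mean))     # sigma_{A,B}
    return float(4 * cov * a_mean * b_mean /
                 ((a_var + b_var) * (a_mean ** 2 + b_mean ** 2)))
```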
4) sCC: sCC represents the spatial quality of a pansharpened image, in contrast to ERGAS, SAM and Q-index, which evaluate the spectral quality. sCC measures the similarity of high-frequency information between the pansharpened and panchromatic images (Zhou et al., 1998). First, a Laplacian filter with a 3×3 window is applied to each image to extract high-frequency information. Thereafter, the correlation coefficient is calculated between the information extracted from the pansharpened and panchromatic images. sCC has a range of [0, 1]; the higher the sCC, the higher the spatial quality of the pansharpened image.
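A sketch of the sCC computation is given below, assuming scipy's 3×3 Laplacian operator as the high-pass filter.

```python
import numpy as np
from scipy.ndimage import laplace

def scc(fused_band: np.ndarray, pan: np.ndarray) -> float:
    """Spatial correlation coefficient between one fused band and the pan image."""
    hf_fused = laplace(fused_band)                 # high-frequency part of the fused band
    hf_pan = laplace(pan)                          # high-frequency part of the pan image
    return float(np.corrcoef(hf_fused.ravel(), hf_pan.ravel())[0, 1])
```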
- 3.3 Pansharpening process for Worldview-3 satellite imagery
The Worldview-3 satellite sensor includes two multispectral sensors (MS1 and MS2) along with one panchromatic sensor. The MS1 sensor provides the blue, green, red and NIR1 channels, while the MS2 sensor provides the coastal, yellow, red edge and NIR2 channels. High-spatial-resolution satellite sensors generally have a time lag between the multispectral and panchromatic sensors because of technical limitations. In Worldview-3, the time lag between MS1 and MS2 is 0.26 seconds, while that between the multispectral and panchromatic images is 0.13 seconds (Gao et al., 2014). Therefore, when a pansharpening algorithm is applied to images acquired at different times, fringes or artifacts of moving objects appear in the pansharpened image, as shown in Fig. 3.
Fig. 3. An example of spatial dissimilarity caused by the time lag among images
In this study, by applying the pansharpening process of Choi (2011), we analyzed the effect of the time lag between MS1 and MS2 for each pansharpening algorithm. First, the original pansharpened image was generated by fusing the panchromatic and 8-band multispectral images (Fig. 4(a)). In addition, the multispectral image was divided into MS1 and MS2 images based on the time lag between them, and each of the MS1 and MS2 images was pansharpened by using the panchromatic image. Finally, the 8-band pansharpened image was obtained by layer-stacking the two pansharpened images (Fig. 4(b)).
Fig. 4. Pansharpening process according to time lag: (a) 8-band pansharpening process, (b) pansharpening based on MS1 and MS2 division
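The division-based workflow of Fig. 4(b) can be sketched as follows; the band positions assigned to MS1 and MS2 are illustrative assumptions, and pansharpen() stands for any 4-band fusion routine from Section 2.

```python
import numpy as np

# Illustrative 0-based positions within an 8-band WorldView-3 stack
# ordered coastal, blue, green, yellow, red, red edge, NIR1, NIR2.
MS1_BANDS = [1, 2, 4, 6]   # blue, green, red, NIR1
MS2_BANDS = [0, 3, 5, 7]   # coastal, yellow, red edge, NIR2

def pansharpen_by_division(ms_up: np.ndarray, pan: np.ndarray, pansharpen) -> np.ndarray:
    """Pansharpen MS1 and MS2 separately and layer-stack the 8-band result.

    ms_up      : (8, rows, cols) resampled WorldView-3 multispectral bands
    pan        : (rows, cols) panchromatic image
    pansharpen : callable fusing a (4, rows, cols) stack with pan
    """
    fused = np.empty_like(ms_up, dtype=np.float64)
    fused[MS1_BANDS] = pansharpen(ms_up[MS1_BANDS], pan)   # 4-band fusion of MS1
    fused[MS2_BANDS] = pansharpen(ms_up[MS2_BANDS], pan)   # 4-band fusion of MS2
    return fused                                            # layer-stacked 8-band image
```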
4. Study Site and Data
In this study, Worldview-3 satellite imagery obtained from DigitalGlobe was used to evaluate the quality of the pansharpened images. Table 1 describes the data specifications. As shown in Fig. 5, two sites in Korea were selected for the experiments: Beolgyo (site 1), acquired on July 26, 2015, is a complex region with urban and vegetated areas, and Jeongok (site 2), acquired on August 27, 2015, is a vegetated area.
Table 1. Specifications of Worldview-3 satellite imagery
Fig. 5. Experimental data
5. Experimental Results and Discussion
As mentioned in Section 3, the consistency paradigm was used to evaluate the quantitative quality of the pansharpened images. Quantitative assessment was performed by comparing ERGAS, SAM, Q-index and sCC. The MTF gains of the Worldview-3 sensor used for degrading the spatial resolution of the pansharpened images were assumed to be 0.29 and 0.15 for the multispectral and panchromatic images, respectively. In total, six pansharpening algorithms (BDSD, GSA, PRACS, HPF, AWLP and MTF-GLP-CBD) were selected, as described in Section 2. In addition, pansharpening based on the division of the MS1 and MS2 sensors was also applied. Tables 2 and 3 present the quantitative evaluation results based on the quality indices of the pansharpened images.
Table 2. Quantitative results of pansharpened images (Site 1)
Table 3. Quantitative results of pansharpened images (Site 2)
As shown in Tables 2 and 3, the overall pansharpening results of the MRA-based algorithms show better spectral quality than those of the CS-based algorithms. However, the spatial quality showed no specific trend or difference between the MRA- and CS-based algorithms. In addition, the PRACS and HPF methods showed the best spectral quality among the CS- and MRA-based algorithms, respectively, when compared with the other pansharpening algorithms. Regarding spatial quality, BDSD, GSA and HPF showed higher sCC values than the other algorithms. The spectral quality of the pansharpened images obtained from BDSD was poor in our study, which suggests that even a state-of-the-art pansharpening algorithm can generate pansharpened images of low quality. Meanwhile, for the CS-based methods, the spectral quality of the 8-band pansharpening process was better than that of the pansharpening based on MS1 and MS2 division, while the spatial quality showed the reverse trend. In the case of pansharpening based on MS1 and MS2 division, the intensity image contains spatial detail of only MS1 or MS2, whereas the intensity image of the 8-band pansharpening process includes spatial characteristics of both MS1 and MS2 affected by the time lag. Therefore, during the injection of spatial details in the pansharpening based on MS1 and MS2 division, the spatial characteristics shared by the intensity and multispectral images are offset more effectively by the subtraction operation of Eq. (1) than in the 8-band pansharpening process. This means that the pansharpened image of Fig. 4(b) includes only the spatial details of the panchromatic image, while the result of Fig. 4(a) includes some artifacts caused by the time lag between the multispectral bands. On the other hand, with the MRA-based methods, the 8-band pansharpening process and the pansharpening based on the division of MS1 and MS2 afforded similar outcomes; this could be because the spatial characteristics of the multispectral bands do not affect the spatial quality of a pansharpened image produced by an MRA-based method. Therefore, the MRA-based algorithms are independent of the MS1 and MS2 division during the pansharpening process. In particular, the pansharpened images of the MRA-based methods appear to contain mixed spatial details of the panchromatic and multispectral bands. Fig. 6 shows details of the pansharpened images for evaluating the visual and qualitative quality in Beolgyo. Almost all pansharpening results show similar color. However, the images from the BDSD and GSA algorithms are clearer than those from PRACS and the MRA-based algorithms, suggesting that the quantitative analysis of pansharpened images may differ from the visual inspection of a Worldview-3 image, and indicating that the CS-based algorithms are better suited to spatial-information-based applications. The difference in the spatial quality of the algorithms is remarkable when the MS1 and MS2 division is applied. Fig. 7 illustrates details of the pansharpening results. As shown in Fig. 7(b), the pansharpened image obtained by the CS-based method with MS1 and MS2 division, when compared with the other algorithms, does not contain any artifact or blurring in the areas of moving objects; this indicates that pansharpening with MS1 and MS2 division can be effective for image interpretation or feature detection in Worldview-3 imagery.
Fig. 6. Pansharpening result according to each algorithm (R: red, G: green, B: blue)
Fig. 7. Detailed image of pansharpening result according to each algorithm (R: red edge, G: yellow, B: green)
6. Conclusion
This study conducted a quantitative and qualitative comparison of pansharpened images produced by CS- and MRA-based algorithms for Worldview-3 satellite imagery. After comparing various paradigms for estimating pansharpened image quality, the consistency paradigm was selected as the experimental methodology. Thereafter, six pansharpening algorithms were applied to the Worldview-3 images. In addition, to analyze the effect of the time lag between the multispectral sensors, both the original 8-band pansharpening process and the pansharpening method based on MS1 and MS2 division were applied. In the experiments, the qualitative spectral quality of the algorithms was similar, while the quantitative results obtained by the MRA-based algorithms were better than those obtained by the CS-based algorithms. In addition, the images obtained by the CS-based algorithms were sharper in terms of spatial quality than those obtained by the MRA-based algorithms. In particular, the images obtained by the division of MS1 and MS2 have the advantage of sharply depicting moving targets. Therefore, an effective pansharpening algorithm should be selected according to the application of the remote sensing data (e.g., CS-based pansharpening for image interpretation and MRA-based pansharpening for land cover classification) for optimal utilization of Worldview-3 imagery.
Acknowledgements
This work was supported by the research grant of Chungbuk National University in 2015 and the Space Core Technology Development Program through the National Research Foundation of Korea funded by the Ministry of Science, ICT and Future Planning under Grant NRF-2014M1A3A3A03034798.
References
Aiazzi, B., Baronti, S., Lotti, F., and Selva, M. (2009), A comparison between global and context-adaptive pansharpening of multispectral images, IEEE Geoscience and Remote Sensing Letters, Vol. 6, No. 2, pp. 302-306. DOI: 10.1109/LGRS.2008.2012003
Alparone, L., Wald, L., Chanussot, J., Thomas, C., Gamba, P., and Bruce, L.M. (2006), Comparison of pansharpening algorithms: Outcome of the 2006 GRS-S data fusion contest, IEEE Transactions on Geoscience and Remote Sensing, Vol. 45, No. 10, pp. 3012-3021.
Chavez, P.S., Sides, S.C., and Anderson, J.A. (1991), Comparison of three different methods to merge multiresolution and multispectral data: Landsat TM and SPOT panchromatic, Photogrammetric Engineering & Remote Sensing, Vol. 57, No. 3, pp. 295-303.
Choi, J. (2011), A Worldview-2 satellite imagery pansharpening algorithm for minimizing the effects of local displacement, Journal of the Korean Society of Surveying, Geodesy, Photogrammetry, and Cartography, Vol. 29, No. 6, pp. 577-582. (in Korean with English abstract) DOI: 10.7848/ksgpc.2011.29.6.577
Choi, J., Yu, K., and Kim, Y. (2011), A new adaptive component-substitution based satellite image fusion by using partial replacement, IEEE Transactions on Geoscience and Remote Sensing, Vol. 49, No. 1, pp. 295-309. DOI: 10.1109/TGRS.2010.2051674
Garzelli, A., Nencini, F., and Capobianco, L. (2008), Optimal MMSE pan sharpening of very high resolution multispectral images, IEEE Transactions on Geoscience and Remote Sensing, Vol. 46, No. 1, pp. 228-236. DOI: 10.1109/TGRS.2007.907604
Gao, F., Li, B., Xu, Q., and Zhong, C. (2014), Moving vehicle information extraction from single-pass Worldview-2 imagery based on ERGAS-SNS analysis, Remote Sensing, Vol. 6, No. 7, pp. 6500-6523. DOI: 10.3390/rs6076500
Kruse, F.A. and Perry, S.L. (2013), Mineral mapping using simulated Worldview-3 short-wave-infrared imagery, Remote Sensing, Vol. 5, No. 6, pp. 2688-2703. DOI: 10.3390/rs5062688
Laben, C.A. and Brower, B.V. (2000), Process for enhancing the spatial resolution of multispectral imagery using pan-sharpening, U.S. Patent 6011875, Eastman Kodak Company.
Otazu, X., González-Audícana, M., Fors, O., and Núñez, J. (2005), Introduction of sensor spectral response into image fusion methods. Application to wavelet-based methods, IEEE Transactions on Geoscience and Remote Sensing, Vol. 43, No. 10, pp. 2376-2385. DOI: 10.1109/TGRS.2005.856106
Palsson, F., Sveinsson, J.R., Ulfarsson, M.O., and Benediktsson, J.A. (2016), Quantitative quality evaluation of pansharpened imagery: consistency versus synthesis, IEEE Transactions on Geoscience and Remote Sensing, Vol. 54, No. 3, pp. 1247-1259. DOI: 10.1109/TGRS.2015.2476513
Tu, T.M., Huang, P.S., Hung, C.L., and Chang, C.P. (2004), A fast intensity-hue-saturation fusion technique with spectral adjustment for IKONOS imagery, IEEE Geoscience and Remote Sensing Letters, Vol. 1, No. 4, pp. 309-312. DOI: 10.1109/LGRS.2004.834804
Vivone, G., Alparone, L., Chanussot, J., Dalla Mura, M., Garzelli, A., Licciardi, G.A., Restaino, R., and Wald, L. (2015), A critical comparison among pansharpening algorithms, IEEE Transactions on Geoscience and Remote Sensing, Vol. 53, No. 5, pp. 2565-2586. DOI: 10.1109/TGRS.2014.2361734
Wang, Z. and Bovik, A.C. (2002), A universal image quality index, IEEE Signal Processing Letters, Vol. 9, No. 3, pp. 81-84. DOI: 10.1109/97.995823
Zhou, J., Civco, D.L., and Silander, J.A. (1998), A wavelet transform method to merge Landsat TM and SPOT panchromatic data, International Journal of Remote Sensing, Vol. 19, No. 4, pp. 743-757. DOI: 10.1080/014311698215973