The bidimensional empirical mode decomposition (BEMD) algorithm, with its high adaptability, is better suited to multiple-image fusion than traditional image fusion methods. However, its advantages are limited by the end-effect problem, the multiscale integration problem, and the differing numbers of intrinsic mode functions produced when multiple images are decomposed. This study proposes a multiscale self-coordination BEMD algorithm to solve these problems. The algorithm first extends the boundary feature information outward with a support vector machine, which generalizes well, and then suppresses the BEMD end effect with the conventional mirror-extension method of data processing. Coordinating the extreme-value points of the source images helps solve the problem of multiscale information fusion. Results show that the proposed method is better than the wavelet and NSCT methods at retaining the feature information of the source images and the details of the mutation information inherited from them, while significantly improving the signal-to-noise ratio.
1. Introduction
Along with the continuous development of information technology, the application of sensing technology in processing images has greatly improved, and the number of channels available for obtaining target-image resources has rapidly increased. However, all of these image-acquisition sensors and channels have certain advantages along with some limitations. Developing new technologies and methods that can address multiple source images and obtain detailed information on the characteristics of target images therefore remains an important task in this field. Image fusion provides a solution to this problem by combining multiple images into a single image, thereby improving the information content of the resulting image
[1, 2]. Many researchers have proposed various schemes of image fusion in the spatial and transform domains using different fusion rules, such as pixel averaging, weighted averaging, maximum-value selection, region energy, and region variance [1–9].
Based on the process flow and the level at which information is abstracted, image fusion can be divided into three levels: pixel, feature, and decision. Pixel-level image fusion operates on low-level information; thus, the high-precision spatial alignment of the image data and other improvements, such as in image preprocessing, are crucial at this level. Most scholars focus on pixel-level fusion technologies. The traditional weighted-average method of image fusion is simple, but it yields a fused image with high noise: after fusion, the signal-to-noise ratio (SNR) of the image decreases. The splicing trace is obvious when the grayscale differences between the images being fused are significant, which is not conducive to post-processing steps such as image recognition. Image fusion based on neural networks easily integrates multiple images into a single fused image and improves the process flow
[8]
. However, when this method is applied to actual image fusion, issues concerning the network model, the number of layers and nodes, the learning strategies, and others need to be addressed
[9]
. Multiscale image fusion methods can be used to address these problems through multiscale decomposition.
A recent study proposed a scheme of image fusion based on image decomposition using self-fractional Fourier functions
[10]
. In this scheme, the fusion quality of the images is optimized by changing the number of decomposition levels and by applying a transform before the decomposition. Bivariate empirical mode decomposition
[11]
has also been used in image fusion
[12]
.
However, the bidimensional empirical mode decomposition (BEMD) cannot be used when two source images to be fused are complex or when more than two images are to be fused
[13, 14]
. NASA’s Huang introduced the EMD method
[15]
, a nonlinear and nonstationary method of signal processing, to solve this problem. The EMD method depends on the characteristic time scale of the data in decomposing the signal. Given its high adaptability and precision, this method has been applied in many fields, such as seismology
[16]
, mechanical fault diagnosis
[17]
, health
[18]
, biology
[19]
, and marine science
[20]
.
Nunes et al.
[21]
extended the EMD method from 1D to 2D and established the BEMD algorithm; they were among the first scholars to apply this algorithm to image processing. Because of its high adaptability, BEMD has attracted considerable attention since it was proposed and has been applied to many processes, including the compression, denoising
[22
–
24]
, segmentation, scaling
[25]
, and feature extraction of images
[26]
. A recent study has applied BEMD to image fusion
[27
,
28]
. However, in multiscale fusion, this algorithm is limited by the differing numbers of bidimensional intrinsic mode functions (BIMFs) produced for different images. A solution to this problem has not yet been formulated.
The present study proposes a new processing method, outer extension based on support vector machine (SVM) modeling combined with mirror closure [29], as an image processing technique to solve the end-effect problem. Adjacent BIMFs obtained through BEMD decomposition are then recombined into new multiple BIMFs (mBIMFs) [30]. Coordinating the SVM with BEMD for multiscale image fusion generates numbers of BIMFs that address the different fusion problems. Experimental analysis shows that this method can effectively eliminate the end effects found in the BEMD algorithm, and that fusion after BEMD decomposition is improved because the generated mBIMFs address the different problems of multiscale fusion, thereby significantly improving the fused image.
The rest of this paper is organized as follows. Section 2 describes the basic principle of BEMD and the processing of the end effect. Section 3 presents the proposed multiscale coordination BEMD method. Section 4 explains the principle of the proposed multiscale self-coordination of BEMD in image fusion. Section 5 gives the simulation results, and Section 6 ends with some conclusions.
2. BEMD end effect and multiscale self-coordination decomposition
 2.1 Basic principle of BEMD
The EMD proposed in
[15]
is a completely data-driven technique of multiscale decomposition, highly suitable for nonlinear and nonstationary signal processing. EMD decomposes a signal into multiple components called IMFs. The coarsest component is termed the residue
[15
,
31]
. The IMFs of a given signal are extracted through sifting
[15]
.
BEMD is an algorithm based on EMD that extends it from 1D to 2D signal processing. Its basic principle and properties are similar to those of EMD. The decomposition of an image into BIMFs is not a unique process: the number of BIMFs essentially depends on the characteristics of the image itself, and the extremum detection method, the interpolation technique, and the stopping criteria of the iterations result in varying numbers of BIMFs. As such, each image has an infinite number of BIMF sets
[4]
. BEMD extracts the local extremum points of a 2D image signal to accomplish the 2D sifting of BIMFs for image processing. This sifting process is entirely based on the features of the image signal and is thus highly adaptable in multiscale analysis. A BIMF must meet the following two conditions: (1) its local mean during the decomposition must be zero, and (2) its maximum and minimum points must be positive and negative, respectively.
We use a 2D image f(x, y), x = 1, …, M, y = 1, …, N as an example, where M and N are the total numbers of rows and columns in the 2D image, respectively. The basic steps of the BEMD decomposition are summarized as follows [15, 18, 21, 22]:
(1) Image initialization: r_{0}(x, y) = f(x, y) (the residual) and i = 1 (index number of the BIMF).
(2) Extraction of the ith BIMF:
a) Internal initialization: h_{0}(x, y) = r_{i-1}(x, y), j = 1.
b) Identify all the extrema, that is, the local maxima and minima, of h_{j-1}(x, y).
c) Apply the cubic spline interpolation method to the maximum and minimum points identified in step b), and fit the upper and lower envelope surfaces u_{max}(x, y) and u_{min}(x, y).
d) Calculate the mean of the upper and lower envelopes: m_{j}(x, y) = [u_{max}(x, y) + u_{min}(x, y)] / 2.
e) Update h_{j}(x, y) = h_{j-1}(x, y) − m_{j}(x, y) and set j = j + 1.
f) If h_{j}(x, y) satisfies the given standard deviation (SD) stopping criterion of the sifting iteration (typically 0.2 to 0.3), then stop. The SD is computed as SD = Σ_{x=1}^{M} Σ_{y=1}^{N} |h_{j-1}(x, y) − h_{j}(x, y)|² / h_{j-1}²(x, y).
g) Repeat steps b) to f) until SD ≤ ξ, where ξ is an a priori chosen constant; then f_{i}(x, y) = h_{j}(x, y) is the ith BIMF.
(3) Update the residual: r_{i}(x, y) = r_{i-1}(x, y) − f_{i}(x, y).
(4) Repeat steps (2) and (3) with i = i + 1 until the number of extrema in r_{i}(x, y) is less than 2.
After the BEMD decomposition, the original image f(x, y) can be reconstructed using the following equation:

f(x, y) = Σ_{k=1}^{n} c_{k}(x, y) + r_{n}(x, y),

where c_{k}(x, y) is the kth BIMF, and r_{n}(x, y) denotes the final residue image.
The termination of the decomposition is determined by the SD stopping condition. The SD value has a direct relationship to the number of IMFs produced by the BEMD decomposition. In practical applications, normal SD values range from 0.2 to 0.3, within which the BIMFs reflect the details of the original image well.
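The SD stopping value discussed above can be computed directly from two consecutive sifting results; in this sketch, the small eps guarding against division by zero is our own addition.

```python
import numpy as np

def sift_sd(h_prev, h_cur, eps=1e-12):
    """Sifting standard deviation between consecutive results:
    SD = sum over all pixels of |h_{j-1} - h_j|^2 / h_{j-1}^2."""
    return float(np.sum((h_prev - h_cur) ** 2 / (h_prev ** 2 + eps)))
```

Sifting would stop once this value drops below the chosen threshold ξ (typically 0.2 to 0.3).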
In 1D EMD, the residue reflects only the trend information and has little impact on the later analysis of the signal. In 2D BEMD, however, the residue generally retains characteristics or detailed information of the original image. Its effect on later image analysis is obvious and cannot be ignored; thus, its contribution to the composition of the original image should be considered.
 2.2 BEMD end effect and processing
The end effect necessarily exists in BEMD. The actual image signal near the borders is generally weak; hence, the end effect there is particularly serious. Moreover, decomposition is a repeated sifting process, and as the sifting continues, this influence propagates inward and becomes serious, so that even the decomposed BIMFs are distorted. Therefore, the image signal must be processed in BEMD to restrain or eliminate the end effect during sifting. This study combines the advantages of SVM regression and mirror extension to solve the end-effect problem in BEMD. The specific process is described in detail in Section 3.1.
3. Multiscale coordination BEMD decomposition
 3.1 BEMD end-effect processing method based on SVM regression and the outer extension model
 3.1.1 Regression model
Given the training data X = {(x_{1}, y_{1}), …, (x_{l}, y_{l})}, where x_{i} ∈ R^{m} represents an input vector and y_{i} ∈ R represents the corresponding output value, the SVM model is used to obtain the regression function

f(x) = ⟨w, φ(x)⟩ + b, (5)
where ⟨·, ·⟩ represents the inner product, w ∈ R^{k} describes the complexity of the function f(x), φ(·) maps the original data into a high-dimensional feature space, and b ∈ R is the constant offset term. The constraints of Equation (5) are

y_{i} − ⟨w, φ(x_{i})⟩ − b ≤ ε + ξ_{i},  ⟨w, φ(x_{i})⟩ + b − y_{i} ≤ ε + ξ_{i}^{*},  ξ_{i}, ξ_{i}^{*} ≥ 0.
The objective function is

min (1/2)‖w‖² + C Σ_{i=1}^{l} (ξ_{i} + ξ_{i}^{*}), (6)

where ξ_{i} and ξ_{i}^{*} are the slack variables, and ε denotes the upper and lower error bound of the training error y_{i} − ⟨w, φ(x_{i})⟩ − b; this is Vapnik's ε-insensitive loss function. C > 0 is a constant that controls the penalty on samples whose errors exceed ε.
The Lagrange function is used to solve this optimization problem, where α_{i}, α_{i}^{*}, η_{i}, and η_{i}^{*} are the Lagrange multipliers, all greater than or equal to zero under the KKT conditions [29]. Thus, the nonlinear regression problem can be converted into a dual problem. Solving Equation (6) gives
where K(x_{i}, x_{j}) (i, j = 1, 2, …, l) represents the SVM kernel function. Solving this dual problem yields the regression function of Equation (10):

f(x) = Σ_{i=1}^{l} (α_{i} − α_{i}^{*}) K(x_{i}, x) + b. (10)
 3.1.2 Outer extension
Taking the 2D image f(x, y) as a matrix, we can use the SVM regression model to perform the outer extension through the following steps:
(1) Take each row (or column) of the image matrix as a data sample, with the position index as the input x of the SVM regression model and the pixel value as the output f(x). Select the data X = {(x_{1}, y_{1}), …, (x_{l}, y_{l})} as the sample, take the radial basis function as the kernel function, and set the penalty coefficient C to 10. The regression model is then obtained.
(2) Using the regression models trained in step (1) along each column, extrapolate m data points beyond the border, {(x_{l+1}, y_{l+1}), …, (x_{l+m}, y_{l+m})}, where x_{l+m} represents the row index and y_{l+m} represents the corresponding predicted value; this extends the image by m rows.
(3) In the same manner, using the regression models trained along each row, extrapolate n data points beyond the border, {(x_{l+1}, y_{l+1}), …, (x_{l+n}, y_{l+n})}, where x_{l+n} represents the column index and y_{l+n} represents the corresponding predicted value; this extends the image by n columns.
(4) Add the data predicted in steps (2) and (3) to the original matrix. The outer-extended 2D image is thereby obtained.
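The settings of step (1) (RBF kernel, C = 10) map directly onto scikit-learn's SVR. The sketch below extends each column downward by m samples; the function name, the per-column loop, and the choice of border are our illustrative assumptions, not the authors' code.

```python
import numpy as np
from sklearn.svm import SVR

def extend_down(img, m=4):
    """Extrapolate each column m samples beyond the bottom border with an
    RBF-kernel SVR fitted on the (row index -> pixel value) mapping."""
    rows = np.arange(img.shape[0], dtype=float).reshape(-1, 1)
    new_rows = np.arange(img.shape[0], img.shape[0] + m,
                         dtype=float).reshape(-1, 1)
    pred = np.empty((m, img.shape[1]))
    for j in range(img.shape[1]):
        model = SVR(kernel='rbf', C=10.0)  # C = 10 as in step (1)
        model.fit(rows, img[:, j])
        pred[:, j] = model.predict(new_rows)
    return np.vstack([img, pred])
```

Extending the other three borders (and the rows of step (3)) works the same way after flipping or transposing the matrix.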
 3.1.3 Mirror extension for closing the ends
The mirror technique is applied to the outer-extended data to eliminate the end effect. The basic idea may be outlined as follows:
(1) Judge whether the predicted values obtained by extending m rows and n columns contain local extreme points. If a local extreme point is reached, stop the extension; otherwise, continue extending until local extremum points are obtained.
(2) Place a "mirror" at the extreme points obtained in step (1) so that the data form closed sequences, and then reuse them in the BEMD decomposition, thereby preventing the end effect from contaminating the interior. The end-effect problem is thus solved in the process.
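Since the data are reflected back once extremum points are reached, the closure itself behaves like reflective padding; a minimal NumPy sketch, in which the fixed pad width is illustrative only:

```python
import numpy as np

def mirror_close(extended, pad=2):
    """Reflect the borders of the outer-extended image so the data form
    closed sequences around the boundary extrema before sifting."""
    return np.pad(extended, pad_width=pad, mode='reflect')
```

`mode='reflect'` mirrors the data about the edge sample itself, which keeps the boundary extremum as the pivot of the closed sequence.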
 3.2 Basic principle of Multiscale coordination BEMD decomposition
BEMD decomposition requires no predefined basis functions; it is a completely data-driven adaptive decomposition. If a single image is considered, the decomposition results in multiple BIMF images. However, multiscale image fusion involves multiple original images, so the BIMF images are associated with the decomposed results of all the images to be fused. In decomposing each image, BEMD does not coordinate with the other images: the characteristics and trends of the corresponding BIMFs can differ greatly between the images, and if these are fused directly, the quality of the final image may be poor. In other words, when this method is used for image fusion, the necessary coordination must be performed between the BEMD decompositions of the individual images.
The maxima and minima sets of the multiple images are used in the coordination process of the BEMD decomposition.
Consider two images X and Y and the extreme points extracted as in the BEMD decomposition of Section 2.1. Taking the maxima as an example, assume that the maximum points of the original image X are {X(x_{1}), …, X(x_{s})} and that the maximum points of the original image Y are {Y(y_{1}), …, Y(y_{t})}. The positions of these maxima in the two images are merged into a common position set {z_{1}, …, z_{v}}.
As such, the original images yield the coordinated value sets {X(z_{1}), …, X(z_{v})} and {Y(z_{1}), …, Y(z_{v})} at the common coordinates. The coordinated minimum points of X and Y can be obtained in the same way. The coordinated maxima and minima are then used in the envelope interpolation of the BEMD decomposition of the original images X and Y for the subsequent operations.
After the extreme points of the original images are coordinated, the two images are matched through the same adaptive basis, so that a common adaptive basis function is obtained. The BIMFs obtained after the adaptive BEMD decomposition of the multiple images then show consistent physical characteristics and trends. Because multiscale image fusion requires the fused images to have the same or similar physical characteristics at each scale, the multiscale fusion treatment is thereby significantly improved.
 3.3 Coordinating the treatment of the BEMD decomposition
The numbers of BIMFs produced by the BEMD decomposition of two images are not necessarily the same. Thus, the corresponding BIMFs may differ in frequency, and the fusion effect is often unsatisfactory if the BIMFs are fused directly. To address this shortcoming, we propose the mBIMF, in which adjacent BIMFs of the decomposition are reconstructed into a new BIMF:

mBIMF^{j} = BIMF_{2j-1} + BIMF_{2j}.
An mBIMF is composed of a plurality of BIMFs. The BIMFs run from high to low frequency; thus, the mBIMFs are also distributed from high to low frequency. The first and second BIMFs are added to constitute mBIMF^{1}, with the highest frequency; the third and fourth BIMFs are added to constitute the next high-frequency component mBIMF^{2}, and so on. The original image can then be expressed as

f(x, y) = Σ_{j=1}^{J} mBIMF^{j}(x, y) + res(x, y),
where f(x, y) represents the original image, J is the number of mBIMF divisions, and res represents the residual amount.
Redefining the mBIMFs in this manner solves the problem of the inconsistent numbers of BIMFs in the original decompositions. In addition, the texture characteristics of an mBIMF are better than those of a single BIMF, and the mBIMF still meets the defining conditions of a BIMF, with good scale and texture characteristics. Its flexible structure fulfills the requirements of multiscale image fusion, thereby producing satisfactory results in the fusion process.
4. Principle of the multiscale self-coordination of BEMD in image fusion
The multiscale BEMD decomposition for image fusion includes a self-coordination phase and an integration phase. In the coordination phase, the extreme points extracted from the images to be decomposed are coordinated as in Section 3.2 to ensure the consistency of the characteristics and overall trends of the BIMFs of the final decomposition. In the integration phase, the problem of the inconsistent numbers of BIMFs is solved by reconstructing mBIMFs before fusion. The integration of the image feature information is thereby strengthened, and a clear fused image is ultimately obtained. The basic principle is shown in Fig. 1.
Fig. 1. Flowchart of the BEMD decomposition in multiscale image fusion
The basic steps of the multiscale selfcoordination of the BEMD algorithm in image fusion are as follows:
(1) Let X and Y be the images to be fused. In the BEMD decomposition, the extreme points of the two source images are processed through coordination, which determines the numbers of BIMF components and the residues.
(2) When the two decomposed images do not have the same number of BIMFs after BEMD, the adjacent BIMF components are reconstructed into new mBIMF components, and a common number n is set so that both images decompose into n components. The original images can then be expressed as
(3) The n mBIMF components reconstructed in step (2) occupy the same scale spaces in the two images and are fused scale by scale through weighted linear fusion. The fusion rule is as follows:

F_{j}(x, y) = α_{Xj} mBIMF_{X}^{j}(x, y) + α_{Yj} mBIMF_{Y}^{j}(x, y),
where F_{j}(x, y) denotes the jth fused component, and α_{Xj} and α_{Yj} respectively denote the weighting factors of the fused images X and Y for each mode function.
The reconstruction yields the different mBIMF components for the linear-weighted fusion. The key lies in how the weights reflect the proportion of the minutiae of the original images carried by each component. In view of this, we propose a linear-weighted fusion method in which the weights reflect the information content of the components: the information entropy of each reconstructed mBIMF image is calculated, the entropies of the corresponding scale spaces are compared, and the weights of the corresponding frequency bands are obtained accordingly. The information entropy is calculated by
where P is the probability value of each pixel gray level, and H is the entropy.
The corresponding weight of each mBIMF component is
This formula can be used to calculate the weighting coefficients that correspond to the mBIMF fusion components.
(4) Apply the fusion method of step (3) to the fusion of the residues.
(5) The fused mBIMFs and the fused residual res are combined through the inverse transform of Equation (4) to obtain the final fused image.
5. Analysis of Examples
 5.1 Experiment 1
To demonstrate the effectiveness of the proposed method, Fig. 2 shows two differently focused images of an alarm clock and the ideal reference image. Fig. 2(a), focused on the right side of the alarm clock, is input image 1; Fig. 2(b), focused on the left side, is input image 2; Fig. 2(c) is the ideal artificially synthesized image. Fig. 3 shows the fused images produced by the proposed, NSCT, and wavelet methods.
Fig. 2. Variously focused alarm-clock images and the fused result
Fig. 3. Different focuses of an alarm clock fused under the multiscale coordination BEMD method and under wavelet image fusion
Table 1 shows that, after coordination, the peak signal-to-noise ratio (PSNR) of the multiscale BEMD image fusion (33.25) is significantly better than those of wavelet image fusion (only 31.622) and NSCT image fusion (only 30.298). Fig. 3 shows that applying the multiscale coordination BEMD algorithm in image fusion results in a fused image close to the ideal man-made fused image.
Table 1. PSNR and entropy comparison of the fused images obtained from the three methods
 5.2 Experiment 2
Fig. 4(a) shows input image 1, blurred in one region. Fig. 4(b) shows input image 2, blurred around its periphery. Fig. 4(c) is the ideal artificially synthesized image. The images fused using the proposed method and the wavelet method are shown in Fig. 5.
Fig. 4. Images of a pepper blurred in different regions and the artificially synthesized ideal fused image
Fig. 5. Different fuzzy regions of the pepper image fused under multiscale self-coordination BEMD versus the wavelet method
As shown in Table 1, the PSNR of the multiscale BEMD image fusion after the coordination algorithm is 38.254, whereas those of the wavelet and NSCT image fusions are 35.653 and 35.372, respectively. As shown in Fig. 5, the proposed method produces a fused image close to the ideal fused image.
All three methods fuse feature information from the source images into the resulting image. However, the fused image obtained by the proposed method not only inherits the feature information of the source images well but also retains their details and mutation information. The wavelet and NSCT fusion methods inherit some characteristics of the source images but cannot keep some of the fine details and mutation information, which is not conducive to the post-processing and analysis of the image. These results indicate the superiority of the proposed method.
 5.3 Experiment 3
To further confirm the effectiveness of the proposed approach in terms of image fusion, this experiment selects a group of medical images: an original CT image and an MRI image (shown in Fig. 6(a) and (b)). Fig. 7 shows the images fused using the proposed, wavelet, and NSCT methods.
Fig. 6. Input images
Fig. 7. Results of the fused CT and MRI images
The data in Table 1 show that the PSNR of the fused image processed by the BEMD multiscale coordination algorithm is 27.398, whereas the PSNRs of the fused images obtained by the wavelet and NSCT methods are 26.465 and 25.796, respectively; their fused image quality is thus significantly lower than that of the proposed method.
All three methods fuse the feature information of the source images into the resulting image. However, the image contrast of the NSCT fusion result clearly declines, which is unsatisfactory, and the wavelet method suffers from the same contrast decline; the proposed method avoids this problem.
The fused images obtained in this study not only inherit the characteristic information of the source images but also retain their details. Meanwhile, the fused images retain strong edge characteristics.
 5.4 Analysis and Discussion
To illustrate the effectiveness of the proposed method in terms of image fusion and to examine the fusion performance objectively, we calculated the PSNR to measure the quality of the fused image:

PSNR = 10 log_{10}(R² / MSE),

where R is the maximum gray level of the image, and MSE is the mean square error, calculated as

MSE = (1 / (m × n)) Σ_{x=1}^{m} Σ_{y=1}^{n} [f(x, y) − f′(x, y)]²,

where f(x, y) represents the original image, f′(x, y) represents the fused image, and m × n represents the image size.
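The PSNR metric above, as a direct sketch (R = 255 for 8-bit images):

```python
import numpy as np

def psnr(original, fused, R=255.0):
    """Peak signal-to-noise ratio in dB between the reference image
    and the fused image, PSNR = 10 log10(R^2 / MSE)."""
    mse = np.mean((original.astype(float) - fused.astype(float)) ** 2)
    return float(10.0 * np.log10(R ** 2 / mse))
```

Note that the value diverges for identical images (MSE = 0), so in practice the metric is only evaluated against a non-identical reference.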
In addition, we use the information entropy to evaluate the quality of the fused image. According to Shannon theory, the entropy is defined as

H = − Σ_{i=0}^{l−1} P(i) log_{2} P(i),

where l is the total number of gray levels of the image, and P(i) = N_{i}/N is the ratio of the number of pixels with gray value i to the total number of pixels. The larger the entropy of the fused image, the more information-rich the fused image, and the better the image fusion.
As shown in Table 1 and Figs. 2 to 7, using the proposed method in source-image fusion leads to satisfactory results. The proposed method retains the image feature information and the detailed mutation information of the source images. In addition, the PSNR and entropy of the proposed method are superior to those of the wavelet and NSCT image fusion methods. The proposed fusion method is completely data-driven through multiscale coordination, whereas the two other fusion methods require additional issues to be considered and tend to ignore image detail. In a word, the proposed image fusion method shows a significant adaptive capacity, which indicates that the proposed 2D empirical mode decomposition method is suitable for multiscale image fusion.
6. Conclusion
This study developed a multiscale BEMD image fusion method based on self-coordination analysis. The fused image obtained by this method not only retains the feature information of the source images but also inherits their mutation details.
The BEMD multiscale image fusion algorithm is based on the idea of self-coordination. BEMD is a completely data-driven decomposition and is thus more adaptive than the Fourier and wavelet decompositions. Therefore, BEMD is an adaptive method of image decomposition, particularly suitable for 2D nonlinear and nonstationary data processing. The results of the different image-fusion experiments show that the proposed self-coordinating multiscale BEMD method performs image fusion satisfactorily.
Multiscale self-coordination of BEMD is still rarely used; future studies should conduct in-depth investigations on the development of this algorithm.
Acknowledgements
This work is supported by the Fundamental Research Funds for the Central Universities of China.
BIO
FengPing An received the bachelor degree in information management and information system from Hefei University, Hefei, China, in 2008. In 2011, he received M.S. degree in Information Management and Information System from Hebei University of Engineering, Handan, China. He is currently pursuing the Ph.D. degree with the School of computer and Communication Engineering, Beijing University of Science and Technology, Beijing, China. His current research interests include big data, image processing, and information security.
XianWei Zhou received the bachelor degree in metal materials from Southwest Jiaotong University, Chengdu, China, in 1986. In 1992, he received M.S. degree in applied mathematics from Zhengzhou University, Zhengzhou, China. He received the Ph.D degree in computer engineering from Southwest Jiaotong University, Chengdu, China, in 1992. From 1992 to 2001, he was a Researcher at Zhengzhou People's Liberation Army Air Defense Forces Institute. From 2001, he has been a Chair Professor at Beijing University of Science and Technology. His current research interests include computer cryptography, big data, image processing, image compression, and data structures.
DaChao Lin received the bachelor degree in metal material from Kunming Institute of Technology, Kunming, China, in 1983. In 1986, he received M.S. degree in machinery manufacturing from Xi'an Jiaotong University, Xi’an, China. He received the Ph.D degree in weapons systems and application engineering from Beijing Institute of Technology, Beijing, China, in 2001. From 1986 to 1998, he was a Researcher at Kunming Institute of Technology. From 2001, he has been a Chair Professor at Beijing University of Science and Technology. His current research interests include microseismic, machine learning, big data, image processing and signal processing.
References

Blum R.S., Liu Z., Multi-Sensor Image Fusion and Its Applications, Taylor and Francis, 2005.
Matsopoulos G.K., Marshall S., Brunt J., "Multiresolution morphological fusion of MR and CT images of the human brain," IEE Proceedings - Vision, Image and Signal Processing, 141(3), pp. 137-142, 1994. DOI: 10.1049/ip-vis:19941184
Nunez J., Otazu X., Fors O., "Multiresolution-based image fusion with additive wavelet decomposition," IEEE Transactions on Geoscience and Remote Sensing, 37(3), pp. 1204-1211, 1999. DOI: 10.1109/36.763274
Nguyen T.-S., Chang C.-C., Chung T.-F., "A tamper-detection scheme for BTC-compressed images with high-quality images," KSII Transactions on Internet and Information Systems, 8(6), pp. 2005-2021, 2014. DOI: 10.3837/tiis.2014.06.011
Yin S., Cao L., Ling Y., "One color contrast enhanced infrared and visible image fusion method," Infrared Physics & Technology, 53(2), pp. 146-150, 2010. DOI: 10.1016/j.infrared.2009.10.007
Liu Z., Liu C., "Fusion of color local spatial and global frequency information for face recognition," Pattern Recognition, 43(8), pp. 2882-2890, 2010. DOI: 10.1016/j.patcog.2010.03.003
Sharma K.K., Sharma M., "Image fusion based on image decomposition using self-fractional Fourier functions," Signal, Image and Video Processing, 8(7), pp. 1335-1344, 2014. DOI: 10.1007/s11760-012-0363-8
Rilling G., Flandrin P., Goncalves P., Lilly J.M., "Bivariate empirical mode decomposition," IEEE Signal Processing Letters, 14(12), pp. 936-939, 2007. DOI: 10.1109/LSP.2007.904710
Rehman N., Looney D., Rutkowski T.M., Mandic D.P., "Bivariate EMD-based image fusion," in Proc. IEEE/SP 15th Workshop on Statistical Signal Processing (SSP '09), pp. 57-60, 2009.
Ahmed M.U., Mandic D.P., "Image fusion based on fast and adaptive bidimensional empirical mode decomposition," in Proc. 13th Conference on Information Fusion, pp. 1-, 2010.
Sharma J.B., Sharma K.K., Sahula V., "Digital image dual watermarking using self-fractional Fourier functions, bivariate empirical mode decomposition and error correcting code," Journal of Optics, 42(3), pp. 214-227, 2013. DOI: 10.1007/s12596-013-0125-1
Huang N.E., Shen Z., Long S.R., "The empirical mode decomposition and the Hilbert spectrum for nonlinear and non-stationary time series analysis," Proceedings of the Royal Society of London A, 454, pp. 903-995, 1998. DOI: 10.1098/rspa.1998.0193
Zhang R.R., Ma S., Hartzell S., "Signatures of the seismic source in EMD-based characterization of the 1994 Northridge, California, earthquake recordings," Bulletin of the Seismological Society of America, 93(1), pp. 501-518, 2003. DOI: 10.1785/0120010285
Liu B., Riemenschneider S., Xu Y., "Gearbox fault diagnosis using empirical mode decomposition and Hilbert spectrum," Mechanical Systems and Signal Processing, 20(3), pp. 718-734, 2006. DOI: 10.1016/j.ymssp.2005.02.003
Echeverria J.C., "Application of empirical mode decomposition to heart rate variability analysis," Medical & Biological Engineering & Computing, 39(4), pp. 471-479, 2001. DOI: 10.1007/BF02345370
Huang W., Shen Z., Huang N.E., "Use of intrinsic modes in biology: examples of indicial response of pulmonary blood pressure to step hypoxia," Proceedings of the National Academy of Sciences USA, 95(22), pp. 12766-12771, 1998. DOI: 10.1073/pnas.95.22.12766
Bonato P., Ceravolo R., De Stefano A., Molinari F., "Use of cross-time-frequency estimators for structural identification in non-stationary conditions and under unknown excitation," Journal of Sound and Vibration, 237(5), pp. 775-791, 2000. DOI: 10.1006/jsvi.2000.3097
Nunes J.C., Bouaoune Y., Delechelle E., Niang O., Bunel Ph., "Image analysis by bidimensional empirical mode decomposition," Image and Vision Computing, 21(12), pp. 1019-1026, 2003. DOI: 10.1016/S0262-8856(03)00094-5
Linderhed A., "2D empirical mode decompositions in the spirit of image compression," Proceedings of SPIE, vol. 4738, pp. 25-33, 2002.
He L., Wang H., "Spatial-variant image filtering based on bidimensional empirical mode decomposition," in Proc. 18th International Conference on Pattern Recognition (ICPR 2006), vol. 2, pp. 1196-1199, 2006.
Pun C.M., Lee M.C., "Rotation-invariant texture classification using a two-stage wavelet packet feature approach," IEE Proceedings - Vision, Image and Signal Processing, pp. 422-428, 2001.
Linderhed A., "Adaptive Image Compression with Wavelet Packets and Empirical Mode Decomposition," PhD thesis, Linköping University, 2004.
Lee J.-C., Huang P.S., Chiang C.-S., Tu T., Chang C.-P., "An empirical mode decomposition approach for iris recognition," in Proc. IEEE International Conference on Image Processing, pp. 289-292, 2006.
Hariharan H., Gribok A., Abidi M.A., Koschan A., "Image fusion and enhancement via empirical mode decomposition," Journal of Pattern Recognition Research, pp. 16-32, 2006. DOI: 10.13176/11.6
Looney D., Mandic D.P., "Multiscale image fusion using complex extensions of EMD," IEEE Transactions on Signal Processing, 57(4), pp. 1626-1630, 2009. DOI: 10.1109/TSP.2008.2011836
Vapnik V.N., The Nature of Statistical Learning Theory, Springer, 1995.
Feng K., Zhang X., Li X., "A novel method of medical image fusion based on bidimensional empirical mode decomposition," Journal of Convergence Information Technology, 6(12), pp. 84-91, 2011. DOI: 10.4156/jcit.vol6.issue12.11
Wu Z., Huang N.E., "A study of the characteristics of white noise using the empirical mode decomposition method," Proceedings of the Royal Society of London A, 460, pp. 1597-1611, 2004. DOI: 10.1098/rspa.2003.1221