A water wall system is one of the most important components of a boiler in a thermal power plant, and it is a nonlinear multi-input multi-output (MIMO) system with 6 inputs and 3 outputs. Three models are developed and compared for the controller design: a linear model, a multilayer feedforward neural network (MFNN) model and an Echo State Network (ESN) model. First, the linear model is developed by linearizing a given nonlinear model and is analyzed as a function of the operating point. Second, the MFNN and the ESN are developed using training data from the nonlinear model. The three models are validated using Matlab with nonlinear input-output data that was not used during training.
1. Introduction
Thermal power plants supply approximately 65% of the world’s electric power [1]. A water wall system consists of a collection of metal tubes in the furnace of a thermal power plant. Recirculating water in the water wall is heated by combustion energy, and its phase changes from water to steam. Therefore, it is very important to properly model, control and analyze the water wall system in order to manage the entire boiler-turbine system [2].
Several principles, including the energy balance, mass balance and thermal equilibrium, can be used to explain the dynamics of the water wall system [3]. As a result, the water wall system is modeled as a dynamic nonlinear multi-input multi-output (MIMO) system. However, the controller for a nonlinear system is usually designed using a linear model based on a particular operating point. Therefore, the degree of nonlinearity of a water wall system needs to be investigated to determine whether a linear control system can be adequately applied.
If the object system has severe nonlinearity or a wide operating range, a linear model is limited in its ability to describe the dynamics of the object system [4]. An alternative for this kind of nonlinear control problem is to implement a neural network for the modeling and controller design [5]. Since it is difficult in practice to develop a reliable mathematical model of the nonlinear dynamics, the neural network is typically trained on measured input-output data. Therefore, a mathematical nonlinear dynamic model is not necessary for this approach.
The multilayer feedforward neural network (MFNN) has been successfully applied to model static nonlinear systems [6]. However, it requires a number of tapped delays of the output neurons as input signals for dynamic systems, which increases the number of input neurons and the training time. Recurrent neural networks (RNNs) with feedback connections are promising approaches that better represent dynamic systems, including Elman networks [7] and diagonal recurrent neural networks (DRNNs) [8]. However, training recurrent neural networks is relatively more complex than training an MFNN [9].
Recently, the echo state network (ESN), a special kind of three-layer RNN, was proposed by Jaeger [10]. The basic idea is to use a large “dynamic reservoir” to supply rich dynamics from which the desired output is combined. One of the most attractive features of the ESN is that it can be trained in a one-shot fashion, without the repetitive passes through the training set that are usually required to train other RNNs [11].
In [12], Jaeger and others applied the ESN to optimization problems and demonstrated simple applications. Lin and others used the ESN to predict stock prices and to suggest stock trades [13]. Ishii and others used the ESN to identify the yawing acceleration of an underwater robot [14]. Dai and others used the ESN to predict the harmonic currents from nonlinear loads [15]. Pan and Wang proposed a model predictive control based on recurrent neural networks, where they used an ESN to identify an unknown nonlinear dynamic system [16].
In this paper, three models, including a linearized model, an MFNN model and a nonlinear ESN model, are developed and compared from the point of view of controller design for a practical water wall system in a 600 MW oil drum boiler-turbine system. The three models are developed and compared using Matlab on a personal computer. First, we introduce the nonlinear water wall model with the energy balance, mass balance and thermal equilibrium equations. Then, the linear model is developed with a Taylor series expansion, and the changes in the pole and steady-state gain with the operating point are introduced. To train the MFNN and the ESN, Pseudo Random Binary Noise Signal (PRBNS) inputs are applied to the nonlinear water wall model. After training the MFNN and ESN, the performance of the linear model, MFNN and ESN is evaluated, presented and discussed.
2. Water Wall Modeling
 2.1 A nonlinear water wall model
In this paper, we consider a water wall model in a 600 MW oil drum boiler-turbine system [3]. The water wall is located between the recirculating pump and the drum in the boiler. The circulating water in the water wall is heated by combustion in the furnace, its phase changes from water to steam, and it is then poured into the drum.
We assume the nonlinear water wall model used by Usoro [3]. The mass balance equation, energy equilibrium equation, and thermal equilibrium equation can be defined according to the boiler structure and physical principles. The description of the water wall system is as follows:
Mass balance equation is given by:
where W_{wwo} is the mass flow at the water wall outlet, W_{rw} is the mass flow of recirculating water, and K_{wrps} is the mass flow of the recirculating pump. According to [3], K_{wrps} accounts for recirculating pump leakages and seal injection; it is small and may be neglected.
Energy equilibrium equation is given by:
where M_{wwme} is the effective mass of the water wall metal, K_{swwm} is the specific heat of the water wall metal, T_{wwm} is the temperature of the water wall metal, Q_{wwgm} is the heat transfer rate from gas to metal, and Q_{wwmw} is the heat transfer rate from metal to water.
where K_{mwwm} is the mass of the water wall metal, K_{vww} is the volume of the water wall, R_{drw} is the density of the drum water, and H_{drw} is the enthalpy of the drum water.
where K_{uwwmw} is a constant with dimension [W/K^{3}], and T_{drs} is the temperature of the drum steam.
Thermal equilibrium equation of the circulating water is given by:
where H_{rpo} is the enthalpy at the recirculating pump outlet, and H_{wwo} is the enthalpy at the water wall outlet.
The following constant values are used: K_{mwwm} = 1,063,000, K_{vww} = 2318.61, K_{swwm} = 0.11, and K_{uwwmw} = 173.5205.
For (1)-(5), we define the inputs, the outputs, and the states as follows,
Then, the nonlinear model of the water wall system is represented by the following equation,
From the above equations, we notice that y_{2} and y_{3} of the water wall system have severe nonlinearity, while y_{1} is u_{1}, which is a simple linear output.
 2.2 Effect of the operating point on the linear model
The nonlinear model, given by (9)-(12), is linearized using the first-order Taylor approximation at an operating point. We generalize the operating point as follows:
Operating point = [u_{10}, u_{20}, u_{30}, u_{40}, u_{50}, u_{60}, x_{0}, y_{10}, y_{20}, y_{30}]
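When analytic derivatives are inconvenient, this Taylor linearization can also be reproduced numerically. Below is a minimal sketch, in Python/NumPy rather than the paper's Matlab, that builds the A and B matrices of a linear model by central finite differences around an operating point; the one-state dynamics `f` used to exercise the routine are hypothetical, not the water wall equations (9)-(12).

```python
import numpy as np

def linearize(f, x0, u0, eps=1e-6):
    """Numerically build dx/dt ~ A (x - x0) + B (u - u0) around (x0, u0)."""
    x0, u0 = np.atleast_1d(x0).astype(float), np.atleast_1d(u0).astype(float)
    n, m = x0.size, u0.size
    A, B = np.zeros((n, n)), np.zeros((n, m))
    for i in range(n):                      # central differences in the states
        dx = np.zeros(n); dx[i] = eps
        A[:, i] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2 * eps)
    for j in range(m):                      # central differences in the inputs
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x0, u0 + du) - f(x0, u0 - du)) / (2 * eps)
    return A, B

# Hypothetical scalar nonlinear dynamics, only to exercise the routine.
f = lambda x, u: np.atleast_1d(-0.4 * x[0] ** 2 + u[0])
A, B = linearize(f, x0=[1.0], u0=[0.4])     # df/dx = -0.8 at x0 = 1, df/du = 1
```

Repeating the call at each power level gives the family of linear models compared below.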
As an example, the operating point at a power output of 400 MW is shown in Table 1, where the steady-state values of the 6 inputs, 3 outputs, and the state can be observed. When all the variables are defined as deviations from the operating values in Table 1, the linear model at 400 MW is given as,
The operating point of the water wall
We notice that (13) is a simple first-order system with a pole at s = −0.4129, and the overall system is stable without an oscillation mode. The transfer function matrix of (13)-(16) for the 6-input 3-output system is given by
where G_{ij} is the transfer function from the jth input to the ith output. According to (13) and (17), the linear model at 400 MW is a first-order system with a pole in the left half s-plane.
To analyze the nonlinear model, the above linearization is performed as a function of the electrical power output. Then, the linear models generated at each operating point are compared in terms of the pole and the steady-state gain. Fig. 1 shows the movement of the pole with the operating point. The pole at 100 MW is at −0.2335 and moves to the left as the power increases; the pole at 600 MW is at −0.6084. This means that the transient becomes several times shorter as the load demand increases.
Movement of the pole according to operating point
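For a first-order system the pole translates directly into a time constant τ = 1/|p|, which makes the trend in Fig. 1 concrete. A small Python sketch using only the pole values quoted above:

```python
# First-order time constant tau = 1/|pole| for the poles reported in the text.
poles = {100: -0.2335, 400: -0.4129, 600: -0.6084}   # power [MW] -> pole [1/sec]
tau = {mw: 1.0 / abs(p) for mw, p in poles.items()}  # 100 MW: ~4.28 s, 600 MW: ~1.64 s
ratio = tau[100] / tau[600]                          # transient ~2.6x faster at 600 MW
```

A linear controller tuned for the slow 100 MW transient therefore faces a plant roughly 2.6 times faster at full load.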
Fig. 2 shows the steady-state gain changes of y_{2} and y_{3} as a function of the operating point from 300 [MW] to 600 [MW], normalized by the gain at 300 [MW]. The gain for y_{1} is not represented because it has no dynamics. In Fig. 2, the steady-state gain changes significantly; for example, the magnitude of the steady-state gain of G_{21} at 600 [MW] increases to three times that at 300 [MW]. From Figs. 1 and 2, even with a good nonlinear water wall model, suitable performance is hard to achieve over a wide operating range using a conventional linear control technique.
The change in the steady-state gain for y_{2} and y_{3} with the operating point
3. Two Neural Network Models
 3.1 PRBNS training data
As an alternative to conventional linear modeling and control, neural networks have been extensively applied to nonlinear modeling and controller design [5]. Due to their ability to learn nonlinear functions, neural networks do not need a mathematical model of the object system. Instead, the model is developed from the given input-output data. Therefore, a suitable selection of the training data is very important.
To identify a dynamic system, the Pseudo Random Binary Noise Signal (PRBNS) is a popular input signal used to excite the system over a wide frequency range [17]. To obtain training data for the nonlinear water wall system considered in this paper, PRBNS signals for the six inputs are generated for 4000 [sec] with a 0.1 [sec] sampling time.
Fig. 3 shows the generated PRBNS inputs, and Fig. 4 shows the corresponding outputs of the nonlinear water wall system in (9)-(12). In these figures, the first 3000 [sec] of data are used to train the neural networks, and the latter 1000 [sec] are left for validation. That is, the data between 3000 [sec] and 4000 [sec] are not used for training and are instead used to evaluate the trained neural network models.
PRBNS input for nonlinear water wall system
Output of PRBNS input for nonlinear water wall system
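A PRBNS input of the kind described above can be generated by holding a random binary level for a random number of samples. The Python sketch below is a generic illustration; the amplitude levels and hold-time range are placeholders, not the values used for Figs. 3 and 4.

```python
import numpy as np

def prbns(n_samples, lo, hi, min_hold, max_hold, rng):
    """Pseudo-random binary noise: switch between two levels at random intervals."""
    u = np.empty(n_samples)
    i = 0
    while i < n_samples:
        hold = rng.integers(min_hold, max_hold + 1)  # samples to keep this level
        u[i:i + hold] = rng.choice([lo, hi])         # pick one of the two levels
        i += hold
    return u

rng = np.random.default_rng(0)
# 4000 s at a 0.1 s sampling time -> 40,000 samples per channel, six channels.
U = np.stack([prbns(40_000, -1.0, 1.0, 50, 500, rng) for _ in range(6)])
```

Each row of `U` would then drive one input of the nonlinear model to produce the corresponding training output.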
 3.2 Multilayer feedforward neural network model
The multilayer feedforward neural network (MFNN) is a standard neural network [6]. It has been extensively applied with the “error backpropagation” training algorithm to various nonlinear modeling and control problems [5].
In this paper, the MFNN is first used to model the nonlinear water wall system. To train the MFNN, the training data is rearranged to reflect the water wall system of (9)-(12) as follows,
where n is the discrete time step, U(n) = [u_{1}(n), ···, u_{6}(n)], Y(n) = [y_{1}(n), y_{2}(n), y_{3}(n)] and Y(n+1) = [y_{1}(n+1), y_{2}(n+1), y_{3}(n+1)]. The MFNN used in this paper is a three-layer feedforward neural network with an input layer, a hidden layer and an output layer. The inputs of the MFNN are U(n) and Y(n), so the number of input nodes is 9. The output of the MFNN is Y(n+1), so the number of output nodes is 3.
The purpose of the training is to minimize the total squared error between the training output data and the MFNN output data. This is an unconstrained optimization problem with the error function defined as follows,
where W represents all the weights, Y_{MFNN} represents the output from the MFNN, Y represents the output of the training data, N represents the number of training data, which is 30,000, and n is the discrete time step. To account for the different range of each output, each output is normalized in Eq. (19). The optimization problem (19) is solved using the backpropagation algorithm [18]. The Matlab toolbox is used to train the MFNN in this paper.
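To make the training set-up of (18)-(19) concrete, the following sketch trains a three-layer feedforward network with batch gradient descent on the squared error, using Python/NumPy instead of the Matlab toolbox used in the paper. The 9-50-3 layer sizes follow the text; the data, learning rate and epoch count are placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hid, n_out = 9, 50, 3            # [U(n), Y(n)] -> Y(n+1), 50 hidden neurons
W1 = rng.normal(0, 0.1, (n_hid, n_in)); b1 = np.zeros((n_hid, 1))
W2 = rng.normal(0, 0.1, (n_out, n_hid)); b2 = np.zeros((n_out, 1))

X = rng.uniform(-1, 1, (n_in, 500))      # placeholder inputs (columns are samples)
T = rng.uniform(-1, 1, (n_out, 500))     # placeholder normalized targets

def mse():
    return np.mean((W2 @ np.tanh(W1 @ X + b1) + b2 - T) ** 2)

E0, lr = mse(), 0.01
for _ in range(200):                     # batch gradient descent on the squared error
    H = np.tanh(W1 @ X + b1)             # hidden layer activations
    err = (W2 @ H + b2) - T              # output error
    gW2 = err @ H.T / X.shape[1]
    gH = (W2.T @ err) * (1 - H ** 2)     # backpropagate through tanh
    gW1 = gH @ X.T / X.shape[1]
    W2 -= lr * gW2; b2 -= lr * err.mean(axis=1, keepdims=True)
    W1 -= lr * gW1; b1 -= lr * gH.mean(axis=1, keepdims=True)

E = mse()                                # squared error after training
```

The iterative passes over the data in this loop are exactly the cost that the ESN's one-shot training, discussed next, avoids.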
One of the most important parameters of the MFNN is the number of hidden neurons. Usually, a larger number of hidden neurons decreases the error but increases the training time of the MFNN. Therefore, some compromise is necessary. In this paper, different numbers of hidden neurons are independently tested. Fig. 5 shows the E_{MFNN} and the corresponding training time for 10 different numbers of hidden neurons, from 10 to 100. From Fig. 5, considering the increase in training time, more than 50 hidden neurons are redundant for decreasing the error. Therefore, in this paper, the number of hidden neurons of the MFNN is selected to be 50 for the nonlinear water wall model, which gives an E_{MFNN} of 3.00×10^{−6} and a training time of 3853.6 [sec].
Training time of MFNN and E_{MFNN} as a function of hidden neuron number
 3.3 Echo state network model
The ESN is a special form of RNN with three layers: an input layer, a hidden layer and an output layer. The significant characteristic of the ESN is a hidden layer with a large number of neurons that are sparsely and randomly interconnected and/or self-connected, which is meant to imitate a biological brain. This hidden layer is commonly referred to as a “dynamic reservoir” that can be excited by connecting it with the input units and/or the feedback output units. A more detailed explanation of the ESN was given by Jaeger [10]. ESNs have recently been applied successfully to dynamic system identification and control [11, 19-21].
In this paper, the ESN is used as the second neural network model to describe the nonlinear dynamics of the water wall system. Fig. 6 shows the structure of the ESN used in this paper. In the figure, the ESN has 6 inputs and 3 outputs, the same as the water wall system. The connection weights are divided into 4 categories, W^{in}, W^{dr}, W^{fb} and W^{out}, which are the input weight matrix, internal weight matrix, output feedback weight matrix and output weight matrix, respectively.
In this figure, W^{in} is represented with solid lines from the input nodes to the reservoir nodes; W^{dr} with solid lines among the reservoir nodes; W^{fb} with solid lines from the output nodes to the reservoir nodes; and W^{out} with dotted lines from the input, reservoir and output nodes to the output nodes. When the number of reservoir nodes is N_{dr}, the dimensions of W^{in}, W^{dr}, W^{fb} and W^{out} are (N_{dr}×6), (N_{dr}×N_{dr}), (N_{dr}×3) and (3×(N_{dr}+9)), respectively.
The state of the reservoir and the output in Fig. 6 are computed as follows:
The architecture of an ESN for a water wall system
where S(n) = [s_{1}(n), ···, s_{Ndr}(n)]^{T} is the state of the dynamic reservoir, U(n) = [u_{1}(n), ···, u_{6}(n)]^{T} is the input, Y(n) = [y_{1}(n), y_{2}(n), y_{3}(n)]^{T} is the output, f(·) is the hyperbolic tangent function and n is the discrete time step.
In the ESN, W^{in}, W^{dr} and W^{fb} are fixed with the random numbers initially generated, while W^{out} is trained with the given input-output data. This is a significant difference between the ESN and other RNNs, and it enables ESN training through a simple linear regression [10, 22]. The randomly generated (W^{in}, W^{dr}, W^{fb}) determine the “echo state property,” which means that “if the network has been run for a long time, the current network state is uniquely determined by the history of the input and the teacher-forced output” [12]. A more detailed analysis of the sufficient and necessary conditions for the echo state property is presented by Zhang et al. [23].
To guarantee the echo state property, W^{dr} is usually generated as follows [16, 19]:
where the initial internal weight matrix is generated with a sparse connectivity of 5% over the range [−0.5, 0.5] and a mean value of about zero, λ_{max} is its largest-magnitude eigenvalue, and α is a constant less than one, referred to as the spectral radius. Also, W^{in} and W^{fb} are randomly chosen in the range [−1, 1].
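The weight generation of (22) and the reservoir update can be sketched as follows (in Python/NumPy, not the paper's Matlab). The 5% sparsity, the [−0.5, 0.5] and [−1, 1] ranges, N_{dr} = 600 and α = 0.2 follow the text; the update form S(n+1) = f(W^{in}U(n+1) + W^{dr}S(n) + W^{fb}Y(n)) is the common ESN convention and is assumed here to match (20).

```python
import numpy as np

rng = np.random.default_rng(2)
N_dr, n_in, n_out, alpha = 600, 6, 3, 0.2

# W_dr: 5% sparse entries in [-0.5, 0.5], scaled so its spectral radius is alpha.
W0 = rng.uniform(-0.5, 0.5, (N_dr, N_dr)) * (rng.random((N_dr, N_dr)) < 0.05)
lam_max = np.max(np.abs(np.linalg.eigvals(W0)))
W_dr = (alpha / lam_max) * W0

# W_in and W_fb: dense random matrices in [-1, 1], fixed after generation.
W_in = rng.uniform(-1, 1, (N_dr, n_in))
W_fb = rng.uniform(-1, 1, (N_dr, n_out))

def step(S, U, Y):
    """One reservoir update: S(n+1) = tanh(W_in U(n+1) + W_dr S(n) + W_fb Y(n))."""
    return np.tanh(W_in @ U + W_dr @ S + W_fb @ Y)

S = np.zeros(N_dr)                       # arbitrary initial state (hence the washout)
S = step(S, U=np.ones(n_in), Y=np.zeros(n_out))
```

Only W^{out} remains to be learned; the three matrices above are never updated after this initialization.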
The training of the ESN is an offline calculation of the output weight W^{out} that minimizes the squared error as follows:
where Y_{ESN} represents the output of the ESN and Y represents the normalized output of the training data. In (23), the initial part of the training data is not included in E_{ESN}, since the initial dynamic reservoir state, S(0), is set to an arbitrary value that affects the output. This part is referred to as the “initial washout time” and is usually selected empirically [10, 15, 16, 19, 22]. In this paper, this time is selected as 500 [sec], and therefore the number of training data N is 25,000 [steps] for the ESN.
To minimize E_{ESN}, the training input U(n), the internal states S(n+1) and the training output Y(n+1) are collected together into the rows of a matrix M. That is, M is the concatenated matrix of U, S and Y, and its size is N×(N_{dr}+9). At the same time, the training outputs Y(n) are collected into another matrix T of size (N×3). Then, the desired W^{out}, which minimizes E_{ESN}, is obtained by multiplying the pseudoinverse of M with T as follows [15, 19, 22].
where M^{+} denotes the pseudoinverse of the matrix M. Since training of the ESN is performed simply with (24), the training time of the ESN is mainly determined by the time needed to calculate the pseudoinverse. In this paper, the Matlab function pinv( ) is used to calculate the pseudoinverse using singular value decomposition. This simple training is a major benefit of the ESN over other neural networks.
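The one-shot training of (24) is a single least-squares solve. A Python/NumPy equivalent of the Matlab pinv( )-based computation is sketched below; the matrix shapes follow the N×(N_{dr}+9) collection matrix M and the N×3 target matrix T described above, but the data are random placeholders (and N is reduced from 25,000 for brevity).

```python
import numpy as np

rng = np.random.default_rng(3)
N, N_dr = 2_500, 600                    # 25,000 steps in the paper; smaller here

M = rng.normal(size=(N, N_dr + 9))      # rows: collected [U, S, Y] after washout
T = rng.normal(size=(N, 3))             # collected training outputs

# W_out = (M^+ T)^T, using an SVD-based pseudoinverse as Matlab's pinv( ) does;
# transposing gives the (3 x (N_dr + 9)) output weight matrix of the text.
W_out = (np.linalg.pinv(M) @ T).T

# Numerically equivalent and usually faster: a direct least-squares solve.
W_out_lstsq = np.linalg.lstsq(M, T, rcond=None)[0].T
```

Either line replaces the entire iterative training loop an MFNN would need, which is why the ESN training times reported below are so small.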
In practice, the number of reservoir nodes (N_{dr}) and the value of the spectral radius (α) are important parameters for the performance of the ESN. Usually, a large N_{dr} supplies various dynamics that decrease the error while increasing the redundancy of the ESN. A small α decreases the interconnection and self-connection weights of the dynamic reservoir. Therefore, a small α may be sufficient to describe dynamics with a small time constant, while a large α is required to describe dynamics with a large time constant [24].
In this paper, different values of N_{dr} and α are tested independently. Fig. 7 shows E_{ESN} as a function of N_{dr} and α, for N_{dr} = 100, 200, …, 1000 and α = 0.1, 0.2, …, 1. Since E_{ESN} depends on the randomly generated initial weight set (W^{in}, W^{dr}, W^{fb}), the E_{ESN} at each point in Fig. 7 is the best value over ten initial weight sets. From Fig. 7, the ESN shows a small error with a large N_{dr} and a small α. In comparison with the MFNN, an ESN with more than 700 reservoir nodes and an α of less than 0.2 shows a smaller error than the MFNN. In this paper, the final parameters of the ESN model used to describe the water wall system are selected as 600 for N_{dr} and 0.2 for α, which gives an E_{ESN} of 2.73×10^{−6}.
E_{ESN} as a function of the reservoir number and spectral radius
Fig. 8 shows the training time of the ESN with respect to N_{dr}. Since α does not affect the calculation time of (24), the training time is plotted against N_{dr} at α = 0.2 in this figure. Naturally, a large N_{dr} requires a long training time. From Figs. 5 and 8, the training time of the ESN is less than 10 [sec], while that of the MFNN is several thousand [sec]. This is one of the important benefits of the ESN, which can be trained in a one-shot manner with (24) to minimize (23). The training time of the selected ESN model is 3.5 [sec] in this paper.
Training time of the ESN and E_{ESN} as a function of the reservoir number
4. Comparison Results
The performance of the three models, i.e., the linear, MFNN and ESN models, is tested using PRBNS data that were not used during training. To evaluate the performance, the PRBNS input data in Fig. 3 are independently applied to the three models. Then, the nonlinear outputs between 3000 [sec] and 4000 [sec] in Fig. 4, which were not used during training, are compared with the outputs of the MFNN, ESN and linear models.
Fig. 9 shows the comparison of y_{1} for the linear, MFNN and ESN models. Since y_{1} is a linear output, the linear model is identical to the nonlinear model. In this case, both the MFNN and the ESN show almost the same response as the nonlinear model.
Comparison of y_{1} for validation data
Fig. 10 shows the validation results for y_{2}, and Fig. 11 is an enlargement of Fig. 10. Since y_{2} is a nonlinear output, the linear model shows some mismatch, as expected from the analysis in Section 2.2. The MFNN and ESN show almost the same results as the nonlinear system. These validation results show that the MFNN and ESN can properly describe the y_{2} dynamics of the nonlinear water wall model.
Fig. 12 shows the validation results for y_{3}, and Fig. 13 is an enlargement of Fig. 12. Since y_{3} is a nonlinear output, the linear model shows some mismatch. The MFNN and ESN show almost the same results as the nonlinear system in the figures. These validation results show that the MFNN and ESN can properly describe the entire dynamics of the nonlinear water wall model.
Comparison of y_{2} for validation data
An enlargement of Fig. 10
Comparison of y_{3} for validation data
An enlargement of Fig. 12
A quantitative comparison of the mean square errors is listed in Table 2. Although both the MFNN and the ESN exhibit better results than the linear model, the ESN shows slightly better results than the MFNN in this case. Table 3 shows the training times of the MFNN model and the ESN model. The simple and fast training of the ESN is one of its most important benefits over other neural network models.
Comparison of MSE for three models
Comparison of the training time between MFNN and ESN
5. Conclusion
In this paper, we present a comparison among a linear model, a multilayer feedforward neural network (MFNN) model and an echo state network (ESN) model for a practical water wall system in a 600 [MW] thermal power plant, which is a nonlinear MIMO system.
First, we present an analysis of the results of the linearization of a mathematical water wall model. The change in the pole and steady-state gain is presented as a function of the electric power output. As a result, the system shows quite severe nonlinearity from the viewpoint of linear controller design.
Second, we developed an MFNN model and an ESN model without using a mathematical water wall model. Though the ESN shows slightly better performance, the results show that both neural network models can provide a satisfactory description of the nonlinear water wall model. Meanwhile, the ESN trains significantly faster than the MFNN.
From the point of view of control system design, the analysis of the changes in the pole and steady-state gain presented in this paper can be applied. Though the use of the ESN for control and analysis remains to be developed in future research, we identify the ESN as a good neural network model for practical nonlinear MIMO identification problems.
Acknowledgements
This research was supported by the Chung-Ang University Excellent Student Scholarship in 2015, by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2010-0025555), and by a Korea Institute for Advancement of Technology (KIAT) grant funded by the Korea Government, Ministry of Trade, Industry and Energy (MOTIE) (No. N0001075).
BIO
Un-Chul Moon received his B.S., M.S. and Ph.D. degrees from Seoul National University, Korea, in 1991, 1993 and 1996, respectively, all in Electrical Engineering. From 2000, he was with WooSeok University, Korea, and since 2002 he has been with Chung-Ang University, Korea, where he is currently an Associate Professor of Electrical Engineering. His current research interests are in the areas of power system analysis, computational intelligence and automation.
Jaewoo Lim received his B.S. degree in Electrical and Electronics Engineering from Chung-Ang University, Seoul, Korea, in 2014. He is currently working toward the M.S. degree in Electrical and Electronics Engineering at Chung-Ang University. His research interests include control and modeling of fossil power plants and power system analysis.
Kwang Y. Lee (F’01) received the B.S. degree in Electrical Engineering from Seoul National University, Seoul, Korea, in 1964, the M.S. degree in Electrical Engineering from North Dakota State University, Fargo, in 1968, and the Ph.D. degree in System Science from Michigan State University, East Lansing, in 1971. He has been on the faculties of Michigan State, Oregon State, the University of Houston, Penn State, and Baylor University, where he is a Professor and Chair of Electrical and Computer Engineering. His interests are power system control, operation and planning, intelligent system techniques, and their application to power system and power plant control. Dr. Lee is an Editor of the IEEE Transactions on Energy Conversion and a former Associate Editor of the IEEE Transactions on Neural Networks. Dr. Lee is a Fellow of the IEEE.
References
[1] Liu C., Wang H., Ding J., Zhen C., “An Overview of Modelling and Simulation of Thermal Power Plant,” Proc. of the 2011 International Conference on Advanced Systems, Zhengzhou, China, 2011, pp. 86-91.
[2] Xueqin L., Gang L., Shangqing L., “The Development of the Boiler Water Wall Tube Inspection,” Third International Conference on Electric Utility Deregulation and Restructuring and Power Technologies, Nanjing, 2008, pp. 2415-2420.
[3] Usoro P. B., “Modeling and Simulation of a Drum Boiler-Turbine Power Plant Under Emergency State Control,” Master’s thesis, Massachusetts Institute of Technology, 1977, pp. 138-139.
[4] Moon U.-C., Lee K. Y., “An Adaptive Dynamic Matrix Control with Fuzzy-Interpolated Step-Response Model for a Drum-Type Boiler-Turbine System,” IEEE Transactions on Energy Conversion, vol. 26, no. 2, pp. 393-401, 2011. DOI: 10.1109/TEC.2011.2116023
[5] Kecman V., Learning and Soft Computing: Support Vector Machines, Neural Networks and Fuzzy Logic Models, IEEE MIT Press, Piscataway, NJ, 2002.
[6] Sandberg I., Lo J., Fancourt C., Principe J., Katagiri S., Haykin S., Nonlinear Dynamical Systems: Feedforward Neural Network Perspectives, Wiley, New York, 2001.
[7] Lin F. J., Hung Y. C., Chen S. Y., “FPGA-Based Computed Force Control System Using Elman Neural Network for Linear Ultrasonic Motor,” IEEE Transactions on Industrial Electronics, vol. 56, no. 4, pp. 1238-1253, 2009. DOI: 10.1109/TIE.2008.2007040
[8] Ku C. C., Lee K. Y., “Diagonal Recurrent Neural Networks for Dynamic Systems Control,” IEEE Transactions on Neural Networks, vol. 6, pp. 144-156, 1995. DOI: 10.1109/72.363441
[9] Atiya A. F., Parlos A. G., “New Results on Recurrent Network Training: Unifying the Algorithms and Accelerating Convergence,” IEEE Transactions on Neural Networks, vol. 11, no. 3, pp. 697-709, 2000. DOI: 10.1109/72.846741
[10] Jaeger H., “The Echo State Approach to Analysing and Training Recurrent Neural Networks - with an Erratum Note,” Fraunhofer Institute for Autonomous Intelligent Systems (AIS), 2010.
[11] Prokhorov D., “Echo State Networks: Appeal and Challenges,” Proc. IEEE International Joint Conference on Neural Networks (IJCNN’05), Montreal, PQ, Canada, July 31-August 4, 2005, vol. 2, pp. 905-910.
[12] Jaeger H., Lukosevicius M., Popovici D., Siewert U., “Optimization and Applications of Echo State Networks with Leaky-Integrator Neurons,” Neural Networks, vol. 20 (special issue), pp. 335-352, 2007. DOI: 10.1016/j.neunet.2007.04.016
[13] Lin X., Yang Z., Son Y., “Intelligent Stock Trading System Based on Improved Technical Analysis and Echo State Network,” Expert Systems with Applications, vol. 38, pp. 11347-11354, 2011. DOI: 10.1016/j.eswa.2011.03.001
[14] Ishii K., van der Zant T., Becanovic V., Ploger P., “Optimization of Parameters of Echo State Network and Its Application to Underwater Robot,” SICE Annual Conference, Sapporo, 2004, pp. 2800-2805.
[15] Dai J., Zhang P., Mazumdar J., Harley R. G., Venayagamoorthy G. K., “A Comparison of MLP, RNN and ESN in Determining Harmonic Contribution from Nonlinear Loads,” 34th Annual Conference of the IEEE Industrial Electronics Society, 2008, pp. 3025-3032.
[16] Pan Y., Wang J., “Model Predictive Control of Unknown Nonlinear Dynamical Systems Based on Recurrent Neural Networks,” IEEE Transactions on Industrial Electronics, vol. 59, no. 8, pp. 3089-3101, 2012. DOI: 10.1109/TIE.2011.2169636
[17] Johansson R., System Modeling and Identification, Prentice-Hall, Englewood Cliffs, NJ, USA, 1992.
[18] Haykin S., Neural Networks, Maxwell Macmillan, Ottawa, ON, Canada, 1994.
[19] Mazumdar J., Harley R. G., “Utilization of Echo State Networks for Differentiating Source and Nonlinear Load Harmonics in the Utility Network,” IEEE Transactions on Power Electronics, vol. 23, no. 6, pp. 2738-2745, 2008. DOI: 10.1109/TPEL.2008.2005097
[20] Venayagamoorthy G., “Online Design of an Echo State Network Based Wide Area Monitor for a Multimachine Power System,” Neural Networks, vol. 20, no. 3, pp. 404-413, 2007. DOI: 10.1016/j.neunet.2007.04.021
[21] Li D., Han M., Wang J., “Chaotic Time Series Prediction Based on a Novel Robust Echo State Network,” IEEE Transactions on Neural Networks and Learning Systems, vol. 23, no. 5, pp. 787-799, 2012. DOI: 10.1109/TNNLS.2012.2188414
[22] Jaeger H., “A Tutorial on Training Recurrent Neural Networks, Covering BPTT, RTRL, EKF and the Echo State Network Approach,” GMD Report, German National Research Center for Information Technology, 2002.
[23] Zhang B., Miller D. J., Wang Y., “Nonlinear System Modeling with Random Matrices: Echo State Networks Revisited,” IEEE Transactions on Neural Networks and Learning Systems, vol. 23, no. 1, pp. 175-182, 2012. DOI: 10.1109/TNNLS.2011.2178562
[24] Millea A., “Explorations in Echo State Networks,” Master’s thesis, Department of Artificial Intelligence, University of Groningen, Netherlands, 2014.