The systematic optimal tuning of power system stabilizers (PSSs) using the dynamic embedded optimization (DEO) technique is described in this paper. A hybrid system model with the differential-algebraic impulsive switched (DAIS) structure is used as a tool for the DEO of PSSs. Two numerical optimization methods, the steepest descent and Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithms, are investigated to implement the DEO using the hybrid system model. In addition to the gain and time constant of the phase-lead compensator, the output limits of the PSSs, which introduce nonsmooth nonlinearities, are considered as parameters to be optimized by the DEO. The simulation results on the IEEE 39 bus New England system show the effectiveness and robustness of the PSSs tuned by the proposed DEO technique in enhancing system damping.
1. Introduction
Design processes are inherently optimization problems, involving tradeoffs between competing objectives whilst ensuring constraints are satisfied. Such problems are not always established formally; nevertheless, the underlying optimization principles apply. Design questions arising from system dynamic behavior can also be considered in an optimization framework. However, the optimization formulation in this case must capture the processes driving the dynamics. This class of problems has come to be known as dynamic embedded optimization (DEO) [1]. The power system stabilizer (PSS), which is used to damp low-frequency oscillations following a typical disturbance, is an important control application to which the DEO technique can be applied.
The dynamic behavior of a PSS is affected by its smooth linear parameters (the gain and time constant of the phase compensator) and by its constrained parameters (the output limits), which introduce nonsmooth nonlinearities. The selection of the linear parameters has usually been made by conventional tuning techniques [2-5] based on small-signal stability analysis. However, by focusing only on small-signal conditions, the dynamic damping performance immediately following a large disturbance is often degraded. The PSS output limits, which cannot be determined by the linear approach, can provide the solution to balance these competing effects [6]. In particular, these limit values attempt to prevent the machine terminal voltage from falling below the exciter reference level while the speed is also falling. They can therefore improve the transient recovery after a disturbance (faster recovery to the initial steady-state point, which saves system energy), especially in multimachine power systems [5].
Power systems frequently exhibit interactions between continuous-time dynamics, discrete-time and discrete-event dynamics, switching actions, and jump phenomena. Such systems are known generically as hybrid systems, and they can be modeled by a differential-algebraic impulsive switched (DAIS) structure [6, 7]. In particular, this hybrid system model provides an effective and insightful analysis of PSSs with nonsmooth nonlinear dynamics due to saturation limits. This paper makes a new contribution by jointly determining the systematic optimal parameters of the PSS shown in Fig. 1 [the gain (K_{PSS}) and time constant (T_{1}) of the phase-lead compensator, and the output limits (V_{max} and V_{min})] using the hybrid system model based on the DAIS structure. The performance of the PSSs tuned by the proposed DEO is assessed through application to the IEEE 39 bus New England multimachine power system in Fig. 2. In this paper, minimization of the objective function used in the DEO is solved by two numerical optimization methods: the steepest descent method and the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method [8, 9], the most popular quasi-Newton algorithm. The efficiency of these two algorithms in the DEO for the tuning of PSSs on the multimachine power system (in Fig. 2) is investigated. Also, the starting point of an iterative optimization can significantly affect the efficiency and robustness of the algorithm even when the same method is used. The conventional tuning method based on eigenvalue analysis may provide a good initial guess as the starting point of the steepest descent and BFGS algorithms. Therefore, its usefulness is also considered as an important factor for the DEO in this paper.
Fig. 1. PSS/AVR block representation.
Fig. 2. IEEE 39 bus New England power system.
This paper is organized as follows: Section II presents a summary of the hybrid system model with the DAIS structure. The steepest descent and BFGS algorithms used as solvers in the DEO are described in Section III. A detailed explanation of how to implement the DEO using the hybrid system model is given in Section IV. The dynamic damping performance of the PSSs tuned by the proposed DEO technique is evaluated by the case studies in Section V, which also include comparisons of the convergence speed of the steepest descent and BFGS algorithms. Finally, conclusions are given in Section VI.
2. Hybrid System Presentation
 2.1 Modeling
As mentioned in Section I, hybrid systems, which include power systems, are characterized by the following:

• Continuous and discrete states.

In other words, a hybrid system is a mathematical model of a physical process consisting of interacting continuous and discrete event systems [10]. This means that there are both continuous and discrete states in such systems that influence each other's behavior. It is shown in [7] that such behavior can be captured by the following DAIS structure:
$$\dot{\underline{x}} = \underline{f}(\underline{x}, y) \tag{1}$$

$$0 = g(\underline{x}, y) \tag{2}$$

$$g(\underline{x}, y) = \begin{bmatrix} g^{(0)}(\underline{x}, y) \\ g^{(i\pm)}(\underline{x}, y) \end{bmatrix}, \qquad g^{(i\pm)} = \begin{cases} g^{(i-)}, & y_{d,i} < 0 \\ g^{(i+)}, & y_{d,i} > 0 \end{cases}, \quad i = 1, \dots, d \tag{3}$$

$$\underline{x}^{+} = \underline{h}_{j}(\underline{x}^{-}, y^{-}), \qquad \text{when } y_{e,j}(\underline{x}, y) = 0, \quad j = 1, \dots, e \tag{4}$$

where $\underline{x} = [x;\ z;\ \lambda]$, $\underline{f} = [f;\ 0;\ 0]$, $\underline{h}_{j} = [x^{-};\ h_{j};\ \lambda^{-}]$, and

• x are the continuous dynamic states, for example generator angles, speeds, and fluxes;
• y are the algebraic states;
• z are the discrete states;
• λ are the parameters.

The differential equations $\underline{f}$ in (1) are correspondingly structured for $\underline{x}$, whilst z and λ remain constant away from events. Similarly, the reset equations $\underline{h}_{j}$ in (4) ensure that x and λ remain constant at reset events, but the discrete states z are reset to new values according to z^{+} = h_{j}($\underline{x}^{-}$, y^{-}). (The notation z^{+} denotes the value of z just after the reset event, while $\underline{x}^{-}$ and y^{-} refer to the values of $\underline{x}$ and y just prior to the event.) The algebraic function g in (2) is composed of g^{(0)} together with appropriate choices of g^{(i-)} or g^{(i+)}, depending on the signs of the corresponding elements of y_{d} in (3). An event is triggered by an element of y_{d} changing sign and/or an element of y_{e} in (4) passing through zero. In other words, at an event, the composition of g changes and/or elements of z are reset.
The system flows φ are defined accordingly as

$$\underline{x}(t) = \phi_{\underline{x}}(\underline{x}_{0}, t), \qquad y(t) = \phi_{y}(\underline{x}_{0}, t) \tag{5}$$

The full detailed explanation and associated mathematical equations of the DAIS model (especially for the switching and impulse effects) are given in [7], together with comprehensive studies of hybrid systems.
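As a concrete (and deliberately tiny) illustration of the event-driven structure above, the following Python sketch simulates one continuous state whose algebraic output equation switches when a trigger variable y_d changes sign, in the spirit of a PSS output limit. The model, gains, and limit values here are hypothetical and are not part of the paper's system.

```python
# Toy event-driven simulation in the spirit of the DAIS structure: one
# continuous state x, one algebraic output y whose defining equation switches
# when the trigger y_d = K*x - V_max changes sign (mimicking a PSS output
# limit). All names and numbers here are hypothetical, not the paper's model.

K, V_max = 2.0, 0.1

def algebraic_output(x):
    """Switched algebraic equation: choose the g^(i-) or g^(i+) branch
    from the sign of the trigger variable y_d."""
    y_unlimited = K * x
    y_d = y_unlimited - V_max
    return (V_max, y_d) if y_d > 0 else (y_unlimited, y_d)

def simulate(x0, h=0.001, t_end=2.0):
    """Forward-Euler approximation of the flow phi; records the times at
    which y_d changes sign (the switching events)."""
    x, t, events = x0, 0.0, []
    prev_yd = algebraic_output(x)[1]
    while t < t_end:
        y, y_d = algebraic_output(x)
        if prev_yd * y_d < 0:                  # trigger passed through zero
            events.append(t)
        prev_yd = y_d
        x += h * (-x + 0.5 + y)                # dx/dt = f(x, y), arbitrary f
        t += h
    return x, events

x_end, events = simulate(0.0)
```

Starting from x = 0, the output first follows the unlimited branch and then hits the limit, producing exactly one switching event before the trajectory settles near its limited equilibrium.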
 2.2 Trajectory sensitivities
The flows φ in (5) of a system will generally vary with changes in parameters and/or initial conditions. Trajectory sensitivity provides a way of quantifying the changes in the flow that result from (small) changes to parameters and/or initial conditions. The development of these sensitivity concepts is based on the DAIS model in (1)-(4). Trajectory sensitivities follow from a Taylor series expansion (neglecting higher order terms) of the flows $\phi_{\underline{x}}$ and $\phi_{y}$ in (5), which can be expressed as

$$\Delta \underline{x}(t) = \frac{\partial \phi_{\underline{x}}}{\partial \underline{x}_{0}}(t)\, \Delta \underline{x}_{0} \equiv \Gamma_{\underline{x}_{0}}(t)\, \Delta \underline{x}_{0} \tag{6}$$

$$\Delta y(t) = \frac{\partial \phi_{y}}{\partial \underline{x}_{0}}(t)\, \Delta \underline{x}_{0} \equiv \Gamma_{y_{0}}(t)\, \Delta \underline{x}_{0} \tag{7}$$

where $\Gamma_{\underline{x}_{0}} \in \Re^{\underline{n} \times \underline{n}}$ and $\Gamma_{y_{0}} \in \Re^{m \times \underline{n}}$ are the partial derivatives of the system flows, known as the trajectory sensitivities. Recall that $\underline{x}$ incorporates the parameters λ; therefore, the sensitivities to initial conditions $\underline{x}_{0}$ include the parameter sensitivities.

The calculations in (6) and (7) can require expensive computational effort when the equations have high dimension, as for large systems. Fortunately, by using an implicit numerical integration technique such as trapezoidal integration, the computational burden for obtaining the trajectory sensitivities can be reduced considerably [6, 7].
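For intuition, a trajectory sensitivity can be checked numerically on a smooth scalar example; the sketch below (a hypothetical system dx/dt = -λx with no events, not the DAIS model itself) approximates Γ by differencing two neighboring flows and compares it with the closed-form value.

```python
import math

# Numerical check of a trajectory sensitivity for a smooth scalar example
# dx/dt = -lam * x (a hypothetical stand-in for the DAIS flows, with no
# events). Gamma = d(phi)/d(lam) is approximated by differencing two
# neighboring flows and compared with the closed-form -t * x0 * exp(-lam*t).

def flow(x0, lam, t, h=1e-4):
    x = x0
    for _ in range(int(t / h)):     # forward-Euler integration of the flow
        x += h * (-lam * x)
    return x

def sensitivity_fd(x0, lam, t, d=1e-6):
    # central finite difference with respect to the parameter lam
    return (flow(x0, lam + d, t) - flow(x0, lam - d, t)) / (2.0 * d)

x0, lam, t = 1.0, 0.8, 2.0
gamma = sensitivity_fd(x0, lam, t)
exact = -t * x0 * math.exp(-lam * t)
```

In practice the variational (sensitivity) equations are integrated alongside the flow, as described above, rather than by finite differences; the differencing here is only for verification.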
3. Numerical Optimization Methods
In engineering multivariable nonlinear problems, numerical optimization methods play a significant part in finding the solutions of nonlinear functions on complex systems, or in selecting the parameters by which an objective function J can be minimized or maximized. The optimal tuning problem for the PSSs described in this paper is a case of the latter. The two representative first-order gradient-based methods [8, 9], the steepest descent and BFGS quasi-Newton algorithms, are investigated. It is important to note that this first-order gradient information is already available by applying the trajectory sensitivities based on the DAIS structure described in Section II.
 3.1 Steepest descent algorithm
The steepest descent method is simple to implement. However, it is often slow to converge. The gradient of an objective function J at a point is the direction of the most rapid increase in the value of the function at that point. The descent direction is the negative of the gradient direction. The series of steps to be taken is given below.
Algorithm 1: Steepest descent
Given starting point λ_0, N (number of iterations), ε_1, ε_2 (positive stopping criteria), k ← 0;
while (k < N) and (f_tol_1 > ε_1) and (f_tol_2 > ε_2)
    Compute search direction p_k = -∇J(λ_k).
    Set λ_{k+1} = λ_k + α_k p_k, where α_k is the step length.
    k ← k + 1;
end (while)
A more detailed explanation of the step length α_k and the functions f_tol_1 and f_tol_2 used for the stopping criteria is given in Section IV.
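A minimal sketch of Algorithm 1 is given below, applied to a hypothetical quadratic standing in for the PSS objective. The analytic gradient replaces the trajectory-sensitivity gradient of the paper, the step length α is fixed as in the paper, and the f_tol tests are simplified to a plain gradient check.

```python
# A minimal steepest-descent sketch following Algorithm 1, applied to a
# hypothetical quadratic J(lam) = (lam1 - 1)^2 + 4*(lam2 + 2)^2 standing in
# for the PSS objective. The analytic gradient below replaces the
# trajectory-sensitivity gradient, and the stopping test is simplified.

def grad_J(lam):
    return [2.0 * (lam[0] - 1.0), 8.0 * (lam[1] + 2.0)]

def steepest_descent(lam0, alpha=0.1, N=200, eps=1e-8):
    lam = list(lam0)
    for _ in range(N):
        g = grad_J(lam)
        if max(abs(gi) for gi in g) < eps:      # simplified stopping test
            break
        # p_k = -grad_J(lam_k);  lam_{k+1} = lam_k + alpha * p_k
        lam = [li - alpha * gi for li, gi in zip(lam, g)]
    return lam

lam_star = steepest_descent([5.0, 5.0])
```

With a fixed step, the component of the error aligned with the smallest curvature contracts most slowly, which is exactly the slow-convergence behavior noted above.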
 3.2 BFGS quasinewton algorithm
Like the steepest descent method, the quasi-Newton methods [8, 9] require only the first-order gradient of the objective function J to be supplied at each iterate. By measuring the changes in gradients, they provide a dramatic improvement in convergence rate and robustness over the steepest descent method, especially for difficult problems. Moreover, because second derivatives are not required, the quasi-Newton methods are sometimes more efficient than Newton's method. A further advantage of the quasi-Newton methods is that they provide an estimate of the Hessian. (Recall that building the true Hessian is not feasible in practice, since it involves the second-order trajectory sensitivities, which are computationally expensive.) This estimated Hessian may provide an indication of coupling between design parameters λ, and hence allow physical insights that assist in the design process [1]. The most popular quasi-Newton algorithm is the BFGS method, named for its discoverers Broyden, Fletcher, Goldfarb, and Shanno. The BFGS algorithm is given below.
Algorithm 2: BFGS method
Given starting point λ_0, N (number of iterations), ε_1, ε_2 (stopping criteria), inverse Hessian approximation H_0 (positive definite matrix), k ← 0;
while (k < N) and (f_tol_1 > ε_1) and (f_tol_2 > ε_2)
    Compute search direction p_k = -H_k ∇J(λ_k).
    Set λ_{k+1} = λ_k + β_k p_k, where β_k is the step length.
    Define s_k = λ_{k+1} - λ_k and q_k = ∇J(λ_{k+1}) - ∇J(λ_k).
    Compute H_{k+1} = (I - ρ_k s_k q_k^T) H_k (I - ρ_k q_k s_k^T) + ρ_k s_k s_k^T, where I is an identity matrix and ρ_k = 1 / (q_k^T s_k).
    k ← k + 1;
end (while)
The descriptions of how to define the initial approximation H_0 and the step length β_k are also presented in the next Section.
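The inverse-Hessian update of Algorithm 2 can be sketched on the same kind of hypothetical quadratic (analytic gradient standing in for trajectory-sensitivity gradients); H is initialized to μ·I, with a small μ chosen here so that the unit step β_k = 1 is stable for this toy problem.

```python
# A minimal BFGS sketch following Algorithm 2 on a hypothetical quadratic
# J(lam) = (lam1 - 1)^2 + 2*(lam2 + 2)^2 standing in for the PSS objective.
# H is the inverse Hessian approximation, initialized to mu*I; the analytic
# gradient below replaces the trajectory-sensitivity gradient of the paper.

def grad_J(lam):
    return [2.0 * (lam[0] - 1.0), 4.0 * (lam[1] + 2.0)]

def bfgs(lam0, mu=0.2, N=100, eps=1e-10):
    n = len(lam0)
    I = [[float(i == j) for j in range(n)] for i in range(n)]
    H = [[mu * I[i][j] for j in range(n)] for i in range(n)]   # H_0 = mu*I
    lam, g = list(lam0), grad_J(lam0)
    for _ in range(N):
        if max(abs(gi) for gi in g) < eps:
            break
        # search direction p_k = -H_k * grad, unit step beta_k = 1
        p = [-sum(H[i][j] * g[j] for j in range(n)) for i in range(n)]
        lam_new = [lam[i] + p[i] for i in range(n)]
        g_new = grad_J(lam_new)
        s = [lam_new[i] - lam[i] for i in range(n)]
        q = [g_new[i] - g[i] for i in range(n)]
        rho = 1.0 / sum(qi * si for qi, si in zip(q, s))
        # H_{k+1} = (I - rho*s*q^T) H_k (I - rho*q*s^T) + rho*s*s^T
        A = [[I[i][j] - rho * s[i] * q[j] for j in range(n)] for i in range(n)]
        B = [[I[i][j] - rho * q[i] * s[j] for j in range(n)] for i in range(n)]
        HB = [[sum(H[i][k] * B[k][j] for k in range(n)) for j in range(n)]
              for i in range(n)]
        H = [[sum(A[i][k] * HB[k][j] for k in range(n)) + rho * s[i] * s[j]
              for j in range(n)] for i in range(n)]
        lam, g = lam_new, g_new
    return lam

lam_hat = bfgs([5.0, 5.0])
```

Because each update enforces the secant condition H_{k+1} q_k = s_k, H rapidly absorbs curvature information along the explored directions, which is the self-correcting property relied on in Section IV.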
4. Implementation of the DEO
 4.1 Test system
In this study, the ten-machine 39 bus New England multimachine power system in Fig. 2 is considered. The data for this system are given in [11]. Each machine is represented by a fourth-order nonlinear model [12]. It is assumed that all generators (G1~G10) are equipped with the PSS/automatic voltage regulator (AVR) system shown in Fig. 1. The targeted parameters to be optimized are K_{PSS}(i), T_{1}(i), V_{max}(i), and V_{min}(i), i = G1, G2, …, G10 (of all PSSs); therefore, the total number of optimized parameters is 40.
 4.2 Objective function and minimization
Many practical optimization problems can be formulated using a Bolza form of the objective function J:

$$J(\lambda) = \varphi(\underline{x}(t_{f}), \lambda) + \int_{0}^{t_{f}} \psi(\underline{x}(t), y(t), \lambda)\, dt \tag{10}$$

where λ are the optimized parameters (40 in total in this study) that are adjusted to minimize the value of the objective function J in (10), and t_f is the final time. The objective of the PSS tuning is to enhance system damping and force the system to recover to the post-disturbance stable operating point as quickly as possible. The speed deviation (Δω) and terminal voltage deviation (ΔV_t) of each generator are considered good assessments of the damping and recovery. Therefore, the objective function in (10) can be reformulated for the optimal tuning of PSSs with a specific time t_f as follows:

$$J(\lambda) = \int_{0}^{t_{f}} \sum_{i} \left[ w_{\omega,i} \left( \omega_{i}(\lambda, t) - \omega^{s} \right)^{2} + w_{v,i} \left( V_{t,i}(\lambda, t) - V_{t,i}^{s} \right)^{2} \right] dt \tag{11}$$

where the weighting matrix W = diag(w_{ω,1}, …, w_{ω,10}, w_{v,1}, …, w_{v,10}). The ω^{s} and V_{t,i}^{s} are the post-fault steady-state values of ω and V_{t,i}, respectively. Note that the dependence of the system responses ω_{i}(λ, t) and V_{t,i}(λ, t) on the parameters λ is provided by the flows in (5). Also, the diagonal matrix of weighting factors W is determined by considering the balance of the conflicting requirements on the speed and voltage deviations.
 4.3 Computation of gradient
Minimization of the value of J in (11) is straightforward even though the cost is obtained by integrating over the system flows (trajectories). The simplest way of obtaining J is to introduce a new state variable x_c, with ẋ_c equal to the integrand of (11) and x_c(0) = 0. Thereafter, J(λ) = x_c(t_f), and the trajectory sensitivities with respect to λ directly provide the gradient:

$$\nabla J(\lambda) = \frac{\partial x_{c}(t_{f})}{\partial \lambda} \tag{12}$$

Again, note that through the appropriate implementation of trajectory sensitivities described in Section II, the extra computational requirement in determining (12) is negligible.
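The augmented-state trick can be sketched on a hypothetical scalar damping model (not the multimachine system): x_c integrates the integrand, so J = x_c(t_f), and a finite-difference gradient stands in here for the trajectory-sensitivity gradient of (12).

```python
# Sketch of the augmented-state computation of J: a new state x_c integrates
# the integrand, so J = x_c(t_f). The dynamics d(dw)/dt = -lam * dw are a
# hypothetical scalar damping model, and the finite-difference gradient
# stands in for the trajectory-sensitivity gradient of (12).

def cost(lam, dw0=1.0, tf=5.0, h=1e-3):
    dw, xc = dw0, 0.0
    for _ in range(int(tf / h)):
        xc += h * dw * dw              # x_c_dot = integrand (squared deviation)
        dw += h * (-lam * dw)          # forward-Euler step of the dynamics
    return xc                          # J(lam) = x_c(t_f)

def grad_cost(lam, d=1e-5):
    # central difference standing in for the sensitivity-based gradient
    return (cost(lam + d) - cost(lam - d)) / (2.0 * d)

g = grad_cost(1.0)
```

As expected, stronger damping (larger λ) lowers the accumulated squared deviation, so the gradient at λ = 1 is negative; an optimizer would therefore push λ upward.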
 4.4 Important factors of numerical methods
 4.4.1 Stopping criteria
With the efficient computation of ∇J by (12), the two numerical optimization methods (the steepest descent and BFGS algorithms) proposed in Section III are ready to be applied. It is now necessary to present the functions f_tol_1 and f_tol_2 (used with the stopping criteria ε_1 and ε_2), which evaluate the maximum relative gradient of J at λ_k in (13) and the maximum relative change in successive λ values in (14), respectively:

$$f_{tol\_1} = \max_{i} \left| \frac{\nabla J(\lambda_{k})_{i}\, \lambda_{k,i}}{J(\lambda_{k})} \right| \tag{13}$$

$$f_{tol\_2} = \max_{i} \left| \frac{\lambda_{k+1,i} - \lambda_{k,i}}{\lambda_{k+1,i}} \right| \tag{14}$$

The detailed descriptions of the stopping criteria are given in [9].
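The two stopping functions can be sketched as below, following the relative-gradient and relative-step tests of Dennis and Schnabel [9]; typical parameter and function scales of 1.0 are assumed here, and the exact scaling used in the paper may differ.

```python
# Hedged sketch of the two stopping functions, following the relative-gradient
# and relative-step tests of Dennis and Schnabel [9]; typical scales of 1.0
# are assumed, which may differ from the paper's implementation.

def f_tol_1(grad, lam, J):
    """Maximum relative gradient of J at lam (compared against eps_1)."""
    return max(abs(g * max(abs(l), 1.0) / max(abs(J), 1.0))
               for g, l in zip(grad, lam))

def f_tol_2(lam_new, lam_old):
    """Maximum relative change between successive iterates (against eps_2)."""
    return max(abs(ln - lo) / max(abs(ln), 1.0)
               for ln, lo in zip(lam_new, lam_old))
```

Relative (rather than absolute) tests keep the criteria meaningful when parameters of very different scales, such as gains and voltage limits, are optimized together.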
 4.4.2 Step length
In this study, fixed step lengths α_k and β_k are used in the steepest descent and BFGS algorithms, respectively. More efficient line search algorithms [8, 9] can be used to determine the optimal step length, which will be reported in the authors' other publication.
The order of magnitude (O) of the gradient at a starting point is observed to differ depending on the characteristics of the parameters: the gradient of the smooth parameters [K_{PSS}, T_{1}] is in the range of O(10^{-3}), whilst the gradient of the nonsmooth parameters [V_{max}, V_{min}] is in the range of O(10^{1})~O(10^{2}). Therefore, for the steepest descent algorithm, the step length α_k is applied with different scaling factors of 100 and 0.01 for the corresponding separate gradients of [K_{PSS}, T_{1}] and [V_{max}, V_{min}], respectively. Otherwise, the convergence of the steepest descent is too slow with a small single step length, or it can diverge with a large single step length. On the other hand, for the BFGS algorithm, the unit step length β_k = 1 is used, because the algorithm has an effective self-correcting property due to the Hessian approximation H, even for gradients of different orders of magnitude.
 4.4.3 Initial inverse hessian approximation in BFGS method
The initial inverse Hessian approximation H_0 should be a positive definite matrix. The value of H_0 affects the effectiveness and robustness of the overall algorithm. It is often set to some multiple μ⋅I of the identity, but there is no good general strategy for choosing μ [8]. Its value is determined experimentally by evaluating the convergence speed and robustness of the algorithm (the convergence is slow when μ is too large, and the algorithm fails when μ is too small). A value of μ = 100 is used in this study.
 4.5 Starting point of numerical methods
A closed-form solution, which is a global minimum, is out of the question. In other words, it is practically impossible to know whether a global minimum of the objective function has been found. Most numerical optimization algorithms can at best locate one local minimum close to the starting point; usually this is good enough in practice. Therefore, the inability to determine the existence and uniqueness of the solution is not the primary concern in many practical applications [9]. The choice of starting point λ_0 can have a significant effect on the efficiency and robustness of the overall algorithm. In this study, the conventional tuning method [2-5] based on eigenvalue analysis is used to determine the starting point (so-called SP_EIG) as a good initial guess. For the DAIS model of the hybrid system in (1) and (2), the eigenvalues can be computed from the reduced-order system matrix A in (15):

$$A = f_{\underline{x}} - f_{y}\, g_{y}^{-1}\, g_{\underline{x}} \tag{15}$$
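The reduced-matrix computation in (15) and the resulting damping ratio can be sketched for a hypothetical 2-state, 1-algebraic-variable linearization; the Jacobian blocks f_x, f_y, g_x, g_y below are made-up numbers, not the New England system data.

```python
import cmath

# Sketch of the eigenvalue check in (15) for a hypothetical 2-state,
# 1-algebraic-variable linearization: f_x, f_y, g_x, g_y are made-up
# numbers, not the New England system data. The damping ratio
# zeta = -Re(lam) / |lam| of the oscillatory mode is what SP_EIG compares.

f_x = [[0.0, 1.0], [-4.0, -0.4]]
f_y = [[0.0], [1.0]]
g_x = [[0.5, 0.0]]
g_y = [[-2.0]]

# A = f_x - f_y * g_y^{-1} * g_x   (g_y is a 1x1 block here)
corr = [[f_y[i][0] * g_x[0][j] / g_y[0][0] for j in range(2)] for i in range(2)]
A = [[f_x[i][j] - corr[i][j] for j in range(2)] for i in range(2)]

# eigenvalues of the 2x2 reduced matrix from its characteristic polynomial
tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
disc = cmath.sqrt(tr * tr - 4.0 * det)
eig1, eig2 = (tr + disc) / 2.0, (tr - disc) / 2.0
zeta = -eig1.real / abs(eig1)          # damping ratio of the oscillatory mode
```

A full-scale study would of course form (15) from the sparse system Jacobians and use a general eigensolver; the 2x2 characteristic polynomial here is only for illustration.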
To compare with the SP_EIG, another starting point (so-called SP_INI), with the same parameters (K_{PSS} = 2, T_{1} = 5, T_{2} = 0.05, and T_{W} = 10) for all PSSs of G1~G10 (in Fig. 2), is used as an example of a worse starting point than the SP_EIG. Also, the output limits [V_{max}, V_{min}] of all PSSs are set to [0.1, -0.1] in both the SP_EIG and SP_INI. The parameters in the SP_EIG are shown in Table 1, which also presents the eigenvalues of the electromechanical mode (Δω) of G1~G10 for the SP_EIG and SP_INI.
It is observed from Table 1 that the PSSs of all machines have better damping ratios with the SP_EIG than with the SP_INI. For visual illustration, the system responses (Δω) of G1 and G10 are shown in Figs. 3 and 4 when a three-phase short circuit fault is applied to bus 39 in Fig. 2 at t = 0.1 s.
Table 1. Parameters in SP_EIG and eigenvalues of electromechanical mode in all machines by SP_EIG and SP_INI.
Fig. 3. Damping performance of PSSs in SP_EIG and SP_INI: speed deviation response Δω [rad/s] of G1.
Fig. 4. Damping performance of PSSs in SP_EIG and SP_INI: speed deviation response Δω [rad/s] of G10.
5. Simulation Results
 5.1 Convergence speed
The convergence speed of the steepest descent and BFGS algorithms is compared in Figs. 5 and 6, which show the variations of the value of J during 20 iterations from the starting points SP_INI and SP_EIG, respectively.
Fig. 5. Variation of the value of the objective function J from SP_INI.
Fig. 6. Variation of the value of the objective function J from SP_EIG.
It is clearly shown from the results in Figs. 5 and 6 that the BFGS algorithm has a much faster convergence speed than the steepest descent algorithm. Not surprisingly, this result confirms that the BFGS is remarkably successful compared to the steepest descent for the DEO applied to the PSSs on a multimachine power system, as is generally known in the field of numerical optimization.
 5.2 Damping performance
After 20 iterations from the SP_EIG, the values of J obtained by the steepest descent and BFGS algorithms are 0.0269 and 0.0176, respectively. At this point, the corresponding damping performance under the same three-phase short circuit fault at bus 39 in Fig. 2 is compared in Figs. 7 and 8.
Fig. 7. Comparison of damping performance by steepest descent and BFGS algorithms: speed deviation response Δω [rad/s] of G1.
Fig. 8. Comparison of damping performance by steepest descent and BFGS algorithms: speed deviation response Δω [rad/s] of G10.
The results (speed deviation responses Δω of G1 and G10) in Figs. 7 and 8 show that the PSSs with the parameters optimized by the BFGS algorithm (given in Table 2) have better damping performance for low-frequency oscillations than those optimized by the steepest descent algorithm.
Table 2. Optimized parameters by BFGS algorithm after 20 iterations from SP_EIG.
 5.3 Robustness of algorithms
With the same step lengths (α_k and β_k) and initial inverse Hessian approximation H_0 as used before, the robustness of the algorithms is tested by a three-phase short circuit applied at a different location, bus 16 in Fig. 2.
Starting from the SP_INI and SP_EIG, the variations of the value of J during 20 iterations are shown in Figs. 9 and 10, respectively. The BFGS algorithm still has a better convergence property than the steepest descent algorithm, especially with the SP_EIG.
Fig. 9. Variation of the value of the objective function J from SP_INI: test by a three-phase short circuit fault applied at bus 16.
After 20 iterations of the steepest descent and BFGS algorithms, as shown in Fig. 10, the values of J are 0.4940 and 0.4424, respectively. With the parameters optimized at this point, the damping performance under the same three-phase short circuit fault at bus 16 (in Fig. 2) is evaluated in Figs. 11 and 12, which show that the parameters optimized by the BFGS improve the overall system damping more effectively than those of the steepest descent method.
Fig. 10. Variation of the value of the objective function J from SP_EIG: test by a three-phase short circuit fault applied at bus 16.
Fig. 11. Comparison of damping performance by steepest descent and BFGS algorithms: test by a three-phase short circuit fault applied at bus 16 (speed deviation response Δω [rad/s] of G1).
Fig. 12. Comparison of damping performance by steepest descent and BFGS algorithms: test by a three-phase short circuit fault applied at bus 16 (speed deviation response Δω [rad/s] of G10).
6. Conclusions
In this paper, the systematic optimal tuning of power system stabilizers (PSSs) on a multimachine power system by the dynamic embedded optimization (DEO) technique was described. In addition to the smooth linear parameters (the gain and time constant of the phase compensator), the output limits of the PSSs, which introduce nonsmooth nonlinearities, were also considered as parameters to be optimized.
To implement the DEO technique for the tuning of the PSSs (including the output limits), the hybrid system model based on the differential-algebraic impulsive switched (DAIS) structure was used. From the trajectory sensitivities exploited in the hybrid system model with the DAIS structure, the gradients of the objective function J with respect to the parameters were computed.
Minimization of the value of J used in the DEO was solved by two numerical optimization algorithms, the steepest descent and Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithms. In terms of convergence speed, the simulation results showed that the BFGS algorithm is superior to the steepest descent algorithm. Moreover, the optimized parameters of the PSSs tuned by the BFGS improved the system damping more efficiently than those of the steepest descent. As a good initial guess for the starting point of the steepest descent and BFGS algorithms, the conventional tuning method based on eigenvalue analysis was used to determine the initial parameters of the PSSs. This starting point was well matched with the two proposed numerical optimization algorithms.
Acknowledgements
This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MEST) (No. 20110028065).
BIO
Byung Chul Sung received the B.S. and M.S. degrees from the School of Electrical and Electronic Engineering, Yonsei University, Seoul, Korea, in 2006 and 2009, respectively. He is currently pursuing his Ph.D. degree at Yonsei University. His research interests are in power system operation and control, control of grid-connected fault current limiters (FCLs), and the development of bidirectional protection coordination techniques for distributed generation.
Seung-Mook Baek was born in Seoul, Korea. He received the B.S., M.S., and Ph.D. degrees from the School of Electrical and Electronic Engineering, Yonsei University, Seoul, Korea, in 2006, 2007, and 2010, respectively. He is currently an Assistant Professor in the Division of Electrical, Electronic and Control Engineering, Kongju National University, Cheonan, Korea. He was a Research Engineer with KEPCO Research Institute during 2009-2012. His current research interests are in power system dynamics, hybrid systems, optimization control algorithms, real-time simulation, flexible AC transmission system (FACTS) devices, and control of distributed generations.
Jung-Wook Park was born in Seoul, Korea. He received the B.S. degree (summa cum laude) from the Department of Electrical Engineering, Yonsei University, Seoul, Korea, in 1999, and the M.S.E.C.E. and Ph.D. degrees from the School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, USA, in 2000 and 2003, respectively. He was a Postdoctoral Research Associate in the Department of Electrical and Computer Engineering, University of Wisconsin, Madison, USA, during 2003-2004, and a Senior Research Engineer with LG Electronics Inc., Korea, during 2004-2005. He is currently an Associate Professor in the School of Electrical and Electronic Engineering, Yonsei University, Seoul, Korea. He is now leading the National Leading Research Laboratory (NLRL), designated by the Korea government, on the subject of integrated optimal operation for the smart grid. He is also a chair of the Yonsei Power and Renewable Energy Future technology Research Center (Yonsei-PREFER) in the School of Electrical and Electronic Engineering, Yonsei University, Seoul, Korea. His current research interests are in power system dynamics, renewable-energy-based distributed generation, power control of electric vehicles, and optimization control algorithms. Prof. Park was the recipient of the Second Prize Paper Award in 2003 from the Industry Automation Control Committee and the Prize Paper Award in 2008 from the Energy Systems Committee of the IEEE Industry Applications Society (IAS). He is currently a vice chair of the Intelligent Systems Technical Applications Committee of the IEEE Computational Intelligence Society (CIS).
References

[1] I. A. Hiskens, J.-W. Park, and V. Donde, "Dynamic Embedded Optimization and Shooting Methods for Power System Performance Assessment," Chapter 9 in J. H. Chow, F. F. Wu, and J. A. Momoh (eds.), Applied Mathematics for Deregulated Electric Power Systems, Springer, 2005, pp. 179-199.
[2] P. Kundur, M. Klein, G. J. Rogers, and M. S. Zywno, "Application of Power System Stabilizers for Enhancement of Overall System Stability," IEEE Trans. on Power Systems, vol. 4, no. 2, pp. 614-626, 1989. DOI: 10.1109/59.193836.
[3] M. Klein, G. J. Rogers, S. Moorty, and P. Kundur, "Analytical Investigation of Factors Influencing Power System Stabilizers Performance," IEEE Trans. on Energy Conversion, vol. 7, no. 3, pp. 382-388, 1992. DOI: 10.1109/60.148556.
[4] N. Martins and L. T. G. Lima, "Determination of Suitable Locations for Power System Stabilizers and Static VAr Compensators for Damping Electromechanical Oscillations in Large Scale Power Systems," in Proc. of Power Industry Computer Application Conference, 1989, pp. 74-82.
[5] P. Kundur, Power System Stability and Control, McGraw-Hill, Inc., 1993.
[6] S.-M. Baek and J.-W. Park, "Hessian Matrix Estimation in Hybrid Systems Based on an Embedded FFNN," IEEE Transactions on Neural Networks, vol. 21, no. 10, pp. 1533-1541, 2010. DOI: 10.1109/TNN.2010.2042728.
[7] I. A. Hiskens, "Trajectory Sensitivity Analysis of Hybrid Systems," IEEE Trans. on Circuits and Systems-Part I: Fundamental Theory and Applications, vol. 47, no. 2, pp. 204-220, 2000. DOI: 10.1109/81.828574.
[8] J. Nocedal and S. J. Wright, Numerical Optimization, Springer-Verlag, New York, 1999.
[9] J. E. Dennis and R. B. Schnabel, Numerical Methods for Unconstrained Optimization and Nonlinear Equations, SIAM, Philadelphia, 1996.
[10] M. S. Branicky, V. S. Borkar, and S. K. Mitter, "A unified framework for hybrid control: Model and optimal control theory," IEEE Trans. on Automatic Control, vol. 43, pp. 31-45, 1998. DOI: 10.1109/9.654885.
[11] M. A. Pai, Energy Function Analysis for Power System Stability, Kluwer, Norwell, MA, 1989.
[12] P. W. Sauer and M. A. Pai, Power System Dynamics and Stability, Prentice-Hall, Englewood Cliffs, NJ, 1998.