In this paper, we first propose a fast one-parameter relaxation (FOPR) method with a scaled preconditioner for solving saddle point problems, and then we present a formula for finding its optimal parameter. To evaluate the effectiveness of the proposed FOPR method with a scaled preconditioner, numerical experiments are provided comparing its performance with existing one- or two-parameter relaxation methods with optimal parameters, such as the SOR-like, the GSOR and the GSSOR methods.
AMS Mathematics Subject Classification: 65F10, 65F50.
1. Introduction
We consider a fast one-parameter relaxation method with a scaled preconditioner for solving the saddle point problem

$$\begin{bmatrix} A & B \\ -B^{T} & 0 \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} b \\ -q \end{bmatrix}, \qquad (1)$$

where A ∈ ℝ^{m×m} is a symmetric positive definite matrix and B ∈ ℝ^{m×n} is a matrix of full column rank. This problem (1) appears in many different scientific applications, such as constrained optimization [9], the finite element approximation of the Navier-Stokes equations [5, 6], and constrained least squares and generalized least squares problems [1, 3, 12].
Many authors have proposed one- or two-parameter relaxation iterative methods for solving the saddle point problem (1). Golub et al. [7] proposed the one-parameter SOR-like method and presented an incomplete formula for finding its optimal parameter; Bai et al. [3] proposed the two-parameter GSOR (generalized SOR) method, presented a formula for finding the two optimal parameters of the GSOR method, and gave a complete formula for finding the optimal parameter of the SOR-like method; Zhang and Lu [14] studied the two-parameter GSSOR (generalized symmetric SOR) method, and Chao et al. [4] presented a formula for finding the two optimal parameters of the GSSOR method; see also [10, 13].
This paper is organized as follows. In Section 2, we propose a fast one-parameter relaxation (FOPR) method with a scaled preconditioner, and then we present a formula for finding its optimal parameter. In Section 3, numerical experiments are provided to examine the effectiveness of the proposed FOPR method with a scaled preconditioner by comparing its performance with the existing one- or two-parameter relaxation methods with optimal parameters, such as the SOR-like, the GSOR and the GSSOR methods. Lastly, some conclusions are drawn.
2. Convergence of a fast one-parameter relaxation (FOPR) method
For the coefficient matrix of the augmented linear system (1), we consider the following splitting

$$\begin{bmatrix} A & B \\ -B^{T} & 0 \end{bmatrix} = D - L - U,$$

where

$$D = \begin{bmatrix} A & 0 \\ 0 & Q \end{bmatrix}, \qquad L = \begin{bmatrix} 0 & 0 \\ B^{T} & 0 \end{bmatrix}, \qquad U = \begin{bmatrix} 0 & -B \\ 0 & Q \end{bmatrix},$$

and Q ∈ ℝ^{n×n} is a symmetric positive definite matrix which approximates B^{T}A^{−1}B. Let

$$\Omega = \begin{bmatrix} \omega I_{m} & 0 \\ 0 & \omega^{-1} I_{n} \end{bmatrix},$$

where ω > 0 is a relaxation parameter, and I_m ∈ ℝ^{m×m} and I_n ∈ ℝ^{n×n} denote the identity matrices of order m and n, respectively. Then a fast one-parameter relaxation (FOPR) method for solving the saddle point problem (1) is defined by
$$\begin{bmatrix} x^{k+1} \\ y^{k+1} \end{bmatrix} = T_{\omega}\begin{bmatrix} x^{k} \\ y^{k} \end{bmatrix} + g_{\omega}, \qquad k = 0, 1, 2, \ldots,$$

where T_ω = (D − ΩL)^{−1}((I − Ω)D + ΩU) is the iteration matrix of the FOPR method, g_ω = (D − ΩL)^{−1}Ωc with c = (b^{T}, −q^{T})^{T}, and I is the identity matrix of order m + n.
That is, the FOPR method is defined by

$$\begin{cases} x^{k+1} = (1 - \omega)\,x^{k} + \omega A^{-1}\left(b - B y^{k}\right), \\[2pt] y^{k+1} = y^{k} + \omega^{-1} Q^{-1}\left(B^{T} x^{k+1} - q\right). \end{cases}$$

Note that the FOPR method can be viewed as a special case of the GSOR method [3] with τ = 1/ω, where ω and τ denote the two relaxation parameters of the GSOR method.
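For concreteness, the following is a minimal MATLAB sketch of the FOPR iteration in the component form above, assuming the splitting and the matrix Ω as reconstructed in this section; A, B, Q, b, q, omega and maxit are taken as given, and a fixed number of sweeps stands in for a stopping test.

```matlab
% A minimal sketch of the FOPR sweep (assumes A, B, Q, b, q, omega, maxit given).
% The Cholesky factors are computed once and reused in every sweep.
RA = chol(A);  RQ = chol(Q);                 % A = RA'*RA, Q = RQ'*RQ (both SPD)
x = zeros(size(A,1),1);  y = zeros(size(B,2),1);
for k = 1:maxit
    x = (1-omega)*x + omega*(RA \ (RA' \ (b - B*y)));   % x-update
    y = y + (1/omega)*(RQ \ (RQ' \ (B'*x - q)));        % y-update uses the new x
end
```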
Lemma 2.1 ([11]). Consider the quadratic equation x² − bx + c = 0, where b and c are real numbers. Both roots of the equation are less than one in modulus if and only if |c| < 1 and |b| < 1 + c.
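As a quick worked check of Lemma 2.1, pick any pair (b, c) satisfying the criterion and inspect the root moduli; the values below are arbitrary.

```matlab
% Numerical illustration of Lemma 2.1 for one sample pair (b, c).
b = 0.8;  c = 0.3;                 % here |c| < 1 and |b| < 1 + c both hold
lam = roots([1 -b c]);             % roots of x^2 - b*x + c = 0
max(abs(lam))                      % = sqrt(0.3) = 0.5477..., indeed less than 1
```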
Let λ be an eigenvalue of T_ω and (x^{T}, y^{T})^{T} be the corresponding eigenvector. Then we have

$$\left((I - \Omega)D + \Omega U\right)\begin{bmatrix} x \\ y \end{bmatrix} = \lambda\,(D - \Omega L)\begin{bmatrix} x \\ y \end{bmatrix},$$

or equivalently,

$$\begin{cases} (\lambda - 1 + \omega)\,A x = -\,\omega B y, \\[2pt] \lambda\, B^{T} x = \omega(\lambda - 1)\, Q y. \end{cases} \qquad (5)$$
From now on, let ρ(H) denote the spectral radius of a square matrix H. The following theorem provides the convergence result for the FOPR method.
Theorem 2.2. Let μ_max be the spectral radius of Q^{−1}B^{T}A^{−1}B. If μ_max < 4, then the FOPR method converges for all

$$0 < \omega < \frac{4 - \mu_{\max}}{2}.$$
Proof. Let μ be an eigenvalue of Q^{−1}B^{T}A^{−1}B and λ be an eigenvalue of T_ω. Then μ > 0. From equation (5), eliminating y from the two relations, one can obtain the following quadratic equation for λ:

$$\lambda^{2} - (2 - \omega - \mu)\lambda + (1 - \omega) = 0. \qquad (6)$$

Applying Lemma 2.1 to (6), one easily obtains that if 0 < ω < (4 − μ_max)/2, then |1 − ω| < 1 and |2 − ω − μ| < 1 + (1 − ω) for every eigenvalue μ; hence ρ(T_ω) < 1, which completes the proof. □
Notice that if μ_max ≥ 4 in Theorem 2.2, then the region of ω for which the FOPR method converges may be an empty set. The next theorem provides an optimal parameter ω for which the FOPR method performs best.
Theorem 2.3. Let μ_min and μ_max be the minimum and maximum eigenvalues of Q^{−1}B^{T}A^{−1}B, respectively. Assume that μ_max < 4. Then the optimal parameter ω for the FOPR method is given by ω = ω_o, where

$$\omega_{o} = \min\left\{\, 2\sqrt{\mu_{\min}} - \mu_{\min},\; 2\sqrt{\mu_{\max}} - \mu_{\max} \,\right\}.$$

Moreover, ρ(T_{ω_o}) = √(1 − ω_o). That is,

$$\rho(T_{\omega_{o}}) = \max\left\{\, 1 - \sqrt{\mu_{\min}},\; \sqrt{\mu_{\max}} - 1 \,\right\}.$$
Proof. Let μ be an eigenvalue of Q^{−1}B^{T}A^{−1}B and λ be an eigenvalue of T_ω. From the quadratic equation (6) for λ, one obtains the two roots

$$\lambda_{\pm} = \frac{(2 - \omega - \mu) \pm \sqrt{(\omega + \mu)^{2} - 4\mu}}{2}.$$

Let f(ω) = 2 − ω − μ and g(ω) = (ω + μ)² − 4μ. The necessary and sufficient condition for the roots λ to be real is g(ω) ≥ 0, which is equivalent to ω ≥ 2√μ − μ, since ω + μ > 0. Since λ₊λ₋ = 1 − ω, the roots form a conjugate pair of modulus √(1 − ω) whenever g(ω) < 0. Hence one obtains

$$|\lambda| = \begin{cases} \sqrt{1 - \omega}, & 0 < \omega < 2\sqrt{\mu} - \mu, \\[4pt] \dfrac{|2 - \omega - \mu| + \sqrt{(\omega + \mu)^{2} - 4\mu}}{2}, & \omega \ge 2\sqrt{\mu} - \mu. \end{cases} \qquad (8)$$

Notice that 2√μ − μ ∈ (0, 1] for μ ∈ (0, 4) and it has the maximum value 1 at μ = 1. Since √(1 − ω) is a decreasing function of ω and (|2 − ω − μ| + √((ω + μ)² − 4μ))/2 is an increasing function of ω, (8) implies that, given μ, |λ| takes the minimum value |1 − √μ| = √(1 − (2√μ − μ)) at ω = 2√μ − μ. If S is a set containing all eigenvalues of Q^{−1}B^{T}A^{−1}B, then

$$\rho(T_{\omega_{o}}) = \sqrt{1 - \omega_{o}} = \max\left\{\, 1 - \sqrt{\mu_{\min}},\; \sqrt{\mu_{\max}} - 1 \,\right\},$$

where ω_o = min_{μ∈S}(2√μ − μ) = min{2√μ_min − μ_min, 2√μ_max − μ_max}; the last equality holds because 2√μ − μ is concave in μ, so its minimum over S is attained at μ_min or μ_max. Hence the theorem follows. □
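With the formulas of Theorem 2.3 as reconstructed here, the optimal parameter and convergence factor reduce to one-liners; for instance, μ_min = 0.25 and μ_max = 2.25 give a balanced spectrum with ω_o = 0.75 and ρ(T_{ω_o}) = 0.5.

```matlab
% Optimal FOPR parameter and convergence factor per Theorem 2.3 (as reconstructed).
mu_min = 0.25;  mu_max = 2.25;                                     % sample spectrum
omega_o = min(2*sqrt(mu_min) - mu_min, 2*sqrt(mu_max) - mu_max);   % = 0.75
rho_o   = sqrt(1 - omega_o);                                       % = 0.5
% check: max(1 - sqrt(mu_min), sqrt(mu_max) - 1) = max(0.5, 0.5) = 0.5
```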
As can be seen from Theorems 2.2 and 2.3, a big disadvantage of the FOPR method is that it requires the rather strong condition μ_max < 4, which may not hold for some types of preconditioners Q. To remedy this problem, we need to scale the preconditioner Q so that 0 < μ_min, μ_max < 4. From Theorem 2.3, it can also be seen that, in order to minimize ρ(T_{ω_o}), Q needs to be scaled so that 1 − √μ_min = √μ_max − 1, that is, √μ_min + √μ_max = 2. The next theorem shows how to scale the preconditioner Q so that 0 < μ_min, μ_max < 4 and ρ(T_{ω_o}) is minimized; it also gives the optimal convergence factor of the FOPR method with a scaled preconditioner.
Theorem 2.4. Let μ_min and μ_max be the minimum and maximum eigenvalues of Q^{−1}B^{T}A^{−1}B, respectively. Let Q_s = sQ be a scaled preconditioner, where s > 0 is a scaling factor, and let ν_min and ν_max be the minimum and maximum eigenvalues of Q_s^{−1}B^{T}A^{−1}B, respectively. If

$$s = \frac{\left(\sqrt{\mu_{\min}} + \sqrt{\mu_{\max}}\right)^{2}}{4},$$

then 0 < ν_min, ν_max < 4 and

$$\sqrt{\nu_{\min}} + \sqrt{\nu_{\max}} = 2,$$

and

$$\rho\!\left(T^{s}_{\omega_{o}^{s}}\right) = \frac{\sqrt{\mu_{\max}} - \sqrt{\mu_{\min}}}{\sqrt{\mu_{\max}} + \sqrt{\mu_{\min}}},$$

where ω_o^{s} and T^{s}_{ω_o^{s}} refer to the optimal parameter and the iteration matrix for the FOPR method with the scaled preconditioner Q_s, respectively.
Proof. Since Q_s^{−1}B^{T}A^{−1}B = s^{−1}Q^{−1}B^{T}A^{−1}B, one has ν_min = μ_min/s and ν_max = μ_max/s. Since s = (√μ_min + √μ_max)²/4, one obtains

$$\sqrt{\nu_{\min}} = \frac{2\sqrt{\mu_{\min}}}{\sqrt{\mu_{\min}} + \sqrt{\mu_{\max}}}, \qquad \sqrt{\nu_{\max}} = \frac{2\sqrt{\mu_{\max}}}{\sqrt{\mu_{\min}} + \sqrt{\mu_{\max}}}. \qquad (10)$$

Using (10), it can be easily shown that 0 < ν_min, ν_max < 4 and √ν_min + √ν_max = 2. The remaining part of the theorem can be proved by simple calculation using Theorem 2.3. □
From Theorem 2.4, it can be seen that the optimal convergence factor of the FOPR method with the scaled preconditioner Q_s is the same as that of the GSOR method [3] with the preconditioner Q. Notice that the scaling factor s in Theorem 2.4 can be easily computed in MATLAB, since only the largest and smallest eigenvalues of Q^{−1}B^{T}A^{−1}B are required.
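A sketch of that computation follows, with the scaling formula as reconstructed in Theorem 2.4; the matrix M is formed explicitly for clarity, and for large problems one would avoid forming it explicitly.

```matlab
% Computing the scaling factor s of Theorem 2.4 (formula as reconstructed here).
M      = Q \ (B' * (A \ B));                   % represents Q^{-1}*B'*inv(A)*B
mu_max = real(eigs(M, 1, 'lm'));               % largest eigenvalue of M
mu_min = real(eigs(M, 1, 'sm'));               % smallest eigenvalue of M
s      = (sqrt(mu_min) + sqrt(mu_max))^2 / 4;  % scaling factor of Theorem 2.4
Qs     = s * Q;                                % scaled preconditioner Q_s = s*Q
```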
We next summarize the formulas for finding the optimal parameters of the SOR-like, the GSOR and the GSSOR methods, which are used for the numerical experiments in Section 3.
Remark 2.1 ([7, 3, 4]). Let μ_min and μ_max be the minimum and maximum eigenvalues of Q^{−1}B^{T}A^{−1}B, respectively. Then
(a) the optimal parameter ω_o for the SOR-like method takes one of three formulas, depending upon the values of μ_min and μ_max; see [3] for the precise expressions;
(b) the optimal parameters ω_o and τ_o for the GSOR method are given in [3];
(c) the optimal parameters ω_o and τ_o for the GSSOR method are given in [4].
3. Numerical results
To evaluate the effectiveness of the FOPR method with a scaled preconditioner, we provide numerical experiments comparing its performance with those of the SOR-like, the GSOR and the GSSOR methods. For performance comparison, both the FOPR method with the preconditioner Q and the FOPR method with the scaled preconditioners Q_s = sQ and Q_{s+ϵ} = (s + ϵ)Q are provided, where s is the scaling factor defined in Theorem 2.4 and ϵ is a positive number chosen appropriately small as compared with s.
In Tables 2 to 5, Iter denotes the number of iteration steps and CPU denotes the elapsed CPU time in seconds. In all experiments, the right-hand side vector (b^{T}, −q^{T})^{T} ∈ ℝ^{m+n} was chosen such that the exact solution of the saddle point problem (1) is (1, 1, ..., 1)^{T} ∈ ℝ^{m+n}, and the initial vector was set to the zero vector. From now on, let ║·║ denote the L_2-norm.
Example 3.1 ([2]). We consider the saddle point problem (1), in which

$$A = \begin{bmatrix} I \otimes T + T \otimes I & 0 \\ 0 & I \otimes T + T \otimes I \end{bmatrix} \in \mathbb{R}^{2p^{2} \times 2p^{2}}, \qquad B = \begin{bmatrix} I \otimes F \\ F \otimes I \end{bmatrix} \in \mathbb{R}^{2p^{2} \times p^{2}},$$

and

$$T = \frac{1}{h^{2}}\,\mathrm{tridiag}(-1,\, 2,\, -1) \in \mathbb{R}^{p \times p}, \qquad F = \frac{1}{h}\,\mathrm{tridiag}(-1,\, 1,\, 0) \in \mathbb{R}^{p \times p},$$

with ⊗ denoting the Kronecker product and h = 1/(p + 1) the discretization mesh size. For this example, m = 2p² and n = p², so the total number of variables is 3p². We choose the matrix Q as an approximation to the matrix B^{T}A^{−1}B, according to the four cases listed in Table 1. The iterations of the relaxation iterative methods are terminated once the current iterate satisfies ERR < 10^{−9}, where ERR denotes the relative error of the current iterate with respect to the exact solution (1, 1, ..., 1)^{T}.
Numerical results for this example are listed in Tables 2 to 5. In Tables 4 and 5, numerical results for the FOPR method are not listed, since it does not work because μ_max > 4; thus only those for the FOPR method with the scaled preconditioners Q_s and Q_{s+ϵ} are listed.
Example 3.2 ([3]). We consider the same problem as in Example 3.1 except that F is defined as in [3], which makes F a highly ill-conditioned Toeplitz matrix. So we choose only Cases III and IV of Q in Table 1 as approximations to the matrix B^{T}A^{−1}B, and all iterations are terminated once the current iterate satisfies RES < 10^{−9}, where RES denotes the relative residual norm of the current iterate.
Since μ_max > 4 for these choices of Q, numerical results for the FOPR method with the scaled preconditioners Q_s and Q_{s+ϵ} are listed in Tables 4 and 5.
Table 1. Choices of the matrix Q.
Table 2. Numerical results for Example 3.1 with Case I of Q.
Table 3. Numerical results for Example 3.1 with Case II of Q.
Table 4. Numerical results for Case III of Q.
Table 5. Numerical results for Case IV of Q.
All numerical tests are carried out on a PC equipped with an Intel Core i5-4570 3.2 GHz CPU and 8 GB RAM, using MATLAB R2014a. For the numerical experiments, the optimal parameters described in Remark 2.1 are used for all relaxation iterative methods considered in this paper. For test runs of the FOPR method with the scaled preconditioner Q_{s+ϵ}, we have tried the values ϵ ∈ [0.0001, 0.0005] in Tables 2 and 3, and the values ϵ ∈ [0.01, 0.05] in Tables 4 and 5. For all of these values of ϵ, the FOPR method with Q_{s+ϵ} performs better than the GSOR method, and the values of ϵ for which it performs best are reported in Tables 2 to 5.
As can be expected from Theorem 2.4, the FOPR method with the scaled preconditioner Q_s performs as well as the GSOR method. The FOPR method with the scaled preconditioner Q_{s+ϵ} performs best of all methods considered in this paper; specifically, it performs much better than the other methods for Cases III and IV of Q, for which μ_max > 4 (see Tables 2 to 5). On the other hand, the GSSOR method performs worse than the FOPR and GSOR methods, since its computational cost per iteration is higher than that of the others.
4. Conclusions
In this paper, we proposed a fast one-parameter relaxation (FOPR) method with a scaled preconditioner for solving saddle point problems, and we presented a formula for finding its optimal parameter. Both theoretical and computational results showed that the FOPR method with the scaled preconditioner Q_s performs as well as the GSOR method. In addition, the FOPR method with the scaled preconditioner Q_{s+ϵ} performs better than the other one- or two-parameter relaxation methods with optimal parameters; specifically, it performs much better than the other methods for Cases III and IV of Q, for which μ_max > 4 (see Tables 2 to 5). Hence, it may be concluded that the FOPR method with the scaled preconditioner Q_{s+ϵ} is strongly recommended for solving saddle point problems when μ_max = ρ(Q^{−1}B^{T}A^{−1}B) > 4.
BIO
Se-Young Oh received his M.Sc. from Seoul National University and his Ph.D. from the University of Minnesota. Since 1992 he has been at Chungnam National University. His research interests include numerical optimization and biological computation.
Department of Mathematics, Chungnam National University, Daejeon 34134, Korea
email: soh@cnu.ac.kr
Jae Heon Yun received his M.Sc. from Kyungpook National University and his Ph.D. from Iowa State University. He has been a professor at Chungbuk National University since 1991. His research interests are computational mathematics, iterative methods and parallel computation.
Department of Mathematics, College of Natural Sciences, Chungbuk National University, Cheongju 28644, Korea
email: gmjae@chungbuk.ac.kr
References

Arioli M., Duff I.S., de Rijk P.P.M. (1989), On the augmented system approach to sparse least squares problems, Numer. Math. 55, 667-684. DOI: 10.1007/BF01389335
Bai Z.Z., Golub G.H., Pan J.Y. (2004), Preconditioned Hermitian and skew-Hermitian splitting methods for non-Hermitian positive semidefinite linear systems, Numer. Math. 98, 1-32. DOI: 10.1007/s00211-004-0521-1
Bai Z.Z., Parlett B.N., Wang Z.Q. (2005), On generalized successive overrelaxation methods for augmented linear systems, Numer. Math. 102, 1-38. DOI: 10.1007/s00211-005-0643-0
Chao Z., Zhang N., Lu Y.Z. (2014), Optimal parameters of the generalized symmetric SOR method for augmented systems, J. Comput. Appl. Math. 266, 52-60. DOI: 10.1016/j.cam.2014.01.023
Elman H., Silvester D.J. (1996), Fast nonsymmetric iteration and preconditioning for Navier-Stokes equations, SIAM J. Sci. Comput. 17, 33-46. DOI: 10.1137/0917004
Fischer B., Ramage A., Silvester D.J., Wathen A.J. (1998), Minimum residual methods for augmented systems, BIT 38, 527-543. DOI: 10.1007/BF02510258
Wright S. (1997), Stability of augmented system factorization in interior point methods, SIAM J. Matrix Anal. Appl. 18, 191-222. DOI: 10.1137/S0895479894271093
Wu S.L., Huang T.Z., Zhao X.L. (2009), A modified SSOR iterative method for augmented systems, J. Comput. Appl. Math. 228, 424-433. DOI: 10.1016/j.cam.2008.10.006
Young D.M. (1971), Iterative Solution of Large Linear Systems, Academic Press, New York.
Yuan J.Y., Iusem A.N. (1996), Preconditioned conjugate gradient methods for generalized least squares problem, J. Comput. Appl. Math. 71, 287-297. DOI: 10.1016/0377-0427(95)00239-1
Oh S.Y., Yun J.H., Kim K.S. (2015), ESOR method with diagonal preconditioners for SPD linear systems, J. Appl. Math. & Informatics 33, 111-118. DOI: 10.14317/jami.2015.111