PERFORMANCE COMPARISON OF PRECONDITIONED ITERATIVE METHODS WITH DIRECT PRECONDITIONERS†
Journal of Applied Mathematics & Informatics. 2014. Sep, 32(3_4): 389-403
• Received : November 15, 2013
• Accepted : February 25, 2014
• Published : September 28, 2014
JAE HEON YUN
HYO JIN LIM
KYOUM SUN KIM

Abstract
In this paper, we first provide comparison results of preconditioned AOR methods with direct preconditioners I + βL, I + βU and I + β(L + U) for solving a linear system whose coefficient matrix is a large sparse irreducible L-matrix, where β > 0. Next we propose how to find a near optimal parameter β for which Krylov subspace method with these direct preconditioners performs nearly best. Lastly, numerical experiments are provided to compare the performance of preconditioned iterative methods and to illustrate the theoretical results. AMS Mathematics Subject Classification: 65F10, 65F15.
1. Introduction
In this paper, we consider the following nonsingular linear system
Ax = b,    (1)
where A = (a_{ij}) ∈ ℝ^{n×n} is a large sparse irreducible L-matrix. Throughout the paper, we assume that A = I − L − U, where I is the identity matrix, and L and U are strictly lower triangular and strictly upper triangular matrices, respectively. Then the AOR iterative method [3] for solving the linear system (1) can be expressed as
x_{k+1} = T_{r,ω} x_k + M_{r,ω} b,  k = 0, 1, 2, . . . ,    (2)
where x 0 is an initial vector, ω and r are real parameters with ω ≠ 0,
T_{r,ω} = (I − rL)^{-1}((1 − ω)I + (ω − r)L + ωU),    (3)
and M_{r,ω} = ω(I − rL)^{-1}. The matrix T_{r,ω} is called the iteration matrix of the AOR iterative method.
In order to accelerate the convergence of iterative method for solving the linear system (1), the original linear system (1) is transformed into the following preconditioned linear system
PAx = Pb,    (4)
where P, called a preconditioner, is a nonsingular matrix. Various types of preconditioners P, which are nonnegative matrices with unit diagonal entries, have been proposed by many researchers [2, 4, 5, 6, 7, 9, 11, 14, 15, 17]. In this paper, we study comparison results of preconditioned iterative methods corresponding to the direct preconditioners P = P_l = I + βL, P = P_u = I + βU and P = P_b = I + β(L + U), where β is a positive real number. Here, a direct preconditioner is one that can be constructed without any computational step. The preconditioner P_u was first introduced by Kotakemori et al. [4] and has been studied further in [9, 11, 15]. The preconditioner P_l with β = 1 was first proposed by Usui et al. [11], so it is worth studying P_l further for β > 0 in this paper.
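To illustrate how cheaply these direct preconditioners can be formed, the following sketch builds P_l, P_u and P_b from a sparse A = I − L − U. It uses Python/SciPy rather than the MATLAB environment of Section 6, and the function name is our own.

```python
import numpy as np
import scipy.sparse as sp

def direct_preconditioners(A, beta):
    """Build the direct preconditioners P_l = I + beta*L, P_u = I + beta*U,
    P_b = I + beta*(L + U) for A = I - L - U, where -L and -U are the
    strictly lower and strictly upper triangular parts of A."""
    A = sp.csr_matrix(A)
    n = A.shape[0]
    I = sp.identity(n, format="csr")
    L = -sp.tril(A, k=-1)   # strictly lower triangular part of A, negated
    U = -sp.triu(A, k=1)    # strictly upper triangular part of A, negated
    return I + beta * L, I + beta * U, I + beta * (L + U)
```

No factorization is involved; each preconditioner costs a single pass over the nonzeros of A, which is what makes these preconditioners "direct".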
Let Au = PuA and UL = Γ + E + F , where Γ is a diagonal matrix, E is a strictly lower triangular matrix, and F is a strictly upper triangular matrix. Then, one obtains
A_u = D_u − L_u − U_u,    (5)
where D_u = I − βΓ, L_u = L + βE, and U_u = (1 − β)U + βU² + βF.
Let Al = PlA and LU = Γ 1 + E 1 + F 1 , where Γ 1 is a diagonal matrix, E 1 is a strictly lower triangular matrix, and F 1 is a strictly upper triangular matrix. Then, one obtains
A_l = D_l − L_l − U_l,    (6)
where D_l = I − βΓ₁, L_l = (1 − β)L + βL² + βE₁, and U_l = U + βF₁.
Recently, Wang and Song [14] studied convergence of the preconditioned AOR method with preconditioner Pb = I + β ( L + U ), where 0 < β ≤ 1. Let Ab = PbA .
Then, one obtains
A_b = D_b − L_b − U_b,    (7)
where D_b = I − βΓ − βΓ₁, L_b = (1 − β)L + βL² + βE + βE₁, and U_b = (1 − β)U + βU² + βF + βF₁.
If we apply the AOR iterative method to the preconditioned linear system (4), then we get the preconditioned AOR iterative method whose iteration matrix is
T_{p,r,ω} = (D_p − rL_p)^{-1}((1 − ω)D_p + (ω − r)L_p + ωU_p)  for p ∈ {u, l, b}.    (8)
Notice that the computational costs for constructing Au , Al and Ab will not be expensive since A is assumed to be a large sparse matrix.
The purpose of this paper is to provide performance comparison of preconditioned iterative methods with direct preconditioners Pl , Pu and Pb for solving a linear system whose coefficient matrix is a large sparse irreducible L -matrix satisfying some weaker conditions than those used in the existing literature. This paper is organized as follows. In Section 2, we present some notation, definitions and preliminary results. In Section 3, we provide comparison results of preconditioned AOR methods with preconditioner Pu . In Section 4, we provide comparison results of preconditioned AOR methods with preconditioners Pl and Pb . In Section 5, we propose how to find a near optimal parameter β for which Krylov subspace method with the direct preconditioners Pl , Pu and Pb performs nearly best. In Section 6, numerical experiments are provided to compare the performance of preconditioned iterative methods and to illustrate the theoretical results in Sections 3 to 5. Lastly, some conclusions are drawn.
2. Preliminaries
A matrix A = (a_{ij}) ∈ ℝ^{n×n} is called a Z-matrix if a_{ij} ≤ 0 for i ≠ j, an L-matrix if A is a Z-matrix and a_{ii} > 0 for i = 1, 2, . . . , n, and an M-matrix if A is a Z-matrix and A^{-1} ≥ 0. For a vector x ∈ ℝ^n, x ≥ 0 (x > 0) denotes that all components of x are nonnegative (positive). For two vectors x, y ∈ ℝ^n, x ≥ y (x > y) means that x − y ≥ 0 (x − y > 0). These definitions carry over immediately to matrices. For a square matrix A, ρ(A) denotes the spectral radius of A, and A is called irreducible if the directed graph of A is strongly connected [13]. Some useful results which we refer to later are provided below.
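The two standing assumptions, that A is an L-matrix and that A is irreducible, can be checked mechanically through the strongly-connected-digraph characterization above. A small sketch (Python/SciPy assumed; the function name is our own):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import connected_components

def is_irreducible_L_matrix(A):
    """Check that A is an L-matrix (a_ii > 0, a_ij <= 0 for i != j) and
    irreducible (the directed graph of A is strongly connected)."""
    A = np.asarray(A, dtype=float)
    off = A - np.diag(np.diag(A))
    is_L = np.all(np.diag(A) > 0) and np.all(off <= 0)
    # irreducible <=> digraph with edge i -> j iff a_ij != 0 is strongly connected
    graph = sp.csr_matrix((A != 0).astype(int))
    ncomp, _ = connected_components(graph, directed=True, connection="strong")
    return is_L and ncomp == 1
```

A tridiagonal matrix with negative off-diagonals passes both tests, while a diagonal matrix fails irreducibility.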
Theorem 2.1 (Varga [13] ). Let A ≥ 0 be an irreducible matrix. Then
• (a) A has a positive eigenvalue equal to ρ(A).
• (b) A has an eigenvector x > 0 corresponding to ρ(A).
• (c) ρ(A) is a simple eigenvalue of A.
Theorem 2.2 (Berman and Plemmons [1] ). Let A ≥ 0 be a matrix. Then the following hold.
• (a) If Ax ≥ βx for a vector x ≥ 0 and x ≠ 0, then ρ(A) ≥ β.
• (b) If Ax ≤ γx for a vector x > 0, then ρ(A) ≤ γ. Moreover, if A is irreducible and if βx ≤ Ax ≤ γx, equality excluded, for a vector x ≥ 0 and x ≠ 0, then β < ρ(A) < γ and x > 0.
• (c) If βx < Ax < γx for a vector x > 0, then β < ρ(A) < γ.
Theorem 2.3 (Li and Sun [6]). Let A be an irreducible matrix, and let A = M − N be an M-splitting of A with N ≠ 0. Then there exists a vector x > 0 such that M^{-1}Nx = ρ(M^{-1}N)x and ρ(M^{-1}N) > 0.
3. Comparison results for preconditioner Pu
We first provide a comparison result for the preconditioned Gauss-Seidel method with the preconditioner Pu .
Theorem 3.1. Let A = (a_{ij}) ∈ ℝ^{n×n} be an irreducible L-matrix. Suppose that 0 < (UL)_{ii} < 1 for all 1 ≤ i ≤ n − 1, where UL = Γ + E + F with Γ a diagonal matrix, E a strictly lower triangular matrix, and F a strictly upper triangular matrix.
Let T = (I − L)^{-1}U and T_u = (D_u − L_u)^{-1}U_u, where D_u, L_u and U_u are defined by (5). If 0 < β ≤ 1, then
• (a) ρ(T_u) < ρ(T) if ρ(T) < 1.
• (b) ρ(T_u) = ρ(T) if ρ(T) = 1.
• (c) ρ(T_u) > ρ(T) if ρ(T) > 1.
Proof. Since A is an L-matrix, D, L and U are nonnegative. Since 0 < β ≤ 1 and (UL)_{ii} < 1, D_u, L_u and U_u are nonnegative and thus T_u ≥ 0. Since A = (I − L) − U is an M-splitting of A and U ≠ 0, from Theorem 2.3 there exists a vector x > 0 such that Tx = λx, where λ = ρ(T). From Tx = λx, one easily obtains
Ux = λ(I − L)x.    (9)
Using (5) and (9),
T_u x − λx = β(λ − 1)(D_u − L_u)^{-1}(Γ + E + λ^{-1}U²)x.    (10)
Let y = (Γ + E + λ^{-1}U²)x. Since (UL)_{ii} > 0 for all 1 ≤ i ≤ n − 1, y ≥ 0 is a nonzero vector whose first (n − 1) components are positive and whose last component is zero. Since A is irreducible, L_u = L + βE ≥ 0 is a strictly lower triangular matrix whose last row vector is nonzero, and thus (D_u − L_u)^{-1}y is a positive vector. It follows from (10) that T_u x < λx if λ < 1. From Theorem 2.2, ρ(T_u) < ρ(T) < 1. For the cases λ = 1 and λ > 1, T_u x = λx and T_u x > λx are obtained from (10), respectively. Hence, the theorem follows from Theorem 2.2.
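Theorem 3.1 can be observed numerically. The sketch below (Python/NumPy in place of the paper's MATLAB; the tridiagonal test matrix is our own choice, satisfying 0 < (UL)_{ii} < 1 for i ≤ n − 1) compares the Gauss-Seidel spectral radius before and after preconditioning with P_u:

```python
import numpy as np

n, beta = 6, 1.0
# A = I - L - U: irreducible tridiagonal L-matrix with off-diagonals -0.4,
# so (UL)_ii = 0.16 for i <= n-1 and the hypotheses of Theorem 3.1 hold
A = np.eye(n) - 0.4 * (np.eye(n, k=-1) + np.eye(n, k=1))
L = -np.tril(A, -1)
U = -np.triu(A, 1)

# classical Gauss-Seidel iteration matrix T = (I - L)^{-1} U
T = np.linalg.solve(np.eye(n) - L, U)

# preconditioned matrix A_u = (I + beta*U) A = D_u - L_u - U_u as in (5)
Au = (np.eye(n) + beta * U) @ A
Du = np.diag(np.diag(Au))
Lu = -np.tril(Au, -1)
Uu = -np.triu(Au, 1)
Tu = np.linalg.solve(Du - Lu, Uu)

rho = lambda M: max(abs(np.linalg.eigvals(M)))
# Theorem 3.1(a): rho(T) < 1 implies rho(T_u) < rho(T)
assert rho(Tu) < rho(T) < 1
```

The same experiment with ρ(T) > 1 (e.g. larger off-diagonal weights) would exhibit case (c) instead.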
We now provide a comparison result for the preconditioned AOR method with the preconditioner Pu .
Theorem 3.2. Let A = (a_{ij}) ∈ ℝ^{n×n} be an irreducible L-matrix. Suppose that 0 < r ≤ ω ≤ 1 (r ≠ 1) and 0 < (UL)_{ii} < 1 for all 1 ≤ i ≤ n − 1, where UL = Γ + E + F with Γ a diagonal matrix, E a strictly lower triangular matrix, and F a strictly upper triangular matrix.
Let Tr,ω and Tu,r,ω be defined by (3) and (8), respectively. If 0 < β ≤ 1, then
• (a) ρ(T_{u,r,ω}) < ρ(T_{r,ω}) if ρ(T_{r,ω}) < 1.
• (b) ρ(T_{u,r,ω}) = ρ(T_{r,ω}) if ρ(T_{r,ω}) = 1.
• (c) ρ(T_{u,r,ω}) > ρ(T_{r,ω}) if ρ(T_{r,ω}) > 1.
Proof. Notice that T_{r,ω} can be expressed as T_{r,ω} = (1 − ω)I + ω(1 − r)L + ωU + H, where H is a nonnegative matrix. By the assumptions, it can be seen that T_{r,ω} ≥ 0 is irreducible and T_{u,r,ω} ≥ 0. From Theorem 2.1, there exists a vector x > 0 such that T_{r,ω}x = λx, where λ = ρ(T_{r,ω}). From T_{r,ω}x = λx, one easily obtains
(1 − ω)x + (ω − r)Lx + ωUx = λ(I − rL)x.    (11)
Using (5) and (11),
T_{u,r,ω}x − λx = β(λ − 1)(D_u − rL_u)^{-1}(Γ + rE + λ^{-1}((1 − ω)U + (ω − r)UL + ωU²))x.    (12)
Let y = (Γ + rE + λ^{-1}((1 − ω)U + (ω − r)UL + ωU²))x. Since 0 < r ≤ ω ≤ 1 and (UL)_{ii} > 0 for all 1 ≤ i ≤ n − 1, y ≥ 0 is a nonzero vector whose first (n − 1) components are positive and whose last component is zero. Since A is irreducible and r ≠ 0, L_u = L + βE ≥ 0 is a strictly lower triangular matrix whose last row vector is nonzero, and thus (D_u − rL_u)^{-1}y is a positive vector. It follows from (12) that T_{u,r,ω}x < λx if λ < 1. From Theorem 2.2, ρ(T_{u,r,ω}) < ρ(T_{r,ω}) < 1. For the cases λ = 1 and λ > 1, T_{u,r,ω}x = λx and T_{u,r,ω}x > λx are obtained from (12), respectively. Hence, the theorem follows from Theorem 2.2.
If 0 < β < 1 in Theorem 3.2, the assumptions for ( UL ) ii can be weakened.
Theorem 3.3. Let A = (a_{ij}) ∈ ℝ^{n×n} be an irreducible L-matrix. Suppose that 0 < r ≤ ω ≤ 1 (r ≠ 1), (UL)_{ii} < 1 for all 1 ≤ i ≤ n − 1, and (UL)_{ii} > 0 for at least one i, where UL = Γ + E + F with Γ a diagonal matrix, E a strictly lower triangular matrix, and F a strictly upper triangular matrix.
Let Tr,ω and Tu,r,ω be defined by (3) and (8), respectively. If 0 < β < 1, then
• (a) ρ(T_{u,r,ω}) < ρ(T_{r,ω}) if ρ(T_{r,ω}) < 1.
• (b) ρ(T_{u,r,ω}) = ρ(T_{r,ω}) if ρ(T_{r,ω}) = 1.
• (c) ρ(T_{u,r,ω}) > ρ(T_{r,ω}) if ρ(T_{r,ω}) > 1.
Proof. Since A is irreducible and β < 1, A_u is also irreducible. Hence it can easily be shown that both T_{r,ω} and T_{u,r,ω} are nonnegative and irreducible. From Theorem 2.1, there exists a vector x > 0 such that T_{r,ω}x = λx, where λ = ρ(T_{r,ω}). From equation (12), one obtains
T_{u,r,ω}x − λx = β(λ − 1)(D_u − rL_u)^{-1}y,  where y = (Γ + rE + λ^{-1}((1 − ω)U + (ω − r)UL + ωU²))x ≥ 0 is nonzero.    (13)
If λ < 1, then from (13) T_{u,r,ω}x ≤ λx and T_{u,r,ω}x ≠ λx. Since T_{u,r,ω} is irreducible, Theorem 2.2 implies that ρ(T_{u,r,ω}) < ρ(T_{r,ω}) < 1. For the cases λ = 1 and λ > 1, T_{u,r,ω}x = λx and T_{u,r,ω}x ≥ λx (with T_{u,r,ω}x ≠ λx) are obtained from (13), respectively. Hence, the theorem follows from Theorem 2.2.
By combining Theorems 3.1 and 3.2, we can obtain the following theorem.
Theorem 3.4. Let A = (a_{ij}) ∈ ℝ^{n×n} be an irreducible L-matrix. Suppose that 0 < r ≤ ω ≤ 1 and 0 < (UL)_{ii} < 1 for all 1 ≤ i ≤ n − 1, where UL = Γ + E + F with Γ a diagonal matrix, E a strictly lower triangular matrix, and F a strictly upper triangular matrix.
Let T_{r,ω} and T_{u,r,ω} be defined by (3) and (8), respectively. If 0 < β ≤ 1, then
• (a) ρ(T_{u,r,ω}) < ρ(T_{r,ω}) if ρ(T_{r,ω}) < 1.
• (b) ρ(T_{u,r,ω}) = ρ(T_{r,ω}) if ρ(T_{r,ω}) = 1.
• (c) ρ(T_{u,r,ω}) > ρ(T_{r,ω}) if ρ(T_{r,ω}) > 1.
Proof . If r = 1, then ω = 1 and thus the theorem follows from Theorem 3.1. If r ≠ 1, then the theorem follows from Theorem 3.2.
4. Comparison results for preconditioners Pl and Pb
We first provide a comparison result for the preconditioned Gauss-Seidel method with the preconditioner P_l.
Theorem 4.1. Let A = (a_{ij}) ∈ ℝ^{n×n} be an irreducible L-matrix. Suppose that 0 < (LU)_{ii} < 1 for all 2 ≤ i ≤ n, where LU = Γ₁ + E₁ + F₁ with Γ₁ a diagonal matrix, E₁ a strictly lower triangular matrix, and F₁ a strictly upper triangular matrix.
Let T = (I − L)^{-1}U and T_l = (D_l − L_l)^{-1}U_l, where D_l, L_l and U_l are defined by (6). If 0 < β ≤ 1, then
• (a) ρ(T_l) < ρ(T) if ρ(T) < 1.
• (b) ρ(T_l) = ρ(T) if ρ(T) = 1.
• (c) ρ(T_l) > ρ(T) if ρ(T) > 1.
Proof. Since A is an L-matrix, D, L and U are nonnegative. Since 0 < β ≤ 1 and (LU)_{ii} < 1, D_l, L_l and U_l are nonnegative and thus T_l ≥ 0. Since A = (I − L) − U is an M-splitting of A and U ≠ 0, from Theorem 2.3 there exists a vector x > 0 such that Tx = λx, where λ = ρ(T). From Tx = λx, one easily obtains
Ux = λ(I − L)x.    (14)
Using (6) and (14),
T_l x − λx = β(λ − 1)(D_l − L_l)^{-1}(Γ₁ + E₁)x.    (15)
Let y = (Γ₁ + E₁)x. Since (LU)_{ii} > 0 for all 2 ≤ i ≤ n, y ≥ 0 is a nonzero vector whose first component is zero and whose last (n − 1) components are positive. Thus z = (D_l − L_l)^{-1}y ≥ 0 is also a nonzero vector whose first component is zero and whose last (n − 1) components are positive. Let
T_l = [0  T₁₂; 0  T₂₂],   x = [x₁; x₂],   z = [0; z₂],    (16)
where z₂ > 0, T₁₂ ∈ ℝ^{1×(n−1)} and T₂₂ ∈ ℝ^{(n−1)×(n−1)}. From (15) and (16),
T₂₂x₂ − λx₂ = β(λ − 1)z₂.    (17)
Since z 2 > 0, T 22 x 2 < λx 2 from (17) if λ < 1. From Theorem 2.2, ρ ( T 22 ) < ρ ( T ) < 1. Since ρ ( Tl ) = ρ ( T 22 ), ρ ( Tl ) < ρ ( T ) is obtained. For the cases of λ = 1 and λ > 1, T 22 x 2 = λx 2 and T 22 x 2 > λx 2 are obtained from (17), respectively. Hence, the theorem follows from Theorem 2.2.
We now provide a comparison result for the preconditioned AOR method with the preconditioner Pl .
Theorem 4.2. Let A = (a_{ij}) ∈ ℝ^{n×n} be an irreducible L-matrix. Suppose that 0 ≤ r, ω ≤ 1 (ω ≠ 0, r ≠ 1) and (LU)_{ii} < 1 for all 2 ≤ i ≤ n, where LU = Γ₁ + E₁ + F₁ with Γ₁ a diagonal matrix, E₁ a strictly lower triangular matrix, and F₁ a strictly upper triangular matrix.
Let Tr,ω and Tl,r,ω be defined by (3) and (8), respectively. If 0 < β < 1, then
• (a) ρ(T_{l,r,ω}) < ρ(T_{r,ω}) if ρ(T_{r,ω}) < 1.
• (b) ρ(T_{l,r,ω}) = ρ(T_{r,ω}) if ρ(T_{r,ω}) = 1.
• (c) ρ(T_{l,r,ω}) > ρ(T_{r,ω}) if ρ(T_{r,ω}) > 1.
Proof. By simple calculation, one can obtain
T_{r,ω} = (1 − ω)I + ω(1 − r)L + ωU + H,   T_{l,r,ω} = (1 − ω)I + ω(1 − r)D_l^{-1}L_l + ωD_l^{-1}U_l + H_l,    (18)
where H and Hl are nonnegative matrices. Since 0 ≤ r, ω ≤ 1 ( ω ≠ 0, r ≠ 1) and 0 < β < 1, it can be easily shown from (18) that both Tr, ω and Tl,r,ω are nonnegative and irreducible. Hence, there exists a vector x > 0 such that Tr,ωx = λx , where λ = ρ ( Tr,ω ). Using (6) and Tr,ωx = λx , one obtains
T_{l,r,ω}x − λx = β(λ − 1)(D_l − rL_l)^{-1}(Γ₁ + rE₁ + (1 − r)L)x.    (19)
Let y = (Γ₁ + rE₁ + (1 − r)L)x. Since A is irreducible and r ≠ 1, y ≥ 0 is a nonzero vector whose first component is zero. If λ < 1, then from (19) T_{l,r,ω}x ≤ λx and T_{l,r,ω}x ≠ λx. Since T_{l,r,ω} is irreducible, Theorem 2.2 implies that ρ(T_{l,r,ω}) < ρ(T_{r,ω}) < 1. For the cases λ = 1 and λ > 1, T_{l,r,ω}x = λx and T_{l,r,ω}x ≥ λx (with T_{l,r,ω}x ≠ λx) are obtained from (19), respectively. Hence, the theorem follows from Theorem 2.2.
Notice that Theorem 4.2 for the preconditioner P_l does not require the assumptions r ≤ ω and (LU)_{ii} > 0, in contrast with Theorem 3.2 for the preconditioner P_u. If β = 1, the strict inequalities in Theorem 4.2 may not hold, and only the corresponding weak inequalities are guaranteed.
Lemma 4.3. Let A = (a_{ij}) ∈ ℝ^{n×n} be an L-matrix. If A = I − L − U is an M-matrix, then (LU)_{ii} < 1 and (UL)_{ii} < 1 for all i = 1, 2, . . . , n.
Proof . It is easy to show that ( I + U ) A and ( I + L ) A are Z-matrices. Since A is an M-matrix, there exists a vector x > 0 such that Ax > 0. Thus ( I + L ) Ax > 0 and ( I + U ) Ax > 0. It follows that ( I + L ) A and ( I + U ) A are M-matrices and so all diagonal components of ( I + L ) A and ( I + U ) A are positive. Hence 1 − ( LU ) ii > 0 and 1 − ( UL ) ii > 0 for all i = 1, 2, . . . , n , which completes the proof.
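Lemma 4.3 is easy to check numerically; a small sketch (Python/NumPy; the tridiagonal M-matrix is our own choice):

```python
import numpy as np

# Lemma 4.3 check: for an M-matrix A = I - L - U, the diagonals of LU and UL
# stay below 1.
n = 5
A = np.eye(n) - 0.3 * (np.eye(n, k=-1) + np.eye(n, k=1))
# A is an M-matrix: a Z-matrix with A^{-1} >= 0 (small tolerance for roundoff)
assert np.all(np.linalg.inv(A) >= -1e-12)

L = -np.tril(A, -1)
U = -np.triu(A, 1)
assert np.all(np.diag(L @ U) < 1) and np.all(np.diag(U @ L) < 1)
```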
From Lemma 4.3, it can be seen that if A is an M-matrix, the theorems in Sections 3 and 4 do not require the assumptions (LU)_{ii} < 1 or (UL)_{ii} < 1. Lastly, we provide a comparison result of the preconditioned AOR method for the preconditioner P_b.
Lemma 4.4 ([8]). Suppose that A₁ = M₁ − N₁ and A₂ = M₂ − N₂ are weak regular splittings of the monotone matrices A₁ and A₂, respectively, such that
M₂^{-1} ≥ M₁^{-1}.
If there exists a positive vector x such that 0 ≤ A₁x ≤ A₂x, then for the monotonic norm associated with x,
∥M₂^{-1}N₂∥_x ≤ ∥M₁^{-1}N₁∥_x.
In particular, if M₁^{-1}N₁ has a positive Perron vector, then
ρ(M₂^{-1}N₂) ≤ ρ(M₁^{-1}N₁).
Theorem 4.5. Let A be an irreducible M-matrix. Suppose that 0 ≤ r ≤ ω ≤ 1 (ω ≠ 0). Let T_{r,ω}, T_{l,r,ω} and T_{b,r,ω} be defined by (3) and (8), respectively. If 0 < β ≤ 1, then
ρ(T_{b,r,ω}) ≤ ρ(T_{l,r,ω}) ≤ ρ(T_{r,ω}) < 1.
Proof. Since ρ(T_{l,r,ω}) ≤ ρ(T_{r,ω}) < 1 was shown in [14], it suffices to show ρ(T_{b,r,ω}) ≤ ρ(T_{l,r,ω}). Since A is an M-matrix and 0 < β ≤ 1, there exists a vector x > 0 such that Ax > 0, and P_lA and P_bA are Z-matrices. It follows that P_lA and P_bA are M-matrices. Notice that P_bAx ≥ P_lAx > 0 and (D_b − rL_b)^{-1} ≥ (D_l − rL_l)^{-1}. Since A is irreducible and 0 < β < 1, P_lA is also irreducible. Since
P_lA = (1/ω)(D_l − rL_l) − (1/ω)((1 − ω)D_l + (ω − r)L_l + ωU_l)
is an M-splitting of P_lA with U_l ≠ 0, T_{l,r,ω} has a positive Perron vector from Theorem 2.3. Using Lemma 4.4, ρ(T_{b,r,ω}) ≤ ρ(T_{l,r,ω}) for 0 < β < 1. By continuity of the spectral radius, ρ(T_{b,r,ω}) ≤ ρ(T_{l,r,ω}) also holds for β = 1. Therefore, the proof is complete.
5. A near optimal parameter β for Krylov subspace method
In this section, we propose how to find a near optimal parameter β for which Krylov subspace method with the preconditioners P_l, P_u and P_b performs nearly best. Before proceeding to the analysis, we introduce some notation. For X and Y in ℝ^{n×n}, we define the inner product ⟨X, Y⟩_F = tr(X^T Y), where tr(Z) denotes the trace of the square matrix Z and X^T denotes the transpose of X. The associated norm is the well-known Frobenius norm, denoted ∥·∥_F.
If P is a good preconditioner for Krylov subspace method, then PA is close to the identity matrix I. Thus it is natural to determine a value β such that ∥PA − I∥_F is minimized, where P is P_l, P_u or P_b. We call such a value of β a near optimal parameter for the preconditioned Krylov subspace method. We first provide a near optimal parameter β for the preconditioner P = P_l.
Theorem 5.1. Let A = I − L − U be a large sparse nonsingular matrix, and let P_l = I + βL be a preconditioner for Krylov subspace method. Then a near optimal parameter β is given by
β = ⟨LA, L + U⟩_F / ∥LA∥_F².
Proof. Note that (I + βL)A − I = βLA − (L + U). By definition of the Frobenius norm, one obtains
∥(I + βL)A − I∥_F² = β²∥LA∥_F² − 2β⟨LA, L + U⟩_F + ∥L + U∥_F².    (20)
Equation (20) is a quadratic in β, so the minimum occurs at
β = ⟨LA, L + U⟩_F / ∥LA∥_F².
Hence the proof is complete.
Similarly to the proof of Theorem 5.1, one can obtain a near optimal parameter β for the preconditioner P = Pu .
Theorem 5.2. Let A = I − L − U be a large sparse nonsingular matrix, and let P_u = I + βU be a preconditioner for Krylov subspace method. Then a near optimal parameter β is given by
β = ⟨UA, L + U⟩_F / ∥UA∥_F².
Lastly, we provide a near optimal parameter β for the preconditioner P = Pb .
Theorem 5.3. Let A = I − L − U ∈ ℝ^{n×n} be a large sparse nonsingular matrix, and let P_b = I + β(L + U) be a preconditioner for Krylov subspace method. Then a near optimal parameter β is given by
β = n / (n + ∥L + U∥_F²).
Proof. Note that (I + β(L + U))A − I = (β − 1)(L + U) − β(L + U)² = ((β − 1)I − β(L + U))(L + U). By submultiplicativity of the Frobenius norm, one obtains
∥(I + β(L + U))A − I∥_F ≤ ∥(β − 1)I − β(L + U)∥_F ∥L + U∥_F.    (21)
From (21), it suffices to minimize ∥(β − 1)I − β(L + U)∥_F in order to determine a near optimal parameter β. Since tr(L + U) = 0, one obtains
∥(β − 1)I − β(L + U)∥_F² = n(β − 1)² + β²∥L + U∥_F².    (22)
Equation (22) is a quadratic in β, so the minimum occurs at
β = n / (n + ∥L + U∥_F²).
Hence the proof is complete.
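The three closed-form values of β from Theorems 5.1-5.3 are straightforward to evaluate; a sketch in Python/NumPy (the paper's experiments use MATLAB; the function name is our own):

```python
import numpy as np

def near_optimal_beta(A):
    """Near optimal beta of Theorems 5.1-5.3 for A = I - L - U, minimizing
    ||P A - I||_F (for P_b, minimizing the upper bound (21) instead).
    Uses the Frobenius inner product <X, Y>_F = tr(X^T Y)."""
    n = A.shape[0]
    L = -np.tril(A, -1)
    U = -np.triu(A, 1)
    frob2 = lambda X: np.sum(X * X)              # ||X||_F^2
    inner = lambda X, Y: np.sum(X * Y)           # <X, Y>_F = tr(X^T Y)
    beta_l = inner(L @ A, L + U) / frob2(L @ A)  # Theorem 5.1, P_l = I + b*L
    beta_u = inner(U @ A, L + U) / frob2(U @ A)  # Theorem 5.2, P_u = I + b*U
    beta_b = n / (n + frob2(L + U))              # Theorem 5.3, P_b
    return beta_l, beta_u, beta_b
```

Note that β_b always lies in (0, 1), consistent with the range 0 < β ≤ 1 assumed for P_b in Section 4.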
6. Numerical experiments
In this section, we provide numerical experiments to compare the performance of preconditioned iterative methods and to illustrate the theoretical results obtained in Sections 3 to 5. All numerical experiments are carried out on a PC with an Intel Core i5-4570 3.2GHz CPU and 8GB RAM, using MATLAB R2013a. The preconditioned iterative methods used for the numerical experiments are the preconditioned AOR method and the right preconditioned BiCGSTAB method [10, 12].
In Table 3, Iter denotes the number of iteration steps, CPU denotes the elapsed CPU time in seconds, P denotes the preconditioner used, ILU(0) denotes the incomplete LU factorization preconditioner without fill-in, and β denotes a near optimal value computed by the formulas given in Section 5. For the numerical tests using right preconditioned BiCGSTAB, all nonzero elements of the sparse matrices are stored in a sparse storage format (which saves considerable CPU time), the initial vectors are set to the zero vector, and the iterations are terminated when the current approximation x_k satisfies
∥b − Ax_k∥₂ / ∥b∥₂ < ε,
where ∥·∥₂ denotes the L₂-norm and ε is a prescribed tolerance.
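Right preconditioning solves (AP)y = b and then recovers x = Py. A sketch with SciPy's BiCGSTAB in place of the paper's MATLAB setup (the tridiagonal test matrix, the value β = 0.4, and the tolerance in the final check are our own choices):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, bicgstab

# Right preconditioning with the direct preconditioner P_b = I + beta*(L + U):
# solve (A P_b) y = b, then recover x = P_b y.
n, beta = 100, 0.4
A = sp.eye(n, format="csr") - 0.4 * (sp.eye(n, k=-1) + sp.eye(n, k=1))
L = -sp.tril(A, -1)
U = -sp.triu(A, 1)
Pb = sp.eye(n, format="csr") + beta * (L + U)

# matrix-free operator for the right preconditioned system A * P_b
AP = LinearOperator((n, n), matvec=lambda y: A @ (Pb @ y), dtype=float)
b = np.ones(n)
y, info = bicgstab(AP, b)      # default relative tolerance
x = Pb @ y                     # map back to the solution of Ax = b
assert info == 0 and np.linalg.norm(b - A @ x) / np.linalg.norm(b) < 1e-4
```

Applying P_b inside the operator costs only one sparse matrix-vector product per iteration, which is why a well-chosen β can beat factored preconditioners such as ILU(0) in total time.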
Example 6.1. Consider the two dimensional convection-diffusion equation [15]
[Equation (23): the two-dimensional convection-diffusion equation; displayed only as an image in the original.]
with the Dirichlet boundary condition on ∂Ω, the boundary of Ω. When the central difference scheme on a uniform grid with m × m interior nodes is applied to discretize equation (23), we obtain a linear system Ax = b whose coefficient matrix A ∈ ℝ^{n×n} is of the form
A = I_m ⊗ T₁ + T₂ ⊗ I_m,
where n = m², I_m denotes the identity matrix of order m, ⊗ denotes the Kronecker product, and T₁, T₂ are m × m tridiagonal matrices whose entries depend on the step size h = 1/(m + 1). [The entries of T₁ and T₂ are displayed only as images in the original.]
It is clear that this matrix A is an irreducible nonsymmetric L -matrix. Numerical results of the preconditioned AOR method for n = 50 2 = 2500 are provided in Table 1 , and numerical results of the right preconditioned BiCGSTAB method for various values of n are listed in Table 3 .
Example 6.2. Consider the two dimensional Poisson’s equation
−(u_{xx} + u_{yy}) = f(x, y),  (x, y) ∈ Ω,    (24)
with the Dirichlet boundary condition on ∂Ω. When the central difference scheme on a uniform grid with m × m interior nodes is applied to discretize equation (24), we obtain a linear system Ax = b whose coefficient matrix A ∈ ℝ^{n×n} is given by
A = I_m ⊗ S₁ + S₂ ⊗ I_m,
where n = m² and S₁, S₂ are m × m tridiagonal matrices. [The entries of S₁ and S₂ are displayed only as images in the original.]
are m × m tridiagonal matrices. Note that this matrix A is an irreducible symmetric L -matrix. Numerical results of the preconditioned AOR method for n = 50 2 = 2500 are provided in Table 2 .
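The Kronecker-product construction can be reproduced as follows, assuming the standard five-point Laplacian is scaled so that A = I − L − U has unit diagonal (stencil divided by 4); this scaling is our assumption, sketched in Python/SciPy:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import connected_components

# Scaled five-point 2D Laplacian on an m x m interior grid:
# A = I_m (x) S + S (x) I_m with S = tridiag(-1/4, 1/2, -1/4).
m = 10
n = m * m
Im = sp.identity(m, format="csr")
S = sp.diags([-0.25, 0.5, -0.25], [-1, 0, 1], shape=(m, m), format="csr")
A = (sp.kron(Im, S) + sp.kron(S, Im)).tocsr()

# A = I - L - U with unit diagonal, and A is irreducible
assert np.allclose(A.diagonal(), 1.0)
ncomp, _ = connected_components(sp.csr_matrix((A != 0).astype(int)),
                                directed=True, connection="strong")
assert ncomp == 1
```

Off-diagonal entries are all −1/4 ≤ 0, so A is indeed a symmetric irreducible L-matrix of order n = m².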
In Figure 1, we depict the eigenvalue distributions of the preconditioned matrices corresponding to six different preconditioners for Example 6.1 when n = 30². From Figure 1, it can be seen that the eigenvalues of P_bA are more clustered than those of the other preconditioned matrices. From Tables 1 and 2, it can be seen that ρ(T_{l,r,ω}) < ρ(T_{r,ω}) does not hold for ω > 1 and r > 1, and ρ(T_{b,r,ω}) ≤ ρ(T_{l,r,ω}) does not hold for β > 1. For the test problems used in this paper, the preconditioner P_u yields better optimal performance than the preconditioners P_l and P_b, and the optimal values of β, ω and r for the preconditioned AOR method with P_u are greater than or equal to 1 (see Tables 1 and 2); specifically, ω = r ≈ 1.3 and β = 1 for Examples 6.1 and 6.2. Further research is required on how to find optimal or near optimal values of β, ω and r for the preconditioned AOR method.
From Table 3, it can be seen that the preconditioner P_b with a near optimal parameter β performs much better than ILU(0), one of the most widely used preconditioners. The preconditioners P_l and P_u with near optimal parameters β also perform better than the preconditioner (I − L)^{-1}. The performance of BiCGSTAB with the preconditioner (I − U)^{-1} is not reported here since it is similar to that with (I − L)^{-1}. Notice that the near optimal parameter β proposed in Section 5 is easily computed in MATLAB.
Table 1. Numerical results for ρ(T_{r,ω}), ρ(T_{u,r,ω}), ρ(T_{l,r,ω}) and ρ(T_{b,r,ω}) with various values of β, r and ω for Example 6.1. [The table entries are displayed only as an image in the original.]
Table 2. Numerical results for ρ(T_{r,ω}), ρ(T_{u,r,ω}), ρ(T_{l,r,ω}) and ρ(T_{b,r,ω}) with various values of β, r and ω for Example 6.2. [The table entries are displayed only as an image in the original.]
Figure 1. Spectra of the preconditioned matrices corresponding to several preconditioners for Example 6.1 when n = 30².
Table 3. Numerical results of preconditioned BiCGSTAB for Example 6.1. [The table entries are displayed only as an image in the original.]
7. Conclusions
In this paper, we provided comparison results of preconditioned AOR methods with the direct preconditioners P_l, P_u and P_b for solving a linear system whose coefficient matrix is a large sparse irreducible L-matrix; these results hold under weaker conditions than those used in the existing literature. The theoretical results are in good agreement with the numerical results (see Tables 1 and 2). However, further research is required on how to find optimal or near optimal values of β, ω and r for the preconditioned AOR method.
We also proposed how to find a near optimal parameter β for which Krylov subspace method with the preconditioners P_l, P_u and P_b performs nearly best. Numerical experiments showed that BiCGSTAB with the preconditioner P_b and a near optimal β performs much better than with ILU(0), one of the most widely used preconditioners, and that BiCGSTAB with P_l and P_u and near optimal β performs better than with the preconditioner (I − L)^{-1} (see Table 3). The near optimal parameter β proposed in Section 5 is easily computed in MATLAB.
BIO
Jae Heon Yun received his M.Sc. from Kyungpook National University and his Ph.D. from Iowa State University. He has been a professor at Chungbuk National University since 1991. His research interests are computational mathematics and preconditioned iterative methods.
Department of Mathematics, Chungbuk National University, Cheongju 361-763, Korea.
e-mail: gmjae@chungbuk.ac.kr
Hyo Jin Lim is a graduate student in Chungbuk National University. Her research interests are computational mathematics, iterative methods for solving linear and nonlinear equations.
Department of Mathematics, Chungbuk National University, Cheongju, 361-763, Korea.
e-mail: jellya@naver.com
Kyoum Sun Kim is a graduate student in Chungbuk National University. Her research interests are computational mathematics, new homotopy methods for solving functional equations.
Department of Mathematics, Chungbuk National University, Cheongju, 361-763, Korea.
e-mail: giunsun@naver.com
References
Berman A. , Plemmons R.J. 1994 Nonnegative matrices in the mathematical sciences SIAM Philadelphia, PA
Gunawardena A. , Jain S. , Snyder L. (1991) Modified iterative methods for consistent linear systems Linear Algebra Appl. 154 (156) 123 - 143    DOI : 10.1016/0024-3795(91)90376-8
Hadjidimos A. (1978) Accelerated overrelaxation method Math. Comp. 32 149 - 157    DOI : 10.1090/S0025-5718-1978-0483340-6
Kotakemori H. , Niki H. , Okamoto N. (1996) Accelerated iterative method for Z-matrices J. Comput. Appl. Math. 75 87 - 97    DOI : 10.1016/S0377-0427(96)00061-1
Li Y. , Li C. , Wu S. (2007) Improving AOR method for consistent linear systems Appl. Math. Comput. 186 379 - 388    DOI : 10.1016/j.amc.2006.07.097
Li W. , Sun W. (2000) Modified Gauss-Seidel methods and Jacobi type methods for Z-matrices Linear Algebra Appl. 317 223 - 240    DOI : 10.1016/S0024-3795(00)00140-3
Milaszewicz J.P. (1987) Improving Jacobi and Gauss-Seidel iterations Linear Algebra Appl. 93 161 - 170    DOI : 10.1016/S0024-3795(87)90321-1
Neumann M. , Plemmons R.J. (1987) Convergence of parallel multisplitting iterative methods for M-matrices Linear Algebra Appl. 88/89 559 - 573    DOI : 10.1016/0024-3795(87)90125-X
Ozban A.Y. (2002) On accelerated iterative methods for the solution of systems of linear equations Intern. J. Computer Math. 79 765 - 773    DOI : 10.1080/00207160211290
Saad Y. 1996 Iterative methods for sparse linear systems PWS Publishing Company Boston
Usui M. , Niki H. , Kohno T. (1994) Adaptive Gauss Seidel method for linear systems Intern. J. Computer Math. 51 119 - 125    DOI : 10.1080/00207169408804271
van der Vorst H.A. (1992) Bi-CGSTAB: A fast and smoothly converging variant of Bi-CG for the solution of nonsymmetric linear systems. SIAM J. Sci. Statist. Comput 13 631 - 644    DOI : 10.1137/0913035
Varga R.S. 2000 Matrix iterative analysis Springer Berlin
Wang L. , Song Y. (2009) Preconditioned AOR iterative method for M-matrices J. Comput. Appl. Math. 226 114 - 124    DOI : 10.1016/j.cam.2008.05.022
Wu M. , Wang L. , Song Y. (2007) Preconditioned AOR iterative method for linear systems Applied Numerical Mathematics 57 672 - 685    DOI : 10.1016/j.apnum.2006.07.029
Young D.M. 1971 Iterative solution of large linear systems Academic Press New York
Yun J.H. (2011) Comparison results of the preconditioned AOR methods for L-matrices Appl. Math. Comput. 218 3399 - 3413    DOI : 10.1016/j.amc.2011.08.085