In this paper, we develop a new hybrid conjugate gradient method for solving the unconstrained optimization problem. Under mild assumptions, we establish the sufficient descent property of the method. The global convergence of the method is also proved under the Wolfe-type line search and under the general Wolfe line search. Numerical results show that the method is efficient.
AMS Mathematics Subject Classification : 65K05, 90C30.
1. Introduction
We consider the unconstrained optimization problem

min f(x), x ∈ R^{n},    (1.1)

where f : R^{n} → R is a continuously differentiable function. The nonlinear conjugate gradient method is very useful for solving (1.1), especially when n is large. For any given initial point x_{1} ∈ R^{n}, the nonlinear conjugate gradient method generates a sequence {x_{k}}, k = 1, 2, ..., by the recursive relation

x_{k+1} = x_{k} + α_{k}d_{k},    (1.2)

d_{1} = −g_{1},  d_{k+1} = −g_{k+1} + β_{k+1}d_{k},  k ≥ 1,    (1.3)
where g_{k} = ∇f(x_{k}) is the gradient of f at x_{k} and β_{k} is typically given by one of several well-known formulas (such as [1-5]).
To achieve good computational performance while maintaining the attractive feature of strong global convergence, many hybridizations of the basic conjugate gradient methods have been proposed in past years (see [6-10]). Motivated by these works, in this paper we present a new hybrid nonlinear conjugate gradient method, in which β_{k} is given by the hybrid formula (1.4) and α_{k} is computed by the Wolfe-type line search (1.5)-(1.6) proposed in [11], where 0 < δ < σ < 1 and 0 < γ < 1. Based on the hybridization of β_{k} given by (1.4), we study the resulting nonlinear conjugate gradient method under the Wolfe-type line search and under the general Wolfe line search.
In Section 2, we present Method 2.1 and prove the global convergence of the proposed method with the Wolfe-type line search. In Section 3, some discussions and numerical results for Method 2.1 are given.
2. Method 2.1 and its global convergence analysis
Now, we state Method 2.1 for solving (1.1).
Method 2.1
Step 1. Choose an initial point x_{1} ∈ R^{n}, ε ≥ 0, 0 < δ < σ < 1, and u, γ ∈ (0, 1).
Step 2. Set d_{1} = −g_{1} and k = 1; if ║g_{1}║ = 0, then stop.
Step 3. Compute α_{k} by (1.5) and (1.6), and let x_{k+1} = x_{k} + α_{k}d_{k}.
Step 4. Compute g_{k+1}; if ║g_{k+1}║ ≤ ε, then stop. Otherwise, go to the next step.
Step 5. Compute β_{k+1} by (1.4) and generate d_{k+1} by (1.3).
Step 6. Set k = k + 1 and go to Step 3.
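As an illustration of the overall flow of Method 2.1, the following sketch implements the conjugate gradient skeleton (1.2)-(1.3) on a toy quadratic. It is a minimal sketch under stated assumptions: the paper's hybrid formula (1.4) is replaced by the classical PRP formula [1,2] as a stand-in, the Wolfe-type search (1.5)-(1.6) by simple Armijo backtracking, and the objective, tolerances, and parameter values are chosen only for illustration.

```python
# Sketch of the conjugate gradient skeleton of Method 2.1 on a toy quadratic.
# NOTE: the paper's hybrid beta_k (1.4) and Wolfe-type search (1.5)-(1.6) are
# NOT reproduced here; a PRP beta and simple Armijo backtracking stand in.

def f(x):
    # toy objective: f(x) = sum_i (x_i - 1)^2, minimized at x = (1, ..., 1)
    return sum((xi - 1.0) ** 2 for xi in x)

def grad_f(x):
    return [2.0 * (xi - 1.0) for xi in x]

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def norm(a):
    return dot(a, a) ** 0.5

def cg_method(x1, eps=1e-6, max_iter=200):
    x = list(x1)
    g = grad_f(x)
    d = [-gi for gi in g]                        # Step 2: d_1 = -g_1
    for _ in range(max_iter):
        if norm(g) <= eps:                       # Step 4 stopping test
            break
        # Step 3: step length by Armijo backtracking (stand-in for (1.5)-(1.6))
        alpha, delta = 1.0, 0.4
        fx, gd = f(x), dot(g, d)
        while f([xi + alpha * di for xi, di in zip(x, d)]) > fx + delta * alpha * gd:
            alpha *= 0.5
        x = [xi + alpha * di for xi, di in zip(x, d)]       # update (1.2)
        g_new = grad_f(x)
        # Step 5: beta_{k+1} via PRP, a placeholder for the hybrid formula (1.4)
        y = [gn - gi for gn, gi in zip(g_new, g)]
        beta = dot(g_new, y) / dot(g, g)
        d = [-gn + beta * di for gn, di in zip(g_new, d)]   # direction (1.3)
        g = g_new                                # Step 6: k = k + 1
    return x

x_star = cg_method([5.0, -3.0, 0.0])
```

On this quadratic the sketch converges to (1, 1, 1); with the paper's (1.4) and (1.5)-(1.6) in place of the stand-ins, the loop structure would be unchanged.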
In order to establish the global convergence of Method 2.1, we need the following assumption, which is often used in the literature to analyze the global convergence of nonlinear conjugate gradient methods.
Assumption 2.2
(i) The level set Ω = {x ∈ R^{n} | f(x) ≤ f(x_{1})} is bounded, i.e., there exists a positive constant C such that ║x║ ≤ C for all x ∈ Ω.
(ii) In some neighborhood N of Ω, f is continuously differentiable and its gradient is Lipschitz continuous, i.e., there exists a constant L > 0 such that

║∇f(x) − ∇f(y)║ ≤ L║x − y║

for all x, y ∈ N.
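As a quick numeric illustration of the Lipschitz condition in Assumption 2.2 (ii) (a toy example assumed here, not taken from the paper): for f(x) = ║x║^{2} the gradient ∇f(x) = 2x satisfies the condition with L = 2, which the following check confirms at random point pairs.

```python
import random

# Check ||grad f(x) - grad f(y)|| <= L * ||x - y|| for the toy objective
# f(x) = ||x||^2, whose gradient 2x is Lipschitz with constant L = 2 exactly.

def grad(x):
    return [2.0 * xi for xi in x]

def norm(v):
    return sum(vi * vi for vi in v) ** 0.5

L = 2.0
random.seed(0)
for _ in range(100):
    x = [random.uniform(-10.0, 10.0) for _ in range(5)]
    y = [random.uniform(-10.0, 10.0) for _ in range(5)]
    lhs = norm([a - b for a, b in zip(grad(x), grad(y))])
    rhs = L * norm([a - b for a, b in zip(x, y)])
    assert lhs <= rhs + 1e-12   # Lipschitz inequality holds at (x, y)
```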
Theorem 2.1. Let the sequences {x_{k}} and {d_{k}} be generated by (1.2) and (1.3), where β_{k} is computed by (1.4). Then we have

g_{k}^{T}d_{k} ≤ −u║g_{k}║^{2}    (2.1)

for all k ≥ 1, where u ∈ (0, 1).
Proof. If k = 1, from (1.3) we get d_{1} = −g_{1}. Then we can easily conclude (2.1). If k ≥ 2, multiplying (1.3) by g_{k}^{T} and using (1.4), we again obtain (2.1).
Theorem 2.2. Suppose that Assumption 2.2 holds. Then, for Method 2.1, we have (2.2).
Proof. From Theorem 2.1 and Assumption 2.2 (i), we know that {f(x_{k})} is bounded and monotonically decreasing, i.e., {f(x_{k})} is a convergent sequence. By (1.6), we have (2.3). From Assumption 2.2, we get (2.4). So, from (2.3) and (2.4), we have (2.5). Squaring both sides of (2.5), we get (2.6). Therefore, according to the convergence of {f(x_{k})}, we can conclude that (2.2) holds.
Remark 2.3. Suppose that Assumption 2.2 holds. Then Method 2.1 satisfies the following estimate.
Proof. From Theorem 2.1, we know that (2.7) holds, where u ∈ (0, 1). Squaring both sides of (2.7) and dividing both sides of the resulting inequality by ║d_{k}║^{2}, we can conclude the estimate from Theorem 2.2.
Theorem 2.4. Suppose that Assumption 2.2 holds. If {x_{k}} is generated by Method 2.1, then we have (2.8).
Proof. If (2.8) does not hold, then there exists a constant such that (2.9) holds for all k ≥ 1. Consider the two cases of (1.4). In the first case, from (1.3) and (1.4), we know that (2.10) holds. Squaring both sides of (2.10) and dividing both sides of the resulting equation by ║d_{k}║^{2}, we get (2.11). By (2.9) and (2.11), the desired bound follows in this case. In the second case of (1.4), we obtain the analogous bound directly. In either case, we can easily conclude an estimate that contradicts (2.2). This shows that (2.8) holds. We finish the proof of the theorem. □
3. Discussions of the Method 2.1 and Numerical Results
The line search in Method 2.1 can also be given by the general Wolfe line search

f(x_{k} + α_{k}d_{k}) ≤ f(x_{k}) + δα_{k}g_{k}^{T}d_{k},    (3.1)

σ_{1}g_{k}^{T}d_{k} ≤ g(x_{k} + α_{k}d_{k})^{T}d_{k} ≤ −σ_{2}g_{k}^{T}d_{k},    (3.2)

where 0 < σ_{1} ≤ σ_{2} < 1.
Theorem 3.1. Suppose that Assumption 2.2 holds. Consider Method 2.1, where α_{k} satisfies (3.1) and (3.2). Then we have the following conclusion.
Proof. From (3.2), we get (3.3). From (3.1), by (3.3), we obtain the conclusion of the theorem.
Discussion 3.1. By Theorem 3.1, we can also obtain the global convergence of Method 2.1 with (3.1) and (3.2).
Discussion 3.2. If the line search in Method 2.1 is replaced by the other Wolfe-type line search given in [12], the method is also globally convergent.
Discussion 3.3. If β_{k} in Method 2.1 is given by the alternative formula and α_{k} satisfies (1.5), (1.6) or (3.1), (3.2), we can also obtain the global convergence of Method 2.1.
Numerical Results 3.4
We now test Method 2.1, with α_{k} satisfying (1.5), (1.6) or (3.1), (3.2), on double-precision versions of the unconstrained optimization problems in the CUTE library [13]. For Method 2.1, α_{k} is computed by (1.5) and (1.6) with δ = 0.4 and σ = 0.7 in Table 3.1, and by (3.1) and (3.2) with σ_{1} = 0.5 and σ_{2} = 0.6 in Table 3.2. The numerical results are given in the form NI/NF/NG/g, where NI, NF, and NG denote the numbers of iterations, function evaluations, and gradient evaluations, respectively, and g denotes the final gradient norm. All runs were terminated upon reaching a solution with ║g_{k}║ ≤ 10^{−3}.
BIO
Ajie Chu received her BS from Xingtan College, Qufu Normal University, China in 2013. Since 2013 she has been at Qingdao University, China. Her research interests include optimization and nonsmooth equations.
College of Mathematics, Qingdao University, Qingdao 266071, China.
email: 13658689660@163.com
Yixiao Su received his BS from Dezhou College, China in 2013. Since 2013 he has been at Qingdao University, China. His research interests include optimization and complementarity problems.
College of Mathematics, Qingdao University, Qingdao 266071, China.
email: 124890405@qq.com
Shouqiang Du, PhD, Associate Professor. Since 2003 he has been at the College of Mathematics, Qingdao University. His research interests include numerical optimization and nonsmooth optimization.
College of Mathematics, Qingdao University, Qingdao 266071, China.
email: dsq8933@163.com; sqdu@qdu.edu.cn
References
[1] E. Polak and G. Ribière, Note sur la convergence de méthodes de directions conjuguées, Rev. Française Informat. Recherche Opérationnelle 3 (1969), no. 16, 35-43.
[2] B.T. Polyak, The conjugate gradient method in extreme problems, USSR Computational Mathematics and Mathematical Physics 9 (1969), 94-112. DOI: 10.1016/0041-5553(69)90035-4.
[3] M.R. Hestenes and E.L. Stiefel, Methods of conjugate gradients for solving linear systems, Journal of Research of the National Bureau of Standards 49 (1952), 409-436. DOI: 10.6028/jres.049.044.
[4] Y.H. Dai and Y. Yuan, A nonlinear conjugate gradient method with a strong global convergence property, SIAM Journal on Optimization 10 (1999), 177-182. DOI: 10.1137/S1052623497318992.
[5] J.C. Gilbert and J. Nocedal, Global convergence properties of conjugate gradient methods for optimization, SIAM Journal on Optimization 2 (1992), 21-42. DOI: 10.1137/0802003.
[6] N. Andrei, Accelerated hybrid conjugate gradient algorithm with modified secant condition for unconstrained optimization, Numerical Algorithms 54 (2010), 23-46. DOI: 10.1007/s11075-009-9321-0.
[7] S. Babaie-Kafaki, R. Ghanbari and N. Mahdavi-Amiri, Two new conjugate gradient methods based on modified secant equations, Journal of Computational and Applied Mathematics 234 (2010), 1374-1386. DOI: 10.1016/j.cam.2010.01.052.
[8] Y.H. Dai and Y. Yuan, An efficient hybrid conjugate gradient method for unconstrained optimization, Annals of Operations Research 103 (2001), 33-47. DOI: 10.1023/A:1012930416777.
[9] G. Li, C. Tang and Z. Wei, New conjugacy condition and related new conjugate gradient methods for unconstrained optimization, Journal of Computational and Applied Mathematics 202 (2007), 523-539. DOI: 10.1016/j.cam.2006.03.005.
[10] X.J. Zhang and A.N. Xu, Global convergence properties of a new class of nonlinear conjugate gradient methods, Guangxi Sciences 12 (2005), 282-283.
[11] C.Y. Wang, S.Q. Du and Y.Y. Chen, Global convergence properties of three-term conjugate gradient method with new-type line search, Journal of Systems Science and Complexity 17 (2004), 412-420.
[13] I. Bongartz, A.R. Conn, N. Gould and P.L. Toint, CUTE: constrained and unconstrained testing environment, ACM Transactions on Mathematical Software 21 (1995), 123-160. DOI: 10.1145/200979.201043.