A new class of smoothing functions, which includes some important smoothing complementarity functions as special cases, is introduced in this paper. Based on this new smoothing function, we propose a smoothing Newton method. Our algorithm needs to solve only one linear system of equations per iteration. Without requiring nonemptiness and boundedness of the solution set, the proposed algorithm is proved to be globally convergent. Numerical results indicate that the smoothing Newton method based on the newly proposed class of smoothing functions with θ ∈ (0, 1) appears to perform better numerically than methods based on some other important smoothing functions, which also demonstrates that our algorithm is promising.
AMS Mathematics Subject Classification : 65K05, 90C33.
1. Introduction
Consider the following nonlinear complementarity problem (NCP): find a vector x ∈ ℝ^n such that

x ≥ 0, F(x) ≥ 0, x^T F(x) = 0, (1)

where F : ℝ^n → ℝ^n is continuously differentiable with F := (F_1, F_2, . . . , F_n)^T. The NCP has been studied extensively due to its many applications in operations research, engineering and economics (see, for example, [1, 2]).
Many solution methods have been developed for NCPs, such as interior-point methods [3, 4] and smoothing methods [5, 6, 7]. In this paper, we are interested in smoothing Newton methods for solving the NCP. These methods reformulate the NCP as a system of smooth equations by means of a smoothing function, and solve that system at each iteration by Newton's method. The smoothing function plays an important role in smoothing Newton algorithms. Up to now, many smoothing functions have been proposed: the Kanzow smoothing function [8], the Chen-Harker-Kanzow-Smale smoothing function [5], the Chen-Mangasarian smoothing function [9], the Huang-Han-Chen smoothing function [10], and so on. Generally, the construction of a smoothing function is based on a so-called NCP-function: an NCP-function is a mapping φ : ℝ² → ℝ having the property

φ(a, b) = 0 ⇔ a ≥ 0, b ≥ 0, ab = 0.
Many NCP-functions have been studied. Among them, the Fischer-Burmeister function and the minimum function are the most prominent NCP-functions; they are defined, respectively, by

φ_FB(a, b) = a + b − √(a² + b²),  φ_min(a, b) = min{a, b}.

By smoothing the symmetric perturbed Fischer-Burmeister function, Huang, Han, Xu and Zhang [11] proposed the following smoothing function:
By smoothing the symmetric perturbed minimum function, Huang et al. [10] proposed the following smoothing function:
Recently, by combining the Fischer-Burmeister function and the minimum function, Liu and Wu [12] proposed the following function:
Motivated by [10, 11, 12], we introduce in this paper the following smoothing function:
where θ is a given constant with θ ∈ [0, 1]. It is easy to see that when θ = 1, φ_θ reduces to the smoothing function defined by (1.3), and when θ = 0, φ_θ reduces to the smoothing function defined by (1.2). Thus, the class of smoothing functions defined by (4) contains the smoothing functions (1.2) and (1.3) as special cases.
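The defining property of an NCP-function can be checked numerically. The following sketch (plain Python, illustrative and not part of the paper; the function names are ours) evaluates the Fischer-Burmeister and minimum functions on a few sample pairs: both vanish exactly on the complementary pairs with a ≥ 0, b ≥ 0, ab = 0, and are nonzero elsewhere.

```python
import math

def phi_fb(a, b):
    """Fischer-Burmeister NCP-function: phi(a, b) = a + b - sqrt(a^2 + b^2)."""
    return a + b - math.sqrt(a * a + b * b)

def phi_min(a, b):
    """Minimum NCP-function: phi(a, b) = min{a, b}."""
    return min(a, b)

# Complementary pairs (a >= 0, b >= 0, a*b = 0) are exactly the zeros.
complementary = [(0.0, 0.0), (0.0, 3.0), (2.5, 0.0)]
violating = [(1.0, 1.0), (-1.0, 2.0), (0.0, -4.0)]

for phi in (phi_fb, phi_min):
    assert all(abs(phi(a, b)) < 1e-12 for a, b in complementary)
    assert all(abs(phi(a, b)) > 1e-12 for a, b in violating)
```

(The sign convention for φ_FB varies in the literature; either sign satisfies the NCP-function property above.)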
Motivated by the above-mentioned work, using the symmetric perturbation technique and the idea of convex combination, we propose a new class of smoothing functions. We also investigate a smoothing Newton method based on this new class of smoothing functions to solve the NCP. Our algorithm has the following nice properties. (a) It needs to solve only one linear system of equations and perform one line search per iteration. (b) We establish the boundedness of the level set, so the iteration sequence is bounded and thus has at least one accumulation point; we do not need to assume nonemptiness and boundedness of the solution set of NCP (1.1), although this assumption is widely used in the literature. (c) The function we use is a parametric class of smoothing functions containing some important smoothing complementarity functions as special cases, and we can adjust the parameters in practice to obtain better performance. The numerical experiments indicate that the algorithm is efficient and promising.
The organization of this paper is as follows. In Section 2, we recall some useful definitions and give some properties of the new smoothing function. In Section 3, we propose a smoothing Newton algorithm. Convergence results are analyzed in Section 4. Some preliminary computational results are reported in Section 5. A few words about notation are needed. All vectors are column vectors; ℝ^n_+ and ℝ^n_{++} denote the nonnegative and positive orthants of ℝ^n, respectively. We define N = {1, 2, . . . , n}.
2. Preliminaries
In this section, we recall some useful definitions and give some properties of the new smoothing function defined by (4).
Definition 2.1. A matrix M ∈ ℝ^{n×n} is said to be a P_0-matrix if all its principal minors are nonnegative.
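Definition 2.1 can be verified directly on small matrices by enumerating all principal minors. The helper below is an illustrative sketch (the function name and tolerance are our own); it examines all 2^n − 1 principal submatrices, so it is practical only for small n.

```python
from itertools import combinations
import numpy as np

def is_p0_matrix(M, tol=1e-12):
    """Return True if every principal minor of the square matrix M is nonnegative."""
    M = np.asarray(M, dtype=float)
    n = M.shape[0]
    for k in range(1, n + 1):
        for idx in combinations(range(n), k):
            # Principal submatrix indexed by the same rows and columns.
            if np.linalg.det(M[np.ix_(idx, idx)]) < -tol:
                return False
    return True

print(is_p0_matrix(np.eye(3)))                 # True
print(is_p0_matrix(np.array([[0.0, 1.0],
                             [-1.0, 0.0]])))   # True: minors 0, 0, and 1
print(is_p0_matrix(np.diag([1.0, -1.0])))      # False: the 1x1 minor -1 is negative
```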
Definition 2.2. A function F : ℝ^n → ℝ^n is said to be a P_0-function if for all x, y ∈ ℝ^n with x ≠ y, there exists an index i_0 ∈ N such that

x_{i_0} ≠ y_{i_0}  and  (x_{i_0} − y_{i_0})[F_{i_0}(x) − F_{i_0}(y)] ≥ 0.
The following lemma gives some properties of the smoothing function φ_θ(·, ·, ·) defined by (4). Its proof is straightforward.
Lemma 2.3. Let φ_θ be defined by (4). Then:
(i) φ_θ(0, a, b) = 0 ⇔ a ≥ 0, b ≥ 0, ab = 0.
(ii) φ_θ(μ, a, b) is continuously differentiable at all points of ℝ³ different from (0, c, c) for arbitrary c ∈ ℝ. In particular, φ_θ(μ, a, b) is continuously differentiable for arbitrary (μ, a, b) ∈ ℝ³ with μ ≠ 0.
(iii) φ_θ(μ, a, b) is semismooth on ℝ³,
where
By (5) and Lemma 2.3, we know that solving the NCP (1) is equivalent to solving H(z) = 0.
Define the merit function
We also know that the NCP (1) is equivalent to the following equation:
For simplicity, we denote
Lemma 2.4. Let H and Փ_θ be defined by (5) and (6), respectively. Then:
(i) Փ_θ is continuously differentiable at any
(ii) H is continuously differentiable at any
with its Jacobian
where
with
If F is a P_0-function, then the matrix H′(z) is nonsingular on
Proof. It is easy to see that Փ_θ is continuously differentiable at any
Next we prove (ii). Since (i) holds and F is continuously differentiable, H is continuously differentiable at any
From the definition (5) of H(z), it follows that (9) holds. For all i ∈ N,
By the above equation, we have
Since
which, together with (2.6), gives
Thus,
which implies that D_1(z) and D_2(z) are positive diagonal matrices for any
Since F is a P_0-function, F′(x) is a P_0-matrix for any
by Lemma 5.4 in [13]. In view of the fact that D_2(z) is a positive diagonal matrix, a straightforward calculation shows that all principal minors of the matrix D_2(z)F′(x) are nonnegative. By Definition 2.1, the matrix D_2(z)F′(x) is a P_0-matrix. Hence, by Theorem 3.1 in [14], the matrix D_1(z) + D_2(z)F′(x) is nonsingular, which implies that H′(z) is nonsingular.
3. Algorithm
In this section, we present a smoothing Newton method for the NCP and prove that the proposed algorithm is well-defined.
Algorithm 3.1 (Smoothing Newton algorithm).
S0. Choose
Take γ ∈ (0, 1) such that
Let
be an arbitrary vector,
S1. Termination criterion: if ∥H(z^k)∥ = 0, stop.
S2. Compute
where β_k = β(z^k) is defined by β(z) := γ min{1, h(z)}.
S3. Let m_k be the smallest nonnegative integer such that
Let λ_k := δ^{m_k}.
S4. Set z^{k+1} = z^k + λ_k Δz^k and k := k + 1. Go to S1.
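Since the paper's φ_θ and the exact line-search condition appear only in displayed equations that are not reproduced here, the sketch below illustrates the overall scheme under substituted assumptions: it uses the classical Chen-Harker-Kanzow-Smale smoothing of the minimum function in place of φ_θ, a full Newton step on x (one linear system per iteration, as in S2), and a simple geometric decrease of μ in place of the paper's β-update and line search. All names and the test problem are illustrative, not from the paper.

```python
import numpy as np

def chks(mu, a, b):
    """Chen-Harker-Kanzow-Smale smoothing of 2*min(a, b):
    a + b - sqrt((a - b)^2 + 4*mu^2)."""
    return a + b - np.sqrt((a - b) ** 2 + 4.0 * mu ** 2)

def smoothing_newton(F, x0, mu=1.0, sigma=0.5, tol=1e-10, max_iter=200):
    """Drive Phi(mu, x) = chks(mu, x, F(x)) to zero while mu -> 0."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    for _ in range(max_iter):
        Phi = chks(mu, x, F(x))
        if np.linalg.norm(Phi) < tol and mu < tol:    # S1-style termination
            break
        # Finite-difference Jacobian of Phi with respect to x.
        J = np.empty((n, n))
        h = 1e-7
        for j in range(n):
            e = np.zeros(n); e[j] = h
            J[:, j] = (chks(mu, x + e, F(x + e)) - Phi) / h
        x = x - np.linalg.solve(J, Phi)               # S2: one linear system
        mu *= sigma                                   # crude stand-in for the mu-update
    return x

# NCP with F(x) = x - 1: the unique solution is x = 1 (there F(x) = 0, x > 0).
x_star = smoothing_newton(lambda x: x - 1.0, np.zeros(2))
print(x_star)   # approximately [1. 1.]
```

The geometric μ-decrease is the main simplification: the paper's S2-S3 tie the decrease of μ to the merit function h(z) through β(z) and an Armijo-type line search, which is what the global convergence analysis of Section 4 relies on.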
The following theorem shows that Algorithm 3.1 is well-defined and generates an infinite sequence. Define the set
Theorem 3.1. Suppose F is a continuously differentiable P_0-function. Then Algorithm 3.1 is well-defined and generates an infinite sequence {z^k = (μ_k, x^k)} with
Proof. If μ_k > 0, then since F is a continuously differentiable P_0-function, it follows from Lemma 2.4 that the matrix H′(z^k) is nonsingular. Hence step S2 is well-defined at the k-th iteration. By (11) we have
which implies
where the second inequality follows from
Hence, by the first equation of (3.1), we get
From (2.1) and (2.4), we have
Let R^k(α) = h(z^k + αΔz^k) − h(z^k) − αh′(z^k)Δz^k. It is easy to see that R^k(α) = o(α). When
Then by (3.1), (3.2), (3.4) and (3.5), we have
Since
For α sufficiently small, we get
which shows that step S3 is well-defined at the k-th iteration. Therefore, Algorithm 3.1 is well-defined and generates an infinite sequence.
Next, we prove by induction that z^k ∈ Ω for all k ≥ 0. First, z^0 ∈ Ω is evident from the choice of the starting point. Second, suppose that z^k ∈ Ω; then by (13) we have
then
4. Convergence of Algorithm 3.1
In this section, we discuss the global convergence and local superlinear convergence of Algorithm 3.1. We need the following Lemma 4.1, which can be found in [15].
Lemma 4.1. Let ε > 0 and let the function
be defined by
Let {a^k}, {b^k} be any two sequences such that a^k, b^k → +∞, or a^k → −∞, or b^k → −∞. Then |φ(a^k, b^k)| → +∞.
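The growth property asserted in Lemma 4.1 is the usual coercivity of NCP-functions. The formula defining φ is not reproduced above, so the sketch below (our own illustration, not the paper's φ) demonstrates the same kind of behavior with the Fischer-Burmeister function: |φ_FB(a^k, b^k)| grows without bound both when a^k and b^k tend to +∞ together and when one of them tends to −∞.

```python
import math

def phi_fb(a, b):
    # Fischer-Burmeister function: a + b - sqrt(a^2 + b^2)
    return a + b - math.sqrt(a * a + b * b)

ts = [10.0 ** k for k in range(1, 7)]

# a^k = b^k -> +inf: phi_fb(t, t) = (2 - sqrt(2)) * t, so |phi_fb| -> +inf
both_up = [abs(phi_fb(t, t)) for t in ts]
# a^k -> -inf with b^k bounded: |phi_fb(-t, 1)| behaves like 2t
a_down = [abs(phi_fb(-t, 1.0)) for t in ts]

assert all(x < y for x, y in zip(both_up, both_up[1:]))
assert all(x < y for x, y in zip(a_down, a_down[1:]))
```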
Lemma 4.2. Let
be defined by
Assume that {a^k}, {b^k} are any two sequences such that a^k, b^k → +∞, or a^k → −∞, or b^k → −∞. Then
Proof. (i) Suppose that a^k → −∞. If {b^k} is bounded, then the result holds obviously; otherwise, if b^k → +∞, we have −a^k > 0 and b^k > 0 for all k sufficiently large, and hence
which, together with −a^k → +∞, implies that
(ii) For the case b^k → −∞, the result follows from the symmetry of the function
in a^k and b^k.
(iii) Suppose that a^k → +∞ and b^k → +∞. Then, for sufficiently large k,
hence,
By Lemma 4.1, we know that
Lemma 4.3. Let F be a continuous P_0-function and let Փ_θ(μ, x) be defined by (6). For any μ > 0 and c > 0, define the level set
Then, for any 0 < μ_1 ≤ μ_2 and c > 0, the set
is bounded.
Proof. Suppose, to the contrary, that L_μ(c) is unbounded. Then for some fixed c > 0 we can find a sequence {(μ_k, x^k)} such that μ_1 ≤ μ_k ≤ μ_2, ∥Փ_θ(μ_k, x^k)∥ ≤ c, and ∥x^k∥ → ∞. Since the sequence {x^k} is unbounded, the index set J := {i ∈ N :
is unbounded} is nonempty. Without loss of generality, we may assume that
be defined by
Then,
is bounded. Since F is a P_0-function, by Definition 2.2 we have
where j is one of the indices for which the max is attained; assuming, without loss of generality, that j is independent of k, we obtain
We consider the following two cases.
Case 1:
In this case, since
is bounded by the continuity of F_j, we deduce from Equation (4.3)
we have
By Lemma 4.2, we know that
Case 2:
In this case, since
is bounded by the continuity of F_j, we deduce from Equation (4.3)
for any k. Since μ_1 ≤ μ_k ≤ μ_2, we have
which, together with Lemma 4.2, gives
In either case, we obtain ∥Փ_θ(μ_k, x^k)∥ → +∞, which contradicts ∥Փ_θ(μ_k, x^k)∥ ≤ c. This completes the proof.
Corollary 4.3. Suppose that F is a P_0-function and μ > 0. Then the function ∥Փ_θ(μ, x)∥ is coercive, i.e., lim_{∥x∥→∞} ∥Փ_θ(μ, x)∥ = +∞.
Theorem 4.4. Suppose F is a continuously differentiable P_0-function, and let the sequence {z^k = (μ_k, x^k)} be generated by Algorithm 3.1. Then the sequence {z^k} is bounded, and any accumulation point z^∗ = (μ_∗, x^∗) of {z^k} is a solution of H(z) = 0.
Proof. Since h(z^k) is monotonically decreasing and bounded below by zero, the sequence ∥Փ_θ(z^k)∥ is bounded. By Corollary 4.3, we immediately obtain that {x^k} is bounded. Note that the boundedness of {h(z^k)} implies the boundedness of {μ_k}, so {z^k} is bounded. Without loss of generality, suppose z^k → z^∗. Then h(z^k) → h^∗ and β(z^k) → β^∗. If h^∗ = 0, we obtain the desired result. Now we prove h^∗ = 0 by contradiction. In fact, if h^∗ ≠ 0, then h^∗ > 0, so β^∗ = γ min{1, h^∗} > 0, and
It follows from Lemma 2.4 that H′(z^∗) is nonsingular. By the continuity of H′(z), there exists a closed neighborhood N(z^∗) of z^∗ such that for any z ∈ N(z^∗),
is invertible. So, for all sufficiently large k, z^k ∈ N(z^∗) and H′(z^k) is invertible. Let
be the unique solution of the following system:
It follows from the continuity of H and the definition of β(·) that {μ_k} and {β_k} converge to μ_∗ and β^∗, respectively. Together with (3.2), this implies that
Thus, for sufficiently large k, the stepsize
does not satisfy (3.2); then
which implies that
Taking limits on both sides of the inequalities (4.5), from (4.6) we have
This indicates that
we have σ ≥ 1, which contradicts
is a solution of H(μ, x) = 0.
Theorem 4.5. Suppose that F is a continuously differentiable P_0-function. Let z^∗ be an accumulation point of the iteration sequence {z^k} generated by Algorithm 3.1. If all V ∈ ∂H(z^∗) are nonsingular, then:
(1) λ_k ≡ 1 for all z^k sufficiently close to z^∗;
(2) the whole sequence {z^k} converges to z^∗;
(3) ∥z^{k+1} − z^∗∥ = o(∥z^k − z^∗∥) (or ∥z^{k+1} − z^∗∥ = O(∥z^k − z^∗∥²) if F′ is Lipschitz continuous on ℝ^n).
Proof. The proof is similar to that of Theorem 3.2 in [16].
5. Numerical experiments
In this section, we report some numerical results of Algorithm 3.1. All experiments are done on a PC with a 1.6 GHz CPU and 512 MB of RAM, and all codes are written in MATLAB 7.5. Throughout our computational experiments, the parameters used in the algorithm are chosen as
In our implementation, we use ∥H(z^k)∥ ≤ 10^{−6} as the stopping rule.
Example 5.1. Kojima-Shindo problem. This test problem, with four variables, was used by Pang and Gabriel [17], Mangasarian and Solodov [18], Kanzow [19], and Jiang and Qi [20]. Let
Table 1 gives the results for this example with starting points a_1 = (0, 0, 0, 1)^T, a_2 = (1, −2, 1, −2)^T, a_3 = (1, 2, 6, 8)^T.
Numerical results for Examples 5.1 to 5.4
Example 5.2. Josephy problem. This test problem, with four variables, was used by Dirkse and Ferris [22]. Let
Table 1 gives the results for this example with starting points a_1 = (2, −2, −2, −2)^T, a_2 = (2, 3, 4, 6)^T, a_3 = (0, 2, 0, 6)^T.
Example 5.3. Mathiesen problem. This test problem, with four variables, was used by Pang and Gabriel [17] and also tested by Kanzow [19]. Let
where α = 0.75, b_2 = 1, b_3 = 2. Table 1 gives the results for this example with starting points a_1 = (0.5, 0.5, 0.5, 2)^T, a_2 = (2, −2, −2, −2)^T, a_3 = (0, −2, −2, 0)^T.
Example 5.4. HS 34 problem. This test problem is from the book of Hock and Schittkowski [21]; its Karush-Kuhn-Tucker (KKT) optimality conditions lead to a complementarity problem of dimension 8. Let
Table 1 gives the results with starting points a_1 = (−1, −1, −1, 1, 1, 1, 1, 1)^T, a_2 = (0, 0, 0, 1, 1, 1, 1, 1)^T, a_3 = (1, 1, 1, −10, −10, −10, −10, −10)^T.
In Table 1, IT denotes the number of iterations; NF denotes the number of function evaluations; CPU denotes the CPU time (in seconds) for solving the underlying problem; and "−" denotes that the algorithm failed to find the optimizer, in the sense that the number of iterations exceeded 1000.
Table 1 shows that the best numerical results do not always occur for θ = 0 (in this case, the smoothing function is the one proposed by Huang et al. in [11]) or θ = 1 (in this case, the smoothing function is the one proposed by Huang et al. in [10]). This demonstrates that the new smoothing function introduced in this paper is worth investigating. Figures 1 and 2 below plot the convergence of the merit function h(z^k) versus the iteration number. The two figures show that for θ = 0.5 and θ = 0.75, h(z^k) decreases faster than for θ = 0 and θ = 1, which again demonstrates that the new smoothing function is worth investigating. The numerical experiments also demonstrate the feasibility and efficiency of the new algorithm. The newly proposed class of complementarity functions has the advantage that we can adjust the parameter θ to obtain an optimal solution to the NCP.
Figure 1. Convergence behavior of Example 5.3 with the initial point a_1
Figure 2. Convergence behavior of Example 5.3 with the initial point a_3
BIO
Jianguang Zhu received his Ph.D. degree in applied mathematics from Xidian University, China, in 2011. Since 2011, he has been a lecturer at Shandong University of Science and Technology, China. His research interests include optimization theory, algorithms, and applications in image processing.
School of Science, Shandong University of Science and Technology, Qingdao 266590, P. R. China.
email: jgzhu980@yahoo.com.cn
Binbin Hao received her Ph.D. degree in applied mathematics from Xidian University, China, in 2009. From 2008 to 2009, she was a research assistant with the Department of Computing, The Hong Kong Polytechnic University, Hong Kong. Since 2010, she has been a lecturer at China University of Petroleum, China. Her research interests include inverse problems in image processing, sparse signal representation, and super-resolution image reconstruction.
School of Science, China University of Petroleum, Qingdao 266580, P. R. China.
email: bbhao981@yahoo.com.cn
References
Harker, P.T., Pang, J.S. (1990). Finite-dimensional variational inequality and nonlinear complementarity problem: A survey of theory, algorithms and applications. Math. Program. 48, 161-220. DOI: 10.1007/BF01582255
Potra, F.A., Ye, Y. (1996). Interior-point methods for nonlinear complementarity problems. J. Optim. Theory Appl. 88(3), 617-642. DOI: 10.1007/BF02192201
Wright, S., Ralph, D. (1996). A superlinear infeasible-interior-point algorithm for monotone complementarity problems. Math. Oper. Res. 21(4), 815-838. DOI: 10.1287/moor.21.4.815
Hotta, K., Yoshise, A. (1999). Global convergence of a class of non-interior point algorithms using Chen-Harker-Kanzow-Smale functions for nonlinear complementarity problems. Math. Program. 86(1), 105-133. DOI: 10.1007/s101070050082
Qi, L., Sun, D., Zhou, G. (2000). A new look at smoothing Newton methods for nonlinear complementarity problems and box constrained variational inequalities. Math. Program. 87(1), 1-35.
Fang, L. (2010). A new one-step smoothing Newton method for nonlinear complementarity problem with P0-function. Appl. Math. Comput. 216, 1087-1095. DOI: 10.1016/j.amc.2010.02.001
Kanzow, C. (1996). Some noninterior continuation methods for linear complementarity problems. SIAM J. Matrix Anal. Appl. 17, 851-868. DOI: 10.1137/S0895479894273134
Chen, C., Mangasarian, O.L. (1996). A class of smoothing functions for nonlinear and mixed complementarity problems. Comput. Optim. Appl. 5, 97-138. DOI: 10.1007/BF00249052
Huang, Z.H., Han, J., Chen, Z. (2003). Predictor-corrector smoothing Newton method, based on a new smoothing function, for solving the nonlinear complementarity problem with a P0 function. J. Optim. Theory Appl. 117(1), 39-68. DOI: 10.1023/A:1023648305969
Huang, Z.H., Han, J., Xu, D.C., Zhang, L.P. (2001). The non-interior continuation methods for solving the P0 function nonlinear complementarity problem. Science in China 44(9), 1107-1114. DOI: 10.1007/BF02877427
Kojima, M., Megiddo, N., Noma, T. (1991). Homotopy continuation methods for nonlinear complementarity problems. Math. Oper. Res. 16, 754-774. DOI: 10.1287/moor.16.4.754
Chen, B., Harker, P.T. (1993). A non-interior continuation algorithm for linear complementarity problems. SIAM J. Matrix Anal. Appl. 14, 1168-1190. DOI: 10.1137/0614081
Kanzow, C. (1996). Global convergence properties of some iterative methods for linear complementarity problems. SIAM J. Optim. 6(1), 326-341. DOI: 10.1137/0806019
Huang, Z.H., Zhang, Y., Wu, W. (2008). A smoothing-type algorithm for solving system of inequalities. J. Comput. Appl. Math. 220(1), 355-363. DOI: 10.1016/j.cam.2007.08.024
Pang, J.S., Gabriel, S.A. (1993). NE/SQP: A robust algorithm for the nonlinear complementarity problem. Math. Program. 60, 295-337. DOI: 10.1007/BF01580617
Mangasarian, O.L., Solodov, M.V. (1993). Nonlinear complementarity as unconstrained and constrained minimization. Math. Program. 62, 277-297. DOI: 10.1007/BF01585171
Jiang, H., Qi, L. (1997). A new nonsmooth equations approach to nonlinear complementarity problems. SIAM J. Control Optim. 35, 178-193. DOI: 10.1137/S0363012994276494
Hock, W., Schittkowski, K. (1981). Test Examples for Nonlinear Programming Codes. Lecture Notes in Economics and Mathematical Systems 187, Springer-Verlag, Berlin, Germany.
Dirkse, S.P., Ferris, M.C. (1995). MCPLIB: A collection of nonlinear mixed complementarity problems. Optim. Meth. Soft. 5, 319-345. DOI: 10.1080/10556789508805619