Nonsmooth equations with finitely many maximum functions are often used in the study of complementarity problems, variational inequalities and many problems in engineering and mechanics. In this paper, we consider globally convergent methods for nonsmooth equations with finitely many maximum functions. The steepest descent method and the smoothing gradient method are used to solve these equations. In addition, the convergence analysis and applications are given. The numerical results for the smoothing gradient method indicate that the method works quite well in practice.
AMS Mathematics Subject Classification : 65K05, 90C30.
1. Introduction
Owing to their wide use in problems of image restoration, variable selection, stochastic equilibrium and optimal control, nonsmooth equations and their related problems have been studied by many authors (see [1]–[16]). In this paper, we consider the nonsmooth equations with finitely many maximum functions

max_{j∈J_{i}} f_{ij}(x) = 0, i = 1, . . . , n,    (1)

where x ∈ R^{n}, f_{ij} : R^{n} → R are continuously differentiable functions, j ∈ J_{i}, i = 1, . . . , n, and J_{i}, i = 1, . . . , n, are finite index sets. This system of nonsmooth equations with finitely many maximum functions has a concrete application background: complementarity problems, variational inequality problems and many problems in national defense, economics, finance, engineering and management lead to this system of equations (see, for instance, [9, 10]). Obviously, (1) is a system of semismooth equations. For simplicity, we denote

F_{i}(x) = max_{j∈J_{i}} f_{ij}(x), i = 1, . . . , n,    (2)

F(x) = (F_{1}(x), F_{2}(x), . . . , F_{n}(x))^{T}.    (3)
Thus, the equations (1) can be briefly written as

F(x) = 0.    (4)

The value function of F(x) is defined as

f(x) = (1/2)∥F(x)∥^{2}.    (5)

Then, (4) can be solved by solving the following problem

min_{x∈R^{n}} f(x).    (6)

We consider using the iterative method

x_{k+1} = x_{k} + α_{k}d_{k}

for solving (6), where α_{k} > 0 is the stepsize and d_{k} is a search direction.
This paper is organized as follows. In Section 2, when f is a smooth function, we present the steepest descent method for solving it and give its global convergence result. When f is a nonsmooth function, we call it a nondifferentiable problem; many papers (see, for instance, [4, 7, 8, 12, 13, 14, 15, 16]) deal with this problem. We give the smoothing gradient method for solving it and give the convergence analysis. In Section 3, we discuss the applications of the methods; this further illustrates that the system of nonsmooth equations with finitely many maximum functions is related, in theory, to solving optimization problems. In the last section, we discuss the application of the method to the related minimax optimization problem. The numerical results are also given.
Notation. Throughout the paper, ∥·∥ denotes the l_{2} norm, R_{+} = {x | x ≥ 0, x ∈ R}, and g_{k} denotes the gradient of f at x_{k}.
2. The methods and their convergence analysis
Case (I).
Firstly, when f is a smooth function, we give the steepest descent method for solving it. The steepest descent method is one of the most widely used methods for solving unconstrained optimization problems (see [11]).
Method 2.1
Step 1. Choose σ_{1} ∈ (0, 0.5), σ_{2} ∈ (σ_{1}, 1). Give an initial point x_{0} ∈ R^{n}. Let k := 0.
Step 2. Compute g_{k} = ∇f(x_{k}), let d_{k} = −g_{k}, and determine α_{k} by the Wolfe line search, where α_{k} = max{ρ^{0}, ρ^{1}, . . .} with ρ ∈ (0, 1) and ρ^{i} satisfying

f(x_{k} + ρ^{i}d_{k}) ≤ f(x_{k}) + σ_{1}ρ^{i}g_{k}^{T}d_{k}    (7)

and

∇f(x_{k} + ρ^{i}d_{k})^{T}d_{k} ≥ σ_{2}g_{k}^{T}d_{k}.    (8)

Set x_{k+1} = x_{k} + α_{k}d_{k}.
Step 3. Let k := k + 1, go to Step 2.
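As an illustrative sketch (not the paper's MATLAB code), Method 2.1 can be implemented in a few lines. The backtracking ratio ρ = 0.5, the tolerances, and the quadratic test function below are our assumptions for the demo:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def steepest_descent(f, grad, x0, sigma1=0.1, sigma2=0.9, rho=0.5,
                     tol=1e-8, max_iter=500):
    """Method 2.1: d_k = -g_k with the Wolfe conditions (7)-(8)."""
    x = list(map(float, x0))
    for _ in range(max_iter):
        g = grad(x)
        if math.sqrt(dot(g, g)) <= tol:
            break
        d = [-gi for gi in g]
        alpha = 1.0                     # alpha_k = max{rho^0, rho^1, ...}
        while alpha > 1e-16:
            x_new = [xi + alpha * di for xi, di in zip(x, d)]
            armijo = f(x_new) <= f(x) + sigma1 * alpha * dot(g, d)   # (7)
            curvature = dot(grad(x_new), d) >= sigma2 * dot(g, d)    # (8)
            if armijo and curvature:
                break
            alpha *= rho
        x = [xi + alpha * di for xi, di in zip(x, d)]
    return x

# Hypothetical smooth test problem: f(x) = ||x||^2 / 2, minimizer x* = 0.
x_star = steepest_descent(lambda x: 0.5 * dot(x, x), lambda x: list(x),
                          [3.0, -4.0])
```

For this quadratic test the full step α = 1 satisfies both (7) and (8), so the iteration reaches the minimizer immediately; on harder problems the backtracking loop selects a smaller α_{k}.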
The global convergence of Method 2.1 is given by the following theorem.
Theorem 2.1. Let {x_{k}} be generated by Method 2.1, and let f(x) be bounded below. For any x_{0} ∈ R^{n}, suppose that ∇f(x) exists and is uniformly continuous on the level set

L(x_{0}) = {x ∈ R^{n} | f(x) ≤ f(x_{0})}.

Then we have

lim_{k→∞} ∥g_{k}∥ = 0.
Proof. Suppose that the theorem is not true. Then there exist an ε > 0 and a subsequence (we still denote its index by k) such that

∥g_{k}∥ ≥ ε > 0.

Since d_{k} is a descent direction, (7) shows that {f(x_{k})} is a monotonically decreasing sequence. Since f(x_{k}) is bounded below, the limit of f(x_{k}) exists. Thus, we have

f(x_{k}) − f(x_{k+1}) → 0.

Set s_{k} = α_{k}d_{k}. From (7), we know that

f(x_{k}) − f(x_{k+1}) ≥ −σ_{1}g_{k}^{T}s_{k}.

Since the angle between d_{k} and −g_{k} is θ_{k} = 0, we have −g_{k}^{T}s_{k} = ∥g_{k}∥∥s_{k}∥. Then

σ_{1}∥g_{k}∥∥s_{k}∥ ≤ f(x_{k}) − f(x_{k+1}) → 0.

Note that ∥g_{k}∥ ≥ ε > 0; hence we must have ∥s_{k}∥ → 0. And because ∇f(x) is uniformly continuous on the level set, we have

∥∇f(x_{k} + s_{k}) − g_{k}∥ → 0.

That is,

(∇f(x_{k} + s_{k}) − g_{k})^{T}d_{k}/∥d_{k}∥ → 0.

On the other hand, (8) and d_{k} = −g_{k} imply (∇f(x_{k} + s_{k}) − g_{k})^{T}d_{k}/∥d_{k}∥ ≥ (1 − σ_{2})∥g_{k}∥ ≥ (1 − σ_{2})ε > 0, which contradicts the above since σ_{2} < 1. So we have

lim_{k→∞} ∥g_{k}∥ = 0.

That is, the theorem holds.
Case (II).
Now let f be locally Lipschitz continuous but not necessarily differentiable. The generalized gradient of f at x is defined by

∂f(x) = conv{ lim_{x_{i}→x, x_{i}∈D_{f}} ∇f(x_{i}) },

where "conv" denotes the convex hull of a set and D_{f} is the set of points at which f is differentiable.
Firstly, we introduce the definition of a smoothing function.
Definition 2.2 ([3]). Let f : R^{n} → R be a continuous function. We call f̃ : R^{n} × R_{+} → R a smoothing function of f if f̃(·, μ) is continuously differentiable in R^{n} for any fixed μ > 0 and

lim_{z→x, μ↓0} f̃(z, μ) = f(x)

for any x ∈ R^{n}.
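As a concrete illustration of Definition 2.2 (our own example, not from [3]), f̃(t, μ) = sqrt(t² + μ²) is a smoothing function of f(t) = |t|: it is smooth for every fixed μ > 0, and 0 ≤ f̃(t, μ) − |t| ≤ μ, so f̃(z, μ) → f(x) as z → x and μ ↓ 0. A quick numerical check:

```python
import math

def f(t):
    return abs(t)                       # nonsmooth at t = 0

def f_tilde(t, mu):
    # Continuously differentiable in t for every fixed mu > 0.
    return math.sqrt(t * t + mu * mu)

# The worst-case gap f_tilde - f is attained at t = 0 and equals mu,
# so it vanishes as mu decreases to 0.
gaps = [max(f_tilde(t, mu) - f(t) for t in (-1.0, -0.1, 0.0, 0.1, 1.0))
        for mu in (1.0, 0.1, 0.01)]
```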
In the following, we present a smoothing gradient algorithm for (6).
Method 2.2
Step 1. Choose σ_{1} ∈ (0, 0.5), σ_{2} ∈ (σ_{1}, 1), γ > 0, γ_{1} ∈ (0, 1). Give an initial point x_{0} ∈ R^{n} and μ_{0} > 0. Let k := 0.
Step 2. Compute

g_{k} = ∇_{x}f̃(x_{k}, μ_{k}),    (9)

let d_{k} = −g_{k}, and determine α_{k} by the Wolfe line search, where α_{k} = max{ρ^{0}, ρ^{1}, . . .} with ρ ∈ (0, 1) and ρ^{i} satisfying

f̃(x_{k} + ρ^{i}d_{k}, μ_{k}) ≤ f̃(x_{k}, μ_{k}) + σ_{1}ρ^{i}g_{k}^{T}d_{k}

and

∇_{x}f̃(x_{k} + ρ^{i}d_{k}, μ_{k})^{T}d_{k} ≥ σ_{2}g_{k}^{T}d_{k}.

Set x_{k+1} = x_{k} + α_{k}d_{k}.
Step 3. If

∥∇_{x}f̃(x_{k+1}, μ_{k})∥ ≥ γμ_{k},    (10)

then set μ_{k+1} = μ_{k}; otherwise, μ_{k+1} = γ_{1}μ_{k}.
Step 4. Let k := k + 1, go to Step 2.
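A one-dimensional Python sketch of Method 2.2 follows (illustrative only; the parameter values and the test problem f(x) = |x| with the smoothing function f̃(x, μ) = sqrt(x² + μ²) are our assumptions for the demo, not the paper's setup):

```python
import math

def smoothing_gradient(ft, gt, x0, mu0=1.0, sigma1=0.1, sigma2=0.9,
                       gamma=1.0, gamma1=0.5, rho=0.5, iters=200):
    """Method 2.2 in 1-D: gradient steps on ft(., mu_k); Step 3 keeps
    mu_k while the smoothed gradient is large, otherwise shrinks it."""
    x, mu = float(x0), float(mu0)
    for _ in range(iters):
        g = gt(x, mu)
        d = -g
        alpha = 1.0                      # alpha_k = max{rho^0, rho^1, ...}
        while alpha > 1e-16:
            if (ft(x + alpha * d, mu) <= ft(x, mu) + sigma1 * alpha * g * d
                    and gt(x + alpha * d, mu) * d >= sigma2 * g * d):
                break
            alpha *= rho
        x = x + alpha * d
        if abs(gt(x, mu)) < gamma * mu:  # Step 3: shrink the smoothing
            mu = gamma1 * mu             # parameter only when needed
    return x, mu

# Hypothetical problem: minimize f(x) = |x| via ft(x, mu) = sqrt(x^2 + mu^2).
ft = lambda x, mu: math.sqrt(x * x + mu * mu)
gt = lambda x, mu: x / math.sqrt(x * x + mu * mu)
x_star, mu_final = smoothing_gradient(ft, gt, x0=2.0)
```

The iterates approach the nonsmooth minimizer x* = 0 while μ_{k} is driven to 0, matching the two conclusions of Theorem 2.3 below.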
Then, we give the convergence result of Method 2.2.
Theorem 2.3. Suppose that f̃ is a smoothing function of f. If, for any fixed μ > 0, f̃(·, μ) satisfies the conditions on f in Theorem 2.1, then {x_{k}} generated by Method 2.2 satisfies

lim_{k→∞} μ_{k} = 0

and

lim inf_{k→∞} ∥∇_{x}f̃(x_{k}, μ_{k})∥ = 0.
Proof. Define K = {k | μ_{k+1} = γ_{1}μ_{k}}. If K is a finite set, then there exists an integer k̄ such that for all k ≥ k̄,

μ_{k} = μ_{k̄} =: μ̄ > 0.

Then

∥∇_{x}f̃(x_{k+1}, μ̄)∥ ≥ γμ̄ for all k ≥ k̄

in Step 3 of Method 2.2. Since f̃ is a smoothing function, Method 2.2 reduces to solving

min_{x∈R^{n}} f̃(x, μ̄).

Hence, from Theorem 2.1, we can deduce that

lim_{k→∞} ∥∇_{x}f̃(x_{k}, μ̄)∥ = 0,

which contradicts (10). This shows that K must be infinite, and hence

lim_{k→∞} μ_{k} = 0.

Since K is infinite, we can assume that K = {k_{0}, k_{1}, . . .}, where k_{0} < k_{1} < . . .. Then we have

∥∇_{x}f̃(x_{k_{i}+1}, μ_{k_{i}})∥ < γμ_{k_{i}} → 0, i → ∞.

We get the theorem.
From Theorem 2.3 above and the gradient consistency discussion in [3, 6], we can get the following result.
Theorem 2.4. Any accumulation point x^{*} of {x_{k}} generated by Method 2.2 is a Clarke stationary point, that is,

0 ∈ ∂f(x^{*}).
3. The applications of the methods
 3.1. Application in solving generalized complementarity problem.
Consider the generalized complementarity problem (GCP) as in [5]: find an x ∈ R^{n} such that

F(x) ≥ 0, G(x) ≥ 0, F(x)^{T}G(x) = 0,    (11)

where F = (F_{1}, F_{2}, . . . , F_{n})^{T}, G = (G_{1}, G_{2}, . . . , G_{n})^{T}, and F_{i} : R^{n} → R (i = 1, . . . , n), G_{i} : R^{n} → R (i = 1, . . . , n) are continuously differentiable functions.
Solving (11) is equivalent to solving the following equations

min(F_{i}(x), G_{i}(x)) = 0, i = 1, . . . , n.    (12)

By min(x, y) = x − (x − y)_{+}, we know that (12) is equivalent to

F_{i}(x) − (F_{i}(x) − G_{i}(x))_{+} = 0, i = 1, . . . , n.    (13)
Let ρ : R → R_{+} be a piecewise continuous density function satisfying

ρ(s) = ρ(−s) and ∫_{−∞}^{∞} |s|ρ(s)ds < ∞.

Then, for any fixed μ > 0, the continuous function

φ(t, μ) = ∫_{−∞}^{∞} (t − μs)_{+}ρ(s)ds    (14)

satisfies

|φ(t, μ) − (t)_{+}| ≤ μ∫_{−∞}^{∞} |s|ρ(s)ds

for any t ∈ R. From the definition of smoothing function, we know that φ(·, μ) is a smoothing function of (t)_{+}.
Choose

ρ(s) = e^{−s}/(1 + e^{−s})^{2}.

Then

φ(t, μ) = t + μ ln(1 + e^{−t/μ})    (15)

is a smoothing function of (t)_{+}. Then, letting t = F_{i}(x) − G_{i}(x), i = 1, . . . , n, we know that the smoothing function of F_{i}(x) − (F_{i}(x) − G_{i}(x))_{+} is

F_{i}(x) − φ(F_{i}(x) − G_{i}(x), μ).

So, we can transform (13) into

F_{i}(x) − φ(F_{i}(x) − G_{i}(x), μ) = 0, i = 1, . . . , n.    (16)

Then, we can use Method 2.2 to solve (16).
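As a toy illustration of the reformulation (the one-dimensional data F(x) = x − 1, G(x) = x are made up, and the Chen-Harker-Kanzow-Smale function φ(t, μ) = (t + sqrt(t² + 4μ²))/2 is used here as a stand-in smoothing of (t)_{+}, not the convolution-based choice above):

```python
import math

def plus(t):
    return max(t, 0.0)

def phi(t, mu):
    # CHKS smoothing of (t)_+: smooth in t, with |phi - (t)_+| <= mu.
    return 0.5 * (t + math.sqrt(t * t + 4.0 * mu * mu))

# Hypothetical 1-D GCP: F(x) = x - 1, G(x) = x; x* = 1 solves
# min(F(x), G(x)) = 0, since F(1) = 0 and G(1) = 1 >= 0.
F = lambda x: x - 1.0
G = lambda x: x

def residual(x, mu):
    # Smoothed form of equation (13): F(x) - phi(F(x) - G(x), mu).
    return F(x) - phi(F(x) - G(x), mu)

exact = F(1.0) - plus(F(1.0) - G(1.0))        # equation (13) at x* = 1
errs = [abs(residual(1.0, mu)) for mu in (1.0, 0.1, 0.01)]
```

The exact residual of (13) vanishes at the solution, and the smoothed residual at x* = 1 shrinks with μ, which is what Method 2.2 exploits as it drives μ_{k} to 0.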
 3.2. Application in solving linear maximum equations.
Here, we consider the equations of maximum functions in [16]. Let F : R → R be a finite maximum of functions,

F(t) = max_{i=1,...,m} f_{i}(t),

where each f_{i} : R → R is affine linear,

f_{i}(t) = p_{i}t + q_{i},

and p_{i}, q_{i} ∈ R (i = 1, . . . , m, m ∈ N) are scalars. From the affine structure of F, we know that F is Lipschitz and convex. We generally assume

p_{1} < p_{2} < . . . < p_{m},

and that there exist −∞ = t_{1} < t_{2} < . . . < t_{m} < t_{m+1} = ∞ such that

F(t) = f_{i}(t), t ∈ [t_{i}, t_{i+1}], i = 1, . . . , m,

that is, F coincides with the i-th affine piece on the i-th interval.
For the above linear affine equations of maximum functions, the smoothing function can be defined as follows. Let ρ : R → R be a piecewise continuous density function such that

ρ(s) = ρ(−s)

and

∫_{−∞}^{∞} |s|ρ(s)ds < ∞.

We define the distribution function that goes with ρ by

P(t) = ∫_{−∞}^{t} ρ(s)ds.

Similar to [2], we can find the smoothing function F̃(t, μ) of this special system of equations of maximum functions by convolution,

F̃(t, μ) = ∫_{−∞}^{∞} F(t − μs)ρ(s)ds.

For this linear affine finite system of equations of maximum functions,

F(t) = 0,

using the above convolution we can transform it into

F̃(t, μ) = 0,

and we can use Method 2.2 to solve it.
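A numerical sanity check of the convolution smoothing follows (the piecewise-linear coefficients below and the uniform density ρ(s) = 1/2 on [−1, 1] are our own illustrative choices):

```python
# F(t) = max_i (p_i t + q_i) with hypothetical coefficients (p_i, q_i).
P = [(-1.0, 0.0), (0.0, 0.0), (1.0, -1.0)]

def F(t):
    return max(p * t + q for p, q in P)

def F_tilde(t, mu, n=2000):
    # Convolution F_tilde(t, mu) = integral of F(t - mu*s) * rho(s) ds
    # with rho(s) = 1/2 on [-1, 1], evaluated by a midpoint rule.
    h = 2.0 / n
    return sum(F(t - mu * (-1.0 + (j + 0.5) * h)) * 0.5 * h
               for j in range(n))

# The gap F_tilde - F is largest near the kinks of F (here t = 0 and
# t = 1) and shrinks proportionally to mu.
gaps = [max(F_tilde(t, mu) - F(t) for t in (-2.0, -0.5, 0.0, 0.5, 1.0, 2.0))
        for mu in (1.0, 0.1, 0.01)]
```

Since F is convex, the convolution satisfies F̃(t, μ) ≥ F(t), and the gap is O(μ), consistent with Definition 2.2.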
4. Application in related minimax optimization problem
In this section, we consider the minimax optimization problem (see [15])

min_{x∈R^{n}} f(x),

where f(x) = max_{i=1,...,m} f_{i}(x) and f_{1}(x), . . . , f_{m}(x) : R^{n} → R are twice continuously differentiable functions. Minimax problems are widely used in engineering design, optimal control, circuit design and computer-aided design. Usually, minimax problems are approached either by reformulating them into smooth problems with constraints or by dealing with the nonsmooth objective directly.
In this paper, we also use the smoothing function (see, for instance, [15])

f̃(x, μ) = μ ln Σ_{i=1}^{m} e^{f_{i}(x)/μ}

to approximate the function f(x). In the following, we can see from the numerical results that using Method 2.2 to solve the minimax optimization problem works quite well. We use the examples in [4]. All codes are written in MATLAB 8.0. Throughout our computational experiments, the parameters used in Method 2.2 are chosen as
In our implementation, we use ∥Δx∥ ≤ 10^{−5} as the stopping rule. In the tables, x_{0} is the initial point, x^{*} is the optimal value point, f(x^{*}) is the optimal value, and k is the number of iterations.
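The exponential smoothing above can be checked numerically. Here is a hedged Python sketch on a toy instance of our own, f(x) = max(x², (x − 2)²) with minimizer x* = 1 (not one of the examples from [4]); it verifies the standard sandwich bound f(x) ≤ f̃(x, μ) ≤ f(x) + μ ln m and locates the smoothed minimizer by grid search:

```python
import math

fs = [lambda x: x * x, lambda x: (x - 2.0) ** 2]   # hypothetical f_i

def f_max(x):
    return max(fi(x) for fi in fs)

def f_tilde(x, mu):
    # Exponential smoothing: mu * ln(sum_i exp(f_i(x)/mu)).
    top = f_max(x)                  # shift by the max for stability
    return top + mu * math.log(sum(math.exp((fi(x) - top) / mu)
                                   for fi in fs))

# Sandwich bound: 0 <= f_tilde - f_max <= mu * ln(number of pieces).
bound_ok = all(
    0.0 <= f_tilde(x, mu) - f_max(x) <= mu * math.log(len(fs)) + 1e-12
    for x in (-1.0, 0.0, 1.0, 3.0) for mu in (1.0, 0.1, 0.01))

# Grid-search minimizer of the smoothed objective for small mu.
grid = [i / 100.0 for i in range(-100, 301)]
x_best = min(grid, key=lambda x: f_tilde(x, 0.01))
```

The bound shows why driving μ_{k} to 0 in Method 2.2 recovers the original minimax problem in the limit.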
Example 4.1 ([4]).
where
Numerical results for Example 4.1.
Example 4.2 ([4]).
where
Consider the following nonlinear programming problem as in [4].
Numerical results for Example 4.2.
Bandler and Charalambous (see [1]) proved that, for sufficiently large α_{i}, the optimum of the nonlinear programming problem coincides with that of the following minimax function:
where
Example 4.3 (Rosen-Suzuki Problem).
Here, we use
The numerical results for Example 4.3 are listed in Table 4.3.
Numerical results for Example 4.3.
BIO
Deyan Pang received her BS from Jining University, China in 2012. Since 2012 she has been at Qingdao University, China. Her research interests include optimization, nonsmooth equations.
College of Mathematics, Qingdao University, Qingdao 266071, China.
email:mm070348@126.com
Jingjie Ju received her BS from Zaozhuang University, China in 2012. Since 2012 she has been at Qingdao University, China. Her research interests include optimization, complementarity problems.
College of Mathematics, Qingdao University, Qingdao 266071, China.
email:jjj3.28@163.com
Shouqiang Du received his PhD and is an associate professor. Since 2003 he has been at the College of Mathematics, Qingdao University. His research interests include numerical optimization and nonsmooth optimization.
College of Mathematics, Qingdao University, Qingdao, 266071, China.
email:dsq8933@163.com;sqdu@qdu.edu.cn
References

Bandler J.W., Charalambous C. (1974), Nonlinear programming using minimax techniques, J. Optim. Theory Appl. 13, 607-619. DOI: 10.1007/BF00933620
Burke J.V., Hoheisel T., Kanzow C. (2013), Gradient consistency for integral-convolution smoothing functions, Set-Valued and Variational Anal. 21, 359-376. DOI: 10.1007/s11228-013-0235-6
Charalambous C., Conn A.R. (1978), An efficient method to solve the minimax problem directly, SIAM J. Numer. Anal. 15, 162-187. DOI: 10.1137/0715011
Chen X., Qi L. (1994), A parameterized Newton method and a quasi-Newton method for nonsmooth equations, Comput. Optim. Appl. 3, 157-179. DOI: 10.1007/BF01300972
Chen X., Zhou W. (2010), Smoothing nonlinear conjugate gradient method for image restoration using nonsmooth nonconvex minimization, SIAM J. Imaging Sci. 3, 765-790. DOI: 10.1137/080740167
Demyanov V.F., Malozemov V.N. (1974), Introduction to Minimax, Wiley, New York.
Pardalos P.M., Du D.Z. (1995), Minimax and Applications, Kluwer Academic Publishers, Dordrecht.
Gao Y. (2008), Nonsmooth Optimization, Science Press, Beijing.
Ma C. (2010), Optimization Method and its Matlab Program Design, Science Press, Beijing.
Polak E. (1987), On the mathematical foundations of nondifferentiable optimization, SIAM Review 29, 21-89. DOI: 10.1137/1029002
Polak E., Higgins J.E., Mayne D.Q. (1994), A barrier function method for minimax problems, Math. Program. 64, 277-294. DOI: 10.1007/BF01582577
Peng J.M., Lin Z. (1999), A non-interior continuation method for generalized linear complementarity problems, Math. Program. 86, 533-563. DOI: 10.1007/s101070050104
Zhu D. (2008), Affine scaling interior Levenberg-Marquardt method for bound-constrained semismooth equations under local error bound conditions, J. Comput. Appl. Math. 219, 198-225. DOI: 10.1016/j.cam.2007.07.039