GLOBAL CONVERGENCE METHODS FOR NONSMOOTH EQUATIONS WITH FINITELY MANY MAXIMUM FUNCTIONS AND THEIR APPLICATIONS†
Journal of Applied Mathematics & Informatics. 2014. Sep, 32(5_6): 609-619
Copyright © 2014, Korean Society of Computational and Applied Mathematics
  • Received : May 11, 2014
  • Accepted : June 14, 2014
  • Published : September 30, 2014
DEYAN PANG
JINGJIE JU
SHOUQIANG DU

Abstract
Nonsmooth equations with finitely many maximum functions are often used in the study of complementarity problems, variational inequalities, and many problems in engineering and mechanics. In this paper, we consider global convergence methods for nonsmooth equations with finitely many maximum functions. The steepest descent method and the smoothing gradient method are used to solve such equations. In addition, the convergence analysis and applications are also given. The numerical results for the smoothing gradient method indicate that the method works quite well in practice.
AMS Mathematics Subject Classification: 65K05, 90C30.
1. Introduction
Because of their wide use in problems of image restoration, variable selection, stochastic equilibrium, and optimal control, nonsmooth equations and their related problems have been widely studied by many authors (see [1-16]). In this paper, we consider the nonsmooth equations with finitely many maximum functions
$$\max_{j \in J_i} f_{ij}(x) = 0, \quad i = 1, \dots, n, \tag{1}$$
where x ∈ R^n, f_ij : R^n → R are continuously differentiable functions, j ∈ J_i, and J_i, i = 1, . . . , n, are finite index sets. This system of nonsmooth equations with finitely many maximum functions has specific application backgrounds: complementarity problems, variational inequality problems, and many problems in national defense, economics, finance, engineering, and management lead to such systems (see, for instance, [9-10]). Obviously, (1) is a system of semismooth equations. For simplicity, we denote
$$F_i(x) = \max_{j \in J_i} f_{ij}(x), \quad i = 1, \dots, n, \tag{2}$$
$$F(x) = (F_1(x), \dots, F_n(x))^T, \tag{3}$$
$$F(x) = \Big(\max_{j \in J_1} f_{1j}(x), \dots, \max_{j \in J_n} f_{nj}(x)\Big)^T. \tag{4}$$
Thus, the equations (1) can be briefly written as
$$F(x) = 0. \tag{5}$$
The value function of F ( x ) is defined as
$$f(x) = \frac{1}{2}\|F(x)\|^2.$$
Then, (5) can be solved by solving the following problem
$$\min_{x \in R^n} f(x). \tag{6}$$
We consider using the iterative method for solving (6)
$$x_{k+1} = x_k + \alpha_k d_k,$$
where α_k > 0 is the stepsize and d_k is a search direction.
This paper is organized as follows. In Section 2, when f is a smooth function, we present the steepest descent method for solving it and give its global convergence result. When f is nonsmooth, the problem is nondifferentiable; many papers (see, for instance, [4, 7, 8, 12, 13, 14, 15, 16]) deal with this case. We give the smoothing gradient method for solving it together with its convergence analysis. In Section 3, we discuss applications of the methods, which further illustrates that the system of nonsmooth equations with finitely many maximum functions is related to optimization in theory. In the last section, we discuss the application of the method to related minimax optimization. Numerical results are also given.
Notation. Throughout the paper, ∥·∥ denotes the l_2 norm, R_+ = {x | x ≥ 0, x ∈ R}, and g_k denotes the gradient of f at x_k.
2. The methods and their convergence analysis
Case (I). First, when f is a smooth function, we give the steepest descent method for solving it. The steepest descent method is one of the most widely used methods for unconstrained optimization (see [11]).
- Method 2.1
Step 1. Choose σ_1 ∈ (0, 0.5), σ_2 ∈ (σ_1, 1). Give an initial point x_0 ∈ R^n. Let k := 0.
Step 2. Compute g_k = ∇f(x_k) and let d_k = −g_k. Determine α_k by the Wolfe line search, where α_k = max{ρ^0, ρ^1, . . .} for a fixed ρ ∈ (0, 1), with α_k = ρ^i satisfying
$$f(x_k + \alpha_k d_k) \le f(x_k) + \sigma_1 \alpha_k g_k^T d_k \tag{7}$$
and
$$\nabla f(x_k + \alpha_k d_k)^T d_k \ge \sigma_2 g_k^T d_k. \tag{8}$$
Set x_{k+1} = x_k + α_k d_k.
Step 3. Let k := k + 1, go to step 2.
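The steps of Method 2.1 can be sketched in a few lines. The sketch below uses Python rather than the paper's MATLAB, and the values of σ_1, σ_2, the backtracking factor ρ, the tolerance, and the test function are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def steepest_descent(f, grad, x0, sigma1=0.1, sigma2=0.4, rho=0.5,
                     tol=1e-6, max_iter=1000):
    """Method 2.1 sketch: d_k = -g_k with a backtracking Wolfe-type search.

    alpha_k is the largest rho**i (i = 0, 1, ...) satisfying both the
    sufficient-decrease condition (7) and the curvature condition (8);
    rho, tol and max_iter are illustrative choices."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= tol:
            break
        d = -g
        gTd = float(g @ d)                # = -||g_k||^2 < 0
        alpha = 1.0
        for _ in range(60):               # try rho^0, rho^1, ...
            x_new = x + alpha * d
            if (f(x_new) <= f(x) + sigma1 * alpha * gTd
                    and grad(x_new) @ d >= sigma2 * gTd):
                break
            alpha *= rho
        x = x + alpha * d
    return x

# usage: minimize the smooth quadratic f(x) = ||x - 1||^2 / 2
f = lambda x: 0.5 * np.sum((x - 1.0) ** 2)
grad = lambda x: x - 1.0
x_star = steepest_descent(f, grad, np.zeros(3))
```

On this strongly convex quadratic the first trial step α = 1 already satisfies both Wolfe conditions, so the iteration reaches the minimizer immediately.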
The global convergence of the Method 2.1 is given by the following theorem.
Theorem 2.1. Let {x_k} be generated by Method 2.1, let f(x) be bounded below, and suppose that for any x_0 ∈ R^n, ∇f(x) exists and is uniformly continuous on the level set
$$L(x_0) = \{x \in R^n \mid f(x) \le f(x_0)\}.$$
Then we have
$$\liminf_{k \to \infty} \|g_k\| = 0.$$
Proof. Suppose the theorem is not true. Then there exist ε > 0 and a subsequence (we still denote its indices by k) such that
$$\|g_k\| \ge \varepsilon > 0.$$
Since d_k is a descent direction, (7) implies that {f(x_k)} is a monotonically decreasing sequence. Since f(x_k) is bounded below, the limit of f(x_k) exists. Thus, we have
$$f(x_k) - f(x_{k+1}) \to 0, \quad k \to \infty.$$
Set s_k = α_k d_k. From (7), we know that
$$f(x_k) - f(x_{k+1}) \ge -\sigma_1 g_k^T s_k.$$
Since the angle between d_k and −g_k is θ_k = 0, we have
$$f(x_k) - f(x_{k+1}) \ge \sigma_1 \|g_k\| \|s_k\| \cos\theta_k = \sigma_1 \|g_k\| \|s_k\|.$$
Note that ∥g_k∥ ≥ ε > 0; hence we must have ∥s_k∥ → 0.
Since ∇f(x) is uniformly continuous on the level set, we have
$$\|\nabla f(x_{k+1}) - \nabla f(x_k)\| \to 0, \quad k \to \infty.$$
That is
$$\frac{(\nabla f(x_{k+1}) - g_k)^T d_k}{\|d_k\|} \to 0, \quad k \to \infty.$$
This contradicts (8) and σ_2 < 1. So we have
$$\liminf_{k \to \infty} \|g_k\| = 0.$$
That is
$$\liminf_{k \to \infty} \|\nabla f(x_k)\| = 0. \qquad \Box$$
Case (II). Now suppose f is locally Lipschitz continuous but not necessarily differentiable. The generalized gradient of f at x is defined by
$$\partial f(x) = \operatorname{conv}\Big\{\lim_{x_i \to x,\, x_i \in D_f} \nabla f(x_i)\Big\},$$
where "conv" denotes the convex hull of a set and D_f is the set of points at which f is differentiable.
Firstly, we introduce the definition of smoothing function.
Definition 2.2 ([3]). Let f : R^n → R be a continuous function. We call
$$\tilde{f} : R^n \times R_+ \to R$$
a smoothing function of f if $\tilde{f}(\cdot, \mu)$ is continuously differentiable in R^n for any fixed μ > 0 and
$$\lim_{z \to x,\ \mu \downarrow 0} \tilde{f}(z, \mu) = f(x)$$
for any x ∈ R^n.
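Definition 2.2 can be made concrete with a textbook example that is not from the paper: $\tilde f(x, \mu) = \sqrt{x^2 + \mu^2}$ is a smoothing function of f(x) = |x|, since it is continuously differentiable for every fixed μ > 0 and 0 ≤ f̃(x, μ) − |x| ≤ μ uniformly in x.

```python
import numpy as np

# Illustrative smoothing function (assumed example, not from the paper):
# f(x) = |x|,  f_tilde(x, mu) = sqrt(x^2 + mu^2).
f = lambda x: abs(x)
f_tilde = lambda x, mu: np.sqrt(x**2 + mu**2)

# f_tilde(., mu) is continuously differentiable for mu > 0, and
# 0 <= f_tilde(x, mu) - |x| <= mu, so f_tilde(z, mu) -> f(x)
# as z -> x and mu -> 0+, matching Definition 2.2.
gaps = {mu: max(f_tilde(x, mu) - f(x) for x in np.linspace(-2.0, 2.0, 401))
        for mu in (1.0, 0.1, 1e-3)}
```

The largest gap is attained at x = 0, where it equals μ, so the approximation error vanishes uniformly as μ ↓ 0.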
In the following, we present a smoothing gradient algorithm for (6).
- Method 2.2
Step 1. Choose σ_1 ∈ (0, 0.5), σ_2 ∈ (σ_1, 1), γ > 0, γ_1 ∈ (0, 1). Give an initial point x_0 ∈ R^n and an initial smoothing parameter μ_0 > 0. Let k := 0.
Step 2. Compute
$$g_k = \nabla_x \tilde{f}(x_k, \mu_k),$$
let d_k = −g_k, and determine α_k by the Wolfe line search, where α_k = max{ρ^0, ρ^1, . . .} for a fixed ρ ∈ (0, 1), with α_k = ρ^i satisfying
$$\tilde{f}(x_k + \alpha_k d_k, \mu_k) \le \tilde{f}(x_k, \mu_k) + \sigma_1 \alpha_k g_k^T d_k \tag{9}$$
and
$$\nabla_x \tilde{f}(x_k + \alpha_k d_k, \mu_k)^T d_k \ge \sigma_2 g_k^T d_k.$$
Set x_{k+1} = x_k + α_k d_k.
Step 3. If
$$\|\nabla_x \tilde{f}(x_{k+1}, \mu_k)\| \ge \gamma \mu_k, \tag{10}$$
then set μ_{k+1} = μ_k; otherwise, set μ_{k+1} = γ_1 μ_k.
Step 4. Let k := k + 1, go to Step 2.
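For concreteness, Method 2.2 can be sketched as follows. Python stands in for the paper's MATLAB; the parameter values, the cap on backtracking trials (with a plain Armijo fallback when the curvature condition cannot be met within the budget), and the test function $\tilde f(x, \mu) = \sqrt{\|x\|^2 + \mu^2}$, a smoothing of |x|, are all illustrative assumptions.

```python
import numpy as np

def smoothing_gradient(f_tilde, grad_tilde, x0, mu0=1.0,
                       sigma1=0.1, sigma2=0.4, gamma=1.0, gamma1=0.5,
                       rho=0.5, max_iter=500):
    """Method 2.2 sketch: gradient steps on the smoothed function
    f_tilde(., mu_k), shrinking mu_k by gamma1 whenever the smoothed
    gradient falls below gamma * mu_k (Step 3)."""
    x, mu = np.asarray(x0, dtype=float), mu0
    for _ in range(max_iter):
        g = grad_tilde(x, mu)
        d = -g
        gTd = float(g @ d)
        alpha = 1.0
        for _trial in range(60):                 # Wolfe-type backtracking
            armijo = (f_tilde(x + alpha * d, mu)
                      <= f_tilde(x, mu) + sigma1 * alpha * gTd)
            if armijo and grad_tilde(x + alpha * d, mu) @ d >= sigma2 * gTd:
                break
            alpha *= rho
        else:
            # pragmatic safeguard (our assumption, not in the paper):
            # fall back to a plain Armijo step if the curvature condition
            # was never met within the backtracking budget
            alpha = 1.0
            while f_tilde(x + alpha * d, mu) > f_tilde(x, mu) + sigma1 * alpha * gTd:
                alpha *= rho
        x = x + alpha * d
        # Step 3: shrink mu only when the smoothed gradient is small
        if np.linalg.norm(grad_tilde(x, mu)) < gamma * mu:
            mu *= gamma1
    return x, mu

# usage: minimize f(x) = |x| via f_tilde(x, mu) = sqrt(x^2 + mu^2)
f_tilde = lambda x, mu: np.sqrt(x @ x + mu**2)
grad_tilde = lambda x, mu: x / np.sqrt(x @ x + mu**2)
x_star, mu_final = smoothing_gradient(f_tilde, grad_tilde, np.array([2.0]))
```

Step 3's test drives μ_k to zero only as fast as the smoothed gradients decay, which is exactly the mechanism used in the proof of Theorem 2.3.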
Then, we give the convergence result of Method 2.2.
Theorem 2.3. Suppose that $\tilde{f}(x, \mu)$ is a smoothing function of f. If, for every fixed μ > 0, $\tilde{f}(\cdot, \mu)$ satisfies the conditions of Theorem 2.1, then the sequence {x_k} generated by Method 2.2 satisfies
$$\lim_{k \to \infty} \mu_k = 0$$
and
$$\liminf_{k \to \infty} \|\nabla_x \tilde{f}(x_k, \mu_k)\| = 0.$$
Proof. Define K = {k | μ_{k+1} = γ_1 μ_k}. If K is a finite set, then there exists an integer $\bar{k}$ such that for all $k \ge \bar{k}$,
$$\|\nabla_x \tilde{f}(x_{k+1}, \mu_k)\| \ge \gamma \mu_k.$$
Then, by Step 3 of Method 2.2,
$$\mu_k = \mu_{\bar{k}} \quad \text{for all } k \ge \bar{k}.$$
Since $\tilde{f}(\cdot, \mu_{\bar{k}})$ is a smoothing function, Method 2.2 reduces to solving
$$\min_{x \in R^n} \tilde{f}(x, \mu_{\bar{k}}).$$
Hence, by Theorem 2.1, we deduce that
$$\liminf_{k \to \infty} \|\nabla_x \tilde{f}(x_k, \mu_{\bar{k}})\| = 0,$$
which contradicts (10). This shows that K must be infinite, and hence
$$\lim_{k \to \infty} \mu_k = 0.$$
Since K is infinite, we can write K = {k_0, k_1, . . .} with k_0 < k_1 < . . .. Then we have
$$\|\nabla_x \tilde{f}(x_{k_i+1}, \mu_{k_i})\| < \gamma \mu_{k_i} \to 0, \quad i \to \infty.$$
This proves the theorem.
From Theorem 2.3 above and the gradient consistency discussion in [3, 6], we can obtain the following result.
Theorem 2.4. Any accumulation point x* of the sequence generated by Method 2.2 is a Clarke stationary point, that is,
$$0 \in \partial f(x^*).$$
3. The applications of the methods
- 3.1. Application in solving generalized complementarity problem.
Consider the generalized complementarity problem (GCP) as in [5]: find x ∈ R^n such that
$$F(x) \ge 0, \quad G(x) \ge 0, \quad F(x)^T G(x) = 0, \tag{11}$$
where F = (F_1, F_2, . . . , F_n)^T, G = (G_1, G_2, . . . , G_n)^T, and F_i : R^n → R, G_i : R^n → R (i = 1, . . . , n) are continuously differentiable functions.
Solving (11) is equivalent to solving the equations
$$\min(F_i(x), G_i(x)) = 0, \quad i = 1, \dots, n. \tag{12}$$
By min(x, y) = x − (x − y)_+, we know that (12) is equivalent to
$$F_i(x) - (F_i(x) - G_i(x))_+ = 0, \quad i = 1, \dots, n. \tag{13}$$
Let ρ : R → R_+ be a piecewise continuous density function satisfying
$$\int_{-\infty}^{\infty} \rho(s)\, ds = 1, \qquad \kappa := \int_{-\infty}^{\infty} |s|\, \rho(s)\, ds < \infty.$$
Let
$$\phi(t, \mu) = \int_{-\infty}^{\infty} (t - \mu s)_+\, \rho(s)\, ds; \tag{14}$$
then for any fixed μ > 0, ϕ(·, μ) is a continuously differentiable function satisfying
$$|\phi(t, \mu) - (t)_+| \le \kappa \mu, \quad t \in R.$$
From the definition of smoothing function, we know that ϕ(·, μ) is a smoothing function of (t)_+.
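The approximation property of ϕ can be checked numerically. The kernel used below is the Chen-Mangasarian "neural network" choice ρ(s) = e^{−s}/(1 + e^{−s})², for which the convolution evaluates in closed form to ϕ(t, μ) = μ ln(1 + e^{t/μ}); since the paper's explicit formula survives only as an image, this particular kernel is an assumption.

```python
import numpy as np

# Smoothing of the plus function (t)_+ = max(t, 0) by the assumed
# Chen-Mangasarian kernel: phi(t, mu) = mu * ln(1 + exp(t/mu)).
def phi(t, mu):
    # numerically stable evaluation of mu * ln(1 + exp(t/mu))
    return mu * np.logaddexp(0.0, t / mu)

ts = np.linspace(-5.0, 5.0, 1001)
gaps = {mu: float(np.max(phi(ts, mu) - np.maximum(ts, 0.0)))
        for mu in (1.0, 0.1, 1e-2)}
# uniform error bound: 0 < phi(t, mu) - (t)_+ <= mu * ln 2
```

The uniform gap is exactly μ ln 2, attained at t = 0, so ϕ(·, μ) → (t)_+ uniformly as μ ↓ 0.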
Choose
$$\rho(s) = \frac{e^{-s}}{(1 + e^{-s})^2}.$$
Then
$$\phi(t, \mu) = t + \mu \ln\big(1 + e^{-t/\mu}\big)$$
is a smoothing function of (t)_+. Letting t = F_i(x) − G_i(x), i = 1, . . . , n, we have
$$\phi(F_i(x) - G_i(x), \mu) = F_i(x) - G_i(x) + \mu \ln\big(1 + e^{-(F_i(x) - G_i(x))/\mu}\big).$$
We know that a smoothing function of F_i(x) − (F_i(x) − G_i(x))_+ is
$$F_i(x) - \phi(F_i(x) - G_i(x), \mu) = G_i(x) - \mu \ln\big(1 + e^{-(F_i(x) - G_i(x))/\mu}\big). \tag{15}$$
So, we can transform (13) into
$$\min_{x \in R^n}\ \frac{1}{2} \sum_{i=1}^{n} \Big(F_i(x) - \phi(F_i(x) - G_i(x), \mu)\Big)^2. \tag{16}$$
Then, we can use Method 2.2 to solve (16).
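A minimal numerical check of this reformulation, with hypothetical one-dimensional data F(x) = x and G(x) = 2 − x and the assumed neural-network smoothing of the plus function: the smoothed residual F_i(x) − ϕ(F_i(x) − G_i(x), μ) approaches min(F_i(x), G_i(x)) as μ ↓ 0.

```python
import numpy as np

# Assumed plus-function smoothing (see Section 3.1): mu*ln(1 + exp(t/mu))
phi = lambda t, mu: mu * np.logaddexp(0.0, t / mu)

def smoothed_residual(F, G, x, mu):
    """H_i = F_i(x) - phi(F_i(x) - G_i(x), mu); as mu -> 0+ this tends to
    F_i(x) - (F_i(x) - G_i(x))_+ = min(F_i(x), G_i(x))."""
    Fx, Gx = F(x), G(x)
    return Fx - phi(Fx - Gx, mu)

F = lambda x: x            # hypothetical F for illustration
G = lambda x: 2.0 - x      # hypothetical G for illustration
x = np.array([0.7, 1.8])
exact = np.minimum(F(x), G(x))
errs = {mu: float(np.max(np.abs(smoothed_residual(F, G, x, mu) - exact)))
        for mu in (1.0, 0.1, 1e-3)}
```

The merit function (16) is half the squared norm of this residual, which Method 2.2 then minimizes.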
- 3.2. Application in solving linear maximum equations.
Here, we consider the equations of maximum functions in [16]. Let F : R → R be a finite maximum function,
$$F(t) = \max_{1 \le i \le m} f_i(t),$$
where each f_i : R → R is affine linear,
$$f_i(t) = p_i t + q_i,$$
where p_i, q_i ∈ R (i = 1, . . . , m, m ∈ N) are scalars. From the affine structure of F, we know that F is Lipschitz continuous and convex. We generally assume that
$$p_1 < p_2 < \dots < p_m.$$
Then there exists −∞ = t_1 < t_2 < . . . < t_m < t_{m+1} = ∞ such that
$$F(t) = f_i(t), \quad t \in [t_i, t_{i+1}],\ i = 1, \dots, m,$$
and
$$f_i(t_{i+1}) = f_{i+1}(t_{i+1}), \quad i = 1, \dots, m-1.$$
For the above linear affine equations of maximum functions, a smoothing function can be defined as follows. Let ρ : R → R_+ be a piecewise continuous density function such that
$$\int_{-\infty}^{\infty} \rho(s)\, ds = 1$$
and
$$\int_{-\infty}^{\infty} |s|\, \rho(s)\, ds < \infty.$$
We define the distribution function associated with ρ by
$$P(t) = \int_{-\infty}^{t} \rho(s)\, ds.$$
Similar to [2], we can obtain a smoothing function $\tilde{F}(t, \mu)$ of this special maximum function by convolution:
$$\tilde{F}(t, \mu) = \int_{-\infty}^{\infty} F(t - \mu s)\, \rho(s)\, ds.$$
For this linear affine finite system of maximum-function equations,
$$F(t) = 0,$$
using the above convolution, we can transform it into the smoothed equation
$$\tilde{F}(t, \mu) = 0,$$
and we can use Method 2.2 to solve it.
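The convolution construction above can be reproduced by simple quadrature. The slopes p_i, intercepts q_i, the sigmoid-derivative density, and the integration grid below are illustrative assumptions; the check confirms that F̃(t, μ) → F(t) as μ ↓ 0.

```python
import numpy as np

p = np.array([-1.0, 0.5, 2.0])   # hypothetical slopes p_i
q = np.array([0.0, 0.2, -1.0])   # hypothetical intercepts q_i
F = lambda t: float(np.max(p * t + q))     # F(t) = max_i (p_i t + q_i)

# sigmoid-derivative density: integrates to 1, finite first absolute moment
rho = lambda s: np.exp(-s) / (1.0 + np.exp(-s)) ** 2

def F_smooth(t, mu):
    """F_tilde(t, mu) = integral of F(t - mu*s) * rho(s) ds, approximated
    by a Riemann sum; rho is numerically negligible outside [-30, 30]."""
    s = np.linspace(-30.0, 30.0, 20001)
    shifted = t - mu * s
    Fvals = np.max(np.outer(shifted, p) + q, axis=1)
    return float(np.sum(Fvals * rho(s)) * (s[1] - s[0]))

# smoothing error at a fixed point shrinks with mu
errs = [abs(F_smooth(1.0, mu) - F(1.0)) for mu in (1.0, 0.1, 0.01)]
```

Since F is convex, Jensen's inequality gives F̃(t, μ) ≥ F(t) for every μ > 0, and the gap is controlled by μ times the first absolute moment of ρ.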
4. Application in related minimax optimization problem
In this section, we consider the minimax optimization problem (see [15])
$$\min_{x \in R^n} f(x),$$
where f(x) = max_{1≤i≤m} f_i(x) and f_1(x), . . . , f_m(x) : R^n → R are twice continuously differentiable functions. Minimax problems are widely used in engineering design, optimal control, circuit design, and computer-aided design. Usually, minimax problems are approached either by reformulating them into smooth constrained problems or by dealing with the nonsmooth objective directly.
In this paper, we also use the smoothing function (see for instance [15] )
$$\tilde{f}(x, \mu) = \mu \ln \sum_{i=1}^{m} \exp\!\Big(\frac{f_i(x)}{\mu}\Big)$$
to approximate the function f(x). In the following, the numerical results show that using Method 2.2 to solve the minimax optimization problem works quite well. We use the examples in [4]. All codes are written in MATLAB 8.0. Throughout our computational experiments, the parameters used in Method 2.2 are chosen as
[parameter values available only as an image in the source]
In our implementation, we use ∥Δx∥ ≤ 10^{−5} as the stopping rule; x_0 is the initial point, x* the optimal point, f(x*) the optimal value, and k the number of iterations.
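Since the paper's MATLAB code and examples survive only as images here, the following self-contained sketch shows the same approach on a hypothetical minimax problem: log-sum-exp (exponential) smoothing of f(x) = max{‖x‖², ‖x − (2, 0)ᵀ‖²}, combined with an Armijo gradient descent and a gradually reduced μ. All parameters are illustrative; the true minimax solution of this toy problem is x* = (1, 0) with value 1.

```python
import numpy as np

fs = [lambda x: x[0]**2 + x[1]**2,
      lambda x: (x[0] - 2.0)**2 + x[1]**2]
grads = [lambda x: np.array([2.0*x[0], 2.0*x[1]]),
         lambda x: np.array([2.0*(x[0] - 2.0), 2.0*x[1]])]

def f_tilde(x, mu):
    # stabilized log-sum-exp smoothing: f <= f_tilde <= f + mu*ln(m)
    vals = np.array([f(x) for f in fs])
    m = vals.max()
    return m + mu * np.log(np.sum(np.exp((vals - m) / mu)))

def grad_tilde(x, mu):
    vals = np.array([f(x) for f in fs])
    w = np.exp((vals - vals.max()) / mu)
    w /= w.sum()                          # softmax weights
    return sum(wi * g(x) for wi, g in zip(w, grads))

# Armijo gradient descent on f_tilde with a gradually reduced mu
x, mu = np.array([3.0, 1.0]), 1.0
for _ in range(500):
    g = grad_tilde(x, mu)
    fx = f_tilde(x, mu)
    alpha = 1.0
    while f_tilde(x - alpha * g, mu) > fx - 1e-4 * alpha * (g @ g):
        alpha *= 0.5
        if alpha < 1e-12:
            break
    x = x - alpha * g
    mu = max(1e-3, 0.95 * mu)
```

As μ ↓ 0 the softmax weights in ∇f̃ concentrate on the active functions, which is the mechanism behind the gradient consistency cited in [3, 6].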
Example 4.1 ( [4] ).
[The objective and its component functions for Example 4.1 are available only as images in the source.]
Numerical results for Example 4.1. [Table available only as an image in the source.]
Example 4.2 ( [4] ).
[The objective and its component functions for Example 4.2 are available only as images in the source.]
Consider the following nonlinear programming problem as in [4] .
[The nonlinear programming problem is available only as an image in the source.]
Numerical results for Example 4.2. [Table available only as an image in the source.]
Bandler and Charalambous (see [1]) proved that, for sufficiently large α_i, the optimum of the nonlinear programming problem coincides with that of the following minimax problem:
[The minimax reformulation is available only as images in the source.]
Example 4.3 (Rosen-Suzuki Problem).
[The problem data and the smoothing function used for Example 4.3 are available only as images in the source.]
The numerical results for Example 4.3 are listed in Table 4.3 .
Table 4.3. Numerical results for Example 4.3. [Table available only as an image in the source.]
BIO
Deyan Pang received her BS from Jining University, China in 2012. Since 2012 she has been at Qingdao University, China. Her research interests include optimization, nonsmooth equations.
College of Mathematics, Qingdao University, Qingdao 266071, China.
e-mail:mm070348@126.com
Jingjie Ju received her BS from Zaozhuang University, China in 2012. Since 2012 she has been at Qingdao University, China. Her research interests include optimization, complementarity problems.
College of Mathematics, Qingdao University, Qingdao 266071, China.
e-mail:jjj3.28@163.com
Shouqiang Du, PhD, Associate Professor. Since 2003 he has been at the College of Mathematics, Qingdao University. His research interests include numerical optimization and nonsmooth optimization.
College of Mathematics, Qingdao University, Qingdao, 266071, China.
e-mail:dsq8933@163.com;sqdu@qdu.edu.cn
References
Bandler J.W., Charalambous C. (1974) Nonlinear programming using minimax techniques, J. Optim. Theory Appl. 13, 607-619. DOI: 10.1007/BF00933620
Burke J.V., Hoheisel T., Kanzow C. (2013) Gradient consistency for integral-convolution smoothing functions, Set-Valued Var. Anal. 21, 359-376. DOI: 10.1007/s11228-013-0235-6
Chen X. (2012) Smoothing methods for nonsmooth, nonconvex minimization, Math. Program. 134, 71-99. DOI: 10.1007/s10107-012-0569-0
Charalambous C., Conn A.R. (1978) An efficient method to solve the minimax problem directly, SIAM J. Numer. Anal. 15, 162-187. DOI: 10.1137/0715011
Chen X., Qi L. (1994) A parameterized Newton method and a quasi-Newton method for nonsmooth equations, Comput. Optim. Appl. 3, 157-179. DOI: 10.1007/BF01300972
Chen X., Zhou W. (2010) Smoothing nonlinear conjugate gradient method for image restoration using nonsmooth nonconvex minimization, SIAM J. Imaging Sci. 3, 765-790. DOI: 10.1137/080740167
Demyanov V.F., Malozemov V.N. (1974) Introduction to Minimax, Wiley, New York.
Pardalos P.M., Du D.Z. (1995) Minimax and Applications, Kluwer Academic Publishers, Dordrecht.
Gao Y. (2008) Nonsmooth Optimization, Science Press, Beijing.
Gao Y. (2001) Newton methods for solving two classes of nonsmooth equations, Appl. Math. 46, 215-229. DOI: 10.1023/A:1013791923957
Ma C. (2010) Optimization Method and Its Matlab Program Design, Science Press, Beijing.
Polak E. (1987) On the mathematical foundations of nondifferentiable optimization, SIAM Review 29, 21-89. DOI: 10.1137/1029002
Polak E., Higgins J.E., Mayne D.Q. (1994) A barrier function method for minimax problems, Math. Program. 64, 277-294. DOI: 10.1007/BF01582577
Peng J.M., Lin Z. (1999) A non-interior continuation method for generalized linear complementarity problems, Math. Program. 86, 533-563. DOI: 10.1007/s101070050104
Xu S. (2001) Smoothing method for minimax problems, Comput. Optim. Appl. 20, 267-279. DOI: 10.1023/A:1011211101714
Zhu D. (2008) Affine scaling interior Levenberg-Marquardt method for bound-constrained semismooth equations under local error bound conditions, J. Comput. Appl. Math. 219, 198-225. DOI: 10.1016/j.cam.2007.07.039