SMOOTHING APPROXIMATION TO l1 EXACT PENALTY FUNCTION FOR CONSTRAINED OPTIMIZATION PROBLEMS†
Journal of Applied Mathematics & Informatics, May 2015, 33(3-4): 387-399
• Received : October 30, 2014
• Accepted : February 18, 2015
• Published : May 30, 2015
NGUYEN THANH BINH

Abstract
In this paper, a new smoothing approximation to the l1 exact penalty function for constrained optimization problems (COP) is presented. It is shown that an optimal solution to the smoothed penalty optimization problem is an approximate optimal solution to the original optimization problem. Based on the smoothed penalty function, an algorithm for solving COP is presented and its convergence is proved under some conditions. Numerical examples illustrate that this algorithm is efficient in solving COP.

AMS Mathematics Subject Classification: 90C30, 65K05.
1. Introduction
Consider the following COP:
$$\mathrm{(P)}\qquad \min_{x\in\mathbb{R}^n}\ f(x)\quad \text{s.t.}\quad g_i(x)\le 0,\ i\in I,$$

where f, gi : Rn → R, i ∈ I = {1, 2, ..., m}, are continuously differentiable functions and X0 = {x ∈ Rn | gi(x) ≤ 0, i = 1, 2, ..., m} is the feasible set of (P).
To solve (P), many exact penalty function methods have been introduced in the literature; see [1, 3, 4, 5, 7, 13, 25]. In 1967, Zangwill [25] first introduced the classical l1 exact penalty function as follows:
$$F_1(x,\rho)=f(x)+\rho\sum_{i=1}^{m}\max\{g_i(x),0\},\qquad(1)$$
where ρ > 0 is a penalty parameter. Obviously, it is not a smooth function. In many studies, another popular penalty function for (P) is defined as:
$$F_2(x,\rho)=f(x)+\rho\sum_{i=1}^{m}\bigl(\max\{g_i(x),0\}\bigr)^2,\qquad(2)$$
which is called l 2 penalty function. Although F 2 ( x , ρ ) is continuously differentiable, it is not an exact penalty function.
In recent years, the lower order penalty function
$$F_k(x,\rho)=f(x)+\rho\sum_{i=1}^{m}\bigl(\max\{g_i(x),0\}\bigr)^k,\quad k\in(0,1),\qquad(3)$$
has been introduced and investigated in [10 , 11 , 18] . Recently, Huang and Yang [6 , 23] and Rubinov et al. [14 , 15 , 16] discussed a nonlinear Lagrangian penalty function,
$$F(x,\rho)=\Bigl(f(x)^k+\rho\sum_{i=1}^{m}\bigl(\max\{g_i(x),0\}\bigr)^k\Bigr)^{1/k},\qquad(4)$$
for some k ∈ (0,+ ∞).
It is noted that the two penalty functions (3) and (4) (0 < k ≤ 1) are exact but not smooth, which makes certain efficient methods (e.g., Newton-type methods) inapplicable. Therefore, smoothing methods for the exact penalty functions (1), (3) and (4) (0 < k ≤ 1) have attracted much attention; see [2, 8, 9, 10, 11, 12, 18, 19, 20, 21, 22, 24, 26]. Chen et al. [2] introduced a smooth function to approximate the classical l1 penalty function by integrating the sigmoid function 1/(1 + e−αt). Lian [8] and Wu et al. [19] proposed smoothing approximations to the l1 exact penalty function for inequality constrained optimization. Pinar et al. [12] also proposed a smoothing approximation to the l1 exact penalty function, with which an ϵ-optimal minimum can be obtained by solving the smoothed penalty problem. Xu et al. [21] discussed a second-order differentiable smoothing of the classical l1 exact penalty function for constrained optimization problems.
In this paper, we aim to smooth the l1 exact penalty function of the form (1). First, we define a function pϵ(t) as follows:

[The defining formula of pϵ(t) could not be recovered from the source image.]

It is easy to prove that pϵ(t) is continuously differentiable on R. Using pϵ(t) as the smoothing function, a new smoothing approximation to the l1 exact penalty function is obtained; based on the resulting smoothed penalty function, an algorithm for solving COP is given in this paper.
The rest of this paper is organized as follows. In Section 2, we introduce a smoothing function for the classical l 1 exact penalty function and some fundamental properties of the smoothing function. In Section 3, the algorithm based on the smoothed penalty function is proposed and its global convergence is presented, with some numerical examples given. Finally, conclusions are given in Section 4.
2. A smoothing penalty function
Let p(t) = max{t, 0}. Then the penalty function (1) becomes

$$F(x,\rho)=f(x)+\rho\sum_{i=1}^{m}p(g_i(x)),\qquad(5)$$

where ρ > 0. The corresponding penalty optimization problem for F(x, ρ) is defined as

$$(P_\rho)\qquad \min_{x\in\mathbb{R}^n}\ F(x,\rho).$$
In order to smooth p(t), we define the function pϵ(t) : R → R as

[The defining formula of pϵ(t) could not be recovered from the source image.]

where ϵ > 0 is a smoothing parameter.
Remark 2.1. Obviously, pϵ(t) has the following attractive properties: pϵ(t) is continuously differentiable on R and

[displayed property not recovered from the source image].
Figure 1 shows the behavior of p(t) (solid line), p0.5(t) (dotted line), p0.1(t) (dash-dotted line) and p0.001(t) (dashed line).

Figure 1. The behavior of p(t) and pϵ(t). [figure image not recovered]
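The exact formula of pϵ(t) could not be recovered from the source, but its role can be illustrated with a common quadratic smoothing of p(t) = max{t, 0}. In the Python sketch below, `p_eps` is an illustrative stand-in, not the paper's definition; it satisfies 0 ≤ p(t) − pϵ(t) ≤ ϵ/2, so the approximation error vanishes uniformly as ϵ → 0:

```python
def p(t):
    # the exact plus function p(t) = max{t, 0}
    return max(t, 0.0)

def p_eps(t, eps):
    # a C^1 quadratic smoothing of p(t); an illustrative stand-in for
    # the paper's (unrecovered) smoothing function
    if t <= 0.0:
        return 0.0
    if t <= eps:
        return t * t / (2.0 * eps)
    return t - 0.5 * eps

# the uniform gap 0 <= p(t) - p_eps(t) <= eps/2 vanishes as eps -> 0
eps = 0.1
gap = max(p(i / 100.0) - p_eps(i / 100.0, eps) for i in range(-200, 201))
assert 0.0 <= gap <= 0.5 * eps + 1e-12
print(round(gap, 3))  # prints 0.05
```

Any C¹ function with a uniform O(ϵ) gap to p(t) yields error bounds of the kind derived in this section.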
Consider the penalty function for (P) given by
$$F_\epsilon(x,\rho)=f(x)+\rho\sum_{i=1}^{m}p_\epsilon(g_i(x)).\qquad(6)$$
Clearly, Fϵ ( x , ρ ) is continuously differentiable on Rn . Applying (6), the following penalty problem for (P) is obtained
$$(NP_{\rho,\epsilon})\qquad \min_{x\in\mathbb{R}^n}\ F_\epsilon(x,\rho).$$
Now, the relationship between ( Pρ ) and ( NPρ,ϵ ) is studied.
Lemma 2.1. For any given x ∈ Rn, ϵ > 0 and ρ > 0, we have

[the displayed inequality relating Fϵ(x, ρ) and F(x, ρ) could not be recovered from the source image].
Proof. For x ∈ Rn and i ∈ I, the claim follows from the definition of pϵ(t). [The displayed inequalities of the proof could not be recovered from the source images.] The proof is complete. □
A direct result of Lemma 2.1 is given as follows.
Corollary 2.2. Let {ϵj} → 0 be a sequence of positive numbers and assume that xj is a solution to (NPρ,ϵj) for some given ρ > 0. Let x′ be an accumulation point of the sequence {xj}. Then x′ is an optimal solution to (Pρ).
Definition 2.3. For ϵ > 0, a point xϵ ∈ Rn is called an ϵ-feasible solution to (P) if gi(xϵ) ≤ ϵ, ∀i ∈ I.
Definition 2.4. For ϵ > 0, a point xϵ ∈ X0 is called an ϵ-approximate optimal solution to (P) if

$$0\le f(x_\epsilon)-f^\ast\le \epsilon,$$

where f* is the optimal objective value of (P).
Theorem 2.5. Let x be an optimal solution of problem (Pρ) and x′ be an optimal solution to (NPρ,ϵ) for some ρ > 0 and ϵ > 0. Then

[the displayed error bound could not be recovered from the source image].
Proof. By Lemma 2.1, for ρ > 0, together with the assumption that x is an optimal solution to (Pρ) and x′ is an optimal solution to (NPρ,ϵ), the bound follows. [The displayed inequalities of the proof could not be recovered from the source images.] This completes the proof. □
Theorem 2.5 shows that an approximate solution to (NPρ,ϵ) is also an approximate solution to (Pρ) when the error ϵ is sufficiently small.
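The content of Theorem 2.5 can be checked numerically on a one-dimensional toy instance. This is a hypothetical example (not from the paper), using an illustrative quadratic smoothing in place of the unrecovered pϵ: the minimizer of the smoothed penalty function approaches the minimizer of the exact penalty problem as ϵ → 0.

```python
def p_eps(t, eps):
    # illustrative C^1 smoothing of max{t, 0} (a stand-in for the paper's p_eps)
    if t <= 0.0:
        return 0.0
    if t <= eps:
        return t * t / (2.0 * eps)
    return t - 0.5 * eps

# toy instance: min x^2  s.t.  1 - x <= 0, with optimum x* = 1, f* = 1;
# smoothed penalty F_eps(x, rho) = x^2 + rho * p_eps(1 - x, eps), rho = 4
rho = 4.0
xhat = None
for eps in (0.1, 0.01, 0.001):
    xs = [i / 10000.0 for i in range(20001)]   # grid on [0, 2]
    xhat = min(xs, key=lambda x: x * x + rho * p_eps(1.0 - x, eps))
    print(eps, round(xhat, 4))                 # minimizer tends to x* = 1
```

The deviation of the smoothed minimizer from x* = 1 shrinks proportionally with ϵ, as Theorem 2.5 predicts.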
Lemma 2.6 ( [20] ). Suppose that x is an optimal solution to ( Pρ ). If x is feasible to (P), then it is an optimal solution to (P) .
Theorem 2.7. Suppose that x satisfies the conditions in Lemma 2.6 and let x′ be an optimal solution to (NPρ,ϵ) for some ρ > 0 and ϵ > 0. If x′ is ϵ-feasible to (P), then

[the displayed bound could not be recovered from the source image],

that is, x′ is an approximate optimal solution to (P).
Proof. Since x′ is ϵ-feasible to (P) and x is a feasible solution to (P), the bound follows from Theorem 2.5. [The displayed inequalities of the proof could not be recovered from the source images.] By Lemma 2.6, x is actually an optimal solution to (P); thus x′ is an approximate optimal solution to (P). This completes the proof. □
By Theorem 2.7, an optimal solution to (NPρ,ϵ) is an approximate optimal solution to (P) if it is ϵ-feasible to (P). Therefore, we can obtain an approximate optimal solution to (P) by solving (NPρ,ϵ) under some mild conditions.
3. Algorithm and numerical examples
In this section, using the smoothed penalty function Fϵ ( x , ρ ), we propose an algorithm to solve COP, defined as Algorithm 3.1.
Algorithm 3.1
• Step 1: Choose x0, ϵ > 0, ϵ0 > 0, ρ0 > 0, 0 < η < 1 and N > 1; let j = 0 and go to Step 2.
• Step 2: Use xj as the starting point to solve

$$\min_{x\in\mathbb{R}^n}\ F_{\epsilon_j}(x,\rho_j).$$

Let xj+1 be the optimal solution obtained (xj+1 is computed by a quasi-Newton method).
• Step 3: If xj+1 is ϵ-feasible to (P), then stop: xj+1 is an approximate solution of (P). Otherwise, let ρj+1 = Nρj, ϵj+1 = ηϵj and j = j + 1, then go to Step 2.
Remark 3.1. In Algorithm 3.1, since N > 1 and 0 < η < 1, the sequence {ϵj} → 0 and the sequence {ρj} → +∞ as j → +∞.
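Algorithm 3.1 can be sketched in code. The following Python sketch is a set of assumptions, not the paper's implementation: it uses an illustrative quadratic smoothing `p_eps` (the paper's definition was not recovered), a hypothetical two-variable test problem, and plain gradient descent with Armijo backtracking in place of the quasi-Newton inner solver:

```python
def p_eps(t, eps):
    # illustrative C^1 smoothing of max{t, 0} (a stand-in for the paper's p_eps)
    if t <= 0.0:
        return 0.0
    if t <= eps:
        return t * t / (2.0 * eps)
    return t - 0.5 * eps

def dp_eps(t, eps):
    # derivative of the illustrative smoothing above
    if t <= 0.0:
        return 0.0
    if t <= eps:
        return t / eps
    return 1.0

# hypothetical test problem: min (x1-2)^2 + (x2-1)^2  s.t.  x1 + x2 - 2 <= 0
# (true optimum: x* = (1.5, 0.5))
f = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2
df = lambda x: [2.0 * (x[0] - 2.0), 2.0 * (x[1] - 1.0)]
g = lambda x: x[0] + x[1] - 2.0
dg = [1.0, 1.0]

def F(x, rho, eps):
    # smoothed penalty function, cf. (6)
    return f(x) + rho * p_eps(g(x), eps)

def inner_min(x, rho, eps, iters=2000):
    # Step 2: gradient descent with Armijo backtracking stands in for
    # the quasi-Newton inner solver used in the paper
    for _ in range(iters):
        s = rho * dp_eps(g(x), eps)
        gr = [df(x)[0] + s * dg[0], df(x)[1] + s * dg[1]]
        n2 = gr[0] ** 2 + gr[1] ** 2
        if n2 < 1e-18:
            break
        t = 1.0
        while (F([x[0] - t * gr[0], x[1] - t * gr[1]], rho, eps)
               > F(x, rho, eps) - 1e-4 * t * n2) and t > 1e-14:
            t *= 0.5
        x = [x[0] - t * gr[0], x[1] - t * gr[1]]
    return x

def algorithm_3_1(x0, rho0=4.0, eps0=0.01, eta=0.05, N=10.0, tol=1e-3):
    # tol plays the role of the eps-feasibility threshold in Step 3
    # (loosened here for the sketch)
    x, rho, eps = list(x0), rho0, eps0
    for _ in range(10):
        x = inner_min(x, rho, eps)         # Step 2
        if g(x) <= tol:                    # Step 3: eps-feasible -> stop
            return x
        rho, eps = N * rho, eta * eps      # Step 3: update and repeat
    return x

x = algorithm_3_1([0.0, 0.0])
print([round(v, 2) for v in x])            # close to the true optimum (1.5, 0.5)
```

The outer loop mirrors Steps 1-3: the penalty parameter grows by the factor N while the smoothing parameter shrinks by the factor η until the iterate becomes ϵ-feasible.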
In practice, it is difficult to compute the exact global minimizer of

$$\min_{x\in\mathbb{R}^n}\ F_{\epsilon_j}(x,\rho_j).$$

We generally look for a local minimizer or stationary point of Fϵj(x, ρj) by computing xj+1 such that ∇Fϵj(xj+1, ρj) = 0. For x ∈ Rn, we define

[the displayed definition (presumably of the index set I0(x) used in Theorem 3.1) could not be recovered from the source image].
Then, the following result is obtained.
Theorem 3.1. Assume that [the displayed assumption could not be recovered from the source image]. Let {xj} be the sequence generated by Algorithm 3.1 and suppose that the sequence {Fϵj(xj, ρj)} is bounded. Then {xj} is bounded, and any limit point x of {xj} is feasible to (P) and satisfies

$$\lambda\nabla f(x)+\sum_{i=1}^{m}\mu_i\nabla g_i(x)=0,\qquad(10)$$

where λ ≥ 0 and μi ≥ 0, i = 1, 2, ..., m.
Proof. First, we prove that {xj} is bounded. [Displays (11) and (12), which bound Fϵj(xj, ρj) from below via the definition of pϵ(t), could not be recovered from the source images.] Suppose to the contrary that {xj} is unbounded. Without loss of generality, assume that ║xj║ → +∞ as j → +∞. Then, from (11) and (12), Fϵj(xj, ρj) → +∞, which contradicts the assumption that the sequence {Fϵj(xj, ρj)} is bounded. Thus {xj} is bounded.

We show next that any limit point x of {xj} is feasible to (P). Without loss of generality, assume that xj → x as j → +∞. Suppose that x is not feasible to (P). Then there exists some i ∈ I such that gi(x) ≥ α > 0. Hence, for all sufficiently large j, the set {i | gi(xj) ≥ α} is nonempty, and since I is finite, there exists an i0 ∈ I satisfying gi0(xj) ≥ α for infinitely many j. As j → +∞ we have ρj → +∞ and ϵj → 0, so it follows from (13) [not recovered from the source image] that Fϵj(xj, ρj) → +∞, which contradicts the assumption that {Fϵj(xj, ρj)} is bounded. Therefore, x is feasible to (P).

Finally, we show that (10) holds. By Step 2 in Algorithm 3.1, ∇Fϵj(xj, ρj) = 0; this is display (14). [Displays (14)-(17), which define the normalizing constants γj > 1 and the multiplier sequences, could not be recovered from the source images.] When j → +∞, we have λj → λ ≥ 0 and the multiplier associated with each i ∈ I converges to μi ≥ 0. By (16) and (17), letting j → +∞ yields (10). Moreover, for i ∈ I0(x), letting j → +∞ gives μi = 0, ∀i ∈ I0(x). So (10) holds, and this completes the proof. □
Theorem 3.1 points out that the sequence {xj} generated by Algorithm 3.1 may converge to a K-T point of (P) under some conditions.
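As a sanity check, condition (10) can be verified numerically on a small hypothetical instance (not one of the paper's examples); at the constrained minimizer of the toy problem below, the multipliers λ = μ = 1 satisfy (10):

```python
# hypothetical instance: min (x1-2)^2 + (x2-1)^2  s.t.  x1 + x2 - 2 <= 0,
# whose solution is xbar = (1.5, 0.5); verify condition (10):
# lambda*grad_f(xbar) + mu*grad_g(xbar) = 0 with lambda >= 0, mu >= 0
xbar = (1.5, 0.5)
grad_f = (2.0 * (xbar[0] - 2.0), 2.0 * (xbar[1] - 1.0))  # = (-1.0, -1.0)
grad_g = (1.0, 1.0)
lam, mu = 1.0, 1.0                                       # multipliers
residual = tuple(lam * a + mu * b for a, b in zip(grad_f, grad_g))
print(residual)  # (0.0, 0.0): the K-T condition (10) holds
```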
Now we solve some COPs with Algorithm 3.1 in MATLAB. In each example, we let ϵ = 10−6, so an ϵ-solution to (P) is expected from Algorithm 3.1. Numerical results show that Algorithm 3.1 yields approximate solutions with better objective function values in comparison with some other algorithms.
Example 3.2. Consider the example in [8] ,
[The formulation of (COP1) could not be recovered from the source image.]
Let x 0 = (0, 0, 0, 0), ρ 0 = 4, N = 10, ϵ 0 = 0.01, η = 0.05 and ϵ = 10 −6 .
Numerical results of Algorithm 3.1 for solving (COP1) are given in Table 1 .
Table 1. Numerical results of Algorithm 3.1 with x0 = (0, 0, 0, 0), ρ0 = 4, N = 10. [table image not recovered]
Therefore, we get an approximate solution

x3 = (0.170768, 0.827977, 2.011779, −0.960639)

at the 3rd iteration. One can easily check that x3 is a feasible solution; the constraint values of (COP1) at x3 are as follows:

[constraint values not recovered from the source image].

The objective function value is f(x3) = −44.233515. The solution we obtained is slightly better than the solution obtained at the 4th iteration by the method in [8] (objective function value f(x) = −44.23040) for this example.
Now we change the initial parameters. Let x 0 = (0, 0, 0, 0), ρ 0 = 8, N = 6, ϵ 0 = 0.01, η = 0.03 and ϵ = 10 −6 . Numerical results of Algorithm 3.1 for solving (COP1) are given in Table 2 . Further, with the same parameters ρ 0 , N, ϵ 0 , η as above, we change the starting point to x 0 = (8, 8, 8, 8). New numerical results are given in Table 3 .
Table 2. Numerical results of Algorithm 3.1 with x0 = (0, 0, 0, 0), ρ0 = 8, N = 6. [table image not recovered]
Table 3. Numerical results of Algorithm 3.1 with x0 = (8, 8, 8, 8), ρ0 = 8, N = 6. [table image not recovered]
It is easy to see from Tables 2 and 3 that the convergence behavior of Algorithm 3.1 is the same and the objective function values are almost the same. That is to say, the efficiency of Algorithm 3.1 does not depend strongly on the choice of starting point in this example.
Note: j is the iteration number in Algorithm 3.1.
• ρj is the constraint penalty parameter at the jth iteration.
• xj is the solution at the jth iteration of Algorithm 3.1.
• f(xj) is the objective value at xj.
• gi(xj) (i = 1, ..., m) is a constraint value at xj.
Example 3.3. Consider the example in [19] ,
[The formulation of (COP2) could not be recovered from the source image.]
Let
[The substitution could not be recovered from the source image.]
Thus problem (COP2) is equivalent to the following problem:
[The formulation of (COP2′) could not be recovered from the source image.]
Let x 0 = (1, 1), ρ 0 = 8, N = 10, ϵ 0 = 0.5, η = 0.01 and ϵ = 10 −6 . Numerical results of Algorithm 3.1 for solving (COP2’) are given in Table 4 .
Table 4. Numerical results of Algorithm 3.1 with x0 = (1, 1), ρ0 = 8, N = 10. [table image not recovered]
By Table 4, an approximate optimal solution to (COP2′) is obtained at the 3rd iteration, namely x = (0.800000, 1.200000), with corresponding objective function value f(x) = −7.200000. The solution we obtained is similar to the solution obtained at the 4th iteration by the method in [19] (objective function value f(x) = −7.2000) for this example.
4. Conclusion
This paper has presented a smoothing approximation to the l1 exact penalty function and an algorithm based on the resulting smoothed penalty problem. It is shown that an optimal solution to (NPρ,ϵ) is an approximate optimal solution to the original optimization problem under some mild conditions. Numerical results show that the algorithm proposed here is efficient in solving some COPs.
Acknowledgements
The author would like to thank the anonymous reviewers and the editor for their insightful suggestions and comments which led to improvements of this paper. This work is supported by National Natural Science Foundation of China (Grant No.11371242).
BIO
Nguyen Thanh Binh received his M.Sc. from Thai Nguyen University of Education. He is currently a Ph.D. student at the Department of Mathematics, Shanghai University, China. His research interests include nonlinear optimization and computational mathematics.
Department of Mathematics, Shanghai University, Shanghai, 200444, China.
e-mail: bingbongyb@gmail.com
References
Bazaraa M.S. , Goode J.J. (1982) Sufficient conditions for a globally exact penalty function without convexity Mathematical Programming Study 19 1 - 15
Chen C.H. , Mangasarian O.L. (1995) Smoothing methods for convex inequalities and linear complementarity problems Math. Program 71 51 - 69    DOI : 10.1007/BF01592244
Di Pillo G. , Grippo L. (1986) An exact penalty function method with global convergence properties for nonlinear programming problems Math. Program 36 1 - 18    DOI : 10.1007/BF02591986
Di Pillo G. , Grippo L. (1989) Exact penalty functions in constrained optimization SIAM J. Control. Optim. 27 1333 - 1360    DOI : 10.1137/0327068
Han S.P. , Mangasarian O.L. (1979) Exact penalty functions in nonlinear programming Math. Program 17 257 - 269    DOI : 10.1007/BF01588250
Huang X.X. , Yang X.Q. (2003) Convergence analysis of a class of nonlinear penalization methods for constrained optimization via first-order necessary optimality conditions J. Optim. Theory Appl. 116 311 - 332    DOI : 10.1023/A:1022503820909
Lasserre J.B. (1981) A globally convergent algorithm for exact penalty functions Eur. J. Oper. Res. 7 389 - 395    DOI : 10.1016/0377-2217(81)90097-7
Lian S.J. (2012) Smoothing approximation to l1 exact penalty function for inequality constrained optimization Appl. Math. Comput. 219 3113 - 3121    DOI : 10.1016/j.amc.2012.09.042
Liu B.Z. (2009) On smoothing exact penalty functions for nonlinear constrained optimization problems J. Appl. Math. Comput. 30 259 - 270    DOI : 10.1007/s12190-008-0171-z
Meng Z.Q. , Dang C.Y. , Yang X.Q. (2006) On the smoothing of the square-root exact penalty function for inequality constrained optimization Comput. Optim. Appl. 35 375 - 398    DOI : 10.1007/s10589-006-8720-6
Meng K.W. , Li S.J. , Yang X.Q. (2009) A robust SQP method based on a smoothing lower order penalty function Optimization 58 22 - 38    DOI : 10.1080/02331930701761193
Pinar M.C. , Zenios S.A. (1994) On smoothing exact penalty function for convex constrained optimization SIAM J. Optim. 4 486 - 511    DOI : 10.1137/0804027
Rosenberg E. (1984) Exact penalty functions and stability in locally Lipschitz programming Math. Program 30 340 - 356    DOI : 10.1007/BF02591938
Rubinov A.M. , Glover B.M. , Yang X.Q. (1999) Extended Lagrange and penalty function in continuous optimization Optimization 46 327 - 351    DOI : 10.1080/02331939908844460
Rubinov A.M. , Yang X.Q. , Bagirov A.M. (2002) Penalty functions with a small penalty parameter Optim. Methods Softw. 17 931 - 964    DOI : 10.1080/1055678021000066058
Rubinov A.M. , Yang X.Q. (2003) Nonlinear Lagrangian and Penalty Functions in Optimization Kluwer Academic, Dordrecht
Sun X.L. , Li D. (1999) Value-estimation function method for constrained global optimization J. Optim. Theory Appl. 24 385 - 409    DOI : 10.1023/A:1021736608968
Wu Z.Y. , Bai F.S. , Yang X.Q. , Zhang L.S. (2004) An exact lower order penalty function and its smoothing in nonlinear programming Optimization 53 57 - 68    DOI : 10.1080/02331930410001699928
Wu Z.Y. , Lee H.W.J. , Bai F.S. , Zhang L.S. (2005) Quadratic smoothing approximation to l1 exact penalty function in global optimization J. Ind. Manag. Optim. 1 533 - 547    DOI : 10.3934/jimo.2005.1.533
Xu X.S. , Meng Z.Q. , Sun J.W. , Shen R. (2011) A penalty function method based on smoothing lower order penalty function J. Comput. Appl. Math. 235 4047 - 4058    DOI : 10.1016/j.cam.2011.02.031
Xu X.S. , Meng Z.Q. , Sun J.W. , Huang L.Q. , Shen R. (2013) A second-order smooth penalty function algorithm for constrained optimization problems Comput. Optim. Appl. 55 155 - 172    DOI : 10.1007/s10589-012-9504-9
Yang X.Q. (1994) Smoothing approximations to nonsmooth optimization problems J. Aust. Math. Soc. B 36 274 - 285    DOI : 10.1017/S0334270000010444
Yang X.Q. , Huang X.X. (2001) A nonlinear Lagrangian approach to constrained optimization problems SIAM J. Optim. 11 1119 - 1144    DOI : 10.1137/S1052623400371806
Yang X.Q. , Meng Z.Q. , Huang X.X. , Pong G.T.Y. (2003) Smoothing nonlinear penalty function for constrained optimization Numer. Funct. Anal. Optim. 24 357 - 364    DOI : 10.1081/NFA-120022928
Zangwill W.I. (1967) Nonlinear programming via penalty functions Mgmt. Sci. 13 334 - 358
Zenios S.A. , Pinar M.C. , Dembo R.S. (1993) A smooth penalty function algorithm for network-structured problems Eur. J. Oper. Res. 64 258 - 277    DOI : 10.1016/0377-2217(93)90181-L