A MODIFIED INEXACT NEWTON METHOD†

Journal of Applied Mathematics & Informatics, 33(1-2): 127-143, January 2015.

- Received : March 05, 2014
- Accepted : August 17, 2014
- Published : January 30, 2015


In this paper, we consider a modified inexact Newton method for solving a nonlinear system F(x) = 0, where F(x) : R^{n} → R^{n}. The basic idea is to accelerate convergence. A semi-local convergence theorem for the modified inexact Newton method is established, and an affine invariant version is also given. Moreover, we test three numerical examples which show that the modified inexact scheme is more efficient than the classical inexact Newton strategy.
AMS Mathematics Subject Classification : 65H10, 49M15.
where F(x) : D ⊂ R^{n} → R^{n} is Fréchet differentiable. Let F′(x) denote the Fréchet derivative of F at x.
Equations of the form (1.1) arise in many important practical fields, e.g., physics and engineering: input-output systems, least squares problems, finite difference or finite element discretizations, integral or differential equations, constrained function minimization, complementarity problems, variational inequalities, calculation of load flows in power systems, and the solution of initial or boundary value problems for ordinary or partial differential equations.
Among all kinds of numerical methods for solving the nonlinear equations (1.1), the Newton method [18, 24, 33] is the most classical one. In general, suppose that x_{k} is the current approximate solution; a new approximate solution x_{k+1} can be computed through the following general form:
Algorithm 1.1: Newton method
1. Let x_{0} ∈ R^{n} be a given initial guess.
2. For k = 0 until convergence do:
If n is not too large, the Newton method is attractive because it converges rapidly from any sufficiently good initial guess. However, the Newton method has two disadvantages from the point of view of practical computation: it requires computing Jacobian matrices, and it requires solving the linear equations (1.2) exactly. Computing the exact solution by a direct method such as Gaussian elimination may be expensive if the Jacobian matrix is large, and may not be justified when x_{k} is far from the exact solution x^{∗}. To overcome these disadvantages, it is reasonable to use an iterative method and to solve (1.2) only approximately. This approach was first considered by Dembo, Eisenstat and Steihaug in [11]; the resulting variant is the so-called inexact Newton method.
Algorithm 1.2: Inexact Newton method
1. Let x_{0} ∈ R^{n} be a given initial guess.
2. For k = 0 until convergence do:
Remark 1.1. In the above algorithm, s_{k} is the inexact Newton step and (1.4) is the inexact Newton condition. η_{k} is the forcing term for the k-th iteration step, which may depend on x_{k}; taking η_{k} ≡ 0 gives the famous Newton method.
In typical applications, the choice of the forcing terms is critical to the efficiency of the method and can affect robustness as well. Usually, it is hard to choose a good sequence of forcing terms. In computational practice, several authors considered some concrete strategies. We list here the following:
where 0 < p_{1} < p_{2} < p_{3} < 1 are prescribed at first, and […]. In addition, assume that η_{0} is given and […].
There are three types of convergence issues for the inexact Newton method: global, local and semi-local convergence analysis. The first is convergence analysis on the whole domain, the second is convergence analysis on a neighborhood of the solution x^{∗}, and the last is convergence analysis on a neighborhood of the initial guess x_{0}. Recently, several authors have studied the global convergence (see [14, 31]), the local convergence (see [10, 20, 23, 34]) and the semi-local convergence (see [2, 3, 4, 19, 22, 29, 30, 32, 36]) of the inexact Newton method, and have proposed applications in different fields [5, 21, 28].
After this method was established, several related iterative methods were developed from it. The AIN (Accelerated Inexact Newton) method was presented by Fokkema, Sleijpen and van der Vorst in [17]; they showed how the classical Newton iteration scheme for nonlinear problems can be accelerated in a way similar to standard Richardson-type iteration schemes for linear equations. The REGINN (Regularization based on Inexact Newton iteration) method was presented by Rieder in [25, 26, 27]. INM (Inexact Newton Multigrid) was considered by Brown, Vassilevski and Woodward in [6]; they proved optimal-order and mesh-independent convergence of an inexact Newton method in which the linear Jacobian systems are solved with multigrid techniques. Also, the PIN (Preconditioned Inexact Newton) method was considered by Cai and Keyes in [8].
In this paper we consider a modified inexact Newton method. Our modification uses […] to replace F′(x_{k})s_{k} = −F(x_{k}) + r_{k}, where […].
The use of this method is particularly appropriate when an exact solution of the Newton equation is difficult to obtain and/or when evaluating and preparing the Jacobian is costly; moreover, the method converges fast.
The rest of this paper is organized as follows. In section 2, a modified inexact Newton algorithm is established. In section 3, we will present the semi-local convergence result for the given algorithm. Moreover, another theorem shows the affine invariance of the convergence of our proposed method. In section 4, we give three test problems using the algorithm presented in this paper to show its convergence properties and robustness. In the last section, some concluding remarks are made.
Note that, if F′(x) is invertible for every x ∈ D, (2.1) can be rewritten as

x_{k+1} = x_{k} − F′(x_{k})^{−1}(F(x_{k}) − r_{k}).  (2.2)
Consider the one-dimensional nonlinear equation

f(x) = 0,  (2.3)

where f(x) : R → R and f′(x) is invertible for every x ∈ R. It easily follows from (2.2) that the inexact Newton iteration scheme for (2.3) is

x_{k+1} = x_{k} − f′(x_{k})^{−1}(f(x_{k}) − r_{k}).  (2.4)
If η_{k} ≡ 0, then r_{k} in (2.4) is 0. Moreover, (2.4) then yields the famous Newton iteration scheme

x_{k+1} = x_{k} − f′(x_{k})^{−1}f(x_{k}).  (2.5)
In fact, from the point of view of geometry, the Newton iteration scheme (2.5) uses the tangent line at the point (x_{k}, f(x_{k})) to approximate the curve. We claim that, if the tangent line at the point (x_{k} − f′(x_{k})^{−1}f(x_{k}), f(x_{k} − f′(x_{k})^{−1}f(x_{k}))) is used to replace the former one, then a new scheme can be obtained as follows:

x_{k+1} = x_{k} − f′(x_{k} − f′(x_{k})^{−1}f(x_{k}))^{−1}f(x_{k}).  (2.6)

Furthermore, by generalizing this method to the n-dimensional case, we have

x_{k+1} = x_{k} − F′(x_{k} − F′(x_{k})^{−1}F(x_{k}))^{−1}F(x_{k}).  (2.7)
For simplicity, let y_{k} = x_{k} − F′(x_{k})^{−1}F(x_{k}). Hence, (2.7) reduces to

x_{k+1} = x_{k} − F′(y_{k})^{−1}F(x_{k}).  (2.8)
However, we would then have to invert two matrices at each iteration, which costs a lot of computational time. Meanwhile, it is hard to obtain the intermediate point, because it is difficult to calculate F′(x_{k})^{−1} even when the scale of the problem is only medium. Here, we give a new scheme which adequately uses the information of the former iteration and saves CPU time. Here, we take

[…]
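The geometric idea above, namely keeping f(x_{k}) on the right-hand side but taking the slope at the ordinary Newton point, can be sketched in one dimension. This is a minimal illustration only; the function name `modified_newton_1d` and the demo equation x^{2} = 2 are ours, not from the paper:

```python
import math

def modified_newton_1d(f, fprime, x0, tol=1e-12, max_iter=50):
    """1-D sketch of the tangent-slope replacement described above:
    evaluate the derivative at the ordinary Newton point
    y_k = x_k - f(x_k)/f'(x_k) and use that slope in place of f'(x_k)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) <= tol:
            break
        y = x - fx / fprime(x)   # ordinary Newton point
        x = x - fx / fprime(y)   # reuse f(x_k), but take the slope at y
    return x

# demo: solve x^2 - 2 = 0 starting from x0 = 1
r = modified_newton_1d(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```

With the same number of derivative evaluations per step as the two-point scheme, this converges noticeably faster than plain Newton on such smooth problems.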
Moreover, a modified inexact Newton algorithm is obtained.
Algorithm 2.1: Modified inexact Newton method
1. Let x_{0} ∈ R^{n} be a given initial guess.
2. For k = 0 until convergence do:
Remark 2.1. In the above algorithm, s_{k} is the inexact Newton step and (2.10) is the modified inexact Newton condition. η_{k} is the forcing term for the k-th iteration step, which may depend on x_{k}. Taking η_{k} ≡ 0 and […] gives the famous Newton method. Because […] is already known from the previous iteration, it is easy to obtain.
Definition 3.1. F′ is Lipschitz continuous in D if there exists L ≥ 0 such that

||F′(x) − F′(y)|| ≤ L||x − y||  (3.1)

for every x, y ∈ D.
Lemma 3.1. Let F : D ⊂ R^{n} → R^{n} be Fréchet differentiable and F′ be Lipschitz continuous satisfying (3.1). Then

||F(y) − F(x) − F′(x)(y − x)|| ≤ (L/2)||y − x||^{2}

for every x, y ∈ D.
Lemma 3.2. Let F : D ⊂ R^{n} → R^{n} be Fréchet differentiable and F′ be invertible for every x ∈ D such that

||F′(z)^{−1}(F′(x) − F′(y))|| ≤ L||x − y||.

Then

||F′(z)^{−1}(F(y) − F(x) − F′(x)(y − x))|| ≤ (L/2)||y − x||^{2}

for every x, y, z ∈ D.
Theorem 3.3. Suppose F : D ⊂ R^{n} → R^{n} is Fréchet differentiable and F′ is Lipschitz continuous in D. Let x_{0} ∈ D and ||F(x_{0})|| ≤ η. Assume

[…]  (3.3)

where

[…]  (3.4)

Then the sequence of the modified inexact Newton method defined by (2.9) stays in S and converges to x^{∗}, which satisfies F(x^{∗}) = 0.
Proof. Firstly, we will prove

[…]  (3.5)

by induction. For k = 0, (3.5) holds evidently. Assume that (3.5) is true for k ≤ m. Now, we prove the assertion for k = m + 1. Since

[…]

by Lemma 3.1 it follows that

[…]

where L_{1} and L_{2} are Lipschitz constants. Using (3.6) and (3.7), we have

[…]

Hence, it follows from the induction hypothesis, (2.10) and (3.3) that

[…]

This gives (3.5), and the induction is complete.
Indeed, by (2.9) and (3.5),

[…]

By the definition of δ, we have

[…]

Therefore, for m ≥ 0,

[…]

Thus, by (3.3), {x_{k}} is a Cauchy sequence and converges to x^{∗} as k → +∞. Moreover,

[…]

for any m ≥ 0. In view of (3.4), we obtain x_{m} ∈ S ⊂ D, which implies that x^{∗} ∈ S ⊂ D as well.
Finally, we assert that

[…]

Hence, F(x^{∗}) = 0. The proof is completed. □
It is well-known [13, 35] that many Newton-like methods are affine invariant in the sense that they produce the same iterates when they are used to solve the affinely transformed problem

AF(x) = 0,

where A is any nonsingular n × n matrix.
Example. Consider the Newton iteration

x_{k+1} = x_{k} − F′(x_{k})^{−1}F(x_{k}).

For F, we use the affine transform G = AF. Then

x_{k+1} = x_{k} − (AF′(x_{k}))^{−1}AF(x_{k}) = x_{k} − F′(x_{k})^{−1}F(x_{k}).

Thus, we assert that the Newton iteration sequence {x_{k}} is affine invariant.
Convergence conditions for affine invariant methods should themselves be invariant under transformations of this form [13]. Now it is clear that, even though the method (2.9) is affine invariant, the condition (2.10) is not. In order to give a semi-local convergence theorem for the modified inexact Newton method with an affine invariant condition, we improve the method (2.9) together with the condition (2.10) as follows:

[…]  (3.10)

[…]  (3.11)

These improvements for the inexact Newton method were proposed by Bai and Tong [4]; here, we adopt them for our method.
The following theorem concerns the affine invariance of our algorithm.
Theorem 3.4. Suppose F : D ⊂ R^{n} → R^{n} is Fréchet differentiable and the modified inexact Newton method is defined by (3.10) and (3.11). Assume F′ is invertible for every x ∈ D such that

[…]

Let x_{0} ∈ D and ||F′(x_{0})^{−1}F(x_{0})|| ≤ η. Suppose further that

[…]  (3.12)

where

[…]  (3.13)

Then the sequence of the modified inexact Newton method stays in S and converges to x^{∗}, which satisfies F(x^{∗}) = 0.
Proof. The proof is similar to that of Theorem 3.3. Firstly, we will prove

[…]  (3.14)

by induction. For k = 0, (3.14) holds evidently. Assume that (3.14) is true for k ≤ m. Now, we prove the assertion for k = m + 1. Let T_{m+1} = ||F′(x_{m+1})^{−1}F(x_{m+1})|| for short. Since

[…]

by Lemma 3.2 it follows that

[…]

where L_{1} and L_{2} are constants. Using (3.15) and (3.16), we have

[…]

Hence, it follows from the induction hypothesis, (3.11) and (3.12) that

[…]

This gives (3.14), and the induction is complete.
Indeed, by (3.10) and (3.14),

[…]

By the definition of δ, we have

[…]

Therefore, for m ≥ 0,

[…]

Thus, by (3.12), {x_{k}} is a Cauchy sequence and converges to x^{∗} as k → +∞. Moreover,

[…]

for any m ≥ 0. In view of (3.13), we obtain x_{m} ∈ S ⊂ D, which implies that x^{∗} ∈ S ⊂ D as well.
Finally, we assert that

[…]

Hence, F(x^{∗}) = 0. The proof is completed. □
Next, we consider the rate of convergence of the modified inexact Newton method. First, we give the following definition, due to R.S. Dembo et al. [11].

Definition 3.2. Let {x_{k}} be a sequence which converges to x^{∗}. Then x_{k} → x^{∗} with order at least q (q > 1) if

||x_{k+1} − x^{∗}|| = O(||x_{k} − x^{∗}||^{q}).
Theorem 3.5. Under the assumptions of Theorem 3.3, if the modified inexact Newton iterates {x_{k}} converge to x^{∗}, then x_{k} → x^{∗} with order at least 2 if and only if

||r_{k}|| = O(||x_{k} − x^{∗}||^{2}).
Proof. Assume that x_{k} → x^{∗} with order at least 2. Note that

[…]

Taking norms, we arrive at

[…]

Therefore, by Lemma 3.1, the continuity of F′ at x^{∗} and the assumption that x_{k} → x^{∗} with order at least 2, we have ||r_{k}|| = O(||x_{k} − x^{∗}||^{2}).

Conversely, assume that ||r_{k}|| = O(||x_{k} − x^{∗}||^{2}). Note that

[…]

Taking norms, we arrive at

[…]

Therefore, by Lemma 3.1 and the assumption that ||r_{k}|| = O(||x_{k} − x^{∗}||^{2}), we have x_{k} → x^{∗} with order at least 2. □
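Definition 3.2 and Theorem 3.5 can be checked numerically by estimating the observed order from three successive errors. A small sketch (the helper name `observed_order` and the demo problem are ours); with exact inner solves r_{k} = 0, so order 2 is expected:

```python
import math

def observed_order(errors):
    """Estimate q in ||x_{k+1} - x*|| = O(||x_k - x*||^q) from three
    successive errors, via q ~ log(e_{k+1}/e_k) / log(e_k/e_{k-1})."""
    e0, e1, e2 = errors[-3:]
    return math.log(e2 / e1) / math.log(e1 / e0)

# Newton iterates for x^2 = 2 (exact linear solves, i.e. r_k = 0):
errs, x = [], 1.5
for _ in range(4):
    errs.append(abs(x - math.sqrt(2.0)))
    x = x - (x * x - 2.0) / (2.0 * x)
q = observed_order(errs)  # close to 2
```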
Example 1. Consider the following nonlinear equations:

[…]

with x^{∗} = (1, 1)^{T}. Take η = 10^{−4}, as given by Cai, Gropp, Keyes and Tidriri [9]. Using the modified inexact Newton method, we obtain the iterative solutions listed in Tables 1 and 2 with the initial guesses (−1,−1)^{T} and (510, 10^{21})^{T}, respectively.
Table 1: Comparison of iterative solutions of two algorithms with the initial guess (−1,−1)^{T}
Table 2: Comparison of iterative solutions of two algorithms with the initial guess (510, 10^{21})^{T}
In Table 1, a comparison of the iterative solutions of the two algorithms for Example 1 with the initial guess (−1,−1)^{T} is shown; the two columns give the iterative solutions of the inexact Newton method and of the modified inexact Newton method, respectively. We see from Table 1 that the modified inexact Newton method converges faster.
In Table 2, a comparison of the iterative solutions of the two algorithms for Example 1 with the initial guess (510, 10^{21})^{T} is shown. We see from Table 2 that both methods have a wide convergence domain and that the modified inexact Newton method converges faster.
In Fig. 1, we present profiles of the history of the absolute error for the two algorithms of Example 1 with the initial guess (−1,−1)^{T}. We see from Fig. 1 that the absolute error of the modified inexact Newton method decreases faster than that of the inexact Newton method. It is observed that the proposed method performs better than the inexact Newton method.

Fig. 1: History of absolute error for two algorithms with the initial guess (−1, −1)^{T}.
In Fig. 2, we present profiles of the history of the absolute error for the two algorithms of Example 1 with the initial guess (510, 10^{21})^{T}. We see from Fig. 2 that the modified inexact Newton method may perform several times faster than the inexact Newton method.

Fig. 2: History of absolute error for two algorithms with the initial guess (510, 10^{21})^{T}.
Example 2. Consider the following nonlinear equations:

[…]

Take η = 10^{−4}, as given by Cai, Gropp, Keyes and Tidriri [9]. Using the modified inexact Newton method, we obtain the iterative solutions. Here, we use ||F(x_{k})|| ≤ 1.0e−9 as the stopping rule for this example.
Results for Example 2 are listed in Table 3, where No.Eq1 denotes the number of linear equations solved in the first step, No.Eqk the number solved in the k-th step (k > 1), No.Fd1 the number of evaluations of F′ in the first step, No.Fdk the number in the k-th step, No.it the number of iterations, and Res the value of ||F(x_{k})|| when the stopping rule is satisfied. As shown in Table 3, the number of iterations of the modified inexact Newton method is smaller than that of the classical inexact Newton method. Meanwhile, we see from Table 3 that the modified method uses two evaluations of F′ in the k-th step, but one of them is already known from the previous iteration; hence the number of new evaluations is in fact 1. Although the modified method solves nearly twice as many linear equations as the classical one, its CPU time is still smaller. All in all, our method indeed saves computation.
Table 3: Results for Example 2
Example 3. Consider the one-dimensional Burgers' equation:

[…]  (4.1)

where the positive number ν is the coefficient of viscosity and Re denotes the Reynolds number. This equation has an exact solution in the form of an infinite series

[…]  (4.2)

where I_{j}(x) is the modified Bessel function of the first kind of order j. The sum truncated at j = 35 is used as an approximation to the infinite sum (4.2).
We solve (4.1) with a finite difference method and the modified inexact Newton method. First, we discretize in space with centered differences to obtain a system of ordinary differential equations, which we write as

[…]  (4.3)

Then the nonlinear equations that should be solved for the implicit Euler method with a time step τ are

[…]  (4.4)

Moreover, one solves, at each time step, the nonlinear equations

[…]  (4.5)

Then, we use the modified inexact Newton method to solve the nonlinear equations (4.5).
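The nonlinear residual that arises at each implicit time step can be sketched as follows. This is our illustration only, assuming the standard form u_t + u u_x = ν u_xx with homogeneous Dirichlet boundaries; the paper's exact formulas (4.3)-(4.5) were rendered as images and are not reproduced here:

```python
import numpy as np

def burgers_residual(u_new, u_old, nu, h, tau):
    """Residual F(u_new) = 0 of an implicit-Euler / centered-difference
    discretization of u_t + u u_x = nu * u_xx on interior grid points,
    with u = 0 imposed at both boundaries (a sketch of the kind of
    nonlinear system solved at each time step)."""
    u = np.concatenate(([0.0], u_new, [0.0]))            # boundary values
    ux = (u[2:] - u[:-2]) / (2.0 * h)                    # centered u_x
    uxx = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / h ** 2      # centered u_xx
    return (u_new - u_old) / tau + u_new * ux - nu * uxx

# sanity check: the zero field is a steady state, so the residual vanishes
res = burgers_residual(np.zeros(9), np.zeros(9), 0.1, 0.01, 0.01)
```

At each time step, this residual (with u_old frozen) plays the role of F in Algorithm 2.1.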
Let h = 1/M be the mesh width in space and set x_{i} = ih for i = 1, 2, . . . , M − 1. Let τ = T/N be the mesh width in time and set t_{n} = nτ for n = 1, 2, . . . , N. u(x_{i}, t_{n}) is the approximate solution of u(x, t). Discretization is on a 100 × 100 grid, so that N = 100 and M = 100; hence the spatial mesh width is h = 0.01 and the time step is τ = 0.01. Take η_{max} = 0.9, as used by Eisenstat and Walker in [15]. Here, the GMRES(m) algorithm with m = 40 is used for the linear systems.
In Table 4, a comparison of the numerical solution with the exact solution at different space points in (0, 1) for Example 3 at T = 0.1 and ν = 0.1 is shown. It is observed that the proposed method gives sharp results. In order to show the physical behaviour of the given problem, we give surface plots of the computed solutions for different values of the coefficient of viscosity ν in Figs. 3 and 4.
Table 4: Comparison of the numerical solution with the exact solution at different space points of Example 3 at T = 0.1 for ν = 0.1
Fig. 3: Numerical solution profiles of Example 3 for ν = 0.1, h = 0.01 and τ = 0.01.
Fig. 4: Numerical solution profiles of Example 3 for ν = 0.01, h = 0.01 and τ = 0.01.
Pengzhan Huang received his Ph.D. Degree from Xinjiang University. He is currently an associate professor at Xinjiang University. His research interests focus on numerical solutions for partial differential equations.
College of Mathematics and System Sciences, Xinjiang University, Urumqi 830046, P.R. China.
e-mail: hpzh007@yahoo.com
Abdurishit Abduwali received his Ph.D. from Okayama University of Science. He is currently a professor at Xinjiang University. His research interests focus on numerical solutions for partial differential equations.
College of Mathematics and System Sciences, Xinjiang University, Urumqi 830046, P.R. China.
e-mail: rashit@xju.edu.cn

1. Introduction

Consider the system of nonlinear equations

F(x) = 0.  (1.1)

- 2.1. For the iteration x_{k}, find the step s_{k} satisfying

  F′(x_{k})s_{k} = −F(x_{k}).  (1.2)

- 2.2. Set x_{k+1} = x_{k} + s_{k}.
- 2.3. Set k = k + 1 and turn to 2.1.
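The steps above amount to a short loop. A minimal sketch (the function name `newton` and the 2-by-2 demo system with root (1, 1)^{T} are ours, not from the paper):

```python
import numpy as np

def newton(F, Fprime, x0, tol=1e-10, max_iter=50):
    """Algorithm 1.1: at each step solve F'(x_k) s_k = -F(x_k) exactly."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) <= tol:
            break
        s = np.linalg.solve(Fprime(x), -Fx)  # step 2.1: exact linear solve (1.2)
        x = x + s                            # step 2.2
    return x

# demo on a 2-by-2 system with root (1, 1):
F = lambda x: np.array([x[0] ** 2 - 1.0, x[0] + x[1] - 2.0])
J = lambda x: np.array([[2.0 * x[0], 0.0], [1.0, 1.0]])
root = newton(F, J, [3.0, 3.0])
```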

- 2.1. Choose η_{k} ∈ [0, 1).
- 2.2. For the residual r_{k} and the iteration x_{k}, find the step s_{k} satisfying

  F′(x_{k})s_{k} = −F(x_{k}) + r_{k},  (1.3)

  where

  ||r_{k}|| ≤ η_{k}||F(x_{k})||.  (1.4)

- 2.3. Set x_{k+1} = x_{k} + s_{k}.
- 2.4. Set k = k + 1 and turn to 2.1.
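Algorithm 1.2 differs from Algorithm 1.1 only in that the linear system is solved by an inner iteration stopped as soon as (1.4) holds. A self-contained sketch (all names are ours; the inner solver here is plain CG on the normal equations, standing in for whatever Krylov method one prefers):

```python
import numpy as np

def cgnr(A, b, eta):
    """Run CG on the normal equations until ||b - A s|| <= eta * ||b||,
    so the returned step satisfies the inexact Newton condition (1.4)."""
    s = np.zeros_like(b)
    r = b.copy()                  # residual b - A s
    p = z = A.T @ r
    tol = eta * np.linalg.norm(b)
    for _ in range(2 * len(b) + 10):
        if np.linalg.norm(r) <= tol:
            break
        Ap = A @ p
        alpha = np.dot(z, z) / np.dot(Ap, Ap)
        s = s + alpha * p
        r = r - alpha * Ap
        z_new = A.T @ r
        beta = np.dot(z_new, z_new) / np.dot(z, z)
        p, z = z_new + beta * p, z_new
    return s

def inexact_newton(F, Fprime, x0, eta=1e-4, tol=1e-9, max_iter=100):
    """Algorithm 1.2: s_k solves F'(x_k) s_k = -F(x_k) + r_k
    with ||r_k|| <= eta * ||F(x_k)||, per (1.3)-(1.4)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) <= tol:
            break
        x = x + cgnr(Fprime(x), -Fx, eta)
    return x

# demo on a 2-by-2 system with root (1, 1):
sol = inexact_newton(lambda x: np.array([x[0] ** 2 - 1.0, x[0] + x[1] - 2.0]),
                     lambda x: np.array([[2.0 * x[0], 0.0], [1.0, 1.0]]),
                     [3.0, 3.0])
```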

- 1. The choice η = 10^{−1} used by Elias, Coutinho and Martins [16].
- 2. The choice η = 10^{−4} used by Cai, Gropp, Keyes and Tidriri [9].
- 3. The choice […] of Brown and Saad [7].
- 4. The choice […] of Dembo and Steihaug [12].
- 5. Eisenstat and Walker [15] proposed two choices:
  - (1). Given η_{0} ∈ [0, 1), choose

    η_{k} = | ||F(x_{k})|| − ||F(x_{k−1}) + F′(x_{k−1})s_{k−1}|| | / ||F(x_{k−1})||,  k = 1, 2, . . .

  - (2). Given γ ∈ [0, 1], α ∈ (1, 2] and η_{0} ∈ [0, 1), choose

    η_{k} = γ (||F(x_{k})|| / ||F(x_{k−1})||)^{α},  k = 1, 2, . . .

- 6. H.B. An, Z.Y. Mo and X.P. Liu [1] chose the forcing terms in the following way: […]
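Eisenstat and Walker's Choice (2) above takes only a few lines. The sketch below is ours, and the safeguard shown (keeping η_{k} from dropping too abruptly while γη_{k−1}^{α} is still non-negligible) is the one commonly paired with this choice in [15], not a formula taken from this paper:

```python
def ew_choice2(norm_F_k, norm_F_km1, eta_km1, gamma=0.9, alpha=2.0, eta_max=0.9):
    """Eisenstat-Walker Choice 2 forcing term:
    eta_k = gamma * (||F(x_k)|| / ||F(x_{k-1})||)**alpha,
    with the customary safeguard and a cap at eta_max."""
    eta = gamma * (norm_F_k / norm_F_km1) ** alpha
    safeguard = gamma * eta_km1 ** alpha
    if safeguard > 0.1:          # only invoked while it is non-negligible
        eta = max(eta, safeguard)
    return min(eta, eta_max)

# example: the residual norm halved in the last step
eta1 = ew_choice2(0.5, 1.0, 0.5)
```

As the residual norms shrink, η_{k} → 0, so the inner solves tighten automatically and fast local convergence is preserved without oversolving early on.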

2. A modified inexact Newton method

In this section, we introduce a modified inexact Newton method.
From Algorithm 1.2, we know that the inexact Newton iteration scheme is

F′(x_{k})(x_{k+1} − x_{k}) = −F(x_{k}) + r_{k}, with ||r_{k}|| ≤ η_{k}||F(x_{k})||.  (2.1)

- 2.1. Choose η_{k} ∈ [0, 1).
- 2.2. For the residual r_{k} and the iteration x_{k}, find the step s_{k} satisfying

  […]  (2.9)

  where

  […]  (2.10)

- 2.3. Set x_{k+1} = x_{k} + s_{k}.
- 2.4. Set k = k + 1 and turn to 2.1.
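The outer loop of Algorithm 2.1 can be sketched generically. Since the coefficient matrix of the modified Newton equation (2.9) was rendered as an image in the source, the sketch below (names ours) leaves it as a pluggable callback; choosing `matrix(x, k) = F'(x)` recovers the classical iteration, and an exact linear solve corresponds to r_{k} = 0, which trivially satisfies (2.10):

```python
import numpy as np

def algorithm_2_1(F, matrix, x0, tol=1e-9, max_iter=100):
    """Skeleton of Algorithm 2.1 with the coefficient matrix of the modified
    Newton equation supplied as matrix(x, k). Here the linear system is
    solved exactly (r_k = 0), which satisfies ||r_k|| <= eta_k ||F(x_k)||."""
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) <= tol:
            break
        x = x + np.linalg.solve(matrix(x, k), -Fx)   # steps 2.2-2.3
    return x

# with the Newton matrix choice, the skeleton reduces to Newton's method:
F = lambda x: np.array([x[0] ** 2 - 1.0, x[0] + x[1] - 2.0])
J = lambda x, k: np.array([[2.0 * x[0], 0.0], [1.0, 1.0]])
sol = algorithm_2_1(F, J, [3.0, 3.0])
```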


3. Semi-local convergence analysis of modified inexact Newton method

In this section, we will present the semi-local convergence result for Algorithm 2.1 and an affine invariant version is also presented.
The following well-known results are useful for our theorems; one can find them in [24].

4. Numerical experiments

In this section, we give three test problems using the algorithm presented in this paper to show its convergence properties and robustness. The purpose of the first two problems is to show that Algorithm 2.1 is useful in the nonlinear case; by useful we mean that the algorithm presented converges faster. In the third problem, a nonlinear PDE is solved.

5. Concluding remarks

The modified inexact Newton method for nonlinear equations has been presented. It is shown that the method performs several times faster than the inexact Newton method. A semi-local convergence theorem for the modified inexact Newton method is established, and an affine invariant version is also presented. We then give three numerical examples which show that the modified inexact Newton scheme is more efficient than the traditional inexact Newton strategy. Therefore, the modified inexact Newton method is recommended for computing numerical solutions of nonlinear equations effectively.
References

1. H.B. An, Z.Y. Mo and X.P. Liu, A choice of forcing terms in inexact Newton method, J. Comput. Appl. Math. 200 (2007), 47-60. DOI: 10.1016/j.cam.2005.12.030
2. I.K. Argyros, On the semilocal convergence of inexact Newton methods in Banach spaces, J. Comput. Appl. Math. 228 (2009), 434-443. DOI: 10.1016/j.cam.2008.10.005
3. I.K. Argyros, A new convergence theorem for the inexact Newton methods based on assumptions involving the second Fréchet derivative, Comput. Math. Appl. 37 (1999), 109-115. DOI: 10.1016/S0898-1221(99)00091-7
4. Z.Z. Bai and P.L. Tong, On the affine invariant convergence theorems of inexact Newton method and Broyden's method, J. UEST China 23 (1994), 535-539 (in Chinese).
5. Z.J. Bai, Inexact Newton methods for inverse eigenvalue problems, Appl. Math. Comput. 172 (2006), 682-689. DOI: 10.1016/j.amc.2004.11.023
6. P.N. Brown, P.S. Vassilevski and C.S. Woodward, On mesh-independent convergence of an inexact Newton-Multigrid algorithm, SIAM J. Sci. Comput. 25 (2003), 570-590. DOI: 10.1137/S1064827502407822
7. P.N. Brown and Y. Saad, Hybrid Krylov methods for nonlinear systems of equations, SIAM J. Sci. Stat. Comput. 11 (1990), 450-481. DOI: 10.1137/0911026
8. X.C. Cai and D.E. Keyes, Nonlinearly preconditioned inexact Newton algorithms, SIAM J. Sci. Comput. 24 (2002), 183-200. DOI: 10.1137/S106482750037620X
9. X.C. Cai, W.D. Gropp, D.E. Keyes and M.D. Tidriri, Newton-Krylov-Schwarz methods in CFD, in: Proceedings of the International Workshop on Numerical Methods for the Navier-Stokes Equations, Vieweg, Braunschweig, 1995.
10. J.H. Chen and W.G. Li, Convergence behaviour of inexact Newton methods under weak Lipschitz condition, J. Comput. Appl. Math. 191 (2006), 143-164. DOI: 10.1016/j.cam.2005.03.076
11. R.S. Dembo, S.C. Eisenstat and T. Steihaug, Inexact Newton methods, SIAM J. Numer. Anal. 19 (1982), 400-408. DOI: 10.1137/0719025
12. R.S. Dembo and T. Steihaug, Truncated Newton algorithms for large-scale optimization, Math. Program. 26 (1983), 190-212. DOI: 10.1007/BF02592055
13. P. Deuflhard and G. Heindl, Affine invariant convergence theorems for Newton's method and extensions to related methods, SIAM J. Numer. Anal. 16 (1979), 1-10. DOI: 10.1137/0716001
14. S.C. Eisenstat and H.F. Walker, Globally convergent inexact Newton methods, SIAM J. Optim. 4 (1994), 393-422. DOI: 10.1137/0804022
15. S.C. Eisenstat and H.F. Walker, Choosing the forcing term in an inexact Newton method, SIAM J. Sci. Comput. 17 (1996), 16-32. DOI: 10.1137/0917003
16. R.N. Elias, A.L.G.A. Coutinho and M.A.D. Martins, Inexact Newton-type methods for the solution of steady incompressible viscoplastic flows with the SUPG/PSPG finite element formulation, Comput. Methods Appl. Mech. Engrg. 195 (2006), 3145-3167. DOI: 10.1016/j.cma.2004.08.015
17. D.R. Fokkema, G.L.G. Sleijpen and H.A. van der Vorst, Accelerated inexact Newton schemes for large systems of nonlinear equations, SIAM J. Sci. Comput. 19 (1998), 657-674. DOI: 10.1137/S1064827595296148
18. A. Galántai, The theory of Newton's method, J. Comput. Appl. Math. 124 (2000), 25-44. DOI: 10.1016/S0377-0427(00)00435-0
19. X.P. Guo, On semilocal convergence of inexact Newton methods, J. Comput. Math. 25 (2007), 231-242.
20. C. Li and W.P. Shen, Local convergence of inexact methods under the Hölder condition, J. Comput. Appl. Math. 222 (2008), 544-560. DOI: 10.1016/j.cam.2007.12.001
21. J.M. Martinez and L. Qi, Inexact Newton methods for solving nonsmooth equations, J. Comput. Appl. Math. 60 (1995), 127-145. DOI: 10.1016/0377-0427(94)00088-I
22. I. Moret, A Kantorovich type theorem for inexact Newton methods, Numer. Funct. Anal. Optim. 10 (1989), 351-365. DOI: 10.1080/01630568908816307
23. B. Morini, Convergence behaviour of inexact Newton method, Math. Comput. 68 (1999), 1605-1613. DOI: 10.1090/S0025-5718-99-01135-7
24. J.M. Ortega and W.C. Rheinboldt, Iterative Solution of Nonlinear Equations in Several Variables, SIAM, Philadelphia, PA, 2000.
25. A. Rieder, Inexact Newton regularization using conjugate gradients as inner iteration, SIAM J. Numer. Anal. 43 (2005), 604-622. DOI: 10.1137/040604029
26. A. Rieder, On the regularization of nonlinear ill-posed problems via inexact Newton iterations, Inverse Probl. 15 (1999), 309-327. DOI: 10.1088/0266-5611/15/1/028
27. A. Rieder, On convergence rates of inexact Newton regularization, Numer. Math. 88 (2001), 347-365. DOI: 10.1007/PL00005448
28. J.N. Shadid, R.S. Tuminaro and H.F. Walker, An inexact Newton method for fully coupled solution of the Navier-Stokes equations with heat and mass transport, J. Comput. Phys. 137 (1997), 155-185. DOI: 10.1006/jcph.1997.5798
29. W.P. Shen and C. Li, Convergence criterion of inexact methods for operators with Hölder continuous derivatives, Taiwan. J. Math. 12 (2008), 1865-1882.
30. W.P. Shen and C. Li, Kantorovich-type convergence criterion for inexact Newton methods, Appl. Numer. Math. 59 (2009), 1599-1611. DOI: 10.1016/j.apnum.2008.11.002
31. M.V. Solodov and B.F. Svaiter, A globally convergent inexact Newton method for systems of monotone equations, in: Reformulation: Nonsmooth, Piecewise Smooth, Semismooth and Smoothing Methods, Kluwer Academic Publishers, 1998.
32. M. Wu, A new semi-local convergence theorem for the inexact Newton methods, Appl. Numer. Math. 200 (2008), 80-86.
33. T. Yamamoto, Historical developments in convergence analysis for Newton's and Newton-like methods, J. Comput. Appl. Math. 124 (2000), 1-23. DOI: 10.1016/S0377-0427(00)00417-9
34. T.J. Ypma, Local convergence of inexact Newton methods, SIAM J. Numer. Anal. 21 (1984), 583-590. DOI: 10.1137/0721040
35. T.J. Ypma, Affine invariant convergence for Newton's methods, BIT 22 (1982), 108-118. DOI: 10.1007/BF01934400
36. J.L. Zhou, S.Y. Zhang, G.P. Yang and J.R. Tan, A convergence theorem for the inexact Newton methods based on Hölder continuous Fréchet derivative, Appl. Numer. Math. 197 (2008), 206-211.
