In this paper, we propose a modified proximal point algorithm which consists of a resolvent operator technique step followed by a generalized projection onto a moving halfspace for approximating a solution of a variational inclusion involving a maximal monotone mapping and a monotone, bounded and continuous operator in Banach spaces. The weak convergence of the iterative sequence generated by the algorithm is also proved.
AMS Mathematics Subject Classification : 47H09, 47H05, 47H06, 47J25, 47J05.
1. Introduction
Variational inclusions, as generalizations of variational inequalities, are among the most interesting and important mathematical problems and have been widely studied in recent years, since they have wide applications in mechanics, physics, optimization and control, nonlinear programming, economics, transportation equilibrium, and the engineering sciences. It is well known that the monotonicity and accretivity of mappings play an important role in the theory and algorithms of variational inclusions, and various iterative algorithms for solving variational inclusions have been developed by many authors. For details, we refer to [3–23].
In this paper, we mainly consider the following nonlinear variational inclusion problem: find u ∈ E such that

0 ∈ f(u) + M(u), (1.1)

where E is a Banach space, f : E → E^{∗} is a single-valued mapping and M : E → 2^{E∗} is a multivalued mapping. The set of solutions of Problem (1.1) is denoted by VI(E, f, M), i.e., VI(E, f, M) = {x ∈ E : 0 ∈ f(x) + M(x)}. Throughout this paper, we always assume that VI(E, f, M) ≠ ∅.
If f ≡ 0, then (1.1) reduces to the problem of finding u ∈ E such that 0 ∈ M(u), which is known as the zero problem of a multivalued operator and has been studied by many authors when M is monotone or accretive; see [9–11, 17–19, 22, 24, 25] and the references therein.
If M is accretive, then Problem (1.1) has also been studied by many authors in Banach spaces by means of the resolvent operator; see [5, 12] and the references therein.
However, when M is monotone, Problem (1.1) in Banach spaces is far less studied than when M is accretive. In [13], Lou et al. constructed an iterative algorithm for approximating a solution of a class of generalized variational inclusions involving monotone mappings in Banach spaces, but strong accretivity and Lipschitz continuity are assumed on the perturbed operator f, which are very strong conditions. Therefore, under weaker assumptions on the perturbed operator f, the development of an efficient and implementable algorithm for solving Problem (1.1) and its generalizations in Banach spaces when M is monotone is interesting and important.
When E is a Hilbert space and M is a maximal monotone, H-monotone or A-monotone mapping, Problem (1.1) has been studied in [15, 23, 26]. In particular, Zhang [26] constructed the following iterative algorithm:
Algorithm 1.1
Step 0. (Initiation) Select an initial z_0 ∈ ℍ (a Hilbert space) and set k = 0.
Step 1. (Resolvent step) Find x_k ∈ ℍ such that A(z_k) ∈ A(x_k) + λ_k(f(x_k) + M(x_k)), where {λ_k} is a positive sequence satisfying the stepsize condition of [26].
Step 2. (Projection step) Set K = {z ∈ ℍ : ⟨A(z_k) − A(x_k), z − A(x_k)⟩ ≤ 0}. If A(z_k) = A(x_k), then stop; otherwise, take z_{k+1} such that A(z_{k+1}) = P_K(A(z_k)).
Step 3. Let k = k + 1 and return to Step 1.
Moreover, Zhang [26] proved that the iterative sequence {x_k} generated by Algorithm 1.1 converges weakly to a solution of (1.1) when M : H → 2^{H} is an A-monotone mapping and f : H → H is merely monotone and continuous.
We should note that:
(1) Algorithm 1.1 requires only that the perturbed operator f be monotone and continuous, which is weaker than the strong monotonicity and Lipschitz continuity assumed in some related works; see [13, 15, 23] and the references therein;
(2) the next iterate A(z_{k+1}) is the metric projection of the current iterate A(z_k) onto the separating halfspace K, which is inexpensive from a numerical point of view.
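In ℝ (so that ℍ = ℝ and the projection is elementary) these two steps can be sketched numerically. The concrete choices below — A = I, M(x) = x, f(x) = x³ and a constant λ_k = 2 — are illustrative assumptions, not taken from [26]; note that f is monotone and continuous but not Lipschitz, matching the weak assumptions in note (1):

```python
def resolvent(z, lam, iters=200):
    """Solve z = x + lam * (f(x) + M(x)) for x by bisection, with
    M(x) = x and f(x) = x**3 (monotone, continuous, not Lipschitz)."""
    g = lambda x: x + lam * (x ** 3 + x) - z
    lo, hi = min(0.0, z), max(0.0, z)   # g is increasing and changes sign here
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) <= 0 else (lo, mid)
    return 0.5 * (lo + hi)

z = -1.0                        # initial point z_0
for k in range(60):
    x = resolvent(z, lam=2.0)   # resolvent step
    # projection step: with A = I, K = {v : (v - x)*(z - x) <= 0}, and for
    # z != x the metric projection of z onto K is exactly x
    z = x
```

In one dimension the projection step collapses to z_{k+1} = x_k, so the scheme reduces to a plain proximal point iteration; the halfspace projection only becomes substantive in higher dimensions.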
However, Algorithm 1.1 is confined to Hilbert spaces. Since the metric projection depends essentially on the inner product structure of a Hilbert space, it cannot be applied directly to variational inclusions in Banach spaces.
The above facts motivate us to develop alternative methods for approximating solutions of variational inclusions in Banach spaces. The purpose of this paper is therefore to modify Algorithm 1.1 so that it applies in Banach spaces for approximating a solution of Problem (1.1) when M is maximal monotone and the perturbed operator f is only monotone and continuous. This paper is organized as follows. In Section 2, we recall some basic concepts and properties. In Section 3, we consider Problem (1.1) involving a maximal monotone mapping and a monotone, bounded and continuous operator in Banach spaces and prove Theorem 3.1, which extends the zero problem of a monotone operator studied in [6, 9, 10, 19, 25] to Problem (1.1) and also extends Problem (1.1) as considered in [15, 23, 24, 26] from Hilbert spaces to Banach spaces. Furthermore, Theorem 3.1 develops the results of [5, 11, 12] in different directions. In Section 4, we consider the zero point problem of a maximal monotone mapping and construct Algorithm 4.1. Moreover, we give a simple example comparing Algorithm 4.1 with the algorithm of [19].
2. Preliminaries
Throughout this paper, let E be a Banach space with norm ‖·‖, and let E^{∗} be the dual space of E. ⟨·, ·⟩ denotes the duality pairing of E and E^{∗}. When {x_n} is a sequence in E, we denote strong convergence of {x_n} to x ∈ E by x_n → x, and weak convergence by x_n ⇀ x. Let 2^{E∗} denote the family of all nonempty subsets of E^{∗}. Let U = {x ∈ E : ‖x‖ = 1} be the unit sphere of E. A Banach space E is said to be strictly convex if ‖(x + y)/2‖ < 1 for all x, y ∈ U with x ≠ y. It is said to be uniformly convex if x_n − y_n → 0 for any two sequences {x_n}, {y_n} in U with ‖(x_n + y_n)/2‖ → 1. E is said to be smooth provided the limit

lim_{t→0} (‖x + ty‖ − ‖x‖)/t

exists for each x, y ∈ U. It is said to be uniformly smooth if the limit is attained uniformly for x, y ∈ U.
Let J : E → 2^{E∗} be the normalized duality mapping defined by

J(x) = {f ∈ E^{∗} : ⟨x, f⟩ = ‖x‖² = ‖f‖²}, x ∈ E. (2.1)

The following properties of J can be found in [2, 6]:
(i) If E is smooth, then J is single-valued.
(ii) If E is strictly convex, then J is strictly monotone and one-to-one.
(iii) If E is reflexive, then J is surjective.
(iv) If E is uniformly smooth, then J is uniformly norm-to-norm continuous on each bounded subset of E.
The duality mapping J from a smooth Banach space E into E^{∗} is said to be weakly sequentially continuous [4, 6] if x_n ⇀ x implies Jx_n ⇀ Jx.
Definition 2.1 ([7, 20]). Let f : E → E^{∗} be a single-valued mapping. f is said to be
(i) monotone if ⟨x − y, f(x) − f(y)⟩ ≥ 0 for all x, y ∈ E;
(ii) strictly monotone if f is monotone and ⟨x − y, f(x) − f(y)⟩ = 0 holds if and only if x = y;
(iii) γ-strongly monotone if there exists a constant γ > 0 such that ⟨x − y, f(x) − f(y)⟩ ≥ γ‖x − y‖² for all x, y ∈ E;
(iv) δ-Lipschitz continuous if there exists a constant δ > 0 such that ‖f(x) − f(y)‖ ≤ δ‖x − y‖ for all x, y ∈ E;
(v) α-inverse-strongly monotone if there exists a constant α > 0 such that ⟨x − y, f(x) − f(y)⟩ ≥ α‖f(x) − f(y)‖² for all x, y ∈ E.
It is obvious that an α-inverse-strongly monotone mapping is monotone and (1/α)-Lipschitz continuous.
Definition 2.2 ([3, 9, 13, 20]). Let A, H : E → E^{∗} be two nonlinear operators. A multivalued operator M : E → 2^{E∗} with domain D(M) = {z ∈ E : Mz ≠ ∅} and range R(M) = ∪{Mz ⊂ E^{∗} : z ∈ D(M)} is said to be
(i) monotone if ⟨x_1 − x_2, u_1 − u_2⟩ ≥ 0 for each x_i ∈ D(M) and u_i ∈ M(x_i), i = 1, 2;
(ii) α-strongly monotone if there exists a constant α > 0 such that ⟨x_1 − x_2, u_1 − u_2⟩ ≥ α‖x_1 − x_2‖² for each x_i ∈ D(M) and u_i ∈ M(x_i), i = 1, 2;
(iii) m-relaxed monotone if there exists a constant m > 0 such that ⟨x_1 − x_2, u_1 − u_2⟩ ≥ −m‖x_1 − x_2‖² for each x_i ∈ D(M) and u_i ∈ M(x_i), i = 1, 2;
(iv) maximal monotone if M is monotone and its graph G(M) = {(x, u) : u ∈ Mx} is not properly contained in the graph of any other monotone operator. It is known that a monotone mapping M is maximal if and only if, for (x, u) ∈ E × E^{∗}, ⟨x − y, u − v⟩ ≥ 0 for every (y, v) ∈ G(M) implies u ∈ Mx;
(v) general H-monotone if M is monotone and (H + λM)E = E^{∗} for all λ > 0;
(vi) general A-monotone if M is m-relaxed monotone and (A + λM)E = E^{∗} for all λ > 0.
Remark 2.1. It follows from [16] that if E is a reflexive Banach space, then a monotone mapping M : E → 2^{E∗} is maximal if and only if R(J + λM) = E^{∗} for all λ > 0.
Remark 2.2. Note that general A-monotonicity generalizes general H-monotonicity. On the other hand, if E is a Hilbert space, then the general A-monotone operator reduces to the A-monotone operator studied in [26], and the general H-monotone operator reduces to the H-monotone operator studied in [10, 23]. For examples of these operators and their relations, we refer the reader to [3, 10, 23] and the references therein.
Let E be a smooth Banach space. Define

ϕ(x, y) = ‖x‖² − 2⟨x, Jy⟩ + ‖y‖², x, y ∈ E.

Clearly, from the definition of ϕ we have
(A1) (‖x‖ − ‖y‖)² ≤ ϕ(y, x) ≤ (‖x‖ + ‖y‖)²;
(A2) ϕ(x, y) = ϕ(x, z) + ϕ(z, y) + 2⟨x − z, Jz − Jy⟩;
(A3) ϕ(x, y) = ⟨x, Jx − Jy⟩ + ⟨y − x, Jy⟩ ≤ ‖x‖‖Jx − Jy‖ + ‖y − x‖‖y‖.
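In a Hilbert space J is the identity and ϕ(x, y) collapses to ‖x − y‖², which makes (A1) and (A2) easy to sanity-check numerically. A minimal sketch (the Hilbert-case assumption J = I is ours; it does not exercise the general Banach setting):

```python
import random

def inner(u, v):
    return sum(a * b for a, b in zip(u, v))

def phi(x, y):
    # phi(x, y) = ||x||^2 - 2<x, Jy> + ||y||^2 with J = identity (Hilbert case)
    return inner(x, x) - 2 * inner(x, y) + inner(y, y)

def norm(u):
    return inner(u, u) ** 0.5

random.seed(0)
x, y, z = ([random.uniform(-1, 1) for _ in range(3)] for _ in range(3))

# (A1): (||x|| - ||y||)^2 <= phi(y, x) <= (||x|| + ||y||)^2
assert (norm(x) - norm(y)) ** 2 - 1e-12 <= phi(y, x) <= (norm(x) + norm(y)) ** 2 + 1e-12
# (A2): the three-point identity
diff = lambda u, v: [a - b for a, b in zip(u, v)]
assert abs(phi(x, y) - (phi(x, z) + phi(z, y) + 2 * inner(diff(x, z), diff(z, y)))) < 1e-12
```

With J = I, (A2) is just the expansion ‖x − y‖² = ‖(x − z) + (z − y)‖².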
Remark 2.3. It follows from Remark 2.1 in [14] that if E is a strictly convex and smooth Banach space, then for x, y ∈ E, ϕ(y, x) = 0 if and only if x = y.
Let E be a reflexive, strictly convex and smooth Banach space, and let K denote a nonempty, closed and convex subset of E. By Alber [2], for each x ∈ E there exists a unique element x_0 ∈ K (denoted by Π_K(x)) such that ϕ(x_0, x) = min_{y∈K} ϕ(y, x). The mapping Π_K : E → K defined by Π_K(x) = x_0 is called the generalized projection operator from E onto K, and x_0 is called the generalized projection of x. See [1] for some properties of Π_K. If E is a Hilbert space, then Π_K coincides with the metric projection P_K from E onto K.
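The algorithms below project onto halfspaces of the form K = {v : ⟨a, v − c⟩ ≤ 0}. In the Hilbert case, where Π_K = P_K, this projection has a one-line closed form, P_K(z) = z − max(0, ⟨a, z − c⟩)/‖a‖² · a. A short sketch (the halfspace data a, c are illustrative):

```python
def project_halfspace(z, a, c):
    """Metric projection of z onto K = {v : <a, v - c> <= 0} (Hilbert case)."""
    inner = lambda u, v: sum(p * q for p, q in zip(u, v))
    viol = inner(a, [zi - ci for zi, ci in zip(z, c)])
    if viol <= 0:
        return list(z)            # z already lies in K
    t = viol / inner(a, a)        # step length along the outward normal a
    return [zi - t * ai for zi, ai in zip(z, a)]

# the point (2, 2) violates x + y <= 1 and projects onto (0.5, 0.5)
p = project_halfspace([2.0, 2.0], a=[1.0, 1.0], c=[0.5, 0.5])
```

In a genuinely Banach setting the generalized projection Π_K minimizes ϕ(·, x) instead and no such closed form is available in general.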
Lemma 2.3 ([2]). Let E be a reflexive, strictly convex and smooth Banach space, let C be a nonempty, closed and convex subset of E, and let x ∈ E. Then ϕ(y, Π_C(x)) + ϕ(Π_C(x), x) ≤ ϕ(y, x) for all y ∈ C.
Lemma 2.4 ([2]). Let C be a nonempty, closed and convex subset of a smooth Banach space E, and let x ∈ E. Then x_0 = Π_C(x) if and only if ⟨x_0 − y, Jx − Jx_0⟩ ≥ 0 for all y ∈ C.
Lemma 2.5 ([8]). Let E be a uniformly convex and smooth Banach space, and let {y_n}, {z_n} be two sequences in E. If ϕ(y_n, z_n) → 0 and either {y_n} or {z_n} is bounded, then y_n − z_n → 0.
An operator A of C into E^{∗} is said to be hemicontinuous if, for all x, y ∈ C, the mapping f of [0, 1] into E^{∗} defined by f(t) = A(tx + (1 − t)y) is continuous with respect to the weak^{∗} topology of E^{∗}.
Lemma 2.6 ([16]). Let E be a reflexive Banach space. If T : E → 2^{E∗} is a maximal monotone mapping and P : E → E^{∗} is a hemicontinuous, bounded, monotone operator with D(P) = E, then the sum S = T + P is a maximal monotone mapping.
Lemma 2.7 ([16]). Let E be a reflexive Banach space and let λ be a positive number. If T : E → 2^{E∗} is a maximal monotone mapping, then R(J + λT) = E^{∗} and (J + λT)^{−1} : E^{∗} → E is a demicontinuous single-valued maximal monotone mapping.
Lemma 2.8 ([7]). Let S be a nonempty, closed and convex subset of a uniformly convex, smooth Banach space E, and let {x_n} be a sequence in E. Suppose that ϕ(u, x_{n+1}) ≤ ϕ(u, x_n) for all u ∈ S and every n = 1, 2, .... Then {Π_S x_n} is a Cauchy sequence.
3. Variational inclusion
In this section, we construct the following iterative algorithm for solving the variational inclusion (1.1) involving a maximal monotone mapping M and a continuous, bounded, monotone operator f.
Algorithm 3.1
Step 0. (Initiation) Arbitrarily select an initial z_0 ∈ E and set k = 0.
Step 1. (Resolvent step) Find x_k ∈ E such that

J(z_k) ∈ J(x_k) + λ_k(f(x_k) + M(x_k)), (3.1)

where {λ_k} is a positive sequence satisfying condition (3.2).
Step 2. (Projection step) Set C_k = {z ∈ E : ⟨z − x_k, J(z_k) − J(x_k)⟩ ≤ 0}. If z_k = x_k, then stop; otherwise, take z_{k+1} = Π_{C_k}(z_k).
Step 3. Let k = k + 1 and return to Step 1.
Remark 3.1. (1) We show the existence of x_k in (3.1). In fact, (3.1) is equivalent to the following problem: find x_k ∈ E such that

J(z_k) ∈ (J + λ_k f + λ_k M)(x_k). (3.4)

Since M : E → 2^{E∗} is maximal monotone and f : E → E^{∗} is a continuous, bounded, monotone operator with D(f) = E, Lemma 2.6 implies that M + f is maximal monotone. By Lemma 2.7, for any λ_k > 0, J + λ_k f + λ_k M is surjective. Hence there is an x_k ∈ E such that (3.1) holds, i.e., Step 1 of Algorithm 3.1 is well-defined.
(2) If x_k = z_k, then by (3.4) we have x_k ∈ VI(E, f, M). Thus the iterative sequence {x_k} is finite and its last term is a solution of Problem (1.1). If z_k ≠ x_k, then z_k ∉ C_k. Therefore Algorithm 3.1 is well-defined.
(3) In Algorithm 3.1, the resolvent step (3.1) is used to construct a halfspace, and the next iterate z_{k+1} is the generalized projection of the current iterate z_k onto the halfspace, which is inexpensive from a numerical point of view.
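When E = ℝⁿ, J is the identity and Π_{C_k} is the metric projection, so Algorithm 3.1 can be exercised numerically. The sketch below uses illustrative assumptions not taken from the text: M(x) = Bx with B = [[1, 2], [−2, 1]] (monotone, since its symmetric part is the identity), f(x) = x, and a constant λ_k = 1:

```python
def solve2(a, b, c, d, r1, r2):
    """Solve the 2x2 linear system [[a, b], [c, d]] x = (r1, r2)."""
    det = a * d - b * c
    return ((d * r1 - b * r2) / det, (a * r2 - c * r1) / det)

lam = 1.0                      # constant step size lambda_k (an assumption)
zx, zy = 3.0, -2.0             # initial point z_0
for k in range(80):
    # resolvent step: J(z_k) in J(x_k) + lam*(f + M)(x_k) with J = I reads
    # (I + lam*(B + I)) x_k = z_k, where M(x) = Bx and f(x) = x
    xk = solve2(1 + lam * 2, lam * 2, -lam * 2, 1 + lam * 2, zx, zy)
    # projection step: in the Hilbert case the projection of z_k onto
    # C_k = {z : <z - x_k, z_k - x_k> <= 0} is exactly x_k
    zx, zy = xk
```

In the Hilbert case the projection step simply restarts from x_k; in a genuinely Banach setting J ≠ I and the generalized projection no longer collapses this way.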
Now we show the convergence of the iterative sequence generated by Algorithm 3.1 in the Banach space E.
Theorem 3.1. Let E be a uniformly convex, uniformly smooth Banach space whose duality mapping J is weakly sequentially continuous, and let M : E → 2^{E∗} be a maximal monotone mapping. Let f : E → E^{∗} be a continuous, bounded, monotone operator with D(f) = E. Then the iterative sequence {x_k} generated by Algorithm 3.1 converges weakly to an element x̄ ∈ VI(E, f, M). Further, x̄ = lim_{k→∞} Π_{VI(E,f,M)}(z_k).
Proof. We split the proof into five steps.
Step 1. Show that {z_k} is bounded.
Suppose x^{∗} ∈ VI(E, f, M). Then −f(x^{∗}) ∈ M(x^{∗}). From (3.4), the monotonicity of M and the monotonicity of f together with (3.5), we deduce that ⟨x^{∗} − x_k, J(z_k) − J(x_k)⟩ ≤ 0, i.e., x^{∗} ∈ C_k. Since z_{k+1} = Π_{C_k}(z_k), Lemma 2.3 yields

ϕ(x^{∗}, z_{k+1}) + ϕ(z_{k+1}, z_k) ≤ ϕ(x^{∗}, z_k). (3.6)

Thus ϕ(x^{∗}, z_{k+1}) ≤ ϕ(x^{∗}, z_k) (3.7), which yields that the sequence {ϕ(x^{∗}, z_k)} is convergent. From (A1), we know that {z_k} is bounded.
Step 2. Show that {x_k} is also bounded and that {x_k} and {z_k} have the same weak accumulation points.
It follows from (3.6) and the convergence of {ϕ(x^{∗}, z_k)} that ϕ(z_{k+1}, z_k) → 0 (3.8). From z_{k+1} = Π_{C_k}(z_k) ∈ C_k, we have ⟨z_{k+1} − x_k, J(z_k) − J(x_k)⟩ ≤ 0 (3.10). By (A1), (A2), (3.10) and (3.8), we have ϕ(x_k, z_k) → 0, which together with Lemma 2.5 implies

x_k − z_k → 0. (3.12)

By (A1), since {z_k} is bounded, we conclude from (3.12) that {x_k} is also bounded. Moreover, {x_k} and {z_k} have the same weak accumulation points.
Step 3. Show that each weak accumulation point of the sequence {x_k} is a solution of Problem (1.1).
Since J is uniformly norm-to-norm continuous on bounded sets, it follows from (3.12) that

‖J(z_k) − J(x_k)‖ → 0. (3.13)

Since {x_k} is bounded, let x̄ be a weak accumulation point of {x_k}; then we can extract a subsequence that converges weakly to x̄. Without loss of generality, suppose x_k ⇀ x̄ as k → ∞. Then from (3.12) we have z_k ⇀ x̄ as k → ∞. For any fixed v ∈ E, take an arbitrary u ∈ f(v) + M(v). Then there exists a point w ∈ M(v) such that w + f(v) = u. By (3.4), (J(z_k) − J(x_k))/λ_k ∈ f(x_k) + M(x_k); hence the monotonicity of f and of M gives two inequalities whose sum, using w + f(v) = u, is

⟨x_k − v, (J(z_k) − J(x_k))/λ_k − u⟩ ≥ 0. (3.14)

Taking limits in (3.14), by (3.13) and the boundedness of {x_k}, we have ⟨x̄ − v, −u⟩ ≥ 0. Since M + f is maximal monotone, by the arbitrariness of (v, u) ∈ G(M + f) we conclude that (x̄, 0) ∈ G(M + f), and hence x̄ is a solution of Problem (1.1), i.e., x̄ ∈ VI(E, f, M).
Step 4. Show that VI(E, f, M) is closed and convex.
Take {y_n} ⊂ VI(E, f, M) with y_n → x̄ as n → ∞. Since y_n ∈ VI(E, f, M), we have −f(y_n) ∈ M(y_n). For any fixed v ∈ E, take w ∈ M(v). It follows from the monotonicity of M that

⟨v − y_n, w + f(y_n)⟩ ≥ 0. (3.15)

Taking limits in (3.15), by the continuity of f, we have ⟨v − x̄, w + f(x̄)⟩ ≥ 0. By the arbitrariness of (v, w) ∈ G(M) and the maximality of M, we conclude that −f(x̄) ∈ M(x̄), and hence x̄ ∈ VI(E, f, M), i.e., VI(E, f, M) is closed.
Take v_1, v_2 ∈ VI(E, f, M); then 0 ∈ f(v_i) + M(v_i), i = 1, 2. For any (v, u) ∈ G(M + f) and t ∈ (0, 1), the monotonicity of M + f gives

⟨v − v_1, u⟩ ≥ 0 (3.16) and ⟨v − v_2, u⟩ ≥ 0. (3.17)

Multiplying (3.16) by t and (3.17) by 1 − t and adding, we have ⟨v − (tv_1 + (1 − t)v_2), u⟩ ≥ 0. By the arbitrariness of (v, u) ∈ G(M + f) and the maximality of M + f, we conclude that (tv_1 + (1 − t)v_2, 0) ∈ G(M + f), and hence tv_1 + (1 − t)v_2 ∈ VI(E, f, M), i.e., VI(E, f, M) is convex.
Step 5. Show that x_k ⇀ x̄ as k → ∞ and x̄ = lim_{k→∞} Π_{VI(E,f,M)}(z_k).
Put u_k = Π_{VI(E,f,M)}(z_k). It follows from (3.7) and Lemma 2.8 that {u_k} is a Cauchy sequence. Since VI(E, f, M) is closed, {u_k} converges strongly to some w ∈ VI(E, f, M). By the uniform smoothness of E, J is norm-to-norm continuous, so J(u_k) → J(w). Finally, we prove x̄ = w. It follows from Lemma 2.4, u_k = Π_{VI(E,f,M)}(z_k) and x̄ ∈ VI(E, f, M) that

⟨u_k − x̄, J(z_k) − J(u_k)⟩ ≥ 0. (3.19)

Taking limits in (3.19), by the weak sequential continuity of J, we obtain ⟨w − x̄, J(x̄) − J(w)⟩ ≥ 0; combined with the monotonicity of J, this gives ⟨w − x̄, J(w) − J(x̄)⟩ = 0. Since E is strictly convex, J is strictly monotone, and we get x̄ = w. Therefore, the sequence {x_k} converges weakly to x̄ = lim_{k→∞} Π_{VI(E,f,M)}(z_k). □
Remark 3.2. If M = 0, then Theorem 3.1 reduces to the problem 0 ∈ f(x) for a monotone operator f, which was studied in [6] by the hybrid projection method under the assumption that f : E → E^{∗} is inverse-strongly monotone, a stronger condition than the monotonicity and continuity assumed in Theorem 3.1.
Remark 3.3. If f = 0, then Theorem 3.1 reduces to the zero point problem of a maximal monotone mapping; see Section 4 for details.
Remark 3.4. The idea of Theorem 3.1 comes from [26], i.e., Algorithm 1.1 of this paper. It develops [26] in terms of spatial structure, since Banach spaces form a wider class than Hilbert spaces, although Theorem 3.1 does not fully generalize [26]: the maximal monotone mapping in Hilbert spaces is the special case of the A-monotone mapping studied in [26] with A = I (the identity mapping).
Remark 3.5. It follows from Lemma 2.3 of [7] that the normalized duality mapping J defined by (2.1) is strongly monotone in a 2-uniformly convex Banach space, and hence the maximal monotone mapping becomes a special case of the A-monotone mapping with m = 0 and A = J, where A has the strong monotonicity and continuity assumed in [3, 9, 24, 26]. Therefore, it is interesting to construct iterative algorithms for approximating the solutions of Problem (1.1) when M is an A-monotone mapping, f is a continuous, bounded, monotone operator and A is a strongly monotone and continuous operator in a 2-uniformly convex and uniformly smooth Banach space. This would fully generalize the results of [26] from Hilbert spaces to Banach spaces.
4. The zero point problem
Let M : E → 2^{E∗} be a maximal monotone mapping. We consider the following problem: find x ∈ E such that

0 ∈ M(x). (4.1)

This is the zero point problem of a maximal monotone mapping. We denote the set of solutions of Problem (4.1) by VI(E, M) and suppose VI(E, M) ≠ ∅.
Theorem 4.1. Let E be a uniformly convex, uniformly smooth Banach space whose duality mapping J is weakly sequentially continuous, and let the sequence {x_k} be generated by the following algorithm.
Algorithm 4.1
Step 0. (Initiation) Arbitrarily select an initial z_0 ∈ E and set k = 0.
Step 1. (Resolvent step) Find x_k ∈ E such that

J(z_k) ∈ J(x_k) + λ_k M(x_k), (4.2)

where {λ_k} is a positive sequence satisfying condition (4.3).
Step 2. (Projection step) Set C_k = {z ∈ E : ⟨z − x_k, J(z_k) − J(x_k)⟩ ≤ 0}. If z_k = x_k, then stop; otherwise, take z_{k+1} = Π_{C_k}(z_k). (4.4)
Step 3. Let k = k + 1 and return to Step 1.
Then the iterative sequence {x_k} generated by Algorithm 4.1 converges weakly to an element x̄ ∈ VI(E, M). Further, x̄ = lim_{k→∞} Π_{VI(E,M)}(z_k).
Proof. Taking f ≡ 0 in Theorem 3.1, we obtain the desired conclusion. □
Remark 4.1. Problem (4.1) in Theorem 4.1 is considered in a Banach space, a more general setting than the Hilbert spaces considered in [25].
Remark 4.2. In [19], the authors also constructed an iterative algorithm for approximating a solution of Problem (4.1). More precisely, they constructed the following iterative algorithm.
Algorithm 4.2: the iterative scheme (4.5) of [19], where {α_n} ⊂ [0, 1] with α_n ≤ 1 − δ for some δ ∈ (0, 1), {r_n} ⊂ (0, +∞) with inf_{n≥0} r_n > 0, and the error sequence {e_n} ⊂ E satisfies ‖e_n‖ → 0 as n → ∞. They proved that the iterative sequence (4.5) converges strongly to Π_{VI(E,M)}(x_0).
Now, we give a simple example to compare Algorithm 4.1 with Algorithm 4.2.
Example 4.1. Let E = ℝ and let M : ℝ → ℝ, M(x) = x. Clearly M is maximal monotone and VI(E, M) = {0} ≠ ∅.
The numerical experiment result of Algorithm 4.1
Take a positive sequence {λ_k} satisfying (4.3), with λ_0 = 2, and initial point z_0 = −1 ∈ ℝ. Then the sequence {x_k} generated by Algorithm 4.1 is

x_k = −∏_{i=0}^{k} (1 + λ_i)^{−1}, k ≥ 0, (4.6)

and x_k → 0 as k → ∞, where 0 ∈ VI(E, M).
Proof. By (4.2), z_0 = (1 + λ_0)x_0. Since z_0 = −1 and λ_0 = 2, we have x_0 = −1/3. By Algorithm 4.1, C_0 = {z ∈ ℝ : ⟨z − x_0, z_0 − x_0⟩ ≤ 0} = [x_0, +∞). By (4.4), z_1 = P_{C_0}(−1) = x_0 < 0. It follows from z_1 = x_0 < 0 and (4.2) that x_0 = (1 + λ_1)x_1, i.e., x_1 = x_0/(1 + λ_1).
Suppose that

z_{k+1} = x_k = −∏_{i=0}^{k} (1 + λ_i)^{−1} < 0. (4.7)

By Algorithm 4.1, C_{k+1} = {z ∈ E : ⟨z − x_{k+1}, z_{k+1} − x_{k+1}⟩ ≤ 0}. It follows from hypothesis (4.7) that x_{k+1} > x_k and z_{k+1} − x_{k+1} < 0. Therefore, C_{k+1} = [x_{k+1}, +∞). Since z_{k+2} = P_{C_{k+1}}(z_{k+1}) = P_{[x_{k+1},+∞)}(x_k), we have z_{k+2} = x_{k+1} < 0. From (4.2), we have z_{k+2} = (1 + λ_{k+2})x_{k+2}, and hence x_{k+2} = x_{k+1}/(1 + λ_{k+2}). By induction, (4.6) holds. □
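The induction above can be checked numerically. The source fixes λ_0 = 2; continuing with a constant λ_k = 2 is an assumption made here only for illustration:

```python
lams = [2.0] * 25         # lambda_0 = 2 from the proof; the constant continuation is assumed
z = -1.0                  # initial point z_0
xs = []
for lam in lams:
    x = z / (1.0 + lam)   # resolvent step (4.2) for M(x) = x: z_k = (1 + lam_k) x_k
    xs.append(x)
    z = x                 # projection step (4.4) collapses to z_{k+1} = x_k
# closed form (4.6): x_k = z_0 * prod_{i=0}^{k} 1/(1 + lam_i), i.e. -1/3**(k+1) here
for k, x in enumerate(xs):
    assert abs(x - (-1.0) / 3 ** (k + 1)) < 1e-12
```

Each step divides the distance to the solution by 1 + λ_k, giving geometric decay of the iterates.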
Next, we report the numerical results in Table 4.1, which shows the iteration process of the sequence {x_k} with initial point z_0 = −1. From the table, we can see that {x_k} converges to 0.
The numerical experiment result of Algorithm 4.2
Take e_k = 0 for all k ≥ 0, with the same initial point. Then the sequence {x_k} generated by Algorithm 4.2 is given by (4.8), and x_k → 0 as k → ∞, where 0 ∈ VI(E, M).
Proof. By Algorithm 4.2, we have H_0 = {v ∈ ℝ : ‖v − z_0‖ ≤ ‖v − x_0‖} and W_0 = {v ∈ ℝ : ⟨v − x_0, x_0 − x_0⟩ ≤ 0} = ℝ, from which x_1 is determined.
Suppose that (4.8) holds up to index k + 1. Then H_{k+1} = {v ∈ ℝ : ‖v − z_{k+1}‖ ≤ ‖v − x_{k+1}‖} ⊂ [x_{k+1}, +∞) and W_{k+1} = {v ∈ ℝ : ⟨v − x_{k+1}, x_0 − x_{k+1}⟩ ≤ 0} = [x_{k+1}, +∞), so (4.9) and (4.10) hold. Combining (4.9) with (4.10), we obtain x_{k+2}. By induction, (4.8) holds. □
Next, we report the numerical results in Table 4.2, which shows the iteration process of the sequence {x_k}. From the table, we can see that {x_k} converges to 0.
Remark 4.3. Comparing Table 4.1 with Table 4.2, we can see that Algorithm 4.1 constructed in this paper converges faster than Algorithm 4.2 constructed in [19].
5. Conclusion
In this paper, we construct Algorithm 3.1 under very mild conditions for approximating a solution of Problem (1.1). The results of this paper develop the corresponding results in the literature in the following respects.
1) From a numerical point of view, the iterative steps of Algorithm 3.1 are simpler than those of [6, 15, 19], because we need not compute the intersection of two nonempty closed convex sets. Furthermore, the next iterate z_{k+1} is the generalized projection of the current iterate z_k onto the separating halfspace C_k, which is simpler than the generalized projection onto a general nonempty closed convex set.
2) In terms of spatial structure, the Banach spaces considered in this paper form a wider class than the Hilbert spaces considered in [15, 23, 24, 26].
3) The weak limit of the sequence {x_k} generated by Algorithm 3.1 is identified as x̄ = lim_{k→∞} Π_{VI(E,f,M)}(z_k), which is more concrete than the related conclusions of [19, 25, 26].
4) The perturbed operator f is only assumed to be monotone and continuous, which is weaker than the strong monotonicity and Lipschitz continuity assumed in [13, 15, 23] and the references therein.
BIO
Ying Liu received her MS degree from Hebei University. Since 2003 she has been a teacher at Hebei University. Her research interests include nonlinear optimization and fixed point theorems.
College of Mathematics and Information Science, Hebei University, Baoding, 071002, China.
email: ly_cyh2013@163.com
References
Y.I. Alber and S. Reich, An iterative method for solving a class of nonlinear operator equations in Banach spaces, Panamer. Math. J. 4 (1994), 39–54.
L.C. Cai, H.Y. Lan and Y. Zou, Perturbed algorithms for solving nonlinear relaxed cocoercive operator equations with general A-monotone operators in Banach spaces, Commun. Nonlinear Sci. Numer. Simulat. 16 (2011), 3923–3932. doi:10.1016/j.cnsns.2011.01.024
J. Diestel, Geometry of Banach Spaces, Springer-Verlag, Berlin, 1975.
Y.P. Fang and N.J. Huang, H-accretive operators and resolvent operator technique for solving variational inclusions in Banach spaces, Appl. Math. Lett. 17 (2004), 647–653. doi:10.1016/S0893-9659(04)90099-7
H. Iiduka and W. Takahashi, Strong convergence studied by a hybrid type method for monotone operators in a Banach space, Nonlinear Analysis 68 (2008), 3679–3688. doi:10.1016/j.na.2007.04.010
H. Iiduka and W. Takahashi, Weak convergence of a projection algorithm for variational inequalities in a Banach space, J. Math. Anal. Appl. 339 (2008), 668–679. doi:10.1016/j.jmaa.2007.07.019
S. Kamimura and W. Takahashi, Strong convergence of a proximal-type algorithm in a Banach space, SIAM J. Optim. 13 (2002), 938–945. doi:10.1137/S105262340139611X
H. Lan, Convergence analysis of new overrelaxed proximal point algorithm frameworks with errors and applications to general A-monotone nonlinear inclusion forms, Applied Mathematics and Computation 230 (2014), 154–163. doi:10.1016/j.amc.2013.12.028
S. Liu, H. He and R. Chen, Approximating solution of 0 ∈ Tx for an H-monotone operator in Hilbert spaces, Acta Mathematica Scientia 33B (2013), 1347–1360. doi:10.1016/S0252-9602(13)60086-7
S. Liu and H. He, Approximating solution of 0 ∈ Tx for an H-accretive operator in Banach spaces, J. Math. Anal. Appl. 385 (2012), 466–476. doi:10.1016/j.jmaa.2011.06.074
Y. Liu and Y. Chen, Viscosity iteration algorithms for nonlinear variational inclusions and fixed point problems in Banach spaces, J. Appl. Math. Comput. 45 (2014), 165–181. doi:10.1007/s12190-013-0717-6
J. Lou, X.F. He and Z. He, Iterative methods for solving a system of variational inclusions involving H-η-monotone operators in Banach spaces, Computers and Mathematics with Applications 55 (2008), 1832–1841. doi:10.1016/j.camwa.2007.07.010
S. Matsushita and W. Takahashi, A strong convergence theorem for relatively nonexpansive mappings in a Banach space, J. Approx. Theory 134 (2005), 257–266. doi:10.1016/j.jat.2005.02.007
L. Min and S.S. Zhang, A new iterative method for finding common solutions of generalized equilibrium problem, fixed point problem of infinite k-strict pseudo-contractive mappings and quasi-variational inclusion problem, Acta Mathematica Scientia 32B (2012), 499–519. doi:10.1016/S0252-9602(12)60033-2
D. Pascali, Nonlinear Mappings of Monotone Type, Sijthoff and Noordhoff International Publishers, Alphen aan den Rijn, 1978.
R.T. Rockafellar, Monotone operators and the proximal point algorithm, SIAM J. Control Optim. 14 (1976), 877–898. doi:10.1137/0314056
N.K. Sahu and R.N. Mohapatra, Approximation solvability of a class of A-monotone implicit variational inclusion problems in semi-inner product spaces, Applied Mathematics and Computation 236 (2014), 109–117. doi:10.1016/j.amc.2014.02.095
L. Wei and H.Y. Zhou, Strong convergence of projection scheme for zeros of maximal monotone operators, Nonlinear Analysis 71 (2009), 341–346. doi:10.1016/j.na.2008.10.081
F.Q. Xia and N.J. Huang, Variational inclusions with a general H-monotone operator in Banach spaces, Computers and Mathematics with Applications 54 (2007), 24–30. doi:10.1016/j.camwa.2006.10.028
L.C. Zeng, S.M. Guu, H.Y. Hu and J.C. Yao, Hybrid shrinking projection method for a generalized equilibrium problem, a maximal monotone operator and a countable family of relatively nonexpansive mappings, Computers and Mathematics with Applications 61 (2011), 2468–2479. doi:10.1016/j.camwa.2011.04.025
L.C. Zeng, S.M. Guu and J.C. Yao, Characterization of H-monotone operators with applications to variational inclusions, Computers and Mathematics with Applications 50 (2005), 329–337. doi:10.1016/j.camwa.2005.06.001
Q.B. Zhang, An algorithm for solving the general variational inclusion involving A-monotone operators, Computers and Mathematics with Applications 61 (2011), 1682–1686. doi:10.1016/j.camwa.2011.01.039
Q.B. Zhang, A modified proximal point algorithm with errors for approximating solution of the general variational inclusion, Operations Research Letters 40 (2012), 564–567. doi:10.1016/j.orl.2012.09.008
Q.B. Zhang, A new resolvent algorithm for solving a class of variational inclusions, Mathematical and Computer Modelling 55 (2012), 1981–1986. doi:10.1016/j.mcm.2011.11.057