HIGHER ORDER ITERATIONS FOR MOORE-PENROSE INVERSES†

Journal of Applied Mathematics & Informatics, Jan 2014, 32(1_2): 171-184

- Received : March 10, 2013
- Accepted : April 26, 2013
- Published : January 28, 2014


A higher order iterative method to compute the Moore-Penrose inverses of arbitrary matrices using only the Penrose equation (ii) is developed by extending the iterative method described in [1]. Convergence properties as well as the error estimates of the method are studied. The efficacy of the method is demonstrated by working out four numerical examples, two involving a simple full rank matrix and an ill-conditioned Hilbert matrix, and the other two involving randomly generated full rank and rank deficient matrices. The performance measures are the number of iterations and the CPU time in seconds used by the method. It is observed that the number of iterations always decreases as expected, while the CPU time first decreases gradually and then increases with the order of the method, for all examples considered.
AMS Mathematics Subject Classification : 65H05, 65F10.
Let ℂ^{m×n} and ℂ_r^{m×n} denote the set of all complex m × n matrices and the set of all complex m × n matrices with rank r, respectively. For A ∈ ℂ^{m×n}, the Moore-Penrose inverse is denoted by A^{†} and may be defined through

AA^{†} = P_{R(A)},

where R(A) and P_{R(A)} denote the range space of A and the orthogonal projection on R(A), respectively. The unique matrix A^{†} satisfies the following four equations:

(i) AXA = A,  (ii) XAX = X,  (iii) (AX)^{∗} = AX,  (iv) (XA)^{∗} = XA.
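As a quick numerical sanity check, the four Penrose equations can be verified against NumPy's SVD-based `pinv`; the matrix used here is an arbitrary illustration, not one from the paper.

```python
import numpy as np

# Verify the four Penrose equations for X = A^† (computed via numpy.linalg.pinv).
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 4))
X = np.linalg.pinv(A)

assert np.allclose(A @ X @ A, A)             # (i)   AXA = A
assert np.allclose(X @ A @ X, X)             # (ii)  XAX = X
assert np.allclose((A @ X).conj().T, A @ X)  # (iii) (AX)* = AX
assert np.allclose((X @ A).conj().T, X @ A)  # (iv)  (XA)* = XA
```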
Both direct and iterative methods (cf. [11, 10, 2, 3, 9]) can be used to compute A^{†}. One of the most commonly used direct methods is the Singular Value Decomposition (SVD) method. For A ∈ ℂ^{m×n}, the SVD is a factorization of the form

A = UΣV^{∗},

where U is an m × m complex unitary matrix, Σ is an m × n rectangular diagonal matrix with nonnegative real numbers on the diagonal, and V is an n × n complex unitary matrix. The Moore-Penrose inverse A^{†} can then be written as

A^{†} = VΣ^{†}U^{∗},

where Σ^{†} is the generalized inverse of Σ obtained by replacing every nonzero diagonal entry in Σ by its reciprocal and then transposing the resulting matrix. This method is very accurate but time-intensive since it requires a large amount of computational resources, especially in the case of large matrices.
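The SVD route just described can be sketched as follows; `pinv_svd` and its tolerance are illustrative choices, not the paper's notation.

```python
import numpy as np

def pinv_svd(A, tol=1e-12):
    """Moore-Penrose inverse via A = UΣV*: A† = VΣ†U*, where Σ† is obtained
    by reciprocating the nonzero singular values and transposing."""
    U, s, Vh = np.linalg.svd(A, full_matrices=False)
    cutoff = tol * s.max()
    s_inv = np.array([1.0 / x if x > cutoff else 0.0 for x in s])
    return Vh.conj().T @ np.diag(s_inv) @ U.conj().T

# A rank-deficient example: the two columns are proportional (rank 1).
A = np.array([[1.0, 2.0], [2.0, 4.0], [3.0, 6.0]])
assert np.allclose(pinv_svd(A), np.linalg.pinv(A))
```

The cutoff treats singular values below a relative tolerance as exactly zero, which is what makes the construction work for rank deficient matrices.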
A^{−1} is the method studied by Householder [14], obtained by analyzing successive improvements of a matrix X to solve AX = M, A nonsingular, using the equation

X_{k+1} = X_k + C_k(M − AX_k).

One version of the above has the matrices C_k project on the ith row, where i cycles through all the rows of A. By taking M = I and C_k = V_k, the famous Newton's method originated in [21] is

V_{k+1} = V_k(2I − AV_k).    (3)
As usual, I denotes the identity matrix of an appropriate order. If L is the desired limit matrix and V_k is the kth estimate of L, then the convergence properties of the examined iterative method can be studied with the aid of the error matrix E_k = V_k − L. If an iterative method is expressible as a simple matrix formula, E_{k+1} is a sum of several terms: a zero order term, one or more first order terms, and higher order terms. All suitable algorithms have a zero order term equal to 0. Hence, the first order terms determine the terminal convergence properties. For Newton's method (3), the eigenvalues of I − AV_k are given by

λ_i(I − AV_k) = [λ_i(I − AV_0)]^{2^k}.

It is further established that the eigenvalues of I − AV_0 must have magnitude less than 1 to ensure the convergence of (3). Since the residuals R_k = I − AV_k in (3) satisfy ∥R_{k+1}∥ ≤ ∥A∥∥R_k∥^{2}, Newton's method is a second order iterative method [7]. Let

λ_1 ≥ λ_2 ≥ ⋯ ≥ λ_r > 0    (4)

denote the nonzero eigenvalues of A^{∗}A. Ben-Israel [11] used (3) and the starting value X_0 = αA^{∗}, where α satisfies

0 < α < 2/λ_1.    (5)
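Newton's method (3) with the starting value X_0 = αA^{∗} can be sketched as below; the 3 × 2 matrix and the fixed iteration count are illustrative choices, not the paper's data.

```python
import numpy as np

A = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]])  # full column rank

alpha = 1.0 / np.linalg.norm(A, 2) ** 2  # satisfies 0 < α < 2/λ₁, λ₁ = σ₁(A)²
V = alpha * A.conj().T                   # V₀ = αA*

for _ in range(60):
    V = V @ (2 * np.eye(3) - A @ V)      # V_{k+1} = V_k(2I − AV_k)

assert np.allclose(V, np.linalg.pinv(A), atol=1e-10)
```

Because the convergence is quadratic, far fewer than 60 iterations are actually needed for a small well-conditioned matrix; the fixed count simply keeps the sketch short.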
The iterative scheme (3) is generalized by the iteration U_{k+1} = U_k(2P_{R(A)} − AU_k), which converges to A^{†}. Ben-Israel and Charnes [13] proved that the sequence generated in this way converges to A^{†} under the assumption (5). Li et al. [6] established a family of iterative methods to compute the approximate inverse of a square matrix and the inner inverse of a non-square matrix. Chen and Wang [4] showed that this can be extended to compute A^{†} with higher orders. Katsikis et al. [8] developed a much faster computational method to calculate A^{†} for singular square and for rectangular matrices A; it has significantly better accuracy than the previously proposed methods and works for both full and sparse matrices. Weiguo et al. [5] improved on the generalized properties of the family of iterative methods proposed in [6] for computing the approximate inverses of square matrices and fixed inner inverses of rectangular matrices; they also established that this fixed inner inverse is the Moore-Penrose inverse of the considered matrix. Vasilios et al. [3] also presented a very fast and reliable method to compute the Moore-Penrose inverse. By using a general framework in which analytic functions of scalars are first developed and then matrices are substituted for the scalars, Katsaggelos and Efstratiadis [12] produced convergence faster than quadratic for restricted initial estimates.
In this paper, a higher order iterative method to compute the Moore-Penrose inverses of arbitrary matrices using only the Penrose equation (ii) is developed by extending the iterative method described in [1]. Convergence properties as well as the error estimates of the method are studied. The efficacy of the method is demonstrated by working out four numerical examples, two involving a simple full rank matrix and an ill-conditioned Hilbert matrix, and the other two involving full rank and rank deficient randomly generated matrices. The performance measures are the number of iterations and the CPU time in seconds used by the method. It is observed that the number of iterations always decreases as expected, while the CPU time first decreases gradually and then increases with the order of the method, for all examples considered.
The paper is organized as follows. Section 1 is the introduction. In Section 2, the higher order iterative method for computing the Moore-Penrose generalized inverse of an arbitrary rectangular matrix is described. Some lemmas and the convergence analysis of the method are also established. In Section 3, the efficacy of the method is demonstrated by working out four numerical examples, two involving a simple full rank matrix and an ill-conditioned Hilbert matrix, and the other two involving full rank and rank deficient randomly generated matrices. Finally, conclusions are included in Section 4.
The computation of the Moore-Penrose inverse A^{†} of an arbitrary complex matrix A ∈ ℂ^{m×n} is described. Let A^{t}, A^{∗}, μ(A), ν(A) and rank(A) represent the transpose, the conjugate transpose, the range space, the null space and the rank of the matrix A ∈ ℂ^{m×n}, respectively.
Definition 2.1 ([19]). Let A ∈ ℂ^{m×n} with column vectors v_1, v_2, . . . , v_n. The set

R(A) = span{v_1, v_2, . . . , v_n} = {Ax : x ∈ ℂ^{n}}

is called the Range (the Image, or the Column Space) of A. The set

N(A) = {x ∈ ℂ^{n} : Ax = 0}

is called the Null Space (or the Kernel) of A, and for any A ∈ ℂ^{m×n}, B ∈ ℂ^{s×t}, the Range of (A, B) and the Null Space of (A, B) are defined analogously.
We shall now describe the higher order iterative method, extending the method described in [1], for computing the Moore-Penrose generalized inverses. Petković et al. [1] proposed a quadratically convergent iterative method for computing A^{†} based on the Penrose equations (ii) and (iv) as follows. Let

X = XAX.    (6)

Hence, for arbitrary β ∈ ℝ, we get

X = X + β(X − XAX),    (7)

or equivalently,

X = X + βX(I − AX).

This leads to the following iterative method

X_{k+1} = X_k + βX_k(I − AX_k),    (8)

where X_0 = βA^{∗} for an appropriate real number β. For β < 1, the method (8) has linear convergence while for β = 1 its convergence is quadratic. The first-order and the second-order error terms of (8) are

error_1 = (1 − β)E_k,  error_2 = −βE_kAE_k,    (9)

where E_k = X_k − A^{†} is the error matrix. Now, using only the Penrose equation (ii), given by X = XAX, and for arbitrary β ∈ ℝ, we get

X = X + βX(I − AX) + βX(I − AX)^{2},

or, equivalently,

X = X[I + β(I − AX) + β(I − AX)^{2}].

This leads to the following third order extension of method (8):

X_{k+1} = X_k[I + β(I − AX_k) + β(I − AX_k)^{2}],    (10)

for X_0 = βA^{∗}. Continuing in a similar manner, this can further be extended to the pth order for p ≥ 2, given by

X_{k+1} = X_k[I + β ∑_{i=1}^{p−1} (I − AX_k)^{i}],    (11)

for X_0 = βA^{∗}, where β is an appropriate real number. The following lemmas will be used for establishing the convergence of these iterative methods.
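The displayed formulas above were lost in extraction; the sketch below implements the pth order scheme in the form reconstructed here, X_{k+1} = X_k + βX_k ∑_{i=1}^{p−1}(I − AX_k)^{i}, which reduces to (8) for p = 2 and is consistent with the stated error terms. The function name, test matrix and tolerances are our own choices.

```python
import numpy as np

def higher_order_pinv(A, p=3, beta=1.0, tol=1e-7, maxit=100):
    """pth order iteration (11) as reconstructed here:
    X_0 = βA*,  X_{k+1} = X_k + β X_k Σ_{i=1}^{p-1} (I − A X_k)^i."""
    m = A.shape[0]
    X = beta * A.conj().T
    for k in range(1, maxit + 1):
        R = np.eye(m) - A @ X                 # I − A X_k
        S, P = np.zeros_like(R), np.eye(m)
        for _ in range(p - 1):                # S = Σ_{i=1}^{p-1} R^i
            P = P @ R
            S = S + P
        X_new = X + beta * (X @ S)
        if np.linalg.norm(X_new - X, 'fro') <= tol:
            return X_new, k
        X = X_new
    return X, maxit

# Scale the matrix so that σ₁(A) < 1; then β = 1 satisfies 0 < β < 2/λ₁
# and the theorems below give pth order convergence.
A = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]]) / 3.0
X, iters = higher_order_pinv(A, p=3, beta=1.0)
assert np.allclose(X, np.linalg.pinv(A), atol=1e-8)
```

For β ≠ 1 the terminal convergence is only linear with factor |1 − β|, so β = 1 combined with a pre-scaling of A is the fast regime in this sketch.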
Lemma 2.2. For all k ≥ 0, the sequence {X_k} generated by (10) and (11) satisfies

(1) (X_kA)^{∗} = X_kA,
(2) X_kAA^{†} = X_k,
(3) A^{†}AX_k = X_k.

Proof. This lemma can be proved by induction. Clearly, (1) holds for k = 0 as X_0A = βA^{∗}A = (X_0A)^{∗}. Assume that it holds for some k. To show that it also holds for k + 1, we consider, for the third order method (10),

X_{k+1}A = (1 + 2β)(X_kA) − 3β(X_kA)^{2} + β(X_kA)^{3},

which is Hermitian whenever X_kA is. Hence (1) holds for all k ≥ 0. Likewise, for the pth order method (11), X_{k+1}A is a polynomial in X_kA with real coefficients and is therefore Hermitian. Clearly, (2) trivially holds for k = 0, since X_0AA^{†} = βA^{∗}AA^{†} = βA^{∗} = X_0. Suppose it also holds for some k. To show that it holds for k + 1, note that every term in the expansion of X_{k+1} in (10) ends in X_k, so X_{k+1}AA^{†} = X_{k+1} follows from X_kAA^{†} = X_k. Likewise for the pth order method (11). Proceeding in a similar manner, (3) can easily be proved for (10) and (11), since every term in the expansion of X_{k+1} begins with X_k. Hence, Lemma 2.2 is proved for all k ≥ 0.
Theorem 2.3. Let 0 ≠ A ∈ ℂ^{m×n}, X = A^{†}, the initial approximation X_0 = βA^{∗} with β ∈ (0, 1], and let its residual R_0 = (X_0 − X)A satisfy ∥R_0∥ < 1. Then the sequence {X_k} generated by (10) starting with X_0 = βA^{∗} converges to the Moore-Penrose inverse A^{†}. It has linear convergence for β ≠ 1 and third order convergence for β = 1. Its first, second and third order error terms are given by

error_1 = (1 − β)E_k,  error_2 = 0,  error_3 = βE_kAE_kAE_k,    (12)

where E_k = X_k − A^{†} denotes the error matrix.
Proof. Using Lemma 2.2 and substituting for X_{k+1} from (10), the sequence of residual matrices defined by R_k = X_kA − XA satisfies the recurrence relation

R_{k+1} = (1 − β)R_k + βR_k^{3}.    (13)

Let s_k = ∥R_k∥. Now, for the convergence of the sequence {X_k}, we require that s_k → 0 as k → ∞. This can be shown by mathematical induction. It trivially holds for k = 0, since s_0 = ∥R_0∥ = ∥X_0A − XA∥ < 1. Let it hold for some k, i.e., s_k < 1. To show that it also holds for k + 1, we take norms in (13) and get

s_{k+1} ≤ (1 − β)s_k + βs_k^{3} < (1 − β)s_k + βs_k = s_k.    (14)

Thus, s_k is a monotonically decreasing bounded sequence converging to some s as k → ∞ with 0 ≤ s < 1. From (14), we get, in the limit,

s ≤ (1 − β)s + βs^{3}.

This gives either s = 0 or s ≥ 1. Thus, s = 0. This completes the proof that s_k → 0 as k → ∞, and hence X_k → X as k → ∞. Substituting X_k = X + E_k in (10) and rearranging the terms, we get E_{k+1} given by

E_{k+1} = (1 − β)E_k + βE_kAE_kAE_k.

This leads to

error_1 = (1 − β)E_k,  error_2 = 0,

and

error_3 = βE_kAE_kAE_k.

One can easily see that the method is linearly convergent for β ≠ 1 and of third order for β = 1, as error_1 and error_2 are then equal to 0.
Theorem 2.4. Let 0 ≠ A ∈ ℂ^{m×n}, X = A^{†}, the initial approximation X_0 = βA^{∗} with β ∈ (0, 1], and let its residual R_0 = (X_0 − X)A satisfy ∥R_0∥ < 1. Then the sequence {X_k} generated by (11) starting with X_0 = βA^{∗} converges to the Moore-Penrose inverse A^{†}. It has linear convergence for β ≠ 1 and pth order convergence for β = 1, where p ≥ 2 is a positive integer. All error terms of the method vanish except the first and the pth order error terms, given by

error_1 = (1 − β)E_k,  error_p = (−1)^{p−1}βE_k(AE_k)^{p−1}.
Proof. Using Lemma 2.2 and substituting for X_{k+1} from (11), the sequence of residual matrices defined by R_k = X_kA − XA satisfies the recurrence relation

R_{k+1} = (1 − β)R_k + (−1)^{p−1}βR_k^{p}.

Proceeding as in Theorem 2.3, it can be proved that X_k → X as k → ∞. Substituting X_k = X + E_k in (11) and rearranging the terms, we get E_{k+1} given by

E_{k+1} = (1 − β)E_k + (−1)^{p−1}βE_k(AE_k)^{p−1}.

On simplification, all error terms are equal to zero except error_1 and error_p, given by

error_1 = (1 − β)E_k,  error_p = (−1)^{p−1}βE_k(AE_k)^{p−1}.

One can easily see that the method is linearly convergent for β ≠ 1 and of pth order for β = 1, as all other error terms vanish. Hence, the theorem is proved.
We must note here that the convergence of the above methods (10) and (11) requires the condition ∥(βA^{∗} − X)A∥ < 1. To verify this, we need an equivalent condition which does not contain the Moore-Penrose inverse X = A^{†}. We follow exactly the same approach as in [1], using the following lemma.

Lemma 2.5 ([1]). Let the eigenvalues of the matrix A^{∗}A satisfy (4). Then the condition ρ((βA^{∗} − X)A) < 1 is satisfied under the assumption

0 < β < 2/λ_1,

where λ_1 is the largest eigenvalue of A^{∗}A.

To establish this lemma, we need the following lemmas.
Lemma 2.6 ([15]). Let M ∈ ℂ^{n×n} and ϵ > 0 be given. There is at least one matrix norm ∥·∥ such that

ρ(M) ≤ ∥M∥ ≤ ρ(M) + ϵ,

where ρ(M) = max{|λ_1(M)|, . . . , |λ_n(M)|} denotes the spectral radius of M.

Lemma 2.7 ([18]). If P ∈ ℂ^{n×n} and S ∈ ℂ^{n×n} are such that P = P^{2} and PS = SP, then

ρ(PS) ≤ ρ(S).
Remark 2.1. For any A ∈ ℂ^{m×n}, the sequences generated by our higher order iterative methods starting with X_0 = βA^{∗} are convergent for any β satisfying 0 < β < 2/σ^{2}(A), where σ(A) = ∥A∥_2.
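Remark 2.1 translates into a simple practical recipe for choosing β; the matrix and names below are illustrative only.

```python
import numpy as np

A = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]])
sigma = np.linalg.norm(A, 2)       # σ(A) = ‖A‖₂, the largest singular value
beta_max = 2.0 / sigma ** 2        # admissible range is 0 < β < 2/σ²(A)
beta = 1.0 / sigma ** 2            # a safe choice strictly inside the range
assert 0.0 < beta < beta_max
```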
pth order method for various values of p. The mean CPU time in seconds and the number of iterations are measured as performance parameters. For full rank matrices, the numbers of iterations are compared by plotting figures with the x-axis representing the values of p and the y-axis representing the number of iterations. For randomly generated matrices, 50 trial matrices are tested; the mean CPU time in seconds is measured and tabulated for values 2 ≤ p ≤ 7, and the numbers of iterations are compared by plotting figures with the x-axis representing the trial index of the random matrices and the y-axis representing the number of iterations, for p = 2 and p = 3 only. The value ϵ = 10^{−7} and a maximum number of iterations equal to 100 are taken as the termination criteria. All the examples are run on an Intel Core 2 Duo processor running at 2.80 GHz using MATLAB 7.4 (R2009b).
Example 3.1. Consider a full rank matrix A of order (5 × 4). The choice β = 0.60 satisfies the convergence criterion 0 < β < 2/λ_1, λ_1 being the largest eigenvalue of the matrix A^{∗}A. The iterative method (11) generates a sequence of iterates {X_k} converging to the Moore-Penrose generalized inverse A^{†}. The comparison of the numbers of iterations is plotted in Figure 1. It can be observed from Figure 1 that the iterative method (11) converges to the Moore-Penrose generalized inverse A^{†} in 36 iterations for p = 2 and, as expected, as the order p increases, this reduces to 25 for p = 8.
Number of iterations versus the value of p for example 3.1
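The entries of the (5 × 4) matrix of Example 3.1 did not survive extraction, but the qualitative trend, the iteration count falling as the order p grows, can be reproduced with a hypothetical randomly generated full rank matrix and the iteration as reconstructed in Section 2 (all names and parameters here are our own):

```python
import numpy as np

def pinv_iter(A, p, beta=1.0, tol=1e-7, maxit=100):
    # Reconstructed pth order scheme: X_{k+1} = X_k + β X_k Σ_{i=1}^{p-1}(I − A X_k)^i
    m = A.shape[0]
    X = beta * A.conj().T
    for k in range(1, maxit + 1):
        R = np.eye(m) - A @ X
        S, P = np.zeros_like(R), np.eye(m)
        for _ in range(p - 1):
            P = P @ R
            S = S + P
        X_new = X + beta * (X @ S)
        if np.linalg.norm(X_new - X, 'fro') <= tol:
            return X_new, k
        X = X_new
    return X, maxit

rng = np.random.default_rng(42)
A = rng.standard_normal((5, 4))
A /= 1.2 * np.linalg.norm(A, 2)     # scale so σ₁(A) < 1, allowing β = 1

counts = {}
for p in range(2, 9):
    X, k = pinv_iter(A, p)
    assert np.allclose(X, np.linalg.pinv(A), atol=1e-6)
    counts[p] = k
print(counts)  # iteration counts tend to fall as p grows, as in Figure 1
```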
Example 3.2. Consider a (30 × 30) matrix A whose elements are generated randomly from [−0.2, 0.2]. Taking the termination criterion as

∥X_{k+1} − X_k∥_F ≤ ϵ,

where ∥·∥_F stands for the Frobenius norm of a matrix, Table 1 shows the comparison of mean CPU time in seconds for 2 ≤ p ≤ 7. One can easily see that with the increase of p, the CPU time decreases initially and then starts increasing after p > 4. The comparison of the numbers of iterations is plotted in Figure 2 for p = 2 and p = 3. Similar results are also observed for larger random matrices, for example, a (40 × 40) matrix A whose elements are also randomly generated from [−0.2, 0.2].
Comparison of CPU time
Comparison of number of iterations for example 3.2
Example 3.3. Consider a (100 × 50) rank deficient matrix A whose elements are generated randomly from [−0.2, 0.2]. Taking the termination criterion as

∥X_{k+1} − X_k∥_F ≤ ϵ,

where ∥·∥_F stands for the Frobenius norm of a matrix, Table 2 shows the comparison of mean CPU time in seconds for 2 ≤ p ≤ 7. One can easily see that with the increase of p, the CPU time decreases initially and then starts increasing after p ≥ 4. The comparison of the numbers of iterations is plotted in Figure 3 for p = 2 and p = 3.
Comparison of number of iterations for example 3.3
Comparison of CPU time
Example 3.4. Consider the ill-conditioned (5 × 5) Hilbert matrix A, with entries a_{ij} = 1/(i + j − 1). The choice β = 0.80 satisfies the convergence criterion 0 < β < 2/λ_1, λ_1 being the largest eigenvalue of the matrix A^{∗}A. The iterative method (11) generates a sequence of iterates {X_k} converging to the Moore-Penrose generalized inverse A^{†}. The comparison of the numbers of iterations is plotted in Figure 4. It can be observed from Figure 4 that the iterative method (11) converges to the Moore-Penrose generalized inverse A^{†} in 51 iterations for p = 2 and, as expected, as the order p increases, this reduces to 19 for p = 10.
Number of iterations versus the value of p for example 3.4
Only the Penrose equation (ii) is used, in place of equations (ii) and (iv) as used in [1]. Convergence properties as well as the error estimates are studied. The efficacy of the method is demonstrated by working out four numerical examples involving full rank and rank deficient randomly generated matrices and an ill-conditioned Hilbert matrix. The performance in terms of computational time and the number of iterations is evaluated with respect to the order of the iterative methods. It is observed that the number of iterations always decreases as expected, while the CPU time first decreases gradually and then increases with the order of the iterative methods, for all examples.
Shwetabh Srivastava is a Ph.D. research scholar in the Department of Mathematics, Indian Institute of Technology Kharagpur, India. His research interests include Theory of Generalized inverses and their applications.
Department of Mathematics, IIT Kharagpur, India.
e-mail: shwetabh20059@gmail.com
D.K. Gupta is a senior professor in the Department of Mathematics, Indian Institute of Technology Kharagpur, India. His research interests are Numerical Analysis and Computer Science.
Department of Mathematics, IIT Kharagpur, India.
e-mail: dkg@maths.iitkgp.ernet.in

Keywords: Moore-Penrose inverse; Random matrices; Convergence analysis; Residual; Null space; Matrix rank

1. Introduction

The theory of generalized inverses has seen substantial growth over the past few decades. Many applications in statistics, prediction theory, control analysis and numerical analysis often require the computation of generalized inverses. The Moore-Penrose inverse is one of the most important generalized inverses of an arbitrary singular square or rectangular (real or complex) matrix. It has been extensively studied by many researchers [4, 2, 6, 9, 5, 3] and many methods have been proposed in the literature. Accordingly, it is important both practically and theoretically to find good higher order algorithms for computing the Moore-Penrose inverse of a given arbitrary matrix.
- a zero order term consisting of a matrix which does not depend on E_k,
- one or more first order matrix terms in which E_k or its conjugate transpose E_k^{∗} appears only once,
- higher order terms in which E_k or E_k^{∗} appears at least twice.

2. Higher order iteration with convergence analysis

In this section, higher order iterative methods to compute the Moore-Penrose inverse, and their convergence analysis, are presented.
3. Numerical examples

In this section, four numerical examples are worked out to demonstrate the efficacy of our higher order methods.
4. Conclusions

A quadratically convergent iterative method proposed in [1] is extended to a family of higher order iterative methods to compute the Moore-Penrose generalized inverses of arbitrary singular square or rectangular (real or complex) matrices. This extension is carried out by using only the Penrose equation (ii).

References

[1] Petković M.D., Stanimirović P.S. (2011) Iterative method for computing Moore-Penrose inverse based on Penrose equations, Journal of Computational and Applied Mathematics 235, 1604-1613. DOI: 10.1016/j.cam.2010.08.042
[2] Guo W., Huang T. (2010) Method of elementary transformation to compute Moore-Penrose inverse, Appl. Math. Comput. 216, 1614-1617. DOI: 10.1016/j.amc.2010.03.016
[3] Katsikis V.N., Pappas D., Petralias A. (2011) An improved method for the computation of the Moore-Penrose inverse matrix, Appl. Math. Comput. 217, 9828-9834. DOI: 10.1016/j.amc.2011.04.080
[4] Chen H., Wang Y. (2011) A family of higher-order convergent iterative methods for computing the Moore-Penrose inverse, Appl. Math. Comput. 218, 4012-4016. DOI: 10.1016/j.amc.2011.05.066
[5] Li W., Li J., Qiao T. (2013) A family of iterative methods for computing Moore-Penrose inverse of a matrix, Linear Algebra and its Applications 438, 47-56. DOI: 10.1016/j.laa.2012.08.004
[6] Li W., Li Z. (2010) A family of iterative methods for computing the approximate inverse of a square matrix and inner inverse of a non-square matrix, Appl. Math. Comput. 215, 3433-3442. DOI: 10.1016/j.amc.2009.10.038
[7] Pan V., Schreiber R. (1991) An improved Newton iteration for the generalized inverse of a matrix with applications, SIAM J. Sci. Statist. Comput. 12, 1109-1130. DOI: 10.1137/0912058
[8] Katsikis V.N., Pappas D., Huang T. (2008) Fast computing of the Moore-Penrose inverse matrix, Electronic Journal of Linear Algebra 17, 637-650.
[9] Toutounian F., Ataei A. (2009) A new method for computing Moore-Penrose inverse matrices, Journal of Computational and Applied Mathematics 228, 412-417. DOI: 10.1016/j.cam.2008.10.008
[10] Courrieu P. (2005) Fast computation of Moore-Penrose inverse matrices, Neural Information Processing - Letters and Reviews 8, 25-29.
[11] Ben-Israel A., Greville T.N.E. (2003) Generalized Inverses: Theory and Applications, Springer-Verlag, New York.
[12] Katsaggelos A.K., Efstratiadis S.N. (1990) A class of iterative signal restoration algorithms, IEEE Trans. Acoust. Speech Signal Process. 38, 778-793. DOI: 10.1109/29.56022
[13] Ben-Israel A., Charnes A. (1963) Contribution to the theory of generalized inverses, J. Soc. Indust. Appl. Math. 11, 667-669. DOI: 10.1137/0111051
[14] Householder A.S. (1964) The Theory of Matrices in Numerical Analysis, Dover Publications, New York.
[15] Horn R.A., Johnson C.R. (1986) Matrix Analysis, Cambridge University Press, Cambridge, UK.
[16] Hall F.J., Meyer C.D. (1975) Generalized inverses of the fundamental bordered matrix used in linear estimation, Sankhya Ser. A 37, 428-438.
[17] Nacevska B. (2009) Iterative methods for computing generalized inverses and splittings of operators, Appl. Math. Comput. 208, 186-188. DOI: 10.1016/j.amc.2008.11.030
[18] Stanimirović P.S., Cvetković-Ilić D.S. (2008) Successive matrix squaring algorithm for computing outer inverses, Appl. Math. Comput. 203, 19-29. DOI: 10.1016/j.amc.2008.04.037
[19] Sun W., Yuan Y. (2006) Optimization Theory and Methods: Nonlinear Programming, Springer, New York.
[20] Campbell S.L., Meyer C.D. (2009) Generalized Inverses of Linear Transformations, SIAM.
[21] Schultz G. (1933) Iterative Berechnung der reziproken Matrix, Z. Angew. Math. Mech. 13, 57-59. DOI: 10.1002/zamm.19330130111
