We propose and describe new error control algorithms for redundant residue number systems (RRNSs) and residue number system product codes. These algorithms employ search techniques for obtaining error values from within a set of values (that contains all possible error values). For a given RRNS, the error control algorithms have a computational complexity of t·O(log₂ n + log₂ m̄) comparison operations, where t denotes the error-correcting capability, n denotes the number of moduli, and m̄ denotes the geometric average of the moduli. These algorithms avoid most modular operations. We describe a refinement to the proposed algorithms that further avoids the modular operation required in their respective first steps, with an increase of ⌈log₂ n⌉ to their computational complexity. The new algorithms provide significant computational advantages over existing methods.
I. Introduction
Error control techniques play an important role in the reliability of signal transmission in communication systems, owing to their capability to enhance the robustness of information transmission in noisy environments.
Residue number systems (RNSs) are used to express a computation in a large integer ring as a direct sum of computations in a number of smaller integer rings. These computations can be carried out in parallel.
The Chinese remainder theorem (CRT) forms the basis for an RNS. Redundant residue number systems (RRNSs) have recently been applied to communication systems, including wireless local area networks (WLANs) [1], code division multiple access (CDMA) [2], space–time block codes [3], multicarrier modulation [4], wireless sensor networks [5], and cognitive radio [6]. The application of RRNSs to provide reliability in cloud storage services is described in [7].
In 1976, the concepts of legitimate range and illegitimate range were introduced [8], an important breakthrough in the field of RRNS-based error control. One of the earliest schemes for single-burst errors and double errors was developed in 1992; it computes syndromes of the digits in a received vector and compares them with certain observations [9], [10].
Knowledge of the Hamming weight and minimum distance was used for detecting and correcting errors in [9] and [10]. Yang and Hanzo [11] used base extension (BEX) to achieve error correction in wireless channels suffering from low reliability. Their method requires decoding and encoding operations for each BEX process and recovers digits by the CRT. In addition, Yang and Hanzo threw new light on the problem with the theory of minimum distance decoding [12].
An algorithm was proposed in [13] to recover the original digits directly from a received vector based on combinations of residues. The method used in [13] is based on continued fractions and Euclid's algorithm; it is a variant of the multiple error correction algorithm in [14]. The work of [13] and other works by the same authors related to [13] can be found in [15] and [16]. Relying on [12], Goldreich and others extended [13] with theories of maximum likelihood decoding (MLD) to determine error-location residues with fewer trials [17]. More recent work on scaling and error correction can be found in [18].
The major contributions of this work are given below. In what follows, RRNSs and RNS-PCs are collectively termed RNS codes (RNS-Cs). First, a unified mathematical framework for RNS-Cs is developed. Second, new computationally efficient algorithms for error control in an RNS-C are described. Two cases are considered: (a) pure error correction and (b) simultaneous error correction and error detection. In both cases, the algorithms decode correctly if the number of errors is less than or equal to the error control capability of the RNS-C. These algorithms provide significant computational savings in comparison to existing works. The methods described here are based on search techniques for use within an ordered set. Various refinements are described that simplify step 1 of these algorithms. The computational complexity of the proposed algorithms is t·O(log₂ n + log₂ m̄), where, for a given RNS-C, t is the error-correcting capability, n is the number of moduli, and m̄ is their geometric average. In comparison, the computational complexity of existing works is (i) O(b·(log₂ b)^a), where a is a constant and b = ∑_{i=1}^{n} (1 + ⌊log₂ m_i⌋), in [13], [15], [16], and (ii) O(n^t) in [17].
The algorithms presented in this paper require fewer operations by comparison. In addition, they avoid most modular computations. Consequently, the proposed algorithms are expected to be simpler to implement and better suited for high-speed application environments. A preliminary analysis of hardware implementation supports this assertion.
This paper is organized as follows. Section II provides a brief description of the CRT and coding theory for RNSs. Sections III and IV contain the key contributions of this work. Error control algorithms are obtained in Section III, and examples are presented to illustrate these algorithms. A refinement to the error control algorithms of Section III is described in Section IV; the refinement avoids the modular operation required in the first step of each of the proposed algorithms, at the cost of a slight increase in complexity. Further refinements are also described that simplify computations, once again in the first step of each of the proposed algorithms. Computational comparisons with existing algorithms are presented in Section V. Section VI concludes this work.
II. Mathematical Preliminaries
 1. RNS
An RNS is a finite integer ring Z(M_K) defined by k pairwise coprime moduli, m_1, m_2, … , m_k, arranged (without loss of generality) in ascending order. The range of the RNS is given by [0, M_K), where

(1) M_K = ∏_{i=1}^{k} m_i.

An integer X ∈ [0, M_K) in the given RNS is represented by a vector, x, of length k, via the modular computation

(2) X ↔ x = (x_1, x_2, … , x_k),

where

(3) x_i ≡ X (mod m_i) (i = 1, 2, … , k).
RNSs express a computation in a large integer ring as a direct sum of computations in a number of smaller integer rings; the computations can be carried out in parallel. For instance, multiplication of integers A and B in Z(M_K) is computed as k parallel multiplications a_i·b_i (mod m_i), i = 1, … , k, where a_i and b_i denote the residues of A and B, respectively.
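As a small illustration of this parallelism, the following sketch multiplies two integers channel by channel; the moduli (3, 5, 7) are an illustrative choice, not taken from the paper:

```python
from math import prod

def to_rns(x, moduli):
    """Residue representation of x, per (2) and (3)."""
    return tuple(x % m for m in moduli)

def rns_mul(a, b, moduli):
    """Component-wise products a_i * b_i (mod m_i); each channel is independent."""
    return tuple((ai * bi) % m for ai, bi, m in zip(a, b, moduli))

moduli = (3, 5, 7)              # pairwise coprime, so M_K = 105
A, B = 8, 11                    # A * B = 88 < M_K, so the product does not overflow
c = rns_mul(to_rns(A, moduli), to_rns(B, moduli), moduli)
assert c == to_rns(A * B, moduli)
```

Each component of `c` is computed independently, which is the source of the parallelism noted above.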
 2. Permutation
Given a set of integers S = {s_1, s_2, … , s_λ} of cardinality λ, we define a permutation of S as a rearrangement of the elements of S in a different order. When S is an RNS Z(M_K), one such permutation maps S to {P·s_1, P·s_2, … , P·s_λ}. Here, P is an integer, and each multiplication P·s_a is carried out modulo M_K. Not all values of P lead to a permutation; the necessary and sufficient condition is that P and M_K be coprime, that is, gcd(P, M_K) = 1. For an RNS Z(M_K), a permutation gives rise to a one-to-one mapping, represented by

(4) X′ ≡ P·X (mod M_K).
As X takes values in the set {0, 1, … , M_K − 1}, X′ also takes values in the same set, though in a different order. Any P such that gcd(P, M_K) = 1 can be used; such integers always exist. In fact, there are ϕ(M_K) such values in the range (0, M_K), ϕ(M) being the Euler totient function of M. For any given permutation, the following two properties hold:

■ X = 0 if and only if X′ = 0.
■ x′_i ≡ X′ (mod m_i) = p_i·x_i (mod m_i), where p_i ≡ P (mod m_i), i = 1, 2, … , k.
We are interested in values of P that are associated with CRT computations and RNS-Cs. In addition, we show that the use of permuted residues simplifies decoding algorithms in some cases.
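The permutation property (4) and the residue-wise relation above can be checked numerically; the sketch below uses the small ring Z(15) (an illustrative assumption):

```python
from math import gcd

M_K, moduli = 15, (3, 5)
P = 2                                        # gcd(2, 15) = 1, so P induces a permutation
assert gcd(P, M_K) == 1

# one-to-one mapping: multiplication by P permutes {0, 1, ..., M_K - 1}
image = sorted((P * X) % M_K for X in range(M_K))
assert image == list(range(M_K))

# residue-wise action: x_i' = p_i * x_i (mod m_i) with p_i = P (mod m_i)
X = 13
X_perm = (P * X) % M_K
assert all(X_perm % m == ((P % m) * (X % m)) % m for m in moduli)
```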
 3. CRT-I
Given x, the computation of X, 0 ≤ X < M_K, can be done via the CRT, which can be stated as follows:

(5) X ≡ ∑_{i=1}^{k} a_i·x_i·(M_K/m_i) (mod M_K).

The scalar a_i is computed a priori by solving the congruence

(6) a_i·(M_K/m_i) ≡ 1 (mod m_i) (i = 1, 2, … , k).

It is clear from (6) that

(7) gcd(a_i, m_i) = 1 (i = 1, 2, … , k).
The CRT computation in (5) can be performed in two steps:

Step 1: Compute the permuted residues

(8) x′_i ≡ a_i·x_i (mod m_i) (i = 1, 2, … , k).

Step 2: Compute X as

(9) X ≡ ∑_{i=1}^{k} x′_i·(M_K/m_i) (mod M_K).
Computation of X from x necessarily involves large integers, as the dynamic range of an RNS is large. For the integers a_i (i = 1, 2, … , k) in (5), consider the integer T (0 < T < M_K) such that

(10) T ≡ a_i (mod m_i) (i = 1, 2, … , k).

The integer T can be obtained using the CRT. Stated explicitly,

(11) T ≡ ∑_{i=1}^{k} a_i·a_i·(M_K/m_i) ≡ ∑_{i=1}^{k} a_i²·(M_K/m_i) (mod M_K).

It follows from (7) and (10) that

(12) gcd(T, M_K) = 1.
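The two-step CRT-I computation and the constant T of (10) and (11) can be sketched as follows; the moduli are those of example 1 below, and Python's built-in `pow(x, -1, m)` supplies the modular inverse:

```python
from math import prod

def crt_coefficients(moduli):
    """Solve a_i * (M_K / m_i) ≡ 1 (mod m_i), as in (6)."""
    M = prod(moduli)
    return [pow(M // m, -1, m) for m in moduli]

def crt(residues, moduli):
    """CRT-I reconstruction, equation (5)."""
    M = prod(moduli)
    a = crt_coefficients(moduli)
    return sum(ai * xi * (M // m) for ai, xi, m in zip(a, residues, moduli)) % M

moduli = (3, 5)
M_K = prod(moduli)                    # 15
a = crt_coefficients(moduli)          # a_1 = a_2 = 2, as in example 1
T = crt(a, moduli)                    # T has residues a_i, per (10)
assert (a, T, pow(T, -1, M_K)) == ([2, 2], 2, 8)
assert crt((1, 3), moduli) == 13      # X = 13 <-> x = (1 3)
```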
A. CRT-II: Using Permuted Residues
Based on (8), we may also use the permuted residues

(13) x′ = (x′_1, x′_2, … , x′_k),

instead of x = (x_1, x_2, … , x_k) in residue arithmetic. In such a case, X is obtained from (9) in a direct manner. It is clear from (5), (8), and (12) that X ↔ x = (x_1, x_2, … , x_k) and X′ ↔ x′ = (x′_1, x′_2, … , x′_k) are related via the permutation

(14) X′ ≡ T·X (mod M_K).

Given X′, X can be recovered as

(15) X ≡ T^{−1}·X′ (mod M_K).

A unique T^{−1} (mod M_K) always exists owing to (12).
B. CRT-III: Computation with Permutation
Given the CRT in (5), we can also write

(16) X ≡ T·∑_{i=1}^{k} x_i·(M_K/m_i) (mod M_K).

This provides an alternative expression and a way to compute X from its residues in the given RNS. The CRT computation based on (16) can be performed in two steps:

Step 1: Compute

(17) X″ ≡ ∑_{i=1}^{k} x_i·(M_K/m_i) (mod M_K).

Step 2: Compute X as

(18) X ≡ T·X″ (mod M_K).
The three formulations of the CRT, enumerated as CRT-I, CRT-II, and CRT-III, are equivalent, and each has its own computational features, making it uniquely suitable for a certain computational environment (as shown in refinement 3 in Section IV). In essence, T establishes a permutation between (i) X′ and X via (15), and (ii) X and X″ via (18). We note that for a given RNS, T is fixed and needs to be computed only once. Finally, a permutation in an RNS can be carried out via any P such that gcd(P, M_K) = 1. A permutation is used to establish decoding algorithms for an RNS-PC.
 4. Mixed Radix System (MRS)
An MRS can be used to compute X from its residues x. In an MRS, X is represented as follows:

X = y_1 + y_2·m_1 + ··· + y_k·(m_1·m_2 ⋯ m_{k−1}).

The mixed radix digits satisfy 0 ≤ y_i < m_i. They are computed from x using RNS-to-MRS conversion algorithms [19].
In our work, the integer X can be computed from its residues using either the CRT or the MRS. Such a computation constitutes the first step in the new decoding algorithms. We also use the CRT to establish mathematical aspects of the new decoding algorithms. The choice of CRT versus MRS for an actual computation rests with the system designer.
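One standard RNS-to-MRS conversion computes the digits by successive subtraction and division; the following is a sketch of that classical scheme (not necessarily the exact algorithm of [19]):

```python
def rns_to_mrs(residues, moduli):
    """Mixed radix digits y_i, 0 <= y_i < m_i, of the integer with the given residues."""
    x = list(residues)
    digits = []
    for j, mj in enumerate(moduli):
        yj = x[j] % mj
        digits.append(yj)
        # subtract y_j and divide by m_j in every remaining residue channel
        for i in range(j + 1, len(moduli)):
            mi = moduli[i]
            x[i] = ((x[i] - yj) * pow(mj, -1, mi)) % mi
    return digits

def mrs_to_int(digits, moduli):
    """X = y_1 + y_2*m_1 + ... + y_k*(m_1*m_2*...*m_{k-1})."""
    X, weight = 0, 1
    for y, m in zip(digits, moduli):
        X += y * weight
        weight *= m
    return X

moduli = (3, 5, 7)
X = 52
digits = rns_to_mrs([X % m for m in moduli], moduli)
assert mrs_to_int(digits, moduli) == X
```

Unlike the CRT, this conversion works entirely with small residues; the large integer appears only in the final weighted sum.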
Example 1. Consider an RNS defined by moduli m_1 = 3 and m_2 = 5. Then, we have M_K = m_1·m_2 = 15, and X is an integer in the range [0, 15). The values a_1 and a_2 needed for the CRT are obtained by solving the congruences a_1·5 ≡ 1 (mod 3) and a_2·3 ≡ 1 (mod 5). This results in a_1 = a_2 = 2. Using the CRT on a_1 and a_2, we get T = 2 and T^{−1} = 8 (mod 15). Selected integers X, X′, and X″ for the permutations in (15) and (18), together with their residues, are given in Table 1.
Table 1. Permutations and CRT computations.

X    x       X′   x′      X″   x″
1    (1 1)   2    (2 2)   8    (2 3)
2    (2 2)   4    (1 4)   1    (1 1)
3    (0 3)   6    (0 1)   9    (0 4)
4    (1 4)   8    (2 3)   2    (2 2)
13   (1 3)   11   (2 1)   14   (2 4)
 5. RRNS
An RRNS is obtained by appending (n − k) additional moduli, m_{k+1}, … , m_n, to an RNS defined by moduli m_1, m_2, … , m_k, such that all n moduli are pairwise coprime. We further assume that the moduli m_1, m_2, … , m_k, m_{k+1}, … , m_n are arranged in ascending order. An RRNS is an (n, k) code with redundancy given by

(19) M_R = ∏_{i=k+1}^{n} m_i.

An integer X ∈ [0, M_K) in the given RNS is represented by a codeword, x, of length n, via modular computations as follows:

(20) X ↔ x = (x_1, x_2, … , x_k, x_{k+1}, … , x_n),

where

(21) x_i ≡ X (mod m_i) (i = 1, 2, … , n).
The k residues (x_1, x_2, … , x_k) constitute the information digits, and the (n − k) residues (x_{k+1}, … , x_n) the parity digits. Let

(22) M_N = M_K·M_R.

Given a vector, a, the Hamming weight of a, denoted by H(a), is the number of nonzero elements in a. The Hamming distance between two vectors is the number of places in which the two vectors differ. The Hamming distance between two codewords is the same as the Hamming weight of another codeword. The Hamming weight of any nonzero codeword in an RRNS is at least d = n − k + 1. An RRNS is a maximum distance separable (MDS) code with minimum distance d. These properties hold even though RRNSs are nonlinear [19].
Example 2. Consider a (4, 2) RRNS defined by (m_1, m_2, m_3, m_4) = (11 13 14 15). The legitimate range is [0, 143), and M_R = 210. This is a minimum distance 3 RRNS. Given an integer X = 25 ∈ [0, 143), we have x = (3 12 11 10). If we extend this RRNS with a third redundant modulus, say m_5 = 17, then we get a (5, 2) minimum distance 4 RRNS.
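Encoding in an RRNS is simply residue computation over all n moduli; the following sketch reproduces the codeword of example 2:

```python
def rrns_encode(X, moduli):
    """Codeword of X: k information digits followed by (n - k) parity digits, per (20)-(21)."""
    return tuple(X % m for m in moduli)

moduli = (11, 13, 14, 15)      # the (4, 2) RRNS of example 2
M_K = 11 * 13                  # legitimate range [0, 143)
x = rrns_encode(25, moduli)
assert x == (3, 12, 11, 10)    # matches example 2
```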
 6. RNS-PC
An RNS-PC is defined by n moduli (m_1, m_2, … , m_n) such that all codewords, when converted to their equivalent integer form, are divisible by a code-generator integer G, gcd(G, M_N) = 1. Thus, for an integer X in the legitimate range [0, M_K), M_K = ⌊M_N/G⌋ + 1, the corresponding codeword x is obtained as G·X ↔ x = (g_1·x_1, g_2·x_2, … , g_n·x_n), where g_i ≡ G (mod m_i) and the multiplications are carried out mod the respective moduli. Here, ⌊θ⌋ is the well-known floor function. The minimum distance of an RNS-PC is d if and only if G satisfies

∏_{i=0}^{d−1} m_{n−i} > G ≥ ∏_{i=0}^{d−2} m_{n−i};

that is, G is bounded by the products of the d largest and the (d − 1) largest moduli. RNS-PCs can be studied as an equivalent of cyclic codes over finite fields.
Example 3. Consider an RNS-PC defined by (m_1, m_2, m_3, m_4) = (13 16 17 19). For a minimum distance 3 RNS-PC, we have m_4·m_3·m_2 > G ≥ m_4·m_3, or 5,168 > G ≥ 323. Choosing G = 327, we have [0, 206) as the legitimate range for this RNS-PC.
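For an RNS-PC, encoding multiplies by G before taking residues. The sketch below checks the parameters of example 3; the encoded value X = 5 is an arbitrary illustrative choice:

```python
from math import prod

moduli = (13, 16, 17, 19)
G = 327
M_N = prod(moduli)                         # 67,184
M_K = M_N // G + 1                         # legitimate range [0, 206)
assert M_K == 206

# minimum distance 3 requires m4*m3*m2 > G >= m4*m3
assert 19 * 17 * 16 > G >= 19 * 17

def rnspc_encode(X, moduli, G):
    """Codeword of X: residues of G*X, that is, g_i * x_i (mod m_i)."""
    return tuple((G * X) % m for m in moduli)

x = rnspc_encode(5, moduli, G)             # residues of G*X = 1,635
```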
III. Unified Framework and Algorithms for Error Control in RNS
Together, RRNSs and RNS-PCs will be referred to as RNS-Cs. A minimum distance d RNS-C can correct up to t residue errors (t is also called its error-correcting capability), where

(23) t = ⌊(d − 1)/2⌋.

A code that simultaneously corrects up to α residues and detects up to β (β > α) residues in error has minimum distance

(24) d = α + β + 1.

It is clear that the (4, 2) RNS-C in example 2 can correct a single error, while the (5, 2) RNS-C can simultaneously correct single errors and detect double errors. In the following, we propose new error control algorithms for RNS-Cs.
 1. Error Phenomenon
For a transmitted codeword x, an additive error, e_i (e_i ≠ 0), in the ith residue results in a received residue vector, y:

(25) y = x + e,

where

(26) y_i ≡ x_i + e_i (mod m_i) (i = 1, 2, … , n).

For an error, both its location and its value are unknown. The Hamming weight of e, H(e), is the number of errors in y. Using the triangle inequality, we have H(y) ≥ H(x) − H(e), or

(27) H(y) ≥ d − H(e).

The error model in (25) and (26) is adopted throughout this work for the various error control algorithms.
The first step in the error control algorithms for RNS-Cs described here is as follows. Given y, compute Y from y and check whether it is in the legitimate range [0, M_K) for the given RNS-C. Thus far, the first step in decoding an RNS-PC has been the computation of Y from y followed by a check of whether G | Y [19]. In the following, we first unify step 1 for the RRNS and RNS-PC, leading to a unified framework for error control in RNS-Cs. Error control algorithms for RNS-Cs are then presented.
 2. RRNS
Consider an (n, k) RRNS with H(x) ≥ d, x ≠ 0, and a received residue vector y as in (25).

Lemma 1. If up to t errors occur, then

(28) 0 ≤ E ≤ M_N − ∏_{i=1}^{n−t} m_i,

(29) 0 ≤ X + E < M_N,

where E is the error integer corresponding to the residue vector e.
Proof. If H(e) = a, then e has a nonzero and (n − a) zero residues. Let e be nonzero in the locations I = (i_1, i_2, … , i_a) and 0 in all others. If a residue is 0, then E is a multiple of the corresponding modulus. Thus, E is a multiple of all moduli m_i, i = 1, … , n, that do not belong to I. The largest value of E such that it is nonzero in the locations I is given by

0 ≤ E ≤ (∏_{i∉I} m_i)·((∏_{i∈I} m_i) − 1) = ∏_{i=1}^{n} m_i − ∏_{i∉I} m_i = M_N − ∏_{i∉I} m_i.

The largest value of E is attained when the moduli not belonging to the set I are the (n − a) smallest moduli. Hence,

0 ≤ E ≤ M_N − ∏_{i=1}^{n−a} m_i.

Again, the largest value of E is obtained when a is as large as possible. We note that Lemma 1 is applicable if a ≤ t. Setting a = t leads to (28). Equation (29) is obtained by adding M_K to both sides of (28) and noting that

M_K = ∏_{i=1}^{k} m_i ≤ ∏_{i=1}^{n−t} m_i. ■
Using Lemma 1, we can express (25) in integer form as

(30) Y = X + E (mod M_N) = X + E.

Here, X has residue vector x, and E has residue vector e, which has 0's in all locations except the residue positions where errors have occurred. The first step in the error control algorithms for RRNSs described here is to compute Y from y. This can be done using either the CRT or the MRS.
 3. RNS-PC
Expressing (25) in integer form for an RNS-PC, we have Y = G·X + E (mod M_N). To test whether Y is a codeword, the existing algorithms for RNS-PCs compute Y first and then test whether G | Y [19]. We now merge the two steps to compute

(31) Y_1 ≡ G^{−1}·Y (mod M_N) = X + G^{−1}·E (mod M_N) = X + F (mod M_N),

where F = G^{−1}·E (mod M_N). Computationally,

(32) Y_1 ≡ ∑_{i=1}^{n} y′_i·(M_N/m_i) (mod M_N),

where the permuted residues y′_i are given by

(33) y′_i = (g_i^{−1}·a_i)·y_i (mod m_i) = a_i·x_i + (g_i^{−1}·a_i)·e_i (mod m_i).

Thus, F is a permuted version of E via (G^{−1} (mod M_N)) and

(34) Y_1 = X + F.
Two key properties of such a computation are as follows:

▪ P1: H(f) = H(e).
▪ P2: Y is a codeword if and only if Y_1 is a legitimate integer; that is, 0 ≤ Y_1 < M_K.
Writing F = G^{−1}·E (mod M_N) in residue form, f = G^{−1}·e, or f_i = g_i^{−1}·e_i (mod m_i), where g_i ≡ G (mod m_i), i = 1, 2, … , n. Since gcd(g_i, m_i) = 1, f_i = 0 if and only if e_i = 0. This establishes P1, thereby showing that if e is a correctable/detectable error vector, then so is f, and vice versa. As P1 is valid, (28) holds for F. P2 follows from (28) and (31) through (34). P1 and P2, taken with Lemma 1, further lead to Y_1 = X + F, an expression for RNS-PCs that is equivalent to (30) for RRNSs.
In essence, the divisibility of Y by G is now replaced by a test of whether Y_1 is in the legitimate range [0, M_K). This unifies the error control methods for RRNSs and RNS-PCs. In an RRNS, given y = x + e, we compute Y from y with Y = X + E. In an RNS-PC, given y = G·x + e, we compute Y_1 from y with Y_1 = X + F and H(f) = H(e). Moving forward, we denote Y_1 by Y and F by E in our description of error control algorithms within the unified framework for RNS-Cs. Again, the computation of Y from its residues can be carried out using either the CRT or the MRS.
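The unifying step can be verified numerically. The sketch below uses the RNS-PC of example 5 (later in this section) and checks that, for a codeword, the classical divisibility test G | Y agrees with the range test on Y_1 from (31):

```python
from math import prod

moduli = (2, 3, 5, 7)
G, M_N = 37, prod(moduli)                  # M_N = 210
M_K = M_N // G + 1                         # legitimate range [0, 6)
G_inv = pow(G, -1, M_N)
assert G_inv == 193                        # matches example 5

def crt(residues, moduli):
    M = prod(moduli)
    return sum(pow(M // m, -1, m) * r * (M // m)
               for r, m in zip(residues, moduli)) % M

X = 3                                      # a legitimate integer, 0 <= X < M_K
codeword = tuple((G * X) % m for m in moduli)
Y = crt(codeword, moduli)                  # integer form of the received vector (no errors)
Y1 = (G_inv * Y) % M_N                     # equation (31)
assert Y % G == 0                          # classical test: G | Y
assert 0 <= Y1 < M_K and Y1 == X           # unified test: Y1 is legitimate
```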
Given Y, the task of any decoding algorithm is to estimate X and E, denoted by X̂ and Ê, respectively, such that Y = X̂ + Ê. The decoding algorithm decodes "correctly" if X = X̂. Correct decoding must take place if the number of errors in y does not exceed the error-correcting capability of the RNS-C. The following two cases arise:

▪ Case I: When used purely for error correction, correct decoding must take place if H(e) ≤ t.
▪ Case II: When used for simultaneous error correction and detection, correct decoding takes place if H(e) ≤ α. Further, the decoding algorithm detects errors if α < H(e) ≤ β.
Some general properties of an RNS-C are as follows. Every legitimate integer X and the corresponding codeword x satisfy

(35) 0 ≤ X < M_K,

(36) H(x) ≥ d (x ≠ 0).

Hence, an arbitrary integer R such that H(r) < d is an illegitimate integer; that is, M_K ≤ R < M_N. It follows that if there are a errors in y, then H(e) = a and H(y) ≥ d − a.
Lemma 2. If there are at most (d − 1) errors in y (that is, H(e) < d), then Y is an illegitimate integer.

Proof. All codewords differ in at least d places. As x and y differ in (d − 1) places or fewer, y cannot be a codeword. ■
For an RNS-C, we construct a set V_l, referred to as an "l-error-set," which contains all possible integers E such that H(e) = l. Thus, we have

(37) V_l = {E, ∀e: H(e) = l}.

Based on V_l, we construct a set U_b, referred to as an "error-set," which contains all possible integers E such that 1 ≤ H(e) ≤ b. Mathematically, we have

(38) U_b = {E, ∀e: 1 ≤ H(e) ≤ b} = V_1 ∪ V_2 ∪ ⋯ ∪ V_b.
We further assume that the elements of U_b are listed in ascending order. It is clear that all integers in V_l, and hence in U_b, are illegitimate as long as b < d. The error-set U_b is computed only once and then stored for use in the decoding algorithm. To reinforce the unified framework for error control in RNS-Cs, we note that given the permutation F ≡ P·E (mod M_N), the elements of the set V_l are the same for both F and E, as H(f) = H(e). Hence, under the condition that the same moduli are used for both, the sets U_b for an RRNS and an RNS-PC are identical. The only difference is that the error vectors for such an RRNS and RNS-PC are associated with their integer representations via the permutation P ≡ G^{−1} (mod M_N).
The cardinality of U_b (that is, the number of error vectors e having a Hamming weight in the range 0 < H(e) ≤ b) can be approximated by a polynomial in n of degree b as follows:

(39) |U_b| ≈ ∑_{l=1}^{b} ⁿC_l·(m̄ − 1)^l.

We compute m̄ as the geometric mean of all the moduli,

(40) m̄ = (∏_{i=1}^{n} m_i)^{1/n} = M_N^{1/n}.

Here, ⁿC_r = n!/[r!(n − r)!] is the combinatorial function. The expression in (39) is exact if the moduli are equal. Since they are not, we take the geometric mean to estimate the cardinality of U_b.
Example 4. Consider a (4, 2) RRNS defined by (m_1, m_2, m_3, m_4) = (2 3 5 7). Let b = t = 1. The integers E included in U_1, along with their respective 4-dimensional vectors, are 105 ← (1 0 0 0), 70 ← (0 1 0 0), 140 ← (0 2 0 0), 126 ← (0 0 1 0), 42 ← (0 0 2 0), 168 ← (0 0 3 0), 84 ← (0 0 4 0), 120 ← (0 0 0 1), 30 ← (0 0 0 2), 150 ← (0 0 0 3), 60 ← (0 0 0 4), 180 ← (0 0 0 5), and 90 ← (0 0 0 6). The error-set comprising the aforementioned integers sorted in ascending order is U_1 = {30, 42, 60, 70, 84, 90, 105, 120, 126, 140, 150, 168, 180}. In this case, |U_1| = 13. The approximated values are m̄ ≈ 4 and |U_1| ≈ 12 as per (39). Similarly, |U_2| = 69, while the approximate value is |U_2| ≈ 67.
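Error-set construction per (37) and (38) can be sketched directly: enumerate every error vector of Hamming weight at most b and map each to its integer E by the CRT. The sketch below reproduces U_1 of example 4:

```python
from itertools import combinations, product
from math import prod

def crt(residues, moduli):
    M = prod(moduli)
    return sum(pow(M // m, -1, m) * r * (M // m)
               for r, m in zip(residues, moduli)) % M

def error_set(moduli, b):
    """U_b: integers E for all error vectors e with 1 <= H(e) <= b, in ascending order."""
    n = len(moduli)
    out = set()
    for l in range(1, b + 1):
        for locs in combinations(range(n), l):
            # every choice of nonzero values at the chosen error locations
            for vals in product(*(range(1, moduli[i]) for i in locs)):
                e = [0] * n
                for i, v in zip(locs, vals):
                    e[i] = v
                out.add(crt(e, moduli))
    return sorted(out)

U1 = error_set((2, 3, 5, 7), 1)
assert U1 == [30, 42, 60, 70, 84, 90, 105, 120, 126, 140, 150, 168, 180]
assert len(error_set((2, 3, 5, 7), 2)) == 69   # |U_2| from example 4
```

This brute-force enumeration is practical because U_b is computed once, offline, and then stored.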
Example 5. Consider an RNS-PC defined by the same moduli as in example 4, with G = 37 and G^{−1} = 193. Let b = t = 1. All the integers E included in U_1, along with their respective 4-dimensional vectors obtained via the permutation P ≡ G^{−1} (mod M_N), are 105 ← (1 0 0 0), 70 ← (0 1 0 0), 140 ← (0 2 0 0), 126 ← (0 0 3 0), 42 ← (0 0 1 0), 168 ← (0 0 4 0), 84 ← (0 0 2 0), 120 ← (0 0 0 4), 30 ← (0 0 0 1), 150 ← (0 0 0 5), 60 ← (0 0 0 2), 180 ← (0 0 0 6), and 90 ← (0 0 0 3). The error-set comprising the aforementioned integers sorted in ascending order is the same as in example 4.
We now describe a decoding algorithm for each of the two cases separately, though the analysis is similar.
 4. Case I
Error correction of up to t errors; 0 < H(e) ≤ t. For correcting up to t errors, we create the error-set U_t. Given (19) and (22) and the ensuing discussion, it is clear that Y and X are illegitimate and legitimate integers, respectively, with Y = X + E (E ∈ U_t). In addition, E ≤ Y. The following theorem plays a key role in the formulation of the new decoding algorithms.
Theorem 1. Given Y = X + E, X and E are unique for H(e) ≤ t.
Proof. If these values are not unique, then there are at least two pairs (X_1, E_1) and (X_2, E_2) such that

(41) Y = X_1 + E_1 = X_2 + E_2.

Without any loss of generality, let X_1 > X_2. Rearranging (41),

(42) X_1 − X_2 = E_2 − E_1.

Let X_3 = X_1 − X_2 and E_3 = E_2 − E_1. Thus, X_3 is legitimate with H(x_3) ≥ 2t + 1, while H(e_3) ≤ H(e_2) + H(e_1) ≤ t + t = 2t. Clearly, (41) is not possible, as the Hamming weight of the left side of (42) can never equal the Hamming weight of the right side of (42). ■
Theorem 1 leads to the following new decoding algorithm.

Decoding Algorithm 1. Given Y, test whether E = 0. If not, then find the largest integer Ê in U_t such that Ê ≤ Y. Thus, the decoding algorithm finds Ê as "the largest integer in U_t less than or equal to Y." The corresponding X̂ is computed as

(43) X̂ = Y − Ê.

This analysis also leads us to conclude that the difference between any two arbitrarily selected elements of U_t is at least M_K. Finally, noting that 0 ≤ X < M_K, we have

(44) Y − M_K < E ≤ Y.
A flow chart for the new decoding algorithm 1 is shown in Fig. 1. The bulk of the complexity lies in searching U_t to determine Ê. There are numerous binary tree–based search algorithms in the literature that have a complexity of O(log₂ |A|), where |A| is the cardinality of the ordered set A. In essence, these algorithms are based on the "divide-and-conquer" technique. Via (39), |U_t| can be approximated by a polynomial of the kind |U_t| ≈ a·n^t·m̄^t. Hence, the complexity of decoding algorithm 1 is t·O(log₂ n + log₂ m̄). The error-set U_t is straightforward to obtain for a given RNS-C, as shown in examples 4 and 5.
Fig. 1. Flow chart for decoding algorithm for case I.
Example 6. Consider the same RRNS as in example 4. Let X = 3, x = (1 0 3 3), and let an error be introduced in the second residue to obtain y = (1 1 3 3). Therefore, Y = 73 and Y − M_K = 67. Thus, we need to search U_1 to find E within the interval (67, 73] given by (44). There are 13 elements in U_1, as listed in example 4. Let us denote them by U_1(i), 1 ≤ i ≤ 13. First, compare U_1(7) = 105 with 73 to get 105 > 73. Since 105 > Y, E is one of {U_1(1), … , U_1(6)}. Second, compare U_1(4) = 70 with 73 to get 70 < 73, thus further narrowing E to one of {U_1(4), … , U_1(6)}. Third, compare U_1(5) = 84 with 73 to get 84 > 73. This eliminates U_1(5) and U_1(6), leaving Ê = U_1(4) = 70 with X̂ = Y − Ê = 3.
Following the RNS-PC of example 5, let X = 3 with the codeword G·X ↔ (1 0 1 6). Let an error be introduced in the second residue to obtain y = (1 1 1 6). Therefore, in step 1, we compute the permuted version of y using G^{−1} (mod M_N) = 193 (mod 210) to obtain Y = 73 and Y − M_K = 67. The rest of the steps are identical to those for the RRNS.
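Decoding algorithm 1 reduces to a binary search for the largest Ê ≤ Y in the ordered error-set. The following sketch (using Python's `bisect` module for the search) reproduces example 6:

```python
from bisect import bisect_right

U1 = [30, 42, 60, 70, 84, 90, 105, 120, 126, 140, 150, 168, 180]  # from example 4
M_K = 2 * 3                          # legitimate range of the (4, 2) RRNS

def decode(Y, U, M_K):
    """Return (X_hat, E_hat) with Y = X_hat + E_hat, per decoding algorithm 1."""
    if 0 <= Y < M_K:
        return Y, 0                  # Y is legitimate: E = 0
    i = bisect_right(U, Y) - 1       # index of the largest element <= Y
    if i < 0:
        raise ValueError("uncorrectable errors in Y")
    E_hat = U[i]
    X_hat = Y - E_hat
    if not 0 <= X_hat < M_K:
        raise ValueError("uncorrectable errors in Y")
    return X_hat, E_hat

assert decode(73, U1, M_K) == (3, 70)    # Y = 73 from example 6
```

The `bisect_right` call performs the O(log₂ |U_t|) divide-and-conquer search discussed above.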
Example 7. Consider a double error correcting (10, 6) RRNS defined by (m_1, … , m_{10}) = (23 25 27 29 31 32 67 71 73 79). In this case, t = 2 and |U_2| = 87,899.
 5. Case II
Simultaneous correction of up to α errors and detection of up to β errors; 0 < H(e) ≤ α, α < β. For correcting up to α errors, we create the error-set U_α. Based on an analysis similar to that of Case I, the new decoding algorithm for Case II is as follows.

Decoding algorithm 2. Given Y, test whether E = 0. If not, then find the largest integer Ê in U_α such that Ê ≤ Y. The corresponding X̂ is then computed as X̂ = Y − Ê. Theorem 1 also holds in this case. We note that Case II requires fewer comparisons, as |U_α| < |U_t| for α < t. The error control algorithm for simultaneous correction of up to α and detection of up to β errors (d = α + β + 1, α < β) is almost the same as the one described in Fig. 1, with U_t replaced by U_α.
Example 8. Consider a (16, 10) RRNS defined by (m_1, … , m_{16}) = (23 29 31 32 35 37 39 41 43 47 53 59 61 67 71 73). In this case, we can use U_3 for correcting three errors, |U_3| = 51,159,743, or U_2 for simultaneous correction of up to two and detection of up to four errors, |U_2| = 245,231.
Example 9. Consider the same RRNS as in example 7. Let us use it for simultaneous correction of one error and detection of up to three errors. Here, M_K = 446,623,200 and |U_1| = 447. Let X = 15, x = (15, … , 15), and let two errors be introduced to obtain y = (16 15, … , 15 75). First, Y = 2,285,962,767,813,615. Ten comparisons are needed in the decoding algorithm:

▪ 1: U_1(224) = 6,203,792,762,208,000 > Y,
▪ 2: U_1(112) = 3,161,933,085,254,400 > Y,
▪ 3: U_1(56) = 1,645,856,960,421,600 < Y,
▪ 4: U_1(84) = 2,377,348,942,831,200 > Y,
▪ 5: U_1(70) = 2,014,108,061,155,200 < Y,
▪ 6: U_1(77) = 2,194,475,947,228,800 < Y,
▪ 7: U_1(81) = 2,326,422,285,828,000 > Y,
▪ 8: U_1(79) = 2,268,979,760,252,000 < Y,
▪ 9: U_1(80) = 2,297,342,007,255,150 > Y,
▪ 10: Ê = U_1(79) = 2,268,979,760,252,000, X̂ = Y − Ê = 16,983,007,561,615 > M_K.

Thus, "uncorrectable errors in Y" is declared.
IV. Further Refinements
 1. Refinement 1
One may reason that the computation of Y in the first step of the new decoding algorithms described here still requires a modular operation (mod M_N) if the CRT is used to compute Y. We now describe a refinement whereby such an operation may be avoided at an increase of ⌈log₂ n⌉ in the number of comparisons. The first step in the decoding algorithms is the computation of Y, which can be carried out using either the CRT or the MRS. Mathematically, Y can be represented by

(45) Y ≡ ∑_{i=1}^{n} y′_i·(M_N/m_i) (mod M_N),

where y′_i ≡ y_i·a_i (mod m_i), i = 1, 2, … , n. Given y, the computation in (45) can equivalently be expressed as

(46) Y′ = ∑_{i=1}^{n} y′_i·(M_N/m_i),

(47) Y ≡ Y′ (mod M_N).
To avoid the modular operation in (47), one may compute Y′ in (46) instead of Y in (47). Recalling (30), we may write Y′ as

(48) Y′ = Y + Δ·M_N = X + E + Δ·M_N,

where Δ is an unknown that satisfies 0 ≤ Δ < n. Based on (48), we construct n sets, A_i, i = 0, … , n − 1, as follows:

(49) A_i = {all values of X + E + i·M_N; 0 ≤ X < M_K, H(e) ≤ t}.

Since 0 ≤ X + E < M_N for H(x) ≥ 2t + 1 and H(e) ≤ t, these sets are disjoint. We create a super error-set U_b^S, an n-fold replica of the previously described error-set U_b, as follows:

(50) V_l^S = {E, E + M_N, … , E + (n − 1)·M_N; ∀e: H(e) = l},

(51) U_b^S = V_1^S ∪ V_2^S ∪ ⋯ ∪ V_b^S.
The elements of U_b^S are listed in ascending order. We also have |U_b^S| = n·|U_b|. Based on this formulation of U_b^S, the decoding algorithms can now be described as follows:

▪ Step 1: Compute Y′ using (46). When E = 0, X lies in one of n intervals, the ith interval being [i·M_N, i·M_N + M_K], i = 0, … , n − 1.
▪ Decoding algorithm 1: Given Y′, test whether E = 0. If not, find the largest integer Ê in U_t^S such that Ê ≤ Y′.
▪ Decoding algorithm 2: Given Y′, test whether E = 0. If not, find the largest integer Ê in U_α^S such that Ê ≤ Y′.
Each iteration in the search process narrows the number of possible values of Ê by a factor of two. Here, |U_b^S| = n·|U_b|. Hence, ⌈log₂ n⌉ additional comparisons reduce the search to its original size of |U_b|.
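Construction of the super error-set per (50) and (51) is a straightforward replication; the sketch below builds U_1^S for the (4, 2) RRNS of example 4:

```python
from math import prod

moduli = (2, 3, 5, 7)
n, M_N = len(moduli), prod(moduli)       # M_N = 210
U1 = [30, 42, 60, 70, 84, 90, 105, 120, 126, 140, 150, 168, 180]  # from example 4

# U_1^S: each E replicated at E, E + M_N, ..., E + (n - 1)*M_N, kept in ascending order
U1_S = sorted(E + i * M_N for E in U1 for i in range(n))
assert len(U1_S) == n * len(U1)          # |U_b^S| = n * |U_b| = 52
assert U1_S[0] == 30 and U1_S[-1] == 180 + (n - 1) * M_N
```

A search over U_1^S then works directly on the unreduced sum Y′ of (46), trading the mod-M_N reduction for ⌈log₂ n⌉ = 2 extra comparisons.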
The search scheme may be further improved in a hardware implementation by arranging the elements of U_b^S into different subsets, where the elements in each subset have the same bit-length. In this case, the decoding algorithms carry out a bit-length check of Y′ and then implement the search scheme in the subset whose numbers have the same bit-length as Y′.
 2. Refinement 2
If it is not required to isolate the case of no error (Y = X, E = 0; y = x, e = 0) from the case of one or more errors (Y ≠ X, E ≠ 0; y ≠ x, e ≠ 0) at the receiver, then U_b is obtained as

(52) U_b = {E, ∀e: 0 ≤ H(e) ≤ b} = V_0 ∪ V_1 ∪ ⋯ ∪ V_b.

Similarly, U_b^S is constructed as

(53) U_b^S = V_0^S ∪ V_1^S ∪ ⋯ ∪ V_b^S.
In this case, no comparisons are required once Y in (30) or Y′ in (46) is computed; furthermore, a search is performed even when E = 0 (e = 0). A flow chart for decoding algorithm 1 incorporating these refinements is shown in Fig. 2. In this flow chart, U_t can be replaced by U_α to obtain the decoding algorithm for simultaneously correcting α and detecting β (α < β) errors. Finally, Y and U_t in Fig. 2 can be replaced by Y′ and U_t^S in (53), respectively, to obtain the decoding algorithm for correcting up to t errors. In Fig. 2, U_t is replaced by U_α^S to obtain the decoding algorithm for simultaneously correcting α and detecting β (α < β) errors. In these cases, Y′ in Fig. 2 is computed using (46) instead of the CRT or the MRS. As a result, decoding algorithms incorporating these refinements avoid all modular operations in step 1.
Fig. 2. Modified flow chart for decoding algorithm for case I.
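A minimal sketch of refinement 2, with V_0 = {0} included per (52) so that no separate E = 0 test precedes the search; the nonzero error values below are illustrative assumptions:

```python
from bisect import bisect_right

# Hypothetical per-weight error-value sets V_0, V_1, V_2.  V_0 = {0} is the
# no-error case, included per (52) so that the E = 0 test is unnecessary.
V = {0: [0], 1: [3, 17, 40], 2: [96, 250, 611]}

# U_b as the sorted union V_0 ∪ V_1 ∪ ... ∪ V_b, per (52).
U_b = sorted(set().union(*V.values()))

def decode_error(Y):
    """A search is performed even when E = 0: the largest element of U_b
    not exceeding Y is returned directly, with no preliminary comparison."""
    idx = bisect_right(U_b, Y)
    return U_b[idx - 1]
```

When Y falls below every nonzero error value, the search simply returns 0, which is the E = 0 outcome.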
 3. Refinement 3
The CRT computation of Y in an RRNS can be simplified by computing and transmitting the permuted residues x′ instead of x, where x′_i ≡ a_i·x_i (mod m_i), i = 1, 2, … , n. The arithmetic remains the same. In this case, y = x′ + e, and Y is calculated as

(54) Y ≡ Σ_{i=1}^{n} x′_i·(M_N/m_i) + Σ_{i=1}^{n} e_i·(M_N/m_i) (mod M_N),

leading to Y = X + T^{−1}·E (mod M_N). There is no other change in the error control algorithms. This saves the n modular multiplications otherwise required to compute x′ from x at the receiving end.
The framework described here can also be used for error control in other scenarios. For instance, if we are interested in controlling burst errors, then U_b is the error-set corresponding to all correctable burst errors of length up to b. Finally, the error correction/list decoding problem mentioned on page 1,332 of [13] requires computation of all codewords that differ from y in up to e places, e > t. This can also be done using the methodology described here. In such a case, the decoding algorithm uses the error-set U_e instead of U_t and finds codewords within M_K of Y. There may be multiple solutions, as e > t.
V. Computational Complexity Comparisons
The work in [10] deals with double-error and single-burst-error correction; however, it does not perform multiple error correction in general. To date, the algorithms in [13], [15], [16], and [17] have been the most efficient error correction schemes for RRNSs. In this section, a computational complexity comparison is given for our algorithm, a variant of the algorithm in [13], and the algorithm in [17]. The works in [15] and [16] are closely related to [13]. The error-correction capabilities of the methods in [14] and [13] are restricted by the values of the moduli in an RRNS. In [13], correct decoding takes place in the presence of e errors if e < (n − k)·log m_1 / (log m_1 + log m_n), thereby requiring that the moduli be close to each other. This is in contrast to our work, where correct decoding takes place in the presence of e errors if e ≤ t = ⌊(n − k)/2⌋. The method in [17] aims at finding a tuple of correct residues to recover the original integer. The algorithms in [13], [15], and [16] calculate and subtract the error value using Euclid's algorithm and continued fractions involving large integers, which can be computationally intensive. An idea intended to achieve error correction, analogous to similar ideas appearing in [13], [15], and [16], is shown in [17]; it is equivalent to recovering and checking all possible combinations of (n − a) residues in the received vector y, assuming that a errors have occurred. The decoding algorithm in [17], a variant of the decoding algorithm in [13], uses modular arithmetic and is simpler to implement. It proceeds as follows:

1) Recover Y from the received vector with the CRT.

2) If Y is in the legitimate range, then stop and output X̂ = Y. Otherwise, go to step 3.

3) Assume a = 1.

4) Compute X_i = 〈Y〉_{Z_i}, where Z_i, i = 1, 2, … , ^{n}C_{n−a}, is the product of any (n − a)-dimensional combination of moduli.

5) If X_i is within the legitimate range, then output X̂ = X_i. Or, if i = ^{n}C_{n−a}, then increment a by 1. If a > t, then go to step 6. Otherwise, go to step 4.

6) Declare more than t errors and stop.
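Under the assumption that the moduli are pairwise coprime and listed with the k information moduli first (so that the legitimate range is [0, M_K), M_K = m_1·…·m_k), the six steps can be sketched as follows; the (5, 2) system and residues below are illustrative, not the paper's examples:

```python
from itertools import combinations
from math import prod

def combinational_decode(y, moduli, k, t):
    """Sketch of the six-step decoder above (the [17]-variant).

    Assumption: the first k moduli are the information moduli, so the
    legitimate range is [0, M_K) with M_K = m_1 * ... * m_k.
    """
    n = len(moduli)
    M_K = prod(moduli[:k])
    M_N = prod(moduli)
    # Step 1: recover Y from the received residues via the CRT.
    Y = 0
    for r, m in zip(y, moduli):
        p = M_N // m
        Y += r * pow(p, -1, m) * p
    Y %= M_N
    # Step 2: if Y is in the legitimate range, no error is detected.
    if Y < M_K:
        return Y
    # Steps 3-5: assume a = 1, 2, ..., t errors; for each (n - a)-subset
    # of moduli compute X_i = <Y>_{Z_i} and test the legitimate range.
    for a in range(1, t + 1):
        for idx in combinations(range(n), n - a):
            Z = prod(moduli[i] for i in idx)
            Xi = Y % Z
            if Xi < M_K:
                return Xi
    # Step 6: more than t errors.
    return None
```

The nested loops examine up to ^{n}C_{n−a} subsets per assumed error count a, which reflects the combinational cost discussed above.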
Thus, the problem of multiple error correction can be solved in polynomial time in the aforementioned scheme. To remedy the weakness that a large number of iterations is required to select the correct residues via traversal, MLD is introduced in [17] to reduce nonessential operations. We omit the details.
The computational complexity comparison between the proposed algorithms and the algorithms in [13] and [17] is listed in Table 2. With the exception of the algorithms in Section IV, these algorithms require computation of the integer Y from the received residues y; this count is not included explicitly in Table 2. Owing to the low complexity of the dichotomy-based search, the proposed methods perform error control up to the error control capability of RNS-Cs in a computationally efficient manner. Table 3 lists the numeric values of the various entries in Table 2 for the (10, 6) double-error-correcting RRNS and the (16, 10) triple-error-correcting RRNS described in examples 4 and 5, respectively. We observe that our algorithms are based on comparison operations; they avoid modular operations and the processing of large integers, in contrast to the algorithms in [13] and [17]. The algorithms presented here are therefore expected to be simpler and better suited for high-speed application environments.
Table 2. Complexity comparison of decoding algorithms.

| Operation | Variant of paper [13] | Paper [17] | Algorithm (Section III) | Algorithm (Section IV) |
|---|---|---|---|---|
| Computation of Y via CRT or MRS | √ | √ | √ | N/A |
| Modular operation | ^{n}C_{t} | f·(r + 1) | 0 | 0 |
| Comparison operation | ^{n}C_{t} | f·(r + 2) | ⌈log_2 U_t⌉ + 2 | ⌈log_2 (n·U_t)⌉ + 2 |
| Subtraction operation | 0 | 0 | 2 | 2 |
| Total complexity | 2·^{n}C_{t} | f·(2·r + 1) | ⌈log_2 U_t⌉ + 4 | ⌈log_2 (n·U_t)⌉ + 4 |

Note: f is an experimental number such that f ≥ [^{n}C_{t} / ^{r}C_{t}].
Table 3. Complexity comparison of decoding algorithms for the (10, 6) and (16, 10) RRNSs.

| Operation | Variant of paper [13] | Paper [17] | Algorithm (Section III) | Algorithm (Section IV) |
|---|---|---|---|---|
| Computation of Y via CRT or MRS | √ | √ | √ | N/A |
| Modular operation | 45; 560 | 50; 392 | 0; 0 | 0; 0 |
| Comparison operation | 45; 560 | 60; 448 | 19; 28 | 23; 32 |
| Subtraction operation | 0; 0 | 0; 0 | 2; 2 | 2; 2 |
| Total complexity | 90; 1,120 | 110; 840 | 21; 30 | 25; 34 |

Note: The first and second entries in each box correspond to the (10, 6) and (16, 10) RRNS, respectively.
VI. Conclusion
In this work, new computationally efficient error-control algorithms for RRNSs and RNS-PCs, collectively termed RNS-Cs, are presented under a unified framework. These algorithms require comparison operations as opposed to modular operations; this aspect is expected to lead to improvements in processing time when the algorithms are implemented in hardware. The proposed algorithms have a computational complexity of t·O(log_2 n + log_2 m̄) operations, which is significantly lower than the computational complexity of existing algorithms. A refinement is described that avoids the modular operation in the first step of the decoding algorithms, at a slight increase in computational complexity of ⌈log_2 n⌉ comparisons. Two other refinements are described that further simplify the computations associated with the first step. The techniques can be extended to non-RNS-Cs and to other scenarios, such as the correction of burst errors.
Acknowledgements
The work of H.K. Garg is supported by the Singapore National Research Foundation under its International Research Centre @ Singapore Funding Initiative and administered by the Interactive Digital Media Programme Office at the NUS-ZJU Sensor-Enhanced Social Media (SeSaMe) Centre. The work of H. Xiao and J. Hu was supported by the Tsinghua University Initiative Scientific Research Program (20141081231), the National Natural Science Foundation of China (under Grant 61371104), and the National High Technology Project of China (under Grant 2011AA010201).
BIO
xhs13@mails.tsinghua.edu.cn
Hanshen Xiao is currently pursuing his BS degree in mathematics at Tsinghua University, Beijing, China. In 2012, he worked as a research intern for the National Key Lab of Science and Technology on Communications, Chengdu, China. In 2015, he was awarded the Tsinghua Highest Spark Research Fellowship and was a visiting student at the Department of Computer Science, Yale University, CT, USA. His research interests include cryptography; coding theory; and mathematical modeling in signal processing and numerical calculation.
Corresponding Author eleghk@nus.edu.sg
Hari Krishna Garg received his BS degree in electrical engineering from the Indian Institute of Technology, Delhi, India, in 1981 and his MS and PhD degrees in electrical engineering from Concordia University, Canada, in 1983 and 1985, respectively. He received his MBA degree in international finance from Syracuse University, NY, USA, in 1995. From 1985 to 1995, he was with the Department of Electrical & Computer Engineering, Syracuse University. He joined the Department of Electrical & Computer Engineering, National University of Singapore, in 1995. His research interests include wireless communications from physical to application layers. He also engages in enterprise activity as his passion. Thus far, he has founded or cofounded four companies.
jhhu@uestc.edu.cn
Jianhao Hu received his BE and PhD degrees in communication systems from the University of Electronic Science and Technology of China (UESTC), Chengdu, China, in 1993 and 1999, respectively. He joined the City University of Hong Kong, Kowloon Tong, China, in 1999 as a postdoctoral researcher. From 2000 to 2004, he served as a senior system engineer at the 3G Research Center of the University of Hong Kong. He has been a professor of the National Key Lab., UESTC, since 2005. His research interests include high-speed DSP technology with VLSI, NoC, and software radio.
gqxiao@swu.edu.cn
Guoqiang Xiao received his PhD degree in signal and information processing from the University of Electronic Science and Technology of China (UESTC), Chengdu, China and his BS degree in radio technology from Chongqing University, China, in 1999 and 1986, respectively. Since 1986, he has been with the College of Computer and Information Science, Southwest University, Chongqing, China, where he is currently a professor. From 2001 to 2004, he was with the Department of Electrical & Electronic Engineering, University of Hong Kong, as a postdoctoral researcher. His research interests include image processing, pattern recognition, neural networks, and wireless network communication.
References

[1] A.S. Madhukumar, F. Chin, and A.B. Premkumar, "Incremental Redundancy and Link Adaptation in Wireless Local Area Networks Using Residue Number Systems," Wireless Pers. Commun., vol. 27, no. 4, 2003, pp. 321–336. DOI: 10.1023/B:WIRE.0000012272.95696.ab

[2] A.S. Madhukumar and F. Chin, "Enhanced Architecture for Residue Number System-Based CDMA for High-Rate Data Transmission," IEEE Trans. Wireless Commun., vol. 3, no. 5, 2004, pp. 1363–1368. DOI: 10.1109/TWC.2004.833509

[3] A. Sengupta and B. Natarajan, "Performance of Systematic RRNS Based Space-Time Block Codes with Probability-Aware Adaptive Demapping," IEEE Trans. Wireless Commun., vol. 12, no. 5, 2013, pp. 2458–2469. DOI: 10.1109/TWC.2013.031813.121113

[4] T. Keller, T.H. Liew, and L. Hanzo, "Adaptive Redundant Residue Number System Coded Multicarrier Modulation," IEEE J. Sel. Areas Commun., vol. 18, no. 11, 2000, pp. 2292–2301. DOI: 10.1109/49.895034

[5] B. Zarei, V. Muthukkumarasay, and X.W. Wu, "A Residual Error Control Scheme in Single-Hop Wireless Sensor Networks," IEEE Int. Conf. Adv. Inf. Netw. Appl., Barcelona, Spain, Mar. 25–28, 2013, pp. 197–204.

[6] S. Zhang, Y. Zhang, and L.L. Yang, "Redundant Residue Number System Based Multicarrier DS-CDMA for Dynamic Multiple-Access in Cognitive Radios," IEEE Veh. Technol. Conf., Yokohama, Japan, May 15–18, 2011, pp. 1–5.

[7] M. Villari, "Data Reliability in Multi-provider Cloud Storage Service with RRNS," in Adv. Service-Oriented Cloud Comput., Berlin, Germany: Springer-Verlag, 2013, pp. 83–93.

[8] F. Barsi and P. Maestrini, "Error Correcting Properties of Redundant Residue Number Systems," IEEE Trans. Comput., vol. C-22, no. 3, 1973, pp. 307–315.

[9] H. Krishna, K.Y. Lin, and J.D. Sun, "A Coding Theory Approach to Error Control in Redundant Residue Number Systems, Part I: Theory and Single Error Correction," IEEE Trans. Circuits Syst. II: Analog Digital Signal Process., vol. 39, no. 1, 1992, pp. 8–17.

[10] J.D. Sun and H. Krishna, "A Coding Theory Approach to Error Control in Redundant Residue Number Systems, Part II: Multiple Error Detection and Correction," IEEE Trans. Circuits Syst. II: Analog Digital Signal Process., vol. 39, no. 1, 1992, pp. 18–34.

[11] L.L. Yang and L. Hanzo, "Redundant Residue Number System Based Error Correction Codes," IEEE Veh. Technol. Conf., Atlantic City, NJ, USA, 2001, pp. 1472–1476.

[12] L.L. Yang and L. Hanzo, "Minimum-Distance Decoding of Redundant Residue Number System Codes," IEEE Int. Conf. Commun., Helsinki, Finland, 2001, pp. 2975–2979.

[13] O. Goldreich, D. Ron, and M. Sudan, "Chinese Remaindering with Errors," IEEE Trans. Inf. Theory, vol. 46, no. 4, 2000, pp. 1330–1338.

[14] D.M. Mandelbaum, "On a Class of Arithmetic Codes and a Decoding Algorithm," IEEE Trans. Inf. Theory, vol. 22, no. 1, 1976, pp. 85–88.

[15] O. Goldreich, D. Ron, and M. Sudan, "Chinese Remaindering with Errors," MIT, Cambridge, MA, USA, Tech. Rep. (revised), 1999.

[16] O. Goldreich, D. Ron, and M. Sudan, "Chinese Remaindering with Errors," ACM Symp. Theory Comput., Atlanta, GA, USA, 1999, pp. 225–234.

[17] V.T. Goh and M.U. Siddiqi, "Multiple Error Detection and Correction Based on Redundant Residue Number Systems," IEEE Trans. Commun., vol. 56, no. 3, 2008, pp. 325–330. DOI: 10.1109/TCOMM.2008.050401

[18] H. Lo and T. Lin, "Parallel Algorithms for Residue Scaling and Error Correction in Residue Arithmetic," Wireless Eng. Technol., vol. 4, no. 4, 2013, pp. 198–213. DOI: 10.4236/wet.2013.44029

[19] H. Krishna, Computational Number Theory and Digital Signal Processing: Fast Algorithms and Error Control Techniques, Boca Raton, FL, USA: CRC Press, 1994.