Improved Reliability-Based Iterative Decoding of LDPC Codes Based on Dynamic Threshold

ETRI Journal, vol. 37, no. 4, Aug. 2015, pp. 736-742.

- Received: December 12, 2013
- Accepted: April 15, 2015
- Published: August 01, 2015


A serial concatenated decoding algorithm with dynamic threshold is proposed for low-density parity-check codes with short and medium code lengths. The proposed approach uses a dynamic threshold to select a decoding result from belief propagation decoding and order statistic decoding, which improves the performance of the decoder at a negligible cost. Simulation results show that, in the high-SNR region, the proposed concatenated decoder performs better than a serial concatenated decoder without threshold, with an E_b/N_0 gain of above 0.1 dB.
In the second step, L_i candidate codewords are processed for order-i OSD of an LDPC code with a code length of N, where

(1) $${L}_{i}=\sum _{l=0}^{i}\binom{N}{l}.$$
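As a quick arithmetic check of (1), the candidate count can be computed directly (the helper name below is illustrative, not from the paper):

```python
from math import comb

def num_candidates(N: int, i: int) -> int:
    """Number of candidate codewords L_i for order-i OSD: sum of C(N, l) for l = 0..i."""
    return sum(comb(N, l) for l in range(i + 1))

# For a code length of 504 (the code used later in the paper), order-1 OSD processes
print(num_candidates(504, 1))  # 1 + 504 = 505 candidates
```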
The OSD algorithm calculates a decoding metric for every candidate codeword and selects the codeword with the smallest decoding metric. For each constructed codeword, c = (c_0, c_1, ..., c_{N−1})^T, its associated decoding metric is computed based on the original noisy received sequence, y = (y_0, y_1, ..., y_{N−1})^T. For the SC-BP-OSD algorithm, y is the accumulated LLR from BP decoding. The parameters r and c_1 are the absolute value and the hard decision of y, respectively, which can be expressed as r_i = |y_i| and c_{1,i} = (1 − sgn(y_i))/2, 0 ⩽ i ⩽ N − 1; the function sgn(x) represents the sign of x. In fact, r_i is the reliability value of the corresponding c_{1,i}.
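The mapping from LLRs to reliabilities and hard decisions can be sketched as follows (a minimal illustration; the function name is hypothetical):

```python
import numpy as np

def reliability_and_hard_decision(y: np.ndarray):
    """Split an LLR vector y into reliabilities r_i = |y_i| and
    hard decisions c_{1,i} = (1 - sgn(y_i)) / 2 (positive LLR -> bit 0)."""
    r = np.abs(y)
    c1 = ((1 - np.sign(y)) // 2).astype(int)
    return r, c1

y = np.array([2.3, -0.4, 1.1, -3.0])
r, c1 = reliability_and_hard_decision(y)
print(r)   # reliabilities |y_i|
print(c1)  # hard decisions: negative LLRs map to bit 1
```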
One of the most commonly used decoding metrics is the weighted Hamming distance, which can be defined as

(2) $${d}_{\text{w}}({\text{c}}_{1},{\text{c}}_{x})\triangleq {\text{r}}^{\text{T}}\cdot ({\text{c}}_{1}+{\text{c}}_{x}),$$

where c_x is one of the candidate codewords. The operation "+" is performed in GF(2). The decoding metric is used to select the most likely codeword, denoted by c_2, among all of the L_i candidate codewords. Upon obtaining the codeword c_2 with the smallest decoding metric, this smallest metric itself is a straightforward indicator of the reliability of c_2. Thus, a threshold, T_d, is set to filter the OSD result: if the final decoding metric (the decoding metric of c_2), d_w(c_1, c_2), is smaller than T_d, then the OSD result is chosen; otherwise, the BP decoding result is chosen.
Based on the previous analysis, an SC-BP-OSD algorithm with threshold is constructed; its flowchart is shown in Fig. 1.
Fig. 1. Flowchart of the SC-BP-OSD algorithm with threshold: variable "a" denotes the checksum of the BP decoding result c_1.
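The metric computation of (2) and the threshold test of Fig. 1 can be sketched as follows (a simplified illustration: MRB construction and candidate generation are omitted, and all names are hypothetical):

```python
import numpy as np

def weighted_hamming(r, c1, cx):
    """d_w(c1, cx) = r^T (c1 + cx), per (2); '+' in GF(2) marks disagreeing positions."""
    return float(r @ (c1 ^ cx))

def osd_select(r, c1, candidates):
    """Pick the candidate codeword c2 with the smallest decoding metric."""
    metrics = [weighted_hamming(r, c1, cx) for cx in candidates]
    best = int(np.argmin(metrics))
    return candidates[best], metrics[best]

def filtered_output(c_bp, r, c1, candidates, T_d):
    """Threshold test of Fig. 1: accept the OSD result only if d_w(c1, c2) < T_d."""
    c2, d_w = osd_select(r, c1, candidates)
    return c2 if d_w < T_d else c_bp
```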
The key issue of the SC-BP-OSD algorithm with threshold is determining a proper threshold T_d. A simple method is to investigate the performance of different values of T_d through simulation and select the best one. This has been done for several well-known LDPC codes. Unfortunately, the simulation results show that selecting a single T_d that performs best throughout the entire E_b/N_0 range is difficult.
Examples of utilizing decoding metrics to improve ML decoding algorithms are found in [14] and [15]. In [14], a threshold test is used to determine whether the codeword is likely to be the ML codeword. If the codeword is not accepted, then the decoder biases the input vector away from the candidate codeword, thereby reducing the decoding complexity. In [15], the OSD metric is used to arrange the list of the most a priori likely tests, also reducing the decoding complexity. The algorithms in [14] and [15] are based on pure OSD or ML decoders. Unlike in [14] and [15], the proposed approach uses the OSD metric to decide whether the OSD output codeword can be accepted as correct, which improves the BER performance of the SC-BP-OSD algorithm.
Figure 2 shows a comparison of the BER performance under an AWGN channel for different values of T_d, where an order-1 OSD is considered. The LDPC code used in Fig. 2 is a regular (3,6) LDPC code with a code rate of 1/2 and a code length of 504 [16], denoted as the (504,252,3,6) LDPC code. The values of T_d tested in this paper are 200, 600, and 1,000. In Fig. 2, T_d = 0 represents BP decoding without OSD, and T_d = ∞ represents the SC-BP-OSD algorithm without threshold. In the optimal situation, every decoding error caused by the OSD algorithm is detected, and the incorrect codeword is substituted with the BP decoding result. Although the optimal choice cannot be realized in practice, it can be computed in simulation: whether the OSD output codeword is correct can be determined because the simulator knows what is being transmitted. The result for the optimal situation is thus a lower bound for evaluating the performance of different T_d.
Figure 2 shows that T_d = 600 performs better than T_d = 200 and T_d = 1,000 for E_b/N_0 less than 2.4 dB. However, for E_b/N_0 above 2.4 dB, T_d = 1,000 has the best performance. Thus, selecting the threshold is difficult. Moreover, the performance of both T_d = 600 and T_d = 1,000 has a notable gap from that of the optimal situation.
Fig. 2. Comparison of BER performance of different T_d.
Since a fixed threshold cannot work properly over the entire E_b/N_0 region, a different threshold T_d could be used for each E_b/N_0. However, this method requires many simulations to determine the T_d for each E_b/N_0; moreover, even with this method, the performance gap from the optimal algorithm cannot be closed.
Based on the Gaussian approximation of the density evolution of the BP decoding algorithm of LDPC codes, given an E_b/N_0 and an LDPC code, there is a unique value for the mean of the LLR output, m_v, as the number of iterations approaches infinity [17]. The relationship between E_b/N_0 and m_v can be denoted as a function, m_v = f(E_b/N_0). Based on the analysis in [17], the function f(E_b/N_0) is monotonically increasing.
Considering that the Gaussian approximation of density evolution is derived under the assumption that the all-zero codeword is transmitted, m_v is in fact a conditional mean. If a non-zero codeword is transmitted, then m_v can be approximated as the mean of the absolute values of the LLR outputs. As a result, the decoding metric, expressed as a weighted Hamming distance, also exhibits a growing trend with E_b/N_0 because the weighting values increase. This explains why T_d = 1,000 performs better than T_d = 600 for large E_b/N_0.
The effect of the mean of the absolute values of the LLR outputs on the value of the decoding metric suggests a variable threshold T_d based on the LLR output of BP decoding, calculated as follows:

(3) $${T}_{\text{d}}=\alpha \cdot \frac{1}{N}\sum _{i=0}^{N-1}\left|{r}_{i}\right|=\alpha \overline{r},$$

where r̄ is the mean of the absolute values of the LLR outputs. The parameter α is to be determined and may be called the threshold factor.
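Equation (3) amounts to one mean and one multiplication, which is why the added complexity is negligible; a minimal sketch (hypothetical names):

```python
import numpy as np

def dynamic_threshold(r: np.ndarray, alpha: float) -> float:
    """T_d = alpha * mean(|r_i|) = alpha * r_bar, per (3).
    r is the (accumulated) LLR vector fed to OSD; alpha is the threshold factor."""
    return alpha * float(np.mean(np.abs(r)))

# e.g. with the threshold factor alpha = 16 used for code A in the simulations
llr = np.array([2.0, -1.0, 3.0, -2.0])
print(dynamic_threshold(llr, 16.0))  # 16 * mean(|LLR|) = 16 * 2.0 = 32.0
```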
The variable d_w(c_1, c_2) can be expressed as a sum over elements; that is,

(4) $${d}_{\text{w}}({c}_{1},{c}_{2})=\sum _{i=0}^{N-1}\left|{r}_{i}\right|({c}_{1,i}+{c}_{2,i}).$$
The following definition is then obtained:

(5) $${d}_{\text{cmp}}\triangleq \frac{1}{N}\sum _{i=0}^{N-1}\left|{r}_{i}\right|\cdot \sum _{i=0}^{N-1}({c}_{1,i}+{c}_{2,i})={d}_{\text{h}}({c}_{1},{c}_{2})\,\overline{r},$$

where d_h(c_1, c_2) is the Hamming distance between the codewords c_1 and c_2. The variable d_cmp can be viewed as an estimate of d_w(c_1, c_2).
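A small numeric example illustrates how d_cmp in (5) relates to d_w in (4); the values below are invented purely for illustration:

```python
import numpy as np

# Tiny numeric check of (4) versus (5): d_cmp = d_h(c1, c2) * r_bar replaces the
# per-position reliabilities |r_i| on the disagreeing positions by their average.
r = np.array([1.5, 0.2, 2.1, 0.9])
c1 = np.array([0, 1, 0, 1])
c2 = np.array([0, 0, 0, 1])

diff = c1 ^ c2                            # GF(2) addition, elementwise
d_w = float(r @ diff)                     # (4): exact weighted Hamming distance
d_cmp = int(diff.sum()) * float(r.mean()) # (5): Hamming distance times r_bar
print(d_w, d_cmp)
```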
A comparison of (3) and (5) shows that d_cmp and T_d are both scalar multiples of r̄. Considering that the purpose of T_d is to determine whether the decoding metric d_w is too large to be accepted, the scalar of T_d, α, has a close relationship with the scalar of d_cmp, which is d_h(c_1, c_2).
If c_1 falls inside the decision boundary of the correct codeword c_0, then the Hamming distance between them is not more than d_min/2. Under this assumption, a c_2 with decoding metric d_w(c_1, c_2) > d_min·r̄/2 has a large probability of being an incorrect codeword; here, d_min is the minimum Hamming distance. Since a weighted Hamming distance, rather than a plain Hamming distance, is used to determine which codeword is more similar to c_1, a c_1 with a Hamming distance from c_0 of more than d_min/2 may also be correctly decoded. As a result, α > d_min/2.
The lower bound of α has thus been obtained, but what about the upper bound? Simulation and analysis do not suggest the existence of a proper upper bound suitable for all kinds of LDPC codes. However, the lower bound does facilitate the search for the best α, because the simulated α can be swept starting from half the minimum Hamming distance of the considered LDPC code.
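The suggested search procedure, starting the sweep at d_min/2, might be sketched as follows (`run_ber_sim` is a hypothetical user-supplied callable mapping a threshold factor to a simulated BER):

```python
import numpy as np

def sweep_alpha(run_ber_sim, d_min: int, steps: int = 8, step: float = 2.0):
    """Search for the best threshold factor, starting at the lower bound d_min / 2.

    run_ber_sim: callable alpha -> simulated BER (supplied by the experimenter).
    Returns the alpha with the smallest simulated BER over the sweep."""
    alphas = d_min / 2 + step * np.arange(steps)
    bers = [run_ber_sim(a) for a in alphas]
    return float(alphas[int(np.argmin(bers))])
```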
If an OSD output codeword is incorrect and is detected as an error through the threshold check, then a detected error occurs; the probability of this is expressed as P_d^+. If a decoding codeword is incorrect but is not detected by the threshold, then an undetected error occurs; the probability of this is expressed by P_u. This definition of an undetected error is the same as that found in [12]–[13]; however, a detected error in [12]–[13] is slightly different in that there is no false-detection probability, since detected errors coincide with erased codewords and no decoder output is produced in such cases. In the proposed approach, instead, a false detection is possible: an error may be detected because the decoding metric exceeds the threshold even though the decoder output codeword coincides with the transmitted codeword. Thus, the detected error rate, P_d, can be divided into two parts: one is P_d^+, defined above, and the other is the probability that a decoding codeword is correct but is detected as an error, denoted P_d^−. The following relationships can now be established:

(6) $${P}_{\text{d}}={P}_{\text{d}}^{+}+{P}_{\text{d}}^{-},$$

(7) $${P}_{\text{w}}={P}_{\text{d}}+{P}_{\text{u}}={P}_{\text{d}}^{+}+{P}_{\text{d}}^{-}+{P}_{\text{u}},$$

where P_w is the codeword error rate.
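The three error events behind (6) and (7) can be made concrete with a small bookkeeping helper for a simulation (hypothetical names; "detected" means the decoding metric exceeded T_d):

```python
def classify_outcome(osd_correct: bool, detected: bool) -> str:
    """Bucket one OSD decoding into the error events behind (6)-(7)."""
    if osd_correct and detected:
        return "d-"   # correct codeword discarded: contributes to P_d^-
    if not osd_correct and detected:
        return "d+"   # error caught by the threshold: contributes to P_d^+
    if not osd_correct and not detected:
        return "u"    # undetected error: contributes to P_u
    return "ok"       # correct and accepted
```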
References [12]–[13] provide the relationships among P_w, P_d, and P_u for different bounds (which have the same meaning as the threshold in this paper). The goal of [12]–[13] is to simultaneously minimize both P_w and P_u and to show the lower bound of P_u versus P_w for different bounds. Although different bounds are considered, the key point in [12]–[13] is not the bounds themselves but the relationship between P_w and P_u under such bounds. Based on the results of [12]–[13], a single bound will perform differently under different E_b/N_0. If a similar trade-off of P_u versus P_w is expected for different E_b/N_0, then the bound should vary with E_b/N_0, which leads to a dynamic bound.
Unlike in [12]–[13], OSD is serially concatenated with BP in the proposed approach; thus, an undetected codeword error should include those caused by BP. For a BP decoder, an undetected error occurs when a = 0 but the decoded codeword does not coincide with the transmitted one. For both LDPC codes evaluated in this paper (and for most LDPC codes with sufficient code length), when only BP decoding is considered, the probability of an undetected codeword error is small enough to be ignored. As a result, undetected codeword errors caused by BP are not considered in the later analysis and simulations.
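The BP stopping test mentioned above (the checksum a, computed over GF(2)) can be sketched as follows (a toy illustration; H is any parity-check matrix):

```python
import numpy as np

def checksum(H: np.ndarray, c: np.ndarray) -> int:
    """BP stopping test: a = H c over GF(2); a == 0 declares a valid codeword
    (which, rarely, may still differ from the transmitted one)."""
    return int(np.any((H @ c) % 2))
```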
For the proposed approach, P_u is expected to be as small as possible; that is, most of the decoding errors of OSD should be correctly detected. Moreover, P_d^− should be minimized, which means minimizing the probability of discarding a correct OSD output codeword. The so-called optimal situation implies that P_u and P_d^− are both zero and that all codeword errors are correctly detected errors. Unfortunately, a decline in P_u results in an increase in P_d^−, which in turn is responsible for the increase in P_w shown in [12]. If the CER of OSD without threshold is P_w_osd, then

(8) $${P}_{\text{w\_osd}}={P}_{\text{d}}^{+}+{P}_{\text{u}}.$$
Considering that P_w_osd is determined for a given E_b/N_0, the proportions of P_u, P_d^−, and P_d^+ in P_w are sufficient to indicate the influence of the threshold on the proposed approach. Let Pc_u, Pc_d^−, and Pc_d^+ be the percentages of P_u, P_d^−, and P_d^+, respectively, in P_w; Figure 3 shows them for different thresholds. The (504,252,3,6) LDPC code used in Fig. 2 is again selected for computing Pc_u, Pc_d^−, and Pc_d^+ in Fig. 3, and order-1 OSD is considered. The values of Pc_u, Pc_d^−, and Pc_d^+ for the optimal situation and for T_d = ∞ are not shown in Fig. 3, because the following results are easily obtained:

(9) $$\begin{cases}{\text{Pc}}_{\text{u}}=0,\ {\text{Pc}}_{\text{d}}^{-}=0,\ {\text{Pc}}_{\text{d}}^{+}=1 & \text{optimal},\\ {\text{Pc}}_{\text{u}}=1,\ {\text{Pc}}_{\text{d}}^{-}=0,\ {\text{Pc}}_{\text{d}}^{+}=0 & {T}_{\text{d}}=\infty .\end{cases}$$

The percentage Pc_u for T_d = 0 is always zero; thus, it is also not shown in Fig. 3.
Figure 3 shows that the Pc_d^+ for the dynamic threshold situation is above 80% for E_b/N_0 less than 2.4 dB and decreases significantly when E_b/N_0 reaches 2.4 dB. This decrease is caused by the increase in Pc_d^−. Nevertheless, the Pc_d^+ performance in the dynamic threshold situation is the best among the three given situations, which means that the filtering of the OSD output codeword has been optimized. For constant thresholds, such as T_d = 600 and T_d = 1,000, Pc_d^+ fluctuates widely with E_b/N_0. As a result, T_d = 600 and T_d = 1,000 are suitable only for part of the entire E_b/N_0 region.
Fig. 3. Comparison of Pc_u, Pc_d^−, and Pc_d^+ for different thresholds. The black line denotes the result of the dynamic threshold; the blue line, that of T_d = 0; the red line, that of T_d = 600; and the green line, that of T_d = 1,000. For every situation, the lines with *, +, and o denote the percentages of P_u, P_d^−, and P_d^+, respectively.
The Pc_u performance for the dynamic threshold situation, which is less than 11% over the whole E_b/N_0 region, is far better than that for the two constant-threshold situations.
In all situations, the variation of Pc_d^− with E_b/N_0 exhibits a similar trend; that is, Pc_d^− increases as E_b/N_0 grows. This implies that, for large E_b/N_0, correctly detecting a decoding codeword error is difficult. Fortunately, the decoding bit error rate of BP itself for large E_b/N_0 is also small, which alleviates the bit errors caused by P_d^−.
The situation with T_d = 0 has a constant Pc_u of zero, which makes Pc_d^− and Pc_d^+ complementary. The two curves of Pc_d^− and Pc_d^+ form an "X" in the figure, with a crossing point near E_b/N_0 = 1.3 dB. This "X" form indicates that the OSD output codeword becomes increasingly accurate as E_b/N_0 increases.
The threshold factors (α) used for computing the dynamic threshold in Figs. 4 and 5 are 16 and 30, respectively.
Figures 4 and 5 show that the BER performance of the SC-BP-OSD algorithm is improved for both codes by introducing a dynamically calculated threshold. The BER improvement is approximately 0.1 dB for code A and approximately 0.15 dB for code B. For both codes, the BER performance of the proposed algorithm approaches that of the optimal situation; for code B in particular, the BER curve of the dynamic threshold situation almost coincides with that of the optimal situation.
Figures 4 and 5 also show that the BER performance of the SC-BP-OSD algorithm without threshold is worse than that of the pure BP algorithm for small E_b/N_0. This result differs from the BLER performance of the SC-BP-OSD algorithm, for which SC-BP-OSD always performs better than BP. With a dynamically calculated threshold, however, the proposed approach outperforms BP over the whole E_b/N_0 region when BER is considered.
Fig. 4. BER performance of code A for different thresholds.
Fig. 5. BER performance of code B for different thresholds.
Therefore, a dynamic threshold is computed from the mean of the absolute values of the LLR outputs of BP decoding. The codeword-error-detection analysis shows that SC-BP-OSD with dynamic threshold has better Pc_d^+ performance, which in turn leads to better BER performance than with a constant threshold. Using the dynamic threshold, the BER performance of two rate-1/2 LDPC codes approaches that of the optimal situation.
In addition, the computation of the dynamic threshold requires only the mean of the absolute values of the input LLR of OSD. As a result, the increase in the computational complexity of the proposed approach is negligible compared with the high complexity of the OSD algorithm itself.
This work was supported by the National Natural Science Foundation of China (Grant No. 61271175), the Fundamental Research Funds for the Central Universities (Grant No. 7214505101), and Open Project Program of Science and Technology on Information Transmission and Dissemination in Communication Networks Laboratory (ITD-U14006/KX142600013).
Corresponding Author: zma@mail.xidian.edu.cn
Ma Zhuo received his BS degree in communication engineering and his MS and PhD degrees in communication and information systems from Xidian University, China, in 2003, 2006, and 2012, respectively. He joined the State Key Laboratory of ISN, Xidian University, as an assistant in 2006 and became a lecturer in 2009. His research interests include channel coding and signal processing in wireless communication.
shydy@xidian.edu.cn
Du Shuanyi received his BS degree in communication engineering and his MS degree in communication and electronic systems from Xidian University, China, in 1983 and 1990, respectively. He joined the School of Communication Engineering, Xidian University, in 1983. He is now working as a professor at the State Key Laboratory of ISN, Xidian University. His research interests include wireless digital communication and digital signal processing.

Keywords: Channel coding; low-density parity-check code; reliability decoding; log-likelihood ratio accumulation

I. Introduction

A reliability-based decoding algorithm is used to resolve the performance degradation that occurs when a belief propagation (BP) algorithm is adopted to decode low-density parity-check (LDPC) codes with short and medium code lengths [1]. The performance degradation is caused by the short cycles in the Tanner graph of these codes, which cannot be ignored [2] in the case of short and medium code lengths.
References [1] and [3] perform reliability-based order statistic decoding (OSD) after every BP iteration. The difference is that in [3], a cyclic redundancy check (CRC) is added after LDPC encoding. The CRC can be used either to check the correctness of the OSD result or as the criterion for terminating BP iterations.
Considering the complexity of OSD decoding, another type of decoding scheme, called serial concatenated decoding, is more acceptable. In this scheme, the OSD algorithm is performed only after the last BP iteration. However, the log-likelihood ratio (LLR) oscillation phenomenon observed in [4] decreases the credibility of the LLR of the last iteration as a reliability measure. This problem is resolved in [5] by accumulating the LLR over the entire number of iterations as the reliability measure. Reference [6] generalizes such LLR accumulation to the probability domain. Reference [7] uses a conditioned LLR to feed an OSD decoder, which also leads to a serial concatenated decoding scheme. Another method of combining a BP algorithm and reliability-based decoding algorithms is reported in [8], where reliability-based decoding is applied alternately with BP. The reliability-based decoder in [8] uses the initial soft information instead of the soft values delivered by the last BP iteration whenever BP does not converge.
Many optimal or suboptimal reliability-based decoding algorithms are currently used for the concatenated decoding of LDPC codes, including the box-and-match algorithm (BMA) [9], the Chase-2 algorithm [10], and the KNIH algorithm [11].
Most of the aforementioned studies evaluate the performance of a concatenated decoding algorithm by block error rate (BLER) or codeword error rate (CER). Given that reliability-based decoding algorithms always return a valid codeword, once this codeword is incorrect (an undetected error occurs), more error bits are introduced than by a BP decoder. Therefore, the performance gain achieved in terms of bit error rate (BER) is not as striking as that in BLER for the same decoding scheme.
References [12] and [13] consider bounded maximum-likelihood (ML) decoders, such as a bounded-distance ML decoder, a bounded-angle ML decoder, and a bounded reciprocal-likelihood-ratio ML decoder. All of these bounded ML decoders introduce a bound to detect whether a decoding codeword is correct. When a decoding metric exceeds a given bound, a codeword error is detected, and an erasure is claimed.
This paper introduces the idea of the bounded decoder into the OSD algorithm. Based on the serially concatenated BP-OSD (SC-BP-OSD) algorithm, a bound, or threshold, is used to determine whether to adopt the result of the OSD decoder or that of the BP decoder. Unlike in [12] and [13], the threshold in the proposed algorithm is dynamically calculated to avoid mismatching.
II. SC-BP-OSD Algorithm with Threshold

Given that OSD is serially concatenated with BP decoding, it is performed only when BP decoding reaches its maximum number of iterations and the checksum of the related final output codeword is not zero. The following analysis concerns the case where OSD is invoked.
As previously discussed, OSD may choose a codeword other than the transmitted codeword as its decoding result, and the difference between these two codewords is no less than the minimum Hamming distance of the related LDPC code. Good LDPC codes often possess a large minimum Hamming distance; thus, considerable bit errors occur once OSD returns an incorrect codeword. A predictable BER performance gain is obtained if an invalid codeword can be excluded and the BP decoding result used instead. Finding a proper criterion for this choice is necessary.
An OSD algorithm contains two main steps: the first constructs the most reliable basis (MRB), and the second systematically reprocesses candidate codewords expressed in the MRB [1].
III. Dynamically Calculated Threshold

Since a fixed threshold cannot work properly over the entire E_b/N_0 region, a dynamic threshold is calculated from r̄, the mean of the absolute values of the LLR outputs of BP decoding, as given in (3).
IV. Performance Analysis for Codeword Error Detection

The threshold of the algorithm proposed in this paper determines whether a decoding codeword is correct; however, this decision itself may or may not be a correct one. If an OSD output codeword is incorrect and is detected as an error through the threshold check, then a detected error occurs, the probability of which is expressed as P_d^+.
V. Simulation Results

Two rate-1/2 LDPC codes are simulated to verify the effectiveness of the proposed approach. Both have a code length of 504. Code A is the (504,252,3,6) LDPC code used in Fig. 2. Code B is an irregular LDPC code constructed using the progressive edge-growth algorithm [16]. The simulation is performed under an AWGN channel, and the results are shown in Figs. 4 and 5.
VI. Conclusion

A dynamically calculated threshold is introduced in this paper to improve the BER performance of a serially concatenated BP-OSD algorithm. Most current improvements to the concatenated decoding algorithm are considered in relation to BLER performance, not BER; when BER is considered, most existing algorithms become mediocre in light of what they achieve under the BLER metric. Using a threshold to determine whether the OSD output codeword is credible is a straightforward idea, but the simulation results show that a constant threshold cannot perform well for all E_b/N_0 regions.
References

[1] M. Fossorier, "Iterative Reliability-Based Decoding of Low-Density Parity Check Codes," IEEE J. Sel. Areas Commun., vol. 19, no. 5, 2001, pp. 908-917. DOI: 10.1109/49.924874
[2] T.J. Richardson and R.L. Urbanke, "The Capacity of Low-Density Parity-Check Codes under Message-Passing Decoding," IEEE Trans. Inf. Theory, vol. 47, no. 2, 2001, pp. 599-618. DOI: 10.1109/18.910577
[3] S. Gounai and T. Ohtsuki, "Lowering Error Floor of Irregular LDPC Codes by CRC and OSD Algorithm," IEICE Trans. Commun., vol. E89-B, no. 1, 2006, pp. 1-10.
[4] S. Gounai and T. Ohtsuki, "Decoding Algorithms Based on Oscillation for Low-Density Parity Check Codes," IEICE Trans. Commun., vol. E88-A, no. 8, 2005, pp. 2216-2226.
[5] M. Jiang, "Reliability-Based Iterative Decoding of LDPC Codes Using Likelihood Accumulation," IEEE Commun. Lett., vol. 11, no. 8, 2007, pp. 677-679. DOI: 10.1109/LCOMM.2007.070450
[6] G.W. Li and G.Z. Feng, "Generalised Reliability-Based Syndrome Decoding of LDPC Codes," European Trans. Telecommun., vol. 19, no. 8, 2008, pp. 873-877. DOI: 10.1002/ett.1319
[7] L. Yang, M. Tomlinson, and M. Ambroze, "Decoding Low-Density Parity-Check Codes with Error-Floor Free over the AWGN Channel," Proc. WCNIS, Beijing, China, June 25-27, 2010, pp. 192-196.
[8] M. Baldi, "A Hybrid Decoding Scheme for Short Non-binary LDPC Codes," IEEE Commun. Lett., vol. 18, no. 12, 2014, pp. 2093-2096. DOI: 10.1109/LCOMM.2014.2367097
[9] Y.B. Bian and G.Z. Feng, "A Novel Concatenation Decoding Algorithm for Short LDPC Codes with Lower Complexities," Proc. PCCF, Nanning, China, 2009, pp. 550-554.
[10] S. Tong and H.J. Zheng, "On Combining Chase-2 and Sum-Product Algorithms for LDPC Codes," ETRI J., vol. 34, no. 4, 2012, pp. 629-632. DOI: 10.4218/etrij.12.0211.0510
[11] G.L. Qiao, "Parallel Decoding Scheme Based on OSD and KNIH Algorithms," Adv. Mater. Res., vols. 433-440, 2012, pp. 4813-4816.
[12] S. Dolinar, "The Limits of Coding with Joint Constraints on Detected and Undetected Error Rates," Proc. ISIT, Toronto, Canada, July 6-11, 2008, pp. 970-974.
[13] S. Dolinar, "Bounds on Error Probability of Block Codes with Bounded-Angle Maximum-Likelihood Incomplete Decoding," Proc. ISITA, Auckland, New Zealand, Dec. 7-10, 2008, pp. 1-6.
[14] R. Kerr and J. Lodge, "Near ML Performance for Linear Block Codes Using an Iterative Vector SISO Decoder," Proc. Turbo Coding, Munich, Germany, Apr. 3-7, 2006, pp. 1-6.
[15] A. Kabat, F. Guilloud, and R. Pyndiah, "New Approach to Order Statistics Decoding of Long Linear Block Codes," Proc. IEEE GLOBECOM, Washington, DC, USA, Nov. 26-30, 2007, pp. 1467-1471.
[16] D.J.C. MacKay, Encyclopedia of Sparse Graph Codes. http://www.inference.phy.cam.ac.uk/mackay/codes/data.html
[17] S.-Y. Chung, T.J. Richardson, and R.L. Urbanke, "Analysis of Sum-Product Decoding of Low-Density Parity-Check Codes Using Gaussian Approximations," IEEE Trans. Inf. Theory, vol. 47, no. 2, 2001, pp. 657-670. DOI: 10.1109/18.910580

Citation:

@article{HJTODO_2015_v37n4_736,
  title={Improved Reliability-Based Iterative Decoding of LDPC Codes Based on Dynamic Threshold},
  author={Zhuo, Ma and Shuanyi, Du},
  journal={ETRI Journal},
  publisher={Electronics and Telecommunications Research Institute},
  volume={37},
  number={4},
  year={2015},
  month={Aug},
  url={http://dx.doi.org/10.4218/etrij.15.0113.1265},
  DOI={10.4218/etrij.15.0113.1265}
}