Advanced
Exact Decoding Probability of Random Linear Network Coding for Tree Networks
Exact Decoding Probability of Random Linear Network Coding for Tree Networks
KSII Transactions on Internet and Information Systems (TIIS). 2015. Feb, 9(2): 714-727
Copyright © 2015, Korean Society For Internet Information
  • Received : September 15, 2014
  • Accepted : January 02, 2015
  • Published : February 28, 2015
Download
PDF
e-PUB
PubReader
PPT
Export by style
Share
Article
Author
Metrics
Cited by
TagCloud
About the Authors
Fang Li
State Key Laboratory of Integrated Services Networks (ISN), Xidian University Xi’an, 710071, China
Min Xie
School of Telecommunications Engineering, Xidian University Xi’an, 710071, China

Abstract
The hierarchical structure in networks is widely applied in many practical scenarios especially in some emergency cases. In this paper, we focus on a tree network with and without packet loss where one source sends data to n destinations, through m relay nodes employing random linear network coding (RLNC) over a Galois field in parallel transmission systems. We derive closed-form probability expressions of successful decoding at a destination node and at all destination nodes in this multicast scenario. For the convenience of computing, we also propose an upper bound for the failure probability. We then investigate the impact of the major parameters, i.e., the size of finite fields, the number of internal nodes, the number of sink nodes and the channel failure probability, on the decoding performance with simulation results. In addition, numerical results show that, under a fixed exact decoding probability, the required field size can be minimized. When failure decoding probabilities are given, the operation is simple and its complexity is low in a small finite field.
Keywords
1. Introduction
N etwork coding (NC) has been shown to offer advantages in throughput, robustness, power consumption, and security in both wireline [1] and wireless networks [2] , by allowing intermediate nodes to mix the information before forwarding it. Their work [1] was regarded as the commencement of research of NC. Li et al . [3] showed that linear network coding (LNC) can achieve the optimal throughput for a multicast transmission.
Later, Ho et al . [4] proposed that an effective, yet simple, approach is random linear network coding (RLNC). They also analyzed the performance by deriving upper bounds for the failure of this code. Furthermore, Huseyin et al . [5] defined two types of failure probabilities to characterize the performance analysis of RLNC and improved bounds in [4] . Their bounds are described by the number of internal nodes, instead of the number of channels or similar quantities.
Typically, the exact decoding probability under RLNC is derived in a fixed topology [6] - [8] . Trullols-Cruces et al . [6] computed the exact probability that a receiver obtains N linearly independent packets among K N received packets, when the senders use RLNC over a Galois field. This problem is equivalent to computing the probability that a N × K matrix has rank N , where each element is randomly selected from a Galois field with equal probability. Deriving the probability that this matrix has full rank in [7] can be viewed as a special case of Th. 1 in our work.
In this paper, we consider that the source sends information to all sinks at the theoretically maximum rate with RLNC for tree networks. From the network topological perspective, hierarchical structures can be reduced to tree-type networks. In addition, tree structured networks are scalable to expand coverage by increasing the depth of tree networks due to their flexibility and low delay, making it suitable for a variety of emergency applications. Such topology is widely used and is a basic building block of more complex multi-hop networks [9] - [13] . Because the topology of tree networks with packet loss is dynamic, RLNC allows higher data rates than routing and the operation is simple.
The main contributions of this paper are described as follows. Firstly, we derive the exact probability of RLNC over a Galois field for tree networks with and without packet loss. Secondly, for the convenience of computing, we propose upper bounds for the failure probability and improve the bound in [5] (Theorem 2) by replacing the number
PPT Slide
Lager Image
by the number m +1 for tree networks to decrease the complexity. Here m is the number of internal nodes, δ is equal to k - ω , k is the min-cut from s to all sinks t T , and ω is the number of symbols generated at the source. Finally, we illustrate that, the impact of major parameters, i.e. , the size of finite fields, the number of internal nodes, the number of sink nodes and the channel failure probability, on the decoding performance with simulation results. Numerical results also show that, under a fixed decoding probability, a required field size can be minimized.
The remainder of this paper is organized as follows. Section II introduces the definition of RLNC and presents the network model of tree networks. Section III derives closed-form probability expressions of the defined model, proves main results and describes numerical results. Section IV provides concluding remarks.
2. Basic Definitions and the Network Model
We consider a multi-hop relay tree network composed of one source, m internal nodes and n sink nodes, i.e. , T = { t 1 ,⋯, tn } in parallel transmission system as shown in Fig. 1 . There are k multiple channels between two nodes. So, k is the min-cut from s to all sinks t T . Every channel can transmit one field symbol per unit time. Further, denote the channel leading from node i to node j by the direct edge e = ( i , j ), where node i is called the tail of e and node j is called the head of e . For each node i , let Out ( i ) be the set of edges leaving node i and let In ( i ) be the set of edges entering node i .
PPT Slide
Lager Image
Network model.
Source messages
PPT Slide
Lager Image
= ( x 1 ,..., xω ) arranged in a row vector over a Galois field F q of size q are transmitted to the source s through ω imaginary channels in In ( s ) . Note that all of the arithmetic operations are performed over this field. Each of the m relays encodes the received packets using RLNC, and it sends the resulting packet toward its child nodes. Even if nodes in Fig. 1 join or leave, preexisting nodes do not need to change their coding operations to preserve the same guarantee of correctness. Ue is the message transmitted over the channel e . The coding process at each node i is represented as follows:
PPT Slide
Lager Image
where kde is called local encoding kernel or coefficient. The coefficient used to encode the data packets is randomly extracted from a field F q , with equal probability.
As we know from [14] , [15] , the global kernel fe for a channel e is a column vector such that
PPT Slide
Lager Image
The global kernels can be determined by the local kernels inductively by the next formula
PPT Slide
Lager Image
Therefore, we denote by Ft the ω × k matrix containing global kernels for channels in In ( t ) , i.e. ,
PPT Slide
Lager Image
where Ft is called the decoding matrix at sink t .
To simplify the discussion, we fix a sink node t T . Without loss of generality, let
PPT Slide
Lager Image
be an upstream to downstream order of nodes. The minimum cut capacity between the source node s and the sink node t is k , where k > ω . Denote rt be the number of internal nodes in disjoint paths P (t) from s to t . Furthermore, since the Menger's Theorem, there exit k disjoint paths from s to t denoted by P (t) = {
PPT Slide
Lager Image
: 1 ≤ j k }. Let
PPT Slide
Lager Image
be the set of all channels in these k paths. Let
PPT Slide
Lager Image
be the sequence of channels in the path
PPT Slide
Lager Image
satisfying
PPT Slide
Lager Image
In the following we use the concept of cuts from s to t represented in [5] . Intuitively, the first cut is CUT 0 ( t ) = { d 1 ,⋯, dω } and the second cut is CUT 1 ( t ) = {
PPT Slide
Lager Image
:1 ≤ i k } . At node i 1 , the next cut is defined from CUT 1 ( t ) by replacing channels in In ( i 1 ) ∩ CUT 1 ( t ) by their respective next channels in the path. Subsequently, once CUTj ( t ) is defined, CUT j+1 ( t ) is formed from [5] by the same method as above. By induction, all cuts CUTj ( t ) for j = 0,..., m +1 can be defined. Furthermore, denote
PPT Slide
Lager Image
( t ) = In ( ij ) ∩ CUTj ( t ) and
PPT Slide
Lager Image
( t ) = CUTj ( t ) -
PPT Slide
Lager Image
( t ) .
Before discussion, we immediately introduce the definition of successful decoding probabilities as follows. For convenience, let < In ( t )> be the linear space spanned by the global encoding kernel in { fe : e In ( t )} .
Definition 1: The successful probability of random linear network coding at sink t is
PPT Slide
Lager Image
and the successful probability of random linear network coding at all sink nodes is
PPT Slide
Lager Image
As shown in [3] and [5] , we have the following result for the network. There exist network codes such that the messages can be decoded successfully at sink t , if and only if any one of the following statements is true:
1) dim (< In ( t )>) = ω ;
2) there exist ω linearly independent global encoding kernel vectors in { fd : d In ( t )} ;
3) Rank ( Ft ) = ω .
In addition, main notations used in section 3 are as follows:
  • Fq: the finite field used.
  • q: the size of the finite fieldFq.
  • ω: the number of symbols generated at the source node.
  • δ: the number of redundancies, i.e.,δis equal tok-ω.
  • m: the number of internal nodes.
  • n: the number of sink nodes.
  • k: the min-cut fromsto all sinkst∈T.
  • rt: the number of internal nodes in disjoint paths P(t)fromstot.
  • r: the maximum number ofrt,i.e.,max{rt:t∈T} .
3. Results
- 3.1 Exact Decoding Probabilities
We have known that the performance analysis of RLNC plays an important part in theory and application. In general, it is difficult to compute the exact decoding probability of RLNC for a general communication network because the topology of a general communication network is unfixed. In this section, we calculate the exact decoding probability of RLNC for tree networks. At first, we give the following theorem.
Theorem 1. Let L be an n -dimensional linear space over finite field F q , L 0 and L 1 be two subspaces of dimensional k 0 and k 1 in space L , respectively, and < L 0 L 1 > = L . Let l 1 ,..., l n-k0+m=r ( m ≥1) be r uniformly distributed random vectors taking values in L .
Then
PPT Slide
Lager Image
Proof . To form the linear space L , we start with L 0 , then add vectors in l 1 ,..., l r to the subspace L 0 one by one. During the process of adding these vectors, if any vector l i falls in the space
PPT Slide
Lager Image
a failure or a redundancy occurs, where B 0 = L 0 . Under the condition that l i B i-1 for 1 ≤ j i -1, we have
PPT Slide
Lager Image
We prove this theorem by induction on m = r - n + k 0 , the index of the number of redundancies in L 1 .
For m =1, we have r = n - k 0 +1 . Note that if a failure occurs firstly in the j th selection, there must not be any failure in the following sequence. If such failure occurs, with probability
PPT Slide
Lager Image
we will leave exactly n - k 0 linear independent vectors. In the derivation, we obtain the probability that n - k 0 linear independent vectors are picked, given r = n - k 0 +1 selections, as
PPT Slide
Lager Image
By the above iterating method, we have proved it for m =1 and obtain the probability for m >1
PPT Slide
Lager Image
From the above equation, we can get the formula
PPT Slide
Lager Image
Assume that the result of the theorem is proved for 2,⋯, m , we now prove it for m +1, we have
PPT Slide
Lager Image
where
PPT Slide
Lager Image
and Eq. ( a ) can be obtained from Eq. (10).
The theorem is proved.
Remark 1. We can observe that pr (dim(< L 0 ∪{ l 1 ,..., lr }>= n )) is not related to the dimension of L 1 .We also notice that when k 0 = 0 , we get the following corollary. This corollary can be directly proved by the method in combinatory in [16] , too.
Corollary 2. Let L be an n - dimensional linear space over finite field F q . Let l 1 ,..., lr be r uniformly distributed random vectors taking values in L . Then
PPT Slide
Lager Image
Applying this corollary and the definition of CUTj , we compute exact decoding probability at a destination node and at all destination nodes in tree networks. In order to decode, a destination node has to collect as many linearly independent vectors as the number of packets that were originally mixed in, and then solve the resulting system of linear equations. The decoding probability thus depends on the coding design and the selected coefficients. The coefficients used to encode the data packets are extracted from a Galois field of size q .
Theorem 3. For tree networks as shown in Fig. 1 , let the minimum cut capacity for sink node t T be k and let the information rate be ω symbols per unit time. Local encoding coefficients are chosen independently and uniformly from the finite field F q .
  • (1) The failure decoding probability of RLNC at sinktsatisfies:
PPT Slide
Lager Image
  • (2) The failure decoding probability of RLNC for tree networks satisfies:
PPT Slide
Lager Image
Proof . Given in Appendix A.
In particular, we denote r = max { rt : t T } . The following corollary can be obtained immediately.
Corollary 4: For tree networks as shown in Fig. 1 , the failure decoding probability of RLNC at sink t satisfies:
PPT Slide
Lager Image
We have described that sometimes it is difficult to use predesigned LNC based on the network topology even if the topology is known. When nothing about the topology of tree networks except the number of internal numbers m are known, we analyze the performance of RLNC in the next corollary.
Corollary 5. For tree networks as shown in Fig. 1 ,
  • (1) the failure decoding probability of RLNC at sinktsatisfies:
PPT Slide
Lager Image
  • (2) the failure decoding probability of RLNC for tree networks satisfies:
PPT Slide
Lager Image
We can notice that we improve the bound in [5] (theorem 2) by replacing the number
PPT Slide
Lager Image
by the number m +1. As above mentioned, our analyses show that the failure probability is a function of m , δ and q .
In practice, due to various reasons, packet loss may occur. Unlike previous work, we consider that the relay nodes may not receive some packets due to link failures. This implies that a destination may be unable to successfully decode the received data packets due to both missing packets at the relays and linearly dependent coefficient vectors. In general, channel failure is a low probability event, that is, 0 ≤ p << 1/ 2 . In particular, when p = 0 , the following theorem is equivalent to Theorem 3.
Theorem 6. For tree networks as shown in Fig. 1 with channel failure probability p ,
  • (1) the failure decoding probability of RLNC at sinktsatisfies:
PPT Slide
Lager Image
  • (2) the failure decoding probability of RLNC for tree network satisfies:
PPT Slide
Lager Image
Proof . Given in Appendix B. □
- 3.2 Numerical Results
We now show the impact of major parameters on the decoding performance. In the following, we fix the number of symbols generated at the source ω = 5.
As discussed above, our analysis concerns the decoding probability at a (all) sink node(s) without packet loss, as a function of the size of finite field q , (the number of sink nodes n ,) and the number of internal nodes m in Th. 3. Typically, the exact probability is limited to the linear independence of the random vectors employed for the encoding of received packets.
Firstly, by a comparison of Fig. 2 .(a) and Fig. 2 .(b), we observe that the number of sink nodes n plays an important role in analysing the decoding performance. Under other parameters are fixed, the smaller the number n is in a tree network, the better performance it has. Secondly, represented by different continuous and dashed lines, Fig. 2 shows the exact decoding probability as a function of the size of finite field q , when δ is fixed. Clearly, as q increases, the performance improves greatly. We also observe that for q = 16 both of two successful probabilities are larger than 0.9 even in the presence of a small number of redundancy. Finally, represented by each continuous or dashed line, Fig. 2 shows the probability as a function of δ ,when q is determined. Obviously, as δ increases to 2 , the performance improves dramatically. We can obtain that δ has a significant impact on the decoding performance.
PPT Slide
Lager Image
(a) shows the simulation results and the exact decoding probability at sink t obtained over 105 runs when ω = 5 and rt = 3 . (b) shows the simulation results and the exact decoding probability at all sinks obtained over 105 runs when ω = 5, m = 10 and n = 5 .
The main difference between Th. 3 and Th. 6 is that packet loss may occur in the latter. We now show the impact of major parameters, i.e., the size of finite fields, the number of internal nodes m , the number of sink nodes n and the channel failure probability p , on the decoding performance at sink t and at all sinks with channel failure probability in Th. 6 .
A comparison of Fig. 3 .(a) and Fig. 3 .(b) shows that the number of sink nodes n has a marked impact on the decoding performance, when other parameters are fixed. Clearly, as n increases, the decoding probability decreases greatly. Among different lines, we also show that the decoding probability is a function of the size of finite field q , when δ and p are fixed. As q increases, the performance improves greatly. Further, we observe that the decoding probability is a function of δ , when q and p are fixed in each line. We observe that for δ = 1, very poor performance is obtained, indeed, the channel failure probability, which makes message decoding fail, has a dominant effect. Clearly, as δ increases to 2, the performance improves greatly and as δ increases to 3, both of two successful probabilities are larger than 0.95 even in the presence of a small number q . Differences between analytical and simulative results are in the order of 10 -3 in Fig. 2 and Fig. 3 . This implies that simulation results match analytical results very well.
PPT Slide
Lager Image
(a) shows the simulation results and the exact decoding probability at sink t obtained over 105 runs when ω = 5, p = 0.1 and rt = 3 . (b) shows the simulation and the exact decoding probability at all sinks obtained over 105 runs when ω = 5, p = 0.1, m = 10 and n = 5 .
We investigate the impact of the number of channel failure probability p on the decoding performance in Fig. 4 . This figure shows the comparison of successful decoding probabilities with and without packet loss with ω = 5, rt = 3, m = 10, n = 5, p = 0.05, q = 8,16,32 and δ = 1,2,3,4 . From the above figure, we can observe that our result in Th. 3 and Th. 6 represented by blue lines and red lines, respectively, and for δ = 1 with channel failure probability p = 0.05, very poor performance is obtained. As δ increases to 2, the performance improves greatly. However, as δ increases from 3 to 4, the performance improves slowly. The importance of q and p is confirmed by Fig. 4 . Numerical results further showed that, as the field size increases, the decoding performance improves significantly.
PPT Slide
Lager Image
(a) shows a comparison of successful decoding probabilities with and without packet loss at sink t as δ varies. (b) shows a comparison of successful decoding probabilities with and without packet loss at all sinks as δ varies.
4. Conclusion
In this paper, we mainly investigated RLNC for tree networks with and without packet loss by combining techniques from linear algebra, network flows and randomization. We derived closed-form decoding probability expressions of tree networks, as a function of the size of finite fields, the number of internal nodes, the number of sink nodes and the channel failure probability. For the purpose of simple computing , we also proposed a sharper bound on the required field size, when the failure probability is given.We investigated the impact of these major parameters on the decoding performance at one sink and at all sinks with simulation results. Furthermore, numerical results illustrated that the required field size can be minimized, when other parameters are fixed.
BIO
Fang LI was born in 1986. She received the B.S. degree in information and computing science from Shangqiu Normal University, China in June 2009. She studied for M.S. degree in cryptography from Xidian University, China in September 2009. She is currently working towards her Ph.D. degree in cryptography at Xidian University, China. Her current research interest includes network coding and information theory.
Min XIE is now an associate professor at Xidian University. Her main research interest are stream ciphers, block ciphers, information theory, and network coding. She received her Ph.D degree in applied mathematics from Graduate from University of Chinese Academy of Sciences in 2003.
References
Ahlswede L. , Cai N. , Li S.-Y. R. , Yeung R. W. 2000 “Network information flow” IEEE Trans. Inf. Theory 46 (4) 1204 - 1216    DOI : 10.1109/18.850663
Zou Y , Zhu J , Zheng B 2013 “A fully distributed opportunistic network coding scheme for cellular relay networks” Wireless Communications and Networking Conference (WCNC) 2937 - 2942
Li S.-Y. R. , Yeung R. W. , Cai N. 2003 “Linear network coding” IEEE Trans. Inf. Theory 49 (2) 371 - 381    DOI : 10.1109/TIT.2002.807285
Ho T. , Médard M. , Koetter R. , Karger D. , Effros M. , Shi J. , Leong B. 2006 “A random linear network coding approach to multicast” IEEE Trans. Inf. Theory 52 (10) 4413 - 4430    DOI : 10.1109/TIT.2006.881746
Balli H. , Yan X. , Zhang Z. 2009 “On randomized linear network codes and their error correction capabilities” IEEE Trans. Inf. Theory 55 (7) 3148 - 3160    DOI : 10.1109/TIT.2009.2018173
Trullols-Cruces O. , Barcelo-Ordinas J.M. , Fiore M. 2011 “Exact decoding probability under random linear network coding” IEEE Commun. Lett. 15 (1) 67 - 69    DOI : 10.1109/LCOMM.2010.110310.101480
Zhao Xubo 2012 “Notes on ‘Exact Decoding Probability Under Random Linear Network Coding’” IEEE Commun. Lett. 16 (5)    DOI : 10.1109/LCOMM.2012.041112.112564
Chiasserini Carla-Fabiana , Viterbo Emanuele , Casetti Claudio 2013 “Decoding Probability in Random Linear Network Coding with Packet Losses” IEEE Commun. Lett. 17 (11)    DOI : 10.1109/LCOMM.2013.091113.131361
Bose S. , Gayme D. F. , Low S. , Chandy K. M. 2011 “Optimal power flow over tree networks” in Proc. of 49th Annu. Allerton Conf. Communication, Control, and Computing 1342 - 1348
Bhavnagarwala A. J. , Kapoor Ashok , Meindl J. D. 2000 “Generic models for interconnect delay across arbitrary wire-tree networks” Interconnect Technology Conference 129 - 131    DOI : 10.1109/IITC.2000.854302
Ma X. , Huang Q.-Y. , Shu X.-M. , Lu Q.-Y. 2012 “Design and implementation of one redundancy function in wireless tree networks” IEEE in Proc. of 2012 International Conference on MIC
Lee S.-H. , Chung S.-Y. 2013 “Capacity of a class of multicast tree networks” IEEE Trans. Inf. Theory 59 (6) 3848 - 3857    DOI : 10.1109/TIT.2013.2253871
Kohavi Zvi , Berger Israel 1975 “Fault Diagnosis in Combinational Tree Networks” IEEE Trans. Comp. 1161 - 1167
Koetter R. , Médard M. 2003 “An algebraic approach to network coding” IEEE/ACM Trans. Netw. 11 (5) 782 - 795    DOI : 10.1109/TNET.2003.818197
Cai N. 2006 Network Coding Theory Now Publishers Inc
van Lint J.H. , Wilson R. M. 2001 A course in Combinatorics 2nd edition Cambridge university press