Hybrid Multicast and Segment-Based Caching for VoD Services in LTE Networks
ETRI Journal. 2015. Aug, 37(4): 685-695
Copyright © 2015, Electronics and Telecommunications Research Institute (ETRI)
  • Received : November 04, 2014
  • Accepted : May 04, 2015
  • Published : August 01, 2015
About the Authors
Kwangjin Choi
Seong Gon Choi
Jun Kyun Choi

Abstract
This paper proposes a novel video delivery scheme that reduces the bandwidth consumption cost from a video server to terminals in Long-Term Evolution networks. This proposed scheme combines optimized hybrid multicast with a segment-based caching strategy for use in environments where the maximum number of multicast channels is limited. The optimized hybrid multicast, allocation of multicast channels, and cache allocation are determined on the basis of a video’s request rate, the related video’s length, and the variable cost per unit size of a segment belonging to the related video. Performance evaluation results show that the proposed scheme reduces a video’s delivery costs. This work is applicable to on-demand TV services that feature asynchronous video content requests.
I. Introduction
The volume of video traffic transferred through data networks has increased exponentially in recent years. This is because people began to consume large amounts of video content through over-the-top services of data networks, rather than from cable and terrestrial broadcast services. Therefore, since network capacity is limited, it has become necessary to reduce video traffic by considering video delivery strategies from the viewpoint of network design and service operation.
To date, there have been many studies aimed at reducing video traffic through better management of broadcasting, multicasting, and caching. The original concepts of broadcast and multicast were developed with only real-time streams in mind; on-demand streams, in which request times are asynchronous, were not a concern. Some researchers proposed combination techniques that used multiple multicast streams to accommodate video-on-demand (VoD) services [1]–[3]. In these cases, however, terminals have to wait to receive the beginning of the first segment of a video, which creates a waiting-time problem [1]–[3]. In addition, previous studies did not consider environments in which the maximum number of multicast channels is finite, as it is in Long-Term Evolution (LTE) networks.
In addition, traditional caching schemes have been studied under the assumption that only unicast transmissions are used [4]–[6]. These caching schemes considered only the popularity distributions of videos and focused on reducing the bandwidth consumption cost generated by highly popular videos. The bandwidth consumption cost per unit size of a segment was not taken into consideration, even though this cost changes with the number of segments and the transmission scheme.
To overcome the limitations of previous studies, this paper proposes a novel video delivery scheme called hybrid multicast and segment-based caching (HMSC). The proposed scheme reduces the bandwidth consumption cost of VoD services, in comparison with existing schemes, in LTE networks in which the number of multicast channels is limited. HMSC minimizes unicast streams that cause large bandwidth consumption and reduces generation of unnecessary multicast channels. It achieves this by separating the present multicast stream from the next recurrent multicast stream. In addition, it maximizes the caching effect by storing those video segments that generate higher bandwidth consumption cost per unit size.
The performance of the proposed scheme is evaluated using simulations. It is compared with existing transmission schemes ([1]–[3] and [7]–[8]) that generate multicast channels repeatedly, in succession and unconditionally, and with an existing caching scheme [5] that considers only the popularity of the video content. The performance results indicate that HMSC incurs a lower bandwidth consumption cost than the existing transmission schemes. Furthermore, the results indicate that HMSC significantly reduces video delivery costs over the entire range of cache server storage capacities.
II. Related Work
To deliver substantial video traffic with efficient consumption of available bandwidth, 3GPP has been developing evolved Multimedia Broadcast Multicast Services (eMBMS) [9] . However, the original multicast concept is appropriate only to deliver real-time broadcast content; it is not suitable for application to asynchronous services such as VoD.
There has been a lot of research into the use of broadcast and multicast to reduce the number of serving streams for asynchronous services. Researchers in early studies proposed batching schemes that aggregate arriving requests and provide video delivery using multicast channels [10]–[11]. The major drawback of these solutions is that terminals that send their requests early have to wait until a multicast channel is initiated for the batched requests.
Other researchers tried to reduce this waiting time by transmitting multiple multicast streams for each video. In one study [12], a threshold-based multicast scheme was proposed that staggered the starting times across multiple channels, and in another [2], researchers proposed a fast broadcast (FB) scheme that repeatedly broadcasts 2^(i−1) segments on the ith broadcast channel. In a previously cited study [1], a harmonic broadcast (HB) scheme was proposed, in which each video segment is delivered through a dedicated broadcast channel with harmonically reduced bandwidth.
The broadcast schemes of [1] and [2] did not solve the underlying problem; that is, the requested playing time does not coincide with the transferred time stream. Although this problem was solved in [12] by combining patching with multicasting, that approach required significantly larger bandwidth consumption than other broadcast schemes. In our previous study [13], we proposed a hybrid broadcast transmission scheme that required small bandwidth consumption and solved the waiting-time problem. However, this approach assumed an infinite number of broadcast channels and did not consider caching. In addition, it dissipated some bandwidth resources by transmitting broadcast channel streams regardless of whether requests arrived.
There have been many studies on caching schemes to reduce the cost of video delivery from a source server to terminals [4]–[6], [14]–[15]. Sofman and Krogfoss [5] proposed hierarchical caching for IPTV services, and De Vleeschauwer and Laevens [6] studied the effect of caching algorithms for on-demand IPTV services. However, since most studies assumed only unicast transmission, they focused on caching popular videos that generate large delivery costs, rather than on caching segments. In a few studies [14]–[15], the segment-caching effect was analyzed; however, those researchers proposed caching only the prefix segments of a video, not the suffix segments. They did not take into account the possibility that the bandwidth consumption cost of a suffix segment of a video might be larger than that of a prefix segment of the same video.
III. System Model
- 1. Network Architecture
The on-demand video delivery service architecture considered in this paper consists of the public Internet, LTE networks, a source server, cache servers, and terminals that request videos asynchronously. The source server is placed on the public Internet and the terminals are connected to LTE networks. Cache servers are located near Evolved Node B (eNodeB) of LTE networks, as depicted in Fig. 1 . The source server stores all video content, and the cache server stores partial segments of each video. If a segment of a video is stored on the cache server, then the segment is delivered from the cache server to terminals. The other segments of the requested video are delivered from the source server.
Fig. 1. Network architecture.
This paper considers scenarios in which a number of videos are delivered from the source server, or from cache servers, to terminals through LTE networks with eMBMS. We assume that eMBMS exploits multicast-broadcast single-frequency network (MBSFN) operation at the base stations (eNodeBs). The eNodeBs, which are synchronized within an MBSFN area, transmit the same data using the same wireless resources allocated for the broadcast or multicast channel. In an eNodeB cell, each multicast channel (MCH) is modulated with the same coding scheme in the wireless area, and the number of MCHs is finite [16]–[17].
LTE networks deliver video traffic through unicast or multicast channels. It is assumed that terminals can receive multiple multicast channels and unicast channels concurrently. In addition, cache servers placed near the eNodeBs of LTE networks store video segments to reduce the traffic volume from the source server to terminals.
- 2. Video Delivery Cost
In this paper, it is assumed that cache server installation cost is insignificantly small, compared with the transmission cost. Hence, the video delivery cost is defined as the average bandwidth consumption cost.
Let N ( v ) denote the average number of serving channels for a video v per unit of time, and let C ( v ) denote the average bandwidth consumption of the above serving channels for the video v . Here, C ( v ) is equal to the average amount of streaming by a server per unit of time. As already mentioned in Section III-1, a cache server is placed between the source server and terminals and takes over the role of streaming to terminals. This means that the greater the number of channels the cache server serves for the requested video, the fewer channels the source server serves. Therefore, the cost related to the cache server, C ca , and the cost related to the source server, C s , are defined as follows:
$$C_{\mathrm{ca}} = \sum_{v_{i,j} \in G} C(v_{i,j}), \qquad (1)$$

$$C_{\mathrm{s}} = \sum_{i=1}^{V} \sum_{j=1}^{J_i} C(v_{i,j}) - \sum_{v_{i,j} \in G} C(v_{i,j}), \qquad (2)$$

where V is the total number of videos, J_i is the number of segments of the ith video, v_{i,j} is the jth segment of the ith video, and G represents the set of segments stored on the cache server.
In addition, the total video delivery cost, C_all, is defined as

$$C_{\mathrm{all}} = \beta_1 C_{\mathrm{s}} + \beta_2 C_{\mathrm{ca}}, \qquad (3)$$

where β_1 and β_2 are scalar factors used to normalize the costs of the source server and the cache server, respectively. It is assumed that β_2 is much smaller than β_1, since traffic from the source server is delivered through both the core network and the access network, whereas traffic from the cache server is delivered only through the access network. Table 1 presents the key parameters that appear in this paper.
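For illustration, the cost model in (1)–(3) can be sketched in Python as follows. The segment identifiers and per-segment cost values are hypothetical, introduced only for this example; the default β values match the evaluation settings used later in Section V.

```python
# A minimal sketch of the delivery-cost model in (1)-(3). `seg_cost` maps a
# segment id (i, j) to its average bandwidth consumption C(v_{i,j}); `cached`
# is the set G of segments stored on the cache server. Values are illustrative.

def delivery_costs(seg_cost, cached, beta1=1.0, beta2=0.001):
    c_ca = sum(cost for seg, cost in seg_cost.items() if seg in cached)  # (1)
    c_s = sum(seg_cost.values()) - c_ca                                  # (2)
    c_all = beta1 * c_s + beta2 * c_ca                                   # (3)
    return c_s, c_ca, c_all

# Example: two videos with two segments each, one segment cached.
seg_cost = {(1, 1): 0.8, (1, 2): 0.3, (2, 1): 0.5, (2, 2): 0.1}
print(delivery_costs(seg_cost, cached={(1, 1)}))
```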
Table 1. Key parameters used in this paper.

Symbol : Explanation and meaning
β1, β2 : Factors for normalization of the cost
λ_all : Sum of the request rates of all videos
λ_i : Request rate of the ith video
b : Streaming rate of a video
C_all, C_s, C_ca : Total video delivery cost; video delivery cost incurred by the source server; video delivery cost incurred by the cache server
C_(·)(v_i, m) : Average bandwidth consumption generated by video v_i when allocating m multicast channels to video v_i
C_(·)(v_{i,j,δ}, m) : Average bandwidth consumption generated by the ith video's jth segment of unit size δ when allocating m multicast channels to video v_i
G : Set of segments stored at the cache server
J_i : Number of segments of video v_i
L : Length of a video (seconds)
M : Maximum number of multicast channels
N_(·)(v) : Average number of serving channels for a video v per unit of time
R(m_i) : Bandwidth consumption reduced by allocating m_i multicast channels to video i compared with allocating zero multicast channels
S_ca : Storage size of the cache server
V : Total number of videos
IV. Proposed Video Delivery Scheme
The proposed scheme, HMSC, transmits a video using three transmission schemes that combine multicast and unicast. In this paper, these three hybrid video transmission schemes are called unicast (UNI), hybrid fast multicast (HFM), and hybrid harmonic multicast (HHM). UNI delivers a video over a unicast channel, whereas HFM and HHM divide a video into several segments and assign the segments to multiple dedicated multicast channels.
HMSC determines which transmission scheme is selected (UNI, HFM, or HHM) and how many dedicated multicast channels for a video are allocated. These decisions are made based on the maximum number of multicast channels in the LTE networks, the video length, and the request rate for the video, with the goal of minimizing the video delivery cost. Video segments may be delivered in three ways. In the first way, segments are delivered only through unicast channels. In the second, segments are delivered only through multicast channels, and in the third way, segments are delivered through both unicast and multicast channels. This means that the average bandwidth consumption generated by each video segment may vary, even though the segments belong to the same video. HMSC maximizes the caching effect by storing segments that generate large cost, after analyzing the transmission scheme and average bandwidth consumption.
- 1. Multicast and Unicast Combination Approach
This subsection describes how UNI, HFM, and HHM work under the proposed multicast and unicast combination approach. In addition, for each of the schemes, we formulate several equations for quantifying their performance. Assuming that the optimal transmission scheme for video vi is to be determined by the request rate and length of the video, Fig. 2 describes the procedure for unicast and multicast channel initiation when a terminal requests video vi .
Fig. 2. Procedure of unicast and multicast channel initiation when a terminal requests video v_i.
Note that N_(·)(v_i, m) and C_(·)(v_i, m) denote the average number of serving channels per unit of time and the average bandwidth consumption per unit of time by the servers, respectively, when transmitting video v_i using m multicast channels. Both quantities depend on the request rate, λ_i, and the video length, L, of video v_i.
A. UNI
For UNI, all requests are served on unicast channels. Therefore, N_u(v_i, m) and C_u(v_i, m) are calculated as follows:

$$N_{\mathrm{u}}(v_i, m) = N_{\mathrm{u}}(v_i, 0) = \lambda_i L, \qquad (4)$$

$$C_{\mathrm{u}}(v_i, m) = C_{\mathrm{u}}(v_i, 0) = b \lambda_i L, \qquad (5)$$

where b is the streaming rate of video v_i.
Using (4) and (5), C_u(v_{i,j,δ}, m), the average bandwidth consumption generated by the ith video's jth segment of unit size δ when allocating m multicast channels to video v_i transmitted in UNI, is calculated as follows:

$$C_{\mathrm{u}}(v_{i,j,\delta}, 0) = C_{\mathrm{u}}(v_{i,j,\delta}) = \frac{C_{\mathrm{u}}(v_{i,j})}{l_{i,j}} = \frac{b \lambda_i L}{bL} = \lambda_i, \qquad (6)$$

where l_{i,j} is the length of the jth segment of video v_i and v_{i,j,δ} is a sub-segment of unit size of the jth segment of video v_i.
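As a small illustration, the UNI formulas (4)–(6) can be expressed directly in Python; the parameter values in the example are illustrative only.

```python
# Sketch of the UNI formulas (4)-(6): with unicast only, the expected number of
# ongoing streams is lambda_i * L, the total bandwidth consumption is
# b * lambda_i * L, and the cost per unit size of any part of the video is lambda_i.
def uni_num_channels(lam_i, L):
    return lam_i * L                      # (4): N_u(v_i, 0)

def uni_cost(lam_i, L, b=1.0):
    return b * lam_i * L                  # (5): C_u(v_i, 0)

def uni_cost_per_unit(lam_i):
    return lam_i                          # (6): C_u(v_{i,j,delta})

# Example: 0.01 requests/s for a 3,600 s video streamed at b = 1 Mbps.
print(uni_num_channels(0.01, 3600), uni_cost(0.01, 3600), uni_cost_per_unit(0.01))
```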
B. HFM
For HFM, a video is divided into 2^(m_f) segments of equal size T_f (= L/2^(m_f)), and the video is repeatedly transmitted using at most m_s + m_f multicast channels. In other words, the first segment is delivered through m_s staggered multicast channels started at intervals of at least T_s (= T_f/m_s) seconds, and the other 2^(m_f) − 1 segments are delivered through m_f fast multicast channels. Unlike the fast channels of traditional FB schemes [2]–[3], the fast multicast channels of HFM do not occupy bandwidth continuously: in HFM, a fast multicast channel is deallocated when no terminal requests the segments carried by its stream.
As shown in Fig. 3(a), if a request arrives at the server at t_0, then HFM initiates m_f + 1 new multicast channels for the video, since there is no multicast channel already serving it. The first segment is delivered through “staggered multicast channel 1,” and the other segments are delivered through the kth “fast multicast channel,” which contains 2^(k−1) segments. The initial duration of each multicast channel equals the playout length of the segments dedicated to it; thus, HFM retains staggered multicast channel 1 for T_f seconds and fast multicast channel k for 2^(k−1) T_f seconds. Some time later, the second request arrives at t_1, within T_s seconds of the start of staggered multicast channel 1, which delivers the first segment. The terminal that sent the second request then joins the existing staggered multicast channel 1 and fast multicast channels 1, 2, and 3. The missed part of the first segment is delivered through a unicast channel.
Fig. 3. Proposed transmission schemes: (a) HFM and (b) HHM.
If the second request arrives more than T_s seconds after the latest staggered multicast stream started (for example, at t_2), then the request initiates a new staggered multicast channel to deliver the first segment; the duration of the existing staggered multicast channel therefore does not change. Since the first fast multicast channel is still ongoing when the second request arrives, the request joins the first fast multicast channel, and (t_2 − t_0) seconds are added to its duration so that the complete second segment is delivered to the requesting terminal. If the first fast multicast channel has already terminated when the next request arrives (for example, at t_6), then the request reinitiates the first fast multicast channel to receive the second segment. Likewise, the time gap between the start time of the kth fast multicast channel and the arrival time of the latest request may be added to the duration of the kth fast multicast channel, or the kth fast multicast channel may be reinitiated by the latest request. HFM does not cause a waiting-time problem in the middle of video playback because the terminal receives the segments of the requested video ahead of time or just in time.
The average number of ongoing unicast channels and staggered multicast channels is derived from [12], and the probability that fast multicast channel k is not ongoing is identical to the probability that no request has arrived within the last 2^(k−1) T_f seconds. Therefore, the average number of ongoing channels in HFM is calculated as follows:

$$N_{\mathrm{f}}(v_i, m) = N_{\mathrm{f}}(v_i, T_{\mathrm{f}}, T_{\mathrm{s}}, J_i) = \frac{\lambda_i^2 T_{\mathrm{s}}^2 + 2\lambda_i T_{\mathrm{f}}}{2(\lambda_i T_{\mathrm{s}} + 1)} + \sum_{k=1}^{\log_2 J_i} \left(1 - e^{-2^{k-1}\lambda_i T_{\mathrm{f}}}\right), \qquad (7)$$

where J_i = 2^(m_f), T_f = L/2^(m_f), T_s = T_f/m_s, and m = m_s + m_f.
Since all unicast and multicast channels in HFM use the same streaming rate, the average bandwidth consumption cost per unit of time is calculated as follows:

$$C_{\mathrm{f}}(v_i, m) = b\,N_{\mathrm{f}}(v_i, m) = b\left\{\frac{\lambda_i^2 T_{\mathrm{s}}^2 + 2\lambda_i T_{\mathrm{f}}}{2(\lambda_i T_{\mathrm{s}} + 1)} + \sum_{k=1}^{\log_2 J_i}\left(1 - e^{-2^{k-1}\lambda_i T_{\mathrm{f}}}\right)\right\}. \qquad (8)$$
In HFM, only the prefix of the first segment is delivered through both unicast channels and multicast channels, while the suffixes of segments are delivered through multicast channels only. Each segment on the multicast channels is delivered with a different repetition period; for instance, when the total number of multicast channels is three, the second segment is repeated with a period of about T_f, whereas the third segment is repeated with a period of about 2T_f. The average bandwidth consumption generated by the jth segment of unit size transmitted in HFM is calculated using (8) as follows:

$$C_{\mathrm{f}}(v_{i,j,\delta}, m) = \begin{cases} \dfrac{\lambda_i^2 T_{\mathrm{s}} + 2\lambda_i}{2(\lambda_i T_{\mathrm{s}} + 1)} & \text{for } j = 1, \text{ prefix}, \\[2mm] \dfrac{\lambda_i}{\lambda_i T_{\mathrm{s}} + 1} & \text{for } j = 1, \text{ suffix}, \\[2mm] \dfrac{1 - e^{-2^{k-1}\lambda_i T_{\mathrm{f}}}}{2^{k-1} T_{\mathrm{f}}} & \text{for } 2^{k-1} < j \le 2^{k},\; 1 \le k \le \log_2 J_i. \end{cases} \qquad (9)$$
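The HFM cost model in (7)–(9) can be sketched in Python as follows; the function names, the example parameters, and the per-unit helper (which covers only the suffix-segment case of (9)) are illustrative assumptions.

```python
import math

# Sketch of the HFM formulas (7)-(9). The video is split into J = 2**m_f segments
# of length T_f = L / J; the first segment uses m_s staggered channels spaced
# T_s = T_f / m_s apart, and fast multicast channel k carries 2**(k-1) segments.
def hfm_num_channels(lam, L, m_s, m_f):
    J = 2 ** m_f
    T_f = L / J
    T_s = T_f / m_s
    staggered = (lam**2 * T_s**2 + 2 * lam * T_f) / (2 * (lam * T_s + 1))
    fast = sum(1 - math.exp(-(2 ** (k - 1)) * lam * T_f)
               for k in range(1, m_f + 1))          # k = 1 .. log2(J)
    return staggered + fast                         # (7)

def hfm_cost(lam, L, m_s, m_f, b=1.0):
    return b * hfm_num_channels(lam, L, m_s, m_f)   # (8)

def hfm_cost_per_unit_suffix(lam, L, m_f, j):
    # Last case of (9): per-unit-size cost of segment j >= 2, where
    # 2**(k-1) < j <= 2**k selects fast multicast channel k.
    T_f = L / (2 ** m_f)
    k = math.ceil(math.log2(j))
    return (1 - math.exp(-(2 ** (k - 1)) * lam * T_f)) / ((2 ** (k - 1)) * T_f)

# Example: lambda_i = 0.01 requests/s, L = 3,600 s, m_s = 2, m_f = 3.
print(hfm_cost(0.01, 3600, 2, 3), hfm_cost_per_unit_suffix(0.01, 3600, 3, 5))
```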
C. HHM
For HHM, a video is divided into segments of equal size T_h, and the video is repeatedly transmitted using at most m_s + m_h multicast channels. Since a genuine harmonic multicast scheme does not deliver a video stream to a terminal in time, it requires any terminal that requests video v_i to wait for (J_i − 1)L/J_i^2 units of time [18]. Therefore, to solve this waiting-time problem, the first segment is delivered through a unicast channel and m_s dedicated staggered multicast channels started at intervals of at least T_s seconds. Each remaining jth segment is divided into j − 1 sub-segments, which are delivered through harmonic multicast channel j − 1 at a streaming rate of b/(j − 1). Unlike the channels of traditional HB schemes [1], [7], the harmonic multicast channels of HHM do not occupy their bandwidth continuously: in HHM, a harmonic multicast channel is deallocated when no terminal requests the segments carried by its stream.
As shown in Fig. 3(b), a request arrives at the server at t_0, and HHM initiates m_h + 1 new multicast channels for the video if there is no multicast channel already serving it. The basic concepts of staggered multicasting, joining existing multicast channels, and extending the duration of each harmonic multicast channel are similar to HFM. However, each harmonic multicast channel delivers only one segment, and the basic duration of the harmonic multicast channels increases linearly from 1 to (m_h − 1). The average number of ongoing channels in HHM is calculated as follows:

$$N_{\mathrm{h}}(v_i, m) = N_{\mathrm{h}}(v_i, T_{\mathrm{h}}, T_{\mathrm{s}}, J_i) = \frac{\lambda_i^2 T_{\mathrm{s}}^2 + 2\lambda_i T_{\mathrm{h}}}{2(\lambda_i T_{\mathrm{s}} + 1)} + \sum_{j=1}^{J_i - 1}\left(1 - e^{-j\lambda_i T_{\mathrm{h}}}\right), \qquad (10)$$

where J_i = m_h + 1, T_h = L/m_h, T_s = T_h/m_s, and m = m_s + m_h.
Since the streaming rate for the jth segment is b/(j − 1), as depicted in Fig. 3(b), the average bandwidth consumption per unit of time is calculated as follows:

$$C_{\mathrm{h}}(v_i, m) = b\,\frac{\lambda_i^2 T_{\mathrm{s}}^2 + 2\lambda_i T_{\mathrm{h}}}{2(\lambda_i T_{\mathrm{s}} + 1)} + b\sum_{j=1}^{J_i - 1}\frac{1}{j}\left(1 - e^{-j\lambda_i T_{\mathrm{h}}}\right). \qquad (11)$$
In HHM, only the prefix of the first segment is delivered through both unicast channels and multicast channels, while the suffixes of segments are delivered through multicast channels only. The average bandwidth consumption generated by the jth segment of unit size transmitted in HHM is calculated as follows:

$$C_{\mathrm{h}}(v_{i,j,\delta}, m) = \begin{cases} \dfrac{\lambda_i^2 T_{\mathrm{s}} + 2\lambda_i}{2(\lambda_i T_{\mathrm{s}} + 1)} & \text{for } j = 1, \text{ prefix}, \\[2mm] \dfrac{\lambda_i}{\lambda_i T_{\mathrm{s}} + 1} & \text{for } j = 1, \text{ suffix}, \\[2mm] \dfrac{1 - e^{-(j-1)\lambda_i T_{\mathrm{h}}}}{(j-1) T_{\mathrm{h}}} & \text{for } 2 \le j \le J_i. \end{cases} \qquad (12)$$
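Similarly, the HHM formulas (10)–(12) can be sketched in Python; the function names and example values are illustrative, and the per-unit helper covers only the harmonic case of (12).

```python
import math

# Sketch of the HHM formulas (10)-(12). Here J_i = m_h + 1 and T_h = L / m_h, as
# stated above; the j-th segment (j >= 2) is streamed on harmonic channel j - 1
# at rate b / (j - 1).
def hhm_num_channels(lam, L, m_s, m_h):
    T_h = L / m_h
    T_s = T_h / m_s
    staggered = (lam**2 * T_s**2 + 2 * lam * T_h) / (2 * (lam * T_s + 1))
    harmonic = sum(1 - math.exp(-j * lam * T_h)
                   for j in range(1, m_h + 1))       # j = 1 .. J_i - 1
    return staggered + harmonic                      # (10)

def hhm_cost(lam, L, m_s, m_h, b=1.0):
    T_h = L / m_h
    T_s = T_h / m_s
    staggered = b * (lam**2 * T_s**2 + 2 * lam * T_h) / (2 * (lam * T_s + 1))
    harmonic = b * sum((1 - math.exp(-j * lam * T_h)) / j
                       for j in range(1, m_h + 1))   # (11)
    return staggered + harmonic

def hhm_cost_per_unit(lam, L, m_h, j):
    # Last case of (12): per-unit-size cost of segment j, 2 <= j <= J_i.
    T_h = L / m_h
    return (1 - math.exp(-(j - 1) * lam * T_h)) / ((j - 1) * T_h)

# Example: same video as before, with m_s = 2 staggered and m_h = 5 harmonic channels.
print(hhm_cost(0.01, 3600, 2, 5), hhm_cost_per_unit(0.01, 3600, 5, 3))
```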
- 2. Transmission Scheme Selection
Figure 4 describes the relationship between the request rate and the average bandwidth-consumption cost of video vi for the HFM and HHM transmission schemes. The average bandwidth consumption of HHM is smaller than that of HFM when λi is smaller than about 0.025, and the average bandwidth consumption of HFM is smaller than that of HHM when λi is larger than about 0.025. Therefore, it is necessary to determine the optimal transmission scheme among UNI, HFM, and HHM to minimize the total video delivery cost when the maximum number of multicast channels is finite.
Fig. 4. Video delivery cost of transmission schemes for various video request rates.
Let R(m_i) denote the bandwidth consumption cost reduced by allocating m_i multicast channels to video v_i, compared with not allocating any multicast channels (that is, R(m_i) = 0 for m_i = 0, and R(m_i) = C_u(v_i, 0) − min[C_f(v_i, m_i), C_h(v_i, m_i)] for m_i > 0). Therefore, the optimization problem is formulated as

$$\text{maximize } \sum_{i=1}^{V} R(m_i) \quad \text{s.t.} \quad \sum_{i=1}^{V} m_i \le M, \qquad (13)$$

where V represents the total number of videos and M represents the maximum number of multicast channels.
Note that this optimization problem belongs to the family of 0-1 knapsack problems [19] and is solved by the following dynamic programming recurrence:

$$D_{0,j} = 0 \quad \text{for } 0 \le j \le M,$$
$$D_{i,j} = \max\bigl(D_{i-1,j},\; D_{i-1,j-m_i} + R(m_i)\bigr) = \max\bigl(D_{i-1,j},\; D_{i-1,j-m_i} + C_{\mathrm{u}}(v_i,0) - \min\bigl(C_{\mathrm{h}}(v_i,m_i),\, C_{\mathrm{f}}(v_i,m_i)\bigr)\bigr) \quad \text{for } 0 < i \le V,\; 0 \le j \le M, \qquad (14)$$

where D is a two-dimensional matrix whose entry D_{i,j} denotes the maximum reduction in bandwidth consumption for the first i videos with j multicast channels. The dynamic programming computes the entries from D_{0,0} to D_{V,M} in row order. When computing an entry D_{i,j}, we store the selected transmission scheme (UNI, HFM, or HHM) in matrix entry T_{i,j} and the number of multicast channels allocated to v_i in matrix entry A_{i,j}. Therefore, when the total number of videos is i and the maximum number of multicast channels is j, T_{i,j} gives the optimal transmission scheme for the ith video and A_{i,j} gives the optimal number of multicast channels to allocate to the ith video.
The optimal transmission scheme and number of dedicated multicast channels for each video are determined by backtracking through A_{i,j} from A_{V,M} to A_{0,0}. Figure 5 shows an example of this backtracking for four videos and four multicast channels. In this figure, T_{4,4} indicates that the optimal transmission scheme for video v_4 is UNI, and A_{4,4} = 0 indicates that the optimal number of dedicated multicast channels for video v_4 is zero. Because A_{i,j} is the difference between the column indices of the entries D_{i−1,j} and D_{i−1,j−m_i} used to compute D_{i,j}, T_{4−1,4−A_{4,4}} = HFM gives the optimal transmission scheme for video v_3, namely HFM, and the optimal number of dedicated multicast channels for video v_3 is one. Continuing in this way, the sets of optimal transmission schemes and numbers of dedicated multicast channels are found to be (v_1, HFM, 2), (v_2, HFM, 1), (v_3, HFM, 1), and (v_4, UNI, 0).
Fig. 5. Example of results of dynamic programming.
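A compact Python sketch of this channel-allocation dynamic programming is given below. It assumes the uni_cost, hfm_cost, and hhm_cost functions sketched in the previous subsections, and it splits each candidate allocation m_i into a fixed number of staggered channels plus fast or harmonic channels; that split and the parameter values are illustrative assumptions, not the paper's exact procedure.

```python
# Sketch of the channel-allocation DP in (13)-(14), assuming the
# uni_cost / hfm_cost / hhm_cost sketches above are in scope.
def allocate_channels(videos, M, m_s=2, b=1.0):
    """videos: list of (lambda_i, L_i). Returns one (scheme, m_i) per video."""
    V = len(videos)
    # D[i][j]: maximum cost reduction using the first i videos and j channels.
    D = [[0.0] * (M + 1) for _ in range(V + 1)]
    choice = [[("UNI", 0)] * (M + 1) for _ in range(V + 1)]   # T and A combined

    for i, (lam, L) in enumerate(videos, start=1):
        c_uni = uni_cost(lam, L, b)
        for j in range(M + 1):
            best, best_choice = D[i - 1][j], ("UNI", 0)       # R(0) = 0
            for m in range(m_s + 1, j + 1):                   # try m_i channels
                # Illustrative split: m_s staggered + (m - m_s) fast/harmonic.
                for scheme, cost in (("HFM", hfm_cost(lam, L, m_s, m - m_s, b)),
                                     ("HHM", hhm_cost(lam, L, m_s, m - m_s, b))):
                    r = c_uni - cost                          # R(m_i) in (13)
                    if D[i - 1][j - m] + r > best:
                        best, best_choice = D[i - 1][j - m] + r, (scheme, m)
            D[i][j], choice[i][j] = best, best_choice

    # Backtrack from (V, M) to recover the scheme and allocation per video.
    result, j = [], M
    for i in range(V, 0, -1):
        scheme, m = choice[i][j]
        result.append((scheme, m))
        j -= m
    return list(reversed(result))

# Example (hypothetical request rates, M = 20 channels):
# print(allocate_channels([(0.05, 3600), (0.02, 3600), (0.01, 3600),
#                          (0.001, 3600)], M=20))
```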
- 3. Cache Allocation
HMSC selects the segments to be stored on the cache server after the optimal transmission scheme and the number of multicast channels have been determined for all videos. Equations (1), (2), and (3) lead to the following cache allocation optimization problem, which minimizes the total video delivery cost, C_all:

$$\text{maximize } C_{\mathrm{ca}} = \sum_{v_{i,j} \in G} C(v_{i,j}) \quad \text{s.t.} \quad \sum_{v_{i,j} \in G} S(v_{i,j}) \le S_{\mathrm{ca}}, \qquad (15)$$

where G denotes the set of segments stored on the cache server, S(v_{i,j}) denotes the size of the jth segment of the ith video, and S_ca is the storage size of the cache server. The segment chosen as the last element of G may be divided into smaller pieces, one of which is cached to fill the remaining space. Therefore, this formulation is a type of fractional knapsack problem [19] and is solved by the greedy algorithm described in Fig. 6. The greedy algorithm selects segments to store on the cache server in decreasing order of the average bandwidth consumption generated by a unit-size piece of the jth segment of the ith video, C(v_{i,j,δ}, m_i). Here, C(v_{i,j,δ}, m_i) is calculated as in (6), (9), or (12), depending on the optimal transmission scheme. The optimal transmission scheme for the ith video, v_i, and the number of allocated multicast channels, m_i, are determined by the transmission scheme selection described in the previous subsection.
Fig. 6. Greedy algorithm for cache allocation.
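A minimal Python sketch of this greedy, fractional-knapsack style cache fill follows; the segment identifiers, per-unit costs, and sizes are hypothetical.

```python
# Sketch of the greedy cache allocation in (15) and Fig. 6.
# candidates: list of (segment_id, cost_per_unit_size, segment_size).
def fill_cache(candidates, S_ca):
    cached, remaining = [], S_ca
    # Consider segments in decreasing order of C(v_{i,j,delta}, m_i).
    for seg_id, unit_cost, size in sorted(candidates, key=lambda s: s[1],
                                          reverse=True):
        if remaining <= 0:
            break
        take = min(size, remaining)   # the last segment may be cached partially
        cached.append((seg_id, take))
        remaining -= take
    return cached

# Example with three hypothetical segments and a cache of size 1.5 units.
print(fill_cache([("v1_seg1", 0.9, 1.0), ("v1_seg2", 0.2, 1.0),
                  ("v2_seg1", 0.5, 1.0)], S_ca=1.5))
```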
V. Performance Evaluation
For the performance evaluation, the proposed scheme, HMSC, is compared against cases that use unicast or one broadcast scheme among FB [2]–[3], [8] and HB [1], [7] as the transmission scheme, and non-caching (Non) or hot-content-first caching (HOT) [4] as the caching scheme. FB transmits a video repeatedly through 2^(m_f) FB channels, and HB transmits a video repeatedly through m_h HB channels. The multicast channel allocation in FB and HB exploits the dynamic programming proposed in this paper, assuming a limited number of multicast channels. The HOT scheme stores videos in decreasing order of popularity, as described previously [5].
The request rates of videos vary with their popularity. Requests for the ith video v_i are generated by a Poisson process, and the request rate of the ith video is denoted λ_i. In a real environment, the request rate of a video may change on an hourly basis; however, this paper simplifies the video request model to a Poisson process, as in the previously referred research [12], to focus on the formulations that find the optimal video delivery scheme. The total request rate, λ_all, is the sum of the request rates of all videos. When all videos are sorted in descending order of popularity, so that λ_i is larger than λ_{i+1}, the selection of the ith video v_i follows a Zipf distribution with power parameter α. The ith video's request rate is λ_i = λ_all H / i^α, where H = 1 / Σ_{i=1}^{V} (1/i^α) is the normalization constant.
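For illustration, this Zipf-based request-rate model can be sketched in Python; the values in the example are illustrative only.

```python
# Sketch of the request-rate model: lambda_i = lambda_all * H / i**alpha,
# with H chosen so that the per-video rates sum to lambda_all.
def zipf_request_rates(lam_all, V, alpha=0.729):
    H = 1.0 / sum(1.0 / i**alpha for i in range(1, V + 1))
    return [lam_all * H / i**alpha for i in range(1, V + 1)]

rates = zipf_request_rates(lam_all=0.6, V=50)
print(rates[:3], sum(rates))   # most popular videos first; rates sum to lambda_all
```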
This paper also assumes that the streaming rate b of all videos is 1 Mbps (the bitrate recommended by YouTube for 360p standard-quality video [20]) and that the power parameter α of the Zipf distribution is 0.729 [11]. The total request rate λ_all is static, and the popularity of videos does not change. In addition, the scalar factors β_1 and β_2 of the video delivery cost in (3) are set to 1 and 1/1,000, respectively. The cache server's storage size is expressed as a percentage of the total size of all videos. It is also assumed that LTE terminals stay in the same MBSFN area while receiving the MCH and that the wireless channel of each terminal is stable enough to receive video data; therefore, LTE terminals are always able to receive video data through unicast and multicast channels at the static data rate configured by the eNodeB. Table 2 presents the parameter values used in this section.
Table 2. Parameter values.

Parameter : Explanation and meaning : Value
β1, β2 : Scalar factors for normalization of the source server's cost and the cache server's cost : 1; 0.001
λ_all : Sum of the request rates of all videos : 0.1–1
b : Streaming rate of a video : 1 Mbps
α : Power parameter of the Zipf distribution : 0.729
L : Length of a video : 3,600 s
M : Maximum number of multicast channels : 20–320
V : Total number of videos : 20–1,280
Figure 7 shows the video delivery cost of each transmission and caching scheme for different request rates. It is assumed that the total number of videos, V, is 50; the video length, L, is 3,600 s (the running time of most TV dramas); the maximum number of multicast channels, M, is 200; and the cache server storage size is 10% of the total size of all videos. HMSC generates the lowest video delivery cost across the entire range of request rates when no scheme uses a cache server and all traffic is delivered from the source server. When the request rate λ_all is 0.6, HMSC-Non reduces the video delivery cost by 14% and 44% relative to FB-Non and HB-Non, respectively. The cost reduction increases when the cache server stores 10% of the videos: the video delivery cost of HMSC (10%) is 39% and 59% lower than the corresponding costs of FB-HOT (10%) and HB-HOT (10%), respectively. When the request rate is larger than 3, the costs of FB-Non, HB-Non, FB-HOT, and HB-HOT increase rapidly, although this is not shown here because of the limited resolution of the figure. These results indicate that the number of requests served through unicast channels in FB and HB increases due to the shortage of multicast channels. Since HMSC exploits multicast channels more efficiently than FB and HB, it accommodates more requests on the multicast channels.
Fig. 7. Performance comparison of HMSC and other schemes as request rate varies.
Figure 8 shows the video delivery cost as a function of the number of videos, V. The costs of FB and HB under non-caching and HOT caching, and of HMSC, are plotted on the graph. Note that the video delivery cost of HMSC (10%) is lower than the costs of FB-HOT (10%) and HB-HOT (10%) across the entire range of the number of videos. In addition, the video delivery costs of HMSC-Non, FB-Non, and HB-Non, and those of HMSC, FB-HOT, and HB-HOT, each converge to a common value as the number of videos increases. This is because the effect of allocating multicast channels decreases as the number of videos increases. It is interesting to note from Fig. 8 that the video delivery cost of every scheme tends to decrease when a caching scheme is used and λ_all is larger than a given value. These results suggest that the request rates of the most popular videos account for a greater portion of the total request rate and that the effect of caching increases as the number of videos increases.
Fig. 8. Performance comparison of HMSC and other schemes as total number of videos varies.
Figure 9 depicts the variation in video delivery cost as the maximum number of multicast channels increases, for V = 50 and λ_all = 0.1. HMSC achieves the best results, outperforming the second-best scheme across the range. When the maximum number of multicast channels is larger than about 80, even HMSC-Non achieves a lower video delivery cost than the caching cases of the other schemes (FB-HOT and HB-HOT). Although the video delivery cost of each scheme tends to decrease as the maximum number of multicast channels increases, the size of each scheme's cost decrease also shrinks. The reason is that the effect of additionally allocating multicast channels diminishes as the video delivery cost approaches each scheme's minimal cost.
Fig. 9. Performance comparison of HMSC and other schemes as maximum number of multicast channels varies.
Figure 10 illustrates the variation in video delivery cost for cache servers of different storage sizes, for V = 50, M = 200, and λ_all = 0.1. The results show that HMSC is the optimal scheme for any cache server size. HMSC maximizes the effect of caching; in particular, when the storage size of the cache server was 20% of the total video size, HMSC reduced the video delivery cost to 41% of that of HMSC-Non. In addition, FB-HOT reduced the video delivery cost to 29% of that of FB-Non, and HB-HOT reduced it to 28% of that of HB-Non.
Fig. 10. Performance comparison of HMSC and other schemes as storage size of cache server varies.
Figure 11 shows the video delivery cost as a function of the scalar factor β_2, which normalizes the cache server's cost. Note that the video delivery costs of HMSC (10%), FB-HOT (10%), and HB-HOT (10%) sharply approach the corresponding costs of HMSC-Non, FB-Non, and HB-Non, respectively, as β_2 approaches 1. This is because a β_2 close to 1 means that the cache server is effectively located near the source server, so its traffic is nearly as costly as the source server's.
Fig. 11. Performance comparison of HMSC and other schemes as scalar factor for normalization of cache server's cost varies.
VI. Conclusion
Video transmission and caching schemes have become important considerations owing to the rapid growth of VoD services over mobile networks. This paper proposed a novel video delivery scheme that reduces the video delivery cost related to bandwidth consumption. In the proposed scheme (HMSC), the most suitable transmission scheme and the required number of multicast channels for each video are determined by the formulations developed herein and by dynamic programming based on the request rate, the video length, and the maximum number of multicast channels. The scheme uses UNI, HFM, or HHM to effectively combine unicast with multiple dedicated multicast channels, achieving two objectives: minimizing bandwidth consumption and solving the waiting-time problem. In addition, we showed that the cost generated by a segment of a video changes as the transmission scheme changes. HMSC maximizes the effect of caching with a greedy algorithm that stores segments in decreasing order of the average bandwidth consumption per unit size. The performance evaluation results show that HMSC significantly reduces the video delivery cost compared with existing schemes. The proposed scheme can be used to design on-demand video streaming services in LTE networks.
This work was partly supported by the ICT R&D program of MSIP/IITP, Rep. of Korea (1391104001, Research on Communication Technology using Bio-inspired Algorithm) and Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT and Future Planning (2015R1A2A2A03004152).
BIO
kwngjn@kaist.ac.kr
Kwangjin Choi received his BE degree in electrical and communications engineering from the Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Rep. of Korea, in 2005 and his MS and PhD degrees in information and communications engineering from KAIST, in 2007 and 2015, respectively.
Corresponding Author sgchoi@cbnu.ac.kr
Seong Gon Choi received his PhD in information and communications engineering from the Information and Communications University, Daejeon, Rep. of Korea, in 2004. In 2004, he joined Chungbuk National University, Cheongju, Rep. of Korea, where he is a professor. Since 2002, he has been working as an editor of ITU-T SG13. His main research interests include broadcast networks; mobility issues; next-generation network and infrastructure deployment; and network management.
jkchoi59@kaist.edu
Jun Kyun Choi received his BS degree in electronics engineering from Seoul National University, Rep. of Korea, in 1982 and his MS and PhD degrees in electronics engineering from the Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Rep. of Korea, in 1985 and 1988, respectively. From June 1986 until December 1997, he was with the Electronics and Telecommunications Research Institute, Daejeon, Rep. of Korea. In January 1998, he joined the Information and Communications University, Daejeon, Rep. of Korea, as a professor. In 2009, he moved to KAIST to work as a professor. He is a senior member of IEEE; an executive member of the Institute of Electronics Engineers of Korea; an editorial board member of the Korea Information Processing Society; and a life member of the Korea Institute of Communication Science.
References
Juhn L.-S. , Tseng L.-M. 1997 “Harmonic Broadcasting for Video-on-Demand Service,” IEEE Trans. Broadcast. 43 (3) 268 - 271    DOI : 10.1109/11.632927
Juhn L.-S. , Tseng L.-M. 1998 “Fast Data Broadcasting and Receiving Scheme for Popular Video Service,” IEEE Trans. Broadcast. 44 (1) 100 - 105    DOI : 10.1109/11.713059
Azad S.A. , Murshed M. 2007 “An Efficient Transmission Scheme for Minimizing User Waiting Time in Video-on-Demand Systems,” IEEE Commun. Lett. 11 (3) 285 - 287    DOI : 10.1109/LCOMM.2007.061469
Shen L. , Tu W. , Steinbach E. “A Flexible Starting Point Based Partial Caching Algorithm for Video on Demand,” IEEE Int. Conf. Multimedia Expo Beijing, China July 2–5, 2007 76 - 79
Sofman L.B. , Krogfoss B. 2009 “Analytical Model for Hierarchical Cache Optimization in IPTV Network,” IEEE Trans. Broadcast. 55 (1) 62 - 70    DOI : 10.1109/TBC.2008.2012018
De Vleeschauwer D. , Laevens K. 2009 “Performance of Caching Algorithms for IPTV On-Demand Services,” IEEE Trans. Broadcast. 55 (2) 491 - 501    DOI : 10.1109/TBC.2009.2015983
Kim H.-I. , Park S.-K. 2008 “A Hybrid Video-on-Demand Data Broadcasting and Receiving Scheme of Harmonic and Staggered Schemes,” IEEE Trans. Broadcast. 54 (4) 771 - 778    DOI : 10.1109/TBC.2008.2002768
Yu H.-F. , Yang H.-C. , Tseng L.-M. 2007 “Reverse Fast Broadcasting (RFB) for Video-on-Demand Applications,” IEEE Trans. Broadcast. 53 (1) 103 - 111    DOI : 10.1109/TBC.2006.888917
2012 3GPP TS 25.346, Introduction of the Multimedia Broadcast/Multicast Service (MBMS) in the Radio Access Network (RAN)
Dan A. , Sitaram D. , Shahabuddin P. 1994 “Scheduling Policies for an On-Demand Video Server with Batching,” Proc. ACM Int. Conf. Multimedia New York, USA 15 - 23
Dan A. , Sitaram D. , Shahabuddin P. 1996 “Dynamic Batching Policies for an On-Demand Video Server,” ACM Multimedia Syst. 4 112 - 121    DOI : 10.1007/s005300050016
Gao L. , Towsley D. 2001 “Threshold-Based Multicast for Continuous Media Delivery,” IEEE Trans. Multimedia 3 (4) 405 - 414    DOI : 10.1109/6046.966112
Choi K. , Choi S.G. , Choi J.K. 2012 “Hybrid Video Transmission Scheme for Minimizing Server Bandwidth Consumption with Zero Start-up Delay in Video-on-Demand Service,” IEEE Commun. Lett. 16 (1) 6 - 8    DOI : 10.1109/LCOMM.2011.110711.111959
Wang B. 2004 “Optimal Proxy Cache Allocation for Efficient Streaming Media Distribution,” IEEE Trans. Multimedia 6 (2) 366 - 374    DOI : 10.1109/TMM.2003.822788
Jayasundara C. 2014 “Improving Scalability of VoD Systems by Optimal Exploitation of Storage and Multicast,” IEEE Trans. Circuits Syst. Video Technol. 24 (3) 489 - 503    DOI : 10.1109/TCSVT.2013.2290397
2014 3GPP TS 36.300, Evolved Universal Terrestrial Radio Access (E-UTRA) and Evolved Universal Terrestrial Radio Access Network (E-UTRAN) Overall Description; Stage 2
2014 3GPP TS 36.443, Evolved Universal Terrestrial Radio Access Network (E-UTRAN); M2 Appl. Protocol (M2AP)
Paris J.-F. , Carter S.W. , Long D.D.E. “Efficient Broadcasting Protocols for Video on Demand,” Proc. Int. Symp. Modeling, Anal. Simulation Comput. Telecommun. Syst. Montreal, Canada July 19–24, 1998 127 - 132
Kellerer H. , Pferschy U. , Pisinger D. 2004 “Knapsack Problems,” Springer Berlin, Germany 20 - 27
2014 Recommended Upload Encoding Settings (Advanced), Google https://support.google.com/youtube/answer/1722171?hl=en