Efficient Inter Prediction Mode Decision Method for Fast Motion Estimation in High Efficiency Video Coding
ETRI Journal. 2014. Aug, 36(4): 528-536
Copyright © 2014, Electronics and Telecommunications Research Institute (ETRI)
  • Received : September 22, 2013
  • Accepted : March 21, 2014
  • Published : August 01, 2014
Alex Lee, Dongsan Jun, Jongho Kim, Jin Soo Choi, and Jinwoong Kim

High Efficiency Video Coding (HEVC) is the most recent video coding standard, achieving a higher coding performance than the previous H.264/AVC. To accomplish this improved coding performance, HEVC adopted several advanced coding tools; however, these cause heavy computational complexity. As in previous video coding standards, motion estimation (ME) in HEVC requires the most computational complexity; this is because ME is conducted for three inter prediction modes — namely, uniprediction in list 0, uniprediction in list 1, and biprediction. In this paper, we propose an efficient inter prediction mode (EIPM) decision method to reduce the complexity of ME. The proposed EIPM method computes the priority of all inter prediction modes and performs ME only on the selected inter prediction modes. Experimental results show that the proposed method reduces the computational complexity arising from ME by up to 51.76% and achieves nearly the same coding performance as HEVC test model version 10.1.
I. Introduction
As demands for high-quality video services (such as ultra high definition) increase and the high data bandwidth demanded by video applications on mobile devices imposes severe traffic on today's networks, a new video compression standard with higher coding performance than H.264/AVC [1] is desired. High Efficiency Video Coding (HEVC) [2] was recently developed by the Joint Collaborative Team on Video Coding (JCT-VC), which comprises the ITU-T Video Coding Experts Group and the ISO/IEC Moving Pictures Experts Group. The main goal of HEVC [3] is to achieve a bitrate reduction of 50% with similar video quality compared to H.264/AVC. To achieve this, HEVC deploys the advanced coding tools listed in Table 1; in particular, these tools have demonstrated powerful coding performance in the compression of video with increased picture resolution, such as high definition and beyond [4].
The block structure of H.264/AVC supports the macroblock (MB) as its basic coding unit, containing one 16 × 16 luma sample block and two corresponding 8 × 8 chroma sample blocks in the case of 4:2:0 sampling. The block structure of HEVC supports coding tree units (CTUs), varying in size from 16 × 16 to 64 × 64, to enable better compression. A CTU consists of one luma coding tree block (CTB), two corresponding chroma CTBs, and syntax elements. It can also be split in a quad-tree structure into four coding units (CUs), each of which includes one luma coding block (CB), two corresponding chroma CBs, and syntax elements.
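The quad-tree splitting described above can be sketched in a few lines. This is an illustrative model only, not HM code: the split predicate `should_split` is a stand-in for what a real encoder decides by rate-distortion optimization, and the minimum CU size of 8 matches the HEVC main profile.

```python
# Hypothetical sketch of HEVC's quad-tree CTU partitioning: a CTU
# (up to 64x64) is recursively split into four square CUs, down to
# a minimum CU size of 8x8. 'should_split' is an illustrative
# stand-in for the encoder's rate-distortion-based split decision.

MIN_CU_SIZE = 8

def split_ctu(x, y, size, should_split):
    """Return the list of (x, y, size) CU blocks covering one CTU."""
    if size > MIN_CU_SIZE and should_split(x, y, size):
        half = size // 2
        cus = []
        for dx, dy in ((0, 0), (half, 0), (0, half), (half, half)):
            cus.extend(split_ctu(x + dx, y + dy, half, should_split))
        return cus
    return [(x, y, size)]

# Example: split every block larger than 32 samples once.
cus = split_ctu(0, 0, 64, lambda x, y, s: s > 32)
print(cus)  # four 32x32 CUs
```

Each leaf of this recursion corresponds to one CU, which is then further described by its PUs and TUs as explained next.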
Within the CU level, a prediction unit (PU) decides whether a current CU is coded in intra or inter prediction mode. The predicted residual is transformed using a block-based discrete cosine transform (DCT), where the size of a transform unit (TU) ranges from 4 × 4 to 32 × 32; for the 4 × 4 transform of luma intra prediction residuals, an alternative transform based on the discrete sine transform is used. These different concepts (CU, PU, and TU) allow each to be optimized according to its role.
Comparison of coding tools between H.264/AVC and HEVC.

| Coding tool | H.264/AVC | HEVC |
| Block structure | MB | CU, PU, TU |
| Inter prediction | Spatial motion vector prediction (median), direct mode | AMVP, MERGE mode |
| Interpolation | 6-tap FIR filter, bi-linear interpolation | DCT-based interpolation (7-tap, 8-tap filters) |
| Intra prediction | 9 modes (4 × 4, 8 × 8 luma blocks), 4 modes (16 × 16 luma and chroma blocks) | 35 modes (planar, DC, 33 angular modes) |
| In-loop filtering | Deblocking filter | Simplified deblocking filter |
| Entropy coding | CABAC, CAVLC | Simplified CABAC |
In addition, HEVC adopted a new motion vector (MV) signaling method called advanced motion vector prediction (AMVP). To reduce the MV signaling bits, AMVP derives two MV predictors from spatio-temporal neighboring blocks instead of from only spatial neighboring blocks. HEVC uses DCT-based 7-tap or 8-tap interpolation filters to generate fractional-pel samples. To improve the accuracy of intra prediction, HEVC increases the number of intra prediction modes to 35 (planar, DC, and 33 angular modes) for the luma CB.
Although HEVC achieves high coding performance with these tools (block structure, inter prediction, interpolation, and intra prediction), its heavy computational complexity makes it difficult to develop a real-time encoder. While many fast algorithms [5]–[13] for inter or intra prediction were proposed for H.264/AVC, there are currently few fast algorithms [14]–[19] to reduce the encoding complexity of HEVC. Above all, it is important to develop a fast inter prediction mode decision method, since inter prediction carries the most computational complexity in HEVC. Therefore, we propose an efficient inter prediction mode (EIPM) decision method to significantly reduce the complexity of an HEVC encoder.
The remainder of this paper is organized as follows. In section II, we introduce the inter prediction process of HEVC, evaluate the encoding complexity, and address the motivation of the proposed method. The proposed method is described in section III. Finally, experimental results and conclusions are given in sections IV and V, respectively.
II. Analysis and Motivation
- 1. Overview of Inter Prediction
Figure 1 depicts the inter prediction process in HEVC when a PU is encoded. If a current PU is 2N × 2N, then motion estimation (ME) is sequentially conducted on uniprediction in list 0 (Uni-L0), uniprediction in list 1 (Uni-L1), and biprediction (Bi) after SKIP/MERGE prediction. Otherwise, ME is performed on Uni-L0, Uni-L1, and Bi before MERGE prediction. The best inter prediction mode for the PU is finally decided among the Uni-L0, Uni-L1, and Bi modes by rate-distortion optimization (RDO) [20]. In the case of Uni-L0 and Uni-L1, the AMVP construction process finds two initial MVs from spatio-temporal neighboring PUs and selects one of them by RDO. Starting from the selected initial MV, the integer-pel MV is searched using a sum of absolute differences (SAD)-based motion cost, which is calculated from
J_Motion = SAD(s, c(MV, ref_idx)) + λ_Motion · R(MVD, ref_idx).        (1)
In (1), SAD(s, c(MV, ref_idx)) is the sum of absolute differences between the current PU (s) and its reference PU (c), whose motion vector is MV and reference frame index is ref_idx. The difference between the current MV and the initial MV obtained from AMVP is denoted by MVD. In (1), λ_Motion is a Lagrangian multiplier [20] and R(MVD, ref_idx) is the bitrate required to encode MVD and ref_idx. Then, the fractional-pel MV is searched using a motion cost based on the sum of absolute Hadamard transformed differences to find the best MV.
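The integer-pel search driven by (1) can be sketched as follows. This is a minimal illustration under assumed inputs, not HM code: `cur` is the current PU block, `ref` the reference picture, and `rate_bits` abstracts R(MVD, ref_idx) as a per-candidate bit count.

```python
import numpy as np

# Minimal sketch of the SAD-based motion cost in (1). All names
# ('cur', 'ref', 'rate_bits') and the exhaustive search loop are
# illustrative assumptions; R(MVD, ref_idx) is abstracted as a
# caller-supplied bit-count function.

def sad(cur, ref_block):
    """Sum of absolute differences between two equal-sized blocks."""
    return int(np.abs(cur.astype(np.int32) - ref_block.astype(np.int32)).sum())

def motion_cost(cur, ref, top, left, mv, lam, rate_bits):
    """J = SAD + lambda * R for one candidate MV (dx, dy)."""
    h, w = cur.shape
    y, x = top + mv[1], left + mv[0]
    return sad(cur, ref[y:y + h, x:x + w]) + lam * rate_bits(mv)

def best_integer_mv(cur, ref, top, left, search, lam, rate_bits):
    """Exhaustive integer-pel search over a square window."""
    candidates = [(dx, dy) for dy in range(-search, search + 1)
                           for dx in range(-search, search + 1)]
    return min(candidates,
               key=lambda mv: motion_cost(cur, ref, top, left, mv, lam, rate_bits))
```

A real encoder replaces the exhaustive loop with a fast search pattern and refines the winner at fractional-pel positions using Hadamard-transformed differences, as described above.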
Fig. 1. Flowchart of inter prediction process in HEVC.
Bi must find the two best motion vectors, one from the reference frames of list 0 and one from those of list 1. Since the two best MVs are already computed by the Uni-L0 and Uni-L1 predictions, one of them is fixed by RDO, and the MV of the other list is newly searched to find the optimal predicted block for Bi.
- 2. Complexity Analysis of Inter Prediction
To analyze the complexity of inter prediction, we encoded the Class A and Class B sequences under the main profile-random access (MP-RA) configuration [21] recommended by JCT-VC. As shown in Figs. 2 and 3, inter prediction has the most computational complexity, consuming about 70% of the total encoding time (TET). In particular, ME accounts for most of the inter prediction complexity, because SKIP/MERGE prediction omits the ME process and obtains its motion information, including the MV, reference index, and inter prediction mode (IPM), from spatio-temporal neighboring PUs.
Fig. 2. Complexity distribution of total encoding time.
Fig. 3. Complexity distribution of inter prediction.
We evaluated the computational complexity of each IPM. Figure 4 shows that uniprediction has higher complexity than Bi. Between the two unipredictions, the complexity of Uni-L0 is higher than that of Uni-L1; this is because, if a reference frame of Uni-L1 is the same as that of Uni-L0, the MV of Uni-L1 is simply copied from Uni-L0 to avoid redundant ME on the same reference frame. Bi is less complex than the unipredictions because it reuses the MV information from Uni-L0 or Uni-L1 without the AMVP process and because its integer ME is performed with a small search range of four.
Fig. 4. Complexity distribution according to inter prediction mode.
- 3. Motivation of Proposed Method
According to the complexity analysis, the inter prediction process imposes a heavy complexity burden of up to 70% of the entire encoding process. Regardless of the complexity distribution of the IPMs, Bi is more likely to be selected than Uni-L0 or Uni-L1, as shown in Fig. 5. Therefore, if the accurate IPM is predetermined among Uni-L0, Uni-L1, and Bi, then the ME complexity incurred by the unnecessary IPMs can be significantly reduced. Although Kim [19] proposed a fast algorithm for the inter prediction mode decision, it still has heavy computational complexity because it always performs ME for Uni-L0 and Uni-L1. Therefore, we define a priority for each IPM using spatial and upper-PU correlation, such that ME is performed only on the selected IPMs, visited in descending order of priority, while any IPM whose priority is lower than a threshold value is skipped.
Fig. 5. Distribution of best IPM.
III. Proposed EIPM Decision Method
In HEVC, the PU shape within a current CU can be partitioned in different ways, as shown in Table 2. A total of seven PU partition shapes are defined for inter prediction: one square (2N × 2N), two rectangular (2N × N, N × 2N), and four asymmetric (2N × nU, 2N × nD, nL × 2N, and nR × 2N). The four asymmetric shapes are disabled for both ME and MERGE prediction at the 8 × 8 CU and are enabled only for MERGE prediction at the 64 × 64 CU in HEVC [2].
Illustration of PU partition type and shape in HEVC. PU partition types: square (2N × 2N), rectangular (2N × N, N × 2N), and asymmetric (2N × nU, 2N × nD, nL × 2N, nR × 2N); the corresponding shape diagrams are omitted here.
As demonstrated in [8] and [9], a current block tends to have motion information similar to that of its neighboring blocks; that is, the correlation of motion information between a current PU and its spatially neighboring PUs is very high. Also, the motion information of an upper-layer block has a strong correlation with that of its lower-layer blocks [13]. Figure 6 demonstrates that the correlation of IPM between a current PU and its spatial/upper PUs is as high as 82% on average. Therefore, we define the upper PU of the current PU partitions as shown in Table 3 and exploit the correlation between the spatially neighboring PUs and the upper PU to calculate the priority for the IPM decision.
Fig. 6. Correlation of IPM between current PU and spatial/upper PUs.
Definition of upper PU (upper-PU shapes were shown as diagrams in the original table).

| Current PU | Upper PU |
| 2N × 2N | N/A |
| 1st partition of nL × 2N or nR × 2N | (diagram) |
| 2nd partition of nL × 2N or nR × 2N | (diagram) |
| 1st partition of 2N × nU or 2N × nD | (diagram) |
| 2nd partition of 2N × nU or 2N × nD | (diagram) |
The overall flowchart of the proposed EIPM method is depicted in Fig. 7. The proposed EIPM method is applicable to all PU shapes except a 2N × 2N PU, since there is no upper PU for a 2N × 2N PU. Before performing ME on the three IPMs, the priority of each IPM is computed for a current PU by the following:
Priority[IPM_idx] = (SPU[IPM_idx] + UPU[IPM_idx]) / max_k {SPU[k] + UPU[k]},  where IPM_idx, k ∈ {Uni-L0, Uni-L1, Bi}.        (2)
Fig. 7. Flowchart of proposed EIPM method.
The priority of each IPM quantifies the IPM correlation observed in the previously coded spatial and upper PUs. In (2), SPU[IPM_idx] and UPU[IPM_idx] are the numbers of PU blocks whose optimal inter prediction mode is IPM_idx among all available spatial and upper PU blocks, respectively. The denominator of (2) normalizes the IPM priority to the range from zero to one by dividing by the maximum count.
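The priority computation in (2) can be sketched directly. This is an illustrative implementation with assumed input shapes (per-mode counts as dictionaries), not the HM data structures; the fallback for a PU with no coded neighbors is our assumption, since (2) is undefined when all counts are zero.

```python
# Sketch of the IPM priority in (2): per-mode counts of the best IPM
# among available spatial (SPU) and upper (UPU) PU blocks, normalized
# by the maximum so that the priority lies in [0, 1]. The example
# counts below are illustrative.

MODES = ("Uni-L0", "Uni-L1", "Bi")

def ipm_priority(spu_counts, upu_counts):
    totals = {m: spu_counts.get(m, 0) + upu_counts.get(m, 0) for m in MODES}
    peak = max(totals.values())
    if peak == 0:  # no coded neighbors yet (assumed fallback): treat all equally
        return {m: 1.0 for m in MODES}
    return {m: t / peak for m, t in totals.items()}

prio = ipm_priority({"Bi": 3, "Uni-L0": 1}, {"Bi": 1})
print(prio)  # {'Uni-L0': 0.25, 'Uni-L1': 0.0, 'Bi': 1.0}
```

By construction, the most frequent neighboring IPM always receives priority 1.0, so at least one mode is always eligible for ME under any threshold below one.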
The proposed EIPM method performs ME only on the selected IPMs, visited in descending order of priority, while any IPM whose priority is lower than the threshold value TH_p is skipped. According to this decision rule, as the threshold increases, encoding complexity is reduced (by skipping ME) while coding loss increases, and vice versa. Therefore, it is important to choose the optimal threshold, because it governs the trade-off between complexity reduction and coding loss. To determine the optimal TH_p, we define the cost function (CF) to be
where ANM is the average number of MEs performed per PU block and a positive BD-Bitrate [22] represents coding loss. Since the BD-Bitrate is calculated over the various QPs (QP = 22, 27, 32, 37) recommended by the JCT-VC common conditions [21], CF reflects bitrates from low to high through the BD-Bitrate term in (3). Figure 8 shows the distribution of the average CF obtained from the training sequences "PeopleOnStreet" and "BasketballDrive". For each candidate threshold, we plotted the corresponding CF values obtained from the training sequences; these procedures were conducted offline, because neither ANM nor BD-Bitrate can be computed in the middle of an encoding process. In addition, after analyzing the distribution of CF for various values of the weighting factor α, we set α to 0.25 to place more emphasis on computational complexity (ANM) than on coding loss (BD-Bitrate). In accordance with the results in Fig. 8, we set TH_p to 0.7 in all experiments.
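The decision rule described above can be sketched as a short loop. This is an illustrative model, not HM code: `run_me` is a placeholder for the actual ME of one mode, and the priority values are assumed to come from (2).

```python
# Sketch of the EIPM decision rule: visit IPMs in descending priority
# and perform ME only while the priority reaches the threshold TH_p.
# Because priorities are sorted, the first mode below TH_p ends the
# loop. 'run_me' is an illustrative stand-in for one mode's ME.

TH_P = 0.7  # threshold chosen offline in the paper

def select_and_search(priority, run_me, th=TH_P):
    searched = []
    for mode in sorted(priority, key=priority.get, reverse=True):
        if priority[mode] < th:
            break  # all remaining modes have even lower priority
        run_me(mode)
        searched.append(mode)
    return searched

modes = select_and_search({"Uni-L0": 1.0, "Uni-L1": 0.4, "Bi": 0.8},
                          run_me=lambda m: None)
print(modes)  # ['Uni-L0', 'Bi'] -- Uni-L1 is skipped, saving its ME
```

Since the normalization in (2) guarantees a maximum priority of 1.0, at least one IPM is always searched, so the RDO mode decision always has a candidate.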
Fig. 8. Distribution of average CF for "PeopleOnStreet" and "BasketballDrive" sequences.
IV. Experimental Results
The proposed method was implemented in HEVC test model (HM) 10.1 and evaluated under the JCT-VC common conditions [21] listed in Table 4 . We compared the proposed method with HM 10.1 [2] and Kim’s method [19] under the MP–RA configuration.
Encoder parameters used in experiment.

| Test sequences | Class A (2,560 × 1,600) | Traffic (150 frames, 30 fps), Nebuta (300 frames, 60 fps), SteamLocomotive (300 frames, 60 fps) |
| | Class B (1,920 × 1,080) | Kimono (240 frames, 24 fps), ParkScene (240 frames, 24 fps), Cactus (500 frames, 50 fps), BQTerrace (600 frames, 60 fps) |
| Coding options | Intra period | 24 for 24 fps, 32 for 30 fps, 48 for 50 fps, and 64 for 60 fps |
| | GOP size | 8 |
| | Search range | 64 |
| | CTU size | 64 × 64 |
| | Asymmetric motion partitioning | On |
For comparison of computational complexity, we measured the time reduction using both TET and motion estimation time (MET) as follows:
ΔT = (Time_HM10.1 − Time_Proposed) / Time_HM10.1 × 100.        (4)
As shown in Table 5, the proposed method reduces MET and TET by 51.76% and 31.29% on average, respectively. Also, it is about 5.6 times faster than Kim's method. In addition, since the time reduction may not truly reflect computational complexity, owing to differences in programming optimization or hardware platform, we also examined how many times ME was performed. In (5), we measure the speed-up ratio using Number_HM10.1 and Number_Proposed, which are the numbers of MEs performed without and with the proposed method, respectively.
ΔNumber = (Number_HM10.1 − Number_Proposed) / Number_HM10.1 × 100.        (5)
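The reduction metrics (4) and (5) share the same form, the relative decrease versus the HM 10.1 anchor in percent, and can be captured by one helper. The anchor and proposed values below are illustrative.

```python
# Relative reduction versus the HM 10.1 anchor, in percent, as used by
# both the time metric (4) and the ME-count metric (5).

def reduction_pct(anchor, proposed):
    return (anchor - proposed) / anchor * 100.0

# e.g., an MET reduction of 51.76% means the proposed ME takes
# roughly half the anchor's motion estimation time:
print(round(reduction_pct(100.0, 48.24), 2))  # 51.76
```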
Comparison of complexity reduction between Kim's method [19] and proposed method.

| Test sequence | QP | Kim's method MET (%) | Kim's method TET (%) | Proposed MET (%) | Proposed TET (%) |
| Traffic (Class A, 2,560 × 1,600) | 22 | 7.24 | 2.17 | 50.60 | 25.09 |
| | 27 | 11.26 | 6.74 | 52.25 | 31.04 |
| | 32 | 10.60 | 6.76 | 53.02 | 34.14 |
| | 37 | 13.69 | 9.49 | 54.81 | 37.00 |
| Nebuta | 22 | 3.50 | 2.70 | 43.57 | 18.05 |
| | 27 | 5.90 | 4.34 | 46.54 | 20.30 |
| | 32 | 4.30 | 1.82 | 52.95 | 27.89 |
| | 37 | 7.32 | 4.11 | 52.29 | 32.78 |
| SteamLocomotive | 22 | 10.29 | 6.98 | 39.62 | 23.74 |
| | 27 | 12.91 | 9.47 | 46.64 | 31.06 |
| | 32 | 11.01 | 7.29 | 47.18 | 31.39 |
| | 37 | 12.05 | 8.23 | 49.13 | 33.61 |
| Kimono (Class B, 1,920 × 1,080) | 22 | 3.79 | 0.33 | 52.25 | 28.47 |
| | 27 | 5.28 | 1.92 | 54.25 | 33.49 |
| | 32 | 11.19 | 7.60 | 56.05 | 36.95 |
| | 37 | 14.11 | 10.61 | 56.73 | 38.63 |
| ParkScene | 22 | 6.30 | 1.94 | 53.45 | 28.00 |
| | 27 | 5.60 | 1.89 | 54.37 | 32.30 |
| | 32 | 11.45 | 7.80 | 56.99 | 37.83 |
| | 37 | 12.69 | 8.99 | 56.95 | 38.74 |
| Cactus | 22 | 7.35 | 3.31 | 47.94 | 25.31 |
| | 27 | 8.53 | 5.02 | 51.19 | 32.15 |
| | 32 | 9.47 | 5.43 | 50.31 | 31.79 |
| | 37 | 8.49 | 4.66 | 51.98 | 34.62 |
| BQTerrace | 22 | 10.36 | 6.90 | 52.24 | 25.20 |
| | 27 | 9.16 | 5.43 | 54.30 | 31.94 |
| | 32 | 10.70 | 7.34 | 55.49 | 36.40 |
| | 37 | 10.90 | 6.82 | 56.27 | 38.23 |
| Average | | 9.12 | 5.57 | 51.76 | 31.29 |
Table 6 shows that the proposed method reduced the number of ME processes by 61.23% on average. To evaluate coding performance, we used the BD-Bitrate recommended by JCT-VC [22]. Table 7 shows that the proposed method increased the BD-Bitrate by 1.06% on average. Although Kim's method [19] increased the BD-Bitrate by only 0.14%, it still has heavy computational complexity because uniprediction (Uni-L0, Uni-L1) must always be performed. Compared with [19], the BD-Bitrate increment of EIPM is only 0.92% higher; such an insignificant degradation does not cause any noticeable loss of visual quality.
Speed-up ratio of proposed method compared to HM 10.1.

| Test sequence | ΔNumber (%) at QP 22 | QP 27 | QP 32 | QP 37 |
| Traffic (Class A, 2,560 × 1,600) | 61.31 | 59.42 | 58.13 | 57.41 |
| Nebuta | 65.21 | 64.25 | 63.69 | 57.84 |
| SteamLocomotive | 59.23 | 57.52 | 56.66 | 56.24 |
| Kimono (Class B, 1,920 × 1,080) | 61.28 | 59.99 | 58.93 | 58.18 |
| ParkScene | 61.19 | 59.48 | 58.31 | 57.54 |
| Cactus | 59.65 | 59.00 | 58.53 | 58.05 |
| BQTerrace | 60.77 | 60.41 | 58.56 | 57.48 |
| Average | 61.23 | 60.01 | 58.97 | 57.53 |
Coding performance comparison.

| Test sequence | Kim's method BD-Bitrate (%) | Proposed method BD-Bitrate (%) |
| Traffic (Class A, 2,560 × 1,600) | 0.09 | 1.22 |
| Nebuta | 0.09 | 0.68 |
| SteamLocomotive | 0.07 | 0.54 |
| Kimono (Class B, 1,920 × 1,080) | 0.29 | 0.79 |
| ParkScene | 0.09 | 1.05 |
| Cactus | 0.09 | 1.11 |
| BQTerrace | 0.23 | 2.00 |
| Average | 0.14 | 1.06 |
As shown in Fig. 9, the rate-distortion (RD) performance of the proposed EIPM remains nearly the same as that of [19] and HM 10.1.
Fig. 9. RD curves of HM 10.1, Kim's method [19], and proposed method.
In addition, we measured the hit rate to verify the accuracy of the priority; that is, the probability that the IPM selected by the proposed EIPM method is equal to that obtained from HM 10.1. Table 8 shows the hit rate to be as high as approximately 85% on average for the given test sequences.
Hit rate of proposed EIPM method.

| CU size (2N × 2N) | PU shape | Hit rate (%) |
| 64 × 64 | 2N × N | 84.50 |
| | N × 2N | 83.97 |
| 32 × 32 | 2N × N | 80.24 |
| | N × 2N | 78.05 |
| | 2N × nU | 88.80 |
| | 2N × nD | 89.42 |
| | nL × 2N | 88.89 |
| | nR × 2N | 89.08 |
| 16 × 16 | 2N × N | 80.87 |
| | N × 2N | 75.93 |
| | 2N × nU | 88.62 |
| | 2N × nD | 89.10 |
| | nL × 2N | 88.77 |
| | nR × 2N | 88.84 |
| 8 × 8 | 2N × N | 82.52 |
| | N × 2N | 76.28 |
| Average | | 84.62 |
V. Conclusion
The proposed EIPM method performs ME only on the IPMs selected according to a priority computed from spatial and upper PU blocks. Experimental results demonstrated that the proposed method significantly reduces the computational complexity of ME while maintaining nearly the same RD performance for various bitrates and test sequences.
Alex Lee received his BS in electrical engineering from Iowa State University, Ames, USA, in 2010 and his MS in mobile communications and digital broadcasting engineering from the University of Science and Technology, Daejeon, Rep. of Korea, in 2014. His research interests include image processing and video compression.
Dongsan Jun received his BS in electrical engineering and computer science from Pusan National University, Rep. of Korea, in 2002 and his MS and PhD in electrical engineering from the Korea Advanced Institute of Science and Technology, Daejeon, Rep. of Korea, in 2004 and 2011, respectively. He has been a senior researcher at ETRI, Daejeon, Rep. of Korea, since 2004 and an adjunct professor in Mobile Communication and Digital Broadcasting Engineering Department at the University of Science and Technology, Daejeon, Rep. of Korea, since 2011. His research interests include image computing systems, pattern recognition, video compression, and UHDTV broadcasting systems.
Jongho Kim received his BS degree from the control and computer engineering department, Korea Maritime University, Busan, Rep. of Korea, in 2005 and his MS degree from the University of Science and Technology, Daejeon, Rep. of Korea, in 2007. In September 2008, he joined ETRI, Daejeon, Rep. of Korea, where he is currently a researcher. His research interests include video processing and video coding.
Jin Soo Choi received his BE, ME, and PhD degrees in electronics engineering from Kyungpook National University, Daegu, Rep. of Korea, in 1990, 1992, and 1996, respectively. Since 1996, he has been a principal member of the research staff at ETRI, Daejeon, Rep. of Korea. He has been involved in developing the MPEG-4 codec system, data broadcasting systems, and the 3D/UHDTV broadcasting system. His research interests include visual signal processing and interactive services in the field of digital broadcasting technology.
Jinwoong Kim received his BS and MS in electronics engineering from Seoul National University, Rep. of Korea, in 1981 and 1983, respectively. He received his PhD in electrical engineering from Texas A&M University, College Station, TX, USA, in 1993. He has been working at ETRI since 1983 and is now a principal member of the research staff and director of the Realistic Broadcasting Media Research Department. He carried out many government-funded R&D projects on digital broadcasting technologies, including MPEG video and audio data compression technologies, data broadcasting, viewer-customized broadcasting, and MPEG-7/MPEG-21 metadata-related technologies. Currently, his research interests are focused on realistic media technologies, such as stereoscopic/multiview 3DTV, UHDTV, panoramic video, and digital holography. He was a chair of the 3DTV standardization project group of TTA. He was a Far-East Liaison of the 3DTV Conference in 2007-2008 and has been an invited speaker at a number of international workshops and conferences, including 3D Fair 2008 and 3DTV Conference 2010, ICTC2013.
References

[1] Draft ITU-T Recommendation and Final Draft International Standard of Joint Video Specification, JVT-G050, ITU-T Rec. H.264 and ISO/IEC 14496-10 AVC, Joint Video Team (JVT) of ITU-T VCEG and ISO/IEC MPEG, 2003.
[2] High Efficiency Video Coding (HEVC) Text Specification Draft 10 (for FDIS & Consent), JCTVC-L1003, ITU-T/ISO/IEC Joint Collaborative Team on Video Coding (JCT-VC), 2013.
[3] G.J. Sullivan et al., "Overview of the High Efficiency Video Coding (HEVC) Standard," IEEE Trans. Circuits Syst. Video Technol., vol. 22, no. 12, 2012, pp. 1649–1668. DOI: 10.1109/TCSVT.2012.2221191
[4] J.R. Ohm et al., "Comparison of the Coding Efficiency of Video Coding Standards — Including High Efficiency Video Coding (HEVC)," IEEE Trans. Circuits Syst. Video Technol., vol. 22, no. 12, 2012, pp. 1669–1684. DOI: 10.1109/TCSVT.2012.2221192
[5] C.C. Cheng and T.S. Chang, "Fast Three Step Intra Prediction Algorithm for 4×4 Blocks in H.264," IEEE Int. Symp. Circuits Syst., May 23–26, 2005, vol. 2, pp. 1509–1512. DOI: 10.1109/ISCAS.2005.1464886
[6] B. Meng and O.C. Au, "Fast Intra-prediction Mode Selection for 4×4 Blocks in H.264," IEEE Int. Conf. Acoust., Speech, Signal Process., Apr. 6–10, 2003, vol. 3, pp. 389–392.
[7] Y.D. Zhang, F. Dai, and S.X. Lin, "Fast 4×4 Intra-prediction Mode Selection for H.264," IEEE Int. Conf. Multimedia Expo, June 27–30, 2004, vol. 2, pp. 1151–1154.
[8] L. Shen et al., "An Adaptive and Fast Multiframe Selection Algorithm for H.264 Video Coding," IEEE Signal Process. Lett., vol. 14, no. 11, 2007, pp. 836–839. DOI: 10.1109/LSP.2007.898343
[9] D. Jun and H. Park, "An Efficient Priority-Based Reference Frame Selection Method for Fast Motion Estimation in H.264/AVC," IEEE Trans. Circuits Syst. Video Technol., vol. 20, no. 8, 2010, pp. 1156–1161. DOI: 10.1109/TCSVT.2010.2057016
[10] B.G. Kim and J.H. Kim, "Efficient Intra-mode Decision Algorithm for Inter-frames in H.264/AVC Video Coding," IET Image Process., vol. 5, no. 3, 2011, pp. 286–295. DOI: 10.1049/iet-ipr.2009.0097
[11] X. Lu et al., "Fast Mode Decision and Motion Estimation for H.264 with a Focus on MPEG-2/H.264 Transcoding," IEEE Int. Symp. Circuits Syst., May 23–26, 2005, vol. 2, pp. 1246–1249.
[12] F. Pan et al., "A Directional Field Based Fast Intra Mode Decision Algorithm for H.264 Video Coding," IEEE Int. Conf. Multimedia Expo, June 27–30, 2004, vol. 2, pp. 1147–1150. DOI: 10.1109/ICME.2004.1394420
[13] Z. Chen et al., "Fast Integer-pel and Fractional-pel Motion Estimation for H.264/AVC," J. Visual Commun. Image Representation, vol. 17, no. 2, 2006, pp. 264–290. DOI: 10.1016/j.jvcir.2004.12.002
[14] Early SKIP Detection for HEVC, JCTVC-G543, JCT-VC of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11, 2011.
[15] Early Termination of CU Encoding to Reduce HEVC Complexity, JCTVC-F045, JCT-VC of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11, 2011.
[16] Coding Tree Pruning Based CU Early Termination, JCTVC-F092, JCT-VC of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11, 2011.
[17] J. Xiong et al., "A Fast HEVC Inter CU Selection Method Based on Pyramid Motion Divergence," IEEE Trans. Multimedia, vol. 16, no. 2, 2014, pp. 559–564. DOI: 10.1109/TMM.2013.2291958
[18] Y. Kim et al., "A Fast Intra-prediction Method in HEVC Using Rate-Distortion Estimation Based on Hadamard Transform," ETRI J., vol. 35, no. 2, 2013, pp. 270–280. DOI: 10.4218/etrij.13.0112.0223
[19] J. Kim et al., "An SAD-Based Selective Bi-prediction Method for Fast Motion Estimation in High Efficiency Video Coding," ETRI J., vol. 34, no. 5, 2012, pp. 753–758.
[20] G.J. Sullivan and T. Wiegand, "Rate-Distortion Optimization for Video Compression," IEEE Signal Process. Mag., vol. 15, no. 6, 1998, pp. 74–90. DOI: 10.1109/79.733497
[21] Common HM Test Conditions and Software Reference Configurations, JCTVC-L1100, ITU-T/ISO/IEC Joint Collaborative Team on Video Coding (JCT-VC), 2013.
[22] Calculation of Average PSNR Differences Between RD-Curves, VCEG-M33, ITU-T Q6/SG16, 2001.