Bourtsoulatze, Eirina and Chadha, Aaron and Fadeev, Ilya and Giotsas, Vasileios and Andreopoulos, Yiannis (2020) Deep Video Precoding. IEEE Transactions on Circuits and Systems for Video Technology, 30 (12). pp. 4913-4928. DOI: https://doi.org/10.1109/tcsvt.2019.2960084
Abstract
Several groups worldwide are currently investigating how deep learning may advance the state-of-the-art in image and video coding. An open question is how to make deep neural networks work in conjunction with existing (and upcoming) video codecs, such as MPEG H.264/AVC, H.265/HEVC, VVC, Google VP9 and AOMedia AV1, AV2, as well as existing container and transport formats, without imposing any changes at the client side. Such compatibility is a crucial aspect when it comes to practical deployment, especially when considering the fact that the video content industry and hardware manufacturers are expected to remain committed to supporting these standards for the foreseeable future. We propose to use deep neural networks as precoders for current and future video codecs and adaptive video streaming systems. In our current design, the core precoding component comprises a cascaded structure of downscaling neural networks that operates during video encoding, prior to transmission. This is coupled with a precoding mode selection algorithm for each independently-decodable stream segment, which adjusts the downscaling factor according to scene characteristics, the utilized encoder, and the desired bitrate and encoding configuration. Our framework is compatible with all current and future codec and transport standards, as our deep precoding network structure is trained in conjunction with linear upscaling filters (e.g., the bilinear filter), which are supported by all web video players. Extensive evaluation on FHD (1080p) and UHD (2160p) content and with widely-used H.264/AVC, H.265/HEVC and VP9 encoders, as well as a preliminary evaluation with the current test model of VVC (v.6.2rc1), shows that coupling such standards with the proposed deep video precoding allows for 8% to 52% rate reduction under encoding configurations and bitrates suitable for video-on-demand adaptive streaming systems. The use of precoding can also lead to encoding complexity reduction, which is essential for cost-effective cloud deployment of complex encoders like H.265/HEVC, VP9 and VVC, especially when considering the prominence of high-resolution adaptive video streaming.
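The abstract describes a downscaling neural-network precoder that is trained jointly with a client-side linear upscaler (e.g., the bilinear filter), so that standard players can recover the full-resolution video without any client-side changes. The snippet below is a minimal PyTorch sketch of that idea only: a small convolutional downscaler optimized against an MSE loss on the bilinearly upscaled output. The layer widths, the fixed 2x factor, the loss, and all names here are illustrative assumptions, not the cascaded architecture or the precoding mode selection algorithm from the paper.

```python
# Illustrative sketch only -- not the authors' implementation.
# A convolutional precoder downscales a frame before standard encoding;
# training assumes the client will upscale with a plain bilinear filter.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DownscalingPrecoder(nn.Module):
    """Hypothetical precoder that halves spatial resolution before encoding."""

    def __init__(self, channels: int = 32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Strided convolution performs the 2x spatial downscaling.
        self.downscale = nn.Conv2d(channels, 3, kernel_size=3, stride=2, padding=1)

    def forward(self, frame: torch.Tensor) -> torch.Tensor:
        return self.downscale(self.features(frame))


def training_step(model, frame, optimizer):
    """One reconstruction step: compare the original frame against the
    bilinearly upscaled precoder output, mirroring the client-side upscaler."""
    optimizer.zero_grad()
    low_res = model(frame)
    upscaled = F.interpolate(
        low_res, size=frame.shape[-2:], mode="bilinear", align_corners=False
    )
    loss = F.mse_loss(upscaled, frame)
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    model = DownscalingPrecoder()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    # Small random frame in [0, 1] to keep the demo lightweight;
    # real training would use FHD/UHD video patches.
    dummy_frame = torch.rand(1, 3, 256, 256)
    print(training_step(model, dummy_frame, optimizer))
```

In an end-to-end pipeline, the low-resolution output of such a precoder would be fed to a standard encoder (H.264/AVC, H.265/HEVC, VP9 or VVC), and the decoded stream would be upscaled by the player's built-in bilinear filter, which is why no client-side modification is required.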
| Item Type: | Article |
|---|---|
| Divisions: | Faculty of Science and Health; Faculty of Science and Health > Computer Science and Electronic Engineering, School of |
| SWORD Depositor: | Unnamed user with email elements@essex.ac.uk |
| Depositing User: | Unnamed user with email elements@essex.ac.uk |
| Date Deposited: | 21 Apr 2020 14:34 |
| Last Modified: | 23 Sep 2022 19:39 |
| URI: | http://repository.essex.ac.uk/id/eprint/27250 |
Available files
Filename: adaptive_precoding_accepted.pdf