The world of internet video streaming has experienced an astonishing rate of growth, providing a natural conduit for catch-up and VoD services that have quickly spread from international pioneers through to a whole host of national and niche services. Even as internet service providers expand backbone capacity and increase ‘last mile’ broadband connection speeds to the user, content providers are looking to emerging compression methods as a way of meeting consumer expectations.
In this multiscreen, multi-platform world, there are hundreds of permutations for delivering video at a given level of quality and corresponding bitrate. Making the right strategic decisions during the encoding process can mean the difference between delivering an also-ran service, drowned out by myriad competitors, and an industry-leading platform that’s at the top of its game.
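In practice, those permutations are usually organized as an adaptive-bitrate (ABR) ladder: a set of resolution/bitrate rungs from which the player picks according to network conditions. A minimal sketch of the idea follows; the rungs and figures are hypothetical, not a recommendation, and real ladders are tuned per codec, content type and device ecosystem.

```python
# Hypothetical ABR ladder: each rung pairs a resolution with a target
# bitrate in kbps. Figures are illustrative only.
ladder = [
    {"resolution": (640, 360),   "bitrate_kbps": 800},
    {"resolution": (1280, 720),  "bitrate_kbps": 2500},
    {"resolution": (1920, 1080), "bitrate_kbps": 4500},
    {"resolution": (3840, 2160), "bitrate_kbps": 12000},
]

def pick_rung(available_kbps, ladder):
    """Choose the highest rung whose bitrate fits the measured throughput."""
    fitting = [r for r in ladder if r["bitrate_kbps"] <= available_kbps]
    # Fall back to the lowest rung if even that exceeds the connection.
    return fitting[-1] if fitting else ladder[0]

print(pick_rung(3000, ladder))  # selects the 720p rung
```

A more efficient codec effectively shifts every rung of such a ladder downward in bitrate, which is why encoder efficiency compounds across an entire service.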
High performance encoding requires world-class algorithms
It’s here, in the world of compression algorithms, that MediaKind takes a different approach from many of our competitors. Video encoders are not all equal – and the work put into improving the encoding process has not stood still.
This is where the benefits of open standards come into play. Today, we can see a multitude of examples where the decoder is fully defined but the ability to innovate and improve the encoder still exists. To put it into context, the first-generation MPEG-2 encoders were developed nearly 25 years ago, yet MPEG-2 SD is still widely in use. Looking at the basic progression of compression efficiency over this period, it’s fascinating to see how MPEG-2 has continued to see regular year-on-year improvements. The same will very likely be true for MPEG-4 AVC, HEVC and VVC.
Encoder implementation improvements have led to significant efficiency gains in all major encoding standards over many years – in some cases long after we might have anticipated the asymptote of performance to have been reached.
While it is possible to get off-the-shelf implementations of encoding standards, these are not typically driven by a commercial need to provide more efficiency to operators and broadcasters. By saving bandwidth for the same quality, broadcasters have a compelling business model for upgrading to more efficient encoding. It is therefore important for encoder vendors who are providing solutions to operators and broadcasters to be fully in control of the algorithms and implementations of the encoder.
There are two fundamental rules for building a coherent encoding strategy. Firstly, encoding is a case of diminishing returns, so weigh the value of greater efficiency against the cost of achieving it. For some applications, encoding cost will be the key parameter; for others, the spectrum/capacity cost can be more important, making the additional processing cost worthwhile. It is often significantly cheaper to utilize existing standards and improve compression performance than to upgrade an entire infrastructure.
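That trade-off can be framed as a simple break-even question: how long until the bandwidth saved by a more efficient encoder repays its cost? The toy model below makes the arithmetic explicit; all figures and the function itself are hypothetical, not a pricing claim.

```python
# Toy break-even model (all figures hypothetical): is more efficient
# encoding worth its extra cost for a given delivery volume?
def breakeven_months(upgrade_cost, monthly_bandwidth_cost, efficiency_gain):
    """Months until bandwidth savings repay the encoder upgrade.
    efficiency_gain is the fractional bitrate reduction, e.g. 0.30 for 30%.
    """
    monthly_saving = monthly_bandwidth_cost * efficiency_gain
    return upgrade_cost / monthly_saving

# e.g. a $120k upgrade, $50k/month delivery cost, 30% bitrate saving:
print(breakeven_months(120_000, 50_000, 0.30))  # 8.0 months
```

The same model shows why the answer differs by application: a niche service with low delivery volume may never reach break-even, while a large distribution platform recovers the cost quickly.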
As long as we can enhance encoding efficiency using existing standards, there will always be opportunities for operators to seek further benefits. Since MPEG-2, we have seen how each subsequent standard has successfully achieved the target of halving the bitrate requirement of its predecessor – as seen in the step from MPEG-4 AVC to HEVC.
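Because each halving applies to the previous generation's bitrate, the savings compound. A back-of-the-envelope sketch, assuming a hypothetical 4 Mbps MPEG-2 SD baseline and an idealized halving per generation:

```python
# Illustrative only: if each codec generation roughly halves the bitrate
# needed for the same perceived quality, the savings compound.
baseline_mbps = 4.0  # hypothetical MPEG-2 SD channel bitrate
generations = ["MPEG-2", "MPEG-4 AVC", "HEVC", "VVC"]

for i, codec in enumerate(generations):
    print(f"{codec}: ~{baseline_mbps / 2 ** i:.1f} Mbps")
```

Real-world gains depend heavily on content, resolution and encoder maturity, which is precisely why continued encoder refinement within each standard matters as much as the standards themselves.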
Secondly, owning the entire encoder processing implementation, and the video research that underpins it, is a major strategic advantage. As encoders are refined, this offers operators the operational benefit of more efficient encoding engines that can be seamlessly upgraded without impacting workflows, with the assurance that all elements will work as expected. We are already seeing how the same deep development projects used to improve the pioneering MPEG family of encoders are also ramping up for the current generation of HEVC encoders. The same is true for next-generation options such as VVC, which is expected to be ratified in 2020.
Ready to adapt
However, it’s not just the operator-to-consumer workflow that is impacted. Many broadcasters are in the middle of a transition period between HD/SDR and UHD/HDR workflows, and for many, it’s common to run two separate production workflows. Fairly quickly, we can expect this to collapse into a single UHD/HDR-based workflow, with all other outputs derived from that source.
Although many in the industry are looking with interest at HEVC and beyond to EVC and VVC, the reality for the operator is there is still a lot of video content that is encoded using existing standards. These are not going away anytime soon and even as workflows shift to UHD/HDR, ensuring continuity of service for all subscribers is critical. We can expect HD services to predominantly use MPEG-4 AVC for many years.
Lastly, and perhaps most importantly, compression strategies must be considered part of an end-to-end delivery chain. The modern media landscape has a myriad of service delivery options, business models and device ecosystems. These are underpinned by new operating models embracing beneficial technologies such as the cloud and microservices architectures, which impact long-term CAPEX/OPEX calculations. On the horizon are emerging trends such as targeted advertising and immersive 360-degree video that, although still at an early stage, may ramp up quickly.
Compression is no longer an isolated silo within the media delivery landscape. Instead, industry pioneers recognize that flexibility and seamless integration are essential to meet wider strategic goals. In my next blog, we will look at how compression technologies support the concept of true end-to-end service delivery.