2022: The year live sports streaming kicks latency issues into touch?

By Chris Wilson, Director of Market Development, Sports, MediaKind | February 21, 2022 | 4 min read

2022 is a year resplendent with glittering live sports events. In recent days we’ve witnessed the LA Rams win the Super Bowl, and we’re currently being treated to the ice-fueled wonder of the Winter Olympics. In the months to come, there’s still the Commonwealth Games, World Athletics Championships, Rugby League World Cup, Asian Games, the Ryder Cup, and of course, the FIFA World Cup. This year there really is something for every kind of sports fan!

According to Altman Solon’s 2021 Global Sports Survey, which looked at viewing and fandom among 18,000 respondents in 16 countries across North America, Europe, Latin America, and Asia, sports fans globally self-report watching an average of 3.5 hours of live sports per week. While that’s a large investment of time from the average viewer, the real challenge for streaming providers is that those audiences concentrate around the same live moments, creating enormous simultaneous peaks in demand.

Addressing the issue of scale and latency is an absolute priority if we are to realize a streaming world of live without limits. Audiences are increasingly less forgiving, and top-tier sporting rights owners are no longer tolerant of streaming services that underperform and lag behind their broadcast equivalents. In short, 2022 must be the year when live video latency of up to 45 seconds behind the broadcast feed becomes a thing of the past, with streaming services moving as close to broadcast equivalence as feasibly possible.

Low latency live streaming: What’s achievable?

Before answering, let’s first define what we mean by latency, because expectations can differ considerably depending on the target audience. Here we reference end-to-end (E2E) latency, from camera to screen – through the entire broadcast chain. For sports streaming, there are really three benchmarks when it comes to latency:

  • Equivalent to broadcast latency (approx. 6 seconds, but can be up to 15)
  • Equivalent to Social Media content/post latency (as quickly as you can type + a few seconds!)
  • Near-real-time data feeds

Each one needs progressively lower latency, which can impact quality and drive up infrastructure costs. VOD is relatively easy in this respect. However, real-time, high-quality live sports is much harder! Of course, completely eradicating delays in live video is but a pipe dream – no pun intended. Streaming video will always carry an element of delay. The E2E value chain dictates that the sports content must be captured, produced, distributed, encoded, packaged, stored – and then transitioned to CDNs for eventual delivery.
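
To make that value chain concrete, here’s a minimal back-of-envelope latency budget in Python. Every figure is an assumed, round-number illustration of where delay typically accrues, not a measurement of any real deployment:

```python
# Illustrative glass-to-glass latency budget for a segmented live stream.
# All stage figures are assumptions for the sake of the example.
stage_latency_s = {
    "capture + production": 1.0,
    "contribution encode/transport": 1.5,
    "distribution encoding": 2.0,
    "packaging": 6.0,       # often ~one full segment duration
    "CDN propagation": 1.5,
    "player buffer": 18.0,  # e.g., three 6-second segments
}

total = sum(stage_latency_s.values())
for stage, seconds in stage_latency_s.items():
    print(f"{stage:30s} {seconds:5.1f} s")
print(f"{'total (glass-to-glass)':30s} {total:5.1f} s")
```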

But due to how live video is processed, latency is always increased with online streaming. Video and audio are split into discrete segments, each delivered and processed separately, and segment sizing is a very fine balance. If the segments are too small, processing becomes sub-optimal, with overhead incurred on every segment. If they are too big, processing takes too long, because each segment must be generated in its entirety before the next phase of the broadcast chain can be initiated.
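
The trade-off is easy to see with a little arithmetic. As a sketch, assume a player that buffers three segments before starting playback (a common default in HLS/DASH players, though the exact figure varies):

```python
# Sketch of the segment-sizing trade-off described above.
# Assumption: the player buffers three segments before playback begins.
BUFFERED_SEGMENTS = 3
STREAM_SECONDS = 60  # one minute of live content

for segment_s in (1, 2, 4, 6, 10):
    latency_floor = BUFFERED_SEGMENTS * segment_s  # grows with segment size
    requests = STREAM_SECONDS / segment_s          # overhead grows as size shrinks
    print(f"{segment_s:2d}s segments -> latency floor {latency_floor:2d}s, "
          f"{requests:3.0f} requests/min per rendition")
```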

In some instances, deploying an ultra-low latency solution (sub-second delay) may not make sense from an ROI standpoint, as this interesting LinkedIn post from Dan Rayburn highlights. An ultra-low latency solution is not cheap, and the cost of implementation can outweigh any direct benefit – particularly for large-scale events such as the Super Bowl, where viewer numbers and ad revenues won’t be significantly impacted by a small reduction in latency. Ultra-low latency solutions are also typically based on WebRTC, meaning they can’t be scaled via conventional CDNs and are therefore an expensive option requiring custom processing in the service provider’s network. Broadcasters and rightsholders must therefore question where the demand for low latency comes from, and what value they’re extracting from it.

Realizing the pathway for broadcast-quality streaming

There’s no doubt we need to reduce the level of glass-to-glass latency – and fast. Today, it’s not uncommon to watch streamed sports content where the latency can be anywhere from 45-60 seconds – and a couple of minutes in some instances. If that number can be cut to under seven seconds – the same level as broadcast transmission – the risk of social media spoiling key moments for the viewer is all but eliminated. It would take lightning-quick fingers to type, publish, and process a tweet faster than the live stream reaches your screen!

Another consideration is the burgeoning growth of online sports betting. If the betting industry is to attract a legion of customers and fans in this market, streaming providers must provide dependable, broadcast-quality video with the lowest possible delay to support the interactive betting software. This will ensure viewers can be confident they are betting on a level playing field with fans in attendance at the court, stadium, or arena.

Yet as consumer viewing patterns become increasingly fragmented, betting material must now be distributed to potentially millions of devices, which is a whole new ball game! Scaling up this style of communication to massive numbers of people and devices comes with several obstacles, and what level of delay end-users will consider acceptable for this type of information is as yet unknown.

The need for direct path technology

Low latency solutions should be the first port of call for streaming providers, and there are two key elements they must include. The first is the use of Common Media Application Format (CMAF) low-latency chunked encoding with HTTP/1.1 chunked transfer encoding, which eliminates unnecessary delays between, for example, a packager and a CDN. With a traditional ABR format, the entire segment must be completed before its length is known and it can be signaled for transfer to the subsequent stage.
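
As a minimal sketch of the delivery side, Python’s standard http.client can send an iterable body with HTTP/1.1 chunked transfer encoding, so each CMAF chunk is pushed upstream the moment it exists rather than after the full segment completes. The origin host, URL path, and chunk source here are hypothetical placeholders:

```python
# Pushing CMAF chunks upstream with HTTP/1.1 chunked transfer encoding.
# The origin host, URL path, and chunk generator are illustrative only.
import http.client

def cmaf_chunks():
    """Yield CMAF chunks (moof+mdat pairs) as a live encoder would emit them."""
    for i in range(8):                   # stand-in for a live encoder loop
        yield f"chunk-{i}".encode()      # real chunks would be fMP4 bytes

conn = http.client.HTTPConnection("origin.example.com")
conn.request(
    "PUT", "/live/video/segment-42.m4s",
    body=cmaf_chunks(),      # an iterable body...
    encode_chunked=True,     # ...is sent with Transfer-Encoding: chunked
)
print(conn.getresponse().status)
```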

A traditional ABR system might have segments in the order of 6-10 seconds. Each time a segment is processed or forwarded (including by the client), that segment duration is added to the cumulative latency, so the latency increases progressively through the chain, in increments of segment length. By bringing together CMAF low latency and chunked transfer encoding, service providers can now initiate the process of moving that data through the delivery chain at an earlier stage. That’s because they only need to wait for a fragment of the segment to arrive before forwarding it on to the next stage in the chain.
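
The per-hop arithmetic makes the gain obvious. Assuming, purely for illustration, three full-segment waits in the chain (packager, CDN, player) versus three single-chunk waits:

```python
# Back-of-envelope cumulative latency through the delivery chain.
# Assumption: three hops each wait for a full segment (traditional)
# or for a single chunk (CMAF low latency + chunked transfer).
HOPS = 3

def chain_latency(wait_per_hop_s: float) -> float:
    return HOPS * wait_per_hop_s

print(f"traditional, 6s segments:  {chain_latency(6.0):4.1f} s")  # 18.0 s
print(f"CMAF chunked, 0.5s chunks: {chain_latency(0.5):4.1f} s")  #  1.5 s
```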

Employing ‘direct path technology’ radically changes the way data moves from the encoder to the packager, removing buffer delays and playing a huge role in cutting the time needed to move content from one video processing function to the next. Traditionally, the connection between an encoder and a packager would have been a multi-rate, constant-bitrate connection delivered in real time, which necessarily requires buffering and therefore incurs latency.
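
A conceptual sketch of that hand-off, with illustrative class and method names (not a real product API): the encoder invokes the packager per encoded fragment, rather than writing into a real-time buffer that the packager drains.

```python
# Conceptual 'direct path' hand-off: the encoder pushes each fragment
# straight to the packager instead of via a buffered, constant-bitrate
# connection. All names here are illustrative.
class Packager:
    def on_fragment(self, track: str, data: bytes) -> None:
        # Package and forward immediately; no intermediate buffer delay.
        print(f"packaging {len(data)} bytes for track {track}")

class Encoder:
    def __init__(self, sink: Packager) -> None:
        self.sink = sink

    def encode_frames(self, raw_frames: bytes, track: str) -> None:
        fragment = bytes(reversed(raw_frames))  # stand-in for real encoding
        self.sink.on_fragment(track, fragment)  # direct, unbuffered path

Encoder(Packager()).encode_frames(b"frame-data", "video-1080p")
```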

Careful consideration also needs to be given to the synchronization of live feeds, especially where direct feeds, such as alternate cameras, are fed straight from an event/stadium and viewed alongside programming and production feeds that would be subject to additional delay. This highlights the need to aim for a predictable and repeatable delay that all feeds can be timed against.
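
As a final sketch, one simple way to realize such a predictable delay is to time every feed against a single fixed target, releasing each frame at its capture time plus the target delay regardless of how long its own path took. The target value and feed timestamps below are assumptions:

```python
# Timing all feeds against one fixed target delay so they stay in sync.
# TARGET_DELAY_S and the capture timestamps are illustrative values.
import time

TARGET_DELAY_S = 7.0  # assumed common glass-to-glass target

def release_time(capture_ts: float) -> float:
    """Wall-clock time at which a frame captured at capture_ts should be shown."""
    return capture_ts + TARGET_DELAY_S

now = time.time()
feeds = {"production feed": now - 4.5, "alternate camera": now - 0.8}
for name, capture_ts in feeds.items():
    hold = release_time(capture_ts) - now
    print(f"{name}: hold {hold:.1f}s so all feeds align at +{TARGET_DELAY_S:.0f}s")
```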

