The “Near Future” videos that CableLabs has produced over recent years have a theme of ubiquitous connectivity and interactivity: People are able to interact with each other in real-time from and to anywhere. With that type of connectivity, potential applications are limited only by our imagination.
The cable industry’s 10G initiative is a part of realizing that future by laying the foundation — enabling the technology — needed to make it a reality. As the name implies, speed — in the form of very high-speed broadband networks — is of course one part of the 10G platform. But it takes more than speed to enable the technologies shown in the “Near Future” videos, which is why the 10G platform stands on other pillars as well: reliability, security, and latency. Reliability ensures that broadband is always available and working; security ensures that information is safe and protected; and low latency enables real-time interactions without noticeable delays.
Reducing and managing latency — the time it takes for a packet to travel from one point on a network to another — is not just about enabling these future technologies, though; improving network responsiveness and reducing delay also enable new business opportunities and improve the customer experience in the here and now. That is why it is something our industry has been working on for some time and is investing in more heavily now.
One of the first efforts CableLabs engaged in was to address what is known as “buffer bloat,” increased latency caused by large packet buffers in cable equipment. This was initially addressed by providing controls to manage the size of the packet buffers, allowing a reduction in latency down to ~10 milliseconds (ms) when idle and ~100 ms under load. This was followed by a feature known as active queue management (AQM), which allows operators to target shorter average latencies by more intelligently managing the packet buffers, enabling latencies under load of ~10 ms.
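To make the AQM idea concrete, here is a minimal, illustrative sketch of a PIE-style latency controller in Python. It is not the DOCSIS-PIE algorithm from the specifications — the gains, update period, and scaling below are assumptions chosen for readability — but it shows the core mechanism: rather than dropping packets only when the buffer overflows, the queue probabilistically drops arrivals as its estimated latency drifts above a target, which keeps the average queue short.

```python
import random
from collections import deque

class SimpleAQM:
    """Illustrative PIE-style active queue manager (not the DOCSIS-PIE spec).

    A drop probability is steered by how far the current queuing latency
    sits from a target, plus how fast it is changing. Early drops signal
    TCP-like senders to slow down before the buffer fills.
    """

    def __init__(self, target_latency_ms=10.0, alpha=0.125, beta=1.25):
        self.target = target_latency_ms
        self.alpha, self.beta = alpha, beta  # controller gains (illustrative)
        self.drop_prob = 0.0
        self.prev_latency = 0.0
        self.queue = deque()

    def update(self, current_latency_ms):
        """Periodic controller step: nudge drop probability up or down."""
        delta = (self.alpha * (current_latency_ms - self.target)
                 + self.beta * (current_latency_ms - self.prev_latency))
        self.drop_prob = min(1.0, max(0.0, self.drop_prob + delta / 100.0))
        self.prev_latency = current_latency_ms

    def enqueue(self, packet):
        """Admit or early-drop an arriving packet."""
        if random.random() < self.drop_prob:
            return False  # early drop: implicit "slow down" signal
        self.queue.append(packet)
        return True
```

The key design point is that the controller reacts to *latency*, not buffer occupancy, which is what lets operators target a latency figure (such as ~10 ms under load) directly.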
Our industry has also worked to reduce latency caused by media access, the time it takes to transmit a packet onto a shared medium, such as a DOCSIS upstream. As far back as the DOCSIS 1.1 specifications, we introduced new scheduling services — such as the real-time polling service (RTPS) — which were intended to reduce latency. More recently, we added a new proactive grant service (PGS), which allows a cable modem termination system (CMTS) to provide grants to a cable modem (CM) even before it requests them.
All of these techniques help improve latency for traffic in a variety of situations. However, some applications require even lower and/or more consistent latency. A key realization is that those applications also have different traffic patterns and needs. Therefore, achieving further latency improvements requires looking at the specific needs of each application and which sources of latency most affect it.
Multiplayer Online Gaming
An application space that can be very sensitive to latency and jitter (variations in latency) is multiplayer online gaming: playing a game locally (on a computer, phone, or console) while competing in a shared world online. This is a massive mainstream market, with SuperData reporting global revenue of $120B in 2019, larger than the cinema ($41B) and music ($19B) industries combined. In the US alone there are 44 million active console and PC players who depend on high-performance networks for the best gaming experience. Gamers are network power users with high household network usage and spend, so reducing latency and jitter for online gaming could open up significant business opportunities for cable operators.
Gaming traffic tends to be low data rate and very latency sensitive, so it does not tend to fill up queues or buffers; hence, we refer to it as non-queue building (NQB). In comparison, applications such as video streaming are not very latency sensitive but operate at higher data rates, so they tend to keep buffers/queues as full as possible; therefore, we refer to them as queue-building (QB).
Today, most user traffic shares a single queue within cable modems and Wi-Fi access points. When mixed together in that single queue, NQB traffic can get stuck waiting behind QB traffic, increasing its latency. Separating QB and NQB traffic into separate queues, each optimized for that type of traffic, eliminates the problem.
An existing feature called Wi-Fi Multimedia (WMM) uses the differentiated services code point (DSCP) field to mark traffic for different queues. That addresses the separation of QB and NQB traffic over Wi-Fi.
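As a concrete illustration of DSCP marking, the short Python sketch below opens a UDP socket whose packets carry an NQB code point. A DSCP value of 45 has been proposed for NQB in IETF drafts, but the final value and operator handling should be confirmed — treat the constant here as an assumption. The 6-bit DSCP occupies the upper bits of the IP TOS byte, hence the left shift by 2.

```python
import socket

# Assumed NQB DSCP code point (per IETF NQB drafts; confirm before use).
NQB_DSCP = 45

def open_nqb_socket():
    """Open a UDP socket whose outgoing packets carry the NQB DSCP mark.

    Setting IP_TOS to (DSCP << 2) places the code point in the DS field
    of the IP header, which WMM-capable Wi-Fi gear and DSCP-aware DOCSIS
    classifiers can then use to steer the traffic to a low-latency queue.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, NQB_DSCP << 2)
    return sock
```

Because the mark travels in the IP header, a game developer only has to set it once at the sender; the network elements along the path do the queue separation.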
The DOCSIS® specifications also have a solution for separating traffic into queues through the use of classifiers and service flows. A further enhancement ties two service flows together in an aggregate service flow. This enables the use of a single service rate shared by both queues (which better aligns with cable operator service models), and also allows excess traffic in the NQB queue to spill over into the QB queue, protecting latency without dropping traffic.
The end result is a DOCSIS feature we refer to as Dual-Queue Coupled AQM with Queue Protection. That feature also classifies traffic using the same DSCP marking as WMM by default. A single marking method therefore provides appropriate handling on both Wi-Fi and DOCSIS networks, making it easy for both game developers and cable operators to implement.
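The dual-queue behavior described above can be sketched in a few lines of Python. This is a toy model, not the algorithm from the DOCSIS specifications: the DSCP constant, the threshold-based queue protection, and the strict-priority drain order are all simplifying assumptions standing in for the spec's coupled AQM and per-flow queue-protection scoring. It does, however, show the two defining behaviors: NQB-marked traffic gets its own short queue, and excess NQB arrivals spill into the classic queue rather than being dropped.

```python
from collections import deque

class DualQueue:
    """Toy sketch of dual-queue separation with queue protection.

    NQB-marked packets go to a short low-latency queue; everything else
    goes to the classic queue. Once the low-latency queue exceeds a
    threshold, further NQB arrivals are demoted ("spilled") into the
    classic queue, so a mismarked high-rate flow cannot build a deep
    queue and ruin latency for well-behaved NQB traffic.
    """

    NQB_DSCP = 45  # assumed NQB code point (IETF proposal)

    def __init__(self, protect_threshold=8):
        self.low_latency = deque()
        self.classic = deque()
        self.protect_threshold = protect_threshold

    def enqueue(self, packet, dscp):
        if dscp == self.NQB_DSCP and len(self.low_latency) < self.protect_threshold:
            self.low_latency.append(packet)
        else:
            # QB traffic, or NQB overflow spilled into the classic queue
            self.classic.append(packet)

    def dequeue(self):
        """Drain the low-latency queue first (illustrative priority)."""
        if self.low_latency:
            return self.low_latency.popleft()
        if self.classic:
            return self.classic.popleft()
        return None
```

In the real feature, both queues sit under one aggregate service flow with a shared rate shape, which is what keeps the scheme compatible with existing cable service tiers.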
The resulting improvement in latency is dramatic: consistent sub-5 ms round-trip latencies for NQB traffic are easily achievable, and simulations show that in certain circumstances 1 ms for NQB traffic may be possible.
But what about services such as game streaming — running the game on a remote server and streaming it to an end device — which require not just low latency but also high data rates (on the order of tens of megabits per second)? Separating the traffic into two queues is not enough; new algorithms to adapt to changing capacity along the network path are needed.
It is for that reason that support for an emerging new technology known as Low Latency, Low Loss, Scalable throughput (L4S) was included in the DOCSIS specifications. This builds on top of the features mentioned previously, and if implemented by the applications at both ends of a network connection as well as any bottleneck points in between, it permits much higher data rates with consistently low latency. By including this support, cable operators are well positioned to offer new services to support those applications.
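The "scalable" part of L4S refers to how the sender reacts to congestion signals. A minimal sketch of that reaction, under stated assumptions, is shown below: it follows the DCTCP/TCP-Prague style of control, where the network marks packets with ECN in proportion to queue buildup and the sender cuts its rate in proportion to the fraction of marked packets. The function name, gains, and step sizes here are illustrative, not from any specification.

```python
def scalable_rate_update(rate_mbps, marked_fraction, alpha_prev=0.0,
                         gain=0.0625, increase_mbps=1.0):
    """Sketch of DCTCP/Prague-style rate adaptation used by L4S senders.

    `marked_fraction` is the share of packets ECN-marked in the last
    round trip. An EWMA (`alpha`) smooths that signal; the sender backs
    off proportionally to it when marks are seen, and otherwise probes
    upward. This fine-grained signal is what lets a high-rate flow keep
    only a tiny standing queue. All constants are illustrative.
    """
    alpha = (1 - gain) * alpha_prev + gain * marked_fraction
    if marked_fraction > 0:
        rate_mbps *= (1 - alpha / 2)   # proportional back-off, not a halving
    else:
        rate_mbps += increase_mbps     # additive probe for spare capacity
    return rate_mbps, alpha
```

Contrast this with classic TCP, which halves its rate on any loss: the proportional response is why a game-streaming flow can hold tens of megabits per second while the bottleneck queue stays only a few packets deep.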
Mobile Xhaul
Another near-term opportunity for cable operators that would greatly benefit from reducing latency across the DOCSIS network is mobile xhaul: carrying mobile traffic across the DOCSIS network.
A well-known hurdle for small cell densification is the need for cost-efficient transport. DOCSIS networks, with their large existing footprint, offer an excellent alternative to installing new fiber, as long as latencies are low enough. Unfortunately, the technologies that benefit online gaming and game streaming do not help us here because mobile traffic is encapsulated and is transparent to DOCSIS network equipment. Plus, there is a more significant source of latency to address.
Both mobile and DOCSIS networks have an inherent latency in the upstream due to multi-user access in a shared upstream. When added together, the total latency is even greater. The solution we have defined is a mechanism referred to as pipelining. Pipelining utilizes a new message called a bandwidth report (BWR) to allow the wireless scheduler to coordinate with the DOCSIS scheduler to move data across both systems as quickly as possible. The wireless scheduler uses the BWR message to inform the DOCSIS scheduler that data will be coming before it actually arrives, allowing the DOCSIS scheduler to prepare a data grant in advance, virtually eliminating that source of latency.
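The benefit of pipelining can be shown with a toy timeline model. The function below compares the classic DOCSIS request/grant cycle with a BWR-pipelined one; the delay values and the function itself are illustrative assumptions, not figures from the specifications, but the structure matches the mechanism described above: with an early BWR, the grant can already be prepared when the data arrives.

```python
def upstream_latency_ms(data_arrival_ms, bwr_lead_ms=None,
                        request_grant_delay_ms=5.0,
                        grant_interval_ms=2.0):
    """Toy timeline of the DOCSIS upstream grant cycle (illustrative numbers).

    Classic path: after data arrives, the CM sends a bandwidth request and
    waits out the request/grant loop before a grant interval opens.
    Pipelined path: the wireless scheduler's BWR told the CMTS about the
    data `bwr_lead_ms` early, so the request/grant loop overlaps the
    data's journey and the grant can be ready at (or soon after) arrival.
    Returns the queuing delay the data experiences at the CM.
    """
    if bwr_lead_ms is None:
        grant_time = (data_arrival_ms + request_grant_delay_ms
                      + grant_interval_ms)
    else:
        grant_ready = data_arrival_ms - bwr_lead_ms + request_grant_delay_ms
        grant_time = max(data_arrival_ms, grant_ready) + grant_interval_ms
    return grant_time - data_arrival_ms
```

With a BWR lead time that covers the request/grant delay, the residual latency collapses to roughly one grant interval, which is consistent with the 1–2 ms uplink figures reported from lab trials.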
The additional beauty of this technology is that it can be applied to other scheduled media such as PON, and in fact, is being defined for other networks in the O-RAN Alliance via an initiative known as the Cooperative Transport Interface (CTI).
Becoming a Reality
CableLabs has been working with its members and vendor partners to make these technologies a reality through several different projects: the Low Latency Xhaul (LLX) project has been addressing latency reductions to support mobile xhaul; the Low Latency DOCSIS (LLD) project has been addressing everything else over the DOCSIS network; reducing latency over Wi-Fi is a part of our Low Latency Wi-Fi (LLW) project; outreach to game developers is ongoing; and a latency measurement effort is underway. These efforts and others have come together under a single strategic umbrella, which we refer to as our Low Latency Program.
The reality is near:
- An LLX lab trial using commercial CMTS and small cell equipment has demonstrated uplink latencies in the 1-2 ms range
- LLW testing has demonstrated dramatic improvements with packet marking
- LLD lab testing and simulations have shown round-trip latencies below 5 ms for NQB traffic are readily achievable, and 1 ms may be possible
- LLD interoperability events are underway using DOCSIS 3.1 CMs and CMTSs, and the implementations are quickly maturing
While these technologies enable several short-term opportunities, they are not just limited to those applications. The latency improvements could enable applications we have not even thought of yet. At its core, that is the intent of the 10G initiative: to enable the innovations of the future. And that future appears near at hand.
By: The CableLabs Low Latency Program Team
The core of the Low Latency Program Team comprises individuals who have been leads, strategists, or product analysts on CableLabs® efforts including AR/VR, optical technologies, wireless technologies, and all versions of the DOCSIS® specifications from 1.1 onward. In total, the team represents over 80 years of combined senior-level technical and leadership experience at CableLabs, demonstrating the substantial investment by CableLabs in low latency technology, one of the pillars of 10G.
Pictured right, clockwise from the upper left: Greg White, Matt Schmitt, Steve Glennon, Barry Ferris, Karthik Sundaresan, Dr. Jennifer Andreoli-Fang, and Shahed Mazumder.