Converging Data and Video on an R-PHY Platform: It Takes a Village

By Alan Skinner and Ernest Fabre

If a distributed access architecture (DAA) is on your technology roadmap, you’ve no doubt been contemplating the daunting task of converging data and video services. Whether using remote PHY (R-PHY) or remote MAC-PHY, the principle is the same: without RF combining in the headend to lean on, you’ve got to inject each service into your IP network so that the remote device can digitally ingest and then regenerate all of these signals onto the coax network. At Cox, our strategy has been to use remote PHY while keeping an open door to other DAA technologies. Cox resident technologist Jeff Finkelstein has written about these in past issues of Broadband Library. One of the challenges with this strategy has been supporting legacy QAM-based video services.

The concept of data and video convergence at the access layer is nothing new. The CCAP specifications from CableLabs introduced this concept back in 2010, and most of the same principles still apply even though the PHY output has been moved out into the node. Obviously, none of this is necessary if you can get rid of legacy video requirements prior to DAA. With a QAM-based CPE footprint like ours, this was not a near-term option.

Benefits of video convergence on R-PHY

Utilizing R-PHY to carry both video and data QAM channels does have some advantages over either an integrated CCAP or traditional RF combining in the headend.

One advantage is in the troubleshooting of video issues such as missing channels or tiling. Rather than drag a test rig around the headend and physically connect to whichever narrowcast combiner is feeding the node or nodes in question, you can utilize a single stationary test RPD and simply provision it to the appropriate service group. The RPD then becomes a real-time, temporary member of that group and shows you what you’d be seeing if you were in the field.

Another advantage is that video service groups can be expanded or contracted based on congestion, growth, or decline without having to perform any physical work. Depending on the type and configuration of R-PHY video core chosen, this could be as simple as a re-provisioning exercise to remove RPDs from one logical group and move them to another group.
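The logical group membership described above can be sketched as a simple in-memory model. This is purely illustrative; the class name, RPD identifiers, and service group names are invented, and a real deployment would drive the video core's management interface rather than a local data structure.

```python
# Hypothetical sketch of RPD-to-service-group re-provisioning.
# All names (ServiceGroupMap, "rpd-0017", "SG-12") are invented for
# illustration; a real system would call the video core's API.

class ServiceGroupMap:
    """Tracks which RPDs belong to which logical video service group."""

    def __init__(self):
        self.groups = {}  # sg_name -> set of RPD identifiers

    def assign(self, rpd_id, sg_name):
        """Add an RPD to a service group, creating the group if needed."""
        self.groups.setdefault(sg_name, set()).add(rpd_id)

    def move(self, rpd_id, dst_sg):
        """Remove the RPD from any current group and join dst_sg.
        This is the 're-provisioning exercise' with no physical work."""
        for members in self.groups.values():
            members.discard(rpd_id)
        self.assign(rpd_id, dst_sg)

# Example: contract SG-12 and expand SG-14 purely via provisioning.
sgs = ServiceGroupMap()
sgs.assign("rpd-0017", "SG-12")
sgs.move("rpd-0017", "SG-14")
print(sorted(sgs.groups["SG-14"]))  # ['rpd-0017']
```

The same `move` operation also covers the stationary test RPD use case: provision it into whichever service group you need to observe, then move it back out.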

Finally, your data and video teams will forge new relationships that may have been lacking when the services were in silos. And both teams will become friends with the network and outside plant groups. It takes a village to deploy and operate a converged data/video network using R-PHY.

Challenges of video convergence on R-PHY

Adding a new broadcast or narrowcast video edge device of any kind requires a great deal of testing. Adding a device that can do both, as well as support legacy out-of-band, is even more challenging. The R-PHY core must seamlessly integrate with VOD and SDV edge resource managers. And while narrowcast video services are implemented similarly from chassis to chassis and site to site, the same cannot be said of broadcast video services. The core must have the flexibility to support multiple broadcast service groups comprising separate lineups, ad zones, EAS zones, PEG channels, etc. Encryption must be accounted for as well, and every flavor of conditional access system has its own challenges. Regardless of whether encryption is internal or external to the core, it must correctly pass or generate encryption data and tables.

Support for legacy two-way STBs and one-way CableCARD devices using SCTE 55-1 and 55-2 is a challenge, and one with fairly low reward. You’re prolonging the life of a platform that is already a couple of generations old, so if possible, limit support to DSG-based set-tops. If legacy support cannot be avoided, solutions are available that virtualize a 55-1 modulator and return path demodulator for use in R-PHY implementations. In the case of 55-2, an external OOB core, or similar NDF/NDR solution, is required — adding to complexity and expense.

Another challenge of using a converged R-PHY network is that it is no longer straightforward to determine which nodes belong to which service groups. This is now logically controlled via provisioning of the RPDs, so it’s essential to utilize tools which can track not only the broadcast and narrowcast service groups, but also how those groups overlay the data service groups. Because R-PHY permits the RPD group memberships to change fairly easily, a dynamic mapping tool is needed.
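A dynamic mapping tool of the kind described above boils down to forward and reverse lookups over an RPD inventory. The sketch below assumes a hypothetical per-RPD inventory record; the field names and identifiers are invented, and a production tool would pull this data live from the provisioning system since memberships change.

```python
# Hypothetical RPD inventory: each RPD belongs to one data SG, one
# narrowcast video SG, and one broadcast video SG (names invented).
rpd_inventory = {
    "rpd-0017": {"data_sg": "DSG-3", "ncast_sg": "NC-12", "bcast_sg": "BC-ATL-1"},
    "rpd-0018": {"data_sg": "DSG-3", "ncast_sg": "NC-12", "bcast_sg": "BC-ATL-1"},
    "rpd-0042": {"data_sg": "DSG-7", "ncast_sg": "NC-15", "bcast_sg": "BC-ATL-2"},
}

def rpds_in(sg_field, sg_name):
    """Reverse lookup: all RPDs whose given SG field matches sg_name."""
    return sorted(r for r, rec in rpd_inventory.items() if rec[sg_field] == sg_name)

def overlay(ncast_sg):
    """Which data service groups does this narrowcast video SG overlay?"""
    return sorted({rpd_inventory[r]["data_sg"] for r in rpds_in("ncast_sg", ncast_sg)})

print(rpds_in("ncast_sg", "NC-12"))  # ['rpd-0017', 'rpd-0018']
print(overlay("NC-12"))              # ['DSG-3']
```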

Convergence of data and video, especially on a single CCAP platform, typically requires some tradeoffs. When a platform is built to handle both data and video, with their unique requirements and topology, it’s difficult to optimize either one. For example, QAM limitations may prevent efficient utilization of the chassis or linecard, and you may end up with fewer service groups supported than would be possible when doing only data or only video. In addition, when using a converged platform, maintenance activities and config updates are more challenging. All services get impacted — not just DOCSIS — when a CCAP reloads, and recovery time for RPDs is significantly longer than with analog nodes.

How to get it done

There are some strategies to minimize both the pain and the risk associated with moving legacy video onto a converged platform. These strategies have served us well in our initial R-PHY trials and early production deployments this year.

As much as possible, standardize the video configuration that will be applied to the CCAP such that only minimal input is required on a per-site or per-SG basis. A converged CCAP requires an extremely complex configuration, so the more variability that can be removed the better — both for deployment and for proper testing of the supported configuration. A fully configured CCAP, populated for max capacity, facilitates the use of a golden template and minimizes the future “touches” required to grow the network.
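The golden-template idea can be illustrated with simple text templating: everything is fixed except the handful of per-site or per-SG values. The CLI-style syntax, parameter names, and values below are invented for illustration, not any vendor's actual CCAP configuration language.

```python
# Minimal sketch of "golden template" config generation. Only the
# per-SG values vary; everything else is standardized. The config
# syntax shown here is hypothetical, not a real vendor CLI.
from string import Template

GOLDEN_SG_TEMPLATE = Template("""\
video service-group $sg_name
  broadcast-lineup $bcast_lineup
  narrowcast qam-start $qam_start count 16
""")

def render_sg(sg_name, bcast_lineup, qam_start):
    """Produce one service group's config from minimal per-SG inputs."""
    return GOLDEN_SG_TEMPLATE.substitute(
        sg_name=sg_name, bcast_lineup=bcast_lineup, qam_start=qam_start)

print(render_sg("SG-12", "BC-ATL-1", 429))
```

Keeping the variable inputs down to a short list per site makes both deployment automation and validation of the supported configuration far more tractable.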

Second, test all video services using a “staging” or captive RPD in a controlled environment. As mentioned earlier, it is trivial to move that RPD from SG to SG in order to verify readiness in each one. Make sure that the validation plan includes all flavors of video CPE, including legacy OOB set-tops, DSG-based set-tops, digital transport adapters (DTAs), consumer CableCARD tuning devices, and any commercial video offerings. Check every channel and every service on each device type and ensure parity with the same service/channel on a legacy fiber node. This confidence will make the night of cutover go quite smoothly.
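The channel-by-channel parity check against a legacy node lends itself to automation. The sketch below assumes hypothetical channel-scan results per device; the channel names are placeholders, and a real validation harness would collect scans from actual CPE.

```python
# Hedged sketch of a channel-parity check between a staging RPD and a
# legacy fiber node. Channel names are placeholders for illustration.

def parity_report(legacy_scan, rpd_scan):
    """Return (missing, extra) channels relative to the legacy node."""
    legacy, rpd = set(legacy_scan), set(rpd_scan)
    return sorted(legacy - rpd), sorted(rpd - legacy)

legacy_scan = ["HBO", "ESPN", "PEG-1", "CNN"]
rpd_scan = ["HBO", "ESPN", "CNN"]

missing, extra = parity_report(legacy_scan, rpd_scan)
print(missing)  # ['PEG-1']  -> flag for investigation before cutover
print(extra)    # []
```

Running a report like this per device type (OOB set-top, DSG set-top, DTA, CableCARD host) turns "check every channel and every service" from a manual slog into a repeatable pass/fail gate.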

Finally, develop and refine a detailed cutover MOP that leverages the validation described above. Resources from data, video, network, and outside plant at a minimum should be represented so that any issues can be quickly resolved before customers see them. Monitor for call volume of course, but also video quality and functionality using eyeballs or automation to simulate eyeballs.

What our experience has taught us

DAA does require that data and video services be converged at some point in the network, but not necessarily in a single CCAP chassis. If we were starting from scratch today, we would likely de-couple the video core functionality from the data core. Doing this provides more flexibility to scale each service independently and makes troubleshooting and operations more cleanly delineated. Separate cores also allow feature development, code upgrades, and the associated testing to happen more frequently.

Starting to retire legacy OOB-dependent devices (STBs, tuning adapters) sooner, or migrating them to DSG-enabled code, would have also made the deployment quicker and smoother. Beginning this process as early as possible is strongly recommended.

Parting thoughts

Between the new RPD technology, the required data/video/OOB cores, new provisioning processes, PTP timing, and a converged interconnect network (CIN), the complexity of the access network has indeed increased significantly. Take every opportunity where you can to remove some complexity from it. Standardized config templates, compliance auditing, re-use of existing video gear, controlled validation of new services, even things like using SLAAC instead of DHCP for RPD addressing — these are all ways to get your arms around this new technology without being overwhelmed by it. When you see an opportunity to simplify or standardize, even a small one… take it. You tackle converged video on R-PHY the same way you eat an elephant — one bite at a time.


Alan Skinner

Systems Engineer

Cox Communications, Inc.

alan.skinner@cox.com

Alan Skinner is a 12-year veteran of Cable Access Engineering at Cox Communications. His current efforts center around remote PHY and CCAP video integration, but for the three years prior to that he served as technical lead for IPv6 implementation throughout the company. His responsibilities have also included the design and implementation of DSG and related STB technologies. Prior to joining Cox, Alan spent eight years in DOCSIS engineering at ARRIS and Cisco.


Ernest Fabre

Video Systems Engineer

Cox Communications, Inc.

ernest.fabre@cox.com

Ernest Fabre is a Video Systems Engineer at Cox Communications. His current role includes the design, integration, and testing of video platforms and architecture. He began his career for Cox in the New Orleans market in 1997. He also served in the Northern Virginia market for a number of years before making his way to Cox’s Atlanta headquarters in 2012.

