Blurred Lines

By Kraig Neese

So you decided to push the launch button on remote PHY and, like all things new and technical, you found a few things you might not have expected. From the 30,000-foot view, remote PHY does not look all that difficult. There are some network elements here, some back-office applications there, and all these little lines that connect them together. As you drop below the clouds, the details and challenges begin to reveal themselves. Those lines connecting all the network elements start to multiply, and it’s hard to see where one traditional engineering task ends and another begins. How do you deploy such a mammoth undertaking? Inevitably things are going to go wrong: something will break, or something just does not want to do what it is told. How do you troubleshoot this new environment in a controlled and methodical way, let alone know who to contact based on the information at hand?

Remote PHY and DAA will challenge your existing operational model. As an engineer, you quickly find yourself sharing a studio apartment with many other engineers, all trying to manage the various technologies without tripping over each other. Where do you draw the boundaries between one group and the next? R-PHY is a multi-faceted network consisting of numerous devices, complex configurations, vast amounts of fiber connections, and, of course, new operational relationships. You must navigate this new landscape of challenges to succeed.

Standing up your first trial of a remote PHY network will look like many other new deployments you have probably been a part of. Install this, wire up that, add some config here, and voilà, you’ve installed a DAA infrastructure. This appears relatively manageable, you might think, because all the cables are in the right place and all the lights are blinking. Inevitably, new challenges manifest themselves. Did the metro and backbone routing teams get it right on their end? Did the CCAP platform owners get service groups built and RPDs provisioned? Did the PTP time clocks get correctly activated? Have the plant and hubsite teams tied the correct fibers together? Let us also not forget about the video network: bringing video into this environment quickly teaches those not traditionally part of that world just how complex video operations can be. The video, network, outside plant, and timing teams are your new roommates now. Most of us have a bad roommate story somewhere in our past, and you may have another one after this endeavor, but you will have to find a way to get along. First, you’re going to need someone or something to provision the RPDs. Next, you need a way to test your provisioning. But most importantly, when it all decides not to cooperate, how do you troubleshoot this new environment?


Provisioning

  • If possible, automate this before proceeding beyond a field trial. Manually provisioning RPDs might seem like a tame little bear cub. Eventually that little bear grows up and can easily eat two or three of your staff, who quickly become buried under the never-ending flow of provisioning requests.
  • The cost of automating provisioning up front may well be dwarfed by the ongoing cost of consuming multiple full-time workers with a provisioning task that did not exist before.
  • Provisioning takes on the role of a logical hubsite technician. Provisioning an RPD is akin to connecting RF from various services to a combiner so that the “common” RF can be shot out to the field via a transmitter. Unlike the hubsite tech, who has years of experience with such connections and knows the nuances of city or franchise boundaries, provisioning may very well be performed by a back-office team with no tribal knowledge of those nuances. Knowing which video lineup, community channel, and spectrum map belong to a given node is going to be a learning curve unless your databases are clear and concise from the beginning. The last thing you want is a logical error that puts the insertion channel meant only for a school on every node you provision (see the sketch after this list).
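
To make that concrete, here is a minimal sketch of database-driven RPD provisioning with exactly that guardrail. Everything in it is an assumption for illustration: the record fields, the lineup database, and the configuration syntax are hypothetical and vendor-neutral, and would need to be mapped onto your actual CCAP platform and back-office schema.

```python
# Hypothetical sketch: render a CCAP core stanza for one RPD from a
# back-office record, and refuse lineups that are out of scope for the
# node's franchise (the "school insertion channel everywhere" error).
from dataclasses import dataclass

@dataclass
class RpdRecord:
    name: str            # RPD name as tracked in inventory
    mac: str             # identifier the CCAP core keys the RPD on
    service_group: int   # DOCSIS service group assignment
    video_lineup: str    # community/franchise channel lineup ID
    franchise: str       # franchise area the node serves

def validate_lineup(rpd: RpdRecord, lineup_db: dict) -> None:
    """Encode the hubsite tech's tribal knowledge as a hard check:
    the lineup must be authorized for the RPD's franchise area."""
    allowed = lineup_db.get(rpd.franchise, set())
    if rpd.video_lineup not in allowed:
        raise ValueError(f"{rpd.name}: lineup {rpd.video_lineup} "
                         f"is not valid in franchise {rpd.franchise}")

def render_core_config(rpd: RpdRecord) -> str:
    """Emit a vendor-neutral configuration stanza (illustrative syntax)."""
    return "\n".join([
        f"rpd {rpd.name}",
        f"  identifier {rpd.mac}",
        f"  service-group {rpd.service_group}",
        f"  video lineup {rpd.video_lineup}",
    ])

# Usage: lineup_db would come from your provisioning database.
lineup_db = {"franchise-12": {"lineup-A"}, "franchise-14": {"lineup-B"}}
rpd = RpdRecord("node-0042", "00:11:22:33:44:55", 7, "lineup-A", "franchise-12")
validate_lineup(rpd, lineup_db)
print(render_core_config(rpd))
```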

Testing and Validation

  • Before diving headfirst into the water, at least know how deep it is. Once you fully embrace R-PHY, the RF connections traditionally used to validate inside the hubsite begin to disappear; you are left with nothing more than blinking lights and fiber connectors. Having a standardized test rig or rack in your hubsites is going to be a powerful tool in your arsenal, and a dedicated RPD or R-PHY shelf is a must. They give you your RF back so you can do your normal validations with CPE, analyzers, or DOCSIS test units, and these test nodes can be logically moved around to validate different service groups and video lineups. If you can employ a device to remotely view video, your test rack becomes that much more powerful, validating channel lineups without anyone having to be onsite. Using a test rack to validate your configuration assumptions can be the difference between launching with confidence and enduring potentially endless troubleshooting sessions while customers are impacted. The alternative, driving out to each RPD in the field with a mobile test rig, will quickly become very expensive depending on the quantity of RPDs in your network. The moral of the test rack story: your customers should not be your canaries in the coal mine. Have a test rig and use it often (a minimal checklist sketch follows this list).
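
A test rack also invites automation. Below is a minimal sketch of a pre-launch checklist runner; only the reachability ping is real (Linux iputils flags assumed), while check_ptp_lock() and check_video_lineup() are hypothetical placeholders for whatever your RPD vendor’s API or MIB and your video probe actually expose.

```python
import subprocess

def ping(host: str) -> bool:
    """Basic reachability check toward the RPD management address."""
    result = subprocess.run(["ping", "-c", "3", "-W", "2", host],
                            capture_output=True)
    return result.returncode == 0

def check_ptp_lock(host: str) -> bool:
    # Placeholder: query PTP lock status via your vendor's API or MIB.
    raise NotImplementedError("vendor-specific PTP status query")

def check_video_lineup(host: str, lineup: str) -> bool:
    # Placeholder: compare a remote video probe's view of the lineup
    # against what provisioning says this service group should carry.
    raise NotImplementedError("video probe comparison")

def run_checklist(host: str, lineup: str) -> None:
    """Run each validation in order and report PASS/FAIL/SKIP."""
    checks = [
        ("management reachability", lambda: ping(host)),
        ("PTP lock", lambda: check_ptp_lock(host)),
        (f"video lineup {lineup}", lambda: check_video_lineup(host, lineup)),
    ]
    for name, check in checks:
        try:
            status = "PASS" if check() else "FAIL"
        except NotImplementedError as exc:
            status = f"SKIP ({exc})"
        print(f"{name}: {status}")

run_checklist("192.0.2.10", "lineup-A")  # test RPD's management address
```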

Troubleshooting

  • One of the challenges you will likely experience involves operational boundaries and interactions. Who starts where, and how far into someone else’s backyard do you venture to vet an issue? In a typical DAA environment, the traditional operational boundaries for ownership and troubleshooting no longer look clear, and working in a silo while troubleshooting can be a major roadblock. Stand back a few feet and observe the whole picture by engaging your network neighbors. Engage your operations and design teams ahead of time to foster an environment that allows a “divide and conquer” strategy when challenges arise (one way to encode that strategy is sketched after this list).
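
One way to make the engagement path concrete is to encode it as data, so the first responder knows which layer to check next and which roommate owns it. The layer ordering, symptoms, and team names below are illustrative assumptions, not a standard:

```python
# Ordered bottom-up: verify the lowest layer first, hand off to its owner.
ESCALATION = [
    # (layer to verify, typical symptom when broken, owning team)
    ("fiber light levels",     "RPD unreachable, link down",     "outside plant"),
    ("metro/backbone route",   "RPD unreachable, link up",       "IP network"),
    ("PTP lock",               "RPD up, DOCSIS channels down",   "timing"),
    ("CCAP core provisioning", "RPD reachable, never registers", "CCAP platform"),
    ("video lineup",           "DOCSIS fine, video impaired",    "video"),
]

def triage(observations: dict) -> str:
    """Walk the layers in order; report the first broken layer's owner.
    observations maps layer name -> True (verified good) or False."""
    for layer, symptom, team in ESCALATION:
        if not observations.get(layer, True):
            return f"Engage the {team} team: {layer} failed ({symptom})"
    return "All known layers verified; widen the search together"

# Example: fiber and routing check out, but the RPD never locked to PTP.
print(triage({"fiber light levels": True,
              "metro/backbone route": True,
              "PTP lock": False}))
```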

DAA and the remote PHY networks of our future are a tough nut to crack; don’t make them harder than they have to be. Take the time to stand up an automation apparatus that is agile and malleable. It will be a critical piece of your network and will help you increase payload without increasing workload. If possible, automate your provisioning before proceeding beyond a field trial, then work towards automating network device activations. Embrace your frontline operations teams’ ideas; those in the trenches often have a better perspective on what is needed operationally. How will you upgrade and swap RPDs? What about firmware and platform upgrades? These are now semi-intelligent network elements and will require new tools to manage and operate. Get to know your new network roommates, and have a path of engagement devised so that when the time comes to sort out an issue, you already know who to involve. Finally, install a testing rack or rig in the sites where you deploy R-PHY, and give yourself the best chance at success by confirming you’re good to go rather than rolling the dice.



Kraig Neese
Cable Access Engineer,
Cox Communications, Inc.

kraig.neese@cox.com

Kraig Neese is a Cable Access Engineer at Cox Communications. He has spent the last 19 years in the cable industry with Cox, the first 17.5 of them in the Phoenix market, where he was involved in the various evolutions of DOCSIS platforms, from the original DOCSIS 1.0 systems to the first launches of 3.0 and 3.1. He joined the Atlanta Access Engineering team in 2018 and has been focused on supporting remote PHY at the operations level.

