Merging CATV IP Networks

By Patrick Hunter

We’ve seen an incredible amount of change in the CATV industry lately. Not that this is a new development. Let’s be honest, the last time things hadn’t really changed in the cable business was around the late 1940s, when there really wasn’t much of a business at all. Since then, change has been the most constant part of our industry, and at this point, frankly, it would be weird if it ever stopped. One of the most exciting and noteworthy changes of the past couple of years has been the wave of merger and acquisition activity. The recent trend toward consolidation has taken on a larger, and sometimes more international, character than in the past. With these mergers come a host of interesting challenges that require deep thought and careful analysis about how best to connect and integrate our networks.


There are quite a few challenges to consider in this arena, and perhaps the toughest is integrating networks that have overlapping Internet protocol (IP) address space. To capture the efficiencies and economies of scale that come with integrating networks, being able to natively route IP traffic end to end throughout the newly merged networks is absolutely critical. If the overlap is not remediated, a number of operational and logistical problems follow, among the most challenging of which is having to work around the duplicate addresses with IP network address translation (NAT) mechanisms.

Why does this overlap matter? Why should it present such a challenge? The reason lies in the basic IP principle that every device on a network must have a unique logical address (IP address). On the same network, no two devices are allowed to have the same address, at least in principle. That basic rule ensures that, barring any other network controls that might restrict communication, every device can communicate with every other device successfully. Otherwise, when one device sends traffic to a given address, how can it be certain which device’s reply it is hearing? And how can a device be certain that an incoming communication is indeed intended for itself? So the real challenge appears when you connect two networks in which some (maybe most, or all) of the IP addresses match exactly.
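For the curious, the conflict is easy to demonstrate in a few lines of Python using the standard library’s ipaddress module. This is only a sketch; the two subnet lists are hypothetical stand-ins for the merging companies’ address plans:

```python
import ipaddress

# Hypothetical address plans for two merging companies (illustrative only).
company_a = ["10.1.0.0/16", "192.168.10.0/24"]
company_b = ["10.1.128.0/17", "172.16.0.0/20"]

# Flag every pair of subnets that claim overlapping address space.
for a in company_a:
    for b in company_b:
        net_a, net_b = ipaddress.ip_network(a), ipaddress.ip_network(b)
        if net_a.overlaps(net_b):
            print(f"Conflict: {net_a} overlaps {net_b}")
```

Run against these sample lists, the check flags 10.1.0.0/16 against 10.1.128.0/17: two different companies, one block of addresses, and no way to tell the devices apart.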

Wait, something doesn’t make sense here. If one of the basic rules of networking is that every device must have a unique IP address, then why the heck don’t we just make sure never to have duplicate IP addresses on anything, ever? The answer lies in the history of the Internet and its explosion in popularity over the last 20 years or so.

Let’s do some basic math. There are roughly 4.3 billion unique addresses possible in the dominant IP addressing scheme, Internet Protocol version 4 (IPv4), which has been with us since the first internetworks began to emerge way back when. You might recognize addresses that look something like this: 66.88.22.199. Each portion of the address, called an “octet”, can be any decimal number from 0 to 255, which gives us 256 possible values. Our junior high math teacher would be happy if we remembered that to calculate the total number of addresses, we simply multiply the octets’ possibilities together: 256*256*256*256, which equals 4,294,967,296.
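For the skeptical, the arithmetic takes only a couple of lines of Python to verify:

```python
# Four octets, each with 256 possible values (0 through 255).
total = 256 ** 4
print(f"{total:,} possible IPv4 addresses")  # 4,294,967,296

# Equivalently, an IPv4 address is just a 32-bit number,
# so 2**32 gives the same count.
assert total == 2 ** 32
```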

OK, that’s a lot of possible addresses. So, what’s the problem? Basically, as the Internet grew and matured, it became apparent that once the entire IPv4 address space was divided into many different sections (networks or subnetworks), and more and more people bought home computers, a shortage of address space was coming fast. The folks responsible for keeping an eye on all the Internet addresses and doling them out are known as the Internet Assigned Numbers Authority (IANA). IANA grants reservation of blocks of addresses to different regions of the world and, by extension, to different Internet service providers (ISPs) and other private entities around the world. As each portion of the address space was reserved and given out, IANA and other Internet pioneers foresaw the shortage and decided to find ways to get in front of the shortfall. One of the most significant and long-standing solutions was to reserve three large portions of the unique address space for “private” use. The address ranges in this private space might look extremely familiar to many casual users of the Internet: 10.0.0.0 – 10.255.255.255, 172.16.0.0 – 172.31.255.255, and 192.168.0.0 – 192.168.255.255. These blocks of addresses are commonly referred to as RFC 1918 space, after the Internet Engineering Task Force (IETF) Request for Comments #1918, which addresses the challenge of IP address shortage.
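As an illustration, here is a short Python sketch that classifies an address against those three blocks; the sample addresses are arbitrary. (Python’s ipaddress module also exposes an is_private attribute, though it covers more reserved ranges than just RFC 1918.)

```python
import ipaddress

# The three RFC 1918 private blocks, written in CIDR notation.
RFC1918_BLOCKS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_rfc1918(addr: str) -> bool:
    """Return True if addr falls inside any RFC 1918 private block."""
    ip = ipaddress.ip_address(addr)
    return any(ip in block for block in RFC1918_BLOCKS)

print(is_rfc1918("172.20.1.5"))    # True  -- inside 172.16.0.0/12
print(is_rfc1918("66.88.22.199"))  # False -- public address space
```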

The idea behind this private space was to encourage network administrators to use these addresses for the devices inside each internal/private network, with the promise that no one would ever advertise those private addresses out to the “public” Internet. That way, every administrator had millions of possible addresses for their devices without draining the larger pool still reserved for “public” use. Because those private addresses could be duplicated from one network to the next with no negative consequences (everyone agreed not to advertise them), we were able to bend the unique-address rule and greatly extend the usefulness of the roughly 4.3 billion addresses.

That bought us many, many years of successful growth in Internet usage and much prosperity. But it also presents a significant problem the moment we decide to connect two distinct “private” networks directly together, as happens when two companies merge: both networks almost certainly ended up using many of exactly the same IP addresses. Now, to connect the two networks and let all devices talk to each other, we either have to re-address many devices (possibly thousands or more) or find a way to mask the duplicate addresses. Earlier I mentioned NAT, which is the usual masking tool. Essentially, so that a device on one network appears to have a unique address on the other network, a network device such as a firewall or router “translates” its address into a different, unused address on the other side. Replies sent to that address are translated in reverse on the way back. In fact, one NAT-ed address can represent many different devices, which greatly reduces the number of addresses that NAT-ing consumes.
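To make the translation idea concrete, here is a toy one-to-one NAT table in Python. It is only a sketch: the address pool and inside addresses are invented, and real routers and firewalls also track ports, protocols, and state timeouts (port-based translation is what lets one outside address represent many inside devices).

```python
# A small pool of unused addresses on the far side of the NAT boundary.
nat_pool = ["10.200.0.1", "10.200.0.2", "10.200.0.3"]
nat_table = {}  # inside address -> translated (outside) address

def translate_outbound(inside_addr: str) -> str:
    """Map an inside address to an outside one, allocating from the pool on first use."""
    if inside_addr not in nat_table:
        nat_table[inside_addr] = nat_pool.pop(0)
    return nat_table[inside_addr]

def translate_inbound(outside_addr: str) -> str:
    """Reverse the mapping so replies reach the original inside device."""
    for inside, outside in nat_table.items():
        if outside == outside_addr:
            return inside
    raise KeyError("no translation state for this address")

mapped = translate_outbound("192.168.1.10")
print(mapped)                     # 10.200.0.1
print(translate_inbound(mapped))  # 192.168.1.10
```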

Now back to the original challenge. Once many devices and applications, on the order of thousands or possibly millions, need a completely unique address on both sides of a NAT boundary, managing those translations becomes very unwieldy and time-consuming. It also strains the routers and firewalls tasked with keeping up with thousands or more NAT entries. At a large enough scale, entire teams of engineers may be needed just to keep up with the constantly growing and changing NAT landscape that results from connecting two networks.

Applications are the devices that most need unique addressing, since their addresses are what users reach for and they receive the greatest number of requests from individual devices like computers, cell phones, and tablets. In today’s hyperconnected world, one application very likely talks to many, many other applications and databases, which means every one of those also has an increased exposure and need for unique addressing. The problem compounds in a big hurry. And if those applications require exposure to the “public” Internet in addition to the “private” network, unique addressing is absolutely mandatory.

Not surprisingly, changing the IP addresses of thousands or millions of devices in a large network is not easily accomplished. Careful planning, in many phases, is required to re-address a network while minimizing the downtime that users or subscribers feel. But once it is achieved, every device in the network, and indeed possibly the world, can reach applications and other devices through native routing, unique IP address to unique IP address. That increases the effectiveness and economy of IP communications and greatly reduces the network engineering upkeep required to let two newly merged networks communicate.


Patrick Hunter — “Hunter”

Director,
IT Enterprise Network and Telecom,
Charter Communications
hunter.hunter@charter.com

Hunter has been employed with Charter since 2000 and has held numerous positions, including Installer, System Technician, Technical Operations management, Sales Engineer, and Network Engineer. His responsibilities include providing IP connectivity and network security to all users in Charter’s approximately 1,000 facilities, including executive and regional offices, technical centers, call centers, headends, hubsites, and data centers. Mr. Hunter has served on the Gateway Chapter Board of Directors since 2005. He spends his spare time mentoring, teaching, and speaking on IP and Ethernet networks as well as careers in the network field.

