Never Say Never
By Patrick Hunter
Our conversations these days have continued to drift toward the Internet of Things (IoT) and the Internet of Everything (IoE). Certainly these terms are adorable, and the best part is we get to keep feeding our irrational obsession with abbreviations. More importantly for the world of the “IP Address” column, as the discussions turn to adding many more devices to our networks, Internet protocol (IP) address space becomes a critical consideration.
Depending on whom you ask, there are expected to be somewhere between 50 billion and 200 billion devices connected to the Internet by 2020. There’s no question that the sheer numbers are staggering and, frankly, a bit intimidating to a network administrator who is tasked with doling out IP addresses and ensuring that they are managed thoughtfully with an eye on the future, especially network expansion. So, let’s do some quick math to see where we’ve come from and where we might be headed.
Recall from prior discussions that the time-tested IP version 4 (IPv4) address schema gives us 4,294,967,296 possible addresses. Once upon a time, that was an unnecessarily large address pool. But, surprise, surprise, we found a way to exhaust that space. As an interesting comparison, there are around 8 billion people in the world, so that address pool doesn’t even cover one unique address for each person on the planet!
Now, here’s where the logic gets interesting, in my humble opinion. IP version 6 addressing has a 128-bit address scheme, which gives us 340,282,366,920,938,463,463,374,607,431,768,211,456 possible addresses. Yep, that’s a lot. “An unnecessarily large pool of addresses,” one might be tempted to say. However, let’s drag our friend “one” kicking and screaming and follow the trends so far and see where this really goes.
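A quick back-of-the-envelope calculation makes the comparison concrete. This is just a sketch in Python; the world-population figure is a rough assumption, not a precise count:

```python
# IPv4 uses 32-bit addresses; IPv6 uses 128-bit addresses.
ipv4_pool = 2 ** 32    # 4,294,967,296
ipv6_pool = 2 ** 128   # ~3.4 x 10^38

world_population = 8_000_000_000  # rough figure (assumption)

# IPv4 cannot even give each person on the planet one address...
print(ipv4_pool / world_population)   # ~0.54 addresses per person

# ...while IPv6 offers an astronomical number per person.
print(ipv6_pool // world_population)
```

The point of the exercise is simply that the two pools differ by roughly 29 orders of magnitude, which is what makes the “unnecessarily large” temptation so strong.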
When IPv4 addressing seemed like overkill, the idea of a computer residing in nearly every home was not yet commonplace. As the personal computing revolution took off and as mobile devices and tablets took hold, the idea of a handful of IP addresses per person became common sense. But of course each person would need multiple addresses, because they own multiple Internet-connected devices! Let’s dive even deeper. In my house, we have 10 devices that connect to our home Wi-Fi and accept voice commands, among a variety of other capabilities. Then, we have the refrigerator, a light-bulb control hub, 28 light bulbs, a thermostat, two robot vacuum cleaners, a home security system, a motion detector, two glass-break detectors, three IP-capable floodlight/cameras, and a front door camera/doorbell. And that’s just what I could think of in the moment. While I recognize that, technically, not all of these devices may have IP addresses per se, the idea of unique addressing in some domain is still required to make them work.
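Adding up the household inventory above shows how quickly “a handful of addresses per person” grows. The categories and counts below simply restate the list from the text:

```python
# Tally of the connected devices listed above (one household's count).
home_devices = {
    "voice-command devices": 10,
    "refrigerator": 1,
    "light-bulb control hub": 1,
    "light bulbs": 28,
    "thermostat": 1,
    "robot vacuums": 2,
    "security system": 1,
    "motion detector": 1,
    "glass-break detectors": 2,
    "floodlight/cameras": 3,
    "doorbell camera": 1,
}

print(sum(home_devices.values()))  # 51 addressable devices in one home
```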
So now we’ve arrived at the idea that tens of IP addresses may be needed for each person in an average middle-class home. When you follow the trends in the high-tech trade magazines, the predictions are that nearly everything we can imagine will eventually be connected. In theory, if we saw fit, everything we could possibly own or consume could have some form of addressing. It’s not hard to imagine that some application for assigning an address to every piece of clothing you own could be conceived and used practically in the future.
So, if we follow that logic, we also have to plan for the possibility that the “things” we connect to our home networks or the Internet will have a limited lifespan. If that’s true, then we need to give consideration to the fact that our IP addresses will need to be consumed in a manner that allows for recycling them in a manageable, scalable way. If we decide that it is worthwhile to make each blade of grass in our lawn addressable for myriad reasons, then we have to consider that each blade will only live for a period of time. It’s certainly an extreme example, but seriously, are we foolish enough to say “no one would EVER need to have enough IP addresses to cover every blade of grass in their lawn…?” Really, people?
Following that logic further, I have to question the experts out there who have claimed that there is no possible way to exhaust our IPv6 address space. With regard to IP addressing, the only concrete lesson we have learned since networking began is that if we create the address space, someone will find a way to use it. This is analogous to bandwidth growth and demand. Every time we’ve built our pipes much larger, someone has always found a way to fill those pipes. Each construct created for unique addressing has always been finite. Our imagination in managing those addresses has to be much closer to infinite.
One of the latest amendments to the 802.11 wireless standard has garnered quite a lot of fanfare in recent months: 802.11ax. This version has been dubbed Wi-Fi 6 by the Wi-Fi Alliance, the industry consortium that promotes Wi-Fi as a technology of choice. There are a number of reasons why 802.11ax is receiving the attention it is, and frankly, I’m a pretty big advocate of its adoption. The technical shortcomings of Wi-Fi have been addressed pretty significantly in this update – more so than in any prior version, in this author’s opinion. Let’s walk through the basic technology so we can better understand what’s changing and why it matters.
If one were to do a bit of research, one would find a significant amount of data on the variety of 802.11 standard amendments. The family of specifications has always outlined a protocol for half-duplex communication between a pair of wireless network interface cards (NICs). The protocol uses carrier-sense multiple access with collision avoidance (CSMA/CA) to account for the fact that multiple devices may transmit at the same time. This should seem eerily familiar to those who have read prior articles and studied the 802.3 Ethernet standard. Because Ethernet was also intended to operate over media on which multiple transmissions could occur at any time, it used carrier-sense multiple access with collision detection (CSMA/CD) to account for the possibility of simultaneous transmission on “the wire” and outlined a means to deal with the problem. Wireless technology takes a similar approach with CSMA/CA, except that the goal is to avoid collisions altogether if possible: each device must listen for other devices on the channel before transmitting.
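The listen-before-transmit idea can be sketched in a few lines of Python. This is a toy model under stated assumptions – the backoff values and retry limit are illustrative, not the actual 802.11 contention-window state machine:

```python
import random

def csma_ca_transmit(channel_busy, max_retries=5):
    """Toy CSMA/CA: sense the channel, back off if busy, send when clear.

    channel_busy is a callable returning True while another station
    is transmitting on the channel.
    """
    for attempt in range(max_retries):
        if not channel_busy():
            # Channel is clear: transmit without colliding.
            return f"transmitted on attempt {attempt + 1}"
        # Channel busy: pick a random backoff before sensing again.
        # (Real 802.11 uses slotted contention windows that grow on failure.)
        backoff_slots = random.randint(0, 2 ** (attempt + 4) - 1)
        _ = backoff_slots  # a real radio would wait this many slot times
    return "gave up after max retries"

# Usage: a channel that is busy twice, then goes quiet.
states = iter([True, True, False])
print(csma_ca_transmit(lambda: next(states)))  # transmitted on attempt 3
```

The key contrast with Ethernet’s CSMA/CD is visible here: the station never transmits into a busy channel to detect a collision; it defers until the air is clear.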
Now that we know how the devices are expected to work, one important piece of the puzzle is understanding what frequencies the wireless devices operate on. The Federal Communications Commission issued a ruling in 1985 that opened the 2.4 GHz industrial, scientific, and medical (ISM) band for unlicensed use. That spectrum overlaps part of the amateur radio allocation in the 2.3 GHz to 2.45 GHz range, known as the 13 cm ham band, and amateur radio operators sometimes use modified Wi-Fi equipment there for mesh and other communications. In the CATV world, we have a long and storied history of amateur radio affiliation, so the use of this spectrum really feels like decades of this application coming full circle. But, I digress somewhat nostalgically perhaps…
Since the original iterations of 802.11 (802.11b being the most commercially popular) capitalized on the availability of the 2.4 GHz spectrum, later releases of the wireless standard, starting with 802.11a, brought the release and widespread adoption of the 5 GHz band. Yep, that’s a bit confusing, since “a” should come before “b.” In fact, both were released around the same time in 1999, but the “b” version was more widely adopted as the first usable standard, with “a” joining the ranks of popularity a little later. (By the way, don’t confuse use of the 5 GHz spectrum, often called “5 gig” by technicians, with the now quite popular “5G” marketing hype phenomenon; they’re most certainly not the same thing.) While I won’t go deep into the technical details of the multiple “channels” used in each of the bands, broadband professionals like us can certainly understand the use of multiple carriers in the same medium for moving information back and forth. The most important concept is that the channels available in the 2.4 GHz band actually overlap considerably, so the technology is typically designed to use the largest set of channels that do not overlap. In North America, those are channels 1, 6, and 11, centered at 2412, 2437, and 2462 MHz respectively and each roughly 20 MHz wide; channel 14, centered at 2484 MHz, is permitted only in Japan.
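The overlap is easy to verify with a little arithmetic: for channels 1 through 13, the center frequency is 2407 + 5n MHz, so adjacent channel centers sit only 5 MHz apart while each channel occupies roughly 20 MHz. A quick sketch (the 20 MHz width is a simplification; the original 802.11b signaling is closer to 22 MHz):

```python
def center_mhz(ch):
    """Center frequency of a 2.4 GHz Wi-Fi channel, in MHz."""
    return 2484 if ch == 14 else 2407 + 5 * ch

def channels_overlap(a, b, width_mhz=20):
    """Two channels overlap if their centers are closer than one channel width."""
    return abs(center_mhz(a) - center_mhz(b)) < width_mhz

print(center_mhz(1), center_mhz(6), center_mhz(11))  # 2412 2437 2462
print(channels_overlap(1, 6))   # False -- 25 MHz apart, no overlap
print(channels_overlap(1, 3))   # True  -- only 10 MHz apart
```

Run the check across all pairs and the familiar 1/6/11 plan falls out: it is the largest set of mutually non-overlapping 2.4 GHz channels generally available outside Japan.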
That’s a lot of bandwidth compared to our traditional channel widths, but when you think about it, it makes sense: we are trying to transmit much higher data rates than the traditional over-the-air signals of days long gone.
OK, we’ve lightly touched on the basics of the wireless technology, so what’s all the hype about with 802.11ax/Wi-Fi 6? There are a number of changes this time around, but I’ll focus on three of them.
First, 802.11ax uses orthogonal frequency division multiple access (OFDMA) for a modulation scheme. By now, most readers are familiar with orthogonal frequency division multiplexing (OFDM), since it’s used so commonly in many of our broadband applications today. Essentially, OFDM is a very sophisticated use of good old-fashioned frequency division multiplexing (FDM), but it is done in a manner that makes the subcarrier signals mathematically orthogonal to one another, which removes the ability for cross-talk to affect the signals and relieves the need for guard bands between the subcarriers. That’s a bit of an oversimplification, but it gets the job done.
So, for 802.11ax, OFDMA essentially uses the same mathematical principle to allow assignment of sub-channels to multiple users at the same time. Traditionally, because our collision avoidance technique forced each transmitter to take its turn, one at a time, the entire 20 MHz of a channel would be tied up while each user transmitted. With an allowance for multiple users to transmit simultaneously on different sub-channels, we will now see a significant increase in efficiency for our wireless networks, especially those with a very high number of devices connected to one wireless access point. Think about it: the wireless data rates at your house can often be great, but once you take your Wi-Fi-enabled device to a more densely populated area, like a shopping mall or public arena, suddenly the exact same service seems to struggle to meet your individual demands. That’s mostly because of the “taking turns” problem presented in older iterations of wireless standards. This is a big win for our friends on the 802.11 committee.
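Conceptually, OFDMA lets the access point carve one channel’s subcarriers into pieces and hand each piece to a different station within the same transmit opportunity. The sketch below is a deliberate simplification: real 802.11ax allocates fixed-size resource units (26, 52, 106, 242, etc. tones), whereas this toy version just splits a 242-tone channel evenly to illustrate simultaneous, non-overlapping assignments:

```python
def assign_resource_units(stations, total_subcarriers=242):
    """Naively split a channel's usable subcarriers among stations.

    Each station receives a disjoint slice, so all of them can
    transmit (or receive) at the same time instead of taking turns.
    """
    share = total_subcarriers // len(stations)
    plan, start = {}, 0
    for station in stations:
        plan[station] = range(start, start + share)
        start += share
    return plan

plan = assign_resource_units(["phone", "laptop", "thermostat"])
# Disjoint subcarrier slices per station -- no "wait your turn".
print({s: (r.start, r.stop) for s, r in plan.items()})
```

The contrast with the pre-ax world is the point: instead of one station monopolizing all 20 MHz per turn, several stations each occupy a slice of it at once, which is exactly what helps in dense environments like arenas and malls.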
The second feature that we see in 802.11ax is the concept of target wake time (TWT), which actually was introduced in an earlier update, 802.11ah. The idea is that devices communicating with the access point have traditionally had to “wake up” at intervals for the beacon transmission in order to keep communication possible when actually desired by a user. But with TWT, the access point in control of the communication can set targeted wake times in order to both minimize the need for the connection to “stay awake” actively as well as group different devices into different TWT periods. This allows for less contention for bandwidth and…wait for it…longer battery life for Wi-Fi-connected devices like cell phones! Yet another meaningful feature.
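The scheduling idea behind TWT can be sketched as staggering clients into separate wake windows. The interval length and device names below are assumptions for illustration only, not values from the standard:

```python
def schedule_twt(devices, interval_ms=100):
    """Assign each device a staggered target wake time offset (ms).

    Devices sleep between their own service periods instead of waking
    for every beacon; staggering the offsets also spreads out
    contention, since fewer devices are awake at any given moment.
    """
    slot = interval_ms // max(len(devices), 1)
    return {dev: i * slot for i, dev in enumerate(devices)}

schedule = schedule_twt(["phone", "doorbell", "thermostat", "vacuum"])
print(schedule)  # {'phone': 0, 'doorbell': 25, 'thermostat': 50, 'vacuum': 75}
```

Each device only powers its radio up around its own offset, which is where the battery-life win comes from.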
Lastly, spatial frequency reuse, or “coloring” for different WLANs is a feature of 802.11ax. Historically, when there have been multiple networks operating on the same channels in a particular band, devices operating in that band were unable to transmit on their own network when they detected traffic in the same channel bands from an entirely different WLAN. (Remember the “wait your turn” methodology?) But, coloring of the different networks now allows a device to recognize that transmissions at a particular frequency are or are not a part of their native WLAN, and as such, the device can treat the frequency as clear air with respect to its own network. This would allow for yet more efficiency and greater bandwidth availability for connected devices.
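The coloring decision itself is simple to express: if an overheard frame carries a different network’s color and is weak enough, treat the medium as clear air. The threshold value below is an illustrative assumption (real radios use an adjustable overlapping-BSS packet-detect threshold):

```python
def medium_is_clear(frame_color, my_color, rssi_dbm, obss_pd_dbm=-72):
    """BSS-coloring check for a frame overheard on our channel.

    A frame from our own WLAN always makes us defer; a frame from a
    differently-colored WLAN blocks us only if it is strong enough to
    actually interfere.
    """
    if frame_color == my_color:
        return False                 # same network: wait your turn
    return rssi_dbm < obss_pd_dbm    # other network, weak signal: clear air

print(medium_is_clear(frame_color=3, my_color=3, rssi_dbm=-80))  # False
print(medium_is_clear(frame_color=5, my_color=3, rssi_dbm=-80))  # True
print(medium_is_clear(frame_color=5, my_color=3, rssi_dbm=-60))  # False
```

This is the spatial-reuse payoff: neighboring networks sharing a channel no longer silence each other unnecessarily, so each gets more usable airtime.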
It is clearly the dawn of a new age in wireless technology, and the latest generation of Wi-Fi devices will be available on the shelves soon, even before the full ratification of the 802.11ax standard. This is one update to a standard that merits close attention and early planning for adoption. After all, customers, whether they pay for our services or consume them as employees at our offices, have come to expect available, high-performance wireless networks wherever they go. And, we’ve learned from experience that we want to be the team to best meet their needs as early as possible. After all, it’s been a CATV recipe for success for many years!
Patrick Hunter — “Hunter”
Director, IT Enterprise Network and Telecom,
Charter Communications
hunter.hunter@charter.com
Hunter has been employed with Charter since 2000 and has held numerous positions, including Installer, System Technician, Technical Operations management, Sales Engineer, and Network Engineer. His responsibilities include providing IP connectivity to all users in Charter’s approximately 4,000 facilities, including executive and regional offices, technical centers, call centers, stores, headends, hubsites, and data centers. Mr. Hunter has served on the SCTE Gateway Chapter Board of Directors since 2005. He spends his spare time mentoring, teaching, and speaking on IP and Ethernet networks as well as careers in the network field.