Ever Wonder How Machines Learn?

By Leslie Ellis

We work in an industry full of machines: from set-tops and gateways to optical nodes and hubs, to data centers and clouds bulging with software. Which is probably why machine learning (ML) is a front-and-center topic in so many conversations, and in so many industries. Ours definitely included.

If you need evidence, link over to www.nctatechnicalpapers.com, a delicious repository comprising pretty much everything this industry’s technical brain trust has created, dating back to 1964. Search on “ML,” or its cousin, artificial intelligence (AI). From chatbots to smart homes to field operations, there’s a lot going on when it comes to machines getting smarter at their jobs. Not surprisingly, the bulk of the contributions happened over the last few years.

Know going in that AI and ML are an epic jargon-jumble, festooned with multi-syllabic gibberish. Their shepherds are the world’s practitioners of math, science, computer science, business analytics and specialized domain knowledge. A quick sample, if only to augment your satchel of impressively nerdy language: Convolutional neural networks. Generative adversarial networks. Out on the edges, but getting closer: quantum computing.

The sheer volume of explorable angles, when it comes to AI and ML, is the reason I nearly missed the deadline for this edition! This is literally the (Nth) version of this piece. After many crumpled sheets of paper (so to speak), the intent emerged: to shine a light on how it is that machines learn, and to raise a cautionary flag about related energy usage.

Experience: Teacher of all things

Humans learn from experience. So do machines. Think of the Roomba — one of the first consumer devices with AI inside. After it bumps into a wall, or the ottoman, it learns. Oops! Obstacle! Such obstructions get mapped into its memory, so that the next time, it knows to turn, or angle, or otherwise anticipate. (In the lingo of ML, that’s known as “reinforcement learning.”)
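
Not iRobot’s actual code, of course, but here’s a toy sketch of that reinforcement-learning idea, in Python: a pretend robot on a strip of five floor tiles gets a small penalty every time it bumps an obstacle and a reward when it reaches its dock, and after a few hundred episodes of bumping around, it has learned to head straight for the dock. Every name and number below is invented for illustration.

```python
import random

# A toy "Roomba" world: five floor tiles in a row.
# Tile 0 is the charging dock (reward +1, episode over).
# Stepping off either end of the row counts as bumping an obstacle (reward -1).
N_TILES = 5
ACTIONS = [-1, +1]                  # scoot left, scoot right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

# The Q-table: the robot's "memory" of how good each action is on each tile.
Q = {(tile, a): 0.0 for tile in range(N_TILES) for a in ACTIONS}

def step(tile, action):
    """Apply an action; return (next_tile, reward, episode_done)."""
    nxt = tile + action
    if nxt < 0 or nxt >= N_TILES:   # oops! obstacle!
        return tile, -1.0, False
    if nxt == 0:                    # found the dock
        return nxt, +1.0, True
    return nxt, 0.0, False

for episode in range(500):
    tile, done = N_TILES - 1, False         # start in the far corner
    while not done:
        # Mostly do what has worked before; occasionally explore.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(tile, a)])
        nxt, reward, done = step(tile, action)
        # Fold the new experience into memory (the Q-learning update).
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(tile, action)] += ALPHA * (reward + GAMMA * best_next - Q[(tile, action)])
        tile = nxt

# After training: which way does the robot head from each tile?
print({tile: ("left" if max(ACTIONS, key=lambda a: Q[(tile, a)]) == -1 else "right")
       for tile in range(1, N_TILES)})
```

The only “memory” here is the Q-table, a running score for every (tile, action) pair that gets nudged a little after each experience. That nudging is the learning.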

Machines learn in lots of ways, two of which we’ll cover here: supervised and unsupervised. Supervised learning uses “known,” labeled data inputs to predict an outcome. Unsupervised learning is just what it sounds like: unleashing algorithms on big vats of data (more is always better, with AI and ML) to find patterns, glean insights, and surface anomalies.
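
For the supervised flavor, here’s a minimal sketch using the open-source scikit-learn library (the subscriber counts are invented): the algorithm gets inputs that already carry the right answer, and learns to predict that answer for inputs it hasn’t seen.

```python
from sklearn.linear_model import LinearRegression

# Supervised learning: every training input comes with a known, labeled outcome.
# Toy example: year (input) and subscribers in millions (label). Numbers invented.
years = [[2014], [2015], [2016], [2017], [2018]]
subscribers = [21.2, 22.0, 22.9, 23.5, 24.4]

model = LinearRegression()
model.fit(years, subscribers)            # "training" on labeled data

# Ask for a prediction about a year the model never saw.
print(model.predict([[2020]]))           # a forecast, not a fact
```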

It follows that forecasts are a specialty of supervised learning, from the weather to market performance and population growth. Unsupervised learning tends to do well at things like recommendations, targeted marketing, and segmenting, because it’s super-focused on seeking patterns.
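
And here’s the unsupervised flavor doing the segmentation thing, again as a sketch with made-up usage data: k-means gets a vat of unlabeled observations and decides on its own which subscribers look alike.

```python
from sklearn.cluster import KMeans

# Unsupervised learning: no labels, just raw observations.
# Toy example: (streaming hours per week, GB downloaded per week) per subscriber.
usage = [[2, 15], [3, 18], [25, 310], [28, 290], [12, 95], [14, 110]]

segments = KMeans(n_clusters=3, n_init=10, random_state=0).fit(usage)

# The algorithm groups the lookalikes; nobody told it the "right" answer.
print(segments.labels_)
```

Nobody labels the groups ahead of time; the “light,” “medium,” and “heavy” user interpretation only gets attached afterward, by a human looking at the clusters.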

Some of the more plausible examples of ML and AI I’ve had the good fortune to glimpse over the past few months involve the outside plant. Turns out that ML pairs really well with proactive network maintenance (PNM), because it’s really good at correlating telemetry, so techs can spend more time fixing problems and less time finding what needs to be fixed.
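
Here’s a hypothetical sketch of that correlating-things idea, not any operator’s actual PNM system (the telemetry fields, values and threshold are all invented): pull per-modem readings, then flag the node where a downstream SNR dip lines up with a spike in uncorrectable codeword errors.

```python
import pandas as pd

# Hypothetical per-modem telemetry, the kind PNM tools poll from the plant.
telemetry = pd.DataFrame({
    "node":           ["N1", "N1", "N1", "N2", "N2", "N2"],
    "downstream_snr": [38.1, 37.9, 31.2, 38.4, 38.2, 38.3],   # dB
    "uncorrectables": [0,    2,    5400, 1,    0,    3],      # codeword errors
})

# Correlate the two signals within each node; a strong negative correlation
# (SNR falling as errors climb) hints at a shared plant impairment.
for node, group in telemetry.groupby("node"):
    corr = group["downstream_snr"].corr(group["uncorrectables"])
    if pd.notna(corr) and corr < -0.9:
        print(f"Node {node}: SNR and codeword errors move together; worth a truck roll")
```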

But! Training the models is costly

I’d be remiss if I didn’t point out one of the less charming realities of AI, ML, and their ilk, and that’s how very, very much energy it takes to “train the models.” A summer 2019 edition of MIT Technology Review (https://tinyurl.com/yxhsppoz), for instance, cited a paper from the University of Massachusetts, Amherst, which examined the training of five popular natural language processing (NLP) models, the under-the-hood algorithms of the voice assistants in your speakers, smartphones and remote controls.

The results were startling: Just training some NLP models can emit more than 626,000 pounds of carbon dioxide equivalent (CO2e). That’s about 5x the lifetime emissions of the average American car, including its manufacture. The heavy cost is tied to the enormous amount of compute power required to train those models.
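
The back-of-the-envelope math behind that “about 5x,” using the roughly 126,000-pound lifetime car figure the cited coverage works from:

```python
# Rough arithmetic behind the "about 5x" comparison.
training_emissions_lbs = 626_000   # worst-case NLP model training, per the cited paper
car_lifetime_lbs = 126_000         # average American car, lifetime incl. manufacture (approximate)

print(training_emissions_lbs / car_lifetime_lbs)   # roughly 5, i.e. about five cars' worth
```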

As someone who just finished reading “The Overstory,” by Richard Powers — admittedly a piece of fiction, but a masterpiece (the Pulitzer committee agreed) for anyone interested in the well-being of the planet’s forests — I’d suggest this: ML is great, long live ML. And while we’re ML-ing, let’s remember to be choosy about “training the models.” Ask yourself: Is the answer you seek worth an acre of forest?

Leslie Ellis
President, Ellis Edits Inc.
leslie@ellisedits.com

Leslie Ellis is a tech writer focused on explaining complex engineering stuff to people who have less of a natural interest in it than engineers do. She’s perhaps best known (until now!) for her long-running weekly column in Multichannel News called “Translation Please.” She’s written two broadband dictionaries and a field guide to broadband, and is a behind-the-scenes tech translator for domestic and global service providers, networks, and suppliers. She’s served as a board member of the Rocky Mountain chapter of the SCTE since 2015, and is a 2019 Cable Hall of Fame inductee.