Cover Story: AI’s Massive Impact on Our Industry

We’ve all read stories about use cases for AI. By now, many of us have adopted or are implementing AI strategies across areas from networking to call centers. While several new and emerging technologies such as quantum computing, augmented reality, cybersecurity, biotech, edge computing, and robotics offer exciting possibilities, I don’t think any of them will have as profound an impact on us—our work, our day-to-day lives—as AI. I invite you to join me as I delve deeper into this topic.

Big impacts to date

AI has been around for decades, but it only became mainstream in November 2022 when OpenAI formally launched ChatGPT, demonstrating the technology’s capabilities to the world.

The groundwork for AI was laid in the 1950s, and AI algorithms have been part of our technology stack for many years. Think about the algorithms we learned about in school—neural networks for forecasting and pattern recognition, linear regression for forecasting, logistic regression for classification and filtering, decision trees and K-means for clustering, K-nearest neighbors for similarity analysis, and many more. These same building blocks now underpin our deep learning stacks and the amazing (and easy-to-use) tools that generate code, documents, analysis, images, and so on.
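To make that concrete, here is a minimal sketch of two of those classic algorithms in action: logistic regression for a filtering-style classification task, and K-means for clustering. Python and scikit-learn are purely my choice of illustration (nothing above prescribes a toolkit), and the data is synthetic.

```python
# Illustrative sketch only: synthetic data, scikit-learn as an assumed toolkit.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs, make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Logistic regression for a filtering-style task (flag / don't flag).
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"filter accuracy: {clf.score(X_test, y_test):.2f}")

# K-means for clustering (for example, grouping similar usage patterns).
points, _ = make_blobs(n_samples=300, centers=3, random_state=0)
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(points)
print("cluster sizes:", [list(km.labels_).count(c) for c in range(3)])
```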

AI has already had significant impacts (see Figure 1 below). In the broadband industry specifically, we are hardening our cybersecurity algorithms, detecting cyber anomalies, finding network issues that manual approaches could not surface, guiding call center agents with recommendations for customers, and marketing and selling with AI tools. These advancements have markedly improved our industry's efficiency, accuracy, and product offerings.

Figure 1. AI Applications Today.

Source: eDiscovery Today (provided by author)

https://ediscoverytoday.com/2023/07/21/120-mind-blowing-ai-tools-artificial-intelligence-trends/

Big impacts tomorrow

I am very interested (as I'm sure all of us are) in where AI is going tomorrow. In the past few years, the focus has been use-case driven, and this has led to many great innovations. With the rise of agentic AI (AI that can independently make decisions and act on them), an entirely new era is upon us, one that can (and will) change how we do things (whole workflows) and not just enable individual use cases.

The rise of agentic AI also raises the question of when we will officially enter the artificial general intelligence (AGI) phase of AI (see Figure 2). We all know that we are in the artificial narrow intelligence (ANI) phase, and when our world will enter the AGI phase is the source of much discussion. This hotly debated topic shows us that AI is moving fast. It also means that disruption will happen—the question is: will we lead or be disrupted? I do not recommend keeping our heads in the sand at this crucial moment. We must position ourselves and our companies to embrace and lead these changes.

Figure 2. The three phases of artificial intelligence.

Source: viso.ai (provided by author) https://viso.ai/deep-learning/artificial-intelligence-types/

I am particularly interested in where agentic AI will take us. Before this era, AI was human driven: we would train a model on our 'known' data, and the AI would respond using the patterns it had learned. Now, unsupervised learning and automated decision making are giving AI the ability to define its own answers and actions. Many of us use a human-in-the-loop approach to AI action, allowing us to pre-check (and potentially modify) decisions, but I anticipate this will be temporary. Once we feel comfortable integrating AI into our everyday lives, the human checkpoint will inevitably disappear.
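As a rough sketch of what that human-in-the-loop checkpoint can look like (every name, action, and threshold below is hypothetical, not drawn from any particular product), note how a single auto-approval threshold is all that separates a reviewed action from a fully autonomous one:

```python
# Minimal human-in-the-loop gate: the AI proposes, a person disposes.
# All names, actions, and thresholds here are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class ProposedAction:
    description: str   # what the AI wants to do
    confidence: float  # the model's own confidence estimate (0 to 1)

def execute(action: ProposedAction) -> None:
    print(f"executing: {action.description}")

def human_in_the_loop(action: ProposedAction, auto_threshold: float = 0.99) -> None:
    # Raising confidence (or lowering this threshold) is how the human
    # checkpoint quietly disappears over time.
    if action.confidence >= auto_threshold:
        execute(action)
        return
    answer = input(f"Approve '{action.description}' "
                   f"(confidence {action.confidence:.2f})? [y/n] ")
    if answer.strip().lower() == "y":
        execute(action)
    else:
        print("action rejected; decision logged for review")

human_in_the_loop(ProposedAction("restart edge router 14", confidence=0.87))
```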

With this in mind, let's follow agentic AI to the next logical step… if AI can assess situations and interconnections, integrate with available APIs, and determine the best actions—will this not lead to automation of our everyday workflows? I don't mean actions within an existing use case like a call center. I mean cross-departmental interactions and decision making, cross-company interactions, cross-software integrations, and more.
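The sketch below shows the shape of such a workflow: assess state across systems, decide on cross-system actions, then act. Every system, metric, and rule in it is a hypothetical stand-in; a real agentic deployment would plan with a learned policy or an LLM and act through live APIs.

```python
# A toy agentic loop across departments: assess, decide, act.
# Every system, metric, and rule below is a hypothetical stand-in.
def assess() -> dict:
    # In practice: pull telemetry, tickets, and CRM data via each system's API.
    return {"ticket_backlog": 42, "link_errors": 3, "billing_disputes": 1}

def decide(state: dict) -> list[str]:
    # In practice: a learned policy or an LLM maps state to cross-system actions.
    actions = []
    if state["link_errors"] > 0:
        actions.append("open a network repair ticket")  # network system
    if state["billing_disputes"] > 0:
        actions.append("issue a provisional credit")    # billing system
    if state["ticket_backlog"] > 40:
        actions.append("alert the support manager")     # ops system
    return actions

def act(action: str) -> None:
    print(f"calling the relevant API for: {action}")  # stand-in for a real call

for step in decide(assess()):
    act(step)
```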

Agentic AI and the dawn of AGI will change our entire corporate structure. Service and consulting companies with AI expertise will be guiding companies (and creating and using service-as-software solutions) to integrate a new way of doing business, and every job will be a trade-off of AI-enabled action vs. human action. To me, this will be the biggest shift.

We have heard Satya Nadella boldly predict that SaaS models will “collapse” in favor of AI-driven platforms that are self-integrated. I believe that the future of agentic AI is larger than the collapse or future of SaaS. It will affect every aspect of how we work and are organized. For example, if a group of network engineers can determine a learned algorithm to find network faults, can’t AI find the problem, determine the best solution, and implement it?
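Here is one hedged sketch of the "find the problem" half of that loop: an unsupervised anomaly detector flags links whose telemetry deviates from the norm. IsolationForest is just one common choice, and the telemetry fields and values are invented for illustration.

```python
# Unsupervised anomaly detection over per-link telemetry (illustrative only).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Columns: SNR (dB), packet loss (%), latency (ms) for 200 healthy links...
normal = rng.normal([38.0, 0.1, 12.0], [1.5, 0.05, 2.0], size=(200, 3))
# ...plus one degraded link that a manual scan of dashboards might miss.
faulty = np.array([[29.0, 2.5, 48.0]])
telemetry = np.vstack([normal, faulty])

detector = IsolationForest(contamination=0.01, random_state=0).fit(telemetry)
flags = detector.predict(telemetry)  # -1 marks an outlier

for link_id in np.where(flags == -1)[0]:
    snr, loss, latency = telemetry[link_id]
    print(f"link {link_id}: SNR={snr:.1f} dB, loss={loss:.2f}%, "
          f"latency={latency:.1f} ms -> candidate fault")
```

Whether the second half of the loop, choosing and implementing the fix, should run unattended is exactly the human-in-the-loop question raised above.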

Where do we go from here?

This is not just about organizational and job shifts; it's about which companies will lead this disruption and capture market leadership as a result. It's a tricky topic to discuss—but in our industry we have never shied away from tackling challenging issues head-on.

By now, we’ve all heard about Moravec’s Paradox, which states that it is easy to train computers to do things that humans find hard (e.g., mathematics), but it is hard to train computers to do things humans find easy (e.g., walking). As we enter the age of AGI, will this change? Do we need to find a new balance between AI and humans? Do you agree with my perspective on how AI will impact our industry, or do you have different insights? How do you plan to lead in this emerging era?


Yvette Kanouff
Partner, JC2 Ventures; Past Chair, Cable TV Pioneers; Past Chair, SCTE; Board Member, The Cable Center; and Board Member, SCTE Foundation

Yvette Kanouff is a lifetime achievement Emmy award-winning technology pioneer. She has held various C-level and president positions in the cable, telecom, and networking industries. She has played a pivotal role in innovations such as video-on-demand, streaming, app stores, the DVD, and more. She is a mathematician, holds numerous awards and patents, and is an active industry contributor, especially on emerging technologies such as cybersecurity, cloud, digitization, 5G, media, and artificial intelligence. Yvette was awarded an honorary Doctor of Science degree and holds bachelor's and master's degrees in mathematics from the University of Central Florida.


Image source: Provided by Author, Shutterstock


 

Key AI breakthroughs from 1950 to today and beyond

1950 Alan Turing published “Computing Machinery and Intelligence,” introducing the Turing test and opening the doors to what would be known as AI.

1951 Marvin Minsky and Dean Edmonds developed the first artificial neural network (ANN) called SNARC using 3,000 vacuum tubes to simulate a network of 40 neurons.

1952 Arthur Samuel developed the Samuel Checkers-Playing Program, the world's first self-learning game-playing program.

1956 John McCarthy, Marvin Minsky, Nathaniel Rochester and Claude Shannon coined the term artificial intelligence in a proposal for a workshop widely recognized as a founding event in the AI field.

1958 Frank Rosenblatt developed the perceptron, an early ANN that could learn from data and became the foundation for modern neural networks.

John McCarthy developed the programming language Lisp, which was quickly adopted by the AI industry and gained enormous popularity among developers.

1959 Arthur Samuel coined the term machine learning in a seminal paper explaining that the computer could be programmed to outplay its programmer.

Oliver Selfridge published “Pandemonium: A Paradigm for Learning,” a landmark contribution to machine learning that described a model that could adaptively improve itself to find patterns in events.

1964 Daniel Bobrow developed STUDENT, an early natural language processing (NLP) program designed to solve algebra word problems, while he was a doctoral candidate at MIT.

1965 Edward Feigenbaum, Bruce G. Buchanan, Joshua Lederberg and Carl Djerassi developed the first expert system, Dendral, which assisted organic chemists in identifying unknown organic molecules.

1966 Joseph Weizenbaum created Eliza, one of the more celebrated computer programs of all time, capable of engaging in conversations with humans and making them believe the software had humanlike emotions.

Stanford Research Institute developed Shakey, the world’s first mobile intelligent robot that combined AI, computer vision, navigation and NLP. It’s the grandfather of self-driving cars and drones.

1968 Terry Winograd created SHRDLU, the first multimodal AI that could manipulate and reason about a world of blocks according to instructions from a user.

1969 Arthur Bryson and Yu-Chi Ho described a backpropagation learning algorithm to enable multilayer ANNs, an advancement over the perceptron and a foundation for deep learning.

Marvin Minsky and Seymour Papert published the book Perceptrons, which described the limitations of simple neural networks and caused neural network research to decline and symbolic AI research to thrive.

1973 James Lighthill released the report “Artificial Intelligence: A General Survey,” which caused the British government to significantly reduce support for AI research.

1980 Symbolics Lisp machines were commercialized, signaling an AI renaissance. Years later, the Lisp machine market collapsed.

1981 Danny Hillis designed parallel computers for AI and other computational tasks, an architecture similar to modern GPUs.

1984 Marvin Minsky and Roger Schank coined the term AI winter at a meeting of the Association for the Advancement of Artificial Intelligence, warning the business community that AI hype would lead to disappointment and the collapse of the industry, which happened three years later.

1985 Judea Pearl introduced Bayesian networks for causal analysis, which provide statistical techniques for representing uncertainty in computers.

1988 Peter Brown et al. published “A Statistical Approach to Language Translation,” paving the way for one of the more widely studied machine translation methods.

1989 Yann LeCun, Yoshua Bengio and Patrick Haffner demonstrated how convolutional neural networks (CNNs) can be used to recognize handwritten characters, showing that neural networks could be applied to real-world problems.

1997 Sepp Hochreiter and Jürgen Schmidhuber proposed the Long Short-Term Memory recurrent neural network, which could process entire sequences of data such as speech or video.

IBM’s Deep Blue defeated Garry Kasparov in a historic chess rematch, the first defeat of a reigning world chess champion by a computer under tournament conditions.

2000 University of Montreal researchers published “A Neural Probabilistic Language Model,” which suggested a method to model language using feedforward neural networks.

2006 Fei-Fei Li started working on the ImageNet visual database, introduced in 2009, which became a catalyst for the AI boom and the basis of an annual competition for image recognition algorithms.

IBM Watson originated with the initial goal of beating a human on the iconic quiz show Jeopardy! In 2011, the question-answering computer system defeated the show’s all-time (human) champion, Ken Jennings.

2009 Rajat Raina, Anand Madhavan and Andrew Ng published “Large-Scale Deep Unsupervised Learning Using Graphics Processors,” presenting the idea of using GPUs to train large neural networks.

2011 Jürgen Schmidhuber, Dan Claudiu Cireșan, Ueli Meier and Jonathan Masci developed the first CNN to achieve “superhuman” performance by winning the German Traffic Sign Recognition competition.

Apple released Siri, a voice-powered personal assistant that can generate responses and take actions in response to voice requests.

2012 Geoffrey Hinton, Ilya Sutskever and Alex Krizhevsky introduced a deep CNN architecture that won the ImageNet challenge and triggered the explosion of deep learning research and implementation.

2013 China’s Tianhe-2 doubled the world’s top supercomputing speed at 33.86 petaflops, retaining the title of the world’s fastest system for the third consecutive time.

DeepMind introduced deep reinforcement learning, in which a CNN learns from rewards through repeated play, eventually surpassing human expert levels at several games.

Google researcher Tomas Mikolov and colleagues introduced Word2vec to automatically identify semantic relationships between words.

2014 Ian Goodfellow and colleagues invented generative adversarial networks, a class of machine learning frameworks used to generate photos, transform images and create deepfakes.

Diederik Kingma and Max Welling introduced variational autoencoders to generate images, videos and text.

Facebook developed the deep learning facial recognition system DeepFace, which identifies human faces in digital images with near-human accuracy.

2015 Stanford researchers published work on diffusion models in the paper “Deep Unsupervised Learning Using Nonequilibrium Thermodynamics.” The technique provides a way to reverse-engineer the process of adding noise to a final image.

2016 DeepMind’s AlphaGo defeated top Go player Lee Sedol in Seoul, South Korea, drawing comparisons to the Kasparov chess match with Deep Blue nearly 20 years earlier.

Uber started a self-driving car pilot program in Pittsburgh for a select group of users.

2017 Google researchers developed the concept of transformers in the seminal paper “Attention Is All You Need,” inspiring subsequent research into tools that could automatically parse unlabeled text and laying the groundwork for large language models (LLMs).

British physicist Stephen Hawking warned, “Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilization.”

2018 Developed by IBM, Airbus and the German Aerospace Center DLR, Cimon was the first robot sent into space to assist astronauts.

OpenAI released GPT (Generative Pre-trained Transformer), paving the way for subsequent LLMs.

Groove X unveiled a home mini-robot called Lovot that could sense and affect mood changes in humans.

2019 Google AI and Langone Medical Center’s deep learning algorithm outperformed radiologists in detecting potential lung cancers.

2020 Microsoft launched the Turing Natural Language Generation generative language model with 17 billion parameters.

The University of Oxford developed an AI test called Curial to rapidly identify COVID-19 in emergency room patients.

OpenAI released the GPT-3 LLM, consisting of 175 billion parameters, to generate humanlike text.

Nvidia announced the beta version of its Omniverse platform for creating 3D models and simulations of the physical world.

DeepMind’s AlphaFold system won the Critical Assessment of Protein Structure Prediction protein-folding contest.

2021 OpenAI introduced the Dall-E multimodal AI system that can generate images from text prompts.

The University of California, San Diego, created a four-legged soft robot that functioned on pressurized air instead of electronics.

2022 Google software engineer Blake Lemoine was fired after claiming that the company’s LaMDA chatbot was sentient and sharing confidential information about it.

DeepMind unveiled AlphaTensor “for discovering novel, efficient and provably correct algorithms.”

Intel claimed its FakeCatcher real-time deepfake detector was 96% accurate.

OpenAI released ChatGPT on Nov. 30 to provide a chat-based interface to its GPT-3.5 LLM, signaling the democratization of AI for the masses.

2023 OpenAI announced the GPT-4 multimodal LLM, which processes both text and image prompts. Microsoft integrated ChatGPT into its Bing search engine, and Google released its rival chatbot, Bard.

Elon Musk, Steve Wozniak and thousands more signatories urged a six-month pause on training “AI systems more powerful than GPT-4.”

2024 Generative AI tools continued to evolve rapidly with improved model architectures, efficiency gains and better training data. Intuitive interfaces drove widespread adoption, even amid ongoing concerns about issues such as bias, energy consumption and job displacement.

2025 and beyond Corporate spending on generative AI is expected to surpass $1 trillion in the coming years.

Key AI breakthroughs

Source: TechTarget (provided by author)

https://www.techtarget.com/searchenterpriseai/tip/The-history-of-artificial-intelligence-Complete-AI-timeline