Understanding “Paradigm Shifts” in AI with Vasant Dhar

NYU Center for Data Science
Oct 25, 2023


Artificial Intelligence (AI) has been a hot topic, particularly since the rise of ChatGPT this year, which marked the first time we could converse with a machine much as we would with another human, however flawed the exchange. That development represents a new “paradigm shift” in the field of AI, transforming it from a task-specific application into a general-purpose technology that can be configured for specific tasks and uses. Vasant Dhar, CDS-associated faculty and Professor of Technology, Operations, and Statistics at NYU Stern, recently authored the paper “The Paradigm Shifts in Artificial Intelligence”, which analyzes the historical shifts that led to the current one and explains why it matters.

So what is a paradigm? As Dhar puts it, “a paradigm is essentially a set of theories and methods accepted by the community to guide inquiry.” In other words, it’s a way of thinking. In his paper, Dhar leverages Thomas Kuhn’s theory of scientific progress to understand past paradigm shifts in AI. Kuhn describes science as a process punctuated by occasional “revolutions” stemming from crises faced by the dominant theories, followed by periods of “normal science” in which the details of the new paradigm are fleshed out. Over time, as the dominant paradigm fails to address a growing number of important anomalies or challenges, a paradigm shift occurs: a new set of theories and methods, a new way of thinking, that better addresses them.

Dhar emphasizes that to understand where AI is heading, we must understand its scientific history, particularly what impeded progress in each paradigm and how those hindrances were addressed in each subsequent shift. Below is a chart, featured in the paper, that breaks down the history of each AI paradigm:

[Chart from the paper: History of AI Paradigms]

The impetus in the expert systems paradigm was to “apply AI to diagnosis, planning, and design across a number of domains including healthcare, science, engineering, and business”. The idea was that if these systems performed at the same level as human experts, they would prove to be intelligent. One challenge in this shift was obtaining reliable knowledge from experts, a time-consuming process that could take decades. Researchers also found that the systems often made errors in common-sense reasoning, leading to the conclusion that human reasoning and language were too complex to be captured this way.
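
To make the flavor of that era concrete, here is a toy sketch in Python, our illustration rather than anything from Dhar’s paper, of how expert systems encoded knowledge as hand-written if-then rules. The diagnose function and its medical rules are entirely hypothetical:

```python
# A toy expert system: knowledge hand-coded as if-then rules elicited from
# human experts. All rules here are hypothetical; real systems held thousands.
def diagnose(symptoms: set) -> str:
    # Rules written by hand after interviewing experts, a process
    # that could take years for a single domain.
    if {"fever", "cough"} <= symptoms:
        return "possible flu"
    if {"sneezing", "itchy eyes"} <= symptoms:
        return "possible allergy"
    # The brittleness problem: anything outside the encoded rules falls
    # through, and there is no common sense to fall back on.
    return "unknown"

print(diagnose({"fever", "cough"}))  # -> possible flu
print(diagnose({"fatigue"}))         # -> unknown (outside the rules)
```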

The machine learning paradigm emerged in the late 1980s and 1990s, in tandem with the growth of the Internet and an increasing abundance of observational and transactional data. This shift moved away from hand-coding knowledge into the machine and toward training the machine to learn rules from data “guided by human intuition”. Machine learning enabled machines to learn models automatically from examples selected by humans. However, it could not consume raw data from the world directly, which is ultimately what was needed; it still relied on humans to perform the difficult task of feature engineering.
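
By contrast with the rule-based sketch above, here is a minimal sketch of the machine-learning-era workflow, assuming scikit-learn as the library; it is our example, not the paper’s. A human decides which features describe each example (the loan features and labels below are invented), and the model learns a rule from those hand-engineered features:

```python
# Machine-learning-era workflow: a human hand-engineers the features,
# and the model learns a rule from them. Features and labels are invented
# purely for illustration.
from sklearn.linear_model import LogisticRegression

# Hypothetical hand-chosen features for a loan decision:
# [income_to_debt_ratio, years_at_job, num_late_payments]
X = [
    [3.2, 5.0, 0.0],
    [0.8, 0.5, 4.0],
    [2.5, 7.0, 1.0],
    [0.6, 1.0, 6.0],
]
y = [1, 0, 1, 0]  # 1 = repaid, 0 = defaulted (toy labels)

model = LogisticRegression().fit(X, y)
# The learned rule only ever sees the features a human decided to compute.
print(model.predict([[2.0, 3.0, 1.0]]))
```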

Deep learning relieved much of the headache associated with feature engineering. Unlike the machine learning paradigm, deep learning employs algorithms that can consume raw data much the way a human would, learning their own internal features along the way. When it comes to common sense, however, current machines still cannot match humans.
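
A small sketch, assuming PyTorch and a raw 28x28 grayscale image, illustrates the difference: the network ingests raw pixels directly and learns its own internal features, with no human feature-engineering step. Again, this is our illustration, not code from the paper:

```python
# Deep-learning-era workflow: the network consumes raw pixels and learns
# its own internal features; no human feature-engineering step.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3),  # learns low-level visual features itself
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * 26 * 26, 10),      # maps learned features to 10 classes
)

raw_image = torch.rand(1, 1, 28, 28)  # a raw 28x28 grayscale image, untouched
logits = model(raw_image)
print(logits.shape)  # torch.Size([1, 10])
```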

This brings us to general intelligence, the current paradigm, whose foundation is pre-trained models. Whereas previous AI applications were tuned to a single task, general intelligence is about machines’ ability to integrate knowledge the way humans do, and to apply that knowledge to “unforeseen situations”.

Each paradigm shift has greatly expanded the breadth of applications: machine learning brought structured databases into play, and deep learning added the ability to handle both structured and unstructured data in a human-like way. Pre-trained models, in turn, provide the building blocks for general intelligence “by virtue of being domain-independent, requiring minimal curation, and being transferable across applications”. They mark a pivotal departure from the paradigms that came before.
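
As a rough illustration of what “domain-independent … transferable across applications” can look like in practice, here is a sketch using the Hugging Face transformers library; it is our example, not one from the paper, and it downloads a default pre-trained model on first run:

```python
# One pre-trained model, reused with no task-specific training or
# feature engineering.
from transformers import pipeline

classifier = pipeline("zero-shot-classification")

# The same model can be pointed at a domain it was never curated for.
print(classifier(
    "The patient reports chest pain and shortness of breath.",
    candidate_labels=["medicine", "finance", "sports"],
))
```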

However, Dhar cautions against assuming that we have arrived at the “right paradigm” for AI, since it too will eventually and inevitably prove to have limitations. These shifts do not always improve on the previous paradigms in every respect. The challenge with the current paradigm is trust. ChatGPT, for example, can be trained on far more data than a human expert would encounter in a lifetime, yet it seems unable to explain its rationale or engage in self-analysis. We can therefore never be certain of its accuracy, because there is always a possibility it is “hallucinating” to compensate for gaps in its knowledge. “It’s like talking to someone intelligent that you can’t always trust,” warns Dhar. The systems are also unpredictable, which raises further issues of confidence. “We are seeing the emergence of things like fake art and documents,” says Dhar. Ultimately, these problems of trust and alignment need to be addressed, and they are among the most pressing challenges humanity must contend with.

To learn more about Vasant Dhar’s thoughts on AI, we encourage you to listen to his podcast Brave New World, which has featured notable guests such as Scott Galloway, Jonathan Haidt, Yann LeCun, and many more.

By Ashley C. McDonald
