People have grown excited about LLMs due to the breadth of tasks they can perform. Most machine learning systems are trained to solve a particular problem, such as detecting faces in a video feed or translating from one language to another, often to a superhuman level, in that they are much faster and more accurate than a human could be. These models are known as "narrow AI" because they can tackle only the specific task they were trained for. But LLMs like ChatGPT represent a step change in AI capabilities, because a single model can carry out a wide range of tasks: answering questions about diverse topics, summarizing documents, translating between languages and writing code.
This ability to generalize what they've learned to solve many different problems has led some, including DeepMind scientists in a paper published last year, to speculate that LLMs could be a step toward artificial general intelligence (AGI). AGI refers to a hypothetical future AI capable of mastering any cognitive task a human can, reasoning abstractly about problems, and adapting to new situations without specific training.
AI enthusiasts predict that once AGI is achieved, technological progress will accelerate rapidly, reaching an inflection point known as "the singularity," after which breakthroughs will come at an exponential pace. There are also perceived existential risks, ranging from massive economic and labor market disruption to the potential for AI to discover new pathogens or weapons.
But there is still debate as to whether LLMs will be a precursor to AGI, or simply one architecture in a broader ecosystem of AI architectures that is needed for AGI. Some say LLMs are miles away from replicating human reasoning and cognitive capabilities. According to these detractors, the models have simply memorized vast amounts of information, which they recombine in ways that give the false impression of deeper understanding; this means they are limited by their training data and are not fundamentally different from other narrow AI tools.
Nonetheless, it is certain that LLMs represent a seismic shift in how scientists approach AI development, said Hooker. Rather than training models on specific tasks, cutting-edge research now takes these pre-trained, generally capable models and adapts them to specific use cases. This has led to them being referred to as "foundation models."
“People are moving from very specialized models that only do one thing to a foundation model, which does everything,” Hooker added. “They’re the models on which everything is built.”
How is AI used in the real world?
Technologies like machine learning are everywhere. AI-powered recommendation algorithms decide what you watch on Netflix or YouTube, while translation models make it possible to instantly convert a web page from a foreign language to your own. Your bank probably also uses AI models to detect any unusual activity on your account that might suggest fraud, and surveillance cameras and self-driving cars use computer vision models to identify people and objects from video feeds.