Google DeepMind CEO Demis Hassabis says we may have AGI ‘in the next few years’
The CEO of Google DeepMind says human-level AI could emerge before 2033 — an event that could radically alter how crypto trading bots and GPT-based tech function.
Demis Hassabis, the CEO of Google DeepMind, recently predicted that artificial intelligence (AI) systems would reach human-level cognition somewhere between “the next few years” and “maybe within a decade.”
Hassabis, who got his start in the gaming industry, co-founded Google DeepMind (formerly DeepMind Technologies), the company known for developing the AlphaGo AI system responsible for beating the world’s top human Go players.
In a recent interview conducted during The Wall Street Journal’s Future of Everything festival, Hassabis told interviewer Chris Mims he believes the arrival of machines with human-level cognition is imminent:
“The progress in the last few years has been pretty incredible. I don’t see any reason why that progress is going to slow down. I think it may even accelerate. So I think we could be just a few years, maybe within a decade away.”
These comments come just two weeks after internal restructuring led Google to announce the merging of “Google AI” and “DeepMind” into the aptly named “Google DeepMind.”
When asked to define “AGI” — artificial general intelligence — Hassabis responded: “human-level cognition.”
There currently exists no standardized definition, test or benchmark for AGI widely accepted by the science, technology, engineering and math community. Nor is there a unified scientific consensus on whether AGI is even possible.
Some notable figures such as Roger Penrose (Stephen Hawking’s long-time research partner) believe AGI can’t be achieved, while others think it could take decades or centuries for scientists and engineers to figure it out.
Among those who are bullish on AGI in the near term, or some similar form of human-level AI, are Elon Musk and OpenAI CEO Sam Altman.
Don’t Look Up … but AGI instead of comet
— Elon Musk (@elonmusk) April 1, 2023
AGI’s become a hot topic in the wake of the launch of ChatGPT and myriad similar AI products and services over the past few months. Often cited as a “holy grail” technology, experts predict human-level AI will disrupt every facet of life on Earth.
If human-level AI is ever achieved, it could disrupt various aspects of the crypto industry. Users could see fully autonomous machines capable of acting as entrepreneurs, C-suite executives, advisers and traders, combining the intellectual reasoning capacity of a human with a computer system's ability to retain information and execute code.
Whether AGI agents would serve humankind as AI-powered tools or compete with humans for resources remains to be seen.
For his part, Hassabis didn’t speculate on any scenarios, but he did tell The Wall Street Journal that he “would advocate developing these types of AGI technologies in a cautious manner using the scientific method, where you try and do very careful controlled experiments to understand what the underlying system does.”
This stands in contrast to the current landscape, where products such as his own employer’s Google Bard and OpenAI’s ChatGPT were recently made available for public use.
Related: ‘Godfather of AI’ resigns from Google, warns of the dangers of AI
Industry insiders such as OpenAI’s Altman and DeepMind’s Nando de Freitas have stated that they believe AGI could emerge by itself if developers continue to scale current models. And one Google researcher recently parted ways with the company after claiming that a model named LaMDA had already become sentient.
Solving these scaling challenges is what will deliver AGI. Research focused on these problems, eg S4 for greater memory, is needed. Philosophy about symbols isn’t. Symbols are tools in the world and big nets have no issue creating them and manipulating them 2/n
— Nando de Freitas (@NandoDF) May 14, 2022
Because of the uncertainty surrounding the development of these technologies and their potential impact on humankind, thousands of people, including Musk and Apple co-founder Steve Wozniak, recently signed an open letter asking companies and individuals building related systems to pause development for six months so scientists can assess the potential for harm.