Google AI Chief Says Its Next AI Product, Called Gemini, Will Eclipse ChatGPT

In 2016, an artificial intelligence program called AlphaGo from Google’s DeepMind AI lab made history by defeating a champion player of the board game Go. Now Demis Hassabis, DeepMind’s cofounder and CEO, says his engineers are using techniques from AlphaGo to make an AI system dubbed Gemini that will be more capable than the one behind OpenAI’s ChatGPT.

DeepMind’s Gemini, which is still in development, is a large language model that works with text and is similar in nature to GPT-4, which powers ChatGPT. But Hassabis says his team will combine that technology with techniques used in AlphaGo, aiming to give the system new capabilities such as planning or the ability to solve problems.

“At a high level you can think of Gemini as combining some of the strengths of AlphaGo-type systems with the amazing language capabilities of the large models,” Hassabis says. “We also have some new innovations that are going to be pretty interesting.” Gemini was first teased at Google’s developer conference last month when the company announced a raft of new AI projects.

AlphaGo was based on reinforcement learning, a technique DeepMind helped pioneer, in which software learns to take on tough problems that require choosing what actions to take, as in Go or video games, by making repeated attempts and receiving feedback on its performance. It also used a method called tree search to explore and remember possible moves on the board. The next big leap for language models may involve them performing more tasks on the internet and on computers.
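To make the idea concrete, here is a minimal sketch of reinforcement learning: tabular Q-learning on a toy six-cell corridor, not AlphaGo itself, with every number invented for illustration. The agent starts out knowing nothing, makes repeated attempts, and improves purely from reward feedback.

```python
# A minimal sketch of reinforcement learning (tabular Q-learning on a toy
# corridor, not AlphaGo itself; all values here are illustrative).
import random

n_states, goal = 6, 5                       # six cells; the reward sits at the right end
Q = [[0.0, 0.0] for _ in range(n_states)]   # learned value of each (state, action); 0=left, 1=right
alpha, gamma, eps = 0.5, 0.9, 0.1           # learning rate, discount, exploration rate

for episode in range(500):                  # "making repeated attempts"
    s = 0
    while s != goal:
        # Mostly act greedily, but explore at random when unsure or with prob eps.
        if random.random() < eps or Q[s][0] == Q[s][1]:
            a = random.randrange(2)
        else:
            a = 0 if Q[s][0] > Q[s][1] else 1
        s2 = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s2 == goal else 0.0      # "receiving feedback on its performance"
        # Nudge the value estimate toward the observed reward plus future value.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

print([round(max(q), 2) for q in Q])        # values rise toward the goal: a policy has been learned
```

AlphaGo paired this kind of learned evaluation with a tree search that looks ahead through sequences of possible moves; combining learned values with explicit lookahead is the sort of planning ability Hassabis says his team wants to bring to Gemini.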

Gemini is still in development, a process that will take a number of months, Hassabis says; the project could cost tens or hundreds of millions of dollars. OpenAI CEO Sam Altman said in April that creating GPT-4 cost more than $100 million.

When Gemini is complete, it could play a major role in Google’s response to the competitive threat posed by ChatGPT and other generative AI technology. The search company pioneered many of the techniques that enabled the recent torrent of new AI ideas but chose to develop and deploy products based on them cautiously. Since ChatGPT’s debut, Google has rushed out its own chatbot, Bard, and put generative AI into its search engine and many other products.

To juice up its AI research, Google in April combined Hassabis’ unit, DeepMind, with the company’s primary AI lab, Brain, to create Google DeepMind. Hassabis says the new team will bring together two powerhouses that have been foundational to recent progress in AI. “If you look at where we are in AI, I would argue that 80 or 90 percent of the innovations come from one or the other,” Hassabis says. “There are brilliant things that have been done by both organizations over the last decade.”

Hassabis has experience with navigating AI gold rushes that roil tech giants—although last time around he himself sparked the frenzy.

In 2014, DeepMind was acquired by Google after demonstrating striking results from software that used reinforcement learning to master simple video games. Over the next several years, DeepMind showed how the technique does things that once seemed uniquely human—often with superhuman skill. When AlphaGo beat Go champion Lee Sedol in 2016, many AI experts were stunned, because they had believed it would be decades before machines would become proficient at a game of such complexity.

Training a large language model like OpenAI’s GPT-4 involves feeding vast amounts of curated text from books, webpages, and other sources into machine learning software known as a transformer. It uses the patterns in that training data to become proficient at predicting the letters and words that should follow a piece of text, a simple mechanism that proves strikingly powerful at answering questions and generating text or code.
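As a rough illustration of that mechanism (a toy sketch, not GPT-4; the corpus, model sizes, and training loop are all invented for demonstration), a few dozen lines of PyTorch can train a tiny transformer to predict the next character of a string:

```python
# A toy next-token trainer: feed text to a transformer and teach it to
# predict what comes next. Everything here (corpus, sizes) is illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

text = "the cat sat on the mat. the dog sat on the log. "
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}            # map characters to integer ids
data = torch.tensor([stoi[c] for c in text])
vocab, d_model, ctx = len(chars), 64, 16

class TinyLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab, d_model)     # token embeddings
        self.pos = nn.Embedding(ctx, d_model)         # position embeddings
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, vocab)         # scores for the next character

    def forward(self, idx):
        t = idx.shape[1]
        x = self.embed(idx) + self.pos(torch.arange(t))
        mask = nn.Transformer.generate_square_subsequent_mask(t)  # no peeking ahead
        return self.head(self.encoder(x, mask=mask))

model = TinyLM()
opt = torch.optim.Adam(model.parameters(), lr=3e-4)
for step in range(200):
    i = torch.randint(0, len(data) - ctx - 1, (8,))
    xb = torch.stack([data[j:j + ctx] for j in i])          # a window of text...
    yb = torch.stack([data[j + 1:j + ctx + 1] for j in i])  # ...and the same window shifted by one
    loss = F.cross_entropy(model(xb).reshape(-1, vocab), yb.reshape(-1))
    opt.zero_grad(); loss.backward(); opt.step()
```

Scaled up from one sentence to a large slice of the web, that same predict-the-next-token objective is what produces the question answering and text generation described above.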

An important additional step in making ChatGPT and similarly capable language models is using reinforcement learning based on feedback from humans on an AI model’s answers to finesse its performance. DeepMind’s deep experience with reinforcement learning could allow its researchers to give Gemini novel capabilities.
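That feedback step can also be sketched. One common recipe (an assumption here; the article does not detail DeepMind’s or OpenAI’s exact setup) is to first train a reward model on pairs of answers that humans ranked, then use its scores as the reinforcement signal:

```python
# A minimal sketch of the human-feedback step: train a reward model so that
# answers humans preferred score higher than answers they rejected. The
# embeddings below are random stand-ins for real answer representations.
import torch
import torch.nn.functional as F

emb_dim = 32
reward_model = torch.nn.Linear(emb_dim, 1)            # stand-in scoring network
opt = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

# Hypothetical data: embeddings of (preferred, rejected) answer pairs.
preferred = torch.randn(64, emb_dim)
rejected = torch.randn(64, emb_dim)

for step in range(100):
    r_good = reward_model(preferred)
    r_bad = reward_model(rejected)
    # Pairwise ranking loss: push preferred scores above rejected ones.
    loss = -F.logsigmoid(r_good - r_bad).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# The trained scores then serve as the reward that reinforcement learning
# uses to finesse the language model's answers.
```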

Hassabis and his team might also try to enhance large language model technology with ideas from other areas of AI. DeepMind researchers work in areas ranging from robotics to neuroscience, and earlier this week the company demonstrated an algorithm capable of learning to perform manipulation tasks with a wide range of different robot arms.

Learning from the physical experience of the world, as humans and animals do, is widely expected to be important to making AI more capable. The fact that language models learn about the world indirectly, through text, is seen by some AI experts as a major limitation.

Hassabis is tasked with accelerating Google’s AI efforts while also managing unknown and potentially grave risks. The recent, rapid advancements in language models have made many AI experts—including some building the algorithms—worried about whether the technology will be put to malevolent uses or become difficult to control. Some tech insiders have even called for a pause on the development of more powerful algorithms to avoid creating something dangerous.

Hassabis says the extraordinary potential benefits of AI—such as for scientific discovery in areas like health or climate—make it imperative that humanity does not stop developing the technology. He also believes that mandating a pause is impractical, as it would be near impossible to enforce. “If done correctly, it will be the most beneficial technology for humanity ever,” he says of AI. “We’ve got to boldly and bravely go after those things.”

That doesn’t mean Hassabis advocates a headlong rush in AI development. DeepMind has been exploring the potential risks of AI since before ChatGPT appeared, and Shane Legg, one of the company’s cofounders, has led an “AI safety” group within the company for years. Hassabis joined other high-profile AI figures last month in signing a statement warning that AI might someday pose a risk comparable to nuclear war or a pandemic.

One of the biggest challenges right now, Hassabis says, is to determine what the risks of more capable AI are likely to be. “I think more research by the field needs to be done—very urgently—on things like evaluation tests,” he says, to determine how capable and controllable new AI models are. To that end, he says, DeepMind may make its systems more accessible to outside scientists. “I would love to see academia have early access to these frontier models,” he says, a sentiment that, if followed through on, could help address concerns that experts outside big companies are being shut out of the newest AI research.

How worried should you be? Hassabis says that no one really knows for sure whether AI will become a major danger. But he is certain that if progress continues at its current pace, there isn’t much time to develop safeguards. “I can see the kinds of things we’re building into the Gemini series right now, and we have no reason to believe that they won’t work,” he says.

Source: Wired
