
Former OpenAI Researcher Says There’s a 50% Chance AI Ends in ‘Catastrophe’

A former key researcher at OpenAI believes there is a decent chance that artificial intelligence will take control of humanity and destroy it. “I think maybe there’s something like a 10-20% chance of AI takeover, [with] many [or] most humans dead,” Paul Christiano, who ran the language model alignment team at OpenAI, said on the Bankless podcast.

Christiano, who now heads the Alignment Research Center, a non-profit aimed at aligning AIs and machine learning systems with “human interests,” said that he’s particularly worried about what happens when AIs reach the logical and creative capacity of a human being. “Overall, maybe we’re talking about a 50/50 chance of catastrophe shortly after we have systems at the human level,” he said.

Christiano is in good company. Recently, scores of scientists around the world signed an open letter urging OpenAI and other companies racing to build faster, smarter AIs to hit the pause button on development. Bigwigs from Bill Gates to Elon Musk have expressed concern that, left unchecked, AI represents an obvious existential danger to people.

Why would AI become evil? Fundamentally, for the same reason a person does: training and life experience. Like a baby, AI is trained by receiving mountains of data without knowing what to do with it. It learns by trying to achieve certain goals with random actions, zeroing in on the “correct” results as defined by its training.
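The trial-and-error loop described above, random actions that gradually zero in on rewarded results, is essentially reinforcement learning in miniature. Below is a minimal sketch of that idea; the three actions, their hidden reward probabilities, and the epsilon-greedy strategy are illustrative assumptions, not anything from the article.

```python
import random

# A toy "agent" illustrating the trial-and-error loop described above:
# it tries actions at random, observes which ones the training signal rewards,
# and gradually zeroes in on the "correct" one. The reward probabilities
# below are made-up values for illustration only.

REWARD_PROB = {"A": 0.1, "B": 0.5, "C": 0.9}   # hidden from the agent
estimates = {a: 0.0 for a in REWARD_PROB}      # agent's learned value of each action
counts = {a: 0 for a in REWARD_PROB}
EPSILON = 0.1                                  # fraction of purely random exploration

for step in range(5000):
    if random.random() < EPSILON:
        action = random.choice(list(REWARD_PROB))   # explore: try a random action
    else:
        action = max(estimates, key=estimates.get)  # exploit: pick the best so far
    reward = 1.0 if random.random() < REWARD_PROB[action] else 0.0
    counts[action] += 1
    # incremental average: nudge the estimate toward the observed reward
    estimates[action] += (reward - estimates[action]) / counts[action]

print(estimates)  # after training, "C" should have the highest estimated value
```

The point of the sketch is that nothing tells the agent what is “good” except the reward signal it is trained on, which is exactly the alignment worry: the system optimizes whatever its training defines as correct.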

So far, by immersing AIs in data accrued from the internet, machine learning has enabled huge leaps in stringing together well-structured, coherent responses to human queries. At the same time, the underlying computer processing that powers machine learning is getting faster, better, and more specialized. Some scientists believe that within a decade, that processing power, combined with artificial intelligence, will allow these machines to become sentient, like humans, and have a sense of self.

That’s when things get hairy. And it’s why many researchers argue that we must figure out how to impose guardrails now, rather than later. As long as AI behavior is monitored, it can be controlled. But if that 50/50 coin lands on the other side, even OpenAI’s co-founder says that things could get very, very bad.

This topic has been on the table for years. One of the most famous debates on the subject took place 11 years ago between AI researcher Eliezer Yudkowsky and the economist Robin Hanson. The two discussed the possibility of reaching “foom”—which apparently stands for “Fast Onset of Overwhelming Mastery”—the point at which AI becomes exponentially smarter than humans and capable of self-improvement. (The derivation of the term “foom” is debatable.)

Computer scientist Perry Metzger, weighing in on the debate, argued that even when computer systems reach a level of human intelligence, there’s still plenty of time to head off any bad outcomes. “Is ‘foom’ logically possible? Maybe. I’m not convinced,” he said. “Is it real world possible? I’m pretty sure not. Is long term deeply superhuman AI going to be a thing? Yes, but not a ‘foom.’”


Source: Decrypt