
Ex-Google CEO Claims AI Tools Can Be Used To Kill People

A former Google CEO has warned that artificial intelligence could be used to kill people in the future. Eric Schmidt, who spent two decades at the helm of the search giant, told a gathering of senior executives Wednesday that he believes AI presents an ‘existential risk’ for humanity, ‘defined as many, many, many, many people harmed or killed.’

The computer science PhD said the technology – which Google is helping spearhead through its relatively primitive Bard chatbot system – could be ‘misused by evil people’ when it becomes more advanced. Schmidt, who recently chaired the US National Security Commission on AI, is the latest in a slew of former Google staffers to come out publicly against the rapid development of the technology in recent weeks.

Schmidt focused specifically on AI’s burgeoning ability to identify software vulnerabilities for hackers, and on the technology’s potential to uncover new biological pathways that could lead to the creation of fearsome new bioweapons.

‘There are scenarios, not today, but reasonably soon, where these systems will be able to find zero-day exploits in cyber issues or discover new kinds of biology,’ Schmidt said at The Wall Street Journal’s CEO Council Summit in London. So-called ‘zero-day exploits’ are security flaws in code – anywhere from personal computing to digital banking to infrastructure – that have only just been discovered and have not yet been patched by cybersecurity teams. Zero days are among the most prized tools in a hacker’s arsenal.

Schmidt did not go into detail about which ‘new kinds of biology’ dreamed up by a maliciously run AI worry him most. ‘Now, this is fiction today,’ Schmidt cautioned, ‘but its reasoning is likely to be true. And when that happens, we want to be ready to know how to make sure these things are not misused by evil people.’

Schmidt’s comments, far from his first such warning, join a raucous debate across Silicon Valley over the moral questions and mortal dangers posed by AI. Elon Musk, Apple co-founder Steve Wozniak and the late Stephen Hawking are among the most famous critics of AI who believe it poses a ‘profound risk to society and humanity’ and could have ‘catastrophic effects’.

Earlier this spring, Geoffrey Hinton, the so-called ‘Godfather of Artificial Intelligence’, sensationally resigned from Google, warning that AI technology could upend life as we know it. Speaking to the New York Times about his resignation, he warned that in the near future, AI would flood the internet with false photos, videos and texts. These would be of a standard, he added, where the average person would ‘not be able to know what is true anymore’.

But Bill Gates, Sundar Pichai and futurist Ray Kurzweil are on the other side of the debate, hailing the technology as the ‘most important’ innovation of our time. Among these titans, only Schmidt helmed the creation of a mammoth 756-page report for the US government on the national security risks posed by AI.

‘America is not prepared to defend or compete in the AI era,’ wrote Schmidt and his vice chair at the US National Security Commission on AI in 2021. ‘This is the tough reality we must face.’ Schmidt, who spent three years chairing the fact-finding body alongside Bob Work, a former US deputy secretary of defense, argued that China was on track to outpace the US as planet Earth’s ‘AI superpower.’

‘We will not be able to defend against AI-enabled threats,’ Schmidt and Work wrote, ‘without ubiquitous AI capabilities and new warfighting paradigms.’ Their commission advised the Biden administration to commit to doubling US government spending on artificial intelligence research and development to $32 billion per year by 2026, and to free itself from dependence on overseas microchip manufacturing.

Schmidt and his commission also suggested that the US should renounce any calls for a global ban on AI-powered autonomous weapons, arguing that neither Russia nor China would uphold their end of any treaty banning them. In London this week, however, Schmidt told the gathering of CEOs that he did not have any clear ideas, personally, on how AI should, or even could, be regulated, suggesting that it should be a ‘broader question for society.’ He did say he believes it is unlikely that a new regulatory agency will be created to oversee AI in the United States.

Source: Daily Mail
