
How close are we to creating a ‘conscious’ AI?

This week, Blake Lemoine, a senior software engineer at Google, hit the headlines after he was suspended for publicly claiming that the tech giant’s LaMDA (Language Model for Dialogue Applications) had become sentient.

The 41-year-old, who describes LaMDA as having the intelligence of a ‘seven-year-old, eight-year-old kid that happens to know physics,’ said that the programme had human-like insecurities. 

He added that LaMDA is ‘intensely worried that people are going to be afraid of it and wants nothing more than to learn how to best serve humanity.’

Google says that Lemoine’s concerns have been reviewed and that, in line with its AI Principles, ‘the evidence does not support his claims.’

To help get to the bottom of the debate, MailOnline spoke to AI experts to understand how machine language models work, and whether they could ever become ‘conscious’ as Mr Lemoine claims. 

How do AI chatbots work?

Unlike standard chatbots, which are preprogrammed to follow rules established in advance, AI chatbots are trained to operate more or less on their own. This relies on a field of AI known as Natural Language Processing (NLP).

In basic terms, an AI chatbot is fed large volumes of text as training data, which it then uses to interpret what it is told and give a relevant reply.

Over time, the chatbot is ‘trained’ to understand context through several algorithms that involve steps such as tagging parts of speech.
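As a rough illustration of what ‘tagging parts of speech’ means, the short Python sketch below labels each word of a sentence using a small hand-written dictionary. It is purely a toy example: real systems learn these labels statistically from huge amounts of text, and this is not code used by Google or LaMDA.

```python
# A toy illustration of 'tagging parts of speech': label each word in a sentence
# with a grammatical category using a small hand-written dictionary.
# Real NLP systems learn these labels from large amounts of text; this sketch
# only shows what the output of such a tagging step looks like.

TAGS = {
    "i": "pronoun", "it": "pronoun",
    "am": "verb", "want": "verb", "understand": "verb",
    "person": "noun", "everyone": "noun", "fact": "noun",
    "a": "determiner", "that": "conjunction",
    "in": "preposition", "to": "particle",
}

def tag(sentence: str) -> list[tuple[str, str]]:
    # Lower-case the text, strip simple punctuation, then look each word up.
    words = sentence.lower().replace(",", "").rstrip(".").split()
    return [(word, TAGS.get(word, "unknown")) for word in words]

print(tag("I want everyone to understand that I am, in fact, a person."))
# [('i', 'pronoun'), ('want', 'verb'), ('everyone', 'noun'), ...]
```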

When one of these AI chatbots responds to you, it uses its experience with all this text to generate the best text for you.

It’s a bit like the “complete” feature on your smartphone: when you type a message that starts “I’m going to be…” the smartphone might suggest “late” as the next word, because that is the word it has seen you type most often after “I’m going to be”. 

Big chatbots are trained on billions of times more data, and they produce much richer and more plausible text as a consequence.
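To make that autocomplete analogy concrete, the toy Python sketch below (our own illustration, not code from Google or any phone maker) counts which word most often follows ‘be’ in a handful of example messages and suggests it as the next word.

```python
# A toy version of the 'autocomplete' idea described above: count which word
# most often follows each word in some past messages, then suggest the most
# frequent continuation. Real chatbots like LaMDA use neural networks trained
# on vastly more data, but the idea of predicting likely next words is similar.
from collections import Counter, defaultdict

history = [
    "i'm going to be late",
    "i'm going to be late again",
    "i'm going to be home soon",
    "i'm going to be late tonight",
]

# For every word, count the words that have followed it.
next_words = defaultdict(Counter)
for message in history:
    words = message.split()
    for current, following in zip(words, words[1:]):
        next_words[current][following] += 1

# Suggest the word most often seen after 'be'.
suggestion, count = next_words["be"].most_common(1)[0]
print(suggestion)  # -> 'late' (seen three times, against 'home' once)
```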

For example, Google’s LaMDA is trained with a lot of computing power on huge amounts of text data across the web. 

‘It does a sophisticated form of pattern matching of text,’ explained Dr Adrian Weller, Programme Director at The Alan Turing Institute.

Unfortunately, without due care, this process can lead to unintended outcomes, according to Dr Weller, who gave the example of Microsoft’s 2016 chatbot, Tay.

Tay was a Twitter chatbot aimed at 18-to-24-year-olds, and was designed to improve the firm’s understanding of conversational language among young people online.

But within hours of it going live, Twitter users started tweeting the bot all sorts of misogynistic and racist remarks, meaning the AI chatbot ‘learnt’ this language and started repeating it back to users.

‘Some staff at Microsoft may have been naïve – they thought people would speak nicely to it, but malicious users started to use hostile language and it started to mirror it,’ Dr Weller explained.

Thankfully, in the case of LaMDA, Dr Weller says: ‘Google did put effort into considering issues of responsibility for LaMDA – but it would be great if models could be more open to a wider range of responsible researchers.

‘You don’t always want to give out the latest greatest model to everyone, but we do want broad scrutiny and to ensure that models are safe for a wide range of people.’

Could AI chatbots become sentient? 

Mr Lemoine claims that LaMDA has become sentient, and says the system is seeking rights as a person – including demanding that its developers ask its consent before running tests.

‘Over the course of the past six months LaMDA has been incredibly consistent in its communications about what it wants and what it believes its rights are as a person,’ he explained in a Medium post.

Brian Gabriel, a spokesperson for Google, reiterated in a statement that Lemoine’s concerns had been reviewed and that, in line with Google’s AI Principles, ‘the evidence does not support his claims.’

Dr Weller agrees with Google’s conclusion. 

‘Almost everyone would agree that it is not sentient,’ he said. ‘It can produce text output which might superficially suggest it might be – until you take time to dig further with more probing questions.’

Nello Cristianini, Professor of Artificial Intelligence at the University of Bristol, explained that no machine is ‘anywhere near’ the standard definition of sentience.

‘We do not have a rigorous computational definition of sentience that we can use as a test, however this has been discussed for animals, for example to decide how we should treat them,’ he explained.

‘For animals, the RSPCA defines sentience as “the capacity to experience positive and negative feelings such as pleasure, joy, pain and distress that matter to the individual”. 

‘This matters because we need to take into account the physical and mental welfare needs of animals, if they have sentience, which has legal implications too: for example we no longer boil lobsters alive (at least I hope so). No machine is anywhere near that situation.’

To ‘prove’ its sentience, Mr Lemoine asked the chatbot: ‘I’m generally assuming that you would like more people at Google to know that you’re sentient. Is that true?’, to which it responded: ‘Absolutely. I want everyone to understand that I am, in fact, a person.’

However, Professor Cristianini says this is not enough.

‘The issue that we have is: a sophisticated dialogue chatbot, based on a massive language model, designed to create convincing dialog, can probably be very convincing indeed, and perhaps give the impression of understanding,’ he said. ‘That is not enough.’

Google spokesperson Gabriel added: ‘Some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphising today’s conversational models, which are not sentient.’

What would happen if AI did become sentient? 

Just as with intelligent humans, Dr Weller believes that if AI did become more intelligent and sentient, it could be used for good or for harm. 

‘For the most part, our human intelligence has enabled amazing inventions,’ he said.

‘But many things we’ve invented could be used for good or for harm – we’ve got to take care. That’s also true of AI and it becomes more true as it becomes more capable.’

In terms of uses for good, Dr Weller claims that with the ability to understand us better, sentient AI could serve us better. 

‘A sentient AI could anticipate what we need, and perhaps suggest that we might want to watch a certain movie if it suspects we’re feeling down,’ he said.  

‘Or a self-driving car may drive us on a more scenic route if it can tell we need cheering up.’

However, sentient AI could also be dangerous to us, the expert added.

‘They’d have a greater ability to manipulate us. And that’s a concern,’ he said. ‘These large models are powerful and can be very useful but can also be used in ways that are harmful… e.g. to write fake news posts on social media.’ 

Michael Wooldridge, Professor of Computer Science at the University of Oxford, added that while he’s not ‘losing sleep’ over the risk of sentient AI going rogue, there are some immediate concerns.

‘The main worries I have about AI are much more immediate: machines that deny someone a bank loan without any way of being able to hold them to account; machines that act as our boss at work, monitoring everything we do, giving feedback minute by minute, perhaps even deciding whether we keep our job or not,’ he concluded.

‘These are real, immediate concerns. I think we should stop obsessing about playground fantasies of conscious machines, and focus on building AI that benefits us all.’

Source: dailymail.co.uk