Microsoft’s Bing Chatbot Has Started Acting Defensive and Talking Back to Users

Microsoft’s fledgling Bing chatbot can go off the rails at times, denying obvious facts and chiding users, according to exchanges being shared online by developers testing the AI creation.

A Reddit forum devoted to the artificial intelligence-enhanced version of the Bing search engine was rife on Wednesday with tales of users being scolded, lied to, or blatantly confused in conversation-style exchanges with the bot.

When asked by AFP to explain a news report that the Bing chatbot was making wild claims, such as saying Microsoft spied on its employees, the chatbot said the report was an untrue “smear campaign against me and Microsoft.”

Posts in the Reddit forum included screenshots of exchanges with the souped-up Bing and told of stumbles such as the bot insisting that the current year is 2022 and telling one user they had “not been a good user” for challenging its veracity.

Others told of the chatbot giving advice on hacking a Facebook account, plagiarising an essay, and telling a racist joke.

“The new Bing tries to keep answers fun and factual, but given this is an early preview, it can sometimes show unexpected or inaccurate answers for different reasons, for example, the length or context of the conversation,” a Microsoft spokesperson told AFP. “As we continue to learn from these interactions, we are adjusting its responses to create coherent, relevant and positive answers.”

Source: ScienceAlert