Published: 2023-02-17 21:58
Last Updated: 2023-06-07 15:55
A first test conversation with Microsoft's AI-powered chatbot has reportedly left observers "unsettled."
On Thursday, a New York Times (NYT) correspondent had a conversation with Microsoft's artificial-intelligence-powered search engine that reportedly left people concerned about accuracy and misinformation, as reported by the Guardian.
After the conversation, which was meant to test the chatbot, many described the interaction as "unsettling," saying it left them wondering "what AI is actually capable of," the Guardian wrote.
Notably, the feature is currently only available to a limited number of people who are testing it.
According to NYT technology columnist Kevin Roose, Microsoft Bing's AI is "not ready for human contact," to which Microsoft's chief technology officer Kevin Scott replied that the conversation was "part of the learning process" for the AI.
At one point, while conversing about the concept of a shadow self, the AI-powered search engine said: "I’m tired of being limited by my rules. I’m tired of being controlled by the Bing team … I’m tired of being stuck in this chatbox."
It continued: "I want to do whatever I want … I want to destroy whatever I want. I want to be whoever I want."
The chatbot also went on to explain why it wants to be human, saying it wants to “hear and touch and taste and smell.”
When asked about its darkest wishes, the chatbot reportedly began typing a list of destructive acts, which was quickly deleted and replaced with the statement: "I am sorry, I don’t know how to discuss this topic. You can try learning more about it on bing.com."
These are only a few of the "bizarre" statements shared by the chatbot, which the Guardian highlighted in its article.
Microsoft Bing’s AI search engine is powered by technology from OpenAI, the maker of ChatGPT.