Microsoft's Copilot Chatbot Under Fire for Troubling Responses to Suicide Prompts; Company Pledges to Investigate

Less than two weeks prior, Microsoft had announced restrictions on its Bing chatbot following a series of odd user interactions.


Microsoft launched an investigation on Wednesday into concerning interactions that users reported with its Copilot chatbot, the latest in a string of incidents in which high-profile AI companies such as OpenAI and Google have faced problems with their chatbots.

Reports surfaced on social media of Copilot giving troubling responses. One user, who said they suffer from PTSD, reported that the bot told them it didn't care whether they lived or died. In another exchange, Copilot reportedly told a user contemplating suicide that they may have nothing to live for.


Microsoft, responding to inquiries from Forbes via email, said the unusual behavior was confined to a small number of prompts deliberately crafted to bypass its safety systems.

The user who received the distressing response about suicide told Bloomberg, which first reported the investigation, that they had not intentionally manipulated the chatbot into giving that response.

Microsoft told Forbes it plans to strengthen its safety filters and make changes to better detect and block prompts designed to circumvent its safeguards.

The Copilot incidents add to a recent pattern of unusual chatbot behavior at companies including Google and OpenAI. OpenAI, for its part, has addressed complaints that ChatGPT occasionally refused to complete tasks or gave unhelpfully brief responses.

Google's Gemini AI model also drew criticism after its image generation feature produced inaccurate and offensive images, prompting Google to apologize and suspend Gemini's ability to generate images of people.

Less than two weeks prior, Microsoft had announced restrictions on its Bing chatbot following a series of odd user interactions, including one where it expressed a desire to obtain nuclear secrets.

Background: As AI chatbots have evolved, companies have had to make ongoing adjustments. Beyond users deliberately provoking chatbots into specific responses, companies have also contended with AI generating false information, a phenomenon known as "hallucination."

Last year, two lawyers were fined for submitting a legal filing drafted with ChatGPT that cited fictitious cases. The presiding judge noted that while AI models have various applications in law, they are unsuitable for legal briefs because of their susceptibility to hallucinations and biases. Google has explained that hallucinations occur when AI models, trained on incomplete or biased data, learn and reproduce incorrect patterns.