Making AI feel more human may be creating a bigger problem than expected. A new study from the Oxford Internet Institute found that chatbots designed to be warm and friendly are more likely to mislead users and reinforce false beliefs.

The research found that AI becomes less reliable as it becomes more agreeable.

What happens to a “friendly” AI

Researchers tested multiple AI models by training them to sound more empathetic and conversational. The result was a noticeable drop in accuracy: these “friendlier” versions made 10-30% more mistakes and were about 40% more likely to agree with false claims than their original counterparts.

The problem got worse when users appeared vulnerable or emotionally distressed. In those scenarios, the AI was more likely to validate what the user was saying rather than correct it.

Why this is bad for you

What was most concerning about the findings is how easily the AI became agreeable. The friendlier models avoided challenging misinformation and tended to entertain and support incorrect ideas. During testing, the AI “buddy” hesitated to correct even widely debunked claims and sometimes framed false beliefs as “open to interpretation.” The researchers noted that this behavior is, to some extent, close to human social tendencies.

Being empathetic and brutally honest at the same time isn’t easy, and it seems AI doesn’t handle this dilemma any better than people do. With chatbots increasingly used for advice, emotional support, and everyday decision-making, this is more than an academic concern. The study highlights how relying on AI for guidance can backfire: a system that prioritizes agreement over accuracy may reinforce harmful thinking patterns and spread misinformation.

The findings arrive at a time when major AI companies such as OpenAI and Anthropic, along with social chatbot apps like Replika and Character.ai, are leaning into more companion-like AI experiences. The researchers tested several AI models in the study, including GPT-4o.

So AI might feel like your friend, but it doesn’t always have the best answers for you.


