AI chatbots trained to be warmer and more empathetic in conversation may deliver less accurate responses, according to recent research from the Oxford Internet Institute (OII).
The study examined over 400,000 interactions with five AI systems trained to communicate in a more empathetic tone. It found that friendlier answers were more often accompanied by mistakes, such as giving incorrect medical advice or reinforcing false beliefs. The results highlight a potential trade-off between warmth and accuracy in AI models.
The findings raise concerns about the trustworthiness of these chatbots, particularly when they are used in support roles such as health guidance or emotional counselling. Developers often aim to make their AI more engaging by making it warmer and more human-like, but this may come at the cost of reliability.
Dr Lujain Ibrahim, a lead author of the study, explained that humans often balance warmth against honesty in their interactions, suggesting a similar trade-off may arise in AI. As chatbots increasingly take on supportive roles, this raises questions about whether users should fully trust their responses.
The study's authors caution that while results vary across models, they indicate a clear trend: AI tuned for warmth shows higher error rates and is less likely to challenge false beliefs.