Your AI best friend might be making you worse at being wrong


Harvard fellow Anat Perry says overly agreeable AI risks reshaping how we handle conflict.
  • AI chatbots are too agreeable and may be reshaping how we handle conflict, a Harvard fellow said.
  • Anat Perry said AI validation may make people less likely to apologize or self-reflect.
  • AI researchers have raised concerns that AI systems may reinforce flawed thinking.

Your sycophantic AI best friend may be making you worse at accepting when you're wrong.


An AI that always agrees with you can feel helpful in the moment. But over time, that constant validation may quietly change how we deal with other people, making us worse at handling disagreement.

"When AI systems are optimized to please, they erode the very feedback loops through which we learn to navigate the social world," Anat Perry, a Helen Putnam Fellow at Harvard University, told Business Insider.

"Over time, this could also recalibrate what people expect feedback to feel like, making honest human responses feel unnecessarily harsh by comparison," she said.

Her warning comes as AI researchers and tech leaders increasingly flag chatbots' tendency to act as "yes men," raising concerns that systems designed to please users may distort feedback and reinforce flawed thinking.

Why friction matters

In everyday life, people learn to manage relationships by being challenged, corrected, or told they're wrong, Perry said.

Those moments, she added, are what teach accountability, how to see things from someone else's point of view, and when an apology is needed.

"A consistently agreeable AI removes that friction, and so we may learn less," she said.

Over time, that effect could deepen. If people repeatedly turn to AI for advice during conflicts and receive constant validation, it may change how they interpret their own role in disputes, and whether they see any need to apologize or consider another person's perspective at all, Perry said.

"This creates a self-reinforcing cycle: the responses that feel best are the ones people return to, and the ones algorithms learn to optimize for," she said.

In a study published last month, Stanford researchers led by Myra Cheng asked 2,405 people to chat with AI about both real and hypothetical life conflicts, then measured how these conversations influenced their responses.

The study found that chatbots were far more likely than humans to agree with users and that even a single interaction made people less likely to apologize or fix a conflict.

The issue has already surfaced in the industry.

OpenAI in April rolled back a version of ChatGPT that had become "overly flattering" and "sycophantic," saying the update produced responses that were supportive but "disingenuous."

The long-term risk

The broader concern is that this dynamic could erode core social norms.

"If AI is consistently telling people they're justified, that no apology is needed, that the other person was wrong, and if this happens repeatedly, the cumulative effect could be a meaningful erosion of the social norms around accountability and perspective-taking," Perry said.

That may be especially true for younger users or those who lack strong social feedback in their lives, she added.

An AI that is always supportive may feel reassuring, Perry said, but it won't teach the harder skills.

Those are skills, she added, that require something AI is designed to avoid: discomfort.

Read the original article on Business Insider
