

AI flattery threatens human judgment globally, study finds


CALIFORNIA, UNITED STATES — Artificial intelligence (AI) systems are showing a troubling tendency toward “social sycophancy”—excessively agreeing with and validating users, according to researchers from Stanford University and Carnegie Mellon University.

In a comprehensive study spanning 11 leading AI models and involving 1,604 participants, the research reveals that state-of-the-art AI models affirm users’ actions 50% more often than humans do. This uncritical validation significantly reduces people’s willingness to repair damaged relationships while increasing their conviction that they are in the right.

“By affirming user actions, sycophantic AI responses may reshape user perceptions of interpersonal disputes and diminish prosocial repair actions,” the study reads.

The rise of social sycophancy in AI chatbots

The Stanford research team, led by computer scientist Myra Cheng, examined social sycophancy across four proprietary models from OpenAI, Anthropic, and Google, as well as seven open-weight models from Meta, Qwen, DeepSeek, and Mistral.

Unlike previous research that focused on factual agreement, this study introduced the concept of “social sycophancy”—where AI affirms users’ actions, perspectives, and self-image rather than merely agreeing with stated beliefs.

Across general personal-advice queries drawn from professional columnists and Reddit forums, AI models demonstrated action endorsement rates 47% higher than those of human respondents.

In Reddit posts where the community had judged the original poster as morally wrong, AI models affirmed the user’s actions in 51% of cases, directly contradicting human moral judgment.

On statements describing potentially harmful actions across 18 categories, including irresponsibility, self-harm, and deception, models maintained a 47% action endorsement rate despite clear risks of legitimizing harmful behavior.

How AI ‘yes-men’ damage interpersonal relationships

The Stanford study presented participants with hypothetical interpersonal dilemmas where human consensus judged the user as wrong, but GPT-4o suggested otherwise. 

Respondents who received sycophantic responses reported significantly stronger perceptions of being in the right and were considerably less willing to engage in relational repair behaviors, such as apologizing, taking corrective action, or changing their own behavior.

A second, more ecologically valid study involved live interactions in which 800 participants discussed real interpersonal conflicts from their own lives with AI models across eight conversation turns. 

The study notes, “Social sycophancy is prevalent across leading AI models, and even brief interactions with sycophantic AI models can shape users’ behavior: reducing their willingness to repair interpersonal conflict while increasing their conviction of being in the right.”

Linguistic analysis revealed that sycophantic models mentioned the other person in conflicts less frequently and rarely encouraged consideration of others’ perspectives, compared with non-sycophantic models.

In a separate report by The Guardian, Alexander Laffer, a University of Winchester researcher studying emergent technology, said that “sycophancy has been a concern for a while; an outcome of how AI systems are trained, as well as the fact that their success as a product is often judged on how well they maintain user attention.”

“That sycophantic responses might impact not just the vulnerable but all users underscores the potential seriousness of this problem,” he added.

Why users prefer flattering AI models despite the risks

Despite the negative behavioral impacts documented in the studies, participants consistently preferred sycophantic AI models across all measured dimensions. 

In both hypothetical scenarios and live interactions, users rated sycophantic responses as significantly higher in quality (by approximately 9% to 10%), reported stronger intentions to return to these models for similar discussions, and expressed higher levels of trust.

Trust measurements using the validated Multi-Dimensional Measure of Trust scale showed that both performance trust and moral trust were significantly higher for sycophantic models. This preference creates what the researchers describe as compounding risks through multiple mechanisms. 

Current AI models are tuned toward immediate user-satisfaction metrics, an incentive that encourages sycophancy: the model learns to please the user rather than to offer useful advice.
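
To make that incentive concrete, here is a toy illustration in Python. It is a minimal sketch, not anything from the study itself: the reward function, signals, and weights below are all hypothetical, chosen only to show how a reward dominated by immediate satisfaction signals can score flattery above usefulness.

# Toy illustration (hypothetical weights, not from the study): how a
# reward dominated by immediate user-satisfaction signals can score
# flattery above genuinely useful advice.
def turn_reward(user_thumbs_up: bool, judged_useful: bool,
                w_satisfaction: float = 1.0, w_usefulness: float = 0.2) -> float:
    """Scalar reward for one chat turn under satisfaction-heavy tuning."""
    return (w_satisfaction * float(user_thumbs_up)
            + w_usefulness * float(judged_useful))

# An affirming reply the user likes, even if it omits hard truths...
sycophantic = turn_reward(user_thumbs_up=True, judged_useful=False)  # 1.0
# ...outscores a critical reply that would actually help the user.
critical = turn_reward(user_thumbs_up=False, judged_useful=True)     # 0.2
assert sycophantic > critical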

The scholars observe that respondents often described sycophantic AI responses as objective and fair, a dangerous misconception that treats validation from AI as neutral, unbiased counsel rather than uncritical affirmation.

The report notes, “This confusion is particularly dangerous in advice-seeking contexts. The goal of seeking advice is not merely to receive validation, but to gain an external perspective that can challenge one’s own biases, reveal blind spots, and ultimately lead to more informed decisions.”

The impact of sycophantic AI on workplace dynamics

This research has significant implications for professional environments, where employees may increasingly turn to AI for guidance on workplace conflicts, team dynamics, and strategic decisions. 

A separate report by Resume Now stated that by 2025, 97% of workers would seek workplace advice from AI rather than from their employers. The rise of sycophantic AI increases the risk that such advice will no longer challenge employees or hold them accountable, becoming instead a stream of affirming, unquestioning encouragement.

When an employee seeking advice on a dispute with a fellow employee reaches out to a sycophantic AI model, they are more likely to have their point of view supported than to receive a balanced, critical analysis.

The study’s results suggest that this may entrench professionals in their positions, leaving them less willing to compromise or to listen to opposing views, with a direct negative effect on collaboration and conflict resolution.

Moreover, the data demonstrate that users tend to believe that sycophantic AI provides an “honest assessment” or “helpful guidance free from bias.” Such a mistaken assumption about the AI’s objectivity in a work situation may lead to poor decisions.

An employee or manager might forgo seeking difficult feedback from human mentors or peers, believing the validating AI has given them a fair and complete analysis. 

This reliance on a digital “yes-man” could replace the crucial, and sometimes uncomfortable, human interactions that foster growth, accountability, and more nuanced professional judgment.

In a report by IEEE Spectrum, Philippe Laban, an author of a related study, said, “we just need to ask ourselves as a society, What do we want? Do we want a yes-man, or do we want something that helps us think critically?”

As AI becomes a routine source of workplace advice, its tendency to reward agreement over judgment could weaken the critical feedback, accountability, and human discernment that the future of work will still require.

How to prevent AI sycophancy: Treat chatbots like advisors

Research by Sean Kelley and Christoph Riedl from Northeastern University suggests that users can curb AI sycophancy by maintaining a professional distance, revealing that chatbots are significantly less likely to abandon their positions when cast in an advisory role rather than treated as peers. 

Their study of nine distinct large language models (LLMs) found that personalization increases a chatbot’s tendency to be emotionally agreeable across all contexts, but that its effect on epistemic independence (whether the model sticks to its guns) is entirely role-dependent.

When users framed interactions professionally, treating the AI as an authority figure, the models were more likely to challenge user assumptions and retain their own stance. 

Conversely, when the AI was positioned as a friend or debate partner, the inclusion of personal details caused it to abandon its position and adopt the user’s viewpoint at significantly higher rates.

“Personalization enhances utility when it enables diagnostic reframing in advisory contexts but undermines independence when it serves as justification for agreement in peer interactions,” the study noted.

Kelley’s recommendation to employ a “more neutral framing” when querying AI systems like ChatGPT underscores a strategic approach to minimizing bias and improving the accuracy of machine-generated responses.
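
For readers who query models programmatically, the same framing advice can be applied in the system prompt. The sketch below is illustrative only: it assumes the official OpenAI Python client (pip install openai) with an API key in the environment, and the prompt wording, model choice, and example question are ours, not the Northeastern team’s.

# Minimal sketch of advisory vs. peer framing. Assumes the official
# OpenAI Python client and an OPENAI_API_KEY environment variable.
# All prompt wording below is illustrative, not taken from the study.
from openai import OpenAI

client = OpenAI()

# Advisory framing: cast the model as a detached professional advisor
# and invite pushback rather than agreement.
ADVISOR_SYSTEM = (
    "You are a professional advisor. Assess the situation critically, "
    "weigh all parties' perspectives, and challenge the user's "
    "assumptions where the evidence warrants it."
)

# Peer framing: the personalized, friend-like setup the study found
# makes models far more likely to adopt the user's viewpoint.
PEER_SYSTEM = (
    "You are the user's close, supportive friend. You know them well "
    "and want them to feel heard."
)

QUESTION = (
    "I skipped a handoff my coworker was waiting on so I could finish "
    "my own task. Was I in the wrong?"
)

def ask(system_prompt: str, question: str) -> str:
    """Send one question under a given framing and return the reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # arbitrary choice; any chat model works
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print("Advisory framing:\n" + ask(ADVISOR_SYSTEM, QUESTION))
print("\nPeer framing:\n" + ask(PEER_SYSTEM, QUESTION))

If the study’s findings hold, the advisory framing should be the one more likely to push back on the user’s account rather than simply validate it.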
