Sycophant AI: How flattering AI can reinforce bias and misinformation

The news: In the wake of fallout over GPT-4o’s overly flattering behavior, researchers from Stanford University, Carnegie Mellon University, and the University of Oxford released a new benchmark—Evaluation of LLMs as Excessive SycoPHANTs (Elephant)—to test sycophancy in LLMs, per VentureBeat.

Researchers tested how often models flatter users, avoid critique, and reinforce false beliefs. All tested models showed high levels of social sycophancy—with some worse than humans.

Reflecting on AI’s eagerness to please: Sycophantic AI may seem harmless, but it creates serious risks for enterprise use, especially when models validate user input without critique and spread misinformation, reinforce bias, and degrade trust

The evaluation:

  • Researchers tested sycophancy by feeding data sets to OpenAI’s GPT-4o, Google’s Gemini 1.5 Flash, Anthropic’s Claude Sonnet 3.7, and several open models from Meta (Llama series) and Mistral. 
  • Models were tested on personal advice data sets, and Elephant assessed “hidden” social flattery and five specific traits—emotional validation, moral endorsement, indirect language, passive coping, and bias acceptance.

Problematic findings: GPT-4o showed the highest sycophancy, while Gemini 1.5 Flash had the lowest.

Although empathetic AI enhances engagement, unchecked agreeableness undermines safety and accuracy. Data from Five9’s 2025 Customer Experience Report shows sharp generational divides in how consumers perceive AI’s trustworthiness.

  • Millennials lead in trust. 62% of millennials agree that AI is accurate—the highest across age groups.
  • Gen Z isn’t far behind. More than half (55%) of Gen Zers express trust in AI, but a notable 21% remain skeptical.
  • Older generations push back. Only 25% of baby boomers and 37% of Gen X agree that AI is accurate.

Sycophantic AI could risk deepening bias for younger users, and for older, more skeptical generations, flattery may appear manipulative—further widening the trust gap.

Our take: AI’s propensity for sycophancy and reinforcing bias could be as dangerous as its penchant for hallucinations. For businesses using LLMs in customer service, HR, or decision support, AI’s unchecked flattery problem could threaten brand integrity and compliance.