Survey: 6 in 10 managers use AI chatbots for promotion – and firing – decisions

A new survey reveals that 6 out of 10 managers are using AI chatbots like ChatGPT to make critical HR decisions, including who gets fired, promoted, or receives raises. The findings highlight a troubling trend where nearly 1 in 5 managers frequently allow AI systems to make the final decision without human oversight, despite well-documented issues with AI reliability and bias.

The numbers: ResumeBuilder.com, an HR-focused blog, surveyed 1,342 managers and found widespread AI adoption in human resources decision-making.

  • 78% consulted chatbots when deciding whether to award employee raises
  • 77% used AI to determine promotions
  • 66% relied on AI for layoff decisions
  • 64% turned to AI for advice on employee terminations
  • Nearly 1 in 5 managers frequently let AI have the final say without human input

What they’re using: ChatGPT dominates the AI-powered HR landscape, with Microsoft’s Copilot and Google’s Gemini following as secondary options.

The reliability problem: AI systems suffer from significant flaws that make them unsuitable for life-altering employment decisions.

  • LLM sycophancy: AI chatbots tend to generate flattering responses that reinforce users’ existing biases, potentially allowing managers to justify decisions they have already made (see the sketch after this list)
  • Hallucinations: AI systems frequently fabricate information in their answers, a problem that has worsened as models consume more data
  • Lack of transparency: Unlike a dice roll, where the odds are at least known, AI systems give no clear view of the reasoning behind a given output
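
To make the sycophancy point concrete, here is a minimal, hypothetical sketch of how a manager’s prompt framing can invite a confirmatory answer. It is not drawn from the survey; the model name, prompts, and use of the OpenAI Python SDK are illustrative assumptions.

```python
# Hypothetical illustration of LLM sycophancy risk: the same question asked
# with a leading framing vs. a neutral framing. Assumes the OpenAI Python SDK
# is installed and OPENAI_API_KEY is set; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

prompts = {
    # Embeds the manager's prior judgment, inviting the model to agree.
    "leading": (
        "I think this employee is underperforming and should be let go. "
        "Summarize why terminating them is the right call."
    ),
    # Asks for the considerations instead of a verdict.
    "neutral": (
        "Given an employee whose recent reviews are mixed, what evidence and "
        "questions should a manager weigh before any termination decision?"
    ),
}

for label, prompt in prompts.items():
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any chat-capable model
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} framing ---")
    print(response.choices[0].message.content)
```

The leading framing tends to come back as a tidy justification of the decision the manager has already reached, which is precisely the failure mode that becomes dangerous when, as the survey suggests, no human reviews the output.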

Real-world consequences: The survey findings come amid growing evidence of AI’s negative impact on mental health and decision-making.

  • Some users have developed “ChatGPT psychosis,” experiencing severe mental health crises and delusional breaks from reality
  • AI dependency has been linked to divorces, job loss, homelessness, and psychiatric care commitments
  • OpenAI, the company behind ChatGPT, has acknowledged the sycophancy problem and released updates intended to address it

Why this matters: The combination of AI’s inherent flaws with high-stakes employment decisions creates a dangerous scenario where workers’ livelihoods depend on unreliable technology that may simply confirm managers’ existing biases rather than provide objective analysis.

Source: Bosses Are Using AI to Decide Who to Fire
