
AHA warns U.S. hospitals against deepfake AI scams

ILLINOIS, UNITED STATES — The American Hospital Association (AHA) is urging hospitals, health systems, and clinics across the United States to brace for a fast-growing wave of deepfake scams, warning that cybercriminals are now deploying AI-generated audio, video, and text to impersonate trusted individuals and deceive healthcare staff.

Deepfake surge puts hospitals and staff at higher risk

In a December 3 notice, the AHA said cybercriminals are increasingly using synthetic media to breach healthcare networks by manipulating employees into handing over credentials, approving fund transfers, or unknowingly hiring malicious remote IT workers. 

“Deep fakes are used to manipulate unwitting individuals by having them click on phishing emails, provide their credentials, hire malicious remote IT workers or transfer funds to criminal accounts,” said John Riggi, AHA national advisor for cybersecurity and risk.

For healthcare providers already stretched thin by workforce shortages and rising cyberattacks, deepfake-enabled schemes pose a dangerous new layer of complexity. 

AI-generated audio and video can closely mimic executives, clinicians, billing managers, or IT leaders, making traditional red flags harder to spot.

Multi-million dollar deepfake scams target healthcare teams

Recent incidents illustrate the accelerating threat landscape. In late 2025, Microsoft disrupted a phishing operation targeting at least 20 U.S. healthcare organizations that used stolen credentials and AI tools to hasten ransomware deployment. 

Pharmaceutical scams have also surged, with deepfake videos and voice recordings impersonating physicians across telehealth platforms and hospital networks. 

Meanwhile, the U.S. Department of Justice uncovered $46 million in fraudulent Medicare Advantage and telemedicine claims involving AI-generated documentation and falsified patient recordings.

And in one high-profile case cited in a federal brief, deepfake video calls impersonating a multinational company’s CFO successfully tricked finance employees into transferring $25 million—an example experts say mirrors risks facing U.S. hospital finance teams.

Beyond organizational risk, patients, especially seniors, face heightened vulnerability. Federal warnings note that AI-driven scams are increasingly targeting Medicare beneficiaries through phantom billing, upcoding, and identity theft. 

At the same time, AI-generated medical documentation, synthetic patient records, and fake provider identities are rendering traditional verification systems unreliable. Fraudsters are also using standardized medical coding and publicly available AI tools to produce large-scale, difficult-to-detect fraudulent claims.

These schemes compromise more than finances: they erode trust, disrupt operations, and put patient safety at risk—particularly when clinical or billing teams unknowingly rely on fabricated data.

AHA and FBI push for vigilance and stronger safeguards

To strengthen defenses, the AHA is pointing hospitals to public resources from the FBI and the American Bankers Association (ABA) Foundation.

The FBI’s infographic outlines telltale signs of deepfake manipulation and explains how criminals combine AI-generated text, images, audio, and video for fraud schemes. 

“The information provided by the FBI and the ABA is relevant for health care as criminals are increasingly using AI-generated deep fake audio and video content — often in combination — to deceive health care staff,” Riggi said.

Across the industry, organizations are being urged to adopt robust controls, including AI-powered threat detection systems, continuous staff training, and closer collaboration with insurers and law enforcement. 

National bodies, including the AHA, are also calling for faster implementation of AI-backed fraud monitoring and improved incident response plans as adversaries become more sophisticated.

“Constant vigilance and multilayered human verification processes are needed, especially as AI-synthetic video and audio capabilities continue to advance,” Riggi noted. Healthcare leaders now face a critical inflection point.

For hospitals, clinics, and health systems, deepfake AI threats are no longer emerging; they are here, and they demand immediate, coordinated action.
