Healthcare workers’ AI use risks patient data, Netskope report warns

CALIFORNIA, UNITED STATES — A Netskope Threat Labs report reveals that healthcare employees are increasingly uploading sensitive patient data to unauthorized AI tools and cloud services, with 81% of policy violations involving protected health information.
As 88% of healthcare organizations now use generative AI, security experts warn that this trend could lead to major regulatory penalties and erode patient trust.
Unchecked AI use triggers regulatory and privacy concerns
Healthcare workers are routinely exposing sensitive data by uploading information to personal AI accounts and cloud services, with 44% of violations involving regulated health records, according to Netskope Threat Labs.
Popular tools like ChatGPT and Gemini are frequent destinations, along with personal cloud storage services such as Google Drive and OneDrive, many of which lack proper compliance safeguards. Compounding the problem, 96% of organizations use AI platforms that train on personal data, creating potential long-term privacy risks.
Regulators are taking notice: violations could trigger General Data Protection Regulation (GDPR) fines of up to €20 million (US$22 million) or 4% of global annual turnover, whichever is higher, as well as Health Insurance Portability and Accountability Act (HIPAA) penalties of up to US$1.5 million per violation category per year.
“Beyond financial consequences, breaches erode patient trust and damage organizational credibility with vendors and partners,” warned Ray Canzanese, director of Netskope Threat Labs, emphasizing that healthcare’s reliance on AI demands stricter oversight. While 71% of workers still use personal AI accounts, down from 87%, organizations are racing to implement approved alternatives to curb “shadow AI” risks.
Security teams push proactive measures amid rising threats
Healthcare IT leaders are fighting back with data loss prevention (DLP) tools to block unauthorized AI data uploads; 54% of organizations now deploy them, up from 31% last year.
Real-time alerts have proven effective, with 73% of employees halting risky actions when prompted. Canzanese said it is vital that CISOs prevent inadvertent data loss while still enabling AI’s benefits, and he advocated Zero Trust Network Access (ZTNA) to monitor data flows.
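To make the alert-and-halt behavior the report describes more concrete, here is a minimal Python sketch of a DLP-style check. It is purely illustrative, not Netskope’s implementation: the patterns, function names, and confirmation flow are all hypothetical, and commercial DLP engines use trained classifiers and data fingerprinting rather than simple regular expressions.

```python
import re

# Hypothetical, simplified patterns for PHI-like identifiers:
# a US Social Security number and a made-up medical record number
# format. Illustrative only; real DLP uses far richer detection.
PHI_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "MRN": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
}

def scan_for_phi(text: str) -> list[str]:
    """Return the names of any PHI-like patterns found in the text."""
    return [name for name, pat in PHI_PATTERNS.items() if pat.search(text)]

def check_upload(text: str, destination: str) -> bool:
    """Mimic a real-time coaching alert: warn and ask before allowing."""
    findings = scan_for_phi(text)
    if findings:
        print(f"Warning: upload to {destination} appears to contain "
              f"{', '.join(findings)}.")
        return input("Proceed anyway? [y/N] ").strip().lower() == "y"
    return True  # nothing flagged; allow the upload

if __name__ == "__main__":
    sample = "Patient John Doe, SSN 123-45-6789, MRN: 00012345, presented with..."
    if not check_upload(sample, "chat.openai.com"):
        print("Upload blocked by policy.")
```

However crude, the prompt-and-confirm step is the point: it is this kind of interruption that, per the report, leads 73% of employees to abandon a risky upload.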
The shift to organization-approved AI apps is gaining traction, reducing reliance on uncontrolled personal accounts. However, with 98% of healthcare organizations using tools with embedded AI features, experts stress that policies must evolve as quickly as the technology.
Training and transparent controls, rather than outright bans, are emerging as key to balancing innovation with compliance in this high-stakes environment.