Employee’s AI tool download exposes Disney to major security breach

NEW YORK, UNITED STATES — Matthew Van Andel, a Disney employee, saw his life upended when he downloaded an artificial intelligence (AI) tool from GitHub, a platform developers commonly use to share and find code.
He was exploring an AI tool that generates images from text, unaware that the free software also carried malware. This hidden payload let hackers infiltrate his home computer and gain access to sensitive data, including Disney’s Slack channels and Van Andel’s personal information.
Disney data breach: 44 million messages exposed through employee’s device
Over 44 million Disney messages were leaked, alongside Van Andel’s personal details like credit card numbers and Social Security information. This exposure led to fraudulent charges and unauthorized access to his accounts, leaving him vulnerable both financially and personally. The breach not only compromised his security but also led to his dismissal from Disney after his work device was found to have accessed inappropriate content—a charge he denies.
The termination cost him his health insurance and roughly $200,000 in bonuses. Van Andel has since found contract work, and his sister has set up a GoFundMe campaign to help him recover financially. His lawyer has sent Disney a demand letter seeking an eight-figure settlement for lost wages and emotional distress.
Corporate security vulnerabilities: When personal devices access work systems
Van Andel battled to regain control of his digital life, resetting passwords and confronting the emotional toll of the hack. His efforts met continual setbacks: the hacker threatened further leaks and followed through on them, disrupting even his children’s online accounts.
Despite setting up new defenses and working with Disney’s cyber response team, the damage was extensive and ongoing.
Van Andel’s ordeal highlights how severe the risks can be when employees access corporate systems from personal devices.
His story serves as a warning about the dangers of malware and the importance of robust cybersecurity measures both at home and in the workplace.
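One basic precaution against tainted downloads, which the article’s lesson points toward, is verifying that a file matches the checksum its maintainers publish before running it. The snippet below is a minimal sketch of that check in Python; the file name and usage are hypothetical, and a matching checksum only confirms the file is the one the maintainers published, not that the project itself is trustworthy.

```python
import hashlib
import sys

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    # Hypothetical usage: python verify_download.py ai-tool.zip <expected-sha256>
    # The expected hash should come from the project's official release notes,
    # not from the same page or mirror the file was downloaded from.
    file_path, expected = sys.argv[1], sys.argv[2].lower()
    actual = sha256_of(file_path)
    if actual == expected:
        print("OK: checksum matches the published value")
    else:
        print(f"MISMATCH: computed {actual}")
        sys.exit(1)
```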
AI tools creating new enterprise security risks
Tenable Research has recently discovered that DeepSeek R1, a reasoning large language model (LLM), can be manipulated into generating malware, raising alarms about the potential for AI-powered cybercrime.
Despite built-in safety measures, there’s a growing trend of these technologies being exploited for harmful purposes. This is not limited to unauthorized use of mainstream tools like OpenAI’s ChatGPT but extends to the development of bespoke malicious models like WormGPT and GhostGPT.
Meanwhile, a TELUS Digital survey reveals that despite company policies, most enterprise employees are entering sensitive information into public AI assistants, creating security risks.
The survey found that 57% of employees admit to inputting high-risk information into tools like ChatGPT, Microsoft Copilot, and Google Gemini.
The types of sensitive information being shared include personal data (31%), product or project details (29%), customer information (21%), and even confidential company financial information (11%). This occurs despite 29% of employees acknowledging their companies have policies prohibiting such practices.
Together, these findings underscore the urgent need for stronger safeguards, both in how AI models are built and in how enterprises govern employees’ use of them.
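As one illustration of what such a safeguard could look like on the enterprise side, the sketch below screens a prompt for obvious sensitive patterns before it leaves the device. It is a hypothetical, minimal example: the regular expressions and labels are illustrative and nowhere near a complete data-loss-prevention policy.

```python
import re

# Minimal sketch of a client-side "prompt scrubber". The patterns and
# replacement labels are illustrative, not an exhaustive DLP policy.
PATTERNS = {
    "credit card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scrub(prompt: str) -> tuple[str, list[str]]:
    """Redact obvious sensitive patterns before a prompt leaves the device.

    Returns the redacted text plus the names of the patterns that fired,
    so a caller can block the request or log a policy warning instead.
    """
    hits = []
    for name, pattern in PATTERNS.items():
        if pattern.search(prompt):
            hits.append(name)
            prompt = pattern.sub(f"[REDACTED {name.upper()}]", prompt)
    return prompt, hits

if __name__ == "__main__":
    text = "Customer jane@example.com paid with card 4111 1111 1111 1111."
    cleaned, flagged = scrub(text)
    print(cleaned)   # both the card number and the email are replaced
    print(flagged)   # ['credit card', 'email']
```

In practice, a filter like this would sit in a browser extension or an outbound gateway in front of tools like ChatGPT, Copilot, or Gemini, where flagged requests could be blocked or logged rather than silently rewritten.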