Contact centers face surge in AI, deepfake-driven cyberattacks in 2025

CALIFORNIA, UNITED STATES — Contact centers, repositories of sensitive customer data and hubs of constant interaction, are increasingly attractive targets for cybercriminals. Roberta Gamble, Chief Research Analyst at FOURCASTERS, highlighted in a recent No Jitter article that the push to modernize operations, particularly with AI, can inadvertently create new vulnerabilities. This warning comes as the industry reflects on security challenges post-RSAC 2025.
The sheer volume of daily interactions, often numbering in the hundreds or thousands, combined with a large workforce and a growing reliance on AI and automation for everything from triage to complex issue resolution, makes contact centers high-value targets.
This concern is well-founded: according to Zendesk’s 2025 Benchmark report, 56% of CX leaders admit their organization suffered a data breach or cyberattack targeting customer data in the past year.
Credentials and the human element remain key weaknesses
A primary attack vector continues to be credential theft, given the wealth of personally identifiable information (PII) managed by contact centers. IBM’s 2025 X-Force Threat Intelligence Index revealed that nearly one-third of security incidents observed in 2024 led to credential theft.
Agents themselves can be a point of vulnerability, particularly when they lack sufficient training and tools to manage multiple systems and customer demands. The Zendesk study found that “only 28% of contact centers believe their teams have advanced knowledge of data privacy best practices.”
That figure is significant given that, according to Secureframe, nearly three-quarters (74%) of breaches involve a human element.
The escalating threat of voice deepfakes
Adding to existing challenges, voice deepfakes have emerged as a potent threat in 2025. Synthetic audio tools can now clone a voice with alarming accuracy from only a small sample of recorded speech.
Gamble notes that these deepfakes are already being deployed in fraud and impersonation scams targeting financial services, government entities, and enterprise help desks. The FBI even issued a warning in May 2025 about cloned voices of government officials.
Research from Reality Defender underscores the danger, stating that these deepfakes are “easy to produce and difficult to detect” and have already caused tens of millions of dollars in losses for call centers and their clients.
Further compounding the issue, Pindrop’s Voice Intelligence and Security report found that cybercriminals successfully bypassed contact center knowledge-based authentication (KBA) over 80% of the time.
To combat these evolving threats, Gamble suggests that contact centers consider several strategies: adopting next-generation multi-factor and biometric authentication, leveraging AI-powered anomaly detection, implementing security-focused agent training that includes deepfake awareness, tightening workflow and system integration to close security gaps, and conducting real-world “red team” simulations.
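To make the anomaly-detection idea concrete, below is a minimal Python sketch using scikit-learn’s Isolation Forest to flag unusual authentication sessions. This is an illustration rather than Gamble’s method: the feature set (call duration, failed authentication attempts, account lookups, hour of day) and all data are synthetic assumptions, and a real deployment would train on actual call and authentication telemetry.
```python
# Minimal sketch of AI-powered anomaly detection for contact center
# authentication events. All features and data below are synthetic and
# illustrative; a production system would use real session telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=7)

# Simulated per-call features:
# [call_duration_sec, failed_auth_attempts, account_lookups, hour_of_day]
normal = np.column_stack([
    rng.normal(300, 60, 1000),   # typical call lengths
    rng.poisson(0.2, 1000),      # authentication failures are rare
    rng.poisson(1.0, 1000),      # usually one account per call
    rng.integers(8, 18, 1000),   # business hours
])

# A handful of suspicious sessions: short calls, repeated auth
# failures, many account lookups, off-hours activity.
suspicious = np.column_stack([
    rng.normal(90, 20, 10),
    rng.poisson(4, 10),
    rng.poisson(6, 10),
    rng.integers(0, 5, 10),
])

X = np.vstack([normal, suspicious])

# Isolation Forest learns the shape of normal traffic and scores
# statistical outliers without needing labeled fraud examples.
model = IsolationForest(contamination=0.01, random_state=42).fit(X)
flags = model.predict(X)  # -1 = anomalous, 1 = normal

print(f"Flagged {np.sum(flags == -1)} of {len(X)} sessions for review")
```
An unsupervised method like Isolation Forest is a common choice in this setting because labeled fraud examples are typically scarce; flagged sessions would feed a human review queue rather than trigger automatic blocking.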
As Gamble concludes, “The same AI capabilities that streamline customer service can also be used to deceive it.” Therefore, maintaining a robust security posture through agent readiness, advanced authentication, and continuous testing is paramount as contact centers embrace modernization.