Deepfakes increasingly used by fraudsters to target contact centers
GEORGIA, UNITED STATES — Fraudsters are increasingly exploiting deepfake technology to bypass security measures in contact centers, posing a significant threat to customer data and privacy.
Deepfake technology refers to the use of artificial intelligence to create convincing synthetic images, audio, and video.
Pindrop, a company specializing in audio traffic monitoring, identified four primary methods by which attackers are using deepfake voices to target contact centers.
Firstly, attackers are not only using deepfake voices to dupe authentication systems; they also use synthetic voices to navigate Interactive Voice Response (IVR) systems and gather basic account details. Armed with this information, they revert to traditional social engineering tactics to further their fraud.
Secondly, deepfake voices are being used to bypass IVR authentication entirely, giving fraudsters access to sensitive information such as bank balances and helping them identify lucrative targets for further exploitation. Combined with automation, deepfake technology lets scammers operate at a far larger scale than before.
Thirdly, scammers are using deepfake voices to change account details, such as email and home addresses. This opens the door to a range of frauds, including intercepting one-time passwords and ordering new bank cards.
Lastly, attackers are mimicking IVRs with their own voicebots. At first they simply repeat prompts back to the IVR, but subsequent calls use cloned IVR voices, indicating preparation for more complex fraud schemes that impersonate customer service lines.
To combat these threats, Pindrop emphasizes the importance of liveness detection, a biometric technology that can discern whether a voice is live or synthesized. This should be integrated with multifactor authentication (MFA) processes.
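For illustration only, the sketch below shows one way a contact center backend might combine the two checks: a liveness score gating a sensitive action, followed by an out-of-band second factor. The function names, threshold, and stubbed scoring logic are assumptions made for this example and do not reflect Pindrop's actual products or APIs.

```python
# Hypothetical sketch: gate sensitive account actions on BOTH a liveness
# score and a second authentication factor. Names and threshold are
# illustrative assumptions, not a vendor API.

LIVENESS_THRESHOLD = 0.85  # assumed minimum confidence that the voice is live


def score_liveness(audio_sample: bytes) -> float:
    """Stand-in for a liveness-detection model; a real deployment would call
    a detection service that scores how likely the audio is a live human."""
    return 0.0  # conservative default: treat unknown audio as suspect


def verify_second_factor(caller_id: str) -> bool:
    """Stand-in for an out-of-band factor, e.g. a one-time code to a known device."""
    return False


def allow_sensitive_action(caller_id: str, audio_sample: bytes) -> bool:
    """Allow account changes only if the voice appears live AND MFA succeeds."""
    if score_liveness(audio_sample) < LIVENESS_THRESHOLD:
        return False  # likely synthesized voice: block and escalate to a human agent
    return verify_second_factor(caller_id)


if __name__ == "__main__":
    # With the conservative stubs above, the request is rejected by default.
    print(allow_sensitive_action("caller-123", b"\x00\x01"))
```

The stubs default to rejecting the request, reflecting a fail-closed posture: if either check cannot be completed, the call is routed to a human agent rather than waved through.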
Avivah Litan, VP Analyst at Gartner, also stressed the need for transparent policies and governance to prevent exposure of sensitive data.
“Organizations should monitor unsanctioned uses of ChatGPT and similar solutions with existing security controls and dashboards to catch policy violations,” she stated.
Litan also recommends employing firewalls to restrict access to generative AI services and treating engineered prompts as immutable assets.
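As a rough illustration of the monitoring Litan describes, the snippet below scans an exported web proxy log for traffic from unapproved users to known generative AI hosts. The hostnames, log format, and approved-user list are placeholder assumptions, not a vetted policy or blocklist.

```python
# Hypothetical sketch: flag unsanctioned generative AI use from a proxy log
# exported as CSV with at least 'user' and 'host' columns.

import csv

GENAI_HOSTS = {"chat.openai.com", "api.openai.com"}  # example hosts only
SANCTIONED_USERS = {"alice@example.com"}             # users approved by policy


def flag_violations(proxy_log_path: str) -> list[dict]:
    """Return log rows where an unapproved user reached a generative AI host."""
    violations = []
    with open(proxy_log_path, newline="") as handle:
        for row in csv.DictReader(handle):
            if row["host"] in GENAI_HOSTS and row["user"] not in SANCTIONED_USERS:
                violations.append(row)
    return violations
```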
Last February, a finance worker at a multinational firm in Hong Kong fell victim to a deepfake scam, paying out a staggering $25 million to fraudsters who used the technology to impersonate the company’s chief financial officer (CFO) on a video conference call.