OpenAI urges end to voice-based authentication amid voice cloning risks

CALIFORNIA, UNITED STATES — ChatGPT creator OpenAI issued a stark warning to businesses to phase out voice-based authentication as a security measure for accessing bank accounts and other sensitive information.
This cautionary advice comes as the company delays the release of its Voice Engine, a voice cloning tool capable of generating speech that closely mimics a specific individual’s voice.
OpenAI’s decision to hold back the general release of Voice Engine is a strategic move to strengthen societal defenses against the potential misuse of such technology.
The company’s voice cloning tool, which is currently in preview, has demonstrated significant potential in early applications. It has been used to assist with reading, translating content, and even supporting non-verbal individuals, showcasing the positive impact of synthetic voices in customer experience (CX) innovation.
Moreover, it holds promise for medical applications, such as helping patients with speech impairments regain their voices.
However, the underlying technology also poses a substantial risk. Recent research from Pindrop underscores OpenAI’s warning, revealing that fraudsters are already exploiting synthetic voice technology to bypass authentication in interactive voice response (IVR) systems.
These scammers were able to change customer details and create voice deepfakes, leading to a range of fraudulent activities, including the potential theft of one-time passwords and unauthorized credit card requests.
The research also found instances of voice bots being used to replicate a company’s IVR system, likely as part of a larger scam to deceive customers.
These findings highlight the urgency of OpenAI’s message and the need for businesses to reconsider their reliance on voice-based security measures.
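One widely deployed alternative to voice-based verification is a time-based one-time password (TOTP) as standardized in RFC 6238, the mechanism behind most authenticator apps. As an illustration only (not a scheme endorsed in OpenAI's guidance), a minimal TOTP generator and verifier can be written with nothing but the Python standard library:

```python
import hmac
import hashlib
import struct
import time

def totp(secret: bytes, timestamp: int, step: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password (HMAC-SHA-1)."""
    counter = timestamp // step                      # number of elapsed time steps
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify(secret: bytes, code: str, window: int = 1, step: int = 30) -> bool:
    """Accept codes from the current step plus/minus `window` steps of clock drift."""
    now = int(time.time())
    return any(
        hmac.compare_digest(totp(secret, now + i * step, step), code)
        for i in range(-window, window + 1)
    )
```

Unlike a voiceprint, a TOTP secret can be rotated if compromised, and the codes expire within seconds, which limits the value of any single intercepted credential.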
In a blog post, OpenAI issued a broader call to action, urging the public to become more aware of deceptive AI content and businesses to explore new methods of tracing the origins of audiovisual content.
The company is also advocating for policies to protect individuals’ voices from unauthorized AI use, emphasizing the importance of ongoing dialogue with policymakers, researchers, developers, and creatives on the challenges and opportunities presented by synthetic voices.
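Production provenance schemes, such as the C2PA standard, attach cryptographically signed manifests to media files; the core idea of verifying that audio has not been altered since it left a trusted source can be illustrated with a toy detached-tag sketch (a simplification using a shared HMAC key rather than public-key signatures, and not any specific scheme referenced by OpenAI):

```python
import hmac
import hashlib

def tag_audio(audio: bytes, key: bytes) -> str:
    """Produce a detached integrity tag to publish alongside the audio file."""
    return hmac.new(key, audio, hashlib.sha256).hexdigest()

def check_audio(audio: bytes, tag: str, key: bytes) -> bool:
    """True only if the audio is byte-identical to what the key holder tagged."""
    return hmac.compare_digest(tag_audio(audio, key), tag)
```

Real content-provenance systems go further, binding the signature to capture-device or editing-tool identities so that consumers can check where a clip originated, not merely whether it changed.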