AI giants partner with hackers to test technology
NEW YORK, UNITED STATES — Major artificial intelligence (AI) providers are partnering with the United States government to allow thousands of hackers to test the limits of their AI technology.
The hackers will be tasked with finding flaws and vulnerabilities in chatbots and other language models that could lead to algorithmic bias, data breaches, and other problems.
The mass hacking event, set to take place this summer at the DEF CON hacker convention in Las Vegas, will draw thousands of people with diverse backgrounds and subject matter expertise.
The event aims to test AI language models following the White House’s Blueprint for an AI Bill of Rights. This set of principles seeks to limit the impacts of algorithmic bias, give users control over their data, and ensure that automated systems are used safely and transparently.
Companies including Google, OpenAI, and Nvidia, along with startups Anthropic, Hugging Face, and Stability AI, will provide their models for testing. The testing platform will be built by Scale AI, known for its work assigning human workers to help train AI models by labeling data.
Rumman Chowdhury, co-founder of the AI accountability nonprofit Humane Intelligence and the event’s coordinator, said the “hackathon” serves as a direct pipeline for giving the companies feedback.
“It’s not like we’re just doing this hackathon and everybody’s going home. We’re going to be spending months after the exercise compiling a report, explaining common vulnerabilities, things that came up, patterns we saw,” Chowdhury added.