Nations unite for AI safety standards

WASHINGTON, UNITED STATES — In a landmark move, the United States, Britain, and 16 other nations have endorsed a comprehensive strategy to ensure artificial intelligence (AI) is developed with robust security measures.
The “Guidelines for Secure AI System Development” aim to ensure AI systems are designed, developed, and maintained securely. They recommend practices such as monitoring AI systems for abuse, protecting data from tampering, and vetting software suppliers.
However, the guidelines stop short of addressing how AI training data is sourced or the ethical implications of AI use.
Countries that have signed the non-binding agreement include economic powerhouses and technology leaders such as Germany, Italy, and Australia, as well as emerging tech hubs like Chile, Israel, and Singapore.
“This is the first time that we have seen an affirmation that these capabilities should not just be about cool features and how quickly we can get them to market or how we can compete to drive down costs,” U.S. Cybersecurity and Infrastructure Security Agency director Jen Easterly told Reuters.
“The most important thing that needs to be done at the design phase is security.”
The agreement comes as governments worldwide seek to shape AI development amid its rapid growth. The rise of AI has raised concerns ranging from job losses to “hallucinations”, in which models produce false or fabricated information.