New nonprofit certifies AI models’ copyright compliance

LONDON, UNITED KINGDOM — A new nonprofit organization called Fairly Trained aims to resolve simmering tensions over copyright infringement in the artificial intelligence (AI) sector.
Launched last week by Ed Newton-Rex, a former audio executive at Stability AI, Fairly Trained will offer certifications to AI companies that train models only on data from consenting creators.
Nine companies have already gained Fairly Trained certifications, including generative audio startups Endel and LifeScore.
“We hope the Fairly Trained certification is a badge that consumers and companies who care about creators’ rights can use to help decide which generative AI models to work with,” said Fairly Trained in a blog post.
Newton-Rex hopes that elevating consent-based approaches will pressure the industry to properly compensate creators, an effort that has received buy-in from groups like the Association of American Publishers.
Fairly Trained arrives amid high-profile lawsuits against AI leaders like OpenAI and Stability AI, which make popular tools like ChatGPT and Stable Diffusion.
“Companies worth billions of dollars are, without permission, training generative AI models on creators’ works, which are then being used to create new content that in many cases can compete with the original works,” Newton-Rex wrote in November in a post on X, formerly Twitter, announcing his resignation from Stability AI.
“I don’t see how this can be acceptable in a society that has set up the economics of the creative arts such that creators rely on copyright.”
The full post read: “I’ve resigned from my role leading the Audio team at Stability AI, because I don’t agree with the company’s opinion that training generative AI models on copyrighted works is ‘fair use’.
First off, I want to say that there are lots of people at Stability who are deeply…
— Ed Newton-Rex (@ednewtonrex) November 15, 2023