Leading AI models lack transparency – Stanford study

CALIFORNIA, UNITED STATES — Major players in artificial intelligence such as OpenAI and Meta are not being transparent enough about how their foundation models could impact society, according to a new report from Stanford University.
The Foundation Model Transparency Index, compiled by Stanford's Institute for Human-Centered Artificial Intelligence (HAI), graded 10 leading foundation models on 100 transparency indicators covering how the models are built, how they work, and how they are used.
Meta’s Llama 2 model scored the highest at 54%, followed by Hugging Face’s BLOOMZ at 53% and OpenAI’s GPT-4 at 48%.
However, none of the developers disclosed information about the real-world societal impact of their models, including where to direct privacy, copyright, or bias complaints.
“What we’re trying to achieve with the index is to make models more transparent and disaggregate that very amorphous concept into more concrete matters that can be measured,” said Rishi Bommasani, lead researcher on the project.
The other models assessed were Stability AI’s Stable Diffusion, Anthropic’s Claude, Google’s PaLM 2, Cohere’s Command, AI21 Labs’ Jurassic-2, Inflection’s Inflection-1, and Amazon’s Titan.
Stanford plans to continue evaluating the same 10 foundation models over time, hoping to prompt more transparency in the AI industry.