How transparent are AI models? Stanford researchers found out.

Sharon Goldman writes for VentureBeat about a new Stanford study that assesses how various commercially available foundation models fare in terms of transparency.

Stanford University’s Center for Research on Foundation Models (CRFM) took a big swing at evaluating the transparency of a variety of AI large language models (which it calls foundation models). It released a new Foundation Model Transparency Index to address the fact that while AI’s societal impact is rising, the public transparency of these models is falling, even though such transparency is necessary for public accountability, scientific innovation and effective governance.

The Index results were sobering: according to the researchers, no major foundation model developer came close to providing adequate transparency. The highest overall score was 54%, revealing a fundamental lack of transparency across the AI industry. Open models led the way, with Meta’s Llama 2 and Hugging Face’s BLOOMZ earning the highest scores. But a proprietary model, OpenAI’s GPT-4, came in third, ahead of Stability AI’s Stable Diffusion.

Please click on this link to read the full article.

Also, the original paper for the Foundation Model Transparency Index can be found here.

Image credit: Freepik