Everybody wants to audit AI, but nobody knows how
By Ryan Heath, Axios.
Some legislators and experts are pushing for independent auditing of AI systems to minimize risks and build trust.
Why it matters: Consumers don’t trust Big Tech to self-regulate, and government standards may arrive slowly or not at all.
The big picture: Failure to manage risk and articulate values early in the development of an AI system can lead to problems ranging from biased outcomes from unrepresentative data to lawsuits alleging stolen intellectual property.
Driving the news: Sen. John Hickenlooper (D-Colo.) announced in a speech Monday that he will push for auditing of AI systems, because AI models are using our data “in ways we never imagined and certainly never consented to.”
- “We need qualified third parties to effectively audit generative AI systems,” Hickenlooper said. “We cannot rely on self-reporting alone.” We should “trust but verify” claims of compliance with federal laws and regulations, he said.
Catch up quick: The National Institute of Standards and Technology (NIST) developed an AI Risk Management Framework to help organizations think about and measure AI risks, but it does not certify or validate AI products.
- President Biden’s executive order on AI mandated that NIST expand its support for generative AI developers and “create guidance and benchmarks for evaluating and auditing AI capabilities,” especially in risky areas such as cybersecurity and bioweapons.
What’s happening: A growing number of companies provide services that evaluate whether AI models are complying with local regulations or promises made by their developers — but some AI companies remain committed to their own internal risk research and processes.
- NIST is only the “tip of the spear” in AI safety, Hickenlooper said. He now wants to establish criteria and a path to certification for third-party auditors.
The “Big Four” accounting firms — Deloitte, EY, KPMG, and PwC — sense business opportunities in applying audit methodologies to AI systems, Nicola Morini Bianzino, EY’s global chief technology officer, tells Axios.
- Morini Bianzino cautions that AI audits might “look more like risk management for a financial institution, as opposed to audit as a certifying mark. Because, honestly, I don’t know technically how we would do that.”
- Laura Newinski, KPMG’s COO, told Axios the firm is developing AI auditing services and “attestation about whether data sets are accurate and follow certain standards.”
Established players such as IBM and startups such as Credo provide AI governance dashboards that tell clients in real time where AI models could be causing problems — for example, around data privacy.
- Anthropic believes NIST should focus on “building a robust and standardized benchmark for generative AI systems” that all private AI companies can adhere to.
Market leader OpenAI announced in October that it’s creating a “risk-informed development policy” and has invited experts to apply to join its OpenAI Red Teaming Network.
- OpenAI released a paper Jan. 31 purporting to examine whether its models increase the risk of bioweapons. The company’s answer: not really.
- NYU professor Gary Marcus argues the paper is misleading. “The more I look at the results, the more worried I become,” Marcus wrote in his blog. “Company white papers are not peer-reviewed articles,” he notes.
Yes, but: An AI audit industry without clear standards could be a recipe for confusion, both for corporate customers and for consumers using AI.
- “A clear baseline for AI auditing standards can prevent a race-to-the-bottom scenario, where companies just hire the cheapest third-party auditors to check off requirements,” Hickenlooper believes.