Responsible AI Institute Launches RAISE Benchmarks to Operationalize & Scale Responsible AI Policies
Responsible AI Institute (RAI Institute), a prominent non-profit organization dedicated to facilitating the responsible use of AI worldwide, has introduced three essential tools known as the Responsible AI Safety and Effectiveness (RAISE) Benchmarks. These benchmarks are designed to assist companies in enhancing the integrity of their AI products, services, and systems by integrating responsible AI principles into their development and deployment processes.
Generative AI is advancing rapidly amid growing regulatory oversight, exemplified by initiatives such as President Biden’s Executive Order, the European Union’s AI Act, Canada’s Artificial Intelligence and Data Act, and the recent UK AI Safety Summit, where 28 nations signed an accord on developing safe AI. Against this backdrop, the RAISE Benchmarks play a vital role in guiding organizations toward alignment with evolving global and local standards such as the NIST AI Risk Management Framework (AI RMF) and the ISO/IEC 42001 family of standards, which is still under development.
“In an era of accelerating AI advancements and increasing regulatory scrutiny, our RAISE Benchmarks provide organizations with the compass they need to chart a course of innovating and scaling AI responsibly, guiding them towards compliance with evolving global standards,” said Var Shankar, executive director of RAI Institute. The RAISE Benchmarks announced today are independent, community-developed tools accessible to RAI Institute members through the non-profit’s responsible AI testbed, announced earlier this year.
The initial series of RAISE Benchmarks announced today serves three crucial purposes:
- RAISE Corporate AI Policy Benchmark: This benchmark evaluates the comprehensiveness of a company’s AI policies by measuring their scope and alignment with RAI Institute’s model enterprise AI policy, which is based on the NIST AI RMF. Today, RAI Institute is releasing the methodology, FAQs, and an initial demo of the RAISE Policy Benchmark to guide organizations in framing their AI policies effectively, including the new trustworthiness and risk considerations introduced by generative AI and large language models (LLMs).
- RAISE LLM Hallucinations Benchmark: Organizations building new AI-powered products and solutions often grapple with hallucinations, a failure mode common in LLMs that produces unexpected, incorrect, or misleading outputs. This benchmark helps organizations using LLMs, whether commercially available, open source, or proprietary, assess the risk of hallucinations and take proactive measures to minimize them.
- RAISE Vendor Alignment Benchmark: This benchmark assesses whether the policies of supplier organizations align with the ethical and responsible AI policies of their purchasing counterparts. It ensures that vendors’ AI practices harmonize with the values and expectations of the businesses they serve.