Responsible AI now has an ISO standard

Written by Jonathan Kemper for The Decoder.

A new ISO standard aims to provide an overarching framework for the responsible development of AI.

The International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) have approved a new international standard, ISO/IEC 42001. This standard is designed to help organizations develop and use AI systems responsibly.

ISO/IEC 42001 is the world’s first standard for AI management systems and is intended to provide useful guidance in a rapidly evolving technology area. It addresses various challenges posed by AI, such as ethical considerations, transparency and continuous learning. For organizations, the standard is intended to provide a structured way to balance the risks and opportunities associated with AI.

The standard is aimed at companies that offer or use AI-based products or services. It covers AI systems of all types and is intended to apply across industries and contexts. ISO provides a reading sample of the standard; the full text costs 187 Swiss francs.

ISO has already developed several AI standards, including ISO/IEC 22989, which defines AI terminology; ISO/IEC 23053, which provides a framework for AI and machine learning; and ISO/IEC 23894, which provides guidelines for AI-related risk management. ISO/IEC 42001 now provides overarching governance.


