That Was The Week That Was
Marc Rotenberg, the founder and executive director of the Center for AI and Digital Policy, has written a blog post in the Communications of the ACM on the White House Executive Order on Safe and Trustworthy AI.
It is conventional wisdom that technology typically outpaces the law. But the last week of October in the year 2023 may be remembered as the week when lawmakers made a real effort to outpace technology. To be sure, there were a lot of high-level meetings and a lot of announcements.
The White House Executive Order on Safe, Secure, and Trustworthy AI. President Biden kicked off the week with the release of a long-awaited executive order on Artificial Intelligence (AI). The Executive Order reflects a whole-of-government approach and is the outcome of White House consultations with tech CEOs, academic experts, civil society leaders, labor organizers, and agency heads. The Executive Order is broad and ambitious, but also vague on specific obligations. Offering fewer guardrails and more lane markers, the EO seeks to organize the role of the federal agencies as the U.S. government confronts the challenges and opportunities of AI. There are eight guiding principles, but these goals are less clearly stated than in earlier executive orders. The top priority is now “safe and secure” AI, which will require testing, standards, and labelling. Much of the EO intends to make clear the current authorities of the federal government to promote competition, protect civil rights, train workers, and advance cybersecurity. There was a welcome mention of Privacy Enhancing Technologies that would limit or eliminate the collection of personal data. (President Biden received applause when he coupled these concepts for children’s privacy at the White House release of the Order.)
Federal agencies will be tasked with preparing reports to identify AI use and to mitigate risk. The Commerce Department will have a leading role for many of the regulations that are likely to follow, including the creation of an AI Safety Institute. The President invoked the Defense Production Act to allow the Commerce Department to regulate dual-use foundation models in the private sector. The reporting requirements will cover “any model that was trained using a quantity of computing power greater than 10^26 integer or floating-point operations,” with a lower threshold for models using primary biological sequence data. NIST has a lot of work ahead.
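To give a sense of scale for that 10^26-operation threshold, the sketch below checks a hypothetical training run against it using the widely cited rule of thumb that training compute is roughly 6 FLOPs per model parameter per training token. The approximation and the example model sizes are assumptions for illustration; neither appears in the Executive Order itself.

```python
# Reporting threshold quoted in the Executive Order.
EO_THRESHOLD_OPS = 1e26

def estimated_training_flops(params: float, tokens: float) -> float:
    """Rough training-compute estimate using the common ~6*N*D heuristic
    (an assumption, not a figure from the Order)."""
    return 6 * params * tokens

def requires_reporting(params: float, tokens: float) -> bool:
    """True if the estimated compute exceeds the EO's reporting threshold."""
    return estimated_training_flops(params, tokens) > EO_THRESHOLD_OPS

# Hypothetical example: a 70B-parameter model trained on 2T tokens
# comes to about 8.4e23 operations, well under the 1e26 threshold.
print(requires_reporting(70e9, 2e12))
```

Under this heuristic, only runs hundreds of times larger than the hypothetical example above would trip the reporting requirement, which suggests the threshold targets the largest frontier-scale models.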
Developing capabilities for identifying and labelling synthetic content produced by AI systems may be among the more perplexing mandates of the Executive Order. Will the aim be to label synthetic content as a warning, or to authenticate non-synthetic content as reliable?
The full article is available in the Communications of the ACM.
Image credit: Photo by Tingey Injury Law Firm on Unsplash