Japan’s Approach to AI Regulation and Its Impact on the 2023 G7 Presidency
This report was authored by Hiroki Habuka and published by the Center for Strategic and International Studies (CSIS).
Artificial intelligence (AI) is making significant changes to our businesses and daily lives. While AI offers dramatic solutions to societal problems, its unpredictability, lack of explainability, and tendency to reflect or amplify biases in data raise concerns about privacy, security, fairness, and even democracy. In response, governments, international organizations, and research institutes around the world began publishing principles for human-centric AI in the late 2010s.
What began as broad principles is now being translated into more specific regulations. In 2021, the European Commission published the draft Artificial Intelligence Act, which classifies AI systems into four risk levels and prescribes corresponding obligations, including enhanced security, transparency, and accountability measures. In the United States, the Algorithmic Accountability Act of 2022 was introduced in both houses of Congress in February 2022. In June 2022, Canada proposed the Artificial Intelligence and Data Act (AIDA), which would make risk management and information disclosure mandatory for high-impact AI systems.
While some regulation of AI is necessary to prevent threats to fundamental values, there is a concern that the burden of compliance and the ambiguity of regulatory requirements may stifle innovation. In addition, regulatory fragmentation would impose serious costs not only on businesses but also on society. How to address AI’s risks while accelerating beneficial innovation and adoption is one of the most difficult challenges facing policymakers, including Group of Seven (G7) leaders.
During the 2023 G7 summit in Japan, digital ministers are expected to discuss a human-centric approach to AI, which may cover both regulatory and nonregulatory policy tools. As the host country, Japan may exert considerable influence on consensus building among global leaders through its approach to AI regulation. This paper analyzes the key trends in Japan’s AI regulation and discusses what arguments could be made at the G7 summit.
To summarize, Japan has developed and revised its AI-related regulations with the goal of maximizing AI’s positive impact on society rather than suppressing it out of an overestimation of its risks. The emphasis is on a risk-based, agile, and multistakeholder process rather than on one-size-fits-all obligations or prohibitions. Japan’s approach offers important insights into global trends in AI regulation.
Japan’s AI Regulations
In 2019, the Japanese government published the Social Principles of Human-Centric AI (Social Principles) as principles for implementing AI in society. The Social Principles set forth three basic philosophies: human dignity, diversity and inclusion, and sustainability. It is important to note that the goal of the Social Principles is not to restrict the use of AI in order to protect these principles but rather to realize them through AI. This corresponds to the structure of the Organization for Economic Cooperation and Development’s (OECD) AI Principles, whose first principle is to achieve “inclusive growth, sustainable development, and well-being” through AI.
To achieve these goals, the Social Principles set forth seven principles surrounding AI: (1) human-centric; (2) education/literacy; (3) privacy protection; (4) ensuring security; (5) fair competition; (6) fairness, accountability, and transparency; and (7) innovation. It should be noted that the principles include not only the protective elements of privacy and security but also the principles that guide the active use of AI, such as education, fair competition, and innovation.
Japan’s AI regulatory policy is based on these Social Principles. Its AI regulations can be classified into two categories (in this paper, “regulation” refers not only to hard law but also to soft law, such as nonbinding guidelines and standards):
- Regulation on AI: Regulations to manage risks associated with AI.
- Regulation for AI: Regulatory reform to promote the implementation of AI.
As outlined below, Japan takes a risk-based and soft-law approach to regulation on AI while actively advancing legislative reform from the perspective of regulation for AI.
Regulation on AI
Japan has no regulations that generally constrain the use of AI. According to AI Governance in Japan Ver. 1.1, a report published by the Ministry of Economy, Trade and Industry (METI) in July 2021 that comprehensively describes Japan’s AI regulatory policy (the AI Governance Report), such “legally-binding horizontal requirements for AI systems are deemed unnecessary at the moment.” This is because regulation has difficulty keeping up with the speed and complexity of AI innovation; prescriptive, static, and detailed rules in this context could stifle innovation. The report therefore concludes that the government should respect companies’ voluntary efforts at AI governance while providing nonbinding guidance to support or steer those efforts. Such guidance should be based on multistakeholder dialogue and be updated continuously and in a timely manner. This approach, called “agile governance,” is Japan’s basic approach to digital governance.
Looking at sector-specific regulations, none prohibit the use of AI per se; rather, they require businesses to take appropriate measures and disclose information about risks. For example, the Digital Platform Transparency Act requires large online malls, app stores, and digital advertising businesses to ensure transparency and fairness in transactions with business users, including by disclosing the key factors determining their search rankings. The Financial Instruments and Exchange Act requires businesses engaging in algorithmic high-speed trading to register with the government, establish a risk management system, and maintain transaction records. From the viewpoint of fair competition, the Japan Fair Trade Commission analyzed the potential risks of cartels and unfair trade practices conducted through algorithms and concluded that most issues could be addressed under the existing Antimonopoly Act.
Other Relevant Laws
There are some laws that do not directly regulate AI systems but remain relevant to AI’s development and use. The Act on the Protection of Personal Information (APPI) sets out key mandatory obligations for organizations that collect, use, or transfer personal information. The latest amendment to the APPI, which came into effect in 2022, introduced the concept of pseudonymized personal data. Since the obligations for handling pseudonymized data are less onerous than those for personal information, this new concept is expected to encourage businesses to use more data for AI development.
If an AI system causes damage to a third party, the developer or operator of the AI may be liable in tort under civil law if negligent. In practice, however, it is difficult to determine who was negligent in a given situation because AI output is unpredictable and its causes are hard to identify. The Product Liability Act reduces the victim’s burden of proof when claiming tort liability, but the act covers only damages arising from tangible objects. It may therefore apply to the hardware in which an AI is installed but not to the AI program itself.
There are other relevant regulations and laws that aim to encourage the development and deployment of AI, which will be introduced in the “Regulation for AI” section.