AI safety governance, the Southeast Asian way

Conversations around AI safety are largely dominated by the United States, Europe, and China, leaving Southeast Asian voices underrepresented in broader global AI governance discourse. Still, countries in this region have made important strides in digital and AI policy, recognizing the opportunities presented by AI while adopting nuanced […]

New Mechanisms Promoting International Cooperation on Governance of Artificial Intelligence

The following statement was issued today by the Spokesman for UN Secretary-General António Guterres: The Secretary-General warmly welcomes the General Assembly’s decision to establish two new mechanisms within the United Nations to promote international cooperation on the governance of artificial intelligence (AI). The establishment of the […]

The real winners from Trump’s ‘AI action plan’? Tech companies

This article was written by Dara Kerr for The Guardian. Donald Trump’s AI summit in Washington this week was a fanfare-filled event catered to the tech elite. The president took the stage on Wednesday evening, as the song God Bless the USA piped over the […]

AI Safety Index – Summer 2025

Source: Future of Life Institute. AI systems are growing increasingly powerful as tech companies drive toward artificial general intelligence (AGI) and beyond. Just as functioning brakes give drivers the confidence to accelerate, effective AI safety measures give society the confidence to innovate and adopt AI. Competitive pressures can incentivize […]

As Trump moves to decimate state AI laws, Governor Newsom taps the nation’s top experts for groundbreaking AI report

In September 2024, Governor Gavin Newsom requested that Dr. Fei-Fei Li, Co-Director of the Stanford Institute for Human-Centered Artificial Intelligence; Dr. Mariano-Florentino Cuéllar, President of the Carnegie Endowment for International Peace; and Dr. Jennifer Tour Chayes, Dean of […]

How Some of China’s Top AI Thinkers Built Their Own AI Safety Institute

This article was written by Scott Singer, Karson Elmgren, and Oliver Guest for the Carnegie Endowment for International Peace. Since the January 2025 release of the DeepSeek-R1 open-source reasoning model, China has increasingly prioritized leveraging artificial intelligence (AI) as a key engine for economic growth, encouraged […]

📣 Approaches to Responsible Governance of GenAI in Organizations

Authors: Dhari Gandhi, Himanshu Joshi, Lucas Hartman, Shabnam Hassani. Abstract: The rapid evolution of Generative AI (GenAI) has introduced unprecedented opportunities while presenting complex challenges around ethics, accountability, and societal impact. This paper draws on a literature review, established governance frameworks, and industry roundtable discussions to […]

AI Policy Template: Build Your Foundational Organizational AI Policy

The AI Policy Template is developed by the Responsible AI Institute and is available on their website. It helps organizations build a comprehensive framework to guide AI development, procurement, supply, and use. […]

The Singapore Consensus on Global AI Safety Research Priorities

Rapidly improving AI capabilities and autonomy are driving a vigorous debate on how to keep AI safe, secure, and beneficial. While regulatory approaches remain under active deliberation, the global research community demonstrates substantial consensus around specific high-value technical AI safety research domains. Because of this, there are […]

An AI Liability Regulation would complete the EU’s AI strategy

This article was written by Kai Zenner for CEPS. In its 2025 work programme, the European Commission effectively scrapped the AI Liability Directive (AILD) – a move that threatens to unravel trust in the EU’s burgeoning AI policy landscape. This abrupt decision strips away potential critical […]