AI doomsday warnings a distraction from the danger it already poses, warns expert

This article was published in The Guardian and authored by Dan Milmo. Focusing on doomsday scenarios in artificial intelligence is a distraction that plays down immediate risks, such as the large-scale generation of misinformation, according to a senior industry figure attending this […]

Marc Rotenberg: An AI Oversight

Marc Rotenberg of the Center for AI and Digital Policy has published an open letter, “An AI Oversight”. The following is the text from his LinkedIn post: I published a Letter to the Editor in the current edition of Foreign Affairs. My letter (“An AI Oversight”) is a response to the […]

UN Secretary-General Announces Creation of New Artificial Intelligence Advisory Board

On 26 October 2023, the United Nations Secretary-General announced at a press conference the creation of a new Artificial Intelligence Advisory Body on the risks, opportunities and international governance of artificial intelligence. The body will support the international community’s efforts to govern artificial intelligence. Here’s what […]

Research Article: Living guidelines for generative AI — why scientists must oversee its use

This article, published in Nature and co-authored by Claudi L. Bockting, Eva A. M. van Dis, Robert van Rooij, Willem Zuidema, and Johan Bollen, recommends establishing an independent scientific body to test and certify generative artificial intelligence before the technology damages […]

“Middleware” and Modalities for the International Governance of AI

Anja Kaspersen, a Carnegie Council Senior Fellow and part of the Artificial Intelligence and Equality Initiative (AIEI), writes about two significant risks in the pursuit of global AI governance: the potential failure of well-intentioned but overly ambitious efforts, and proposals that limit themselves merely to […]

How We Can Have AI Progress Without Sacrificing Safety or Democracy

Daniel Privitera and Yoshua Bengio have published an excellent, balanced, and very clearly written op-ed in TIME on AI regulation. Most importantly, they argue that we should, and effectively can, pursue AI progress, safety, and democratization simultaneously. They make a range of policy proposals […]

How transparent are AI models? Stanford researchers found out.

Sharon Goldman writes for VentureBeat about a new Stanford study that assesses how different commercially available foundation models fare in terms of transparency. Stanford University’s Center for Research on Foundation Models (CRFM) took a big swing at evaluating the transparency of a variety of AI large language models […]

New AI Strategy Adopted by the Norwegian Ministry of Defense

Alex Moltzau writes in his Medium article about the new AI strategy adopted by the Norwegian Ministry of Defense. The government has adopted a strategy for how the defence sector will use artificial intelligence to promote Norway’s security and defence policy goals. – The […]

France has appointed a Generative AI Committee

The gradual arrival of artificial intelligence (AI) at the heart of our daily lives reveals, little by little, the potential of this technology and raises many questions, especially in the fields of ethics, the economy, productivity, work, business organization, and the industrial and digital sovereignty of states. […]

Explainable AI – LIME and SHAP

This blog post by The AIEdge provides a very good overview of two powerful techniques underlying the field of Explainable AI: LIME and SHAP. To read the full article, please click on this link.
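To give a flavour of the idea behind SHAP without depending on the `shap` library itself, here is a minimal, illustrative sketch that computes exact Shapley values for a toy model by brute force. The model, feature values, and baseline are all made up for illustration; real SHAP implementations use efficient approximations rather than this exponential enumeration.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for model f at point x, against a baseline.

    Features absent from a coalition are replaced by their baseline value.
    Exponential in the number of features; suitable for toy examples only.
    """
    n = len(x)
    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for size in range(n):
            for subset in combinations(others, size):
                # Model output for the coalition without and with feature i
                z_without = [x[j] if j in subset else baseline[j] for j in range(n)]
                z_with = list(z_without)
                z_with[i] = x[i]
                # Standard Shapley weight: |S|! (n - |S| - 1)! / n!
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                phi += weight * (f(z_with) - f(z_without))
        phis.append(phi)
    return phis

# Toy linear model: for linear models, the Shapley value of feature i
# reduces to w_i * (x_i - baseline_i), which makes the result easy to check.
w = [2.0, -1.0, 0.5]
model = lambda z: sum(wi * zi for wi, zi in zip(w, z))
phi = shapley_values(model, x=[1.0, 2.0, 4.0], baseline=[0.0, 0.0, 0.0])
# → [2.0, -2.0, 2.0], summing to f(x) - f(baseline)
```

The sum of the attributions equals the difference between the model's output at `x` and at the baseline, which is the "efficiency" property that makes Shapley values attractive for model explanation.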