Research Article: Living guidelines for generative AI — why scientists must oversee its use

This article, published in Nature and co-authored by Claudi L. Bockting, Eva A. M. van Dis, Robert van Rooij, Willem Zuidema, and Johan Bollen, recommends establishing an independent scientific body to test and certify generative artificial intelligence before the technology damages […]

How We Can Have AI Progress Without Sacrificing Safety or Democracy

Daniel Privitera and Yoshua Bengio have published an excellent, balanced, and very clearly written op-ed in TIME on AI regulation. Most importantly, they argue that we should, and effectively can, pursue AI progress, safety, and democratization simultaneously. They make a range of policy proposals […]

How transparent are AI models? Stanford researchers found out.

Sharon Goldman writes for VentureBeat about a new Stanford study that assesses how different commercially available foundation models fare in terms of transparency. Stanford University’s Center for Research on Foundation Models (CRFM) took a big swing at evaluating the transparency of a variety of AI large language models […]

New AI Strategy Adopted by the Norwegian Ministry of Defense

Alex Moltzau writes in his Medium article about the new AI strategy adopted by the Norwegian Ministry of Defense. The government has adopted a strategy that deals with how the defence sector will utilize artificial intelligence to promote Norway’s security and defence policy goals. – The […]

Challenges in evaluating AI systems

This article, authored by Anthropic, discusses in detail the different challenges in evaluating AI systems. To read the full article, please click on this link.

Lessons Learned from Assessing Trustworthy AI in Practice

This article was published in the journal Digital Society by several renowned researchers in the field of Trustworthy AI. Abstract: Building artificial intelligence (AI) systems that adhere to ethical standards is a complex problem. Even though a multitude of guidelines for the design and development of such trustworthy […]

The European AI Liability Directives: Critique of a Half-Hearted Approach and Lessons for the Future

This article, authored by Philipp Hacker, Chair of Law and Ethics of the Digital Society at European University Viadrina, was recently published in Computer Law & Security Review (51). Abstract: The optimal liability framework for AI systems remains an unsolved problem across the […]

The Presidio Recommendations on Responsible Generative AI

The World Economic Forum and AI Commons have jointly published a report on Responsible Generative AI. Generative artificial intelligence (AI) has the potential to transform industries and society by boosting innovation and empowering individuals across diverse fields, from the arts to scientific research. To ensure a positive future, it is […]

Exploring Institutions for Global AI Governance

A new white paper by Google DeepMind investigates models and functions of international institutions that could help manage opportunities and mitigate risks of advanced AI. To read the full white paper, please click on this link.

AI “Godfather” Yoshua Bengio Feels “Lost” over Life’s Work

Zoe Kleinman of BBC News talks to Prof. Yoshua Bengio (ACM Turing Award laureate, renowned for his work on deep learning and artificial neural networks) about his concerns over AI development. To read the full article, please click on this link. Image credit: rawpixel.com on […]