Computers make mistakes and AI will make things worse — the law must recognize that

This is an editorial that was published in Nature. A tragic scandal at the UK Post Office highlights the need for legal change, especially as organizations embrace artificial intelligence to enhance decision-making. More than 20 years ago, the Japanese technology […]

Responsible AI Institute Launches RAISE Benchmarks to Operationalize & Scale Responsible AI Policies

Responsible AI Institute (RAI Institute), a prominent non-profit organization dedicated to facilitating the responsible use of AI worldwide, has introduced three essential tools known as the Responsible AI Safety and Effectiveness (RAISE) Benchmarks. These benchmarks are designed to assist companies in enhancing […]

A Taxonomy of Trustworthiness for Artificial Intelligence

Jessica Newman, of Berkeley’s Center for Long-Term Cybersecurity (CLTC), has developed a standalone taxonomy of trustworthiness for AI. The National Institute of Standards and Technology (NIST) has developed an AI Risk Management Framework (RMF) intended to promote trustworthy artificial intelligence (AI). In this report, we introduce a […]

CDEI portfolio of AI assurance techniques

The portfolio of AI assurance techniques has been developed by the Centre for Data Ethics and Innovation (CDEI), initially in collaboration with techUK. The portfolio is useful for anybody involved in designing, developing, deploying or procuring AI-enabled systems, and showcases examples of AI assurance techniques being used in the real world to support the development of trustworthy AI. […]

Google’s New Gmail Tool is Hallucinating Emails that don’t Exist

This article is authored by Maggie Harrison for Futurism. Google’s new Bard extension will apparently summarize emails, plan your travels, and — oh, yeah — fabricate emails that you never actually sent. Last week, Google plugged its large language model-powered chatbot called Bard into a bevy of Google […]

OECD AI Principles overview

The OECD’s work on Artificial Intelligence and rationale for developing the OECD Recommendation on Artificial Intelligence. AI is a general-purpose technology that has the potential to improve the welfare and well-being of people, to contribute to positive sustainable global economic activity, to increase innovation and productivity, and to help respond to key global challenges. It […]

Marc Rotenberg: An AI Oversight

Marc Rotenberg, of the Center for AI and Digital Policy, has published an open letter, “An AI Oversight”. Following is the text from his LinkedIn post: I published a Letter to the Editor in the current edition of Foreign Affairs. My letter (“An AI Oversight”) is a response to the […]

Research Article: Living guidelines for generative AI — why scientists must oversee its use

This article, published in Nature and co-authored by Claudi L. Bockting, Eva A. M. van Dis, Robert van Rooij, Willem Zuidema, and Johan Bollen, recommends establishing an independent scientific body to test and certify generative artificial intelligence, before the technology damages […]

How We Can Have AI Progress Without Sacrificing Safety or Democracy

Daniel Privitera and Yoshua Bengio have published an excellent, balanced, and very clearly written op-ed in TIME on AI regulation. Most importantly, they argue that we should, and effectively can, pursue AI progress, safety, and democratization simultaneously. They make a range of policy proposals […]

How transparent are AI models? Stanford researchers found out.

Sharon Goldman writes for VentureBeat about a new Stanford study that assesses how different commercially available foundation models fare in terms of transparency. Stanford University’s Center for Research on Foundation Models (CRFM) took a big swing at evaluating the transparency of a variety of AI large language models […]