Exciting AI News: Predicting the structures of all proteins known to science
Melissa Heikkilä writes in MIT Technology Review about this incredible achievement by DeepMind’s AlphaFold, which has successfully predicted the structure of nearly all proteins known to science. Read the full article here.
Prof. Hodac won a prestigious grant from Confiance.AI
We would like to congratulate Prof. Marion Hodac, General Secretary of the AI Transparency Institute, for winning a prestigious grant from Confiance.AI together with the institute. The institute is also affiliated with the project on AI Trustworthiness.
AITI @ The Hague Conference on Responsible AI
The AI Transparency Institute participated in the International Conference on Responsible AI in The Hague, Netherlands, in May 2022. Please read the full report here.
A quick guide to the most important AI law you’ve never heard of
The European Union is planning new legislation aimed at curbing the worst harms associated with artificial intelligence. By Melissa Heikkilä. It’s a Wild West out there for artificial intelligence. AI applications are increasingly used to make important decisions about humans’ lives with little to no oversight or accountability. This can have devastating consequences: wrongful arrests, incorrect grades for students, […]
Global Trends Analysis: Aligning AI Governance Globally – Lessons from Current Practice
By Amandeep Singh Gill. Considering data and artificial intelligence (AI) as global commons could be crucial in ensuring that these key technologies of the 21st century benefit all of humanity. However, fragmented efforts of AI development and governance across the world risk diluting the effectiveness […]
AI and Ethics: New Special Issue is Out!
The new Special Issue on Governance and AI: Challenges for a Sustainable Digital Ecosystem has been published in Springer’s AI and Ethics journal. The special issue was jointly guest edited by Eva Thélisson, Kshitij Sharma, and Himanshu Verma. The preface of the Special Issue was written by Jan Kleijssen, Director of Information Society and Action […]
ISO/IEC TR 24027:2021
Information technology — Artificial intelligence (AI) — Bias in AI systems and AI aided decision making. By ISO. This document addresses bias in relation to AI systems, especially with regard to AI-aided decision-making. Measurement techniques and methods for assessing bias are described, with the aim of addressing and treating bias-related vulnerabilities. All AI system lifecycle […]
Artificial Intelligence and Accountability in Digital Health
By Eva Thelisson. How to build an ecosystem of trust in digital health? The availability of large amounts of personal data from multimodal sources, combined with AI and ML capacities, the Internet of Things, and strong computational platforms, has the potential to transform healthcare systems in a disruptive way. The emergence of personalized medicine offers opportunities […]
Classification Schemas for Artificial Intelligence Failures
By Peter J. Scott and Roman V. Yampolskiy. In this paper we examine historical failures of artificial intelligence (AI) and propose a classification scheme for categorising future failures. By doing so we hope that (a) the responses to future failures can be improved through applying a systematic classification that can be used to simplify the […]
AI Ethics for Law Enforcement: A Study into Requirements for Responsible Use of AI at the Dutch Police
By Lexo Zardiashvili, Jordi Bieger, Francien Dechesne, and Virginia Dignum. This article analyses the findings of empirical research to identify possible consequences of using Artificial Intelligence (AI) for and by the police in the Netherlands, and the ethical dimensions involved. We list the morally salient requirements the police need to adhere to for ensuring the responsible […]