AI Cannot Read Your Mind

Authors: Visar Berisha and Pavan Turaga (Arizona State University). Can AI read your mind? That is the question on many minds these days. A recent study to be published at the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2023 has been cited in popular media as evidence […]
Defining the scope of AI regulations

Jonas Schuett has published a paper titled “Defining the scope of AI regulations” in the journal Law, Innovation and Technology. According to Jonas, “The paper argues that the material scope of AI regulations should not rely on the term ‘artificial intelligence’, mainly because existing AI definitions don’t meet the requirements for legal definitions (e.g. […]
President Biden’s remarks about Privacy, AI, Transparency and Bias in the 2023 State of the Union

Marc Rotenberg, Founder of the Center for AI and Digital Policy, has written a LinkedIn post on President Biden’s remarks about privacy, AI, transparency, and bias. Please click on this link to see his post.
KU Leuven AI Summer School: AI and Interdisciplinarity

Victoria Hendrickx and Nathalie Smuha have written an article on the push towards interdisciplinarity in AI. “Interdisciplinary research is becoming increasingly popular, especially in the domain of artificial intelligence (AI), where it is difficult to fathom societal opportunities and risks without considering insights from various disciplines. However, taking an interdisciplinary approach also comes with several challenges, […]
Risk Management in the Artificial Intelligence Act

Jonas Schuett of the Center for the Governance of AI (Oxford, UK) has published an article on risk management in the EU AI Act. Please click on this link to read the full article.
Comments on the Initial Draft of the NIST AI Risk Management Framework

Jonas Schuett and Markus Anderljung of the Center for the Governance of AI have compiled key recommendations and commented on the initial draft of the NIST AI Risk Management Framework. To read the comments and key recommendations, please click on this link.
Nine Cities Set Standards for Transparent Use of AI

Nine cities, cooperating through the Eurocities network, have developed a free-to-use, open-source ‘data schema’ for algorithm registers in cities. The data schema, which sets common guidelines on the information to be collected about algorithms and their use by a city, supports the responsible use of AI and puts people at the heart of […]
Seminar Series: The mechanisms of mathematical intuition in human beings and machines

A very interesting seminar series, organized by the Collège de France, aims to develop discourse on human and machine intelligence. To check out the full program, please click on this link.
The Reproducibility Issues that are Haunting Health-Care AI

This technology feature, published in Nature by Emily Sohn, discusses how, despite the roll-out of numerous AI tools for diagnosis and monitoring, their reliability remains questionable. To read the full article, please click on this link.
Interactive Timeline of the AI Incident Database

GitHub user LKchemposer has created a notebook featuring an interactive timeline of the AI Incident Database. To explore the interactive timeline, please click on this link.