THE AI TRANSPARENCY INSTITUTE
We are a non-profit research centre
The AITI is an interdisciplinary, non-profit research centre dedicated to AI governance, human trust in AI, and corporate digital responsibility.
The AI Transparency Institute contributes to building an open artificial intelligence for the benefit of all through its three missions: Education, Advocacy and Research.
It addresses key challenges such as climate change, digital ethics, AI safety, explainability, fairness, transparency and privacy.
It engages private actors in business models that are sustainable, zero-carbon and eco-responsible, in the interest of future generations.
The AI Transparency Institute is committed to reinvesting all revenues in its core activities.
Research on AI ethics and sustainability by our AI experts
Develop and disseminate knowledge on AI ecosystems
Monitor legal aspects of responsible AI at public institutions
Eva holds a Ph.D. in Law on data protection in the digital age and a Master's degree from ESSEC Business School, and has more than 15 years' experience in project management, internal audit, innovation and governance. She was awarded the CAIDP AI Policy Certification in 2021.
Marion Ho-Dac is Full Professor of Private Law at the University of Artois (France). She holds a PhD in EU Law from Bordeaux University (published by Bruylant, Brussels, 2012) and was a researcher in Comparative Law at the Court of Justice of the European Union. Her teaching and research focus on EU civil justice and the EU digital single market.
Himanshu has a Ph.D. in Computer Science from EPFL and specialises in Human-Computer Interaction. He is an Assistant Professor at TU Delft in the Netherlands.
Dr. Chris Greenwood is a consultant working in the Geneva area on fundraising for causes and social businesses. He is a graduate of London, Cambridge and Harvard Universities.
He has more than 20 years' experience in fundraising and marketing.
Kirtan Padh is a PhD student at Helmholtz AI and TUM, working at the intersection of causality and machine learning. He is particularly interested in the societal aspects of AI, especially its unintended negative consequences, and in technical and regulatory solutions to such problems.
Our advisory committee
Prof. Michael Wade
Prof. Wade is a Professor in Innovation and Strategy at IMD and holds the Cisco Chair in Digital Business Transformation. He is the Director of the Global Center for Digital Business Transformation. His areas of expertise relate to strategy, innovation, and digital transformation.
Noella is a Rwanda Government Fellow at the World Economic Forum – Centre for the Fourth Industrial Revolution, where she focuses on co-designing policy and governance approaches for adopting emerging technologies such as precision medicine in Rwanda. She serves as Strategic Advisor at the Rwanda Biomedical Center, where she provides strategic support to the senior management team implementing national-level programmes. She has worked as a health policy consultant with the World Health Organization and USAID-funded projects in Rwanda and Guinea-Conakry.
Prof. Dominique Lambert
Dominique is Professor and Director of the Department of Philosophy at the University of Namur in Belgium. A recognised specialist in theoretical physics and the philosophy of science, his interests extend to the relations between science and theology, biology, and the history of modern physical cosmology. Georges Lemaître, one of the fathers of the Big Bang theory, was honoured in 2014 when the European Space Agency launched the fifth Automated Transfer Vehicle, "ATV-5 Georges Lemaître", which docked with the International Space Station.
Dr. Danielle Belgrave
Danielle is a machine learning researcher in the Healthcare Intelligence group at Microsoft Research in Cambridge (UK), where she works on Project Talia, which explores how a human-centric approach to machine learning can meaningfully assist in the detection, diagnosis, monitoring and treatment of mental health problems.
Dr. Łukasz Kidziński
Dr. Kidziński is the co-founder of Saliency.ai, a medical imaging platform, and a researcher in the Mobilize Center at Stanford, working at the intersection of computer science, statistics and biomechanics. He was previously a researcher in the CHILI (Computer-Human Interaction in Learning and Instruction) group at EPFL, Switzerland.
Prof. Kshitij Sharma
Dr. Sharma is an Assistant Professor at NTNU in Norway. His background is in Human-Computer Interaction and collaborative/cooperative learning. His research interests lie primarily in applied machine learning, artificial intelligence and Human-Computer Interaction (HCI), with a strong emphasis on group behaviour and physiological data such as eye-tracking, EEG and facial expressions (theoretical and practical methods in digital interaction). He has since worked on methods based on Extreme Value Theory (EVT) for computing features from abnormalities in data emerging from collaborative work.
Dr. Raffaele Marino
Dr. Marino is a theoretical physicist with a background in the theory of transport processes in non-equilibrium systems, where thermal noise typically plays a dominant role. He worked at The Hebrew University of Jerusalem under the supervision of Scott Kirkpatrick on stochastic optimisation, computational complexity and graph theory, with the aim of developing new greedy and message-passing algorithms for probabilistic graphical models. Today he works on explainability (XAI) at La Sapienza University, following a postdoc at EPFL on high-dimensional statistics, modern inference and machine learning.
CONTRIBUTE TO THE FUTURE OF RESPONSIBLE AI
We offer access to a strong network of researchers in AI and Machine Learning, unique opportunities for networking and connecting with our affiliated experts for stimulating conversations, and more.
As a member you will benefit from a number of advantages, including our self-assessment test. Join us to help pioneer responsible AI.