How transparent are AI models? Stanford researchers found out. Sharon Goldman writes for VentureBeat about a new Stanford study that assesses how commercially available foundation models fare on transparency. Stanford University’s Center for Research on Foundation Models (CRFM) took a big swing at evaluating the transparency of a variety of AI large language models […]
Explainable AI – LIME and SHAP This blog post by The AIEdge provides a very good overview of two powerful techniques underlying the field of Explainable AI: LIME and SHAP. To read the full article, please click on this link.
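To give a flavor of the idea behind LIME, here is a minimal sketch of its core recipe: perturb inputs around the instance being explained, weight the samples by proximity, and fit a weighted linear surrogate whose coefficients serve as local feature attributions. The `black_box` function is a hypothetical stand-in for any opaque model; the kernel width and sample count are illustrative choices, not values from the original post.

```python
import numpy as np

# Hypothetical black-box model: linear in feature 0, quadratic in feature 1.
def black_box(X):
    return 3.0 * X[:, 0] + 0.5 * X[:, 1] ** 2

rng = np.random.default_rng(0)
x0 = np.array([1.0, 2.0])  # the instance we want to explain

# 1. Sample perturbations around the instance.
Z = x0 + rng.normal(scale=0.5, size=(500, 2))
y = black_box(Z)

# 2. Weight each sample by proximity to x0 (RBF kernel, width is a free choice).
w = np.exp(-np.sum((Z - x0) ** 2, axis=1) / 0.5)

# 3. Fit a weighted linear surrogate: minimize sum w_i * (beta . a_i - y_i)^2
#    by scaling rows with sqrt(w) and solving ordinary least squares.
A = np.column_stack([np.ones(len(Z)), Z])
sw = np.sqrt(w)
beta, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)

# beta[1:] are the local attributions: near x0, feature 0 contributes ~3.0
# and feature 1 contributes ~2.0 (the local slope of 0.5 * x1^2 at x1 = 2).
print(beta[1:])
```

SHAP takes a different, game-theoretic route (Shapley values over feature coalitions), but both methods share this goal of attributing a single prediction to individual features.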
Exploring Explainable AI for the Arts This article, co-authored by Nick Bryan-Kinns, Berker Banar, Corey Ford, Courtney N. Reed, Yixiao Zhang, Simon Colton and Jack Armitage, asks the crucial question of how we can maintain transparency in the music industry about music generated by AI. Abstract: Explainable AI has the potential to support more interactive […]
Murat Durmuş writes about the new book from the Information Commissioner’s Office and The Alan Turing Institute, called “Explaining Decisions Made with AI”. To read more about this post and the book, please follow this link.