A small number of samples can poison LLMs of any size

Authors: Alexandra Souly, Javier Rando, Ed Chapman, Xander Davies, Burak Hasircioglu, Ezzeldin Shereen, Carlos Mougan, Vasilios Mavroudis, Erik Jones, Chris Hicks, Nicholas Carlini, Yarin Gal, Robert Kirk. Abstract: In a joint study with the UK AI Security Institute and the Alan Turing Institute, we found that as few as 250 malicious documents can produce a “backdoor” […]

📣 How People Use ChatGPT

This working paper is written by Aaron Chatterji, Thomas Cunningham, David J. Deming, Zoe Hitzig, Christopher Ong, Carl Yan Shan & Kevin Wadman. Abstract: Despite the rapid adoption of LLM chatbots, little is known about how they are used. We document the growth of ChatGPT’s consumer product from its launch in November 2022 through July 2025, when […]

A Swiss LLM: A language model built for the public good

This article is written by Florian Meyer, Corporate Communications, and Mélissa Anchisi, Head of AI Communication, EPFL. Earlier this week in Geneva, around 50 leading global initiatives and organisations dedicated to open-source LLMs and trustworthy AI convened at the International Open-Source LLM Builders Summit. Hosted […]

Article Alert: Do large language models have a legal duty to tell the truth?

Authors: Sandra Wachter, Brent Mittelstadt, and Chris Russell. Abstract: Careless speech is a new type of harm created by large language models (LLMs) that poses cumulative, long-term risks to science, education and shared social truth in democratic societies. LLMs produce responses […]

Article Alert: Can Large Language Models Reason and Plan?

Author: Subbarao Kambhampati (Arizona State University). Abstract: While humans sometimes do show the capability of correcting their own erroneous guesses with self-critiquing, there seems to be no basis for that assumption in the case of LLMs. Please click here to read the full article. The article […]