AI doomsday warnings a distraction from the danger it already poses, warns expert

This article was published in The Guardian and authored by Dan Milmo.

Focusing on doomsday scenarios in artificial intelligence is a distraction that plays down immediate risks such as the large-scale generation of misinformation, according to a senior industry figure attending this week’s AI safety summit.

Aidan Gomez, co-author of a research paper that helped create the technology behind chatbots, said long-term risks such as existential threats to humanity from AI should be “studied and pursued”, but that they could divert politicians from dealing with immediate potential harms.

“I think in terms of existential risk and public policy, it isn’t a productive conversation to be had,” he said. “As far as public policy and where we should have the public-sector focus – or trying to mitigate the risk to the civilian population – I think it forms a distraction, away from risks that are much more tangible and immediate.”

Gomez is attending the two-day summit, which starts on Wednesday, as chief executive of Cohere, a North American company that makes AI tools for businesses including chatbots. In 2017, at the age of 20, Gomez was part of a team of researchers at Google who created the Transformer, a key technology behind the large language models which power AI tools such as chatbots.

Gomez said that AI – the term for computer systems that can perform tasks typically associated with intelligent beings – was already in widespread use, and it was those applications that the summit should focus on. Chatbots such as ChatGPT and image generators such as Midjourney have stunned the public with their ability to produce plausible text and images from simple text prompts.

“This technology is already in a billion user products, like at Google and others. That presents a host of new risks to discuss, none of which are existential, none of which are doomsday scenarios,” Gomez said. “We should focus squarely on the pieces that are about to impact people or are actively impacting people, as opposed to perhaps the more academic and theoretical discussion about the long-term future.”

Gomez said misinformation – the spread of misleading or incorrect information online – was his key concern. “Misinformation is one that is top of mind for me,” he said. “These [AI] models can create media that is extremely convincing, very compelling, virtually indistinguishable from human-created text or images or media. And so that is something that we quite urgently need to address. We need to figure out how we’re going to give the public the ability to distinguish between these different types of media.”
