AI’s Achilles Heel: New Research Pinpoints Fundamental Weaknesses

This article was originally published in SciTechDaily.

Researchers from the University of Copenhagen have become the first in the world to mathematically prove that, beyond simple problems, it is impossible to develop algorithms for AI that will always be stable.

ChatGPT and similar machine learning-based technologies are on the rise. However, even the most advanced algorithms face limitations. Researchers from the University of Copenhagen have made a groundbreaking discovery, mathematically demonstrating that, beyond basic problems, it’s impossible to develop AI algorithms that are always stable. This research could pave the way for improved testing protocols for algorithms, highlighting the inherent differences between machine processing and human intelligence.

Machines interpret medical scanning images more accurately than doctors, translate foreign languages, and may soon be able to drive cars more safely than humans. However, even the best algorithms have weaknesses. A research team at the Department of Computer Science, University of Copenhagen, is working to expose them.

Take an automated vehicle reading a road sign as an example. If someone has placed a sticker on the sign, a human driver will hardly be distracted. But a machine can easily be thrown off, because the sign now differs from the signs it was trained on.
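The following toy sketch is invented for illustration and is not from the Copenhagen paper: the "sticker" is simply a perturbation vector chosen to oppose a toy linear classifier's weights. It shows how a change that is tiny in every coordinate can still flip the predicted label.

```python
# A toy sketch, invented for illustration (not from the paper): a change
# that is tiny in every coordinate can still flip a linear classifier's
# decision when it is aligned against the weights.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=100)   # weights of a toy "road sign" classifier
x = 0.001 * w              # an input lying barely on the "stop" side

def classify(v):
    # The sign of the dot product decides the predicted label.
    return "stop" if w @ v > 0 else "yield"

# The "sticker": each coordinate moves by only 0.01, but every move
# pushes against the classifier's weights.
sticker = -0.01 * np.sign(w)

print(classify(x))             # stop
print(classify(x + sticker))   # yield -- the tiny change flips the label
print(np.abs(sticker).max())   # 0.01 -- no coordinate moved by more
```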

“We would like algorithms to be stable in the sense that if the input is changed slightly, the output will remain almost the same. Real life involves all kinds of noise which humans are used to ignoring, while machines can get confused,” says Professor Amir Yehudayoff, who heads the group.
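To make the quote concrete, here is a minimal sketch of one way such stability could be probed empirically. The model, the noise scale eps, and the trial count are placeholders of ours, not definitions from the research:

```python
# A hedged sketch of the stability idea in the quote: add a little noise
# to the input and check how often the output stays the same. The model,
# noise scale, and trial count are placeholders, not from the paper.
import numpy as np

rng = np.random.default_rng(1)

def predict(x, w):
    # Toy stand-in for any learned classifier.
    return int(w @ x > 0)

def empirical_stability(x, w, eps=0.01, trials=1000):
    # Fraction of small random perturbations that leave the prediction unchanged.
    base = predict(x, w)
    hits = sum(
        predict(x + rng.uniform(-eps, eps, size=x.shape), w) == base
        for _ in range(trials)
    )
    return hits / trials

w = rng.normal(size=20)
x = rng.normal(size=20)
print(empirical_stability(x, w))  # near 1.0 means the model is stable around x
```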

A language for discussing weaknesses

As the first in the world, the group, together with researchers from other countries, has proven mathematically that, apart from simple problems, it is not possible to create machine learning algorithms that will always be stable. The scientific article describing the result has been accepted for publication at Foundations of Computer Science (FOCS), one of the leading international conferences on theoretical computer science.

“I would like to note that we have not worked directly on automated car applications. Still, this seems like a problem too complex for algorithms to always be stable,” says Amir Yehudayoff, adding that this does not necessarily imply major consequences for the development of automated cars:

“If the algorithm only errs under a few very rare circumstances, this may well be acceptable. But if it does so under a large collection of circumstances, it is bad news.”
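One way to read this quantitatively, as a gloss on the quote rather than the paper's formal statement, is as the probability, over inputs drawn from the data distribution, that some small perturbation changes the model's output:

\[
\Pr_{x \sim \mathcal{D}}\Big[\exists\, x' : d(x, x') \le \varepsilon \ \text{and}\ f(x') \ne f(x)\Big]
\]

If this probability is negligible, the instability may be acceptable; if it is large, it is, in Yehudayoff's words, bad news.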

The scientific article cannot be applied by industry to identify bugs in its algorithms, nor was that the intention, the professor explains:

“We are developing a language for discussing the weaknesses in Machine Learning algorithms. This may lead to the development of guidelines that describe how algorithms should be tested. And in the long run, this may again lead to the development of better and more stable algorithms.”
