How We Can Have AI Progress Without Sacrificing Safety or Democracy
Daniel Privitera and Yoshua Bengio have published an excellent, balanced, and very clearly written op-ed in TIME on AI regulation. Most importantly, they argue that we should, and effectively can, pursue AI progress, safety, and democratization simultaneously. They make a range of policy proposals to that end, from supporting SMEs to mandatory safety measures (such as red-teaming and compute monitoring) and democratic oversight. I agree with basically all of them.
Dive into current discussions about how to regulate artificial intelligence, and you’d think we’re grappling with impossible choices: Should we choose AI progress or AI safety? Address present-day impacts of AI or potential future risks? And many more perceived dilemmas. But are these trade-offs real? As world leaders prepare to gather at the upcoming AI Safety Summit in Bletchley Park in the U.K. in early November, let’s dig a bit deeper and uncover three core values that underpin most policy proposals in the AI regulation discourse.
The first value is progress. The promises of AI are vast: curing diseases, increasing productivity, helping to solve climate change. This seems to call for a “full steam ahead” approach, in which we attempt to accelerate AI progress even beyond the current level. But moving at breakneck speed comes with increased risks—epidemics of automated fake news, AI-enhanced bioterrorism, automated cyberwarfare, or out-of-control AI threatening the existence of humanity. We are not currently prepared to handle these risks well. We don’t know how to reliably control advanced AI systems, and we don’t currently have mechanisms for preventing their misuse.
The second core value in AI regulation is therefore safety. Leading experts as well as the general public are increasingly concerned about extreme risks from AI, and policy-makers are rightly beginning to look for ways to increase AI safety. But prioritizing safety at all costs can also have undesirable consequences. For instance, it can make sense from a safety perspective to limit the open-sourcing of AI models if they can be used for potentially dangerous purposes like engineering a highly contagious virus. On the flip side, however, open-source code helps to reduce concentration of power. In a world with increasingly capable AI, leaving this rapidly growing power in the hands of a few profit-driven companies could seriously endanger democratic sovereignty. Who will decide what very powerful AI systems are used for? To whose benefit, and to whose detriment? If superhuman capabilities end up in a few private hands without significant democratic governance, the very principle of sharing power that underlies democracy is threatened.
That is why the third core value in AI regulation is democratic participation. There is a real concern that AI might entrench existing power imbalances at the expense of marginalized groups, low-income countries, and potentially everyone but a handful of tech giants building the most powerful AI models. This suggests we need to ensure continued participation from everyone in shaping the future of AI. But focusing exclusively on participation would also come at a cost. Democratizing access to potentially highly destructive technology can lead to catastrophic outcomes, which is why access to certain technologies in sectors like nuclear energy or pathogen research is not democratized either, but highly restricted and regulated.
To read the full article, please click on this link.