UK House of Lords Report on LLMs

Martin Ebers, President of the Robotics and AI Law Society, writes about the UK House of Lords report on Large Language Models (LLMs) in one of his LinkedIn posts. He writes:

The Government’s approach to Artificial Intelligence and large language models (LLMs) has become too focused on a narrow view of AI safety. The UK must rebalance towards boosting opportunities while tackling near-term security and societal risks. It will otherwise fail to keep pace with competitors, lose international influence and become strategically dependent on overseas tech firms for a critical technology.

The report issues a stark warning about the “real and growing” risk of regulatory capture, as a multi-billion pound race to dominate the market deepens. Without action to prioritise open competition and transparency, a small number of tech firms may rapidly consolidate control of a critical market and stifle new players, mirroring the challenges seen elsewhere in internet services.

Main findings:

The Committee welcomes the Government’s work on positioning the UK as an AI leader, but says a more positive vision for LLMs is needed to reap the social and economic benefits, and enable the UK to compete globally. Key measures include more support for AI start-ups, boosting computing infrastructure, improving skills, and exploring options for an ‘in-house’ sovereign UK large language model.

The Committee considered the risks around LLMs and says the apocalyptic concerns about threats to human existence are exaggerated and must not distract policy makers from responding to more immediate issues.

The report did, however, identify more limited near-term security risks, including cyber attacks, child sexual exploitation material, terrorist content and disinformation. The Committee says catastrophic risks are less likely but cannot be ruled out, noting the possibility of a rapid and uncontrollable proliferation of dangerous capabilities and the lack of early warning indicators. The report called for mandatory safety tests for high-risk models and more focus on safety by design.

The Committee calls on the Government to support copyright holders, saying the Government “cannot sit on its hands” while LLM developers exploit the works of rightsholders. It rebukes tech firms for using data without permission or compensation, and says the Government should end the copyright dispute “definitively” including through legislation if necessary. The report calls for a suite of measures including a way for rightsholders to check training data for copyright breaches, investment in new datasets to encourage tech firms to pay for licensed content, and a requirement for tech firms to declare what their web crawlers are being used for.

To steer the UK toward a positive outcome, the Committee sets out 10 core recommendations. These include measures to boost opportunities, address risks, support effective regulatory oversight (including ensuring open competition and avoiding market dominance by established technology giants), achieve the aims set out in the AI White Paper, introduce new standards, and resolve copyright disputes.

Please click on this link to read the full post.

Image credit: Image by pikisuperstar on Freepik
