AI Cyber Security Code of Practice
By Department for Science, Innovation and Technology, Feryal Clark MP and The Rt Hon Peter Kyle MP
The UK government is taking forward a two-part intervention to address the cyber security risks to AI. This involves the development of a voluntary Code of Practice, which will be used to help create a global standard in the European Telecommunications Standards Institute (ETSI) that sets baseline security requirements. We believe a Code focused specifically on the cyber security of AI is needed because AI has distinct differences from other software. These include security risks from data poisoning, model obfuscation and indirect prompt injection, as well as operational differences associated with data management. Further examples of the unique risks posed by AI systems can be found in Appendix B of the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework.
The government is also intervening in this area because software needs to be secure by design, and because stakeholders across the AI supply chain need clarity on the baseline security requirements they should implement to protect AI systems.
The proposed intervention was endorsed by 80% of respondents to the Department for Science, Innovation and Technology’s (DSIT) Call for Views, which ran from 15 May to 9 August 2024. Support for each principle in the Code ranged from 83% to 90%. This document also builds on the NCSC’s Guidelines for Secure AI Development, which were published in November 2023 and endorsed by 19 international partners. As set out in DSIT’s modular approach to cyber security codes of practice, AI stakeholders should view this document as an addendum to the Software Code of Practice.1
Scope
The scope of this voluntary Code of Practice is focused on AI systems, including systems that incorporate deep neural networks, such as generative AI. For consistency, we use the term “AI systems” throughout the document when framing the scope of provisions, and “AI security”, which is considered a subset of cyber security. The Code is not designed for academics who are creating and testing AI systems solely for research purposes, that is, AI systems which are not going to be deployed.
The Code sets out cyber security requirements for the lifecycle of AI. We recognise that there is no consistent view in international frameworks on what constitutes the AI lifecycle. However, to help stakeholders, we have separated the principles into five phases: secure design, secure development, secure deployment, secure maintenance and secure end of life. We have also signposted relevant standards and publications at the start of each principle to highlight links between those documents and the Code. This is not an exhaustive list.