Joint Guidance: Guidelines for Secure AI System Development

The National Cyber Security Centre (NCSC) has joined agencies from 17 countries to release guidance that will help artificial intelligence (AI) developers to bake in cyber security from the outset.

The United Kingdom’s National Cyber Security Centre(external link) led the development of Guidelines for Secure AI System Development(external link) that have been endorsed by 23 international agencies, including New Zealand’s NCSC.

The new guidelines are the first of their kind to be agreed globally and will help developers of any systems that use AI to make informed cyber security decisions at every stage of the development process – whether those systems have been created from scratch or built on top of tools and services provided by others.

Cyber security is an essential precondition for the safety of AI systems, and is required to ensure their resilience, privacy, fairness, reliability, and predictability.

Lisa Fong, Deputy Director General, National Cyber Security Centre, says the guidelines reinforce the need for developers to take a secure by design(external link) approach and aim to raise the cyber security of AI systems by helping to ensure that they are designed, developed, and deployed securely.

“Making these guidelines available in collaboration with international partner agencies and industry experts is vital to establishing a common understanding of cyber risks, vulnerabilities, and mitigation strategies,” she says.

The publication follows the July 2023 release of interim generative AI guidance for the public service(external link). This was jointly produced by the NCSC as the Government’s system leader for cyber security alongside our data, digital, procurement, and privacy counterparts, recognising the multidisciplinary approach required to safely take advantage of generative AI.

If you have any queries about this guidance, please contact the NCSC by email: info@ncsc.govt.nz