Joint Guidance: Deploying AI Systems Securely

The National Cyber Security Centre has today released joint cyber security guidance on deploying AI systems securely, alongside international partners: the United States Cybersecurity and Infrastructure Security Agency (CISA), National Security Agency (NSA), and Federal Bureau of Investigation (FBI); the Australian Cyber Security Centre (ACSC); the Canadian Centre for Cyber Security (CCCS); and the United Kingdom's National Cyber Security Centre (NCSC-UK).

The rapid adoption, deployment, and use of AI capabilities can make them highly valuable targets for malicious cyber actors. Actors who have historically stolen sensitive data and intellectual property to advance their interests may seek to co-opt deployed AI systems and apply them to malicious ends.

Malicious actors targeting AI systems may use attack vectors unique to AI, such as prompt injection and training data poisoning, as well as standard techniques used against traditional IT. Because of this large variety of attack vectors, defences need to be diverse and comprehensive. Advanced malicious actors often combine multiple vectors to execute more complex operations, and such combinations can more effectively penetrate layered defences.

This guidance outlines methodologies for protecting data and AI systems and for responding to malicious activity. It expands on previous AI guidelines we have issued alongside international partners, namely Guidelines for Secure AI System Development and Engaging with Artificial Intelligence (AI). It aims to improve the confidentiality and integrity of AI systems and provide assurance that known vulnerabilities are mitigated.

Throughout this guidance, the term AI systems refers to machine learning (ML)-based artificial intelligence systems. The best practices in this guidance are most applicable to organisations deploying and operating externally developed AI systems on premises or in private cloud environments, especially those in high-threat, high-value settings. They do not apply to organisations that are not deploying AI systems themselves but instead leverage AI systems deployed by others.