Joint Guidance

Careful Adoption of Agentic AI Services

This guidance discusses key cyber security challenges and risks associated with the introduction of agentic AI into IT environments, as well as best practices for securing agentic AI systems.

PUBLISHED DATE: 1 May 2026

Careful Adoption of Agentic AI Services provides practical guidance to help organisations design, develop, deploy and operate agentic AI systems, and to make informed risk assessments and apply appropriate mitigations.

The guidance concludes with actionable recommendations to help organisations prepare for and defend against emerging and future agentic AI threats.

This guidance primarily focuses on large language model (LLM)-based agentic AI systems. It considers both threats to and vulnerabilities within agentic AI systems, as well as risks arising from agentic AI behaviour. This includes risks introduced through system components, integrations and downstream use.

The guidance covers:

  • Broader agentic AI security considerations
  • Agentic AI security risks
  • Best practices for securing agentic AI systems
  • Defending against future risks

This guidance was co-authored by the Australian Signals Directorate’s Australian Cyber Security Centre (ASD’s ACSC), the United States Cybersecurity and Infrastructure Security Agency (CISA) and National Security Agency (NSA), the Canadian Centre for Cyber Security (Cyber Centre), the New Zealand National Cyber Security Centre (NCSC-NZ) and the United Kingdom National Cyber Security Centre (NCSC-UK).

Careful Adoption of Agentic AI Services [PDF, 1.2 MB]

For questions related to this guidance, email info@ncsc.govt.nz.