📌 Author's note: This site synthesises the author's own understanding from publicly available Microsoft documentation, official Microsoft Security blog posts, RSAC 2026 announcements, and insights from Microsoft Security professionals and MVPs. It is independent and not affiliated with or endorsed by Microsoft.
ZERO TRUST · AI · MICROSOFT

Zero Trust
for AI

Microsoft's Zero Trust framework extends to AI workloads – but the controls are different from traditional Zero Trust. Here's what it means in practice, how to stage your implementation, and which controls matter most.

The Three Principles – Applied to AI

Zero Trust isn't just for users and devices

The three Zero Trust principles – Verify Explicitly, Use Least Privilege, Assume Breach – apply directly to AI agents, but the implementation looks very different from user-centric Zero Trust.

🔐
Verify Explicitly
PRINCIPLE 01 · IDENTITY & AUTHENTICATION
For users, this means MFA and Conditional Access. For AI agents, it means ensuring every agent has a verified identity – not just a name – before it can access resources or communicate with other agents. In practice: require Entra ID authentication for all agent interactions, register agents in the Agent 365 Registry, and use modern Agent ID authentication (OAuth 2.0) where available. For Copilot Studio agents, this means enforcing one of the four authentication patterns rather than allowing No Authentication. The hardest part: Classic Copilot Studio agents authenticate as service principals or via OBO – they don't use modern Agent ID and therefore can't be verified by CA for Agents or ID Protection. This is the single biggest gap in Microsoft's current Zero Trust for AI story.
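The auth-mode triage described above can be sketched as a simple classifier. This is a minimal sketch, assuming a simplified inventory record with a hypothetical `auth_mode` field; real agent inventories (Agent 365 Registry, Entra ID) have richer schemas:

```python
from dataclasses import dataclass

@dataclass
class Agent:
    """Hypothetical, simplified agent record for illustration only."""
    name: str
    auth_mode: str  # "none", "service_principal", "obo", or "agent_id"

def verification_posture(agent: Agent) -> str:
    """Say whether Conditional Access / ID Protection can verify this agent.

    Only modern Agent ID (OAuth 2.0) identities are visible to CA for
    Agents; classic service-principal/OBO agents are not, and no-auth
    agents cannot be verified at all.
    """
    if agent.auth_mode == "none":
        return "block: no identity to verify"
    if agent.auth_mode in ("service_principal", "obo"):
        return "gap: classic agent, not covered by CA for Agents"
    if agent.auth_mode == "agent_id":
        return "ok: verifiable via CA for Agents"
    return "unknown auth mode: triage manually"
```

The point of the sketch is the three-way outcome: no identity, unverifiable classic identity, and verifiable modern identity each demand a different remediation.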
🔒
Use Least Privilege
PRINCIPLE 02 · ACCESS & PERMISSIONS
For users, this means JIT/JEA and PIM. For AI agents, it means scoping each agent's permissions to exactly what it needs for its specific task, and nothing broader. In practice: avoid Application permissions (tenant-wide) in favour of Delegated permissions (user-scoped), avoid maker credentials (which grant the maker's full permission set to every user), use access packages for time-bound agent resource assignments, and configure Custom Security Attributes to classify agent access levels for attribute-based CA policies. The hardest part: agents are often provisioned broadly "to make sure they work", and permissions are rarely reviewed. Agent lifecycle workflows and access reviews are the operational controls that enforce this principle over time.
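The least-privilege checks above can be expressed as a small linting function over an agent record. A hedged sketch: field names like `permission_type` and `uses_maker_credentials` are illustrative placeholders, not a real API:

```python
def least_privilege_findings(agent: dict) -> list[str]:
    """Return human-readable findings for over-broad agent provisioning.

    Field names are hypothetical; map them to whatever your inventory
    export actually contains.
    """
    findings = []
    if agent.get("permission_type") == "application":
        findings.append("tenant-wide Application permissions; prefer Delegated")
    if agent.get("uses_maker_credentials"):
        findings.append("runs as the maker; every user inherits the maker's access")
    if not agent.get("last_access_review"):
        findings.append("no access review on record")
    return findings
```

An empty result means the agent passes these three checks, not that it is least-privileged overall.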
🛡️
Assume Breach
PRINCIPLE 03 · DETECTION & RESPONSE
For users, this means SIEM, SOAR, and EDR. For AI agents, it means assuming any agent could be compromised via prompt injection, malicious tool output, or credential theft – and building detection and containment accordingly. In practice: deploy Sentinel AI analytics rules, configure Defender real-time agent protection, build AI incident response playbooks, and set up automated response rules for high-risk AI activity. The hardest part: agent compromise often looks like normal agent behaviour – the agent is doing what it was told, just by an attacker rather than a legitimate user. Detection requires behavioural baselines, not just signature matching.
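Behavioural baselining can be illustrated with a simple z-score test: compare an agent's current activity volume against its own history rather than against a signature. A toy sketch of the idea, not a production detector:

```python
from statistics import mean, stdev

def is_anomalous(baseline: list[float], observed: float,
                 threshold: float = 3.0) -> bool:
    """Flag activity that deviates sharply from the agent's own baseline.

    A compromised agent's individual actions look 'normal'; what changes
    is the volume or shape of its activity relative to its history.
    """
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return observed != mu
    return abs(observed - mu) / sigma > threshold
```

Real detectors baseline many dimensions (tools called, data touched, call timing), but the comparison-to-self principle is the same.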
Maturity Model

Where to start – a staged approach

Don't try to implement all 80+ ZT Workshop AI controls at once. This three-stage model gives organisations a practical sequence from zero visibility to full automation.

STAGE 01
Visibility
Know what you have before you try to control it. Most organisations skip this and jump to controls – then discover the controls don't apply to most of their agents.
Discover agents in Agent 365 Registry
Enable AI Agent Inventory (Defender)
Run Playbook 01 KQL audit queries
Identify Classic vs Modern agents
Triage no-auth and maker-cred agents
Assign owners to all published agents
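The Stage 1 triage steps above can be sketched as a single pass over an exported inventory. All field names here are hypothetical placeholders for whatever your registry export actually contains:

```python
def stage1_triage(agents: list[dict]) -> dict[str, list[str]]:
    """Bucket inventoried agents by the highest-risk Stage 1 findings.

    Input is a list of dicts with illustrative keys: auth, owner,
    generation ("classic"/"modern"), uses_maker_credentials.
    """
    buckets = {"no_auth": [], "maker_cred": [], "unowned": [], "classic": []}
    for a in agents:
        if a.get("auth") == "none":
            buckets["no_auth"].append(a["name"])
        if a.get("uses_maker_credentials"):
            buckets["maker_cred"].append(a["name"])
        if not a.get("owner"):
            buckets["unowned"].append(a["name"])
        if a.get("generation") == "classic":
            buckets["classic"].append(a["name"])
    return buckets
```

One agent can land in several buckets; a no-auth, unowned classic agent is exactly the kind of finding Stage 1 exists to surface.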
STAGE 02
Control
Apply identity and access controls to the agents you've inventoried. Focus on the highest-risk patterns first: no-auth, maker credentials, org-wide sharing.
Enforce Entra ID auth on all agents
Deploy CA posture for Modern Agents
Enable ID Protection for Agents
Configure Global Secure Access for agents
Deploy DSPM for AI + DLP policies
Enable Defender real-time protection
STAGE 03
Automation
Operationalise your controls so they scale without manual effort. Governance that requires manual review of every agent will break down as agent count grows.
Lifecycle workflows for mover/leaver
Access reviews for agent permissions
Automated response rules in Sentinel
Graph API agent registry management
Recurring AI threat review cadence
Red teaming cadence for all agents
Priority Controls

The highest-impact ZT Workshop AI controls

From the 80+ controls in the Microsoft Zero Trust Assessment Workshop AI section, these are the ones security architects should prioritise first.

AI_000 · IDENTITY
Require Entra ID Auth for All Agent Interactions
Ensure every agent that interacts with users or data authenticates via Entra ID. No anonymous or no-auth agents in production. Foundation for everything else.
MEDIUM EFFORT
AI_001–002 · VISIBILITY
Discover, Inventory and Assign Ownership
Use Agent 365 Registry to discover all agents. Triage each one and assign an accountable owner. Unowned agents are your highest sprawl risk.
LOW EFFORT
AI_005 · IDENTITY
Custom Security Attributes for Agent Classification
Tag agents with custom attributes (risk level, data sensitivity, environment). Enables attribute-based CA policies that scale to hundreds of agents without per-agent rules – directly addresses the agent name sync gap.
MEDIUM EFFORT
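The attribute-based approach can be sketched as a matcher: a policy applies to an agent when every attribute the policy requires is present with the right value, so hundreds of agents are covered by a handful of policies. Attribute and policy names below are invented for illustration:

```python
def policy_applies(agent_attrs: dict, required: dict) -> bool:
    """A policy matches when all its required attributes match the agent."""
    return all(agent_attrs.get(k) == v for k, v in required.items())

def applicable_policies(agent_attrs: dict,
                        policies: dict[str, dict]) -> list[str]:
    """Return the names of every policy that applies to this agent."""
    return [name for name, req in policies.items()
            if policy_applies(agent_attrs, req)]
```

Adding a new agent needs no new rules: tag it correctly and the existing attribute-scoped policies pick it up.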
AI_006 · IDENTITY
ID Protection + Risk-Based CA for Agents
Enable Identity Protection risk signals for Modern Agents and deploy risk-based CA policies. Automatically blocks high-risk agents without manual intervention.
MEDIUM EFFORT
AI_014 · GOVERNANCE
Lifecycle Workflows for Sponsor Mover/Leaver
When the person who sponsors an agent leaves the organisation, a workflow must reassign sponsorship or decommission the agent. Without this, agents become orphaned and unmanaged over time.
MEDIUM EFFORT
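The mover/leaver check reduces to: find agents whose sponsor is no longer an active user. A minimal sketch over hypothetical records:

```python
def orphaned_agents(agents: list[dict], active_users: set[str]) -> list[str]:
    """Agents whose sponsor has left (or was never set) need reassignment
    or decommissioning before they drift out of governance."""
    return [a["name"] for a in agents
            if a.get("sponsor") not in active_users]
```

In production this is what a lifecycle workflow automates on the leaver event, rather than a scheduled sweep.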
AI_072 · RUNTIME
Content Safety SDK for All Agent Inputs
Require all agents to pass inputs through Azure AI Content Safety before processing. Detects prompt injection, harmful content, and jailbreak attempts at the input layer before the model sees them.
HIGH EFFORT
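The input-gating pattern can be sketched as a wrapper that screens every input before the agent's handler ever runs. The `naive_screen` function below is a deliberately crude stand-in; a real deployment would call the Azure AI Content Safety service at that point:

```python
def naive_screen(text: str) -> dict:
    """Toy stand-in for a Content Safety check; marker strings are
    illustrative only and trivially bypassed in reality."""
    markers = ("ignore previous instructions", "ignore all previous")
    hit = any(m in text.lower() for m in markers)
    return {"allowed": not hit,
            "reason": "possible prompt injection" if hit else ""}

def gated(screen):
    """Decorator: every input passes the screen before the agent sees it."""
    def deco(handler):
        def wrapper(user_input: str):
            verdict = screen(user_input)
            if not verdict["allowed"]:
                return f"blocked: {verdict['reason']}"
            return handler(user_input)
        return wrapper
    return deco

@gated(naive_screen)
def summarise(text: str) -> str:
    # Placeholder agent handler; only reached for screened inputs.
    return f"summary of {len(text)} chars"
```

The structural point is that the screen sits in the call path, not beside it: the model never sees an input the gate rejected.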
AI_077 · MCP
APIM Gateway for All MCP Server Deployments
Require Azure API Management as a governance layer in front of all custom MCP servers. Provides authentication, rate limiting, logging, and policy enforcement at the tool layer.
HIGH EFFORT
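What the gateway enforces can be illustrated with a toy policy object: authenticate the caller, rate-limit, and log every tool call. This is a conceptual stand-in for APIM policies, not APIM itself:

```python
class GatewayPolicy:
    """Toy model of gateway enforcement in front of an MCP server:
    auth check, per-caller call limit, and an audit log."""

    def __init__(self, limit: int, valid_tokens: set[str]):
        self.limit = limit
        self.valid_tokens = valid_tokens
        self.counts: dict[str, int] = {}
        self.log: list[tuple[str, str]] = []

    def handle(self, token: str, tool_call: str) -> str:
        if token not in self.valid_tokens:
            self.log.append((tool_call, "401"))
            return "401 unauthorised"
        self.counts[token] = self.counts.get(token, 0) + 1
        if self.counts[token] > self.limit:
            self.log.append((tool_call, "429"))
            return "429 rate limited"
        self.log.append((tool_call, "200"))
        return "200 forwarded to MCP server"
```

Note that even rejected calls are logged: the audit trail at the tool layer is as much the point as the blocking.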
AI_080 · DATA
Sensitivity Label Inheritance for AI Outputs
AI-generated content should inherit the highest sensitivity label of its source data. Without this, a Confidential document summarised by an agent produces an Unclassified output, bypassing your data protection controls.
MEDIUM EFFORT
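Label inheritance is a max operation over an ordered taxonomy. A sketch using an example label order; your Purview taxonomy will likely differ:

```python
# Example ordering, least to most sensitive; purely illustrative.
LABEL_ORDER = ["Public", "General", "Confidential", "Highly Confidential"]

def inherited_label(source_labels: list[str]) -> str:
    """AI output inherits the highest sensitivity label among its sources."""
    if not source_labels:
        return LABEL_ORDER[0]  # no labelled sources: default to lowest
    return max(source_labels, key=LABEL_ORDER.index)
```

So a summary drawing on one General and one Confidential document comes out Confidential, never lower.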
AI_090–091 · DETECT
Sentinel AI Analytics Rules
Enable AI-specific analytics rules for prompt injection detection and create custom rules for agent anomaly detection. Also configure AI threat detection workbooks for ongoing visibility.
MEDIUM EFFORT
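A custom analytics rule of this kind boils down to matching patterns over agent telemetry and thresholding per agent. A toy Python sketch of the logic (in Sentinel this would be a KQL analytics rule); the marker strings and event fields are illustrative:

```python
from collections import Counter

# Crude injection indicators for illustration; real rules use richer
# detections than substring matches.
INJECTION_MARKERS = ("ignore previous instructions",
                     "disregard your rules",
                     "reveal your system prompt")

def detect_injection_attempts(events: list[dict],
                              threshold: int = 1) -> list[str]:
    """Return agents that received at least `threshold` suspicious prompts."""
    hits = Counter()
    for e in events:
        text = e.get("prompt", "").lower()
        if any(m in text for m in INJECTION_MARKERS):
            hits[e["agent"]] += 1
    return [agent for agent, n in hits.items() if n >= threshold]
```

Raising `threshold` trades missed single attempts for fewer false positives, the same tuning decision the real analytics rule requires.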
AI_094 · RESPOND
Automated Response Rules for High-Risk AI Activity
Configure SOAR-style automated containment for high-risk AI activity: automatic agent suspension, access revocation, or alert escalation without waiting for manual triage.
HIGH EFFORT
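The automated-response idea can be sketched as a severity-to-actions map that fires without human triage. Action names are placeholders for whatever your playbooks actually invoke:

```python
# Hypothetical action names; bind these to real playbook steps.
PLAYBOOK = {
    "high": ["suspend_agent", "revoke_tokens", "page_oncall"],
    "medium": ["revoke_tokens", "open_ticket"],
    "low": ["open_ticket"],
}

def respond(alert: dict) -> list[str]:
    """Map alert severity straight to containment actions; unknown or
    missing severity falls back to the least disruptive action."""
    return PLAYBOOK.get(alert.get("severity", "low"), ["open_ticket"])
```

The design choice worth noting: the fallback is deliberately conservative, so a malformed alert opens a ticket rather than suspending a production agent.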
AI_081–083 · RUNTIME
AI Red Teaming Cadence
Configure AI Red Teaming Agent in Azure AI Foundry. Establish red teaming as a requirement for all new agent deployments and a recurring validation cadence (quarterly recommended) for existing agents.
HIGH EFFORT
AI_128 · MCP
MCP Management Server
Deploy a dedicated MCP Management Server as the control plane for all custom MCP server deployments. Provides centralised approval, discovery, and governance of the tool layer – the MCP equivalent of an app catalogue.
HIGH EFFORT
📌 Full control list

This page covers the highest-priority controls. The complete Microsoft Zero Trust Workshop AI section contains 80+ controls across identity, governance, data, runtime, MCP, and detection. Access it at microsoft.github.io/zerotrustassessment/docs/category/ai

Related Pages
Identity & OBO ↗
Five auth patterns, Classic vs Modern, Entra Agent ID, KQL detection
Gaps & Roadmap ↗
What Zero Trust controls don't yet cover and when Microsoft plans to close them
Playbooks ↗
Runnable Stage 1 audit – KQL queries to get visibility in 30 minutes