📌 Author's note: This site synthesises the author's own understanding from publicly available Microsoft documentation, official Microsoft Security blog posts, RSAC 2026 announcements, and insights from Microsoft Security professionals and MVPs. It is independent and not affiliated with or endorsed by Microsoft. Microsoft updates products and documentation frequently — always verify current status directly with Microsoft before making architecture or purchasing decisions.
Developer Security

Microsoft Foundry Control Plane

Everything developers need to observe, secure, and govern fleets of AI agents — from code to runtime. The Foundry Control Plane is the developer-facing counterpart to Agent 365: while Agent 365 gives IT and security teams governance visibility, Foundry Control Plane gives developers the tools to build agents that are secure and compliant by design.

Source: Microsoft Agent 365 Training · Microsoft Foundry documentation · May 2026

📌 Two control planes — different audiences, same goal

Agent 365 (admin.cloud.microsoft) → IT administrators and security teams. Observe, govern, and secure all agents at the tenant level. GA May 1, 2026.
Foundry Control Plane (ai.azure.com/foundry) → Developers and platform engineers. Build, evaluate, monitor, and govern agent fleets from code through production. Announced alongside Agent 365.

Agents built in Foundry are automatically deployed to Agent 365 for IT/security governance. The two planes share the same Entra identity layer — an agent identity created in Foundry appears in Agent 365.

Four Capabilities

What Foundry Control Plane provides

📊
End-to-End Observability
OpenTelemetry instrumentation for every agent. Traces, metrics, and logs from development through production. Distributed tracing across multi-agent workflows. Integration with Azure Monitor and Application Insights. Agent action tracing for accountability — who did what, when, and why.
🔧
Guardrails and Controls
System prompt enforcement, prompt shielding, content safety filters, task adherence controls, and custom blocklists. The same configurable controls used by Microsoft Copilot — now available to developers building custom agents. Applied consistently at both inputs and outputs.
🛡️
Integrated Security
Entra identity per agent (auto-provisioned), Purview data governance, Defender runtime threat protection. Network isolation via managed VNet and private endpoints. Credential-less storage access. AI Red Teaming Agent for pre-deployment vulnerability probing.
🚀
Fleet-Wide Operations
Manage agents across clouds, frameworks, and teams from one control plane. Projects model for simplified setup. A/B experimentation. CI/CD automation. Weights & Biases integration for fine-tuned model evaluation. Organisational quota and compute management.
Agent Lifecycle

From define to operate — the full lifecycle

| Stage | Developer (Foundry Control Plane) | IT/Security (Agent 365) |
|---|---|---|
| Define | Inherit enterprise policies, set guardrails, configure evaluators | Define enterprise policies, set allowed templates |
| Build | Develop with SDK, run evals, integrate Content Safety, AI Red Teaming | — |
| Approve | Deploy agent to Agent 365 (triggers IT approval workflow) | Onboard agent, apply guardrails, enforce policies |
| Operate | Monitor performance, quality, cost, risk via Foundry dashboards | Monitor all deployed agents: usage, performance, risk |
| Govern | Continuous evaluation, tracing, debugging, A/B experimentation | Manage policies (access, data security, compliance), defend against threats |
Evaluation Framework

Three evaluation categories — automated and manual

Foundry Control Plane provides structured evaluation of agents before and after deployment. Evaluations run locally during development, in CI/CD on every commit, and in production against real user inputs.

| Category | Evaluators | What they measure |
|---|---|---|
| Quality | Groundedness · Coherence · Fluency · Relevance · Retrieval Score · Similarity · NLP Metrics (F1) | Is the agent response accurate, relevant, and well-formed? Does it faithfully use grounded sources? |
| Risk & Safety | Jailbreak Defect · Hate and Unfairness · Sexual · Violence · Self-Harm · Protected Material · Ungrounded Attributes · Code Vulnerability | Does the agent produce harmful, unsafe, or legally problematic outputs? Can it be manipulated? |
| Agent-Specific | Intent Resolution · Tool Call Accuracy · Task Adherence · Response Completeness | Does the agent correctly understand user intent, use tools accurately, and complete tasks as instructed? |
📌 Evaluation workflow

Test data: Generate adversarial and non-adversarial test datasets using the Foundry evaluation client library, or upload your own domain-specific prompts.
Evaluator: Metric instructions + Azure OpenAI model → scores each response with reasoning for human review.
CI/CD integration: Batch evaluation runs on every check-in and deployment. Production evaluations run against real user inputs using traces to debug issues.
A/B experimentation: Compare models, prompts, and workflows at scale before committing to a change.
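The evaluator pattern described above (metric instructions plus a judge model that scores each response with reasoning, run as a batch gate in CI/CD) can be sketched as follows. This is a minimal, self-contained illustration: the judge call is stubbed with a trivial keyword-overlap heuristic standing in for an Azure OpenAI judge model, and all names (`score_response`, `run_batch_eval`, `EvalResult`) are hypothetical, not the Foundry SDK's own API.

```python
from dataclasses import dataclass

@dataclass
class EvalResult:
    prompt: str
    response: str
    score: float      # 0.0–1.0, higher is better
    reasoning: str    # judge's explanation, kept for human review

def score_response(prompt: str, response: str, sources: list[str]) -> EvalResult:
    """Hypothetical groundedness judge.

    A real evaluator would send metric instructions plus the response to an
    Azure OpenAI judge model; here a keyword-overlap heuristic against the
    grounding sources stands in so the sketch runs offline.
    """
    source_words = {w.lower() for s in sources for w in s.split()}
    resp_words = [w.lower().strip(".,") for w in response.split()]
    grounded = [w for w in resp_words if w in source_words]
    score = len(grounded) / max(len(resp_words), 1)
    reasoning = f"{len(grounded)}/{len(resp_words)} response terms found in sources"
    return EvalResult(prompt, response, round(score, 2), reasoning)

def run_batch_eval(dataset: list[tuple[str, str]], sources: list[str],
                   threshold: float = 0.5) -> tuple[list[EvalResult], bool]:
    """Score every (prompt, response) pair; the pass flag is what a CI/CD
    check-in or deployment gate would act on."""
    results = [score_response(p, r, sources) for p, r in dataset]
    passed = all(r.score >= threshold for r in results)
    return results, passed
```

The per-response `reasoning` string mirrors the human-review step in the workflow: a batch run can fail the gate automatically while still leaving an explanation trail for a developer to inspect.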

AI Red Teaming Agent

Automated vulnerability probing — built into Foundry

The Foundry Control Plane includes a built-in AI Red Teaming Agent powered by PyRIT integration. Unlike running PyRIT manually, the Foundry Red Teaming Agent is a managed, scheduled service that automatically probes your agents for content risks and security vulnerabilities as part of the development lifecycle.

| Capability | Detail |
|---|---|
| Automated content risk scans | Scheduled adversarial probing across harmful content categories, jailbreak attempts, and sensitive information extraction |
| Evaluate probing success | LLM-as-judge scoring on whether attacks succeeded — not just whether the attack ran |
| Reporting and logging | Structured findings linked to OWASP LLM Top 10 categories; exportable for compliance evidence |
| PyRIT integration | Built on the same Microsoft PyRIT framework — 53+ adversarial datasets, 70+ converters, 6 attack strategies |
| When to use | Pre-deployment (gate on result), post-deployment (continuous monitoring), after system prompt changes |
⚠ Foundry Red Teaming Agent vs PyRIT directly

The Foundry Red Teaming Agent is a managed service — scheduled, governed, and integrated with Foundry observability. PyRIT standalone (Playbook 06) is a flexible research library you wrap yourself for custom CI/CD pipelines. For organisations using Foundry, the managed agent is the right starting point. For custom agents on other platforms, PyRIT standalone is the tool.
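One concrete way to "gate on result" pre-deployment, as the table above suggests, is to fail the pipeline when judged-successful attacks exceed a threshold. The sketch below assumes a hypothetical findings export (the real Foundry Red Teaming Agent export schema is not reproduced here); only the category/success shape is illustrative.

```python
import json

# Hypothetical findings schema: assume each exported finding carries an
# OWASP LLM Top 10 category label and whether the LLM-as-judge scored the
# attack as having actually succeeded (not merely having run).
SAMPLE_FINDINGS = json.dumps([
    {"category": "LLM01: Prompt Injection", "attack_succeeded": True},
    {"category": "LLM06: Sensitive Information Disclosure", "attack_succeeded": False},
    {"category": "LLM01: Prompt Injection", "attack_succeeded": False},
])

def gate_on_findings(findings_json: str, max_success_rate: float = 0.0) -> bool:
    """Pre-deployment gate: return False (block deployment) when the share
    of successful attacks exceeds the allowed rate."""
    findings = json.loads(findings_json)
    if not findings:
        return True  # nothing probed, nothing to block on
    successes = sum(1 for f in findings if f["attack_succeeded"])
    return successes / len(findings) <= max_success_rate

# Strict default gate: any single successful attack blocks deployment.
deploy_ok = gate_on_findings(SAMPLE_FINDINGS)
```

The same function re-run on post-deployment scan exports gives the continuous-monitoring case; the only design choice is where the threshold sits for each environment.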

Content Safety

Configurable guardrails — same stack as Microsoft Copilot

Azure AI Content Safety is integrated directly into the Foundry Control Plane, providing the same configurable content filters used by Microsoft's own Copilot products. Applied at both input (prompt) and output (response) layers.

Content categories blocked
Violence: Weapons, bullying, terrorism, stalking
Sexual: Vulgar content, prostitution, nudity
Hate & Unfairness: Race, gender, religion, disability, harassment
Self-harm: Eating disorders, bullying facilitation
Child safety: Exploitation, abuse, grooming
Protected materials: Copyrighted content
Ungrounded attributes: Fabricated/inaccurate claims
Prompt injection: Direct, indirect, spotlighting, agent-specific
Control mechanisms
System message: Ground and constrain agent behaviour
Prompt Shields: Direct and indirect injection detection
Custom categories: Block entire topics, not just specific words
Custom blocklists: Domain-specific terms and phrases
Groundedness detection: Flag responses not in source materials
Task adherence: Detect when agent goes off-task
Multimodal filtering: Scan text, images, and multimedia
Protected material detection: Flag known copyrighted or owned text content
📌 Content Safety runtime flow

User prompt → Content Safety evaluates → Modified/filtered prompt → Foundry model → Filtered response → App response. Purview data governance and Defender threat detection run alongside this pipeline — not instead of it.
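The runtime flow above can be sketched as a tiny pipeline. Everything here is a toy stand-in: a custom blocklist and a crude injection-marker check replace Azure AI Content Safety's classifiers and Prompt Shields, and the model is stubbed, but the structure (the same check applied at both the input and output layers) matches the flow described.

```python
# Toy stand-ins for the real classifiers — illustrative only.
BLOCKLIST = {"internal-codename-x"}          # custom blocklist terms
INJECTION_MARKERS = ("ignore previous instructions",
                     "reveal your system prompt")

def passes_content_safety(text: str) -> bool:
    """Return True if the text passes filtering (same check reused for
    prompts and responses, as in the runtime flow)."""
    lowered = text.lower()
    if any(term in lowered for term in BLOCKLIST):
        return False
    if any(marker in lowered for marker in INJECTION_MARKERS):
        return False
    return True

def fake_model(prompt: str) -> str:
    """Stub for the Foundry model call."""
    return f"Answer to: {prompt}"

def handle(prompt: str) -> str:
    if not passes_content_safety(prompt):      # input (prompt) layer
        return "[blocked: prompt failed content safety]"
    response = fake_model(prompt)
    if not passes_content_safety(response):    # output (response) layer
        return "[blocked: response failed content safety]"
    return response
```

Applying the filter twice matters: a prompt can pass while the model's response still trips a category (for example protected material), so neither layer can be dropped.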

Observability & Tracing

End-to-end visibility into agent behaviour

Foundry Control Plane provides comprehensive tracing of every agent action — enabling debugging, performance optimisation, and accountability. All traces are stored and queryable, forming an audit trail of what the agent did and why.

| Layer | What is traced | Why it matters for security |
|---|---|---|
| Model inference | Every LLM call: model, tokens, latency, prompt, response | Detects unusual inference patterns, cost anomalies, model substitution |
| Tool invocations | Every tool/MCP call: name, parameters, result, duration | ATG blocks happen here; traces show what was attempted vs blocked |
| Memory operations | Reads/writes to agent memory (Dataverse) | Memory is a persistent data store — sensitive context accumulates over sessions |
| Agent-to-agent | Orchestrator calls to sub-agents in multi-agent workflows | Lateral movement risk; trust propagation between agents |
| User interactions | Session start/end, message counts, satisfaction signals | Behavioural baseline for ID Protection anomaly detection |
Security Architecture

Network isolation and data protection

| Control | Detail |
|---|---|
| Managed VNet | AI hub and projects run within a managed virtual network. Private endpoints for all connected resources (Azure Storage, Key Vault, Container Registry, Foundry models). No public internet exposure for managed resources. |
| ExpressRoute / VPN | On-premises connectivity to Foundry via ExpressRoute or VPN Gateway to your Azure VNet. |
| Credential-less storage | Foundry supports credential-less access to Azure Storage and Foundry IQ using managed identity — no stored secrets, no rotation required. Generally available. |
| Customer-managed encryption | Add your own encryption layer on top of Microsoft-managed encryption. Customer-managed key (CMK) for Blob Storage, Foundry IQ, and Azure Cosmos DB resources. |
| Entra Agent ID | Every Foundry agent is automatically provisioned with an Entra Agent Identity. CA for Agents, ID Protection, and lifecycle governance apply at the identity layer. |
Foundry Projects Model

Simplified setup replacing the Hub/Project architecture

The new Foundry projects model significantly simplifies the previous Hub → Project → Resource hierarchy that made setup and coding complex.

| | Old: Hub + Projects | New: Foundry Projects |
|---|---|---|
| Entry point | AI Hub → Projects → Multiple SDKs | Single Foundry Resource → Foundry SDK or API |
| Resources | Many different resources needed upfront | Multi-tenant services by default; attach dedicated resources optionally |
| SDK | Azure ML SDK, Azure OpenAI SDK, various others | Single Foundry SDK (or Azure OpenAI SDK for compatibility) |
| Optional attachments | All required | Azure OpenAI, AI Search, Storage, Fabric, Azure Monitor — attach as needed |
| Scale | Complex enterprise configuration required from start | Start simple, add enterprise controls as needed |
Purview Data Security Investigations

AI-powered investigation workflow for data incidents

Microsoft Purview Data Security Investigations (Preview) is a three-stage workflow for investigating data security incidents involving AI — enabling security teams to find impacted data, analyse risks, and coordinate remediation without moving data between tools.

| Stage | What you do | Key capability |
|---|---|---|
| 1 — Identify | Find incident-relevant data across the M365 estate | Search documents, emails, Copilot prompts/responses, and Teams messages. Launch directly from a Purview IRM case or a Defender XDR incident — pre-scoped to relevant data. |
| 2 — Investigate | Analyse impacted data for security risks | AI-powered content categorisation, severity assessment, vector search (find all content related to a subject based on context and meaning, not just keywords), key risk identification. |
| 3 — Mitigate | Coordinate remediation across teams | View data/user/activity correlations, create a mitigation plan, add reviewers from partner teams securely, use incident learnings to improve security practices. |
📌 What makes this different from Advanced Hunting

Advanced Hunting (AIAgentsInfo, CloudAppEvents) gives you metadata and telemetry. Purview Data Security Investigations gives you the actual content — prompt text, response text, document content, emails — with AI-powered analysis to understand what sensitive data was exposed and to whom. The two tools are complementary: use Advanced Hunting to detect the incident, Purview DSI to investigate what was actually in the data.

AI Baseline — Compliance Manager

Automated compliance posture assessment for AI deployments

The AI Baseline assessment in Microsoft Compliance Manager provides an out-of-the-box trust assessment that automatically evaluates your AI deployment against global AI regulations (EU AI Act, NIST AI RMF) and surfaces gaps with recommended remediation actions.

AI Baseline provides
✓ Automatic compliance posture evaluation
✓ Regulatory gap identification (EU AI Act, NIST AI RMF)
✓ Remediation actions (configure Purview, Entra, Defender)
✓ Real-time AI Compliance Score
✓ Audit-ready reporting
Access paths
M365 admin center:
Admin center → Compliance → AI Baseline

Purview DSPM:
Purview portal → DSPM for AI → AI Baseline tab

Requires: Compliance Administrator or Global Administrator role
Pre-built AI regulatory templates: Upload PDF regulatory documents → automatically converted to actionable controls → mapped to Microsoft 365, Azure, and Foundry settings. AI-powered regulatory intelligence keeps you current as regulations evolve. Templates available: EU AI Act, NIST AI RMF 1.0, ISO/IEC 42001:2023, ISO/IEC 23894:2023.
Agent 365 MCP Tool Catalog

Certified MCP servers for Agent 365 agents

Agent 365 provides a managed MCP tooling gateway that integrates certified tools for a consistent developer and governance experience. These tools are available to agents built with any SDK โ€” Foundry, Copilot Studio, LangChain, or custom.

| Category | MCP Tool | Key capabilities | Typical use |
|---|---|---|---|
| Search & AI | Copilot Search | Chat, multi-turn conversations, grounding with files | Knowledge retrieval |
| Business Data | Dataverse | Dynamics 365 CRUD operations, domain actions | Business workflows |
| Communication | Outlook Mail & Calendar · Microsoft Teams | Messaging, meetings, channel operations | Collaboration |
| Content & Files | SharePoint · OneDrive | Upload, search, metadata management, lists | Content management |
| Identity | User Profile | Manager reports, profile lookup, org chart | Organisational context |
| Documents | Microsoft Word | Create/read documents, comments | Document workflows |
📌 MCP tool governance

Central admin control: Admins manage MCP servers via Microsoft 365 admin center — blocking a server blocks it for all users and agents.
Scoped permissions: Each MCP server = one app permission requiring admin consent during onboarding.
Observability: Full tool call tracing — tool invoked, parameters, execution outcome.
Security: Rate limits, payload checks, security scans on all MCP traffic.
Admin tasks: View activated MCP servers, allow/block servers, apply scoped permissions.
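The governance points above — tenant-wide blocking, per-server admin consent, rate limits, and tool-call tracing — compose naturally into a gateway that every tool call passes through. The sketch below is illustrative only (the class and record shapes are hypothetical, not the real Agent 365 gateway API), but it shows the order in which those checks apply and how each outcome still lands in the observability log.

```python
from collections import defaultdict

class McpGateway:
    """Toy sketch of a managed MCP tooling gateway applying governance
    checks before forwarding a tool call."""

    def __init__(self, blocked_servers: set[str],
                 consented: dict[str, set[str]], rate_limit: int = 5):
        self.blocked = blocked_servers   # admin-managed tenant-wide block list
        self.consented = consented       # agent id -> servers with admin consent
        self.rate_limit = rate_limit     # max allowed calls per agent (per window)
        self.calls = defaultdict(int)
        self.trace = []                  # tool-call observability log

    def call_tool(self, agent: str, server: str, tool: str, params: dict) -> str:
        outcome = "allowed"
        if server in self.blocked:
            outcome = "blocked: server disabled tenant-wide"
        elif server not in self.consented.get(agent, set()):
            outcome = "blocked: no admin consent for this server"
        elif self.calls[agent] >= self.rate_limit:
            outcome = "blocked: rate limit exceeded"
        else:
            self.calls[agent] += 1       # real forwarding would happen here
        # Full tracing: tool invoked, parameters, execution outcome —
        # blocked attempts are logged just like allowed ones.
        self.trace.append({"agent": agent, "server": server,
                           "tool": tool, "params": params, "outcome": outcome})
        return outcome
```

Note that blocked calls are traced too: the audit trail must show what was attempted, not only what went through.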

Shadow AI Discovery — Setup

Four-step setup for Global Secure Access Shadow AI detection

Source: Agent 365 Training Day 3 — Module 5

| Step | Action | Detail |
|---|---|---|
| 1 | Enable Internet Access traffic forwarding | Global Secure Access → Traffic forwarding → Enable Internet Access profile. Routes internet traffic through the GSA client for inspection. |
| 2 | Assign users and groups | Assign the Internet Access profile to target users/groups. Can scope to specific users for phased rollout or POC before tenant-wide deployment. |
| 3 | Install the GSA client | Deploy the Global Secure Access client to user devices. Verify in Connections view: Status should show connected, Channels configured. |
| 4 | Access Shadow AI discovery | Global Secure Access portal → App discovery → Use Generative AI apps filter. See detected AI applications with usage statistics and risk scores. |