Autonomous AI just arrived in Microsoft 365. On March 30, Microsoft made Copilot CoWork available through the Frontier program, an early-access tier where enterprise customers test advanced capabilities in production environments before general availability. CoWork doesn't just answer questions. It executes multi-step tasks autonomously across Word, Excel, Outlook, Teams, and SharePoint, powered by both Anthropic's Claude and OpenAI's GPT models working together.
Describe an outcome: "prepare the quarterly loan committee package," "reconcile this month's BSA reports," "build the board presentation from last quarter's exam findings." CoWork creates a plan, reasons across your files and tools, and carries the work forward: minutes or hours of autonomous execution using your institution's data.
For credit unions, community banks, and mortgage companies, that should trigger an immediate question: what governance controls do we need in place before autonomous AI starts touching our regulated data? The answer isn't a policy document you can write over a weekend. It's a set of technical configurations that most financial institutions haven't started yet.
That's where Microsoft Agent 365 comes in. Going generally available May 1 at $15 per user per month, Agent 365 is the centralized control plane for discovering, managing, securing, and governing every AI agent running inside your tenant. That includes agents built with Copilot Studio, deployed through CoWork, or acquired from third-party partners. Microsoft published data on this governance gap months ago: the majority of financial institutions use AI tools, but most lack formal governance frameworks. Autonomous agents widen that gap because they inherit user permissions, access sensitive data, and make decisions without real-time human oversight. Your institution has 30 days.
Copilot CoWork Goes Live in Frontier
Microsoft's Frontier program is how enterprise customers get early production access to capabilities before they roll out broadly. Think of it as the fast lane: your organization opts in, agrees to preview terms, and gets features months ahead of general availability. On March 30, Copilot CoWork joined that program. Any M365 customer with a Copilot license can now opt in and start running autonomous AI workflows.
CoWork is built on the technology platform behind Anthropic's Claude Cowork, integrated directly into Microsoft 365 Copilot. It's multi-model by design: Claude handles planning and orchestration while OpenAI's GPT models handle drafting and generation. A new Critique feature has Claude review GPT's output before it reaches you; Microsoft reports this dual-model approach scores 13.8% higher on its DRACO deep research benchmark than single-model approaches.
Capital Group, one of the early-access organizations, described the shift this way: this isn't about generating content or answers, it's about taking real action by connecting steps, coordinating tasks, and following through across everyday workflows. For financial institutions, that means CoWork can draft a loan committee package in Word, pull supporting data from Excel, email reviewers through Outlook, and schedule the follow-up in Teams. Autonomously. Using your institution's data.
Alongside CoWork, Microsoft announced the Frontier Suite, a new M365 E7 license at $99 per user per month that bundles E5, Copilot, Microsoft Agent 365, and the Entra Suite. This isn't just a pricing change. It signals that Microsoft expects every enterprise to run autonomous agents within 12 months.
- Anthropic's Claude integrated into Copilot. Microsoft Agent 365 governance details published. M365 E7 at $99/user announced.
- Autonomous multi-step AI workflows now available to any M365 customer with a Copilot license who opts into Frontier.
- Free Copilot Chat restricted to a chat-only experience. Paid Copilot unaffected.
- Centralized governance control plane for all AI agents: registry, identity, security, compliance. The tool your institution needs to manage what CoWork and Copilot Studio deploy.
- First state-level AI governance law with financial services impact takes effect.
- Business Basic up 17%, Business Standard up 12%, E3 up 8.3%. Business Premium holds flat.
Why Agent 365 Governance Is Different
If your institution already deployed Copilot, you may assume the same governance controls apply. They don't. Traditional Copilot is reactive: a user asks a question, Copilot answers using data the user can already access. Autonomous agents through CoWork are proactive: they execute multi-step workflows, access multiple data sources, and produce outputs without a human reviewing each step. Microsoft Agent 365 is the governance layer that gives your IT team visibility and control over what those agents are doing.
That distinction matters for regulated institutions because it changes the risk surface. An agent processing loan applications can touch borrower PII, pull credit data, access compliance documents, and generate regulatory filings in a single workflow. If any permission in that chain is misconfigured, the agent amplifies the exposure faster than any human user could.
Microsoft's Agent 365 governance framework treats agents as first-class organizational identities. Each agent receives a unique Entra Agent ID with the same lifecycle management as human accounts: authentication, scoped permissions, conditional access policies, and time-limited access packages that auto-expire. The Agent Registry surfaces both sanctioned and shadow agents, with IT able to quarantine unauthorized deployments. For financial institutions, this means agent governance can plug directly into your existing Entra ID governance workflows.
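The time-limited, auto-expiring access described above maps onto Microsoft Graph's entitlement management API (assignment policies under /identityGovernance/entitlementManagement/assignmentPolicies). The sketch below shows what such a payload might look like; field names follow the v1.0 schema as best we can verify, and the idea of attaching an Entra Agent ID to an access package is an assumption about how the integration will surface, not confirmed API behavior.

```python
# Hedged sketch: an entitlement-management assignment policy granting an agent
# time-limited access that auto-expires. Verify field names against current
# Microsoft Graph docs before relying on them; the Agent ID wiring is assumed.

def time_limited_agent_access(access_package_id: str, days: int = 30) -> dict:
    """Build an assignment-policy payload with an auto-expiring grant."""
    return {
        "displayName": f"Agent access, auto-expires after {days} days",
        "accessPackageId": access_package_id,
        "allowedTargetScope": "specificDirectoryUsers",
        "expiration": {
            "type": "afterDuration",
            "duration": f"P{days}D",  # ISO 8601 duration, e.g. P30D
        },
        "requestApprovalSettings": {"isApprovalRequiredForAdd": True},
    }

policy = time_limited_agent_access("<access-package-id>")
```

Starting with approval required and a short expiry means an agent's access lapses by default and must be deliberately renewed, which is the posture examiners will want to see documented.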
Microsoft built strong governance primitives into Agent 365. The problem isn't the tooling. The problem is that most financial institutions haven't configured the prerequisites. Conditional Access policies need to cover agent identities. Purview DLP needs rules for agent-processed data. Sensitivity labels need to propagate to agent outputs. Audit logging needs to capture every agent action for examiner review. None of this happens automatically.
Five Controls to Verify Before May 1
If your institution already enforces Conditional Access policies and Purview DLP rules, you have a head start. But agent identities operate through Entra Agent IDs, which function like service principals rather than traditional user accounts. Policies scoped to "All Users" may not capture them. The task here isn't building these controls from scratch. It's verifying your existing policies cover agent identities and extending them where they don't.
Verify your existing Purview DLP policies apply to agent-processed data. Because agents operate under Entra Agent IDs rather than user accounts, "All Users" policies may not enforce on agent workflows. Extend coverage to block agents from processing SSNs, account numbers, and ITINs without encryption. Add policies for loan application data and NPI if they aren't already in place.
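While you extend Purview coverage, it helps to sanity-check sample agent outputs against the identifiers named above. The patterns below are deliberately simplified stand-ins for Purview's built-in sensitive information types, which add checksums, keyword proximity, and confidence scoring; the account-number width is an assumption for illustration.

```python
import re

# Simplified detection patterns; Purview's real sensitive info types are far
# more robust. Use something like this only for quick spot checks of samples.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "account_number": re.compile(r"\b\d{10,12}\b"),  # assumed 10-12 digit accounts
}

def scan_agent_output(text: str) -> list[str]:
    """Return the sensitive-info types detected in an agent-generated document."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]
```

Running this over a handful of agent-generated drafts tells you quickly whether NPI is leaking into outputs before your DLP rules are confirmed to fire.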
Confirm your Conditional Access policies include Entra Agent IDs. Agent identities function like service principals, so policies scoped only to user accounts won't capture them. Add agent-specific rules that restrict access to sensitive resources by network location, time window, and risk level. Require step-up verification for agents accessing regulated data classifications.
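Since agent identities function like service principals, the closest existing pattern is workload-identity Conditional Access, which targets service principals via conditions.clientApplications. The sketch below shapes a payload for POST /identity/conditionalAccess/policies on that assumption; whether Entra Agent IDs slot in exactly this way should be verified against current Graph documentation.

```python
# Hedged sketch: a Conditional Access policy restricting agent identities to
# trusted network locations, modeled on workload-identity CA. The assumption
# that Entra Agent IDs are addressable as service principals is ours, not
# confirmed API surface.

def agent_conditional_access_policy(agent_object_ids: list[str]) -> dict:
    """Build a CA policy payload that blocks agents outside trusted locations."""
    return {
        "displayName": "Agents: block outside trusted locations",
        "state": "enabledForReportingButNotEnforced",  # start in report-only mode
        "conditions": {
            "clientApplications": {
                "includeServicePrincipals": agent_object_ids,
            },
            "applications": {"includeApplications": ["All"]},
            "locations": {
                "includeLocations": ["All"],
                "excludeLocations": ["AllTrusted"],
            },
        },
        "grantControls": {"operator": "OR", "builtInControls": ["block"]},
    }

ca_policy = agent_conditional_access_policy(["<agent-object-id>"])
```

Deploying in report-only mode first lets you see which agent workflows the policy would break before you enforce it.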
Check whether your sensitivity labels propagate from source documents to agent outputs. Most institutions already label documents as "Confidential NPI," but the gap is inheritance: when an agent processes that document, the output should inherit the same classification and encryption requirements. Verify this works for agent-generated content, not just human-created files.
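The rule to verify is simple: an agent output should carry the most restrictive label among its source documents. Purview enforces inheritance natively when configured; the sketch below only illustrates the highest-sensitivity-wins logic, and the label ordering is an assumed example, not your tenant's taxonomy.

```python
# Illustrative inheritance rule: output label = most restrictive source label.
# The ordering below is an example; substitute your institution's label taxonomy.
LABEL_RANK = ["Public", "General", "Confidential", "Confidential NPI"]

def inherited_label(source_labels: list[str]) -> str:
    """Return the most restrictive label among the agent's source documents."""
    if not source_labels:
        return LABEL_RANK[0]
    return max(source_labels, key=LABEL_RANK.index)
```

Spot-checking a few agent outputs against this rule tells you whether inheritance is actually propagating or silently defaulting to an unlabeled state.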
Confirm your current Purview audit policies capture agent activities. Standard M365 audit logging records agent actions, but default retention periods may not meet GLBA or examiner evidence requirements. Extend retention policies and configure alerts for agent-specific events: policy violations, out-of-scope data access, and permission escalations.
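A quick way to audit the audit settings themselves is to compare each retention policy against the minimum your examiners expect. The seven-year threshold below is an illustrative assumption, not a GLBA citation; confirm the period your regulator actually requires.

```python
# Illustrative check: flag audit retention policies that fall short of the
# required minimum. The 2,555-day (7-year) threshold is an assumption only.
REQUIRED_RETENTION_DAYS = 2555

def underretained_policies(policies: list[dict]) -> list[str]:
    """Return names of audit retention policies below the required threshold."""
    return [
        p["name"] for p in policies
        if p.get("retentionDays", 0) < REQUIRED_RETENTION_DAYS
    ]

sample = [
    {"name": "DefaultAudit", "retentionDays": 180},       # M365 default tier
    {"name": "AgentActivityAudit", "retentionDays": 2555},
]
```

Running this against an export of your current policies makes the gap between default M365 retention and examiner expectations concrete.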
If your institution has an AI acceptable use policy, update it to cover autonomous agents specifically. If you don't have one yet, this is the trigger: document which processes agents can execute, define escalation triggers that require human review, and specify data classifications that agents cannot access without approval workflows.
These five controls aren't aspirational. They map directly to what OCC examiners evaluate under Bulletin 2023-17, the interagency guidance on third-party relationships and risk management. The bulletin requires banks to adopt risk management processes proportional to the complexity of their third-party relationships. An autonomous AI agent accessing your entire tenant qualifies as complex.
Is Your Tenant Ready for Autonomous AI?
ABT configures the governance controls your institution needs before Agent 365 goes live May 1.
The Data Residency Question Your Regulator Will Ask
Here's the question your examiner will ask that most IT teams can't answer yet: when Copilot uses Anthropic's Claude to process your data, where does that processing happen?
Microsoft states that all Copilot data remains within the Microsoft 365 service boundary, protected by Enterprise Data Protection. Prompts and responses are never used to train foundation models. Tenant isolation prevents cross-tenant data visibility. These are meaningful protections.
But Anthropic is now a designated sub-processor for Microsoft 365 organizations using Copilot with Claude models. That sub-processor relationship, effective since January 7, 2026, creates documentation obligations under OCC third-party risk management guidance. Your institution needs to verify and document: how the data processing agreement covers Anthropic's role, whether data flows remain within U.S. jurisdiction for your regulated workloads, and how your shadow AI controls extend to multi-model architectures.
Picture the scenario: your institution enables autonomous agents without configuring DLP policies for agent-processed data. A mortgage loan officer's agent autonomously compiles a borrower package, pulling SSNs and bank statements from SharePoint, credit reports from a connected system, and denial history from archived emails.
The agent generates a summary containing NPI from denied applications alongside active applications. This creates a fair lending documentation risk, a potential ECOA violation, and an audit trail that shows the institution had no DLP controls on autonomous AI data access.
The scenario above isn't hypothetical. It's the natural consequence of deploying autonomous agents into a tenant where permissions were configured for human users browsing files manually. Copilot amplifies oversharing because agents actively surface data that was technically accessible but practically buried in folder structures no human would navigate.
The Governance Gap Most Institutions Won't Close in Time
The gap between what autonomous agents can do and what most financial institutions have configured is wider than any previous technology shift in banking IT. Cloud migration had years of runway. Copilot deployment had months. You have 30 days between this article's publication and general availability.
Default Agent Configuration
- Agents inherit user's full permission set
- No DLP policies specific to agent workflows
- No Conditional Access scoped to agent identities
- Sensitivity labels don't propagate to agent outputs
- Audit logging captures agent actions but lacks regulatory retention
- No documented acceptable use policy for autonomous AI
Governed Agent Configuration
- Agents receive scoped, time-limited permissions via access packages
- Purview DLP blocks agent processing of unencrypted NPI
- Conditional Access restricts agent data access by classification and context
- Sensitivity labels inherited from source to output automatically
- Audit trails with GLBA-compliant retention and examiner-ready reporting
- Board-approved AI acceptable use policy with defined escalation triggers
The first list is where most credit unions, community banks, and mortgage companies will be on May 1 if they don't act. Not because their IT teams are negligent, but because Agent 365's governance requirements are new. The Agent Registry, Entra Agent IDs, and agent-specific Conditional Access policies didn't exist before the Wave 3 announcement. Financial institutions that hardened their tenants for standard Copilot still have work to do.
Federal Reserve Supervisory Letter SR 11-7 applies to all AI/ML models in banking, including third-party models accessed through Microsoft 365. The letter requires effective challenge of complex models by objective, informed parties, understanding of model limitations, and ongoing validation. Agent 365 adds a layer: the autonomous agents themselves become "models" that need governance, separate from the foundation models they run on.
How ABT Bridges the Gap
Access Business Technologies has configured AI governance frameworks for 750+ financial institutions. Agent 365 governance extends the same Guardian operating model that already wraps around your M365 tenant.
Guardian's hardening templates cover 160+ Microsoft Secure Score controls across 11 categories, including Conditional Access, Defender, DLP, and information protection. For Agent 365, ABT extends those templates to cover agent identities: new Conditional Access policies scoped to Entra Agent IDs, DLP rules for agent-processed data classifications, sensitivity label propagation to agent outputs, and audit retention policies that satisfy GLBA Safeguards Rule requirements.
Guardian's continuous monitoring detects compliance drift in real time. When an agent's permissions change, when a DLP policy is bypassed, or when an agent accesses data outside its authorized scope, Guardian surfaces the issue before your next examination. This is the operational evidence examiners expect under OCC Bulletin 2023-17: documented, continuous, and auditable.
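Guardian's internals aren't public, but the idea behind drift detection is straightforward: snapshot each agent's configuration, diff it against the approved baseline, and surface anything that changed. A hypothetical sketch of that core comparison:

```python
# Hypothetical drift check: compare an agent's current configuration snapshot
# against its approved baseline. This illustrates the concept only; it is not
# Guardian's actual implementation.

def config_drift(baseline: dict, current: dict) -> dict:
    """Return keys whose values changed, were added, or were removed."""
    keys = set(baseline) | set(current)
    return {
        k: {"baseline": baseline.get(k), "current": current.get(k)}
        for k in keys
        if baseline.get(k) != current.get(k)
    }

# Example: an agent quietly gains a Mail.Send permission it wasn't approved for.
baseline = {"permissions": ["Sites.Read.All"], "dlp_policy": "NPI-Block"}
current = {"permissions": ["Sites.Read.All", "Mail.Send"], "dlp_policy": "NPI-Block"}
```

The value for examiners is less the diff itself than the timestamped record that the diff was checked continuously, not just at deployment.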
The Treasury Department's Financial Services AI Risk Management Framework, published February 2026, calls for lifecycle risk management, governance integration, and transparency across all AI deployments. ABT's approach maps directly to this framework because Guardian was built for the same regulatory reality these newer guidelines describe.
Frequently Asked Questions
What is Microsoft Agent 365, and when is it available?
Agent 365 is Microsoft's control plane for managing, governing, and securing autonomous AI agents across an organization's Microsoft 365 tenant. It goes generally available on May 1, 2026, priced at $15 per user per month. It includes an Agent Registry, Entra Agent IDs for each agent, Conditional Access policy support, Microsoft Purview integration for DLP and audit logging, and Microsoft Defender integration for threat detection.
Does Copilot data leave the Microsoft 365 boundary when Claude models process it?
Microsoft states that all Copilot data remains within the Microsoft 365 service boundary, protected by Enterprise Data Protection. Anthropic became a designated sub-processor for Microsoft 365 Copilot effective January 7, 2026. Prompts and responses are not used to train foundation models, and tenant isolation prevents cross-tenant data visibility. However, financial institutions should review the updated Data Processing Agreement to document the sub-processor relationship for regulatory compliance purposes.
What controls should financial institutions verify before Agent 365 goes live?
Financial institutions should verify five controls before Agent 365 goes live: DLP policies that cover agent-processed data in Microsoft Purview, Conditional Access policies scoped to Entra Agent IDs (not just user accounts), sensitivity label inheritance from source documents to agent outputs, audit logging with retention periods that meet GLBA and examiner evidence requirements, and a board-approved acceptable use policy for autonomous AI that defines which processes agents can execute and what triggers human review.
How does OCC Bulletin 2023-17 apply to autonomous AI agents?
OCC Bulletin 2023-17, the interagency guidance on third-party relationships published June 6, 2023, requires banks to adopt risk management processes proportional to the complexity of their third-party relationships. Autonomous AI agents deployed through CoWork and Copilot Studio access organizational data through Microsoft's platform, with Anthropic as a sub-processor. This creates a multi-layered third-party relationship that examiners will evaluate for due diligence, ongoing monitoring, contractual protections, and documented risk assessments.
How do autonomous agents differ from standard Copilot?
Standard Copilot is reactive: a user asks a question, and Copilot responds using data the user can access. Autonomous agents are proactive: they execute multi-step workflows autonomously, accessing multiple data sources and producing outputs without human review at each step. This requires additional governance controls including agent-specific identities in Entra ID, scoped permissions with automatic expiration, DLP policies covering agent data processing, and audit trails that track every autonomous action for regulatory evidence.
How does ABT help institutions govern Agent 365?
Access Business Technologies extends its Guardian operating model to cover Agent 365 governance for 750+ financial institutions. This includes configuring Conditional Access policies for Entra Agent IDs, extending DLP rules to agent-processed data, setting up sensitivity label propagation, configuring audit retention for GLBA compliance, and documenting acceptable use policies. Guardian's continuous monitoring then tracks compliance drift across all agent activities, providing the operational evidence examiners expect under OCC Bulletin 2023-17.
AI Agents Are Running. Governance Launches in 30 Days.
Copilot CoWork is already deploying autonomous agents in Frontier tenants. Agent 365, the governance control plane, goes live May 1. ABT helps you get the prerequisites in place so you're ready on day one.
Justin Kirsch
CEO, Access Business Technologies
Justin Kirsch has led AI governance and Microsoft 365 security implementations for financial institutions since 1999. As CEO of Access Business Technologies, the largest Tier-1 Microsoft Cloud Solution Provider dedicated to financial services, he helps more than 750 credit unions, community banks, and mortgage companies configure the controls regulators expect before new technology enters their environments.

