What Are AI Agents
AI agents are the new generation of automation robots, built on top of large language models. They combine base models (whose knowledge is frozen at a training cutoff date) with grounding capabilities such as web search or the Model Context Protocol (MCP), which provides structured access to enterprise systems and data, to gather information and execute tasks. Through reasoning, AI agents plan and sequence their actions much like a human would. Nearly every major vendor now ships AI agent capabilities inside its products, promising ease of use and automation of mundane everyday tasks. The barrier to deploying agents is extremely low, and AI agents are becoming the new cornerstone of the white-collar workflow, following in the footsteps of Microsoft Excel.
The Emerging AI Agent Ecosystem: Key Vendors and Platforms
AI agents are already part of your enterprise landscape, embedded into key platforms and vendor ecosystems without a traditional buy-in or onboarding process. They arrive as built-in capabilities inside tools organizations already license and trust. All major vendors are trying to claim a piece of this still very young market.
Microsoft integrates agents deeply across Microsoft 365, Copilot Studio, Power Platform, and Azure with AI Foundry. Google follows with agent capabilities based on Gemini across Workspace and Google Cloud. AWS provides agent frameworks via Bedrock, tightly coupled with its service portfolio. Salesforce embeds agents natively into its CRM platform, while SAP is introducing agentic capabilities aligned to core business processes and ERP workloads. In parallel, many SaaS vendors are layering agents onto their applications by default.
Who Owns AI Governance? The Role of the AI Center of Excellence (AI CoE)
Following the arrival of ChatGPT in largely unprepared enterprises that are still sorting out their AI use cases, still developing an AI strategy, and far from having an operating model, the next wave is already hitting the enterprise shore: AI agents. AI agents can be enabled, configured, and used directly by business users, often without procurement, architecture review, or governance oversight. They are everywhere by design, and they scale faster than most organizations can keep up with. As is so often the case, this is a self-inflicted problem: to keep up with the AI hype, features such as agents are activated in the platforms mentioned above without a concrete strategy, governance, or operating model in place.
The topic of AI requires clearly defined roles and ownership, as well as a dedicated organizational unit: the AI Center of Excellence (AI CoE). This is the only way to channel AI initiatives effectively and make it clear across the organization that AI is being managed in a structured and accountable way. It requires strong commitment and the assignment of real, accountable positions. This is not a new concept: during the cloud boom, most enterprises established a Cloud Center of Excellence (CCoE) responsible for all cloud-related topics and their governance. In the same way, the AI CoE should serve as the guardian and owner of AI and AI agent governance.
To learn more about building an AI CoE, visit: https://aicenterofexcellence.de/
The Real Problem: Uncontrolled Proliferation and Invisible Risk
The core of the issue is not just that these agents are “there”; it is that they are operationally invisible. When an employee starts using a new software application, there is usually a trace: a new login, a new URL, or a line item on a credit card. AI agents, however, live inside the ecosystem. They are a new button in a familiar interface or a background process running in a tenant you already “trust.”
This invisibility masks a fundamental shift: we have moved from Passive AI to Active AI. While early AI was a “library” where you could ask questions and get answers, agents are “employees” with hands. Through the Model Context Protocol (MCP) and grounding, they have the ability to move data across silos and execute changes in system states. The risk isn’t just a “bad answer”; it’s a bad action taken at enterprise scale, triggered by a user who may not understand the underlying logic of the agent they just “activated.”
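The shift from Passive to Active AI can be made concrete in a few lines of code. The sketch below is purely illustrative (the tool names and the approval flag are my own assumptions, not part of any real MCP implementation); it shows why a state-changing tool call deserves a different control path than a read-only one:

```python
from dataclasses import dataclass

@dataclass
class ToolCall:
    """A single action an agent wants to take against an enterprise system."""
    tool: str            # e.g. "crm.update_record" (hypothetical tool name)
    writes_data: bool    # True if the call changes system state

def execute(call: ToolCall, human_approved: bool = False) -> str:
    """Read-only calls pass through; state-changing calls need explicit approval."""
    if call.writes_data and not human_approved:
        return f"BLOCKED: {call.tool} changes system state and needs approval"
    return f"EXECUTED: {call.tool}"
```

A "library"-style lookup (`wiki.search`) executes freely, while an "employee with hands" action (`crm.update_record`) is blocked until a human signs off. The point is not this specific gate, but that active agents need an execution-time control layer at all.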
A Familiar Pattern: What AI Agents Have in Common with SaaS Sprawl
If this sense of looming chaos feels like déjà vu, it is because we have been here before. Not long ago, the enterprise world was rocked by SaaS Sprawl. The ease of swiping a credit card meant that departments could bypass IT and Procurement to stand up their own solutions. We ended up with “Business-Led IT”, or as others call it, “Shadow IT”: a fragmented landscape of redundant tools, siloed data, and massive security holes that many enterprises are still trying to patch today.
AI agents are simply SaaS Sprawl 2.0, but with a dangerous twist: this time, they do not even require a credit card.
The parallels are striking:
- The Procurement Bypass: Just as SaaS moved the “buy” decision from the CIO to the department head, AI agents move the “enable” decision to the individual user. Since the platform (Microsoft, Salesforce, etc.) is already licensed, there is no financial trigger to alert Governance teams that a new autonomous agent has been deployed.
- The “Orphan” Problem: One of the biggest risks in SaaS is the “Orphaned App”: a tool owned by someone who has since left the company. With AI agents, this risk is amplified. An agent can be configured to automate a specific data pipeline, and if its creator leaves, that agent continues to run, access data, and execute actions with no one at the helm to monitor it.
- The Transparency Gap: You cannot govern what you cannot see. In the SaaS era, we struggled to build an accurate inventory of applications. With AI agents, the inventory is even more elusive because the agent is not even a standalone “app”; it is a capability buried inside an existing workflow.
Just as with SaaS, the same logic applies here: we must stop treating AI agents as tools, features, or gimmicks and start treating them as governed assets.
Classifying AI Agents
To regain control, we need a common language. We cannot apply the same level of heavy governance to a personal productivity AI Agent as we do to an AI Agent managing our ERP system. That would hinder innovation and drown the organization in bureaucracy. Instead, we must classify AI Agents based on the sensitivity of the data they touch and the impact of their actions.
Below are three proposed risk classes for AI Agents. This framework should be adapted to fit the specific needs of the organization; for example, it could be extended to include an “AI Agent with Enterprise Data” if the agent operates across multiple legal entities. For simplicity, the focus remains on the following three core tiers:
- Personal AI Agent: These are AI Agents used by an individual for personal productivity. They summarize personal emails, draft notes, or organize a calendar. They do not have access to shared company repositories or sensitive customer data.
- AI Agent with Company Data: These AI Agents have access to internal company knowledge. They can query internal Wikis, SharePoint, or team collaboration channels to help find information faster. The risk here involves internal data leakage or the hallucination of internal company facts.
- AI Agent with Critical Company Data: This is the high-stakes zone. These AI Agents have access to PII, financial records, and intellectual property, or they have write access to core systems. They are not just reading data – they are executing actions that affect the business main processes or data.
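The three tiers above can be expressed as a simple classification rule. This is a minimal sketch with assumed boolean inputs (the exact signals an organization uses to classify an agent will differ); note that write access to core systems alone is enough to land an agent in the top tier:

```python
from enum import Enum

class AgentRiskClass(Enum):
    PERSONAL = 1       # personal productivity, no shared repositories
    COMPANY_DATA = 2   # reads internal wikis, SharePoint, collaboration channels
    CRITICAL = 3       # PII, financials, IP, or write access to core systems

def classify_agent(touches_company_data: bool,
                   touches_critical_data: bool,
                   has_write_access: bool) -> AgentRiskClass:
    # Executing actions is as sensitive as reading critical data.
    if touches_critical_data or has_write_access:
        return AgentRiskClass.CRITICAL
    if touches_company_data:
        return AgentRiskClass.COMPANY_DATA
    return AgentRiskClass.PERSONAL
```

Encoding the rule this way keeps classification consistent: two departments asking the same questions about the same agent get the same tier.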
AI Agent Governance Criteria
Before we look at how these classes are governed, we must define the specific pillars of oversight. Each pillar is designed to ensure transparency, accountability, and reliability.
- AI Agent Company Catalog: The AI Agent must be officially documented in a central registry.
- AI Agent Manager: A technical role responsible for the day-to-day management and maintenance of the AI Agent (Technical Responsibility).
- AI Agent Owner: The person accountable for cost management and the business value of the AI Agent (Financial/Strategic Accountability).
- AI Agent Data Owner: The individual responsible for the data the AI Agent accesses. If multiple sources are used, one person oversees the data integrity.
- Technical Architecture Diagram: A visual representation of how the AI Agent connects to enterprise systems and data sources.
- AI Agent Code Repository: The source code and prompts must be stored centrally so the AI Agent can be audited or rebuilt by others if necessary.
- Process Diagram Integration: The AI Agent’s workflow must be documented within the corporate BPM platform (e.g., Adonis) just like any other business process.
- AI Control Questions: A set of documented test cases and “unit tests” used to regularly verify that the AI Agent’s logic and quality remain stable.
Governance Requirements by AI Agent Class
The following table defines the mandatory governance requirements for each risk class. As the potential organizational impact of the AI Agent increases, the baseline controls for its operation become more stringent.
| Governance Criteria | Personal AI Agent | AI Agent w/ Company Data | AI Agent w/ Critical Data |
| --- | --- | --- | --- |
| Governance Level | Light | Moderate | High |
| AI Agent Company Catalog | ✓ | ✓ | ✓ |
| AI Agent Manager | ✓ | ✓ | ✓ |
| AI Agent Owner | | ✓ | ✓ |
| AI Agent Data Owner | | ✓ | ✓ |
| Technical Architecture Diagram | | ✓ | ✓ |
| AI Agent Code Repository | | ✓ | ✓ |
| Process Diagram Integration | | | ✓ |
| AI Control Questions | | | ✓ |
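The table above can also be captured in code so that new agents are checked for compliance automatically rather than by hand. The shorthand criteria labels below are my own, not official terms; the point is that each tier's requirement set is a superset of the one below it:

```python
# Required governance criteria per risk class, mirroring the table.
REQUIRED = {
    "personal":      {"catalog", "manager"},
    "company_data":  {"catalog", "manager", "owner", "data_owner",
                      "architecture_diagram", "code_repository"},
    "critical_data": {"catalog", "manager", "owner", "data_owner",
                      "architecture_diagram", "code_repository",
                      "process_diagram", "control_questions"},
}

def missing_criteria(risk_class: str, documented: set[str]) -> set[str]:
    """Return the governance criteria an agent still lacks for its class."""
    return REQUIRED[risk_class] - documented
```

A check like this could run nightly against the catalog: any agent with a non-empty result is flagged to the AI CoE.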
AI Agent Asset Management
As mentioned earlier, most organizations are completely unprepared for this wave. Just like with SaaS, the sheer volume and invisibility of these assets mean that manual tracking is impossible from day one. To really get a grip on the problem, a dedicated, specialized tool is needed.
The Tool Landscape
While the market is still young, the urgency of the issue has been recognized by leading advisory and analyst firms, giving rise to a new software category: AI Governance Platforms. As highlighted in major reports such as the Forrester Wave™, a handful of vendors currently stand out:
1. SAP LeanIX (The Architect’s Choice)
Already a leader in Enterprise Architecture, LeanIX has launched a dedicated AI Agent Hub. Its superpower is the integration with SAP Signavio, allowing you to see not just which agent is running, but which business process it is supporting. It is the gold standard for organizations that view agents as long-term architectural assets.
2. Credo AI (The Governance Choice)
Recognized as a Leader in the Forrester Wave™, Credo AI is the frontrunner for regulation (e.g., EU AI Act) and risk control. Their AI Registry focuses heavily on “Governance, Risk, and Compliance” (GRC). It provides deep policy intelligence, ensuring that every agent has passed its specific “Control Questions” before it goes live.
3. IBM watsonx.governance (The Lifecycle Choice)
Also named a Leader in the Forrester Wave™, IBM offers the most technically robust solution. Unlike others that just “catalog” agents, watsonx monitors them in real-time. It can detect hallucinations, drift, and toxic output as the agent runs. It connects the legal team (Policy) with the developer team (DevOps), making it the best choice for organizations building their own high-risk custom agents.
4. Microsoft Native Stack (The Ecosystem Choice)
For organizations running purely on Microsoft, you may not need a third-party tool. Microsoft is building governance directly into the platform, though it is currently split across three consoles:
- Copilot Studio: Tracks low-code/no-code agents.
- Azure AI Foundry: Assigns a unique Microsoft Entra Agent ID to pro-code agents.
- Microsoft Purview: Scans agent interactions for sensitive data leaks.
Feature Matching: Which Tool Covers What?
The following table maps these vendors against our Governance Criteria.
| Governance Criteria | SAP LeanIX | Credo AI | IBM watsonx | Microsoft Native* |
| --- | --- | --- | --- | --- |
| AI Agent Catalog | ✓ (Excellent) | ✓ (Excellent) | ✓ (Excellent) | ✓ (Fragmented) |
| AI Agent Manager | ✓ | ✓ | ✓ | ✓ (Entra ID) |
| AI Agent Owner | ✓ | ✓ | ✓ | ✓ (Tags) |
| AI Agent Data Owner | ✓ | ✓ | ✓ (Strong) | ✓ (Purview) |
| Tech. Architecture | ✓ (Best in Class) | ✓ | ✓ (Lineage) | ✓ (Foundry) |
| AI Agent Code Repo | ✓ | ✓ | ✓ (Git Integ.) | ✓ (DevOps) |
| Process Diagram | ✓ (Signavio) | ✗ | ✓ (Limited**) | ✓ (Power Auto.) |
| AI Control Questions | ✓ | ✓ (Best in Class) | ✓ (Real-time) | ✓ (Azure Eval) |
*Note: “Microsoft Native” requires combining Copilot Studio, Azure AI Foundry, and Purview.
**IBM integrates with BPM tools but does not have a native “Process Modeler” in the governance console itself.
Governance Processes Aligned to AI Agent Classes
Governance processes should be an enabler, not a blocker. The following processes are proposals and should be adapted to fit your company's specific needs and organizational structure. By matching process rigor to risk class, we ensure that personal productivity thrives while critical systems remain secure.
1. Automated AI Agent Process (Personal AI Agent)
Goal: Visibility without Bureaucracy
For personal agents, we rely on automation rather than permission. The system creates an automated inventory of all agents in the AI Agent Company Catalog, logging the creator as the AI Agent Manager. There is no explicit approval step; the focus is purely on visibility, ensuring we can track usage and identify orphaned agents the moment they appear.
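The automated inventory and orphan detection described above can be sketched in a few lines. This is a toy in-memory model (a real implementation would sit on a platform's discovery API and an HR directory, neither of which is assumed here by name):

```python
from datetime import date

catalog: dict[str, dict] = {}   # the AI Agent Company Catalog (in-memory sketch)

def auto_register(agent_id: str, creator: str) -> None:
    """No approval step: every discovered agent is logged automatically,
    with its creator recorded as the AI Agent Manager."""
    catalog.setdefault(agent_id, {"manager": creator, "registered": date.today()})

def find_orphans(active_employees: set[str]) -> list[str]:
    """Agents whose manager has left the company keep running; surface them."""
    return [aid for aid, entry in catalog.items()
            if entry["manager"] not in active_employees]
```

Running `find_orphans` against the HR directory on a schedule turns the SaaS-era "Orphaned App" problem into a routine report instead of a surprise.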
2. Light AI Agent Process (Company Data)
Goal: Structured Documentation & Awareness
When agents touch company data, the AI CoE steps in to formalize the setup. It collects the key ownership roles (Agent Owner and Data Owner) and documents the cost analysis and budget forecast for the agent. Technical architecture diagrams are created jointly with the CoE and stored in the EAM (Enterprise Architecture Management) tool. If the agent has reuse potential, its code is secured in the Agent Code Repository. Finally, Information Security, Data Protection, and Enterprise Architecture are explicitly informed of the use case to ensure transparency.
3. AI Agent Process (Critical Company Data)
Goal: Risk Control & Formal Approval
For high-stakes agents, we include all steps from above but add a strict validation layer. AI Control Questions and the testing process are coordinated directly with the AI CoE to ensure stability. Unlike the lighter tier, this requires formal approval from Information Security, Data Protection, Enterprise Architecture, the AI CoE, and potentially the Works Council. Furthermore, the underlying business process must be formally documented or updated in the BPM system.
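The formal approval gate for this tier is easy to encode. The approver names below are placeholders of my own choosing; the useful property is that go-live is a pure function of collected sign-offs, so it can be audited and cannot be shortcut:

```python
# Sign-offs required before a critical-data agent goes live (illustrative names).
REQUIRED_APPROVERS = {"InfoSec", "DataProtection", "EnterpriseArchitecture", "AICoE"}

def may_go_live(approvals: set[str], works_council_required: bool = False) -> bool:
    """True only if every mandatory body has formally approved the agent."""
    needed = REQUIRED_APPROVERS | (
        {"WorksCouncil"} if works_council_required else set()
    )
    return needed <= approvals
```

Because the check is just a set comparison, a missing approval can never be silently waived, and the audit trail is simply the set of recorded sign-offs.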
Consult Me
As always, if you need any help, please feel free to reach out.