Share this article:

Bringing Confidence to Autonomy: A Governance Blueprint for Agentic AI

Share This Article

Agentic AI or AI Agents can be viewed in many ways, similar to new employees joining a company. You wouldn’t hand them all the keys or give them broad permissions from day one. Instead, you grant them only the permissions they need for their day-to-day tasks and ensure they consult you on major decisions. The same fundamental principle illustrated in this simple example underlies Agentic AI Governance or Agentic Governance.

Overview of Agentic Governance

AI Governance encompasses the rules, policies, practices, and frameworks necessary to guide the responsible development, deployment, and use of artificial intelligence technologies.

Unlike previous generations of AI systems, which either required significant human oversight or performed very strictly defined tasks, Agentic AI extends the autonomy given to AI systems. Therefore, it requires stricter rules to keep it in check.

Agentic Governance is built upon three pillars: people, processes, and technology:

  • People will need to take on responsibility for the Agents and require enablement to do so with confidence.
  • Processes must be implemented to manage Agent action permissions and determine access control. Furthermore, security practices and regulatory compliance need to be established.   
  • Technology must provide measures for robust operations, as well as ensure safety and security. 

While these dimensions apply to a degree to all agentic systems, it is important to point out that they are critical for autonomous agentic workflows (fully autonomous agents). Collaborative Agents (Agents that are directed by the user) are inherently safer because the user is constantly kept as a human-in-the-loop.

Implementation and concrete measures

A detailed breakdown of the key measures of implementing Agentic Governance can be found below. Some of these measures are straightforward to implement; others require a profound analysis of company processes to fit the company-specific risk profile and integrate with governance and security measures already in place:

People:

Enablement

  • Change management: AI, especially Agentic AI transforms the way we work, introduces new concepts of collaboration and delegation, and gives rise to fears about being replaced by AI. These challenges need to be addressed, ideally through practical, hands-on approaches.
  • Upskilling: Employees responsible for managing or implementing Agents in their respective domains must be trained on how Agents function and the techniques required to control them. Only with proper training can accountability for agentic processes be assigned.

Responsibility

  • Organizational structure and new roles for oversight: Business stakeholders will oversee Agents in their respective domains, supported by a central department that coordinates general oversight. New roles will therefore be needed, and the central responsibility should likely sit within the department already managing AI-related topics.
  • Responsibility assignment: As with any custom software, there must be someone accountable for its correct operation. Oversight should therefore be explicitly assigned to specific employees.

Processes:

Action permissions

  • Risk model (static/dynamic): Agents performing actions autonomously pose an inherent risk of error. Due to the non-deterministic nature of AI systems, this risk can never be completely eliminated, but substantially mitigated. A risk model should determine for a specific Agent, which actions can be performed under which circumstances. I.e., certain actions might require human approval or at least monitoring (human in the loop, human-on-the-loop), while others can be performed autonomously (human-out-of-the-loop). The risk model should make these decisions based on risk likelihood, severity, and impact (financial impact, reputational damage, etc.). This requires a detailed understanding of company processes.

Access Control

  • Human access control: Agents must not bypass existing access restrictions or have more privileges than their users. Users interacting with an Agent should always have equal or higher access rights than the Agent itself.
  • Agent access control: Agents should only access the information necessary to perform their tasks. Irrelevant information can confuse them and lead to errors. Additionally, limiting data access reduces the risk of exposing sensitive company information.

Security practices

  • Testing and red teaming: The robustness and cybersecurity of Agentic AI are still evolving. Techniques like prompt injection already pose real threats, and more may emerge over time. Both external- and internal-facing systems should be regularly tested for such vulnerabilities.
  • Incident reporting: Because Agents behave non-deterministically, errors are inevitable. Some will be minor and easily fixed, while others could reveal serious risks that need addressing. Clear processes for reporting and addressing incidents are essential.
  • Cybersecurity assessment: As with other software solutions, Agentic AI needs to pass the same basic cybersecurity requirements to secure company assets.

Regulatory compliance

  • Regulatory compliance with legislation like the EU AI Act needs to be verified.

Technology:

Operations

  • Tracing: Robust operations rely first of all on understanding why things go wrong. Tracing logs the agentic systems interactions and allows operators to understand where problems occurred. 
  • Evals: A central element to AI development is Evals or evaluation datasets, i.e., datasets that show a gold standard of workflow results, given specified inputs. These can be applied to the whole workflow or individual steps and help tremendously in testing for robustness, as well as uncovering error patterns.  
  • Operations guardrails: GenAI-based Agents might sometimes get things wrong. While they increasingly can self-correct, explicit guardrails help them stay on course and successfully complete their intended task. 

Safety and Security

  • Outlier detection & monitoring: While close monitoring of every agentic process is unrealistic, detection of unusual occurrences (known as outlier detection) helps a central team to monitor them effectively.
  • Security guardrails: Common guardrails preventing exploitation techniques, such as jailbreaking, need to be implemented for each agentic workflow, preventing misuse of agentic systems.

 

Concrete first steps to take

As Agentic AI is a rapidly evolving field, a compromise between planning and experimentation must be struck. Depending on the risk appetite and regulatory environment of a company, the first steps to take in Agentic Governance will look different. 

The said company might decide to focus first on the technology and people dimensions to enable rapid deployment of Agents and gain valuable experience. The first measures to implement could, for example, be:

  • Upskilling employees in how to implement Agents.
  • Restrict action permissions and access control to actions and environments with little risk. 
  • Provide an Agent implementation platform (not part of Governance).

If the company prioritizes control and compliance, processes should be evolved first, and the following measures should be implemented next:

  • Assigning (or building-up) a central unit in the organizational structure to guide Agent efforts.
  • Creating a risk model for agentic actions and determining access controls.
  • Selecting a proof-of-concept use case and evaluating it for regulatory compliance.

 


Supervision Schemes:

  • Human-in-the-loop (HITL): A human actively participates in the decision process — reviewing, approving, or intervening before actions are taken (e.g., a human verifies each AI output).
  • Human-on-the-loop (HOTL): A human supervises the system’s operations and can intervene if needed, but the AI acts autonomously most of the time.
  • Human-out-of-the-loop (HOOTL): The system operates fully autonomously without human monitoring or intervention in real time.

 

Want to receive updates from us?

agree_checkbox

By subscribing, you consent to Unit8 storing and processing the data provided above in order to provide you with the requested content. For more information, please review our Privacy Policy.

Our newsletter features industry news, the latest case studies, and future Unit8 events.

close

This page is only available in english