AI agents become truly useful when they can act: create tickets, send messages, update records, orchestrate workflows. The flip side is obvious: every tool you expose is a potential source of harm if misused or misconfigured.

The “Use” stage in DASUD is where you define and control the agent’s action space.

From capabilities to governed actions

Think of every tool as a capability:

  • “lookup_kb” → can search internal knowledge bases.
  • “create_ticket” → can create entries in your IT system.
  • “send_email” → can contact users or customers.
  • “update_record” → can change persistent state in your databases.

Your job in Use is to decide:

  • Which agents can use which tools.
  • Under what conditions.
  • With what oversight.

This is the agent equivalent of role‑based access control (RBAC).

Create a capability catalogue

Start by creating a simple capability catalogue. For each tool, capture:

  • Name and description: what it does, in plain language.
  • Risk level: low (read‑only, non‑sensitive), medium (internal write actions), high (external actions, financial/HR changes).
  • Preconditions: what must be true before the tool can be used (e.g., user role, ticket priority, environment).
  • Oversight: whether the agent can call it autonomously, or whether human approval is required.

For each agent, list which tools from the catalogue it may access. This gives you a clear view of the agent’s action surface.
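As an illustration, a catalogue entry can be captured as a small data structure. This is only a sketch: the tool names, risk levels, and the per‑agent allow‑list below are hypothetical examples, not a prescribed schema.

```python
from dataclasses import dataclass
from enum import Enum

class Risk(Enum):
    LOW = "low"          # read-only, non-sensitive
    MEDIUM = "medium"    # internal write actions
    HIGH = "high"        # external actions, financial/HR changes

class Oversight(Enum):
    AUTONOMOUS = "autonomous"
    HUMAN_ON_THE_LOOP = "human_on_the_loop"
    HUMAN_IN_THE_LOOP = "human_in_the_loop"
    FORBIDDEN = "forbidden"

@dataclass(frozen=True)
class Capability:
    name: str
    description: str
    risk: Risk
    preconditions: tuple[str, ...] = ()
    # Default to the most cautious mode so new tools fail safe.
    oversight: Oversight = Oversight.HUMAN_IN_THE_LOOP

# Illustrative catalogue entries for the tools mentioned above.
CATALOGUE = {
    "lookup_kb": Capability(
        "lookup_kb", "Search internal knowledge bases",
        Risk.LOW, (), Oversight.AUTONOMOUS),
    "send_email": Capability(
        "send_email", "Contact users or customers",
        Risk.HIGH, ("sender verified", "recipient in scope"),
        Oversight.HUMAN_IN_THE_LOOP),
}

# Per-agent allow-list: the agent's action surface in one place.
AGENT_TOOLS = {"helpdesk_agent": {"lookup_kb", "send_email"}}
```

Keeping the catalogue as data (rather than prose) means the same source of truth can drive enforcement code and documentation.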

Match oversight to action risk

Not all actions deserve the same level of autonomy.

For low‑risk tools:

  • Allow autonomous calls: e.g., searching knowledge bases or reading dashboards.
  • Monitor usage at an aggregate level: watch for performance issues and unexpected spikes.

For medium‑risk tools:

  • Consider human‑in‑the‑loop: e.g., the agent proposes ticket updates and a human approves them.
  • Or human‑on‑the‑loop: the agent acts within tight constraints while humans review samples and metrics.

For high‑risk tools:

  • Default to human approval: the agent prepares actions (e.g., a draft email or staged transaction), but execution requires explicit human sign‑off.
  • Or forbid agent use entirely: in some domains, you may decide that certain tools are never agent‑accessible.

The key is to link risk classification (from Day 10) directly into Use policies.

Implement guardrails in code, not just in documentation

Policies are only as good as their implementation.

For each tool, ensure:

  • Hard checks exist: code should enforce preconditions and prevent calls from unauthorised agents or contexts, regardless of prompts.
  • Rate limits are applied: limit how many times a tool can be called in a given period, per agent or per user, to prevent runaway behaviour.
  • Fallbacks are defined: if a tool call fails or returns unusual results, define what the agent should do (e.g., escalate, retry with limits, or stop).

Guardrails turn your catalogue and RACI assignments into enforced behaviour rather than documented intent.
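The first two checks can be combined in a small wrapper that sits between the agent and the tool. A minimal sketch, assuming an agent allow‑list, a precondition callable, and a sliding‑window rate limit (all names here are illustrative):

```python
import time
from collections import deque

class ToolGuard:
    """Enforce an agent allow-list, a precondition, and a sliding-window
    rate limit around a tool call, regardless of what the prompt asked for."""

    def __init__(self, allowed_agents, precondition, max_calls, window_s):
        self.allowed_agents = set(allowed_agents)
        self.precondition = precondition   # callable(context) -> bool
        self.max_calls = max_calls
        self.window_s = window_s
        self.calls = deque()               # timestamps of recent calls

    def check(self, agent, context, now=None):
        """Return "allow" or a "deny: ..." reason; only allowed calls count
        against the rate limit."""
        now = time.monotonic() if now is None else now
        if agent not in self.allowed_agents:
            return "deny: unauthorised agent"
        if not self.precondition(context):
            return "deny: precondition failed"
        # Drop timestamps that have aged out of the window.
        while self.calls and now - self.calls[0] > self.window_s:
            self.calls.popleft()
        if len(self.calls) >= self.max_calls:
            return "deny: rate limit exceeded"
        self.calls.append(now)
        return "allow"
```

A denied check is also a natural place to trigger your fallback path: escalate, retry within limits, or stop, as defined per tool.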

Log and explain tool use

For auditability and debugging, log agent actions with enough detail to reconstruct what happened:

  • Which agent called which tool, on whose behalf, and when.
  • With what parameters (redacting sensitive parts where needed).
  • What the tool returned and what the agent did as a result.
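A structured record covering those three points might look like the following. This is a sketch: the field names, the redaction list, and the `log_tool_call` helper are illustrative, and your actual redaction rules should come from your privacy policy.

```python
import json
from datetime import datetime, timezone

# Illustrative: parameters that must never appear in logs verbatim.
SENSITIVE_FIELDS = {"email_body", "password", "ssn"}

def log_tool_call(agent, tool, on_behalf_of, params, result_summary):
    """Build a JSON-serialisable audit record, redacting sensitive parameters."""
    redacted = {k: ("[REDACTED]" if k in SENSITIVE_FIELDS else v)
                for k, v in params.items()}
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent": agent,
        "tool": tool,
        "on_behalf_of": on_behalf_of,
        "params": redacted,
        "result": result_summary,
    }

record = log_tool_call(
    "helpdesk_agent", "send_email", "user_42",
    {"to": "ops@example.com", "email_body": "..."}, "queued")
print(json.dumps(record, indent=2))
```

Because each record is plain JSON, the same data can feed dashboards for engineers and readable timelines for incident reviews.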

Present these logs in a way that non‑technical stakeholders can understand if needed (e.g., in incident reviews).

Over time, these logs will also help you refine risk classifications and oversight for tools.

Make it concrete

For one agent in your environment (or a planned one):

  • Build a capability catalogue entry for each tool it can use.
  • Assign risk levels and oversight modes.
  • Implement or specify hard checks, rate limits, and logging.
  • Review the catalogue with risk/compliance and business owners.

By intentionally governing the agent’s action space, you make sure that every capability it has is one you’ve consciously granted—and can explain if questioned.

If you’d like assistance or advice with your Data Governance implementation, or any other topic (Privacy, Cybersecurity, Ethics, AI and Product Management) please feel free to drop me an email here and I will endeavour to get back to you as soon as possible. Alternatively, you can reach out to me on LinkedIn and I will get back to you within the same day!
