You can have beautiful frameworks and detailed policies, but eventually someone will ask: “How do we know if our AI governance is working?” For traditional data governance, you might track glossary coverage, policy adoption, or data quality scores. For advanced AI—GenAI, RAG, agents—you need equally concrete measures.

The good news: you can still use DASUD to structure your KPIs.

Principles for AI governance metrics

Before listing specifics, keep a few principles in mind:

  • Measure behaviour, not just artefacts. It’s not enough to count how many policies exist; you want to see how often they’re followed.
  • Focus on leading indicators. Try to catch issues early (e.g., design coverage, risk‑assessment completion) rather than only measuring incidents after the fact.
  • Align with business and risk goals. Metrics should help you answer: are we deploying AI safely, fairly, and efficiently?

Design metrics

For the Design stage, track:

  • Use‑case coverage. Percentage of AI initiatives (GenAI, RAG, agents) that have a documented Design artefact (e.g., use‑case canvas, agent charter, RAG design sheet).
  • Risk classification. Percentage of use cases that have been explicitly assigned a risk tier (low/medium/high) and an oversight mode (HITL/HOTL/autonomous).
  • Red‑line adherence. Number or percentage of proposals rejected or modified due to violating predefined red‑lines (e.g., disallowed domains like medical diagnosis).

These metrics show whether Design is happening deliberately, not implicitly.
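
To make this tangible, here is a minimal sketch of how the first two Design KPIs could be computed from a use‑case inventory. The record fields (design_artefact, risk_tier, oversight_mode) are illustrative assumptions, not a prescribed schema:

```python
# Minimal sketch: Design-stage KPIs from a hypothetical use-case inventory.
# Field names are illustrative, not a real tool's schema.

use_cases = [
    {"name": "support-chatbot", "design_artefact": "use-case canvas",
     "risk_tier": "medium", "oversight_mode": "HITL"},
    {"name": "contract-rag", "design_artefact": None,
     "risk_tier": None, "oversight_mode": None},
]

def pct(matching: int, total: int) -> float:
    """Percentage, guarding against an empty inventory."""
    return round(100 * matching / total, 1) if total else 0.0

total = len(use_cases)
coverage = pct(sum(1 for u in use_cases if u["design_artefact"]), total)
classified = pct(sum(1 for u in use_cases
                     if u["risk_tier"] and u["oversight_mode"]), total)

print(f"Use-case coverage: {coverage}%")       # 50.0% on this toy data
print(f"Risk classification: {classified}%")   # 50.0% on this toy data
```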

Acquire metrics

For Acquire, monitor:

  • Source approval coverage. Percentage of data sources, knowledge repositories, and tools used by AI systems that have been explicitly approved and catalogued.
  • Fine‑tuning and RAG governance. Number or percentage of fine‑tuning datasets and RAG repositories with documented owners, classification, and retention rules.
  • Prompt and tool library governance. Percentage of prompts and tools used at scale that have gone through review and approval.

These metrics reflect how much of what you feed AI has been consciously governed.
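
As a sketch, the fine‑tuning and RAG governance KPI can be computed by checking each repository record for required metadata. The required fields and the records below are illustrative assumptions, not a real catalogue schema:

```python
# Minimal sketch: Acquire-stage governance coverage for RAG repositories.
REQUIRED_FIELDS = ("owner", "classification", "retention_rule")

repositories = [
    {"name": "policies-kb", "owner": "legal",
     "classification": "internal", "retention_rule": "7y"},
    {"name": "scraped-faq", "owner": None,
     "classification": None, "retention_rule": None},
]

governed = [r for r in repositories
            if all(r.get(f) for f in REQUIRED_FIELDS)]
print(f"RAG governance coverage: "
      f"{100 * len(governed) / len(repositories):.0f}%")

# Listing the gaps is often more useful than the headline number.
for r in repositories:
    missing = [f for f in REQUIRED_FIELDS if not r.get(f)]
    if missing:
        print(f"  {r['name']}: missing {', '.join(missing)}")
```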

Store metrics

For Store, consider:

  • Memory and log governance. Percentage of AI systems with defined retention policies for logs, embeddings, and memories.
  • Segmentation. Number of environments/tenants where memory and vector stores are appropriately segmented (no cross‑tenant or cross‑domain mixing where it shouldn’t occur).
  • Access control. Number of policy exceptions or incidents related to unauthorised access to AI artefacts (logs, models, vector stores).

These show whether you’re managing the accumulation of artefacts responsibly.
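
The segmentation metric lends itself to an automated check. A minimal sketch, assuming you can export which tenants write to which vector‑store namespaces (the data below is hypothetical):

```python
# Minimal sketch: flagging cross-tenant mixing in vector-store namespaces.
# namespace -> tenants observed writing to it (hypothetical export).
namespace_tenants = {
    "kb-tenant-a": {"tenant-a"},
    "kb-tenant-b": {"tenant-b"},
    "kb-shared": {"tenant-a", "tenant-b"},  # cross-tenant mixing
}

violations = {ns: t for ns, t in namespace_tenants.items() if len(t) > 1}
print(f"Properly segmented: "
      f"{len(namespace_tenants) - len(violations)}/{len(namespace_tenants)}")
for ns, tenants in violations.items():
    print(f"Violation in '{ns}': writers = {sorted(tenants)}")
```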

Use metrics

For Use, you want to understand how systems are being used and controlled:

  • Oversight adherence. Percentage of high‑risk use cases where the required human‑in‑the‑loop approvals actually occurred (as evidenced in logs or workflow data).
  • Incident rates. Number of AI‑related incidents (harmful outputs, misuse, policy violations) per period, ideally normalised by usage volume (a normalisation sketch follows this list).
  • Drift and performance issues. Number of times models or agents crossed predefined performance or behavioural thresholds, and how quickly those issues were identified.
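
Normalisation matters because raw incident counts rise with adoption even when behaviour improves. A minimal sketch with illustrative numbers:

```python
# Minimal sketch: incident rate normalised by usage volume, so periods
# with different traffic stay comparable. Numbers are illustrative.
incidents = 4          # AI-related incidents this period
interactions = 25_000  # total model/agent interactions this period

rate_per_1k = 1000 * incidents / interactions
print(f"Incident rate: {rate_per_1k:.2f} per 1,000 interactions")  # 0.16
```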

For agents specifically:

  • Tool‑use patterns. Distribution of tool calls by risk level and agent; spikes in high‑risk tools can indicate misuse or misconfiguration (see the sketch below).
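
A minimal sketch of that summary, assuming a tool‑to‑risk mapping and a flat call log (both hypothetical):

```python
# Minimal sketch: distribution of agent tool calls by risk level.
from collections import Counter

tool_risk = {"search_kb": "low", "send_email": "medium",
             "issue_refund": "high"}
calls = ["search_kb", "search_kb", "issue_refund",
         "send_email", "issue_refund"]  # hypothetical call log

by_risk = Counter(tool_risk.get(t, "unknown") for t in calls)
total = sum(by_risk.values())
for level in ("low", "medium", "high", "unknown"):
    if by_risk[level]:
        print(f"{level}: {by_risk[level]} calls "
              f"({100 * by_risk[level] / total:.0f}%)")
```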

Delete and lifecycle metrics

Finally, for Delete:

  • Retirement coverage. Percentage of AI systems with defined decommissioning criteria and plans.
  • Kill‑switch tests. Frequency of kill‑switch drills and time‑to‑disable in test scenarios.
  • Retention compliance. Number of AI artefact categories (logs, embeddings, fine‑tunes, memories) where actual retention aligns with documented policies.

These metrics demonstrate that you manage the full lifecycle, not just initial deployment.
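
Kill‑switch drills in particular benefit from simple, repeated measurement. A minimal sketch, with hypothetical drill records:

```python
# Minimal sketch: tracking kill-switch drill frequency and time-to-disable.
from datetime import timedelta
from statistics import mean

drills = [  # hypothetical drill results
    {"system": "support-agent", "time_to_disable": timedelta(minutes=12)},
    {"system": "support-agent", "time_to_disable": timedelta(minutes=7)},
    {"system": "contract-rag", "time_to_disable": timedelta(minutes=31)},
]

avg_min = mean(d["time_to_disable"].total_seconds() for d in drills) / 60
print(f"Drills run: {len(drills)}, mean time-to-disable: {avg_min:.1f} min")
```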

Make it concrete

Start small. For your first wave of KPIs:

  • Choose 1–2 metrics per DASUD stage that you can realistically measure today.
  • Collect baseline data over a quarter.
  • Present these to your leadership or AI governance forum with interpretation, not just numbers.
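
Even a plain table or dictionary is enough for a first baseline. A minimal sketch of a first‑wave KPI register (all values are illustrative placeholders):

```python
# Minimal sketch: one or two KPIs per DASUD stage with quarterly baselines.
kpis = {
    "Design":  {"use-case coverage": "62%"},
    "Acquire": {"source approval coverage": "48%"},
    "Store":   {"retention policies defined": "55%"},
    "Use":     {"incidents per 1k interactions": "0.16"},
    "Delete":  {"kill-switch drills this quarter": "3"},
}

for stage, metrics in kpis.items():
    for name, baseline in metrics.items():
        print(f"{stage:8} {name}: {baseline}")
```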

As you mature, you can add more metrics and refine them. The goal is not to create a dashboard for its own sake, but to answer three questions with evidence:

  • Are we governing GenAI, RAG, and agents consistently?
  • Are we catching issues early enough?
  • Are we improving over time?

With that, DASUD stops being just a conceptual framework and becomes a measurable, evolving practice for advanced AI.

If you’d like assistance or advice with your Data Governance implementation, or any other topic (Privacy, Cybersecurity, Ethics, AI and Product Management), please feel free to drop me an email and I will endeavour to get back to you as soon as possible. Alternatively, you can reach out to me on LinkedIn and I will get back to you within the same day!
