Policies don’t change behaviour on their own. For GenAI, RAG, and agents, the gap between “we published guidance” and “people actually use this safely” can be huge. Training and change management are where AI governance becomes lived practice.

The good news: you can design training around DASUD and roles, instead of generic “AI awareness” sessions that nobody remembers.

Start with roles, not tools

Training should be role‑based. Different people need different depth and focus:

  • Frontline users: People who interact with GenAI assistants or agents daily (e.g., service staff, analysts, case workers).
  • Builders: Data scientists, ML engineers, and developers integrating GenAI and agents.
  • Owners and approvers: Product owners, business leads, and risk/compliance reviewers.
  • Executives and board members: Decision‑makers focused on risk, strategy, and accountability.

Each group touches DASUD differently.

Map DASUD topics to roles

Use DASUD to structure content per role:

  • Frontline users: Focus on Use and Delete:
    • What the tools are for (Design synopsis).
    • What they can and cannot use them for (allowed uses, red‑lines).
    • How to review outputs, escalate issues, and request deletion/reset.
  • Builders: Cover all stages in more depth:
    • Design checklists and risk classification.
    • Data and content rules for Acquire.
    • Storage, logging, and memory controls in Store.
    • Oversight patterns, monitoring, and incident hooks in Use.
    • Decommissioning and change management in Delete.
  • Owners and approvers: Emphasise Design, Use, and lifecycle:
    • How to read and challenge DASUD artefacts.
    • How to interpret KPIs and incident reports.
    • When to say no or ask for more controls.
  • Executives and boards: Focus on the big picture:
    • How DASUD aligns with regulation and risk.
    • High‑level metrics and oversight responsibilities.

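One way to make the mapping above operational is to keep it as a small lookup table, so role‑specific curricula can be generated or checked automatically. Here is a minimal Python sketch; the role keys, topic strings, and `topics_for` helper are illustrative assumptions, not a standard schema:

```python
# Hypothetical role-to-DASUD curriculum table. Stage order follows the
# DASUD lifecycle; topic strings paraphrase the bullets above.
DASUD_STAGES = ["Design", "Acquire", "Store", "Use", "Delete"]

CURRICULUM = {
    "frontline_users": {
        "Design": ["What the tool is for (design synopsis)"],
        "Use": ["Allowed uses and red lines",
                "Reviewing outputs and escalating issues"],
        "Delete": ["Requesting deletion or reset"],
    },
    # Builders cover every stage; one placeholder topic per stage here.
    "builders": {stage: [f"{stage} controls in depth"]
                 for stage in DASUD_STAGES},
    "owners_approvers": {
        "Design": ["Reading and challenging DASUD artefacts"],
        "Use": ["Interpreting KPIs and incident reports"],
        "Delete": ["Lifecycle and decommissioning sign-off"],
    },
    "executives": {
        "Design": ["DASUD alignment with regulation and risk"],
        "Use": ["High-level metrics and oversight responsibilities"],
    },
}

def topics_for(role: str) -> list[str]:
    """Flatten a role's curriculum into a list ordered by DASUD stage."""
    plan = CURRICULUM[role]
    return [topic for stage in DASUD_STAGES for topic in plan.get(stage, [])]
```

Keeping the table in one place means a new role or a renamed stage is a one‑line change, and gaps (a role with no Delete topics, say) are easy to spot programmatically.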
Keep training practical and scenario‑based

For each role, avoid abstract lectures. Use:

  • Realistic scenarios: e.g., “A customer‑facing assistant produces an unsafe answer; what should you do?” or “An agent suggests closing a ticket with limited context.”
  • Checklists and cheat sheets: One‑page DASUD‑aligned guides: “Before you approve a GenAI use case, ask these questions.”
  • Short, focused sessions: Multiple 30–60 minute modules are better than a single long, forgettable workshop.

The goal is to equip people with decisions and actions, not just knowledge.
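A cheat sheet like “ask these questions before you approve” can even live as versioned code, so the same checklist drives both the one‑pager and any approval tooling. A minimal sketch, where the questions themselves are hypothetical examples rather than a prescribed list:

```python
# Hypothetical pre-approval checklist for a GenAI use case, loosely
# aligned to DASUD stages. Questions are illustrative only.
APPROVAL_CHECKLIST = [
    "Is there a DASUD design synopsis for this use case?",
    "Are allowed uses and red lines documented?",
    "Is human oversight defined for high-impact outputs?",
    "Are logging and memory retention rules in place?",
    "Is there a decommissioning and change-management plan?",
]

def open_questions(answers: dict[str, bool]) -> list[str]:
    """Return the questions not yet answered 'yes'.

    An empty result means the use case is ready for approval;
    unanswered questions count as open.
    """
    return [q for q in APPROVAL_CHECKLIST if not answers.get(q, False)]
```

An approver (or a form in your workflow tool) fills in the answers; anything still open becomes the agenda for the next conversation rather than a silent gap.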

Embed training into existing channels

You don’t have to create a new education empire. Plug AI governance training into:

  • Onboarding: For roles that will use or own AI tools.
  • Regular risk/compliance or data training: Add AI modules to existing programs.
  • Product and project kick‑offs: Include quick DASUD refreshers when new AI initiatives start.
  • Brown‑bags and internal communities: Host informal Q&A sessions and share case studies.

Repetition across channels reinforces the message without overwhelming people.

Provide ongoing support and updates

AI governance isn’t static, and neither should your training be.

  • Maintain living documentation: Update checklists, FAQs, and playbooks as your practices and tools evolve.
  • Offer office hours: Give people a place to bring questions about specific use cases or behaviours they’re seeing.
  • Share incident learnings: When something goes wrong (or almost does), anonymise and share what happened and how you addressed it.

This turns training from a one‑off event into a continuous change program.

Make it concrete

For your organisation:

  • Identify 3–4 key roles that interact with advanced AI.
  • For each, draft a simple learning objective list using DASUD as a spine.
  • Design one short module per role (30–60 minutes) with scenarios pulled from your context.
  • Plan where these modules will live: onboarding, quarterly training, project kick‑off, etc.

With role‑based, DASUD‑aligned training, you move AI governance out of PDFs and into everyday decisions.

If you’d like assistance or advice with your Data Governance implementation, or any other topic (Privacy, Cybersecurity, Ethics, AI and Product Management) please feel free to drop me an email here and I will endeavour to get back to you as soon as possible. Alternatively, you can reach out to me on LinkedIn and I will get back to you within the same day!
