We’ve covered a lot: DASUD for models, GenAI, RAG, and agents; oversight patterns; incident response; playbooks. Now it’s time to put it all together in a single example. Let’s walk through a realistic use case and see how DASUD guides governance decisions end‑to‑end.
Imagine you’re in a large organisation rolling out an internal “IT Support Copilot”—a GenAI‑plus‑agent system to help staff with common IT issues.
Design: define the mission and boundaries
You start with Design:
- Mission: Assist employees with IT issues by answering questions, summarising relevant knowledge, and drafting ticket updates.
- In scope: Password reset guidance, VPN issues, software installation questions, and device troubleshooting.
- Out of scope: Actually resetting passwords, modifying access rights, touching HR or finance systems, and handling security incidents.
- Oversight: HITL (human in the loop) for any suggested ticket-closing actions; HOTL (human on the loop) for non-critical ticket updates; autonomous operation for low-risk knowledge lookups.
You create:
- A GenAI design canvas for the conversational assistant.
- An agent charter for the tool‑using components (e.g., create/update tickets, search knowledge bases).
- A RAG design sheet defining which knowledge bases are in scope (IT KB, internal how‑tos) and what’s excluded.
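The oversight tiers from the Design phase can be captured as a small, machine-readable policy that the rest of the system enforces. A minimal sketch in Python, assuming illustrative action names (`search_kb`, `update_ticket`, `close_ticket`) rather than any standard schema:

```python
# Hypothetical mapping of agent actions to oversight tiers, mirroring
# the Design decisions above. Action names are illustrative only.
OVERSIGHT_POLICY = {
    "search_kb": "autonomous",   # low-risk knowledge lookups
    "update_ticket": "hotl",     # human on the loop: sampled review afterwards
    "close_ticket": "hitl",      # human in the loop: explicit approval required
}

def required_oversight(action: str) -> str:
    """Return the oversight tier for an action; unknown actions are blocked."""
    return OVERSIGHT_POLICY.get(action, "blocked")
```

Note the default: out-of-scope actions such as resetting passwords never appear in the policy, so they fall through to "blocked" rather than silently executing.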
Acquire: tools, data, and content
In Acquire, you:
- Approve RAG sources: Include the IT knowledge base and selected Confluence spaces; exclude security runbooks, HR spaces, and personal data repositories.
- Classify and prepare content: Clean up outdated articles, tag by product and issue type, and ensure sensitivity labels are in place.
- Approve tools/APIs: Expose the "search_kb" and "create_ticket" tools; hold back "close_ticket" and "change_access" for now, pending further risk work.
- Decide what the assistant can learn from: Use structured feedback ("Did this help?"), but don't feed free-form complaints directly into learning.
Everything gets documented in your catalogues and intake forms.
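One way to make these Acquire decisions enforceable rather than aspirational is a default-deny allowlist for both tools and RAG sources. A hedged sketch, with source identifiers invented for illustration:

```python
# Approved at intake; anything not listed is denied by default.
APPROVED_TOOLS = {"search_kb", "create_ticket"}          # exposed to the agent
HELD_BACK_TOOLS = {"close_ticket", "change_access"}      # pending further risk work

APPROVED_SOURCES = {"it_kb", "confluence_it_howtos"}     # indexed for RAG
EXCLUDED_SOURCES = {"security_runbooks", "hr_spaces"}    # never indexed

def can_invoke(tool: str) -> bool:
    """Default-deny: held-back tools are rejected even if requested."""
    return tool in APPROVED_TOOLS and tool not in HELD_BACK_TOOLS

def can_index(source: str) -> bool:
    """Only explicitly approved sources ever reach the vector index."""
    return source in APPROVED_SOURCES and source not in EXCLUDED_SOURCES
```

The design choice here is the default: a new Confluence space or a new tool does nothing until someone deliberately adds it to the approved set through the intake process.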
Store: logs, state, and memory
For Store, you decide:
- Logs: Keep prompts, queries, and tool calls with the minimum necessary detail for 90 days; keep aggregate metrics for longer.
- Embeddings and vector indexes: Store in a segmented environment, isolated from other domains, with access limited to the AI team and governance.
- Memory: Allow session-level context, but no long-term per-user memory for now, to avoid building undeclared profiles.
- Access: Restrict raw log and index access to specific roles; log any administrative access.
This ensures you can diagnose issues without creating a hidden archive of sensitive user information.
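The 90-day detail retention rule can be sketched as a simple pruning job that drops raw records while preserving an aggregate count. A minimal sketch, assuming a `timestamp` field on each record (not a real log schema):

```python
from datetime import datetime, timedelta, timezone

# Retention window from the Store decisions above.
DETAIL_RETENTION = timedelta(days=90)

def prune_logs(records, now=None):
    """Keep per-request detail for 90 days; older records survive only as a count."""
    now = now or datetime.now(timezone.utc)
    kept = [r for r in records if now - r["timestamp"] <= DETAIL_RETENTION]
    aggregated = len(records) - len(kept)  # rolls into long-term metrics
    return kept, aggregated
```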
Use: day‑to‑day interaction and oversight
In Use, the copilot operates as follows:
- Employees ask questions: The assistant uses RAG to retrieve relevant IT articles and generates summarised answers, citing sources so users can click through.
- Ticket assistance: The agent proposes ticket categories and draft responses, but cannot close tickets. Analysts review and approve edits using a simple UI that highlights AI-proposed changes.
- Oversight: Team leads periodically review samples of AI-assisted tickets. Metrics track the accuracy of suggestions and time saved.
- Safety: The system blocks tasks involving security breach reporting or access changes, and directs users to the appropriate channels instead.
Every part of this behaviour ties back to the Design decisions you made earlier.
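The Use-phase behaviour (block and redirect sensitive topics, gate ticket closure behind analyst approval, let low-risk lookups run autonomously) can be sketched as one routing function. Topic and action names here are illustrative assumptions:

```python
# Topics the copilot must never handle; users are redirected instead.
BLOCKED_TOPICS = {"security_breach", "access_change"}

def route_request(topic: str, proposed_action: str) -> str:
    """Route a request per the Use-phase rules in this walkthrough."""
    if topic in BLOCKED_TOPICS:
        return "redirect"                # send user to the proper channel
    if proposed_action == "close_ticket":
        return "needs_analyst_approval"  # HITL: analyst must approve
    if proposed_action == "update_ticket":
        return "execute_and_sample"      # HOTL: periodic review of samples
    return "execute"                     # autonomous low-risk path
```

The point of sketching it this way is traceability: each branch corresponds to a documented Design decision, so an auditor can map behaviour back to the charter line by line.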
Delete: updates, retirement, and incidents
Over time:
- Knowledge updates: As IT policies and systems change, content owners update KB articles. These changes trigger re-indexing, so RAG answers reflect current guidance.
- Artefact retention: Logs are rotated and anonymised according to policy. Embeddings for retired content are removed.
- Incidents: One day, an incident occurs: the assistant suggests an incorrect configuration change. Thanks to the logs, you trace the suggestion to an outdated KB article. You treat it as a RAG content issue: update the article, re-index, and review content owner processes.
- Retirement: When you later replace your ITSM platform, you use your AI decommissioning checklist to retire the copilot gracefully: disabling integrations, archiving configurations, and cleaning up indexes and logs.
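The Delete-phase housekeeping above (re-index updated articles, remove embeddings for retired content) can be sketched as a single sync step. `embed` is a stand-in for whatever embedding call your platform actually provides, and the dict-as-index is a deliberate simplification:

```python
def sync_index(index: dict, articles: list, embed) -> dict:
    """Bring the vector index in line with the current knowledge base."""
    live = {a["id"]: a for a in articles if not a.get("retired", False)}
    # Remove embeddings for retired or deleted content.
    for doc_id in list(index):
        if doc_id not in live:
            del index[doc_id]
    # Re-embed current content so RAG answers reflect the latest guidance.
    for doc_id, article in live.items():
        index[doc_id] = embed(article["text"])
    return index
```

Running this on every content change is what keeps the incident in the story above (a stale article surfacing in answers) a content-ownership problem rather than a permanent index problem.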
What this story shows
This “day in the life” example illustrates:
- DASUD is not abstract; it shows up in concrete decisions about scope, tools, content, oversight, and lifecycle.
- GenAI, RAG, and agents can be governed together when you treat them as connected layers in one system.
- A single use case can become a proof‑point and reference model for the rest of your organisation.
You can repeat this for other domains—HR support, finance queries, operations assistants—using the same DASUD backbone.
If you’ve been following the series, you now have the ingredients to do exactly that: a lifecycle, templates, oversight patterns, and a way to tell the story.
If you’d like assistance or advice with your Data Governance implementation, or any other topic (Privacy, Cybersecurity, Ethics, AI and Product Management) please feel free to drop me an email here and I will endeavour to get back to you as soon as possible. Alternatively, you can reach out to me on LinkedIn and I will get back to you within the same day!