One of the biggest mistakes organisations make is treating AI governance as something completely separate from everything else. They spin up new committees, policies, and processes that sit beside—rather than within—their existing governance structures. The result: duplication, confusion, and fatigue.
If you already have a data governance operating model, DASUD can help you extend it to advanced AI instead of starting from scratch.
Start from your existing pillars
Most organisations already have some combination of:
- Data governance councils or steering committees.
- Risk and compliance functions.
- Change and project management processes.
- Security and privacy by design practices.
- Incident response and escalation workflows.
You don’t need to duplicate these. Instead, you need to:
- Expand their scope to include GenAI, RAG, and agents.
- Inject AI‑specific checks at the right stages.
- Clarify roles where new expertise is needed.
DASUD gives you a way to frame these changes: at each stage, ask “what’s new for AI?” rather than “what’s entirely separate?”
Design: extend project and product lifecycles
Your existing project or product lifecycle likely includes:
- Idea/intake.
- Business case.
- Design and architecture.
- Build, test, deploy.
To include AI:
- Add a DASUD‑aligned Design gate: require AI use‑cases to pass through a design review covering use‑case definition, risk classification, oversight mode, and red‑lines.
- Route high‑risk AI use‑cases to the right forums: for example, a data/AI governance council, risk committee, or a dedicated AI review group.
This can be a lightweight extension of existing architecture or change review boards.
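To make the gate concrete, the routing logic above can be sketched in a few lines. This is a minimal illustration, not part of DASUD itself: the field names, risk levels, and forum names are all assumptions you would replace with your organisation's own classifications.

```python
from dataclasses import dataclass

# Hypothetical design-gate record; field names and values are illustrative.
@dataclass
class AIUseCase:
    name: str
    risk_level: str          # e.g. "low", "medium", "high"
    oversight_mode: str      # e.g. "human-in-the-loop", "human-on-the-loop"
    red_lines_reviewed: bool

def route_use_case(use_case: AIUseCase) -> str:
    """Route a use-case to a forum based on its risk classification."""
    if not use_case.red_lines_reviewed:
        return "blocked: red-lines review outstanding"
    if use_case.risk_level == "high":
        return "AI governance council"      # assumed forum name
    if use_case.risk_level == "medium":
        return "architecture review board"  # assumed forum name
    return "standard change process"
```

The point of the sketch is the shape of the decision, not the code: every use‑case carries a risk classification and an oversight mode, and the gate decides which existing forum sees it.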
Acquire/Store: reuse data governance machinery
You already:
- Approve new data sources and uses.
- Maintain data catalogues and classifications.
- Govern data retention and access.
Extend that to advanced AI by:
- Treating fine‑tuning datasets, RAG repositories, and agent tools as governed assets: add them into your catalogues and approval processes.
- Using existing privacy and security reviews: hook AI projects into current privacy impact assessments and security design reviews, adding AI‑specific questions (e.g., prompt logging, vector stores, agent memory).
This way, your AI input governance is another application of data governance, not a separate universe.
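As a sketch of what “another application of data governance” means in practice: the same catalogue schema can simply accept new asset types. The asset types, schema fields, and the rule that AI assets trigger a privacy review are all assumptions for illustration.

```python
# Minimal sketch of extending an existing data catalogue with AI asset types.
# Schema and type names are hypothetical.
AI_ASSET_TYPES = {"fine-tuning-dataset", "rag-repository", "agent-tool"}
DATA_ASSET_TYPES = {"table", "report", "feed"}

catalogue: list[dict] = []

def register_asset(name: str, asset_type: str, classification: str, owner: str) -> dict:
    """Register an asset; AI assets reuse the same entry shape as data assets."""
    if asset_type not in AI_ASSET_TYPES | DATA_ASSET_TYPES:
        raise ValueError(f"unknown asset type: {asset_type}")
    entry = {
        "name": name,
        "type": asset_type,
        "classification": classification,   # reuse existing data classifications
        "owner": owner,
        # Illustrative rule: AI inputs get routed into the privacy review queue.
        "requires_privacy_review": asset_type in AI_ASSET_TYPES,
    }
    catalogue.append(entry)
    return entry
```

The design choice worth noticing: a RAG repository is registered through exactly the same function as a table or a feed, which is the “not a separate universe” principle in miniature.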
Use: fit AI oversight into operational structures
Operational governance structures—like change management, access control, and monitoring—already exist:
- Change management: ensure AI model and agent changes go through change boards, with DASUD artefacts attached (Design sheets, risk assessments, test evidence).
- Access control: integrate AI systems into your identity and access management (IAM), treating GenAI tools and agent actions as systems that need permissions, not just “features.”
- Monitoring and incident response: extend existing monitoring and incident processes to include AI incidents and metrics, rather than building separate channels.
You’re not reinventing oversight; you’re broadening its scope.
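Treating agent actions as permissioned operations, as the access‑control bullet suggests, looks like any other deny‑by‑default IAM check. The agent IDs, action names, and policy table below are hypothetical.

```python
# Sketch: agent actions as IAM-governed operations, not product "features".
# Policy table and names are assumptions for illustration.
PERMISSIONS = {
    "support-agent": {"read:kb", "draft:reply"},
    "finance-agent": {"read:invoices"},
}

def is_action_allowed(agent_id: str, action: str) -> bool:
    """Deny by default: an agent may only perform explicitly granted actions."""
    return action in PERMISSIONS.get(agent_id, set())
```

An unknown agent, or a known agent attempting an ungranted action, is refused without any special casing, which is exactly how you would already treat a service account.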
Delete: embed AI decommissioning into lifecycle
When systems are retired or transformed today, you probably:
- Archive data and configurations.
- Update asset registers.
- Inform stakeholders.
Add AI‑specific elements:
- Model and agent retirement plans: ensure decommissioning of models, RAG indexes, and agent configurations is part of project closure.
- Content and index updates: tie the RAG content lifecycle to your records and document management practices.
- Kill‑switch testing: include AI kill‑switch drills in broader business continuity and disaster recovery exercises.
Again, this is extension, not reinvention.
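For the kill‑switch drill in particular, the pattern is simple enough to sketch: a central flag that every agent action checks, which a continuity exercise can toggle, verify, and restore. The class and method names are assumptions, not a prescribed implementation.

```python
# Illustrative kill-switch: a central flag checked before every agent action.
# A BC/DR drill toggles it, confirms actions are blocked, then restores it.
class KillSwitch:
    def __init__(self) -> None:
        self._disabled: set[str] = set()

    def disable(self, system: str) -> None:
        self._disabled.add(system)

    def enable(self, system: str) -> None:
        self._disabled.discard(system)

    def check(self, system: str) -> None:
        """Called before an agent acts; raises if the system is switched off."""
        if system in self._disabled:
            raise RuntimeError(f"{system} is disabled by kill-switch")

def drill(switch: KillSwitch, system: str) -> bool:
    """Run a drill: flip the switch, confirm the block, then restore."""
    switch.disable(system)
    try:
        switch.check(system)
        return False   # action went through: drill failed
    except RuntimeError:
        return True    # block confirmed: drill passed
    finally:
        switch.enable(system)
```

The value of drilling this alongside existing DR exercises is that the restore step is tested too, not just the shutdown.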
Clarify roles and capabilities
To make this work, you may need:
- A clear AI governance lead or function: often emerging from data governance, risk, or digital leadership.
- Training for existing bodies: educate councils, risk committees, and change boards on advanced AI basics and DASUD so they can make informed decisions.
- Updated charters: formally expand the remit of existing governance bodies to cover advanced AI.
Make it concrete
Pick one existing governance mechanism—say your data governance council:
- Update its charter to explicitly include GenAI, RAG, and agents.
- Define what types of AI decisions must come to that forum.
- Add DASUD‑aligned artefacts to its standard templates (e.g., Design sheets, risk summaries).
Over time, you’ll have a single, cohesive governance fabric that covers data, models, GenAI, and agents—rather than parallel structures that compete for attention.
If you’d like assistance or advice with your Data Governance implementation, or any other topic (Privacy, Cybersecurity, Ethics, AI and Product Management) please feel free to drop me an email here and I will endeavour to get back to you as soon as possible. Alternatively, you can reach out to me on LinkedIn and I will get back to you within the same day!