If you already think in terms of data lifecycles, you are holding one of the most useful mental models for AI governance. The DASUD framework—Design, Acquire, Store, Use, Delete—doesn’t just apply to data. It can also structure how you govern AI systems end‑to‑end.
Adapting DASUD to AI is less about inventing something new and more about extending what you already do.
DASUD in data governance: a quick recap
In data governance, you might use DASUD like this:
- Design: define why data is needed, who will use it, and under what rules.
- Acquire: decide where it comes from, how it’s collected, and under what terms.
- Store: choose where it lives, how it’s protected, and who can see it.
- Use: decide what uses are allowed, how access is granted, and how usage is monitored.
- Delete: define when data should be archived or removed and how to do that safely.
You’ve likely used this thinking to shape policies, processes, and tooling. AI simply adds more moving parts.
Understanding the AI lifecycle you need to cover
Most AI systems follow a broadly similar lifecycle:
- A problem or use case is identified.
- Data is prepared, engineered, and labelled.
- Models are trained, tuned, and validated.
- A model is deployed into a product, process, or decision flow.
- Its behaviour is monitored, adjusted, and eventually retired.
Your job is to overlay DASUD onto this lifecycle so that every stage has explicit design choices and controls.
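One lightweight way to make that overlay explicit is to record it as data that a review template or tooling can consume. The sketch below is illustrative only: the lifecycle step names and the stage assignments (for example, treating training as both an Acquire and a Store concern) are assumptions, not a prescribed standard.

```python
# Illustrative mapping of the AI lifecycle steps above onto DASUD stages.
# A step can sit under more than one stage (training consumes acquired
# data and produces artefacts that must be stored).
DASUD_OVERLAY = {
    "use case identified":          ["Design"],
    "data prepared and labelled":   ["Acquire"],
    "models trained and validated": ["Acquire", "Store"],
    "model deployed":               ["Use"],
    "monitored and retired":        ["Use", "Delete"],
}

for lifecycle_step, stages in DASUD_OVERLAY.items():
    print(f"{lifecycle_step} -> governed under {', '.join(stages)}")
```

A structure like this makes gaps visible: any lifecycle step with no DASUD stage attached is a step with no explicit governance owner.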
Mapping DASUD to AI
Start by rephrasing each DASUD stage in AI terms.
Design
Here you focus on the AI use case itself. What decision or process is being changed? Who is affected? What benefits are expected? What harms could occur? What does “good” look like?
This is where you bring governance into early conversations. You help teams clarify scope, stakeholders, and red lines before any data is pulled or models are built.
Acquire
This covers data and external model inputs. Which datasets will be used for training and testing? Are they lawful, consented, representative, and fit‑for‑purpose? Are you using vendor models or APIs, and what are their constraints?
Here you apply and extend your existing rules for sourcing, licensing, and data quality.
Store
Now think beyond databases.
Where do training datasets, feature stores, model artefacts, configurations, and logs live? How are they protected, who can access them, and how are they backed up and retained?
This is where you unify data and model storage policies and tie them into your security and privacy standards.
Use
Use is about deployment and operation. Who can run the model? In which products or processes? How are humans involved (or not)? How do you monitor performance, drift, fairness, and misuse?
Here you define usage policies, access controls, monitoring requirements, and escalation paths for when things go wrong.
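As a minimal sketch of what "monitoring plus escalation path" can mean in practice, the function below flags a model for human review when live performance drifts too far from its validated baseline. The metric (accuracy) and the tolerance value are illustrative assumptions; your thresholds would come from the risk appetite agreed at the Design stage.

```python
# Hypothetical drift check feeding an escalation path.
# Metric choice and tolerance are illustrative, not prescriptive.
def needs_escalation(baseline_accuracy: float,
                     live_accuracy: float,
                     tolerance: float = 0.05) -> bool:
    """Return True if live performance has dropped past the agreed tolerance."""
    return (baseline_accuracy - live_accuracy) > tolerance
```

The same pattern extends to fairness metrics or misuse signals: define the baseline, the tolerance, and who gets alerted when the check trips.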
Delete
Finally, the end of life.
When should a model be retired? What happens to related data and artefacts? What must be archived for accountability, and what should be removed for data minimisation and privacy?
This is where you design decommissioning criteria, kill switches, and retention rules.
Building a DASUD‑for‑AI checklist
To make this practical, turn the mapping into a simple checklist that projects must address at each stage.
For example:
- Design: What problem are we solving? Who can be harmed? What decisions are in scope or out of scope?
- Acquire: What are our data sources? Are there any consent, bias, or licensing concerns?
- Store: Where do all AI artefacts live and who has access?
- Use: What are our guardrails, monitoring metrics, and escalation paths?
- Delete: Under what conditions do we stop using this model and what do we keep for audit purposes?
Keep it short and concrete. Your goal is to structure thinking, not drown teams in paperwork.
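If it helps to make the checklist reviewable rather than a static document, it can be sketched as a small data structure. Everything here is a hypothetical sketch: the class, field names, and question wording simply mirror the bullets above, and an unanswered item is treated as an open governance gap.

```python
from dataclasses import dataclass

# Hypothetical representation of the DASUD-for-AI checklist above.
@dataclass
class ChecklistItem:
    stage: str       # DASUD stage the question belongs to
    question: str    # the question the project team must address
    answer: str = "" # filled in during review

    @property
    def answered(self) -> bool:
        return bool(self.answer.strip())

CHECKLIST = [
    ChecklistItem("Design", "What problem are we solving, and who can be harmed?"),
    ChecklistItem("Acquire", "What are our data sources? Any consent, bias, or licensing concerns?"),
    ChecklistItem("Store", "Where do all AI artefacts live, and who has access?"),
    ChecklistItem("Use", "What are our guardrails, monitoring metrics, and escalation paths?"),
    ChecklistItem("Delete", "When do we stop using this model, and what do we keep for audit?"),
]

def open_questions(items):
    """Return the DASUD stages that still lack an answer."""
    return [item.stage for item in items if not item.answered]
```

Running `open_questions` before each project gate turns the checklist into a quick gap report rather than paperwork.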
Piloting DASUD on a real AI project
The best way to refine your adaptation is to use it. Pick one AI initiative—ideally one where you have some influence—and walk through DASUD with the team. Use it to identify missing decisions, unclear ownership, or uncontrolled risks. Capture what worked, what felt heavy, and where you needed more detail. Then adjust. Over time, you’ll have a DASUD‑based AI governance approach that feels natural in your organisation because it evolved from real projects rather than theory.
By using a lifecycle you already understand, you make AI governance less mysterious and more manageable—for yourself, for AI teams, and for the leaders who depend on both.
If you’d like assistance or advice with your Data Governance implementation, or any other topic (Privacy, Cybersecurity, Ethics, AI and Product Management), please feel free to drop me an email here and I will endeavour to get back to you as soon as possible. Alternatively, you can reach out to me on LinkedIn and I will get back to you within the same day!