Before layering in Generative AI and agents, it’s useful to ground ourselves in something you may already know: traditional machine learning. If you can see how DASUD (Design, Acquire, Store, Use, Delete) fits the traditional ML lifecycle, it becomes much easier to extend it to more complex systems.
Think of a typical classifier or regression model used in your organisation: credit scoring, fraud detection, churn prediction, demand forecasting. These models aren’t new, but they are often the first place where AI governance expectations show up.
Design: framing the ML use case
The Design stage for ML is where you define:
- The business problem (e.g., reduce churn, detect fraud earlier, improve risk decisions).
- The decision being supported or automated.
- The population affected and potential harms (e.g., the relative cost of false positives versus false negatives).
- Constraints: regulatory rules, ethical principles, internal policies.
At this stage, you can already encode governance:
- Which decisions must remain human‑in‑the‑loop.
- Which attributes must not be used (e.g., protected characteristics).
- What fairness or performance criteria must be met before go‑live.
Most organisations under‑invest in Design. Bringing DASUD language here helps you ask, “What does responsible Design look like for this model?”
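One way to make Design-stage governance concrete is to express the go‑live criteria as data that a pipeline can check automatically. The sketch below is purely illustrative: the feature names, thresholds, and criteria keys are assumptions for the example, not a standard.

```python
# Hypothetical sketch: Design-stage governance criteria encoded as data,
# so a deployment pipeline can check them before go-live.
# All names and thresholds are illustrative assumptions.

PROHIBITED_FEATURES = {"gender", "ethnicity", "religion"}  # protected characteristics

GO_LIVE_CRITERIA = {
    "min_auc": 0.75,           # minimum acceptable discrimination performance
    "max_group_gap": 0.05,     # max allowed gap in outcomes between groups
    "human_in_the_loop": True, # decisions must remain reviewable by a person
}

def design_review(features, metrics, human_review_enabled):
    """Return a list of Design-stage violations; an empty list means pass."""
    issues = []
    used_prohibited = PROHIBITED_FEATURES & set(features)
    if used_prohibited:
        issues.append(f"prohibited features used: {sorted(used_prohibited)}")
    if metrics.get("auc", 0.0) < GO_LIVE_CRITERIA["min_auc"]:
        issues.append("performance below go-live threshold")
    if metrics.get("group_gap", 1.0) > GO_LIVE_CRITERIA["max_group_gap"]:
        issues.append("fairness gap exceeds threshold")
    if GO_LIVE_CRITERIA["human_in_the_loop"] and not human_review_enabled:
        issues.append("human-in-the-loop review not enabled")
    return issues
```

The point of the design choice is that the criteria live in one reviewable place, rather than being scattered across emails and slide decks.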
Acquire: governing training and test data
Acquire is where you bring data into the ML lifecycle:
- Selecting data sources and time ranges.
- Defining features and labels.
- Handling missing data, outliers, and sampling.
From a governance perspective, Acquire is where you ask:
- Are we allowed to use this data for this purpose?
- Is the data representative of the populations and conditions we care about?
- Are we inadvertently encoding bias from historical decisions?
Here you can apply your existing data governance controls—classification, approvals, data sharing reviews—to training data. You can also define a “data acquisition checklist” for ML projects as a precursor to the GenAI/agent checklists you’ll see later.
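A data acquisition checklist can itself be expressed as code, so it can gate an ML pipeline rather than sit in a document. This is a minimal sketch under assumed field names; a real checklist would mirror your own approval and classification processes.

```python
# Hypothetical "data acquisition checklist" for an ML project, expressed as
# code so it can gate a training pipeline. Field names are illustrative.

from dataclasses import dataclass, field

@dataclass
class AcquisitionRecord:
    source: str
    purpose_approved: bool  # are we allowed to use this data for this purpose?
    time_range: tuple       # (start_year, end_year) of the sample
    known_gaps: list = field(default_factory=list)  # under-represented groups

def acquisition_issues(rec: AcquisitionRecord) -> list:
    """Return a list of governance issues; an empty list means cleared."""
    issues = []
    if not rec.purpose_approved:
        issues.append(f"{rec.source}: no approval for this purpose")
    if rec.time_range[1] - rec.time_range[0] < 1:
        issues.append(f"{rec.source}: sample covers less than a full year")
    for gap in rec.known_gaps:
        issues.append(f"{rec.source}: representativeness gap - {gap}")
    return issues
```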
Store: models, datasets, and artefacts
In ML, Store covers:
- Where training, validation, and test datasets live.
- Where model artefacts (parameters, configurations) are stored.
- How metadata and documentation (model cards, lineage) are kept.
Questions for Store include:
- Who can access these datasets and models?
- Are they protected according to sensitivity?
- Can we reconstruct which version of a model and dataset produced which outputs?
This is where your existing data storage policies extend naturally: models and ML artefacts become governed assets, not just files in a data scientist’s folder.
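The reconstruction question above ("which model and dataset produced which outputs?") comes down to recording lineage at scoring time. Here is a minimal sketch: content-hashing the model configuration and dataset metadata gives stable version identifiers. The hashing scheme and field names are assumptions; in practice a model registry would handle this.

```python
# Minimal lineage-record sketch: tying each batch of outputs back to the
# exact model and dataset versions that produced it. Illustrative only.

import hashlib
import json

def artefact_id(payload: dict) -> str:
    """Stable content hash, so a version identifier always means the same bytes."""
    blob = json.dumps(payload, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()[:12]

def lineage_record(model_cfg: dict, dataset_meta: dict, output_ref: str) -> dict:
    """Record which model + dataset combination produced which outputs."""
    return {
        "model_id": artefact_id(model_cfg),
        "dataset_id": artefact_id(dataset_meta),
        "output_ref": output_ref,  # e.g. a batch id or path of scored records
    }
```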
Use: deployment, access, and monitoring
Use is where the model interacts with reality:
- Deployment into production systems (batch, real‑time, APIs).
- Integration into workflows (dashboards, decision engines, applications).
- Monitoring for performance, drift, and incidents.
Governance questions for Use:
- Who can call the model, and in what contexts?
- Are humans able to override or challenge model outputs?
- What monitoring thresholds trigger investigation or rollback?
You can treat ML models like other critical systems: they need access controls, change management, incident response, and run‑books.
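Monitoring thresholds can be made explicit in the same way. The sketch below uses the Population Stability Index (PSI), a common drift measure, to decide escalation; the 0.10/0.25 cut-offs are widely used conventions but are assumptions here, not universal rules.

```python
# Sketch of monitoring thresholds that trigger investigation or rollback.
# PSI is a common drift measure; the 0.10 and 0.25 cut-offs are
# conventional but illustrative, not universal.

import math

def psi(expected: list, actual: list) -> float:
    """Population Stability Index over matched bucket proportions."""
    return sum((a - e) * math.log(a / e) for e, a in zip(expected, actual))

def escalation(psi_value: float) -> str:
    if psi_value >= 0.25:
        return "rollback"     # significant drift: pull the model
    if psi_value >= 0.10:
        return "investigate"  # moderate drift: open an incident
    return "ok"
```

Wiring `escalation` into a scheduled job is what turns "monitoring" from a dashboard someone might look at into a control with defined consequences.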
Delete: decommissioning and retirement
Finally, Delete covers:
- When to retire or replace models (performance decline, policy changes, new regulations).
- How to handle old model versions and datasets (archiving vs deletion).
- How to document decommissioning decisions for audit and accountability.
This is where many organisations are weakest. Models “just keep running” until someone notices a problem. With DASUD, you can make retirement and kill‑switch criteria part of the initial design and periodic review.
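Retirement criteria defined at design time can also be checked mechanically at each periodic review. A minimal sketch, where the criteria (age horizon, performance floor, policy change) and their default values are assumptions for illustration:

```python
# Illustrative kill-switch/retirement check, defined at design time and run
# at each periodic review. Criteria and default values are assumptions.

from datetime import date

def should_retire(deployed: date, today: date, recent_auc: float,
                  policy_changed: bool, max_age_days: int = 730,
                  min_auc: float = 0.70) -> list:
    """Return reasons to retire the model; an empty list means keep running."""
    reasons = []
    if (today - deployed).days > max_age_days:
        reasons.append("model older than review horizon")
    if recent_auc < min_auc:
        reasons.append("performance decline below floor")
    if policy_changed:
        reasons.append("policy or regulatory change")
    return reasons
```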
Why this matters for advanced AI
Seeing DASUD working for ML does two things:
- It validates your existing governance instincts—this isn’t new territory, just a richer landscape.
- It gives you patterns you can directly reuse when we talk about GenAI and agents: design checklists, acquisition rules, storage policies, monitoring, and decommissioning.
In the next posts we’ll layer these same stages onto Generative AI: starting with how to re‑think Design when your system generates content instead of just predicting numbers.
If you’d like assistance or advice with your Data Governance implementation, or any other topic (Privacy, Cybersecurity, Ethics, AI and Product Management) please feel free to drop me an email here and I will endeavour to get back to you as soon as possible. Alternatively, you can reach out to me on LinkedIn and I will get back to you within the same day!