Most organisations started their AI journey with analytics and classic machine learning. Then came Generative AI, which could suddenly write, summarise, and suggest. AI agents are the next step: systems that can plan, call tools, and take actions on your behalf.
That extra autonomy changes the risk profile—and makes a DASUD‑based lifecycle even more important.
From models to agents: what’s different?
The progression looks roughly like this:
- Traditional ML models: Input → prediction (score/class) → human or system uses it.
- GenAI models: Input → generated content (text/code) → human or system uses it.
- AI agents: Goal → multi‑step planning → tool/API calls → changes in systems or communications.
Agents don’t just answer—they do. They might:
- Create and update tickets in your IT system.
- Send emails or chat messages to customers.
- Change configuration in internal tools.
- Orchestrate other agents (“researcher”, “planner”, “executor”).
That means governance must account for sequences of actions and interactions, not just individual model calls or outputs.
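To make that sequence-of-actions point concrete, here is a minimal sketch of an agent loop: a goal is decomposed into planned steps, and each step invokes a tool that changes external state. All names and tools here are hypothetical, not a real framework.

```python
# Minimal agent loop sketch: a goal becomes a plan, and each planned step
# may call a tool with side effects. Every action is logged for audit.

def run_agent(goal, plan, tools, max_steps=10):
    """Execute up to max_steps planned actions toward a goal; return the audit log."""
    log = []
    for step in plan(goal)[:max_steps]:
        tool = tools[step["tool"]]           # look up the tool to invoke
        result = tool(**step["args"])        # side effect: ticket, message, config change...
        log.append((step["tool"], result))   # audit trail of every action taken
    return log

# Toy stand-ins for tools that would normally hit real systems.
tools = {
    "create_ticket": lambda title: f"TICKET-1: {title}",
    "notify": lambda channel, msg: f"sent to {channel}: {msg}",
}

# A trivial "planner": two steps per goal.
plan = lambda goal: [
    {"tool": "create_ticket", "args": {"title": goal}},
    {"tool": "notify", "args": {"channel": "#ops", "msg": goal}},
]

audit = run_agent("Investigate login failures", plan, tools)
```

Even in this toy form, the governance surface is visible: the plan, the exposed tools, and the audit log are all things you can constrain and review.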
Why DASUD still applies—more than ever
For agents, DASUD looks like this:
- Design: What is the agent allowed to do, for whom, and under which conditions? What decisions or actions are out of bounds?
- Acquire: What tools, data sources, and knowledge bases can the agent access? What feedback does it learn from?
- Store: What state and memory does the agent keep about users, tasks, and environment? Where is that stored?
- Use: How, when, and with what oversight does the agent act? Which actions are autonomous vs human‑approved?
- Delete: How do you retire agent behaviours, clear memory, roll back changes, or disable the agent safely?
Design and Use, in particular, shift from background concerns to centre stage: you’re effectively designing and governing a new class of semi‑autonomous digital worker.
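One way to make the five stages auditable is to keep a structured record per agent that mirrors them. The sketch below uses illustrative field names, not a standard schema:

```python
from dataclasses import dataclass, field

@dataclass
class AgentLifecycleRecord:
    """One governance record per agent, one field group per DASUD stage.
    Field names are illustrative assumptions, not a published standard."""
    # Design: scope and hard boundaries
    mission: str
    out_of_bounds: list = field(default_factory=list)
    # Acquire: inputs the agent may draw on
    data_sources: list = field(default_factory=list)
    # Store: what state it may keep, and where
    memory_policy: str = "none"
    # Use: which actions run autonomously vs need a human
    autonomous_actions: list = field(default_factory=list)
    approval_actions: list = field(default_factory=list)
    # Delete: how to shut down and clean up safely
    teardown_steps: list = field(default_factory=list)

record = AgentLifecycleRecord(
    mission="Triage inbound IT tickets",
    out_of_bounds=["payments", "HR data"],
    data_sources=["ticket_db", "kb_articles"],
    memory_policy="per-ticket only",
    autonomous_actions=["tag_ticket"],
    approval_actions=["close_ticket"],
    teardown_steps=["revoke API keys", "purge memory"],
)
```

The point is not the exact fields but the habit: if a stage has no answer on the record, that gap is itself a governance finding.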
Designing agents like you design teams
A helpful mindset shift: treat agents like you would a small internal team.
Ask:
- What is this agent’s role and mission?
- What tasks and decisions fall inside that mission, and which don’t?
- Which systems and tools can it use?
- Who is accountable for its behaviour?
- When must it ask for help or escalate?
This “agent charter” becomes your Design artefact. It translates beautifully into technical configuration (allowed tools, permissions, prompts) and process (oversight, approvals).
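A charter like this can be expressed directly as data, with a small gate that enforces it at runtime. Everything below — the charter keys, the tool names, the escalation topics — is a hypothetical sketch of that translation, not a prescribed format:

```python
# Hypothetical agent charter expressed as data.
CHARTER = {
    "role": "customer-support assistant",
    "allowed_tools": {"search_kb", "draft_reply"},    # inside the mission
    "escalate_when": {"refund_request", "legal_threat"},  # must ask for help
    "accountable_owner": "support-team-lead",         # who answers for behaviour
}

def gate(tool, topic):
    """Decide what the agent may do for a given tool and topic:
    'escalate' to a human, 'act' within the charter, or 'refuse'."""
    if topic in CHARTER["escalate_when"]:
        return "escalate"
    if tool in CHARTER["allowed_tools"]:
        return "act"
    return "refuse"
```

Note the ordering: escalation conditions are checked before tool permissions, so a sensitive topic always reaches a human even if the tool itself is allowed.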
Tool‑use and action spaces
For agents, tools and APIs are where governance meets reality:
- A search tool is relatively safe.
- A “send email” or “update customer record” tool is much riskier.
- A “perform payment” tool is highly sensitive.
Governing the agent means governing the action space:
- Which tools are exposed at all.
- What each tool is allowed to do.
- Under what conditions actions can be performed autonomously vs requiring approval.
This is where your “Use” stage for agents diverges most from GenAI: you’re not just managing content; you’re managing actions.
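The action space can be governed as an explicit map from each exposed tool to an oversight mode, with a default of deny for anything unlisted. This is a sketch under assumed names, using the three sensitivity tiers above:

```python
from enum import Enum

class Oversight(Enum):
    AUTONOMOUS = "autonomous"        # agent may act on its own
    HUMAN_APPROVAL = "approval"      # a person must approve first
    BLOCKED = "blocked"              # tool not exposed to this agent

# Illustrative action space: one oversight mode per exposed tool.
ACTION_SPACE = {
    "search_docs": Oversight.AUTONOMOUS,      # read-only, relatively safe
    "send_email": Oversight.HUMAN_APPROVAL,   # outbound communication
    "perform_payment": Oversight.BLOCKED,     # highly sensitive: not exposed
}

def dispatch(tool, execute, request_approval):
    """Run a tool call through the action-space policy (default-deny)."""
    mode = ACTION_SPACE.get(tool, Oversight.BLOCKED)
    if mode is Oversight.AUTONOMOUS:
        return execute(tool)
    if mode is Oversight.HUMAN_APPROVAL:
        return execute(tool) if request_approval(tool) else "denied"
    return "blocked"
```

The default-deny lookup is the important design choice: a tool the governance team never reviewed should behave as if it were blocked, not as if it were safe.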
Memory, state, and continuous learning
Agents often keep more state than simple models:
- Conversation context across multiple steps.
- Long‑term preferences or profiles.
- Lessons from past successes/failures (continuous learning).
That brings “Store” and “Delete” to the foreground:
- What may the agent remember about users and tasks?
- How are memories isolated between users or clients?
- How do you clear or reset state if it becomes corrupted or risky?
In some cases, you’ll decide that certain agents shouldn’t have long‑term memory at all.
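Memory isolation and reset can be designed in from the start. The sketch below (all class and method names assumed for illustration) namespaces state per user and gives "Delete" an explicit code path:

```python
# Sketch of per-user memory isolation with an explicit reset path.
class AgentMemory:
    def __init__(self):
        self._store = {}   # one namespace per user; never shared across users

    def remember(self, user_id, key, value):
        """Record state inside that user's namespace only."""
        self._store.setdefault(user_id, {})[key] = value

    def recall(self, user_id, key):
        """A lookup can only ever see the caller's own namespace."""
        return self._store.get(user_id, {}).get(key)

    def forget_user(self, user_id):
        """Delete stage: wipe all state for one user, e.g. on an erasure
        request or when memory has become corrupted or risky."""
        self._store.pop(user_id, None)
```

Making `forget_user` a first-class operation, rather than an afterthought, is what keeps "clear or reset state" from becoming a manual database surgery exercise later.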
Where we go next in the series
In the coming posts, we’ll unpack DASUD for agents in more detail:
- Design: agent charters, boundaries, and escalation.
- Acquire: governing tools, data, and feedback as inputs.
- Store: state and memory segmentation.
- Use: action catalogues, tool‑use, oversight modes.
- Delete: shutdown, rollback, and un‑learning.
For now, the key takeaway is simple: when AI starts acting, lifecycle governance becomes non‑negotiable. DASUD gives you a way to talk about and structure that governance without throwing away the mental models you already use.
If you’d like assistance or advice with your Data Governance implementation, or any other topic (Privacy, Cybersecurity, Ethics, AI and Product Management) please feel free to drop me an email here and I will endeavour to get back to you as soon as possible. Alternatively, you can reach out to me on LinkedIn and I will get back to you within the same day!