After launching the DASUD Framework last year, I was fortunate enough to step into a role looking after AI Governance for an organisation. To respect the company's norms and the patterns already outlined, I set aside my thinking around DASUD and sat down to scale what the organisation already had in place. After nearly five weeks, it all came back to understanding the fundamental question – what exactly am I solving for?
What are you trying to solve for?
Governance at scale – that is easy to understand: get the most people on board and turn decisions around quickly. As the world of AI evolves, you need a simple set of repeatable questions that you can apply. To deepen my understanding, I started reading about the latest releases of OpenAI’s Codex 5.3 model and Anthropic’s Claude Opus 4.6, and one item stood out:
“Multi-agent prompt injection privilege escalation threat”
Whilst these are all words that make sense in and of themselves, combining them unleashes a whole new set of problems. Let’s break the sentence down to understand the threat better.

An agent can be seen as a non-human entity completing a single, well-defined task. It can do a simple retrieval (get me the weather forecast for a specific city every day at 06:45) or update a sheet with new entries from a customer.

Multi-agent means an orchestration is occurring – you have multiple “single” agents that, when combined, complete a more complex task: get me the stock price of all the major markets and update my spreadsheet, which helps me decide whether to buy or sell my stock based on the conditions I have outlined. Whilst this is still an individual user – imagine when it starts to impact your clients. Now you are no longer playing with theory, you have skin in the game – and boy-oh-boy can it go catastrophically wrong.

Now the threat: most agents have the same rights as a human (this is CATEGORICALLY bad – never EVER do this; always give an agent less permission than a human has), which means the agent has access to data and systems you don’t want it having. But even if you gave it less access than a human, this threat means the agent could be asked – via an injected prompt – to increase its access and start exporting data or updating systems in ways you never intended!
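The least-privilege rule above can be sketched in code. This is a minimal illustration, not a production design, and all the names (the permission strings, the `Agent` class) are hypothetical: an agent holds an explicit allow-list that must be strictly narrower than its human’s rights, and any request outside that list – including a request to widen the list itself – is denied rather than escalated.

```python
# Hypothetical permission set held by the human the agent acts for.
HUMAN_PERMISSIONS = {"read_weather", "read_stock_prices", "update_sheet",
                     "export_data", "grant_access"}

class Agent:
    def __init__(self, name, allowed_actions):
        # Enforce "less permission than a human" at construction time:
        # the allow-list must be a STRICT subset of the human's rights.
        if not set(allowed_actions) < HUMAN_PERMISSIONS:
            raise ValueError(f"{name}: agent must hold fewer rights than its human")
        self.name = name
        self.allowed = frozenset(allowed_actions)

    def request(self, action):
        # Deny by default: anything not explicitly allowed is refused,
        # so an injected "grant yourself export rights" goes nowhere.
        if action not in self.allowed:
            return f"DENIED: {self.name} may not '{action}'"
        return f"OK: {self.name} performed '{action}'"

weather_agent = Agent("weather", {"read_weather"})
print(weather_agent.request("read_weather"))   # allowed
print(weather_agent.request("export_data"))    # denied
print(weather_agent.request("grant_access"))   # denied: no self-escalation
```

The key design choice is the strict-subset check: an agent can never be constructed with the same rights as its human, and the allow-list is frozen, so escalation requires a human to build a new agent rather than the agent talking itself into more access.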
How do you fix this?
First up – you need to ask yourself the same DASUD questions: Do I know what I am Designing for? How am I going to Acquire good quality data, Store it safely, Use it appropriately and Delete it when it’s no longer needed? From a governance perspective, at the very minimum, you need to know the name of the system, the data it uses, and the purpose for which it was built. Once you know this, you can add a named owner and a regular review cycle. This gives you the oversight you need, and as a developer you also gain confidence that you’re doing the right thing.
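The minimum register described above can be sketched as follows. The field names and review cadence are assumptions for illustration: every system entry must carry a name, the data it uses, its purpose, a named owner and a last-review date before it is accepted, and the next review is scheduled automatically.

```python
from datetime import date, timedelta

# Minimum fields from the governance discussion above (names are assumptions).
REQUIRED_FIELDS = {"system_name", "data_used", "purpose", "owner", "last_review"}

def register_system(register, entry, review_cycle_days=180):
    """Accept a system into the register only if the minimum fields are present,
    and stamp it with the date of its next scheduled review."""
    missing = REQUIRED_FIELDS - entry.keys()
    if missing:
        raise ValueError(f"Cannot register system, missing: {sorted(missing)}")
    entry["next_review"] = entry["last_review"] + timedelta(days=review_cycle_days)
    register.append(entry)
    return entry

register = []
register_system(register, {
    "system_name": "stock-watcher",
    "data_used": ["market prices", "user portfolio"],
    "purpose": "Flag buy/sell conditions for a user's spreadsheet",
    "owner": "jane.doe@example.com",
    "last_review": date(2025, 1, 15),
})
```

Rejecting incomplete entries up front is the point: a system with no named owner or stated purpose never makes it into the register, so the review cycle can’t silently skip it.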
Governance that is fit for purpose
I always assumed that I needed to be the person who solved all the problems or thought about questions in different ways. One interesting perspective that developed in the last few days leans on the saying “if you want to go fast, go alone; if you want to go far, go together”. I bring my Data Governance expertise; others bring their Cyber, Legal, Risk, Privacy and other perspectives, each viewpoint building on the others. When you work together, you come up with a solution far more powerful than any individual could ever hope to produce alone.
If you’d like assistance or advice with your Data Governance implementation, or any other topic (Privacy, Cybersecurity, Ethics, AI and Product Management) please feel free to drop me an email here and I will endeavour to get back to you as soon as possible. Alternatively, you can reach out to me on LinkedIn and I will get back to you within the same day!