Once your Retrieval‑Augmented Generation (RAG) assistant is live, the key governance question shifts from “What did we index?” to “Who can ask what, and what can they see?” The “Use” stage of DASUD is where you control how queries flow into the system and how answers come back out.
If you don’t govern Use for RAG, you risk exposing sensitive content to the wrong people, surfacing outdated information, or allowing adversarial queries to extract more than you intended.
The main risks at the RAG "Use" stage
RAG combines a model with your content. The main Use‑stage risks are:
- **Access leakage:** Users seeing documents or facts they should not see, because retrieval ignores role‑based access.
- **Over‑general answers:** Assistants mixing content from multiple domains and losing important nuance like jurisdiction or version.
- **Adversarial queries:** Users intentionally probing the system to extract sensitive information ("prompt injection" for RAG).
- **Misleading confidence:** Answers delivered with a tone of certainty, even when the underlying documents are weak or missing.
Your goal is to design query‑time controls that match your Access and risk model.
Enforce role‑aware retrieval
The first rule: retrieval must respect the same access boundaries your content stores do.
For each user or client context:
- **Determine what content domains they are allowed to access:** Based on role, department, geography, or tenant.
- **Apply filters before retrieval:** At query time, filter documents by allowed domains, sensitivity level, and any other access tags.
- **Propagate identity and context:** Ensure the RAG layer knows who is asking and what their access rights are, rather than treating all queries as equal.
If your knowledge base access model is messy, this is a prompt to mature it—not to bypass it via RAG.
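As a minimal sketch of the filter-before-retrieval idea: everything here is hypothetical, including the `UserContext` and `Doc` shapes and the `ROLE_DOMAINS` mapping, which stand in for whatever your identity provider and content-tagging scheme actually supply.

```python
from dataclasses import dataclass

@dataclass
class UserContext:
    user_id: str
    roles: set      # e.g. {"hr_partner"}
    tenant: str

@dataclass
class Doc:
    doc_id: str
    text: str
    domains: set    # access tags applied at indexing time, e.g. {"hr"}
    sensitivity: str  # "public" | "internal" | "restricted"
    tenant: str

# Assumption: a governance-maintained map from role to allowed content domains.
ROLE_DOMAINS = {
    "hr_partner": {"hr", "policies"},
    "engineer": {"engineering", "policies"},
}

SENSITIVITY_RANK = {"public": 0, "internal": 1, "restricted": 2}

def allowed_docs(user: UserContext, docs, max_sensitivity="internal"):
    """Narrow the candidate pool BEFORE similarity search, not after."""
    domains = set().union(*(ROLE_DOMAINS.get(r, set()) for r in user.roles))
    cap = SENSITIVITY_RANK[max_sensitivity]
    return [
        d for d in docs
        if d.tenant == user.tenant               # tenant isolation
        and d.domains & domains                  # role-based domain filter
        and SENSITIVITY_RANK[d.sensitivity] <= cap  # sensitivity ceiling
    ]
```

The key design point is that the filter runs over the candidate pool before retrieval ranks anything, so out-of-scope documents never reach the model at all.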
Constrain query types and patterns
Not every question should be allowed, even if content is technically accessible.
Consider:
- **Blocking certain query patterns:** For example, "show me all documents about [sensitive topic] across all departments" may be too broad.
- **Flagging high‑risk queries:** Queries related to legal, HR, security, or personal issues can be flagged for closer scrutiny or routed to more constrained answer modes.
- **Adding friction where needed:** In high‑risk contexts, require users to confirm they understand the limitations and appropriate use of answers.
You’re not trying to police curiosity; you’re aligning usage with risk.
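One way to sketch this triage step in Python; the blocked pattern and the high-risk topic list are illustrative placeholders, not a recommended policy:

```python
import re

# Illustrative only: real patterns should come from your own risk assessment.
BLOCKED_PATTERNS = [
    re.compile(r"\ball (documents|files)\b.*\b(across|in) all\b", re.IGNORECASE),
]
HIGH_RISK_TOPICS = {"salary", "disciplinary", "security incident", "medical"}

def triage_query(query: str) -> str:
    """Return 'block', 'flag', or 'allow' for an incoming query."""
    if any(p.search(query) for p in BLOCKED_PATTERNS):
        return "block"   # overly broad sweep across the corpus
    q = query.lower()
    if any(topic in q for topic in HIGH_RISK_TOPICS):
        return "flag"    # route to a constrained answer mode and log for review
    return "allow"
```

A triage verdict like this composes naturally with the role-aware filtering above: a flagged query can still be answered, just from a narrower document set and with extra logging.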
Design answer behaviour and transparency
How answers are presented matters as much as what they contain.
Decide:
- **When to say "I don't know":** If no suitable documents are found, or confidence is low, the assistant should be allowed to say so, rather than hallucinating.
- **How to show sources:** Include citations or links to the underlying documents so users can verify and read in full context. This is crucial for governance‑sensitive domains.
- **How to handle conflicting content:** If multiple documents disagree, consider surfacing that fact instead of averaging them into a single, misleading answer.
These behaviours set expectations and support responsible human decision‑making.
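These behaviours can be sketched as a post-processing step over retrieval hits. The `Hit` structure and the `MIN_SCORE` threshold are assumptions; any real cutoff has to be calibrated against your own retriever's score distribution.

```python
from dataclasses import dataclass

@dataclass
class Hit:
    doc_id: str
    score: float   # retrieval similarity, assumed normalised to 0..1
    version: str   # document version tag

MIN_SCORE = 0.55   # assumed abstention threshold; calibrate per retriever

def compose_answer(hits, draft: str) -> dict:
    """Wrap a drafted answer with abstention, citations, and conflict notes."""
    strong = [h for h in hits if h.score >= MIN_SCORE]
    if not strong:
        # No grounded material: abstain rather than hallucinate.
        return {
            "answer": "I don't have enough grounded material to answer that.",
            "sources": [],
            "caveats": [],
        }
    caveats = []
    if len({h.version for h in strong}) > 1:
        # Surface disagreement instead of silently averaging it away.
        caveats.append("Sources span multiple document versions; verify before acting.")
    return {
        "answer": draft,
        "sources": [h.doc_id for h in strong],  # citations for verification
        "caveats": caveats,
    }
```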
Monitor usage patterns and feedback
Use is not static. Over time, you should:
- **Track query patterns:** Which domains get the most questions? Are there unexpected spikes in certain topics or from certain roles?
- **Review flagged or high‑risk interactions:** Look at queries that triggered safety mechanisms or user complaints. Use them to improve filters, content curation, or training.
- **Adjust boundaries:** If you see a consistent need for answers in an out‑of‑scope area, decide whether to expand the assistant's mission or direct those questions elsewhere.
Monitoring helps you keep RAG aligned with reality, not just with initial design.
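A minimal in-memory sketch of that monitoring loop; a real deployment would write these records to your logging or analytics stack rather than keep them in process memory:

```python
from collections import Counter
from datetime import datetime, timezone

class UsageMonitor:
    """In-memory sketch: counts queries per domain and queues risky ones for review."""

    def __init__(self):
        self.by_domain = Counter()
        self.flagged = []   # review queue for blocked/flagged interactions

    def record(self, role: str, domain: str, query: str, outcome: str):
        self.by_domain[domain] += 1
        if outcome in {"block", "flag"}:
            self.flagged.append({
                "ts": datetime.now(timezone.utc).isoformat(),
                "role": role,
                "domain": domain,
                "query": query,
                "outcome": outcome,
            })

    def top_domains(self, n=5):
        """Which domains get the most questions? Feed this into boundary reviews."""
        return self.by_domain.most_common(n)
```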
Make it concrete
For one RAG assistant:
- List the user roles or audiences it serves.
- For each role, define which content domains they can query.
- Implement or specify filters that enforce these domains at retrieval.
- Decide how sources and confidence will be surfaced in answers.
- Set up a basic review process for flagged or problematic interactions.
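The checklist above could be captured as a single policy spec that the retrieval layer reads at query time. Every name here is illustrative, standing in for your own roles, domains, and review process:

```python
# Hypothetical policy spec for one assistant; all names are illustrative.
POLICY = {
    "assistant": "hr-helper",
    "roles": {
        # role -> queryable domains and sensitivity ceiling
        "hr_partner": {"domains": ["hr", "policies"], "max_sensitivity": "restricted"},
        "employee":   {"domains": ["policies"],       "max_sensitivity": "internal"},
    },
    "answers": {
        "require_citations": True,   # surface sources with every answer
        "allow_dont_know": True,     # permit abstention on weak retrieval
    },
    "review": {
        "flagged_queue": "governance-review",  # where flagged interactions land
        "cadence_days": 7,
    },
}

def domains_for(role: str):
    """Look up the content domains a role may query; unknown roles get nothing."""
    entry = POLICY["roles"].get(role)
    return entry["domains"] if entry else []
```

Keeping the whole policy in one reviewable artifact makes it easy to audit, version, and hand to the people who own the review process.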
With these steps, “Use” becomes a governed interaction space, not a free‑for‑all over your content.
If you’d like assistance or advice with your Data Governance implementation, or any other topic (Privacy, Cybersecurity, Ethics, AI and Product Management) please feel free to drop me an email here and I will endeavour to get back to you as soon as possible. Alternatively, you can reach out to me on LinkedIn and I will get back to you within the same day!