When you deploy Generative AI, you’re not just launching a new tool—you’re changing how people create, decide, and communicate. The “Use” stage of DASUD is where that change becomes real. If you don’t govern how outputs are used, GenAI can shift from “helpful draft” to “hidden decision engine” without anyone noticing.

You already know how to govern data access and usage. The same instincts apply here, but the failure modes look different: hallucinated facts, over‑trust in fluent language, harmful content, and prompt injection.

Redefining “Use” for Generative AI

“Use” for GenAI covers a spectrum of tasks:

  • Drafting and summarising text (emails, reports, briefings).
  • Transforming content (translation, simplification, tone changes).
  • Generating code, queries, and test cases.
  • Producing recommendations or “opinions” on what to do next.

Each of these can be harmless in some contexts and dangerous in others. The key is to be explicit: for each use case, is GenAI providing a draft, a recommendation, or a decision?

Drafts invite review. Recommendations influence decisions but should never be final on their own. Decisions—especially automated ones—need the highest scrutiny.

Categorise uses by risk and oversight

A simple risk/use matrix helps you decide what’s acceptable:

  • Low‑risk: Internal brainstorming, non‑critical drafts, content where errors are inconvenient but not harmful.
  • Medium‑risk: Customer‑facing content, policy summaries, internal reports used to inform decisions. Errors can mislead but are usually catchable with good review.
  • High‑risk: Outputs that directly affect rights, safety, access to services, legal positions, or regulatory obligations.

For each category, define the oversight pattern:

  • Low‑risk: GenAI can be used freely as a draft generator, with users expected—but not forced—to review. Add light guidance and occasional spot checks.
  • Medium‑risk: require named reviewers for AI‑generated content before it goes external or feeds key decisions. Make human‑in‑the‑loop the default.
  • High‑risk: forbid direct GenAI use or require extremely constrained patterns (e.g., retrieval‑only summaries of approved documents with clear citations).
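
To make the matrix concrete, here’s a minimal sketch in Python. The use cases, tier assignments, and oversight labels are illustrative assumptions for a hypothetical deployment, not a standard; the point is that the matrix can live as data your tooling enforces, recording both the risk tier and whether the output is a draft, recommendation, or decision.

```python
from enum import Enum

class Risk(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

# Illustrative policy matrix: each use case records its risk tier, the
# role GenAI plays (draft / recommendation / decision), and the
# oversight required before output leaves the tool.
POLICY = {
    "internal_brainstorm":  {"risk": Risk.LOW,    "role": "draft",          "oversight": "spot_check"},
    "customer_email":       {"risk": Risk.MEDIUM, "role": "draft",          "oversight": "named_reviewer"},
    "policy_summary":       {"risk": Risk.MEDIUM, "role": "recommendation", "oversight": "named_reviewer"},
    "eligibility_decision": {"risk": Risk.HIGH,   "role": "decision",       "oversight": "prohibited"},
}

def oversight_for(use_case: str) -> str:
    """Unknown use cases get the strictest treatment until classified."""
    entry = POLICY.get(use_case)
    return entry["oversight"] if entry else "prohibited"
```

Keeping the matrix as data rather than prose has a useful side effect: any use case missing from it is unclassified and defaults to the strictest treatment, which makes scope drift visible.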

Tie these decisions back to your Design stage: if a use case was scoped as high‑risk, Use must reflect that.

Design review and approval flows

Once you’ve classified use cases, design workflows that make safe use easy and unsafe use hard.

For medium‑ and high‑risk outputs:

  • Specify who reviews what: Define review responsibilities by role (e.g., legal for contract language, clinical leadership for clinical summaries, comms for public statements). Ensure review is part of job descriptions or operating procedures.
  • Make review meaningful: Present original and AI‑edited content side by side. Highlight sections that were heavily changed or added. Allow reviewers to accept, edit, or reject suggested content.
  • Prevent “one‑click publish” in risky contexts: In tools that integrate GenAI into publishing workflows, remove any path that lets people go from “generate” to “live” without a review step in high‑risk domains.

These are design and UX choices as much as governance ones.
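
To illustrate the “no one‑click publish” rule, here’s a minimal sketch assuming a hypothetical publishing pipeline; `ReviewRecord` and the risk labels are stand‑ins for whatever your tooling actually provides.

```python
from dataclasses import dataclass

@dataclass
class ReviewRecord:
    reviewer: str   # a named reviewer tied to a role, not just "someone clicked OK"
    decision: str   # "accepted", "edited", or "rejected"

def publish(content: str, risk: str, review: ReviewRecord | None) -> bool:
    """Refuse to publish medium- or high-risk AI content without a completed review."""
    if risk in ("medium", "high"):
        if review is None or review.decision == "rejected":
            raise PermissionError(
                "AI-generated content needs a named review before it goes live"
            )
    # hand off to the real publishing system here
    return True
```

The design choice that matters is where the check lives: in the publish path itself, not in guidance documents, so bypassing review requires changing code rather than ignoring policy.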

Embed safety filters, disclaimers, and transparency

“Use” also includes the safety features around generation:

  • Safety filters: Enable and tune filters that block known harmful content categories (self‑harm, hate, explicit content, violence, illegal instructions). For sensitive domains, add rules specific to your context (e.g., no investment advice, no diagnostic statements).
  • Prompt and task restrictions: Block prompts that request disallowed behaviour (“write me a medical diagnosis”, “draft a termination letter without HR review”). Communicate clearly when something is out of scope.
  • Disclaimers and labels: Use standard, domain‑appropriate disclaimers in interfaces and outputs: “AI‑generated draft—must be reviewed by [role] before use.” Make sure these aren’t buried or easily removed.
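
As a sketch of where prompt restrictions and labels sit in the flow: the patterns and disclaimer wording below are illustrative assumptions, and a real filter would be broader, tuned per domain, and layered on top of the platform’s own safety features.

```python
import re

# Illustrative blocklist for a hypothetical deployment; real filters
# are broader and combine platform safety features with local rules.
BLOCKED_PATTERNS = [
    re.compile(r"medical diagnos", re.IGNORECASE),
    re.compile(r"termination letter", re.IGNORECASE),
    re.compile(r"investment advice", re.IGNORECASE),
]

# "[role]" is a placeholder, as in the bullet above.
DISCLAIMER = "AI-generated draft: must be reviewed by [role] before use."

def check_prompt(prompt: str) -> None:
    """Reject out-of-scope requests with a clear, user-facing explanation."""
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(prompt):
            raise ValueError(f"Out of scope for this assistant: {pattern.pattern!r}")

def label_output(text: str) -> str:
    """Attach the standard disclaimer so it travels with the output."""
    return f"{DISCLAIMER}\n\n{text}"
```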

Transparency doesn’t eliminate risk, but it sets expectations and supports cultural adoption of safe practices.

Monitoring GenAI use over time

Finally, Use is an evolving space: people will test boundaries and find new ways to apply GenAI.

You should:

  • Track where and how GenAI is being used: Which teams use it most? For what types of tasks? Are there emerging uses that weren’t part of the original Design?
  • Collect feedback on outputs: Allow users to flag outputs as harmful, biased, or incorrect. Analyse this feedback to refine prompts, filters, and training for users.
  • Periodically review high‑impact workflows: Sample outputs from critical use cases, check whether review steps are being followed, and adjust oversight if you see drift.
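
A minimal sketch of that feedback loop, assuming a simple JSON‑lines usage log (the field names and file layout are illustrative, not a prescribed schema):

```python
import json
import time
from collections import Counter

def log_use(log_path: str, team: str, use_case: str, flag: str | None = None) -> None:
    """Append one JSON record per generation; 'flag' holds a user-reported
    reason such as 'harmful', 'biased', or 'incorrect'."""
    record = {"ts": time.time(), "team": team, "use_case": use_case, "flag": flag}
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")

def summarise(log_path: str) -> None:
    """Aggregate usage and flags to spot drift and emerging use cases."""
    uses, flags = Counter(), Counter()
    with open(log_path) as f:
        for line in f:
            record = json.loads(line)
            uses[(record["team"], record["use_case"])] += 1
            if record["flag"]:
                flags[record["flag"]] += 1
    print("Top uses:", uses.most_common(5))
    print("Flag reasons:", flags.most_common())
```

Even a log this crude answers the first monitoring questions: who is using the tool, for what, and what users are flagging.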

Making it concrete

Pick one GenAI tool in your organisation—a writing assistant, internal Q&A bot, or code helper. For that tool:

  • List its main use cases.
  • Classify each as low‑, medium‑, or high‑risk.
  • Define allowed/prohibited uses and required oversight per category.
  • Document safety filters, disclaimers, and feedback mechanisms.

By treating “Use” as a governed stage, you move GenAI from “cool experiment” to “controlled capability”—and you do it using patterns your organisation already understands.

If you’d like assistance or advice with your Data Governance implementation, or any other topic (Privacy, Cybersecurity, Ethics, AI and Product Management), please feel free to drop me an email here and I will endeavour to get back to you as soon as possible. Alternatively, you can reach out to me on LinkedIn and I will get back to you within the same day!
