Generative AI looks and feels different from traditional machine learning. Instead of predicting a number or a class, it creates content: text, code, images, or even actions. That shift means the “Design” stage in DASUD needs an update. If you don’t rethink Design for GenAI, you’ll inherit risks that are much harder to control later.

In predictive models, you mostly worry about accuracy, bias, and stability. With GenAI, you add hallucinations, plausibility without truth, and the potential for users to over‑trust fluent outputs. You also have a wider range of use cases—from drafting internal emails to influencing high‑stakes decisions. Design is where you decide which of those uses you will and will not allow.

Why Generative AI needs a different Design lens

The first design mistake many organisations make is treating GenAI like a generic “smart assistant” that can be plugged into any workflow. That assumption is where problems start.

You need to explicitly recognise how GenAI differs:

  • It will produce plausible‑sounding but wrong content at times.
  • It can be steered into harmful or unsafe territory via prompts.
  • It can influence human decisions even when formally “just assisting.”

Design is your chance to ask: for this use case, in this context, with these users, are those characteristics acceptable?

Define GenAI use case types up front

A simple, powerful move is to classify GenAI use cases by impact before any build starts.

Think in three broad types:

  • Informational: low‑stakes drafting, summarising, rewriting, or brainstorming. Mistakes are annoying but usually recoverable—if humans are paying attention.
  • Decision‑support: outputs that shape or structure decisions (e.g., risk summaries, case overviews, recommendations) but are meant to be reviewed by humans.
  • Action‑taking: agents or workflows where GenAI triggers actions directly (sending emails, making changes in systems, initiating processes).

Once classified, map each type to an oversight mode:

  • Low‑risk informational tasks might allow lighter review or spot checks.
  • Decision‑support should default to human‑in‑the‑loop, with clear accountability.
  • Action‑taking, in many domains, may need to be blocked or heavily constrained until maturity and risk appetite are much higher.

By encoding this mapping in Design, you anchor later decisions about controls.
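As an illustration, that baseline could be encoded directly, so later stages query the agreed mapping rather than rediscover it. The following is a minimal Python sketch; the names (`UseCaseType`, `OversightMode`, `DEFAULT_OVERSIGHT`) are hypothetical, and the defaults simply restate the list above.

```python
from enum import Enum

class UseCaseType(Enum):
    INFORMATIONAL = "informational"        # drafting, summarising, brainstorming
    DECISION_SUPPORT = "decision_support"  # shapes decisions, human-reviewed
    ACTION_TAKING = "action_taking"        # triggers actions in other systems

class OversightMode(Enum):
    SPOT_CHECK = "spot_check"        # lighter review, sampled after the fact
    HUMAN_IN_LOOP = "human_in_loop"  # a named person reviews before use
    BLOCKED = "blocked"              # not permitted at current risk appetite

# Baseline agreed at Design time. A specific use case may adopt a stricter
# mode than its baseline, but never a looser one.
DEFAULT_OVERSIGHT = {
    UseCaseType.INFORMATIONAL: OversightMode.SPOT_CHECK,
    UseCaseType.DECISION_SUPPORT: OversightMode.HUMAN_IN_LOOP,
    UseCaseType.ACTION_TAKING: OversightMode.BLOCKED,
}

def required_oversight(use_case_type: UseCaseType) -> OversightMode:
    """Return the minimum oversight mode for a classified use case."""
    return DEFAULT_OVERSIGHT[use_case_type]
```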

Create a GenAI Design checklist

Next, turn your thinking into a simple checklist every GenAI proposal must answer. For example:

  • What is the intended purpose and audience of this system?
  • What decisions might people make because of its outputs?
  • What is an acceptable level of error or hallucination in this context?
  • Which topics or domains must the system never touch?
  • How will users be informed about the system’s limitations?

You should also define red lines early:

  • Prohibited domains: e.g., no medical diagnoses, no legal conclusions, no automated HR decisions.
  • Prohibited output types: e.g., hate, self‑harm advice, explicit content, unsafe instructions.
  • Requirements: e.g., citations for factual claims, visible disclaimers, or clear “AI‑generated” labels.

This checklist then becomes your anchor for approvals, risk assessments, and later audits.
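One way to make that anchor auditable is to capture each proposal’s answers and red lines in a single machine-readable design record. The sketch below is a hypothetical structure with illustrative field names and values, not a prescribed schema.

```python
# Hypothetical design record for one GenAI proposal; field names and values
# are illustrative. The point is that every checklist answer and red line
# lives in one reviewable, versionable artefact.
design_record = {
    "use_case": "internal policy summarisation",
    "use_case_type": "decision_support",
    "intended_audience": "compliance analysts",
    "decisions_influenced": ["which policies are escalated for legal review"],
    "acceptable_error": "minor phrasing issues; never an invented obligation",
    "prohibited_domains": ["medical diagnoses", "legal conclusions",
                           "automated HR decisions"],
    "prohibited_outputs": ["hate", "self-harm advice", "explicit content",
                           "unsafe instructions"],
    "requirements": ["citations for factual claims", "visible disclaimer",
                     "'AI-generated' label on every output"],
}
```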

Design for hallucinations and uncertainty

One crucial design question for GenAI is: where is “occasionally wrong but plausible” acceptable?

In some contexts, it’s fine. Brainstorming product ideas or rewriting non‑critical emails can tolerate occasional nonsense. In others, it’s catastrophic. Summaries of regulatory obligations or risk classifications must be much more reliable.

In Design, you should decide:

  • Which outputs must be backed by retrieval (e.g., RAG) rather than pure generation (see the sketch after this list).
  • Where structured templates, controlled vocabularies, or constrained generation should replace free‑form text.
  • How uncertainty will be surfaced (e.g., disclaimers, links to sources, confidence indicators).
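As a sketch of the first and last of those decisions, the snippet below assumes a hypothetical retrieval step that attaches source citations to each draft answer: if nothing comes back, the answer is withheld and routed to a person rather than generated from the model’s own guesswork.

```python
from dataclasses import dataclass, field

@dataclass
class DraftAnswer:
    text: str
    sources: list = field(default_factory=list)  # citations from the retrieval step

DISCLAIMER = "AI-generated. Verify against the cited sources before relying on it."

def present(answer: DraftAnswer) -> str:
    """Surface only retrieval-backed answers; otherwise signal uncertainty."""
    if not answer.sources:
        # No grounding found: refuse rather than emit plausible guesswork.
        return ("No supporting sources were found for this question. "
                "It has been routed to a human reviewer.")
    cited = "\n".join(f"[{i}] {s}" for i, s in enumerate(answer.sources, 1))
    return f"{answer.text}\n\nSources:\n{cited}\n\n{DISCLAIMER}"
```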

You should also embed human‑in‑the‑loop patterns at Design time: Who must review what? At what frequency? With what escalation paths?
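Those questions can be pinned down as a simple routing rule, sketched below with hypothetical route names and policy. The point is that reviewer, sampling frequency, and escalation path are fixed at Design time rather than improvised in production.

```python
import random

def review_route(use_case_type: str, sample_rate: float = 0.1) -> str:
    """Pick the review path for one output, per the Design-time agreement.

    Hypothetical policy: informational outputs are spot-checked on a sample;
    decision-support always goes to a named reviewer; anything action-taking
    escalates to the risk owner before it can run.
    """
    if use_case_type == "informational":
        return "spot_check" if random.random() < sample_rate else "publish"
    if use_case_type == "decision_support":
        return "named_reviewer"          # human-in-the-loop, accountable by name
    return "escalate_to_risk_owner"      # action-taking: blocked pending sign-off
```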

Pulling it together

To make this practical, pick one existing or planned GenAI use case and re‑evaluate it using this new Design lens. Ask whether you have:

  • Classified its type and risk.
  • Defined acceptable and unacceptable uses.
  • Designed for hallucinations and uncertainty, not just against them.
  • Captured decisions and guardrails in documentation.

The more intentional your Design stage is, the easier it is to govern Acquire, Store, Use, and Delete. For GenAI, Design is where you decide whether you’re building a clever assistant, a high‑stakes advisor, or something in between—and what risks you’re prepared to own.

If you’d like assistance or advice with your Data Governance implementation, or any other topic (Privacy, Cybersecurity, Ethics, AI and Product Management) please feel free to drop me an email here and I will endeavour to get back to you as soon as possible. Alternatively, you can reach out to me on LinkedIn and I will get back to you within the same day!
