How to “Forget” in GenAI: Deletion, Retention, and Kill Switches

Generative AI complicates data deletion compared to traditional governance because data persists across multiple artefacts, such as logs and user memories. Organisations must define clear deletion policies for each artefact, including user-triggered options and emergency controls. Balancing auditability and privacy is crucial, necessitating regular reviews of retention policies for compliance and risk management.
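A per-artefact deletion policy can be made concrete as a small registry. This is a minimal sketch under assumed artefact names and retention windows (`prompt_logs`, `user_memory`, `audit_trail` and their durations are all hypothetical, not taken from any specific regulation): each artefact records how long it is kept, whether users can delete it themselves, and whether it is wiped by the emergency kill switch, while the audit trail deliberately survives a purge to preserve auditability.

```python
from dataclasses import dataclass
from datetime import timedelta

# Hypothetical per-artefact retention registry for a GenAI system.
@dataclass(frozen=True)
class RetentionPolicy:
    retention: timedelta   # how long the artefact is kept
    user_deletable: bool   # can the user trigger deletion directly?
    kill_switch: bool      # is it wiped by the emergency purge path?

POLICIES = {
    "prompt_logs": RetentionPolicy(timedelta(days=30), user_deletable=True, kill_switch=True),
    "user_memory": RetentionPolicy(timedelta(days=365), user_deletable=True, kill_switch=True),
    # Audit trails are retained for compliance and excluded from the kill switch.
    "audit_trail": RetentionPolicy(timedelta(days=2555), user_deletable=False, kill_switch=False),
}

def emergency_purge_targets(policies: dict) -> list:
    """Artefacts wiped when the kill switch is thrown; audit trails survive."""
    return sorted(name for name, p in policies.items() if p.kill_switch)
```

Keeping the policy as data rather than scattered code makes the periodic retention review a matter of diffing one table.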

Governing Generative AI Outputs: From Drafts to Decisions

Deploying Generative AI fundamentally alters creation and decision-making processes. Governance of the "Use" stage is essential to prevent risks such as hallucinated facts and harmful content. By categorising use cases into risk levels and implementing structured review processes, organisations can ensure safe and effective usage of GenAI technologies.
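A structured review process can be sketched as a simple gate that routes drafts either to automatic release or to human review. This is a hedged illustration, not the article's implementation: the risk levels, the tiny phrase blocklist, and the routing labels are all assumptions standing in for whatever classifiers and policies an organisation actually uses.

```python
# Hypothetical pre-release gate: high-risk use cases and flagged drafts
# always go to a human reviewer; everything else is auto-released.
BLOCKLIST = {"guaranteed cure", "legal advice"}  # placeholder harm phrases

def review_output(text: str, risk_level: str) -> str:
    """Route a GenAI draft based on use-case risk and content flags."""
    flagged = any(phrase in text.lower() for phrase in BLOCKLIST)
    if risk_level == "high" or flagged:
        return "human_review"
    return "auto_release"
```

In practice the blocklist would be replaced by proper content classifiers, but the routing structure stays the same.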

How to Govern Storage and “Memory” for Generative AI Systems

Generative AI systems retain various types of sensitive data, necessitating expanded governance over storage, memory, and logging practices. Organisations should segment data storage by sensitivity and purpose, implement role-based access controls, and ensure safe logging, embedding, and memory management. Clear user communication and a structured governance matrix are essential for effective oversight.
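The segmentation-plus-RBAC idea can be expressed as a small governance matrix. The segment names, sensitivity labels, and roles below are hypothetical examples, assumed for illustration: each storage segment declares its sensitivity and the roles explicitly granted access, and a check function denies anything not granted.

```python
# Hypothetical storage-governance matrix: segments keyed by purpose,
# each carrying a sensitivity tier and an explicit role allow-list.
SEGMENTS = {
    "embeddings_docs": {"sensitivity": "low", "roles": {"engineer", "analyst"}},
    "chat_memory": {"sensitivity": "high", "roles": {"engineer"}},
    "raw_prompt_logs": {"sensitivity": "high", "roles": {"privacy_officer"}},
}

def can_access(role: str, segment: str) -> bool:
    """Role-based access check: deny unless the role is explicitly granted."""
    return role in SEGMENTS.get(segment, {}).get("roles", set())
```

Defaulting to deny for unknown segments or roles keeps the matrix the single source of truth for oversight reviews.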

How to Govern Prompt, Context, and Fine‑Tuning Data in the “Acquire” Stage

In Generative AI, the concept of "Acquire" extends beyond training data to include fine-tuning, retrieval contexts, and prompts. Effective governance is crucial to prevent issues like IP leakage and bias. A structured approach involves defining data usage policies, curating knowledge sources, and treating prompts as governed assets, ensuring safety and compliance in AI initiatives.
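Treating prompts as governed assets typically means versioning them and attributing each version to an owner. A minimal sketch, assuming a hypothetical in-memory registry (the class and naming scheme below are illustrative, not a real library API): each registered prompt gets an incrementing version and a content hash so deployed prompts can be traced back to an approved text.

```python
import hashlib

# Hypothetical prompt registry: every production prompt is versioned,
# content-hashed, and attributed to a named owner.
class PromptRegistry:
    def __init__(self):
        self._entries = {}  # (name, version) -> metadata

    def register(self, name: str, text: str, owner: str) -> str:
        digest = hashlib.sha256(text.encode("utf-8")).hexdigest()[:12]
        version = sum(1 for (n, _) in self._entries if n == name) + 1
        self._entries[(name, version)] = {"text": text, "owner": owner, "digest": digest}
        return f"{name}@v{version}:{digest}"
```

The returned tag (name, version, hash) is what a deployment would log, so compliance checks can confirm exactly which prompt text produced an output.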

Redesigning “Design” in DASUD for Generative AI

Generative AI requires a distinct design approach compared to traditional machine learning, emphasising the need to classify use cases by their impact: informational, decision-support, or action-taking. Organisations should establish guidelines addressing acceptable error levels, prohibited areas, and human oversight, enabling effective management of risks associated with GenAI outputs.
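The three-way impact classification maps naturally to escalating human-oversight requirements. The mapping below is a hedged sketch: the oversight labels are assumptions illustrating how the informational / decision-support / action-taking split described above might be encoded, not a prescribed standard.

```python
# Hypothetical oversight requirements per impact class, ordered by severity.
OVERSIGHT = {
    "informational": "spot-check samples after release",
    "decision-support": "human review before the output informs a decision",
    "action-taking": "human approval before any action executes",
}

def required_oversight(impact: str) -> str:
    """Look up the oversight rule for a use case; reject unknown classes."""
    if impact not in OVERSIGHT:
        raise ValueError(f"unknown impact class: {impact!r}")
    return OVERSIGHT[impact]
```

Rejecting unknown classes, rather than defaulting to the lightest rule, ensures a new use case cannot slip through without being classified first.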