When you add memory to AI agents, they start to feel more helpful—and more dangerous. Memory makes interactions smoother: agents remember preferences, context, and prior actions. But it also raises hard governance questions: What exactly is being stored? For how long? Who can see it? Can it leak between users or clients?

The “Store” stage in DASUD is where you answer those questions deliberately, instead of letting memory grow organically.

What counts as “memory” for agents?

Agent memory isn’t a single thing. It can include:

  • Short‑term state: context within a single task or session (e.g., prior steps in a workflow).
  • Long‑term per‑user memory: preferences, prior tasks, typical patterns (“this user prefers summaries with bullet points”).
  • Global/shared memory: patterns or knowledge learned across many users (“this troubleshooting step works well for error X”).
  • System state: checkpoints recording what the agent has done (e.g., tickets updated, emails drafted, tools called).

All of this may be stored in different places: databases, vector stores, cache layers, logs. From a governance perspective, you need a unified view.
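One way to get that unified view is to describe every memory entry, wherever it physically lives, with the same governance metadata. The sketch below is illustrative only: the class, field names, and enum values are assumptions, not an existing API.

```python
from dataclasses import dataclass
from enum import Enum

class Scope(Enum):
    SESSION = "session"
    USER = "user"
    TENANT = "tenant"
    GLOBAL = "global"

class Sensitivity(Enum):
    LOW = "low"
    CONFIDENTIAL = "confidential"
    REGULATED = "regulated"

@dataclass(frozen=True)
class MemoryRecord:
    """One governed memory entry, regardless of which backend holds it."""
    key: str                  # e.g. "preferred_format"
    value: str                # the remembered content
    scope: Scope              # who this memory belongs to
    sensitivity: Sensitivity  # drives access control and review
    retention_days: int       # past this, the record becomes a liability
    backing_store: str        # e.g. "postgres", "vector_store", "cache"

record = MemoryRecord(
    key="preferred_format",
    value="bullet-point summaries",
    scope=Scope.USER,
    sensitivity=Sensitivity.LOW,
    retention_days=365,
    backing_store="postgres",
)
```

Because every record carries its scope, sensitivity, and retention, governance questions can be answered by querying the metadata rather than by spelunking through each individual store.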

Segment memory by scope and sensitivity

The first rule of agent memory governance: don’t let everything mix.

Decide, for each type of memory:

  • Scope: is it per‑session, per‑user, per‑team, per‑tenant, or global?
  • Sensitivity: does it contain personal data, confidential business information, or regulated content?
  • Retention: how long is it useful, and when does it become a liability?

Examples:

  • Session context: usually low‑risk and short‑lived; it can often be dropped once the task completes.
  • Per‑user preferences: potentially sensitive (especially if they include behavioural patterns), and should be clearly documented and resettable.
  • Global optimisation data: may be less personal, but still requires care, particularly if derived from sensitive domains.

Segmentation means separate storage for each scope, with permissions aligned to that scope. Per‑tenant or per‑client separation is especially important in multi‑tenant environments.
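A minimal sketch of that separation, assuming a hypothetical in-memory registry: each (tenant, scope) pair gets its own partition, and there is no API that reads across partitions, so cross-tenant leakage requires deliberate code rather than an accident.

```python
from collections import defaultdict

class SegmentedMemory:
    """Keeps each (tenant, scope) partition in its own store so memories
    never mix across tenants or scopes by accident."""

    def __init__(self):
        # (tenant_id, scope) -> {key: value}; in production each partition
        # would be a separate store with its own permissions.
        self._partitions = defaultdict(dict)

    def write(self, tenant_id: str, scope: str, key: str, value: str) -> None:
        self._partitions[(tenant_id, scope)][key] = value

    def read(self, tenant_id: str, scope: str, key: str):
        # A caller can only read the partition it names explicitly.
        return self._partitions[(tenant_id, scope)].get(key)

store = SegmentedMemory()
store.write("acme", "user:alice", "style", "bullet points")
```

The key design choice is that tenant and scope are part of every call signature: nothing defaults to “all memory”.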

Define what agents are allowed to remember

You don’t have to let agents remember everything they see.

Design rules for what can be stored as memory:

  • Allowed: high‑level preferences, non‑sensitive task summaries, generic learnings (e.g., “knowledge base article X works well for issue Y”).
  • Restricted or disallowed: detailed personal circumstances, protected characteristics, sensitive HR or health details, and free‑form text copied verbatim from user inputs, unless strictly necessary and explicitly consented to.

These rules should align with your data classification and privacy policies. For example, you might allow storage of “technical context” but not “personally identifying information” in long‑term memory.
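Such rules can be enforced as a write-time gate. The category names and PII patterns below are hypothetical and deliberately crude; a real deployment should derive both from its data classification policy and use a proper classifier or DLP service rather than regexes.

```python
import re

# Hypothetical allow-list of memory categories (assumption, not a standard).
ALLOWED_CATEGORIES = {"preference", "task_summary", "generic_learning"}

# Crude illustrative PII patterns -- far from exhaustive.
PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # US SSN-like number
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email address
]

def may_store(category: str, text: str) -> bool:
    """Allow a memory write only if the category is permitted and the text
    contains no obviously personally identifying information."""
    if category not in ALLOWED_CATEGORIES:
        return False
    return not any(p.search(text) for p in PII_PATTERNS)
```

Running the gate on every candidate memory write means the “restricted or disallowed” list is a property of the system, not just of the policy document.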

Make reset and control part of “Store”

Governance isn’t just about what you store—it’s about how users and admins can control it.

Plan for:

  • User‑level reset: a way for a user (or their administrator) to clear the memory the agent holds about them, including preferences, histories, and other state linked to them.
  • Tenant‑level reset: for clients or business units, the ability to reset or purge agent memories across their environment (e.g., at contract end, or after policy changes).
  • System‑level reset: a mechanism to clear or archive global or shared memories if a serious issue is discovered.

These resets should have clear, documented effects: what is actually cleared, what remains (e.g., aggregated statistics), and how quickly the change takes effect.
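One way to make those effects documented by construction is to have every reset return a report of exactly what it removed. The function and data shape below are a sketch under the assumption that memory partitions are keyed by (tenant, user).

```python
def reset_memory(partitions: dict, *, user_id=None, tenant_id=None) -> dict:
    """Clear matching partitions and return a report of what was removed.

    `partitions` maps (tenant_id, user_id) -> list of memory keys.
    With only user_id set, this is a user-level reset; with only
    tenant_id, a tenant-level reset; with neither, a system-level reset.
    """
    cleared = {}
    for (tenant, user) in list(partitions):
        if user_id is not None and user != user_id:
            continue
        if tenant_id is not None and tenant != tenant_id:
            continue
        # Record how many entries each partition held before deletion,
        # so the reset's effect can be logged and audited.
        cleared[(tenant, user)] = len(partitions.pop((tenant, user)))
    return cleared

mem = {("acme", "alice"): ["style", "history"], ("acme", "bob"): ["style"]}
report = reset_memory(mem, user_id="alice")
```

Returning the report (rather than silently deleting) is what lets you state, precisely, what was cleared and what remains.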

Protect access to memory stores

Access control is critical:

  • Limit access to raw memory data: only specific technical and governance roles should be able to inspect detailed memory stores, and only for approved purposes (debugging, audits, incident response).
  • Log access: any access to memory stores—especially for per‑user or sensitive domains—should be logged and auditable.
  • Use least privilege: agents themselves should only access the memory they need for a given context, not every piece of stored state.

Treat memory stores like you would sensitive data warehouses, not casual caches.
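All three controls can live in one thin wrapper around the raw store: an explicit role allow-list (least privilege) plus an audit trail for every read, denied or not. The class, role names, and log format here are illustrative assumptions.

```python
import logging

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("memory.audit")

class AuditedMemoryStore:
    """Wraps a raw memory store so every read is authorised and logged."""

    def __init__(self, data: dict, allowed_roles: set):
        self._data = data
        self._allowed_roles = allowed_roles  # explicit allow-list

    def read(self, key: str, *, actor: str, role: str, purpose: str):
        if role not in self._allowed_roles:
            audit_log.warning("DENIED %s (%s) reading %r", actor, role, key)
            raise PermissionError(f"role {role!r} may not read memory")
        # Every successful read records who, what, and why.
        audit_log.info("%s (%s) read %r for %s", actor, role, key, purpose)
        return self._data.get(key)

store = AuditedMemoryStore({"style": "bullets"}, allowed_roles={"sre", "dpo"})
```

Requiring a `purpose` argument on every read nudges callers toward the “only for approved purposes” rule, and leaves that purpose in the audit trail.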

Make it concrete

For one agent system in your environment:

  • List all types of memory it currently uses or could use (session, per‑user, global).
  • Classify each by scope, sensitivity, and retention.
  • Decide what’s allowed to be stored in each, and what’s out‑of‑bounds.
  • Implement or specify reset mechanisms at user, tenant, and system level.
  • Review access controls and logging for memory stores.
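The outcome of that exercise can be captured as data rather than a document, so the inventory is reviewable and diffable alongside code. The memory types, field values, and check below are one hypothetical example, not a template you must follow.

```python
# Governance decisions every memory type must have (assumption for this sketch).
REQUIRED_FIELDS = {"scope", "sensitivity", "retention", "allowed", "reset"}

MEMORY_INVENTORY = {
    "session_context": {
        "scope": "session", "sensitivity": "low",
        "retention": "end of task", "allowed": "workflow steps",
        "reset": "automatic on completion",
    },
    "user_preferences": {
        "scope": "per-user", "sensitivity": "medium",
        "retention": "365 days", "allowed": "high-level preferences only",
        "reset": "user- and tenant-level",
    },
    "global_learnings": {
        "scope": "global", "sensitivity": "low",
        "retention": "indefinite, reviewed quarterly",
        "allowed": "aggregated, non-personal patterns",
        "reset": "system-level",
    },
}

# Fail fast if any memory type is missing a governance decision.
for name, entry in MEMORY_INVENTORY.items():
    missing = REQUIRED_FIELDS - entry.keys()
    assert not missing, f"{name} missing governance decisions: {missing}"
```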

By handling “Store” for agents intentionally, you avoid two major risks at once: uncontrolled hoarding of sensitive data, and silent cross‑user or cross‑client leakage.

If you’d like assistance or advice with your Data Governance implementation, or any other topic (Privacy, Cybersecurity, Ethics, AI and Product Management) please feel free to drop me an email here and I will endeavour to get back to you as soon as possible. Alternatively, you can reach out to me on LinkedIn and I will get back to you within the same day!
