Make Memory work for you.

Designing transparency and control into AI recall.

Product
Sep 2, 2025 · Mistral AI

As conversational AIs get more capable, our expectations grow with them. We don’t just want faster answers; we want tools that remember, adapt, and fit the way we work. That’s where Memories (beta) comes in. And with it come new questions: What should an AI remember? How should it recall? And what does it take for you to trust it?

What users told us.

When we talked to users across personal and professional settings, three consistent needs stood out: transparency, control, and focus. People want the ability to ask, “What do you know about me?” and get a clear answer. They also want memory to stay on-task. As one Le Chat user put it: “I need a hammer, not a friend.”

For some, that means citations: show me exactly which conversation or file a response is pulling from. For others, it means scoped recall: project-specific details, not casual asides from months ago. Whatever the preference, one thing is clear: memory should stay visible, editable, and under your control.

A memory system built for users.

Many AI tools today store information automatically. Some resurface it without warning; others only recall when you explicitly ask.

Le Chat takes a hybrid approach. It saves useful information automatically, like jotting down a note while you talk. But recall is designed to be smart, timely, and visible. You’ll always see what memory is in play, with links to the source.
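The hybrid behavior described above can be sketched in simplified form. This is an illustrative outline, not Le Chat’s actual implementation: the `Memory` and `MemoryStore` names, the keyword matching, and the source identifiers are all assumptions standing in for a real system. The point it shows is the pairing of automatic capture with recall that always carries a link back to its source.

```python
from dataclasses import dataclass, field

@dataclass
class Memory:
    """A saved note with a link back to where it came from."""
    text: str
    source: str  # e.g. a conversation or file identifier

@dataclass
class MemoryStore:
    memories: list[Memory] = field(default_factory=list)

    def save(self, text: str, source: str) -> None:
        # Automatic capture: jot down a note as the conversation happens.
        self.memories.append(Memory(text, source))

    def recall(self, query: str) -> list[Memory]:
        # Visible recall: return matches *with* their sources, so the
        # assistant can show "clickable receipts" alongside the answer.
        terms = query.lower().split()
        return [m for m in self.memories
                if any(t in m.text.lower() for t in terms)]

store = MemoryStore()
store.save("Prefers concise answers with citations", source="chat-2025-08-14")
hits = store.recall("citations")
for m in hits:
    print(f"{m.text}  (from {m.source})")
```

A production system would use semantic retrieval rather than keyword matching, but the contract is the same: no recall without a visible source.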

That design comes from a simple philosophy: An AI assistant should help you think better, not leave you guessing about what it’s doing. Here’s how we put that into practice.

Three principles behind Le Chat’s memory.

Transparency.

You’ll always know when memory is being used. Le Chat clearly shows when it’s recalling something, where it came from, and why it’s relevant. Think of it like clickable receipts.

Agency.

Memory is something you manage—not something that manages you.
You can:

  • Turn Memories off anytime

  • Start an incognito chat that doesn’t use memory

  • Edit or delete individual memories from your log

Sovereignty.

You own your memories. Export them. Import from elsewhere. Memories in Le Chat are portable and interoperable by design, because control shouldn’t stop at the interface.
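Portability ultimately means a plain, documented format that memories can leave in and return in. As a hedged sketch (the JSON shape and function names here are assumptions for illustration, not Le Chat’s export format), round-tripping might look like:

```python
import json
from dataclasses import asdict, dataclass

@dataclass
class Memory:
    text: str
    source: str

def export_memories(memories: list[Memory]) -> str:
    # Serialize to plain JSON so memories can leave the product intact.
    return json.dumps([asdict(m) for m in memories], indent=2)

def import_memories(payload: str) -> list[Memory]:
    # Accept the same format back, whether it was exported here or elsewhere.
    return [Memory(**item) for item in json.loads(payload)]

exported = export_memories([Memory("Works in UTC+2", source="settings")])
restored = import_memories(exported)
```

Using an open interchange format like JSON is what makes “interoperable by design” more than a slogan: any other tool that reads the schema can consume the export.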

What it helps you do.

Memory makes your assistant more useful over time, without getting in the way. That can look like:

  • Recalling how you solved a similar problem last quarter

  • Surfacing a past insight you’d forgotten

  • Connecting your current query to something you said in another thread

One user asked a follow-up about a legal policy weeks after uploading a PDF. Le Chat instantly found the right section in the document, without needing to re-upload or re-explain. That kind of connective recall saves time and unlocks flow. No retraining. No do-overs.

Try it now: Memory Insights.

We introduced Memory Insights, lightweight prompts that help you explore what Le Chat remembers and how it can help. They surface trends, suggest summaries, and point out moments worth revisiting, all based on your own data, and all editable. It’s a simple way to turn memory from passive storage into active signal. Download Le Chat on the App Store or Google Play to try memory on mobile.

What’s next?

We’re continuing to improve how memory works: trimming noise, speeding up recall, and making it easier to organize over the long term. You’ll see updates soon that let you sort memories into categories, instantly forget something, and get clearer visibility into what memory was used and when.

Under the hood, we’ve built a graph-based architecture that balances performance with context-awareness, so memory doesn’t just get longer; it gets smarter.

AI is still early. Models will change, and fast. Memories are what keep your assistant anchored in your context, even as everything else shifts. Not a feature. Not a friend. A system you can trust, and one that grows with you.
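To give a feel for why a graph helps, here is a minimal sketch of graph-shaped memory. Everything in it is an assumption for illustration (the post does not describe the actual architecture): memories are nodes, edges link memories that share context, and recall walks the graph instead of scanning everything, which is how related details can surface together.

```python
from collections import defaultdict

class MemoryGraph:
    """Memories as nodes; edges connect memories that share context
    (same project, thread, or topic), so recall can follow relationships
    instead of scanning the whole store."""

    def __init__(self) -> None:
        self.notes: dict[str, str] = {}
        self.edges: defaultdict[str, set] = defaultdict(set)

    def add(self, key: str, text: str, related_to=()) -> None:
        self.notes[key] = text
        for other in related_to:
            # Undirected edge: each memory knows about the other.
            self.edges[key].add(other)
            self.edges[other].add(key)

    def recall(self, key: str, hops: int = 1) -> list[str]:
        # Breadth-first walk: start from one memory and pull in
        # neighbors up to `hops` links away.
        seen, frontier = {key}, {key}
        for _ in range(hops):
            frontier = {n for k in frontier for n in self.edges[k]} - seen
            seen |= frontier
        return [self.notes[k] for k in sorted(seen)]

g = MemoryGraph()
g.add("q3-report", "Drafted the Q3 report outline")
g.add("q3-budget", "Flagged a budget overrun", related_to=["q3-report"])
g.add("travel", "Prefers morning flights")
```

Recalling `"q3-report"` pulls in the linked budget note but leaves the unrelated travel preference alone, which is the “context-awareness” a flat log can’t cheaply provide.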