Hello! This is Sergei.
If you've been building or using agentic systems, you've hit this from time to time: your agents start pulling irrelevant information, and outputs go sideways.
I see this constantly. The problem usually isn't the system itself (though I've started using small models for small tasks, to exclude extra knowledge and keep the system from being too smart for the job). The problem is memory structure.
This week I read Markus Franz's post on LinkedIn (he's CTO & Incubator Lab lead at Ippen Digital). He breaks agent memory into three layers:
Procedural memory — rules, roles, tool access, guardrails. How the agent behaves.
Semantic memory — facts, terminology, style, preferences. What the agent knows.
Episodic memory — past interactions and outcomes. What happened, what worked, what failed.
Here's what this looks like in practice: most of us dump everything into one undifferentiated memory blob and hope the LLM figures it out. It doesn't. But when we separate these layers, agents can actually retrieve the right information at the right time.
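To make the split concrete, here is a minimal sketch in Python. All the names and structures are illustrative assumptions of mine, not Franz's implementation or any real framework's API; the point is only that each layer lives in its own store and gets queried on its own terms when the agent builds context.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: three separate stores instead of one memory blob.

@dataclass
class AgentMemory:
    # Procedural: how the agent behaves (rules, roles, tool access, guardrails).
    procedural: list[str] = field(default_factory=list)
    # Semantic: what the agent knows (facts, terminology, style, preferences).
    semantic: dict[str, str] = field(default_factory=dict)
    # Episodic: what happened (past interactions and their outcomes).
    episodic: list[dict] = field(default_factory=list)

    def build_context(self, task: str) -> str:
        """Assemble prompt context by pulling from each layer deliberately,
        rather than dumping everything and hoping the LLM sorts it out."""
        rules = "\n".join(self.procedural)  # guardrails always load
        facts = "\n".join(
            f"{key}: {value}" for key, value in self.semantic.items()
            if key.lower() in task.lower()  # facts load only when relevant
        )
        lessons = "\n".join(
            episode["lesson"] for episode in self.episodic
            if episode["task"] in task.lower() and episode["outcome"] == "failed"
        )  # past failures surface as lessons
        return (f"RULES:\n{rules}\n\n"
                f"RELEVANT FACTS:\n{facts}\n\n"
                f"PAST LESSONS:\n{lessons}")


memory = AgentMemory()
memory.procedural.append("Never publish without a human editor's sign-off.")
memory.semantic["headline"] = "House style: sentence case, max 70 characters."
memory.episodic.append({
    "task": "headline",
    "outcome": "failed",
    "lesson": "Clickbait phrasing was rejected by the desk editor.",
})

print(memory.build_context("draft a headline for the election story"))
```

The specific data structures don't matter; what matters is that guardrails are always in context, facts are retrieved only when they're relevant to the task, and past outcomes come back as lessons rather than noise.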

Read the full post here.
Also, I've collected a bunch of new AI initiatives, resources, and news this week. One example I really liked: a project from Politico 🇵🇹 — an AI-powered, Tinder-style app for choosing your presidential candidate (we'll have elections this weekend in Portugal). Simple idea, clear use case, and a good example of how AI can make political information more accessible instead of just "smarter." Read more about it here.
New on AI For Newsroom this week
Stories, guides, initiatives, and signals we surfaced in this issue.