
PlugMem Under the Hood: Why Knowledge-Centric Memory Changes LLM Agents
Most agent-memory systems still do the lazy thing: store raw interaction history, retrieve a few chunks, and hope the base model compresses the mess at inference time. PlugMem starts from a stronger assumption: the useful part of experience is sparse and structured, and it should be compiled before retrieval, not reconstructed after it.

That is why this paper matters. PlugMem was submitted to arXiv on February 6, 2026, published on the Microsoft Research site on March 6, 2026, and its PDF metadata marks it as an ICML 2026 proceedings paper. As of April 5, 2026, the code and benchmark artifacts are public.

The claim is ambitious but concrete: a single task-agnostic memory module, attached unchanged to very different agents, can beat both raw-memory baselines and several task-specific memory systems while using much less agent-side context.
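The contrast between the two designs can be made concrete with a small sketch. This is not PlugMem's actual API; every class and method name below is illustrative, and the "compilation" step is reduced to a manual key/lesson pair purely to show where distillation happens relative to retrieval.

```python
# Hypothetical sketch (names are illustrative, not PlugMem's actual interface):
# a raw-history memory versus a compiled, knowledge-centric one.

class RawHistoryMemory:
    """Baseline: store every interaction verbatim, retrieve matching chunks."""
    def __init__(self):
        self.log = []

    def write(self, event: str):
        self.log.append(event)

    def retrieve(self, query: str, k: int = 3):
        # Naive keyword match over the raw log; the agent must
        # re-compress these transcript chunks in-context at inference time.
        hits = [e for e in self.log if any(w in e for w in query.split())]
        return hits[-k:]


class CompiledKnowledgeMemory:
    """Knowledge-centric: distill experience into sparse, structured entries
    before retrieval, so the agent reads lessons, not transcripts."""
    def __init__(self):
        self.knowledge = {}  # key -> distilled, reusable lesson

    def compile(self, key: str, lesson: str):
        # Compilation happens at write time: only the reusable lesson is kept.
        self.knowledge[key] = lesson

    def retrieve(self, query: str, k: int = 3):
        hits = [v for key, v in self.knowledge.items() if key in query]
        return hits[:k]


raw = RawHistoryMemory()
raw.write("agent ran deploy.sh; failed: missing env var API_KEY")
raw.write("agent set API_KEY; deploy.sh succeeded")

compiled = CompiledKnowledgeMemory()
compiled.compile(key="deploy",
                 lesson="deploy.sh requires API_KEY to be set before running")

print(raw.retrieve("deploy"))       # two verbatim transcript chunks
print(compiled.retrieve("deploy"))  # one distilled rule, far less context
```

The point of the toy is the placement of the compression step: the raw baseline defers it to the agent's context window at inference time, while the knowledge-centric store pays for it once at write time and serves compact entries afterward.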