<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>State-Space-Models on Pavel Nasovich's Blog</title>
    <link>https://forcewake.me/tags/state-space-models/</link>
    <description>Recent content in State-Space-Models on Pavel Nasovich's Blog</description>
    <generator>Hugo -- 0.157.0</generator>
    <language>en-us</language>
    <copyright>Copyright 2026</copyright>
    <lastBuildDate>Sun, 01 Mar 2026 14:22:24 +0100</lastBuildDate>
    <atom:link href="https://forcewake.me/tags/state-space-models/index.xml" rel="self" type="application/rss+xml"/>
    <item>
      <title>The Linear Revolution at ICLR 2026: Mamba-3, EFLA, and the End of the Quadratic Bottleneck</title>
      <link>https://forcewake.me/the-linear-revolution-at-iclr-2026-mamba-3-efla-and-the-end-of-the-quadratic-bottleneck/</link>
      <pubDate>Sun, 01 Mar 2026 00:00:00 +0000</pubDate>
      <guid>https://forcewake.me/the-linear-revolution-at-iclr-2026-mamba-3-efla-and-the-end-of-the-quadratic-bottleneck/</guid>
      <description>A NotebookLM-generated analysis of why post-Transformer sequence models became one of the hottest AI research topics in early 2026, with a focus on Mamba-3 and Error-Free Linear Attention.</description>
    </item>
  </channel>
</rss>