On April 16, 2026, I replaced my earlier local Gemma run with a heavier stack:

```shell
llama-server -hf unsloth/Qwen3.6-35B-A3B-GGUF:UD-Q8_K_XL --jinja
```

The result was not "a slightly better chat model." The result was a qualitatively different local agent loop. This time the agent did not stop at repo reconnaissance, rough planning, or code scaffolding. It wrote a research narrative, generated a slide deck module by module, rebuilt after failures, converted the deck to PDF, rasterized slides for visual inspection, read the resulting PNGs, ran text extraction against the .pptx, checked for placeholder residue, and then closed the loop with targeted repairs.

That is not AGI. But it is no longer a toy local demo either. ...
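The placeholder-residue check in that loop can be sketched with stdlib Python alone. This is a minimal sketch, not the agent's actual implementation: assume the slide text has already been extracted (e.g. by walking text frames with python-pptx), and that `find_placeholder_residue` and the `RESIDUE_PATTERNS` list are hypothetical names with an assumed, non-exhaustive set of template-leftover patterns.

```python
import re

# Patterns that typically indicate unfinished template content.
# This specific list is an assumption for illustration, not the
# agent's actual checklist.
RESIDUE_PATTERNS = [
    re.compile(r"\{\{.*?\}\}"),         # Jinja-style {{ placeholders }}
    re.compile(r"\bTODO\b|\bTBD\b"),    # unfinished-work markers
    re.compile(r"lorem ipsum", re.I),   # filler text
    re.compile(r"<insert .*?>", re.I),  # template prompts
]

def find_placeholder_residue(slide_texts):
    """Return (slide_index, matched_fragment) pairs for suspicious content.

    `slide_texts` is a list of strings, one per slide, e.g. collected
    from a .pptx by concatenating each slide's text frames.
    """
    hits = []
    for i, text in enumerate(slide_texts):
        for pattern in RESIDUE_PATTERNS:
            for match in pattern.finditer(text):
                hits.append((i, match.group(0)))
    return hits

if __name__ == "__main__":
    slides = [
        "Q1 Results: revenue up 12%",
        "Roadmap: {{milestone_table}} TBD",
    ]
    for idx, fragment in find_placeholder_residue(slides):
        print(f"slide {idx}: {fragment!r}")
```

An agent loop would feed any hits back as a repair prompt ("slide 1 still contains `{{milestone_table}}`"), which is the "targeted repairs" step described above.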







