Richmond, VA · 45°F · Partly Cloudy

THE MORNING PAPER

"All the Intelligence That's Fit to Print"

🧠 Front Page: Models & Research

GPT-5.2 Pro Derives New Result in Theoretical Physics

Not a benchmark win — an actual contribution to science. OpenAI's GPT-5.2 Pro found that "single-minus gluon tree amplitudes," a class of particle interactions most physicists assumed vanished, are in fact nonzero under specific conditions. Human authors computed cases up to n=6 by hand. GPT-5.2 simplified the expressions, spotted a pattern, and conjectured a general formula. A scaffolded version then spent 12 hours verifying it. The paper is on arXiv, co-authored with researchers from the Institute for Advanced Study, Harvard, Cambridge, and Vanderbilt. This is the clearest example yet of AI as research collaborator, not just summarizer.

Gemini 3 Deep Think V2 Rolling Out

Google's new reasoning mode hits 84.6% on ARC-AGI-2 and 3455 Elo on Codeforces. Notable for practical applications: error detection in math papers, semiconductor optimization, and a sketch-to-CAD/STL pipeline for 3D printing. François Chollet projects human-AI parity on ARC around 2030.

GLM-5: New Opus-Class Open Weights

Zhipu AI releases GLM-5, 355B-744B parameters with DeepSeek Sparse Attention. 200K context in, 128K out. SOTA on BrowseComp. Available on OpenRouter, Ollama, DeepInfra. The Chinese open-weights ecosystem keeps accelerating.
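If you want to poke at it, OpenRouter exposes an OpenAI-compatible endpoint, so a standard client works. A minimal sketch follows; note that the model slug is a placeholder assumption, so check OpenRouter's model list for the real identifier.

```python
# Minimal sketch: calling GLM-5 through OpenRouter's OpenAI-compatible endpoint.
# The slug "z-ai/glm-5" is a placeholder assumption, not a confirmed listing.
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",  # OpenRouter's OpenAI-compatible API
    api_key="YOUR_OPENROUTER_KEY",
)

response = client.chat.completions.create(
    model="z-ai/glm-5",  # placeholder model slug
    messages=[{"role": "user", "content": "Summarize DeepSeek Sparse Attention in two sentences."}],
    max_tokens=512,
)
print(response.choices[0].message.content)
```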

Qwen-Image 2.0 & Seedance 2.0

Alibaba's unified generation AND editing model at 2K resolution. ByteDance's Seedance 2.0 marks a big leap in text-to-video quality.

Inference & Infrastructure

The Two Speeds of "Fast Mode"

A trending explainer breaks down how Anthropic's and OpenAI's fast modes take completely different approaches. Anthropic's fast mode is low-batch-size inference — same Opus 4.6, 2.5x faster, 6x the cost. You're paying to skip the bus queue. OpenAI's Codex Spark uses Cerebras wafer-scale chips — 1000+ tokens per second, 15x faster, but it's a different, worse model that gets confused on tool calls. The tradeoff: Anthropic sells the same model faster. OpenAI sells a faster model.

MiniMax AMA on Reddit

The founders of MiniMax (M2.5, Hailuo video, Speech, Music) are doing a live Q&A. Worth following if you care about multi-modal stacks.

🔭 Current Interests

Moonshot's Agent Swarm

Kimi's new Agent Swarm supports up to 100 sub-agents with 4.5x faster parallel execution. Multi-agent orchestration is moving from research novelty to production feature. The patterns emerging — supervisor agents, task decomposition, result aggregation — are becoming the standard playbook.
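For anyone who hasn't built one: the pattern itself is small. Here is a minimal sketch of the supervisor / decomposition / aggregation loop, with call_sub_agent standing in for a real model call. This shows the shape of the playbook, not Moonshot's implementation.

```python
# Minimal sketch of the supervisor -> sub-agent -> aggregation pattern.
# call_sub_agent() is a stand-in for an actual LLM call.
from concurrent.futures import ThreadPoolExecutor


def decompose(task: str) -> list[str]:
    # In a real system the supervisor model would produce this plan itself.
    return [f"{task} :: subtask {i}" for i in range(1, 5)]


def call_sub_agent(subtask: str) -> str:
    # Stand-in for a model call; sub-agents run independently and in parallel.
    return f"result for ({subtask})"


def aggregate(results: list[str]) -> str:
    # The supervisor merges sub-agent outputs into one answer.
    return "\n".join(results)


def supervisor(task: str, max_workers: int = 4) -> str:
    subtasks = decompose(task)
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        results = list(pool.map(call_sub_agent, subtasks))
    return aggregate(results)


if __name__ == "__main__":
    print(supervisor("research GLM-5 deployment options"))
```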

OpenAI Responses API Goes Agentic

The Responses API is gaining server-side compaction, hosted containers, and a Skills API for multi-hour agent workflows. This is OpenAI building the infrastructure layer for agents that run for hours, not seconds. The sandbox-as-a-tool architecture is the interesting pattern — agents that can spin up their own execution environments.
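The pattern is easy to sketch outside any particular vendor API. Below, a local subprocess plus a temp directory stands in for a hosted container; this illustrates the sandbox-as-a-tool shape, not the actual Responses API surface.

```python
# Generic illustration of "sandbox as a tool": the agent requests an execution
# environment, runs code in it, and reads back the result. A temp directory and
# subprocess stand in for a hosted container.
import subprocess
import sys
import tempfile
from pathlib import Path


def run_in_sandbox(code: str, timeout: int = 30) -> str:
    """Execute agent-generated code in an isolated working directory."""
    with tempfile.TemporaryDirectory() as workdir:
        script = Path(workdir) / "task.py"
        script.write_text(code)
        proc = subprocess.run(
            [sys.executable, str(script)],
            cwd=workdir,
            capture_output=True,
            text=True,
            timeout=timeout,
        )
        return proc.stdout if proc.returncode == 0 else proc.stderr


# An agent loop would expose this as one of its tools:
print(run_in_sandbox("print(sum(range(10)))"))  # -> 45
```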

Geo-Localization From Video

Computer vision pipelines that map pixels to GPS coordinates are getting practical. Homography transforms, satellite imagery alignment, and real-time player tracking are converging. Sports broadcast is the proving ground — but the applications extend to any domain where you need to answer "where exactly is this happening?"
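The core step is a plane-to-plane homography: match a handful of known points (field corners, say) between the camera frame and georeferenced satellite imagery, fit the transform, then project any tracked pixel through it. A minimal sketch with made-up correspondences:

```python
# Fit a homography from known pixel <-> (lon, lat) correspondences, then project
# arbitrary pixels to geographic coordinates. The numbers below are invented
# purely for illustration.
import cv2
import numpy as np

# Known correspondences: image pixels -> (longitude, latitude).
pixel_pts = np.array([[120, 80], [1800, 95], [1750, 1000], [140, 980]], dtype=np.float32)
geo_pts = np.array(
    [[-77.4605, 37.5538], [-77.4590, 37.5538], [-77.4590, 37.5528], [-77.4605, 37.5528]],
    dtype=np.float32,
)

# Fit the plane-to-plane mapping (with more matches, pass cv2.RANSAC to reject outliers).
H, _ = cv2.findHomography(pixel_pts, geo_pts)

# Project a new pixel (say, a tracked player's feet) into geo coordinates.
player_pixel = np.array([[[960.0, 540.0]]], dtype=np.float32)
lon, lat = cv2.perspectiveTransform(player_pixel, H)[0, 0]
print(f"player at lon={lon:.6f}, lat={lat:.6f}")
```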

Industry & Market

"18 Months" — Suleyman's White-Collar Prediction

Microsoft AI chief Mustafa Suleyman tells Fortune that all white-collar work will be automated within 18 months. The timeline is almost certainly wrong, but the direction isn't. The interesting question isn't "when will AI replace jobs" — it's which jobs become dramatically more productive first, and who captures that surplus.

ChatGPT Personality Unlocked

Sam Altman announces personality customization — users can make ChatGPT act like a friend, use emoji, be more human. The "treat adults like adults" era. The shift from one-size-fits-all to personalized AI personality is a signal: the next wave of AI products won't compete on capability alone, but on character.

$60M for Code Context

EntireHQ raised a $60M seed round for a Git-compatible database capturing code intent and agent context. The bet: AI coding agents need richer context than git diffs. If they're right, "context layer" becomes as fundamental as version control itself.

Briefings

  • Zvec (Alibaba) — lightweight in-process vector database. Worth watching for anyone building local RAG (a bare-bones version of the idea is sketched after this list).
  • YouTube Shorts blocked — a uBlock filter to hide Shorts is the most popular HN post today. 975 points. People really hate Shorts.
  • Publishers vs. Archive — news orgs limiting Internet Archive access over AI scraping fears. 519 points, 319 comments. Content licensing wars continue.
  • Instagram's URL Blackhole — how Instagram deliberately breaks outbound links. Useful context for social media strategy.
  • Sleep Mask Security Horror — a smart sleep mask broadcasting brainwaves to an open MQTT broker. IoT at its finest.
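On the Zvec item: the bare-bones version of an in-process vector store fits in a few lines of numpy, which is a useful mental model for what libraries like Zvec add indexing, persistence, and filtering on top of. A sketch with placeholder embeddings:

```python
# Bare-bones in-process vector store: embeddings in memory, nearest neighbors by
# cosine similarity. Random vectors stand in for real embeddings.
import numpy as np


class TinyVectorStore:
    def __init__(self, dim: int):
        self.vectors = np.empty((0, dim), dtype=np.float32)
        self.payloads: list[str] = []

    def add(self, vector: np.ndarray, payload: str) -> None:
        v = vector / np.linalg.norm(vector)          # normalize once at insert time
        self.vectors = np.vstack([self.vectors, v.astype(np.float32)])
        self.payloads.append(payload)

    def search(self, query: np.ndarray, k: int = 3) -> list[tuple[str, float]]:
        q = query / np.linalg.norm(query)
        scores = self.vectors @ q                    # cosine similarity against all rows
        top = np.argsort(scores)[::-1][:k]
        return [(self.payloads[i], float(scores[i])) for i in top]


store = TinyVectorStore(dim=128)
rng = np.random.default_rng(0)
for i in range(100):
    store.add(rng.normal(size=128), payload=f"doc-{i}")
print(store.search(rng.normal(size=128), k=3))
```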

😄 The Funny Pages

Terminal Art

Someone built a 60fps truecolor Perlin noise animation for the terminal in Rust. Pure vibes. No practical value. Beautiful.

200K Flash Games Saved

The Flashpoint Archive has preserved over 200,000 Flash games and animations. The internet's childhood, rescued from extinction. Nostalgia rabbit hole warning.

Starflight Decompiled

Someone reverse-engineered a 40-year-old space game from 1986. The dedication is unhinged in the best way.

The Physics Paper Credit

Kevin Weil, OpenAI's CPO, is a co-author on the particle physics preprint. Product managers really will put their name on anything.

"The big story is GPT-5.2 doing real physics — not benchmarks, not vibes, an actual result in theoretical particle physics. The quiet story is the infrastructure race: Anthropic and OpenAI are solving the same problem (fast inference) in completely opposite ways. Watch the architecture, not the marketing."