Reco.ai used AI-assisted 'vibe porting' to rewrite the JavaScript implementation of JSONata, a JSON query and transformation language, in Go in roughly one day, eliminating a costly runtime dependency and saving an estimated $500K/year. Simon Willison frames this as a repeatable pattern — using LLMs to port libraries between languages rather than wrapping them or paying licensing fees. The case study is concrete: working production code, real cost savings, documented process.
A federal judge ordered the Trump administration to rescind restrictions placed on Anthropic's relationship with the Defense Department. This is a significant legal win that clears Anthropic's path to federal and defense contracts, a multi-billion dollar market. It also signals that courts are willing to push back on executive branch interference in AI procurement.
Anthropic's paid Claude subscriptions have more than doubled in 2026, with total user estimates ranging from 18M to 30M — the wide range reflects Anthropic's deliberate opacity on metrics. Rapid paid subscription growth signals that Claude is converting free users at a rate that challenges the assumption that ChatGPT has an unassailable consumer lead. This is relevant for anyone building on or competing with frontier model APIs.
Self-propagating malware is actively targeting open source software packages and wiping machines geolocated in Iran. This is a supply chain attack vector that can silently compromise developer environments and production pipelines. Any team pulling open source dependencies without rigorous verification is potentially exposed.
A malicious package was injected into LiteLLM — one of the most widely used open source LLM routing libraries — and a researcher worked through the incident with Claude in real time, publishing the transcripts: confirming the vulnerability, scoping the blast radius, and identifying the correct PyPI security contact to report it. This is the highest-profile supply chain attack yet targeting AI infrastructure specifically. LiteLLM sits in the critical path of thousands of AI applications.
Google has revised its estimate for 'Q Day' — the point at which quantum computers can break RSA and elliptic curve cryptography — from the mid-2030s to 2029, a dramatic acceleration. The company is urging the entire industry to migrate off RSA and EC encryption immediately. This compresses the migration window for any system handling sensitive long-lived data, including AI model weights, training data, and API credentials.
Sam Rose published an interactive essay widely praised as one of the clearest visual explanations of LLM quantization yet written, covering how models are compressed from float32 down to int4 and below with minimal quality loss. This is educational infrastructure — the kind of resource that accelerates the next wave of engineers deploying local and edge models. Understanding quantization is now a prerequisite for anyone making cost/quality tradeoffs in inference.
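For readers who want the one-paragraph version of the idea before diving into the essay: the simplest scheme maps each float32 weight to an int8 integer plus a shared scale factor, cutting storage 4x at a small, bounded accuracy cost. A minimal sketch (function names and the toy weights below are our own, not from the essay):

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor quantization: float32 -> int8 + one scale."""
    scale = np.abs(weights).max() / 127.0  # largest magnitude maps to +/-127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximation of the original float32 weights."""
    return q.astype(np.float32) * scale

w = np.array([0.012, -0.43, 0.95, -1.27], dtype=np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
# Storage drops from 4 bytes to 1 byte per weight; the rounding error
# per weight is at most scale / 2.
```

Real schemes refine this with per-channel or per-group scales and sub-byte packing (int4 and below), which is where the essay's visuals earn their keep.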
Google released Gemini 3.1 Flash Live, an iteration of its real-time audio model focused on naturalness and reliability improvements for voice AI applications. The low HN score suggests limited technical disclosure so far, but audio/voice is Google's clearest path to consumer AI differentiation against OpenAI's Realtime API. This signals continued investment in streaming, low-latency multimodal inference as a product-grade capability.
That's today's briefing.
Get it in your inbox every morning — free.