Anthropic walked away from a Pentagon contract over AI safety red lines; OpenAI stepped in and Dario Amodei is publicly calling OpenAI's framing of the deal dishonest. This is the first high-profile public rupture between the two frontier labs on a concrete policy decision, not just positioning. Defense AI is now explicitly a differentiating axis — labs must pick a lane.
Anthropic is approaching a $20B annualized revenue run rate, signaling that the enterprise API and Claude product lines are scaling fast regardless of the Pentagon controversy. This puts Anthropic in a different capital position than most assumed: less dependent on any single contract or investor. It also implies Claude's B2B API adoption is compounding faster than the public narrative suggests.
Lio closed a $30M Series A led by a16z to build AI-native enterprise procurement automation, targeting a workflow category historically owned by clunky ERPs and manual sourcing teams. Procurement is a high-value, low-glamour enterprise function with deep inefficiency and measurable ROI — a profile that converts well in enterprise sales. a16z backing signals conviction that vertical AI agents are ready for the messy back-office.
OpenAI struck a deal allowing US military use of its models in classified settings — the exact scenario Anthropic's safety team had flagged as a red line, leading to their exit from the Pentagon contract. Sam Altman acknowledged the deal was 'definitely rushed,' which is a notable admission for a classified deployment context. This is the clearest signal yet that the labs have diverged not just in rhetoric but in actual deployment policy.
Jensen Huang signaled Nvidia will not make further investments in OpenAI or Anthropic, framing it as a natural end to early-stage bets — but the timing amid lab-vs-chip-maker tension raises strategic questions. This could reflect Nvidia hedging as labs develop custom silicon (Google TPUs, Amazon Trainium, internal efforts), reducing GPU dependency. A chip giant distancing from its biggest customers is a structural signal worth reading carefully.
Google released Gemini 3.1 Flash-Lite at $0.25/M input tokens and $1.50/M output tokens — 8x cheaper than Gemini 3.1 Pro — with four configurable thinking levels. This continues the trend of capable small models collapsing the cost floor for inference at scale. For high-volume, latency-tolerant workloads, this pricing makes previously uneconomical AI features viable.
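To make the cost floor concrete, here is a minimal sketch of what a high-volume workload costs at the stated Flash-Lite rates ($0.25/M input, $1.50/M output). The workload shape (requests per day, tokens per request) is an illustrative assumption, not a figure from the briefing.

```python
# Stated Gemini 3.1 Flash-Lite pricing, in USD per 1M tokens.
INPUT_PRICE_PER_M = 0.25
OUTPUT_PRICE_PER_M = 1.50

def monthly_cost(requests_per_day, input_tokens, output_tokens, days=30):
    """Estimate monthly inference spend for a fixed-shape workload."""
    total_in_m = requests_per_day * input_tokens * days / 1_000_000
    total_out_m = requests_per_day * output_tokens * days / 1_000_000
    return total_in_m * INPUT_PRICE_PER_M + total_out_m * OUTPUT_PRICE_PER_M

# Hypothetical workload: 100k requests/day, 2k input + 500 output tokens each.
print(f"${monthly_cost(100_000, 2_000, 500):,.2f}/month")  # → $3,750.00/month
```

At a few thousand dollars a month for a hundred thousand daily requests, features like bulk classification, summarization pipelines, or always-on assistants become viable where frontier-model pricing would have priced them out.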
Research demonstrates LLMs can correlate writing style, vocabulary, and behavioral patterns across platforms to de-anonymize pseudonymous users at scale — a capability previously requiring expert forensic effort. This effectively kills pseudonymity as a privacy mechanism for anyone who writes extensively online. The attack surface is passive, scalable, and requires no hacking — just text.
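The underlying idea predates LLMs: authorship attribution works by comparing statistical fingerprints of writing style. The sketch below is a toy version using classical function-word frequencies and cosine similarity, not the paper's LLM-based method; the word list, sample texts, and scoring are illustrative assumptions. What LLMs change is scale and robustness, not the basic principle.

```python
# Toy stylometric matching: compare function-word frequency profiles
# of two texts. A higher cosine score suggests (weakly) the same author.
from collections import Counter
import math

# Small illustrative set of style markers; real systems use hundreds
# of features (punctuation habits, n-grams, posting times, etc.).
FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "is",
                  "i", "it", "for", "but", "however", "actually"]

def profile(text):
    """Relative frequency of each function word in a text."""
    words = text.lower().split()
    counts = Counter(words)
    total = max(len(words), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical posts from two accounts on different platforms.
text_a = "i think that the model is actually quite good but the cost is high"
text_b = "actually i think the approach is good but it is costly"
score = cosine(profile(text_a), profile(text_b))  # in [0, 1]
```

The attack described in the research replaces hand-picked features with an LLM reading everything a pseudonym has ever posted, which is why it needs no hacking, just public text.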
Alibaba's Qwen team — which just released the impressive Qwen 3.5 open-weight model family — is experiencing high-profile leadership departures, raising questions about continuity of one of the strongest open-weight model lineages. Qwen 3.5 had been a serious challenger to closed frontier models on cost-adjusted benchmarks. If the team fragments, the open-weight competitive landscape loses a key counterweight to Meta's Llama.
That's today's briefing.
Get it in your inbox every morning — free.