Bluesky launched Attie, an AI-powered app that lets users create custom algorithmic feeds on the atproto open social protocol without writing code. This is Bluesky's first major AI-native product move and signals the company is building tooling to differentiate atproto as a programmable social substrate. The open-protocol angle means third-party builders can extend and compete on feed intelligence.
A California federal judge temporarily blocked the Pentagon from labeling Anthropic a supply-chain risk and from ordering agencies to stop using its AI, halting a month-long DOD campaign. The case exposes the growing political-risk frontier for AI vendors selling into government and the judiciary's willingness to push back on executive-branch interference in AI procurement. Anthropic's win here has implications for every frontier AI company pursuing federal contracts.
Starcloud raised a $170M Series A to build orbital data centers, becoming YC's fastest unicorn at 17 months post-demo day. Space-based compute offers potential advantages in cooling, solar power, and latency for certain global workloads. The speed of this raise signals serious institutional conviction in off-Earth infrastructure as a viable compute tier.
Google has revised its estimate for cryptographically relevant quantum computing ('Q Day') to 2029, years earlier than prior consensus, and is urging the entire industry to migrate off RSA and elliptic curve cryptography immediately. This compresses the migration runway for any system handling sensitive data — including AI model weights, API keys, and training pipelines. NIST's post-quantum standards are now effectively urgent, not aspirational.
Cheng Lou (React core contributor, creator of react-motion) released Pretext, a browser library that calculates wrapped-text height without touching the DOM — solving a longstanding performance bottleneck in UI rendering. The HN score of 442 signals strong developer resonance; this is the kind of primitive that quietly becomes a dependency in AI chat UIs, streaming text renderers, and canvas-based interfaces. It matters because accurate, fast text measurement is a prerequisite for polished AI-generated content presentation.
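The core idea behind DOM-free text measurement can be sketched in a few lines: given cached per-word pixel widths from a font metrics table, a greedy word-wrap pass predicts how many lines a string occupies, and therefore its height, without ever triggering browser layout. This is a conceptual illustration only; `wrappedHeight` and its parameters are assumptions for the example, not Pretext's actual API.

```javascript
// Conceptual sketch of layout-free text height calculation:
// greedily pack word widths into lines of at most maxWidth pixels,
// then multiply the line count by the line height.
// (Hypothetical function; not Pretext's real interface.)
function wrappedHeight(wordWidths, spaceWidth, maxWidth, lineHeight) {
  let lines = 1;
  let lineWidth = 0;
  for (const w of wordWidths) {
    // Width if this word joins the current line (plus a space, unless first).
    const needed = lineWidth === 0 ? w : lineWidth + spaceWidth + w;
    if (needed > maxWidth && lineWidth > 0) {
      lines += 1;      // word doesn't fit: wrap to a new line
      lineWidth = w;
    } else {
      lineWidth = needed;
    }
  }
  return lines * lineHeight;
}

// Five 40px words with 8px spaces in a 100px container, 20px line height:
// two words per line -> 3 lines -> 60px.
console.log(wrappedHeight([40, 40, 40, 40, 40], 8, 100, 20)); // 60
```

The speed win comes from doing this arithmetic on precomputed metrics instead of asking the browser to lay out a hidden element and read back its height, which forces a synchronous reflow.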
ScaleOps raised $130M Series C to automate Kubernetes resource optimization in real time, targeting GPU waste and runaway cloud costs driven by AI workloads. The raise validates that infrastructure efficiency tooling is a high-margin, high-demand category as AI inference bills become a board-level concern. Automated resource management at the orchestration layer is now a standard requirement for any AI platform running at scale.
Rebellions, a South Korean AI chip startup focused on inference-optimized silicon, raised $400M at a $2.3B valuation in a pre-IPO round ahead of a planned public offering later in 2026. The raise continues the pattern of inference-specific chip challengers attracting large capital as NVIDIA's margin profile and supply constraints make alternatives strategically attractive. An IPO would be a significant liquidity and validation event for the non-NVIDIA AI chip ecosystem.
Mistral AI raised $830M in debt financing to build a proprietary data center near Paris, targeting Q2 2026 operations. The debt structure (vs. equity) is notable — it preserves the cap table while signaling that Mistral's revenue base is strong enough to service significant leverage. This move positions Mistral as sovereign AI infrastructure for Europe, reducing dependence on US hyperscalers and strengthening its regulatory positioning under the EU AI Act.
Trip Venturella trained Mr. Chatterbox entirely from scratch on 28,000+ out-of-copyright Victorian-era British Library texts, producing a locally runnable model with a distinctive corpus-constrained 'ethical' profile. The project demonstrates that domain-specific, legally clean training sets can produce deployable models with predictable stylistic and content behavior. While weak by modern benchmarks, it's a proof-of-concept for copyright-safe, auditable model lineage.
Google released Gemini 3.1 Flash Live, an update to its real-time audio model focused on improving naturalness and reliability for voice interactions. Flash Live sits at the low-latency, cost-efficient end of Google's model lineup, making it the practical choice for voice agents, phone bots, and real-time assistants. Incremental audio quality improvements at the Flash tier directly lower the bar for shipping production voice AI products.
That's today's briefing.
Get it in your inbox every morning — free.