OpenClaw, a popular open-source Chinese AI agent capable of autonomously taking over devices and completing tasks, is spawning a grassroots entrepreneurial ecosystem in China. A 27-year-old engineer is among many building businesses on top of the platform. This mirrors early App Store or GPT-wrapper gold rush dynamics — but originating from a Chinese open-source base.
Security firm Codewall used an offensive AI agent to breach McKinsey's internal Lilli platform — used by 43,000+ employees — in two hours with no credentials or insider knowledge, exploiting a classic technique (likely prompt injection or indirect injection via documents). This is a live, high-profile demonstration that enterprise AI deployments at scale are vulnerable to automated adversarial agents. The attack surface expands proportionally with how much sensitive data the platform can access.
Wayfair deployed OpenAI models to automate support ticket triage and enrich millions of product attributes at scale, improving catalog accuracy and support resolution speed. This is a canonical enterprise AI case study: structured data enrichment and ticket classification are high-volume, low-variance tasks where LLMs reliably outperform rule-based systems. The catalog enrichment use case is particularly replicable across any catalog-heavy e-commerce or marketplace.
Sequoia argues that AI is collapsing the distinction between software products and professional services — AI agents can now deliver outcomes previously requiring human service delivery, creating a new category of 'software-as-a-service-firm.' This is a significant framing shift from a top-tier VC: the investable opportunity is no longer just tools but AI systems that directly replace service revenue. It signals where Sequoia's check-writing attention is focused for the next cycle.
A persistent malware campaign has compromised ~14,000 Asus routers primarily in the US, using architecture designed to resist law enforcement takedowns — likely P2P or fast-flux command-and-control. This represents active, hard-to-remediate infrastructure compromise at the edge. For AI builders running inference or agents that rely on consistent outbound connectivity, compromised upstream routing is an underappreciated threat surface.
Meta has disclosed four generations of custom inference chips designed to serve AI features across its billions of users while reducing dependence on Nvidia and AMD. This is a significant vertical integration move — Meta joins Google (TPU), Amazon (Trainium/Inferentia), and Microsoft (Maia) in owning its inference silicon. At Meta's scale, even marginal per-token cost reductions translate to hundreds of millions in annual savings.
OpenAI published its architectural approach to making ChatGPT agents resistant to prompt injection and social engineering, focusing on action constraints and sensitive data protection in agentic workflows. This is OpenAI codifying defensive patterns it's applying to its own systems — effectively a public reference architecture for agent security. Coming the same day as the McKinsey hack story, the timing is notable.
METR research finds that ~50% of AI-generated code solutions passing SWE-bench — the dominant coding agent benchmark — would be rejected by actual open-source maintainers in real PR reviews. This exposes a fundamental validity gap: SWE-bench optimizes for test passage, not code quality, readability, or maintainability. It directly undermines capability claims made by coding agent vendors citing SWE-bench scores.
That's today's briefing.
Get it in your inbox every morning — free.
Help us improve AI in News
Got a suggestion, bug report, or question?