Anthropic Brings in Ex-Stripe CTO to Strengthen Infrastructure as AI Spending War Escalates
New leadership move comes as Claude faces usage caps and rivals pour billions into data centers
Anthropic has hired Rahul Patil, the former chief technology officer of Stripe, to take over as its new CTO. The move underscores the company’s urgent push to toughen up its infrastructure at a time when the AI industry is locked in a spending battle worth hundreds of billions of dollars.
Patil, who officially started this week, replaces co-founder Sam McCandlish. McCandlish isn’t leaving, though. He’ll step into a new role as chief architect, focusing on pre-training and large-scale experiments. Both men will now report to Anthropic president Daniela Amodei, a setup designed to bring product, infrastructure, and inference work under tighter coordination.
This shift highlights how the San Francisco startup is evolving. Anthropic has marketed its Claude AI assistant as a more thoughtful, reliable alternative to ChatGPT. But now it needs to prove it can deliver enterprise-grade performance—without the same massive infrastructure budgets that giants like Meta, Microsoft, and OpenAI command.
The scale of the challenge is hard to overstate. Meta has pledged at least $600 billion for U.S. data centers and AI infrastructure through 2028, while OpenAI has lined up hundreds of billions of dollars in compute capacity through its Stargate partnership with Oracle and SoftBank. Compared to that, Anthropic is playing with a far smaller war chest.
The cracks have already shown. Recently, Anthropic placed weekly usage caps on Claude Code, its developer-focused tool. Depending on demand, subscribers on the highest tier can now run an estimated 240 to 480 hours of Sonnet 4 per week, and just 24 to 40 hours of Opus 4. The limits bluntly acknowledge what many insiders already knew: heavy background usage was straining the system.
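For readers who want a concrete picture of what a weekly cap like this amounts to, here is a minimal, purely illustrative Python sketch of a rolling seven-day usage tracker. The model names and cap values simply mirror the figures reported above; none of this reflects Anthropic's actual implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

# Illustrative weekly caps in hours, taken from the ranges reported above.
# The lower bound of each range is used; real limits vary with demand.
WEEKLY_CAPS_HOURS = {
    "sonnet-4": 240,
    "opus-4": 24,
}

@dataclass
class UsageTracker:
    """Tracks per-model usage hours inside a rolling seven-day window."""
    window: timedelta = timedelta(days=7)
    sessions: list = field(default_factory=list)  # (model, timestamp, hours)

    def record(self, model: str, hours: float, at: datetime) -> None:
        self.sessions.append((model, at, hours))

    def used_hours(self, model: str, now: datetime) -> float:
        cutoff = now - self.window
        return sum(h for m, t, h in self.sessions if m == model and t >= cutoff)

    def allowed(self, model: str, now: datetime) -> bool:
        return self.used_hours(model, now) < WEEKLY_CAPS_HOURS[model]


if __name__ == "__main__":
    tracker = UsageTracker()
    now = datetime.now()
    tracker.record("opus-4", 23.5, now - timedelta(days=1))
    print(tracker.allowed("opus-4", now))   # True: 23.5 h used, cap is 24 h
    tracker.record("opus-4", 1.0, now)
    print(tracker.allowed("opus-4", now))   # False: 24.5 h exceeds the cap
```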
Amodei called Patil’s arrival crucial. “Rahul brings over two decades of engineering leadership building dependable infrastructure at enterprise scale,” she said, framing the hire as key to Claude’s future as a trusted platform for businesses.
Patil’s résumé backs that up. He spent five years leading Stripe’s technical operations, an environment famous for its obsessive reliability standards. Before that, he worked on cloud infrastructure at Oracle and held senior engineering posts at both Amazon and Microsoft. At Anthropic, his responsibilities cover everything from compute infrastructure to inference optimization—essentially, keeping the company’s AI models fast, efficient, and cost-effective.
Patil himself struck an ambitious tone, calling the role “the most important work” he could be doing right now. He praised Anthropic’s focus on AI safety and said the company was at a pivotal moment for the technology.
The leadership shake-up extends beyond new titles. Anthropic is restructuring its technical teams to bring product engineers closer to infrastructure and inference specialists. The aim: squeeze more out of existing compute power while improving speed and reliability. That focus reflects lessons from past service hiccups, which the company—unusually for a top AI lab—chose to disclose publicly.
Industry watchers see Patil’s appointment as a sign that the AI race has entered a new phase. It’s no longer just about who has the smartest model. Now, reliability, low latency, and high uptime are just as important, especially for demanding use cases like coding assistants and long-context processing.
The division of labor between McCandlish and Patil makes sense. McCandlish will steer the bleeding-edge research—massive pre-training runs and experimental models—while Patil ensures those models can reach paying customers at scale.
Even so, the competition looms large. Meta’s multibillion-dollar investments, first teased by Mark Zuckerberg over dinner at the White House, make clear just how high the stakes are. OpenAI’s Stargate project, bankrolled by Oracle and SoftBank, shows similar muscle.
Anthropic, by contrast, can’t spend its way to dominance. It has raised significant venture capital but lacks the balance sheet of Big Tech. That means it must win through engineering creativity—extracting more performance from every watt of power, leaning on techniques like model compression, smart batching, and inference optimization. Enterprise customers will also expect stronger service-level agreements, not just best-effort consumer service.
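As a rough illustration of what “smart batching” means in practice, the sketch below groups incoming inference requests into fixed-size batches so the model runs once per batch instead of once per request, amortizing per-call overhead. This is a simplified, assumption-laden example, not a description of Anthropic’s serving stack; the run_model callable is a stand-in for a real inference call.

```python
from typing import Callable, List

def batch_requests(prompts: List[str],
                   run_model: Callable[[List[str]], List[str]],
                   max_batch_size: int = 8) -> List[str]:
    """Group prompts into batches so each model invocation spreads its
    fixed overhead (weights loaded, one launch) across several requests."""
    outputs: List[str] = []
    for start in range(0, len(prompts), max_batch_size):
        batch = prompts[start:start + max_batch_size]
        outputs.extend(run_model(batch))  # one call serves the whole batch
    return outputs


if __name__ == "__main__":
    # Stand-in for a real model call; here it just echoes each prompt.
    fake_model = lambda batch: [f"response to: {p}" for p in batch]
    replies = batch_requests([f"prompt {i}" for i in range(20)], fake_model)
    print(len(replies))  # 20 responses produced in just 3 batched calls
```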
The recent usage caps may frustrate power users, but they’re likely temporary as Anthropic scales up and fine-tunes its resource allocation. More importantly, the changes hint at a broader strategy: moving away from consumer-style access and toward enterprise contracts with guaranteed capacity and performance.
For Anthropic, the balancing act is delicate. It must keep research momentum alive while professionalizing operations—something that has tripped up other AI startups in the past. By putting a seasoned infrastructure leader at the top of its technical org, reporting directly to the president, Anthropic is signaling a clear bet. The next stage of the AI race won’t be won by breakthroughs alone. It will also be won by execution, reliability, and the ability to deliver at scale.