Former OpenAI executive Mira Murati’s secretive AI startup, Thinking Machines Lab, has signed a multi-billion-dollar deal for Google Cloud’s advanced AI infrastructure, including systems powered by Nvidia’s latest GB300 GPUs, a major move in the fiercely competitive AI compute landscape.
Key Takeaways
- **Strategic Alliance:** Thinking Machines Lab, founded by ex-OpenAI CTO Mira Murati, has partnered with Google Cloud in a multi-billion-dollar agreement, securing access to cutting-edge AI infrastructure, notably Nvidia’s powerful GB300 chips.
- **Compute Arms Race:** The deal highlights Google Cloud’s aggressive strategy to lock in frontier AI developers amidst intense competition from Amazon and other cloud providers, with infrastructure deals increasingly defining the future of AI development.
- **Scaling Reinforcement Learning:** The partnership gives Thinking Machines the capacity to scale the computationally intensive reinforcement learning workloads behind Tinker, its tool for automating the creation of custom frontier AI models.
Thinking Machines Lab, the secretive startup founded by former OpenAI chief technology officer Mira Murati, has secured a landmark multi-billion-dollar agreement with Google Cloud. The deal, first reported by TechCrunch, will see Murati’s venture significantly expand its reliance on Google’s AI infrastructure, including early access to systems powered by Nvidia’s next-generation GB300 GPUs.
The Multi-Billion-Dollar Bet: Google Cloud’s Strategic Win
Sources close to the negotiations put the deal’s value in the single-digit billions of dollars, underscoring the magnitude of Google’s investment in strategic partnerships across the AI ecosystem. The commitment is not merely for raw compute power; it encompasses a comprehensive suite of infrastructure services needed for large-scale model training and deployment. By offering access to its most advanced AI systems, built on Nvidia’s GB300 chips, Google is positioning itself as a pivotal enabler of the next wave of AI innovation.
This agreement represents a significant win for Google Cloud, which has been vigorously pursuing major cloud deals with leading AI developers. The company’s strategy extends beyond simply providing compute; it aims to weave together its diverse cloud offerings—including robust storage solutions, its versatile Kubernetes engine, and the globally distributed Spanner database—into a compelling, integrated package. The goal is clear: to establish an indispensable infrastructure layer for the world’s most ambitious AI projects, fostering an ecosystem where advanced labs are deeply embedded within Google’s technological framework.
Thinking Machines Lab: Emerging from the Shadows
Founded in February 2025 by Mira Murati after she departed her high-profile role as OpenAI’s chief technology officer, Thinking Machines Lab has largely operated under a shroud of secrecy. The company made headlines shortly after its inception by raising a $2 billion seed round that valued the startup at $12 billion. That rapid rise, combined with Murati’s pedigree, instantly cemented Thinking Machines Lab as a frontier player to watch, even as details about its core work remained scarce.
The veil of secrecy began to lift in October with the launch of Tinker, the company’s first product: a tool designed to automate the creation of custom frontier AI models, streamlining a process that is typically complex and resource-intensive. Today’s announcement provides further insight into Tinker’s underlying architecture, with Google specifically noting its capacity to support Thinking Machines’ computationally demanding reinforcement learning workloads, a training approach that has driven breakthrough results at AI powerhouses such as DeepMind and Murati’s former employer, OpenAI. The sheer scale of the Google Cloud deal underscores just how resource-intensive this kind of research and development can be.
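Neither company has detailed how Tinker works under the hood, so the snippet below is not Tinker’s API; it is only a generic, minimal sketch of a REINFORCE-style policy-gradient loop in PyTorch, included to illustrate why reinforcement learning workloads are so compute-hungry. Every update requires fresh rollouts (for a language model, full generation passes) before a single gradient step, and all names, shapes, and the toy reward here are hypothetical.

```python
# Minimal REINFORCE-style loop (illustrative only; not Tinker's API).
# The expensive part is rollout(): in LLM fine-tuning this corresponds
# to token generation, which runs the full model once per sampled step.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in for a policy model: scores 8 actions from a 16-dim observation.
policy = nn.Sequential(nn.Linear(16, 64), nn.Tanh(), nn.Linear(64, 8))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

def rollout(batch: int = 256, steps: int = 32):
    """Sample trajectories; return per-step log-probs and rewards."""
    obs = torch.randn(batch, 16)
    logps, rewards = [], []
    for _ in range(steps):
        dist = torch.distributions.Categorical(logits=policy(obs))
        action = dist.sample()
        logps.append(dist.log_prob(action))
        rewards.append((action == 3).float())  # hypothetical reward signal
        obs = torch.randn(batch, 16)           # stand-in environment step
    return torch.stack(logps), torch.stack(rewards)

for update in range(100):
    logps, rewards = rollout()                    # fresh samples every update
    returns = rewards.flip(0).cumsum(0).flip(0)   # reward-to-go per step
    advantage = returns - returns.mean()          # simple mean baseline
    loss = -(logps * advantage).mean()            # REINFORCE objective
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Frontier labs run this same pattern with billion-parameter models, learned reward signals, and thousands of accelerators, which is why access to GB300-class hardware and the supporting cloud stack is the real currency of deals like this one.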
The partnership with Google Cloud marks a significant strategic shift for Thinking Machines. The lab forged an alliance with Nvidia earlier this year, one that included an investment from the chipmaker, but this is its first major infrastructure deal with a cloud services provider. The agreement is non-exclusive, leaving the door open for Thinking Machines to diversify its cloud providers in the future, yet it undeniably signals Google’s intent to lock in fast-growing, high-potential frontier labs at an early stage of their development.
The AI Infrastructure Arms Race: A Battle for Dominance
The competitive landscape for AI cloud infrastructure is nothing short of fierce, with tech giants vying for market share and, more critically, for the loyalty of the most innovative AI developers. This week’s news from Thinking Machines comes on the heels of other high-profile deals in the escalating compute arms race. Earlier this month, Google made headlines with an agreement to supply Anthropic with multiple gigawatts of Tensor Processing Unit (TPU) capacity; TPUs are Google’s custom-designed AI chips, co-developed with Broadcom and optimized for machine learning workloads. This demonstrated Google’s commitment to offering diverse, powerful hardware solutions.
However, the competition is relentless. Just days prior to the Thinking Machines announcement, Anthropic also signed a new, equally substantial agreement with Amazon, securing up to 5 gigawatts of capacity for training and deploying its Claude AI models. These parallel deals with a single key AI player underscore the intense bidding wars unfolding behind the scenes, as cloud providers race to offer the most compelling combination of hardware, services, and strategic partnerships.
Nvidia, the dominant force in AI hardware, continues to play a central role, with its GPUs forming the backbone of much of this infrastructure. That Thinking Machines Lab is among the very first Google Cloud customers to gain access to GB300-powered systems is a testament to the strategic importance of this partnership. Google touts the GB300 as offering a 2X improvement in training and serving speed over prior-generation GPUs; all else being equal, a training run that previously took two weeks would finish in one. That kind of headroom is critical for labs pushing the boundaries of AI, where every increment of speed and efficiency translates into faster iteration cycles and more complex model development.
Myle Ott, a founding researcher at Thinking Machines, encapsulated the sentiment in a statement: “Google Cloud got us running at record speed with the reliability we demand.” This quote not only validates Google’s technical capabilities but also speaks volumes about the imperative for speed and stability in the high-stakes world of frontier AI development.
The Bottom Line
The multi-billion-dollar partnership between Mira Murati’s Thinking Machines Lab and Google Cloud is more than a large financial transaction; it is a strategic maneuver in the global AI race. For Google, it solidifies its position as a go-to infrastructure provider for elite AI developers, fending off fierce competition and strengthening its ecosystem. For Thinking Machines, it provides the computational firepower needed to scale its reinforcement learning workloads and bring Tinker to full fruition, potentially accelerating breakthroughs in custom model creation. The deal underscores the central role of cutting-edge cloud infrastructure in shaping the future of artificial intelligence, and it sets the stage for an intensified battle among tech giants to power the next generation of AI.