Google is reorganizing the team behind Project Mariner, its AI-powered tool that can operate the Chrome browser and complete tasks on a user’s behalf, WIRED has learned. In recent months, some Google Labs employees who worked on the research prototype have moved to higher-priority projects, according to two people familiar with the matter.
A Google spokesperson confirmed the changes, saying the computer use capabilities developed through Project Mariner will be folded into the company’s broader assistant strategy going forward. Google has already built some of these features into other AI assistant products, including the recently launched Gemini Agent, the spokesperson added.
The shift comes as Google and other AI labs race to respond to the rise of highly capable agents like OpenClaw. While these tools are mostly used by developers today, Silicon Valley expects they could soon power general-purpose assistants for consumers and businesses. Nvidia CEO Jensen Huang compared the much-hyped app to a new kind of foundational software for autonomous computing. “Every company in the world today must adopt an OpenClaw strategy,” he said at the company’s developer conference earlier this week.
Google CEO Sundar Pichai highlighted Project Mariner at last year’s I/O conference. At the time, browser agents looked like the industry’s next big bet, with OpenAI and Perplexity launching consumer-facing agents that promised to handle tedious online tasks for users. These agents could click on web elements, navigate pages, and fill out forms on a webpage, much like a human would. But adoption of these products has fallen short of the market’s expectations.
Perplexity’s Comet browser agent reached just 2.8 million weekly active users in December 2025. OpenAI’s ChatGPT Agent, meanwhile, reportedly fell below 1 million weekly active users in recent months. Compared with the hundreds of millions of people who use ChatGPT each week, browser agent usage essentially rounds to zero.
Emerging Agents on the Scene
Momentum in the AI world has shifted sharply over the past year toward tools like Claude Code and OpenClaw (whose developer was hired by OpenAI). Unlike browser agents, these systems control computers through the command line, which has proved a more reliable way to get tasks done. Some of these products include computer use as a feature, alongside other agentic capabilities. By comparison, browser agents now look fairly limited as a standalone offering.
Kian Katanforoosh, CEO of the AI upskilling company Workera, who also teaches AI at Stanford, says one reason computer use agents haven’t caught on widely is their enormous compute demands. Most of these agents work by taking repeated screenshots of a webpage, feeding them into an AI model, and then acting based on what the model sees. Processing that data can be slow and sometimes unreliable.
“What Claude Code and OpenClaw demonstrated was that it’s actually considerably more efficacious to interact via the command line, because the terminal is character-oriented and LLMs are text-based,” Katanforoosh says. “It’s likely 10 to 100X fewer stages to achieve identical results.”
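To illustrate the distinction Katanforoosh is drawing, here is a minimal, hypothetical sketch of the two loops: a screenshot-driven browser agent that sends an image through a vision model for every action, and a command-line agent that simply runs a shell command and reads back text. The page object and the vision_model and text_model calls are placeholder assumptions for illustration, not code from any of the products mentioned.

```python
# A minimal, hypothetical sketch of the two agent loops described above.
# vision_model(), text_model(), and the `page` object are placeholders for
# whatever browser-automation and LLM APIs a real agent would use; they are
# assumptions for illustration, not actual product code.

import subprocess

def screenshot_agent_step(page, goal, vision_model):
    """Browser-agent style: capture pixels, ask a vision model what to do next."""
    image = page.screenshot()                        # large image payload every step
    action = vision_model(image=image, prompt=goal)  # image tokens are slow and costly
    page.perform(action)                             # e.g. click(x, y) or type(text)

def cli_agent_step(goal, text_model):
    """Command-line style: the model emits a shell command and reads text back."""
    command = text_model(prompt=goal)                # plain text in
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    return result.stdout                             # compact text out, feeds the next step
```

In the screenshot loop, every action costs an image round-trip through the model; the command-line loop stays entirely in text, which is the efficiency gap Katanforoosh’s “10 to 100X” estimate points to.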
That doesn’t mean browser agents aren’t improving, or that research into computer use has hit a dead end.
Last month, the startup Standard Intelligence unveiled a computer use model trained on video footage rather than screenshots. The startup says it built a video compression technique that can squeeze video data into an AI model’s context window, which it claims makes the model 50X more efficient than previous computer use models. To show off the model’s capabilities, the startup hooked it up to a car, a live video feed, and a digital input device; the model was able to briefly drive around San Francisco on its own.