Key Takeaways:
- Tesla Robotaxis were involved in at least two distinct crashes since July 2025 where a human teleoperator was remotely controlling the vehicle, according to newly unredacted NHTSA data.
- Tesla abruptly stopped redacting crash descriptions, giving unprecedented visibility into 17 incidents involving its nascent Robotaxi network and revealing causes ranging from teleoperator error to environmental hazards.
- These revelations underscore the complex safety hurdles in autonomous vehicle development, particularly the interplay between AI, remote human intervention, and the environment, influencing Tesla’s cautious scaling strategy.
Unveiling the Robotaxi Roadblocks: Tesla’s Remote-Piloted Crashes Emerge from Redaction
A significant veil has been lifted from Tesla’s secretive Robotaxi operations. Newly unredacted information submitted to the National Highway Traffic Safety Administration (NHTSA) reveals that Tesla Robotaxis have been involved in at least two crashes since July 2025 while under the remote control of a teleoperator. These incidents, both occurring in Austin, Texas, at low speeds with a safety monitor present and no passengers aboard, offer a rare glimpse into the complex challenges facing autonomous vehicle (AV) deployment.
For months, Tesla had been opaque about the specifics of its AV incidents, steadfastly redacting crash descriptions submitted to the NHTSA, citing confidential business information. This week, however, marks a dramatic shift. The latest data release from the NHTSA now provides narrative descriptions for all 17 crashes recorded by Tesla’s nascent Robotaxi network since last year, shedding critical light on the operational realities of its autonomous fleet.
The Unforeseen Role of Human Intervention: Remote Piloting Incidents
The most striking revelations concern incidents where human teleoperators, not the vehicle’s automated driving system (ADS), were directly responsible for collisions. Tesla had previously informed lawmakers about its capability to allow remote operators to pilot vehicles, albeit with a strict speed limit of 10 miles per hour. The company justified this, stating, “This capability enables Tesla to promptly move a vehicle that may be in a compromising position, thereby mitigating the need to wait for a first responder or Tesla field representative to manually recover the vehicle.” The newly revealed crashes put this capability, and its inherent risks, into sharp focus.
Crash 1: The July 2025 Curb Incident
The first documented remote-piloted crash occurred in July 2025, shortly after Tesla commenced its Robotaxi network operations in Austin. The vehicle’s ADS encountered difficulty moving forward from a stopped position on a street. Faced with this impediment, the onboard safety monitor requested assistance from Tesla’s remote assistance team. A teleoperator subsequently “took over vehicle control and gradually increased vehicle speed and turned the Tesla ADS left toward the left side of the street.” The teleoperator then drove “up the curb and made contact with a metal fence.” This incident highlights the difficulty of human-machine handoff and the potential for remote human error, even at low speeds.
Crash 2: The January 2026 Barricade Collision
A similar scenario unfolded in January 2026. While the Tesla ADS was autonomously driving straight on a street, the safety monitor once again “requested support to assist with vehicle navigation.” The teleoperator assumed control after the ADS had stopped. As the remote operator attempted to proceed straight, the Tesla vehicle “made contact with a temporary barricade for a construction site at approximately 9MPH, scraping the front-left fender and tire,” according to the data submitted to the NHTSA. Both incidents underscore that while remote intervention is designed as a safety net, it introduces its own risks and demands precise execution, even in controlled low-speed environments.
Beyond Remote Control: A Spectrum of Incidents
While the remote-piloted crashes are particularly noteworthy, the newly unredacted data provides a fuller picture of the challenges Tesla’s Robotaxis face. Similar to other autonomous vehicle companies like Waymo, a significant portion of the unredacted crash reports involve Tesla Robotaxi vehicles being struck by other vehicles, rather than causing the collisions themselves. This phenomenon is common across the AV industry, often attributed to the AV’s conservative driving style, which can sometimes be unpredictable to human drivers.
However, not all incidents were external. At least two of the reports detail Tesla Robotaxis clipping their mirrors on other vehicles, suggesting close-quarters navigation remains a nuanced challenge for the ADS. In a more unusual incident from September 2025, the Tesla ADS was unable to avoid hitting a dog that unexpectedly ran into the street. While Tesla reported the dog was able to run away, this highlights the unpredictable nature of real-world driving and the ongoing struggle for AVs to predict and react to highly dynamic, non-standard obstacles.
Another September 2025 crash saw a Tesla Robotaxi making an unprotected left turn into a parking lot, resulting in a collision with a metal chain. This type of incident is not unique to Tesla; the NHTSA recently concluded an investigation into Tesla’s Full Self-Driving (FSD) software for its tendency to crash into parking lot bollards, chains, and gates. Waymo, a leading competitor, also issued a recall last year related to a similar problem, indicating that static, often thin or low-visibility, obstacles continue to pose a significant perception and navigation challenge for even the most advanced AV systems.
Scale, Safety, and the Road Ahead
Compared to companies like Waymo and Zoox, Tesla’s Robotaxi network operates at a considerably smaller scale, which naturally results in fewer total reported crashes. However, the details unearthed this week in the newly unredacted data offer valuable insights into why Tesla might be scaling up its nascent autonomous ride-hailing network at a deliberately slow pace. Elon Musk himself acknowledged last month that “making sure things are completely safe” is the paramount limiting factor for Tesla’s network expansion, emphasizing the company is being “very cautious.”
The transparency, albeit belated, that Tesla has now provided is crucial for both regulators and the public. It allows for a more granular understanding of the specific scenarios where current AV technology, including its human backup systems, can fall short. As the race for fully autonomous vehicles intensifies, these detailed incident reports serve as invaluable learning opportunities, informing everything from sensor development and AI training to regulatory frameworks and public education. The ongoing challenge lies in mitigating these identified risks while continuing to push the boundaries of what autonomous technology can achieve safely.
The Bottom Line
Tesla’s sudden pivot to transparency, revealing incidents including remote-piloted crashes, underscores the complex and often unpredictable journey toward widespread autonomous vehicle deployment. While the promise of Robotaxis remains compelling, these unredacted reports show that human intervention, even as a safety measure, introduces its own fallibility, and that the perception and decision-making challenges facing AI are far from fully resolved. The industry’s path forward will hinge on continued rigorous testing, an unwavering commitment to safety, and a level of transparency that fosters both trust and technological improvement. The road to full autonomy is paved with lessons learned from every bump and scrape.

