Lawyers Are Weighing Suits Over Driverless Cars—But There's a Catch

SAN FRANCISCO, CA – August 5, 2025 – As autonomous vehicles (AVs) like Waymo's robotaxis and Tesla's Autopilot-equipped cars become more common on U.S. roads, lawyers are gearing up for a wave of lawsuits over accidents involving driverless technology. But a significant catch complicates these cases: determining liability in crashes involving complex AI systems is murky, uncharted legal territory, blending traditional negligence claims with novel product liability theories. A recent Tesla verdict, in which a jury ordered the company to pay $7 million over a fatal 2018 Autopilot crash despite the driver's reckless behavior, has sharpened the auto industry's concerns about legal exposure as self-driving technology advances.

The National Highway Traffic Safety Administration (NHTSA) counted 36,096 U.S. traffic deaths in 2019, and the agency has long attributed roughly 94% of serious crashes to human error, statistics AV proponents cite to argue that driverless cars could drastically reduce accidents. Yet high-profile incidents, like the 2023 San Francisco crash in which a Cruise robotaxi dragged a pedestrian 20 feet after a hit-and-run driver threw her into its path, highlight the potential for AI errors to cause harm. Lawyers are exploring claims against manufacturers, software developers, and even mapping companies, but the complexity of AV systems makes proving liability daunting. "These cases don't fit neatly in a mold," said Matthew Wansley, former general counsel at the self-driving startup nuTonomy, noting that plaintiffs must often rely on costly expert testimony to demonstrate a "reasonable alternative design" for AI algorithms, a hurdle that can deter lawsuits.

In the Tesla case, a Florida jury found the company 30% liable for a crash that killed a driver who ignored warnings to keep hands on the wheel, setting a precedent that manufacturers can face liability even when human error contributes. Posts on X reflect a growing sentiment that companies like Tesla may face mounting lawsuits as juries side with plaintiffs, with one user noting, "The legal argument proven here is that when the car drives itself and kills somebody, the company can be held liable." Comparative negligence rules cut both ways: California's system, for instance, could assign a robotaxi partial blame for exacerbating injuries, as in the Cruise case, where the vehicle's pullover maneuver, which dragged the pedestrian, was not initially disclosed to regulators.
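The comparative-fault arithmetic behind such verdicts is simple even when the litigation is not: the jury fixes total damages, assigns each party a percentage of fault, and each defendant pays its share of the total. Below is a minimal sketch of that apportionment, assuming a pure comparative negligence rule; the function name and dollar figures are invented for illustration and are not the actual numbers from the Tesla or Cruise cases.

```python
# Minimal sketch of damages apportionment under comparative negligence.
# All figures are hypothetical, not the actual numbers from any verdict.

def apportion_damages(total_damages: float,
                      fault_shares: dict[str, float]) -> dict[str, float]:
    """Return each party's payment: total damages times its share of fault."""
    if abs(sum(fault_shares.values()) - 1.0) > 1e-9:
        raise ValueError("fault shares must sum to 100%")
    return {party: total_damages * share for party, share in fault_shares.items()}

# A jury awards hypothetical total damages of $10M and assigns 30% of the
# fault to the manufacturer and 70% to the driver, mirroring the 30/70
# split the Florida jury described.
print(apportion_damages(10_000_000, {"manufacturer": 0.30, "driver": 0.70}))
# -> {'manufacturer': 3000000.0, 'driver': 7000000.0}
```

Under a pure comparative rule like California's, that split holds even when the plaintiff bears most of the fault; many states instead bar recovery entirely once the plaintiff's share passes 50%, one more variable lawyers must weigh before filing.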

The catch lies in the legal system's unpreparedness. No comprehensive federal framework governs fully autonomous vehicles, and state rules remain a patchwork, forcing courts to fall back on traditional product liability law, which requires proving a defect in design, manufacturing, or warnings. Companies like Volvo have pledged to accept full liability for crashes involving their AVs; others, like Tesla, resist, arguing that drivers share responsibility. And the high cost of retaining experts in AI, computer science, and economics to dissect proprietary software makes litigation prohibitively expensive for many plaintiffs. Emerging state rules, like California's requirement that human operators take control in emergencies, further blur liability lines when handoffs between human and AI fail.

Despite these hurdles, lawyers see "fertile ground" for litigation, with deep-pocketed manufacturers like General Motors and Alphabet far more attractive targets than individual drivers. Proposed legislation, like Congressman Kevin Mullin's AV Safety Data Act, aims to increase transparency by requiring companies to report miles traveled and unplanned stops, data that could help plaintiffs build cases. As AVs promise safer roads (Waymo reports only one fatal crash in 70 million driverless miles, and that one was caused by another driver), the legal system must evolve to balance innovation with accountability.
