Mark Rober wanted to know if Tesla’s self-driving car could be deceived in the most cartoonish way possible.
The former NASA engineer and YouTube sensation built a wall designed to trick a car. The wall stretched across the road, painted to look like the asphalt continuing straight ahead. Would Tesla’s Autopilot system recognize the deception in time to stop? Or would it speed forward like Wile E. Coyote chasing the Road Runner?
In the moment of truth, a Tesla Model Y, equipped with its camera-based Autopilot system, barreled forward at 40 miles per hour. The result was spectacular in the worst possible way: the car smashed straight through the painted wall, leaving a gaping, cartoon-style hole. Meanwhile, a second vehicle fitted with Lidar, a laser-based sensing system, stopped cleanly before impact.
The video was an instant hit, racking up 10 million views in just two days. But as with anything related to Tesla, the crash test didn’t just spark curiosity—it ignited a firestorm.

Camera vs. Lidar
Tesla’s approach to driver assistance has long been a subject of debate. Unlike most autonomous-vehicle developers, who rely on a combination of cameras, radar, and Lidar, Tesla has doubled down on vision alone. The company removed radar from its vehicles in 2021, betting that neural networks trained on camera data could replicate, and eventually surpass, human perception.
Elon Musk has called Lidar a “fool’s errand.” But Rober’s test suggests that, at least for now, the technology has a clear advantage. The Lidar-equipped vehicle correctly identified the fake road as an obstacle, while the Tesla—trusting its cameras—saw only an open highway.
That wasn’t Tesla’s only fumble. In a separate set of tests, Autopilot successfully avoided a stationary child-sized dummy and another that suddenly ran into its path. But in fog and heavy rain it failed, flattening the dummy. The Lidar system, by contrast, detected the mannequin every time.
This shouldn’t have been a surprise. Cameras struggle in poor visibility. Lidar, which actively maps its surroundings by bouncing laser pulses off them, is far less affected. The technology is more expensive and requires significant data processing, but as Rober’s experiment demonstrated, it can see what cameras miss.


Controversy and Conspiracies
The test was not without controversy. Some Tesla supporters questioned whether Autopilot had even been engaged during the wall crash. Others claimed Rober had manipulated the footage, secretly pushing an anti-Tesla agenda on behalf of Big Lidar.
The scrutiny became so intense that Rober released unedited footage showing that Autopilot had, in fact, been active. But eagle-eyed viewers noticed something else: just before impact, the system appeared to disengage. That led to a new round of speculation—was this a deliberate Tesla feature to avoid responsibility for crashes?
It wasn’t the first time the issue had come up. In 2022, the National Highway Traffic Safety Administration (NHTSA) investigated dozens of Tesla crashes involving stationary emergency vehicles. In 16 of those crashes, the agency found, Autopilot “aborted vehicle control less than one second prior to the first impact.” Critics suspect this is a convenient way to dodge liability. Unsurprisingly, Tesla has denied any wrongdoing.


The Real Takeaway
Rober’s test wasn’t perfect, and we can’t independently verify that nothing was tampered with. Ultimately, the video was designed to be entertaining, and some elements, like the exaggerated hole in the wall, were played up for spectacle. But the core lesson is hard to ignore: Autopilot is not a true self-driving system. It’s a Level 2 driver-assistance feature, meaning the driver is expected to remain engaged at all times.
Simply put, you can’t rely on it. You’re still driving the car.
Tesla’s defenders argue that Rober tested Autopilot rather than Full Self-Driving (FSD), the company’s more advanced software. But FSD relies on the same camera-based approach, raising questions about whether it would have fared any better.
And while a painted wall might seem like an absurd scenario, the same underlying problem—camera-based systems misinterpreting their surroundings—has led to real-world tragedies. In 2016, a Tesla driver was killed when Autopilot failed to recognize a truck trailer crossing its path. The system mistook the bright white trailer for open sky.
Even if most drivers won’t encounter a Wile E. Coyote-style trap, fog, rain, and other visibility issues are everyday realities. And if a system that claims to be the future of autonomous driving can’t handle those, what else is it missing?