Opinion

Arizona death puts road testing of robo-cars in spotlight

By Ben Wear, Cox Newspapers

AUSTIN, Texas -- What to make of Elaine Herzberg's death?

Not even the most ardent advocate for driverless cars, and for their presumptive salubrious effect on road safety, would have denied that eventually someone would die in a collision involving a robot car. In fact, one person had already died, in 2016, when a Tesla Model S in autopilot mode slammed into a truck.

But the incident involving Herzberg and an Uber self-driving taxi March 18 stands as the first time a pedestrian was killed by an autonomous vehicle.

The inevitable reaction, at least in some quarters, was to see it as grim validation of skepticism about whether something as complex as driving can be handled by artificial intelligence.

On a more immediate level, Herzberg's death -- especially given what is known about the specifics so far -- raises the question of whether the technology is advanced enough for these vehicles to operate on public streets. Even some advocates of driverless vehicles said that perhaps testing should occur only on closed tracks and in computer simulations for some unspecified period.

That would be a significant rollback, and one unlikely to be mandated by governments.

Multiple companies are already testing driverless vehicles on public roads -- the first such drive is said to have occurred on an Austin street in August 2015, with only a legally blind person in the car. The laws in 21 states (including, since June, Texas) and the District of Columbia allow this.

Uber, with the official permission of the state of Arizona, was operating self-driving taxis in Tempe, including the one that hit Herzberg.

The video of the incident, released Thursday by Tempe police, is hard to watch. And, having seen it, it is equally hard for a layperson not to conclude that the technology failed.

The Uber Volvo SUV, outfitted with the laser- and radar-based detectors and cameras common to all autonomous vehicles, was driving along a five-lane road in Tempe. Darkness had fallen, and the "safety driver" on board the car can be seen in one view glancing down toward her lap as the car moves along at 38 mph. It appears to have been a clear night.

Herzberg, wearing a bike helmet and walking her bicycle, appears suddenly in the right lane in front of the car. At this point, she clearly had already walked the bike across an adjacent left lane, but the vehicle's sensors had failed to detect her, and the car, police said, never slowed before hitting her. The video, mercifully, concludes just before impact.

What to make of the tragedy, from a policy standpoint? The math of all this is murky.

In 2016, the last full year for which we have complete statistics, 37,461 people died on U.S. streets and highways, including about 6,000 pedestrians. In that same year, according to Federal Highway Administration figures, about 3.2 trillion miles were driven on those same roads.

That amounts to one death for every 85 million miles driven.
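For anyone checking the math: 3.2 trillion miles divided by 37,461 deaths comes to roughly 85.4 million miles per death.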

Self-driving cars, meanwhile, have been involved in two fatal accidents. The number of miles auto-driven on public streets is far from clear, given the multiple corporate and academic players involved and the loose-to-nonexistent reporting requirements for self-driving operations. But, based on the 6 million cumulative miles of on-street driving reported last year by Uber and Waymo (the former Google self-driving car project), the figure would appear to be well under 85 million.

So, the safety claim for robot cars has to be bogus, right?

Too soon to say, Chris Poe of the Texas A&M Transportation Institute told me.

"You never want to downplay anyone's death," said Poe, assistant director of connected and automated vehicle technology for the institute, which is doing autonomous vehicle testing on control tracks near College Station. "But we don't have enough data to say if that is worse or better than we might expect."

Jason JonMichael, the city of Austin's assistant director of smart mobility, said he hopes that the Tempe incident doesn't cause self-driving vehicle companies or policymakers "to take too many steps back out of fear."

On-street testing, with all of its complexity and attendant risks, needs to continue, JonMichael said.

"If you understand how (artificial intelligence) works, it is constantly needing to learn," he told me. "We shouldn't stop (on-road testing), because if we stop, we stop learning. And if we stop, we might as well throw out the goal of no deaths on our highways."

But Chandra Bhat, until about a month ago the director of the University of Texas' Center for Transportation Research and still a professor there, said the rush toward street testing -- and, in Uber's case, actual commercial use -- has been premature.

The center has been involved in self-driving research and testing, driving some vehicles in controlled circumstances at the J.J. Pickle Research Campus in North Austin. Bhat's researchers have run vehicles on Austin streets, but with a human driver in control, collecting sensor information about actual traffic conditions to help update the self-driving technology.

"The basic idea is to always have safety first," Bhat said. "We do not want to have any driverless cars using public roads until they are completely tested and any algorithms completely tweaked and optimal."

Pinpointing that optimal moment is, of course, the great unknown, particularly when we seem to be at a highly competitive inflection point in transportation.

Those in charge of the big companies involved -- Waymo, Uber, Ford, Toyota and General Motors -- no doubt look in the mirror and see themselves as the Henry Ford of self-driving technology. The financial stakes and pressure are huge. No one wants to be second.

Herzberg, however, surely did not want to be first.

Ben Wear writes for the Austin American-Statesman. Email: bwear(at)statesman.com.
