The obvious failure mode for a self-driving car is making the wrong decision on the road, but making no decision at all is almost as bad.
“It is not only that we have to take the right actions; it is that we always have to act,” said Mike Curtiss, senior staff software engineer at Waymo, in a talk at Bugbash, a conference organized by the software-testing firm Antithesis.
“We can’t just throw our hands in the air in the middle of the road and say, ‘I don’t know what to do,’” he said. “You always have to do something.”
But self-driving vehicles have a history of doing nothing in awkward situations. I saw it during a test ride in a Zoox robotaxi in Las Vegas this year, when one of that Amazon subsidiary’s self-driving pods stopped between a double row of traffic cones.
And while my four rides in Waymo’s Jaguar I-Pace vehicles in Los Angeles last summer featured no such bouts of indecision, the cars had their own confused moments. In October, for example, four Waymos in San Francisco spent a few minutes in a standoff on either side of a parked Amazon delivery van before one moved forward.
Curtiss did not reference that or any other mishaps, such as a Phoenix police stop of a Waymo car last July after it drove on the wrong side of the road. Instead, he emphasized the problem of unsafe human driving and its high annual toll of injuries and deaths worldwide.
“The status quo does not work for humanity,” he said. He cited a recent study comparing Waymo crash rates with those of human drivers over the same period in San Francisco and Phoenix, which found 83% fewer airbag-deployment crashes and 81% fewer injury-causing crashes for the Waymo robotaxis.
Never enough nines
Curtiss said Waymo aims to improve on that, and reliable software is the key. He drew on his earlier experience at Google, where he worked on that company’s Spanner database service, to explain what reliability means at Waymo.
Spanner’s service-level agreement specifies 99.999% uptime (also known as “five nines”), which means it could be down for about five minutes a year. But that is not good enough for his current employer. “For self-driving cars, we actually need a higher reliability,” he said, defining seven nines as a starting point and nine nines as the ideal.
(In the Q&A portion of the talk, a member of the developer-heavy audience asked how far Waymo had progressed on that scale; Curtiss said the company had reached at least seven nines.)
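For context, the arithmetic behind those targets is simple; this quick calculation (mine, not Curtiss’s) shows how fast the annual downtime budget shrinks with each added nine:

    # Allowed annual downtime for a given number of "nines" of uptime.
    # Illustrative arithmetic only, not Waymo code.
    MINUTES_PER_YEAR = 365 * 24 * 60

    for nines in (5, 7, 9):
        downtime_fraction = 10 ** -nines      # e.g., five nines -> 0.00001
        minutes = MINUTES_PER_YEAR * downtime_fraction
        print(f"{nines} nines: {minutes:.4f} minutes of downtime per year")

Five nines allows about 5.3 minutes of downtime a year; seven nines cuts that to roughly three seconds, and nine nines to about 30 milliseconds.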
Curtiss repeatedly emphasized how Waymo prioritizes safety in developing its software. Now in their sixth generation, Waymo’s vehicles combine input from lidar, radar, and camera sensors. That is a clear contrast with Tesla’s cameras-only approach, although Curtiss himself did not mention it.
“There is a direct link between software quality, including reliability, and safety,” he said, speaking in front of a slide that traced the losses of NASA’s Space Shuttle Challenger and the OceanGate submersible to organizational go fever. “We are not going to tolerate safety problems in the field.”
Coding for, and increasingly by, robots
Curtiss then outlined a rigorous development process for Waymo’s codebase (C++, of which he said, “I know, it is a bit embarrassing, but that’s where we are”). As at its parent company, that now includes using machine learning to automate some coding.
“ML actually makes some of our systems more reliable,” Curtiss said, explaining that setting up frameworks in those systems first helps AI code-generation tools work better and ship fewer bugs.
Waymo also builds redundancy into its software subsystems to reduce the consequences of what Curtiss called “abnormal outputs.” (“We don’t want to say that software crashes when we work on self-driving cars,” he explained.)
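Curtiss did not describe the mechanism, but a common pattern for containing abnormal outputs is to sanity-check a subsystem’s result and fall back to a simpler, more conservative one when the check fails. Here is a minimal, hypothetical sketch in Python (Waymo’s stack is C++, and the planner names and limits below are invented):

    # Hedged illustration of subsystem redundancy; not Waymo's design.
    MAX_ACCEL = 5.0  # m/s^2, an assumed plausibility bound

    def is_sane(accels):
        # NaN and infinity both fail this comparison, so they are rejected too.
        return all(abs(a) <= MAX_ACCEL for a in accels)

    def plan_primary(state):
        # Stand-in for a full-featured planner whose output may be abnormal.
        return state.get("proposed_accels", [])

    def plan_fallback(state):
        # Conservative backup: brake gently to a stop.
        return [-1.0] * 5

    def plan(state):
        try:
            accels = plan_primary(state)
            if accels and is_sane(accels):
                return accels
        except Exception:
            pass  # one subsystem's failure must not take down the whole stack
        return plan_fallback(state)

    print(plan({"proposed_accels": [0.5, 0.4]}))      # primary output accepted
    print(plan({"proposed_accels": [float("inf")]}))  # falls back to braking

The point of the pattern is that an “abnormal output” degrades behavior to something safe instead of crashing the vehicle’s software.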
Then come extensive integration, unit, and regression tests that use Google’s cloud environment to simulate Waymo vehicles. Some bugs, however, only surface in software running in a moving vehicle, so Waymo deploys extensive monitoring software to check its cars’ on-road actions: “The observability story is still very true; we want to track these events and stats in the field,” Curtiss said.
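He did not show tooling, but that observability idea maps to ordinary practice: tag notable on-road events with counters and export them for later analysis. A toy sketch, with invented event names:

    # Toy observability sketch; event names and export format are hypothetical.
    import time
    from collections import Counter

    class FieldMetrics:
        """Counts on-road events so they can be tracked over time."""
        def __init__(self):
            self.counts = Counter()

        def record(self, event):
            self.counts[event] += 1

        def export(self):
            # In practice this would ship to a telemetry backend.
            return {"timestamp": time.time(), "counts": dict(self.counts)}

    metrics = FieldMetrics()
    metrics.record("fallback_planner_engaged")
    metrics.record("hard_brake")
    metrics.record("hard_brake")
    print(metrics.export())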
“The best bug is the bug that was never made,” Curtiss said, adding that when Waymo developers catch one, they conduct “very thoughtful retrospectives” to determine how it got into the system.
During the public Q&A, I asked Curtiss if Waymo had a process for taking in outside reports, such as press coverage (I cited a piece by The Washington Post’s San Francisco-based tech columnist Geoffrey Fowler, who reported that Waymos would not yield to him at marked crosswalks) and social-media posts.
He said he could not speak to that and advised me to check with Waymo’s press office. Spokesperson Katherine Barna wrote back: “Our teams constantly assess feedback about the Waymo One experience from all channels,” adding that Waymo finds its in-app feedback “especially useful.”
Lessons for other software shops
Curtiss concluded his presentation with a bit of a sermon for the developers in the room, saying, “In the long run, you get the software quality that you deserve.” He then asked them to raise their hands if their organizations had a software-quality process they were proud of. Almost no hands went up.
Curtiss acknowledged that not every software shop has a mission-critical mandate like Waymo’s, and that many face conflicting obligations: “Sometimes you have to accept that technical debt to sign the deal next week.” But he repeated that software quality will not take care of itself.
“Safety and clean software do not happen on their own,” he said. “You have to be vigilant.”