Tesla’s unveiling of its Robotaxi tomorrow should finally offer us a better look at CEO Elon Musk’s next big project. But it’s likely we won’t come away knowing much, or at least not enough, about the technology underpinning the self-driving system.
Tesla relies on neural networks—which mimic the way the brain works in silico—to comprehend the road environment and take actions in much the same way a human driver would. (Competitor Waymo, for comparison, uses machine learning to get its vehicles to first recognize common parts of the road space, from signs to pedestrians, and then behave in response to those cues.)
The Tesla method is seen as a quicker way to train cars to drive, but it also requires vast volumes of data, and the resulting system operates as a black box, meaning the individual decisions it makes along the way can’t be isolated.
Tesla vehicles are outfitted with eight cameras, according to Deutsche Bank research, rather than the single camera that most production vehicles have. Every Tesla running the company’s HW3 hardware—an estimated 1.9 million cars in North America alone, per Deutsche Bank—also feeds data back that helps the system learn how to react to road situations. That means Tesla is getting a more intensive and constant stream of data on which to judge its reactions—but it also comes with questions.
Tesla’s decision to rely on camera-only systems and neural networks over more established technology, like lidar, a laser-based sensing technology that builds a three-dimensional model of the world, is a key differentiator from competitors, says Jack Stilgoe, a researcher at University College London who specializes in autonomous vehicles. “Tesla committed a few years ago to a camera only system,” he says. “Lots of people in the self-driving world say, if you’re not using LiDAR, then you’re never going to be as safe, and you’re never going to be as reliable.”
Tesla’s unusual approach also helps explain why it has not yet brought robotaxis to market while competitors like Waymo have. “Elon Musk has been talking about the imminent availability of self-driving cars for over a decade,” says Paul Miller, vice president and principal analyst at Forrester. “We’re certainly not there yet, but his company and others are hard at work on improving the technologies that will be required to take today’s interesting—but isolated—pilots and turn them into something that we might all see, trust, and even use in our daily travels.”
But there are other worries. Tesla’s “black box” approach raises concerns about transparency, accountability, and safety in the event of crashes. “The AI systems within the car are a black box,” says Stilgoe. “When they go wrong, we don’t know why they go wrong.” That’s in large part because of the method Tesla has taken in developing its self-driving software, which “thinks” on the fly rather than following rule-based systems whose logic can be traced. (Neural networks’ decision making is notoriously hard to reverse-engineer.)