Raquel Urtasun is the founder and CEO of self-driving truck startup Waabi as well as a computer science professor at the University of Toronto. Unlike some competitors, Waabi’s AI technology is designed to drive goods all the way to their destinations, rather than merely to autonomous vehicle hubs near highways.
Urtasun, one of Fast Company’s AI 20 honorees for 2025, spoke with us about the relationship between her academic and industry work, what sets Waabi apart from the competition, and the role augmented reality and simulation play in teaching computers to drive even in unusual road conditions.
This Q&A is part of Fast Company’s AI 20 for 2025, our roundup spotlighting 20 of AI’s most innovative technologists, entrepreneurs, corporate leaders, and creative thinkers. It has been edited for length and clarity.
Can you tell me a bit about your background and how Waabi got started?
I’ve been working in AI for the last 25 years, and I started in academia, because AI systems weren’t ready for the real world. There was a lot of innovation that needed to happen in order to enable the revolution that we see today.
For the last 15 years, I’ve been dedicated to building AI systems for self-driving. Eight years ago, I made a jump to industry: I was chief scientist and head of R&D for Uber’s self-driving program, which gave me a lot of visibility in terms of what building a world-class program and bringing the technology to market would look like. One of the things that became clear was that there was a tremendous opportunity for a disrupter in the industry, because everybody was going with an approach that was extremely complex and brittle, where you needed to incorporate by hand all the knowledge that the system should have. It was not something that was going to provide a scalable solution.
So a little bit over four years ago, I left Uber to go all in on a different generation of technology. I had deep conviction that we should build a system designed with AI-first principles, where it’s a single AI system end-to-end, but at the same time a system that is built for the physical world. It has to be verifiable and interpretable. It has to have the ability to prove the safety of the system, be very efficient, and run onboard the vehicle.
The second core pillar was that the data is as important as the model. You will never be able to observe everything and fully test the system by deploying fleets of vehicles. So we built a best-in-class simulator whose realism we can actually prove.
And what differentiates your approach from the competition today?
The big difference is that other players have a black-box architecture, where they train the system basically with imitation learning to imitate what humans do. It’s very hard to validate and verify and impossible to trace a decision. If the system does something wrong, you can’t really explain why that is the case, and it’s impossible to really have guarantees about the system.
That’s okay for a level two system (where a human is expected to be able to take over), but when you want to deploy level four, without a human, that becomes a huge problem.
We built something very different, where the system is forced to interpret and explain, at every fraction of a second, all the things it could do and how good or bad those decisions are, and then it chooses the best maneuver. Through the simulator, we can learn how to handle safety-critical situations much better, and much faster as well.
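To make that contrast with a black-box policy concrete, here is a minimal, purely illustrative sketch of an interpretable planning step in Python. Every name and number here (the candidate maneuvers, the cost weights, the scoring function) is a hypothetical placeholder, not Waabi's code; the point is simply a system that enumerates its options, scores each on explicit criteria, and can report why it chose what it chose.

```python
from dataclasses import dataclass

# Hypothetical illustration of an interpretable planning step: enumerate
# candidate maneuvers, score each on explicit, inspectable criteria, and
# keep the per-criterion breakdown so the decision can be traced later.
# Not Waabi's implementation.

@dataclass
class Maneuver:
    name: str
    collision_risk: float   # 0 (safe) .. 1 (certain collision)
    comfort_penalty: float  # harsh braking / swerving
    progress: float         # how much the maneuver advances the route

def score(m: Maneuver) -> dict:
    """Return an explicit cost breakdown instead of a single opaque number."""
    return {
        "safety": 10.0 * m.collision_risk,
        "comfort": 1.0 * m.comfort_penalty,
        "progress": -2.0 * m.progress,   # negative cost = reward
    }

def choose(candidates: list[Maneuver]) -> tuple[Maneuver, dict]:
    """Pick the lowest-cost maneuver and return its breakdown for auditing."""
    best, best_breakdown = None, None
    for m in candidates:
        breakdown = score(m)
        if best is None or sum(breakdown.values()) < sum(best_breakdown.values()):
            best, best_breakdown = m, breakdown
    return best, best_breakdown

if __name__ == "__main__":
    options = [
        Maneuver("keep_lane",   collision_risk=0.02, comfort_penalty=0.0, progress=1.0),
        Maneuver("brake_hard",  collision_risk=0.00, comfort_penalty=0.8, progress=0.2),
        Maneuver("change_lane", collision_risk=0.10, comfort_penalty=0.3, progress=1.0),
    ]
    chosen, why = choose(options)
    print(f"chose {chosen.name}: {why}")  # traceable explanation of the decision
```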
How are you able to ensure the simulator works as well as real-world driving?
The goal of the simulator is to expose the self-driving vehicle’s full stack to many different situations. You want to prove that, for each specific situation, the system drives the same way it would if that situation happened in the real world. So we take all the situations the Waabi Driver has driven in the real world, clone them in simulation, and then check whether the truck does the same thing.
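As a rough sketch of that validation idea (the data format, the comparison metric, and the threshold below are all assumptions for illustration, not Waabi's tooling), one can imagine replaying a logged real-world drive in simulation and measuring how far the simulated trajectory drifts from the real one:

```python
import math

# Illustrative sketch only: replay a logged real-world scenario in simulation
# and check that the simulated trajectory stays close to what the truck
# actually did on the road. The logged data and the pass threshold are
# hypothetical placeholders.

def trajectory_gap(real_xy, sim_xy):
    """Mean Euclidean distance between matched real and simulated positions."""
    gaps = [math.dist(r, s) for r, s in zip(real_xy, sim_xy)]
    return sum(gaps) / len(gaps)

def validate_scenario(real_xy, sim_xy, max_gap_m=0.5):
    """Flag the scenario if simulation diverges from the real-world drive."""
    gap = trajectory_gap(real_xy, sim_xy)
    return {"mean_gap_m": round(gap, 3), "passed": gap <= max_gap_m}

if __name__ == "__main__":
    # Toy logged positions (x, y) in meters, sampled at matching timestamps.
    real = [(0.0, 0.0), (5.0, 0.1), (10.0, 0.2), (15.0, 0.2)]
    sim  = [(0.0, 0.0), (5.1, 0.1), (10.1, 0.3), (15.2, 0.3)]
    print(validate_scenario(real, sim))
```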
We also recently unveiled a really exciting breakthrough with mixed-reality testing. The way the industry does safety testing is they bring a self-driving vehicle to a closed course and they expose it to a dozen, maybe two dozen, scenarios that are very simple in order to say it has basic capabilities. It’s very orchestrated, and they use dummies in order to test things that are safety critical. It’s a very small number of non-repeatable tests.
But you can actually do safety testing in a much better way if you can do augmented reality on the self-driving vehicle. With our truck driving around in a closed course, we can intercept the live sensor data and create a view where there’s a mix of reality and simulation, so in real time, as it’s driving in the world, it’s seeing all kinds of simulated situations as though they were real.
That way, you can have orders of magnitude more tests. You can test all kinds of things that are otherwise impossible, like accidents on the road, a traffic jam, construction, or motorbikes cutting in front of you. You can mix real vehicles with things that are not real, like an emergency vehicle in the opposite lane.
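A heavily simplified way to picture that mixed-reality loop (the frame format, object fields, and function names below are invented for illustration, not Waabi's sensor pipeline): intercept each live sensor frame from the truck on the closed course, inject scripted virtual actors into it, and hand the blended frame to the driving stack as if it were real.

```python
# Purely illustrative mixed-reality loop: take a live sensor frame, inject
# simulated objects (e.g. a motorbike cutting in), and pass the blended
# frame to the driving stack. All names and values here are hypothetical.

def live_sensor_frame(t: float) -> dict:
    """Stand-in for the real sensor feed: a few detections around the truck."""
    return {"time": t, "objects": [{"type": "car", "x": 40.0 - 5.0 * t, "y": 3.5}]}

def simulated_objects(t: float) -> list[dict]:
    """Scripted virtual actors, e.g. a motorbike cutting into the truck's lane."""
    return [{"type": "motorbike", "x": 25.0 - 8.0 * t, "y": 3.5 - 0.7 * t, "virtual": True}]

def mixed_reality_frame(t: float) -> dict:
    """Blend real and simulated objects into one frame the stack treats as real."""
    frame = live_sensor_frame(t)
    frame["objects"] = frame["objects"] + simulated_objects(t)
    return frame

if __name__ == "__main__":
    for step in range(3):
        t = step * 0.1  # toy 10 Hz loop
        frame = mixed_reality_frame(t)
        # driving_stack(frame) would run here; we just show the blended input.
        print(frame)
```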
You’re also a full professor. Are you still teaching and supervising graduate students?
I do not teach—I obviously do not have time to teach at all. I do have graduate students, but they do their studies at the company. We have this really interesting partnership with the University of Toronto.
If you want to really learn and do research in self-driving, it is a must that you get access to a full product. And that’s impossible in academia. So a few years ago, we designed this program where students can do research within the company. It’s one of a kind, and to me, this is the future of education for physical AI.
When did you realize the time was ripe for moving from academic research to industry work?
That was about eight and a half years ago. We were at the forefront of innovation, and I saw companies using our technology, but it was hard for me to understand whether we were working on the right things, and whether there was something I hadn’t thought of that was important when deploying a real product in the real world.
And I decided at the time to join Uber, and I had an amazing almost four years. It blew my mind how much bigger the problem of self-driving is than I thought. I thought, Okay, autonomy is basically it, and then I learned how you need to design the hardware, the software, the surrounding safety systems, etc., in a way that everything is scalable and efficient.
It was very clear to me that end-to-end systems and foundation models would be the thing. And four and a half years in, our rate of hitting milestones really speaks to this technology. It’s amazing. To give an example: the first time we drove in rain, the system had never seen rain before, and it drove with no interventions.
That for me was the “aha” moment. I was actually [in the vehicle] with some investors on the track, so it was kind of nerve-racking. But it was amazing to see. I always had very, very high expectations, but it blew my mind what it could do.
