AI systems have a range of applications in the automotive market. However, they do not come without their challenges.
Applications of artificial intelligence (AI) have become increasingly prominent within the automotive sector, from in-cabin monitoring systems to lane and object detection while driving.
While a large number of new vehicles have some form of impressive AI technology on board, this does not come without challenges during production processes. Machine learning algorithms are highly complex and specialized, requiring engineers to overcome challenges and resolve errors on a regular basis.
Assisting the industry on its AI journey, Voxel51 has a team of experts in machine learning and computer vision helping to address the challenges and issues that AI projects bring. Voxel51’s ‘FiftyOne’ allows users to build production-ready visual AI applications easily, efficiently and at scale.
We spoke with Brian Moore, CEO of Voxel51, and Jason Corso, the company’s Chief Science Officer, to learn more about the company and the benefits and challenges of AI solutions for the automotive industry.
Just Auto (JA): Can you give some background information about Voxel51?
Brian Moore (BM): At Voxel51, we work at the cutting edge of visual AI, building technology that accelerates AI work in virtually every industry, from medical imaging to agriculture to self-driving cars.
For context, we grew out of research at the University of Michigan, where Jason Corso is on the faculty, and where I did my PhD work and research. Our software platform enables visual AI builders to bring their innovations to life. In particular, we help organizations develop advanced AI systems that can effectively interpret and understand visual data input, such as images, videos and other related modalities.
You can think of us as a development platform for software teams that work with visual data. Obviously one of the unique things about being from Michigan is that we are so close to Detroit. We have a long history of working with automakers and understand the types of challenges, problems and solutions they have developed over the past decade.
What work have you done in the automotive sector so far?
BM: We work with OEMs in Detroit and with automakers in Germany, Japan and other locations, partnering closely with their machine learning, data architecture and AI teams to help power the systems they build for vehicles. This includes lane keeping assistance and adaptive cruise control, to name just a few applications, as well as in-cabin systems.
What I would say about car manufacturers in 2024 is that they are increasingly becoming software companies with thousands of employees who are software engineers and data scientists. They are building quite advanced systems to develop these new technologies, which they see as crucial to their business strategies. We have the pleasure of supporting all of these teams as they build out their visual AI infrastructure, which they brought in-house rather than outsourcing.
Jason Corso (JC): At a higher level, we investigate the cooperation between the driver and the car. It’s important to think about what humans and AI are each good at. People are good at adapting to new and changing situations that require reasoning, both dynamic reasoning and ethical judgment. On the other hand, humans are bad at mundane, repetitive tasks, while AI is great at repeating things over and over again.
What does that mean? In the field of driving, AI has the potential to augment a human’s ability to keep their eyes on the road, on their surroundings and on their blind spots when they cannot, whether because they are tired, distracted or otherwise impaired. This will increase the overall safety of the road system, and it will happen in a scenario where AI and humans work together in synchrony. I think this will come before we see Level 4 or Level 5 autonomy. We’re already seeing many impressive improvements in this AI-assisted extension of human driving.
What ADAS features are on the horizon, versus longer-term projects for companies?
BM: We are seeing specific technologies being rolled out that can alert drivers to appropriate times to change lanes or overtake other vehicles safely, in a way that does not distract them from the road. Lane keeping and adaptive cruise control are features that are already on the market but are constantly being developed and improved, deploying not only sensor-based systems but also other modalities and visual data to increase their power, reduce costs and make these features available on more types of vehicles.
In-cab solutions are a key focus of the teams we support. Consider cameras mounted in strategic positions that can identify driver drowsiness and distraction (i.e. when the driver is not paying attention to the road), and then provide warnings or interventions where necessary. This type of innovation is already on the market and will become increasingly sophisticated as AI-powered systems become more capable and better attuned to driver attention and awareness.
There are also future features that will automate tasks that are difficult for human drivers to perform, such as reversing a truck to a loading dock. Another innovation is parking assistance, where the car can identify whether a parking space is available and park there automatically. We could also see vehicles that warn of dangers when you get out of the car, such as a passing cyclist, to avoid collisions.
Some other visual AI features we could see in the coming years help reduce the risk posed by inexperienced or unsafe drivers, made possible by driver recognition. Some examples that consumers could opt into:
Intervening more aggressively when someone with a slower reaction time (for example, an older driver) takes the wheel.
Preventing teenagers from accessing the car’s sport mode or enhanced acceleration features, or from driving more than 5 miles per hour over the speed limit. Full control of the car could also be granted only after 100 hours of experience under controlled conditions.
Preventing a driver other than the owner from driving the car for the rest of the day, to help deter theft.
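The opt-in rules above can be sketched as a simple policy check. This is a purely hypothetical illustration: the profile fields, thresholds (20 years, 100 supervised hours, the 5 mph cap) and function names are assumptions for the sketch, not any real vehicle API.

```python
from dataclasses import dataclass

# Hypothetical driver profile established via driver recognition.
@dataclass
class DriverProfile:
    age_years: int
    supervised_hours: float
    is_owner: bool

def sport_mode_unlocked(p: DriverProfile) -> bool:
    # Teen drivers gain sport mode only after 100 hours of supervised driving.
    return p.age_years >= 20 or p.supervised_hours >= 100

def speed_cap_over_limit_mph(p: DriverProfile) -> float:
    # Inexperienced drivers are held to 5 mph over the posted limit.
    return 5.0 if p.supervised_hours < 100 else float("inf")

def can_drive(p: DriverProfile, lockout_active: bool) -> bool:
    # Non-owners can be locked out for the rest of the day to deter theft.
    return p.is_owner or not lockout_active

teen = DriverProfile(age_years=17, supervised_hours=30, is_owner=False)
print(sport_mode_unlocked(teen), speed_cap_over_limit_mph(teen))
```

In practice such policies would sit behind the driver-recognition system and be configurable by the owner; the sketch only shows the shape of the gating logic.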
There are also ADAS features that adapt to real-time conditions that we expect to be rolled out in the longer term:
Automatically increasing the distance to the vehicle in front, depending on weather conditions such as fog, snow and rain.
Adjusting the sensitivity of driving warnings or safety features, such as automatic braking, to the road conditions.
Providing extra assistance in challenging lighting conditions, such as glare and low light.
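The first of these longer-term features, widening the following distance in bad weather, can be illustrated with a toy calculation. The baseline gap and the weather multipliers here are illustrative assumptions, not values from any production ADAS.

```python
# Hypothetical weather-adaptive following gap for adaptive cruise control.
BASE_GAP_SECONDS = 2.0  # assumed clear-weather time gap to the vehicle ahead
WEATHER_MULTIPLIER = {"clear": 1.0, "rain": 1.5, "fog": 2.0, "snow": 2.5}

def following_gap_seconds(weather: str) -> float:
    # Fall back to the clear-weather gap for unrecognized conditions.
    return BASE_GAP_SECONDS * WEATHER_MULTIPLIER.get(weather, 1.0)

print(following_gap_seconds("snow"))  # 5.0
```

A real system would derive the condition from perception and weather sensing rather than a string label, but the principle of scaling a safety margin with conditions is the same.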
What challenges do OEMs face in developing these projects?
JC: I think the ultimate challenge in many AI and visual AI projects is the long tail problem. It’s pretty easy to get 80% performance on a typical machine learning or AI problem these days, but usually decent performance is based on typical scenarios that drivers are used to. When you think about deploying AI in the real world, you can’t deploy a pedestrian avoidance system that works four out of five times. That would be a problem. This is due to the visual and behavioral complexity of working in the everyday world.
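The four-out-of-five point can be made concrete with a toy calculation. Nothing here models a real detector; the 80/20 split simply mirrors the typical-versus-long-tail scenario mix described above.

```python
# Toy illustration of why aggregate accuracy hides long-tail failures.
# 80% of driving scenes are "typical"; 20% are rare scenes the model
# has effectively never seen.
scenes = ["typical"] * 800 + ["rare"] * 200

def model_detects(scene: str) -> bool:
    # Stand-in for a system trained only on common scenarios.
    return scene == "typical"

overall = sum(model_detects(s) for s in scenes) / len(scenes)
rare_only = sum(model_detects(s) for s in scenes if s == "rare") / 200

print(f"overall accuracy:   {overall:.0%}")   # looks respectable
print(f"long-tail accuracy: {rare_only:.0%}") # every rare case fails
```

An 80% headline number can coexist with a 0% success rate on exactly the cases where a pedestrian-avoidance system matters most, which is why long-tail coverage, not aggregate accuracy, is the hard problem.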
It is very difficult to build systems that have seen enough examples of all the variability or all the variations of situations in which different autonomous systems are expected to behave. It is even more difficult to build a general intelligence capable of reasoning about these myriad situations.
Although accidents do happen, humans are quite good at adaptive, on-the-fly responses and reflexes that generally align with what we have found to be best practices. While the automotive industry strives toward this goal, current systems cannot fully replicate these capabilities. Even a student can download open-source code, use open-source data and quickly train a model that delivers decent performance. But achieving production quality that can interpret the wide variability of the real world requires expertise, time, a lot of money, a lot of testing and a lot of patience.
Ultimately, that’s the game I think a lot of these companies are playing right now: trying to figure out how to cover as many of the toughest cases as possible while maintaining confidence in safety.
What do you see in the future for the use of AI in the automotive industry? Not only in the field of safety, but more broadly?
BM: As I said before, modern automakers are essentially software companies, and their value to consumers is measured by safety and reliability. Increasingly, that value comes from the software and AI functions that are now becoming possible in vehicles.
Personally, I’m excited that safety will be one of the first areas where we see some real progress in terms of what these technologies can do – whether it’s in-cab awareness or keeping vehicles safe and reducing accidents. That’s one of the first areas where we’ll see real returns from AI investments.
Car manufacturers are good at thinking long term. A vehicle’s real-world lifespan is more than twenty years, so rollouts will take time. But these car manufacturers are making the right investments today. It starts with collecting the right data to address some of the challenges and achieve 99.999% reliability, which means having highly sophisticated data collection systems that can help automakers gather enough data to understand all the different situations, edge cases, anomalies and so on. We’re well into that journey now, and it’s exciting to see how the automakers are making that transition and are primed and positioned for success.
Is there anything else you would both like to add?
BM: While it’s easy to focus on machines and algorithms taking full control when it comes to vehicles, humans play a key role in the entire lifecycle of next-generation automotive technology. People play a key role in ensuring that the data fed into these in-vehicle AI systems is accurate and free of bias. It is essential that those working on autonomous vehicles consider whether the system meaningfully increases the quality of both their product and the driving experience. It is not something that can be fully automated right away. We will all be living with these cars in the future, so it is important that the human element is part of their development from start to finish.
“Automotive AI: Applications that come with opportunities in creation” was originally created and published by Just Auto, a brand owned by GlobalData.