Is AI safe enough for autonomous vehicles?
A mix of big tech companies, global automakers and startups is working hard to create safe, deterministic artificial intelligence (AI) using machine learning (ML) that will make fully autonomous vehicles a reality. After billions of dollars of investment over the past three years, the industry has learned this: Fully autonomous driving will take a while, but in the meantime the work will bring huge benefits in safety and active driver assistance.
Already, luxury carmakers such as Mercedes-Benz, BMW, Lexus and Tesla, and tech firms such as Google and Amazon, have sophisticated advanced driver-assistance systems (ADAS) that can provide an autonomous experience in urban environments up to SAE Level 3. These systems handle a limited set of driving situations within narrow operational requirements, which improves traffic safety. However, all of them still rely on human intervention in stressful situations or in the event of an imminent accident.
As of spring 2019, the European Union’s Commissioner for Transport at the time, Violeta Bulc, said she expected fully autonomous driving capabilities to arrive by 2030¹. The European Road Transport Research Advisory Council, however, has predicted these capabilities will not arrive until after 2030.
To achieve full autonomy, a state in which the vehicle can navigate itself through any terrain in any conditions, it is necessary to take AI approaches to the next level. Driven by safety and deterministic behavior, these driving agents learn human driving behavior from collected driver demonstrations, optimizing the driving policy for unknown or unpredictable situations. This requires enormous amounts of data, so data engineers must find ways to avoid processing so-called “boring” data that will not help the system develop better driving strategies.
For truly safe and secure autonomous driving that delivers on the promise of fewer accidents and enhanced sustainability, autonomous vehicles must be able to sift out extraneous data so they can make driving decisions better and faster than humans — 100 percent of the time. Smart road infrastructure attempts to address this problem by delivering only the data needed, but it also creates a dependency that inhibits development of a truly autonomous vehicle. The data, compute and AI issues ahead are extremely challenging.
But the upside is tremendous, because autonomous cars have great potential to save lives. The U.S. National Highway Traffic Safety Administration (NHTSA)² contends that automated vehicles can reduce injuries based on one critical fact: 94 percent of serious auto crashes are caused by human error. Autonomous vehicles promise to eliminate a big part of human error from the crash equation, which will help protect drivers and passengers, as well as bicyclists and pedestrians.
A new approach to safety awareness
Today, extensive work³ is underway: about half of the U.S. states are testing autonomous vehicles on public roads, with California emerging as the busiest.
A new approach to AI begins with a baseline autonomous driving platform that enables driving-behavior engineers to establish driver-behavior knowledge that can be refined to simulate how an intelligent driving agent (the autonomous car) would respond in real-world situations. With this process, engineers can help the intelligent driving agent learn driving behavior, build driving functions and recall that information over time, similar to human learning.
Teaching AI driving agents to be safe
Computer scientists are now experimenting with different approaches, architectures and training strategies for incorporating AI and ML, all of which will play a role in the development of autonomous vehicles over the next decade or two.
The evolution must take smaller steps so that including AI in car production becomes a reality while ensuring that neither human lives nor the new technology is put at risk. The main AI/ML theories relevant to teaching an autonomous driving agent include:
- Brain science theories. One possible approach to building better AI driver agents draws inspiration from the brain science research of Danko Nikolic. In his theory of practopoiesis⁴, Nikolic posits that humans learn at three levels: trial and error (accumulating knowledge throughout evolution); learning how to learn; and, finally, putting that knowledge into action. It is the second level, at which humans “learn how to learn,” that Nikolic considers most important for applying AI to autonomous driving. If we can discover more about how humans learn, there is greater potential for AI as it applies to autonomous cars, and for education in general. Nikolic’s theory maintains that we must teach AI technology to learn the way humans do.
Adapting this theory to AI in the automotive realm starts by building driving-learning agents capable of learning the way humans do, then creating small, fast-learner training sets from human driver demonstrations. The concept of adapting innovative brain research to AI and the production of fully autonomous vehicles has emerged as one of the more exciting technology innovations.
The past year has also seen great progress in graph neural networks (GNNs) that are able to learn to execute algorithms⁵. This could be a promising approach for ensuring safety and deterministic behavior.
The brain science approach can potentially take the best of human learning and produce the best AI driver agent for the automotive field, but the research may take several years, and it is hard to adapt to production autonomous driving systems.
- Imitation learning. In this approach, the AI learner first observes the actions of a driving expert, often a human, in a supervised learning setting. The AI learner then uses this training to learn a policy that mimics the actions demonstrated by the expert to achieve the best performance. In practice, the AI imitation learning agent observes a human expert driver and registers the driver’s actions over time, recording how he or she makes right and left turns, stops at stop signs and accelerates on the open highway.
Based on the tracked record of what the expert did, the agent creates a policy of what action to take in any given situation. At runtime, it computes the best action based on the learned policy. Over time, the system develops a network of best cases and, from experience, it theoretically would always pick the best option, grounded in a deterministic policy model at inference time.
Imitation learning leverages the benefits of supervised learning, but it struggles with an important issue: how to collect expert driving situations and handle compounding steering errors in a way that allows the driving agent to recover from off-course situations it has never encountered before, while keeping safety as one of its major principles. It is possible to overcome this problem, but it requires advanced autonomous driving technology, a full team of expert technologists and lots of time.
- Reinforcement learning. With this type of learning there is no expert, human or otherwise; the agent learns from its own interaction with the environment rather than from labeled examples. The agent is assigned a reward function and uses various strategies to explore the different states and actions on the road. Via trial and error, it devises the optimum policy, maximizing the cumulative reward for its actions. In driving mode (model inference), the reinforcement learning agent executes the learned policy for every event it encounters.
However, if the car crashes during a test or hits a pedestrian in a simulation, the episode ends with a negative reward. The agent starts with random actions and, through trial and error, learns which actions maximize its rewards. When reinforcement learning works well, the car quickly learns not to run into a tree again and avoids repeating that mistake. The main challenges of reinforcement learning are unstable behavior and a heavy dependence on the size of the environment and on available computing power.
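To make the imitation learning idea above concrete, here is a minimal, hypothetical sketch of a cloned policy: the agent records expert (state, action) pairs and, at runtime, takes the action of the most similar recorded state. The state encoding, the three actions and the nearest-neighbor lookup are toy assumptions for illustration, not a production driving stack.

```python
# Minimal behavior-cloning sketch (hypothetical states and actions):
# the agent records expert demonstrations, then at runtime picks the
# action associated with the closest known state.

def record_demonstrations():
    # Toy expert demonstrations: state = (distance_to_stop_sign_m, speed_kmh),
    # action = what the human driver did in that situation.
    return [
        ((5.0, 30.0), "brake"),
        ((50.0, 30.0), "hold_speed"),
        ((200.0, 20.0), "accelerate"),
    ]

def cloned_policy(demonstrations, state):
    """Return the expert action whose recorded state is nearest to `state`."""
    def distance(s):
        return sum((a - b) ** 2 for a, b in zip(s, state))
    nearest_state, action = min(demonstrations, key=lambda d: distance(d[0]))
    return action

demos = record_demonstrations()
print(cloned_policy(demos, (4.0, 28.0)))    # close to a stop sign -> "brake"
print(cloned_policy(demos, (180.0, 22.0)))  # open road -> "accelerate"
```

The distribution-shift weakness described above is visible even here: a state far from every demonstration still gets mapped to the nearest recorded action, however inappropriate it may be.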
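The reinforcement learning loop described above can be sketched with tabular Q-learning on a toy one-dimensional "road" (a hypothetical environment, not a real driving simulator): a crash cell yields a negative reward, the goal cell a positive one, and through trial and error the agent learns which direction maximizes reward.

```python
import random

# Minimal tabular Q-learning sketch on a toy 1-D road: cells 0..4,
# a crash at cell 0 (negative reward), the goal at cell 4 (positive reward).
random.seed(0)

N_STATES, ACTIONS = 5, [-1, +1]        # move left / move right
CRASH, GOAL = 0, 4
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration

for episode in range(500):
    s = 2                               # start in the middle of the road
    while s not in (CRASH, GOAL):
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        a = (random.choice(ACTIONS) if random.random() < epsilon
             else max(ACTIONS, key=lambda act: q[(s, act)]))
        s2 = s + a
        r = 10.0 if s2 == GOAL else (-10.0 if s2 == CRASH else -1.0)
        best_next = 0.0 if s2 in (CRASH, GOAL) else max(q[(s2, act)] for act in ACTIONS)
        q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
        s = s2

# After training, the greedy policy steers toward the goal, away from the crash.
policy = {s: max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(1, N_STATES - 1)}
print(policy)  # every interior state should prefer +1 (toward the goal)
```

The instability mentioned above shows up in how sensitive this loop is to the reward values, the exploration rate and the number of episodes; scaling the same idea from five cells to a real road is exactly the environment-size problem the text describes.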
Learning how to deal with catastrophic forgetting
Neural networks are computer systems designed to simulate the human brain; being artificial, however, does not mean they cannot forget. Catastrophic forgetting refers to the abrupt loss of previously learned driving knowledge whenever a neural network is trained on a single new additional response. Under this theory, depending on the architecture, the agent’s neural network can handle only a small number of scenarios before its performance rapidly decreases.
Today, if data scientists implement more driving scenarios in a neural network, its performance tops out at a certain threshold. No one yet knows how to fix this, except by adding a scenario-classifier input and preconditions to the driving behavior agent, and that, by basic probability, decreases overall accuracy.
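The effect can be demonstrated on even the smallest possible model. In the hypothetical sketch below, a single-weight linear model is trained by gradient descent on one toy scenario and then on a second, conflicting scenario; after the second round of training, its error on the first scenario explodes. The scenarios and the model are illustrative stand-ins, not a real driving network.

```python
# Minimal sketch of catastrophic forgetting: a one-weight linear model
# trained sequentially on two conflicting toy "scenarios" loses the first
# one entirely after learning the second.

def train(w, data, lr=0.1, steps=200):
    for _ in range(steps):
        for x, y in data:
            w -= lr * 2 * (w * x - y) * x   # gradient step on squared error
    return w

def error(w, data):
    return sum((w * x - y) ** 2 for x, y in data)

scenario_a = [(1.0, 2.0), (2.0, 4.0)]    # scenario A: respond with y = 2x
scenario_b = [(1.0, -2.0), (2.0, -4.0)]  # scenario B: conflicting, y = -2x

w = train(0.0, scenario_a)
err_a_before = error(w, scenario_a)      # near zero: scenario A learned

w = train(w, scenario_b)                 # continue training on scenario B only
err_a_after = error(w, scenario_a)       # large: scenario A forgotten

print(err_a_before, err_a_after)
```

A real network has many weights rather than one, but the mechanism is the same: gradient updates that serve the new scenario overwrite the parameters that encoded the old one.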
Research is addressing this challenge, but until catastrophic forgetting is overcome, there is no way to achieve fully autonomous driving through one unified behavior model, and no definitive method to train and manage unlimited scenarios end to end. This is critical: if a neural network forgets what it has learned from previous experience, autonomous vehicles will have to stick with the existing rule-based programming model and never reach full autonomy.
Resolving catastrophic forgetting promises to take autonomous driving to the highest possible level, fully autonomous driving. But it’s very complex and difficult to get an AI driving agent past a certain threshold.
The real challenge of autonomous driving technology
Which of these AI approaches will prevail? Most likely, elements of all three will be used over the next two decades. Driving behavior data engineers, computer scientists, and AI scientists and researchers must find the proper architecture, or even new AI approaches and technologies, to develop fully autonomous driving. As we run more experiments and technology development iterations, we will learn more about “learning how to learn” and how to generalize and evolve the knowledge that will ensure safety.
Getting autonomous driving behavior to the next level presents data and processing challenges too great for replicating all of human intelligence; humankind has yet to create a storage facility that could collect and hold all of that data. But if our scientists stay focused on AI agents for autonomous driving, many of these challenges will be resolved, and the first wave of fully autonomous vehicles will offer higher levels of autonomous driving functions in the near future.
The key question is: How do we build safe, useful and affordable state-of-the-art technology for autonomous driving?
All of this new autonomous driving technology will be developed on back-end autonomous driving platforms that include compute, a data lake, fully managed development security operations, metadata and sensor management, governance and high-performance networks. Any enhanced AI system will have to handle hundreds, if not thousands, of petabytes of data and run on at least thousands of graphics processing units (GPUs). The computer industry has only begun to address many of these data storage and processing performance needs.⁶
We will continue to make bigger and faster processing units, but mastering the power of AI (AIOps / MLOps) and applying it to autonomous driving will keep automotive computer science engineers busy for decades. It’s one of the great challenges of our generation.
About the author
Davor Andric is chief technology officer of Autonomous Driving, AI and head of Robotic Drive Engineering at DXC Technology. Over the past 20 years, Davor has been working in the consulting, software and technology space. His expertise is in designing and building scalable platforms for data, analytics and ML / AI for the automotive industry and building service products on those platforms.
1 Forbes, April 6, 2019: “Self-driving cars in 10 years: EU Expects ‘Fully Automated’ cars by 2030”
2 NHTSA: “Automated Vehicles for Safety”
3 Curbed.com, March 8, 2019: “Are self-driving cars safe for our cities?”
4 Danko Nikolic, “Practopoiesis”
5 Petar Velickovic, “Neural Execution of Graph Algorithms,” January 2020
6 All the AI approaches outlined in this paper are specific and focused on human driving behavior capabilities. While the field of artificial general intelligence (AGI) targets a different science and research area and technologists can apply it to multiple industries and business processes, the approaches outlined in this paper focus solely on driving an autonomous car.