
The Two Biggest Safety Threats to Autonomous Vehicles

Cruise and Waymo, two leading American autonomous vehicle companies, have come under heavy scrutiny following two very public driverless car crashes in October and December, respectively. Both companies have a growing presence in major US cities, notably San Francisco, Phoenix, and Houston, but a mounting number of incidents is raising serious concerns among officials and the civilians sharing roads with these semi- or fully autonomous vehicles.

The Prevalence of Autonomous Vehicles

Globally, there were 31 million cars with some level of automation in 2019, according to Statista, and that number is expected to reach 54 million in 2024. 

However, automation does not automatically mean driverless cars. Automation is graded on a scale of levels from 0 to 5, where Level 0 means little to no automation and Level 5 means a fully autonomous vehicle that needs no human driver.
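
For reference, these levels come from the SAE J3016 taxonomy. The sketch below is a shorthand representation of them in Python; the enum names are our own phrasing, not the standard's official wording.

```python
# The six SAE J3016 driving-automation levels referenced in this article.
from enum import IntEnum

class AutomationLevel(IntEnum):
    NO_AUTOMATION = 0           # human does all the driving
    DRIVER_ASSISTANCE = 1       # one assist at a time, e.g. adaptive cruise control
    PARTIAL_AUTOMATION = 2      # steering and speed together; human must supervise
    CONDITIONAL_AUTOMATION = 3  # car drives itself, human takes over on request
    HIGH_AUTOMATION = 4         # no human needed, but only in a limited domain
    FULL_AUTOMATION = 5         # no human driver needed anywhere

print(AutomationLevel(2).name)  # -> PARTIAL_AUTOMATION
```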

According to the latest S&P Global Mobility report, Level 5 cars will not be publicly available before 2035, but cars with Level 2 autonomy will be much more prevalent. Statista expects nearly 60% of all new cars sold globally to have Level 2 autonomy by 2025.

Safety Threats in Autonomous Vehicles

There are two major challenges regarding the safety of autonomous vehicles:

  • Errors in the perception system
  • Cyberattacks

1. Errors in the Perception System

Autonomous vehicles are built around two systems: the perception system and the control system. The former feeds raw data from cameras and sensors into ML algorithms that reconstruct the environment outside the vehicle: where other cars are positioned in their lanes, what direction they are heading, at what angles, and so on. The control system receives that information from the perception system and carries out actions such as turning, slowing down, or switching lanes.
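
To make that split concrete, here is a deliberately minimal Python sketch of the data flow. Every name in it (WorldEstimate, PerceptionSystem, ControlSystem) is invented for illustration, and a production stack involves far more machinery.

```python
from dataclasses import dataclass

@dataclass
class WorldEstimate:
    """Perception output: where the system believes other actors are."""
    lead_vehicle_distance_m: float  # distance to the car ahead, in meters
    lane_offset_m: float            # our lateral offset from lane center

class PerceptionSystem:
    def estimate(self, camera_frame, lidar_scan) -> WorldEstimate:
        # In a real stack, ML models turn raw sensor data into this estimate.
        # Here we return a fixed value purely to show the data flow.
        return WorldEstimate(lead_vehicle_distance_m=42.0, lane_offset_m=0.1)

class ControlSystem:
    def act(self, world: WorldEstimate) -> str:
        # The controller never sees raw pixels, only the perception output,
        # so any perception error propagates directly into its decisions.
        if world.lead_vehicle_distance_m < 20.0:
            return "brake"
        if abs(world.lane_offset_m) > 0.5:
            return "steer_to_center"
        return "maintain_speed"

perception, control = PerceptionSystem(), ControlSystem()
print(control.act(perception.estimate(camera_frame=None, lidar_scan=None)))
# -> maintain_speed
```

The one-way dependency is the point: if the perception system gets the world wrong, the control system acts on that wrong picture.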

According to Sayan Mitra, a computer scientist at the University of Illinois Urbana-Champaign, accidents happen when the perception system misinterprets its data. His approach, however, is not to pursue perfect perception. It is to learn the perception system's range of error and guarantee that it stays within a specified bound of uncertainty.

For example, if you know your speedometer is off by at most five miles per hour, then when it reads 65, you may really be going anywhere from 60 to 70 MPH, but no slower and no faster. That is what Mitra and his team are doing for the perception systems of driverless cars: identifying the range of error. This range provides a statistical guarantee about the safety of the system.
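
As a rough sketch of how such a bound can be used downstream, the snippet below plans against the worst case the error bound allows. The specific numbers and the braking formula are illustrative assumptions, not Mitra's published method.

```python
# Reasoning with a bounded perception error, in the spirit of the
# speedometer example above.

PERCEPTION_ERROR_M = 2.0  # assume: distance estimates are off by at most ±2 m

def stopping_distance_m(speed_mps: float, decel_mps2: float = 6.0) -> float:
    """Distance needed to brake to a stop: v^2 / (2a)."""
    return speed_mps ** 2 / (2 * decel_mps2)

def safe_to_proceed(estimated_gap_m: float, speed_mps: float) -> bool:
    # Plan against the worst case the bound allows: the obstacle may be
    # PERCEPTION_ERROR_M closer than the perception system reports.
    worst_case_gap = estimated_gap_m - PERCEPTION_ERROR_M
    return worst_case_gap > stopping_distance_m(speed_mps)

print(safe_to_proceed(estimated_gap_m=35.0, speed_mps=15.0))  # True: 33 m > 18.75 m
print(safe_to_proceed(estimated_gap_m=20.0, speed_mps=15.0))  # False: 18 m < 18.75 m
```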

Even with a well-characterized margin of error, however, the industry faces the challenge of operating without a human driver's judgment. In its October accident, Cruise's autonomous vehicle struck a pedestrian and then, while attempting to pull itself over, dragged her beneath the car. In separate incidents, San Francisco recorded nearly 75 occasions on which autonomous vehicles interfered with emergency rescue operations. And in February, shortly after Waymo issued a recall, one of its vehicles was involved in another pedestrian collision. This string of crashes has prompted significant pushback from industry unions and skeptical government bodies.

2. Cyberattacks

The perception system, already a weak point from a safety standpoint, is also vulnerable to cyberattacks.

ML models “learn” during a training phase, in which they are shown data alongside corresponding labels. The model examines the data and adjusts its internal settings so that it can link each piece of data to the correct label. For example, ML systems must be trained to “recognize” stop signs, traffic lights, other cars, and so on. But they are not actually seeing these objects; they are receiving data that they have learned corresponds to certain labels.
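
A toy example makes the point; see the sketch below. The two features (a "red pixel ratio" and an "octagon score") are invented stand-ins for whatever a real network extracts from images, and real perception models train on millions of labeled examples, not four hand-made rows.

```python
from sklearn.linear_model import LogisticRegression

# Invented features: [red_pixel_ratio, octagon_score] per image.
X = [[0.90, 0.80], [0.85, 0.90], [0.10, 0.20], [0.20, 0.10]]
y = [1, 1, 0, 0]  # 1 = stop sign, 0 = not a stop sign

# Training: the model adjusts its internal settings (weights) until the
# feature vectors map to the correct labels.
model = LogisticRegression().fit(X, y)

# The model never "sees" a stop sign; it maps numbers to a learned label.
print(model.predict([[0.80, 0.85]]))  # -> [1]
```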

Adversarial attacks occur when cybercriminals deliberately manipulate ML models by feeding them maliciously crafted inputs, resulting in faulty outputs. For example, an adversarial attack can cause the ML model in a car to misinterpret a stop sign, or to miss it entirely. Perception failures of this kind, whether malicious or not, have figured in many of the incidents reported above, contributing to major recalls by companies like Waymo and Cruise.
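
The snippet below sketches the core mechanic behind gradient-based evasion attacks such as FGSM, using a tiny hand-set logistic "stop sign" classifier. The weights and perturbation size are invented for illustration; real attacks target deep networks and often perturb images imperceptibly.

```python
import numpy as np

w = np.array([5.0, 5.0])  # hand-set weights for [red_pixel_ratio, octagon_score]
b = -6.0                  # hand-set bias

def p_stop_sign(x):
    """Probability the input is a stop sign, per our toy classifier."""
    return 1 / (1 + np.exp(-(w @ x + b)))

x = np.array([0.9, 0.8])  # a clean stop-sign input
print(p_stop_sign(x))     # ~0.92: confidently a stop sign

# FGSM-style step: push the input against the correct label (y = 1) along
# the sign of the input gradient of the loss, which here is (p - 1) * w.
eps = 0.3
grad = (p_stop_sign(x) - 1.0) * w
x_adv = x + eps * np.sign(grad)  # a small, deliberate perturbation

print(x_adv)               # [0.6, 0.5]: still plausibly sign-like numbers
print(p_stop_sign(x_adv))  # ~0.38: the model no longer "sees" a stop sign
```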

Self-driving cars run on complex systems built around the Internet of Things (IoT), which connects them to the cloud and to each other. This connectivity enables the remarkable things they do, but it is also a cybersecurity vulnerability. If a malicious actor manages to breach the security of one IoT device, they can potentially gain access to a large number of vehicles, causing mass disruptions and/or crashes in some of the largest US cities.

Beyond adversarial attacks on the ML models themselves, cybercriminals can perpetrate all of the typical crimes IoT devices are vulnerable to, including ransomware, DoS and DDoS attacks, and phishing.

Researchers from the Nara Institute of Science and Technology in Japan have proposed an adversarial attack defense mechanism that uses an autoencoder and a comprehensive memory module. They found it significantly effective across different types of ML models: over 90% effective in white-box settings and between 64% and 74% effective in black-box settings. In practical terms, this is a promising start toward defending against such attacks, though its real-world feasibility remains unclear.
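
The exact architecture is beyond this post, but the snippet below sketches the general detection-by-reconstruction idea behind many autoencoder defenses; it is emphatically not the Nara team's implementation (which adds the memory module). An autoencoder trained only on clean inputs reconstructs them well, so a high reconstruction error flags a suspicious input.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Stand-in "clean" data: 2-D points lying near a 1-D structure.
base = rng.uniform(0, 1, 200)
clean = np.column_stack([base, base]) + rng.normal(0, 0.02, (200, 2))

# A 1-component PCA acts as a minimal linear autoencoder for that structure.
encoder = PCA(n_components=1).fit(clean)

def reconstruction_error(x):
    x = np.atleast_2d(x)
    return float(np.linalg.norm(x - encoder.inverse_transform(encoder.transform(x))))

THRESHOLD = 0.1  # illustrative; in practice set from clean validation data
print(reconstruction_error([0.7, 0.7]) < THRESHOLD)  # True: looks clean
print(reconstruction_error([0.9, 0.2]) < THRESHOLD)  # False: flagged as suspicious
```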

Asking the Right Questions

Despite the clear challenges of autonomous vehicles, there are still notable benefits to their development, such as mitigating the risk of drunk driving.

In 2022, 42,795 people died in motor vehicle crashes in the US. By comparison, between July 2021 and May 2022, cars equipped with Advanced Driver Assistance Systems (a form of partial automation) were involved in 392 reported crashes, resulting in six deaths and five serious injuries.

While autonomous car crashes are concerning, driving in and of itself is a dangerous act. Autonomous vehicles will continue to be developed, and as they are, they must account for human error without removing the safeguards that let a human driver respond (e.g., to a small child, an emergency vehicle, or an unexpected roadblock). As autonomous vehicles become more prevalent, and they will, the focus may shift from their safety in isolation to their contextual safety, making them better equipped overall to operate on populated roads.

At this point, we do not have enough data to say whether autonomous vehicles are safer than human drivers, simply because autonomous vehicles are still far rarer than human-driven ones. As we have already seen, though, this question will likely be central to future debates. How the technology holds up to current safety standards remains to be seen.