Feds Release ADAS-Pilot Crash Data But Little Information Except That There Probably Isn’t A Problem
Last week the National Highway Traffic Safety Administration released what seemed to be a large trove of data about crashes in cars with ADAS “autopilot” functions, as well as prototype self-driving cars being tested. There were 392 crashes over a 10-month period for these vehicles, most of them Teslas.
The reality, though, is that there are, depending on who you ask, between 6 and 12 million crashes in a typical year. Police report about 6 million. Insurance companies hear about 12 million. There are probably another 15 million minor dings nobody tells anybody about. There are far fewer “airbag deployment” crashes, which are what Tesla reported to NHTSA, but still quite a few.
No matter what number you use, the number of crashes with these systems on is a tiny, tiny fraction. What matters is the rate of crashes per mile driven in different situations (including highway vs. street, as the crash rate per mile is around 3x higher on the street). Many in the press lamented that NHTSA did not provide any numbers on these rates, just absolute counts of crashes. So nobody can say much about what really matters, which is whether people using these systems are having more crashes, or worse crashes, than those who don’t.
There is data to suggest it’s not worse. Every quarter, Tesla releases a misleading analysis of how many miles per crash (airbag deployment) Teslas have with Autopilot on or off. It’s misleading because Autopilot is mostly used on freeways (over 90% of the time) where the normal crash rate is much lower per mile. Earlier, I analyzed the data and combined it with other studies to conclude that Teslas had similar crash rates with Autopilot on as with off.
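To see why the road mix matters so much, here is a minimal sketch of the adjustment. All the figures in it are made-up placeholders (Tesla does not publish its road-mix breakdown in these terms); only the method, blending freeway and street crash rates by share of miles driven, reflects the argument above.

```python
# Hypothetical illustration of why a raw miles-per-crash comparison is
# misleading. Every number here is invented for the sketch; only the
# blending method matters.

# Assumed: surface streets see roughly 3x the crashes per mile of freeways.
STREET_RISK_MULTIPLIER = 3.0

def blended_crash_rate(freeway_rate, freeway_share):
    """Crashes per mile for a fleet driving freeway_share of its miles on
    freeways and the rest on ~3x-riskier surface streets."""
    street_rate = STREET_RISK_MULTIPLIER * freeway_rate
    return freeway_share * freeway_rate + (1 - freeway_share) * street_rate

# Hypothetical baseline: 1 freeway crash per 3 million miles.
FREEWAY_RATE = 1 / 3_000_000

# Ordinary manual driving, assumed 50/50 freeway/street mix:
manual = blended_crash_rate(FREEWAY_RATE, freeway_share=0.5)

# The same ordinary driver on Autopilot's road mix (~90% freeway):
manual_on_autopilot_mix = blended_crash_rate(FREEWAY_RATE, freeway_share=0.9)

print(f"Manual, mixed roads:     1 crash per {1 / manual:,.0f} miles")
print(f"Manual, 90% freeway mix: 1 crash per {1 / manual_on_autopilot_mix:,.0f} miles")
```

Under these invented numbers, the 90%-freeway mix alone stretches miles-per-crash from 1.5 million to 2.5 million, with no driver-assist benefit at all. That is the gap a fair comparison has to subtract out before crediting Autopilot.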
Unfortunately, this data set doesn’t do much to improve that analysis. Perhaps in time more data will be released that says more. While Tesla gets full data uploads from its cars and knows the details, some other manufacturers don’t get this data from their cars and can’t tell NHTSA.
Why would the safety record be better, worse, or the same?
A good ADAS pilot will improve the driving of its user. It is looking in all directions at all times, and having two sets of eyes on the road, one human and one computer, can be a plus. We know that automatic emergency braking, which can hit the brakes when the driver doesn’t, reduces accidents. And while you would never want to do this, if you are going to fall asleep while driving (which some suspect is the largest cause of accidents, even more than drinking) you absolutely want to do it in a car with an ADAS pilot driving. 99 times out of 100, it will just slow to a stop if you don’t wake up, rather than careen off the road as a regular car will do 100 times out of 100.
In the other direction, we have the problem of automation complacency. The systems are good enough that people trust them too much. They might be more willing to risk nodding off, or texting, or even ignoring the road for long periods. Then, when the systems screw up, as they definitely do, you are less safe.
The big question is how much each factor contributes. Perhaps 90% of drivers use the systems well, as a team, with “two sets of eyes,” while 10% get lazy and increase their risk. Overall safety could go either way. Perversely, as the systems get better, both factors can increase: the second set of eyes gets better, but the risk of complacency also goes up. (Meanwhile, teams are working on true self-driving systems that need no human driver. Those can’t be put into production until they already make things safer than ordinary drivers.)
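The knife-edge nature of this trade-off is easy to see with a toy mixture calculation. The shares and risk multipliers below are assumptions chosen for illustration, not measured values.

```python
# Hypothetical sketch: fleet-wide crash risk when most drivers use an
# ADAS pilot well and a minority get complacent. Shares and multipliers
# are assumptions, not data.

def fleet_risk(attentive_share, attentive_multiplier, complacent_multiplier):
    """Fleet crash risk relative to unassisted driving (1.0 = no change)."""
    complacent_share = 1.0 - attentive_share
    return (attentive_share * attentive_multiplier
            + complacent_share * complacent_multiplier)

# Assumed: 90% of drivers get a modest "second set of eyes" benefit
# (10% fewer crashes), while 10% get complacent and double their risk.
print(fleet_risk(0.9, 0.9, 2.0))   # slightly above 1.0: a small net worsening

# A slightly larger benefit for attentive drivers flips the result:
print(fleet_risk(0.9, 0.85, 2.0))  # below 1.0: a net improvement
```

With these made-up numbers, the fleet outcome swings from slightly worse to clearly better on a 5-percentage-point change in the attentive drivers’ benefit, which is why the aggregate data could plausibly land on either side.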
One answer to this is to monitor the driver to reduce complacency. Some systems do this with a camera watching the driver’s eyes. Tesla famously avoided this for a long time, requiring only regular torque on the wheel from the driver’s hands. Tesla has since started using an internal camera to watch the driver, and may be able to improve its monitoring.
In general, the outlook seems good. These systems do not appear to be making things worse, and as their driver monitoring and driving quality improve, they have great potential to make things better. Let’s hope we get usable data on this, and that we continue on this path.