The Self-Driving Car Conundrum

T.Z. Barry

Self-driving cars are inevitable. Tesla, Ford, BMW, Nissan, GM, Google, Baidu, Apple, Amazon, Uber, Lyft, and many more companies are all working on developing autonomous driving systems for their vehicles. It is not a matter of if, but when: self-driving automobiles will be ubiquitous in the future. There is one core conundrum, however. Self-driving cars will only work if there are only self-driving cars.

So much of driving comes down to knowing, understanding, and predicting the behavior of other drivers, something humans can do moderately well with other humans, and artificial intelligence (AI) can do extremely well with other AI, but which AI cannot yet do very well with humans.

For humans, driving often comes down to subtle gestures and split-second eye contact between drivers and/or pedestrians. For instance, when you’re at a four-way stop sign and each car is inching up, there’s a general rule that you take turns. But what happens when a pedestrian or bicyclist enters the equation? These situations come down to the driver knowing what the other drivers and pedestrians will do, which is picked up through the kinds of subconscious, subtextual rules that govern human behavior: eye contact, a nod of the head, a wave of the hand. In short, it’s based on common sense, something that comes easily to (most) humans but is surprisingly difficult for AI to figure out. AI cannot (as yet) pick up on humans’ myriad social cues. That ability may only come with general AI, that is, AI that matches human intelligence in every way.

When driving, humans generally follow “unwritten rules” of the road. These unwritten rules, however, often directly contradict the expressly written rules of the road under the law. The truth is that human drivers constantly break the law. They drive faster than the speed limit, don’t come to a complete stop at stop signs, don’t always use the turn signal, cross double-yellow lines to pass other cars, park in “No Parking” zones, stand in “No Standing” zones, speed up to beat yellow lights, and turn left through intersections after the light turns red. All of these technical infractions are completely normal and accepted, in fact expected, even by law enforcement. A driver who follows the letter of the law, such as going 50 MPH in a 55 MPH zone, will be tailgated and beeped at. The written law says 55 is the maximum speed you can go, but the unwritten law says you can safely go 60–65 MPH.
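
To make that tension concrete, here is a toy sketch, in Python, of two speed policies an autonomous vehicle could follow, one obeying the written law and one obeying the unwritten law. Nothing here comes from any real system; the names, numbers, and logic are illustrative assumptions.

# Two hypothetical speed policies, for illustration only.

def legal_speed(posted_limit_mph: float) -> float:
    """The 'written law' policy: never exceed the posted limit."""
    return posted_limit_mph

def customary_speed(posted_limit_mph: float, traffic_flow_mph: float) -> float:
    """The 'unwritten law' policy: roughly match surrounding traffic,
    even if that means modestly exceeding the posted limit."""
    return min(traffic_flow_mph, posted_limit_mph + 10)

# On a 55 MPH highway where traffic is moving at 63 MPH:
print(legal_speed(55))          # 55 -- strictly lawful, and likely to get tailgated
print(customary_speed(55, 63))  # 63 -- roughly what human drivers actually do

A car running the first policy is technically correct and practically a hazard in real traffic; a car running the second is safer in mixed traffic but is, strictly speaking, breaking the law.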

How do you teach AI to ignore the written law and follow the “unwritten law” of the road? The ultimate problem with self-driving cars is that if AI and human drivers share the road, there will inevitably be accidents between them. AI drivers following the written law and human drivers following the unwritten law cannot safely coexist. That doesn’t mean we should abandon self-driving cars; there are, after all, already millions of accidents between human drivers each year. Humans have their own flaws, such as being distracted by their phones, radios, and fellow passengers, becoming fatigued and falling asleep at the wheel, or driving while intoxicated. AI would not fall prey to any of those issues, but as long as AI drivers share the road with human drivers, dangers between them will persist. Self-driving cars will never be able to reliably anticipate what a human driver will do, because each driver follows their own interpretation of the rules of the road.

The safest roadway would be one in which there are only self-driving cars. Each car’s AI would be connected via the internet, so that every single car knows exactly what every other car is doing and what it will do under every possible circumstance. The self-driving cars would all work together like a hive-mind super-organism; millions of autonomous cars on the road would be like cells in one giant brain. There would never be accidents between self-driving cars, because they would be programmed to make it impossible for two cars to collide.
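
As a hypothetical sketch of that hive-mind idea (the structures and names below are assumptions made up for illustration, not a description of any real system), imagine every networked car reserving the stretch of road it plans to occupy with a shared planner, which rejects any reservation that would overlap another car’s:

from dataclasses import dataclass

@dataclass(frozen=True)
class PlannedMove:
    car_id: str
    cell: tuple[int, int]   # the road "cell" the car intends to occupy
    time_step: int          # when it intends to occupy it

class SharedPlanner:
    def __init__(self):
        # Maps (road cell, time step) -> the car that has reserved it.
        self.reserved = {}

    def request(self, move: PlannedMove) -> bool:
        """Grant the move only if no other car holds that cell at that time,
        so a two-car collision is impossible by construction."""
        key = (move.cell, move.time_step)
        if key in self.reserved and self.reserved[key] != move.car_id:
            return False  # conflict: this car must replan
        self.reserved[key] = move.car_id
        return True

planner = SharedPlanner()
print(planner.request(PlannedMove("car_a", cell=(3, 7), time_step=12)))  # True
print(planner.request(PlannedMove("car_b", cell=(3, 7), time_step=12)))  # False, so car_b replans

Human drivers cannot participate in a scheme like this, which is exactly why the conundrum only dissolves once every car on the road is autonomous.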

The car’s AI system would detect potential mechanical issues before they result in failure and prevent the car from driving until the problem is repaired. The only possible accidents would be with pedestrians or other elements of the environment not connected to the AI network. That is obviously a problem, but there are plenty of precautions that can be taken to reduce those kinds of incidents. Pedestrians and automobiles could have separate roadways, with overpasses, underpasses, and tunnels, so they never have to interact. As for acts of nature, they may be unavoidable, but automated cars would be better equipped to deal with them than humans are. AI can run billions of simulations of potential environmental hazards and share the results with every self-driving vehicle on the road. Although a particular self-driving car may face an ice storm for the first time, the AI operating it will have encountered that scenario countless times and will know exactly what to do. A human driver who encounters a completely novel scenario on the road, by contrast, has no experience to draw on.

If you’re one of those people who, for whatever reason, enjoy the experience of driving yourself, then enjoy it while it lasts, because in the future it will probably be illegal for humans to drive. Driving a car will be like riding a horse today: something done recreationally on designated closed courses. I like driving myself, but I would happily give it up to reduce the death and destruction caused by automobile accidents.

This essay appears as the introduction to Death by Self-Driving Car, a collection of three short science fiction stories that explore the future of autonomous vehicles, currently available as an ebook.

Originally published at http://tzbarry.com on October 29, 2020.
