Semi-Autonomy Isn’t the Middle Ground We Want

2017 will go down as a strange stretch of the autonomous car timeline. Today’s cars live in a purgatory-like state, stuck in the middle ground between human and machine control. Brands like Tesla, Audi, and Cadillac, to name a few, are currently selling cars that in certain situations drive themselves completely. We are also in the golden age of driver aids: helpful features built into cars that make driving easier but do not do the job for you. Think of lane departure warnings, traction control systems, backup cameras, and so on. Cars rolling off the production line these days are packed with so much tech that the average consumer often doesn’t know where to start when stepping into a new vehicle.

These rolling tech expos highlight one of the challenges inherent to the middle stages on the path toward level 5 autonomy. The cars are trying to drive themselves, but we won’t let them, thanks to a combination of regulations and the fact that they aren’t quite ready to let drivers safely fall asleep at the wheel. Once cars can handle every driving task on their own, the various driver aids and safety features will be folded into the vehicles’ software; for now, people are left managing a flood of information from in-car tech that can make driving overly complicated. In this post I want to survey where we are with autonomous features on non-autonomous cars, identify some of the related problems, and look at how future, increasingly autonomous vehicles might address these concerns.

A good benchmark for the latest and greatest in car technology is the new Audi A8, pictured below.

It can recognize a parking spot and parallel park itself without any input from the driver. The car can even do this while the driver stands outside, communicating with the vehicle through a smartphone app. It’s easy to forget about all this parking technology, though, once you look at what Audi calls “Traffic Jam Pilot.” This is the first level 3 autonomous system to appear in a production car, which is no small achievement. When the driver presses a button on the highway, the A8 will fully drive itself at speeds up to 37 miles per hour. Unlike the comparable Tesla or Volvo systems, it does not require the driver to keep their hands on the wheel, so the feature really is the first of its kind. Here’s what Traffic Jam Pilot looks like in action.
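The interesting thing about a level 3 system like this is that it only hands control to the car when a narrow set of conditions holds. As a rough sketch, the publicly described constraints (a driver request, highway driving, and a speed ceiling around 37 mph) can be thought of as a simple activation gate. The condition names below are my own illustration, not Audi’s actual software or API:

```python
# Hypothetical sketch of the activation gate a system like Traffic Jam
# Pilot might apply. The names and structure are illustrative only;
# the 37 mph ceiling is the figure reported for the A8.
TRAFFIC_JAM_PILOT_MAX_MPH = 37

def can_engage(speed_mph: float, on_divided_highway: bool,
               driver_pressed_button: bool) -> bool:
    """Return True only when every reported precondition holds."""
    return (driver_pressed_button
            and on_divided_highway
            and speed_mph <= TRAFFIC_JAM_PILOT_MAX_MPH)

print(can_engage(25.0, True, True))   # True: slow, congested highway traffic
print(can_engage(55.0, True, True))   # False: above the speed ceiling
```

The point of the sketch is that the system is conservative by design: if any condition fails, the human keeps driving, which is exactly the kind of hand-off that confuses drivers who don’t know the conditions exist.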

All this tech seems great at first glance, but an unintended consequence of the increased complexity in cars is that people just don’t know how to use it all. In a report published by MIT’s AgeLab in September, researchers asked whether participants could tell what current semi-autonomous features do based on their names, and found that the vast majority could not identify a system’s capability from its name alone[1]. If people cannot identify what a system is for, they are certainly not going to use it out on the road. This sheds light on a real problem in the automotive industry: people are generally unaware of what safety features their cars have. The issue is pervasive enough that the University of Iowa and the National Safety Council created a website, MyCarDoesWhat.org, that walks through these modern features and tells drivers which ones their car has and how to use them. You can take a look at the site if you aren’t sure what your car is capable of.

While it is understandable that the average consumer is not up to date on the latest features, in-car technology is becoming so complicated that even dealership staff are at a loss to explain and sell these increasingly complex vehicles. Wired’s Aarian Marshall found that many car dealers have a concerning lack of knowledge about semi-autonomous systems[2]. Not only did they not know how the systems worked, they spread dangerously incorrect information to potential customers, such as claiming that a parking assist system brakes for the driver when it actually doesn’t, or that a pedestrian detection system that only works above 30 mph works at all speeds. The implications of this misinformation extend beyond the obvious possibility of people crashing because they misunderstood a semi-autonomous feature. How should a judge interpret a case where a driver misunderstood a system? Who is at fault: the driver, the dealer, or the manufacturer? Assigning blame when things go wrong is a big topic within the sphere of self-driving cars, and with the current generation of technology, most people choose to handle the driving themselves rather than wade into these difficult questions. A 2015 study found that among 265 Hondas brought in for servicing in the Washington, D.C. area, more than two-thirds had their lane departure warning systems turned off[3]. While it seems counterintuitive to disable safety features, I can understand why many people would rather not have extra stimulation from their car, like a beep or a flash, while trying to focus on the road, especially if they don’t know what the system is doing.

It is clear, then, that in some ways the slew of tech wrapped around today’s cars does more harm than good. But how can we move toward the light at the end of the tunnel that is full autonomy for all cars? When Bryan Reimer from MIT spoke to our class, he mentioned that some experts think skipping level 3 autonomy and leaping straight from level 2 to level 4 is a better alternative. Level 2 incorporates assistance like adaptive cruise control, but not the autonomous features that, say, the new Audi A8 has. Level 4 means full autonomy within a specified environment or area. Level 3, which is where the A8 sits, has the car drive itself but mandates driver supervision and vigilance. What we’ve seen is that these level 3 technologies create a lot of confusion and often go unused. Since the driver ultimately has full control anyway, maybe it’s better that new cars retain a higher level of driver input until we have made greater advances in autonomy. Easy driving is safe driving, and if the bells and whistles fitted to modern cars are more distracting than useful, making driving harder and therefore more dangerous, then there is no need for them until we can rely on the vehicles to use this new tech on their own.

[1]https://www.researchgate.net/publication/319269928_What’s_in_a_Name_Vehicle_Technology_Branding_Consumer_Expectations_for_Automation

[2] https://www.wired.com/2017/01/car-dealers-dangerously-uneducated-new-safety-features/

[3] http://www.tandfonline.com/doi/abs/10.1080/15389588.2016.1149698?journalCode=gcpi20&

Other works used but not cited:

https://www.wired.com/story/no-one-knows-self-driving-car/?mbid=BottomRelatedStories

https://www.wired.com/story/self-driving-cars-take-over-highways/?mbid=BottomRelatedStories

https://www.wired.com/2016/06/mercedess-new-e-class-kinda-drives-kinda-confusing/

One thought on “Semi-Autonomy Isn’t the Middle Ground We Want”

  1. Awesome post, Gabe! I totally agree that users’ inability to understand ever-increasing levels of autonomy will prove to be a very important problem. In fact, it is one of the main issues that Dylan H. and I will be trying to tackle in our presentation in a few weeks. However, I do not share Dr. Sawyer’s optimism about jumping straight to level 4 autonomy; I am not sure it would even be feasible. The first obstacle in the way of widespread level 4 autonomy would be the government. As we heard from Anik at nuTonomy, it is not an easy task to get local governments to approve the use of autonomous vehicles in a specified environment like Boston. That resistance existed even for the level 2 cars they were planning to use. The only way I see governments allowing this is if they were incredibly confident in the technology. This brings me to my second point: even if the government were fully confident in the technology, I truly believe users will not be. It is going to take a long time (and, in my opinion, a general progression from level 2 to 3 and then to 4) for users to fully trust these automated systems. Without everyone on board with stepping into a level 4 autonomous car, there will still be level 2 vehicles on the road, which will lead to less effective communication between these automated systems and thus less effective automated driving. Instead of making the jump to level 4, I think level 3 systems will have to provide a user interface that makes the car’s technology transparent and intuitive. How designers will go about that is still up in the air, but I believe this is a more effective step toward gaining user trust and getting one step closer to full autonomy.
