In Defense Of Tesla: The Recent Autopilot Naming Study Is Being Grossly Misinterpreted


Tesla has become known for controversy surrounding its selection of the word “Autopilot” to represent its car automation system. (Photo: © 2015 Bloomberg Finance LP)

The news has been awash with raucous commentary about a recent study that attempted to analyze the names being given to various Advanced Driver Assistance Systems (ADAS), and unfortunately, the preponderance of the blather is grossly misinterpreting what the study actually accomplished.

In particular, the most notable aspect involves the Autopilot name that has become a hallmark of Tesla and a favored moniker of Elon Musk.

In short, many are suggesting that the study purportedly “proves” that the Autopilot name is misleading and inappropriate, and that supposedly “drivers assume Autopilot does more than it does” (a matter that I’ll be analyzing herein).

Some readers already know that I’ve taken Tesla and Musk to task on various fronts, including my analysis of their quarterly safety statistics. In that analysis, I tried to provide a reasoned and thorough basis for asserting that those numbers are being misinterpreted, and I expressed concern that Tesla essentially is promulgating the misinterpretations when it could readily set the record straight by providing more stats and sharing the underlying data to offer constructive validation and elaboration of its safety record (being more transparent about its safety status).

To put it mildly, fans of Tesla were not particularly fond of the analysis. As such, I’m guessing that some of those same supporters might be surprised to see that I am now providing a kind of defense for Tesla on this news front.

In our new-found era of tribalism, it seems that we all must always be ensconced in one tribe or another. For Tesla, presumably, you are either in the “always in favor” camp or the “always opposed” camp.

My camp is the one that seeks strong science and appropriate interpretations of scientific results.

This means that I’m ready, able, and purposeful about offering insights for circumstances that warrant analysis, regardless of where the chips fall. I am a proponent of properly conducted scientific studies and an equally stout proponent of ensuring that those studies are interpreted sensibly and within the context of what the study actually shows.

Unpacking The Naming Study

Let’s unpack what this naming study reported.

Researchers conducted a telephone survey of about 2,000 drivers, selected to be nationally representative, with the poll performed in 2018.

It is useful to always first consider who a survey targeted, how many respondents were involved, and how the survey respondents were selected.

I mention the importance of scrutinizing the sample set because of the infamous case of the 1936 presidential election, in which a reputable magazine (the Literary Digest) predicted that candidate Alfred Landon would win over candidate Franklin D. Roosevelt. The fatal flaw in that fouled prediction was that the sample was drawn largely from telephone directories and automobile registrations, and in that particular time period only a certain class of people could afford phones and cars, making the case a notable example of sampling bias.

I think we can be generally comfortable that using the telephone as the outreach instrument, last year, is reasonably satisfactory in today’s times, and that 2,000 drivers is a large enough sample of the overall population, and we’ll take the researchers at their word that they somehow ensured that those contacted were a national representation of drivers.
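As a rough sanity check on that sample-size claim, here’s a minimal sketch of the margin of error for a proportion estimated from 2,000 respondents (my own illustrative calculation, not something reported by the study):

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a proportion p estimated from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# Worst case (p = 0.5) for the full sample of 2,000 drivers.
print(f"±{margin_of_error(0.5, 2000):.1%}")  # roughly ±2.2 percentage points
```

In other words, a properly drawn sample of 2,000 pins down a population proportion to within a couple of percentage points, which is why the sample size itself isn’t the worry; the selection method is.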

You could easily skew such a study by either intentionally or unintentionally selecting only certain kinds of drivers, say only those over the age of 65, or you could focus on just drivers in the city of Pinole, but let’s assume that didn’t happen in this case.

What did the survey ask of the respondents?

Unfortunately, we aren’t given the actual text of the questions used, and I’ll just add that how you ask a question can make a big difference in the results of any survey, but in any case they reportedly asked “questions about behaviors respondents perceived as safe while a Level 2 driving automation system is in operation.”

As background, the Society of Automotive Engineers (SAE) has provided a multi-level numbering system to classify cars by the extent of their automation. Levels 4 and 5 are the topmost and are essentially true autonomous cars, meaning no human driver is needed, while Levels 2 and 3 are cars that require a human driver and involve a co-sharing of the driving task with the ADAS automation.
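For quick reference, here is the taxonomy as a simple lookup table (my own condensed paraphrase of the SAE J3016 levels, not the study’s wording):

```python
# Condensed paraphrase of the SAE J3016 driving automation levels.
SAE_LEVELS = {
    0: "No automation: the human driver does all of the driving",
    1: "Driver assistance: automation aids steering or speed, but not both",
    2: "Partial automation: automation steers and controls speed; the human must monitor",
    3: "Conditional automation: automation drives; the human must take over when requested",
    4: "High automation: no human driver needed within a defined operating domain",
    5: "Full automation: no human driver needed anywhere a human could drive",
}

print(SAE_LEVELS[2])  # the level the naming study asked about
```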

Be aware that I’ve described at length the difficulties associated with the co-sharing of the driving task between automation and humans, especially as Level 3 is now emerging, and have forewarned that, as a result, we are facing new dangers on our roadways.

Back to the study, the Autopilot name was “associated with the highest likelihood that drivers believed a behavior was safe while in operation, for every behavior measured, compared with other systems names. Many of these differences were statistically significant.”

Furthermore, the study said that “when asked whether it would be safe to take one’s hands off the wheel while using the technology, 48 percent of people asked about Autopilot said they thought it would be, compared with 33 percent or fewer for the other systems. Autopilot also had substantially greater proportions of people who thought it would be safe to look at scenery, read a book, talk on a cellphone or text.”

It is also useful to mention that “each respondent was asked about two out of five system names at random for a balanced study design,” of which the five system names were “Autopilot (used by Tesla), Traffic Jam Assist (Audi and Acura), Super Cruise (Cadillac), Driving Assistant Plus (BMW) and ProPilot Assist (Nissan).”

And, notably, “respondents also were asked about Level 2 systems in general and about their own vehicle and driving.”

Finally, it is important to realize that “a limited proportion of drivers had experience with advanced driver assistance systems: 9–20% of respondents reported having at least one crash avoidance technology such as forward collision warning or lane departure warning, and fewer of these reported driving a vehicle in which Level 2 systems were available.”
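To give a feel for what “statistically significant” means in this setting, here is a minimal sketch of a two-proportion z-test applied to the reported 48% versus 33% hands-off-the-wheel figures. Note my assumption that each name was asked of roughly 800 respondents (2,000 respondents, each asked about two of the five names); this illustrates the kind of test involved, not the study’s actual analysis:

```python
import math

def two_proportion_ztest(p1: float, n1: int, p2: float, n2: int):
    """Two-sided z-test for the difference between two independent proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return z, p_value

# 48% of those asked about Autopilot vs. 33% for the next-highest name;
# ~800 respondents per name is my assumption from the balanced design.
z, p = two_proportion_ztest(0.48, 800, 0.33, 800)
print(f"z = {z:.1f}, p = {p:.2g}")  # z is about 6.1, p far below 0.05
```

Even with only a few hundred respondents per name, a gap that large would be very hard to attribute to sampling noise alone, which is consistent with the study’s statement that many of the differences were statistically significant.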

Making Conclusions Based On What The Study Did

Before I dig into what kinds of conclusions can be reached, let’s start with how people can have misconceptions about all sorts of things in our world.

Surveys show that most people believe that bats are blind. Not so. Bats do have eyes and can see. Admittedly, bats do use echolocation, which is what most people think of when asked about bats, but nonetheless, bats are not blind.

Surveys show that most people believe that ostriches stick their heads into the sand to hide from their enemies. Not so. They often will flop to the ground and lie flat, but it would be quite extraordinary for one to try to bury its head.

My point being that people have tons of misconceptions, and yet the misconception might or might not particularly matter per se.

It seems to me that the respondents in the naming study generally had little if any experience using Level 2 cars, and even less so Level 3 cars, and so they were merely speculating about what the names of the ADAS might mean. Their concepts, whether right or wrong, whether on-target or off-target, appear to be based on conjecture.

I suppose it’s not a good thing that a proportion of drivers have a misconception about Autopilot, and from a marketing perspective they might have a false impression about what Autopilot can do, perhaps leading them to consider buying a Tesla when they don’t really know what its capabilities are. But until those drivers with misconceptions get behind the actual wheel of a Tesla, it’s somewhat less crucial that they are harboring such misconceptions, in the balance of things.

In essence, this misconception about what Autopilot can or cannot do is really only crucial per se when it manifests itself in some substantive manner.

What we really need to know is whether such a misconception gets carried over into the act of driving a Tesla and using Autopilot.

Sure, there are anecdotal examples of Tesla drivers napping or doing other unsavory acts while they ought to be undertaking the driving task, but until a quantitative study on that specific aspect is performed, the matter remains more anecdotal than substantiated in an evidentiary way.

For those that drive a Tesla and use Autopilot, if they misconceive what Autopilot can and cannot do, that’s the danger spot. That’s what we need to find out.

There is a long path between being someone that happens to drive cars, and having a misconception about Autopilot, and then putting that misconception into action.

Presumably, such a driver would need to get behind the wheel of a Tesla and be using Autopilot before we would have qualms about the fact that they don’t know what they need to do as a driver and what the Autopilot is going to do.

Returning then to my earlier point, the chatter in the news is making the leap from the overall notion that drivers in general misconceive what Autopilot does to the conclusion that actual Autopilot-using drivers are full of misconceptions about Autopilot, a circumstance about which we would all certainly be rightfully concerned.

The study does not say this, nor was the study designed to get at this kind of question.

Earlier herein, I said this:

In short, many are suggesting that the study purportedly “proves” that the Autopilot name is misleading and inappropriate, and that supposedly “drivers assume Autopilot does more than it does” (a matter that I’ll be analyzing herein).

A key problem is that the word “drivers” is ambiguous, and it could be incorrectly construed as meaning “Tesla drivers assume Autopilot does more than it does” (I’ve inserted the word Tesla), but this study does not tackle that question, and instead the proper wording might be “overall-drivers assume that Autopilot does more than it does” (I’ve inserted the phrase overall-drivers, meaning drivers that generally aren’t Tesla drivers and just so happen to drive cars of one kind or another).

That’s a big distinction, a night and day difference.

Added Salient Points

I mentioned earlier herein that I was going to call things as I see them.

In terms of the recent study, I don’t think it’s right to go beyond the nature of the study and make assertions that weren’t under scrutiny. The study found that people overall, who happen to be drivers, seem to have misconceptions about Autopilot and assume that Autopilot can do more than it actually can do.

That makes sense to me, since I’ve repeatedly indicated that the Autopilot name is misleading in terms of what the automation currently can actually do.

It is perhaps helpful to know that my anecdotal belief is supported by the study, which shows that people overall, consisting of drivers in the United States, appear to have inflated views of Autopilot, and that the cause might be the name (we don’t know that it is the name per se, since the respondents could have reached this impression some other way, but it seems fair game to assume it is the naming).

What we don’t know from this particular study is whether those people that have inflated views of Autopilot are also then getting into a Tesla, turning on Autopilot, and driving on our public roadways with false assumptions that could cause them to get into car accidents and injure or kill others too.

Conclusion

Until we have robust studies that focus on that particular question, we are stuck with the intuitive sense that there are some Tesla drivers that have misconceptions about Autopilot, but we don’t know how many, and we don’t know how often such drivers are driving their Teslas with Autopilot engaged.

There have been some related studies such as on human vigilance and Autopilot disengagements, and others, but the field is still wide open and more needs to be done.

We know for sure that some do have that misconception, as evidenced by the videos of Tesla drivers not properly performing the driving task while, presumably, Autopilot is on, yet we don’t know how widespread this is. Of course, even having a small number of Tesla drivers doing so is dangerous, endangering themselves and the rest of us too.

In any case, it would be helpful to know how much of a problem it is, its extent or magnitude, along with whether the various means to overcome it are in place and being appropriately undertaken. There are ongoing anecdotal remarks that some say showcase that Tesla drivers are informed about what Autopilot can do when they first get the car and thus do know the limits of Autopilot, while there are other equally voiced anecdotal remarks that there isn’t any such formal training, or that those formal or informal training efforts are done in a short-shrift manner.

You can’t really carry on much of a useful debate or discussion when all you have is a bunch of anecdotes. As they say, opinions are opinions, but having the facts makes for a better dialogue.

Well, as you can see, I appear to be squarely in neither tribe, neither the “always favors Tesla” camp nor the “always opposes Tesla” camp.

Normally, the good thing about being in a particular tribe is that you usually get at least those tribe members to bolster you, supporting whatever you say; meanwhile, the opposing tribe is against you, but at least you’ve got a contingent backing you from the tribe you are in.

When you are not in either tribe, it usually means that both sides don’t like what you say. Ouch!

In my defense, perhaps we could all agree that more needs to be done on this topic and it could benefit us all, no matter which side of the fence you happen to be sitting on.


