Hidden Secrets That The Burger King ‘Autopilot Whopper’ Teaches Us About Tesla And Self-Driving Cars


What do AI, self-driving cars, Tesla, Autopilot, and hamburgers such as the infamous Burger King Whopper all have in common?

Well, a whopper of a story, of course.

A recent ad campaign by Burger King has this smarmy catchphrase: “Smart cars are smart enough to brake for a Whopper” (here’s a link to the video).

And, just to make sure that the ad comes with all the toppings, they also indicate “Artificial Intelligence knows what you crave.”

Admittedly, it is yet another notable example of Burger King cleverly leveraging what otherwise might have been a small-time viral social media item into something indubitably marketable for selling those vaunted burgers and fries. This time, their eagle eye and caustic wit involve a facet of the automation that powers self-driving cars, in this instance a Tesla running Autopilot, which is not yet a full-fledged self-driving capability (despite what you might have heard otherwise, see my forthright elucidation at this link here).

Here’s the backstory about the so-called Autopilot Whopper tale.

In May of this year, a driver posted a video on YouTube of a highway driving journey that purports to showcase a Tesla running Autopilot mistakenly classifying a roadside Burger King sign as possibly being a stop sign.

Note that the car merely began to slow down, gradually, and did not react jerkily or opt to radically try to come to a halt upon detecting what it interpreted as a possible stop sign.

How do we know what the car was trying to do?

Per the video recording made by the driver, the console screen in the Tesla flashes the classic message of “Stopping for traffic control,” which is the usual message letting the human driver know that the system has detected some form of traffic condition warranting the car being brought to a halt by the computer (for additional details about how this works, see my indication at this link).

The car remained in motion on the highway, and once the distance to the roadside sign diminished, the alert about an upcoming traffic control no longer displayed and the vehicle accelerated back up to the posted speed limit.

You might say this was a no-harm, no-foul situation.

No one was hurt, no car crash ensued, and it does not appear that traffic was disturbed in the least.

That being said, yes, the automation did at first falsely interpret the detected sign as possibly being a stop sign and began to reduce speed accordingly, but once the car got close enough to make a more definitive analysis, the system figured out that it was not a stop sign and continued unabated in traffic.

Let’s take the video at face value and assume it is not faked or otherwise doctored (you can see the original video at this link). I mention this caveat since someone could readily craft such a video via any decent video editing software, but generally, the video seems to be a likely indication of what happened and we can reasonably assume it is an actual occurrence (for more about faked videos of self-driving cars, see my indication here).

Your first thought, perhaps similar to mine, consisted of whether this was perhaps a one-time fluke or whether it would potentially happen a second time, a third time, and so on.

We do not know for sure if it was repeatable per se, though about a month or so later, the same driver drove the same way again and posted a newer video showing that the Tesla did not appear to make the same mistake (see this link for the late-June video posting).

In that subsequent video, the driver verbally congratulates Tesla on the notion that the car had presumably “learned” to deal with the Burger King sign and was no longer falsely categorizing it as a stop sign.

We cannot necessarily make that leap of logic, nor leap of faith.

Why so?

There could be other plausible reasons for why the vehicle did not react the same way as it had done the first time.

Allow me a moment to elaborate.

Imagine that you are driving a car and, depending upon the lighting and other environmental conditions, you see a roadside sign either more sharply or in a more occluded manner, varying with the amount of sunshine, cloud cover, and the like.

It could be that the camera detection differed from the first time and thus by luck of the draw the subsequent drive-by did not spot the sign at all, or it spotted the sign but got a better look at it this time (for details about AI-based roadway sign detection, see my discussion here).

Realize that at a distance, a camera picture or video is going to have less detail and be dealing with objects that are only vaguely visible. Again, this is somewhat like how you might strain to figure out a faraway object, and similarly, the on-board computer system attempts to classify whatever it can see, even if only seen faintly.

Many who are not involved in self-driving tech do not realize that driving, even by humans, consists of a game of probabilities and uncertainties.

When you see something up ahead on the road resembling, say, roadway debris, a stationary object sitting on the roadway, you might not know whether it is a hard object akin to a toolbox dropped from the bed of a truck, or maybe an empty cardboard box that is relatively harmless.

Until you get closer, you are pondering what the object might be, along with trying to decide in advance as to what course of action you should take. If you can switch lanes, maybe you should do so to avoid hitting the object. If you cannot readily switch lanes, maybe it is better to try and roll over the top of the object and thus not take other extreme measures like swerving or screeching to a halt.

This brings up an important lesson about AI and self-driving tech, which is that it is not going to operate in some magical way and drive with pure perfection. Just as a human will struggle to identify what a piece of roadway debris is and has to ascertain driving options, likewise the AI has to do the same.
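
To make this concrete, here is a minimal sketch in Python of how automation might act upon an uncertain detection, re-planning as it gets closer. The labels, confidence values, and thresholds are purely my own illustrative assumptions and not drawn from any production driving system:

```python
# A minimal sketch (illustrative only, not any automaker's code) of acting on
# an uncertain detection: ease off gently when unsure, commit only when the
# classification firms up, and re-plan every frame as the distance shrinks.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # e.g., "stop_sign"
    confidence: float   # classifier confidence in [0, 1]
    distance_m: float   # estimated distance to the object, in meters

def plan_reaction(det: Detection) -> str:
    """Pick a cautious action for a possible stop sign (thresholds are assumed)."""
    if det.label != "stop_sign":
        return "continue"
    if det.confidence > 0.9 and det.distance_m < 60:
        return "brake_to_stop"       # confident and close: commit to stopping
    if det.confidence > 0.5:
        return "ease_off_throttle"   # uncertain: slow gradually and keep watching
    return "continue"                # too uncertain to act upon

print(plan_reaction(Detection("stop_sign", 0.55, 200.0)))        # ease_off_throttle
print(plan_reaction(Detection("burger_king_sign", 0.97, 80.0)))  # continue
```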

That’s also why I keep exhorting that this notion of zero fatalities due to adopting AI driving systems is a false set of expectations.

We are still going to have car crashes, despite having AI driving systems. In some cases, it could be that the AI “judges” improperly and takes the wrong driving action, while in other situations, such as a pedestrian unexpectedly darting in front of a moving car, there are no viable alternatives available to avoid a collision.

Keep in mind that even the revered AI-based true self-driving car is still bound by the laws of physics.

When something untoward happens suddenly, without apparent pre-warning, you can only stop a car as quickly as physics allows and cannot miraculously cause the vehicle to instantaneously cease moving. Stopping distances are still stopping distances, regardless of human-driven versus AI-driven cars.
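
For those who like to see the arithmetic, here is a simple back-of-the-envelope calculation of stopping distance, namely the reaction distance plus the braking distance, using assumed reaction times and an assumed deceleration rate (illustrative numbers only):

```python
# A back-of-the-envelope calculation: total stopping distance is the reaction
# distance plus the braking distance. The reaction times and deceleration rate
# below are illustrative assumptions, not measured vehicle data.

def stopping_distance_m(speed_kmh: float,
                        reaction_time_s: float,
                        decel_ms2: float = 7.0) -> float:
    """Approximate stopping distance in meters (a deceleration of ~7 m/s^2 is
    hard braking on dry pavement)."""
    v = speed_kmh / 3.6                      # convert km/h to m/s
    reaction_dist = v * reaction_time_s      # distance covered before braking starts
    braking_dist = v ** 2 / (2 * decel_ms2)  # v^2 / (2a), from basic kinematics
    return reaction_dist + braking_dist

# Even with a near-instant reaction, a car at roughly highway speed needs a
# substantial distance to stop:
print(round(stopping_distance_m(105, reaction_time_s=0.2)))  # about 67 meters
print(round(stopping_distance_m(105, reaction_time_s=1.5)))  # about 105 meters
```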

That being said, it is certainly hoped that by having AI driving systems that fully operate a car, the number of car crashes due to human drunk driving and various other human driving foibles will be significantly reduced, and we will have far fewer injuries and fatalities on our roadways (but, I emphasize, still not zero).

In any case, just because the car did not repeat the mistaken identification of the Burger King sign on the subsequent run, we cannot assume that it was due to the car “learning” about the matter.

Unless we are allowed to dig into the Autopilot system and the data being collected, it is not readily determinable what has perhaps changed, though it does seem like a reasonable guess that the system might have been altered and can now do a better job of dealing with the Burger King sign.

What Is This Thing Learning?

Let’s suppose it was the case that the system was better able to categorize the Burger King sign.

Does that mean that the system “learned” about the matter?

First, whenever you use the word “learn” it can overstate what a piece of automation is doing. In a sense, the use of this moniker is what some people refer to as anthropomorphizing the automation.

Here’s why.

Suppose that the AI developers and engineers were examining the data being collected by their cars, including the video streams, and realized that the Burger King sign was being falsely classified as a stop sign. Those human developers might have tweaked the system to prevent it from doing so again.

In that case, would you describe the automation as having “learned” what to do?

Seems like a stretch.

Or, suppose that the system was using Machine Learning (ML) or Deep Learning (DL), consisting of an Artificial Neural Network (ANN), which is a type of mathematical pattern-matching approach that tries to somewhat mimic how the human brain might work (please be aware that today’s computer-based neural networks are a far cry from how the brain works, not at all equivalent, and generally a night-and-day kind of difference from the real thing).
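
For context, here is a toy illustration of what such a neural network amounts to in code, shown via PyTorch as one commonly used framework (this is purely illustrative and has nothing to do with any automaker’s production stack):

```python
# A toy artificial neural network: layered matrix math with learned weights.
# PyTorch is used here merely as a common framework; this is illustrative only
# and unrelated to any production driving stack.

import torch
from torch import nn

# A tiny image classifier that flattens a small camera crop and maps it to two
# scores: "stop sign" versus "not a stop sign".
tiny_classifier = nn.Sequential(
    nn.Flatten(),
    nn.Linear(32 * 32 * 3, 64),  # pattern-matching weights, learned from data
    nn.ReLU(),
    nn.Linear(64, 2),            # output scores for the two classes
)

fake_patch = torch.rand(1, 3, 32, 32)   # a stand-in for a cropped camera image
scores = tiny_classifier(fake_patch)
print(scores.softmax(dim=1))            # probabilities, not human-like understanding
```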

It could be that the back-end computer system at HQ has assembled data collected from cars in the fleet into a cloud database and might be set up to examine false positives (a false positive is when the detection algorithm assesses that something is there, such as a stop sign, but it is not a stop sign).

Upon computationally discovering the Burger King sign as a false positive, the system mathematically might flag that any such image is decidedly not a stop sign, and then this flag is pushed out to the cars in the fleet via the OTA (Over-The-Air) electronic communications that allow HQ to send data and program patches to the vehicles (for more about OTA, see my discussion at this link here).
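
Here is a hypothetical sketch of what such a back-end false-positive review might look like; the names, thresholds, and data fields are my own invented assumptions and not a depiction of Tesla’s actual pipeline:

```python
# A hypothetical back-end sketch (not Tesla's actual pipeline): gather fleet
# detections that the cars later retracted on closer approach, and build a
# suppression list that could be pushed out via an OTA update.

from collections import Counter
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class FleetDetection:
    vehicle_id: str
    lat: float
    lon: float
    predicted_label: str   # what the car briefly thought it saw, e.g., "stop_sign"
    retracted: bool        # the car dropped the detection once it got closer

@dataclass
class SuppressionEntry:
    lat: float
    lon: float
    label: str             # label to treat as a known false positive near here

def build_ota_suppression_list(detections: List[FleetDetection],
                               min_reports: int = 3) -> List[SuppressionEntry]:
    """Flag locations where several cars 'saw' a stop sign and then retracted it."""
    def location_key(d: FleetDetection) -> Tuple[float, float, str]:
        return (round(d.lat, 4), round(d.lon, 4), d.predicted_label)

    counts = Counter(location_key(d) for d in detections if d.retracted)
    return [SuppressionEntry(lat, lon, label)
            for (lat, lon, label), n in counts.items()
            if n >= min_reports]

# The resulting entries would be bundled into an OTA payload so that cars
# approaching the flagged spot treat a "stop sign" detection there with suspicion.
```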

Could you describe this as “learning” about Burger King signs?

Well, you might try to make such a claim, exploiting the aspect that the computational methods are known as Machine Learning and Deep Learning, but for some this is a stretch of the meaning associated with learning in any human-like manner.

For example, a human who learned not to mistake Burger King signs might also have learned a lot of other facets at the same time. A human might generalize and realize that McDonald’s signs could be misinterpreted, maybe Taco Bell signs, and so on, all of which are part of the overarching semblance of learning.

You could take that further.

A human might learn the whole concept that sometimes a sign resembles something else that we know, and thus realize it is vital not to jump to conclusions, a lesson that extends well beyond the particular traits of the Burger King sign to other potential false identifications.

This might also prompt the human to think about how they make other false assumptions based on quick judgments. Whenever they see someone from a distance, perhaps they will realize that judging what kind of person that is would be a premature act.

And so on.

I realize you might be pained to contemplate how far a human would really take the instance of a misclassified Burger King sign, but that misses the point I am trying to make.

My point is that when a human learns, they usually (or hopefully) generalize that learning in a multitude of other ways. Some lump this into the idea that we have common sense and can perform common-sense reasoning.

Shocker for you: There is not yet any AI that has any bona fide semblance of common-sense reasoning.

Some assert that until we can get AI to embody common-sense reasoning, we will not achieve true AI, the kind of AI that is the equivalent of human intelligence, which nowadays is referred to as AGI or Artificial General Intelligence (suggesting that today’s typical AI is much narrower and simpler in scope and capability than the aspired version of AI). For more about the future of AI and AGI, see my analysis at this link here.

Overall, you would be hard-pressed to say that car automation has “learned” from the Burger King incident in any generalizable and full-reasoning way that a human might.

In any case, people like to use the word “learn” when referring to today’s variant of AI, though it overstates what is happening and can cause overinflated and confounding expectations.

The Puzzle About The Sign

You might remember the famous scene in the movie The Princess Bride involving a battle of wits, in which one of the characters brazenly touts that he has only begun to proffer his logic.

Let’s use that same bravado here.

We have so far assumed that the Burger King sign was classified as a stop sign, momentarily so, while the car was traveling on a highway and approaching the off-highway signage.

You might be thinking, why in the heck is a sign that isn’t actually on the roadway being examined as a potential stop sign and being given due consideration for coming to a stop?

When driving your car on the highway, there are dozens upon dozens of off-highway stop signs and a slew of other traffic control signs that are quite readily visible from the highway, and yet you do not deem them worthy of bringing your car to a halt while on the highway.

This is because you know that those signs are off the roadway and have nothing to do with your driving whilst on the highway.

Imagine if, every time you saw a formal traffic sign meant for the local streets rather than the highway, you opted to react as though it were positioned on the highway.

What a mess!

You would be continually doing all sorts of crazy driving antics on the highway and be confusing all the other nearby drivers.

In short, since the Burger King sign was not on the highway, it should have been instantly disregarded as a traffic control sign or any kind of sign worthy of attention by the automation. We could go to the extreme and say that even if the Burger King sign were identical to a stop sign, in essence replacing the Burger King logo with an actual stop sign, this still should not matter.

This brings us back to the so-called “learning” aspects.

If the automation now has a computational indication that a Burger King sign is not a stop sign, this seems insufficient. We would also want it to “learn” that signs off the highway are not relevant to the highway driving, though, of course, there are exceptions that make this a necessarily flexible rule and you cannot simply declare that all off-the-road signs can be completely disregarded.
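
As a purely illustrative sketch of that kind of “relevance” filtering, here is a crude geometric test that discards detected signs sitting too far off to the side of the lane being traveled; the cutoff and inputs are assumptions of mine, and a real system would be far more nuanced:

```python
# A purely illustrative relevance filter (my own sketch, not any vendor's
# logic): a detected sign only influences driving if it sits close enough
# laterally to the lane being traveled and faces the direction of travel.

def sign_is_relevant(lateral_offset_m: float,
                     faces_travel_direction: bool,
                     max_offset_m: float = 5.0) -> bool:
    """Crude relevance test for a detected roadside sign.

    lateral_offset_m: estimated sideways distance from the edge of the ego lane.
    max_offset_m: an assumed cutoff; a real system would be far more nuanced,
    since some legitimate signs (ramp signs, temporary work-zone signs) sit
    farther off to the side.
    """
    return faces_travel_direction and lateral_offset_m <= max_offset_m

# A Burger King sign on a tall pole in a parking lot has a large lateral offset
# and would be filtered out, even if its shape and color resembled a stop sign.
print(sign_is_relevant(lateral_offset_m=18.0, faces_travel_direction=True))  # False
print(sign_is_relevant(lateral_offset_m=2.5, faces_travel_direction=True))   # True
```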

Why did the automation seem to initially assess that the Burger King sign pertained to the highway?

There is a bit of an optical trick involved and one that typically impacts human drivers too.

The Burger King sign was atop a tall pole and standing relatively close to the highway.

If you have ever seen their prominent signs, they are notable because they have “Burger King” spelled out in bright red letters, boldly proclaimed, and the shape of the sign is an oval, all of which, from a distance, resembles the overall look of a stop sign. Of course, the sign is purposely facing the highway to attract maximum attention.

In this driving scenario, the car comes over a crest in the highway, and the Burger King sign appears to be immediately adjacent to the highway and possibly could be construed as being on the highway itself, given the distance, the structure of the highway, and the act of coming over the crest.

You most certainly have experienced such visual illusions before, and it is an easy phenomenon to fall for.

Once you realize it is a Burger King sign, you don’t care anymore whether it is on the highway or off the highway since it does not require any action on your part (well, unless you are hungry and the signage sparks you to get off the highway for a burger).

In theory, a human driver could have done the same thing that the automation did, namely begin to slow down as a precautionary act in case the sign was a stop sign. A novice driver might especially get caught by this kind of visual illusion the first few times they experience it, and thereafter presumably get the gist of what they are seeing.

In that sense, as a human, you are learning by experience, essentially collecting data and then adjusting based on the data that you’ve collected.

Potentially, the Machine Learning or Deep Learning that the automaker has established for self-driving automation can do somewhat likewise.

A training data set is usually put together to try and train the ML/DL on what kinds of roadway signs to expect. The training data must include a sufficient variety of examples; otherwise, the computational calculations will overfit to the data, and only those signs that closely match the pristine examples will later be detectable.

In the real world, stop signs are oftentimes defaced, bashed, or bent, possibly partially covered by tree limbs, and all sorts of other variations exist. If you used only the cleanest of stop signs to train the ML/DL, the resultant in-car detection would undoubtedly be unable to recognize lots of the everyday, real-world distressed stop signs that are posted.
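
One common way to cope with this, shown here as a hedged sketch rather than a statement of what any particular automaker does, is to augment the training images so the classifier also sees faded, tilted, and partially occluded variants, for example using the torchvision library:

```python
# A hedged sketch of training-data augmentation using torchvision (a common
# choice, not necessarily what any automaker uses): pristine stop-sign images
# are randomly distressed so the classifier also learns the messy variants.

from torchvision import transforms

train_augmentation = transforms.Compose([
    transforms.RandomRotation(degrees=15),                      # bent or tilted sign posts
    transforms.ColorJitter(brightness=0.4, contrast=0.4,
                           saturation=0.4),                     # faded paint, glare, dusk lighting
    transforms.RandomPerspective(distortion_scale=0.3, p=0.5),  # off-angle viewpoints
    transforms.ToTensor(),
    transforms.RandomErasing(p=0.3),                            # partial occlusion by branches or stickers
])

# Applied on the fly during training, for example:
# dataset = torchvision.datasets.ImageFolder("stop_signs/", transform=train_augmentation)
```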

One of the nightmare dangers for any self-driving car is the possibility of false negatives.

A false negative is when, let’s say, a stop sign exists, but the automation does not construe it as a stop sign.

This is bad.

The automation could fail to make a required stop and the cataclysmic result could be a car crash and horrific outcomes.

You could also assert somewhat the same about false positives. Suppose the automation fully mistook the Burger King sign for a stop sign and did come to a halt on the highway. Other drivers behind the stopped car could readily ram into it, since it inexplicably and unexpectedly stopped in the middle of a normally fast-flowing highway.
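
To illustrate the trade-off, here is a tiny sketch showing how the confidence threshold for declaring a stop sign shifts errors between false negatives and false positives; all of the scenarios and numbers are invented for demonstration:

```python
# An illustrative sketch of the trade-off: the confidence threshold for
# declaring "stop sign" shifts errors between false negatives and false
# positives. All scenarios and numbers below are invented for demonstration.

def is_declared_stop_sign(confidence: float, threshold: float) -> bool:
    return confidence >= threshold

detections = [
    # (description, classifier confidence, is it really a stop sign?)
    ("real stop sign, partly hidden by a tree", 0.62, True),
    ("Burger King sign seen over a crest",      0.58, False),
    ("pristine stop sign at an intersection",   0.97, True),
]

for threshold in (0.5, 0.9):
    print(f"threshold = {threshold}")
    for description, confidence, truly_stop in detections:
        decided = is_declared_stop_sign(confidence, threshold)
        outcome = ("correct" if decided == truly_stop
                   else "false positive" if decided
                   else "false negative")
        print(f"  {description}: {outcome}")

# Lowering the threshold misses fewer real stop signs but brakes for more
# look-alikes; raising it does the reverse. There is no free lunch.
```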

Conclusion

Welcome to the conundrum facing those who are crafting self-driving cars.

The goal is to prevent false negatives and prevent false positives, though this is not always going to be possible; thus, the system has to be adept enough to cope with those possibilities.

For a Tesla on Autopilot, it is important to realize that the existing automation is considered to be at Level 2 of the driving automation levels, meaning that it is a driver-assistance kind of automation and not fully autonomous.

For Level 2 cars, the human driver is still considered the responsible driver of the vehicle.

In the case of falsely believing that the Burger King sign was a stop sign, even if the automation tried to come to a full stop, the human driver is presumed to be in charge and should override that kind of adverse driving action.

As I have repeatedly exhorted, we are heading into the dangerous territory of expecting human drivers to override automation in Level 2 and Level 3 rated cars, which you can bet many human drivers will not do or will do so belatedly due to a false belief that the car is soundly driving itself.

You could say that human drivers will be making false positive and false negative judgments about what their car automation is doing, any of which can then lead to dreadful calamity.

That is why some are arguing fervently that we ought to wait until the AI is good enough that we can use it in Level 4 and Level 5 self-driving cars, whereby the AI does all the driving and there is no human driving involved.

If we can get there, it would mean that the Artificial Intelligence does “know what we crave” and that consists of a fully autonomous and safe driving journey, burgers included or not.


