
Teenage Driver Swerves To Avoid Squirrel, Rams Into Abraham Lincoln Ancestry Home, Providing Honest Abe Insights For AI Self-Driving Cars


Honest Abe.

That was the moniker given to our famous statesman and sixteenth president of the United States, namely Abraham Lincoln.

He well earned the nickname. Furthermore, it became widely known and popularized during his lifetime and, of course, proved everlasting too.

There are plenty of stories about Abe’s honest ways.

Some are large-scale examples and others are seemingly trivial but nonetheless telling. The small ones are ostensibly as important as the big ones, if nothing else as evidence that honesty was a through-and-through trait and not merely a matter of happenstance or of the right moment perchance presenting itself.

For example, one tale is that when he was working as a store clerk a customer paid for some items and left the store, and upon further reflection, Abe ascertained that the customer had inadvertently been overcharged by six cents. Six cents isn’t much money today, and even back then it wouldn’t have been a fortune, though the fact that the customer was overcharged was the pivotal matter at hand. Apparently, once the store closed for the day, Abe walked several miles to the home of the customer and returned the overpaid six cents.

As they say, honesty is the best policy.

Of course, a cynic might point out that if Abe had not returned the six cents, and if the customer discovered the error on their own, the customer could have made quite a ruckus. And a store in a small town could be nearly put out of business if local customers thought it was some kind of a rip-off operation. In that sense, it was the practical and prudent thing to right away return the overpaid amount, averting a potentially dire and adverse consequence to the livelihood and survivability of the local store.

Be that as it may, let’s go ahead and stick with the stout remark that honesty is nonetheless still the best policy and leave things there for now.

I bring up Abe Lincoln due to a recent incident in which an errant car unintentionally plowed into the historic Samuel Lincoln Cottage located in Hingham, Massachusetts.

This is a longstanding home that was built in 1650 and occupied by Abe’s great-great-great-great-grandfather (according to historical records). For those of you that are savvy history buffs, rest assured that this is not the same as the Samuel Lincoln House, which is just a bit down the street from the revered cottage. There are quite a number of vital structures in that same Hingham geographic locality that are all part of the Lincoln Local Historic District.

Anyway, here’s what happened.

A teenager was driving an everyday car there in the Lincoln Local Historic District and minding their own business when (according to the driver) a squirrel suddenly ran into the street. The teenager did not want to strike and potentially harm or outright kill the squirrel. Thus, the teenager opted to swerve and try to avoid the bushy-tailed little creature.

In the act of swerving, the teenager lost control of the car and went up and over the curb of the street. Continuing forward, the car then skidded across a small sidewalk and rammed into the side of the Samuel Lincoln Cottage.

Bam! Bash! Crash! Smash!

Pictures released by the Hingham Police Department are quite revealing as to the amount of damage done. The vehicle seems to have gone sideways into a major wall that is adjacent to the front door of the home. The right side front area of the car slammed into the wooden wall and created a massive opening. In addition, pictures of the interior showcase a lot of debris such as shattered wood tables, chairs, and other items.

To clarify, the cottage was not destroyed. It still stands.

We can though likely agree that there is a lot of damage and that it will be costly to repair. On top of that, one could feel some despair that this is a historically important home and that making repairs might undermine the original nature of the structure. That being said, it is hard to imagine that this cottage of some hundreds of years of age hasn’t already had plenty of retrofits and fixes during its many years of standing tall.

Beyond the historical elements, we can rejoice that nobody was hurt in this car crash.

The teenage driver seemed to be fine and was sitting on the sidewalk awaiting the arrival of the local constables (per the official police report that was later formally filed). None of the residents inside the cottage were injured. This is nearly a miracle since anyone that might have been seated or standing in that part of the house would certainly have gotten struck, either by the wayward vehicle or by flying debris.

You can almost envisage that the driver would not have been seriously injured, given that this was a relatively modern car with various contemporary safety features (it was a 2014 Audi Q7). The vehicle should have pretty much protected the driver. Also, it seems that the speed at impact could not have been particularly high, thus the forces involved were less than those of a car crash on an open highway or freeway.

The teenage driver was issued a citation for failing to stay within the marked lanes. Assuming too that the driver lives at home with parents, it stands to reason that the scolding upon getting home was perhaps nearly worse than receiving the citation. The damages done to the car appear to be somewhat modest and repairable.

You might be thinking that this is an interesting news item about a car crash but that other than the coincidental fact that the vehicle struck a venerated Lincoln ancestry structure, there isn’t much else of noteworthiness involved.

Well, truthfully, you’d be somewhat mistaken in that assumption.

Here’s the first twist.

Social media immediately questioned the claim that a squirrel was the culprit in this car crash.

Aha, some caustically stated, the old blame-it-on-the-squirrel trickery. Skeptics roundly asserted that the teenage driver was probably using a smartphone and texting, or maybe watching cat videos (perhaps, ironically, watching videos of cats chasing squirrels!).

This presumably distracted newbie driver sought to pin the blame on an imaginary furry animal, those naysayers contend. Doing so makes indubitable sense. If you were the driver and divulged freely that you were driving while distracted, the citation would seemingly be far more severe. The beauty too is that by having saved the life of an innocent squirrel, the driver appears to be a heartwarming person that was doing whatever they could to spare the tiny creature, including putting their own human life at risk.

What would Honest Abe say?

If Abe had been at the wheel, we would of course presume that whatever he said was the gosh-darned honest and unbridled truth.

In this recent case, there do not seem to be any witnesses to the car crash, nor any captured video that would show precisely what happened. As far as can be discerned, the teenage driver is the only one that knows exactly what occurred.

I suppose you could suggest that the squirrel knows.

If indeed there is one that was involved.

I’m not sure how we could sufficiently interview the squirrel to find out what it has to say. Maybe someday we’ll have animal-to-human AI-based machine translators that can figure this kind of thing out. Hey, Alexa, translate that squirrel chirping over there (that kind of thing). Not today.

I’d vote that we give the teenager a kindly break in life and assume that there was a squirrel and that the incident entailed avoiding squashing one. That seems like the right thing to assume. Nobody ended up getting hurt and the damages to the car and the cottage can be repaired.

Matter closed.

But wait a second, there’s more.

Did the driver really do the proper driving act?

Most of us would undeniably agree that squirrels can be endearing, there’s little doubt about that. On the other hand, if you are at the wheel of a car and driving along, and you have to choose between hitting a squirrel and perhaps hitting a pedestrian by swerving, which would you choose? All else being equal, you’d better be choosing to strike the squirrel over striking the human (some might disagree, but would likely find themselves on feeble ground when explaining their choice).

Now, to be abundantly clear, the squirrel versus pedestrian dilemma was not an aspect of this particular car crash case.

The hidden factor that you might not have considered is that there could have been, and it turns out that there were, people inside the house that was struck. Thus, even though pedestrians were not at risk, the innocent people residing in that cottage were definitely at risk.

The point is that other people besides the driver were intertwined into the driving action and had a stake in how the driver made their choice. These “bystanders” weren’t as obvious as when you see pedestrians walking along on a sidewalk. They were simply and quietly residing in the house that it turns out was summarily rammed into.

None of us would likely be thinking about that at the moment of swerving.

In other words, you might swerve to avoid the squirrel and be anticipating that you can maintain control of the vehicle. This would then entail going erratically onto the sidewalk, driving ahead a bit, and then coming back into the street.

Nobody would be hurt, assuming there aren’t any pedestrians nearby.

We can guess that perhaps the reason the car rammed into the house was a result of striking the curb on the way up onto the sidewalk. It could be that the sharp blow to the tires and the swerving vehicle made it very hard to control the trajectory. The car was somewhat of an unguided missile as it came into contact with the side of the house. The driver likely jammed on the brakes at that time, and there were therefore multiple forces bringing the car ultimately to a halt (the emergency use of the brakes and the strength and fierce resistance of the structure).

In theory, the driver should have been mentally calculating that there was a chance that swerving to avoid the squirrel could lead to the car going into the structure that was immediately adjacent to the sidewalk.

This same logic should also have included that there is a chance that people might be inside that structure.

If you knew beyond a shadow of a doubt that the cottage was empty, you could then simply weigh the concern of saving the squirrel against the costs and concerns of slamming into the cottage. We would certainly want to assume that the driver is also considering the risk to themselves. So, in the instance of an empty structure, the driver needs to be figuring out the risk of their own injury or death as a result of potentially swerving into the house.

Again, with special attention to those of you that are devoted squirrel fans, what is the acceptable risk level for potentially harming yourself as the driver and damaging an empty structure?

That would be one estimation of risk.

Suppose you did not know for sure that the house was empty. There is some probability that people are inside. There is some additional probability that those people inside are somewhere near where the car is potentially going to impact the house. There is yet another probability associated with the consequent danger associated with the car crashing into the house, such as other parts of the domicile collapsing or perhaps a stovetop gas line ignites. And so on.

You see, that’s seemingly a lot more risk involved.

I dare say that few humans use that kind of mental reckoning or mindfully calibrated probabilities when they are trying to decide what driving action is “best” to undertake.
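To make that kind of reckoning concrete, here is a minimal sketch (in Python) of how the chained probabilities just described might be folded into a rough expected-harm comparison. The probability values, severity weights, and function name are purely hypothetical placeholders chosen for illustration; nothing here reflects how any actual AI driving system is programmed.

```python
# Hypothetical sketch: combining chained probabilities into a rough
# expected-harm estimate for "swerve toward the structure" versus
# "brake straight and strike the squirrel". All numbers are made up.

def expected_harm_of_swerving(
    p_occupied: float,          # probability someone is inside the structure
    p_near_impact: float,       # probability an occupant is near the impact zone
    p_secondary_danger: float,  # probability of follow-on danger (collapse, gas line, etc.)
    harm_to_occupant: float,    # severity weight if an occupant is harmed
    p_driver_injury: float,     # probability the driver is hurt in the impact
    harm_to_driver: float,      # severity weight of the driver's own injury
    property_damage: float,     # severity weight of damaging the structure
) -> float:
    """Very rough expected-harm score for swerving toward the structure."""
    occupant_term = p_occupied * p_near_impact * harm_to_occupant
    secondary_term = p_occupied * p_secondary_danger * harm_to_occupant
    driver_term = p_driver_injury * harm_to_driver
    return occupant_term + secondary_term + driver_term + property_damage


# Illustrative comparison of the two choices.
swerve_risk = expected_harm_of_swerving(
    p_occupied=0.5, p_near_impact=0.2, p_secondary_danger=0.05,
    harm_to_occupant=100.0, p_driver_injury=0.3, harm_to_driver=50.0,
    property_damage=10.0,
)
brake_risk = 1.0  # the squirrel is almost surely struck; its harm weight is small

print(f"swerve toward structure: {swerve_risk:.1f}   brake straight: {brake_risk:.1f}")
```

Under those made-up numbers, braking in a straight line and sacrificing the squirrel comes out far ahead of swerving toward a possibly occupied structure, which matches the intuition most of us would reach, albeit without doing the explicit arithmetic.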

We might though have different expectations if a car was being driven by a machine.

Say what?

Consider that the future of cars consists of AI-based true self-driving cars.

There isn’t a human driver involved in a true self-driving car. Keep in mind that true self-driving cars are driven via an AI driving system. There isn’t a need for a human driver at the wheel, and nor is there a provision for a human to drive the vehicle. For my extensive and ongoing coverage of Autonomous Vehicles (AVs) and especially self-driving cars, see the link here.

Here’s an intriguing question that is worth pondering: Should we expect that AI-based true self-driving cars will figure out a full set of potential risks when making day-to-day and special case driving choices?

I’d like to first further clarify what is meant when I refer to true self-driving cars.

Understanding The Levels Of Self-Driving Cars

As a clarification, true self-driving cars are ones in which the AI drives the car entirely on its own, without any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5 (see my explanation at this link here), while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5, and we don’t yet know whether this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend, see my coverage at this link here).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).

For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that has been arising lately, namely that despite those human drivers that keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3 vehicle.
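For readers who like to see the taxonomy laid out in one place, here is a tiny illustrative sketch of the levels discussed above; the short labels are my informal paraphrases, not official SAE wording.

```python
# Illustrative sketch of the driving-automation levels discussed above.
# The short labels are informal paraphrases, not official SAE wording.
LEVELS = {
    2: ("Semi-autonomous, ADAS co-sharing the driving task", True),
    3: ("Semi-autonomous, conditional automation", True),
    4: ("True self-driving within a limited operational domain", False),
    5: ("True self-driving under all drivable conditions", False),
}

def human_driver_required(level: int) -> bool:
    """Return True if a human must remain responsible for the driving task."""
    _description, needs_human = LEVELS[level]
    return needs_human

assert human_driver_required(3) and not human_driver_required(4)
```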

Self-Driving Cars And Those Squirrels On The Roadways

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.

All occupants will be passengers.

The AI is doing the driving.

One aspect to immediately discuss entails the fact that the AI involved in today’s AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can.

Why is this added emphasis about the AI not being sentient?

Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and inarguable fact that no such AI exists as yet.

With that clarification, you can envision that the AI driving system won’t natively somehow “know” about the facets of driving. Driving and all that it entails will need to be programmed as part of the hardware and software of the self-driving car.

Let’s dive into the myriad of aspects that come to play on this topic.

First, an important aspect of exploring this topic entails being straightforward and proffering an Honest Abe-like perspective.

Here’s why I mention that emphasis.

There are some pundits and vendors that keep insisting that self-driving cars will be utterly uncrashable. This is rather bold and turns out to be an entirely misleading and incorrect claim, see my critical analysis at this link here.

I bring up the uncrashable notion because those promulgating that rather false opinion about self-driving cars would also argue vehemently that a self-driving car would never get into a car crash akin to the story of the teenage driver that averted hitting the squirrel. If self-driving cars will never get into car crashes, there doesn’t seem to be any need to discuss the nature of such matters.

Case closed, as they say.

But wait a second, do not be led down a path of falsehoods. We need to look more closely and see the truth. We can do so by examining the squirrel-avoiding situation to see how the “uncrashable” viewpoint seems to magically (and untruthfully) arise.

Envision a car that is cruising down an everyday street at a speed of 35 miles per hour, which is about 51 feet per second. If the brakes are instantaneously applied, the car will continue a distance of about 60 feet before coming to a complete halt.

When human drivers apply the brakes with brute force, we would usually need to add roughly another second and a half to account for the mental time required to figure out the need to apply the brakes and then the physical activity time of applying one’s body and limbs to carry out that braking action. Thus, this would be an additional 80 feet or so on top of the 60 feet, meaning that the car will have gone a total of around 140 feet.

A squirrel can run at about a speed of 12 miles per hour, which is nearly 18 feet per second. Squirrels are relatively small in stature. They can easily hide in bushes, tall grass, and the like. All told, I think we would all agree that squirrels can seemingly dart here or there quite quickly and can seemingly appear out of nowhere.

Imagine that a squirrel is hiding in tall grass that is adjacent to the roadway. A self-driving car is coming along. With nary any substantive distance involved, the squirrel opts to scramble out of the hiding spot and directly into the street, perhaps trying to get to the other side of the road, and has not sufficiently figured out that the car is an impending threat to its existence.

The self-driving car uses its sensors and perchance detects the presence of the squirrel. The distance between the squirrel and the nearing self-driving car is about 10 feet. Assume that the AI driving system nearly instantaneously detects the squirrel and applies the brakes of the self-driving car. The car comes to a firm stop approximately 60 feet later.

What happened to the squirrel?

For those of you that are squeamish, please skip to the next paragraph. The answer about what happened to the squirrel is rather apparent, it is a goner. The self-driving car would have run over the furry animal, even if we are assuming that almost no time was taken to compute the need to undertake a braking action. The physics of the world as we know it will overtake any make-believe notions about being uncrashable.
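For those that want to double-check the arithmetic, here is a small sketch reproducing the stopping-distance figures used above; the braking deceleration (roughly 0.7 g) and the 1.5-second reaction time are assumed round figures chosen to match the rough 60-foot and 140-foot numbers, not measured values.

```python
# Quick check of the stopping-distance arithmetic used in the scenario.
# The deceleration (~0.7 g) and the 1.5-second reaction time are assumed
# round figures chosen to match the rough 60-foot and 140-foot numbers above.

MPH_TO_FPS = 5280 / 3600            # feet per second, per mile per hour

speed_fps = 35 * MPH_TO_FPS         # ~51 ft/s
decel_fps2 = 0.7 * 32.2             # ~22.5 ft/s^2, assumed braking deceleration
reaction_time_s = 1.5               # typical human perception-reaction time

braking_distance = speed_fps ** 2 / (2 * decel_fps2)   # ~60 ft
reaction_distance = speed_fps * reaction_time_s        # ~77 ft
human_total = braking_distance + reaction_distance     # ~140 ft

squirrel_gap_ft = 10                # the squirrel darts out 10 feet ahead

print(f"Braking alone:        {braking_distance:.0f} ft")
print(f"Human driver total:   {human_total:.0f} ft")
print(f"Squirrel struck even with instant braking: {squirrel_gap_ft < braking_distance}")
```

Even with an instantaneous, computer-fast reaction, the car still needs roughly 60 feet to stop, so a squirrel darting out a mere 10 feet ahead is doomed either way.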

Okay, if the self-driving car in that scenario indeed struck the squirrel, we can declare that a crash has occurred. Ergo, the self-driving car in this instance is decidedly not uncrashable. Those of you that are smarmy might try to argue that hitting a squirrel is not tantamount to a “car crash” since it is only a squirrel and the damage to the car itself is going to be minimal.

Well, I don’t see how the amount of damage is especially pertinent to whether a car crash occurred or not. A crash is a crash. Some crashes are worse than others, certainly. At the same time, when a car rams into something, this seems clearly a form of a car crash, big or little, massive or minuscule.

If you are still stuck on the diminutive size of the squirrel, we can change the scenario and substitute a deer in the place of the squirrel. A deer is hiding in the brush that is adjacent to a roadway. The deer scoots out of the hiding spot and directly into the path of the self-driving car. Once again, we have a car crash. I won’t go into the details of what happens to the deer, but you get the picture.

As an aside, one aspect of an interesting cultural and ethical matter is the programming of AI driving systems and the striking of animals that might dart into the roadway.

If a rat darts into the street, we probably would be somewhat mollified if the self-driving car ran it over, assuming that no other viable choice was feasible. Suppose though a chicken ran into the street? We might be less sanguine. Suppose it was a dog or cat? You can readily anticipate that running over a dog or cat would be considered outrageous and somehow derelict. For my discussion on such matters, see the link here.

Back to the squirrel.

There is a real possibility that a self-driving car could ram into a squirrel. There isn’t any magic wand that will negate that possibility. AI driving systems are not going to be all-seeing and all-knowing. In fact, you could easily make the case that the sensors of the self-driving car might fail or falter in detecting the squirrel.

Depending upon the circumstances, the video cameras of the self-driving car might not capture sufficient visual imagery to discern the presence of the squirrel, and nor might the radar, LIDAR, and other sensory devices. We would also need to include the prevailing weather conditions. There is also the matter of daytime versus nighttime. Etc.

Even if the sensors do provide data that seems to include the presence of the squirrel, the added question is whether the AI programming has been crafted to determine that a squirrel is there. In other words, the AI might only be computationally able to identify that an object is present and it is moving along on the street. Being able to also identify what kind of object is there, and what the significance of the object is, such as a squirrel versus say a child’s toy that has rolled onto the roadway, that’s an entirely separate matter and differing capability.
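To make that distinction a bit more tangible, here is a deliberately simplified sketch of the difference between detecting that something is present and classifying what it is; the data fields, threshold, and labels are hypothetical and are not drawn from any actual vendor’s perception stack.

```python
from dataclasses import dataclass

# Deliberately simplified, hypothetical sketch of the difference between
# detecting that an object is present and classifying what that object is.

@dataclass
class Detection:
    distance_m: float    # estimated distance ahead of the vehicle, in meters
    speed_mps: float     # estimated speed of the object
    label: str           # classifier's best guess, e.g., "squirrel"
    confidence: float    # classifier confidence in that label, 0.0 to 1.0

def interpret(detection: Detection) -> str:
    """Decide how a planner should regard a detected object.

    Detection alone says only that something is moving in the roadway; the
    significance (squirrel versus, say, a child's toy) depends on a separate
    classification step that may or may not succeed.
    """
    if detection.confidence < 0.5:
        return "unknown moving object"   # treat conservatively
    return detection.label

obj = Detection(distance_m=15.0, speed_mps=5.0, label="squirrel", confidence=0.35)
print(interpret(obj))   # -> "unknown moving object"
```

The point of the threshold is that a planner may have to make its choices while knowing only that an unidentified object is moving in the roadway, which is a far cry from knowing it is a squirrel.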

All in all, we ought to reasonably agree that a self-driving car could run over a squirrel. The next and particularly tough question is whether the AI driving system should attempt to avoid hitting the squirrel and if so, what other risks are satisfactory to take in the effort to avert striking the wondrous creature.

Would you want the AI driving system to intentionally swerve the car, similar to the action of the teenage driver, such that the squirrel would live?

I hope that we would all acknowledge that the answer depends upon what the swerving action might produce. Swerving into a house that might have people inside would seem like a really bad computational choice by the AI driving system. Such a choice then lies at the feet of the automaker and self-driving tech firm that devised that kind of AI programming (remember, the AI is not sentient).

Conclusion

There are numerous other added twists and turns (I’ll touch upon just a few more for now).

Conventional human-driven cars always presumably have at least one human inside them while being driven, obviously so, because a human driver must be at the wheel. A self-driving car can be entirely empty and have no humans in it whatsoever.

I bring up this fact because the choice about what a self-driving car should or should not do can weigh into the equation the potential absence of any human inside the autonomous vehicle. The teenage driver had to consider their own potential injury as a factor in what to do. An AI driving system does not seem to have that same difficulty. If the AI driving system makes a computed choice that wrecks the self-driving car and yet saves others, this would seem a readily agreeable option to all.

There is a well-known and controversial thought experiment known as the Trolley Problem that comes to bear here. In short, the Trolley Problem posits the thorny issue of having to make life-or-death choices (or, more aptly, deaths versus deaths choices). This raises all sorts of ethics-oriented considerations.

Some in the self-driving cars industry decry the Trolley Problem and assert that it is entirely unrelated to the advent of self-driving cars. I politely and stridently disagree with that sentiment. For my discussion about the importance of the Trolley Problem, see the link here.

You see, there is a hidden Trolley Problem in the case of the teenage driver, the squirrel, and the ramming of the Abe Lincoln ancestry cottage. If the teenage driver had chosen to run over the squirrel, the car would seemingly not have swerved, and the people residing in that house would not have been in any immediate danger. By swerving to avoid the squirrel, the chances of things going awry began to pile up.

Would a self-driving car be calculating those odds?

Some technoids might contend that the AI driving system could have swerved and yet not subsequently rammed into the cottage. Maybe yes, maybe no. We do not know for sure that mishandling of the car was what caused the post-swerve trajectory to end in striking the house. It could be that anyone or anything at the driver’s wheel would have been unable to avert the cottage ramming once the choice was made to swerve in that direction.

Overall, I trust that you are now aware that we cannot assume that AI driving systems are going to miraculously avoid all car crashes. In addition, there is scant indication as yet about how AI driving systems will be computing the tradeoffs and risks of taking evasive driving actions. This is all being done in a very proprietary way right now.

Even scarier, this is oftentimes being done without any explicit realization of what is being assumed or taken for granted. Few are asking open and honest questions about how these AI driving systems are being programmed and what kind of Trolley Problem type of decision algorithms are being put into place.

That’s honestly quite disconcerting.


