
That Hacked Electronic Roadside Display In Brooklyn Said Cars Are Bad, Which Startlingly Brings Up Questions About The Advent Of AI Self-Driving Cars


Right lane closed ahead.

Or sometimes it is stated as RIGHT LANE CLOSED AHEAD.

That’s what you often see displayed on those official roadside electronic displays.

According to formal guidebooks, the messages are supposed to be clear and concise. This seems necessary since a harried motorist driving along does not have a lot of time to focus on those signs. Getting immediately to the point at hand is a sensible and altogether essential precept.

The displays are supposed to show messages that are timely, accurate, and up-to-date. You’ve undoubtedly seen instances in which the sign warned about a wreck in the roadway ahead, yet when you reach that stretch there isn’t any wreckage to be seen. The odds are that a clean-up took place earlier, but the workers responsible for the display sign have not yet returned to update the message board.

Here’s a good rule of thumb that is worth pondering: Convey only a single thought per panel.

I like that.

If a display is flashing a message and then proffers a second message, this is probably better than trying to cram too much into just a single message. Our minds can readily grasp a message that has a definitive instruction or warning. When the message is muddied by a conglomeration of thoughts, you can easily get confounded. You might not even realize that there is a twofer in the single message, and thus completely fail to glean that there are two things you need to be aware of.

For example, the message that the right lane is closed ahead would seem a reasonably standalone thought. You’ve got the portion that tells you where things are awry, namely in the right lane. Upon that indication, you surely would wonder what about the right lane is worthy of attention. As a nice touch, the message tells you that the right lane is closed ahead.

Sweet and simple.

Imagine that two things were happening at once. The right lane is closed ahead, plus the upcoming next exit is closed. It might be tempting to jam this together into one message. Maybe saying something like the right lane is closed and so is the next exit. Of course, there are only so many characters available to be displayed at one time, akin to how Twitter allows a maximum number per tweet.

Perhaps the message might say “Right Lane Closed Ahead, Next Exit Closed Too” which might squeak in at the allowed number of characters. A shortened version could be “Right Lane + Next Exit, Closed” though this could be a bit ambiguous and open to interpretation. Meanwhile, those hectic drivers seeing the sign are bound to find themselves devoting undue mental exertion to the signage.

Probably safer and more useful if you display one message about the right lane, and then after the usual interval of a second or two, go ahead and display the second message about the next exit aspects. You can debate this approach, though the overall point is that sometimes a single message might not be viable and you genuinely are better off displaying two or more instead.
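To make that timing concrete, here is a minimal sketch in Python of how a sign controller might alternate two single-thought messages rather than cramming both onto one panel. The 45-character panel limit and 2-second dwell interval are illustrative assumptions (real VMS capacities and timings vary by sign and jurisdiction), and the function names are made up for this sketch:

```python
import itertools

PANEL_LIMIT = 45   # hypothetical character capacity of one panel
DWELL_SECONDS = 2  # assumed interval each message stays on screen

def fits_panel(message: str, limit: int = PANEL_LIMIT) -> bool:
    """Check whether a single-thought message fits on one panel."""
    return len(message) <= limit

def cycle_messages(messages, panels_to_show):
    """Return the sequence of panels shown, round-robin, one message per panel."""
    shown = []
    for msg in itertools.islice(itertools.cycle(messages), panels_to_show):
        shown.append(msg)
        # A real controller would hold each message for DWELL_SECONDS here.
    return shown

messages = ["RIGHT LANE CLOSED AHEAD", "NEXT EXIT CLOSED"]
assert all(fits_panel(m) for m in messages)
print(cycle_messages(messages, 4))
# Alternates: right-lane warning, exit warning, right-lane warning, exit warning
```

The design point is simply that each panel carries one complete thought, and the rotation, not abbreviation, handles the second piece of news.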

The messages that seem to extraordinarily get the goat of many drivers usually entail platitudes or seemingly vague exhortations. The classic being to drive safely (“DRIVE SAFELY”) or the urging to use caution (“USE CAUTION”).

Why do those messages cause a ruckus among some drivers?

Because the feeling is that these are not conveying any conclusive instructions or specific down-to-earth useful statements. Knowing that the right lane ahead is closed would be quite useful and provides a driver with a heads-up for making early lane changes or slowing down. Knowing that you should drive safely is, well, seemingly already a known commodity. Sure, we should always be driving safely, one might retort at the sign.

In fact, the concern arises that maybe there is some reason that you are being cautioned to drive safely at this moment in time on this specific highway or street. Could a gaping hole exist in the roadway up ahead and that’s why you are being implored to drive safely? When being informed to use caution, your mind wanders to the possibility that mighty sea monsters are beyond the bend and will swallow whole your vehicle as it passes.

Couldn’t the display have warned specifically about the sea monsters?

Anyway, you get the notion that some drivers are irked by general proclamations.

We want specifics.

We want to know something that is directly crucial to driving while in the throes of driving. I’m sure that some others would argue that reminding all drivers to drive safely and use caution is indubitably helpful. It wakes up those drivers that are in a zombie-like mental fog as they drive a moving car at over 60 miles per hour.

Thank goodness that the display kicks those dolts into mental gear, some exclaim.

The irony is that a sign that asserts for you to drive safely might trigger some idiots into suddenly deciding to do just the opposite. These might be people that eschew any kind of authority. They detest being told what to do. The advisory that the right lane is closed provides merely advice, and it is up to the driver to decide what to do with that vaunted piece of information. In contrast, the drive safely message is a strict order, an authoritative command, an iron fist edict from those that control our lives.

You can’t win on these matters.

Another rub is that there is a notable chunk of time and attention consumed by a driver to look at and make sense of the messages.

First, the driver has to look away from the traffic ahead of them to see and read the display. Second, the frenzied driver has to put on their thinking cap to decide what the message has stated. One potential argument is that the messages are a distraction from concentrating on the roadway. You certainly don’t want drivers to end up in car crashes because they were busy deciphering a traffic display and became wholly preoccupied in doing so.

That would be yet another irony, whereby a message that said to drive safely was itself distracting, and drivers got into car collisions precisely as a result of the presumably helpful flashing display.

You really cannot win on these matters.

From time to time, the news provides reporting that an electronic roadway display had been temporarily “hacked” and someone posted an untoward message.

To clarify, these are rarely bona fide hacks per se. The most frequent way that these message boards get reprogrammed with an oddball message is that the display panels usually have a default password that the roadway crew never changed. It is akin to having a password such as 12345 and any dullard that knows about the default can try and see if it gets them into the display controls.

I don’t especially consider that to be any semblance of hacking. A real hack would involve having to crack through some form of computer security and do so by employing genuine programming finesse. That is a rarity when it comes to the traffic display boards.

Though I’ve been describing these as display boards, the proper nomenclature is that they are Variable Message Signs (VMS). Furthermore, if the display board or VMS is readily movable and not fixed permanently in place, it is considered a Portable Variable Message Sign (PVMS). You see the PVMS displays that are temporarily placed at the scene of a bad car crash or used in a construction zone that will only be there for a few days or weeks.

The mainstay purpose of the PVMS is to aid drivers whenever there is an incident underway. The display boards are equally useful for construction zones, special events, environmental conditions such as heavy smoke or forest fires, congestion management, law enforcement, and at times public service campaigns.

Again, the messages need to have a demonstrable and notable basis for existing. There is a vital trade-off between not displaying anything at all and letting drivers figure out whatever is taking place, versus displaying something and possibly distracting or confounding drivers inadvertently.

This takes us to an interesting instance of a PVMS “hack” that occurred recently (as reported initially on Twitter, of all places).

It happened in Brooklyn.

In case you’ve been living under a rock, Brooklyn is a well-known borough of New York City. Those that know about Brooklyn either would be upset that a PVMS was hacked, or they might be proud, figuring that everything happens in Brooklyn, so why not include display boards that get reprogrammed by malcontents or rowdy roadway warriors (or whomever).

Just to mention, it is against the law to reprogram those officially programmed displays.

And, it goes without saying that a reprogrammed display could be a distraction to drivers, conveying info that has nothing to do with the existing traffic conditions and roadway safety. This then becomes an unquestionable distraction and could produce car crashes that lead to injuries or deaths. Thus, it isn’t a laughing matter and we should condemn any such practices.

In the recently reported instance, the reprogramming went whole hog, as it were.

There is usually one particular thing that the person wants to display. The most common message seems to be something replete with curse words or a single thought about the world at large. At times, the pirated message is whimsical, though this does not excuse the lawlessness of the hacking act.

In this specific case, the message display was reprogrammed to display eight successive messages, quite seemingly serious and somber remarks, each one separated from the other by the usual few seconds allotted per message seen on a conventional display panel.

Here it is:

- Honking Won’t Help

- Cars Are Death Machines

- Use Bus Subway Or Bike!

- Cars Kill Kids

- Cars Melt Glaciers

- Cars Ruin Cities

- Stop Driving!

- Get Rid of Your Car

None of these contain profanity, which provides some solace. That being said, they certainly are outspoken and provocative statements that stir controversy, and not everyone would find the messages to be appropriate. All told, regardless of whether you agree or disagree with the content of the messages, using an official roadway PVMS for this purpose is illegal.

Let’s shift away from the matter of improperly making use of a PVMS and consider that those same or notably similar remarks can be found on the Internet. You see, there are plenty of social media postings that contain the same sentiments.

As suggested by the messages, the overall tone and expression are that cars are bad and we ought not to be using them.

Mull that over.

Meanwhile, consider that the future of cars consists of AI-based true self-driving cars.

There isn’t a human driver involved in a true self-driving car. Keep in mind that true self-driving cars are driven via an AI driving system. There isn’t a need for a human driver at the wheel, nor is there a provision for a human to drive the vehicle. For my extensive and ongoing coverage of Autonomous Vehicles (AVs) and especially self-driving cars, see the link here.

Here’s an intriguing question that is worth pondering: Should we even be seeking to produce AI-based self-driving cars or is the very notion of cars of any kind something that we ought to be averting?

Before jumping into the details, I’d like to further clarify what is meant when I refer to true self-driving cars.

Understanding The Levels Of Self-Driving Cars

As a clarification, true self-driving cars are ones in which the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5 (see my explanation at this link here), while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5, and we don’t yet know whether this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend, see my coverage at this link here).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different from driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points made next are generally applicable).

For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that has been arising lately: despite those human drivers that keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3 car.

Self-Driving Cars And Whether Cars Are Bad To The Bone

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.

All occupants will be passengers.

The AI is doing the driving.

One aspect to immediately discuss entails the fact that the AI involved in today’s AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can.

Why this added emphasis about the AI not being sentient?

Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and inarguable fact that no such AI exists as yet.

With that clarification, you can envision that the AI driving system won’t natively somehow “know” about the facets of driving. Driving and all that it entails will need to be programmed as part of the hardware and software of the self-driving car.

Let’s dive into the myriad of aspects that come to play on this topic.

There is a tremendous amount of effort and billions of dollars going toward devising AI self-driving cars.

The hope is that self-driving cars will ultimately bring forth an era of mobility-for-all. This is a catchphrase suggesting that the use of these newer kinds of cars will be much more widely available and utilized than conventional cars are today. Presumably, conventional human-driven cars will gradually fade out of use, and meanwhile, self-driving cars will readily become predominant.

A basis for believing that this will arise is the expected lower cost or reduced per-mile price for the use of self-driving cars, predominantly as a result of no longer requiring a human driver at the wheel.

In addition, a further hope is that there will be a lot fewer car crashes once we have an abundance of self-driving cars. Currently, in the United States, there are about 6.7 million car crashes annually, producing approximately 2.5 million injuries and around 40,000 fatalities, see my collection of related stats at the link here.
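Taking those figures at face value, a bit of back-of-the-envelope arithmetic conveys the scale (a rough sketch using the approximate annual figures just cited; exact numbers vary by year and source):

```python
# Approximate annual U.S. figures, as cited above
crashes = 6_700_000
injuries = 2_500_000
fatalities = 40_000

# What fraction of crashes produce an injury or a fatality?
injuries_per_100_crashes = 100 * injuries / crashes
fatalities_per_100_crashes = 100 * fatalities / crashes

print(f"~{injuries_per_100_crashes:.0f} injuries per 100 crashes")
print(f"~{fatalities_per_100_crashes:.1f} fatalities per 100 crashes")
# Roughly 37 injuries and 0.6 fatalities per 100 crashes
```

In other words, even a modest percentage reduction in crash counts would translate into a great many injuries and deaths avoided, which is the crux of the safety argument for self-driving cars.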

Many of those car crashes are attributable to human driving foibles such as drunk driving, distracted driving, etc. Since self-driving cars will be using an AI driving system, there should not be any of those human foibles occurring as a prompter for a car crash. As an aside, this does not ergo imply that self-driving cars will be “uncrashable” (an oft-claimed aspect), see my analysis at this link here. Self-driving cars will still at times get into car crashes, though this is anticipated to occur infrequently in comparison to comparable human driving occurrences.

We are now ready to tackle those pointed remarks that had been posted on the Brooklyn roadside display panel.

First, consider two of the most straight away haymaker accusations, namely that cars are death machines and that cars kill kids.

What are we to make of that claim?

Those strident contentions take us into the classic realm about the nature of humanity and the tools that humans devise for use. I think we can reasonably agree that a car by itself is not particularly a death machine and nor does it seek to kill children. A car is a car. A non-running parked car is a highly unlikely producer of deaths.

When a human drives a car, there is a risk that the human driver will somehow fail or falter while driving a car and ergo get into a car crash. A car crash can decidedly involve injuries and also fatalities. Furthermore, it is the case that a human driver might inadvertently ram into children crossing a street. This happens, of course sadly so. In other instances, there might be children inside a car as passengers, and an adult human driver is at the wheel, for which the driver perchance loses control of the car, and all inside the vehicle are regrettably lost.

The point is that we might need to elongate the assertions and more clearly state that human drivers can make use of cars and in the act of doing so can potentially produce fatalities. By and large, this occurs not by direct intent and instead by some form of failing on the part of the human driver (let’s for sake of discussion excise out the rare instances of someone that opts to use a car for the intentional purpose of harming others).

This brings us around to the hoped-for advantages of AI self-driving cars. The belief is that the AI driving systems will be programmed to drive in a manner that averts the human foibles of driving. If that occurs, the claim about cars being death machines and killing kids will change quite radically.

Here’s how.

As I’ve repeatedly indicated in my columns, the number of car crashes should come down tremendously, along with a corresponding reduction in the number of injuries and fatalities. That being said, self-driving cars won’t be “uncrashable” and there will still be some instances of car crashes that involve self-driving cars (reemphasizing my earlier point).

All told, it is anticipated that the number of deaths via the acts of self-driving cars will eventually be extremely low. Any deaths at all are naturally abhorrent, but the emphasis at least is that there will be many fewer by far. This would also suggest that the number of children killed by a car, specifically self-driving cars, will also be substantially reduced.

What makes things a bit of an added twist is that you could be on more grounded turf by then claiming that it was indeed the car that did the unfortunate deed. The finger is normatively pointed at the human driver, though in the case of self-driving cars it will be the AI driving system.

Before you get into a conniption about whether the AI driving system is a “being” and therefore bears the responsibility for the driving act, I am not one of those that buys into that theory. Per my earlier comments, today’s AI is not sentient, and not even close to being sentient. It is my view that the AI driving system is not a responsible party and that it is instead those that devised and fielded the AI driving system that ought to be held responsible. Usually, this would be the automaker or self-driving tech firm that crafted the self-driving car, along with whatever entity is operating the autonomous vehicle for use on public roadways such as a fleet operator.

So, we are going to gradually switch from the notion that human drivers at times go awry or falter and inadvertently turn cars into so-called death machines, and shift toward the possibility that AI driving systems might do likewise, but with far greater rarity, and that the humans behind the scenes that actually developed and subsequently operate the autonomous vehicles are to be held responsible for what the car does. As a society, we will need to grapple with the tradeoff of having self-driving cars versus the small number of fatalities that will still result from car crashes, which significantly changes the usual equation of balancing modern-day risks versus modern-day accessibilities.

That’s probably enough for now on those two in-your-face incendiary remarks about cars.

The next conveyed thoughts seem to cover the bold declaration that we ought to avoid using cars. We are told that we should use the bus, subways, or bikes, rather than using cars.

No particular beef on that recommendation. Cars tend to be relatively inefficient when it comes to transporting people. The amount of space and energy consumed for the transport of one person in a car is not nearly as efficient as when using a bus or subway that contains lots of people all at once.

The rub is that a car provides a point-to-point form of transit, while a bus or subway typically does not.

If you want to get from your home to an office building on the far side of a city, a car can usually take you from the door of your domicile directly to the door of the office structure. In comparison, to use a bus, you need to somehow get from your home to the bus stop. Once the bus transports you, there is a need to somehow get from that drop-off to the desired final destination.

It is hard to compete with a mechanism that is essentially seamless and provides one-stop shopping of sorts. A car is pretty handy since it goes from point A to point B, and there isn’t a need to change across modes of transport. It would seem that human nature is likely to gravitate toward the convenience of the point-to-point option.

It would further seem that AI self-driving cars will make that seamlessness even more abundantly apparent. The assumption is that self-driving cars will be readily available 24×7. Making use of a self-driving car is going to be easier than having to arrange for finding a suitable human driver that is available to give you a lift. Overall, the allure of using self-driving cars will further exacerbate the challenges of encouraging people to use other forms of transport such as buses, subways, and even bicycles (see my discussion about bikes at this link here).

Moving on, one of the remarks on the display panel was about the aspect that cars melt glaciers.

This is a somewhat complex commentary and would take a lot of ink herein to fully unpack.

In short, the likely idea is that conventional combustion engines are a notable contributor to adverse climate impacts. For self-driving cars, the likelihood is that EVs (electric vehicles) will be used, partially because the onboard processors for the AI driving system require gobs of electrical power, and EVs are especially well-suited to that requirement. To some extent, an argument could be made that this will demonstrably undercut the melting-glaciers decree when it comes to the advent of self-driving cars, all else being equal (it’s a complicated matter).

Another allegation was that cars ruin cities.

This is an oft-repeated exhortation and certainly has some sound considerations. The basis partially has to do with the need to accommodate a tremendous amount of surface space toward simply having roads and also set aside space for parking. In many ways, this leads to a design of cities that is off-putting and disruptive to how people live.

Whether self-driving cars will significantly change the design of cities is still an open question. Some, for example, strongly argue that we will be able to remove the parking aspects and merely instruct self-driving cars to go outside of the city whenever they might need to be in a parked posture. This alone would radically reduce various space constraints in cities. And so on.

The roadside electronic display told us to stop driving, and that we ought to get rid of our cars.

Via self-driving cars, you indeed will stop driving. The AI driving system will be doing the driving. Seems like that appeal is satisfied.

Well, that admittedly is a bit smarmy as an answer, since the conveyed remark is ostensibly about not using cars, and thus for the case of self-driving cars might be reworded to something like “Stop Being A Car Passenger” or the equivalent heralding.

In terms of getting rid of your car, one viewpoint about the emergence of self-driving cars is that we will become predominantly a ridesharing or ride-hailing society, whereby very few people will own a car. When you need to use a car, you will request a lift via an app and a self-driving car will come to undertake your driving journey. The cost and convenience of doing so will eclipse any interest in owning your car (there is more to this, so see my columns for added explanation).

Conclusion

This brings us to the last remark to be covered.

It was this advice: Honking won’t help.

Now that seems to be the most uncontested quip and one that we can all generally accept as a truism.

As you certainly know, the moment you honk your horn, you don’t know what the reaction will be. Even though you might be honking as a considered kind gesture to alert a fellow human about some pending danger, the odds are that most of the time the honking will backfire. Many road rage incidents began with the use of a honking horn.

Will AI self-driving cars undertake the act of honking?

Probably so.

Please don’t get mad. No need to go into a road rage outburst against an AI driving system. Besides, it won’t be able to put up its dukes and defend itself. Seems like it wouldn’t be a fair fight.

Who knows, we might need to put up electronic displays that alert those self-driving cars not to do any more honking, and this might be officially posted on a formal PVMS by the proper authorities, obviating the hacker messages that decry the same sentiment.
