Transportation

When Wildfire Smoke And Ash Fill The Sky, Here’s How Self-Driving Cars Can Go Awry


There has been a recent spate of deadly wildfires throughout the western parts of the country, especially hitting California, Oregon, and Washington. An estimated three million acres of California land alone have been burned so far, woefully causing fatalities and at times extensive destruction to homes and forests. Most of the fires seem to have been set off by the nearly 15,000 lightning strikes that accompanied recent tropical storms, though some were initiated by human carelessness or untoward intentional acts.

An especially eerie sight made the national and international news last week when the San Francisco Bay Area and other surrounding locales became thick with smoke and ash, causing the skies to showcase a menacing red-orange glow that blotted out any semblance of a normal sky. Daytime pictures of the downtown areas resembled prophetic scenes of the future as depicted in the sci-fi movie Blade Runner 2049. Throughout the daylight hours, drivers had to use their headlights, drive quite cautiously, and navigate the roadways as safely as they could under a dense cloak of smoky air.

If driving is difficult for humans during such conditions, this raises an interesting question: How will AI-based true self-driving cars handle the roadways during the adverse aftermath of widespread wildfires?

Let’s unpack the matter and see.

Understanding The Levels Of Self-Driving Cars

As a clarification, true self-driving cars are ones in which the AI drives the car entirely on its own, without any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5 (see my explanation at this link here), while a car that requires a human driver to co-share the driving effort is usually considered Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5; we don't yet know whether this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some point out, see my indication at this link here).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won't be markedly different from driving conventional vehicles, so there's not much new per se to cover about them on this topic (though, as you'll see in a moment, the points next made are generally applicable).

For semi-autonomous cars, the public needs to be forewarned about a disturbing aspect that's been arising lately: despite those human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, none of us should be misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.

Self-Driving Cars And Coping With Wildfire Conditions

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.

All occupants will be passengers.

The AI is doing the driving.

For those of you who perchance live in an area of California where self-driving cars are currently being tried out, you might have noticed that many of the companies deploying these state-of-the-art vehicles continued doing so amid the smoky air conditions.

Your first thought might be that they should close down their operations in those circumstances and not get onto the roadways. If their vehicles were getting in the way of rescue efforts or otherwise obstructing matters, indeed you can expect that the self-driving cars during these tryout periods would most likely be kept off the streets. In this case, it seemed that being on the roadways was not disrupting activities, plus there is a purposeful desire to get the self-driving cars immersed in such smoky air conditions.

Here’s why.

Most of the self-driving car development efforts to date have focused on everyday kinds of driving activities. This might involve going from a home to a grocery store or making a quick run on local freeways to exercise the AI driving capabilities on fast-paced byways and navigating amongst normal traffic conditions.

One concern is that the AI is not necessarily ready for extraordinary driving situations, known as edge cases, or sometimes referred to as corner cases.

As we all know, even a newbie teenage driver doesn't begin driving with much experience under their belt. Over time, they end up driving in a widening variety of circumstances and presumably become familiar with what to do. In a sense, the hope is to do something similar for self-driving cars (do not, though, misconstrue this point: the AI is not anywhere akin to human intelligence). Encountering a wider range of unusual driving situations helps the AI, via its use of Machine Learning (ML) or Deep Learning (DL), to improve its handling of whatever might happen while on the roads.
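
To make this concrete, consider a minimal sketch in Python of how a fleet might flag low-confidence perception moments, such as scenes obscured by smoke, for later retraining. The names, threshold, and data structure are all hypothetical assumptions for illustration, not any automaker's actual pipeline.

```python
from dataclasses import dataclass

# Hypothetical illustration: flag low-confidence perception frames
# (e.g., scenes obscured by smoke) so they can be added to the
# training corpus for the next Machine Learning update cycle.

CONFIDENCE_FLOOR = 0.70  # assumed threshold; a real system would tune this

@dataclass
class PerceptionFrame:
    timestamp: float
    scene_confidence: float  # 0.0 (no idea) .. 1.0 (certain)
    conditions: str          # e.g., "clear", "fog", "smoke_ash"

def collect_edge_cases(frames):
    """Return frames worth keeping for retraining on unusual conditions."""
    return [
        f for f in frames
        if f.scene_confidence < CONFIDENCE_FLOOR or f.conditions != "clear"
    ]

frames = [
    PerceptionFrame(0.0, 0.95, "clear"),
    PerceptionFrame(1.0, 0.55, "smoke_ash"),  # smoky scene, low confidence
    PerceptionFrame(2.0, 0.88, "smoke_ash"),  # smoky but still confident
]
for f in collect_edge_cases(frames):
    print(f"queue for retraining: t={f.timestamp} conf={f.scene_confidence}")
```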

It is best to get the AI accustomed now to driving in wildfire aftermath conditions, rather than discovering post-implementation that the AI gets confounded by such circumstances. Imagine that you are using self-driving cars daily, and on those days or perhaps weeks when a wildfire causes the air to become blotchy, the self-driving cars were unable to cope and thus were not providing rides. Those self-driving cars would potentially be parked in depots and warehouses, simply waiting for the air to improve, and meanwhile, people needing rides would be stranded or mobility deprived.

Let’s next consider why the wildfire aftermath conditions would have an impact on the AI and the self-driving cars.

For anyone who has ever parked their car in an area that was filled with smoke and ash, you've undoubtedly come out to your sitting car and noticed a thick layer of soot on the exterior. Besides messing with that nifty paint job, other concerns relate to the driving of the vehicle. As a driver, you likely got into the driver's seat and instantly noticed that your windshield was coated with dust and ash. That's obviously a problem since it can cut down on your ability to see what might be on the roadway ahead of you.

Self-driving cars are chock-full of specialized sensors, including cameras, radar, LIDAR, thermal imaging, ultrasonic units, and so on. These sensors are how the AI detects the roadway and the surrounding scene, such as traffic signs, pedestrians, bikers, and the like. Without those sensors, the AI would be effectively blind and unable to appropriately drive the self-driving car.

An ongoing problem that has yet to be fully resolved involves ensuring that the sensors on the self-driving car are continuously kept free of any obstructions. Sure, you know that for the windshield you can use your wiper blades and spritz windshield wiper fluid to keep the glass clean and clear. In the case of the self-driving car sensors, this is not quite so easy. Different cleaning methods are being tried, including employing tiny windshield wiper blades, specialized chemical dispensers, protective coatings, cages to reduce debris impacts, and so on.

Thus, the first and most prominent aspect of driving in the smoke and ash involves making sure that the sensors can function properly. Furthermore, if somehow a sensor is no longer able to adequately function, there must be provisions for what ought to be done.
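
As a rough illustration of what such a provision could look like, here's a hedged sketch in which the driving system tallies which sensors remain trustworthy and falls back to a minimal-risk maneuver, such as pulling over, when too many are obscured. The sensor names, the front-camera rule, the two-sensor rule, and the response labels are assumptions made up for this example.

```python
# Hypothetical sketch: decide what to do when soot obscures sensors.
# The sensor names and the fallback rules here are illustrative
# assumptions, not any automaker's actual policy.

SENSOR_STATUS = {
    "front_camera": "ok",
    "rear_camera": "obscured",  # ash buildup detected
    "lidar": "ok",
    "radar": "ok",
    "ultrasonic": "ok",
}

def plan_response(status):
    obscured = [name for name, s in status.items() if s != "ok"]
    if not obscured:
        return "continue"
    if "front_camera" in obscured or len(obscured) >= 2:
        # Degraded beyond a safe margin: execute a minimal-risk maneuver.
        return "pull_over_and_request_cleaning"
    # Otherwise keep driving but trigger the cleaning mechanism
    # (e.g., miniature wipers or fluid dispensers) on the affected unit.
    return f"clean_in_motion: {obscured}"

print(plan_response(SENSOR_STATUS))  # -> clean_in_motion: ['rear_camera']
```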

Let’s pursue that line of thinking.

Besides the sensor directly being obscured, another facet is that even if the sensor is working perfectly fine, the air itself is potentially going to present a problem. Human drivers in the San Francisco area last week reported that it was at times like driving through a heavy fog, such that the smoke and ash significantly reduced overall visibility.

Some assume that self-driving cars are immune to such conditions. They seem to think that with all the advanced sensory apparatus, the self-driving car should be able to find its way in any kind of adverse weather or similarly foul environment. This is not the case. There are many situations in which the sensors will not be able to see or scan sufficiently, and therefore such moments need to be taken into account by the AI driving system.
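
One simple way to quantify that kind of degraded visibility, offered purely as an illustrative sketch and not any vendor's actual method, is to measure how much contrast a camera frame retains, since heavy smoke washes out scene contrast much as fog does. The cutoff value below is an assumed number for the example.

```python
import numpy as np

# Illustrative sketch: heavy smoke, like fog, flattens the contrast of a
# camera image. A crude proxy is the standard deviation of pixel
# intensities; the 0.08 cutoff is an assumed value for this example.

CONTRAST_CUTOFF = 0.08

def visibility_degraded(gray_frame: np.ndarray) -> bool:
    """Return True if the frame looks washed out (smoke/fog-like)."""
    frame = gray_frame.astype(np.float64) / 255.0
    return float(frame.std()) < CONTRAST_CUTOFF

clear_scene = np.random.randint(0, 256, (480, 640))    # high-contrast mock
smoky_scene = np.random.randint(170, 200, (480, 640))  # washed-out mock

print(visibility_degraded(clear_scene))  # False
print(visibility_degraded(smoky_scene))  # True
```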

This brings up an allied topic that is noteworthy.

When a human opts to drive a car, they presumably judge whether the air is so bad and the visibility so obscured that it would be best not to go for a drive. This is a judgment call, not a hard-and-fast rule based on some numeric calculus. The question arises as to how the AI should make such a “judgment” when considering whether to proceed on a driving journey.

You might be tempted to say that the self-driving cars need to be managed by an owner that makes those decisions, such as a fleet of self-driving cars owned by, say, an automaker or a ride-sharing firm. The humans of those firms would need to decide whether to send out the self-driving cars, rather than leaving such decisions to the AI.

Unfortunately, it is not that easy of an answer.

Suppose in the morning the air seems relatively acceptable for the self-driving cars to proceed. Some of the self-driving cars end up giving rides in places that are suddenly faced with new wildfires, where the smoke and ash are especially fierce. The AI, using the sensors, can detect the conditions in real-time, whereas human managers sitting at a headquarters might not be aware of those moment-to-moment circumstances. As such, the AI must be programmed to assess drivability and ascertain whether it is safe to proceed, making such a decision in real-time as needed. From an AI Ethics perspective, there are increasing calls by many that the automakers and self-driving tech firms should make available the nature of the algorithms being used to make these kinds of on-the-spot driving decisions (for more on AI Ethics, see my coverage at this link here and this link here).
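
To picture what such on-the-spot programming might look like, here's a hedged sketch that combines measured visibility, sensor health, and any wildfire alerts into a single go/no-go drivability decision. The inputs, weights, and threshold are invented for illustration; the actual algorithms are proprietary, which is precisely the transparency issue those AI Ethics calls are about.

```python
# Hedged sketch of a real-time go/no-go drivability check. The inputs,
# weights, and 0.5 threshold are invented for illustration and are not
# any automaker's disclosed algorithm.

def drivability_score(visibility: float, sensor_health: float,
                      wildfire_alert: bool) -> float:
    """Each input is 0.0 (worst) to 1.0 (best); alerts apply a penalty."""
    score = 0.6 * visibility + 0.4 * sensor_health
    if wildfire_alert:
        score -= 0.3  # active-fire advisory in the operating area
    return max(0.0, score)

def decide(visibility, sensor_health, wildfire_alert):
    score = drivability_score(visibility, sensor_health, wildfire_alert)
    return "proceed" if score >= 0.5 else "seek_safe_stop"

# Morning air seemed acceptable...
print(decide(visibility=0.8, sensor_health=1.0, wildfire_alert=False))  # proceed
# ...then a new fire erupts mid-ride and the smoke thickens.
print(decide(visibility=0.3, sensor_health=0.7, wildfire_alert=True))   # seek_safe_stop
```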

An interesting conundrum can also arise in these settings.

Suppose you live in a house that at first was untouched by the wildfires and seemed to be safely at a notable distance from the fires.

Keep in mind that these wildfires often get carried by swift winds and move very fast, jumping from place to place, across great distances, without much warning. All of a sudden, you realize that the fire is getting close to your home. So, you decide that it is best to get into the car and drive away.

In an era of self-driving cars, here’s what could happen. You get into the self-driving car and the AI announces that due to the prevailing wildfire conditions locally, it is not going to proceed.

This comes as quite a shock to you. Imagine that you have loaded the self-driving car with your family, your beloved pet dog, and the family heirlooms, and hurriedly and desperately need to escape. Now, the self-driving car is refusing to drive. If the self-driving car were a conventional human-driven car, you would not be dependent upon the AI and could merely utilize the driving controls. In the overall case of self-driving cars, it is widely assumed and expected that there will not be any driving controls inside the vehicle, thus precluding any human from attempting to manually drive the car.

This makes sense in that if we are going to have fully autonomous self-driving cars, and presumably glean the advantages, we would want to have only the AI doing the driving. If you open the driving to humans, it means that human foibles come back into the picture. Someone who is dreadfully drunk could potentially decide they want to drive, rather than the AI, and summarily grab the driving controls.

Nope, the idea is that an autonomous self-driving car is supposed to be driven by the AI, not by humans.

To clarify, this does not mean that overnight we are going to have all and only self-driving cars on our roadways. For quite some time, likely decades, there will be a mixture of both self-driving cars and human-driven cars. Part of the reason for this mixture is that there are today some 250 million conventional cars in the United States, and they will not readily be replaced by self-driving cars. Also, debates are raging about whether people ought to be allowed to drive if they wish to do so. There are going to be those human drivers who will insist on never giving up the wheel, and you'll need to pry their cold dead hands from the steering column before they relinquish the privilege of driving.

Returning though to the dilemma, you and your family are anxiously piled into a self-driving car and it refuses to move.

What do you do?

One answer so far is that there will likely be an OnStar-like facility within self-driving cars, by which you can contact a remote agent to tell them if you are having an issue with the self-driving car. In this instance, merely reaching a remote agent (if even possible, since the communications linkages in a wildfire area might not be functioning) does not necessarily resolve the matter. The remote agent might politely explain that the AI has determined it is not safe to drive.

You meanwhile are screaming at the remote agent to override the AI and tell it to start driving. The remote agent might not have such a capability. But even supposing the remote agent can get the AI to proceed, this still does not overcome the assertion that the conditions are so foul that it is ill-advised to be on the roads.
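
To see why the override might be limited, consider a hypothetical sketch of a remote-agent request handler in which even an authorized agent cannot push the vehicle past a hard safety floor baked into the AI. All names and values here are assumptions for illustration.

```python
# Hypothetical sketch: a remote agent's override request is still bounded
# by a hard safety envelope built into the vehicle. The names and the
# 0.2 floor are illustrative assumptions, not a real system's design.

HARD_SAFETY_FLOOR = 0.2  # below this drivability score, nothing overrides

def handle_override_request(agent_authorized: bool, drivability: float) -> str:
    if not agent_authorized:
        return "denied: agent lacks override capability"
    if drivability < HARD_SAFETY_FLOOR:
        return "denied: conditions below hard safety floor"
    return "granted: proceeding cautiously under remote supervision"

print(handle_override_request(agent_authorized=True, drivability=0.35))
print(handle_override_request(agent_authorized=True, drivability=0.10))
```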

If you have not yet been in an area that has wildfires, consider taking a concerted look at the myriad of relevant videos posted online. I say this because the wildfires can do more than fill the skies with smoke and ash. Limbs of trees fall onto the streets, blocking the path ahead. At times, the debris is on fire, making any attempt to go around the obstructions highly dangerous. Shrubs and bushes that catch fire can become rolling balls of flame. And so on.

Add to this disarray the aspect that there might be slow-moving bulky bulldozers on the roads, piloted by firefighters who are trying to get to the wildfire hot spots to put out the flames. Animals are oftentimes loose, running in a panic from their homesteads and darting frantically across the streets. Especially extreme wildfires sometimes whip up aggravated winds, causing impaling shards from destroyed homes and other manmade objects to fly alarmingly into the streets and strike cars.

So, which is it: allow the AI to make a life-or-death decision about whether it is safe for the self-driving car to proceed, or override the AI and force the self-driving car to get going?

Of course, the AI might move along and get a few blocks from your home, and then reach a point at which it is abundantly apparent that nothing can proceed. Perhaps the wildfires have engulfed the roadways and not even human drivers could attempt to drive through the danger. Now what? Maybe you would have been better off staying at home rather than becoming stranded on a roadway while sitting inside a self-driving car, one that is now completely surrounded by the wildfire.

Conclusion

There are trade-offs in having AI self-driving cars that will not allow for human driving, and the wildfires help to illuminate that kind of trade-off.

The response by some self-driving car aficionados is that this is a one-in-a-zillion chance of ever happening. They would argue that the potential for reducing the estimated 40,000 car crash fatalities and 1.3 million car incident injuries in the United States each year via the advent of self-driving cars is worthwhile in comparison to the oddball chance of getting ensnared in a situation that the AI would not drive you out of.

In fact, they would likely further bolster their assertion by pointing out that it is conceivable that the AI will be a safer driver in a setting such as a wildfire than human drivers would be. Ergo, it could be that human drivers would drive recklessly and in a panic mode, potentially getting themselves and others into a worse mess than already exists. The AI would presumably not be overcome by emotion, fear, or other panicky afflictions, and thus would drive stably and reliably.

For those of you intrigued by these matters, I've previously analyzed similar facets of driving in a hurricane, a tornado, snowstorms, and other such calamities (see the link here and the link here, for example).

Though the everyday driving of a car is admittedly done in rather mundane conditions, most of the time, we ought not to overlook the unfortunate fact that a slew of “extraordinary” circumstances can befall large swaths of the country, seasonally and seemingly year-round. If AI-based true self-driving cars are aiming to become our sole means of car transport, it will ultimately become crucial to figure out how society wants to handle these challenging AI-driving settings.

Contemplate these matters the next time you see a self-driving car sauntering down your neighborhood street, and consider what the AI would do if there were something other than a nice sunny day with pretty blue skies and nothing at all afoot. Thinking ahead about adverse driving conditions is not especially pleasant, but it could make all the difference when you're otherwise jammed up in a dire moment.

As Benjamin Franklin once said, failing to prepare is akin to preparing to fail.


