
Backup Drivers For AI Self-Driving Cars Aren’t A Guarantee Of Safety, So Here’s What Needs To Be Done


When you get behind the wheel of a car, you take on a solemn duty, one that holds you responsible for whatever the car ends up doing.

While driving, if you suddenly realize that you are about to ram into another car, the odds are that your cognitive focus on the driving task will aid you toward quickly maneuvering to avoid the potential crash. Your mind will command your arms to steer the vehicle away from the other car and your legs might be mentally instructed to slam on the brakes.

At times, you can become distracted from the driving task such as trying to look at the latest text message on your cell phone or while grabbing that eggnog latte that you purchased earlier in the morning.

Being distracted means that you lose precious time when having to make split-second decisions about the driving task.

Adverse outcomes due to being mentally adrift of the driving task can be severe, leading to deaths and injuries. Indeed, in the United States alone, there are about 40,000 annual car-related deaths and over 1.2 million injuries as a result of car crashes, many of which are caused by drivers who weren’t fully attentive to the driving chore.

There is crucial time involved in what’s called the driving task cycle: you have to perceive a driving situation, mentally craft a plan of action, instinctively convey the plan to your limbs, and then have your limbs enact the plan, all part of a delicate dance that keeps you and your car from getting into trouble.
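Roughly the same cycle, by the way, is what an automated driving system runs in software, over and over. As a minimal sketch, purely illustrative and using hypothetical function names rather than anything from a real driving stack, the loop might look like this:

```python
# A minimal illustrative sketch (hypothetical names, not a real driving stack):
# the sense-plan-act loop that an automated driving system repeats continuously.
import time

def sense():
    # Placeholder: gather the current driving situation (e.g., sensor readings).
    return {"obstacle_ahead": False, "speed_mph": 35}

def plan(situation):
    # Placeholder: craft a plan of action based on the situation.
    if situation["obstacle_ahead"]:
        return {"steer": "swerve", "brake": True}
    return {"steer": "hold", "brake": False}

def act(action):
    # Placeholder: enact the plan via the vehicle controls.
    print(f"steer={action['steer']}, brake={action['brake']}")

if __name__ == "__main__":
    for _ in range(3):          # in reality this loop runs the entire trip
        act(plan(sense()))
        time.sleep(0.1)         # each cycle must complete in a split second
```

Whether the cycle runs in a human mind or in code, any delay at the sensing, planning, or acting stage eats into that split-second budget.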

For most of us, we are in a heightened state of readiness while driving a car and the driving task cycle is incessantly repeating in our minds as we command the car on the roadway.

Your eyes are on the road, your hands are on the steering wheel, your feet are on the pedals of the car, and your mind is avidly watching the world around the car as you scan for what’s ahead of you.

It’s like being a cat that’s ready to pounce.

Imagine, though, if you were somewhat of a second fiddle when it came to the act of driving a car and being behind the wheel.

You might recall that teenage novice drivers often learn how to drive a car by sitting in the driver’s seat while an instructor sits in the passenger seat. In some cases, the car has a second set of driving controls for the instructor to use.

The newbie driver is the active driver and the instructor becomes essentially a secondary or relatively passive driver.

If the teenager begins to go astray, the instructor will usually first try to tell the novice to correct whatever is going wrong, and then when things go especially awry the instructor might take over the driving controls to steer out of harm’s way.

Suppose that you were the second driver, the one that’s not actively driving the car, and yet you are presumably there to immediately take over the driving effort as needed.

It’s a tough spot to be in.

On the one hand, you might be greatly tempted to use your secondary driving controls at the slightest sign that the novice driver is having issues.

Does the teenage driver see those pedestrians at the crosswalk? Well, maybe it is safest for you to take over the driving controls, just in case.

Or the car seems to be going excessively fast, so it’s best to tap your foot on the brakes rather than try to explain to the novice driver that the car needs to slow down.

And so on.

Unfortunately, each time that you act, you are undermining the newbie driver and not allowing them to fully command the car.

Your job is intended to allow the teenager to figure out how to drive, intervening only to ensure that dire situations are averted.

There is an awkwardness in co-sharing the driving task, and the second driver is like a conflicted cat that’s not quite supposed to pounce, but still seemingly supposed to be ready to pounce, yet should only pounce when absolutely needed.

Unless you are a soothsayer that can see the future, it is hard to know when you ought to intervene and when there’s no need to intervene.

Furthermore, if you opt to provide a lot of latitude to the newbie driver, the odds are that you will delay in intervening when the crucial moment arises, meaning that it might be too late by the time you do make use of the driving controls.

Life is sometimes like being between a rock and a hard place.

Why bring this up?

I’ve done so to introduce you to a vital role in today’s self-driving cars being tested on our public roadways, namely the role of assigned human drivers who serve as the backup to the AI system that’s driving the car.

Human Backup Drivers For AI Self-Driving Cars

These backup drivers are often referred to as test drivers, though given their second fiddle capacity, they are more aptly called fallback test drivers.

A test driver would be someone who is the primary driver of a car, doing so to test out how the car maneuvers and works.

A fallback test driver is a person that is akin to the second driver in my example of a teenager learning to drive. They don’t actively drive the car per se and instead allow the AI to do so, meanwhile they are supposed to be ready to take over the driving controls when needed.

The in-vehicle fallback test driver (IFTD) serves as a last line of defense, aiming to keep an eye on what the AI driving system is doing, and then intervene when needed, but only when needed and not jumping the gun in doing so.

Sometimes people think that being a fallback test driver must be quite a cushy job and easy as pie.

You sit in a nifty state-of-the-art equipped car and appear to do nothing at all for maybe 99% of the time.

Yes, that might be true, though think carefully about the 99% of the time and the other 1% of the time.

During the 99% of the time, you are theoretically at the ready, all the time, perhaps for hours on end as the driverless car goes throughout a city and town doing its driving learning and testing.

I assure you that boredom can become a significant issue.

Sure, you are presumably on the edge of your seat, but can you really remain in that state for hours at a stretch, doing nothing other than waiting to ascertain whether you should intervene?

And, consider the 1% of the time when you do take over the controls.

The chances are that you took over the controls because something untoward was about to happen.

Plus, later, you might be required to explain why you intervened.

If you intervened and there wasn’t any notable reason to do so, you are likely seen as a jackrabbit that isn’t letting the AI do its thing.

It is hard to prove that your intervention avoided a car accident, since no accident actually happened, and it could be that the AI would not have gotten into an accident anyway.

In your mind, you might repeatedly be saying this to yourself: Should I intervene now or should I not?

The most insidious phenomenon is when a driverless car seems to be performing ably for hours on end, and you can become complacent and begin to let your guard down.

You assume that the self-driving car is always going to do the right thing.

So, add together being lulled into complacency, along with the boredom from doing “nothing” (though you are indeed supposed to be doing something, you are tasked with watching the road and acting to ensure the safety of the car driving), and you have a potent combination that can dramatically undercut the purpose of having a fallback test driver.

Worst Practices Have Been Occurring

Worst practices have been aplenty among firms that initially opted to put their self-driving cars on our roadways and tossed nearly anyone into the fallback test driver role.

A firm might falsely think that the fallback test driver is an insignificant position and therefore hire just about anyone to do the job.

If you are breathing and can drive a car, you are hired.

A firm might pay peanuts to the fallback test drivers, figuring it really is a minimum wage job.

Keep in mind though that flipping burgers versus being inside a multi-ton vehicle that can ram and kill people are two different things.

Low pay might also tend toward having fallback test drivers who take the position as a second job and show up to be the co-sharing driver after having worked a full day in, say, a nearby manufacturing plant. Their wakefulness and alertness can be quite dulled, severely impacting their ability to properly perform in the second fiddle driver’s role.

How long should a driving journey be when doing a specific run of a driverless car?

A firm might be tempted to keep the number of fallback test drivers to a minimum and thus use them for long hours, maybe eight to ten hours at a stretch. This means that the cat that can’t pounce but is supposed to be ready to pounce has been on its toes for all of those eight to ten hours while inside a moving car.

There are lots of added twists and turns involved.

Another example involves the AI development team that’s making coding changes to the on-board AI driving system.

Suppose the AI team opts to make the driverless car more reactive to nearby pedestrians, and so the code is adjusted accordingly. Whenever a pedestrian gets within two feet of the vehicle, rather than the prior four feet, the AI is going to react to avoid a potential collision.
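As a purely hypothetical sketch of the kind of tweak involved (the parameter name and code structure are my own assumptions, not anything taken from an actual driving system):

```python
# Hypothetical and illustrative only: tightening the pedestrian reaction threshold.
PEDESTRIAN_REACTION_DISTANCE_FT = 2.0   # previously 4.0

def should_react_to_pedestrian(distance_ft: float) -> bool:
    """Return True when the vehicle should begin an avoidance maneuver."""
    return distance_ft <= PEDESTRIAN_REACTION_DISTANCE_FT
```

A one-line change like this directly alters when the car visibly maneuvers around pedestrians.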

Meanwhile, let’s imagine that the AI team forgets to inform the fallback test driver about this new nuance.

The fallback test driver might have already logged tons of hours of being inside the driverless car and had gotten used to it not reacting until a pedestrian was four feet away.

Now, surprisingly, the self-driving car is maneuvering only when a pedestrian gets closer to the car than before.

The fallback test driver is likely to be puzzled by this change in driving behavior by the AI, unsure of what to do, and become either over-reactive or under-reactive to this new change that they weren’t told about.

In short, there are lots of ways to generate ‘worst cases’ if a firm isn’t savvy about how to manage and undertake a proper use of fallback test drivers.

The worst of worst cases involves the self-driving car getting into a car accident and harming or killing someone.

Sadly, there is the now infamous incident of the Uber self-driving car that rammed and killed a pedestrian walking a bike across the street at nighttime (see my coverage).

You can watch online the video of the fallback test driver who was in the car at the time of the incident.

Controversy still exists about what the fallback test driver was doing and not doing, along with some assertions of potential criminal negligence involved.

Best Practices For Fallback Test Driver Usage

Each company undertaking driverless car usage on our roadways has, to date, been devising its own proprietary approach regarding how it hires, trains, and utilizes its fallback test drivers.

It has been a hodge-podge way to do things.

Fortunately, a recently released set of best practices might inspire the automakers and self-driving car firms to consider a wider and deeper range of aspects that could shore up how fallback test driving occurs.

The Automated Vehicle Safety Consortium (AVSC) released a document entitled “AVSC Best Practice For In-Vehicle Fallback Test Driver Selection, Training, And Oversight Procedures For Automated Vehicles Under Test” that provides a helpful in-one-place compendium of important operational matters on this topic.

The SAE Industry Technologies Consortia (SAE ITC) is the official publisher and notes that the best practices are completely voluntary, and that it is the sole responsibility of the automakers and self-driving tech firms to determine their suitability for any particular use.

At about a dozen pages in length, the best practices document covers making sure that candidates for a fallback test driver position have the appropriate prior driving experience, along with the needed mindset for this kind of work.

These backup drivers also need to undertake basic driver training coursework and be evaluated via both a written exam and a demonstrated driving skill test, per the document’s recommendations.

Another facet is the need to establish testing protocols, laying out what each driving journey on our roadways will entail, and making sure that a pre-trip readiness check is performed, that in-trip procedures are stipulated and adhered to, and that a post-trip debriefing takes place.

Even simple things, like restricting the fallback test driver from using a personal smartphone while in the driver’s seat, are the kind of “common sense” points that, though obvious to some, are regrettably often overlooked or completely neglected in the daily throes of conducting on-the-road efforts.
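To make the idea concrete, here is a minimal sketch of how a firm might codify such per-trip protocols as a checklist; the field names and items below are my own assumptions for illustration, not taken from the AVSC document:

```python
# Hypothetical per-trip protocol checklist (illustrative; names and items are
# assumptions, not drawn from the AVSC best practices document).
from dataclasses import dataclass, field
from typing import List

@dataclass
class TripProtocol:
    route_description: str
    pre_trip_checks: List[str] = field(default_factory=lambda: [
        "Confirm the fallback test driver is rested and fit for duty",
        "Brief the driver on any recent AI software behavior changes",
        "Inspect sensors and the secondary driving controls",
    ])
    in_trip_procedures: List[str] = field(default_factory=lambda: [
        "No personal smartphone use while in the driver's seat",
        "Log every intervention with the time and reason",
    ])
    post_trip_debrief: List[str] = field(default_factory=lambda: [
        "Review interventions with the engineering team",
        "Record any anomalies in the AI driving behavior",
    ])

# Example usage for a single test run.
trip = TripProtocol(route_description="Downtown loop, daytime, moderate traffic")
for item in trip.pre_trip_checks:
    print("Pre-trip:", item)
```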

If you have dozens or maybe even a hundred or more fallback test drivers for your driverless car fleet, you can’t just wing it and hope that all of them will somehow magically do the right things.

As such, using a detailed methodology like this offers useful and crucial best practices.

There are some aspects of the new protocol that I have minor qualms about, but as an overall template it is a step in the right direction, and firms should be carefully reviewing the recommended approaches, along with making sensible and informed decisions about how their own fallback test driver efforts are conducted.

Conclusion

Let’s all keep top of mind that when a fallback test driver falters or fails, lives could be lost or injuries could arise.

Besides the potential for human harm, there is also the likelihood that the public and regulators will become alarmed at what is taking place on our roadways if self-driving car tryouts go awry.

Reactions could turn into a stifling of all driverless car efforts underway, potentially forcing an industrywide freeze on further efforts.

That being said, there are critics who say we ought to already be worried about these self-driving car efforts and put a stop to them.

Those critics view us all as part of an involuntary experiment, serving as guinea pigs around self-driving cars that aren’t yet ready for prime time.

No matter what your opinion might be about the matter, let’s all agree that having a fallback test driver is not a failsafe guarantee that a self-driving car won’t get into a car accident.

Instead, the fallback test driver is there to lower the odds of the car getting into incidents, though notable risks remain at play.

Various means have been proposed or put into practice to try to lower those risky odds even further.

For example, some assert that there should always be two occupants in a driverless car on our roadways, encompassing the fallback test driver and a human backup to the fallback test driver (this second person is usually an engineer that is examining the on-board systems while the car is underway).

The human backup to the fallback test driver is not necessarily going to be able to readily take over the driving controls if the fallback test driver falters, and instead, the idea is that this second person can be a handy means to keep the fallback test driver on their toes.

The second person can chat with the fallback test driver, asking how things are going, and otherwise aid in keeping an alert mindset on the driving task, and watching too that the fallback test driver doesn’t fall asleep or get distracted.

You might think that certainly using two occupants is wisest and it seems like this should always be done.

On the other hand, having two occupants increases the costs of the self-driving car tryouts, and there’s a chance too that the second person might become a distraction to the fallback test driver (this is somewhat ironic, namely that the backup could distract the backup).

One argument is that none of these driverless cars ought to be on our public roadways until they have been thoroughly tested on closed tracks or proving grounds that are specifically intended for testing of cars.

And, some vehemently argue, the driverless cars should have gone through extensive simulation of real-world conditions prior to getting onto any public roadways.

This is an ongoing debate and an acrimonious contention that involves somewhat complex trade-offs between what is needed to achieve self-driving cars and how best to get there (see my piece on this).

One final comment to contemplate involves the reaction time of the fallback test drivers.

Assume that the fallback test driver is ideally alert and well-prepared.

In this scenario, pretend that everything the best practices suggest has been undertaken and is fully in place (that’s quite a tall order, but go with me on this).

Even in that ultimate state, there is still the question of whether the fallback test driver, serving in the second fiddle driving role, will be able to react in time to avert a car crash.

The point is that no matter how much you do, there are still risks involved.

Smarmy people will retort that you might as well do nothing and just have anyone serve as the fallback test driver, or even remove the fallback test drivers entirely.

I’d hope that we don’t find ourselves tossing out the baby with the bathwater.

There is a real need and vital role for having in-vehicle fallback test drivers, assuming that we are willing to allow this grand experiment of driverless cars on our streets and byways.

Diligence in establishing and maintaining a best practices approach for utilizing fallback test drivers is a means to reduce risks while also advancing the advent of self-driving cars.

I urge those who are today still mired in ‘worst practices’ to get out of the mud and shift into the best practices protocols.

It’s a wise move for everyone.


