
Tesla Lawsuit Over Autopilot-Engaged Pedestrian Death Could Disrupt Automated Driving Progress


A recently filed lawsuit against Tesla in the U.S. District Court for the Northern District of California, where Tesla is headquartered, alleges that a Tesla Model X operating on Autopilot struck and killed a pedestrian on April 29, 2018.

This is reportedly the world’s first pedestrian fatality associated with a Tesla operating on Autopilot.

There have been other reported Autopilot-engaged fatalities that involved car drivers, and indeed corresponding lawsuits lodged against Tesla in some of those cases, but none as yet that seem to involve a pedestrian death.

Those who closely follow Tesla’s efforts have already anticipated that Tesla’s posture will be to once again use the same defense as earlier employed, namely that the human driver behind the wheel of a Tesla is the captain of the ship and therefore ultimately bears full and indivisible responsibility for any car crashes, regardless of whether Autopilot was engaged during an incident.

In that sense, Tesla is likely to deny any responsibility, accountability, or liability in this pedestrian fatality.

Some Tesla proponents would agree with such a position and fervently assert that the human driver is indeed the sole determiner of the actions of the vehicle, and that it is wholly appropriate for the automaker to claim there is no basis for the manufacturer to be blamed or considered at fault.

On the other side of the coin are those who point out the role of the Autopilot capability and its presumed contributory share in such car crashes (see my insights about this viewpoint at this link here).

How so?

Here are the arguments typically made.

First, it is usually argued that the Autopilot system should have done something to avert an imminent car crash, and if it did not do so sufficiently, this would be construed by some as a defect or deficiency in the provided capability, and thus the automaker should bear some share of responsibility.

Second, it is usually argued that the Autopilot system led the vehicle into a car crash and failed to adequately drive the vehicle in a safe manner, which would be construed as a defect or deficiency.

Third, it is often argued that Autopilot, once engaged, has a tendency to lull human drivers into the false belief that the car is driving itself, whereupon the driver becomes inattentive; though this perhaps somewhat acknowledges the driver being at fault, the viewpoint is that the automaker presumably knew of this propensity of drivers to be lulled and yet has taken insufficient efforts to overcome that tendency.

Fourth, it is typically argued that the very naming of the system as “Autopilot” falsely represents the actual capability of the driving-related system, since drivers presumably believe that anything called an autopilot implies the vehicle is able to be fully driven by itself and that a human driver is not a necessity per se.

Those are the typical points made in these various cases (for my elaboration of the emerging product liability exposures for certain levels of self-driving cars, see the link here).

Let’s take a look at the details of this particular lawsuit and see what it has to say.

Before doing so, it is helpful to clarify the various levels of car driving automation, including distinguishing true self-driving cars from semi-autonomous, partially automated cars.

The Levels Of Self-Driving Cars

True self-driving cars are ones in which the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered to be at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous and typically contain a variety of automated add-ons referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5, and we don’t yet know whether this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some point out).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).

For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that’s been arising lately, namely that despite those human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.

Self-Driving Cars And Tesla Lawsuits

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.

All occupants will be passengers.

The AI is doing the driving.

Existing Teslas are not Level 4, nor are they Level 5.

Most would classify them as Level 2 today.

What difference does that make?

Well, if you have a true self-driving car (Level 4 and Level 5), one that is being driven solely by the AI, there is no need for a human driver and indeed no interaction between the AI and a human driver.

For a Level 2 car, the human driver is still in the driver’s seat.

Where the twist comes to play is that the human driver and the Level 2 or Level 3 vehicle are essentially co-sharing the driving task, which can make for confusion and difficulties during the driving effort.

The human driver might not comprehend the boundaries of their need to drive versus the driving system doing the driving, and in the heated moment of split-second decision making while at the wheel, this kind of confusion can lead to quite untoward results.

Furthermore, the untoward results are not necessarily limited to the driver, since a car crash could readily harm or kill the passengers within the car, along with harming or killing drivers and passengers in nearby cars, and of course there is the chance of injuring or killing nearby pedestrians.

This point is worth making because an argument is sometimes made that if a human driver decides to use a Level 2 or Level 3 car, it is entirely up to them and only they will somehow be at risk or suffer any adverse consequences. That is an obviously narrow and misleading perspective, and we ought to agree that any driving of a car can ultimately have much wider repercussions than the potential harm to the driver alone.

Returning to the recent lawsuit that was filed, there are these six alleged causes of action or counts:

·        Strict products liability (design defects)

·        Strict products liability (failure to warn)

·        Negligence

·        Wrongful death

·        Loss of consortium

·        Survival action

An unspecified amount of compensatory and punitive damages is being sought.

The lawsuit was filed on behalf of the widow and daughter of the deceased pedestrian (Mr. Umeda).

Generally, the lawsuit offers the same kinds of bases for why Tesla should be held responsible as the ones I outlined earlier herein.

The lawsuit language is at times rather blunt and coarse, including alleging that the Autopilot was “half-baked” as a product, and “fatally flawed” in its design, and that Tesla is “beta-testing” with the public-at-large and putting the public at risk accordingly.

It will be informative to see what Tesla proffers as a counterargument to the lawsuit.

Meanwhile, let’s consider the driving situation that took place and see what we can glean from the ground level act of driving.

The Driving Situation

Apparently, the Tesla being driven in this instance was a 2016 Model X and had entered the Tomei Expressway in Kanagawa, Japan at approximately 2:11 p.m. on April 29, 2018.

This rendition of the driving circumstances is based on the filed lawsuit along with reported news media accounts, though do keep in mind that as the case proceeds there is undoubtedly going to be a closer look at the details to ascertain what factually happened.

After about 35 minutes of driving on the Tomei Expressway, at about 2:49 p.m., the Tesla Model X apparently struck and killed the pedestrian, Mr. Umeda.

It seems as though those aspects are relatively undisputed, though as mentioned keep in mind that they might ultimately be contested.

Your first thought might be: why was there a “pedestrian” anywhere on an expressway?

We typically think of pedestrians as persons on a sidewalk or perhaps someone crossing a street.

Normally, we would not expect pedestrians to be on a freeway or expressway, and the presence of a pedestrian is likely to immediately raise questions about the prudence of the pedestrian and whether a driver might have been caught off-guard or otherwise taken aback that a pedestrian was within reach of being struck.

On the other hand, reconsider the word “pedestrian” and hark to the somewhat common circumstances of those who might have a stranded vehicle on an expressway or have ended up on the expressway on foot, including perhaps changing a tire or offering aid to others.

It seems in this specific case that a prior incident had led to a van being parked on the expressway, along with several motorcycles, and the consequent pedestrian activity.

Here’s what the lawsuit alleges happened: “As the vehicle in front of the Tesla Model X ‘cut-out’ of the lane and successfully changed to the immediate left-hand lane, the Tesla vehicle, which had been traveling at a relatively low speed, began to accelerate automatically to the speed that its driver had previously set when Tesla’s Traffic Aware Cruise Control (TACC) feature was engaged. Therefore, the Tesla began rapidly accelerating from about 15 km/h to approximately 38 km/h.”

As a quick aside, 15 km per hour is about the same as 9 mph, and 38 km per hour is about the same as 24 mph.
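
To make that aside a bit more concrete, here is a minimal and purely hypothetical sketch of how a generic traffic-aware cruise control might pick its target speed, following a tracked lead vehicle and resuming the driver’s set speed when no lead is tracked. The function, parameter names, and the assumed set speed are illustrative only and are not Tesla’s actual code or algorithm.

```python
# Hypothetical sketch of how a generic traffic-aware cruise control might
# pick its target speed. This is NOT Tesla's actual code or algorithm;
# the function, names, and assumed set speed are illustrative only.

KMH_TO_MPH = 0.621371  # 15 km/h is roughly 9.3 mph, 38 km/h roughly 23.6 mph


def target_speed_kmh(set_speed_kmh, lead_vehicle_speed_kmh=None):
    """Follow a tracked lead vehicle, capped at the driver's set speed.

    If no lead vehicle is tracked (for example, because the car ahead
    just cut out of the lane), the controller resumes the set speed.
    """
    if lead_vehicle_speed_kmh is not None:
        return min(lead_vehicle_speed_kmh, set_speed_kmh)
    return set_speed_kmh  # lead lost after a cut-out: accelerate to set speed


# While following a slow lead car, the target stays low; the moment the
# lead is no longer tracked, the target jumps to the (assumed) set speed.
print(target_speed_kmh(set_speed_kmh=100, lead_vehicle_speed_kmh=15))   # 15
print(target_speed_kmh(set_speed_kmh=100, lead_vehicle_speed_kmh=None)) # 100
```

The crux being illustrated is that a cut-out by the car ahead can, by itself, be the trigger for acceleration, which is consistent with the acceleration from roughly 15 km/h toward a higher speed that the lawsuit describes.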

Continuing the excerpt of the lawsuit, here’s what it further indicated (trigger alert, this is somewhat graphic): “The Tesla Model X’s sensors and forward-facing cameras did not recognize the parked motorcycles, pedestrians, and van that were directly in its path, and it continued accelerating forward until striking the motorcycles and Mr. Umeda, thereby crushing and killing Mr. Umeda as the Tesla Model X ran over his body.”

Those claims return us to the earlier point about the potential argument that the Autopilot was not sufficiently capable to extricate itself from an impending crash, and also that it was not driving safely enough to have avoided the circumstance altogether.

In other words, apparently, the Autopilot was not able to realize that a car ahead was performing a cut-out, a frequent traffic maneuver which we’ve all likely experienced, whereby the car in front of you opts to exit the lane, usually due to some obstruction or blockage in the lane ahead.

Most human drivers realize when a cut-out is taking place and will presumably be on alert, expecting that there is likely some reason for the cut-out action of the car ahead of them. By being on alert, the driver might either opt to immediately follow the cut-out driver and do the same action or become especially watchful for what might be ahead and potentially prepare themselves for a sudden stop or radical avoidance maneuver if needed.

It seems that none of that took place in this instance.

What did the driver do in this circumstance?

According to the lawsuit: “This entire incident occurred without any actual input or action taken by the driver of the Tesla vehicle, except that the driver had his hands on the steering wheel as measured by Tesla’s Autosteer system. Indeed, the Tesla Model X was equipped with an Event Data Recorder (EDR) which is intended to enable Tesla to collect data and record information from its vehicles and also provides information on various processes of the vehicle’s functioning systems when a crash occurs. The information regarding vehicle speed as extracted from the Tesla Model X provides proof of the foregoing facts.”

As a precautionary note, we don’t know what the EDR has indicated (for background about the nature of EDRs, see my analysis here), and until the case proceeds, it is not likely to be readily ascertainable what it captured.

There is more indicated about the status of the driver, including this: “At some point before 2:49 p.m., the driver of the Tesla vehicle began to feel drowsy and had begun to doze off.”

This contention returns us to the notion that a human driver might be lulled into complacency and become inattentive to the driving task, doing so under the false impression that the co-shared driving task is being handled by the automation.

Of course, a counterargument would be that a driver who is indeed dozing is not adequately performing the driving task and could do likewise in any conventional car (i.e., one without any sensors detecting the driver’s status), so perhaps singling out this instance seems overstated; the counter to that counterargument would be that since the vehicle had some form of driver-awareness detection, a proverbial can of worms is opened as to whether that detection was sufficiently designed and implemented.

As such, the lawsuit argues that the existing method of detecting driver attentiveness in a Tesla is insufficient because it relies on steering-wheel hand presence alone, without encompassing other methods such as an inward-facing camera that might detect the position of the driver’s head and whether their gaze is on the roadway.
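
As a rough illustration of the distinction the lawsuit is drawing, here is a minimal, hypothetical sketch of a driver-monitoring check that fuses steering-wheel hand presence with a camera-based gaze estimate; the signal names and thresholds are assumptions made for illustration and do not describe Tesla’s actual system.

```python
# Hypothetical driver-monitoring sketch that fuses two signals: steering-wheel
# hand presence (as inferred from torque) and camera-based gaze tracking.
# The names and thresholds are illustrative assumptions, not any vendor's API.

from dataclasses import dataclass


@dataclass
class DriverState:
    hands_on_wheel: bool          # inferred from steering-wheel torque
    eyes_off_road_seconds: float  # time since gaze was last on the roadway


def attention_alert(state: DriverState, gaze_timeout_s: float = 3.0) -> str:
    """Escalate warnings using both signals, not hand presence alone."""
    if not state.hands_on_wheel and state.eyes_off_road_seconds > gaze_timeout_s:
        return "critical: take over now"
    if state.eyes_off_road_seconds > gaze_timeout_s:
        return "warn: eyes off road"    # hands on wheel, but gaze elsewhere
    if not state.hands_on_wheel:
        return "warn: hands off wheel"
    return "ok"


# A dozing driver with a hand resting on the wheel still triggers a gaze-based
# warning here, which torque-only monitoring would not catch.
print(attention_alert(DriverState(hands_on_wheel=True, eyes_off_road_seconds=8.0)))
```

The point of the sketch is that a torque-only approach has no way to flag the dozing-with-hands-on-the-wheel scenario that the lawsuit alleges occurred.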

Presumably, the contention is that the Autopilot and the Tesla driving system were insufficient due to a multitude of deficiencies, which included an incomplete means of detecting and alerting a driver about their attentiveness, an apparent failure to detect the cut-out action of the car ahead and react to it, and a failure to avert the incident at the last moment, such as by braking or making an avoidance maneuver to avoid or reduce the impact that occurred.

Another facet involves the presumed acceleration aspects.

As I’ve covered previously (see the link here), one criticism of a car-following feature, or so-called pied piper approach, is that if the car ahead suddenly darts out of the lane, the question arises as to how adept the car-following feature and other allied features are at detecting the status of the roadway ahead.

In brief, if the sensory devices are focused closely on the car immediately ahead, which then “disappears” from view via a rapid lane change, sometimes the sensors are not quick enough to ascertain whether the lane is now wide open for use or whether there might be some upcoming traffic or obstructions in the lane.

A simple analogy to human reasoning: if your view is narrowly aimed at the bumper of the car ahead of you and you aren’t fully paying attention, perhaps looking away for an instant, then upon returning your eyes to the lane and finding that the car blocking your longer view is no longer there, you need to rapidly refocus your gaze to ferret out what is ahead of you.
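
To contrast with the naive resume behavior sketched earlier, here is another hypothetical sketch in which the controller, upon losing its lead target, first checks whether the newly revealed stretch of lane is clear of obstructions before accelerating; again, the function names, distances, and thresholds are illustrative assumptions and not a description of any production system.

```python
# Hypothetical variation: once the lead vehicle "disappears" via a cut-out,
# check whether the newly revealed lane is clear before resuming speed.
# Function names, distances, and thresholds are illustrative assumptions.

def resume_speed_after_cutout(set_speed_kmh, current_speed_kmh,
                              obstacles_ahead_m,
                              clear_distance_m=80.0, brake_distance_m=20.0):
    """Return a target speed after the lead target is lost.

    Anything detected in the revealed window (a stopped van, motorcycles,
    a pedestrian) holds speed or triggers braking instead of a resume.
    """
    if obstacles_ahead_m and min(obstacles_ahead_m) < brake_distance_m:
        return 0.0  # obstruction is close: brake rather than accelerate
    if obstacles_ahead_m and min(obstacles_ahead_m) < clear_distance_m:
        return current_speed_kmh  # hold speed until the lane is confirmed clear
    return set_speed_kmh  # lane appears clear: resume the driver's set speed


# With a stationary obstruction detected 40 meters ahead, the sketch holds the
# current speed; with nothing detected, it resumes the set speed.
print(resume_speed_after_cutout(100, 15, obstacles_ahead_m=[40.0]))  # 15
print(resume_speed_after_cutout(100, 15, obstacles_ahead_m=[]))      # 100
```

Whether and how quickly a production system can make that clear-path determination is precisely the kind of question the lawsuit raises.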

There are those who believe that the anti-LIDAR stance of Elon Musk and Tesla puts them in the untoward posture that this ability to refocus on what is ahead is lessened by not having a LIDAR capability (which Teslas do not have), and that cameras and radar are not sufficiently capable without the added use of LIDAR (for the acrimonious dispute about the value of LIDAR and the Musk and Tesla viewpoints on the matter, see this link here).

Conclusion

A slew of questions abounds:

·        Should the human driver be considered the sole and exclusive responsible party for this incident and the fatality that occurred?

·        Should Tesla have any responsibility or liability for this incident and its horrific outcome?

·        Does the pedestrian have a duty associated with actions undertaken while on an active expressway and being in harm’s way?

·        Are there other potential parties to this matter, such as the maintainer of the roadway infrastructure, as it might relate to this incident?

·        Might the liability be assigned proportionately across two, three, or more parties, or rest with only one?

We’ll need to watch and see how the matter evolves as the court case proceeds.

It will also be interesting to see whether Tesla attempts to change the venue to Japan, rather than the case taking place in the United States, and thus potentially have the matter come under Japanese law rather than U.S. law.

Some believe that Tesla will likely settle out of court, presumably doing so to avoid the adverse publicity and the potential drain on company resources from trying to refute the claims.

Others think that this is the kind of case that Tesla and Elon Musk will battle to the bitter end, attempting to uphold their principles and prideful in wanting to establish the sufficiency of their cars and their Autopilot system.

If they do launch an effort to fight the claims, there is a chance that the teams developing the Autopilot and Full Self-Driving (FSD) capabilities might become distracted and delayed in some aspects of further progress, having to devote time and attention toward legal discovery, depositions, and the like.

There is also speculation that this lawsuit and others that are already underway are likely to increase awareness among the Tesla development teams about the possible need to restructure or revamp some of their Autopilot and FSD code and approaches. In which case, this could be a “delay” of sorts, though presumably a sensible tangent to get their systems righted if they were askew or somehow awry.

One aspect that is doggedly true no matter what else transpires is that the road ahead for Level 2 and Level 3 cars is going to be a rocky and dangerous one.

The co-sharing of the driving task is generally a bad idea, and no matter what kind of tech or trickery is used to overcome the co-sharing confusion, there is always the risk of miscommunication or mistaken, ill-coordinated steps being taken by the human driver or the automation-based driver.

That’s why some have chosen to skip past Level 2 and Level 3 and aim entirely and only for Level 4 and Level 5, seeking to excise the human driver from the equation and have just one driver at the wheel, the AI driving system.

Sometimes, two heads are not better than one.


