Transportation

Your Deep Dive Into Those Waymo Self-Driving Cars In San Francisco That Went Headlong Up A Dead-End Street Like Floundering Fish Going Upstream


Readers have implored me to do a deep dive into the maelstrom of news reports this week about Waymo self-driving cars that seemed to have been oddly preoccupied with a dead-end street in San Francisco.

You got it!

Here’s my in-depth analysis and close-in street-level inspection.

Okay, in case you perchance missed the brouhaha, some of the blaring headlines this past week proclaimed that there was an invasion of dullard robo-taxis afoot, providing ostensibly surefire evidence that autonomous vehicles will act blindly and mindlessly. Others emphasized that the mystery was an enigma wrapped in a self-driving car’s subconscious mindset that perhaps showcased an obsession with no-way-out avenues.

Some even asked breathlessly whether this is an ominous sign that driverless cars are likely to all drive off the end of a pier, doing so as though they are AI-based lemmings.

Let’s start over and begin with what seem to be the known reported facts of the situation.

Waymo has been having its self-driving cars roam throughout various parts of the San Francisco area.

You can’t miss seeing them.

The other day, I was trekking on foot during lunchtime in the venerable Golden Gate Park and managed to witness numerous Waymo self-driving cars making their way on the nearby streets and avenues. The autonomous vehicles stand out since they are painted white, have the Waymo logo, contain an outcropping of driverless car sensors, and are Jaguar I-PACEs.

I’ll also note that there were plenty of other competing self-driving cars roaming around too, as though San Francisco has become the most treasured place on earth to try out driverless cars.

By the way, using San Francisco as a testbed does make indubitable sense.

The weather here is mild for the preponderance of the year. No need to worry about driving in the snow when the winter rolls around.

The San Francisco streets are relatively straightforward to navigate, though there are indeed lots of handy tricks when it comes to a myriad of oddball one-ways, crazily dangerous left turns, and uphill/downhill nerve-racking challenges. For testing on public roadways, San Francisco offers a nifty range of conventional driving considerations and yet also provides enough edge case instances to keep things lively.

Furthermore, San Francisco is just a stone’s throw away from the heart of Silicon Valley.

This makes the logistics somewhat easier than if the public roadways being used were hundreds or thousands of miles away from where the mainstay of the AI developers and engineers reside. As you’ll see in a few moments, this physical closeness does not guarantee that the organizational and company cultures within the self-driving realm of an entity will necessarily mesh well (it requires hard work and constant attention).

And the other huge influencing factor is the general welcoming embrace for self-driving cars in California and especially in the San Francisco and Silicon Valley areas. In other parts of the country, locals might eschew driverless cars and be upset to have these AI-based driving machines on their everyday roads. Put those darned contraptions onto private lands that have specialized testing tracks for autonomous vehicles, they might stridently bark.

In the Bay Area of California, no such outsized rebellion against self-driving cars has yet appeared (I say that because you never know what might change local sentiment).

For those that do want to test their self-driving cars on non-public roadways hereabouts, I’ll point out as a friendly shoutout that the GoMentum Station sits a mere hour away or less. I’ve often described the facility in my prior columns, indicating that it is a fine facility, commonly referenced as the AAA Northern California, Nevada, and Utah secure collaborative space, having all the bells and whistles needed to do driverless car testing to your heart’s content (see my coverage at this link here).

But, getting back to San Francisco, on a daily basis there are lots of Waymo self-driving cars making their way to and fro. This is so routine that there is scarcely any media attention directed toward their ongoing roadway activities.

That being said, many of the media that rashly covered the recent Waymo self-driving car “invasion” tended to underplay the role of so-called Slow Streets that are arranged here in San Francisco.

Allow me a sidebar to bring you up to speed about Slow Streets.

The agency that oversees transit in the city of San Francisco is formally known as the San Francisco Municipal Transportation Agency (SFMTA). You need to be aware of this because the efforts of the SFMTA dovetail into the alleged mystery of the Waymo self-driving cars going up the dead-end street.

Here’s how the SFMTA describes the Slow Streets initiative: “The SFMTA’s Slow Streets program is designed to limit through traffic on certain residential streets and allow them to be used as a shared space for people traveling by foot and by bicycle. Throughout the city, nearly thirty corridors have been implemented as a Slow Street. On these Slow Streets, signage and barricades have been placed to minimize through vehicle traffic and prioritize walking and biking. The goal of the Slow Streets program is to provide more space for socially distant essential travel and exercise during the COVID-19 pandemic.”

I trust that you see the ample logic in the Slow Streets agenda. The hope is to encourage the public to walk on foot, ride bicycles, use wheelchairs, employ scooters, relish skateboards, and otherwise proceed around town via other related means of micro-mobility. Well-placed barriers and placards inform car drivers that they are about to enter or cross designated Slow Streets.

Drivers are supposed to appropriately honor these designations.

Admittedly, not all human drivers do so. By and large, though, my observation is that people do seem to abide by the Slow Streets for much of the time.

For a self-driving car, you would certainly expect that the AI driving system would obey the Slow Streets provisions. I say that without implying that the AI is somehow sentient and can miraculously read signs like us humans do. It can’t. It isn’t.

This might be a good segue to clarify what I mean when referring to self-driving cars. Let’s do that and lay the foundation for then describing what happened in San Francisco with the self-driving cars that appeared to be fixated on a particular dead-end street.

Understanding The Levels Of Self-Driving Cars

As a clarification, true self-driving cars are ones in which the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5 (see my explanation at this link here), while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5; we don’t yet know whether this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend, see my coverage at this link here).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).

For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that’s been arising lately, namely that despite those human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3 vehicle.

Self-Driving Cars And The Invasion

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.

All occupants will be passengers.

The AI is doing the driving.

One aspect to immediately discuss entails the fact that the AI involved in today’s AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can.

Why is this added emphasis about the AI not being sentient?

Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and inarguable fact that no such AI exists as yet.

With that clarification, you can envision that the AI driving system won’t natively somehow “know” about the facets of driving. Driving and all that it entails will need to be programmed as part of the hardware and software of the self-driving car.

Let’s dive into the myriad of aspects that come into play on this topic.

I’d like to briefly lay out for you the specific streets and avenues involved in the recent Waymo story. Try to picture this in your mind, which I think will be pretty easy to do.

First, I’d like you to think of a capital letter “I” that has a line across the top and a line at the bottom of the vertical portion of the letter (that’s the font being used herein).

The vertical part or stem is going to be considered as 15th Avenue for purposes of this discussion. The bottom horizontal portion that is perpendicular to the stem is going to be labeled as California Street. The top horizontal portion that is also perpendicular to the stem is going to be labeled as Lake Street.

Are you with me on this?

We thus have two streets that cut across 15th Avenue. The one at the base is California Street, while the one at the top of the stem is Lake Street.

Envision that you make a right turn from California Street and do so to get onto 15th Avenue. You next proceed up 15th Avenue and gradually reach the next major crosscutting street, namely Lake Street.

You usually have three choices at this juncture.

One choice is to make a right onto Lake Street and continue your journey. Another choice would be to instead make a left onto Lake Street and continue your journey. The third choice would be to proceed ahead and cross through Lake Street, continuing forward and still remaining on 15th Avenue.

Most of the time, you would be unlikely to choose the straight-ahead option of continuing on 15th Avenue. The reason you would not do so is that 15th Avenue then dead-ends shortly after you’ve continued forward past Lake Street.

For added trivia, the dead-end on 15th Avenue is not quite conventional. There is a locked gate that leads into The Presidio. This is a famous place, including The Presidio Landmark Building, and film buffs might remember the movie named after and filmed at this location, starring Sean Connery, Mark Harmon, Meg Ryan, and others.

When looking at a map, it would be easy to not realize that 15th Avenue is considered a dead-end.

Meanwhile, anyone from out of town who perchance were to drive there at this time would have to be completely unaware not to see the dead-end signs and other indications that this is supposed to be a dead-end, and is in fact, for all practical purposes, a dead-end.

There is a missing ingredient to this layout that I shall now reveal to you.

It turns out that Lake Street is a member of the Slow Streets program.

Here’s why that is vital to this tale of woe.

I mentioned earlier that when you are coming up 15th Avenue from California Street, you will come to the intersection of Lake Street and 15th Avenue and have to choose what to do next. I said you could go one of three ways, either to the right, or to the left, or straight ahead.

Aha, that is not quite the case when Lake Street is within the Slow Streets program (which it is!).

You would only turn left or right if you had a darned good reason to do so, such as if you were going to a nearby apartment and were going to park there, doing so within a very short distance. Any other driving along that stretch of Lake Street is generally considered verboten. That being said, laser beams are not going to fry your tires and stop you from making those turns.

As per my earlier comment, human drivers might or might not abide by the Slow Street provisions.

You can bet your last dollar that some human drivers make that left or right and do not give a hoot about the signage telling them that this is a Slow Streets location.

One would hope that an AI driving system would abide by the Slow Streets program. This is not because the AI is sentient. It is because the AI would have been programmed to recognize the Slow Streets and then have been coded to abide by the stated provisions.
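To make that notion concrete, here is a minimal sketch, purely my own illustration using an assumed toy road graph and made-up segment names (this is not Waymo’s actual code or map format), of how a route planner could treat a Slow Street flag on a map segment as off-limits for through traffic while still allowing a trip that actually ends on that segment:

```python
# Minimal sketch: route planning that honors a Slow Street flag on map segments.
# The graph, segment names, and rules below are illustrative assumptions only.
import heapq

# Hypothetical road graph: node -> list of (neighbor, length_in_meters, is_slow_street)
ROAD_GRAPH = {
    "California_15th": [("Lake_15th", 400, False)],
    "Lake_15th": [
        ("Lake_East", 120, True),     # Lake Street segments carry the Slow Street flag
        ("Lake_West", 120, True),
        ("15th_DeadEnd", 60, False),  # the leftover spit of 15th Avenue
    ],
    "Lake_East": [], "Lake_West": [], "15th_DeadEnd": [],
}

def plan_route(start, goal):
    """Dijkstra-style search that refuses to traverse Slow Street segments
    unless the segment in question is the actual destination."""
    frontier = [(0, start, [start])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nxt, length, is_slow in ROAD_GRAPH.get(node, []):
            if is_slow and nxt != goal:
                continue  # no through traffic on a Slow Street
            heapq.heappush(frontier, (cost + length, nxt, path + [nxt]))
    return None  # no lawful route found

print(plan_route("California_15th", "Lake_East"))     # allowed: the trip ends on Lake Street
print(plan_route("California_15th", "15th_DeadEnd"))  # the only remaining path is the dead-end
```

The point of the sketch is that the compliance lives in the map data plus the planner, not in any human-like reading of signage.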

I’ll also point out that if a self-driving car violates the Slow Streets provisions, this would likely be considered a violation of the requirement that self-driving cars in California must conform to the prevailing driving laws and regulations. Per the Department of Motor Vehicles (DMV) in California, those automakers and self-driving tech firms that have received a permit to test their self-driving cars on California public roadways are obligated to drive legally (I’ve extensively covered this, see the link here).

Any illegal driving would need to be reported and explained, and whether or not so reported, the chances are that the permit could be revoked.

So, there is a lot of pressure on those testing their self-driving cars to avoid getting into situations that might involve a potentially illegal driving action. You don’t want to lose your permit. It would mean that you could not presumably do any further public roadway testing in all of California. There would be a need to resubmit and try to get a new permit, which could be an uphill battle if a firm had already messed up once before.

Do you now see the dilemma upon reaching the intersection of 15th Avenue and Lake Street?

You aren’t supposed to turn left. You aren’t supposed to turn right. Your only seeming option is to proceed straight ahead. The problem though is that this then takes you into a small spit of a leftover portion of 15th Avenue that then dead-ends.

Why care about all of this?

Because any driver worth their salt would figure this out and, likely after perchance one time falling into this bit of a roadway trap, would avoid going that way entirely. In essence, the moment that you make the decision to go from California Street onto 15th Avenue and head north, you are boxing yourself in.

You either make an immediate U-turn upon reaching that intersection, though this is dicey and could be construed as an illegal maneuver, or you push ahead and then turn around at some point in the dead-end, heading back down 15th Avenue and ultimately getting out of the “trap” that you’ve managed to get yourself intertwined within.

A driver that doesn’t know the local roads could easily fall for this trap. But, only once. After that, it would seem that any sensible driver would discover they ought not to go up 15th Avenue from California Street unless they intend to park and stay somewhere along that stretch of 15th Avenue.

Otherwise, there is no kind of thoroughfare along that piece of roadway.

Whew, you are now up-to-speed about the physical layout underlying this saga.

We next turn our attention to the matter of the Waymo self-driving cars that found themselves entering into this roadway trap.

Before I jump into the throes of things, I’d like to mention that as a tech executive having overseen many large-scale tech deployments, and as a consultant to such firms, plus at times being a corporate spokesperson for what our tech was doing, I am wholly sympathetic to the difficulties involved in managing and leading deployments of complex tech systems including self-driving cars.

Like a doctor that is about to diagnose a patient based solely on the symptoms observed from afar, it is hard to say what per se was going on inside the heads of those overseeing these activities. I will try to offer helpful and constructive insights that might be of benefit to all.

According to the news reports of last week, Waymo self-driving cars have been regularly going up 15th Avenue from California Street. Those vehicles then opted to not make a left or right turn onto Lake Street (rightfully so, due to the Slow Streets provisions). Instead, they proceeded ahead onto the last spit of 15th Avenue and then opted to try and turn around once amidst this dead-end portion, heading back down to escape the trap.

If a self-driving car did this one time, and then someone or something within the company noticed that the trap existed, it would seem logical to imagine that the AI driving system would get revised or adjusted to avoid this quirky circumstance.

Apparently, that did not seem to happen, and only after the media storm exploded did the matter get somehow rectified (apparently, there aren’t self-driving cars going up that way for now; in theory, all that it would take is to include a command that says don’t go up 15th Avenue from California Street or even easier just mark the onboard electronic maps accordingly, though this is only the tip of the iceberg as to what overall should be undertaken).
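If you are curious what such a map-level fix might look like, here is a hedged sketch, with invented segment identifiers and an overlay format that are purely my assumptions rather than anything Waymo has published, of marking that leftover spit of 15th Avenue as do-not-route so the planner simply never sends a vehicle up there:

```python
# Illustrative sketch: a downloadable map overlay that restricts routing on a segment.
# Segment IDs, field names, and the overlay shape are assumptions for illustration.
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class SegmentRestriction:
    segment_id: str                        # hypothetical internal map segment identifier
    restriction: str                       # e.g., "no_through_route"
    reason: str
    effective_until: Optional[str] = None  # None means "until further notice"

MAP_OVERLAY = [
    SegmentRestriction(
        segment_id="sf_15th_ave_north_of_california",
        restriction="no_through_route",
        reason="Dead-ends at the Presidio gate; Lake St Slow Street blocks the exits",
    ),
]

def is_routable(segment_id: str, overlay=MAP_OVERLAY) -> bool:
    """The route planner consults the overlay before adding a segment to any plan."""
    return not any(
        r.segment_id == segment_id and r.restriction == "no_through_route"
        for r in overlay
    )

print(is_routable("sf_15th_ave_north_of_california"))  # False: the planner avoids the trap
print(is_routable("sf_lake_st_west_of_15th"))          # True: other segments are unaffected
```

Notably, a data change of this sort could presumably be pushed to the fleet without touching the driving code itself, which is part of why I say the quick fix is only the tip of the iceberg compared to the broader process lessons.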

Up until then, according to locals who live there (and as per the media reporting), this was taking place for perhaps 6 to 8 weeks.

Yes, sit back down and calm yourself, the reported time frame is supposedly 6 to 8 weeks of those self-driving cars making that same loop-de-loop, over and over again.

You might even be willing to dismiss the looping if it was just one specific self-driving car instance that was doing this (kind of like a bad apple in the barrel). Unfortunately, it appears that by and large much of the fleet there was doing the same. Again, per locals that were quoted by the media, they said that Waymo self-driving cars would repeatedly come that way.

This happened so much that Waymo self-driving cars would get bunched up.

You see, one of the self-driving cars goes into the trap and ends up at the dead-end. It tries to turn around. Meanwhile, another one of the self-driving cars comes along and is now behind or maybe even face-to-face with the other self-driving car that is already at the end of the trap. They can readily get clogged up there.

You can sympathize with those that live on that portion of 15th Avenue. Self-driving car after self-driving car all continuing to repeatedly make that same trek. Those astounded humans standing on the sidewalk or looking from their apartment windows can see this spectacle play out. Some said they gradually took it all in stride.

Amazing!

As I said before, San Francisco seems to be a good place to test your self-driving cars. The odds are that this kind of repeated activity would wear thin in other locales and people might be upset enough to try and curtail it.

Realize too, this was not simply something that happened once on one specific day.

And, realize too, this was not simply only one of their self-driving cars (like a rogue vehicle in the fleet).

Apparently, this went on and on for many weeks and encompassed a slew of the fleet that is being tested there in San Francisco (we don’t know for sure how many of the autonomous vehicles did this, but it seems that many did).

Some have tried to excuse this by suggesting that maybe it was all part of the testing plan. In other words, let the AI driving system keep making the same kind of driving choices and see if the AI self-learns what to do.

I seriously doubt this.

First, there is almost no AI today that is being deployed to make real-time learning decisions like this. The reality is that the AI developers are supposed to review the daily efforts and antics of the fleet of self-driving cars that are being tested. They are supposed to do detailed analyses of where the self-driving cars went. They are looking for patterns of driving behavior or standouts of driving behaviors.

They are supposed to also take into account any feedback from the safety drivers that are in the self-driving cars.

As a related aside, the Waymo self-driving cars reportedly had safety backup drivers in them. This is prudent and part of the DMV requirement for “with driver” testing. There is also testing that can be done via “without driver” which requires a different permit. For my explanation about the role and proper use of backup or safety human drivers during the public roadway testing of self-driving cars, see the link here.

Let’s take a breather and do a quick recap.

Numerous self-driving cars being tested were making the same problematic choice of going up 15th Avenue from California and getting themselves snagged into making the turnaround or U-turn once inside the dead-end spit. This occurred supposedly over many weeks, perhaps a month or two in duration.

It would almost seem snide to suggest that someone, in a non-literal sense, fell asleep at the wheel, in that the chain of command was either not informed or was informed and seemingly did nothing to stop this version of Groundhog Day.

Some pundits have said that this is much ado about nothing because no one was harmed during this conveyor belt of self-driving cars coming and going. No harm, no foul, they contend.

Though it is thankfully the case that no one was harmed, we still have to consider whether this portrays self-driving cars and the testing of self-driving cars in the best of lights. Probably not.

Per the reports in the media, here’s what apparently Waymo stated in response to queries about the matter: “We continually adjust to dynamic San Francisco road rules. In this case, cars traveling North of California on 15th Ave have to make a U-turn due to the presence of Slow Streets signage on Lake,” a spokesperson for Waymo told SFGATE in a statement. “So, the Waymo Driver was obeying the same road rules that any car is required to follow.”

You have to give due credit to the communications team that proffered that response. This seems to minimize the matter and merely allude to the aspect that the self-driving cars are legally driving and that’s what they are trained to do. End of story.

Given the rapid pace of the news life cycle, the rather bland response would be sufficient to placate the crazed media at the time. The chances are that the story will blow over soon enough and no one will want to dig any deeper.

Some have suggested that maybe the self-driving cars were all programmed to specifically take that particular route. In essence, the AI developers and the operations team were determined to have the self-driving cars go that specific way.

I doubt this.

My guess is that the self-driving cars were programmed to wander around the San Francisco area. They don’t necessarily have the same pre-programmed route. They might have some preestablished guidance as to which parts of town to visit, though otherwise, they were roaming here and there.

Why then did so many end up at that same kind of chokepoint?

That’s easy to answer.

When my kids were young, I would take them to the local playground that was a rather small park-like space. Rather than only playing on the playground equipment, I would urge them to run around the small park, stretching their legs. The park was pretty small. They would go to one edge, turn and come back. In just a few minutes, they readily crisscrossed the area, many times over.

San Francisco might seem like a large place if you go there as a tourist. If you were to drive around in your car, you might discover that it isn’t as large as it seems. Imagine too that you are asked to drive your car throughout San Francisco, doing so for several hours a day, and doing so for several days of the week. You would ultimately be revisiting the same streets and avenues, quite a lot.

I would guess that the same was probably happening with the Waymo self-driving cars. They were doing their roaming. When they came toward the 15th Avenue trickery, they would just proceed along, given that they were not programmed to be on alert to stay away from the roadway trap.

In short order, in the time it takes for one self-driving car to try and turn around, you could readily have another self-driving car that is also roaming and happens upon the same setting. Nothing untoward about this per se. The self-driving cars are confined to the San Francisco area, presumably, and when you have lots of them in that same playground, they are going to statistically end up in the same places from time to time.
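If you want to put a bit of arithmetic behind that intuition, here is a toy simulation in which the grid size, fleet size, and step counts are made-up numbers of my own choosing and have nothing to do with Waymo’s actual operations; it merely shows that a modest fleet roaming a modest area keeps bumping into the same corner:

```python
# Toy simulation: randomly roaming vehicles on a small grid revisit the same corner.
# All parameters are illustrative assumptions, not operational data.
import random

random.seed(42)
GRID = 10            # pretend the service area is a 10x10 grid of blocks
TRAP = (0, 9)        # stand-in for the 15th Avenue dead-end corner
FLEET_SIZE = 20
STEPS = 2000         # roam steps per vehicle over some stretch of days

def roam(steps):
    """Random-walk a single vehicle and count its visits to the trap corner."""
    x, y = random.randrange(GRID), random.randrange(GRID)
    visits = 0
    for _ in range(steps):
        dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x = min(max(x + dx, 0), GRID - 1)
        y = min(max(y + dy, 0), GRID - 1)
        if (x, y) == TRAP:
            visits += 1
    return visits

total_visits = sum(roam(STEPS) for _ in range(FLEET_SIZE))
print(f"Fleet-wide visits to the trap corner: {total_visits}")
# Even aimless roaming piles up repeated visits to the very same spot.
```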

Anyway, that’s a bit of speculation.

Speaking of speculation, I’ll venture a bit further into that space, and do so with some insights based on having dealt with situations involving engineering versus ops disparities.

First, be aware that high-tech companies tend to have engineering or AI developers as a team, and there is a separate operations team that takes the self-driving cars onto the roadways and carries out the operational aspects of the testing.

In most companies, there is an ongoing battle between the AI engineering side of the business and the ops side of the business.  This is a truism just about everywhere. You might say it is akin to the kind of infighting that happens between cats and dogs. Sometimes they just don’t get along.

Part of this is a difference in organizational positioning, namely, they might be in two completely different internal groups and therefore have that kind of a gap or split between them. Different incentives for doing their jobs. Different career paths. At times, they even speak different vocabularies, in the sense that the AI engineers are apt to use techno-jargon that the ops side might not be familiar with. Etc.

Another is the usual finger-pointing that almost always happens between engineers and ops.

Ops tries to do something in the field and then tells the engineers that something didn’t work. The engineers roll their eyes and believe that ops fouled up the testing. The ops people get upset that the engineers are treating them unkindly and things spiral downward from there. You’ve probably seen this since it is quite prevalent.

Managers and leaders need to keep on top of such schisms and take overt action to try and prevent the usual splintering that takes place.

You can try all sorts of strategies to overcome this. You might have the engineers undertake an ongoing ride along with the ops team members. You can have daily face-to-face meetings that are well-structured and try to curtail or extinguish the unhelpful finger-pointing and accusatory outbursts. There are numerous ways to try and meld together the two otherwise sometimes disparate teams.

You’ve got to wonder, what were ops telling the AI engineers in this instance?

They must have been reporting this matter internally. But was it not clear? Was it being dismissed as mistaken? Maybe it was being seen as a non-issue. So what, the AI side might exhort, it doesn’t matter that the self-driving cars are going into a dead-end and turning around. This is what self-driving cars are supposed to be able to do.

Of course, this was apparently happening repeatedly over a lengthy time period. It would seem less sensible to simply shrug it off.

One might also imagine that perhaps ops got tired of trying to convey what was going on and might have decided to just remain somewhat mum. If the gap between ops and engineering gets sizable, they can sometimes revert to the staid notion that it is the job of the AI engineering team to deal with the driving, and as long as the self-driving cars weren’t getting into trouble (nobody was getting hurt, nobody was complaining), might as well just go along for the ride, as it were.

According to the news media, locals supposedly would ask the safety drivers what was going on. The news claimed that the locals claimed that the safety drivers would simply say that the self-driving car was programmed to do the driving.

That brings up another interesting facet.

What should the backup drivers say in such a circumstance?

They could be trained to utter the standard no-comment kind of refrain. The problem with this is that the public members seeking an answer might view this as a form of stonewalling. That could turn those friendly locals bitter and lead them to decide they no longer want self-driving cars around them.

A handy reply by the backup drivers is to use the now-classic “the computer did it” kind of response. We see this occur throughout our day-to-day existence. Follow me on this. Why was I turned down for that home loan? I don’t know, says the banking clerk, and laments that it is just what the computer decided to do. Those darned computers, keep doing things we don’t like.

Of course, the reality is that there are programmers and teams and managers and all sorts of people in a company that devise those computers and are ultimately responsible for what the computers do. The easy sidestep is to blame things on the computer, as though it does the thinking. Almost nobody will refute this and will acquiesce as though you’ve given them some intelligible response.

In that manner, when the backup drivers said the self-driving car was programmed to do what it was doing, this was probably a handy scapegoat answer, and yet it was also ostensibly the facts at hand. The safety drivers presumably had little or nothing to do with the programming. They could plainly see that the programming was not seemingly doing things as might be hoped for, and they might have been at their wits’ end from those of the public asking about it, but if they were unable to sufficiently inform others and get things changed, well, you know how it goes.

Shifting attention to the AI engineers and developers, one wonders what kind of daily and weekly analyses they are doing about the driving journeys of self-driving cars.

The number of “disengagements,” which is when the backup driver takes over the vehicle, would have been quite high. That being said, you can somewhat avoid the formality of the disengagements count if the safety driver takes over the vehicle in the normal course of things, versus if it is an emergency or potential safety issue.

I’ve discussed at length the debates there are about how to best count disengagements, see the link here.

I bring this up because some of the media reported that the safety drivers tended to do the turnaround in the dead-end, especially when there were multiple self-driving cars in that same dead-end area. This makes sense. The AI driving system probably would struggle to deal with the turnaround, particularly given the tighter spaces and the obstacles of having other cars, such as self-driving cars, taking up part of the roadway too.

I’m not saying that the AI driving system couldn’t necessarily do the turnaround on its own. Maybe it could. The odds are though that the safety drivers probably anticipated that it would be a slow process for the AI driving system to do the turnaround, and thus it might be easier and quicker to have the backup driver take the wheel. I’m pretty sure that the backup drivers would have wanted to get out of the dead-end as quickly as possible, especially since they knew that the locals were somewhat irked at the repetition of it all.

This taking of the driving by the backup driver is not in an emergency or dire safety context. Per the DMV official verbiage, a disengagement is undertaken “because of technology failure or situations requiring the test driver/operator to take manual control of the vehicle to operate safely.” You could take the reasoned posture that the manual control was simply a convenience of making the U-turn, and not as a result of any technology qualms or safety qualms per se.

But, nonetheless, internally these manual overrides ought to be logged as manual driving tasks that were being done by the safety driver. When a lot of manual driving tasks occur, this is usually flagged. Keep in mind that this was supposedly occurring day after day, for weeks. An internal analysis that slices and dices the driving data would seemingly catch this.
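As a rough sketch of what that slicing and dicing might look like, using invented log fields, thresholds, and location labels that are not drawn from any actual Waymo system, even a trivially simple aggregation over the daily drive logs would surface this kind of hotspot:

```python
# Illustrative sketch: flag locations where manual takeovers keep recurring.
# The log schema and threshold below are assumptions for illustration only.
from collections import Counter

# Hypothetical drive log entries: (vehicle_id, event_type, location)
drive_log = [
    ("AV-017", "manual_takeover", "15th Ave @ dead-end"),
    ("AV-102", "manual_takeover", "15th Ave @ dead-end"),
    ("AV-044", "manual_takeover", "Market St @ 5th"),
    ("AV-017", "manual_takeover", "15th Ave @ dead-end"),
    ("AV-063", "manual_takeover", "15th Ave @ dead-end"),
]

RECURRENCE_THRESHOLD = 3  # arbitrary cutoff for "this keeps happening here"

def flag_hotspots(log, threshold=RECURRENCE_THRESHOLD):
    """Return locations where manual takeovers recur often enough to warrant review."""
    counts = Counter(loc for _vid, event, loc in log if event == "manual_takeover")
    return {loc: n for loc, n in counts.items() if n >= threshold}

print(flag_hotspots(drive_log))  # {'15th Ave @ dead-end': 4} -> escalate for review
```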

Conclusion

There’s my deep dive.

I urge all of the self-driving car entities to use this as a learning experience. It could happen to any of them.

As a helpful list of what to do on an ongoing basis, consider these key points:

·        Periodically, make sure to relook at how ops and the AI engineering teams are doing in terms of working jointly and amicably. Are there additional means to further ensure that they blend together? Are there any indications of potential gaps emerging, and if so, try to close them before they grow larger.

·        Ensure that the ops team and especially those safety drivers are able to readily convey what they are experiencing out in the field. They often have a keen sense of what is working and what is not working. Leverage their seat-of-the-pants observations.

·        Make sure that your AI engineering team is extensively analyzing the driving data that is being collected as part of every self-driving car journey. Do they have the right kinds of automated tools that can look for anomalies? Sometimes, the AI team is overworked already and doesn’t have the time or the tools to do the grunge work of sifting for potential issues.

·        Another factor is the chain of command. How do reports about the driving efforts get conveyed up to the top? Are they being sufficiently kept in the loop? Sometimes there are blockages in organizations that don’t want to let unsavory aspects float to the top. It is up to the leaders to make sure they don’t get blindsided.

·        Don’t wait until things get outsized and into the public eye. The usual assumption is to be watching for the truly bad things such as perhaps having a human-driven car that rear-ends a self-driving car. You should also be looking for aspects that aren’t necessarily a collision, but that are going to be perceived by the public as something oddball or untoward.

I have a lot more that could be said on this.

For now, this seems sufficient.

I’m guessing that in the days ahead, there might be more reporting on last week’s situation. In that case, there might be more aspects revealed that shed additional light on the matter. Stay tuned!

This was largely a no harm, no foul “use case” and I trust that it can be considered relatively good news in that it brought some lighthearted attention to self-driving cars and the advent of autonomous vehicles, out of which we can all glean some useful lessons.

As a tech executive that has been in these shoes, I hope that the tidbits proffered herein will be of some solace and insight for those dealing with the herculean task of trying to make self-driving cars become an everyday reality and inspirationally bring mobility-for-all to fruition.


