Transportation

Here's How Robocars Will Drive In The Most Chaotic Of Cities


The traffic in Indonesia is mostly scooters, bike-taxis and a few cars

Brad Templeton

Sure, robocars are operating in Phoenix. But can they drive where driving is aggressive, like Boston, or worse, in places like India, Indonesia or Bangkok?

Nobody wants to try these difficult locations first. One problem at a time. Even so, self-driving projects in those locations, with no choice, are taking a crack at it.

Several big issues face the robocars:

  1. The instinct of all robocar programmers is to be timid, conservative and defensive in driving style. In some of these places, a timid driver can’t operate.
  2. Once drivers and pedestrians realize that robocars will never hit them, they will be tempted to drive and walk as if the cars aren’t there. Most people aren’t willing to cross the street when a car is approaching, with good reason. With robocars it will always be safe to do this, and people will quickly learn that and abuse it.
  3. Operating in these places often requires the use of facial and body language with other drivers.

Slow and chaotic

One thing that makes some of these cities more tractable is that while they are chaotic, travel is slow. Slow travel means you have more time to react to everything. Slow travel means you can come to a dead stop in a short distance. Once you have decided to stop, braking distance goes up with the square of your speed. With 1/3 second reaction time, it may take 70m to stop at 100km/h, but only 20m at 50km/h. Drop down to urban speeds like 15 km/h on a crowded street and you can stop in just 3m (10 feet). At Starship, our sidewalk robots can stop in 30cm (one foot).
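A quick back-of-the-envelope sketch makes these figures concrete. The 1/3-second reaction time comes from the text above; the braking deceleration of 7 m/s² (roughly 0.7g, a plausible hard stop on dry pavement) is an assumption:

```python
# Back-of-the-envelope stopping distances: reaction distance grows
# linearly with speed, braking distance with the square of speed.
def stopping_distance(speed_kmh: float,
                      reaction_s: float = 1 / 3,
                      decel_ms2: float = 7.0) -> float:
    """Total stopping distance in metres (decel_ms2 is an assumed figure)."""
    v = speed_kmh / 3.6                  # convert km/h to m/s
    reaction = v * reaction_s            # distance covered before braking starts
    braking = v ** 2 / (2 * decel_ms2)   # kinematics: v^2 = 2 a d
    return reaction + braking

for kmh in (100, 50, 15):
    print(f"{kmh:>3} km/h -> {stopping_distance(kmh):.1f} m")
```

With these assumptions the outputs land close to the figures in the text: roughly 64 m at 100 km/h, 18 m at 50 km/h and under 3 m at 15 km/h.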

If you think about humans walking in a giant crowded train station, going every which way all at once, they rarely hit one another at walking speed.

For the same reason, experiments have shown that even with cars, the removal of all road markings and signs improves both throughput and safety, but only at town speeds.

Faster and chaotic

Once you get to higher speeds in a chaotic place, it is no longer possible to drive extremely conservatively. Mobileye has been promoting a set of algorithms they call RSS (Responsibility-Sensitive Safety). RSS is a core set of algorithms that decide when a vehicle has right-of-way. Mobileye believes it can formally prove the correctness of this code. Rather than constraining the car, they hope this approach will allow it to drive with more aggression. A higher-level driving system, on top of RSS, can then virtually experiment with various possible paths through a situation, including aggressive ones, and ask the RSS module to rate them on safety and legality. The car can then pick the most aggressive route that fits within the constraints.
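The published RSS papers express these rules in closed form. As one example, the sketch below implements the longitudinal safe-following-distance rule from Shalev-Shwartz et al. (2017); the parameter values are illustrative assumptions, and the candidate-path filtering around it is a hypothetical planner, not Mobileye's actual system:

```python
from typing import Optional

# Sketch of the published RSS longitudinal safe-distance rule
# (Shalev-Shwartz et al., 2017). All parameter values are illustrative.
def rss_min_gap(v_rear: float, v_front: float,
                rho: float = 0.5,       # response time, s (assumed)
                a_max: float = 3.0,     # rear car's max accel during response, m/s^2
                b_min: float = 4.0,     # rear car's guaranteed braking, m/s^2
                b_max: float = 8.0) -> float:  # front car's worst-case braking
    """Minimum safe following gap in metres; below this, RSS flags the plan."""
    v_resp = v_rear + rho * a_max           # rear car's speed after the response time
    gap = (v_rear * rho + 0.5 * a_max * rho ** 2
           + v_resp ** 2 / (2 * b_min)
           - v_front ** 2 / (2 * b_max))
    return max(0.0, gap)

# A planner could score candidate paths and keep the most assertive safe one:
candidates = [{"path": "merge now", "gap": 6.0}, {"path": "wait", "gap": 25.0}]
safe = [c for c in candidates
        if c["gap"] >= rss_min_gap(v_rear=8.0, v_front=8.0)]
print(safe)  # only "wait" clears the ~11.7 m minimum gap at these speeds
```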

While cars in that situation will assert their proper right-of-way, the rule-set includes the principle that right-of-way is given, not taken, and as such if somebody else has refused to give you right-of-way, you don’t take it. You yield in a game of chicken. That is a reasonable rule, but it presents a problem in a world where drivers become aware of that philosophy.

If drivers know you will always yield if they take right-of-way, they will take advantage of that. They will cut you off, and know it is safe to do so. This may be unpleasant or even dangerous for passengers inside — for unmanned vehicles, nobody is inconvenienced, but sudden stops may create other risks and also slow traffic. It’s better to have a world of cooperating drivers who don’t cut one another off, even if that’s been the way of the world in some cities for some time.

European cities are tame, and US ones even more so.

Brad Templeton

Not to yield

Five years ago I suggested a possible radical answer. That is for unmanned vehicles to, very rarely, refuse to yield when they are in the right, and to control themselves so as to create a low-damage contact with minimal risk to the driver who made the challenge. It seems radical to have a car not do what it can to avoid an accident, but there is a reason to not show weakness.

In these events, the human driver would be in the wrong. There would be detailed recordings of the crash and who was wrong. That driver would get a ticket, be delayed, and their insurance would need to pay for damage to both the robocar and the driver’s car. These days, even a minor bump that bends fenders still comes with a high cost.

If you know the robots will yield to you, you will treat them like they aren't even there. If you know that one time in 100 you are going to get a small crash, you're going to completely reverse your approach, and avoid them like the plague. People cut in front of other people because they know that other humans will yield too. Nobody wants to be in a crash. But once the word got out about this, I don't think the robotaxis would find themselves in many crashes either. People would learn not to cut them off.

The car would need to be very certain that the law is with it, and that the impact will be minor. It would brake enough to avoid a serious impact, but not so much as to avoid a small ding. It might want to know the model of car it is hitting and what speed it is safe to hit it at. Or it might do something else.
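As a purely illustrative piece of physics, not a worked-out safety policy, the braking level that leaves a chosen small closing speed at the moment of contact falls out of the standard kinematic relation v_t² = v₀² − 2ad. The target "ding" speed and braking limit below are assumptions:

```python
from typing import Optional

# Purely illustrative physics, not a safety policy: from v_t^2 = v_0^2 - 2ad,
# the deceleration that leaves a chosen small closing speed at contact.
def decel_for_target_impact(v_close: float, gap_m: float,
                            v_target: float = 1.0,   # assumed "ding" speed, m/s
                            a_max: float = 8.0) -> Optional[float]:
    """Deceleration (m/s^2) that leaves v_target at contact, or None if even
    full braking cannot reach v_target within gap_m (then brake fully)."""
    if v_close <= v_target:
        return 0.0  # already slow enough that contact would be trivial
    needed = (v_close ** 2 - v_target ** 2) / (2 * gap_m)
    return needed if needed <= a_max else None
```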

Tit for Tat

While robocars should not be doing constant surveillance on the roads, they can and should remember when somebody cuts off one of their fleet. Remember the car type and color, remember the license plate, and even remember the face or other characteristics of the driver. The cars can build up a memory for the whole fleet, and know when a given car has been cutting off fleet cars beyond a certain threshold.

Then, if it sees that car, or ideally person, again, it can become less and less willing to yield when it has the right of way. On the roads today, those who drive badly do it to a thousand other drivers who have no memory. But a fleet of cars can have a memory, and practice tit for tat. (Game theorists know that tit for tat is the best strategy in situations where there is benefit from mutual cooperation, but benefit and loss when one person doesn't cooperate. The best strategy is to cooperate by default, but remember those who have refused to cooperate.)

Even in this situation, the cars would still cooperate and yield most of the time. Just knowing they will not always do so is enough deterrent. In addition, you would want to learn which cars have multiple drivers (like rental cars) and treat them differently. Most aggressive drivers cut off many people in an hour, so you don't even need that long a memory if your fleet is large. Indeed, you can remember when you see a car cut anybody off, not just you.
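A minimal sketch of such a fleet-wide memory, with a hypothetical offence threshold. A real system would presumably blend plate, vehicle and face recognition and decay old entries over time, none of which is shown here:

```python
from collections import defaultdict

# Hypothetical fleet-wide memory of drivers who refuse to cooperate.
class FleetMemory:
    def __init__(self, threshold: int = 3):
        self.offences = defaultdict(int)  # plate (or identity) -> cutoff count
        self.threshold = threshold        # illustrative assumption

    def record_cutoff(self, plate: str) -> None:
        """Any fleet car that is cut off (or witnesses a cutoff) reports it."""
        self.offences[plate] += 1

    def should_yield(self, plate: str) -> bool:
        """Tit for tat: cooperate by default, stop yielding to repeat offenders."""
        return self.offences[plate] < self.threshold

memory = FleetMemory()
for _ in range(3):
    memory.record_cutoff("B-1234-XYZ")
assert not memory.should_yield("B-1234-XYZ")  # known offender: assert right-of-way
```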

In some places, there are laws that put a duty on drivers to avoid an accident even when they are in the right. These laws may prohibit actions such as I have described. But maybe it is the right thing to modify these laws, just a little, to make people be better on the roads.

Human retribution

Vacant robocars are one thing, but most cars will have people in them. Most such passengers would not want their car to permit a crash (even when it's in the right) while they are inside. But some might. A very tiny number, perhaps well strapped in with a 4-point seatbelt, might even enjoy it. It would not take very many to make the cars unpredictable, and thus not worth screwing with.

There is another option, though. When somebody — even a pedestrian crossing at the wrong place — interferes with the proper right of way of a vehicle with passengers, those passengers have a right to complain. And the vehicle might offer them a means to do that, to push a button to cause the video evidence to be saved and forwarded to the company. Using its vehicle and face recognition software, the company would identify the bad offenders. It would then show up at the prosecutor’s with an iron-clad case against the individual, with all the work already done for them.

This would not be some random driver complaining about some random idiot who cut them off. We know the police and prosecutors would laugh at that, and tell them not to waste police time. This would be a major corporation, like Alphabet or General Motors or Tata, showing up with all its influence in the city. And it would show up with exactly the right number of cases so as not to waste police time. The worst offenders would get a knock on the door from police, or possibly a large fine. You would not want to be one of the worst offenders. Which means even the second-worst offenders would think twice. Even pedestrians would learn not to treat the vehicles like they are not there: the cars would always do everything possible to avoid hitting a pedestrian, but the companies don't have to leave it at that.

Facial and body language

Many robocar teams and academics have published research on using neural networks to try to understand the facial language and body language of pedestrians and other drivers. In addition, there has been research in many vehicles on putting signs on the outside, mostly for pedestrians, but potentially also for drivers. After all, robocars will be religiously good at using their turn signals, and there's no reason they can't communicate a great deal more about their intent.

At first, these signs are there to assure pedestrians that the car sees them. That's actually only a temporary need. Any robocar ready for deployment must see every pedestrian around it, all the time, with no exceptions. These signs reflect our old instincts, namely that we don't know if a human driver sees us because people can't look in all directions at once. We want eye contact and body language to be sure. Robocars look in all directions all the time. They are never not looking at you, and by the time they are ready to deploy, any lack of detection must be extremely rare. So rare, in fact, that a sign becomes useless, since a sign that always accurately says "I see you" is very quickly a sign that is never looked at.

It is possible with infrared cameras to detect whether a human's eyes are looking at the camera. We've all seen the "red eye" that appears in photos when you look directly at a camera whose flash sits next to the lens. That same effect can help detect people's gaze, without needing high resolution.
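One way such a detector could be prototyped is the classic bright-pupil/dark-pupil trick: alternate an IR flash on the lens axis with one off-axis and subtract the two frames, so that only retro-reflecting pupils (eyes turned toward the camera) survive. A minimal OpenCV sketch, assuming synchronized grayscale IR frames; the thresholds are illustrative assumptions:

```python
import cv2
import numpy as np

# Bright-pupil / dark-pupil differencing: an IR flash on the lens axis
# makes pupils glow (the "red eye" effect); an off-axis flash does not.
# Subtracting the frames leaves bright blobs roughly where eyes are
# looking toward the camera.
def gaze_blobs(on_axis_ir: np.ndarray, off_axis_ir: np.ndarray,
               min_brightness: int = 60, min_area: int = 4) -> list:
    """Return (x, y, w, h) boxes for candidate camera-facing pupils."""
    diff = cv2.absdiff(on_axis_ir, off_axis_ir)   # pupils survive the subtraction
    _, mask = cv2.threshold(diff, min_brightness, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]
```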

While neural networks may not be robust enough to drive you everywhere, recognizing human patterns is actually a problem they were born for. Expect interesting results in this field as time progresses.

Major behavioral change

Many people might declare it impossible that a city of crazy drivers could turn into a city of well-behaved ones. But we have both types of cities in the world, and often both types in the same state or country. It all depends on the style of the city, but it's far from ingrained. For example, Manhattan used to be a place of constant gridlock and honking. It was deeply part of the culture, or so everybody thought. Then the city decided to firmly crack down, after many failed attempts in the past, and it actually made a difference, though the fight continues as services like Uber have brought more traffic into the city, which the city hopes to cure with congestion charging. I don't think any city is beyond redemption.


