
Tesla’s ‘Full Self-Driving’ Is 99.9% There, Just 1,000 Times Further To Go


This week, Tesla released a very limited beta of what Elon Musk has referred to as “feature complete full self-driving” to a chosen subset of their early access customers. It also announced a $2,000 increase in the price of the “FSD in the future” package it sells today to car owners, giving them access to this software when it’s ready.

The package is impressing many of these owners. A few have posted videos to YouTube showing the system in operation on city streets. In spite of the name, “full” self-driving is neither self-driving nor full as most people in the industry would use those terms. It is more accurately described as Tesla “Autopilot” for city streets. Like Autopilot, it requires constant monitoring by the driver, as it can, and does, make mistakes that require the driver to take control to avoid an accident. It handles a wide variety of urban streets, but does not handle driveways or parking, so a human driver must do all driving at the start and end of the journey. The previous Autopilot only handled freeways, rural highways and some urban expressway-style streets, and did not handle signs, traffic lights or other essential elements.

What the vehicle does is slightly better than what Google Chauffeur (now Waymo) demonstrated in 2010 while I was working there, though it’s very important to know that it does it using just cameras and minimal maps, while Chauffeur used detailed maps and LIDAR, and made minimal use of cameras. In that intervening decade, neural network computer vision has exploded in capability. Tesla has made a very big point of their effort — effectively alone among major players — to avoid LIDAR and rely heavily on cameras (and radar, which everybody uses.)

The online videos show the vehicles handling a variety of streets and intersections. This includes small streets with no markings, large multi-lane streets, reasonably complex intersections including unprotected turns and more. This is what Tesla means by “feature complete” — that it does at least something on all typical routes. While one can’t make firm conclusions from just a small number of videos, there may be problems with construction and some classes of road, and we haven’t seen videos yet in harsh weather, though we have seen both day and night.

There are also several necessary disengagements — where the human driver has to grab the wheel and take control to avoid a very likely accident — in these videos. While no statistics are available about how frequently those are needed, it appears to be reasonably frequent. This is the norm with systems that require constant manual supervision, and is why they need that supervision. All the full robocar projects have also required one (or really two) “safety drivers” behind the wheel who have needed to make such interventions, frequently at first and less and less frequently as time goes on. Only recently have Waymo and now Nuro deployed vehicles with no supervising driver on board. (Cruise recently got the permit to do this but has not yet done it, though they claim they might by the end of this year. Amazon’s Zoox also has such a permit.)

Based on the videos and claims by Tesla of it commonly taking Elon Musk on his commute with few interventions and sometimes none, I threw out the number 99.9% in the headline. This is not a precisely calculated number, but a proxy for “seems to work most of the time.” In reality, we would want to calculate how often interventions are actually needed.

Tesla’s beta test — reportedly given only to drivers with some high level of safety record, though it’s not disclosed how that was measured — presents us with some big questions:

  • Just how frequent are the disengagements, and how serious are the situations?
  • Is Tesla effectively using its customers in the same role as robocar safety drivers, but with no training and no partner — and how does that contrast with what Uber was doing when it had a fatality?
  • How does Tesla’s decision to avoid LIDAR and detailed maps affect the quality of their system?
  • Can drivers use this safely, and what is gained by their testing?
  • Is this even legal?

How frequent are disengagements?

One can’t tell from just these videos, but we can compare with how often humans have accidents. Disengagements sometimes happen when they are not needed, so the real thing to measure is “necessary” disengagements, where something bad would have happened without the intervention. Many teams use simulators to play out what would have happened without the intervention. “Bad” can mean an accident of some severity, or just an obvious driving mistake, like veering briefly into another lane, even if one was lucky and it was empty.
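
To make that distinction concrete, here is a hypothetical sketch of the bookkeeping in Python. The simulate_counterfactual helper is an assumption standing in for whatever replay tooling a team actually uses; the point is simply that only takeovers that would have led to a driving mistake or a crash count as “necessary.”

```python
# A hypothetical sketch of counting "necessary" disengagements, with a
# placeholder standing in for a real counterfactual simulator.

from dataclasses import dataclass

@dataclass
class Disengagement:
    miles_at_event: float
    snapshot: dict  # recorded sensor and planner state at the moment of takeover

def simulate_counterfactual(snapshot: dict) -> str:
    """Replay the scene as if the human had not taken over.
    Returns 'no_issue', 'driving_mistake', or 'collision'.
    (Placeholder: a real team would run this in their simulator.)"""
    return snapshot.get("simulated_outcome", "no_issue")

def necessary_disengagements(events: list) -> list:
    # Count only takeovers where the replay says something bad would have
    # happened: an obvious driving mistake or an actual collision.
    return [e for e in events
            if simulate_counterfactual(e.snapshot) in ("driving_mistake", "collision")]

def miles_per_necessary_disengagement(events: list, total_miles: float) -> float:
    needed = necessary_disengagements(events)
    return float("inf") if not needed else total_miles / len(needed)
```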

Humans get some sort of “ding” every 100,000 miles, which is about 8-10 years of ordinary driving. Insurance companies find out about it every 250,000 miles (25 years), and the police are involved every 500,000 miles, or 40-50 years, or roughly once per lifetime. A fatality is fortunately very rare – every 8,000 years of human driving, and on the highway, every 20,000 years. As rare as that is for any one driver, we drive so much in total that we still have too many.
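
As a sanity check on those figures, here is the same arithmetic written out, under an assumption of roughly 10,000 miles of driving per year (my assumption, consistent with the year ranges quoted above, not a number from the article).

```python
# Back-of-envelope check of the human benchmarks above, assuming roughly
# 10,000 miles of driving per year (an assumed figure).

MILES_PER_YEAR = 10_000

benchmarks = {
    "some sort of ding": 100_000,        # ~10 years at this rate
    "insurance claim": 250_000,          # ~25 years
    "police-reported crash": 500_000,    # ~50 years, roughly once per driving lifetime
}

for name, miles in benchmarks.items():
    print(f"{name}: every {miles:,} miles, about every {miles / MILES_PER_YEAR:.0f} years")

# A fatality every ~8,000 years of driving works out to something on the order
# of 80-100 million miles between fatal crashes at this rate.
print(f"fatality: roughly every {8_000 * MILES_PER_YEAR:,} miles")
```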

As such, driving all day without needing an intervention seems very impressive, particularly to those new to the field, but it’s a very long way from where you need to be for actual full self-driving, rather than monitored driver assistance.

Is it OK to test this way? Can it be?

Most self-driving teams would not test a system at this level of quality without having well trained safety drivers, and putting two in each vehicle. Uber had poor training for their safety drivers, and admits it was mistaken in having only one driver per vehicle when a fatality took place. The main blame for the fatality was on the safety driver who ignored all rules and was watching a TV show on her phone rather than doing her job, but we know other Tesla owners will do similar things.

On the other hand, Autopilot shows that it is possible for drivers who follow a good procedure to use it safely. Statistics show that Tesla drivers using Autopilot on the highway are not as safe as drivers not using it, but only moderately less safe. (Tesla publishes misleading numbers claiming they are more safe.) This lower level of safety can probably be explained by the fact that some drivers use it properly and are safe, while others do not and lower the average. My experience is that a good method is to “shadow drive” in your head at all times: keep your hands on the wheel, moving them along with it as the car steers, but applying no real pressure. With that approach, if the wheel doesn’t move the way your hands are moving, you can quickly change from light force to real force and push the wheel where it should go.

Not all drivers are doing that, though, and some are very much not doing that, even trying to defeat Tesla’s warnings. Many suggest that the camera in the car should monitor the driver’s gaze to assure attention is paid. Other competing systems do this. Tesla has performed experiments on doing this but so far declines to deploy such a countermeasure.

Of course, having careful drivers test this software is very valuable to Tesla. It’s why other companies spend many millions of dollars having paid safety drivers run test fleets around several states.

There are corner cases even in the middle of the street

The goal of all this testing is to solve what has become the hard problem of deploying these cars. It is relatively “easy” to get to driving ordinary roads without constant interventions (though harder to do what Tesla has done, which is to handle urban streets with no LIDAR and no map.) Yet even when you can handle 99.9% of the situations, you are still only a tiny fraction of the way there. You need to find all those unusual and rare events people call “corner cases.” But these don’t just occur at the corners, and finding most of them is a very long and difficult project. It took Waymo’s top-notch team a decade to do it. Nobody else has really done it.

The declarations of delight by the drivers in the online videos are those of lay people who imagine that getting 99.9% of the way is solving 99.9% of the problem. It’s not; it’s solving less than 1/10th of 1% of the problem. In a driver-assist tool, with human monitoring, you can ask the human monitor to deal with the rest of the problem, and being the general problem solvers that humans are, we can usually do that.
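
One hedged way to see why 99.9% is only about 1/1000th of the way: if, purely for illustration, “handles 99.9% of situations” corresponds to a necessary intervention every hundred miles or so of city driving, then matching even the most forgiving human benchmark above still requires roughly a thousand-fold improvement. The numbers below are assumptions, not measurements.

```python
# A rough illustration of the gap, using assumed numbers rather than measured ones.

assumed_miles_per_intervention = 100   # assumed reading of "99.9% of situations handled"
human_miles_per_incident = 100_000     # the "ding" benchmark quoted earlier

improvement_needed = human_miles_per_incident / assumed_miles_per_intervention
print(f"Still roughly a {improvement_needed:,.0f}x improvement to go")  # ~1,000x
```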

That said, Tesla has a fantastic tool in the beta testers. Instead of paying people to test the vehicles, drivers are paying for the privilege of doing so. This lets Tesla rack up miles faster and at lower cost than anybody else, and they naturally want to exploit that.

That’s very good news for Tesla, because when I said they were 1/1000th of the way there, you might infer that this means they have thousands of years of work left to do. The good news is that because teams move faster and faster with time, they do the later work faster than the earlier work. Waymo might have taken 2 years to get 1/1000th of the way, but in 10 more years they are now almost there. That’s because over time they both grew their teams and testing, and also got access to all sorts of new technologies like neural nets, advanced simulation and more. Tesla has been doing that too, and also building its own processors, almost as good as Google’s own custom processors. Moore’s law is not yet dead, and it keeps offering more tools to make the rest of the work go faster. But Tesla still has a lot further to go.

Maps and LIDAR

What we see so far doesn’t tell us a lot about Tesla’s controversial choice to avoid LIDAR and the issues around that. I detail those in an earlier article on Tesla and LIDAR. Tesla hopes to create virtual LIDAR using cameras and neural networks, to tell them how far away any visual target seen by the camera is. The videos show us their perception of cars and other obstacles winking on and off, which is common with these vision systems, and shows they are not quite there yet.
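
To give a sense of what “virtual LIDAR” means in practice, here is a minimal, generic sketch (my illustration, not Tesla’s code): given a per-pixel depth estimate from a neural network, standard pinhole-camera math turns each pixel into a 3D point, the same kind of output a LIDAR would deliver.

```python
# A minimal sketch of the "virtual LIDAR" idea: if a neural network can predict
# a depth value for every camera pixel, those pixels can be back-projected into
# 3D points, much like LIDAR returns. This is generic pinhole-camera geometry,
# not Tesla's actual pipeline; the depth map below is a fake stand-in for a
# network's output.

import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project an HxW depth map (meters) into an (H*W, 3) array of 3D
    points in the camera frame, given pinhole intrinsics fx, fy, cx, cy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Example with a synthetic depth map; a real system would get this from a network.
fake_depth = np.full((480, 640), 10.0)   # pretend everything is 10 m away
points = depth_to_point_cloud(fake_depth, fx=500.0, fy=500.0, cx=320.0, cy=240.0)
print(points.shape)   # (307200, 3)
```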

We learn more about their decision not to use detailed maps. As I outline in this article on Tesla and maps, people seek to drive without maps to avoid the cost of maps, and to be able to immediately handle an entire class of roads without mapping them. Maps give you more data (even when they are wrong and the road has changed) and can help you be safer. Whatever you might believe about their cost, it is much too soon to think about doing without them; that is not something you want to do in the first cars to be deployed on the road. We can see this clearly in this video of Tesla “FSD” making a left turn onto a road with a divider.

In this video, you see the vehicle making a left turn. At almost precisely one minute, the vehicle computes an incorrect map of the road it is turning into, putting the road divider in the wrong place. It attempts to drive into oncoming traffic, but the driver takes over and puts the car in the correct lane.

A car that drives without a map has to make its map on the fly as it drives. It has to figure out where all the curbs, lanes, signs and traffic controls are and the rules of each lane, and from that choose what lane to drive in and how to get there. A car with a map relies on the experience of prior trips through that intersection, as well as human quality assurance of the understanding of the data. It still has to understand the scene, particularly in any area where the road has changed, and of course to understand all the moving things on the road, but it starts with a leg-up that makes it safer.
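
As a rough illustration of that leg-up (a sketch under assumed names, not anyone’s real data model), a map-using planner can treat its prior lane geometry as the default and use live perception mainly to confirm it, falling back to the on-the-fly estimate only when the road appears to have changed, which is exactly the situation the mapless car faces at every intersection.

```python
# A hypothetical sketch of the difference described above. With a prior map, the
# car starts from stored, human-QA'd lane geometry and mostly uses perception to
# verify it; without one, the on-the-fly estimate is all it has. All names and
# fields here are illustrative.

from typing import Optional

def lane_geometry(intersection_id: str,
                  prior_map: Optional[dict],
                  perceived: dict) -> dict:
    if prior_map and intersection_id in prior_map:
        mapped = prior_map[intersection_id]
        if roughly_matches(mapped, perceived):
            return mapped        # trust the prior built from earlier trips
        return perceived         # the road has changed: fall back, flag for re-mapping
    return perceived             # map-free case: the live estimate is all there is

def roughly_matches(a: dict, b: dict, tol_m: float = 0.5) -> bool:
    """Crude consistency check: same lane count, divider within half a meter."""
    return (a.get("lane_count") == b.get("lane_count")
            and abs(a.get("divider_offset_m", 0.0) - b.get("divider_offset_m", 0.0)) <= tol_m)
```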

Tesla will naturally work to make their car better at handling an intersection like this and not making the same mistake again. And their fleet means that they will get reports of this quickly and have an advantage in doing the incremental improvements they must do. But perfect understanding of the layout of the road from camera images is not yet possible, which is why most teams feel having a map with all the details makes them safer and more capable, even if it costs money to make and maintain the maps, and it initially limits the driving area where this higher level of safety is available. (A car that uses maps can still drive off its map, but only at the lower level of safety that a car which doesn’t use maps at all has, presuming that is acceptable safety at all.)

Is this legal?

Some of these testers are in California. This recalls an early incident when Uber’s self-drive team, led at the time by Anthony Levandowski, was attempting to test Uber’s vehicles in California. California law requires a permit for testing a self-driving vehicle. Uber did not want to get the permits. Levandowski, even though he had been involved (as was I) in the negotiations over drafting that law, claimed that because the laws exempted “driver assist” systems from needing permits, Uber didn’t need them, since they always had a safety driver in any test vehicle, and thus it was driver assist.

Car makers had pushed to get this driver assist exception in the laws, because they didn’t want the assist systems they were selling to suddenly need permits.

The California DMV said “no way.” After all, if everybody who used safety drivers were exempt, the law would be meaningless and no car would ever need a permit. Their view was that if you were actually trying to make a self-driving car, you needed the permit even if you had a person monitoring it as the official driver. They told Uber they would revoke the vehicle license plates of all the Uber cars unless they got in line.

This puts Tesla in an unusual situation. Tesla’s highway Autopilot is indeed a driver assist and needs no permits. But they call this a beta test version of “full self driving.” As I asserted above, this is not really full or self-driving as most industry insiders would use those terms, but as long as Tesla says this is an effort to build a self-driving car, it seems the permit law might apply to them, which could shut down their beta program in that state and possibly in other states with similar rules.


