Transportation

New LIDARs From Waymo And Others Produce Amazing Results


A recent clip of Waymo’s perception stack shows off not just good perception, but also the quality of Waymo’s latest LIDAR. We also saw impressive output earlier this year from Argo, whose primary funder is Ford. In both cases, the LIDAR was developed in house (or acquired), as is also true at Aurora and Yandex, and there is also a huge range of independent LIDAR companies trying to produce LIDARs to sell to these teams and to auto OEMs.

The modern self-driving revolution was powered by a LIDAR built by Velodyne, prototyped for the DARPA contests in 2005. This “spinning bucket” was large and expensive and offered 64 lines of resolution. Today’s units are vastly cheaper, much smaller, more robust, longer range and higher resolution. Indeed, the images above seem almost photographic, though they show only grayscale, not color. Some newer LIDARs (including Aurora’s) can also tell the speed at which any dot in the image is moving relative to you, either using FMCW Doppler (as radar does) or by sending two rapid-fire pulses to detect movement over the course of a few milliseconds.
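To make the dual-pulse version concrete, here is a toy sketch of the arithmetic: two range samples of the same point, taken a couple of milliseconds apart, yield a radial velocity. The function names and numbers are invented for illustration; real units do this per point, in hardware, with calibrated timing.

    # Toy sketch: radial speed from two rapid-fire range samples of one point.
    C = 299_792_458.0  # speed of light, m/s

    def range_from_tof(round_trip_s):
        """One-way range in meters from a round-trip echo time."""
        return round_trip_s * C / 2.0

    def radial_speed(r1_m, r2_m, dt_s):
        """Speed along the beam between two range samples of the same point.
        Negative means the point is closing on you."""
        return (r2_m - r1_m) / dt_s

    # A point measured at 100.00 m, then at 99.95 m two milliseconds later:
    print(radial_speed(100.00, 99.95, 0.002))  # -> -25.0 m/s (closing)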

The result is vision that’s truly superhuman. A real, almost infallible 3D view with the 4th dimension of speed and the 5th dimension of brightness. Our eyes really see only 2 dimensions at a distance, plus the 3rd “dimension” of color. They must figure out distance using the understanding of the living brain, and speed also must be calculated by watching things over a period of time. (Up close, having 2 eyes helps figure out distance.)

Elon Musk says a lot of notable things, but in the self-driving sensor world, his most notorious claim is that “LIDAR is a crutch.” He declares that Tesla will never use it, because he feels that solving self-driving requires computer vision so advanced that it can figure out distance and speed the way a human does, and that if you have that, you’re wasting your time and money with LIDAR.

Whether or not Musk is right that LIDAR is a crutch, today it’s very clear that computer vision has only one leg and needs a crutch, and the new generations of LIDAR are becoming a very powerful crutch indeed. In the past, LIDAR’s resolution was low enough that you could not reliably identify many targets, but that’s not true for the images above, particularly when your neural networks are trained to recognize targets from their 3D shape, not just a flat RGB image. LIDAR has always had the advantage that you could be close to 100% sure that something of sufficient size was in front of you, and should avoid it, even if you didn’t know exactly what it was. Identification is becoming less of a problem.
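As a toy illustration of that classic property, here is a minimal sketch of an “is anything big in my path?” check over a point cloud. The thresholds, the crude ground filter and the function name are all invented for illustration; production stacks are far more sophisticated.

    # Toy check: flag any sufficiently large cluster of LIDAR returns in the
    # lane ahead, whether or not we can identify what it is.
    import numpy as np

    def obstacle_in_path(points, lane_half_width=1.5, max_range=60.0,
                         min_height=0.3, min_points=20):
        """points: (N, 3) array of (x forward, y left, z up) in meters."""
        in_lane = ((np.abs(points[:, 1]) < lane_half_width)
                   & (points[:, 0] > 0.0) & (points[:, 0] < max_range))
        off_ground = points[:, 2] > min_height  # crude ground rejection
        return int(np.count_nonzero(in_lane & off_ground)) >= min_points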

The LIDAR world consists of sensors for real self-driving, and sensors aimed at making more reliable “ADAS Pilot” systems which still require human supervision. Self-driving at highway speeds really wants long range, 250m or more, especially for trucks; city speeds can get by with less. Near-infrared LIDARs claim to reach around 200m, and some a bit further, but with low-quality results at that range. The new generation of short-wave infrared (1550nm) LIDARs sees those long distances very well.
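The 250m figure follows from simple stopping-distance arithmetic. Here is a rough sketch with illustrative deceleration numbers (a loaded truck on a wet road can be far worse):

    # Rough stopping distance: reaction travel plus braking travel.
    def stopping_distance_m(speed_kph, reaction_s, decel_mps2):
        v = speed_kph / 3.6  # m/s
        return v * reaction_s + v * v / (2.0 * decel_mps2)

    print(stopping_distance_m(110, 0.5, 7.0))  # car, dry road: ~82 m
    print(stopping_distance_m(110, 0.5, 3.0))  # loaded truck: ~171 m

Even the truck case fits inside 250m, but only with that margin left over for perception latency and the poor returns at the edge of range.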

The reporting of speed turns out to be fairly important. One of the significant issues in self-driving today is the “sudden surprise obstacle.” The classic (and tragic) archetype is an emergency vehicle stopped in, or intruding into, the left lane. You might be driving in that lane behind a van or other large vehicle, unable to see past it. Suddenly the driver of that van swerves right to avoid the stopped vehicle, and it is revealed to you, leaving only a very short time to respond. It’s a challenge even for human drivers.

Any LIDAR is useful here, as it will never miss that the vehicle is there in front of you. However, at first it doesn’t know that the vehicle is stopped. You need to look at 2-3 “frames” of LIDAR or camera video to be reasonably certain it’s stopped. Many LIDARs run at only 10 to 15 frames per second, which means it can take 2 to 3 tenths of a second to figure that out, during which you’ll travel 20-30 feet. Humans take even longer to react, but that’s no excuse.
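The arithmetic is straightforward; here is a small sketch (the speed and frame counts are illustrative):

    # Distance traveled while confirming, over several frames, that a
    # revealed obstacle is actually stopped.
    def confirmation_distance_m(speed_kph, frames_needed, fps):
        return (speed_kph / 3.6) * (frames_needed / fps)

    print(confirmation_distance_m(110, 3, 15))  # ~6.1 m (~20 ft)
    print(confirmation_distance_m(110, 3, 10))  # ~9.2 m (~30 ft)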

This is why there is interest in getting the speed of any target. That way you know instantly that the obstacle is there and that it’s not moving, and that you must brake or swerve now. It may be impossible to avoid a crash, but the sooner you start braking, the less severe it will be.
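A toy kinematics sketch shows why those fractions of a second matter; all numbers here are illustrative:

    # Residual impact speed if the obstacle is closer than full stopping
    # distance: v^2 = v0^2 - 2*a*d.
    import math

    def impact_speed_kph(speed_kph, braking_distance_m, decel_mps2=7.0):
        v0 = speed_kph / 3.6
        v2 = v0 * v0 - 2.0 * decel_mps2 * braking_distance_m
        return math.sqrt(v2) * 3.6 if v2 > 0 else 0.0

    # Obstacle revealed 40 m ahead at 110 km/h. Braking on the first
    # Doppler return, versus waiting ~0.25 s (about 7.6 m) to confirm:
    print(impact_speed_kph(110, 40.0))        # ~70 km/h impact
    print(impact_speed_kph(110, 40.0 - 7.6))  # ~79 km/h impact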

NHTSA is currently investigating crashes of Teslas into stopped emergency vehicles. These are driver-assist cars, so in all cases the human driver failed to stay alert and avoid the crash, but at the same time, everybody wants the driver-assist systems (both Autopilot and the Automatic Emergency Braking) to be better. With some irony, since some of these crashes have resulted in injuries, the belief that LIDAR is a crutch may have contributed to some people needing real crutches.

The very brief “robocar winter” of 2020 pushed many auto OEMs to focus on ADAS Pilot tools rather than self-driving, while the tech companies and startups continued on the robocar path. GM just announced an “Ultra Cruise” and Ford a “Blue Cruise” product. BMW also has advanced plans for its luxury vehicles, which will use a LIDAR from Israel-based Innoviz that is small and uses tiny MEMS mirrors to steer the beam. GM is reported to be planning to use Silicon Valley-based Cepton’s LIDAR, which uses a secret steering mechanism based on voice coils and resonant vibrations. Ford, which had invested in Velodyne but sold off its stake, is likely to use the Argo instrument at some level.

ADAS LIDAR has to be low cost — car companies will not put very expensive systems in their cars — but it can afford to be lower performance. Almost every LIDAR company claims it will eventually sell for about $250 per unit. That’s probably true, because all electronic devices become cheap in quantity, though for now the number is pulled from the air: they know it’s about as much as car OEMs will pay for a component in a mid-priced car.

Who wins?

So LIDAR is indeed getting better and cheaper. Cameras are already cheap (though the processing for them has a cost), and computer vision is getting better. The same computer vision techniques also work on LIDAR, and can get better results on closer targets by exploiting the inherent 3D data. Computer vision probably does a better job on distant targets, because you get more pixels, though only in 2D. The combination works well: there must be no mistakes up close, while there is time to recover from mistakes far away. The answer to the question of lasers vs. cameras has always been both, unless it’s much later in the game and you care more about saving money than safety and capability.
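A back-of-envelope comparison shows why each sensor wins at its distance. The sensor specs below are invented but plausible: a 4K camera with a 60-degree lens against a LIDAR with 0.1-degree horizontal point spacing.

    # Pixels vs. LIDAR points across a 1.8 m-wide car at various distances.
    import math

    def pixels_on_target(width_m, dist_m, h_res_px=3840, hfov_deg=60.0):
        """Horizontal camera pixels landing on the target."""
        return (width_m / dist_m) * h_res_px / math.radians(hfov_deg)

    def points_on_target(width_m, dist_m, angular_res_deg=0.1):
        """Horizontal LIDAR points landing on the target, per scan line."""
        return (width_m / dist_m) / math.radians(angular_res_deg)

    for d in (50, 150, 250):
        print(d, round(pixels_on_target(1.8, d)), round(points_on_target(1.8, d)))
    # 50 m: ~132 px vs ~21 points; 250 m: ~26 px vs ~4 points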

It’s not at all clear which LIDAR companies will win the day. They are all searching for different sweet spots in a multi-axis world of the factors below (a toy trade-off sketch follows the list):

  1. Range — you need 250m or more for highway
  2. Cost — and you may need several per vehicle
  3. Resolution and point density — more is always good, but adds cost
  4. Doppler or other speed measurement
  6. Field of view — some see 360 degrees with a narrow vertical range, others just 60 x 15 degrees, and everything in between
  6. Robustness to vibration and the environment and ability to keep in calibration, and ease of calibration
  7. Reliability of the manufacturer (before you bet your product on it.)
  8. Frequency of failure before needing service or replacement
  9. Size (and, to a lesser extent, weight) and the effect on mounting location
  10. Frame rate (how often it scans the same region) or “flash” ability
  11. Special abilities like regions of interest, or slower steerability (like Waymo’s PBR, which sees a narrow field but can quickly be pointed anywhere in the 360 degrees if there’s something interesting.)
  12. Ability to handle rain, fog and other atmospheric conditions
  13. Power budget
  14. Aesthetics (probably last on the list until much later) and also aerodynamic effects for highway electric vehicles.
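As a rough illustration of how a buyer might weigh these axes, here is a toy weighted-scoring sketch. Every axis weight and score below is invented; real procurement decisions involve testing, audits and negotiation.

    # Toy weighted scoring of a LIDAR candidate against two use cases.
    HIGHWAY_ROBOTAXI = {"range": 5, "cost": 2, "resolution": 4, "doppler": 3,
                        "fov": 3, "robustness": 4, "frame_rate": 3}
    ADAS_PILOT = {"range": 3, "cost": 5, "resolution": 2, "doppler": 2,
                  "fov": 2, "robustness": 4, "frame_rate": 2}

    def score(candidate, weights):
        """Weighted sum of per-axis scores (each 0-5)."""
        return sum(w * candidate.get(axis, 0) for axis, w in weights.items())

    unit = {"range": 4, "cost": 2, "resolution": 5, "doppler": 5,
            "fov": 3, "robustness": 3, "frame_rate": 4}
    print(score(unit, HIGHWAY_ROBOTAXI))  # 92 of 120: strong robotaxi fit
    print(score(unit, ADAS_PILOT))        # 68 of 100: cost weighting drags it down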

Note that the primary differentiators in the literature, namely things like beam-steering method (spinning, mirrors, MEMS, etc.) and laser wavelength, are not on this list; rather, they affect the factors that are. The first market for most LIDAR companies was prototype robocars, with the hope of becoming the chosen sensor when the robocar got out of prototype. As that has gone more slowly than hoped, most companies now see ADAS tools as the earlier and more lucrative market, and that is where most business effort currently goes.

Several LIDAR makers have gone public via SPACs, though most have seen significant drops in share price from their initial price, except Luminar, which is above its initial price but well below its early peak. With low sales volumes, it’s hard for these companies to impress Wall Street, though all those with deals for production vehicles tout them loudly. They also face competition from in-house LIDARs like those above, internal Tier One units from companies like Bosch, camera-ADAS leader Intel/Mobileye, and a wide range of Chinese competitors. They may also face competition from imaging radar and, eventually, computer vision.

ADAS LIDARs don’t need anything like the specifications of a robocar unit, and they must also be much lower priced. They tend to be used only to look forward, and existing cars demand they be hidden for aesthetics — unlike robocars, which are happy to look strange and show off their futuristic essence.

In other words, this is a hugely complex market, with no one clear path to victory. Because most people who are not Elon Musk feel that LIDAR is the key enabling sensor technology for robocars, there has been tremendous investor excitement about LIDAR and its competitors. That excitement has faced rocky waters during the brief “trough of disillusionment” of the Gartner hype cycle, but it is bound to heat up again soon — though not all the players can survive.


