MobilEye, an Israeli company which is the pioneer and overwhelming leader in computerized driver-assist systems, has had ambitions in self-driving for many years. Intel acquired the company in 2017.
On Jan 11 at the virtual CES, MobilEye CEO Amnon Shashua outlined more of their strategy in this space, including new LIDAR hardware and their software architecture. While most efforts to turn ADAS into self-driving are probably doomed, MobilEye is an exception and a real contender. Most press deservedly goes to Waymo, and less deservedly to Tesla.
The MobilEye strategy is a “vision first” strategy, derived from their long work on computer vision chips for ADAS. Unlike Tesla, they believe the best strategy is to combine that vision technology with LIDAR and other sensors to produce a greater level of safety. Their strategy for combining (fusing) the results of the different systems is fairly different from that of most teams. They also have their own mapping strategy, building maps from compact data uploads from some cars in the very large fleet of MobilEye-equipped vehicles, and constantly updating them in this way.
In the press conference they stated:
- They believe their driving and mapping algorithms translate very easily to different cities and other environments. They can do localized adaptations quickly and deploy quickly in a new city.
- They will be deploying soon in Shanghai, Tokyo, Paris, Detroit and, if they get approval from the city, New York.
- Today they use Luminar, but in 2025 they will have their own long-range LIDAR built using Intel silicon photonics expertise, able to do FMCW (a style of LIDAR which uses lower energy at long range and returns the velocity of every point it identifies).
- They believe they must do much better than human drivers to be accepted. Shashua cited a human crash every 50,000 hours of driving (it’s actually more frequent), which would mean a fleet of 50,000 cars would be having a crash every hour, something the public will not accept. They think they can do much better.
- Imaging radar will also play a role in their sensor strategy. It acts like a very low resolution, long range Doppler-capable LIDAR.
- Their long-promoted “RSS” safety architecture, which prevents a vehicle from taking actions that violate the vehicle code but still allows some aggression, is “one of [their] crown jewels” and will be central to their plan.
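The fleet arithmetic in the crash-rate bullet above is simple enough to check directly. This uses the 50,000-hour figure Shashua cited (the article notes real human crash rates are more frequent):

```python
# Back-of-envelope check of the fleet-safety arithmetic cited above.
# Assumption: one human-level crash per 50,000 hours of driving.

HOURS_PER_CRASH = 50_000   # cited human rate
FLEET_SIZE = 50_000        # hypothetical fleet, all driving at once

crashes_per_hour = FLEET_SIZE / HOURS_PER_CRASH
print(crashes_per_hour)  # 1.0 -> roughly one crash somewhere in the fleet every hour
```

Scale the fleet up or the crash interval down and the public-perception problem only gets worse, which is the point of the bullet.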
Unusual sensor fusion
All cars use multiple sensors which detect things in different ways. All robocar software stacks must handle this and want the benefit of the different sensors. Most teams do some sort of “sensor fusion,” looking at obstacles seen by the different sensors and attempting to combine them to produce one grand perception map. They work to make sure they understand that a car seen by the camera is the same as one seen in a radar ping and one seen as a set of LIDAR points. Simple fusion approaches have each sensor produce a list of objects and then try to combine them. More complex fusion looks at raw sensor data, with only preliminary understanding, to produce the master object list.
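As an illustration of the simple “object list” style of fusion described above, here is a minimal, hypothetical sketch (not MobilEye’s actual code, and all names are invented): two sensors each report a list of detections, and detections that land close enough together are merged into one object.

```python
from math import hypot

def fuse_object_lists(camera_objs, lidar_objs, max_dist=1.0):
    """Naive late fusion: pair detections from two sensors that lie
    within max_dist metres of each other; unmatched detections pass
    through on their own. Each detection is (x, y, label)."""
    fused, used = [], set()
    for cx, cy, clabel in camera_objs:
        best, best_d = None, max_dist
        for i, (lx, ly, _) in enumerate(lidar_objs):
            d = hypot(cx - lx, cy - ly)
            if i not in used and d <= best_d:
                best, best_d = i, d
        if best is not None:
            used.add(best)
            lx, ly, _ = lidar_objs[best]
            fused.append(((cx + lx) / 2, (cy + ly) / 2, clabel))
        else:
            fused.append((cx, cy, clabel))   # camera-only detection
    for i, obj in enumerate(lidar_objs):
        if i not in used:
            fused.append(obj)                # LIDAR-only detection
    return fused

camera = [(10.0, 2.1, "car")]                              # x, y in metres, plus a label
lidar  = [(10.2, 2.0, "car"), (4.0, -1.0, "pedestrian")]   # pedestrian missed by camera
print(fuse_object_lists(camera, lidar))
```

The hard cases are exactly the ones the article goes on to discuss: what to do when the lists disagree, rather than merely overlap.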
MobilEye believes in a different approach. They claim that each sensor is different enough that they wish to build fairly complete and independent self-driving systems based on the different sensors. One will be vision based (like Tesla or existing MobilEye EyeQ ADAS tools.) Another will be just LIDAR+Radar, like the earliest version of the Waymo/Google car. Both will produce a driving plan.
MobilEye’s hope is that while the first system might make a mistake every 10,000 hours (too frequent) and the second might also have that error rate, the two would rarely make the same error. If that’s true, the resulting system, in theory, could make a mistake only every 100 million hours. This statement surprised me, because there are a number of ways it can’t be true. First of all, the errors of the two systems are not that independent. They will make the same mistake in the same place, sometimes in low-level perception and sometimes in more advanced analysis. Secondly, when the two systems disagree, it is not always clear which one is right. If one system says, “I see a rock in the road, swerve around it!” and the other wants to go straight, which do you believe? The quality of the whole will depend greatly on the arbiter which attempts to combine the results.
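The arithmetic behind the 100-million-hour figure is just the product of the two mean times between errors, and it holds only under the independence assumption questioned above:

```python
# Mean time between errors (in hours) for each hypothetical subsystem,
# using the figures cited in the article.
mtbf_vision = 10_000   # vision-only subsystem
mtbf_lidar  = 10_000   # LIDAR + radar subsystem

# If (and only if) the errors are statistically independent, the chance
# both err in the same hour is the product of the per-hour error rates,
# so the combined mean time between simultaneous errors is the product
# of the individual figures.
mtbf_combined = mtbf_vision * mtbf_lidar
print(mtbf_combined)  # 100000000 -> the "100 million hours" claim
```

Any correlation between the two subsystems’ failures shrinks that number, which is why the article treats the product as an optimistic upper bound rather than a prediction.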
MobilEye hopes that this approach fits perfectly with the way their company has grown. For many years they have made computer vision chips (sometimes adding radar) to provide ADAS features like lanekeeping, adaptive cruise control, collision warning and more. They hope to just keep making that better and better until it’s good enough for self-driving. This is a controversial stance. Many feel that there is not likely to be an “evolutionary” path from driver assist to real self-driving. Sterling Anderson of Aurora, who previously was a leader on Tesla Autopilot, calls it “trying to build a ladder to the moon.” However, Tesla and others are betting on this evolutionary approach. MobilEye is being smarter about it than Tesla, because it is also building LIDAR; it knows the problem is hard, too hard to solve today with just a camera.
They hope that they can leverage their prior success. That remains to be seen (though it makes sense as a strategy for them), and I remain skeptical of the claim that the errors are independent enough to truly take the product of the error rates. However, that doesn’t mean that two systems can’t be better than one; just not that much better.
MobilEye picked their plan because going straight to a real robocar is hard and expensive and no money is made until you’ve done it — on the other hand, they started making lots of money a long time ago in ADAS, letting them build the company. But on the third hand, investors have been very ready to plow tons of money into those taking that direct route to the brass ring.
Scaling to other cities
Recently, MobilEye released a video of their car driving an hour around Munich. The video was nice enough, but what was really impressive about it was not revealed before this press conference. Shashua stated they did this drive after sending a couple of cars to Munich, and had 2 non-engineering staff spend 2 weeks driving the cars around making maps, and that with this small amount of effort, they were ready to do the demo. Let’s be clear: it is no longer considered “breakthrough” stuff to do a one-hour driving demo without interventions. Most capable teams can do that, particularly if they cherry-pick the video to be the first time they pulled off a long drive without a problem. The real accomplishment is doing it quickly with few staff.
Establishing service in a new city is hard, and probably takes more work than MobilEye did. You need not just to adapt to that city’s rules and situations and map it; you also want to make nice with the city officials and do all the other things needed to move any service to a new city.
Shashua claimed their mapping system now builds the maps without need for any manual labour. That makes them very quick and affordable to build, but I think that claim actually goes too far. A little bit of human tweaking and quality assurance is still very worthwhile.
Because MobilEye will sell its products to carmakers, it doesn’t have to worry about the hard on-the-ground work of scaling a fleet. It already knows how to make chips and sell them to automakers.
MobilEye correctly believes that doing robotaxi is easier than making a consumer car. A robotaxi comes home every night and remains under fleet operator control. A robotaxi only has to drive a subset of streets in its service area. A robotaxi can cost a lot more because that adds only a small amount to the price of each ride. A consumer car wants to drive any street that customers would like it to work on, and customers care about the price.
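A rough amortization shows why sensor cost matters less for a robotaxi than for a consumer car. All the numbers here are hypothetical illustrations, not figures from MobilEye:

```python
# Hypothetical: spread the extra cost of a robotaxi sensor suite over
# the rides the vehicle gives across its service life.

extra_sensor_cost = 20_000   # assumed extra hardware cost, in dollars
rides_per_day = 30           # assumed utilization of a fleet vehicle
service_years = 4            # assumed service life

total_rides = rides_per_day * 365 * service_years
cost_per_ride = extra_sensor_cost / total_rides
print(round(cost_per_ride, 2))  # 0.46 -- well under a dollar per ride
```

A consumer car gets no such amortization: its owner pays the full hardware premium up front, which is why the consumer product is so much more price-sensitive.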
The question of whether “evolutionary” methods from ADAS to robocar will work is still unsettled. But if somebody is going to do it, MobilEye is a strong contender with their good market plan, and the resources of Intel, the world’s leading chip company. They are definitely a company to continue to watch.