
Cruise Look Under The Hood Reveals Real Details On Extensive Efforts


Cruise, a startup funded by GM and Honda, has received a lot of attention due to that relationship, but has not previously revealed deep details on their plans and technology. Today they held a video session to reveal many new details. The session was aimed at recruiting new staff, but was made available to other outsiders, and a recording will soon be available to the public.

This follows on their reveal of their first ride with no safety driver in the vehicle on Monday.

The briefing was fairly detailed, and outlined many of the components of the work at Cruise. Unlike most such videos, it was fairly dense with information, so even though it runs over 2 hours, those with strong interest may find it worthwhile. Just under 1,000 watched it live.

Much of what was reported was what you might expect a well-funded project to be doing. Below is a summary of some of the issues and projects covered, with special note of work Cruise is doing that may not be seen at other major teams.

(Note: Because the recording was not available at this time, it was not possible to confirm quotes and statements with it. This story may be updated tomorrow.)

Sensing and Perception in San Francisco

San Francisco is a complex environment. Every hour Cruise sees a bike blow through road rules, and somebody blows through a stop sign every 20 minutes. Surprise emergency vehicles appear out of nowhere and vehicles will U-turn in front of you where they shouldn’t. They value the experience they have in SF and encounter new situations every day to train their perception and prediction engines. A typical intersection scenario in SF will involve at least 100 potential interactions among agents (other road users) and over 5,000 potential trajectories for all the agents as they react to other agents.

Cruise stated several times that they have moved from traditional algorithms to a machine-learning-first approach. While they outlined a number of ways they have applied ML, there was still plenty of conventional algorithmic development. One potentially novel approach described suggested they are training neural networks on the results of algorithmically generated simulations and predictions, to produce a smaller, more general system that does not need to explore an entire space to look for an answer. (It was not entirely clear.)

Prediction

One of the large challenges for all teams is prediction — estimating what other road users are likely to do, and dealing with it. Cruise’s prediction engine considers several types of uncertainty:

  1. Kinematic uncertainty — you don’t know exactly where an agent will go. When this is high, you slow down a bit to be ready for probable contingencies, but not so much as to block traffic.
  2. Existence uncertainty — there are always areas the vehicle’s sensors can’t see, behind corners and vehicles. You have to consider what to do if there’s a car, bike or pedestrian or anything else in those spaces, and be ready for it, without going nuts. This includes understanding hills, and how vehicles might appear to you as they crest a hill.
  3. Modeling error — predictions will make mistakes, so the system must consider what to do if the unexpected happens even if that’s never been seen before.

To operate, the vehicle has to constantly play out all these forecasts and uncertainties, make the best plan that minimizes risk, then watch and continue the plan as long as it remains correct. In evaluating these plans, Cruise weights not just safety but also, at a lower weight, good road citizenship.
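To make the idea concrete, here is a minimal sketch (my own illustration, not Cruise's actual code) of picking a plan that minimizes a risk cost, with road citizenship weighted at a lower priority than safety; the candidate plans, risk numbers and weight are all invented for the example.

```python
# Illustrative sketch: choose the plan with the lowest weighted cost,
# where collision risk dominates and citizenship penalties (blocking
# traffic, hard braking) count at a much lower weight.
from dataclasses import dataclass

CITIZENSHIP_WEIGHT = 0.1  # assumed: safety dominates citizenship

@dataclass
class Plan:
    name: str
    collision_risk: float       # aggregate risk across all forecasts
    citizenship_penalty: float  # inconvenience to other road users

def plan_cost(plan: Plan) -> float:
    return plan.collision_risk + CITIZENSHIP_WEIGHT * plan.citizenship_penalty

def best_plan(plans: list[Plan]) -> Plan:
    return min(plans, key=plan_cost)

plans = [
    Plan("proceed", collision_risk=0.30, citizenship_penalty=0.0),
    Plan("slow_down", collision_risk=0.05, citizenship_penalty=0.2),
    Plan("hard_stop", collision_risk=0.02, citizenship_penalty=1.0),
]
print(best_plan(plans).name)  # slow_down: near-minimal risk, modest penalty
```

The point of the weighting is visible in the toy numbers: a hard stop is marginally safer, but slowing down buys almost all of that safety without annoying everyone behind you.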

Simulation

Cruise showed impressive simulation efforts. They are doing both pre-perception simulation where they simulate sensor outputs, and post-perception simulation for planning. They feel that their simulations are good and they will do less and less on-road testing over time. They are at the point, as teams get when they have driven a lot, where new situations are encountered in the wild less and less often, and it is easier to find them in simulation.

Their simulation tool includes:

  1. A sophisticated parameterized scenario generation tool that easily allows the creation of large numbers of useful variants of any situation.
  2. A “Road To Sim” system that can quickly translate data gathered by car sensors of real world situations and turn it into a good simulation scenario automatically.
  3. NPC AI, where the simulator is able to use little AI agents to simulate what other road users will do, in a manner akin to video game non-player characters. This lets them generate scenarios that are dynamic, with other vehicles which react in a realistic way to what the other actors and the Cruise car do.
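The NPC idea can be sketched in a few lines. This is a hypothetical illustration of mine, not Cruise's system: a single lead vehicle on a 1-D lane that reacts to the ego car the way a video-game non-player character would, braking when tailgated and otherwise returning to its cruising speed. All the numbers (gaps, speeds) are invented.

```python
# Hypothetical "NPC" road-user agent: a lead vehicle that reacts to the
# ego car each simulation tick, in the spirit of video-game NPCs.
def npc_step(npc_pos, npc_speed, ego_pos, safe_gap=10.0):
    """Advance a lead NPC one time step along a 1-D lane (metres, m/s).

    The NPC brakes when the ego car closes within safe_gap, and
    otherwise accelerates back toward its cruising speed of 15 m/s.
    """
    gap = npc_pos - ego_pos
    if gap < safe_gap:
        npc_speed = max(0.0, npc_speed - 2.0)   # brake to restore spacing
    else:
        npc_speed = min(15.0, npc_speed + 1.0)  # speed up toward cruise
    return npc_pos + npc_speed, npc_speed

# The ego car tailgates at a 5 m gap, so the NPC slows from 10 to 8 m/s.
pos, speed = npc_step(20.0, 10.0, ego_pos=15.0)
print(pos, speed)  # 28.0 8.0
```

Populating an intersection with many such agents, each reacting to the others and to the Cruise car, is what makes a scenario dynamic rather than a fixed replay.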

They claim they have been able to make the tool capable enough to build an entire city in sim in just a few hours, and plan to use it to generate other cities that they will move to, and test in them virtually without much work.

Extensive work has also been done on lighting simulation, including HDR cameras, and the great challenge of radar simulation. In the end though, one can test things 180x faster in sim than on the road, according to Cruise.

Data pipeline tools

Davide Bacchet outlined the many tools in the Cruise ecosystem for handling the huge volumes of data they process, and the tools they give to developers. This was more for the recruits, so I recommend going to the video to learn more about this. They are building this to scale, including a system to manage “a fleet in every city” which they call Starfleet, and automated map generation tools that need minimal human intervention to build their maps.

Cruise emphasized many times that it was important that they do almost all the things in the presentation in-house. They claim that strong vertical integration is the only way to solve the hard problem of driving, and as such only a few players can actually win at it … including Cruise, of course.

Origin

The session on Origin, Cruise’s custom vehicle, was interesting and merits more coverage later. Origin has many parallels with Amazon’s Zoox effort at a custom vehicle. Of interest was the disclosure of a high-resolution, long-range radar which might eventually replace LIDAR in certain situations. Radar sees fine through fog but normally has low resolution; other companies are also eagerly working on that problem.

Also revealed was that Cruise is making its own computing chips, and will switch away from GPUs and other chips by their 4th generation compute platform in a few years. They have both an edge computing chip and a central compute chip. The chip will deliver over 1,500 8-bit integer TOPS with high bandwidth.

Ride Hail Experience

The section on the ride hail experience was modestly disappointing. While it did point out a number of the advantages that come from robotaxi service over human-driven Uber-style service, it seriously overstated several of them. Yes, it can be nice not to deal with the unreliability of Uber drivers from time to time, but that’s not world changing. Perhaps most useful is the fact that robotaxis will be much more willing to be scheduled and to sit and wait for you, as robots don’t mind waiting. (In their scenario a nurse scheduled a ride for the end of her shift, and Cruise suggested she could just walk right out the door into her waiting Cruise car — except clearly a car can’t be at the door for every person getting off shift at that time. As I wrote yesterday, pick-up and drop-off is a complex problem.)

A lot of time was also spent on how much easier it could be to call a vehicle to come back to you if you discover you left your keys in it. That’s probably true, but it ignored what would actually be much better: the car will photograph its interior after you exit, and notice that something has been left behind compared to the photo it took before you entered. This is also how a car will know you’ve made a mess in it. So the car should actually tell you about the left item before you get two feet away. I first outlined this approach back in 2008 in my essay “A week of Robocars.”
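The before/after comparison can be sketched very simply. This is my own toy illustration (not any shipping system): compare two cabin photos as flat lists of grayscale pixels, and flag a left-behind item when a meaningful fraction of pixels changed markedly; both thresholds are invented for the example, and a real system would use proper computer vision.

```python
# Toy sketch of before/after cabin-photo comparison: flag a leftover
# item when enough pixels changed significantly between the two shots.
def item_left_behind(before, after, pixel_thresh=30, frac_thresh=0.02):
    """before/after: equal-length lists of grayscale pixel values (0-255)."""
    changed = sum(1 for b, a in zip(before, after) if abs(b - a) > pixel_thresh)
    return changed / len(before) > frac_thresh

# Toy frames: a dark "phone" covers 10% of the seat after the ride.
before = [200] * 100
after = [200] * 90 + [40] * 10
print(item_left_behind(before, after))  # True
```

In practice lighting changes between the two photos would defeat naive pixel differencing, which is why the thresholds (and a real implementation) matter; the sketch only shows the shape of the idea.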

Q&A

The senior executives answered some questions. Kyle Vogt stated that nobody, including Cruise, has any magic that will make them win, but that Cruise has the best combination of factors in its partners, its testing and simulation regimens and many other things noted. They feel they are improving faster than others.

Asked about winter climate locations, Cruise said they want to focus on what they are doing now, and there are lots of markets in non-snowy cities to keep them busy.

On the question of whether AVs will annoy other drivers, Vogt stated that he felt Cruise AVs were getting better at that every day, and that this will continue. In addition, he thinks that because AVs will be identifiable, and more dependable and predictable than human drivers, they will actually be more welcome than random human cars as long as they don’t do anything annoying.

The question of whether the first drive came on schedule got a clear answer from Vogt. He says he and everybody else expected it would come a lot sooner. Many people feel that, because a small team can get a basic vehicle driving in a short time, it should be easy, but it’s really 10,000 times more work.

Forward for Cruise

Combined with the earlier start of rides with no safety driver for employees, Cruise has made a big step forward in revealing their progress and hitting milestones. I hope we will soon see them hit more milestones, and also match what Waymo has done in disclosing safety data from on-road activity and from simulations of what would have happened after safety driver interventions. The mere fact that they are willing to run unmanned is a sign they now feel those numbers are good, so there is no reason not to trumpet them. In particular, if Cruise and Waymo both publish these numbers, it will challenge all other players to do the same, or be presumed to be far behind.


