
Simpler Is Safer: Occam’s Safety Razor


This is the 4th in a series of articles I’m calling ‘Opening the Starsky Kimono,’ where I reveal the Starsky insights we previously considered top secret. You can read about Starsky’s approach to safety here, business model here, and thoughts on AI here.

When it comes to autonomous safety, simpler is almost always better.

That might seem strange. The consequences of an autonomous car failing are huge – death, injury, damage, and the likelihood that the entire company will go under as a result. You might assume, then, that to make such a system safe you need to use every tool at your disposal and then chip away at those that don’t seem necessary, the way a sculptor starts with a slab of marble and chisels it down into a masterpiece.

The more parts there are in a given system, the more possible combinations of small failures that might add up to a big one. Point too many sensors at the vehicle in front of you and the autonomous agent might jump between which one to follow, creating strange and potentially unsafe driving behavior. You could solve this by following only one or two of those sensors, which would negate the value of the rest for that task while leaving them free to make other subsystems act strangely.
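To make that failure mode concrete, here is a minimal Python sketch (purely illustrative: not Starsky’s stack, and every sensor name and number is an assumption) of how re-picking the “best” of several overlapping range sensors each frame makes the followed distance and its source jump around, whereas committing to a single primary sensor keeps the behavior easy to reason about.

```python
import random

# Illustrative only: three overlapping sensors each report the range (metres)
# to the vehicle ahead, with slightly different biases and noise.
SENSOR_BIAS = {"radar": 0.0, "lidar": -0.3, "camera": +0.4}

def read_sensors(true_range):
    """Simulate one frame of noisy range readings from each sensor."""
    return {name: true_range + bias + random.gauss(0, 0.5)
            for name, bias in SENSOR_BIAS.items()}

def follow_every_sensor(readings):
    """Re-pick whichever sensor looks 'closest' every frame.
    The chosen source (and the range fed to the controller) can hop
    between sensors from one frame to the next."""
    name = min(readings, key=readings.get)
    return name, readings[name]

def follow_one_sensor(readings):
    """Commit to a single primary sensor: less information, but the
    behaviour is predictable and easy to reason about."""
    return "radar", readings["radar"]

if __name__ == "__main__":
    random.seed(1)
    for frame in range(5):
        readings = read_sensors(true_range=30.0)
        src_a, rng_a = follow_every_sensor(readings)
        src_b, rng_b = follow_one_sensor(readings)
        print(f"frame {frame}: all-sensors -> {rng_a:4.1f} m ({src_a})   "
              f"one-sensor -> {rng_b:4.1f} m ({src_b})")
```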

The more complex a system, the harder it is to understand. The less understandable it is, the fewer people will be able to raise valid safety concerns. That can split an engineering organization between those who understand safety and those who understand the system – meaning that many of the most critical safety concerns are never raised. To empower everyone to make a system safe, you need one that everyone can understand.

I think of it as Occam’s Safety Razor, after the principle that the simplest solution is most likely the right one. The simpler a system is, the easier it is to make safe; the more complicated a system, the harder it is to make safe.

To be clear, however, that isn’t to say that simple systems are always the safest, just that they’re the easiest to make safe. A taut belt around a steering wheel and a brick on the accelerator do not make for a safe robotaxi – even if you would save a lot of money on roboticists.

That isn’t to say that a belt & brick unmanned car can’t be safe. If you were on a closed track and pointed it at a concrete wall 1 km away, in the opposite direction from you and your team, you could be reasonably sure that it wouldn’t hurt anyone.

Your system could also be safe if, instead, you hacked the car’s drive-by-wire system and ordered it to go straight at 45 mph towards that wall. To make it safe, though, you’d have to do a lot more work. Does your command to drive straight actually work, or is it offset by 15° and going to come back around at you? Is any of the code telling the transmission to switch into reverse? Those are all things you can check, but doing so is far harder than the safety checks for the belt & brick system.
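As a sketch of what those extra checks might look like, here is a small Python example (the interfaces, field names, and thresholds are hypothetical assumptions, not Starsky’s actual drive-by-wire code) that validates a few invariants before the straight-line test is allowed to run.

```python
from dataclasses import dataclass

@dataclass
class VehicleState:
    """Hypothetical snapshot of commanded and measured vehicle state."""
    commanded_steering_deg: float      # steering angle the drive-by-wire stack asked for
    measured_heading_drift_deg: float  # how far the car has drifted off the intended line
    gear: str                          # "drive", "neutral", or "reverse"
    commanded_speed_mph: float

def straight_line_test_violations(state: VehicleState,
                                  max_drift_deg: float = 2.0,
                                  max_speed_mph: float = 45.0) -> list:
    """Return the list of safety violations for the wall-pointing test;
    an empty list means every check passed."""
    violations = []
    if abs(state.commanded_steering_deg) > 0.1:
        violations.append("steering command is not straight ahead")
    if abs(state.measured_heading_drift_deg) > max_drift_deg:
        violations.append("vehicle is drifting off the commanded straight line")
    if state.gear == "reverse":
        violations.append("transmission has been commanded into reverse")
    if state.commanded_speed_mph > max_speed_mph:
        violations.append("commanded speed exceeds the test limit")
    return violations

if __name__ == "__main__":
    # A command that is nominally 'straight' but has drifted 15° off course.
    state = VehicleState(commanded_steering_deg=0.0,
                         measured_heading_drift_deg=15.0,
                         gear="drive",
                         commanded_speed_mph=45.0)
    print(straight_line_test_violations(state))
    # ['vehicle is drifting off the commanded straight line']
```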

All of this is counterintuitive – we’re taught to correlate complexity with sophistication: autonomy is hard, so a better and safer system must be more complex. That just isn’t the way engineering works; the safest bridge isn’t a futuristic suspension bridge but a causeway.

And when it comes to autonomous vehicles, the safest are those that are the most understandable.


