Transportation

Cloud Breaches Like Capital One Will Strike At Self-Driving Cars


Lessons from the recent cloud breach at Capital One can be applied to self-driving cars. Photo credit: Getty


The news has covered yet another systems security breach involving the theft of massive amounts of data, in this case impacting an estimated 100 million customers of Capital One Financial Corp.

In the past, the public might have reacted vociferously, with outright dismay and disgust. Nowadays, these kinds of revelations happen so regularly that people are irked and exasperated but aren't storming the castle gates of the companies that let it happen.

Interestingly, the companies are somehow able to portray themselves as victims, at times gaining sympathy for having presumably been outfoxed by (one assumes) sinister hackers with divine superpowers capable of cracking into Fort Knox (this is rarely the case; much of the time the breach was carried out relatively simply).

Furthermore, these intruded-upon companies are usually well-prepared to rush forward and aid those impacted, though that aid often consists of simply covering a credit-monitoring subscription for some length of time, along with absorbing whatever financial hit comes from subsequent lawsuits.

As someone who used to be deeply involved in cybersecurity, including doing consulting work and speaking at major industry events, I said then and continue to repeat today that, by and large, firms are not devoting sufficient resources and attention to cybersecurity.

In fact, firms pretty much knowingly avoid doing so, betting that the costs of a breach, should one occur, will be less than the upfront costs of truly extensive preventative system security efforts.

Which is worse, paying now with precious dollars that you could use for other purposes, or paying out some fines and penalty fees later on if a cybersecurity breach actually befalls you?

The math seems to sway toward putting only mild effort into cybersecurity today and hoping that the hacked future never arrives (or, if it does, occurs on someone else's watch).

If society opted to shun the firms that allowed these breaches to occur, the overall cost to the companies would be large enough that they would adopt a more somber tone and devote a heftier allotment to their systems security.

There is also what I refer to as the earthquake phenomenon.

A lot of firms don’t consider themselves fully at risk of a cybersecurity breach, just as a lot of homeowners don’t think their house is vulnerable to being wrecked by an earthquake. Even when earthquakes harm other houses, such a homeowner still believes that theirs won’t be harmed and stubbornly clings to not getting earthquake insurance.

Cloud Attention

This latest earthquake-like cybersecurity incident at Capital One has prompted the media to spin things more than usual, becoming somewhat preoccupied with the cloud aspects.

In this case, Capital One had earlier famously embraced moving off of its own hardware and into the cloud, striking a large-scale deal with Amazon’s AWS cloud services, and doing so in a vocal, even boastful manner, taking a stand that many in the industry were still unsure about making themselves.

Many financial institutions have been queasy about moving away from platforms they buy and manage themselves, worried about whether the cloud would be secure enough.

I have pointed out to companies unsure about adopting an external cloud service that one of the potential benefits is that the cloud service company potentially has hordes of cybersecurity specialists whose mainstay is protecting the overall cloud systems. An individual firm would be unlikely to command such a large force devoted to security, while the cloud service provider can, since it needs to secure the overarching cloud capability across all of its clients (in a sense, spreading out its cybersecurity costs).

There is a rub though.

You have to make sure you differentiate between the so-called “security of the cloud” and “security in the cloud.”

The cloud provider typically states that it has primary responsibility for security of the cloud, meaning that the cloud provider keeps the cloud system or platform itself relatively secure, and this is less so a responsibility of the company using the cloud service.

Meanwhile, security in the cloud means that the company using the cloud service has the primary responsibility of making sure they are using the cloud platform in a secure manner, and this is less so a requirement of the cloud provider.

Translated, this implies that a locksmith will try to make sure that your home has a good lock on the front door (that’s the cloud provider handling “security of the cloud”), but if you leave the house key under the doormat, it’s your fault for undermining the security of the home (that’s the company using the cloud having neglected its role in “security in the cloud”).

According to news reports to date, Capital One allegedly did not adequately configure its use of the AWS-provided cloud service.

If that’s the case, one could argue that it was a misstep of security in the cloud rather than the security of the cloud.
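To make that division of labor concrete, here is a minimal sketch, in Python using AWS’s boto3 library, of the kind of configuration check that falls squarely on the customer’s side of “security in the cloud”: confirming that a storage bucket blocks public access. The bucket name is hypothetical, and this is illustrative only, not a claim about the specific misconfiguration involved at Capital One.

```python
# Minimal sketch (Python + boto3): auditing one slice of "security in the cloud."
# The bucket name is hypothetical; this is illustrative only.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
BUCKET = "example-customer-data-bucket"  # hypothetical name

def bucket_blocks_public_access(bucket: str) -> bool:
    """Return True only if every public-access block setting is enabled."""
    try:
        cfg = s3.get_public_access_block(Bucket=bucket)["PublicAccessBlockConfiguration"]
    except ClientError:
        # No public-access block configured at all -- treat as a failed check.
        return False
    return all(cfg.get(flag, False) for flag in (
        "BlockPublicAcls", "IgnorePublicAcls",
        "BlockPublicPolicy", "RestrictPublicBuckets",
    ))

if __name__ == "__main__":
    status = "locked down" if bucket_blocks_public_access(BUCKET) else "NEEDS ATTENTION"
    print(f"{BUCKET}: {status}")
```

The cloud provider supplies these controls, but it is up to the customer to turn them on and audit them, which is precisely the gap that "security in the cloud" refers to.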

Sometimes these matters devolve into a finger-pointing match.

A company might say that they thought the cloud provider was doing this-or-that, and therefore the company didn’t need to do so. Or, a company might complain that the cloud provider didn’t sufficiently warn them about needing to do certain configurations or was lax in not letting the company know that existing configurations were weak.

These shared responsibilities can become an inadvertent gap in cybersecurity.

A famous line in cybersecurity is that a company has to be right all of the time (always protected), while the crook or hacker only has to be “right” once (finding a hole and then exploiting it).

Applies To AI Self-Driving Cars

Let’s shift our attention away from clouds that contain banking data to instead discuss clouds that contain other kinds of data.

Specifically, consider the role of clouds in the advent of self-driving cars.

A self-driving car typically has lots of sensors, including cameras, radar, ultrasonic sensors, LIDAR, and so on. These sensors collect vast amounts of data. The data normally goes into the on-board systems and electronic memory of the driverless car.

Pretty quickly, the volume of data gets large.

The AI system driving the car generally needs live data at the time the data is collected and doesn’t need prior data quite as much. That being said, the prior or previously collected data can be a treasure trove for doing Machine Learning and Deep Learning, allowing the AI system to improve its driving capabilities over time by analyzing and “learning” from the amassed data.

Keeping that data on-board a specific car does not fully leverage it; thus, most automakers and tech firms push the data up to a cloud.

The cloud could be a homegrown platform by the automaker or tech firm, or it might be an externally provided cloud service (similar to how Capital One selected an outside cloud provider).
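As a rough illustration of that data pipeline, the sketch below (in Python) buffers on-board sensor readings and periodically pushes a batch to a cloud endpoint over an authenticated HTTPS connection. The endpoint URL, token, vehicle ID, and payload fields are all hypothetical placeholders, not any automaker’s actual telemetry API.

```python
# Illustrative sketch only: batching on-board sensor readings and pushing them
# to a cloud endpoint. The URL, token, and payload fields are hypothetical.
import json
import time
import requests  # widely used HTTP client library

CLOUD_ENDPOINT = "https://telemetry.example-automaker.com/v1/upload"  # hypothetical
API_TOKEN = "REPLACE_WITH_VEHICLE_CREDENTIAL"  # placeholder, not a real credential
BATCH_SIZE = 100

buffer = []

def record_reading(sensor_id: str, value: float) -> None:
    """Append one sensor reading; upload when the batch is full."""
    buffer.append({"sensor": sensor_id, "value": value, "ts": time.time()})
    if len(buffer) >= BATCH_SIZE:
        upload_batch()

def upload_batch() -> None:
    """Push the buffered readings to the cloud over TLS with a bearer token."""
    payload = json.dumps({"vehicle_id": "demo-vehicle-001", "readings": buffer})
    resp = requests.post(
        CLOUD_ENDPOINT,
        data=payload,
        headers={"Authorization": f"Bearer {API_TOKEN}",
                 "Content-Type": "application/json"},
        timeout=10,
    )
    resp.raise_for_status()  # surface upload failures rather than losing data silently
    buffer.clear()
```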

If you sense where I’m going on this, yes, I’m suggesting that ultimately there will be tons of data collected from our self-driving cars and this data will be housed in clouds somewhere.

Suppose a hacker breaks into a cloud that houses the sensory data collected from your own self-driving car.

So what?

Essentially, the hacker could likely figure out where you went, how long you stayed there, how many trips you made per day, and so on. This might seem innocent and inconsequential, but it could allow the hacker to target you by knowing where you go. Other nefarious uses that I won’t even mention herein come to mind too.

Furthermore, it is anticipated that self-driving cars will likely have an inward-facing camera, intended to catch people being disruptive in self-driving ride-sharing cars, along with audio capabilities such as a microphone allowing you to interact with the AI system (akin to Alexa or Siri).

Assume that every car journey that you’ve made is recorded on video and audio and sitting in the cloud of the automaker or tech firm.

Does that disturb you a bit more about the potential for privacy invasion?

As an aside, I’m focusing in this discussion on the possibility of your data being leaked or stolen by a hacker. Keep in mind that the automaker or tech firm might itself choose to use that collected data, perhaps for marketing campaigns aimed at your ascertained interests, or might sell the data to third parties that want to use it for various purposes.

Thus, we all still have yet to confront the matter of how data about you, collected via self-driving cars, will be utilized and whether you will be allowed to have any control over the collection and distribution of such private data.

Moving Beyond Read-Only

One aspect of having self-driving car data in the cloud is that a hacker could view it and then copy it from the official cloud site to some other hacker-preferred online location.

In theory, the hacker might not only obtain the automaker’s or tech firm’s cloud-stored data but also get the official cloud to request additional data from the self-driving car itself.

Therefore, beyond simply picking up data already pushed from the self-driving car up to the cloud, it could be that the hacker invokes the cloud to grab data from the driverless car, either new data that had not yet made its way to the cloud or possibly even other data that the automaker or tech firm had not envisioned would be pushed up to the cloud.

I’ll next move into even scarier waters.

The cloud transmissions are often referred to as OTA (Over The Air) electronic communications. OTA is used to grab data from the self-driving cars and place it into the cloud.

Plus, OTA is used to push data and even programs down into the self-driving car.

Here’s where we veer into a potential nightmare.

If a hacker can get the cloud to push down data that might confuse or mislead the AI of the self-driving car, in theory, the hacker could potentially get the car to go places or do things that the hacker desires.

Similarly, if the OTA mechanism normally provides routine updates to executable programs on-board the AI system of the driverless car, the hacker could try to slip their own rogue program into the self-driving car, doing so remotely without having to gain physical access to the vehicle.

In that manner, the OTA capability is a double-edged sword. It is helpful and handy for self-driving cars since it allows remote updates to be applied to the AI system, yet it also provides a nifty and convenient conduit for a hacker to do some reprehensible things.
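One commonly discussed safeguard for that conduit is to have the vehicle cryptographically verify every OTA payload before installing it, so that only updates signed with the automaker’s private key are accepted. The sketch below (in Python, using the cryptography library’s Ed25519 primitives) shows the idea; the key handling and payload are simplified placeholders, not a description of any production OTA system.

```python
# Illustrative mitigation sketch: verify an OTA update's signature before install.
# Key provisioning and the payload are simplified placeholders.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# In practice the private key stays with the automaker and the vehicle holds only
# the public key; here we generate a throwaway pair just to make the sketch runnable.
automaker_private_key = Ed25519PrivateKey.generate()
vehicle_public_key = automaker_private_key.public_key()

def sign_update(update_blob: bytes) -> bytes:
    """Automaker side: sign the OTA payload before broadcasting it."""
    return automaker_private_key.sign(update_blob)

def is_update_authentic(update_blob: bytes, signature: bytes) -> bool:
    """Vehicle side: accept the update only if the signature checks out."""
    try:
        vehicle_public_key.verify(signature, update_blob)
        return True
    except InvalidSignature:
        return False

if __name__ == "__main__":
    update = b"firmware image bytes (placeholder)"
    sig = sign_update(update)
    assert is_update_authentic(update, sig)             # legitimate update accepted
    assert not is_update_authentic(update + b"x", sig)  # tampered update rejected
```

A check like this narrows the OTA channel so that a hacker who merely gains access to the cloud still cannot push a payload the vehicles will accept, unless the signing key itself is compromised.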

It is noteworthy that particular automakers or tech firms will have fleets of self-driving cars. A hacker could possibly infect an entire fleet via the OTA mechanism, and do so without breaking a sweat, having the OTA transmit to all of the self-driving cars wherever they might be on planet Earth.

Ironically, I suppose, the fact that to-date each of the automakers and tech firms is generally using separate and proprietary clouds means that a hacker couldn’t necessarily hack all driverless cars per se. To do so, the hacker would have to crack into each of the distinct clouds of the various automakers and tech firms. This would be harder, though still possible.

Conclusion

Having dragged you through the cloud aspects of self-driving cars, my point is that the same kind of breaching action that happened to Capital One and to lots of other companies could also happen to self-driving car companies.

The potential loss of privacy is certainly serious in the case of online banking break-ins, and the same could happen with self-driving car data, but I think we can all agree that the danger is ratcheted up if the hacking can potentially impact the driving of a car.

A myriad of doomsday scenarios has already been voiced, such as a hacker deciding to stop all traffic in a given city and causing a swirl of calamity. There are, of course, even worse scenarios to consider.

Right now, though the self-driving car companies are paying some attention to cybersecurity, they aren’t much of an attractive target for hackers as yet. Until we have a lot of driverless cars on the roads, this arena just isn’t as magnetic a pull for hackers.

The number of cybersecurity holes or avenues for hacking is enormous for driverless cars.

Besides the cloud connection, there is also the possibility that Internet of Things (IoT) devices inside or near a self-driving car can create security issues. The infotainment systems inside a driverless car provide a platform for launching a hacking attack upon the car. Even the smartphone you carry with you when you get into a self-driving car for a journey can be a springboard for an attack.

Step further back and consider the entire supply chain related to making a car. Throughout that supply chain, something can potentially be implanted that would allow for an electronic opening once the self-driving car is on our roadways.

I realize that some might be upset that I’ve seemingly been saying that the sky is falling. The focus of the automakers is primarily on getting their self-driving cars to work, and we don’t yet know how well guarded those systems are.

Cybersecurity for autonomous cars must be given top priority; otherwise, we’ll find ourselves saying later on that although we achieved driverless cars, and assume they are safe as drivers, they might be unsafe due to the chance that a hacker decides to create their own kind of earthquake.


