
What happens when hackers attack self-driving cars? (By SecGate)

If hackers can attack the world’s largest banks, oil and gas companies, the military, government, and even the NSA—organizations that, presumably, have the best cybersecurity available—then you can safely conclude that no system is hacker-proof or immune to a cyber attack.

What is going to happen when hackers attack self-driving cars and run them off the road? That is not just going to cause embarrassment, like leaked private emails and documents, or have financial implications, like a stolen credit card. Hackers could kill someone.

The Growing Self-Driving Fleet

Self-driving cars have been very much in the news lately, with Uber conducting tests of self-driving taxis in Pittsburgh, Pennsylvania. And a Tesla operating in driver-assist mode was involved in a fatal accident when the car could not see a white truck in the blinding sun. (The Tesla is not fully self-driving in the way Google’s vehicles are; instead, it allows drivers to take their hands off the wheel for up to three minutes and does such things as helping the driver stay in their lane.)

The tech evangelists who promote this technology point out that if all cars were self-driving then there would be no accidents. That would be ideal. Accidents kill tens of thousands of people in the UK and US every year. A self-driving car does not get sleepy and it does not get drunk.

It does, however, get confused. For example, it cannot respond to hand signals from pedestrians or from other drivers. But a self-driving car should be able to talk to another self-driving car, so safety should increase as the fleet grows.

Attacking the Weakest Link

All of this sounds great, but there is a risk that hackers will seek to do harm. Their antics could range from a simple prank, like sounding the horn or turning off the engine, to something far worse, like crashing a car into a wall.

The possibility of that was made clear when Chris Roberts, the founder of a security firm, hacked into the controls of an airliner in flight from his passenger seat. His goal was to make a point about the threat. He says he got into the airliner’s system via the entertainment system and caused the oxygen masks to drop down. He also says that he caused the jet to change course, although that claim could not be verified.

The take-away message is that while Google might be building technically advanced vehicles, this might not matter, as hackers usually gain access to complex systems via the weakest link. In the case of that multi-million-dollar airliner, it was the entertainment system. So where are cars exposed?

Self-driving cars use lasers, radar, cameras, GPS, and streaming data on road conditions, weather, and traffic to operate. They also use GSM cellular, a decades-old technology whose encryption was cracked many years ago.

Regarding GPS, there are spoofing systems that can cause a GPS receiver to lock on to a counterfeit signal that it believes comes from a GPS satellite. That would be disastrous if the fake signal caused the vehicle to think it was somewhere else.

But it is the zero-day software defect that is the main concern. These cars are going to use proprietary and open databases, messaging systems, encryption, Linux, and other software with security weaknesses. As soon as security researchers find such a weakness, the vendor or open-source project rushes to fix it. But this game of cat and mouse, by definition, leaves such systems exposed at times.

Preying on Human Weakness

Yet it is not just machines and software that can be exploited. Hackers often gain access to systems via phishing attacks. That is allegedly how North Korea hacked Sony. Phishing works because hackers prey on human weakness—curiosity, deference to authority, greed, and lust—to trick people into clicking on things that they should not.

Doing that is dangerous because it goes right around any kind of outward-facing perimeter security and puts the hacker inside the network. What is going to happen when someone clicks on a spam message arriving on that on-board entertainment system?

Hackers can also attack the car physically, just as they do when they walk into a data centre and yank out a disk drive.

Drivers cannot be expected to take their car to a secure manufacturer’s facility every time they need something routine, like an oil change. A foreign spy could pose as an employee at Jiffy Lube and attach a device to the body or motor of a diplomat’s car. Viewed from that vantage point, the car is much more exposed than, say, a corporate accounting system, because the car is not behind any kind of firewall. And it is something people can physically touch.

What Can We Do?

What can be done to protect the self-driving automobile fleet when a foreign power decides to launch a full-blown infrastructure attack? There are some obvious ideas.

In the case of GPS spoofing, the GPS receiver can be programmed to verify the authenticity of the signal in the same way that browsers verify SSL, i.e. with a signed certificate. And as these cars move into the mainstream, governments can place sensors in the highway to help keep autos on track and provide some redundancy. Already, such sensors gather data on things like icy roads and traffic. And new satellite and radio systems are certain to come online to help these cars in various ways.
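
The verify-before-trust idea can be sketched in a few lines. This is a simplified illustration, not an actual receiver implementation: real signal authentication would use asymmetric signatures, as SSL does, whereas here a symmetric HMAC with a hypothetical shared key stands in to show the flow.

```python
import hmac
import hashlib

# Hypothetical shared key; a real system would verify an asymmetric
# signature against a trusted certificate instead.
KEY = b"hypothetical-shared-key"

def sign(message: bytes) -> bytes:
    """Produce an authentication tag for a position fix."""
    return hmac.new(KEY, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    """Accept a fix only if its tag checks out (constant-time compare)."""
    return hmac.compare_digest(sign(message), tag)

fix = b"lat=48.8566,lon=2.3522"
tag = sign(fix)
print(verify(fix, tag))                 # authentic fix: True
print(verify(b"lat=0.0,lon=0.0", tag))  # spoofed fix: False
```

A receiver built this way simply discards any position fix that fails verification, so a spoofer without the signing key cannot make the car believe it is somewhere else.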

Also, cybersecurity systems are already getting better at spotting attacks through artificial intelligence. Big data analytic systems, like the Apache Spark ML (Machine Learning) library, are making it easier for programmers to monitor logs for bad behaviour. Existing log monitoring systems simply do not work, as any security analyst will tell you.
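
The underlying idea can be illustrated even without Spark: flag any time window whose event volume deviates sharply from the norm. Below is a minimal sketch in plain Python, using a hypothetical per-minute message count from an in-car log; a production system would run this kind of statistic at scale with a library like Spark ML.

```python
from statistics import mean, stdev

def find_anomalies(counts, threshold=2.5):
    """Flag time windows whose event count deviates from the mean
    by more than `threshold` sample standard deviations."""
    mu = mean(counts)
    sigma = stdev(counts)
    if sigma == 0:
        return []  # perfectly uniform traffic: nothing to flag
    return [i for i, c in enumerate(counts)
            if abs(c - mu) / sigma > threshold]

# Hypothetical messages-per-minute from an on-board log: the flood
# of traffic in the seventh window stands out.
counts = [101, 98, 102, 99, 100, 97, 950, 101, 99, 100]
print(find_anomalies(counts))  # -> [6]
```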

Third-party vendors must be thoroughly vetted and walled off. Systems that are not built by the manufacturer itself, such as the entertainment system, must run on a network that is separate from the network that powers crucial systems, like the brakes and drivetrain.
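
One common way to enforce that separation is a gateway that forwards only an explicit whitelist of message types between the two networks. A minimal sketch, with hypothetical message IDs and frame layout rather than any manufacturer’s actual scheme:

```python
# Hypothetical whitelist of message IDs that the infotainment network
# is allowed to send toward the drive network.
ALLOWED_ON_DRIVE_BUS = {0x3E8}  # e.g. a single benign status message

def gateway_filter(frames):
    """Forward only frames whose ID is explicitly whitelisted;
    everything else from the infotainment side is dropped."""
    return [f for f in frames if f["id"] in ALLOWED_ON_DRIVE_BUS]

frames = [
    {"id": 0x3E8, "data": b"\x01"},  # benign status update: forwarded
    {"id": 0x0C0, "data": b"\xff"},  # forged brake command: dropped
]
print(gateway_filter(frames))  # only the 0x3E8 frame survives
```

The design choice here is deny-by-default: a compromised entertainment system can still be noisy on its own network, but nothing it invents ever reaches the brakes.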

And any suspicious event must cause the car to pull over and stop. Passengers in driver-assisted Teslas have already been found to routinely ignore the direction “Driver take the wheel”. So the car will need to stop and the manufacturer will need to be called in so that it can do a thorough screening.

Telling the Grandchildren

One day our grandchildren will laugh when we tell them that we used to start cars with something called keys, and that people routinely failed the driver’s test because they could not parallel park.

Metal keys and cars without cameras are going to fade away into history like the Model T. That will certainly make roads safer as we take control of the vehicle away from reckless human beings. But to keep the motorist safe, car manufacturers are going to have to figure out how to isolate the car from the dangers of the public internet and otherwise wall it off from malicious attacks. Every onboard system is going to need a duplicate too, to allow for redundancy, just like jets and spacecraft have today.

 

This article originally appeared in Cyber World, published by Secgate.