By Calypso Team, May 17, 2019
In 2015, whitehat hackers exploited a Jeep Cherokee’s cellular internet connection to change HVAC and entertainment settings, turn on the windshield wipers, and even shut off the engine and disable the brakes. But when it comes to self-driving cars’ vulnerabilities, this hack is just the tip of the iceberg.
How They Hacked is a series detailing how adversaries were able to successfully hack machine learning and artificial intelligence applications.
Imagine you’re driving down the highway when an unseen force takes control of your vehicle. The steering wheel turns, the brakes are disabled, and you spin off the road. This nightmare scenario is a real danger for self-driving cars.
Incorporating complex, cutting-edge artificial intelligence (AI) technologies, autonomous vehicles are vulnerable to hackers in ways that extend beyond traditional cybersecurity concerns. As such, until they can be adequately secured, self-driving cars are unlikely to reach the market dominance many industry insiders believe they are destined to achieve.
Since each of their components is controlled by a computer system — and, either directly or indirectly, connected to the internet — self-driving cars are at risk of being compromised by bad actors in a variety of ways. Many autonomous vehicle prototypes were not designed from the ground up; rather, engineers grafted AI and other control systems onto existing vehicular models.
Unfortunately, current automotive technology standards are outdated and insecure. Most vehicles use a Controller Area Network (CAN) bus, which lacks the speed and capacity that self-driving cars require — and which was never designed with message authentication in mind. Automotive Ethernet offers the needed bandwidth, but systems built on the User Datagram Protocol (UDP) transmit data with no built-in encryption or sender verification.
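To make the risk concrete, here is a minimal sketch — using a loopback socket and an invented command payload, purely for illustration — of why an unprotected UDP channel is dangerous: the datagram arrives in cleartext, and nothing in the protocol proves who sent it.

```python
import socket

# Hypothetical illustration: a plain UDP datagram carries its payload in
# cleartext, with no encryption and no sender authentication. Any node
# that can reach the port could inject an identical "command".

HOST, PORT = "127.0.0.1", 50007  # loopback stand-in for an in-vehicle network

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind((HOST, PORT))

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"BRAKE_DISABLE", (HOST, PORT))  # nothing identifies the sender

payload, addr = receiver.recvfrom(1024)
print(payload)  # the raw bytes are readable by anyone on the network path

sender.close()
receiver.close()
```

The fix is not to abandon UDP but to layer authentication and encryption (e.g., DTLS or per-message MACs) on top of it, so a forged datagram like the one above is rejected.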
While essential for maintaining the security of self-driving cars, real-time encryption poses a significant challenge for these vehicles. First and foremost, self-driving cars’ sensors generate immense volumes of data that must be processed quickly, and sophisticated encryption slows this down. What’s more, autonomous vehicles incorporate digital components from a wide variety of manufacturers, many of which are coded in different languages. The connections between these systems could present an entry point for hackers to exploit.
Inconsistent software and firmware updates represent another potential issue. Self-driving cars will need to be periodically updated to address emerging security threats, yet, inevitably, not all owners will follow manufacturer-recommended maintenance schedules.
However, arguably the greatest risk presented by self-driving cars stems from their connection to the internet. These vehicles must constantly communicate with cloud computing networks to source data on traffic, road hazards, detours, and more. Their collision-avoidance systems must also be in constant contact with other cars on the road. If not properly secured, these wireless connections will amount to a goldmine for hackers.
For self-driving cars to realize their full potential, engineers will need to incorporate cybersecurity into every stage of the design process. Automobiles designed from the beginning to be autonomous will be the most secure.
The first step in ensuring robust security is to limit extraneous features and external connections. Minimizing the number of internet-connected systems means fewer entry points for hackers. And, of course, it’s essential that all data sent and received by a self-driving car is encrypted.
Delivering regular software updates will also enhance security. As hackers develop new means of manipulating the technologies that underpin self-driving cars, these cars’ autonomous systems will need to be capable of adapting in turn. That said, steps must be taken to ensure that only verified updates can be installed — unauthorized use of the updating system could wreak havoc on an autonomous vehicle’s functionality.
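The verification step might look something like the following sketch. The shared-secret key and function names are hypothetical; a production system would use asymmetric signatures (e.g., Ed25519) so the signing key never ships inside the vehicle, but the principle — refuse any update whose tag does not verify — is the same.

```python
import hashlib
import hmac

# Illustration only: a shared factory key. Real update systems would use
# public-key signatures so the vehicle holds no secret signing material.
FACTORY_KEY = b"demo-shared-secret"

def sign_update(firmware: bytes) -> bytes:
    """Produce an authentication tag for a firmware image."""
    return hmac.new(FACTORY_KEY, firmware, hashlib.sha256).digest()

def install_update(firmware: bytes, tag: bytes) -> bool:
    """Install only if the tag verifies; reject everything else."""
    expected = sign_update(firmware)
    if not hmac.compare_digest(expected, tag):  # constant-time comparison
        return False  # refuse unverified or tampered updates
    # ... flash the verified firmware here ...
    return True

good = b"v2.1 firmware image"
print(install_update(good, sign_update(good)))            # genuine update
print(install_update(b"tampered image", sign_update(good)))  # rejected
```

Note the use of a constant-time comparison: naively comparing tags byte by byte can leak timing information an attacker could exploit.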
Finally, while machine learning algorithms are an essential component of any self-driving car, these often proprietary, closed-source systems are vulnerable to being manipulated by hackers without a driver’s knowledge. At the most fundamental level, prioritizing transparent, explainable designs will keep these systems secure against bad actors while ensuring their safety — and communicating such safety to the public and transportation regulators alike.
Automotive engineers will never be able to think of — let alone safeguard against — every conceivable security risk upfront. That’s why, among other things, extensive penetration testing is necessary.
In 2015, Charlie Miller and Chris Valasek went public with their experiment to wirelessly take control of a Jeep Cherokee. The whitehat hackers were able to use the car’s cellular internet connection to change HVAC and entertainment settings, turn on the windshield wipers, and even shut off the engine and disable the brakes. In response, Chrysler recalled 1.4 million vehicles to remove the vulnerability.
Simply put, self-driving cars’ commercial viability will hinge on these kinds of issues being addressed before they manifest on the open road. In other words, experiments like Miller and Valasek’s must be performed throughout the design and development process, not merely at the end of production.
Self-driving cars have a great deal of promise, but they also bring new risks. Only when their AI, machine learning, and network connections are secured will these vehicles reach widespread adoption. An effective design requires protection against external threats as well as fully transparent insight into the car’s decision-making process. Calypso AI helps forward-thinking companies develop precisely this kind of robust, trusted, and explainable AI solution. Contact us to learn more.
Get in touch to learn how we can identify your systems’ vulnerabilities and keep them secure.