Autonomous vehicles
This pales in comparison to the code inside a 2016 Ford pick-up, which runs to around 130 million lines (Armerding 2019). It is therefore not too shocking to say that the code for an autonomous vehicle could easily surpass a billion lines, making it notoriously difficult to patch known vulnerabilities in the software for the sensors and CPU that control everything: the acceleration, the brakes, the steering. Inbound connections to a self-driving car (such as V2I and V2V) also pose the risk of remote-connection attacks that could compromise an entire fleet of vehicles.

According to a report conducted by the Ponemon Institute (a successful cyber-security firm), 63% of respondents from the autonomous-automotive industry admitted that less than 50% of their hardware, software, and other technology is tested for vulnerabilities to potential threats, and only 10% of these companies, suppliers, and manufacturers had an established cyber-security team. Regarding DSRC (including V2I and V2V), the U.S. Department of Transportation (DoT) has claimed to have made 'significant efforts to build privacy into the data collection [for V2I and V2V]'. However, the Electronic Frontier Foundation (EFF) states that the sheer volume of data being communicated (ten 'messages' per second per vehicle) would still allow surveillance by outside threats, even if the certificate credentials were changed every five minutes. Rebecca Herold, CEO of Privacy & Security Brainiacs and a member of the Ponemon Institute, has stated that no data set today 'can truly be 100% anonymized. If that data is combined with other data sets, results from AI, big data analytics, etc. performed using the multiple data sets will often reveal specific individuals' (Armerding 2019). This is essentially a warning to manufacturers and customers of the risk that comes with connectivity, and that a fully autonomous car will never be truly safe from attack.
Although this mentality can be applied to any piece of technology, most examples will not result in a fatal accident. Possible solutions to improve software security have been raised, but each is unworkable in the long term for a different reason. No driver would want to wait 30-60 seconds for their engine to start while the electronic control units (ECUs) verify digital signatures and run through a secure boot (a process that makes it harder for unauthorized software updates or malicious attacks to be carried out). Indeed, implementing features like secure boot could prove more dangerous than a system without them: when a driver or the car itself decides to brake, the brake ECU cannot take a full second to verify the authenticity of the brake message, or a crash could occur. Instead, suppliers will have to redesign the software development life cycle (SDLC), integrating testing and cybersecurity throughout the development of the software rather than relying on a single 'penetration testing phase' at the end, where the software's resilience against incoming cyber-attacks is assessed. Likewise, all software that interacts with the vehicle (such as V2V and V2I) must undergo the same levels of testing (Armerding 2019).
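The verification step described above can be illustrated with a minimal sketch in Python. This is a simplification for illustration only: it uses a shared symmetric key and HMAC, whereas production vehicle networks use asymmetric certificates (e.g. the IEEE 1609.2 standard for V2X) backed by hardware security modules. The key and message format here are hypothetical.

```python
import hmac
import hashlib

# Hypothetical shared key for illustration; real ECUs would hold keys in
# tamper-resistant hardware, not in source code.
SHARED_KEY = b"hypothetical-ecu-key"

def sign_message(payload: bytes) -> bytes:
    """Attach an authentication tag so the receiving ECU can check origin."""
    return hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()

def verify_message(payload: bytes, tag: bytes) -> bool:
    """Recompute the tag and compare in constant time to resist timing attacks."""
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

brake_cmd = b"BRAKE:FULL"
tag = sign_message(brake_cmd)
print(verify_message(brake_cmd, tag))        # authentic message accepted
print(verify_message(b"BRAKE:NONE", tag))    # tampered message rejected
```

Even this tiny check costs CPU time on every message, which is the trade-off the text describes: each safety-critical command must be authenticated without adding latency that would itself endanger the driver.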
Ethics
One of the greatest challenges that engineers must overcome when designing their autonomous vehicle is the ethics behind the computer’s decisions.
The first question that must be answered is 'Who dictates the ethics for the vehicle?' Should it be drivers, consumers, passengers, manufacturers, programmers, or politicians? Although this may seem like a relatively simple question, the answer will be fiercely debated throughout the industry, in parliament, and among the public. The problem revolves around finding the right balance between freedom and clear, established moral grounds; an individual driver may feel powerless and restless if