Autonomous vehicles will struggle to identify and predict areas which are designated for travelling vehicles. This could lead to the self-driving car confusing itself when predicting where other vehicles will travel, as well as misinterpreting which direction of travel has the right of way (Heineke 2017). In challenging weather conditions, not only are the sensors hindered, but the algorithms also find it hard to perceive and distinguish different objects. For example, if a glare of sun obscures the front camera's view, the algorithms will be trying to compute probabilities from a blinding light. Sensor fusion will then have to rely on the other sensors and hardware, such as RADAR, ultimately increasing the uncertainty of its predictions (a simple sketch of this fallback weighting appears at the end of this section). Furthermore, if the front-facing camera is obscured by the sun's glare, the Pathfinder and Object Detection DNNs will be temporarily dysfunctional, as they will have no clear images of the road and nearby objects to work with. How can a Pathfinder DNN identify lane markings when all it can see is a bright light?

Snow presents a similar weather-related problem. When driving through a heavily snow-covered area, the road is not significantly different from the rest of the environment, so the algorithms will struggle to identify objects correctly: pattern-recognition algorithms, which function by identifying the lines, edges and arcs of objects, will not be able to operate reliably. Different sensors would recognize the same objects as different things, so sensor fusion would be inaccurate and would have to fall back on estimating outcomes while continuing to scan for distinctive features in the environment.

DNNs pose another hurdle for programmers and software engineers. Due to their sheer complexity, it is extremely difficult to back-track and understand the root cause or logic behind a decision. This becomes an even bigger concern when a self-driving car makes a wrong decision that results in a crash: the manufacturers would have to recall all the cars in that fleet and uncover where the algorithms went wrong. This could prove nearly impossible, since the onboard computer has to calculate and consider so many different variables (Heineke 2017).

Decision-making can also be based on 'rules' predetermined by engineers: the engineers come up with all possible permutations of 'if . . . then . . . ' rules to help the autonomous vehicle make a decision. But the time required to cover all possible combinations and scenarios, as well as the statistical impossibility of including every single circumstance, makes this process unworkable (Patrick 2020); a back-of-envelope illustration of this combinatorial explosion is given below.

Research suggests that 275 million miles need to be accumulated for autonomous vehicles to demonstrate, with 95% confidence, that their failure rate will be at most 1.09 fatalities per 100 million miles (the equivalent of the 2013 US human fatality rate). To prove that the fatality rate of autonomous vehicles would be lower than that of humans, the mileage would have to reach the billions. To put this statistic in perspective, we can simulate the mileage of 100 self-driving cars: if a fleet of 100 drove continuously, 24 hours a day, 365 days a year, at an average speed of 25 miles an hour, it would take more than 10 years to accumulate 275 million miles (the arithmetic is worked through below). To add to this near-impossible challenge, companies would not want to collaborate; after all, they are all racing to accomplish the same goal quicker than their rivals (Heineke 2017).
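To make the glare example concrete, here is a minimal sketch of confidence-weighted sensor fusion, in which a degraded sensor's readings carry less weight and overall confidence drops. The sensor names, confidence values and distances are invented for illustration; this is not any manufacturer's actual system.

```python
def fuse_estimates(readings):
    """Fuse (value, confidence) pairs into a confidence-weighted mean.

    Returns the fused value and the total confidence; a low total
    confidence signals that the fused estimate is unreliable.
    """
    total_conf = sum(conf for _, conf in readings)
    if total_conf == 0:
        return None, 0.0  # no usable sensor data at all
    fused = sum(value * conf for value, conf in readings) / total_conf
    return fused, total_conf

# Hypothetical distance (metres) to the vehicle ahead, reported by
# camera, RADAR and LiDAR, each paired with a confidence in [0, 1].
clear_day = [(24.8, 0.90), (25.1, 0.80), (25.0, 0.70)]
sun_glare = [(3.0, 0.05), (25.1, 0.80), (25.0, 0.70)]  # camera near-blind

print(fuse_estimates(clear_day))  # ~25.0 m at total confidence 2.40
print(fuse_estimates(sun_glare))  # RADAR/LiDAR dominate; confidence falls to 1.55
```

When the camera's confidence collapses, the fused estimate survives because RADAR and LiDAR still agree, but the drop in total confidence is exactly the increased uncertainty described above.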
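The impossibility of enumerating every 'if . . . then . . . ' rule can likewise be shown with back-of-envelope arithmetic: the number of distinct driving situations is the product of the values each environmental factor can take, so it multiplies with every factor added. The factors and counts below are assumptions chosen purely for illustration.

```python
from math import prod

# Invented, deliberately coarse factors a rule-based system must cover.
factors = {
    "weather":       6,  # clear, rain, snow, fog, glare, night
    "road_type":     5,  # motorway, urban, rural, junction, roadworks
    "traffic_level": 4,
    "pedestrians":   3,
    "signage_state": 4,  # visible, obscured, damaged, missing
    "vehicle_fault": 3,
}

print(f"{prod(factors.values()):,} combinations")  # 4,320 from six coarse factors
```

Even six coarse factors already demand thousands of hand-written rules; real driving involves far more factors, each with far more possible values.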
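Finally, the fleet-mileage claim can be checked directly; this snippet simply reproduces the arithmetic stated in the text.

```python
TARGET_MILES   = 275_000_000  # miles required by the cited research
FLEET_SIZE     = 100          # cars driving simultaneously
AVG_SPEED_MPH  = 25           # average speed, miles per hour
HOURS_PER_YEAR = 24 * 365     # continuous, round-the-clock driving

miles_per_year = FLEET_SIZE * AVG_SPEED_MPH * HOURS_PER_YEAR  # 21,900,000
print(f"{TARGET_MILES / miles_per_year:.1f} years")           # 12.6 years
```

At 21.9 million fleet-miles a year, the 275 million-mile target takes about 12.6 years, consistent with the 'more than 10 years' figure above.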
The final software concern is security. As more self-driving cars take to the road, hackers and criminals will turn their attention to exploiting flaws in autonomous vehicles, and if these criminals were successful in corrupting a self-driving car, they would have the potential to gain control of the individual car, or even of a whole fleet. A Dreamliner jet has roughly 6.5 million lines of code, but a modern car runs on the order of 100 million, giving attackers a vastly larger surface in which to hunt for flaws.