
Autonomous vehicles

optimum task for the vehicle to undertake in response to external factors. However, this whole process stems from the performance of the hardware, so understanding how the sensors operate is essential to tracking the decision-making process.

On all autonomous vehicles, GPS is used to plot the vehicle’s approximate position in real time using a series of coordinates. If all self-driving vehicles were digitally connected, a newly found or updated road could be plotted using GPS and would (if the technology allowed) simultaneously update the digital maps in the other vehicles (Waymo 2016). Connected GPS data allows the computer system to know where the next or desired corner is and how the road progresses, i.e. how far away the next corner is and at what angle it follows. Nevertheless, there is a time difference between the position of the vehicle and the syncing of the map, so the vehicle’s position is only ever approximate.

Autonomous vehicles also feature prebuilt maps, which are constructed using regular vehicles fitted with lasers. As these vehicles drive along roads, the lasers periodically send out beams of light which bounce back from surrounding objects. Sensors then pick up the reflected beams, and the distance and dimensions of the surrounding objects are ascertained from the time it takes each beam of incident light to bounce back and reach the sensors. This data is then sent to a team which identifies key features of the road, such as signs or posts. All of this means that when the software on an autonomous car produces an image which matches the prebuilt maps, the vehicle’s computer knows its position on the map to within 10 cm (Waymo 2016), so GPS does not need to be relied upon. Knowing these permanent characteristics and features of roads allows the sensors and software to focus their ‘attention’ on moving objects and changing variables, speeding up the process of recognizing, predicting, and avoiding certain situations. In addition, changes to these features on a road can be detected through the multiple sensors on a self-driving vehicle; this data can then be sent automatically to a team, who can update the maps of all connected cars accordingly.
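As a loose illustration of that last step, the sketch below (Python, with hypothetical feature names, coordinates, and tolerance, not any manufacturer’s actual code) compares the positions of features the vehicle has just detected against those stored in its prebuilt map and reports anything that appears to have moved, so that it could be flagged for a map update.

    # Hypothetical stored map: feature IDs mapped to (latitude, longitude) pairs.
    STORED_MAP = {"stop_sign_12": (51.2001, 0.4503), "lamp_post_7": (51.2004, 0.4511)}

    def find_changed_features(detected, stored=STORED_MAP, tolerance=0.0001):
        """Return detected features whose positions differ from the prebuilt map.

        'detected' maps feature IDs to (latitude, longitude) pairs measured by the
        vehicle's sensors; anything outside the (made-up) tolerance is reported so
        the map could be updated for all connected cars.
        """
        changed = {}
        for name, (lat, lon) in detected.items():
            stored_lat, stored_lon = stored.get(name, (None, None))
            if (stored_lat is None
                    or abs(lat - stored_lat) > tolerance
                    or abs(lon - stored_lon) > tolerance):
                changed[name] = (lat, lon)
        return changed

    # A sign detected slightly away from its recorded position would be flagged.
    print(find_changed_features({"stop_sign_12": (51.2003, 0.4503)}))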

Building maps for a self-driving car. Waymo. 2016

The image depicts the elevation changes of a junction being picked up using lasers.
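To make the time-of-flight idea concrete, here is a minimal sketch (illustrative function and parameter names, not any manufacturer’s actual code) that turns one reflected laser pulse into a 2D map point: the one-way distance is half of the round-trip time multiplied by the speed of light, and the beam angle then places the point relative to the sensor.

    import math

    SPEED_OF_LIGHT = 299_792_458.0  # metres per second

    def point_from_laser_return(round_trip_time_s, beam_angle_rad,
                                sensor_x=0.0, sensor_y=0.0):
        """Convert one reflected laser pulse into a 2D point on the map.

        The beam travels out and back, so the one-way distance is half of
        (round-trip time x speed of light); the beam angle then places the
        reflecting surface relative to the sensor's position.
        """
        distance_m = 0.5 * round_trip_time_s * SPEED_OF_LIGHT
        return (sensor_x + distance_m * math.cos(beam_angle_rad),
                sensor_y + distance_m * math.sin(beam_angle_rad))

    # Example: a pulse returning after about 66.7 nanoseconds comes from a surface
    # roughly 10 metres away; here it is placed 30 degrees to the sensor's left.
    print(point_from_laser_return(66.7e-9, math.radians(30)))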

Ultrasonic sensors are used for object detection, and these operate in a similar way to echolocation in marine animals. The sensor emits sound at a frequency humans cannot hear (usually greater than 30 kilohertz). The time it takes for the sound wave to be reflected by an object and received back by the sensor can be used to calculate the distance of that object relative to the sensor, using the equation:

Distance = ½ × T × c (where T is the time taken for the sound wave to travel to the object and back, and c is the speed of sound in air).

Ultrasonic sensors can be more effective and accurate than radar and LIDAR as they are not affected by colours. However, if a material is absorbent or reflects the sound wave in an irregular way, the data gathered by the ultrasonic sensor will be inaccurate and therefore unreliable (Cook 2020).
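The same equation can be written as a short sketch (illustrative names; the 343 m/s figure assumes dry air at roughly 20 °C):

    SPEED_OF_SOUND = 343.0  # metres per second in dry air at about 20 degrees C (assumed)

    def ultrasonic_distance_m(round_trip_time_s):
        """Distance = 1/2 x T x c, where T is the echo's round-trip time."""
        return 0.5 * round_trip_time_s * SPEED_OF_SOUND

    # An echo received 0.01 s after emission implies an object about 1.7 m away.
    print(ultrasonic_distance_m(0.01))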

