
Autonomous vehicles

An autonomous vehicle uses three different types of radar when operating: short-range radar (SRR), medium-range radar (MRR) and long-range radar (LRR). SRR is responsible for collision proximity warnings and parking assist. MRR monitors blind spots around the vehicle, observes other vehicles changing lanes, and is used to avoid side and corner collisions. LRR is focused on the front of the car and is used for adaptive cruise control (ACC) as well as early collision detection (Carpenter 2018).

Light detection and ranging (LIDAR) operates on the same time-of-flight principle as ultrasonic sensors. A laser is beamed towards the surroundings: the time taken for a beam to travel back determines the range of an object, the direction from which the light reaches the receiver determines the direction of the object, and the change in range over multiple pulses is used to calculate the object's velocity (a worked sketch of this calculation appears at the end of this section).

An infrared (IR) sensor has a receiver which picks up the infrared radiation emitted by any object within range. These sensors allow thermal cameras to see in complete darkness, as well as in fog and other conditions that would limit the performance (and accuracy) of LIDAR and of cameras that rely on visible light. In rural conditions, thermal cameras can reliably identify a human from 200 metres away, four times the reach of standard headlights (FLIR 2019). Though IR sensors may be highly effective in conditions of limited visibility, silicon-based sensors have poor sensitivity at the wavelengths IR requires: the wavelength must be greater than 850 nm to prevent damage to the eye, but even this light can reach the retina, so exposure must be controlled (Thakur 2017).

Cameras are arguably the most important piece of equipment, being crucial for ACC, adaptive high beam, automatic emergency braking, lane departure warning, blind spot detection and other functions. Exterior infrared cameras, however, are limited in performance: they act less reliably in the dark, as the sensors would need more infrared light to be emitted. Emitting the large number of infrared photons this demands raises eye-safety concerns, since different materials reflect incident infrared light at different intensities. On top of this, the more infrared photons emitted, the higher the energy output of the sensors, resulting in increased battery consumption (Thakur 2017).

For fully autonomous vehicles, interior cameras are needed to monitor the driver. The vehicle must give the driver an unmistakable warning when switching into autonomous (level 5) mode, whilst ensuring they are ready to leave the driving to the computer. These interior cameras also monitor the physical 'state' of the driver. Facial recognition technology can detect drowsiness and fatigue, prompting the car to produce warning sounds and visuals or even to pull over safely (one common cue, the eye aspect ratio, is sketched at the end of this section). Moreover, if the driver is distracted, the cameras will identify that their gaze is not on the road and alert them. This could be the solution to minimizing the lives lost to the growing number of distracted drivers: 3,142 lives were lost to distracted driving in the U.S. in 2019 (Distracted Driving 2019).

Exterior cameras have high enough resolution to process external variables, such as a dog, a baby, or even a pothole.
But the camera must be able to capture the object in time for the algorithms in the autonomous vehicle to recognize and classify it, while leaving enough time for the vehicle to take evasive action (a rough latency budget is sketched at the end of this section).

On-board computers can be thought of as the 'brains' of any vehicle. They hold data and can alter the vehicle's physical performance (such as power) based on pre-determined settings inputted by the manufacturer. Current production cars using just cameras and radar generate around 6GB of data every
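The time-of-flight calculation behind LIDAR ranging, referred to above, is simple enough to sketch directly. The echo times, pulse interval and target distance below are illustrative values only, not figures from any real sensor.

C = 299_792_458.0  # speed of light in m/s

def range_from_echo(round_trip_s):
    # The pulse travels out and back, so the range is half the round trip
    return C * round_trip_s / 2.0

def radial_velocity(trip1_s, trip2_s, pulse_gap_s):
    # Positive when the object is closing on the sensor
    return (range_from_echo(trip1_s) - range_from_echo(trip2_s)) / pulse_gap_s

# A target 30 m away returns an echo after roughly 200 nanoseconds;
# a second pulse 10 ms later finds it 0.1 m closer
echo_1 = 2 * 30.0 / C
echo_2 = 2 * 29.9 / C
print(range_from_echo(echo_1))                # 30.0 (metres)
print(radial_velocity(echo_1, echo_2, 0.01))  # ~10.0 (m/s, towards the car)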
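One cue often used for the drowsiness detection described above is the eye aspect ratio (EAR): the ratio of eye height to eye width, which collapses towards zero as the eyelids close. This is only an illustrative sketch of that idea; the hard-coded landmark points stand in for the output of a facial-landmark detector, and the 0.2 threshold is an assumed tuning value.

import math

def eye_aspect_ratio(eye):
    # eye: six (x, y) landmarks around one eye, outer corner first,
    # as in the widely used 68-point face model
    d = math.dist
    vertical = d(eye[1], eye[5]) + d(eye[2], eye[4])
    horizontal = d(eye[0], eye[3])
    return vertical / (2.0 * horizontal)

open_eye = [(0, 0), (2, 2), (4, 2), (6, 0), (4, -2), (2, -2)]
closed_eye = [(0, 0), (2, 0.3), (4, 0.3), (6, 0), (4, -0.3), (2, -0.3)]

print(eye_aspect_ratio(open_eye))    # ~0.67: eyes open
print(eye_aspect_ratio(closed_eye))  # ~0.10: below the assumed 0.2 threshold

In practice the ratio would be tracked over consecutive frames, so that an ordinary blink does not trigger a warning.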
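To see why capture-and-classify speed matters, the following rough budget estimates how far a car travels between an object entering the camera's view and the vehicle coming to rest. Every latency, and the deceleration figure, is an assumed illustrative value.

def distance_to_stop(speed_ms, frame_s=1 / 30, perception_s=0.1,
                     actuation_s=0.2, decel_ms2=7.0):
    # Distance covered while the pipeline runs: capture a frame,
    # classify the object, then engage the brakes
    pipeline_dist = speed_ms * (frame_s + perception_s + actuation_s)
    # Braking distance under constant deceleration: v^2 / (2a)
    braking_dist = speed_ms ** 2 / (2.0 * decel_ms2)
    return pipeline_dist + braking_dist

print(distance_to_stop(50 / 3.6))  # ~18.4 m from 50 km/h

At higher speeds the v^2/(2a) braking term quickly dominates, so shaving milliseconds off perception matters most in low-speed urban scenarios.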
