Neural networks
Justice
One interesting new application of neural networks is in the justice system. Using a computer to determine the punishment of a criminal can seem almost dystopian but, ethical concerns aside, using computers to assist the judicial process and the prevention of crime could make the process fairer and freer from human bias. Most states in America already encourage or even mandate judges to use algorithms to help inform sentencing decisions (Nishi, 2019), and the same is true in many Western countries. Most actuarial models take a small number of predicting factors, such as age, past convictions and behavioural traits, and then use logistic regression (a type of traditional statistical model) to estimate the likelihood of recidivism (Min Yang, 2010). These models depend on a homogeneous population, specific predicting factors and large data sets, which caps the accuracy of most models at around 70%. Recent studies have shown that neural networks can surpass this ceiling even with a less homogeneous population and a smaller data set: they can take many more predicting factors, including seemingly irrelevant ones, and form a much more accurate model, particularly for young and violent offenders (Min Yang, 2010). This reduces the cost of unnecessarily incarcerating inmates who will not reoffend, while protecting the public from those who almost certainly will.
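To make the contrast concrete, the short Python sketch below fits both kinds of model, a logistic regression and a small neural network, on purely synthetic data. The feature count, sample size and any accuracies it prints are illustrative assumptions and are not taken from the studies cited above.

    # Illustrative comparison of a logistic regression and a small neural
    # network on synthetic "recidivism-style" data (no real offender data).
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    # Each row stands in for an offender, each column for a predicting factor
    # (age, past convictions, behavioural scores, ...). Entirely synthetic.
    X, y = make_classification(n_samples=2000, n_features=20, n_informative=8,
                               random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    logistic = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    network = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000,
                            random_state=0).fit(X_train, y_train)

    print("logistic regression accuracy:", accuracy_score(y_test, logistic.predict(X_test)))
    print("neural network accuracy:     ", accuracy_score(y_test, network.predict(X_test)))

Whether the network actually beats the simpler model depends entirely on the data, which is the point the cited study makes about population size and predicting factors.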
Aviation
Flight safety is already an area in which computing has had a big effect. All new passenger planes are controlled almost entirely by in-flight computers, with minimal input from the pilots, and many major airports are fitted with autoland systems that allow the plane to land itself on a runway designated by air traffic control. Currently, humans programme the software that takes inputs from the plane's avionics and outputs instructions to the plane's control surfaces. With the growth of artificial intelligence, researchers have asked whether the autopilot systems on modern planes could be controlled by a neural network instead of a human-programmed system. Research over the last decade suggests that this may soon be possible (Khosravani, 2012). Obviously, the trial-and-error style of training a neural network will not work with an actual plane, so simulations must be used. With simulations accurate enough to train a neural network to control a real aircraft in flight, current autopilot systems could be replaced with far more responsive and adaptive ones, which might eventually replace the pilot altogether. However, the black-box nature of neural networks poses a serious limitation: such a system could be unpredictable and dangerous if improperly tested.
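As a very rough sketch of the idea, the snippet below trains a small neural network to imitate a hand-written control rule on simulated avionics readings. The inputs, the control law and all of the numbers are invented for illustration; a real system would be trained against a high-fidelity flight simulator, not a one-line formula.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Toy "simulator" data: avionics readings in, elevator command out.
    rng = np.random.default_rng(0)
    pitch_error = rng.uniform(-10, 10, 5000)    # degrees from desired pitch
    speed_error = rng.uniform(-30, 30, 5000)    # knots from desired airspeed
    X = np.column_stack([pitch_error, speed_error])
    # Hypothetical control law standing in for the simulator's "correct" response.
    y = np.clip(-0.05 * pitch_error + 0.01 * speed_error, -1, 1)

    # Train a small feed-forward network to map readings to a control-surface command.
    controller = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000,
                              random_state=0).fit(X, y)

    # The trained network can then be queried like an autopilot controller:
    print(controller.predict([[5.0, -10.0]]))   # command for one sample flight state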
Cryptography
Neural networks are typically not good at cryptography, but an experiment by researchers at Google Brain managed to get two neural networks to design their own encryption system, which a third network could not break (Martin Abadi, 2016). The three neural networks were known as Alice, Bob and Eve, fictional characters used as placeholders in cryptography. Alice was trained to encrypt and send secret messages to Bob; Bob was required to decrypt the messages; and Eve was trained to decipher the messages without the key. Bob managed to decrypt 80% of the messages sent by Alice, whilst Eve only managed to decrypt around 20% of the intercepted messages on average. The big problem with this experiment is that, because of the black-box problem, we cannot know what encryption method the networks actually devised. This could be viewed as an advantage, because a scheme that nobody can inspect is also harder for an attacker to analyse.
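The training set-up can be sketched along the following lines, here with small fully connected networks standing in for the architecture described in the paper; the message length, layer sizes, learning rate and loss weighting are illustrative assumptions rather than the published configuration.

    import torch
    import torch.nn as nn

    N = 16  # bits per plaintext and per key (toy size, an assumption)

    def bits(batch, n):
        # Random messages/keys encoded as -1/+1 bit vectors.
        return torch.randint(0, 2, (batch, n)).float() * 2 - 1

    def net(in_dim, out_dim):
        # Small fully connected network standing in for the paper's architecture.
        return nn.Sequential(nn.Linear(in_dim, 2 * in_dim), nn.ReLU(),
                             nn.Linear(2 * in_dim, out_dim), nn.Tanh())

    alice = net(2 * N, N)   # plaintext + key -> ciphertext
    bob   = net(2 * N, N)   # ciphertext + key -> recovered plaintext
    eve   = net(N, N)       # ciphertext only  -> guessed plaintext

    opt_ab = torch.optim.Adam(list(alice.parameters()) + list(bob.parameters()), lr=1e-3)
    opt_e  = torch.optim.Adam(eve.parameters(), lr=1e-3)

    for step in range(5000):
        # Alice/Bob update: Bob should decrypt well, Eve should do no better than chance.
        p, k = bits(256, N), bits(256, N)
        c = alice(torch.cat([p, k], dim=1))
        bob_err = (p - bob(torch.cat([c, k], dim=1))).abs().mean()
        eve_err = (p - eve(c)).abs().mean()
        # For -1/+1 bits an average error of 1.0 corresponds to random guessing,
        # so this term pushes Eve towards chance-level performance.
        loss_ab = bob_err + (1.0 - eve_err) ** 2
        opt_ab.zero_grad(); loss_ab.backward(); opt_ab.step()

        # Eve update: learn to decrypt from the ciphertext alone.
        p, k = bits(256, N), bits(256, N)
        c = alice(torch.cat([p, k], dim=1)).detach()
        loss_e = (p - eve(c)).abs().mean()
        opt_e.zero_grad(); loss_e.backward(); opt_e.step()

The key design choice is the adversarial loss: Alice and Bob are rewarded when Bob reconstructs the plaintext and Eve does no better than guessing, while Eve is trained separately to reconstruct the plaintext from the ciphertext alone, which is what drives the two sides to invent and then harden an encryption scheme.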