Neural networks
without knowing what encryption algorithm was used, it could make it harder to break and could create a more secure communications protocol.
Neuromorphic Computing
The lack of computing power is the main thing holding back neural networks and machine learning at the moment. If we could run significantly faster and more efficient neural networks on current hardware, we would be able to make huge leaps forward in artificial intelligence. This is where the idea of neuromorphic computing comes in: creating a physical chip that holds the neural network, instead of a virtual neural network represented in memory on a conventional computer. The chip physically holds the neurons and their weights using a technology called a spiking neural network. Instead of a normal computer with a CPU, a special neuromorphic chip is used whose architecture embodies the neural network, much like an actual brain. Such a chip couldn’t be programmed for general-purpose use; it could only operate as a spiking neural network. Spiking neural networks have a similar architecture to an ordinary neural network, but instead of using mathematical activation functions or McCulloch-Pitts gates, they use spiking neurons, where activation is detected as a sudden change, or spike, in voltage (Architectures, 2019). Each neuron can fire independently of the others, changing the electrical state of the connected neurons. By coordinating the timing of these pulses, an effective neural network can be created (Maass, 2016). Future developments in neuromorphic computing and spiking neural networks could massively increase the power of AI systems and make deep AI much cheaper and more efficient.
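The firing behaviour described above can be sketched in software with a leaky integrate-and-fire neuron, a common simplified model of a spiking neuron. This is only an illustration of the idea (real neuromorphic chips implement it in analogue or digital hardware), and all parameter values here are arbitrary assumptions chosen for the example.

```python
def simulate_lif(input_currents, threshold=1.0, leak=0.9, reset=0.0):
    """Leaky integrate-and-fire neuron: integrate input each time step,
    emit a spike (1) when the membrane potential crosses the threshold,
    then reset the potential."""
    potential = 0.0
    spikes = []
    for current in input_currents:
        potential = potential * leak + current  # leaky integration
        if potential >= threshold:
            spikes.append(1)   # sudden change ("spike") in voltage
            potential = reset  # the neuron resets after firing
        else:
            spikes.append(0)
    return spikes

# A steady input slowly charges the neuron until it fires periodically.
print(simulate_lif([0.3] * 10))  # [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
```

In a network of such neurons, each spike would feed current into the neurons downstream, so information is carried in the timing of the pulses rather than in continuous activation values.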
Bias
A number of studies have shown that gender and racial bias is a serious problem in AI. There have been many high-profile examples, such as the Apple credit card, which consistently gave much lower credit limits to women (The Verge, 2019). Part of this bias comes from the data used to train the weights in neural networks: any bias present in the training data can be reflected in the decisions the network makes. This is why it is important to assess the reliability of the training data used for a neural network, to prevent bias leaking into the decisions made by the model. Currently, the development of new digital technologies is done predominantly by white men, meaning certain technologies can be less suited to other demographics. For instance, it has been shown that machine-learning-based facial recognition systems on mobile phones are less effective for people with darker skin or facial hair (Joy Buolamwini, 2018). It has also been shown that the natural language interpreter software in mobile voice assistants is less accurate for women because the neural network is trained more on men’s voices (Mark West, 2019). When developing voice recognition software, it is important to assess the diversity of the training recordings, accounting for different accents and the different registers of men’s and women’s voices.
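One practical way to catch the kind of bias described above is to break a model's accuracy down by demographic group rather than reporting a single overall figure. The sketch below assumes a list of (group, true label, predicted label) records; the toy data and the accuracy gap it shows are invented for illustration, and a real audit would use the model's actual predictions on a held-out test set.

```python
from collections import defaultdict

def accuracy_by_group(records):
    """Compute per-group accuracy from (group, truth, prediction) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, truth, prediction in records:
        total[group] += 1
        if truth == prediction:
            correct[group] += 1
    return {group: correct[group] / total[group] for group in total}

# Toy predictions: the model is right 3/4 of the time for group A but only
# 1/2 of the time for group B; this is the kind of gap an audit should flag.
records = [
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 1, 0),
    ("B", 1, 0), ("B", 0, 0), ("B", 1, 1), ("B", 0, 1),
]
print(accuracy_by_group(records))  # {'A': 0.75, 'B': 0.5}
```

A large gap between groups suggests the training data under-represents one of them, which points back at the diversity checks on training recordings and images that the text recommends.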
The Black Box Problem
A major problem with neural networks and most machine learning algorithms is that they are considered ‘black boxes’: it is possible to know and verify the output of a neural network (Judith E. Dayhoff, 1999), but it is impossible to know how or why it arrived at that output. The internal workings and logic of a neural network are generally unknowable, similar to an actual human brain.