IMGL Magazine January 2024

ARTIFICIAL INTELLIGENCE

Artificial Intelligence (AI) is a broad discipline and is of understandable concern to many. Much of AI is either misunderstood or not understood at all and, as humans, we have a fear of the unknown. One element of AI that is attracting a great deal of attention is Generative AI, i.e. technology which can apparently produce its own content. Examples include applications such as ChatGPT, which can create text content, and other technologies which can output unique image and/or video content, such as deepfakes. Whilst there is a fear of the unknown, it is hoped that, with greater understanding of how AI works, we will embrace it for its benefits while understanding its limitations and risks. In this article we will explore some of the more practical applications for AI that are well positioned for use in the regulated gaming industry and which have similar characteristics to those used in other regulated industries such as Banking and Medicine.

Machine Learning is a part of AI and, on the surface, the name alone can instill fear and pose many questions. How does a machine learn? Can we understand how it makes its choices? Interrogating how a decision is made, and the factors driving it, can be as important as, and sometimes more important than, the decision itself. It is a common misconception that the use of AI will remove the human element and our ability to understand how and why decisions are made. A computer, they say, is making human-like decisions but without human insight or discretion. Whilst this can certainly be true, it does not have to be the case.

For Machine Learning to occur we need three essential ingredients: good data, algorithms – the set of instructions given to the machine – and computing power. Applications in the Gaming Industry would therefore appear to stand a good chance of success, as the industry has good quality data which can be exploited by Machine Learning for a number of noble objectives. This data is, however, extremely sensitive and proprietary to operators, who are understandably reluctant or unable to share it due to commercial and privacy regulatory concerns. A dichotomy exists between the principle of data minimization, driven by privacy and cybersecurity concerns, and the feature-rich data required for effective training of AI models.

As we will go on to see, AI has the potential to do a great deal of heavy lifting in areas such as Prediction and Classification. We will discuss both in this article, but let's first think about the notion of AI being a 'black box'. Many worry that the inner workings of AI, particularly when it comes to making predictions or classifications, operate without recourse to human judgement, or that, over time, the machines will become smarter than our ability to interrogate them. Whilst there may be some truth to this, it is important to recognize that such opacity is not an inherent requirement of AI systems. It is more helpful to think of AI as a vast range of tools and methodologies, in some cases using fundamental probability and statistics going back to the 1700s, which are implemented in software to allow incredibly fast and numerous computations. As creators of AI, we are capable of designing and deploying highly interpretable systems that facilitate easy comprehension and provide significant insights into the data and the decision-making process. However, this level of clarity and transparency is not something that happens by default. It must be intentionally integrated as a critical requirement from the outset of the development process.
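To make the point concrete, the following is a minimal sketch of what such an interpretable system can look like in practice, assuming the open-source scikit-learn library. The feature names, data, and review label are hypothetical illustrations invented for this example, not any operator's real schema.

    # A minimal sketch of an interpretable classifier using scikit-learn.
    # All feature names and values below are hypothetical assumptions.
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Hypothetical anonymized session features:
    # [deposits_per_week, avg_session_minutes, night_play_ratio]
    X = [
        [1, 30, 0.1],
        [2, 45, 0.2],
        [9, 240, 0.8],
        [8, 200, 0.7],
        [3, 60, 0.3],
        [10, 300, 0.9],
    ]
    y = [0, 0, 1, 1, 0, 1]  # 1 = flag for responsible-gaming review

    # A shallow tree keeps every decision path short and readable.
    model = DecisionTreeClassifier(max_depth=2, random_state=0)
    model.fit(X, y)

    # export_text prints the learned rules in plain language, so the
    # basis for each classification can be read and challenged.
    print(export_text(model, feature_names=[
        "deposits_per_week", "avg_session_minutes", "night_play_ratio"]))

The printed rules read as a series of if/then thresholds. That is precisely the kind of artifact a compliance team or a regulator can inspect, question, and align to policy, which is what interpretability by design means in practice.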

Clearly defining the desired outcome

No practical deployment of AI can deliver usable results without a clear definition of the problem to be solved and the desired outcome. This is especially important if the model is required to have high interpretability and transparency. In regulated industries it is expected that we should understand how and why decisions are being made and that this be aligned to policy. This is perfectly possible in an AI-powered system; however, it should be addressed early in the design stage. Doing so will lead to different choices of algorithms and methodologies and, in some cases, may mean sacrificing some efficiency and accuracy.

One factor that is often overlooked is how the model will be tested and the range of scores that are deemed acceptable. Data is split into two elements: training data – anonymized data used to educate the AI as to the range and nature of behaviors given certain situations or stimuli – and testing data, which is held back from training and used to evaluate how the model performs on data it has never seen. Generally speaking, 70 to 80 percent of the data is used for training, with the remainder used for testing. The science behind splitting the data can have a significant impact on the final design and performance of the AI model.

Models may perform Prediction or Classification, and the learning may be supervised or unsupervised. This will depend on what it is we are seeking to achieve and what data we have.
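As an illustration of the split described above, here is a minimal sketch using scikit-learn, holding back 20 percent of the data for testing. The synthetic dataset and the choice of model are assumptions made for the example, not a reference implementation.

    # A minimal sketch of the training/testing split described above,
    # assuming scikit-learn. The 80/20 ratio and the synthetic data
    # are illustrative assumptions only.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # Stand-in for anonymized, feature-rich operator data.
    X, y = make_classification(n_samples=1000, n_features=8, random_state=0)

    # Hold back 20 percent of the data; the model never sees it during
    # training, so the test score estimates behavior on unseen cases.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0)

    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, y_train)  # supervised learning: labels are provided

    score = accuracy_score(y_test, model.predict(X_test))
    print(f"Accuracy on unseen test data: {score:.2f}")

This is the supervised case: the labels y are known in advance and the model learns to predict them. An unsupervised variant would drop the labels and look for structure in X alone, for example with a clustering algorithm. In practice the split is also rarely a simple random cut; stratifying by outcome or by time period is part of the science behind splitting the data referred to above.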
