Conference proceedings, Zurich 2023

pre-registered your passwords, do you feel secure or not? You feel secure because you’re not asking yourself the question. So online is a different animal than offline. In the land-based gaming industry, where are most of the servers? On premise. If they’re on premise, how secure are they? They’re not. So if you look at a company like Microsoft Azure or AWS or IBM, they’re spending literally hundreds of millions of dollars every year on security packages. If we’re trying to catch up in regulation as an industry, can you imagine how operators are trying to catch up with their security packages, their firewalls, the licenses; they’re trying to re-educate every month, to reinvent themselves to be more secure. It is impossible to be secure in the land-based sector if the servers are on property, just impossible.

IH: So obviously, cybersecurity is a big issue, protection and privacy of information. But from what I hear, we’re talking about the information being stored on your phone or in the cloud, and then being sent, so you don’t have to fill in multiple forms each time; you can send the data to whatever platform or system. That requires a lot of APIs, application programming interfaces, a lot of connections between different systems, sometimes across different jurisdictions. And to what Joseph was saying earlier, you’re right, we’re talking about seas of data, massive amounts of data that are just impossible for a human to sort through. But more importantly, and this is where we’re seeing some legal cases, some prosecutions and some bad media coverage, decisions are being made automatically. We get the information, we’ve got the analysis, and what happens is the operator sees these alert flags and it’s impossible to sort through them, because they’re getting alerts by the minute on all these false positives, as we call them. So the system will then make an automated decision, either to block play or to escalate at the time of play.
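The triage pattern described here, where an automated layer sorts a flood of alerts and only the clearest cases trigger immediate action while the rest go to an analyst queue, can be sketched roughly as follows. This is a minimal illustration only; the names, thresholds, and risk-score scale are all hypothetical and do not reflect any operator's actual system.

```python
# Illustrative sketch: hypothetical alert-triage logic, not a real vendor API.
# The automated layer pre-sorts alerts; a human queue owns the final decision.
from dataclasses import dataclass

@dataclass
class Alert:
    player_id: str
    score: float  # assumed 0.0-1.0 risk score from the detection layer

def triage(alerts, auto_threshold=0.9, review_threshold=0.5):
    """Auto-escalate only very high scores; queue the middle band for humans."""
    escalated, human_queue, dismissed = [], [], []
    for a in alerts:
        if a.score >= auto_threshold:
            escalated.append(a)    # immediate action, e.g. pause play
        elif a.score >= review_threshold:
            human_queue.append(a)  # an analyst reviews before any decision
        else:
            dismissed.append(a)    # treated as a likely false positive
    return escalated, human_queue, dismissed
```

The thresholds control exactly the trade-off the panel raises: set them too low and analysts drown in false positives; set them too high and the system starts making consequential decisions with no human in the loop.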
In the gaming space that may not have huge ramifications, but we are seeing it in other industries. If we look at, say, the mortgage or banking sector, loans can be denied automatically by the system. What protections should the gaming industry be looking at, and what potential risks will there be, if we move a step beyond analysis to the system making an automated decision about whether the player can continue, that is, to accept a bet or not accept a bet?

JB: It is important to realize that AI is a tool; it’s not the end. So AI will do 80% or 90% of the work, but the final decisions have to be taken by a human. So if you’re looking at the transactions of a player and you want to see whether the player actually has a gambling problem or is a money launderer, it’s ultimately the same data that you’re looking at, and probably the same red flags. So, this person said that they earn 60k per year when they were putting their data into the application form. So if this person is playing 100k, there’s something wrong, whether it’s from a money laundering perspective or from a Responsible Gaming perspective. It’s one or the other, and that will raise the red flag. Then obviously, it’s the role of the person who will assess the data to decide: okay, what shall I ask for now? Shall I ask them if they have other sources of funds?

IH: I’ve seen many applications where those transactions, or those suspicious activity reports, are being sent directly to FinCEN or AUSTRAC without any human intervention.

JB: But at least in Europe, authorities are taking action against notifications that are not researched enough. So one of the focuses now by AML authorities in particular is to ensure that the SAR that is being raised is something that has been really researched and is really detailed.
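The concrete red flag JB describes, declared income of 60k against 100k of play, is a simple affordability check. A minimal sketch, with hypothetical function and parameter names, might look like this; note it only raises the flag for human review, matching the panel's point that the final decision belongs to a person:

```python
# Illustrative sketch: a simple affordability red flag, hypothetical names.
def affordability_flag(declared_income: float, yearly_stakes: float,
                       ratio: float = 1.0) -> bool:
    """Return True when stakes exceed declared income (scaled by `ratio`).

    A True result routes the player to an analyst, who decides what to ask
    next (e.g. request proof of other sources of funds). It never blocks
    play on its own.
    """
    return yearly_stakes > declared_income * ratio

# JB's example: declared 60k, playing 100k -> flag for review.
affordability_flag(60_000, 100_000)
```

In practice the same check serves both perspectives JB mentions: the flag is identical whether the underlying concern turns out to be money laundering or a Responsible Gaming issue.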
If you’re going to send a report just for the sake of raising it, to cover your back, then obviously you are going to get fined by that authority anyway. It’s a short cut, it’s cutting corners; you won’t get anywhere like that. But AI will help improve that.

RK: I totally agree. We see our solution CamScanner, the AI solution, as a decision-making tool, as support for the compliance and RG people to do proper messaging or intervention at the stage where there is a flag because there is suspicious behavior, for example. And actually it goes both ways. Because if you have a less accurate detection solution as a foundation, with a lot of false positives and false negatives, then you do interventions where they were not needed, but you also likely have to do other interventions to protect yourself. And both are actually not great, because you risk annoying the customer, but of course you may also miss some interventions, and that’s what we work with in terms of handling the operational challenge. So we can at an earlier stage with a more individualized and granular
