October 11, 2023

  • Phil Savage, Head of European Affairs, IMGL

Artificial Intelligence in gaming

It’s the talk of the conference circuit, but what are the applications of AI in the world of land-based and online gambling?

The launch of ChatGPT in November 2022 catapulted Artificial Intelligence into public consciousness. Depending on who we choose to believe, the apparently limitless power of AI represents either the brightest future imaginable or the gravest threat to mankind. Amid the hype and hyperbole, there are three areas where most commentators agree: we are at an inflection point – we will look back on this period as the time when everything changed; no area of human activity is likely to go untouched by AI; and most applications of the technology have yet to be thought of or developed.

As a highly technology-driven sector, gaming is already in the vanguard of AI adoption, and some companies have been investing in applications for years; for them, the surge of interest is likely to be a game changer. A quick glance at the conference programs of this autumn’s industry events shows there is no shortage of people promoting AI-driven applications or prepared to unwrap their crystal balls and predict the future. This article attempts to capture the consensus view of how AI will impact the gaming industry and the legal and regulatory challenges that will follow.

Big data: patterns and human behavior

Most commentators agree that ChatGPT and similar generative AI products represent quite a basic use of the technology (by some definitions, they are not, strictly speaking, AI products at all). Producing A-grade essays on any subject in seconds is astounding and will be a challenge for universities and school examiners, but it is not truly intelligent. At this relatively embryonic stage, the power of AI lies in its ability to process vast quantities of data in timeframes that are completely beyond the reach of humans. Not only can it parse the data, it can extract useful patterns from it. It can spot variations in behavior, correlate them with behaviors that fall outside the range its instructors deem normal, and build its own list of data points which it can then use to spot abnormal behavior in future.

AI algorithms have already been developed which can detect unusual patterns in online gambling that indicate likely money laundering and fraud. Data points derived from the decision-making processes followed by psychologists can also be used to detect problem gambling. Where these systems have been deployed, the results concur with the psychologists’ own human assessments in over 88 percent of cases, and that figure is rising. It will probably never reach 100 percent, given that psychologists will always have differences of opinion among themselves. The algorithm’s findings are statistical rather than causal: it spots behaviors that fall outside accepted norms, so one of the challenges lies in applying context to them. For example, sports betting in the middle of the night could raise a red flag, but the customer may simply have changed their shift pattern or be betting on sports taking place on the other side of the world if local sports are not available. Whilst its abilities are still improving as reference data grows, it is already clear that AI will make it possible to monitor many more transactions. Indeed, such are the responsible gambling (RG) requirements now placed upon operators that many say they can only be met through AI.
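The statistical approach described above can be sketched in a few lines. The code below is a minimal illustration only, not any operator’s system: the baseline figures, metric names and threshold are invented for the example. It scores a player’s recent metrics against population norms and flags anything that falls outside the accepted statistical range.

```python
from statistics import mean, stdev

# Hypothetical population baselines, invented for illustration only
BASELINE = {
    "session_minutes": [35, 42, 28, 50, 40, 33, 45, 38],
    "deposits_per_week": [1, 2, 1, 3, 2, 1, 2, 2],
}

def z_score(value, sample):
    """How many standard deviations a value sits from the sample mean."""
    return (value - mean(sample)) / stdev(sample)

def flag_anomalies(player_metrics, threshold=3.0):
    """Return the metrics that fall outside the accepted statistical range."""
    flags = {}
    for metric, value in player_metrics.items():
        z = z_score(value, BASELINE[metric])
        if abs(z) > threshold:
            flags[metric] = round(z, 1)
    return flags

# A player whose deposit frequency has jumped far above the norm
print(flag_anomalies({"session_minutes": 44, "deposits_per_week": 9}))
```

Note that, exactly as the article describes, the output is purely statistical: the flag says the behavior is unusual, not why, which is where human context must be applied.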

ChatGPT is essentially a glorified chatbot, so it is no surprise that work is now being done on AI-supported chatbots on gaming sites which can pop up and intervene if problematic patterns are observed. This is not an approach favored by everyone, but it can be appropriate for players early in their slide into addiction. Either way, AI is doing a lot of the heavy lifting. If over 80 percent of the work can be done in the background, human resources are freed up to intervene where they are most needed. When a member of the RG team contacts a customer, AI should give them a much richer picture with which to explain why the intervention is being made, hopefully making it more effective. It is a decision-making tool designed to support those making interventions. As the technology becomes more sophisticated and accuracy increases, the number of false positives will fall, keeping customers happy while prompting fewer but more necessary interventions.

The patterns associated with problem gambling, for example players losing control, playing more rapidly or chasing losses, can be detected in new or inexperienced players well before their behavior would normally be defined as problematic. When these early patterns are observed, they can indicate a player who will go on to develop a gambling addiction. Flagging such players does not mean they are automatically excluded from a platform, but it does mean they can be steered away from riskier forms of gambling, keeping them in a safe zone where they can exist happily as long-term customers.
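Early-warning patterns like these reduce, in the simplest case, to rules over recent session data. The sketch below is purely illustrative, with invented field names and thresholds, but it shows the shape of the two signals the article names: accelerating play and loss chasing.

```python
def early_warning_flags(sessions):
    """Flag early risk patterns in a chronological list of sessions.

    Each session is a dict with hypothetical fields: `bets_per_minute`
    and `deposit_after_loss` (did the player re-deposit right after losing?).
    """
    flags = []
    # Accelerating play: the latest session is much faster than the first
    if sessions[-1]["bets_per_minute"] > 2 * sessions[0]["bets_per_minute"]:
        flags.append("accelerating play")
    # Loss chasing: repeated re-deposits immediately after losing sessions
    chase_count = sum(1 for s in sessions if s["deposit_after_loss"])
    if chase_count >= 3:
        flags.append("possible loss chasing")
    return flags

history = [
    {"bets_per_minute": 2, "deposit_after_loss": False},
    {"bets_per_minute": 3, "deposit_after_loss": True},
    {"bets_per_minute": 4, "deposit_after_loss": True},
    {"bets_per_minute": 5, "deposit_after_loss": True},
]
print(early_warning_flags(history))
```

In line with the article’s point, a non-empty result here would trigger steering, such as limiting access to faster game formats, rather than exclusion.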

As well as AML and RG, AI can help operators better target their marketing and retention strategies, for example by providing customised player experiences which will optimise revenues. The flipside is also true, with the Advertising Standards Authority in the UK using AI to identify gambling adverts on social media which may break its rules. AI can also help bookmakers set better odds and even monitor athletes visually for signs of fatigue or injury which could impact their performance.

Big data: privacy, automated decision making and limitations on use

The use cases above show the first steps in applying AI in the world of gambling, and there are sure to be many more that have yet to be dreamt up. Even at this early stage, however, there are those raising issues both regulatory and real world which will place limitations on what the future might look like and how quickly we will get there.

All of the examples given so far rely on AI’s ability to spot patterns in vast quantities of data. That may be fine in the online world, where customers provide forms of ID linked to their game play habits and bank records. It is a very different story in the world of land-based casinos, where cash is much more prevalent. Even where credit cards are used and linked to loyalty cards and other sources of data, this data is often stored onsite. Theoretically this might improve data security (although often not in reality), but the way it is stored, the frequency with which it is updated and the widely varying levels of quality mean the hygiene simply is not there for AI to be effective. Unless a sizeable investment is made in data quality and security, it will simply be a case of garbage in, garbage out. Regulators have for years treated online gambling as inherently riskier than offline, and it is certainly true that the physical requirement to be onsite is a limiting factor. But it is ironic that brick-and-mortar casinos may be left behind in the AML and RG revolution that AI promises to bring to their online counterparts. The chance to integrate AI-driven AML and RG tools should certainly be part of any decision to go cashless.

Quality of data aside, there are major issues of privacy and data ownership that will have to be addressed. Gaming companies will need to ensure that their use of AI complies with applicable privacy legislation in their jurisdiction, and this is changing rapidly. Certainly, the principle of data minimisation should be applied, so that the only data collected is that which is necessary for the purpose and it is retained only for as long as necessary to fulfil that purpose. The unique issues that AI can create will also need to be considered. AI relies on training data drawn from real-world players, and that personal information feeds, at least to some degree, into the AI’s later outputs. How this can be reconciled with the right to be forgotten under GDPR is not clear. The complexity and opacity of AI algorithms mean consumers may not have a sufficient basis on which to give informed consent to the use of their data and may subsequently lose control over it. The AI regulation proposed by the EU categorises AI by risk level, banning the riskiest applications and regulating some that are less harmful. Practices currently on the outlawed list include programs that use subliminal techniques to distort behavior or exploit vulnerable individuals. The difficulty with framing such regulations is that there are two sides to every coin: techniques that influence behavior can as easily be used to protect as to exploit, and there should not be a presumption that technology which identifies vulnerable individuals is necessarily exploitative. It will be interesting to see whether, if gaming is captured by the new rules, it is effectively prevented from deploying some player protections.

This dilemma highlights AI’s ability not just to identify individuals who need protection, but also those, often the same people, who can most easily be exploited. Taking a fundamentally opaque technology like AI and ensuring it is only ever deployed to the benefit of consumers will be a regulatory headache. For example, AI can be used to spot where a player is preparing to cash out and close their account and prompt the operator to send them a bonus offer to re-engage them. At some point, inadvertently or otherwise, this crosses over from being a legitimate marketing tool to a technique to distort behavior. Just when that point is reached may be different for every player and will be hard to establish in law.

When it comes to making decisions based on AI algorithms, caution is the watchword. As previously stated, AI spots correlations, not causality: its conclusions are statistical rather than demonstrating cause and effect. The variety of human behavior means there will always be outliers, and if a fully automated approach is taken, swathes of people will be excluded not just from gambling but from insurance, banking and many other services. Technology can do the heavy lifting, but a human should make and be able to explain the final decision, especially when those decisions will have a profound effect on an individual’s life.

Explaining the decision is made more difficult by the fact that AI tends to function as a ‘black box’, with even those behind the systems unable to know fully how decisions are made. This will get harder as the machine learning capabilities of AI come more into play. Hard as it may be, we must insist that AI be able to explain its decision-making process: that it show the data points it has counted and demonstrate where they correlate with particular outcomes. Only then can it report that it has proven links which guide its decisions. Ultimately, AI algorithms should be reviewable by a human, and we need to know how and where data is being processed and used.
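Purely as an illustrative sketch (the structure and field names here are invented, not drawn from any regulation or product), one way to make this concrete is to require that every automated flag carry the data points that produced it, with the final decision always recorded, and if necessary overridden, by a person.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Flag:
    """An AI-raised flag that must carry its own evidence (hypothetical)."""
    player_id: str
    reasons: list = field(default_factory=list)   # (data point, observed, norm)
    human_decision: Optional[str] = None          # the final call rests with a person

def review(flag, decision, note):
    """A human records the final decision and can override the algorithm."""
    flag.human_decision = f"{decision}: {note}"
    return flag

f = Flag("player-123")
f.reasons.append(("deposits_per_week", 9, "norm 1-3"))
f = review(f, "no action", "customer confirmed a one-off deposit")
print(f.human_decision)
```

The design choice mirrors the article’s argument: the algorithm may never produce a decision without the evidence behind it, and the record of the human’s final call is what makes the outcome reviewable.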

Game integrity will also need to be regulated and, especially where AI itself forms part of a game, there will need to be human oversight and monitoring to ensure that it is conducted fairly. Regulatory standards in jurisdictions around the world have fairness as a key objective, but in the context of AI-designed and operated games, this will need to be independently tested and verified.

Regulation & consumer confidence

The mechanics of different verticals are often built into their licenses, and this magazine has observed more than once the limitations placed on the advance of technology in gaming as a result. Where regulation is highly prescriptive, every tweak or iteration has to be certified. With a technology moving as fast as AI, this could be seen as either a block on innovation or a brake on a technology that is out of control, depending on the perspective of the observer. Either way, limited use of AI in gaming is unlikely to stand isolated from the tide of innovation if it continues to surge into so many other parts of our lives. Regulators who strive to be technology-agnostic, using terms like ‘behavior monitoring tools’ rather than defining these too closely, are likely to find their rules more future-proof than those who do not. Here, as in many other areas, a constructive dialogue with the industry will be vital. The good news is that the AI outputs which inform AML and RG decisions can also be harnessed to drive policy decisions, making it an exciting time for operators and regulators alike.

Aside from gaming, there are several notable moves to regulate AI which recognise its potential upsides and downsides. In Europe, Spain unveiled the Spanish Agency for the Supervision of Artificial Intelligence (AESIA) in August, making it the first EU country to establish an AI regulatory body. Germany, preferring to take an active role, announced a substantial AI Action Plan, with the Federal Ministry of Education and Research committing more than €1.6 billion to boost investment in AI research and skills training.

Elsewhere, copyright concerns have been the focus for a technology that is becoming increasingly sophisticated at generating text content and images. The United States Copyright Office is currently processing applications to register works containing AI-generated materials and has launched a comprehensive review of AI copyright infringement claims. It is consulting on a range of issues including the use of copyrighted works to train AI models, the required levels of transparency and disclosure, and the legal status of AI-generated outputs.

ChatGPT has carved out a place in public consciousness that is largely positive and fun. Students and journalists may have misbehaved, and there are dire warnings about the impact on jobs, but for the most part people seem curious about the possibilities. Heavy-handed lawsuits or a wave of scandals about exploitation could bring this honeymoon period to a rapid halt. Consumer awareness of big data may be limited, but that does not stop people from being deeply suspicious about, for example, dynamic airline ticket pricing, and that is not even a smart AI application. If, as an industry, the online gambling sector gained a reputation for using AI to exploit customers or target vulnerable groups, the initial curiosity would quickly turn hostile. It is the responsibility of the industry and its regulators to ensure this powerful technology is rolled out and regulated in ways that consumers understand and accept, because the consequences of not doing so would be felt far beyond gaming.