When a new technology appears or an existing one makes a big leap forward, it triggers a rush to find applications for it. So it is for artificial intelligence (AI), these days.
Big technology companies, Google in particular, have made AI breakthroughs in recent years. Many of these, like AlphaGo and other projects from Google’s DeepMind team, are proofs of concept rather than marketable products. Even so, they have people in other industries wondering whether the key to revolutionizing their respective fields lies in these technologies.
SBC held a digital summit this week, replacing what would have been an in-person gathering in Barcelona if not for the COVID-19 pandemic. One of the topics of discussion on Sept. 9 was using AI to improve profitability in the casino industry.
Américo Loureiro, director of the casino firm Solverde, led the discussion. In his speech, he forecast that in three years, no operator will be able to manage its business without using AI. Other business leaders shared their experiences developing and deploying AI for everything from designing promotions to building virtual customer service representatives.
Meanwhile, people both inside and outside the industry have mused about the potential for AI to help with responsible gaming. Norwegian state operator Norsk Tipping, for instance, believes a well-trained algorithm could identify and intervene with problem gamblers more effectively than a human team.
AI isn’t magic, though. Before counting on it to solve problems, it’s important to look at its limitations. Here are a few ways AI could go wrong for the gaming industry, if its proponents aren’t careful.
#1: It’s often quite opaque
There are many approaches to AI. The one generating most of the excitement these days is machine learning. Such algorithms take input data and produce output that, at first, is mostly random. But after each trial, they measure the quality of the result, or accept feedback from a human trainer. The algorithms then modify the intermediate steps to try to improve the output.
After many iterations of this process, the resulting algorithm may be better than a human at generating the desired output from a given set of data. However, all of the steps it uses to get from the input to the output are its own creation. Exactly how comprehensible they are to humans can vary.
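The trial-and-adjustment loop described above can be sketched in a few lines of Python. This is a toy illustration, not any real operator’s system: the algorithm has to discover a simple cutoff rule purely from score feedback, and the intermediate steps it takes are its own.

```python
import random

# Toy data: the hidden rule the algorithm must discover is "x > 5".
data = [(x, x > 5) for x in range(11)]

def score(threshold):
    """Feedback signal: how many examples this threshold classifies correctly."""
    return sum((x > threshold) == label for x, label in data)

# Start from a random guess, then keep any tweak that scores at least as well.
threshold = random.uniform(0, 10)
for _ in range(1000):
    candidate = threshold + random.uniform(-1, 1)
    if score(candidate) >= score(threshold):
        threshold = candidate

# After many iterations the threshold settles between 5 and 6; the path
# it took to get there was never designed by a human.
```

Here the learned rule happens to be easy to inspect, because it is a single number. Real machine learning models adjust millions of such numbers at once, which is where the opacity comes from.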
The more opaque algorithms are sometimes described as “black boxes.” If one of these gives a weird answer, it can be hard to tell if there’s a problem with the computer, or merely with humans’ ability to understand the reason for that answer.
Businesses relying too heavily on machine learning run the risk of becoming a cargo cult of sorts, following the machine’s instructions blindly. Human employees may not understand the reasoning for or importance of its recommendations, and make mistakes in executing them as a result.
#2: AI’s performance depends on the quality of the input
Take any computer programming course and you’re likely to hear the acronym GIGO at some point. This stands for “Garbage In, Garbage Out.” It’s a reminder that the results produced by even a perfect algorithm can only be as good as the data you provide it.
Computers don’t have what we would call “common sense.” Whatever sanity checks an algorithm contains depend on the programmer’s ability to anticipate others’ mistakes.
Try giving a human market analyst a weather report instead of a stock chart. They’ll tell you the data isn’t appropriate to the question you’re asking. A computer may happily crunch the numbers anyway and return nonsense for an answer.
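In code, the difference is often just a few lines of validation. A minimal sketch (the spend-data framing is invented for illustration):

```python
def average_spend(values):
    """Naive version: crunches whatever numbers it is given."""
    return sum(values) / len(values)

def average_spend_checked(values):
    """Same calculation, but with a basic sanity check on the input."""
    if not values or any(v < 0 for v in values):
        raise ValueError("input does not look like spend data")
    return sum(values) / len(values)

# Hand the naive version a weather report's temperatures by mistake:
temperatures = [-3.5, -1.0, 2.5]
nonsense = average_spend(temperatures)       # a meaningless "average spend"
# average_spend_checked(temperatures) raises ValueError instead.
```

The checked version only catches the mistakes its author thought to test for, which is exactly the limitation described above.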
In the case of the gaming industry, there’s a related problem in terms of the sheer number of variables. Some of these may not seem important enough to include in an AI’s data set, yet turn out to be critical later on.
Consider this hypothetical: perhaps the air conditioning in one corner of the casino is set too cold. A human manager might realize that’s the reason slots in that part of the floor are under-performing. A computer, however, might just replace the games ad nauseam. It may never identify the actual problem because the room temperature wasn’t considered relevant by the people assembling its data set.
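As a toy illustration of that blind spot (all figures invented), imagine a model whose data set records each machine’s revenue but not the room temperature:

```python
# Per-machine floor data; nobody thought to record temperature,
# so the cold NE corner is invisible to the model.
floor_data = [
    {"machine": 1, "corner": "NE", "revenue": 620},
    {"machine": 2, "corner": "NE", "revenue": 650},
    {"machine": 3, "corner": "SW", "revenue": 980},
]

def recommendation(row, target=900):
    # Underperformance can only be blamed on features the model can see.
    return "swap out game" if row["revenue"] < target else "keep"

actions = [recommendation(row) for row in floor_data]
# The two machines in the cold corner get swapped again and again,
# because the actual cause never appears in the model's world.
```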
#3: It can amplify unconscious or systemic biases
In the case of machine learning, the quality of data isn’t only important in the field. Even more important is the data used to train the algorithm in the first place. Unquestioned assumptions by the programmers can also creep in at that stage.
Already, we’ve seen the consequences of this in early applications of the technology in other fields. In one study, facial and voice recognition technologies were found to perform poorly with minorities who weren’t represented in their training data. AI for the gambling industry could run into similar difficulties if it doesn’t account for differences in socio-economic class, or gambling behaviors that have a cultural component to them.
Even real world data sets can be problematic. Norsk Tipping trained its responsible gaming algorithms using a combination of player behavior data collected through its sites and self-assessments. There’s a risk, however, that the self-assessments could select for players who’ve already realized they have a problem. As a result, the algorithm could end up missing those who are less aware, for whom intervention might be even more important.
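A toy sketch of how that selection effect can play out (features and numbers invented): because every labeled problem gambler in the training set was aware enough to self-report, and therefore perhaps also aware enough to try the site’s limit-setting tools, a naive learner can latch onto the wrong signal.

```python
# Hypothetical training rows: (weekly_hours, used_limit_tools, problem_label).
# Labels come from self-assessment, so every positive example is a player
# who was aware enough to have also tried the limit-setting tools.
training_set = [
    (40, True, True), (35, True, True),    # self-reported problem gamblers
    (6, False, False), (5, True, False), (8, False, False),
]

# This rule classifies the training set perfectly, so a naive learner
# has no reason to look any further:
def predict(hours, used_limit_tools):
    return used_limit_tools and hours > 20

assert all(predict(h, t) == label for h, t, label in training_set)

# An unaware heavy player never touched the limit tools and sails through:
missed = predict(45, False)   # False: judged fine, despite 45 hours a week
```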
#4: It can only follow rules to the letter
Related to its lack of “common sense,” AI also has no understanding of the intent behind a rule. It will follow whatever parameters you supply, but it will do so very literally. Sometimes that means results that are quite different from what a human would have expected.
One example is a machine learning algorithm created to optimize circuits. Some of its designs ended up featuring components completely disconnected from the rest of the circuit. These turned out to be necessary: their mere presence affected the electromagnetic fields produced by the rest of the circuit. It was quite a departure from what a human circuit designer would expect.
When it comes to a human system like the gambling industry, this creates a big ethical risk. As it is, there is already a problem in most industries with people justifying unethical business strategies on the basis that they comply with the letter of the law. AI designed to optimize revenue for a gambling business will likely come up with some rather predatory strategies.
If such technologies become widespread, it will increase the burden on regulators. They would need to be extremely precise and thorough in their rule-making, because AI is very good at skirting boundaries.
#5: It needs everything to be quantified
Of course, engineers can design an algorithm to optimize for multiple things at once. For instance, a profit-optimizing AI could also consider the risk of encouraging problem gambling. But to do so, it needs a way of comparing the two.
A human decision-maker would usually rely on their own subjective moral compass to weigh such things. For better or worse, a computer relies on numbers. It needs to know exactly how many dollars of profit it can sacrifice to save one person from potential gambling addiction.
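A minimal sketch of what that quantification looks like in practice (all dollar figures and harm scores invented): the optimizer simply maximizes a single weighted number, and a human has to pick the weight.

```python
# Candidate strategies: (expected_profit_dollars, expected_harm_score),
# where harm_score is a made-up estimate of at-risk players affected.
strategies = {
    "aggressive": (100_000, 4.0),
    "moderate":   (70_000, 1.0),
    "cautious":   (50_000, 0.2),
}

# The uncomfortable number: how many dollars of profit one unit of
# expected harm is "worth" giving up.
HARM_COST_DOLLARS = 20_000

def objective(profit, harm):
    return profit - HARM_COST_DOLLARS * harm

best = max(strategies, key=lambda name: objective(*strategies[name]))
# With harm priced at $20,000 per unit, "moderate" wins; price harm
# at $5,000 instead and the optimizer picks "aggressive".
```

The algebra is trivial; the hard part is that someone has to sign off on the constant.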
On the one hand, this is an important conversation that we need to have anyway. On the other, it’s a discussion that people are deeply uncomfortable with when it’s framed so bluntly.
None of these problems are insurmountable. However, using AI responsibly requires understanding its flaws. It’s one thing to develop a superhuman AI for a narrow and well-defined task like playing chess. Real world decision-making often requires a more holistic and subjective approach, something which computers are still quite bad at.
Used carefully and in a focused manner, AI will be a powerful tool for the gaming industry. It’s important, however, that we don’t demand more of it than it can provide, lest it lead us into some very foolish decisions.