Industry regulators will expect operators and other industry participants to have full visibility of how they are using artificial intelligence and how the technology interacts with customer information if they hope to avoid liability should something go wrong, according to a former Executive Director of Liquor & Gaming NSW.
Jane Lin, who until recently oversaw Regulatory Operations & Enforcement for the state’s gambling regulator, told attendees of the Regulating the Game conference in Sydney on Thursday that how operators use AI to sift through customer data was becoming an increasingly important focus for regulators in the gaming space, but warned that a failure to fully understand the technology was no longer considered an acceptable excuse.
“Explainability is really important when it comes to AI,” Lin said.
“You need to know what your AI is doing – not only for your own benefit so that you can understand whether what it’s doing is legal and ethical but because you may need to explain it to a regulator someday. And you need to be able to ensure that you can do that.
“It will never be okay if a regulator comes knocking to say, ‘We don’t know what happened, the AI did it, it wasn’t us.’ That just won’t fly, so make sure you know what is happening within your systems, because conduct will be attributed to an organization whether it’s carried out by a human or a machine.”
Like many jurisdictions around the world that are still grappling with how the law should deal with AI in the corporate sphere, Australia has yet to commit to a mandatory AI framework, although the federal government published a Voluntary AI Safety Standard in August outlining 10 voluntary guardrails that can be applied throughout the AI supply chain.
For Lin, a key starting point for gambling operators is how data is used for marketing purposes. She noted that “the same kinds of behavioral tracking algorithms and data that can be used to identify risks and red flags and to help people who might be on the brink of experiencing gambling harm can also be used in a predatory manner to target direct marketing at customers who might be experiencing harm.
“Particularly where there’s an element of self-learning, it’s important to be careful. It’s easy to see a slippery slope there if you set an AI model loose on your customer database and instruct it to target advertising in a manner that will maximize gambling behavior. Without other guardrails in place, that could go wrong very quickly.”
Organizations should, Lin said, consider informing customer-facing staff of how AI is being deployed across the business, since those staff are the most likely to identify when something has gone wrong, while legal and compliance teams would benefit from being involved from an early stage of the technology’s development.
Most important, however, is for boardrooms to be fully across the relevant use cases.
“It is important that the board and the relevant management or leadership structure is aware of how AI is being used in the business. It’s not going to be sufficient for that board to claim that they had no idea if something goes wrong,” she said.
Regulators will, Lin explained, apply reasonable scrutiny to instances where an AI interaction produces an unexpected result.
“It will come down to an evidentiary question as to what actually happened,” she said. “In each case we would ask, ‘How did you get there? To whom can we attribute liability? Was it genuinely a one-off that had not occurred in any testing or any prior use of the product?’ Those are all the kinds of things the regulator will scrutinize in a particular situation.
“It’s going to be really interesting to see how the law deals with attribution of liability. I think when you talk about a corporation, it’s a bit different than trying to attribute liability to an individual director in those circumstances, so these are all things we are grappling with.”
Ian Hughes, Chief Commercial Officer of GLI and CEO of GLI APAC, said his company was now advising boards and directors to examine the frameworks being implemented by authorities around the world in order to protect themselves from unexpected outcomes.
“It’s very important at board level that they understand what is occurring because we’ve seen examples where something has gone wrong, particularly with generative AI, and they are like, ‘The dog ate my homework, I don’t know anything about it’.
“That’s no longer an excuse for boards and directors, so the very basic practices that are currently being put in place by governments, particularly in places like Europe, are vitally important,” Hughes said.