
Accountable for a regulatory AI initiative? Don't lose the community along the way

21 November 2024
Darren Menachemson - ThinkPlaceX Partner

The last few weeks have been big ones for public service AI. To understand why, it's worth reflecting on just how disruptive AI is shaping up to be for regulators.

There is broad if uneasy acceptance that AI is going to create seismic shifts in how governments deliver their regulatory programs. 'Broad', because the power of AI and the value it offers to almost every facet of public administration makes adoption obvious and unavoidable. 'Uneasy', because AI brings a long list of novel risks that, untreated, can harm people and fracture trust.

In a post-Robodebt Australia, there is very little tolerance for regtech gone wrong. Of course, the issues with Robodebt went beyond technology into matters of culture, calculus and regulatory philosophy. Still, the notion of regulatory decisions being delegated to 'robo-deciders' unimpeded by empathy remains vivid and worrying to many Australians.


The AI Compassion Gap

Research bears this out. Last year, in our benchmark survey on how Australians feel about the rise of AI, we asked respondents what worried them about an AI-enabled future. We anticipated some of the responses we received (job losses, more surveillance, and a Terminator-style downfall of humanity).

But we also saw something new - a distinct fear that smart machines taking on human roles would result in less compassionate institutions overall, creating a 'Compassion Gap'.

Robodebt - despite being a deterministic algorithm rather than true AI - came up again and again as the exemplar of this. The Compassion Gap issue reflects a strong community reservation about removing humans - and therefore human traits like empathy and kindness - from the workflows of government and society at large.


Regulatory rocket fuel

In the midst of this unease, AI continues to be woven into mainstream technology like Microsoft's Copilot or Salesforce's Einstein, and AI-driven analytics is being eyed as a holy grail for better regulation. And it should be.

After all, there are regulatory use cases well within current or near-future AI capabilities that only a few years ago would have felt like science fiction. With well-trained AI and enough processing power, that means:

  • If a place can be seen from space, it can be evaluated for non-compliant activity - from identifying instances of environmental vandalism within large tracts of terrain, to flagging urban planning issues well before the first bulldozer breaks ground
  • If an individual's behaviour can be monitored, it can be predictively matched against non-compliance patterns - from tax evasion to drug trafficking - giving regulators a crystal ball and a new playbook for intervention (see the sketch after this list)
  • If wide-ranging operating details of a business can be lifted from the internet, B2G transactions and public reporting, the business can be assessed for issues like price gouging, human trafficking, safety violations or predatory behaviour, resulting in fast regulatory action
  • If an event is occurring that creates social media chatter and news articles, it can trigger a lightning-fast diagnosis and response - be it a pandemic spillover event or an unfolding crime
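
To make the second use case concrete: below is a minimal, hypothetical sketch of how a regulator's analysts might flag entities whose behaviour deviates from a compliant baseline. It uses scikit-learn's IsolationForest on invented features; the feature names, data and thresholds are illustrative assumptions, not a real compliance model.

```python
# Minimal, hypothetical sketch: flagging behavioural outliers against a
# compliant baseline. Features, data and thresholds are all invented.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Synthetic stand-ins for per-entity behavioural features, e.g.
# [lodgements_per_year, reported_income / industry_benchmark]
compliant = rng.normal(loc=[12.0, 1.0], scale=[2.0, 0.1], size=(500, 2))
suspect = rng.normal(loc=[3.0, 0.4], scale=[1.0, 0.05], size=(5, 2))
entities = np.vstack([compliant, suspect])

# IsolationForest isolates the unusual cases in the population.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(entities)

# -1 marks an outlier worth an analyst's attention: a lead, not a
# finding. No enforcement action should flow from this score alone.
for idx in np.where(model.predict(entities) == -1)[0]:
    print(f"Entity {idx}: anomalous pattern, refer for human review")
```

The point of the sketch is its last comment: an anomaly score is a lead for a human analyst, never a finding in itself.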

The combination of masses of data, human-level smarts, and lightning-fast AI software will make possible truly breathtaking regulatory models that, in the before-times, would have needed an army of humans to operate - or would simply have been untethered from reality.

This is the upside of AI for government, and it is compelling.

But it would take only one or two 'tech gone wrong' events - founded in AI bias, hallucination, model drift, unexplainable decisions, training errors, poor supervision, or simply an incorrectly applied green flag - to lose public trust.

Worse, such events could harm vulnerable people, damage regulated entities, or risk the loss of irreplaceable natural resources. This is the Robo-X scenario, where a new Robodebt-like issue emerges from the perfect storm of a new powerful technology, big vision and capability that is still too lean and too fresh to cover all the bases.
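
Some of these failure modes can at least be watched for in production. As a hedged illustration, the sketch below computes a population stability index (PSI) - a common drift metric - to compare the score distribution a model was validated on against what it is seeing live. The 0.2 alert threshold is a widespread rule of thumb, not a standard, and all data here is synthetic.

```python
# Minimal sketch of a population stability index (PSI) drift check.
# The threshold and bin count are common rules of thumb, not standards.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Compare two score distributions; higher PSI means more drift."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Avoid division by zero / log(0) in sparsely populated bins.
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(seed=0)
training_scores = rng.beta(2, 5, size=10_000)  # scores at validation time
live_scores = rng.beta(2, 3, size=10_000)      # scores in production

drift = psi(training_scores, live_scores)
if drift > 0.2:  # common industry rule of thumb for "significant drift"
    print(f"PSI={drift:.3f}: significant drift, pause and investigate")
```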

Getting the right safeguards in place is urgent business.


Getting ahead of Robo-X

It is this context that explains why the Digital Transformation Agency's recent unveiling of its policy for responsible use of AI in government is such an important milestone. It sends a clear signal about the need to do AI well from the start, and sets a (needfully) aggressive timeframe for agencies to get their AI ducks in a row.

As a rough timeline: the policy took effect on 1 September. Within 90 days, agencies must have named one or more accountable officials for AI, who will be on the hook for making sure their agency implements the policy effectively, and for tracking and notifying the DTA about high-risk use cases.

By March 2025, every government entity in the policy's scope (most of them) will need a public statement on its approach to AI adoption and use, including safeguards against public harm - and will need to keep that statement under rolling review. Agencies have also been strongly advised to train all staff in AI fundamentals, and to invest in specialist training for those involved in tasks like buying or building AI.

Other recommendations include setting up registers of AIs-in-use, integrating AI controls into other agency frameworks, and monitoring operationalised AIs to detect if things are going wrong - preferably before actual harm occurs.
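
What might a register entry look like? The sketch below is one plausible shape, with field names inferred from the policy's themes (accountable officials, risk tracking, rolling review) - it is an assumption for illustration, not the DTA's actual schema.

```python
# Minimal sketch of an AI-in-use register entry. Field names are
# assumptions inferred from the policy's themes, not the DTA's schema.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIUseCaseRecord:
    name: str                  # e.g. "Satellite land-clearing triage"
    purpose: str               # plain-language description of the use
    risk_rating: str           # e.g. "low" / "medium" / "high"
    accountable_official: str  # the named official on the hook
    human_oversight: str       # where humans sit in the decision loop
    next_review: date          # rolling review date
    known_limitations: list[str] = field(default_factory=list)

register = [
    AIUseCaseRecord(
        name="Satellite land-clearing triage",
        purpose="Prioritise inspections of possible illegal clearing",
        risk_rating="high",
        accountable_official="Deputy Secretary, Compliance",
        human_oversight="Analyst reviews every flag before any action",
        next_review=date(2025, 3, 1),
        known_limitations=["Cloud cover reduces recall",
                           "Urban-fringe bias in training imagery"],
    )
]
```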

At a glance, you may think that the effect of all this will be to slow things down. After all, the policy acts as a high-performance seatbelt, hooter and set of brakes while new drivers learn to drive the car. In fact, the exact opposite is likely true: it will speed up adoption, and nowhere will this be truer than in the space of regulation.

Creating a well-governed, culturally competent and standards-based incubator for baby AIs means that it will not be long before agencies feel empowered to conduct AI experiments and ultimately integrate AI into mainstream operations.

And this means that agencies will need to invest - truly invest - in ensuring that they stay engaged with and connected to the community, to understand their needs.


On your AI to-do list: connect with the regulated community

AI standards and guidelines have much to say about important AI concepts like 'safe', 'fair' and 'transparent'. But there is no standard formula for how these concepts are translated into your specific regulatory context.

Rather, these must be tailored to your regulated community, to the issues they face in complying, the culture of compliance across different cohorts, and to the needs they have when it comes to support - including for those experiencing vulnerability.

What does 'fairness' mean for people experiencing your AI-powered regulatory process?

What does 'safe' and 'unsafe' AI look like for the people, communities and organisations who will experience your AI system, directly or indirectly?

How does 'transparency' need to work in your domain to make people feel in control, and to stop them from being overwhelmed by process and information?

What level of human supervision needs to be present, and in what circumstances, to ensure that AIs are functioning not just with accuracy and lawfulness, but with compassion and empathy?
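
There is no universal answer to that last question, but the machinery is simple once a regulator settles on one. Purely as an illustration - the confidence floor and the "adverse outcomes always go to a human" rule below are invented for the example, not policy:

```python
# Minimal sketch of a human-in-the-loop gate. The threshold and the
# adverse-outcome rule are illustrative assumptions, not policy.
from dataclasses import dataclass

@dataclass
class Decision:
    subject_id: str
    outcome: str        # e.g. "approve" / "refer" / "deny"
    confidence: float   # model's own confidence, 0.0 - 1.0

CONFIDENCE_FLOOR = 0.9  # below this, a human must decide

def route(decision: Decision) -> str:
    # Any adverse outcome goes to a human, regardless of confidence:
    # the model can recommend, but only a person should say "no".
    if decision.outcome == "deny":
        return "human_review"
    if decision.confidence < CONFIDENCE_FLOOR:
        return "human_review"
    return "automated"

print(route(Decision("A-1001", "approve", 0.97)))  # automated
print(route(Decision("A-1002", "deny", 0.99)))     # human_review
```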

If you're accountable for a regulatory AI initiative, you probably don't know the answers yet. Instead, you'll need to tackle these questions with a mindset that experts will know some of the answers, and those with lived experience of the regulatory system will know the rest.

The Compassion Gap is one example of an issue that is important to many Australians, yet that won't appear in any manual, standard or ChatGPT prompt response. Instead, it, and other issues like it, will emerge from engagement, co-design and early testing of AI concepts with real people in the community.

There's lots to do in building a safe, responsible future for regulatory AI: but like all technological breakthroughs, it starts with people. You should too.

