
Let's talk about the robots (A.I.)

For those in the Australian Public Service (APS) hankering to play around with ChatGPT - or fearing the attack of any of its clones (bad Star Wars pun intended) - the guidance from on-high has likely been a resounding 'no.' The question is: should all A.I.s be banned in workplaces?
David Mesman
4 September 2024
5 minute read
Related topics: Rethinking work

The Commonwealth Department of Home Affairs has gone as far as to ban the use of ChatGPT in the workplace unless APS staff receive express authorisation. Other Australian companies, like the Commonwealth Bank, and many law enforcement agencies have followed suit, and significant numbers of organisations worldwide are considering similar bans. The reason is relatively straightforward: sharing your organisation's data with externally housed A.I.s can undermine security frameworks and reveal confidential and personal information - and let's not forget about precipitating the extinction of the human race. Thank you, Skynet and Arnie.

In all seriousness, are those fears grounded in reality? And what are the actual implications - for good and bad - particularly in the legal, policy and administrative realms? If you're a fan of the Freakonomics podcast, you may have heard those types of questions raised in Episode 554 - Can A.I. Take a Joke?, the first of a three-part series entitled How To Think About A.I. Guest host Adam Davidson steered the discussion away from the classic tropes, i.e. A.I. leading us into a golden future vs. A.I. creating genocidal killing machines. Instead, Davidson reframed A.I. as one in a long line of new technologies that society traditionally views with scepticism and fear, particularly when 'The Machines' start taking over human beings' jobs. Davidson wanted to know whether the machines were on the 'winning side' of net job gains vs. net job losses for humans.

To explore that idea, he looked at manual telephone switchboard operators, who lost their jobs with the advent of electronic switchboards in the early 20th century. In the short term, there were job losses. But over the longer term, most switchboard operators retrained in new fields, and many found work in industries that were themselves the byproduct of new technologies. In other words, emerging technologies created opportunities and occupations that were unimaginable to telephone exchange operators at the time they lost their jobs in the 1930s and '40s.

So, what kind of opportunities are we talking about with A.I.? One of the most obvious involves running through endless lines of computer code and letting a Bot check for errors that human coders tend to miss. According to media reports, that was one of the experiments that Home Affairs' APS staff ran.
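
As a purely illustrative sketch - and not a description of what Home Affairs actually ran - here is roughly how such a review bot could be wired up in Python, assuming access to OpenAI's API. The model name and prompt are placeholders:

```python
# A minimal sketch of an A.I. code-review pass, assuming an OpenAI API key
# is set in the environment. Model name and prompt are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def review_code(source: str) -> str:
    """Ask the model to flag likely bugs in a snippet of source code."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any capable model would do
        messages=[
            {"role": "system",
             "content": "You are a code reviewer. List likely bugs, one per line."},
            {"role": "user", "content": source},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    snippet = "def average(xs):\n    return sum(xs) / len(xs)  # fails on empty lists"
    print(review_code(snippet))
```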

One not-so-obvious area where A.I. could add significant value - particularly in the public sector - is records management and developing robust methods to cull those records. And before you nod off, records management shouldn't put you to sleep: it's a key risk faced by all organisations across Australia, whether they're in the private or public sector. If you doubt that, cast your mind back to the series of data breaches that hit Australia in 2022. Think Medibank, Optus - and the list goes on.

How does that relate to A.I.? All organisations face ever-growing mountains of data, and their staff have neither the time nor the resources to understand each and every legislative, regulatory or industry standard that applies to each and every bit of data they hold. Even beginning the process of categorising, sorting and metadata-tagging individual records in huge datasets would be an impossible task for a human 'operator.' The task becomes progressively more challenging when assigning datasets a retention period under the Archives Act 1983 for Commonwealth public sector agencies, all while factoring in scores of competing legislative and regulatory requirements.
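
To make that concrete, here is a minimal sketch of automated retention tagging. Everything in it - the record classes, the retention periods, the default - is invented for illustration; actual sentencing would follow the relevant records authority issued under the Archives Act 1983:

```python
# A minimal sketch of automated retention tagging. The record classes and
# retention periods below are invented for illustration only.
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical retention rules: record class -> years to retain
RETENTION_RULES = {
    "personnel": 75,
    "procurement": 7,
    "general_correspondence": 3,
}

@dataclass
class Record:
    title: str
    record_class: str  # in practice, a classifier model would assign this
    created: date

def tag_retention(record: Record) -> dict:
    """Attach a destroy-after date based on the record's class."""
    years = RETENTION_RULES.get(record.record_class, 10)  # placeholder default
    return {
        "title": record.title,
        "class": record.record_class,
        "destroy_after": record.created + timedelta(days=365 * years),
    }

print(tag_retention(Record("Tender evaluation 2017", "procurement", date(2017, 5, 1))))
```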

And that's where A.I. could come in handy, especially in one of my main practice areas: privacy law and compliance. A.I. tools could help organisations identify personal information (PI) holdings, check whether only authorised users have accessed PI, and destroy PI when an organisation no longer needs it for a valid purpose. And if the organisation is uncertain - or it would take too much time to assess all the regulatory requirements - a Bot could help lift-and-shift the data into temporary storage that is encrypted, segmented and harder for hackers to get to.
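
As a toy example only - real PI-discovery tools combine pattern matching with trained classifiers and context - a first pass at flagging likely PI in free text might look like this. The patterns are illustrative and would both miss and over-match things:

```python
# A toy first pass at flagging likely personal information (PI) in free text.
# The regular expressions are illustrative only, not production-grade.
import re

PI_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "au_phone": re.compile(r"\b(?:\+61|0)[2-478](?:[ -]?\d){8}\b"),
    "tfn_like": re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{3}\b"),  # 9 digits, TFN-shaped
}

def flag_pi(text: str) -> dict[str, list[str]]:
    """Return any spans that look like PI, keyed by pattern name."""
    hits = {name: pat.findall(text) for name, pat in PI_PATTERNS.items()}
    return {name: found for name, found in hits.items() if found}

sample = "Contact Jo on 0412 345 678 or jo.citizen@example.com re TFN 123 456 782."
print(flag_pi(sample))
```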

Considering the upcoming privacy reforms and the expansion of the definition of PI, there will undoubtedly be massive demand for DSOs (not the UK military honour, but 'Data Sentencing Officers') - a new HITL, or 'human in the loop,' role conjured into being as I wrote this article. That is, I made the term up. So, no - there are no DSOs. Not yet, at least not according to my Google searches. But these imagined workers would be archivists-on-steroids, backed by the newest A.I. search tools. DSOs would be trained in 'sentencing' datasets: determining whether an organisation should destroy a given dataset, retain it, or put it into cold storage with limited access and security controls, using a risk-based assessment of competing legislative priorities. A DSO would need a healthy skills mix: an understanding of legislative and regulatory requirements and risks, a heavy dose of ICT knowledge and security controls and, above all, judgement. Why? Because you need a human being to weigh the risks organisations face when holding on to mountains of data against the ever-increasing risk of data breaches.
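
Purely to illustrate the imagined workflow - remember, the DSO role is made up - here is a sketch of a risk-based triage in which the A.I. only recommends a sentence and the human makes the final call. The thresholds and factors are invented:

```python
# An illustrative triage for the imagined DSO ('Data Sentencing Officer')
# workflow. Thresholds and factors are invented; the A.I. only recommends,
# and the human in the loop signs off.
from dataclasses import dataclass

@dataclass
class Dataset:
    name: str
    contains_pi: bool
    age_years: int
    under_legal_hold: bool

def recommend_sentence(ds: Dataset) -> str:
    """Recommend destroy / cold_storage / retain, pending human review."""
    if ds.under_legal_hold:
        return "retain"  # legislative requirement trumps everything else
    if ds.contains_pi and ds.age_years > 7:
        return "destroy"  # stale PI is breach risk with little value
    if ds.age_years > 3:
        return "cold_storage"  # keep, but segment and restrict access
    return "retain"

for ds in [Dataset("HR exports 2014", True, 10, False),
           Dataset("Active casework", True, 1, True)]:
    print(ds.name, "->", recommend_sentence(ds), "(pending DSO sign-off)")
```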

There are a host of other possibilities where A.I. can make a significant difference: increasing efficiencies, reducing data entry errors and 'freeing up' human resources to tackle more complex tasks that require judgement and discretion. That sort of approach sits comfortably within the limits that the GDPR - the EU's General Data Protection Regulation - places on automated decision-making. Like the GDPR, Australia's privacy reforms will include similar limitations that do not ban an A.I. or automated process from making a decision. Rather, individuals will have a right to have a human in the loop review an A.I. or automated decision where their rights are at play and where it relates to their personal information.
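
Sketched in code, with invented field names, that review right might be honoured by routing any rights-affecting, PI-related automated decision to a human review queue instead of letting it take effect automatically:

```python
# A sketch of honouring a human-review right for automated decisions.
# Field names and the queue are invented for illustration.
from dataclasses import dataclass

@dataclass
class Decision:
    subject: str
    outcome: str
    affects_rights: bool
    uses_personal_info: bool

review_queue: list[Decision] = []

def finalise(decision: Decision) -> str:
    """Apply the decision automatically only when no review right is triggered."""
    if decision.affects_rights and decision.uses_personal_info:
        review_queue.append(decision)  # a human reviews before it takes effect
        return "queued_for_human_review"
    return "auto_applied"

print(finalise(Decision("applicant-42", "benefit_refused", True, True)))
print(len(review_queue), "decision(s) awaiting a human in the loop")
```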

My sense is that we shouldn't fear the use of A.I. Instead, we should embrace Bots and A.I. to do the heavy data-lifting, leaving us human beings to play the HITL role that requires sound judgement. Rather than out-and-out bans on A.I., we should consider hybrid A.I. solutions that are safely housed within the 'walled garden' of our workplaces, or used with a VPN so that Skynet cannot track us, all while delivering real value, time and resource savings.

At Synergy Group, my incredibly clever ICT colleagues are on the cutting edge of creating robotic process automation (RPA) tools that help our clients undertake tasks that should be done by 'The Machines' - things that are necessary but repetitive and add little value - unlike 'The People,' the HITL, who can add real value. To my mind, that is where the real debate about A.I. should be focused: on where we can extract value for the broader community using A.I. That is no different from any other technological innovation that our society will eventually understand and master - and hopefully before it kills us all (bad pun intended).