Seeing the mountains for the clouds – 10 Principles of Good Administration
Back in the mists of time, 2007 to be precise, the Commonwealth Ombudsman issued an incredibly useful Fact Sheet, Ten Principles for Good Administration. The Ombudsman's 'Ten Commandments' contain evergreen principles of good administration that are as useful today as ever, if not more so, and not only because of the advent of AI and automated decision-making. These Ten Commandments not only underpin good decision-making; they are also the key 'principles' that govern any 'rules' set by humans, especially if there is ever a 'machine in the loop.'
Not so far back in the mists of time, I wrote an article, 'Am I safe? Am I legal?', which examined what administrative decision-makers could learn from the airline industry. In particular, I focused on Crew Resource Management (CRM), a training protocol that creates a 'safe harbour' for junior officers to challenge the captain's flight plan, protocols and, most importantly, responses to emergency situations. For those of you old enough to remember Star Trek: The Next Generation, this is the 'battle stations' moment, where Captain Picard firmly but gently barks out 'Ideas?', inviting the crew to offer ways to save the ship while a barrage of photon torpedoes rains down on it. In the case of an airplane, the equivalent scenario is the junior co-pilot telling the pilot (Picard) that they are about to hit a mountain range and the autopilot is dead wrong, and that unless the captain listens, so will they be, along with their passengers.
How does this apply to administrative law and decision-making? Unfortunately, it often doesn't. Exhibit A is the report of the Royal Commission into the Robodebt Scheme, which revealed that yes, machines helped identify possible overpayments, but it was human beings who pressed the 'go' button on a policy later found to be 'unlawful'. In the legal and administrative realm, the main way we learn about, and fix, sometimes catastrophic errors is after the fact, in a 'postmortem' examination of the matter. And instead of a coroner performing the autopsy in clinical fashion, it is often the media wielding the scalpel. Speaking of postmortems, it is arguably the medical profession where something similar to CRM already exists, and where we can learn some good lessons.
In a process known as Root Cause Analysis (RCA), medical professionals analyse situations where there has been a preventable death or injury. In an RCA, the doctors, nurses and other health professionals involved examine each step in a failed procedure. What makes the RCA unique is that the participants, their statements and their assessments are strictly confidential, protected by a form of privilege specifically designed to elicit full and frank discussion. Discussions operate on a 'blame free', objective basis and focus on uncovering the problem, or the systemic flaws, that led to a patient's death or serious injury. Like the CRM technique, it creates a safe harbour in which the marketplace of ideas is meant to triumph, so that the best ideas can rise to the surface.
The challenge is how to establish safe harbour environments in legal or administrative scenarios, not as a postmortem but as a pre-mortem. That way, you don't end up with an injured patient or, in the case of Robodebt, with lives lost, reputations ruined and businesses destroyed. I think the answer lies in the name of the CRM technique itself: it is focused on training and preparation. Of course, there is no flight simulator in the government context. Or is there? More to the point, can there be one?
I think the answer to that question is yes, there can be 'flight simulator' type exercises in administrative decision-making. How? It requires a degree of creativity, a healthy understanding of risk protocols, and a willingness on the part of managers to be challenged by their staff. That is not easy to achieve in practice, particularly within hierarchical organisations such as Commonwealth agencies, where decision-making is regularly pushed upwards to executives and Departmental Secretaries rather than downwards to subject matter experts. In agencies where decision-making operates under a delegation framework, internal and external factors can exert significant pressure on decision-makers, shaping both the nature and the manner of their decisions. This power dynamic is particularly acute where more junior officers feel pressured to deliver an outcome pushed down from above.
That does not mean that the situation is insoluble. The principles we can draw upon are those set out in the Ombudsman's '10 Commandments'.
One elephant in the room, however, had not even been invented when the Ombudsman's Office delivered its 10 Commandments: AI. In my view, the Ombudsman's principles apply equally to automated decision-making (and AI). Why? Because for all the bluster about big data, AI and machine learning, you will always find a human behind the machine, setting the 'rules' the machine applies in making a decision.
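To make that point concrete, here is a deliberately simplified, hypothetical Python sketch. The function names, figures and the income-averaging rule below are my own illustrations (loosely echoing the averaging assumption criticised by the Robodebt Royal Commission), not any agency's actual code. The point is that the rule the machine executes is authored, and owned, by a human.

```python
# Hypothetical sketch only: the 'rule' an automated system applies is written
# by a human. Names, figures and logic are illustrative, not any agency's code.

FORTNIGHTS_PER_YEAR = 26

def estimated_fortnightly_income(annual_income: float) -> float:
    # A human-authored policy assumption: spread annual income evenly across
    # fortnights. Embedded assumptions like this are where unlawfulness can hide.
    return annual_income / FORTNIGHTS_PER_YEAR

def flag_possible_overpayment(annual_income: float, declared_fortnightly: float) -> bool:
    # The machine merely executes the comparison; the rule itself is a human choice.
    return estimated_fortnightly_income(annual_income) > declared_fortnightly

# A person with lumpy earnings (say, six months' work and six months' none)
# is flagged for a fortnight in which they accurately declared nothing:
print(flag_possible_overpayment(annual_income=26000, declared_fortnightly=0.0))  # True
```

The flaw in such a system is not in the arithmetic, which the machine performs perfectly, but in the human assumption baked into the rule.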
As we all know, the legislation, the policy position, and the facts, circumstances and evidence all play a part in setting the rules and informing decisions. However, a machine does not have the 'emotional intelligence' to know, understand or begin to conceptualise the emotion and feeling behind a decision. That inability to feel has significant consequences, not to mention potential unintended consequences that require policy settings to be reviewed and, potentially, changed.
This is why we have to have humans in the loop, and why AI can't be left alone to make decisions. As in the cockpit, the machine can be a very useful tool to check the human decision, and vice versa, each making sure the other doesn't miss a mountain for the clouds. In other words, nothing can be 'set and forgotten'. Everything should be up for contest, deliberation and ongoing improvement, and continually checked for errors.
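What might that look like in practice? Below is a minimal, hypothetical sketch of a human-in-the-loop gate, again in Python. The case fields, thresholds and review step are invented for illustration; the only point is the shape of the control: the machine drafts, a person decides, and adverse or low-confidence drafts are never finalised by the machine alone.

```python
# Hypothetical sketch of a human-in-the-loop review gate. The case format,
# thresholds and review step are illustrative assumptions, not a real workflow.

from dataclasses import dataclass

@dataclass
class Draft:
    outcome: str        # "approve" or "refuse"
    confidence: float   # the machine's own certainty, 0.0 to 1.0
    rationale: str

def machine_assessment(case: dict) -> Draft:
    # Stand-in for an automated rule set; the rules are still human-authored.
    if case.get("declared_income", 0.0) <= case.get("threshold", 0.0):
        return Draft("approve", 0.95, "declared income within threshold")
    return Draft("refuse", 0.60, "declared income above threshold")

def human_review(case: dict, draft: Draft) -> Draft:
    # Stand-in for a delegate's review: the officer can adopt, vary or
    # overturn the machine's draft, and must record their own reasons.
    print(f"Referred for human decision: {draft.rationale}")
    return draft  # in a real system, the officer's decision replaces the draft

def decide(case: dict) -> Draft:
    draft = machine_assessment(case)
    # Nothing is 'set and forgotten': every adverse or low-confidence draft
    # goes to a person before it takes effect.
    if draft.outcome != "approve" or draft.confidence < 0.8:
        return human_review(case, draft)
    return draft

print(decide({"declared_income": 900.0, "threshold": 1000.0}).outcome)   # approve
print(decide({"declared_income": 1200.0, "threshold": 1000.0}).outcome)  # referred, then refuse
```

The design choice is deliberate: in this sketch the machine may draft an approval, but only a person can make a refusal final, mirroring the co-pilot and the captain checking one another.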
The same applies to good decisions and good decision-making practices. A good decision is not an end in and of itself; it is part of an ongoing, iterative process. What is 'correct and preferable' can always change with further information or a change of circumstances. The same can be said of continuous learning: it is the practice that matters, just like the CRM protocols in aviation.
In administrative decision-making, the real risk is not in making an incorrect decision, or even in getting the law wrong. It is in failing to properly consider the issues, in being unwilling to change, or in fixating on a particular way of thinking or on achieving a set of pre-ordained outcomes. The practice of continually challenging assumptions will help mitigate the administrative equivalent of an airplane flying into a mountain range.
Figure 1. AI-generated image of 'the evolution of life and machine to improve outcomes.'