Right intentions, wrong turns: Building an AI bot
AI bots are all the rage these days. So, jumping on the trend, I decided to create my own. I started with good intentions – to build a bot that would analyse event planning and provide a series of recommendations based on the Australian Government’s Good Practice Guidelines for Engaging with People with Disability (Good Practice Guidelines). The goal? To provide practical advice on engaging with people with disability in an inclusive, respectful and appropriate way.
Surprisingly, building the AI bot wasn’t the biggest hurdle. Despite having no coding knowledge, I discovered that you can build a bot in a matter of minutes – if you know the right places to go! But it’s one thing to build a bot – and another to create one that is valuable, or at the very least not harmful. The ‘harm’ caused by my creation wouldn’t rank alongside the famous AI disasters. I didn’t critically injure someone with a self-driving car, nor did my bot cause an innocent person to be unjustly imprisoned. Its harm was more subtle. In providing a series of recommendations to improve the accessibility of events, it effectively stripped away the nuance that is essential to inclusivity. More on that later!
Limits to Creativity
The allure of Generative AI (Gen-AI) caught my attention when building the bot. Unlike traditional AI, Gen-AI creates new content and ideas rather than drawing solely from your own information sources. So, after feeding my bot information relating to the Good Practice Guidelines, enabling Gen-AI produced some weird and wonderful outcomes. My bot wrote poetry, provided recipe ideas for left-over ingredients in my fridge and – just to annoy my colleagues – I programmed it to speak highly of me whenever my name was mentioned in a question. With a playful tone and seemingly objective outputs, I thought this bot would be a welcome addition to the team.
The trouble started when it began giving recommendations which were supposedly based on the Good Practice Guidelines. The recommendations were basic at best – combining the guidelines with broad and general information from the web. Though superficially creative, Gen-AI didn’t create the robust recommendations that I initially intended. At its worst, it created a false sense of security that we were considering a broad spectrum of outcomes and innovative solutions.
AI bots also have a hidden talent for appearing neutral and objective. In my case, the AI-generated responses seemed to have exhaustively canvassed and considered all possible accommodations for people with diverse needs. In reality, it missed a few that this ‘human-in-the-loop’ came up with on her own. Without a critical eye or knowledge of the underlying principles of the Good Practice Guidelines, it was easy to assume that the bot was objectively correct – turning the focus away from creatively exploring ways to make events more accessible. The scary part about AI-generated ideas isn’t just that they could create something overtly harmful, but that they can make us complacent when something may not be ‘right.’ As stated by Alberto Savoia, an Innovation Agitator at Google, “Make sure you are building the right ‘it’ before you build ‘it’ right.” Most of the time, this mantra is used to encourage questions around whether a bot, application or process can serve a business need, be used frequently or be scaled. Of course, what is ‘right’ will depend on a host of different factors and requires us to unpack what ‘right’ is. It’s a bold ask, but one starting point is Australia’s recently developed Voluntary AI Safety Standard.
Australia’s Voluntary AI Safety Standard
The Australian Government’s Voluntary AI Safety Standard provides 10 guardrails for safe and responsible AI use. The Standard does not create new legal duties for AI systems, but it can help organisations prepare for future regulatory requirements and emerging international practices – and it reflects community expectations regarding safe AI use. The good thing about the Standard is that it’s not prescriptive. Rather, it is principles-based, enabling organisations to rely on a ‘north star’ to guide them in exploring the specifics of AI use and unpacking current and future unknowns.
While the Standard includes what many would expect – data governance and risk management processes – at the core of the guardrails is a consideration of the human impact. For example, Guardrail 5 calls for enabling human control or intervention mechanisms across the AI system life cycle. The hope is to have meaningful human oversight that can intervene and reduce the potential for unintended harm.
So, as you or your organisation start considering the creation or deployment of AI systems, it may be useful to consider broader questions of impact.
While not legally binding (yet), the Voluntary AI Safety Standard is an invitation and guide to consider whether the AI you are building or deploying is ‘right.’ That invitation should be welcomed as an opportunity to exercise our judgement and celebrate the unique and critical traits that make us human.
Figure 1. AI-generated images of an ‘AI bot finding ways to improve accessibility’, generated 5 times with the instruction to reduce creepiness.
P.S. The generated images above provide insight into the creative capabilities of generative AI. While impressive, can this type of technology be relied on to make value-based decisions?