OK Computer? Time to think about Responsible AI

Did you get the chance to have a play with ChatGPT over the summer break?

ChatGPT – which Microsoft is incorporating into its Bing search engine – generates incredibly convincing, human-like text*. It also provides tangible evidence of the impact artificial intelligence (AI) will have on our lives. As Microsoft’s Vice Chair and President, Brad Smith, said recently:

“It’s now likely that 2023 will mark a critical inflection point for artificial intelligence…[which] represents the most consequential technology advance of our lifetime…Like no technology before it, these AI advances augment humanity’s ability to think, reason, learn and express ourselves. In effect, the industrial revolution is now coming to knowledge work. And knowledge work is fundamental to everything.”

What is “AI”?

AI refers to the ability of machines to perform tasks that typically require human intelligence, such as recognising patterns, making decisions and solving problems. This is achieved through algorithms and models designed to learn from data and improve over time. “AI” is often used as an umbrella term for a collection of related technologies, including machine learning, facial recognition, predictive analytics, natural language processing and robotics.

What’s so great about AI?

The benefits of AI tools include far greater productivity, efficiency, accuracy and speed – saving time and money. In New Zealand, a growing number of organisations are developing and deploying AI: banks are using machine learning algorithms to detect and prevent fraud, certain retailers are using facial recognition to improve security, and government agencies are using AI to automate routine tasks and improve decision-making.

What are the downsides?

The development and deployment of AI present numerous risks, including bias and discrimination; a lack of transparency, explainability and accountability; over-reliance by humans; job displacement; and privacy concerns.

What about privacy?

AI is powered by data, and often that data will include personal information. All the usual privacy risks we know and love are therefore amplified – data breaches, lack of transparency (hello, Clearview AI!), function creep, lack of proportionality, unfair or incorrect outputs and excessive data retention, to name a few.

Aren’t there laws prohibiting this kind of thing?

Existing privacy, human rights and discrimination laws all apply to AI, but there are growing demands to address the specific risks AI presents. The EU’s “AI Act” – expected to come into force next year – takes a risk-based approach: it will prohibit certain high-risk AI systems and set out obligations for the development and use of others. Just like the EU’s General Data Protection Regulation (GDPR), the AI Act will have extra-territorial effect. And its global impact is expected to be equivalent to the GDPR’s – if not larger.

In the meantime, the GDPR and several copy-cat laws include restrictions around automated decision-making. And in New Zealand, the Office of the Privacy Commissioner is currently exploring how best to regulate the use of biometrics, including facial recognition. In short, the days of unregulated AI will soon be over.

The solution – Responsible AI

“Responsible AI” is an approach to AI development and deployment guided by values-driven actions that mitigate potential harms to people and the planet. Sometimes referred to as “AI ethics”, Responsible AI focuses on taking responsibility for AI outputs rather than on more subjective and amorphous notions of what is “ethical”. It is a response to the growing awareness of the risks associated with the different forms of AI.

Every organisation is different, so each Responsible AI programme needs to be tailored to its specific business – and, soon, regulatory – needs. But a typical Responsible AI programme shares many similarities with a good privacy framework:

  • Executive engagement and support and an appropriate governance structure
  • A clear strategy on how AI will be used and why
  • A robust set of AI ethics or Responsible AI principles and ways to operationalise them
  • Training and awareness raising
  • Risk monitoring and vendor due diligence
  • Good data governance practices as a critical underpinning.

Now is the time for New Zealand organisations exploring AI to take an “ethics by design” approach to anticipate and address potential risks. We all know that building privacy – and now Responsible AI – into the design phase is the best way to manage risks further along the data lifecycle. And just like privacy, Responsible AI will soon no longer be just a “nice to have”, but a core foundation for maintaining trust and social licence.

Want to know more?

Please get in touch if you’d like to find out more about Responsible AI, or AI issues generally. Whether your organisation is only just starting to think about using AI, or is already forging ahead, it’s never too late to adopt a responsible approach.

*And yes – I did use ChatGPT to generate some of this article…but then I binned it all because it wasn’t that great. Humans are still good for some things – for now!

This article was written by our Principal Frith Tweedie. Frith has a deep interest in AI and its impacts on society. She has served on the Executive Committee of the AI Forum since 2018, was previously part of the governance group for the New Zealand Algorithm Hub and is currently on the advisory panel for the Women in AI Awards ANZ 2023. Her AI-related experience includes drafting the AI Forum’s Trustworthy AI in Aotearoa principles, contributing to the national AI ethics framework for the Government of Malta and developing Auror’s Responsible AI Framework. She is also working with Stats NZ on operationalising the Algorithm Charter.

Photo by Alexander Sinn on Unsplash