AI Impact Assessments – your new best friend!*

No doubt you’re already familiar with the value Privacy Impact Assessments (PIAs) contribute to your privacy programme. They’re a key tool for really getting to grips with the privacy issues in any new data-based project, product or initiative.

But have you met the PIA’s new, hotter friend – the AI (or Algorithmic) Impact Assessment?

Just like PIAs, AI Impact Assessments (AIAs) are all about systematically identifying potential risks and then working out appropriate ways to mitigate them. They are ideally done at the start of a new AI project and are a critical tool in your AI governance toolkit. They go further than a standard PIA, examining the entire algorithmic ecosystem – the data, the model and the outcomes.

Why do we need them?

Algorithms and AI systems, fuelled by data and guided by code, now shape our lives in all sorts of ways. They determine which cat videos grace our social media feeds, who receives a loan and which job applicants get an interview. And the potential for these systems to produce unfair, opaque and even dangerous outcomes means we need to understand what is going on under the hood so we can avoid harm to people and risk to organisations.

We know that Kiwis have particularly low trust in AI. AIAs can help your organisation adopt a more responsible approach to using AI by shining a light on the inner workings of these systems, helping to build trust and giving you a licence to continue innovating.

Meet the AIA Toolkit

Last year, Simply Privacy Principal Frith Tweedie was fortunate to be asked to help Statistics NZ Tatauranga Aotearoa (Stats NZ) design and develop an AIA Toolkit for government agencies. The purpose of the toolkit was to help the 29 signatories to the Algorithm Charter for Aotearoa New Zealand (the Charter) operationalise the Charter and meet their commitments under it.

The AIA Toolkit is designed to facilitate informed decision-making by government agencies on the benefits and risks of using AI and algorithms. After all, the stakes are high in the public sector – we don’t want a Kiwi version of Australia’s Robodebt debacle or the horror story that is the Dutch child welfare benefit scandal.

There are four components to the AIA Toolkit.

  1. Algorithm Threshold Assessment: to determine the need for a more in-depth assessment.
  2. AIA Questionnaire: a series of detailed questions about the algorithm and its potential impacts.
  3. AIA User Guide: a reference guide featuring explanations of key issues and case studies.
  4. AIA Report template: to help agencies articulate and summarise the key risks and controls.

The AIA Toolkit takes a risk-based and best practice approach to satisfying the Charter commitments, recognising that each agency will need to tailor the process and assessments in a way that’s most appropriate for its own role, context and risk profile. That means each agency is free to adopt the Toolkit – or aspects of it – as best suits its needs.

It even got a shout-out in the Department of Internal Affairs’ “Responsible AI Guidance for the Public Service: GenAI”, which we discuss here.

“So what,” I hear the private sector people at the back saying…

This means the Toolkit can also be tailored for use in the private sector. Those in the business world get to skip over the public sector commitments, while still being able to avail themselves of the detailed questions and guidance on key issues like algorithmic bias, transparency, explainability, safety and the big questions around training data and AI development and procurement.

After all, it’s not like business use of AI is risk-free – just ask Apple Card, Samsung, Air Canada, Workday and the lawyers who used ChatGPT to help draft their legal submissions…

And you’ll probably find the User Guide pretty helpful when it comes to understanding the various risks that can exist with algorithms and AI systems beyond the privacy issues you may already be more familiar with.

We’ve had great feedback on the AIA Toolkit, including a “two thumbs up” from Australia-based Helios Salinger. Anna Johnson and her team looked at 14 different AI risk assessment frameworks across Australia and New Zealand, considering ease of use in particular. They found that “a particular strength [of the AIA Toolkit] is its broader application than just AI, covering algorithmic impact assessment. It’s worth remembering that not all automated decision-making or algorithmic systems use AI, and that you don’t need AI to cause great harm – Robodebt being a classic example of a devastating project which sounds like it used AI, but didn’t”.

If you’d like to find out more about what an AIA is, when you might need one and whether it needs to be separate from your trusty PIA (another topic in itself), then please get in touch for a chat.

*Image generated by AI – see how much fun it is to do an “Impact Assassment”?!