OPC guidance on applying the Information Privacy Principles to AI

The Office of the Privacy Commissioner (OPC) has released guidance on how the Information Privacy Principles (IPPs) apply to Artificial Intelligence (AI).

This builds on recent guidance setting out the OPC’s expectations around the use of generative AI by agencies. Both sets of guidance are timely given the relatively recent explosion of AI into the public consciousness – as well as directly into the hands of agencies and the community at large.

The OPC states that it wants to make it easy for agencies to know both the potential privacy risks associated with AI and the OPC’s expectations in that regard. Overall, the guidance provides helpful insights for agencies wanting to understand the OPC’s perspective on how the IPPs apply to AI systems. However, it is light on practical solutions and (understandably) focuses only on privacy, without addressing other relevant AI risks.

The guidance states very clearly that the Privacy Act applies to everyone using AI systems and tools in New Zealand. It says that thinking about privacy is vital if you are going to use AI responsibly, noting that privacy impacts from AI can arise whether you’re developing your own AI systems, using AI to support decision-making, or have team members informally using AI tools such as ChatGPT in their work.

Key OPC expectations for AI

The OPC has stated the following expectations for agencies using AI systems.

  • Have senior leadership approval based on full consideration of risks and mitigations
    • Make sure senior leaders understand the implications – both good and bad – of using AI. Algorithmic Impact Assessments (AIAs) are a key way of identifying potential AI risks and mitigants.
  • Review whether an AI tool is necessary and proportionate
    • Consider the impacts of using AI, who may be impacted and whether a less privacy-invasive method is available and appropriate.
  • Conduct a Privacy Impact Assessment (PIA) before using AI systems
    • The importance of PIAs is emphasised throughout the guidance. Before using an AI system, the OPC says you need to understand enough about how it works to be confident you’re upholding the IPPs. The best way to do that is by conducting a PIA before you start and updating it regularly. PIAs are a natural complement to AIAs; more mature organisations may look to use a combined approach.
  • Be transparent
    • You need to tell people how and when an AI system or tool is collecting their information, and why and how their personal information will be used.
  • Engage with Māori about potential risks and impacts to the taonga of their information
    • The guidance states that Māori perspectives on privacy need to be considered and recommends agencies proactively engage with Māori.
  • Develop procedures about accuracy and access by individuals to their information
    • The guidance focuses on the importance of appropriate processes to ensure compliance with IPPs 6, 7 and 8.

How do the IPPs apply to AI?

The guidance adopts a broad view of AI systems, encompassing machine learning, classifier, interpreter, generative and automation systems. It looks at each of the key areas represented by the IPPs as summarised below.

Collection (IPPs 1-4). The guidance looks at the Privacy Act’s purpose limitation focus and whether collection is fair. It emphasises the importance of understanding what is in your training data (i.e. the data that trains the AI model, which impacts how the model behaves), how relevant and reliable it is for your intended purpose, and whether it is gathered and processed in ways that comply with your legal obligations and ethical/responsible approaches.
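
By way of illustration, the sketch below (in Python, using the pandas library) shows the kind of basic pre-training data audit this points towards. The column names, cutoff date and checks are hypothetical examples, not anything prescribed by the guidance; a real audit would go much further.

```python
# Minimal sketch of a pre-training data audit; column names and the
# cutoff date are hypothetical placeholders.
import pandas as pd

def audit_training_data(df, required_columns, cutoff):
    """Surface basic quality signals before data is used to train a model."""
    return {
        # Completeness: proportion of missing values per required column.
        "missing_rate": df[required_columns].isna().mean().to_dict(),
        # Exact duplicates can silently over-weight some individuals.
        "duplicate_rows": int(df.duplicated().sum()),
        # Staleness: records collected before the cutoff may be unreliable.
        "stale_records": int((pd.to_datetime(df["collected_at"]) < pd.Timestamp(cutoff)).sum()),
        "total_rows": len(df),
    }

# Toy example: one missing name and one stale record.
df = pd.DataFrame({
    "name": ["A", "B", None],
    "collected_at": ["2018-01-01", "2024-06-01", "2024-07-01"],
})
print(audit_training_data(df, required_columns=["name"], cutoff="2020-01-01"))
```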

Security and retention (IPPs 5 and 9). Agencies need to take appropriate cybersecurity steps to protect information, particularly given the new and emerging security risks associated with AI. That includes risks of fraud arising from the ease with which anyone can now create deep fakes, simulate voices and automate hacking and phishing campaigns.

Access and correction (IPPs 6-7). The OPC says it is essential to develop procedures for how you will respond to requests from individuals to access and correct their personal information processed by AI tools.

Accuracy (IPP 8). Agencies must take reasonable steps to ensure information is accurate, up to date, complete, relevant and not misleading. Beware the risks of automation “blindness”: the tendency of humans to rely on computer outputs at the expense of their own judgement. Detecting accuracy and fairness issues like bias can be challenging, and the OPC suggests engaging with experts who can offer an independent perspective, as well as with the people and communities likely to be harmed by the use of any biased or inaccurate information.
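
To make the bias point concrete, here is a minimal sketch of one common first-pass check: comparing a model’s error rates across groups. The group labels, toy data and disparity threshold are illustrative assumptions, not an OPC methodology.

```python
# Minimal sketch of a group-wise error-rate comparison; all figures
# and labels are illustrative assumptions only.
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, predicted, actual) tuples."""
    errors, totals = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Toy data: group_b shows a visibly higher error rate than group_a.
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 0),
    ("group_b", 1, 0), ("group_b", 0, 1), ("group_b", 1, 1),
]
rates = error_rates_by_group(records)
# The 0.05 disparity threshold is a hypothetical starting point.
if max(rates.values()) - min(rates.values()) > 0.05:
    print("Possible disparity worth investigating:", rates)
```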

Use and disclosure (IPPs 10 and 11). Agencies must clearly identify their purposes for collecting personal information and then limit subsequent use and disclosure of that information to those purposes or a directly related purpose.

Overseas disclosure (IPP 12). Make sure you check and confirm that any offshore technology providers will not be using personal information in your care for their own purposes; otherwise, this will be a disclosure and IPP 12 will apply. The guidance includes a reminder that agencies remain responsible for protecting personal information when they use third-party service providers to handle personal information on their behalf (section 11 of the Privacy Act).

Unique identifiers (IPP 13). There is scope for AI systems to find patterns in a person’s behaviour that qualify as a unique identifier, even if that is not an intended outcome.

What else should we be thinking about?

Privacy is not the only AI risk

It’s important to remember that while privacy is a significant and fundamental consideration when AI uses personal information, it is not the only risk. Issues relating to poor model performance, inadequate monitoring, bias and discrimination, copyright infringement, lack of explainability and inadequate governance can all contribute to AI-related harms to people and related risks for agencies.

Data is important – but so is the AI model

Data is what powers AI systems, so it’s critical you have a clear understanding of the source, nature and quality of both your training and production data. But AI harms don’t only arise because of data. There are numerous ways the AI model itself can create problems as well. AI models need to be carefully tested before deployment to confirm they are performing accurately and as intended. They should also be monitored and tested on an ongoing basis across the AI lifecycle to identify potential errors and unfair outcomes.
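
As a simple illustration of what ongoing monitoring can look like, the sketch below compares a model’s live accuracy against its pre-deployment baseline and flags degradation for human review. The baseline and tolerance figures are hypothetical placeholders, and real monitoring would track more than accuracy alone.

```python
# Minimal sketch of post-deployment accuracy monitoring; the baseline
# and tolerance values are hypothetical placeholders.

def live_accuracy(outcomes):
    """outcomes: list of (predicted, actual) pairs sampled from production."""
    correct = sum(1 for predicted, actual in outcomes if predicted == actual)
    return correct / len(outcomes)

BASELINE_ACCURACY = 0.92  # assumed figure from pre-deployment testing
TOLERANCE = 0.05          # assumed acceptable drop before escalating

recent = [(1, 1), (0, 0), (1, 0), (1, 1), (0, 1)]  # toy production sample
accuracy = live_accuracy(recent)
if accuracy < BASELINE_ACCURACY - TOLERANCE:
    print(f"Accuracy {accuracy:.2f} is below baseline; escalate for human review.")
```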

Practical solutions

The OPC materials are light on practical solutions, and many agencies will struggle to know what they should be doing to minimise the risk of non-compliance with the Privacy Act and to manage privacy and AI risks more generally. Don’t hesitate to reach out if you need assistance in this regard – we have helped numerous organisations develop Responsible AI frameworks to do just that.

Use AIAs to identify and mitigate the full range of AI risks

Similar to a PIA, an Algorithmic Impact Assessment (AIA) is a key tool for identifying, assessing and mitigating the full range of potential AI risks. A good AIA will take a comprehensive look at an AI project, considering:

  • the problem to be solved and whether AI is the best solution;
  • the key benefits of the AI system;
  • who the impacted stakeholders are likely to be;
  • AI governance and human oversight;
  • the source and quality of training and production data;
  • the performance, testing and monitoring of the AI system across its lifecycle; and
  • key privacy, safety, security, reliability, transparency and explainability risks and how to mitigate them.

Simply Privacy conducts both PIAs and AIAs for our clients.