The Biometric Processing Privacy Code 2025 (the code) is now in force, applying to any new biometric processing that starts after 3 November 2025. Think facial recognition, fingerprint scanning and voice recognition for everything from workplace access controls and payroll systems through to customer authentication and retail loss prevention.
Organisations that already use biometrics have until 3 August 2026 to comply with the new rules.
When does the Biometric Processing Privacy Code apply?
In case you missed it, we put together a simple guide to help you figure out if the code applies to your use case. It will help you navigate the new definitions, from “biometric characteristics” and “biometric templates” to “biometric information” and “biometric processing”. You can access the guide here. And you can find out about the background to the code here.
We’ve been helping a number of clients navigate the new rules. That includes conducting proportionality assessments under Rule 1 of the code and developing biometric assessment templates. So we thought we’d share a few thoughts on what we’re seeing so far.
Biometric proportionality assessments
The “proportionality assessment” in Rule 1 is the heart of the code. It requires you to assess whether:
- the collection of biometric information is for a lawful purpose;
- the collection is necessary, which includes showing that the processing will be effective at achieving the lawful purpose and that there are no reasonable alternatives with less privacy risk;
- appropriate “privacy safeguards” are being implemented; and
- the impacts of the biometric processing are proportionate to the expected benefits. This involves taking account of:
  - the scope, extent and degree of privacy risk from the biometric processing;
  - whether the benefits of achieving the lawful purpose through biometric processing outweigh the privacy risks; and
  - the cultural impacts and effects of biometric processing on Māori.
The Privacy Commissioner and Deputy Privacy Commissioner recently acknowledged the complexity of Rule 1 and the fact that organisations will be required to do some upfront work to comply. But they also said that thinking through the risks and benefits of new technology is fundamental. This will only become more important as new forms of technology continue to emerge.
Do a PIA
We find that the best way to complete the Rule 1 proportionality assessment is within a Privacy Impact Assessment (PIA). That’s because proportionality can’t be assessed in isolation. It requires consideration of all of the key privacy issues, including the relevant privacy risks and privacy safeguards.
A good approach is to work through the other rules in the code to identify the relevant privacy risks and safeguards or mitigants. A PIA provides a useful structure to identify the key risks, test assumptions and document the safeguards you will rely on in the proportionality assessment. A good PIA will also include risk ratings.
Ask the right questions
Make sure you carefully interrogate the proposed biometric system, especially when it is being supplied by a third party. You’ll need a clear understanding of how the system works, what data it was trained on, how it processes and stores biometric information, and how long it retains that information. Vendors of biometric services should be ready with clear and robust answers to these kinds of questions.
You’ll also want to be clear whether section 11 of the Privacy Act 2020 applies to make the service provider your “agent” or data processor. Are they using or disclosing the biometric information for their own purposes?
Necessity: more than just “a few reckons”
You will need to collate evidence of effectiveness to meet the necessity requirement. This is not just about whether the biometric system can do what it is designed to do, like detect someone’s fingerprints or face. You have to be able to objectively demonstrate the biometric processing achieves your stated lawful purpose. As the Privacy Commissioner said recently, he expects people to bring evidence and data – “not just a few reckons”.
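To illustrate the kind of evidence that goes beyond “a few reckons”, here is a hypothetical sketch of a simple before-and-after comparison from a pilot. The retail scenario and all figures are invented for illustration:

```python
# Hypothetical sketch: a before-and-after comparison from a retail
# loss-prevention pilot, as one form of effectiveness evidence.
# All figures are invented for illustration.

incidents_before = 120  # recorded incidents in the 6 months pre-pilot
incidents_after = 95    # recorded incidents in the 6 months of the pilot

reduction = (incidents_before - incidents_after) / incidents_before
print(f"Observed reduction: {reduction:.0%}")  # Observed reduction: 21%

# On its own this is weak evidence: you would also want to rule out
# seasonal effects, other interventions and changes in how incidents
# were recorded before attributing the reduction to the biometric system.
```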
You also need to consider the availability of non-biometric alternatives. If a non-biometric option (such as a manual approach) could reasonably achieve the same outcome with less privacy risk, the biometric system will struggle to meet the test. Stakeholder interviews can be insightful here. For example, if the biometric system is intended for use by front-line staff, check with them whether they expect it to be a helpful tool. You may be surprised by the answers.
Alternatives can also function as privacy safeguards. Providing meaningful non-biometric options can provide choice, reduce privacy risk and improve fairness.
Accuracy
Biometric systems are probabilistic by nature. This means they never give a perfectly certain yes-or-no answer. Instead, they work in terms of likelihood by estimating how similar biometric samples are. Each time you use your fingerprint or face, the system compares it with what it has stored and essentially asks “How likely is it that this is the same person?”
False positives (i.e. incorrect matches) and false negatives (i.e. missed matches) are therefore inherent features of this technology. A biometric proportionality assessment needs to carefully consider accuracy, confidence thresholds and the impacts of errors on the people concerned.
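To make the threshold trade-off concrete, here is a minimal illustrative sketch. The similarity scores and the 0.80 threshold are invented; real systems use vendor-specific scoring and operating points:

```python
# Illustrative sketch only: a toy similarity-threshold check, not any
# vendor's actual matching pipeline.

def is_match(similarity_score: float, threshold: float = 0.80) -> bool:
    """Declare a match when the similarity score meets the threshold."""
    return similarity_score >= threshold

# The same enrolled person can score differently from day to day, so
# where the threshold sits shifts errors between the two types:
#   lower threshold  -> more false positives (wrong person accepted)
#   higher threshold -> more false negatives (right person rejected)
for score in (0.83, 0.78):
    print(score, "->", "match" if is_match(score) else "no match")
```

The operating point is therefore a design decision with privacy consequences, and a proportionality assessment should record why a particular threshold was chosen.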
Bias
You’ll need to consider bias risks when assessing whether a biometric system operates fairly, accurately and proportionately. That includes considering whether the system’s underlying algorithm was trained on data that reflects NZ demographics. Many public datasets used to train biometric systems have been found to contain significant demographic skews, for example over-representing white males and under-representing women and people of colour. This can have substantial consequences for downstream applications such as facial recognition technology, including higher rates of misidentification.
Biased training data can result in people being wrongly investigated or treated as suspicious, restricting their freedom of movement, damaging their reputation and causing stress, fear and embarrassment. Where biometrics are used to control access to work, shops, education, healthcare, welfare or travel, errors can lock individuals out of essential services. They can also be difficult to challenge, particularly if organisations treat biometric outputs as objective or unquestionable. These harms can fall disproportionately on certain groups if systems are less accurate for them, reinforcing existing inequalities.
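One way to turn bias risk into evidence is to disaggregate error rates by demographic group in pilot data. A hypothetical sketch of that kind of check, with invented records, group labels and rates:

```python
# Hypothetical sketch: computing false non-match rates per demographic
# group from invented pilot-trial records.
from collections import defaultdict

# Each record: (group_label, genuine_attempt, system_said_match)
trial_records = [
    ("group_a", True, True), ("group_a", True, True), ("group_a", True, False),
    ("group_b", True, True), ("group_b", True, False), ("group_b", True, False),
]

attempts = defaultdict(int)
misses = defaultdict(int)
for group, genuine, matched in trial_records:
    if genuine:  # only genuine attempts can produce false non-matches
        attempts[group] += 1
        if not matched:
            misses[group] += 1

for group, total in attempts.items():
    print(f"{group}: false non-match rate = {misses[group] / total:.0%}")
# group_a: 33%, group_b: 67% -- a materially higher error rate for one
# group is a red flag to feed back into the proportionality assessment.
```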
Cultural impacts
The Biometric Processing Privacy Code explicitly requires organisations to consider the cultural impacts and effects of the biometric system on Māori.
In practice, this often raises challenging but necessary questions such as:
- Has there been any consultation on how biometric information of Māori is collected, stored or shared?
- What tikanga or Māori values are relevant to the proposed biometric processing?
- Do you or the vendor store or process templates offshore where Māori data sovereignty principles might be compromised?
- Might Māori be disproportionately affected or represented in the training data, leading to different impacts on Māori?
What’s at stake?
The Office of the Privacy Commissioner has a compliance function with the power to investigate systemic breaches that cause harm. The Privacy Commissioner has said he will increasingly look to use this power, including consideration of broader societal impacts.
Beyond formal enforcement, there are likely to be significant trust and reputational consequences. Biometric information is highly sensitive, and perceived overreach or unfairness can quickly attract public, media and stakeholder scrutiny.
A failure to do a robust proportionality assessment under the Biometric Processing Privacy Code weakens an organisation’s ability to defend its decisions, making it harder to justify why biometric processing was necessary, fair and reasonable in the first place. In practice, that can undermine board confidence, delay projects and erode social licence.
So a good proportionality assessment within a robust PIA is a key way to make the right decisions before significant time, money and credibility are on the line. It also helps ensure biometrics are used because they are appropriate, not simply because they are available.