Responsible AI Guidance: What Kiwi Businesses Need to Do Next

When MBIE published the Responsible AI Guidance for Businesses in July this year, it was overshadowed by criticism of the Government’s national AI strategy. So where does that leave the Responsible AI Guidance? Is it a useful tool for business? We take a look at what Kiwi businesses should do next to build trust in their use of AI.

NZ’s new AI strategy: more vibe than vision?

The AI Strategy: Investing with Confidence aims to accelerate private‑sector adoption of AI across the New Zealand economy. But it has been widely condemned by a range of stakeholders. Many pointed to an absence of clear deliverables, timelines, funding or measurable KPIs. Others argued it was “all hype and no vision”. Some pointed out that it was “poorly written, badly structured, and under-researched”.

The inside word is that the final document was vastly different from what MBIE originally developed and put to the Minister.

Flying under the radar: Responsible AI Guidance for Businesses

Against that backdrop, the Responsible AI Guidance for Businesses (the Guidance) sadly had far less impact. Despite several shortcomings, it still provides a useful introduction to Responsible AI for Kiwi businesses dipping their toes into AI waters.

What’s Good

  • Accessible entry point: The Guidance is relatively short, easy to read and pitched at non-experts. It helpfully emphasises the need to “understand your ‘why’ for AI”. It also points to the importance of leveraging existing business processes and systems.
  • Values-based and distinctly NZ: The NZ government has chosen to ground its overall approach to AI in the OECD AI Principles. The Guidance also takes account of New Zealand values and interests, including references to tikanga, mātauranga Māori and sustainability.
  • Existing laws apply: The Guidance reminds businesses that AI does not operate in a legal vacuum in New Zealand. It points to the Privacy Act 2020, Human Rights Act 1993, Commerce Act 1986 and Fair Trading Act 1986 as examples.
  • Practical-ish: The Guidance encourages some concrete activities relating to governance, AI inventories, procurement and documentation. But it stops short of giving enough detail for businesses to actually implement those activities without external guidance or tools.

What’s Missing

  • No clear AI risk explanation: The Guidance doesn’t clearly explain the risks of AI. Understanding how issues like inaccuracy, bias, privacy, security vulnerabilities and IP infringement can manifest is the foundation of any AI governance response. If you don’t know what could go wrong, you can’t design controls that are proportionate or effective. This risks leaving organisations unaware of why Responsible AI matters in the first place, let alone why regulators, overseas markets and the public are watching so closely.
  • Privacy left behind: Beyond a nod to appointing a Privacy Officer, the Guidance underplays privacy risks. Many AI systems depend on personal information, so good privacy practices are critical. Not just from a compliance perspective, but also to build and maintain the trust of customers and staff.
  • High-level and voluntary: The Government has been clear that it will not be regulating AI, instead taking a “light-touch and principles-based” approach to AI policy. So the Guidance is only a voluntary resource, featuring “tips” to think about. This is unlikely to drive real behaviour change, particularly in businesses where Board and exec attention is primarily focused on compliance risk.
  • Missing proportionality: The Guidance would be stronger with greater recognition of the wide range of AI technologies and use cases, and of the overarching need for a risk-based approach that recognises those differences. Without proportionate, risk-based guidance, businesses may over-engineer trivial projects. Or undercook serious ones.
  • International disadvantage: NZ’s voluntary, non-prescriptive stance may suit local economic conditions. But it also leaves Kiwi businesses with global aspirations at a disadvantage. When they take their AI systems overseas, they will face an uphill compliance battle under stricter laws. The EU’s AI Act is a prime example.
  • Public sector contrast: The government is taking a more detailed approach to AI governance in the public sector. The Public Service AI Framework was released early this year, encouraging actions like completing Privacy Impact Assessments and AI Impact Assessments. The private sector should take note.

What Next for Businesses?

So, what can – and should – Kiwi businesses do now?

In our work with clients, we’ve seen first-hand how risk-based Responsible AI frameworks provide the structure that’s often missing from this kind of high-level guidance. Here’s what we recommend.

  • Understand your AI activities: You need a clear view of where AI is being used across your business and of your key use cases. This will help you understand your key opportunities and risks and adopt a risk-based approach. Don’t forget shadow AI. And don’t overlook AI systems you’re planning to develop or use going forward.
  • Confirm your AI strategy and risk appetite: Your AI strategy should define your objectives for AI adoption, ensure alignment with business goals and adhere to the Guidance. Consider whether your current risk appetite statement needs updating to reflect specific AI issues.
  • Clarify accountability: Appoint an AI governance lead and ensure AI risk is reported at board and executive level.
  • Conduct a gap assessment: Undertake an AI governance gap assessment to understand what governance and risk management is currently in place and what might be missing. You might be doing a good job of managing security and privacy risks. But what about bias, accuracy and explainability? Who “owns” and addresses those issues?
  • Benchmark globally: If you have international ambitions, look to frameworks like ISO/IEC 42001 or the NIST AI Risk Management Framework. And for those bound for Europe, investigate the EU AI Act sooner rather than later.
  • Build AI literacy: Train your teams to spot and manage AI risks early.

Where to Start with Gen AI?

Most Kiwi businesses are exploring generative AI in some form. The Guidance is light on Gen AI specifics, but getting the basics right can make all the difference. Here are a few practical steps we recommend.

  • Draft a clear Acceptable Use Policy (AUP)
    Your Gen AI AUP should recognise both the opportunities and limitations of Gen AI. It should then define what’s allowed and what is to be avoided. That includes explicit statements of which classes of data can and cannot be used with different Gen AI tools. And don’t overlook the ongoing importance of human review. Clarity here prevents confusion, misuse and exposure of sensitive information.

  • Develop an “Endorsed tools” list
    Let teams know which Gen AI tools have been reviewed and approved for safe use across the business. Many organisations operate “endorsed” and “non-endorsed” lists that they update as new tools are approved.
  • Train your teams
    Don’t assume users just “get it” with Gen AI. Raise awareness of potential issues by getting your teams to complete a short, engaging training module, like Simply Privacy’s Gen AI Guardrails Made Simple. Refresh training periodically to reinforce best practices and keep pace with changes in the tech.

  • Create feedback and escalation pathways
    Encourage staff to let you know about unreliable Gen AI outputs or other issues by implementing an easy feedback and escalation process.

Turning Guidance into Action

MBIE’s Responsible AI Guidance for Businesses is a valuable starting point. But the real challenge is moving from high-level principles and “tips” to practical, proportionate governance that reflects our local environment.

At Simply Privacy, we’ve been working with clients in different sectors to design and implement Responsible AI in real-world settings. That experience, combined with our close eye on international developments, means we can help translate global best practice into practical approaches that work here in Aotearoa.

We look forward to continuing the conversation as New Zealand’s approach to Responsible AI evolves.