News

New AI guidance for the public sector released

The public sector started the year with a burst of activity on the AI governance front.

In January, the Department of Internal Affairs (DIA) unveiled the “Public Service Artificial Intelligence Framework”, a one-pager aiming to guide the public sector’s responsible adoption of AI technologies. That was quickly followed in February by the “Responsible AI Guidance for the Public Service: GenAI”.

Those documents align with the government’s stated approach of supporting the increased uptake of AI in New Zealand. In July 2024, the government pledged to “take a light-touch, proportionate and risk-based approach to AI regulation”. In other words: no AI-specific regulation at this point, just plenty of high-level guidance.

Public Service AI Framework

Led by the Government Chief Digital Officer, the Public Service AI Framework (the Framework) presents a balanced approach to the opportunities and risks of AI. It uses the well-regarded OECD AI principles as a base, with a sensible work programme that traverses governance, guardrails and social licence alongside innovation, capability and “global voice”.

But was this a missed opportunity for Aotearoa NZ to develop its own set of guiding principles that reflect the unique aspects of this country? The OECD AI principles do not, for example, reflect Te Tiriti o Waitangi.

And as you would expect from a one-pager, the document is very light on detail, including on privacy, the meaning of AI “safety” and how to go about implementing the vision. It’s certainly important to “ensure human accountability for inclusive implementation of data and AI use”. But what does that mean in practice?

Responsible AI Guidance for the Public Service: GenAI

The “Responsible AI Guidance for the Public Service: GenAI” (the Guidance) brings in some much-needed detail, albeit focused solely on generative AI (GenAI).

Overall, the DIA has done a solid job of providing balanced guidance that addresses both the opportunities and potential risks of generative AI in the public sector. Positive aspects include good alignment with international best practice and encouragement of a “Build by Design” approach, including the use of Privacy Impact Assessments and AI Impact Assessments (with a link to the AIA Toolkit developed by Simply Privacy for government agencies on behalf of Stats NZ).

We also liked the important focus on procurement, the emphasis on “skills and capabilities” and long overdue recognition that Māori representatives hold diverse views on Government use of Māori data and, by extension, GenAI systems.

We would also have liked to see:

  • More emphasis on the importance of having an AI strategy – all organisations should be clear on why they are using GenAI and what problems they are trying to solve
  • More detail on privacy and the critical role of data governance and data quality
  • At least some discussion of copyright considerations
  • More discussion of Māori concerns about data sovereignty and potential biases in AI systems

Simply Privacy Director and Principal, Frith Tweedie, was interviewed by BusinessDesk for an article about the Guidance, which you can read here (sorry, it’s paywalled).

Where to from here?

Government agencies still have a lot of work to do to figure out exactly what the Framework and the Guidance mean for them in practice. The “how” questions will have to be resolved by individual agencies.

For organisations – both private and public sector – aiming to align with the DIA’s AI guidance, we recommend the following steps.

  1. Understand your AI activities by first conducting an AI system audit, so you’re clear on what AI tools and systems are currently in use across the organisation – including shadow AI. Don’t overlook AI systems you’re planning to procure, develop or use going forward.
  2. Determine your AI risk profile, or how much risk your AI systems actually present within your organisation. Using a bit of GenAI for internal productivity purposes will have quite a different risk profile to using machine learning and/or GenAI to predict fraud, for example. Understanding your AI systems and risk profile helps inform the best, risk-based approach for your organisation.
  3. Confirm your AI strategy and risk appetite. Your AI strategy should define your objectives for AI adoption, ensuring alignment with organisational goals and adherence to the published Guidance. Then consider your appetite for AI risk and whether current risk appetite statements need updating to reflect specific AI considerations.
  4. Conduct an AI governance gap assessment to understand what’s currently in place in terms of governance and risk management and what might be missing. For example, you might be doing a good job of managing security and privacy risks, but what about bias, accuracy and explainability? Who “owns” and addresses those issues?
  5. Prioritise right-sized risk management to address privacy, accuracy, transparency, bias and IP infringement risks and ensure AI doesn’t do more harm than good. PIAs and AIAs are fundamental to this approach.
  6. Invest in AI training & literacy to ensure your team gets the most out of AI tools. We were thrilled to see our “Gen AI Guardrails Made Simple” e-learning module referenced in the Guidance!
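The audit and risk-profiling steps above (1 and 2) lend themselves to a simple structured register. As a minimal sketch only – the field names, risk tiers, triage rules and example systems below are purely illustrative assumptions, not anything prescribed by the Framework or the Guidance:

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    """One entry in a hypothetical organisation-wide AI inventory."""
    name: str
    purpose: str
    uses_personal_data: bool
    affects_individuals: bool  # e.g. decisions about people, not just drafting

    def risk_tier(self) -> str:
        # Rough, illustrative triage: systems affecting individuals or
        # touching personal data warrant closer assessment (e.g. PIA/AIA).
        if self.affects_individuals:
            return "high"
        if self.uses_personal_data:
            return "medium"
        return "low"

# Hypothetical inventory entries, including the examples from step 2
register = [
    AISystem("Internal drafting assistant", "productivity", False, False),
    AISystem("Fraud prediction model", "case triage", True, True),
]

for system in register:
    print(f"{system.name}: {system.risk_tier()}")
```

Even a lightweight register like this makes it easier to see which systems are low-risk productivity tools and which warrant a fuller impact assessment.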

The Guidance gives the business sector a taste of what we can expect to see from MBIE, which is currently developing a parallel stream of AI guidance for business and has been collaborating with DIA on this work.

For those working in government, now is a good time to upskill on AI governance. You might be interested in the IAPP’s AI Governance Professional training course – we’re running another one in March. See here for details and to register.