On 25-26 June 2025, our Principal Daimhin Warner had the privilege of attending the first-ever IAPP Navigate Digital Policy Retreat in Portsmouth, New Hampshire. The retreat saw a select community of leaders from industry, government, academia and civil society engage in deep discussions about developing digital policy to promote innovation, connection and societal progress.
Hosted by the Berkman Klein Center for Internet & Society at Harvard University and the IAPP, Navigate examined the complex web of digital regulation, risk and responsibility spanning privacy, AI governance, cybersecurity law, online safety, competition and digital policy. Sessions featured exchanges of ideas across disciplines about resolving tensions and realising opportunities society faces in governing new technologies.
Here are Daimhin’s reflections on the overall themes running through the retreat:
- We’re not ready – Only 23% of attendees were confident that the risks associated with scaling and safeguarding AI could be managed effectively over the next 12 months.
- Children are at particular risk – Children were recognised as one of the groups most impacted by digital harms, including through the development of unsafe AI products or services. As always, Ireland is at the forefront here, with its media and online safety regulator Coimisiún na Meán developing an Online Safety Framework to hold digital services accountable for how they protect people, especially children, from harm online. The framework, which is underpinned by several laws, requires platforms to remove illegal content, diligently apply their own rules about acceptable content, provide easy-to-access, user-friendly ways for users to report illegal content, and take specific steps to protect children from online harm.
- We don’t know how to do this yet – Given the sheer velocity of AI development, the group agreed that some form of effective regulation was needed. However, there was no consensus on the form that regulation should take: some favoured more prescriptive laws (like the EU AI Act), while others called for a principles-based approach similar to the OECD’s Fair Information Practice Principles. There was consensus, though, that the regulatory approach needed to be global and consistent, to reduce the compliance burden on organisations.
- In the meantime, thank goodness for the privacy community – In this new digital world, there was overwhelming agreement that the expectations on privacy professionals are increasing rapidly. In addition to privacy and data protection, privacy professionals are now expected to take the lead on AI governance, data governance, cybersecurity, data ethics, human rights, online safety, accessibility, content moderation and product liability. In-house privacy functions have already built effective tools and processes for managing enterprise privacy risk, and there was agreement that organisations were looking to leverage those tools and processes to address new data risks. This was expected to expand further in the next two to five years.
We know that whatever happens globally will take time to reach New Zealand, and may never influence the way digital risks are regulated in this country. However, we are certainly seeing these same themes play out. Many of our clients’ privacy functions are already being asked to manage new domains, including data ethics and AI governance, and this is adding strain to already overworked teams. Even in the absence of AI regulation, organisations are starting to hire AI governance practitioners into dedicated roles. We’ll continue to support these organisations as they navigate their way to responsible AI.