Simply Privacy Principal, Frith Tweedie, travelled to Boston in early November for the IAPP’s AI Governance Global conference. She shares some of the key takeaways from the event.
The IAPP’s inaugural AI-focused conference, AI Governance Global 2023, took place during a very busy week for AI governance on a global scale.
In the same week as the conference, the US released President Biden’s Executive Order on Safe, Secure and Trustworthy AI, while in the UK the Bletchley Declaration on AI Safety saw 28 countries agree on the need to understand and collectively manage the potential risks posed by so-called “frontier AI” systems such as ChatGPT.
Amidst this global focus on AI, over 1,200 people gathered in Boston to discuss practical, actionable solutions for implementing AI systems and governing them responsibly. It was also a great opportunity for me to meet many of my fellow AI Governance Center advisory board members in the flesh.
Keynote speakers included New Zealand’s own Dame Jacinda Ardern (or “Prime Minister Ardern” as the Americans like to call her), who spoke about her ongoing work on the Christchurch Call. New York Times journalist Kevin Roose shared his astonishing experience of having Bing’s AI chatbot declare its undying love for him, while Jonathan Zittrain, a professor of both law and computer science at Harvard, gave a very entertaining and insightful keynote on the impact of AI. Author and publisher Jane Friedman provided sobering insights into how AI can affect authors and other creatives. Videos of the keynote presentations are available on the IAPP website and are well worth a watch.
Consistent themes emerged across the conference on how best to implement effective AI governance.
- Building an AI governance system is a must for organisations developing and using AI. And you should start now to navigate the fast-changing landscape. As Christina Montgomery, Chief Privacy and Trust Officer at IBM, noted, if you establish your Responsible AI framework now, you’ll be 90% ready for much of the AI regulation that’s coming.
- Start with a set of AI ethical principles. These should be aligned to your corporate values and strategy to provide the “north star” for your overall Responsible AI approach. You should then look to leverage existing risk management policies and processes to take account of AI risks and opportunities.
- AI Impact Assessments (AIAs) are a critical tool for identifying and managing AI risks. Building on concepts familiar from Privacy Impact Assessments, AIAs take a wider look at training and production data, algorithm performance and monitoring, procurement considerations, bias and discrimination risks, and transparency and explainability.
- Privacy regulators around the world are paying close attention to AI, and we should expect to see a lot more from them. Various sessions looked at Europe’s forthcoming AI Act as well as the US Federal Trade Commission’s use of “model disgorgement”. This involves requiring organisations to delete data and AI models, a powerful regulatory tool that could have far greater impact than a mere fine.
- Selecting an appropriate AI governance framework is important. NIST’s AI Risk Management Framework emerged as a popular choice, with its practical, concrete guidance coalescing around the foundational pillars of Govern, Map, Measure and Manage.
- Privacy professionals have a key role to play in AI governance. Many speakers argued privacy pros are naturally well positioned to lead AI governance programmes because they understand how to assess risks and apply mitigations. However, in many companies it is business stakeholders who own AI governance, potentially with better access to budgets.
- Diverse, multi-disciplinary teams are critical. This is true both at the senior oversight level and in data and development teams. The goal should be to include people with a range of backgrounds, perspectives and skillsets to help minimise risks like bias.
- AI bias and fairness testing is a complex and nuanced area. Cathy O’Neil, author of “Weapons of Math Destruction”, emphasised that the focus must be on identifying who could be harmed by unfair algorithms. And non-technical people should not let imposter syndrome hold them back: O’Neil encouraged everyone to keep asking questions until they get answers that make sense. The issues are too important not to.
- Everyone is learning, especially with generative AI. Even representatives from the Big Tech companies were clear that no one has all the answers at this stage. A curious mind and the ability to respond to change are key.
What was clear overall is that AI risks are here to stay. And the event demonstrated that knowledgeable AI governance professionals are needed to address those risks so the many opportunities of AI can be maximised for all.
This thinking clearly lies behind the IAPP’s launch of the AI Governance Professional training and certification (AIGP), which Simply Privacy is now pleased to offer as an Official Training Partner of the IAPP. The AIGP training is designed to help develop the knowledge and strategies needed to respond to the complex risks of AI. Successful completion of the AIGP exam provides a globally recognised certification demonstrating an understanding of AI systems, their risks and governance, and how to ensure safety and trust in their use.
As part of the first cohort to complete the AIGP training in Boston immediately before the conference, I found completing two full (looooong) days of in-person training pretty onerous. So I’m looking forward to providing live virtual training spread over four half days instead – see our website for details. I’ll be sharing further insights from both AI Governance Global and the Boston AIGP training when we kick off the training in the new year. So have a wonderful Christmas break and I hope to see you in 2024!