
The Policy Problem (Wasting Time on AI, 5 of 6)

  • Writer: Tom Barraclough
  • Oct 6
  • 7 min read

In this post, I address the fourth of four problems outlined in this series. I call this the Policy Problem, which relates to the difficulties in setting a clear guiding policy direction for AI in New Zealand that other groups can coordinate around on a structured, well-informed basis. You can read about the other problems here: the Information Problem, the Coordination Problem, and the Economic Problem.


What do we want from AI and AI regulation?


What is "policy"?


"Policy" is a boring word to many people. What does it really mean? Business people, government officials and others all probably use it in a slightly different way. To me, policy is simply about setting a direction, or declaring what we want. If we take that idea slightly further, policy is about saying what someone somewhere should do in specific circumstances, and perhaps defining the consequences if they don’t do it.


Organisations have policies and governments pursue Policy (sometimes called "big P policy"). The "little p policies" often give effect to a "big P Policy" that operates at a higher level, and is set by political leaders, boards, or executives. If you think back to my first post, many of the questions you have to answer if you want to see legislation drafted are essentially questions about policy (leaving aside any question of spelling and capitalisation for now).


"What does good look like" when it comes to AI?


When it comes to AI regulation, it has always been a bit difficult to articulate two key areas of policy.


  1. What do we mean by AI? How do we cast the net in such a way that we include cutting-edge large language models or predictive analytics systems for automated decision-making, but exclude things like email filters, thermostats and spreadsheet formulae? (Let's ignore for now the complexities of defining what constitutes an automated decision-making system.)

  2. What is "good" when it comes to AI, what is "bad", what is "very bad", and how do we distinguish between these categories? There are often some types of AI system, ways they are trained or ways they are deployed that people can agree shouldn't happen. However, even with things like facial recognition, lethal autonomous weapons systems, or deepfakes, there are usually exceptions where people will agree they might be permissible. It's easy to say that systems should be fair and unbiased, but what does "biased" mean, and can a biased system be deployed in a situation where its bias can be accounted for and controlled? This is the reality for the facial recognition systems that are deployed at the New Zealand border in our e-Gates.


For a long time, this has meant that we default to sets of principles or values as a guide to what we expect from AI systems and the people who deploy them.


  • The OECD principles situated at the centre of the New Zealand AI Strategy are the classic example of this. We have also seen more than 200 statements of "AI principles" internationally since about 2017.

  • Another useful framework for assessing AI systems and navigating the trade-offs in the ways they're deployed lies with human rights instruments – for example, rights to freedom of expression, privacy, and bodily integrity are well established, and we have long-standing ways of dealing with situations where those rights come into conflict.


The trouble with these ways of talking about what we want from AI systems is that they don't easily translate into clear rules. They really just give us a starting point for saying what matters. Principles like this also need to be applied in a huge number of situations, reflecting the diversity of AI systems, the people and organisations who interact with them, and the different ways of managing and governing them. The reality is that the same set of principles or rights, depending on who is applying them and in what situation, can lead to quite radically different interpretations of what is acceptable.


Why does this matter?


This matters because it makes it difficult to articulate a national or community direction for what we want from AI and the way it is deployed. If we can't articulate a shared vision, it's hard to identify points of coordination, or to set a structured agenda for investigation and discussion. It's also difficult to identify what information might be relevant to the discussion.


This is what I refer to as the Policy Problem. If we can't say what we want from AI or AI regulation, then we don't have something to coordinate around, we can't work out what information is relevant, and we don't have a way of making sensible and predictable value judgements. The OECD principles in particular are easy to agree to because they are so wide open to interpretation – but countries that endorse the OECD principles can take wildly different approaches.


What could be a national policy vision for AI in New Zealand?


Last year I heard someone use the term "Sovereign AI". When I first heard it, I was sceptical. In particular, I took it to mean the idea that countries (and governments in particular) should be building their own AI models. In my mind, and in the minds of many others, this would mean the New Zealand government starting a process that ends with some kind of ChatGPT-style system developed by government, with characteristics or flavours that make it uniquely "New Zealand".


For what it's worth, I still think this is an extraordinary idea that has a huge number of hurdles to overcome. But I also sat and asked myself what a viable and realistic vision of "Sovereign AI for New Zealand" might look like. Perhaps there is a case to be made for a brand new foundation model, and I'm interested to explore what that would require. By contrast, there are other ways to think about sovereignty and about AI that could achieve what "Sovereign AI" advocates are looking for, and that are much easier and cheaper to act upon.


How am I thinking about Sovereign AI (or AI sovereignty)?


Different people mean different things when they talk about Sovereign AI. I also know that some people will want to explore whether Sovereign AI is the same as AI Sovereignty. I’m not that interested in those discussions.


When I talk about AI sovereignty, or a desire for Sovereign AI, the key points for me are as follows.


  1. Not just nations: I don’t think AI Sovereignty, or the pursuit of Sovereign AI, has to be all about working at the nation-state level. It can also be about actions by communities, multi-sector groups, or even individuals.

  2. Not just governments: It also doesn't have to be exclusively about activity by the Government and the public service. Any Sovereign AI model would have to involve a range of different sectors, and in fact many of the "Sovereign AI" models being created around the world are created by companies or public/private partnerships.

  3. Not just about foundation models: Sovereign AI also doesn’t have to be focused solely on brand new foundation models trained on New Zealand data in New Zealand, which are hosted and run on New Zealand computers.

  4. Not "all-or-nothing": I think we can increase AI sovereignty, or enable access to "Sovereign AI", without having to be absolute purists – for example, we don't have to get into local GPU computer chip production, and any Sovereign AI model (or approach) wouldn't be irreparably polluted if it includes some data from outside New Zealand.


I unpack all of these things in a lot more detail in a separate discussion paper. The paper addresses things like the fact that there are more types of AI than just large language models, that AI is only one part of a wider digital sovereignty picture, and that sovereignty is a complicated concept in a small, interdependent trading nation founded on te Tiriti o Waitangi.


What are some practical ways of enhancing AI sovereignty?


If we open up the discussion about Sovereign AI in the way I've described above, then we can think about quite different approaches to what we want from AI in New Zealand that are much more achievable.


A practical approach to Sovereign AI for New Zealand could emphasise three key categories of work:


  1. AI Literacy. Empowering people, organisations, businesses and communities to have greater agency over AI systems. That includes AI literacy and measures to enhance equity and equality of access. You can use whatever systems you want, really, as long as you're equipped to make informed decisions. This inevitably flows through to greater digital literacy and skills in privacy, cybersecurity and data protection.

  2. Digital infrastructure. This includes measures to promote access to computer equipment and software that let people use AI in the way that meets their requirements, as informed by the knowledge, power and skill they’ve developed from the above. If we do this, we can have competent people choosing their systems and how they use them, including where they use them and under what conditions.

  3. Fine-tuning. Before we buy $150 million of computer equipment and throw open the vaults of our shared digital heritage for open extraction, let's check how far customising existing models can get us towards what we need (a rough sketch of what that might involve follows this list). Has anyone tried this yet? What are the limits of this approach? What do we need to test it properly? I work in this area and I don't know (see problems 1-3 above). Let's find out what fine-tuning can achieve and make sure we're sharing our findings.
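
To make that concrete, here is a minimal, illustrative sketch of the kind of fine-tuning experiment I have in mind, assuming the open-source Hugging Face transformers, datasets and peft libraries; the base model name and the local file nz_corpus.txt are placeholders, not recommendations. The point it illustrates is that adapting an existing model trains only a small set of additional weights, at a tiny fraction of the cost of building a foundation model from scratch.

```python
# Illustrative sketch only: adapt an existing open model to local text with a
# small LoRA adapter, rather than training a new foundation model from scratch.
# Assumes the transformers, datasets and peft libraries; "nz_corpus.txt" is a
# hypothetical local file and the base model is just an example.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "Qwen/Qwen2.5-0.5B"  # any permissively licensed existing model
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA trains only a small number of extra weights, so the compute bill is a
# tiny fraction of pre-training a model from scratch.
model = get_peft_model(
    model,
    LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"],
               task_type="CAUSAL_LM"),
)

# Tokenise a local corpus (hypothetical file) for causal language modelling.
data = load_dataset("text", data_files={"train": "nz_corpus.txt"})["train"]
data = data.map(lambda rows: tokenizer(rows["text"], truncation=True, max_length=512),
                batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=2, learning_rate=2e-4),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("out/adapter")  # only the small adapter needs to be kept locally
```

Whether experiments of this shape actually deliver what we need is exactly the open question above, but they are cheap enough to run, repeat and share widely.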


If we adopt this way of thinking about the change we want to see in the world, then many of the tricky decisions about who must do what, when, under what circumstances, and in what order of priority will probably emerge quite naturally. Then, if we really want to, or the case is very strong, someone somewhere might like to train a new foundation model.


What has Sovereign AI got to do with AI Regulation?


To pursue Sovereign AI is to adopt a regulatory approach. It sets a direction and signals to everyone I've mentioned in this series so far that things which align with that direction will be encouraged, and things which don't will be discouraged. This helps people understand what new regulation might be required, which changes to regulation will be prioritised, which initiatives could or should be proposed or funded, and how the public service will interpret and apply existing regulation. By emphasising AI Sovereignty, we could also consider lifting a national vision for AI beyond electoral cycles and party politics.


If we can agree on a vision for Sovereign AI (the Policy Problem), then we can begin to collate and manage information which is relevant (the Information Problem), set a structured agenda for coordination, agreement and disagreement (the Coordination Problem), and fund initiatives with confidence that they will be a useful and productive component of the wider whole (the Economic Problem).


What next?


In my next and final post, I’ll outline what I’ve done already to try and make a difference to these four problems. I'll also suggest how this work could be taken further, bearing in mind that the purpose of this work is to call for action well beyond anything I can (or want to) initiate or manage alone.

 
 

Brainbox Institute is a non-partisan organisation that supports constructive policy, governance, and regulation of digital technologies.
