The Information Problem (Wasting Time on AI, 2 of 6)
- Tom Barraclough
- Sep 26
Updated: Sep 30
This is the second in a series outlining four problems with the way that New Zealand is approaching AI regulation. This post outlines the first problem, which relates to the sharing of useful information (The Information Problem). You can read the introductory post in the series here.
Distributed regulatory approach = distributed regulatory material
New Zealand has taken a distributed approach to regulating artificial intelligence. In practice, that means we won't be enacting a dedicated "AI Act". I think that is a sensible decision and, in fact, an unavoidable outcome.
But taking that approach has consequences. One of those consequences is that regulatory material on AI is littered all over the place. I know this because I've played a role in collating it all in the NZ AI Policy Tracker.
Our distributed approach to AI regulation means relevant information is published widely by many institutions. That means anyone wanting to engage in a productive discussion about the design, development, deployment and governance of AI in New Zealand has to traverse at least ten statutes.

People wanting to make a difference on AI regulation must also identify and analyse not just statutes, but a significant number of softer regulatory instruments and guidance documents. These materials come from government, the private sector, and other interested bodies that exert a social, moral or pragmatic influence on decisions about AI deployment, as well as on how it's judged by external parties.
That can mean relevant information is overlooked. As one example, few people seem to have noticed the dedicated framework for "automated electronic (decision-making) systems", which since about 2018 has governed existing AI systems like the e-Gates used by nearly everyone entering or exiting the country.
It can also mean that it's difficult to know ahead of time whether non-statutory information on AI is still relevant and worth taking the time to read and absorb. A lot of relevant information out there was prepared for a world where "AI" meant "predictive analytics". Some of that information is useful at a high level – it relies on the OECD principles, for example – but it's basically out of date, and the only way to know that is by reading it with the benefit of substantial contextual knowledge.
Helpfully, most of the statutes were identified in MBIE's Responsible AI Guidance for Businesses. Importantly, this means that somewhere within Government there is a longer analysis of why these statutes are relevant and how they apply. That background analysis has not been made available, which is a missed opportunity to save others from repeating the same work.
There's another problem. AI is largely inseparable from other digital technologies, as well as from the contexts in which they're deployed. That means no single regulatory vertical or silo can reach across all this material. It also makes it difficult to say which agency within government should take on the task of collating or summarising it all in one place – acknowledging that MBIE and the GCDO have taken on slices of this task.
Doing the homework
Finally, even if you can bring all of this information together in one place, coming to grips with it all is a massive task. It takes time, and therefore money, and involves significant cognitive exertion.
Some of you might be thinking: "Oh, I'll just use AI for all that". But for any analysis with real-world consequences, that's probably not good enough. You'll need to be able to explain and verify your work.
The question you should ask yourself is: "How much money am I willing to bet that this AI system had access to all the relevant information, properly understood my context, and gave me a reliable answer?"
Given how hard it is to find everything already, and how difficult it is to know whether that information is still relevant, people delegating this task to AI should be proceeding cautiously.
You can still have an opinion without doing the homework
To be very clear, you DO NOT have to be an expert to participate in public discussions about artificial intelligence. This is NOT about excluding people from a necessary conversation, especially when it affects them and their communities.
But if people want to participate in that discussion, and they can't find or grapple with this information, they're coming to the discussion at a disadvantage. This hampers their ability to have an informed and nuanced discussion on how AI should or shouldn't be used. That's unfair – and it means otherwise valid perspectives are disregarded on the basis they’re insufficiently informed.
Policy discussions on an uneven playing field are inefficient and disabling: they drive frustration, undermine trust, and further complicate good-faith collaboration.
AI is already regulated
It's vital to remember that AI is already deeply regulated in New Zealand – our regulatory approach is just based on general-purpose statutes. Our approach also embraces a diversity of regulatory documents intended to guide or control behaviour. Not everything needs the big hammer of an Act of Parliament.
As the Government Chief Digital Officer Paul James recently pointed out, the pace of change in digital technologies can sometimes mean that a more flexible regulatory approach is more effective. It's also important to recognise that this general-purpose, systematic approach is how New Zealand does legislation as a whole. It makes for a much tidier statute book than in other jurisdictions like the US and the European Union.
What it takes to draft an Act of Parliament
Any agency wanting the Parliamentary Counsel's Office to draft a new Bill first has to answer a number of questions (these are published in the Legislation Guidelines). That means anyone doing policy work to explore AI legislation has to answer the same questions. The questions include things like:
What does the law say already on this? What's wrong with the law as it is now?
How well do you understand the problem now? What is the evidence base you're relying on to make the case for change? Will the changes you're proposing actually influence the problem you've identified?
Is legislation the right tool for the job here? What other types of regulation might be appropriate?
Who (not what) does the law apply to? Note that we're not regulating AI; we're regulating people who do things with AI.
What is the domain you're regulating, who is responsible for regulating it (now and in the future), what are the consequences of non-compliance, and what kinds of penalties should follow? Are the penalties proportionate and consistent with other similar kinds of conduct covered by legislation? Are the consequences civil or criminal? What powers should a regulator have, subject to what checks and balances?
How does the world look now, what is the problem to be solved, and how will the world look if the legislation: (1) exists; and (2) is implemented successfully (or unsuccessfully)?
What might the consequences of the legislation be for fundamental things like human rights, constitutional conventions, and te Tiriti o Waitangi?
How might the legislation you're proposing conflict with other legal requirements? Could people be put in an impossible position, where the law requires them to do two inconsistent things at once? This is a significant issue with the EU's tech regulation framework, where the GDPR, DSA and AI Act appear to be in conflict.
Do we have the regulatory capacity to enforce the law being proposed? Which agency will do what, how will that be funded, and how will that activity overlap with self-regulation or the regulatory responsibilities of other agencies and bodies?
Can the law be meaningfully enforced by New Zealand Courts and agencies within our sovereign borders? This is a big one in relation to technology companies.
These questions will become relevant again in my discussion of the fourth problem, “the policy problem” (coming soon).
Not a fair fight
At this point I have to emphasise that I know firsthand, from prior projects, how difficult it is to answer all of these questions – on AI and in other policy areas.
The burden of answering these questions can't fall solely on people and groups without access to the information or the resourcing needed to answer them. Nor should the existence of these questions become an insurmountable barrier to necessary reform. Ministers and public service agencies should not be (and I'm sure are not) simply refusing to turn their attention to them. But people and organisations beyond the public service can play a meaningful role in asking and answering these questions.
A whole-of-society approach
When it comes to technology governance and regulation, there is a powerful and necessary role for a whole-of-society approach. Even aside from the principled reasons for this approach, there are fundamental pragmatic reasons why reciprocal learning and sharing of perspectives is essential.
Government: "Wellington" cannot be everywhere at once and people working in government aren't exposed to the same things as people in business or the community. This means relevant things can be overlooked.
Business and industry: Some of the things public servants have to deal with, and the ways they have to work, are important. However, they're often not well understood, or not as highly valued, by businesses and industry. Businesses, for their part, also face reasonable limits on how far they can take in or respond to the insights and perspectives of the wider community.
Academia and the community: People in academia or the community bring important perspectives, but they may never have experienced what it's like to operationalise a regulatory system, or to build large-scale, high-risk business or compliance systems. These systems require years of work, substantial financial investment, public and private accountability, and navigating trade-offs between equally important factors.
This is true of almost all public policy, but it's especially true for technology regulation, because technologies like AI, the Internet, and communications systems are so powerful and pervasive.
All of these factors mean that, when it comes to meaningful action on AI regulation, we're wasting time because of the way we work with, distribute and disclose information.
Coming next
In the next post, I'll describe another problem: how different groups coordinate on AI policy. This coordination problem is both influenced by the information problem I've described, and in turn feeds back into the difficulties of circulating and sharing information that I've identified above.