Constructive suggestions for time well spent (Wasting Time on AI, 6 of 6)
- Tom Barraclough
- Oct 22
- 7 min read
This is the conclusion to a series that has outlined four interlocking problems that mean we're wasting time when it comes to work on AI regulation and Sovereign AI: an information problem, a coordination problem, an economic problem, and a policy problem. In this post, I outline four things I’m doing that address one or more of those problems as I see them. I also share a bigger-picture programme of work I'd like to see happen, and which I'm willing to support.

Thank you for the great work you're already doing
I’ve observed that it’s generally much easier in public policy to lob criticisms than to propose something new and constructive, let alone execute that proposal. I accept that publishing this blog series could lead to some criticism coming back at me – and that’s fair enough.
I want to be clear that a lot of incredible work has been done by an enormous variety of diligent, well-informed and constructive people and organisations, who all face the same four problems identified in this series. The breadth of material in the NZ AI Policy Tracker and the thoughtful comments accompanying a recent open letter on AI regulation show the diversity and strength New Zealand can bring to bear on this topic.
In this series I've tried to be direct and provocative, but that's because I think we are wasting time, and wasting time has consequences – whether through missed opportunities to realise benefits, or through preventable situations of real world harm.
A quick recap
In a nutshell, what's the problem with AI regulation and Sovereign AI in New Zealand?
We urgently need a whole-of-society coordinating vision for how AI and other automated systems should be designed, developed, deployed and governed at all levels. To meaningfully design or act on that vision requires re-using information, removing barriers to coordination, and addressing the perverse financial incentives faced by every sector group, which can exacerbate these issues. We can address these problems, but until we do, I fear we'll just be wasting time.
In a slightly larger nutshell:
There's an information problem. Artificial intelligence is already regulated in New Zealand, but it's regulated through a patchwork of different materials, documents and websites. This means it's hard to work out how to comply with that regulation and where the gaps might be. We need participation and contribution from groups across society (government, industry, community, academia) but all of these groups see the world differently – that's the point of collaboration. If we want AI to be regulated, there's a series of questions we'll need to answer, and answering those questions is harder than it needs to be because information is all over the place.
There's a coordination problem. No one can take on this topic alone, but we're missing opportunities to re-use information and share useful insights. Different groups face incentives to be selective about the information they share or withhold, which amplifies the information problem. Because of the information problem, there's a risk we come at the discussion from incorrect or incompatible starting points. Collaboration could produce a structured agenda for investigation, allowing us to systematically ask and answer the questions that matter for producing an effective regulatory system that fosters responsible adoption of artificial intelligence.
There's an economic problem. Different organisations, individuals and sector groups face different economic realities, and stand to gain different economic benefits from their participation in public discussion. The economic problem exacerbates the coordination problem and the information problem. Beyond money, a key part of solving this problem relates to institutional design: how can people contributing money be satisfied it's well spent? Getting money from people who have it involves answering a lot of questions, but answering those questions is difficult because of the information, coordination and economic problems, creating a "chicken-and-egg" dynamic.
There's a policy problem. What's the guiding vision that different actors can coordinate around? In recent times, I've come to believe this vision can and should be "Sovereign AI" – and that doesn't necessarily mean training new foundation models. Instead, it relates to a series of different approaches that together enhance the agency, empowerment and autonomy of individuals, businesses, groups, communities and the nation as a whole when it comes to use and governance of AI and other digital technologies. That might include new models or fine-tuned models, but it definitely includes things like accessible digital infrastructure and meaningful AI literacy. It also doesn't mean we have to raise the drawbridge and manufacture GPUs in New Zealand.
So what should we do about it? Here are some things I'm already doing and some other things I would like to see done. To be clear, I want to see this work done by a wide network of people, which does not necessarily have to include me. In saying that, I’m happy to play a part.
What I’m doing or would like to see done
Collating AI regulation into one place
With others at Brainbox, I've published an "AI Policy Tracker" for New Zealand. It's a big list of most of the things you'd have to read to understand what already exists in this area. It points people to a mixture of legislation, PDFs and websites scattered across the Internet, some of which have already disappeared (404).
I'd like to make sure the tracker is complete and keep it up to date. No single public sector agency is going to do this because (see all of the above), and no commercial organisation is going to do it unless it drives value, builds customers, and avoids legal and reputational risk. Academia might be a good home for it, but I haven’t received any offers to support or host it yet. Until then, I'm doing what I can to keep it up to date.
A policy tracker would address the information problem and the coordination problem. It would mitigate some of the economic problem and enable meaningful action on the policy problem.
A machine-readable repository of AI policy and regulation
My professional career has been dominated by the frustrations of working with regulatory documents published by the public service, by private entities and by academics. This experience is partly what has led me to co-found a software company that turns that information into machine-readable structured data. I'm sick of opening 50 browser tabs and downloading 50 PDFs when I want to work with regulation.
The AI Policy Tracker is a big list of things that I want to convert into structured, re-usable data and make freely available to others. I've already made a start, but I need the time to convert those documents with our systems, or the resources to pay someone else to do it.
The result would be a single downloadable, versioned dataset that anyone can use to work with AI regulation in New Zealand and that can be updated over time. If you're so inclined, you can even feed it to an AI system and ask it questions. If you think it's making things up, you can get bullet-point-level citations for the system's responses so you can go and check the answers yourself. This repository could also be packaged up in a model context protocol (MCP) server for easy access and use by AI systems.
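To make the idea concrete, here is a minimal sketch of what one record in such a repository might look like. All field names, identifiers and URLs here are illustrative assumptions of mine, not the actual Brainbox schema; the point is simply that each clause of a regulatory document carries a stable identifier and a source link, which is what makes bullet-point-level citation possible.

```python
from dataclasses import dataclass, field

# Hypothetical record format for a machine-readable AI policy repository.
# Field names and identifiers are illustrative, not an actual schema.

@dataclass
class PolicyClause:
    clause_id: str    # stable identifier, e.g. "example-doc/p1"
    text: str         # the clause text itself
    source_url: str   # where the authoritative version lives

@dataclass
class PolicyDocument:
    doc_id: str
    title: str
    version: str      # the dataset is versioned over time
    clauses: list[PolicyClause] = field(default_factory=list)

    def cite(self, clause_id: str) -> str:
        """Return a bullet-point-level citation for a single clause."""
        for clause in self.clauses:
            if clause.clause_id == clause_id:
                return f"{self.title} ({self.version}), {clause.clause_id}: {clause.source_url}"
        raise KeyError(clause_id)

doc = PolicyDocument(
    doc_id="example-doc",
    title="Example AI Guidance",
    version="2024-01",
    clauses=[
        PolicyClause(
            clause_id="example-doc/p1",
            text="Agencies should assess automated systems before deployment.",
            source_url="https://example.govt.nz/guidance#p1",
        )
    ],
)
print(doc.cite("example-doc/p1"))
```

Because every clause resolves to a source URL, an AI system answering questions over this dataset can hand back checkable citations rather than unverifiable summaries.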
A machine-readable, referenceable repository would address the economic problem and the information problem by making it easier to find and analyse relevant information. It would also substantially improve the policy problem and the coordination problem by bringing everyone to a common starting point and giving them the tools to engage effectively.
Articulating a vision for "Sovereign AI" to foster collaboration
What does "Sovereign AI" mean? What are the different bits and pieces of a sovereign approach? What can we learn from others? How can we break the space down into some meaningful actions?
I've tried to lead this with a discussion paper, with public presentations (and podcasts) and through supporting a Sovereign AI community of interest. I think there are meaningful projects that can be initiated right now on AI literacy, fine-tuning open-weight systems, and assessing the state of our national digital infrastructure.
This would address the policy problem and the coordination problem. It could also play a role in mitigating the economic problem to the extent that it produces a more constructive public discussion ("ban it!" <--> "adopt it for everything!") that incentivises wider participation.
A not-for-profit tech policy organisation
I've been blunt about the incentives different groups have in collaborating on AI and technology policy and the economics of how that works (or doesn't). I'm not immune to those incentives and I've thought about them a lot. I've been clear that one solution to the economic problem is money, but beyond that we need institutional structures that give people confidence that the money they contribute will be well spent.
I think we need an institutional infrastructure to foster trust and collaboration between groups working on tech policy. It needs a global perspective to bring to domestic work, driven by an approach that empowers people, businesses, government agencies and communities in their use of, and relationship with, technology.
That institution may exist already, but I'm not sure it does. I've committed publicly to converting the Brainbox Institute into such an institution. I have a trust deed ready to sign to establish this structure – what's holding me back is the time, support and resourcing necessary to activate this plan with conviction. Perhaps you can help?
Other initiatives
I've been thinking about this topic and talking about it with others. There are some other initiatives that I'd like to see pursued related to AI literacy and various scoping studies for Sovereign AI initiatives.
I'm going to continue progressing this work and "thinking in the open" – at the moment, my thoughts are recorded here. Get in touch if you'd like the password for access.
Let's talk about this
Thank you for the time you’ve taken to read this series. If you've reached the end and feel your work hasn’t been recognised, then I apologise (see also problems 1-3 earlier in this series). I have also deliberately tried to avoid naming people, agencies and organisations, so as not to stray into unintended criticism or endorsement along the way.
If you feel like writing a public comment in response, consider reaching out to me and talking about it first. I look forward to hearing what you think about all this.
You can reach me via the Brainbox website or through LinkedIn.