
  • Constructive suggestions for time well spent (Wasting Time on AI, 6 of 6)

    This is the conclusion to a series that has outlined four problems which work together to mean we're wasting time when it comes to work on AI Regulation and Sovereign AI: an information problem, a coordination problem, an economic problem, and a policy problem. In this post, I outline four things I'm doing that address one or more of those problems as I see them. I also share a bigger-picture programme of work I'd like to see happen, and which I'm willing to support.

Photograph: Phil Walter/Getty Images (Source)

Thank you for the great work you're already doing

I've observed that it's generally much easier in public policy to lob criticisms than to propose something new and constructive, let alone execute that proposal. I accept that publishing this blog series could lead to some criticism coming back at me – and that's fair enough.

I want to be clear that a lot of incredible work has been done by an enormous variety of diligent, well-informed and constructive people and organisations, who all face the same four problems identified in this series. The breadth of material in the NZ AI Policy Tracker and the thoughtful comments accompanying a recent open letter on AI regulation show the diversity and strength New Zealand can bring to bear on this topic.

In this series I've tried to be direct and provocative, but that's because I think we are wasting time, and wasting time has consequences – whether through missed opportunities to realise benefits, or through preventable situations of real-world harm.

A quick recap

In a nutshell, what's the problem with AI regulation and Sovereign AI in New Zealand? We urgently need a whole-of-society coordinating vision for how AI and other automated systems should be designed, developed, deployed and governed at all levels.
To meaningfully design or act on that vision requires re-using information, removing barriers to coordination, and addressing perverse financial incentives faced by every sector group that can exacerbate these issues. We can address these problems, but until we do, I fear we'll just be wasting time.

In a slightly larger nutshell:

There's an information problem. Artificial intelligence is already regulated in New Zealand, but it's regulated through a patchwork of different materials, documents and websites. This means it's hard to work out how to comply with that regulation and where the gaps might be. We need participation and contribution from groups across society (government, industry, community, academia), but all of these groups see the world differently – that's the point of collaboration. If we want AI to be regulated, there's a series of questions we'll need to answer, and answering those questions is harder than it needs to be because information is all over the place.

There's a coordination problem. No one can take on this topic alone, but we're missing opportunities to re-use information and share useful insights. Different groups face incentives to be selective about the information they share or withhold, which amplifies the information problem. Because of the information problem, there's a risk we come at the discussion from incorrect or incompatible starting points. Collaboration could produce a structured agenda for investigation, allowing us to systematically ask and answer the questions that matter for producing an effective regulatory system that fosters responsible adoption of artificial intelligence.

There's an economic problem. Different organisations, individuals and sector groups face different economic realities, and stand to gain different economic benefits from their participation in public discussion. The economic problem exacerbates the coordination problem and the information problem. Beyond money, a key part of solving this problem relates to institutional design: how can people contributing money be satisfied it's well spent? Getting money from people who have it involves answering a lot of questions, but answering those questions is difficult because of the information, coordination and economic problems, creating a "chicken-and-egg" dynamic.

There's a policy problem. What's the guiding vision that different actors can coordinate around? In recent times, I've come to believe this vision can and should be "Sovereign AI" – and that doesn't necessarily mean training new foundation models. Instead, it relates to a series of different approaches that together enhance the agency, empowerment and autonomy of individuals, businesses, groups, communities and the nation as a whole when it comes to the use and governance of AI and other digital technologies. That might include new models or fine-tuned models, but it definitely includes things like accessible digital infrastructure and meaningful AI literacy. It also doesn't mean we have to raise the drawbridge and manufacture GPUs in New Zealand.

So what should we do about it?

Here are some things I'm already doing and some other things I would like to see done. To be clear, I want to see this work done by a wide network of people, which does not necessarily have to include me. In saying that, I'm happy to play a part.

What I'm doing or would like to see done

Collating AI regulation into one place

With others at Brainbox, I've published an "AI Policy Tracker" for New Zealand. It's a big list of most of the things you'd have to read to understand what already exists in this area. It points people all over the Internet to a mixture of legislation, PDFs and websites, some of which have already disappeared (404). I'd like to make sure the tracker is complete and keep it up to date.
No single public sector agency is going to do this (for all of the reasons above), and no commercial organisation is going to do it unless it drives value, builds customers, and avoids legal and reputational risk. Academia might be a good home for it, but I haven't received any requests to support or host it yet. Until then, I'm doing what I can to keep it up to date.

A policy tracker would address the information problem and the coordination problem. It would mitigate some of the economic problem and enable meaningful action on the policy problem.

A machine-readable repository of AI policy and regulation

My professional career has been dominated by the frustrations of working with regulatory documents published by the public service, by private entities and by academics. This experience is partly what led me to co-found a software company that turns that information into machine-readable structured data. I'm sick of opening 50 browser tabs and downloading 50 PDFs when I want to work with regulation.

The AI Policy Tracker is a big list of things that I want to convert into structured, re-usable data to make freely available to others. I've already made a start; I just need the time to convert those documents with our systems, or to pay someone else to do it. The result would be a single downloadable dataset that anyone can use to work with AI regulation in New Zealand, and that can be versioned and updated over time. If you're so inclined, you can even feed it to an AI system and ask it questions. If you think it's making things up, you can get bullet-point-level citations for the system's responses so you can go and check the answers yourself. This repository could also be packaged up in a model context protocol (MCP) server for easy access and use by AI systems.

A machine-readable, referenceable repository would address the economic problem and the information problem by making it easier to find and analyse relevant information.
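As a rough sketch of what structured, re-usable regulatory data could look like, here is a minimal Python example. The schema, field names and entries are hypothetical illustrations (including the placeholder URLs), not the actual AI Policy Tracker format – the point is simply that a versioned dataset of this kind makes tasks like finding dead links a one-line query instead of a browser-tab marathon.

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical record structure for one regulatory instrument.
# Field names are illustrative only, not the real Tracker schema.
@dataclass
class PolicyInstrument:
    title: str
    issuer: str            # e.g. a ministry, regulator, or standards body
    instrument_type: str   # "legislation", "guidance", "principles", ...
    url: str
    status: str            # "in force", "draft", "withdrawn", "404"
    last_checked: str      # ISO date the link was last verified

tracker = [
    PolicyInstrument(
        title="Privacy Act 2020",
        issuer="New Zealand Parliament",
        instrument_type="legislation",
        url="https://example.org/privacy-act-2020",  # placeholder URL
        status="in force",
        last_checked="2025-01-15",
    ),
    PolicyInstrument(
        title="Example agency guidance on generative AI",  # placeholder entry
        issuer="Example Agency",
        instrument_type="guidance",
        url="https://example.org/ai-guidance",  # placeholder URL
        status="404",
        last_checked="2025-01-15",
    ),
]

# Dead links can be found programmatically rather than by hand...
dead_links = [entry.title for entry in tracker if entry.status == "404"]

# ...and the whole tracker serialises to a single versionable JSON dataset.
dataset = json.dumps([asdict(entry) for entry in tracker], indent=2)
```

A dataset shaped like this could then be published, diffed between versions, or served to AI systems (for example through an MCP server) without anyone needing to re-scrape the underlying PDFs.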
It would also substantially improve the policy problem and the coordination problem by bringing everyone to a common starting point and giving them the tools to engage effectively.

Articulating a vision for "Sovereign AI" to foster collaboration

What does "Sovereign AI" mean? What are the different bits and pieces of a sovereign approach? What can we learn from others? How can we break the space down into some meaningful actions? I've tried to lead this with a discussion paper, with public presentations (and podcasts) and through supporting a Sovereign AI community of interest. I think there are meaningful projects that can be initiated right now on AI literacy, fine-tuning open-weight systems, and assessing the state of our national digital infrastructure.

This would address the policy problem and the coordination problem. It could also play a role in mitigating the economic problem to the extent that it produces a more constructive public discussion (somewhere between "ban it!" and "adopt it for everything!") that incentivises wider participation.

A not-for-profit tech policy organisation

I've been blunt about the incentives different groups have in collaborating on AI and technology policy, and about the economics of how that works (or doesn't). I'm not immune to those incentives and I've thought about them a lot. I've been clear that one solution to the economic problem is money, but beyond that we need institutional structures that give people confidence that the money they contribute will be well spent.

I think we need an institutional infrastructure to foster trust and collaboration between groups working on tech policy. It needs a global perspective to bring to domestic work, driven by an empowerment approach to people, businesses, government agencies and communities and their use of, and relationship with, technology. That institution may exist already, but I'm not sure it does. I've committed publicly to converting the Brainbox Institute into such an institution.
I have a trust deed ready to sign to establish this structure – what's holding me back is the time, support and resourcing necessary to activate this plan with conviction. Perhaps you can help?

Other initiatives

I've been thinking about this topic and talking about it with others. There are some other initiatives I'd like to see pursued related to AI literacy and various scoping studies for Sovereign AI initiatives. I'm going to continue progressing this work and "thinking in the open" – at the moment, my thoughts are recorded here. Get in touch if you'd like the password for access.

Let's talk about this

Thank you for the time you've taken to read this series. If you've gotten to the end and feel your work hasn't been recognised, then I apologise (see also problems 1-3 earlier in this series). I have also deliberately tried to avoid naming people, agencies and organisations, to avoid straying into unintended criticism or endorsement along the way. If you feel like writing a public comment in response, consider reaching out to me and talking about it first. I look forward to hearing what you think about all this. You can reach me via the Brainbox website or through LinkedIn.

  • The Policy Problem (Wasting Time on AI, 5 of 6)

    In this post, I address the fourth of four problems outlined in this series. I call this the Policy Problem, which relates to the difficulties in setting a clear guiding policy direction for AI in New Zealand that other groups can coordinate around on a structured, well-informed basis. You can read about the other problems here: the Information Problem, the Coordination Problem, and the Economic Problem.

What do we want from AI and AI regulation? Photo by Alix Lee.

What is "policy"?

"Policy" is a boring word to many people. What does it really mean? Business people, government officials and others probably all use it in a slightly different way. To me, policy is simply about setting a direction, or declaring what we want. If we take that idea slightly further, policy is about saying what someone somewhere should do in specific circumstances, and perhaps defining the consequences if they don't do it. Organisations have policies and governments pursue Policy (sometimes called "big P policy"). The "little p policies" often give effect to a "big P Policy" that operates at a higher level and is set by political leaders, boards, or executives.

If you think back to my first post, many of the questions you have to answer if you want to see legislation drafted are essentially questions about policy (leaving aside any question of spelling and capitalisation for now).

"What does good look like" when it comes to AI?

When it comes to AI regulation, it has always been a bit difficult to articulate two key areas of policy.

What do we mean by AI? How do we cast the net in such a way that we include cutting-edge large language models or predictive analytics systems for automated decision-making, but exclude things like email filters, thermostats and spreadsheet formulae? (Let's ignore for now the complexities of defining what constitutes an automated decision-making system.)
What is "good" when it comes to AI, what is "bad", what is "very bad", and how do we distinguish between these categories? There are often some types of AI system, ways they are trained or ways they are deployed that people can agree shouldn't happen. However, even with things like facial recognition, lethal autonomous weapons systems, or deepfakes, there are usually exceptions where people will agree they might be permissible. It's easy to say that systems should be fair and unbiased, but what does "biased" mean, and can a biased system be deployed in a situation where its bias can be accounted for and controlled? This is the reality for the facial recognition systems deployed at the New Zealand border in our e-Gates.

For a long time, this has meant that we default to sets of principles or values as a guide to what we expect from AI systems and the people who deploy them. The OECD principles situated at the centre of the New Zealand AI Strategy are the classic example of this. We have also seen more than 200 statements of "AI principles" internationally since about 2017. Another useful framework for assessing AI systems and navigating the trade-offs in the ways they're deployed lies with human rights instruments – for example, rights to freedom of expression, privacy, and bodily integrity are well established, and we have well-established ways of dealing with situations where those rights are in conflict.

The trouble with these ways of talking about what we want from AI systems is that they don't easily translate into clear rules. They really just give us a starting point for saying what matters. Principles like this also need to be applied in a huge number of situations, reflecting the diversity of AI systems, the people and organisations who interact with them, and the different ways of managing and governing them. The reality is that the same set of principles or rights, depending on who is applying them and in what situation, can lead to radically different interpretations of what is acceptable.

Why does this matter?

This matters because it makes it difficult to articulate a national or community direction for what we want from AI and the way it is deployed. If we can't articulate a shared vision, it's hard to identify points of coordination, or to set a structured agenda for investigation and discussion. It's also difficult to identify what information might be relevant to the discussion. This is what I refer to as the Policy Problem. If we can't say what we want from AI or AI regulation, then we don't have something to coordinate around, we can't work out what information is relevant, and we don't have a way of making sensible and predictable value judgements. The OECD principles in particular are easy to agree to because they are so wide open to interpretation – countries who endorse them can take wildly different approaches.

What could be a national policy vision for AI in New Zealand?

Last year I heard someone use the term "Sovereign AI". When I first heard it, I was sceptical. In particular, I took it to mean the idea that countries (and governments in particular) should be building their own AI models. In my mind, and in the minds of many others, this would mean the New Zealand government starting a process that ends with some kind of ChatGPT-like system developed by government, with some set of characteristics or flavours that make it uniquely "New Zealand" in some way. For what it's worth, I still think this is an extraordinary idea with a huge number of hurdles to overcome. But I also sat and asked myself what a viable and realistic vision of "Sovereign AI for New Zealand" might look like. Perhaps there is a case to be made for a brand new foundation model, and I'm interested to explore what that would require.
By contrast, there are other ways to think about sovereignty and about AI that could achieve the same things "Sovereign AI" advocates are looking for, and which are much easier and cheaper to act upon.

How am I thinking about Sovereign AI (or AI sovereignty)?

Different people mean different things when they talk about Sovereign AI. I also know that some people will want to explore whether Sovereign AI is the same as AI Sovereignty. I'm not that interested in those discussions. When I talk about AI sovereignty, or a desire for Sovereign AI, the key points for me are as follows.

Not just nations: I don't think AI Sovereignty, or the pursuit of Sovereign AI, has to be all about working at the nation-state level. It can also be about actions by communities, multi-sector groups, or even individuals.

Not just governments: It also doesn't have to be exclusively about activity by the Government and the public service. Any Sovereign AI model would have to include a range of different sectors, and in fact many of the "Sovereign AI" models being created around the world are created by companies or public/private partnerships.

Not just about foundation models: Sovereign AI also doesn't have to be focused solely on brand new foundation models trained on New Zealand data in New Zealand, hosted and run on New Zealand computers.

Not "all-or-nothing": I think we can increase AI sovereignty, or enable access to "Sovereign AI", without having to be absolute purists – for example, we don't have to get into local GPU chip production, and a Sovereign AI model (or approach) wouldn't be irreparably polluted if it includes some data from outside New Zealand.

I unpack all of these things in a lot more detail in a separate discussion paper. The paper addresses things like the fact that there are more types of AI than just large language models, that AI is only one part of a wider digital sovereignty picture, and that sovereignty is a complicated concept in a small, interdependent trading nation founded on te Tiriti o Waitangi.

What are some practical ways of enhancing AI sovereignty?

If we open up the discussion about Sovereign AI in the way I've described above, then we can think about quite different approaches to what we want from AI in New Zealand that are much more achievable. A practical approach to Sovereign AI for New Zealand could emphasise three key categories of work.

AI literacy. Empowering people, organisations, businesses and communities to have greater agency over AI systems. That includes AI literacy and measures to enhance equity and equality of access. You can use whatever systems you want, really, as long as you're equipped to make informed decisions. This inevitably flows through to greater digital literacy and skills in privacy, cybersecurity and data protection.

Digital infrastructure. This includes measures to promote access to computer equipment and software that let people use AI in the way that meets their requirements, as informed by the knowledge, power and skill they've developed from the above. If we do this, we can have competent people choosing their systems and how they use them, including where they use them and under what conditions.

Fine-tuning. Before we buy $150 million of computer equipment and throw open the vaults of our shared digital heritage for open extraction, let's check how far we can get by customising existing models. Has anyone tried this yet? What are the limits of this approach? What do we need to test it properly? I work in this area and I don't know (see problems 1-3 above). Let's find out what fine-tuning can achieve and make sure we're sharing our findings.
If we adopt this way of thinking about the change we want to see in the world, then many of the tricky decisions about who must do what, when, under what circumstances, and in what order of priority will probably emerge quite naturally. Then, if we really want to, or the case is very strong, someone somewhere might like to train a new foundation model.

What has Sovereign AI got to do with AI Regulation?

To pursue Sovereign AI is to adopt a regulatory approach. It sets a direction which signals to everyone I've mentioned in this series so far that things which align with this direction will be encouraged, and things which don't will be discouraged. This helps people understand what new regulation might be required, which changes to regulation will be prioritised, which initiatives could or should be proposed or funded, and how the public service will interpret and apply existing regulation. By emphasising AI Sovereignty, we could also consider lifting a national vision for AI beyond electoral cycles and party politics.

If we can agree on a vision for Sovereign AI (the Policy Problem), then we can begin to collate and manage information which is relevant (the Information Problem), set a structured agenda for coordination, agreement and disagreement (the Coordination Problem), and fund initiatives with confidence that they will be a useful and productive component of the wider whole (the Economic Problem).

What next?

In my next and final post, I'll outline what I've done already to try and make a difference to these four problems. I'll also suggest how this work could be taken further, bearing in mind that the purpose of this work is to call for action well beyond anything I can (or want to) initiate or manage alone.

  • The Economic Problem (Wasting Time on AI, 4 of 6)

    This piece is part of a series where I outline four problems with the way we're approaching AI regulation and the concept of Sovereign AI. In this piece, I touch on the economic factors driving inefficiencies in policy discussions, which also drive, and are driven by, the first two problems: the Information Problem and the Coordination Problem.

Introduction

In this piece, it might get awkward. I'm going to touch on some of the economic and financial drivers that, from my personal experience and in my opinion, amplify the problems related to information and coordination covered in previous posts. What follows is intended as a generalised, descriptive, empathetic and explanatory exercise, without any value judgement.

Effective regulation of AI and our broader national direction on AI policy requires a well-informed, multi-sector approach. This is undermined by the Information Problem and the Coordination Problem. In my view, there is another problem contributing to those problems, which relates primarily to financial considerations, or what I refer to loosely as an "economic" problem.

The Economic Problem is that everyone needs to generate money to pay their bills and take care of their families and communities. In itself, that is not a problem, but it becomes one where different groups derive different economic benefits from their participation in AI policy discussions, because it impacts the pace of their work, the things they prioritise, and their ability to participate effectively in the discussion.

A chicken and egg problem

Addressing this economic problem requires raising money and having the institutional arrangements to inspire confidence among the people and organisations contributing those resources. But answering the necessary questions from those people or organisations requires information, coordination, and resourcing, creating a chicken and egg problem.

Photo by ROMAN ODINTSOV.
The economic context for different sectors

Here is my blunt assessment of the economic realities facing different sector groups and how this influences their participation.

Government and public sector

Government as a broader institution is funded to do valuable but uneconomic things that may not produce an immediate financial return. It funds these things through compulsory taxation on other participants. This has two implications for pacing and coordination.

Most people participating from government receive a salary – a dependable paycheck every week, independent of the time it takes to perform the work. They're also responsible for addressing important processes and requirements that other participants can't see, don't understand, and don't have to face. This takes time and energy. It also frequently means just having to wait while someone else reviews your work or decisions. Aside from financial considerations, public media exposure and external critical comment are often the other strongest drivers of behaviour, because of the incentives held by Ministers and elected politicians (described below). This means that Government's incentives are generally to move slowly, unless some other non-financial factor is imposed to accelerate activity. People working within the government find this frustrating too. I have yet to meet anyone in government who deliberately wants to move slowly just for the sake of it.

There are some other important points to note. The money that funds government activity is not endless, and people in government are accountable for how they spend it. It would also be a mistake to ignore the recent and historic rounds of redundancies facing otherwise fantastic public servants. People are also committed to their work as a matter of professional pride and public service, and this is a very real thing.

Given all this hedging, why raise this point? It's relevant because the financial pressures government participants face are fundamentally different from those faced by the other groups. This has a corresponding impact on the information sharing and coordination problems. Government participants have the resources to create and publish information for regulatory purposes, but may be reluctant to share it. The financial situation of government participants influences the pressure (or lack of pressure) to coordinate with other parties, as well as the pace of their information production and coordination efforts.

Industry

Businesses fall on a spectrum. On one end are exceptionally well-funded businesses, which often earn money from the technology being regulated (AI), and/or from advising those companies (or the agencies who regulate them) on how to comply with regulation (AI assurance and regulation). On the other end are businesses that are barely surviving. Some of them want to become companies that work with and sell the technology, or want to establish a market position in advising those businesses (or the associated government agencies).

All businesses, wherever they fall on the spectrum, need to consider how the time and money they're applying can be justified from a financial perspective. The bigger the business, the more money can be allocated to this task, either in the name of business development or market competition. Even for exceptionally well-funded businesses, the money (and staff time with it) doesn't come from nowhere. Revenue has to cover expenses. This sounds abstract to people who haven't faced this situation, but it's a very real issue for anyone who's borne the responsibility of paying staff and themselves each week. Again, the point here is to note that the financial realities facing businesses are different from those facing other participants, which drives the problems above.
Large and well-funded industry participants operate more like governments. Small, poorly funded industry participants have an incentive to work rapidly and establish a market position, whether through selective coordination, public activity, or competition.

Communities and real people

Communities are often the most affected by AI, but are completely under-resourced. Communities show up to policy and regulatory discussions facing the kinds of questions I raised under the Information Problem, with none of the time, information or support required to meaningfully address them. The onus of justifying change is often put on them. Everybody else in the policy discussion is speaking a completely different, arcane language. Discussion seems to revolve exclusively around barriers to action, writing new documents, reading existing documents, or proposing non-specific things that can never realistically be implemented. Many of the real barriers to action are never actually spoken out loud – for example, "I'm not going to do that because it wouldn't fly with the Minister", or "Could you imagine how this is going to look on the front page?", or "That doesn't fit with our market strategy."

Communities find all of the above incredibly frustrating. They are not monolithic, and can sometimes have better knowledge and skills than paid participants from other sectors. But their contributions are seldom compensated, even though the skills and experience they bring are often essential. There is another growing group of people in New Zealand with a powerful interest in AI policy, significant personal resources and deep domain expertise. I suggest these people (or groups) could play a significant role in addressing the Economic, Information and Coordination problems.

Academia and researchers

The economics of academia are perhaps the most difficult of all. Anyone who's had to chase research funding or juggle teaching, research and service knows what I'm talking about.
Academics are expected to have done the reading, or even to have written most of it, but they face most if not all of the same information and coordination barriers as everyone else outlined above.

Multi-stakeholder bodies or professional associations

Organisations like professional associations or industry membership bodies represent multiple entities, or multiple types of entities. They play an essential and positive role in all of the above. They're also deeply acquainted with the Information and Coordination problems, and bring vital experience navigating those issues. Economically speaking, these organisations are constantly being asked to justify why they should be funded, while also navigating the explicit or implicit objectives, preferences and incentives of their members. Many of the people who are the real engine-room of these organisations fall into one of the categories above in terms of their immediate economic situation. These groups and associations are thought to bring the resources of well-funded companies or governments, but frequently face conditions similar to the other participants. They face conflicting pressures to create, disclose and withhold information, and to initiate, avoid, or maintain coordination depending on the circumstances at hand.

Elected politicians

Ministers and elected politicians play a significant role in determining the economics and overall direction of all of the above. They're oriented towards doing things and solving problems, but not at all costs. More than perhaps any other participant, elected politicians face the responsibility of taking a birds-eye view, where a number of equally important and urgent priorities must be weighed against each other. Key motivating factors that influence the Information and Coordination problems include media risk, political relationships, and election cycles (which draw in things related to numbers of votes and political fund-raising). These can hamper information production and drive behaviours that undermine coordination. They can also be powerful contributors to completely resolving the Information and Coordination problems. Ministers and elected politicians are also driven by the public interest, as well as their genuine commitment to serve their communities, empower industry, and support the public service to produce its best work. All I'm saying is that the financial reality facing Ministers and elected politicians is often out of step with other participants, and this has an impact.

Why raise this issue?

The reason I've painted this picture is that the economic considerations driving participation in public policy discussions about AI are very influential. The economic factors drive a mismatch in pace and priorities between different stakeholder groups, which has a compounding impact on problems of information-sharing and coordination. They also have a downstream impact on the policy orientation we advocate for and operate around – more on that in the next post.

"Okay, well who should solve this problem?"

There are two complementary ways to solve this problem, which I see as two sides of the same coin. The first is money, but a necessary foundation for that money is to have institutional arrangements that justify the trust and confidence of the people or organisations contributing it.

Money

To be blunt, someone or some institution with money who wants to see effective activity on AI regulation should solve this problem. Another option is for multiple people or entities to set aside a contribution to a coordinated effort to solve the Economic Problem, the Information Problem and the Coordination Problem – and by extension, the Policy Problem.
In light of the comments I’ve made about the different sectors above, here are the key factors that complicate the task of simply putting up money to address these problems.

Government is expected to have internal competency that does not require external procurement. Government agencies are accountable for public funds, but often find selecting and designing initiatives to be performed beyond the public service to be a difficult exercise. Government also faces significant procurement impediments to the rapid deployment of public funds.

Industry cannot spend money unless it drives a return of some kind, or serves a broader public interest objective which is in some way related to its commercial activity.

Communities and real people generally do not have the resources to contribute, but can contribute time. Some members of the community do have significant economic resources, and could contribute those resources toward effective AI policy in the right circumstances without being accountable to anyone else.

Academia and researchers can secure research funding, but the fundraising cycle is slow, and a significant share of any funding is absorbed by administrative overheads. Funding is generally secured for named research personnel to perform specific research tasks.

Multi-stakeholder bodies can recruit economic resources from members or other sources, and distribute them with flexibility unavailable to other groups. Often, though, they are not empowered to distribute those resources beyond their own control, and resource distributions have to serve the members or contributors to the relevant body.

Elected politicians can enable the provision of funds, or seek to raise funds independently of government, but they cannot realistically distribute those funds themselves. Their role as elected politicians also risks politicising the way the work is perceived, undermining trust and perhaps worsening the Coordination Problem.
Institutional design

Anyone being asked to contribute economic resources will ask the following questions, either to themselves or to the applicant – and not without justification. However, the task of answering them takes time, and they are difficult to answer because of the Information, Coordination and Economic problems. The questions are:

What are you going to do specifically? Why would that help? Are you sure that would help? Wouldn’t an alternative set of actions help more instead?

Who’s going to pay for it? Why should it be me? Why not those other people? Why hasn’t someone else already paid for this? Do you really need all this money or just some of it?

What’s in it for me? How does it serve my objectives, or the objectives of people I'm accountable to?

What are the downsides or risks of me supporting this? How far am I likely to be associated with or affected by what you eventually do?

Why you/these applicants? Isn’t someone else already doing this? Who’s accountable if you don’t deliver?

Again, these questions are justified, and should be accounted for in the institutional design necessary to build confidence in the deployment of economic resources. But people trying to answer them face a kind of chicken-and-egg problem, and may answer them all without ever receiving any funding. In those circumstances, it's frankly irrational to even start.

Conclusion on the economic problem

Most people reading this will self-identify as an exception to the generalisations I’ve made about each sector group above. For what it’s worth, they’re probably correct. By necessity, I’ve painted a broad-brush generalisation of the different sectors and groups which some may find offensive. It's important to emphasise again that there's nothing inherently wrong with being driven by economic considerations or participating in public policy out of some level of self-interest.
Being driven by those factors does not mean you are not also driven by factors like the public interest or a sense of community service. However, those things cannot pay for food and electricity. By being blunt about the economic dimension of this problem, I’m hoping to bring it out into the open as something that could be meaningfully addressed – through economic support, through institutional design, or through networks and relationships. Without solving this problem, it’s difficult to imagine any progress on the other problems, which unfortunately means we are wasting time.

  • The Information Problem (Wasting Time on AI, 2 of 6)

Tom Barraclough

This is the second in a series outlining four problems with the way that New Zealand is approaching AI regulation. This post outlines the first problem, related to the sharing of useful information (The Information Problem). You can read the introductory post in the series here.

Distributed regulatory approach = distributed regulatory material

New Zealand has taken a distributed approach to regulating artificial intelligence. What this means is that we won't be deploying a dedicated "AI Act". I think that is a sensible decision, and, in fact, an unavoidable outcome. But taking that approach has consequences. One of those consequences is that regulatory material on AI is littered all over the place. I know this because I've played a role in collating it all in the NZ AI Policy Tracker. Our distributed approach to AI regulation means relevant information is published widely by many institutions. That means anyone wanting to engage in a productive discussion about the design, development, deployment and governance of AI in New Zealand has to traverse at least ten statutes. People wanting to make a difference on AI regulation are also faced with identifying and analysing not just statutes, but a significant number of softer regulatory instruments and guidance. These materials come from government, the private sector, and other interested bodies that have a social, moral or pragmatic influence on decision-making on AI deployment, as well as the way it’s judged by external parties. That can mean relevant information is overlooked. As one example, few people seem to have noticed the dedicated framework for "automated electronic (decision-making) systems" which has governed existing AI systems like the e-Gates used by nearly everyone entering or exiting the country since about 2018. It can also mean that it's difficult to know ahead of time whether non-statutory information on AI is still relevant and worth taking the time to read and absorb.
There's a lot of relevant information out there that was prepared for a world where "AI" meant "predictive analytics". Some of that information is useful at a high level – it relies on the OECD principles, for example – but it's basically out of date, and the only way to know that is by reading it with the benefit of substantial contextual knowledge. Helpfully, most of the statutes were identified in the Responsible AI Guidance for Businesses from MBIE. Importantly, this means that somewhere within Government there is a longer analysis of why these statutes are relevant and how they apply. That background analysis has not been made available, which is a missed opportunity to save other people time doing the same analysis again. There's another problem. AI is largely inseparable from other digital technologies, as well as the contexts in which those technologies are deployed. That means no single regulatory vertical or silo can reach across all this material. It also means that it’s difficult to say which agency within government should take on the task of collating or summarising it all in one place – acknowledging that MBIE and GCDO have taken on slices of this task.

Doing the homework

Finally, even if you can bring all of this information together in one place, coming to grips with it all is a massive task. It takes time, and therefore money, and involves significant cognitive exertion. Some of you might be thinking: “Oh, I’ll just use AI for all that”. But for any analysis with real-world consequences, that's probably not good enough. You'll need to be able to explain and verify your work. The question you should ask yourself is: "How much money am I willing to bet that this AI system had access to all the relevant information, properly understood my context, and has given me a reliable answer?"
Given how hard it is to find everything already, and how difficult it is to know whether that information is still relevant, people delegating this task to AI should proceed cautiously.

You can still have an opinion without doing the homework

To be very clear, you DO NOT have to be an expert to participate in public discussions about artificial intelligence. This is NOT about excluding people from a necessary conversation, especially when it affects them and their communities. But if people want to participate in that discussion, and they can't find or grapple with this information, they're coming to the discussion at a disadvantage. This hampers their ability to have an informed and nuanced discussion on how AI should or shouldn't be used. That's unfair – and it means otherwise valid perspectives are disregarded on the basis that they’re insufficiently informed. Policy discussions on an uneven playing field are inefficient and disabling, driving frustration, undermining trust, and further complicating good faith collaboration.

AI is already regulated

It's vital to remember that AI is deeply regulated in New Zealand – our regulatory approach is just based on general purpose statutes. Our approach also embraces a diversity of different types of regulatory documents that are intended to guide or control behaviour. Not everything needs the big hammer of an Act of Parliament. As the Government Chief Digital Officer Paul James recently pointed out, the pace of change with digital technologies can sometimes mean that a more flexible regulatory approach is more effective. It's also fundamental to realise that this general purpose, systematic approach is how we do legislation generally in New Zealand. It makes for a much tidier statute book than in other jurisdictions like the US and the European Union.
What it takes to draft an Act of Parliament

Any agency wanting the Parliamentary Counsel's Office to draft a new Bill first has to answer a number of questions (these are published in the Legislation Guidelines). That means anyone doing policy work to explore AI legislation has to answer the same questions. The questions include things like:

What does the law say already on this? What's wrong with the law as it is now? How well do you understand the problem now? What is the evidence base you're relying on to make the case for change?

Will the changes you're proposing actually influence the problem you've identified? Is legislation the right tool for the job here? What other types of regulation might be appropriate?

Who (not what) does the law apply to? Note that we're not regulating AI, we're regulating people who do things with AI.

What is the domain you're regulating, who is responsible for regulating it (now and in the future), what are the consequences of non-compliance, and what kinds of penalties should follow? Are the penalties proportionate and consistent with other similar kinds of conduct covered by legislation? Are the consequences civil or criminal? What powers should a regulator have, subject to what checks and balances?

How does the world look now, what is the problem to be solved, and how will the world look if the legislation: (1) exists; and (2) is implemented successfully (or unsuccessfully)?

What might the consequences of the legislation be for fundamental things like human rights, constitutional conventions, and te Tiriti o Waitangi?

How might the legislation you're proposing conflict with other legal requirements? Could people be put in an impossible position, where the law requires them to do two inconsistent things at once? This is a significant issue with the EU's tech regulation framework, where the GDPR, DSA and AI Act appear to be in conflict.

Do we have the regulatory capacity to enforce the law being proposed?
Which agency will do what, how will that be funded, and how will that activity overlap with self-regulation or the regulatory responsibilities of other agencies and bodies? Can the law be meaningfully enforced by New Zealand Courts and agencies within our sovereign borders? This is a big one in relation to technology companies.

These questions will become relevant again in my discussion of the fourth problem, “the policy problem” (coming soon).

Not a fair fight

Now – at this point I have to emphasise that I know firsthand from prior projects how difficult it is to answer all of these questions on AI and in other policy areas. The burden of answering these questions can't fall solely on people and groups without access to the information or resourcing necessary to answer them. The fact these questions exist must not be an insurmountable barrier placed in the way of necessary reform. Ministers and public service agencies should not (and I'm sure are not) just be sitting there refusing to turn their attention to these questions. But people and organisations beyond the public service can play a meaningful role in asking and answering them.

A whole-of-society approach

When it comes to technology governance and regulation, there is a powerful and necessary role for a whole-of-society approach. Even aside from the principled reasons for this approach, there are fundamental pragmatic reasons why reciprocal learning and sharing of perspectives is essential.

Government: "Wellington" cannot be everywhere at once, and people working in government aren't exposed to the same things as people in business or the community. This means relevant things can be overlooked.

Business and industry: Some of the things public servants have to deal with, and the ways they have to work, are important. However, they're often not well understood or as highly valued by businesses and industry.
By contrast with the public service, businesses also have reasonable limits on how far they can ingest or respond to the insights and perspectives of the wider community.

People in academia or the community will bring important perspectives, but they may never have experienced what it's like to have to operationalise a regulatory system, or to build large-scale, high-risk business or compliance systems. These systems require years of work, substantial financial investment, public/private accountability, and navigating trade-offs between equally important factors.

This is true of almost all public policy, but it's especially true for technology regulation. This is because of the way that technologies like AI, the Internet, and communications systems are so powerful and pervasive. All of these factors mean that, when it comes to meaningful action on AI regulation, we're wasting time because of the way we work with, distribute and disclose information.

Coming next

In the next post, I'll describe another problem related to how different groups coordinate on AI policy. This coordination problem is both influenced by the information problem I've described, and also influences the problems I've identified above with circulating and sharing information.

  • The Coordination Problem (Wasting Time on AI, 3 of 6)

This is part 3 of a series where I explain why I think we're wasting time on AI Regulation in New Zealand because of four key problems. The first problem relates to information about AI regulation and the way it's circulated. You can find a post on that topic here.

Image credit: Illustrated | Skathi/iStock, Slim3D/iStock.

When it comes to AI regulation and the concept of AI sovereignty, our time could be better spent if we coordinated more effectively. I refer to this as "The Coordination Problem", which in summary means:

Some people and groups are coming at the discussion from the wrong starting point. This means people are having discussions that don’t line up productively.

Other people have access to useful information that, for whatever reason, has not been or cannot be shared. This means that AI regulatory discussion is duplicative, non-specific, and poorly aligned.

No actor can take this topic on alone. We need a network of experts and actors working on AI and AI policy, because the topic requires a diversity of perspective. But people have incentives to occupy and protect a certain space in the discussion, and this can serve to undermine collaboration.

Coordination can be encouraged by a systematic approach and a structured agenda for investigation, but so far neither of these has taken hold.

Before going further, I note the coordination problem is driven by and contributes to the information problem. It’s also driven by the Economic Problem – more on that in the next post.

What do we need for effective coordination?

A logical starting point

Regulation for AI already exists. The volume of instruments that could apply to people who work with AI is enormous and overwhelming. On that basis, the real task is how to implement existing regulation systematically, efficiently and effectively, and we should focus on that. Modifications to that existing regulation are inevitable and necessary, because all regulation changes over time.
Adoption of AI products and services is putting new capabilities in new people’s hands, and some form of encouragement or prohibition for certain practices is going to be necessary. Regulation can enable rather than inhibit adoption, because it provides certainty, defensibility and a common baseline for market competition. If we want AI to be deployed in the right way, we need to clearly state our expectations. No one can meet theoretical best-practice standards if they can’t find out where those standards are or what they require. When it comes to implementing AI regulation, and modifying it where necessary, we may as well get going. But modification and implementation require big-picture thinking across multiple regulatory systems, in and around what’s already in place. Coming at this discussion on the basis that AI is completely unregulated is unproductive.

We need to be more specific

Whenever someone advocates in favour of AI regulation, I’m never sure what their position is on any of the above, or any of the matters in my previous post. In light of The Information Problem, it seems doubtful whether anyone is truly acquainted with all of the relevant regulatory information that already applies to AI. Another coordination problem relates to information re-use. A lot of organisations and institutions hold fantastic knowledge and insights on the current state of regulation here and overseas, but it can be hard to share that knowledge with others. In particular, for any output on AI regulation shared by the public service, it’s reasonable to assume there is a much larger volume of information sitting behind that output which informed the final analysis. If we aren’t sharing information effectively, and people aren’t more specific in their advocacy positions, discussion is ambiguous, duplicative and ineffective. If we want to avoid speaking past each other or reinventing the wheel, how can we re-use information and build upon it more effectively?
We need a systematic approach

To coordinate effectively, we need a systematic approach to the various predictable issues that come up in AI policy and regulation. Ideally, that would include some kind of structured agenda, which lets people identify who is working on what, and which information resources exist already. This would allow us to address key points through a structured and well-informed discussion, and then move on. In New Zealand, this is possible, perhaps more than anywhere else in the world. Over the years, the questions and possible answers on AI policy and regulation have been articulated quite comprehensively. In New Zealand, the policy process is quite transparent and well-structured. Because of our size and culture, key relationships can be easily established, or exist already. I’m confident that useful data on the scale of any problems and the gaps in AI regulation is also readily available. If we could find our way to a structured approach on AI regulation, with the benefit of effective coordination and decent information, we could address key issues relatively quickly. Without that agenda or a systematic approach, we’re going around in circles, duplicating work in some areas and skipping past others, and wasting time on the wrong questions.

We need to be realistic about incentives

Disclaimer: What I’m about to say is uncomfortable. It relies on generalisations which may not be valid in specific circumstances, and I don't pretend to be exempt from those generalisations myself. Different groups and sectors have different interests and obligations when it comes to AI, and they all have a part to play in effective public policy. People and organisations who participate in AI policy discussions are driven by the public good, but on a pragmatic level, we have to acknowledge participation by all sectors is driven to some extent by self-interest. Self-interest in policy discussions is not inherently bad.
It is also not an immediate disqualifier for any valid and reasonable points made by a participant that happen to align with that self-interest. However, from a coordination perspective, there are two ways that self-interest is relevant: in public communications, and in private information-sharing. It's also important to recognise that participants who derive some economic return from the policy discussion have greater staying power than those who don't.

When it comes to public communications, compelling and simple public statements on AI and regulation have an impact on the reputation and profile of people and businesses. This can serve commercial or professional interests, and it influences the way that people and organisations communicate in public and in multi-stakeholder environments. It can also undermine effective coordination (see my points on starting points and specificity above).

When it comes to private communication and information sharing, it's important to realise that a lot of the most useful information is shared through private relationships in closed discussions. In itself, that is neither good nor bad – in fact, trusted discussions in private forums are essential for meaningful progress. However, in combination with the reputational, commercial and professional incentives described above, the most useful information is often shared selectively. Withholding some information happens for important reasons – to protect trust and confidence, or to manage the risk of misunderstandings. But some information is shared selectively because it provides a competitive advantage. We need to factor these incentives into the way we approach AI policy from a coordination perspective.

“Okay, but so what? What’s your proposed solution?”

The real world consequence of this coordination problem is that we’re inhibiting independent and proactive activity, as well as collaboration.
Because of the coordination and information problems, it is difficult to initiate or maintain a systematic, well-informed and diverse approach across multiple stakeholder groups, even when that is otherwise desirable. For example, one actor could embark on an exhaustive analysis of AI-related legislation in the public interest, only to find that a similar analysis already exists somewhere else, but hasn’t been shared publicly. That would mean wasted time and energy. If anyone did produce such a cross-sector analysis, that same analysis could be used by a private advisory organisation to generate significant economic value. From experience, it’s also likely that one person or group could complete that exhaustive analysis, only to find that someone else has been funded – or will be funded – to perform the same exercise.

My solution so far consists of specific outputs and initiatives to address each of the four problems (Information, Coordination, Economic and Policy). I'll draw these together in the final post. Apart from those specific initiatives, I'm hoping that this series, by identifying the problems and bringing them into public discussion, can enable better coordination, as well as making it less awkward to talk about these factors when it comes to institutional design, or the design and selection of specific projects and initiatives.

Finally, I’ll acknowledge that many of the issues I’ve identified above aren’t unique to AI policy. But I do believe they are having an acute impact on productive activity in an important area, where things move very fast, and we do not have time to waste.

Coming next

In the next post, it gets even more awkward as I talk about the mismatch in priorities and pacing between different groups as a result of economic factors, as well as the way economic realities contribute to the Information and Coordination problems.

  • Wasting time on AI regulation and Sovereign AI in New Zealand (1 of 6)

A personal perspective, inspired by professional experience since 2012 in tech, law and public policy. Produced 100% organically. Photo by Mihai Vlasceanu from Pexels.

In its AI Strategy, the New Zealand Government – as distinct from the Ministry for Business, Innovation and Employment – has taken an approach that emphasises potential for economic growth. Perhaps relatedly, New Zealand has taken a distributed approach to regulating artificial intelligence. What this means is that we won't be deploying a dedicated, cross-cutting "AI Act", like the European Union. To be frank, I think that is a sensible decision, and a probably unavoidable result (more on this in the next post). Instead, we’re focused on how to adopt and deploy AI well. Again, I think that’s a sensible approach. But taking that approach has consequences. One of those consequences is that even people wanting to adopt and deploy artificial intelligence are faced with a difficult task. If they want to understand what they can or can’t do, how are they meant to find out? What kinds of substantive support are available to drive adoption, even beyond self-help guidance? A diverse range of regulatory material on AI is littered all over the place. I know this because I've played a role in collating it all in the NZ AI Policy Tracker. On top of that information problem, there are three other problems that undermine meaningful effort on AI policy. That leaves four high-level problems in total:

An information problem. How are people meant to find the information they require?

A coordination problem. How are different groups and sectors meant to collaborate productively?

An economic problem. How do the financial and economic realities facing different sectors shape the information and coordination problems above?

A policy problem. What’s the overall guiding vision we’re shooting for, and how are we meant to coordinate initiatives by government, industry, academia and communities around this vision?
In a series of posts I’m releasing in the coming fortnight, I'm going to explain why I think all of the above means we're wasting time when it comes to AI regulation, to the possibilities and concept of Sovereign AI, and to the benefits and negative consequences of artificial intelligence more generally. Some of what I say may feel uncomfortable. I will clearly say at this point that I don’t pretend to be above the fray or immune to the factors and incentives I describe. But this series is intended to be descriptive and constructive, and in the final post I'm also going to explain what I've done and am doing towards resolving those problems. If you’d like to be notified when the next post is published, you can sign up for the Brainbox mailing list (see page footer), or follow along on LinkedIn (me, and Brainbox). If you don't want to wait to read all the posts, you can reach out and get access to the whole series at once. If you want a head start, you can find below a presentation I gave in 2024 on New Zealand's path in regulating AI in a global context. Some of the shifts since then (particularly on the rules-based international order and global trade) mean my conclusions require a little refinement.

  • Help us build Internet infrastructure climate resilience for Aotearoa New Zealand

    We're moving to the next phase of an action research project to help communities stay reliably connected to the internet during extreme weather events, and in the face of climate change.   We’ll be learning from past experiences, like those from Cyclone Gabrielle, as well as looking at our current infrastructure and any vulnerabilities.   We want you to join us!   We need input and participation from people with direct experience of things like climate change impacts and disruptions to Internet infrastructure.  This includes community members, researchers and first-line responders, as well as those from community organisations, local and national government agencies and infrastructure companies.   We are looking for as much input as possible, including a small group of people for a coalition to collaborate with the research team here in Aotearoa, and an advisory group from across the globe, all towards improving Internet Infrastructure and Climate Resilience for Aotearoa. The lead researcher for the project is Dr Ellen Strickland , with the project team based at the Brainbox Institute and working closely with Pāua Interface . The work is also supported by an expert advisory group . The project is made possible with support from the Internet Society Foundation as well as support from the NZ Telecommunications Forum .   You can get in touch and find out more on the project website , or join us for an online information-sharing session on 26th March 2025 .

  • Internet infrastructure in a changing climate: new research to improve resilience for Aotearoa New Zealand

Ellen Strickland

As we wind down the final days of 2024, I wanted to share a bit about an exciting new research project getting underway for next year. The project focuses on growing the resilience of New Zealand’s Internet infrastructure in a world of climate change and extreme weather. I’m particularly excited about this project because Internet infrastructure is something I’m passionate about, both nerding out about cables and satellites and signals, and also caring deeply about how connectivity can support communities. Also, like more people than ever, I’ve recently been personally impacted by extreme weather events: I was caught in a dangerous flash flood while driving through Northland in February 2023, and then in October this year I was with family at their home in Florida during Hurricane Milton, one of the fastest-intensifying hurricanes on record. In both instances, Internet infrastructure was impacted, making a stressful, dangerous, and physically difficult experience all the worse due to connectivity issues. Many people and communities in Aotearoa New Zealand experienced extreme weather events in early 2023, including impacts on Internet infrastructure and disruption to connectivity, which can be vital for enabling response and supporting community needs. This project will learn from these recent experiences. Many within industry, government, research, and communities have insights they’ve gained, as well as things they are currently doing and more they want to do in this area. The project will work to convene some of these people and organisations for action-focused and collaborative research. Its core component will be a national Internet Infrastructure Climate Resilience Coalition, which will convene in early 2025 with work extending into 2026, alongside desk research.
I’ve played a role in leading a few collaborative cross-sector initiatives around Internet related issues, through structures like New Zealand’s NetHui and a range of global Internet governance initiatives. One of the things I find most satisfying in my work life has been bringing people together who are passionate about a topic and facilitating collaborative learning and action. This action-focused research project will use that kind of approach to help build understanding around the vulnerabilities and context of New Zealand internet infrastructure by bringing together people from across sectors to learn from each other and to take action, together and separately, to improve resilience of our Internet in Aotearoa New Zealand. This project is being made possible through funding from the Internet Society Foundation . The project grant application was supported by the  Brainbox Institute  and the New Zealand Telecommunications Forum , who will both be involved in the project. The project was inspired and informed by my recent fellowship with the Critical Infrastructure Lab at the University of Amsterdam. We’ll have a lot more to say about the project in early 2025 but I’m keen to hear from any people and organisations working in this area who might be interested to engage in the project. There will be lots of ways to provide input into the research project and the coalition, so if you or your organisation are interested in hearing updates about the project next year or would be interested in engaging in its work, you can use the sign-up form below, or email coalition@brainbox.institute

  • Will a new bill save the New Zealand news media from extinction?

    Ximena Smith, Communications Lead and Senior Consultant

The crisis we are currently seeing in the news media was on full display yesterday morning during the oral submissions to a parliamentary select committee for the Fair Digital News Bargaining Bill. “It is a real fight for survival for us”, TVNZ’s executive editor Phil O’Sullivan said. Sinead Boucher, owner of Stuff, warned that the news media’s ability to help keep New Zealand “free of corruption and our societies healthy” is currently in “great peril”. Some of the figures raised by submitters helped expose the dire reality of this crisis: NZ Geographic publisher James Frankham said magazine advertising revenue had fallen from $210m to $117m since 2012, and chair of the Radio Broadcasters Association Jana Rangooni predicted that, unless some intervention happens, all commercial media would go extinct in the next decade.

While a range of perspectives were aired yesterday on what should be done to rectify the situation, there was little disagreement about why the news media is in this position: in essence, the digital age has disrupted the business models of news media, and now they are struggling to compete with global tech platforms like Google and Meta for digital ad revenue. It’s this competitive relationship between news media and big tech that the Fair Digital News Bargaining Bill targets. Simply put, the bill would compel digital platforms to negotiate commercial deals with news companies, in order to balance the scales financially and to ensure the future viability of the New Zealand news media.

“People should have to pay for using content”

A key premise of the bill is the argument that tech giants use news media content from Kiwi outlets for their own commercial benefit without paying for it.
For example, a number of news company submitters complained about the impact of ‘zero-click searches’, where search engines like Google scrape and summarise information from webpages – like news sites – to answer users’ search queries without them having to click away from the search engine. Another example that came up in submissions was the use of news content to train generative AI models, with no compensation paid to news outlets. Michael Boggs, Chief Executive of NZME, likened this to radio stations playing music on air: if they want to play a song, they have to pay a licence fee. “You have to pay royalties, it’s a no-brainer. People should have to pay for using content.” Stuff’s Sinead Boucher put it more bluntly, describing generative AI products as “no more than modern day succubi”.

However, this logic can go both ways. Digital platforms like Facebook and Google unquestionably provide news outlets with free referral traffic. While some media executives downplayed the importance of this traffic during their submissions, the fact of the matter is that news outlets do have the option to opt out of having snippets of their content displayed on digital platforms – and yet, they have chosen not to do so. The reason for this comes down to another point raised by several submitters: the huge amount of control that big global tech platforms have over New Zealand’s digital infrastructure. At the end of the day, the news media needs big tech more than the other way around.

New Zealand media isn’t alone in this power imbalance with digital platforms – for example, we’re currently seeing the same dynamic play out in Canada, where a similar bill has recently gone into effect. Rather than coming to the bargaining table, Meta has dug its heels in and blocked news links from appearing on its platform for Canadian users, insisting that it doesn’t need this content in order to be commercially successful.
The Copyright Act

Former District Court Judge David Harvey suggested in his submission that news outlets already have a tool at their disposal for dealing with tech giants using New Zealand news content: the Copyright Act. However, some newsroom executives dismissed this as an option. In her submission as President of the News Publishers’ Association, Sinead Boucher said the Copyright Act was not a viable option for New Zealand newsrooms dealing with this issue, as it “plunged people into endless litigation with the biggest media companies in the world."

Another reason for newsrooms’ hesitancy to pursue this in the courts is probably that it’s unclear whether a case would actually succeed. For example, newsrooms will be closely watching the current copyright lawsuits against OpenAI in the US. Just this week, a court partially dismissed two lawsuits brought by authors against the artificial intelligence company for copyright infringement, with the judge saying that the authors had not sufficiently demonstrated that there was “substantial similarity” between ChatGPT’s output and their copyrighted works. While commentators have noted the New York Times’ case against OpenAI appears to be strong, as they have clear evidence of ChatGPT outputs regurgitating some of their stories verbatim, a judgement could still conceivably go either way.

It’s understandable, then, that New Zealand newsrooms are backing the Fair Digital News Bargaining Bill instead of potentially expensive litigation, as the bill could provide them with a more certain method of revenue sharing with digital platforms. However, the problem is that this bill wouldn’t do anything to address the aforementioned reliance that New Zealand media has on digital platforms – in fact, it would make them even more reliant on these platforms, as it would firmly establish the platforms as a critical source of funding.

Another way?
Ultimately, the weakness of this bill is that it tries to bring a copyright-based argument to a markets and competition problem. Instead, a stronger approach would treat these issues as separate, and would address the root cause of the underlying power imbalance without making media dependent on the platforms for income. For example, one alternative could be a bill that breaks up the dominance digital platforms have in the online advertising market. This strategy is already being tested in other jurisdictions; for example, the Competition and Transparency in Digital Advertising Act in the US could be a useful blueprint for New Zealand lawmakers keen to bring greater transparency and competition to New Zealand’s digital advertising market, and to level the playing field between the global tech titans and local media players.

Of course, breaking up concentration in digital advertising would not be a silver bullet for the woes of the New Zealand media industry. Other options also need to be considered – for example, despite some of the negative optics of the Public Interest Journalism Fund, the government shouldn’t completely dismiss ways in which public funds can be distributed at arm’s length to public-interest media, in ways that build public trust.

Right now, it’s indisputable that the media is at a crisis point, and there would be dire consequences for democracy if local media outlets were to collapse. But as policymakers now deliberate on solutions, their focus must pivot towards fostering a resilient, competitive, and autonomous news ecosystem – one that steers clear of overreliance on major tech platforms for sustenance.

  • The Brainbox Institute Welcomes Dr Ellen Strickland, Begins Transition to Non-Profit Structure

    Tom Barraclough, Founder and Director

I’m excited to announce two significant developments for the Brainbox Institute today. Since Brainbox was founded in 2018, the design, development, deployment and governance of the internet, artificial intelligence and other technologies have only become more important. Based on our work during that time, and the positive response we’ve received, we strongly believe there’s an important place for an organisation like Brainbox in the domestic and international landscape. To grow in the way we want to, and to deal with the topics we want to address, we’re going to need a different approach.

On that basis, Brainbox is beginning a gradual transition toward a non-profit structure over the next 12 months. This shift will formalise our existing values and commitment to the public interest while opening up new opportunities to produce public goods that explore the impacts of technology on individuals, communities and society: communications, consulting, engagement, research, analysis, and education. Values-driven consulting remains a core component of what we offer. We are committed to ensuring our work is grounded in practical realities, and to advising and empowering businesses, government agencies and other clients on technology regulation and governance, while also expanding our capacity for public-interest research and initiatives as a think tank.

I’m also delighted to share that Dr Ellen Strickland will be joining the organisation as a Director, working alongside me to help lead this transition. Ellen is an expert in internet governance and technology-related policy, and she brings fantastic experience in academic research, working with government, and leadership roles within technical and civil society organisations. She has a strong vision for what the Brainbox Institute could become, with a firm eye toward both domestic and international landscapes, and I’m very excited to have her join us.
None of this would be possible without the vision and commitment of the existing team, who have been part of shaping this decision and are enthusiastic about Brainbox’s evolution. From AI governance to digital trade to online information ecosystems, the internet and digital technologies are shaping every dimension of society. I’m confident that the Brainbox Institute can become an anchor point for engaging proactively on these issues, bringing an international perspective to New Zealand, and a New Zealand-based perspective to the world.

We believe in the potential of technology for empowerment – for individuals, communities, industries, and New Zealand as a whole. We want to be a good partner to existing actors, and to empower other organisations to take a more active role. We want to create a space for dialogue and discussion across stakeholder groups, as well as an avenue for nurturing coming generations of technology leaders.

I encourage you to reach out to me, Ellen, or the Brainbox team with any questions, suggestions, or collaborations you’d like to explore. You can learn more about Ellen here, and contact us via the website here. We’re excited for what this next chapter will bring.

  • Global Digital Compact (GDC) presents critical opportunity for NZ

    Discussions at the United Nations this week on the Global Digital Compact (GDC) present a critical opportunity for New Zealand to play a key role internationally in shaping digital governance. The Global Digital Compact is incredibly important to both Aotearoa and the future of the internet as a whole. New Zealand is working toward growing its technology sector, and our geographic isolation means the Internet is vital. Digital technologies also dominate national discussions, and we think more New Zealanders should be empowered to participate in those discussions. We know that New Zealand can also play an important role in the international community. We’re experienced at navigating difficult geopolitical landscapes, and have direct experience of both the immense benefits and terrible costs of the Internet and digital technologies.

At present, the GDC will have two long-lasting impacts

The GDC will have far-reaching implications for the unified global Internet, with substantial economic, political, and social repercussions for New Zealand and New Zealanders. First, it will send a strong signal to national governments about what kinds of actions are legitimate in responding to digital technologies and their impacts. Key topics include child safety, disinformation, and artificial intelligence. This will have significant implications for issues like freedom of expression, privacy, and encryption globally. Second, it will shape how the governance of the Internet, AI and digital technologies – currently a collaborative effort among states, companies, non-government entities (civil society), international organisations, and the technical community – evolves in the coming decades.

A good start, but needs work

While the current draft is heading in the right direction, there is more work to be done.
Crucially, text on topics like Artificial Intelligence, multistakeholder governance, and nebulous concepts like “disinformation” and “safety” all need greater clarification and refinement.

Artificial Intelligence

The GDC treats AI like a special category, rather than like any other digital technology. While the impacts and capabilities of AI may be daunting, the GDC shouldn’t treat it like science fiction.

Multistakeholder Governance

While expressing support for the existing Internet governance model, the GDC must resolve some lingering ambiguities that point in the opposite direction. In addition, meaningful participation by civil society in governance processes is unrealistic without adequate access to resourcing.

“Disinformation”, “safety”, etc.

Substantial limitations on human rights can be imposed in the name of safety, and talk of “eliminating” disinformation risks profound limitations on freedom of expression. There needs to be some way of narrowing down these concepts and limiting what can be done in response.

For more information

For more detail, you can access our position statement below.

Interactive digital tool

We’ve also incorporated our comments into an interactive digital version of the GDC using software from Syncopate Lab.

  • AI for Organisations: Free Webinar and Q&A

    The Brainbox Institute is pleased to announce a free upcoming webinar and Q&A session focused on artificial intelligence (AI) for organisations. Led by our highly knowledgeable AI Lead, Allyn Robins, this event is designed for decision-makers seeking to navigate the complexities of AI in the workplace. In a rapidly evolving technological landscape, understanding AI is no longer a luxury for leaders – it’s a necessity.

On 20 March at 11am NZT, Allyn will discuss the present and future of AI, cutting through the noise to provide clear, practical insights. Attendees will have the opportunity to engage directly with Allyn and get specific queries answered in the interactive Q&A session. This webinar will go beyond surface-level discussions, offering decision-makers a solid foundation for understanding AI’s relevance to their organisations. By the end of the one-hour session, attendees will feel more empowered and informed to make strategic decisions about how AI fits into their workplace. Spaces are limited, so secure your spot by registering today. If you can’t make it this time, email info@brainbox.institute to stay informed about future sessions.

About Brainbox AI Lead Allyn Robins

Allyn guides the Brainbox Institute's synthetic media and AI-focused initiatives, and remains a key part of many other projects. Prior to joining Brainbox, he worked as an intelligence analyst at the Department of the Prime Minister and Cabinet, where he founded the Emerging Technologies portfolio and played a key role in coordinating efforts to help New Zealand navigate an increasingly technologically sophisticated world. He is a highly sought-after expert, offering insightful commentary for leading media outlets like Newshub, Stuff, and Lawfare. Allyn holds a Master’s degree in Physics and Bachelor’s degrees in Philosophy and Theatre, which has been a more useful combination than you might think.

Brainbox Institute is a non-partisan organisation that supports constructive policy, governance, and regulation of digital technologies.


© 2023 Brainbox Ltd. All Rights Reserved.
