
The Coordination Problem (Wasting Time on AI, 3 of 6)

  • Writer: Tom Barraclough
  • Sep 29
  • 6 min read

This is part 3 of a series where I explain why I think we're wasting time on AI regulation in New Zealand because of four key problems. The first problem relates to information about AI regulation and the way it's circulated. You can find a post on that topic here.



When it comes to AI regulation and the concept of AI sovereignty, our time could be better spent if we coordinated more effectively. I refer to this as "The Coordination Problem", which, in summary, means:


  1. Some people and groups are coming at the discussion from the wrong starting point. This means people are having discussions that don’t line up productively. 

  2. Other people have access to useful information that, for whatever reason, has not been or cannot be shared. This means that AI regulatory discussion is duplicative, non-specific, and poorly aligned.

  3. No actor can take this topic on alone. We need a network of experts and actors working on AI and AI policy, because the topic requires a diversity of perspective. But people have incentives to occupy and protect a certain space in the discussion, and this can serve to undermine collaboration.

  4. Coordination can be encouraged by a systematic approach and a structured agenda for investigation, but so far neither of these has taken hold.


Before going further, I note that The Coordination Problem is driven by and contributes to The Information Problem. It’s also driven by The Economic Problem – more on that in the next post.

 

What do we need for effective coordination?


A logical starting point


Regulation for AI already exists. The volume of instruments that could apply to people working with AI is enormous and overwhelming. On that basis, the real task is to implement existing regulation systematically, efficiently and effectively, and we should focus on that.


Modifications to that existing regulation are inevitable and necessary, because all regulation changes over time. Adoption of AI products and services is putting new capabilities in new people’s hands, and some form of encouragement or prohibition of certain practices is going to be necessary.


Regulation can enable rather than inhibit adoption, because it provides certainty, defensibility and a common baseline for market competition. If we want AI to be deployed in the right way, we need to state our expectations clearly. No one can meet theoretical best practice standards if they don’t know where those standards are set out or what they require.


When it comes to implementing AI regulation, and modifying it where necessary, we may as well get going. But modification and implementation require big picture thinking across multiple regulatory systems, in and around what’s already in place. Coming at this discussion on the basis that AI is completely unregulated is unproductive.


We need to be more specific


Whenever someone advocates in favour of AI regulation, I’m never sure what their position is on any of the above, or any of the matters in my previous post. In light of The Information Problem, it seems doubtful whether anyone is truly acquainted with all of the relevant regulatory information that already applies to AI.


Another coordination problem relates to information re-use. A lot of organisations and institutions hold fantastic knowledge and insights on the current state of regulation here and overseas, but it can be hard to share that knowledge with others. In particular, for any output on AI regulation shared by the public service, it’s reasonable to assume there is a much larger volume of information sitting behind that output which informed the final analysis.


If we aren’t sharing information effectively, and people aren’t more specific in their advocacy positions, discussion is ambiguous, duplicative and ineffective. If we want to avoid speaking past each other or reinventing the wheel, how can we re-use information and build upon it more effectively?


We need a systematic approach


To coordinate effectively, we need a systematic approach to the various predictable issues that come up in AI policy and regulation. Ideally, that would include some kind of structured agenda, which lets people identify who is working on what, and which information resources exist already. This would allow us to address key points through a structured and well-informed discussion, and then move on.


In New Zealand, this is possible, perhaps more than anywhere else in the world. Over the years, the questions and possible answers on AI policy and regulation have been articulated quite comprehensively. Our policy process is quite transparent and well-structured. Because of our size and culture, key relationships can be easily established, or exist already. I’m confident that useful data on the scale of any problems and the gaps in AI regulation is also readily available.


If we could find our way to a structured approach on AI regulation, with the benefit of effective coordination and decent information, we could address key issues relatively quickly. Without that agenda or a systematic approach, we’re going around in circles, duplicating work in some areas and skipping past others, and wasting time on the wrong questions. 


We need to be realistic about incentives


Disclaimer: What I’m about to say is uncomfortable. It also relies on generalisations which may not be valid in specific circumstances. Nor do I pretend to be exempt from these generalisations.


Different groups and sectors have different interests and obligations when it comes to AI, and they all have a part to play in effective public policy. People and organisations who participate in AI policy discussions are often driven by the public good, but on a pragmatic level, we have to acknowledge that participation by all sectors is driven to some extent by self-interest.


Self-interest in policy discussions is not inherently bad. Nor does it immediately disqualify any valid and reasonable points made by a participant that happen to align with that self-interest. However, from a coordination perspective, self-interest is relevant in two ways: in public communications, and in private information-sharing. It's also important to recognise that participants who have some self-interest or economic return at stake in the discussion have greater staying power than those who don't.

 

When it comes to public communications, compelling and simple public statements on AI and regulation have an impact on the reputation and profile of people and businesses. This can serve commercial or professional interests, and it influences the way that people and organisations communicate in public and in multi-stakeholder environments. It can also undermine effective coordination (see my points on starting points and specificity above).


When it comes to private communication and information sharing, it's important to realise that a lot of the most useful information is shared through private relationships in closed discussions. In itself, that is neither good nor bad – in fact, trusted discussions in private forums are essential for meaningful progress. However, in combination with the reputational, commercial and professional incentives described above, the most useful information is often shared selectively.


Withholding some information happens for important reasons – to protect trust and confidence, or to manage the risk of misunderstandings. But some information is shared selectively because it provides a competitive advantage. We need to factor these incentives into the way we approach AI policy from a coordination perspective.


“Okay, but so what? What’s your proposed solution?”


The real-world consequence of this coordination problem is that we’re inhibiting independent and proactive activity, as well as collaboration. Because of the coordination and information problems, it is difficult to initiate or maintain a systematic, well-informed and diverse approach across multiple stakeholder groups, even when that is otherwise desirable.


For example, one actor could embark on an exhaustive analysis of AI-related legislation in the public interest, only to find that a similar analysis already exists somewhere else but hasn’t been shared publicly. That would mean wasted time and energy. If anyone did produce such a cross-sector analysis, that same analysis could be used by a private advisory organisation to generate significant economic value. From experience, it’s also likely that a person or group could finish that exhaustive analysis, only to find that someone else has been funded – or will be funded – to perform the same exercise.


My solution so far consists of specific outputs and initiatives to address each of the four problems (Information, Coordination, Economic and Policy). I'll draw these together in the final post. Beyond those specific initiatives, I hope that identifying these problems and bringing them into public discussion can enable better coordination, and make it less awkward to talk about these factors when it comes to institutional design, or the design and selection of specific projects and initiatives.


Finally, I’ll acknowledge that many of the issues I’ve identified above aren’t unique to AI policy. But I do believe they are having an acute impact on productive activity in an important area, where things move very fast and we do not have time to waste.


Coming next 


In the next post, it gets even more awkward as I talk about the mismatch in priorities and pacing between different groups as a result of economic factors, as well as the way economic realities contribute to the Information and Coordination problems.



Brainbox Institute is a non-partisan organisation that supports constructive policy, governance, and regulation of digital technologies.

