
Search results

71 results found with an empty search

Pages (43)

  • Transparency-based approaches to social media regulation | Brainbox Institute

    Brainbox worked with the University of Otago and the Global Partnership on AI's responsible AI working group. GPAI was proposing a collaborative study of how social media recommendation systems deal with terrorist content.

Transparency-based approaches to social media regulation. Past Project, November 2021 and ongoing. Transparency and social media for the Global Partnership on AI.

This work is continuing in 2022, and Brainbox has participated in a range of meetings hosted by the Global Internet Forum to Counter Terrorism and by others working within the wider Christchurch Call. GPAI aims to conduct research within one or more social media companies to observe the effect of “recommender systems” on platform user behaviour – in particular, whether such systems have the effect of increasing user consumption of Terrorist and Violent Extremist Content. It outlined its proposed study in a separate technical report.

Brainbox provided an analysis of the legal and policy issues relevant to conducting research of this kind. Our analysis was grounded in human rights principles and frameworks, to ensure that any research partnership was broadly appealing, fair, and respectful of due process for all parties. It drew on previous work by Brainbox for an investor coalition led by the New Zealand Superannuation Fund on its engagement with Facebook, Alphabet and Twitter over the companies' responses to the Christchurch terror attacks of 15 March 2019. It also drew on Brainbox's reports on automated decision making and legislation as code.

Brainbox's report for GPAI | GPAI's technical report

  • Dispute resolution systems and access to justice | Brainbox Institute

    Brainbox has published work on dispute resolution systems and access to justice: in medico-legal disputes; and in an online safety context. See our submission on the proposed New Zealand voluntary code on online harms and safety.

Dispute resolution systems and access to justice. Past Project, 2014 onward. Dispute resolution systems, justice policy and access to justice.

Brainbox has published work on dispute resolution systems and access to justice in two areas: medico-legal disputes and online safety. Tom Barraclough has co-authored a number of publications on access to justice in New Zealand's medico-legal systems. These consist of a range of reports on access to justice for ACC claimants and articles in peer-reviewed journals. These insights have been applied in submissions to the Justice Committee and have been recognised in Parliamentary debates and independent ministerial inquiries.

In 2021, Brainbox made a submission on a proposed voluntary online safety code for New Zealand. The Code was drafted by industry signatories such as Meta, YouTube and Twitter, and led by Netsafe. The submission drew on other Brainbox investigations into platform content moderation systems for responding to terrorist incidents and global regulatory trends, and transparency-based approaches to social media regulation. The submission is available below.

Submission on voluntary online safety code

  • Assisting the Human Rights Commission on responding to COVID-19 | Brainbox Institute

    COVID-19 policy is moving rapidly, cutting across a range of policy areas and fundamental human rights. We worked with Antistatic to prepare a series of briefings to support the Commission to fulfil its statutory role.

Assisting the Human Rights Commission on responding to COVID-19. Past Project, December 2021.

Brainbox worked with partners at Antistatic to prepare a series of briefings on key COVID-19 policy issues, as well as compiling frequent current events round-ups. Brainbox's work supported Commission staff and Commissioners in formulating public positions and fulfilling statutory obligations.


Blog Posts (28)

  • Constructive suggestions for time well spent (Wasting Time on AI, 6 of 6)

    This is the conclusion to a series that has outlined four problems which work together to mean we're wasting time when it comes to work on AI regulation and Sovereign AI: an information problem, a coordination problem, an economic problem, and a policy problem. In this post, I outline four things I'm doing that address one or more of those problems as I see them. I also share a bigger-picture programme of work I'd like to see happen, and which I'm willing to support.

Photograph: Phil Walter/Getty Images

Thank you for the great work you're already doing

I've observed that it's generally much easier in public policy to lob criticisms than to propose something new and constructive, let alone execute that proposal. I accept that publishing this blog series could lead to some criticism coming back at me – and that's fair enough.

I want to be clear that a lot of incredible work has been done by an enormous variety of diligent, well-informed and constructive people and organisations, who all face the same four problems identified in this series. The breadth of material in the NZ AI Policy Tracker and the thoughtful comments accompanying a recent open letter on AI regulation show the diversity and strength New Zealand can bring to bear on this topic. In this series I've tried to be direct and provocative, but that's because I think we are wasting time, and wasting time has consequences – whether through missed opportunities to realise benefits, or through preventable situations of real-world harm.

A quick recap

In a nutshell, what's the problem with AI regulation and Sovereign AI in New Zealand? We urgently need a whole-of-society coordinating vision for how AI and other automated systems should be designed, developed, deployed and governed at all levels.
To meaningfully design or act on that vision requires re-using information, removing barriers to coordination, and addressing the perverse financial incentives faced by every sector group, which can exacerbate these issues. We can address these problems, but until we do, I fear we'll just be wasting time.

In a slightly larger nutshell:

There's an information problem. Artificial intelligence is already regulated in New Zealand, but it's regulated through a patchwork of different materials, documents and websites. This means it's hard to work out how to comply with that regulation and where the gaps might be. We need participation and contribution from groups across society (government, industry, community, academia), but all of these groups see the world differently – that's the point of collaboration. If we want AI to be regulated, there's a series of questions we'll need to answer, and answering those questions is harder than it needs to be because information is all over the place.

There's a coordination problem. No one can take on this topic alone, but we're missing opportunities to re-use information and share useful insights. There are incentives for different groups to be selective about the information they share or withhold, which amplifies the information problem. Because of the information problem, there's a risk we come at the discussion from incorrect or incompatible starting points. Collaboration could produce a structured agenda for investigation, allowing us to systematically ask and answer the questions we need to address to produce an effective regulatory system that fosters responsible adoption of artificial intelligence.

There's an economic problem. Different organisations, individuals and sector groups face different economic realities, and stand to gain different economic benefits from their participation in public discussion.
The economic problem exacerbates the coordination problem and the information problem. Beyond money, a key part of solving this problem relates to institutional design: how can people contributing money be satisfied it's well spent? Getting money from people who have it involves answering a lot of questions, but answering those questions is difficult because of the information, coordination and economic problems, creating a "chicken-and-egg" dynamic.

There's a policy problem. What's the guiding vision that different actors can coordinate around? In recent times, I've come to believe this vision can and should be "Sovereign AI" – and that doesn't necessarily mean training new foundation models. Instead, it relates to a series of different approaches that together enhance the agency, empowerment and autonomy of individuals, businesses, groups, communities and the nation as a whole when it comes to the use and governance of AI and other digital technologies. That might include new models or fine-tuned models, but it definitely includes things like accessible digital infrastructure and meaningful AI literacy. It also doesn't mean we have to raise the drawbridge and manufacture GPUs in New Zealand.

So what should we do about it?

Here are some things I'm already doing and some other things I would like to see done. To be clear, I want to see this work done by a wide network of people, which does not necessarily have to include me. In saying that, I'm happy to play a part.

What I'm doing or would like to see done

Collating AI regulation into one place

With others at Brainbox, I've published an "AI Policy Tracker" for New Zealand. It's a big list of most of the things you'd have to read to understand what already exists in this area. It points people all over the Internet to a mixture of legislation, PDFs and websites, some of which have already disappeared (404). I'd like to make sure the tracker is complete and keep it up to date.
No single public sector agency is going to do this (because of all of the above), and no commercial organisation is going to do it unless it drives value, builds customers, and avoids legal and reputational risk. Academia might be a good home for it, but I haven't received any requests to support or host it yet. Until then, I'm doing what I can to keep it up to date. A policy tracker would address the information problem and the coordination problem. It would mitigate some of the economic problem and enable meaningful action on the policy problem.

A machine-readable repository of AI policy and regulation

My professional career has been dominated by the frustrations of working with regulatory documents published by the public service, by private entities and by academics. This experience is partly what led me to co-found a software company that turns that information into machine-readable structured data. I'm sick of opening 50 browser tabs and downloading 50 PDFs when I want to work with regulation. The AI Policy Tracker is a big list of things that I want to convert into structured, re-usable data to make freely available to others. I've already made a start; I just need the time to spend on converting those documents with our systems, or to pay someone else to do it.

What it would produce is a single downloadable dataset that anyone can use to work with AI regulation in New Zealand, and that can be versioned and updated over time. If you're so inclined, you can even feed it to an AI system and ask it questions. If you think it's making things up, you can get bullet-point-level citations for the system's responses so you can go and check the answers yourself. This repository could also be packaged up in a model context protocol (MCP) server for easy access and use by AI systems. A machine-readable, referenceable repository would address the economic problem and the information problem by making it easier to find and analyse relevant information.
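As a rough sketch of what one record in such a machine-readable dataset might look like – the schema, field names and example values below are entirely my own illustration, not an actual Brainbox or AI Policy Tracker format:

```python
from dataclasses import dataclass, field

@dataclass
class Provision:
    """One citable unit (e.g. a bullet point or clause) within a document."""
    anchor: str  # stable citation anchor, e.g. "para-12" (hypothetical convention)
    text: str

@dataclass
class PolicyDocument:
    """A single entry in a hypothetical machine-readable AI policy dataset."""
    title: str
    issuer: str       # agency or organisation that published the document
    source_url: str
    version: str      # dataset release the record was captured in
    status: str       # e.g. "in force", "guidance", "withdrawn"
    provisions: list[Provision] = field(default_factory=list)

    def cite(self, anchor: str) -> str:
        """Return a human-checkable, bullet-point-level citation."""
        for p in self.provisions:
            if p.anchor == anchor:
                return f"{self.title}, {anchor}: {p.text}"
        raise KeyError(anchor)

# Invented example record, for illustration only
doc = PolicyDocument(
    title="Example AI Guidance",
    issuer="Example Agency",
    source_url="https://example.govt.nz/ai-guidance",
    version="2025-01",
    status="guidance",
    provisions=[Provision("para-1", "Agencies should assess AI systems for bias.")],
)
print(doc.cite("para-1"))
```

The point of structuring records this way is that the same data serves both uses described above: a versioned dataset people can download and diff over time, and stable anchors an AI system (or MCP server) can return as checkable citations.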
It would also substantially improve the policy problem and the coordination problem by bringing everyone to a common starting point and giving them the tools to engage effectively.

Articulating a vision for "Sovereign AI" to foster collaboration

What does "Sovereign AI" mean? What are the different bits and pieces of a sovereign approach? What can we learn from others? How can we break the space down into some meaningful actions? I've tried to lead this with a discussion paper, with public presentations (and podcasts) and through supporting a Sovereign AI community of interest. I think there are meaningful projects that can be initiated right now on AI literacy, fine-tuning open-weight systems, and assessing the state of our national digital infrastructure. This would address the policy problem and the coordination problem. It could also play a role in mitigating the economic problem to the extent that it produces a more constructive public discussion ("ban it!" <--> "adopt it for everything!") that incentivises wider participation.

A not-for-profit tech policy organisation

I've been blunt about the incentives different groups have in collaborating on AI and technology policy, and the economics of how that works (or doesn't). I'm not immune to those incentives, and I've thought about them a lot. I've been clear that one solution to the economic problem is money, but beyond that we need institutional structures that give people confidence that the money they contribute will be well spent. I think we need an institutional infrastructure to foster trust and collaboration between groups working on tech policy. It needs a global perspective to bring to domestic work, driven by an empowerment approach to people, businesses, government agencies and communities and their use of and relationship with technology. That institution may exist already, but I'm not sure it does. I've committed publicly to converting the Brainbox Institute into such an institution.
I have a trust deed ready to sign to establish this structure – what's holding me back is the time, support and resourcing necessary to activate this plan with conviction. Perhaps you can help?

Other initiatives

I've been thinking about this topic and talking about it with others. There are some other initiatives that I'd like to see pursued related to AI literacy and various scoping studies for Sovereign AI initiatives. I'm going to continue progressing this work and "thinking in the open" – at the moment, my thoughts are recorded here. Get in touch if you'd like the password for access.

Let's talk about this

Thank you for the time you've taken to read this series. If you've gotten to the end of this series and feel your work hasn't been recognised, then I apologise (see also problems 1-3 earlier in this series). I have also deliberately tried to avoid naming people, agencies and organisations, to avoid straying into unintended criticism or endorsement along the way. If you feel like writing a public comment in response, consider reaching out to me and talking about it first. I look forward to hearing what you think about all this. You can reach me via the Brainbox website or through LinkedIn.

  • The Policy Problem (Wasting Time on AI, 5 of 6)

    In this post, I address the fourth of four problems outlined in this series. I call this the Policy Problem, which relates to the difficulties of setting a clear guiding policy direction for AI in New Zealand that other groups can coordinate around on a structured, well-informed basis. You can read about the other problems here: the Information Problem, the Coordination Problem, and the Economic Problem.

What do we want from AI and AI regulation? Photo by Alix Lee.

What is "policy"?

"Policy" is a boring word to many people. What does it really mean? Business people, government officials and others probably all use it in a slightly different way. To me, policy is simply about setting a direction, or declaring what we want. If we take that idea slightly further, policy is about saying what someone somewhere should do in specific circumstances, and perhaps defining the consequences if they don't do it. Organisations have policies and governments pursue Policy (sometimes called "big P policy"). The "little p policies" often give effect to a "big P Policy" that operates at a higher level and is set by political leaders, boards, or executives. If you think back to my first post, many of the questions you have to answer if you want to see legislation drafted are essentially questions about policy (leaving aside any question of spelling and capitalisation for now).

"What does good look like" when it comes to AI?

When it comes to AI regulation, it has always been a bit difficult to articulate two key areas of policy. What do we mean by AI? How do we cast the net in such a way that we include cutting-edge large language models or predictive analytics systems for automated decision-making, but exclude things like email filters, thermostats and spreadsheet formulae? (Let's ignore for now the complexities of defining what constitutes an automated decision-making system.)
And what is "good" when it comes to AI, what is "bad", what is "very bad", and how do we distinguish between these categories? There are often some types of AI system, ways they are trained or ways they are deployed that people can agree shouldn't happen. However, even with things like facial recognition, lethal autonomous weapons systems, or deepfakes, there are usually exceptions where people will agree they might be permissible. It's easy to say that systems should be fair and unbiased, but what does "biased" mean, and can a biased system be deployed in a situation where its bias can be accounted for and controlled? This is the reality for the facial recognition systems deployed at the New Zealand border in our e-Gates.

For a long time, this has meant that we default to sets of principles or values as a guide to what we expect from AI systems and the people who deploy them. The OECD principles situated at the centre of the New Zealand AI Strategy are the classic example of this. We have also seen roughly 200 or more statements of "AI principles" internationally since about 2017. Another useful framework for assessing AI systems and navigating the trade-offs in the ways they're deployed lies with human rights instruments – for example, rights to freedom of expression, privacy, and bodily integrity are well established, and we have well-established ways of dealing with situations where those rights are in conflict.

The trouble with these ways of talking about what we want from AI systems is that they don't easily translate into clear rules. They really just give us a starting point for saying what matters. Principles like this also need to be applied in a huge number of situations, reflecting the diversity of AI systems, the people and organisations who interact with them, and the different ways of managing and governing them.
The reality is that the same set of principles or rights can, depending on who is applying them and in what situation, lead to radically different interpretations of what is acceptable.

Why does this matter?

This matters because it makes it difficult to articulate a national or community direction for what we want from AI and the way it is deployed. If we can't articulate a shared vision, it's hard to identify points of coordination, or to set a structured agenda for investigation and discussion. It's also difficult to identify what information might be relevant to the discussion. This is what I refer to as the Policy Problem. If we can't say what we want from AI or AI regulation, then we don't have something to coordinate around, we can't work out what information is relevant, and we don't have a way of making sensible and predictable value judgements. The OECD principles in particular are easy to agree to because they are so wide open to interpretation – countries that endorse the OECD principles can take wildly different approaches.

What could be a national policy vision for AI in New Zealand?

Last year I heard someone use the term "Sovereign AI". When I first heard it, I was sceptical. In particular, I took it to mean the idea that countries (and governments in particular) should be building their own AI models. In my mind, and in the minds of many others, this would mean the New Zealand government starting a process that ends with some kind of ChatGPT-style system developed by government, with some set of characteristics or flavours that make it uniquely "New Zealand" in some way. For what it's worth, I still think this is an extraordinary idea with a huge number of hurdles to overcome. But I also sat and asked myself what a viable and realistic vision of "Sovereign AI for New Zealand" might look like. Perhaps there is a case to be made for a brand new foundation model, and I'm interested to explore what that would require.
By contrast, there are other ways to think about sovereignty and about AI that could achieve the same things "Sovereign AI" advocates are looking for, and which are much easier and cheaper to act upon.

How am I thinking about Sovereign AI (or AI sovereignty)?

Different people mean different things when they talk about Sovereign AI. I also know that some people will want to explore whether Sovereign AI is the same as AI Sovereignty. I'm not that interested in those discussions. When I talk about AI sovereignty, or a desire for Sovereign AI, the key points for me are as follows.

Not just nations: I don't think AI Sovereignty, or the pursuit of Sovereign AI, has to be all about working at the nation-state level. It can also be about actions by communities, multi-sector groups, or even individuals.

Not just governments: It also doesn't have to be exclusively about activity by the Government and the public service. Any Sovereign AI model would have to include a range of different sectors, and in fact many of the "Sovereign AI" models being created around the world are created by companies or public/private partnerships.

Not just about foundation models: Sovereign AI also doesn't have to be focused solely on brand new foundation models trained on New Zealand data in New Zealand, hosted and run on New Zealand computers.

Not "all-or-nothing": I think we can increase AI sovereignty, or enable access to "Sovereign AI", without having to be absolute purists – for example, we don't have to get into local GPU chip production, and a Sovereign AI model (or approach) wouldn't be irreparably polluted if it included some data from outside New Zealand.

I unpack all of these things in a lot more detail in a separate discussion paper.
The paper addresses things like the fact that there are more types of AI than just large language models, that AI is only one part of a wider digital sovereignty picture, and that sovereignty is a complicated concept in a small, interdependent trading nation founded on te Tiriti o Waitangi.

What are some practical ways of enhancing AI sovereignty?

If we open up the discussion about Sovereign AI in the way I've described above, then we can think about quite different approaches to what we want from AI in New Zealand that are much more achievable. A practical approach to Sovereign AI for New Zealand could emphasise three key categories of work.

AI literacy. Empowering people, organisations, businesses and communities to have greater agency over AI systems. That includes AI literacy and measures to enhance equity and equality of access. You can use whatever systems you want, really, as long as you're equipped to make informed decisions. This inevitably flows through to greater digital literacy and skills in privacy, cybersecurity and data protection.

Digital infrastructure. This includes measures to promote access to computer equipment and software that let people use AI in the way that meets their requirements, as informed by the knowledge, power and skill they've developed from the above. If we do this, we can have competent people choosing their systems and how they use them, including where they use them and under what conditions.

Fine-tuning. Before we buy $150 million of computer equipment and throw open the vaults of our shared digital heritage for open extraction, let's check how far customising existing models can get us. Has anyone tried this yet? What are the limits of this approach? What do we need to test it properly? I work in this area and I don't know (see problems 1-3 above). Let's find out what fine-tuning can achieve and make sure we're sharing our findings.
If we adopt this way of thinking about the change we want to see in the world, then many of the tricky decisions about who must do what, when, under what circumstances, and in what order of priority will probably emerge quite naturally. Then, if we really want to, or the case is very strong, someone somewhere might like to train a new foundation model.

What has Sovereign AI got to do with AI regulation?

To pursue Sovereign AI is to adopt a regulatory approach. It sets a direction which signals to everyone I've mentioned in this series so far that things which align with this direction will be encouraged, and things which don't will be discouraged. This helps people understand what new regulation might be required, which changes to regulation will be prioritised, which initiatives could or should be proposed or funded, and how the public service will interpret and apply existing regulation. By emphasising AI sovereignty, we could also consider lifting a national vision for AI beyond electoral cycles and party politics. If we can agree on a vision for Sovereign AI (the Policy Problem), then we can begin to collate and manage the information which is relevant (the Information Problem), set a structured agenda for coordination, agreement and disagreement (the Coordination Problem), and fund initiatives with confidence that they will be a useful and productive component of the wider whole (the Economic Problem).

What next?

In my next and final post, I'll outline what I've done already to try to make a difference to these four problems. I'll also suggest how this work could be taken further, bearing in mind that the purpose of this work is to call for action well beyond anything I can (or want to) initiate or manage alone.

  • The Economic Problem (Wasting Time on AI, 4 of 6)

    This piece is part of a series where I outline four problems with the way we're approaching AI regulation and the concept of Sovereign AI. In this piece, I touch on the economic factors driving inefficiencies in policy discussions, which also drive, and are driven by, the first two problems: the Information Problem and the Coordination Problem.

Introduction

In this piece, it might get awkward. I'm going to touch on some of the economic and financial drivers that, from my personal experience and in my opinion, amplify the problems related to information and coordination covered in previous posts. What follows is intended as a generalised, descriptive, empathetic and explanatory exercise, without any value judgement.

Effective regulation of AI, and our broader national direction on AI policy, requires a well-informed, multi-sector approach. This is undermined by the Information Problem and the Coordination Problem. In my view, there is another problem contributing to those problems, which relates primarily to financial considerations, or what I refer to loosely as an "economic" problem. The Economic Problem is that everyone needs to generate money to pay their bills and take care of their families and communities. In itself, that is not a problem, but it becomes one where different groups derive different economic benefits from their participation in AI policy discussions, because it affects the pace of their work, the things they prioritise, and their ability to participate effectively in the discussion.

A chicken and egg problem

Addressing this economic problem requires raising money and having the institutional arrangements to inspire confidence among the people and organisations contributing those resources. But answering the necessary questions from those people or organisations requires information, coordination, and resourcing, creating a chicken-and-egg problem.

Photo by Roman Odintsov.
The economic context for different sectors

Here is my blunt assessment of the economic realities facing different sector groups and how these influence their participation.

Government and public sector

Government as a broader institution is funded to do valuable but uneconomic things that may not produce an immediate financial return. It funds these things through compulsory taxation on other participants. This has two implications for pacing and coordination. Most people participating from government receive a salary – a dependable paycheck every week, independent of the time it takes to perform the work. They're also responsible for addressing important processes and requirements that other participants can't see, don't understand, and do not have to face. This takes time and energy. It also frequently means just having to wait while someone else reviews your work or decisions. Aside from financial considerations, public media exposure and external critical comment are often the other strongest drivers of behaviour, because of the incentives held by Ministers and elected politicians (described below). This means that Government's incentives are generally to move slowly, unless some other non-financial factor is imposed to accelerate activity. People working within government find this frustrating too. I have yet to meet anyone in government who deliberately wants to move slowly just for the sake of it.

There are some other important points to note. The money that funds government activity is not endless, and people in government are accountable for how they spend it. It would also be a mistake to ignore the recent and historic rounds of redundancies facing otherwise fantastic public servants. People are also committed to their work as a matter of professional pride and public service, and this is a very real thing. Given all this hedging, why raise this point?
It's relevant because the financial pressures government participants face are fundamentally different from those faced by the other groups. This has a corresponding impact on the information-sharing and coordination problems. Government participants have the resources to create and publish information for regulatory purposes, but may be reluctant to share it. The financial situation of government participants influences the pressure (or lack of pressure) to coordinate with other parties, as well as the pace of their information production and coordination efforts.

Industry

Businesses fall on a spectrum. On one end are exceptionally well-funded businesses. These businesses often earn money from the technology being regulated (AI), and/or from advising those companies (or the agencies who regulate them) on how to comply with regulation (AI assurance and regulation). On the other end are businesses that are barely surviving. Some of them want to become companies that work with and sell the technology, or want to establish a market position in advising those businesses (or the associated government agencies).

All businesses, wherever they fall on the spectrum, need to consider how the time and money they're applying can be justified from a financial perspective. The bigger the business, the more money can be allocated to this task, whether in the name of business development or market competition. Even for exceptionally well-funded businesses, the money (and the staff time with it) doesn't come from nowhere. Revenue has to cover expenses. This sounds abstract to people who haven't faced the situation, but it's a very real issue for anyone who's borne the responsibility of paying staff and themselves each week. Again, the point here is to note that the financial realities facing businesses are different from those facing other participants, which drives the problems above.
Large and well-funded industry participants operate more like governments. Small, poorly funded industry participants have an incentive to work rapidly and establish a market position, whether through selective coordination, public activity, or competition.

Communities and real people

Communities are often the most affected by AI, but are completely under-resourced. Communities show up to policy and regulatory discussions facing the kinds of questions I raised under the Information Problem, with none of the time, information or support required to meaningfully address them. The onus of justifying change is often put on them.

Everybody else in the policy discussion is speaking a completely different, arcane language. Discussion seems to revolve exclusively around barriers to action, writing new documents, reading existing documents, or proposing non-specific things that can never realistically be implemented. Many of the real barriers to action are never actually spoken out loud – for example, “I’m not going to do that because it wouldn’t fly with the Minister”, or “Could you imagine how this is going to look on the front page?”, or “That doesn’t fit with our market strategy.”

Communities find all of this incredibly frustrating. They are not monolithic, and can sometimes have better knowledge and skills than paid participants from other sectors. But their contributions are seldom compensated, even though the skills and experience they bring are often essential.

There is another growing group of people in New Zealand with a powerful interest in AI policy, significant personal resources, and deep domain expertise. I suggest these people (or groups) could play a significant role in addressing the Economic, Information and Coordination problems.

Academia and researchers

The economics of academia are perhaps the most difficult of all. Anyone who has had to chase research funding, or juggle teaching, research and service, knows what I'm talking about.
Academics are expected to have done the reading, or even to have written most of it, but they face most if not all of the same information and coordination barriers outlined above.

Multi-stakeholder bodies or professional associations

Organisations like professional associations or industry membership bodies represent multiple entities, or multiple types of entities. They play an essential and positive role in all of the above. They’re also deeply acquainted with the Information and Coordination problems, and bring vital experience navigating those issues.

Economically speaking, these organisations are constantly being asked to justify why they should be funded, while also navigating the explicit or implicit objectives, preferences and incentives of their members. Many of the people who are the real engine-room of these organisations fall into one of the categories above in terms of their immediate economic situation. These groups and associations are assumed to bring the resources of well-funded companies or governments, but frequently face conditions similar to the other participants. They face conflicting pressures to create, disclose and withhold information, and to initiate, avoid, or maintain coordination, depending on the circumstances at hand.

Elected politicians

Ministers and elected politicians play a significant role in determining the economics and overall direction of all of the above. They’re oriented towards doing things and solving problems, but not at all costs. More than perhaps any other participant, elected politicians bear the responsibility of taking a birds-eye view, weighing a number of equally important and urgent priorities against each other.
Key motivating factors that influence the Information and Coordination problems include media risk, political relationships, and election cycles (which draw in things related to numbers of votes and political fundraising). These can hamper information production and drive behaviours that undermine coordination. They can also be powerful contributors to resolving the Information and Coordination problems entirely. Ministers and elected politicians are also driven by the public interest, as well as a genuine commitment to serve their communities, empower industry, and support the public service to produce its best work. All I’m saying is that the financial reality facing Ministers and elected politicians is often out of step with other participants, and this has an impact.

Why raise this issue?

The reason I’ve painted this picture is that the economic considerations driving participation in public policy discussions about AI are very influential. These economic factors drive a mismatch in pace and priorities between different stakeholder groups, which has a compounding impact on the problems of information-sharing and coordination. They also have a downstream impact on the policy orientation we advocate for and operate around – more on that in the next post.

“Okay, well who should solve this problem?”

There are two complementary ways to solve this problem, which I see as two sides of the same coin. The first is money; but a necessary foundation for that money is institutional arrangements that justify the trust and confidence of the people or organisations contributing it.

Money

To be blunt, someone or some institution with money who wants to see effective activity on AI regulation should solve this problem. Another option is for multiple people or entities to set aside a contribution to a coordinated effort to solve the Economic Problem, the Information Problem and the Coordination Problem – and by extension, the Policy Problem.
In light of the comments I’ve made about the different sectors above, here are the key factors that complicate the task of simply putting up money to address these problems.

- Government is expected to have internal competency that does not require external procurement. Government agencies are accountable for public funds, but often find selecting and designing initiatives to be performed beyond the public service a difficult exercise. Government also faces significant procurement impediments to the rapid deployment of public funds.
- Industry cannot spend money unless it drives a return of some kind, or serves a broader public interest objective which is in some way related to its commercial activity.
- Communities and real people do not have the resources to contribute, but can contribute time. Some members of the community do have significant economic resources, and could contribute those resources toward effective AI policy in the right circumstances, without being accountable to anyone else.
- Academia and researchers can secure research funding, but the fundraising cycle is slow, and a significant volume is taken up by administrative overheads. Funding is generally secured for named research personnel to perform specific research tasks.
- Multi-stakeholder bodies can recruit economic resources from members or other sources, and distribute them with a flexibility unavailable to other groups. But often they are not empowered to distribute those resources beyond their own control, and resource distributions have to serve the members or contributors to the relevant body.
- Elected politicians can enable the provision of funds, or seek to raise funds independently of government, but they cannot realistically distribute those funds themselves. Their role as elected politicians also risks politicising how the work is perceived, undermining trust and perhaps worsening the Coordination Problem.
Institutional design

Anyone being asked to contribute economic resources will ask the following questions, whether to themselves or to the applicant – and not without justification. The task of answering them takes time, and they are difficult to answer because of the Information, Coordination and Economic problems. The questions are:

- What are you going to do, specifically?
- Why would that help? Are you sure it would help? Wouldn’t an alternative set of actions help more?
- Who’s going to pay for it? Why should it be me? Why not those other people? Why hasn’t someone else already paid for this?
- Do you really need all this money, or just some of it?
- What’s in it for me? How does it serve my objectives, or the objectives of people I'm accountable to?
- What are the downsides or risks of supporting this? How far am I likely to be associated with or affected by what you eventually do?
- Why you, or these applicants? Isn’t someone else already doing this?
- Who’s accountable if you don’t deliver?

Again, these questions are justified, and should be accounted for in the institutional design necessary to build confidence in the deployment of economic resources. But people trying to answer them face a kind of chicken-and-egg problem: they may answer them all without ever receiving any funding. In those circumstances, it's frankly irrational to even start.

Conclusion on the economic problem

Most people reading this will self-identify as an exception to the generalisations I’ve made about each sector group above. For what it’s worth, they’re probably correct. By necessity, I’ve painted a broad-brush generalisation of the different sectors and groups, which some may find offensive. It's important to emphasise again that there's nothing inherently wrong with being driven by economic considerations, or with participating in public policy out of some level of self-interest.
Being driven by those factors does not mean you are not also driven by factors like the public interest or a sense of community service. But those things cannot pay for food and electricity. By being blunt about the economic dimension of this problem, I’m hoping to bring it out into the open as something that could be meaningfully addressed – through economic support, through institutional design, or through networks and relationships. Without solving this problem, it’s difficult to imagine progress on the other problems, which unfortunately means we are wasting time.


Brainbox Institute is a non-partisan organisation that supports constructive policy, governance, and regulation of digital technologies.


© 2023 Copyright Brainbox Ltd. All Rights Reserved. Privacy Policy.
