
Our perspective on New Zealand’s proposed platform regulation

Last month, the Department of Internal Affairs (DIA) released a proposal for revising media regulation in New Zealand, called Safer Online Services and Media Platforms (SOSMP). The proposal seeks to regulate large online technology companies (like Meta and Google) under the same umbrella as traditional media outlets (like TVNZ and Stuff), and it recommends introducing industry-specific codes of practice for platforms whose primary purpose is to distribute content. The system would be overseen by a refreshed independent regulator.


Platform regulation is a complex topic that we have worked on for some time, so we wanted to share some of our thoughts on the proposal as it stands. Our views are shaped by a strong commitment to human rights as a foundation for public policy, including freedom of expression. We’re also strong advocates for government transparency as a fundamental and necessary means of earning public trust (especially among concerned or sceptical groups).


A core theme of Brainbox’s work has been to call out the potential risks created in situations where governments gain increased control over digital infrastructures, including platform systems. If you’re interested in finding out more about how our past work relates to these values and informs our perspective on SOSMP, we’ve collated a summary of various projects at the bottom of this post.


Importantly, the SOSMP document itself says – and DIA officials have emphasised to us – that the details of the proposal are very much open to discussion. A large number of crucial details have been left open-ended in the proposal document. You can read more about the DIA’s proposed changes – and how you can provide feedback – here.


Ahead of the 31 July submission date, we wanted to share some of our initial thoughts. You might find these useful if you want to make a submission. We’d also welcome the opportunity to test our thinking with others, including tech and media companies, journalists, and other experts. We've also expanded on our thoughts in a more detailed discussion paper below.



While the work isn’t directly related, we also think it’s only fair to disclose:


  • Brainbox is currently engaged as the project lead for a global multi-stakeholder network on tech company transparency, which includes working alongside the world’s largest tech companies, leading academics, and civil society organisations.

  • Brainbox's previous and current contracts include providing independent advice to the Department of the Prime Minister and Cabinet on work programmes related to building resilience to disinformation. Brainbox is not authorised to make public statements about this work, and nothing we say should be attributed to the Department of the Prime Minister and Cabinet.

  • This work on the SOSMP and our work on the Aotearoa New Zealand Code of Practice on Online Safety and Harms (ANZCPOSH) have been funded by the Borrin Foundation and InternetNZ, and we are grateful for that support.


What stands out to us about the SOSMP proposal?


Drawing on our previous work and research, we’ve set out below some things that stand out to us from the discussion document. We’re sharing these publicly to help you reach your own conclusions. We encourage you to draw on these points if you make a submission – whether you agree or disagree.


(1) The SOSMP is an opportunity to influence how governments and platforms draw the line on freedom of speech.


When it comes to free speech, the question is where and how we ‘draw the line’, not whether ‘a line’ should exist at all. While the right to freedom of expression is extremely important, it’s not absolute, and it can be restricted in certain circumstances. However, these limitations must serve legitimate human rights objectives, and be reasonable, necessary, proportionate, and imposed using clear legal rules that can be challenged or appealed. It’s worth pointing out that all widely-used platforms can and do already impose some restrictions on what content can be distributed, and the SOSMP is an opportunity for enhanced transparency over how platforms do this. It’s also an opportunity to shed light on the role of government agencies (if any) in influencing platform content moderation.


These are important points for submitters to consider, because a completely “hands-off” approach to freedom of expression is unrealistic. Truly promoting freedom of expression requires a positive vision of what New Zealand’s information environment should look like. We recommend being cautious of anyone who offers only criticism or suspicion, with no reasonable solutions.


(2) The proposal should create public power and oversight over how limitations are set and enforced – by companies and by governments.


Platform regulation isn’t just about where limits should be set. It’s also about creating legal structures that give the public and the courts some power and oversight over how those limits are set and enforced.


As such, the SOSMP should include more detail about the checks and balances against risk of abuse. What information should be provided about how regulators, government agencies, and platforms are operating? If the government and platforms are communicating about content (including through “trusted flagger” programmes), what should they be required to disclose? What would create confidence that the public could use the courts to protect freedom of expression if necessary, including against the regulator?


(3) We think an independent regulator is a good choice.


We support an independent regulator and think it’s important to limit the risk that Parliament and MPs improperly influence the way codes of practice are developed and enforced. An independent regulator can be more flexible, less preoccupied with politics, and can be checked by the courts in ways that Parliament can’t be. For example, unlike legislation, a code of practice could be “struck down” by New Zealand courts if it goes too far. The SOSMP also refers to a power to create “policy statements” to influence codes of practice – we’d like more detail about who sets those policy statements and any limits on them.


(4) User empowerment is a good goal, but we are cautious about the ‘child protection’ and ‘harm and safety’ framing.


We admire the SOSMP’s focus on user empowerment, but we recommend balance and caution about approaches framed around child safety. Child safety is important, but children have rights too, and there are clear moves globally to drastically curtail civil and digital rights in the name of child safety. Also, while some content can be clearly harmful, it’s important to acknowledge that the relationship between online content and real-world harm is sometimes complex and indirect. The research on this is still emerging. A specialist regulator could engage with these issues in a nuanced and expert way, including by commissioning research and education initiatives.


(5) Lumping tech companies and news organisations together creates problems.


There are important differences between news media and social media, and the proposal currently fails to account for them. Some powers proposed for the regulator make sense for regulating tech companies but would be unacceptable if applied to news companies. In addition, a primary difference between traditional and social media companies is that the latter deal in user-generated content. Harmful user-generated content is already regulated under the Harmful Digital Communications Act, and it’s not clear why the HDCA has been excluded from this proposal. Separating the SOSMP and the HDCA without good reason will produce confusing systems and processes. We also think the definition of a platform needs to be clear in advance, and shouldn’t rely on a lot of interpretation or discretion.


(6) We need more clarity on illegal content.


While the SOSMP does not, in theory, make new kinds of content illegal, in practice the law would empower the regulator to take a greater role in requiring platforms to intervene against particular types of content. The SOSMP does create legal obligations to deal with expanded categories of content. This is not inherently negative, but any steps to expand the boundaries of content subject to regulatory oversight should be dealt with directly and transparently, with careful design to ensure that implementation is subject to legal oversight mechanisms.


(7) We need to be pragmatic about New Zealand’s place in the world.


The SOSMP makes ambitious statements about the impact it will have on the conduct of global tech platforms and their products and services. We think some pragmatism is important when setting expectations for the SOSMP – not least because it’s hard to assess the risks of the proposal without a realistic sense of its potential benefits. Fundamentally, New Zealand’s relatively small user base means our leverage over platforms is very limited. By contrast, our ability to lead through positive incentives, advocacy, and adherence to human rights principles is very high.


It is good that the SOSMP anticipates engaging with emerging international standards-setting processes and creates space for codes of practice to align with global efforts. However, it is participation in these networks that is likely to have the most impact on platform conduct, not the SOSMP itself.


Conclusion


People’s ability to express themselves and to hold powerful entities to account is one of the most important features of a democracy. We can understand why many people will find this proposal unsettling. But this is the start of an ongoing discussion, and we believe well-informed, diverse participation is important. DIA should be congratulated for the work it has done to foster these discussions, even as its proposal is tested rigorously. We’ll be doing what we can to contribute toward effective, human rights-based public policy that gives people trust and confidence that these significant powers are being designed and used properly.


Selection of previous work


Legal responses to Deepfakes and Synthetic Media (May 2019):

In May 2019, we published a public report examining the legal implications of deepfakes and synthetic media. We concluded that a range of existing legal frameworks already apply to the use of synthetic media for harmful purposes, and advised against broad legislative reform for two reasons. First, “deepfakes” and synthetic media are difficult to define, and harms related to false information, impersonation, or privacy were already regulated. Second, in those circumstances, synthetic media technologies are fundamentally expressive communications technologies, and any intervention would create serious risks to freedom of expression, particularly in relation to political speech. We identified one area of urgent reform: the use of deepfakes to synthesise non-consensual sexual imagery. The New Zealand Parliament has declined to act on this recommendation. You can find the executive summary of the report here.

Implementing law as computer code (March 2021)

Statement on “The Edge of the Infodemic” report (June 2021)

Submission against proposed national internet filter (October 2021)

Platform responses to terrorist and violent extremist content incidents, and legal frameworks for content moderation (August 2021)

Human rights approaches to investigating recommender systems and terrorist and violent extremist content (November 2021)

Report for DPMC on non-governmental approaches to monitoring social media for disinformation (June 2022)

Position paper on legal frameworks for disinformation (Ongoing, 2023)

