Last month, the Department of Internal Affairs (DIA) released a proposal for revising media regulation in New Zealand, called Safer Online Services and Media Platforms (SOSMP). The proposal seeks to regulate large online technology companies (like Meta and Google) under the same umbrella as traditional media outlets (like TVNZ and Stuff), and it recommends introducing industry-specific codes of practice for platforms whose primary purpose is to distribute content. The system would be overseen by a refreshed independent regulator.
Platform regulation is a complex topic that we have worked on for some time, so we wanted to share some of our thoughts on the proposal as it stands. Our views are shaped by a strong commitment to human rights as a foundation for public policy, including freedom of expression. We’re also strong advocates for government transparency as a fundamental and necessary method of justifying public trust (especially among concerned or sceptical groups).
A core theme of Brainbox’s work has been to call out the potential risks that arise when governments gain increased control over digital infrastructures, including platform systems. If you’re interested in finding out more about how our past work relates to these values and informs our perspective on SOSMP, we’ve collated a summary of various projects at the bottom of this post.
Importantly, the SOSMP document says – and DIA officials have emphasised to us – that the details of the proposal are very much open to discussion. Indeed, a large number of crucial details have been left open-ended in the proposal. You can read more about the DIA’s proposed changes – and how you can provide feedback – here.
Ahead of the 31 July submission date, we wanted to share some of our initial thoughts. You might find these useful if you want to make a submission. We’d also welcome the opportunity to test our thinking with others, including tech and media companies, journalists, and other experts. We've also expanded on our thoughts in a more detailed discussion paper below.
While the work isn’t directly related, we also think it’s only fair to disclose:
Brainbox is currently engaged as the project lead for a global multi-stakeholder network on tech company transparency, which includes working alongside the world’s largest tech companies, leading academics, and civil society organisations.
Brainbox's previous and current contracts include providing independent advice to the Department of the Prime Minister and Cabinet on work programmes related to building resilience to disinformation. Brainbox is not authorised to make public statements about this work, and nothing we say should be attributed to the Department of the Prime Minister and Cabinet.
This work on the SOSMP and our work on the Aotearoa New Zealand Code of Practice on Online Safety and Harms (ANZCPOSH) have been funded by the Borrin Foundation and InternetNZ, and we are grateful for that support.
What stands out to us about the SOSMP proposal?
Drawing on our previous work and research, here are some things that stand out to us from the discussion document. We’re sharing these publicly to help you reach your own conclusions. We encourage you to draw on these points if you make a submission – whether you agree or disagree.
(1) The SOSMP is an opportunity to influence how governments and platforms draw the line on freedom of speech.
When it comes to free speech, the question is where and how we ‘draw the line’, not whether ‘a line’ should exist at all. While the right to freedom of expression is extremely important, it’s not absolute, and it can be restricted in certain circumstances. However, these limitations must serve legitimate human rights objectives, be reasonable, necessary, and proportionate, and be imposed using clear legal rules that can be challenged or appealed. It’s worth pointing out that all widely used platforms can and do already impose some restrictions on what content can be distributed, and the SOSMP is an opportunity for enhanced transparency over how platforms do this. It’s also an opportunity to shed light on the role (if any) of government agencies in influencing platform content moderation.
These are important points for submitters to consider, because a completely “hands-off” approach to freedom of expression is unrealistic. Truly promoting freedom of expression requires some positive vision of what New Zealand’s information environment should look like. We recommend being cautious of anyone who offers only criticism or suspicion, but no reasonable solutions.
(2) The proposal should create public power and oversight over how limitations are set and enforced – by companies and by governments.
Platform regulation isn’t just about where limits should be set. It’s also about creating legal structures that give the public and the courts some power and oversight over how those limits are set and enforced.
As such, the SOSMP should include more detail about the checks and balances against risk of abuse. What information should be provided about how regulators, government agencies, and platforms are operating? If the government and platforms are communicating about content (including through “trusted flagger” programmes), what should they be required to disclose? What would create confidence that the public could use the courts to protect freedom of expression if necessary, including against the regulator?
(3) We think an independent regulator is a good choice.
We support an independent regulator and think it’s important to limit the risk that Parliament and MPs improperly influence the way codes of practice are developed and enforced. An independent regulator can be more flexible, less preoccupied with politics, and can be checked by the courts in ways that Parliament can’t be. For example, unlike legislation, a code of practice could be “struck down” by New Zealand courts if it goes too far. The SOSMP also refers to a power to create “policy statements” to influence codes of practice – we’d like more detail about who sets those policy statements and what limits apply to them.
(4) User empowerment is a good goal, but we are cautious about the ‘child protection’ and ‘harm and safety’ framing.
We admire the SOSMP’s focus on user empowerment, but we recommend a balanced and cautious approach to framing regulation around child safety. Child safety is important, but children have rights too, and there are clear moves globally to drastically curtail civil and digital rights in the name of child safety. Also, while some content can be clearly harmful, it’s important to acknowledge that the relationship between online content and real-world harm is sometimes complex and indirect. The research on this is still emerging. A specialist regulator could engage with these issues in a nuanced and expert way, including by commissioning research and education initiatives.
(5) Lumping tech companies and news organisations together creates problems.
There are important differences between news media and social media, and the proposal currently fails to account for these differences. Some powers proposed for the regulator make sense for regulating tech companies, but would be unacceptable if applied to news companies. In addition, the primary difference between traditional and social media companies is the way they deal with user-generated content. Harmful user-generated content is already regulated under the Harmful Digital Communications Act (HDCA), and it’s not clear why the HDCA has been excluded from this proposal. Separating the SOSMP and the HDCA without good reason will produce confusing systems and processes. We also think the definition of a platform needs to be clear in advance, and shouldn’t rely on a lot of interpretation or discretion.
(6) We need more clarity on illegal content.
While the SOSMP does not, in theory, make new kinds of content illegal, in practice the law would empower the regulator to take a greater role in requiring platforms to intervene against particular types of content. The SOSMP does create legal obligations to deal with expanded categories of content. This is not inherently negative, but any steps to expand the boundaries of content subject to regulatory oversight should be dealt with directly and transparently, with careful design to ensure that implementation is subject to legal oversight mechanisms.
(7) We need to be pragmatic about New Zealand’s place in the world.
The SOSMP makes ambitious statements about the impact it will have on the conduct of global tech platforms and their products and services. We think some pragmatism is important when setting expectations for the SOSMP – not least because it’s hard to assess the risks of the proposal without a realistic sense of its potential benefits. Fundamentally, New Zealand’s relatively small user base means our leverage over platforms is very limited. By contrast, our ability to lead through positive incentives, advocacy, and adherence to human rights principles is very high.
The SOSMP sensibly anticipates engaging with emerging international standards-setting processes and creates space for codes of practice to align with global efforts. However, it is participation in these networks, rather than the SOSMP itself, that is likely to have the most impact on platform conduct.
Conclusion
People’s ability to express themselves and to hold powerful entities to account is one of the most important features of a democracy. We can understand why many people will find this proposal unsettling. But this is the start of an ongoing discussion, and we believe well-informed, diverse participation is important. DIA should be congratulated for the work it has done to foster these discussions, and the proposal should also be tested rigorously. We’ll be doing what we can to contribute toward effective, human rights-based public policy that gives people trust and confidence that these significant powers are being designed and used properly.
Selection of previous work
Legal responses to Deepfakes and Synthetic Media (May 2019)
In May 2019, we published a public report examining the legal implications of deepfakes and synthetic media. We concluded that a range of existing legal frameworks already apply to the use of synthetic media for harmful purposes, and advised against broad legislative reform for two reasons. First, “deepfakes” and synthetic media are difficult to define, and harms related to false information, impersonation, or privacy were already regulated. Second, synthetic media technologies are fundamentally expressive communications technologies, and any intervention would create serious risks to freedom of expression, particularly in relation to political speech. We identified one area of urgent reform: the use of deepfakes to synthesise non-consensual sexual imagery. The New Zealand Parliament has declined to act on this recommendation. You can find the executive summary of the report here.
Implementing law as computer code (March 2021)
In March 2021, we published a report looking at the way governments and others were proposing to draft and implement legislation as machine-executable computer code. We identified a significant opportunity to make sure that digital systems better comply with rule of law values, but we also advised strongly against the suggestion that law or legislation should be implemented as computer code. The primary reason is that doing so would undermine fundamental constitutional rights to challenge the way Executive government actors decide to interpret and implement the law, and would risk excluding the Judiciary from its fundamental constitutional role in arbitrating disputes about how the law should be interpreted. You can access that report here.
Statement on “The Edge of the Infodemic” report (June 2021)
In June 2021, we expressed serious reservations about the Office of Film and Literature Classification’s report, “The Edge of the Infodemic: Challenging Misinformation in Aotearoa”. In particular, we expressed doubts about the way its conclusions were expressed in light of the methodology used. We raised concerns about the report’s findings and the way it would be used to justify future policy work on mis- and disinformation, as well as the risks of expanding the OFLC’s mandate and of delegitimising news media. We also noted the absence of any reference to work on disinformation and “fake news” by the United Nations Special Rapporteur on Freedom of Expression. You can find that statement here.
Submission against proposed national internet filter (October 2021)
In October 2021 we submitted strongly against a Bill that would have implemented a nationwide internet filtering system. We raised significant concerns about the impact of this filter on freedom of expression, the ability to challenge any automated decisions made by the filter, and fundamental definitional issues around how objectionable content would be identified and restricted. We also noted the serious implications for the rule of law of using automated systems to prevent access to information. We expressed the view that there was no need to modify the Films, Videos, and Publications Classification Act 1993 in order to capture live-streamed objectionable material, and expressed concerns about the potential implications of expanding these definitions. You can access that submission here.
Platform responses to terrorist and violent extremist content incidents, and legal frameworks for content moderation (August 2021)
In October 2021, we released a report looking at platform responses to the Christchurch terrorist attack and the attacker’s use of livestream technologies. We also offered an opinion on “what good regulation looks like” when it comes to content moderation, and outlined recommendations by academic commentators calling for greater transparency from platforms and from the Global Internet Forum to Counter Terrorism around the way its hash-sharing database operates. We concluded there were significant human rights concerns with any content regulatory frameworks that seek to define new categories of illegal content, or intervene in content that is “awful but lawful”, and that the best way forward for human rights compliance was to implement careful transparency-oriented legislative approaches that would build an evidence base around how content is being moderated. We strongly advocated for a human rights approach based on principles of legality, proportionality, necessity, and justifiability. You can find that report here.
Human rights approaches to investigating recommender systems and terrorist and violent extremist content (November 2021)
In November 2021, we worked with the Responsible AI working group of the Global Partnership on Artificial Intelligence on a report discussing a range of issues related to freedom of expression, human rights principles, and the definitions used to classify terrorist and violent extremist content – both computationally and legally. We expressed particular concern about the risk that these definitions could be unjustifiably expanded, or applied imprecisely. The report also drew on human rights frameworks to comment on forthcoming legislative proposals on transparency. You can find that report here.
Report for DPMC on non-governmental approaches to monitoring social media for disinformation (June 2022)
In June 2022 we delivered a report to the Department of the Prime Minister and Cabinet examining the issues raised by government activity intended to monitor open source communications for ‘disinformation’. Despite broad-based public calls for monitoring of this kind, we advocated for rigorous transparency requirements and for this work to be conducted outside of government. We believed it was important to promote structures that allow public scrutiny of such activities where possible, particularly in the context of disinformation policy, with a view to justifying trust and confidence in the quality and propriety of the work. We proposed that an entity of some kind be established, based on a number of principles, including that the work should be transparent, methodologically rigorous, open to scrutiny, and high quality. You can find that report here.
Position paper on legal frameworks for disinformation (Ongoing, 2023)
This year, we published a position paper on the approach we would be taking to the question of whether “disinformation” can or should be regulated. This work flowed from our concerns that disinformation is a poorly defined concept that cannot be operationalised in legal frameworks, and that there are serious human rights consequences to any suggestion that government actors should be empowered to dictate what is true or false. We intend to publish conclusions from this work in July/August 2023. You can access the position paper here.