
Search results


  • Discussion paper and event recording: Deciphering Media Literacy in Aotearoa New Zealand

    Media literacy is increasingly important in our information-saturated, high-connectivity world. It’s commonly offered as a core solution to challenges posed by artificial intelligence, synthetic media, platform regulation, and disinformation. But what is it exactly?

Discussion paper released

Brainbox is proud to share our research paper “Deciphering Media Literacy: Charting the future in Aotearoa”. The paper is a primer for policymakers and other community leaders coming to grips with the subject and includes recommendations for a national stocktake and strategy of media literacy initiatives in Aotearoa New Zealand.

Key points from the paper

Strong media literacy skills do not just help people distinguish between trustworthy and untrustworthy sources of information – they also enable people to engage with and produce all forms of media more creatively, effectively, and authentically. In this respect, a commitment to supporting media literacy is a crucial component of promoting freedom of expression. We identified six key themes and made four recommendations. In summary:

- Effective media literacy interventions are designed and delivered for specific groups in local contexts
- Media literacy education must strike a balance between two approaches: protectionism and empowerment
- Good media literacy education acknowledges that people interact with media in active and complex ways
- Media literacy education should examine how media is constructed
- Effective interventions are interactive and participatory, and acknowledge that instructors do not have all the ‘right’ answers
- Sustained interventions are more effective than one-offs, but length doesn’t guarantee effectiveness

Key recommendations

We also make the following recommendations for policymakers and communities in Aotearoa New Zealand:

- Map existing efforts – Before embarking on any major new programmes, it’s important to obtain a clear picture of the scope and impact of ongoing projects. Any new efforts should use existing infrastructure effectively and ensure they’re targeted at the areas of greatest need.
- Strengthen local and international networks – Collaboration will be key to effective media literacy efforts. Some local and international networks already exist, and tapping into and strengthening them will enable productive and successful cooperation.
- Share knowledge and insights as much as possible – While every organisation and intervention is unique, lessons learned from one can and must be used to improve all others. Sharing knowledge will both improve efforts across the board and promote closer, more collaborative relationships.
- Develop a coordinated and comprehensive strategy – For the best results, a coordinated long-term strategy across government, civil society, and international partners will be necessary. We acknowledge that this is a difficult task, but it will be eased by the previous three recommendations – and we think that the benefits are worth it.

Hybrid event held Monday 18 December 2023

The discussion paper served as the foundation for a panel discussion held on 18 December in Auckland and online, featuring distinguished panelists. The panel included AUT Associate Professor Helen Sissons; Ian Thomas, President of the National Association of Media Educators; and Atakohu Middleton, a journalist, researcher, and communications consultant. The conversation was moderated by Ximena Smith from Brainbox. You can watch the full discussion below.
Follow us for more

You can follow us on social media or sign up for our website mailing list at the bottom of this page to receive similar invitations in future, or to keep updated about our work in this area.

  • The Brainbox Institute unveils new NZ AI Policy Tracker

    Introducing the NZ AI Policy Tracker – a new resource launched today by the Brainbox Institute. This tool is designed to centralise information about Aotearoa New Zealand’s disparate AI regulatory environment and to provide a convenient one-stop shop for accessing relevant materials. The tracker contains key outputs from government organisations, non-government organisations and experts, such as guidance on generative AI from various ministries, long-term insights plans, and policy documents. Researchers, academics, policy professionals and other interested parties will be able to use the tool to help assess and critique Aotearoa New Zealand’s AI regulatory and planning competency, or simply to understand what efforts are being made across the country’s various agencies. The tracker is intended to be a comprehensive and current resource, so it will be regularly updated with new outputs and will continue to be freely available. If you believe there is an output we have missed, or there is something you are working on that you would like us to add, please submit it to info@brainbox.institute. Please note that the tracker is predominantly aimed at government and non-government outputs intended to foster AI research and policy efforts in Aotearoa New Zealand. It is not intended to be a list of every organisation or agency within Aotearoa New Zealand working in the broader AI space.

  • Sharing our submission on New Zealand's proposed content regulation framework

    In late 2023, submissions closed for the Department of Internal Affairs' Safer Online Services and Media Platforms proposal. The proposal was led by the Department itself, rather than the then-Government, given its proximity to the election period. The future of the SOSMP proposal is unclear at present. While submissions will be released publicly by DIA in future, we wanted to share our submission so it can be accessed by others – including people outside New Zealand with an interest in New Zealand's approach. This work was produced with funding support from the Borrin Foundation and Internet New Zealand, for which we are very grateful, and forms part of a broader programme of work by the Brainbox Institute on human rights approaches to content regulation and disinformation.

  • Privacy Week 2023: Contemporary issues in AI and privacy

    Allyn Robins, Senior Consultant

AI is a powerful tool, and like any powerful tool it can be used for both good and ill. This blog post is the second in a loose series that seeks to deliver an accessible-yet-measured take on the ways AI is being used, with the first part available on Newsroom here. This one was adapted from a seminar I presented for Privacy Week 2023 (recording linked here), which means it’s a little different in tone and structure. In the seminar, I outlined the four big impacts AI is having on privacy.

AI development is driving the capture of more data.

Most modern forms of AI rely on enormous datasets that allow them to be ‘trained’ to do various tasks. In general, the more data you have to train your AI (provided it’s the right sort of data), the better it’ll be at whatever it is you’re training it to do. This means that data is even more valuable today than when market analysts were dubbing it “the new oil”. My co-presenter at Privacy Week, the distinguished Andrew Chen, focused almost exclusively on this issue, and it’s a rich one. In the future, I hope to write a post - or even a series of posts - delving into its many details. For now, I want to focus on an effect of this hunger for data that I haven’t seen discussed that much: because AI is being trained to do a huge variety of tasks, data is being collected about things - and in ways - that most people wouldn’t expect. To see an example of this, look no further than Roombas - those adorable automatic vacuum cleaners. Most people who own one have no idea, but unless they’ve opted out, their Roomba has assembled a floor plan of their house and sent it back to headquarters for analysis. The manufacturer’s intent may not be nefarious - it plans to use this information to make its products more effective and efficient - but that data is sensitive, and it’s not hard to imagine an authoritarian government forcing it to hand the data over. And even if that doesn’t happen, Roomba’s maker has a less-than-stellar record of protecting sensitive information. And while most of the information collected these days is ‘anonymised’, that means less and less because…

AI is reducing the effectiveness of some privacy protections.

You’ve almost certainly accepted a privacy policy at some point that reassured you any data collected would be ‘anonymised’. What this means in practice can vary wildly, but usually this anonymisation consists of redacting the pieces of information that could easily identify someone, such as name, address, and phone number. This form of anonymisation can very often be reversed, however: by carefully analysing the data that’s left, and cross-referencing it with other available information, it’s almost always possible to reverse-engineer your way back to a unique identity (a simple sketch at the end of this post illustrates the idea). As an incredibly powerful tool for collecting, analysing, and comparing data, AI makes this process even easier - and it’ll continue to make data anonymisation techniques even less effective as it grows in power. But it’s not only big datasets we need to worry about - personal privacy provisions are being undermined too. Face blurring, for example: people have blurred or pixelated faces in photographs for decades to protect the privacy of their subjects, but new AI systems are getting better and better at reversing this process. The tech has already come a long way since it “deblurred” Barack Obama into a white guy, and it’s only going to get better.
This is a privacy time bomb that’s incredibly difficult to mitigate - while in future people can avoid using the techniques that AI is undermining, there’s a lot of ‘anonymous’ information already out there that will be rendered personally identifiable in the future. Covering your face is hardly a perfect solution, because…

AI is augmenting our ability to gather information.

AI is creating new ways of collecting information, and enhancing old ways. A great example of AI enhancing old ways is facial recognition - applying facial recognition capabilities to a network of cameras takes them from a system that requires a lot of time and effort to track someone through, to a system that can track thousands of individuals in real time. The latter is clearly more of a threat to privacy. Covering your face isn’t fully effective either, because AI enables other ways of identifying and tracking individuals, such as gait recognition. AI also allows inferences about individuals to be made at a scale and with an efficacy that was previously unthinkable. You can find any number of people who’ll tell you that TikTok seemed to know they were gay before they did, for example. Now, TikTok wasn’t trying to identify queer users - it was just serving people ‘gay content’ because its algorithm had identified it as content they’d probably engage with - but if TikTok wanted to, it probably could. This also enables methods of data collection that could - in the right light - be spun as ‘enhancing privacy’. An AI model could live on your phone, analysing your chat logs and camera roll, and report back only the inferences it makes. None of your personal information need ever leave the phone - it’ll just report back to central that you might be in the market for a new washing machine, for example. Whether that’s better or worse than current data collection practices is something we’ll all have to decide collectively. And speaking of things we’re going to have to collectively decide how to deal with…

Synthetic media brings its own set of challenges.

Synthetic media, or generative AI, is very much at the peak of a hype cycle. Daily headlines trumpet its power and ‘disruptive potential’, and self-proclaimed ‘AI gurus’ are eager to expound on how (they think) it’ll change the world and everything in it. There are huge concerns arising from the hype surrounding synthetic media right now. Countless companies are eager to get into generative AI, but to train a model of your own - or to fine-tune an existing one - takes a lot of data. As a result, many are simply imitating the pioneers in the area and ‘scraping’ whatever data they can, often not worrying about trifling concerns like ‘privacy’ and ‘copyright’. The market is moving so fast - and regulators so comparatively slowly - that the prevailing incentives are to simply try to get your model trained and released as quickly as possible. And once a model has been trained and put on the internet, it’s very hard to take down. All it takes is one torrent seed to keep something on the internet indefinitely. The model-makers would likely tell you that you shouldn’t worry, because while their models may be trained on some sensitive data, training data isn’t stored in the model itself. This is technically true, but misleading - because as researchers are showing, it’s not too difficult to get generative AI models to spit out near-exact reproductions of some of their training data.
This is concerning enough for images, but for text especially, private information can be unintentionally shared with anyone who uses the model. And all the hype about what text-generation models are capable of (you can find no end of people who will tell you that you can turn ChatGPT into your lawyer, your dating coach, or your therapist, but please do not do any of these things) is leading thousands of people to confide extensively in AI chatbots. But while treating ChatGPT as your therapist could sometimes be better than nothing, it’s not going to be nearly as effective as a real human - and it provides OpenAI, the company behind ChatGPT, with incredibly intimate information about you. Now, these are all issues caused by the novelty of generative AI in the market, and in time they are likely to lessen. But synthetic media also poses privacy challenges that will apply no matter how mature the market becomes: how do we protect privacy when AI tools allow convincing pictures, audio, and even (eventually) video of real people to be produced with minimal effort? These can be used to enable ‘traditional’ invasions of privacy, things like scams and identity theft. But they can also be used for harassment, nonconsensual AI-generated pornography (one of the original use-cases for generative AI and still an appallingly common one, as well as an issue the New Zealand Parliament has declined to address multiple times), and the simple exploitation of everyday people’s voices or images. No longer do these tools require thousands of images or hundreds of hours of audio - a handful of pictures and five minutes’ audio is now enough. And in the social media age, that means that almost anyone is vulnerable. In 2019, Brainbox published a report on synthetic media entitled ‘Perception Inception’, which argues that information about a person - whether it’s true or not, whether it’s been created by humans or generated by AI - should be legally considered personal information. That question has not been legally confirmed yet, in New Zealand or anywhere else (that we’re aware of), but we’re going to need to grapple with it before we can even really start dealing with the privacy implications of this technology.

Get in touch

At Brainbox we’re aiming to make sure that citizens, governments, and companies can engage effectively with the challenges posed by this and other emerging technologies. If you’d like to be kept up to date or have some work we can help with, you can get in touch or sign up to our contact list using the form in our website footer.
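A simple sketch of re-identification

As promised above, here is a minimal, self-contained Python sketch of the cross-referencing (“linkage”) idea: an ‘anonymised’ dataset that keeps a few quasi-identifiers can be joined against a public, named dataset. Every record, field name, and identifier below is invented purely for illustration - this is not drawn from any real dataset, and real re-identification work typically relies on statistical or machine-learning matching rather than an exact join.

# Toy illustration only: invented records showing how quasi-identifiers
# (postcode, birth year, gender) can link an 'anonymised' row to a named one.
anonymised_health_records = [
    {"postcode": "6011", "birth_year": 1987, "gender": "F", "condition": "asthma"},
    {"postcode": "8013", "birth_year": 1990, "gender": "M", "condition": "diabetes"},
]

public_register = [
    {"name": "A. Example", "postcode": "6011", "birth_year": 1987, "gender": "F"},
    {"name": "B. Sample", "postcode": "8013", "birth_year": 1990, "gender": "M"},
    {"name": "C. Other", "postcode": "6011", "birth_year": 1962, "gender": "M"},
]

QUASI_IDENTIFIERS = ("postcode", "birth_year", "gender")

def link(anonymous_rows, named_rows):
    """Match 'anonymised' rows to named rows that share the same quasi-identifiers."""
    matches = []
    for row in anonymous_rows:
        key = tuple(row[k] for k in QUASI_IDENTIFIERS)
        candidates = [p for p in named_rows
                      if tuple(p[k] for k in QUASI_IDENTIFIERS) == key]
        if len(candidates) == 1:  # a unique match re-identifies the person
            matches.append({"name": candidates[0]["name"], "condition": row["condition"]})
    return matches

print(link(anonymised_health_records, public_register))
# [{'name': 'A. Example', 'condition': 'asthma'}, {'name': 'B. Sample', 'condition': 'diabetes'}]

Even this toy version shows why redacting names alone offers little protection once a few quasi-identifiers survive in the released data.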

  • Introducing: the Digital Legal Systems Lab

    We’re excited to finally share a new project that we’ve been working on for a little while now, and formally announce a new institution - the Digital Legal Systems Lab.

The context

No system - whether legal, political, or digital - is ever going to be perfect. But here at Brainbox, we’ve always found it difficult to accept systems that could be working better. We find it especially difficult if they have a significant impact on a wide range of people, or particularly vulnerable groups. Before Brainbox existed, the primary area of work for our director, Tom Barraclough, related to New Zealand’s medico-legal system, which is particularly distinctive because of the ACC scheme. It’s a system that, for a surprisingly large number of people, doesn’t work very well. That’s largely because of system design. As a legal system, it requires more than 3000 staff to process millions of claims a year, and to handle an even greater number of potential disputes. But as a system, it wasn’t designed with that in mind, and it’s hideously complex even to medico-legal experts. It was while looking at that system around 2017 that Tom first wondered about the similarities between law/legislation and computer programs. One day, like a bolt from the blue (Twitter’s recommender system), he saw Hamish Fraser post about “coding the ACC Act”. Tom reached out, and since then, they’ve done some fascinating work together, influenced by the enthusiasm and effort of a wider community of inspiring people. One of the first things Tom and Hamish published together (alongside co-author Curtis Barnes) was a report funded by the Law Foundation, which concluded that the best way to manage the risks and realise the opportunities presented by “legislation as code” was through responsible experimentation. To build trust, and responsibly balance the power in those systems, knowledge and experience needed to be gradually built in a systematic way.

Enter: The Lab

Since the Law Foundation report was published, we’ve looked quite hard, and we haven’t yet found anywhere prepared to foster responsible “legislation as code” experimentation. So, we’ve founded our own place - the Digital Legal Systems (DLS) Lab. It’s a natural outgrowth and complement to the existing work we do at Brainbox. At the DLS Lab, we want to design digital systems to solve real world problems that are created or touched by the legal system. Importantly, these systems have to be built and designed to justify trust, as appropriate for legal and regulatory contexts.

Your role

You might be thinking - what does this all have to do with you? In essence: the lab is a place for collaboration. Just like a regulatory system, it needs multi-stakeholder input. We want to work with government, academia, lawyers and legal drafters, and especially non-government organisations. We need specific examples of systems we can work on to demonstrate the potential we see. We need people who want to work in multidisciplinary contexts and understand that the word “code” has at least three different meanings to different types of people (to lawyers, developers, academics, and policymakers). We’ll also need a way to pay the bills, and we’re confident that will come. To kick things off and mark the launch of the DLS Lab, Hamish Fraser has shared a post about a system we’ve worked on together. It’s fantastic work and he’s extremely thoughtful and talented. If this captures your interest, no matter where you are in the world, please get in touch.
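To make the “legislation as code” idea a little more concrete, here is a deliberately simplified Python sketch. The rule it encodes is entirely made up for illustration - it is not a model of the ACC Act or of any real statute - but it shows the basic shape of the approach: a provision expressed as a small, testable function whose inputs, outputs, and reasons are explicit.

from dataclasses import dataclass

@dataclass
class Claim:
    injury_caused_by_accident: bool
    occurred_in_nz: bool
    days_since_injury: int

def assess(claim: Claim, filing_window_days: int = 365):
    """Return (eligible, reasons) so every outcome can be traced to a specific rule."""
    failed = []
    if not claim.injury_caused_by_accident:
        failed.append("rule 1: injury must be caused by an accident")        # hypothetical rule
    if not claim.occurred_in_nz:
        failed.append("rule 2: accident must have occurred in New Zealand")  # hypothetical rule
    if claim.days_since_injury > filing_window_days:
        failed.append("rule 3: claim must be lodged within the filing window")
    return (len(failed) == 0, failed)

print(assess(Claim(injury_caused_by_accident=True, occurred_in_nz=True, days_since_injury=30)))
# (True, [])

Part of what responsible experimentation has to test is whether encodings like this faithfully reflect the legislation they claim to represent, who verifies them, and how disputes about them are resolved.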

  • Brainbox receives funding for legal research into content regulation and disinformation

    We’re pleased to announce that the Borrin Foundation and InternetNZ are funding a new Brainbox research project into legal frameworks for regulating disinformation and other content. This research comes at a critical time, during which many jurisdictions around the world – including New Zealand – are considering a range of regulatory responses to address the harms caused by online disinformation. But while there is considerable urgency around the issue of disinformation, it is still poorly understood. This combination of urgency and lack of full understanding poses a risk both to the success of the response and the integrity of the legal regimes surrounding free expression in New Zealand. With the support of the Borrin Foundation and InternetNZ, this project will provide concrete answers to legal questions, clarity on technical considerations, and a usable framework for balancing the many rights, responsibilities, and risks inherent in this area. Brainbox will seek to ensure that government and civil society responses to disinformation are both better informed and grounded in a strong legal and human rights framework, and that forthcoming regulation of the way New Zealanders communicate works in favour of citizens and communities. You can find out more about the project here or read our position paper, which outlines the approach we’re taking to this research project in more detail.

  • Techweek 2023: Debuting a digital legal systems case study

    We recently had the opportunity to present a webinar at Techweek 2023 about how we've supported New Zealand's water services regulator, Taumata Arowai, to enable API reporting of water quality data. The project sits under The Digital Legal Systems Lab, which is a physical and digital space for exploring new methods of digital legal design through applied case studies, and is a joint venture between Brainbox and Verb's Hamish Fraser. We think this project is a great case study of exciting things to come in the 'law as code' space, and we were grateful to all those who were able to attend the session. If you missed it, you can watch a recording of the presentation below: We'd love to stay in touch if this work is of interest to you. We will be hosting an in-person event in Wellington in the coming months, and also plan to formalise our international networks in this space through a programme of shared discussions about best practice and various institutions' ongoing activities in the law as code area. If you'd like to hear more about these activities, you can express your interest in the Lab using this form. If you're interested in wider work by Brainbox, you can sign up to our mailing list via the form in our website footer.
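For readers curious what “API reporting of water quality data” can look like in practice, here is a rough, purely illustrative Python sketch. The endpoint, field names, and credential below are placeholders invented for this example - they are not Taumata Arowai’s actual API.

import json
import urllib.request

# A single (invented) sample, structured as JSON for machine-to-machine reporting.
sample = {
    "supply_id": "EXAMPLE-001",                 # placeholder identifier
    "determinand": "E. coli",
    "value": 0,
    "unit": "MPN/100mL",
    "sampled_at": "2023-05-01T09:30:00+12:00",
}

request = urllib.request.Request(
    url="https://api.example.govt.nz/water-quality/samples",   # placeholder URL
    data=json.dumps(sample).encode("utf-8"),
    headers={"Content-Type": "application/json",
             "Authorization": "Bearer <token>"},                # placeholder credential
    method="POST",
)

# Uncomment once pointed at a real endpoint with real credentials:
# with urllib.request.urlopen(request) as response:
#     print(response.status)

The appeal of reporting like this is that a supplier’s own systems can submit structured data directly, rather than someone re-keying results by hand.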

  • Our perspective on New Zealand’s proposed platform regulation

    Last month, the Department of Internal Affairs (DIA) released a proposal for revising media regulation in New Zealand, called Safer Online Services and Media Platforms (SOSMP). The proposal seeks to regulate large online technology companies (like Meta and Google) under the same umbrella as traditional media outlets (like TVNZ and Stuff), and it recommends introducing industry-specific codes of practice for platforms whose primary purpose is to distribute content. The system would be overseen by a refreshed independent regulator. Platform regulation is a complex topic that we have worked on for some time, so we wanted to share some of our thoughts on the proposal as it stands. Our views are shaped by a strong commitment to human rights as a foundation for public policy, including freedom of expression. We’re also strong advocates for government transparency as a fundamental and necessary method of justifying public trust (especially among concerned or sceptical groups). A core theme of Brainbox’s work has been to call out the potential risks created in situations where governments gain increased control over digital infrastructures, including platform systems. If you’re interested in finding out more about how our past work relates to these values and informs our perspective on SOSMP, we’ve collated a summary of various projects at the bottom of this post. Importantly, the SOSMP document itself says – and DIA officials have emphasised to us – that the details of the proposal are very much open to discussion. In the proposal document, a large number of crucial details have been left open-ended. You can read more about the DIA’s proposed changes – and how you can provide feedback – here. Ahead of the 31 July submission date, we wanted to share some of our initial thoughts. You might find these useful if you want to make a submission. We’d also welcome the opportunity to test our thinking with others, including tech and media companies, journalists, and other experts. We've also expanded on our thoughts in a more detailed discussion paper below. While the work isn’t directly related, we also think it’s only fair to disclose: Brainbox is currently engaged as the project lead for a global multi-stakeholder network on tech company transparency, which includes working alongside the world’s largest tech companies, leading academics, and civil society organisations. Brainbox's previous and current contracts include providing independent advice to the Department of the Prime Minister and Cabinet on work programmes related to building resilience to disinformation. Brainbox is not authorised to make public statements about this work, and nothing we say should be attributed to the Department of the Prime Minister and Cabinet. Our work on the SOSMP and on the Aotearoa New Zealand Code of Practice on Online Safety and Harms (ANZCPOSH) has been funded by the Borrin Foundation and InternetNZ, and we are grateful for that support.

What stands out to us about the SOSMP proposal?

Drawing on our previous work and research, here are some things that stand out to us from the discussion document. We’re sharing these publicly to help you reach your own conclusions. We encourage you to draw on these points if you make a submission – whether you agree or disagree.

(1) The SOSMP is an opportunity to influence how governments and platforms draw the line on freedom of speech.

When it comes to free speech, the question is where and how we ‘draw the line’, not whether ‘a line’ should exist at all.
While the right to freedom of expression is extremely important, it’s not absolute, and it can be restricted in certain circumstances. However, these limitations must be for legitimate human rights objectives, reasonable, necessary, proportionate, and imposed using clear legal rules that can be challenged or appealed. It’s worth pointing out that all widely-used platforms can and do already impose some restrictions on what content can be distributed, and the SOSMP is an opportunity for enhanced transparency over how platforms do this. It’s also an opportunity to shed light on the role of government agencies in influencing platform content moderation (if any). These are important points for submitters to consider, because a completely “hands-off” approach to freedom of expression is unrealistic. To truly promote freedom of expression requires some positive vision of what New Zealand’s information environment looks like. We recommend being cautious of anyone who only has criticisms or suspicion, but no reasonable solutions.

(2) The proposal should create public power and oversight over how limitations are set and enforced – by companies and by governments.

Platform regulation isn’t just about where limits should be set. It’s also about creating legal structures that give the public and the courts some power and oversight over how those limits are being set and enforced. As such, the SOSMP should include more detail about the checks and balances against risk of abuse. What information should be provided about how regulators, government agencies, and platforms are operating? If the government and platforms are communicating about content (including through “trusted flagger” programmes), what should they be required to disclose? What would create confidence that the public could use the courts to protect freedom of expression if necessary, including against the regulator?

(3) We think an independent regulator is a good choice.

We support an independent regulator and think it’s important to limit the risk that Parliament and MPs influence the way codes of practice are developed and enforced in improper ways. An independent regulator can be more flexible, less preoccupied with politics, and can be checked by the courts in ways that Parliament can’t be. For example, unlike legislation, a code of practice could be “struck down” by New Zealand courts if it goes too far. The SOSMP also refers to a power to create “policy statements” to influence codes of practice – we’d like more detail about who sets those policy statements and any limits on them.

(4) User empowerment is a good goal, but we are cautious about the ‘child protection’ and ‘harm and safety’ framing.

We admire the SOSMP’s focus on user empowerment, but we recommend a balanced and cautious approach to measures based on child safety. Child safety is important, but children have rights too, and there are clear moves globally to drastically curtail civil and digital rights in the name of child safety. Also, while some content can be clearly harmful, it’s important to acknowledge that the relationship between online content and real world harm is sometimes complex and indirect. The research on this is still emerging. A specialist regulator could engage with these issues in a nuanced and expert way, including by commissioning research and education initiatives.

(5) Lumping tech companies and news organisations together creates problems.
There are important differences between news media and social media, and the proposal currently fails to account for these differences. There are some powers proposed for the regulator that make sense for regulating tech companies, but would be unacceptable if applied to news companies. In addition, the primary difference between traditional and social media companies is the way they deal in user-generated content. Harmful user-generated content is already regulated under the Harmful Digital Communications Act, and it’s not clear why the HDCA has been excluded from this proposal. Separating the SOSMP and the HDCA without good reason is going to produce confusing systems and processes. We also think the definition of a platform needs to be clear in advance, and shouldn’t rely on a lot of interpretation or discretion.

(6) We need more clarity on illegal content.

While new kinds of content are in theory not being made illegal by the SOSMP, in practice the law would empower the regulator to take a greater role in requiring platforms to intervene against particular types of content. The SOSMP does create legal obligations to deal with expanded categories of content. This is not inherently negative, but any steps to expand the boundaries of content which is subject to regulatory oversight should be dealt with directly and transparently, with careful design to ensure that any implementation is subject to legal oversight mechanisms.

(7) We need to be pragmatic about New Zealand’s place in the world.

The SOSMP makes ambitious statements about the impact that it will have on the conduct of global tech platforms and their products and services, and we think some pragmatism is important when it comes to setting expectations for the SOSMP – not least because it’s hard to assess the risks of the proposal without a realistic sense of potential benefits. Fundamentally, New Zealand’s relatively small user base means our leverage over platforms is very limited. By contrast, our ability to lead through positive incentives, advocacy, and adherence to human rights principles is very high. The SOSMP rightly anticipates engaging with emerging international standards-setting processes and creates space for codes of practice to align with global efforts. However, it is participation in these networks that is likely to have the most impact on platform conduct, not the SOSMP itself.

Conclusion

People’s ability to express themselves and to hold powerful entities to account is one of the most important features of a democracy. We can understand why many people will find this proposal unsettling. But this is the start of an ongoing discussion and we believe well-informed, diverse participation is important. DIA should be congratulated for the work it has done to foster these discussions, even as its proposal is tested rigorously. We’ll be doing what we can to contribute toward effective public policy based on human rights that gives people trust and confidence that these significant and important powers are being designed and used properly.

Selection of previous work

  • Online platform accountability: EU and NZ perspectives explored in new policy paper

    In August, the Brainbox Institute collaborated with the European Union in Aotearoa New Zealand and Victoria University of Wellington to host a panel addressing the approaches of both the EU and New Zealand toward online platform accountability. The event brought together a number of prominent expert voices, including Gerard de Graaf, the EU’s Senior Envoy for Digital to the US, Paul Ash from the Department of the Prime Minister and Cabinet, Anjum Rahman from Inclusive Aotearoa Collective Tāhono, Victoria University’s Professor Ali Knott, and Brainbox Director Tom Barraclough. The discussion revealed that while the approaches taken by the EU and New Zealand differed, both were underpinned by a shared goal: ensuring online accountability while protecting fundamental human rights and enabling innovation. It is reassuring to see that the two approaches are complementary, and that the EU and New Zealand have considerable avenues to learn from and teach each other in this area – highlighting the importance and benefit of continued partnership. Further, we welcome the panel’s acknowledgement that both civil society and community groups hold an important role in online platform regulation. Following the panel, the Brainbox Institute produced a policy paper as part of the Policy Futures initiative of the EU Delegation to Aotearoa New Zealand. The paper is intended to fairly capture the discussion and is structured around the key areas that arose: key issues in online accountability, the differing approaches being taken by the EU and New Zealand, and the commonalities and path forward. The European Union in Aotearoa New Zealand also created a video summary of the event, which you can watch below:

  • Brainbox Fellows Programme

    We're excited to announce the details of The Brainbox Fellows Programme, a collaborative initiative built on mutual admiration and respect between the Brainbox Institute and other thought leaders in the tech policy community. The Brainbox Fellows Programme enables Fellows to work collaboratively with the Brainbox Institute on projects of mutual interest, and offers graduates, experienced researchers and policy enthusiasts alike access to the Brainbox Institute’s networks and infrastructure. The Programme was born out of a desire to increase collaboration, build relationships, and foster a diverse and high quality community in the law, technology and policy spaces. The long-term goal of the Programme is to strengthen the existing community base that works in these spheres, while also supporting new entrants.

The Brainbox Fellows Programme has been developed with the following goals in mind:

- To provide a means of recognising the informal relationships Brainbox has developed over time with highly engaged individuals in our areas of interest.
- To amplify the work of Brainbox Fellows and open further opportunities for them with Brainbox and with other organisations.
- To support the development of a diverse and high quality community of public policy contributors outside government.
- To diversify the areas of interest that Brainbox covers, as well as supporting Brainbox Fellows to do great work.

In pursuit of these goals, Brainbox Fellows will have access to the following opportunities, with more on the way as the programme grows and develops:

- Affiliation to the Brainbox Institute through representation on our website and communication channels.
- The opportunity to connect with other Fellows through communications and networking programmes.
- Opportunities, where appropriate, to deepen the connection between Brainbox and Brainbox Fellows, including through potential collaborations, shared projects, or partnerships.
- Where applicable, limited mentoring and professional support toward careers in public policy, either within government or in non-governmental organisations.

For further updates or more information, sign up for our newsletter or follow us on social media using the links on our home page.

  • Taylor Swift, Non-Consensual Deepfake Pornography, and What It Means for New Zealand

    Bella Stuart is a Brainbox Fellow and a recent law graduate from the University of Otago. Last year, she wrote her honours dissertation on the need to explicitly criminalise deepfake pornography in Aotearoa New Zealand. Below, she explains why the recent Taylor Swift deepfake images are a timely reminder for New Zealand lawmakers.

Deepfake pornography made headlines last week when Taylor Swift was depicted without her consent in pornography generated using artificial intelligence. New Zealand is not immune to this phenomenon, with Netsafe having noted an increase in reports of deepfake pornography, and the New Zealand Police describing it as a “phenomenon of concern… to be watched closely.”[1] Swift’s experience speaks to increasing global concerns regarding whether existing legal systems can control this technology. While the United Kingdom and United States are taking action to address legislative deficiencies, New Zealand remains disappointingly complacent – despite the probability that, if Swift resided here, what happened to her would not be a crime.

What is a Deepfake?

Deepfakes are hyper-realistic manipulated images produced using artificial intelligence. Using existing images of an individual, machine learning programs can create new content depicting that individual doing things they have never done. While deepfakes have some beneficial uses, they have also introduced a treacherous new frontier of image-based sexual abuse when used to create non-consensual pornography.

What is the Harm?

While some question whether this fake content actually harms those depicted, an ever-increasing body of qualitative research demonstrates that victims experience profound psychological, economic, professional and social harms. Victims – ranging from celebrities to journalists to school-aged girls – have described their experiences as “being fetishised”, “digital rape”, and “humiliating, shaming and silencing.” Some experience ‘memory appropriation’, where they themselves struggle to distinguish between real and fake. Women are far more likely to be depicted in non-consensual deepfake pornography, and experience more extreme harms due to persisting sexual double standards which “enable humiliation, stigma and shame to be visited on women” more readily than men.[2]

The New Zealand Legal System’s Capability to Respond

These extreme harms require a carefully designed, fit-for-purpose legal response – which New Zealand currently lacks. This response must involve the explicit criminalisation of non-consensual pornographic deepfakes. While some victims may benefit from suing their aggressor for financial damages, criminal law generally provides a more effective legal response. Specifically, the State’s ability to punish perpetrators both allows the law to respond to the phenomenon, and deters prospective perpetrators from distributing this content in the first place. Unfortunately, while New Zealand has several offences targeting image and communication-based harms, they all fail to adequately capture this emergent phenomenon.
For example, the Films, Videos, and Publications Classification Act 1993 (FVPCA) establishes New Zealand’s content censorship regime by criminalising, among other things, the making and distributing of objectionable publications.[3] An objectionable publication is one that “describes, depicts, expresses, or otherwise deals with matters such as sex… in such a manner that the availability of the publication is likely to be injurious to the public good.”[4] Two issues arise regarding the FVPCA’s application to non-consensual deepfake pornography. Firstly, the Court of Appeal has restricted objectionable publications to those dealing with the activity of sex,[5] meaning that while paradigmatic deepfake pornography could be objectionable, deepfake imagery falling short of sexual activity (such as mere nudity) could not. Secondly, even if dealing with the activity of sex, there may be issues establishing injury to the public good where the content targets only an individual. Further, s 22 of the Harmful Digital Communications Act 2015 (HDCA) criminalises the causing of harm by posting a digital communication where the posting individual intends to cause the victim harm, the victim actually experiences harm, and the posting would cause harm to an ordinary reasonable person in the victim’s position. While this appears at first glance to capture deepfake pornography, the posting of these images can be motivated by various factors beyond the intention to cause harm, including financial gain, sexual gratification, and notoriety among peers – all of which would prevent the offence from applying. Further, requiring proof that the victim experienced harm, and that this be reasonable, is completely inappropriate in a sexual violence context, requiring victims to relive their trauma and have their experiences challenged – and potentially rejected – in court. Finally, the Crimes Act 1961 s 216J and the HDCA s 22A respectively criminalise the distributing and posting of “intimate visual recordings”. Unfortunately, non-consensual pornographic deepfakes are likely neither “visual recordings” nor “intimate”. By nature, a deepfake is not a recording, and Parliament made it disappointingly clear it intended fake imagery to fall outside this definition. When enacting the s 22A offence in 2021, numerous submissions – including by Brainbox – urged the Justice Committee to clarify that “visual recording” captured fake/manipulated content, but these recommendations were rejected by the Committee and subsequently by the House when proposed as an amendment to the Bill. Further, as these offences are designed to address real-life scenarios, what is “intimate” does not apply comfortably to situations where content is manufactured – for example, where there is no expectation of privacy (because the events did not occur), or the intimate areas depicted in an image do not belong to the individual whose face is shown.

A Call to Action

These examples demonstrate the inadequacy of using laws designed for the ‘real’ to address the fake. To vindicate victims’ interests and deter creation of this harmful content, the distribution of non-consensual deepfake pornography must be explicitly and comprehensively criminalised through a fit-for-purpose offence. Reliance on this piecemeal framework of existing offences is entirely unacceptable. We cannot simply wait and see whether a judge is willing to apply these inadequate existing offences in ways which are both unnatural and inconsistent with Parliamentary intentions.
At best, this approach leaves the law unacceptably ambiguous. At worst, it leaves us to discover that non-consensual pornographic deepfakes are legal when the first victim is told by a court that their interests cannot be vindicated. Swift’s experience is a timely reminder that New Zealand only has so long to take proactive action before we are left scrambling to respond. Parliament must heed this warning and act quickly to protect New Zealanders from this newest manifestation of image-based sexual abuse.

Bella completed her Bachelor of Laws (Honours, First Class) and Bachelor of Arts at the University of Otago in 2023. While at University, Bella tutored property law and summered in the litigation and corporate teams at Bell Gully. As a graduate, she is now working at the Ministry of Justice. Bella has been a Brainbox Fellow since January 2024.

Photo credit in feature image: Eva Rinaldi

[1] Miriam Lips and Elizabeth Eppel Mapping Media Content Harms: A Report Prepared for Department of Internal Affairs (Victoria University of Wellington Te Herenga Waka, 22 September 2022) at 12.
[2] Clare McGlynn and Erika Rackley “Image-Based Sexual Abuse” (2017) 37 OJLS 534 at 544.
[3] Films, Videos, and Publications Classification Act, ss 123-124.
[4] Section 3(1).
[5] Living Word Distributors v Human Rights Action Group at [28] per Richardson P.

  • What New Zealand commentators are missing about the Christchurch Call

    Tom Barraclough, Director at the Brainbox Institute – In New Zealand, the time has come for a discussion about our ongoing support for the Christchurch Call. There’s some indication that a cost-benefit analysis of sorts is taking place, but public perception of the Call is also likely to play a role in any future decisions. So far, the commentary I have seen in New Zealand is fundamentally misguided, and so I’m offering here an alternative set of considerations, viewed from three perspectives: the people affected by the attacks; free speech advocates; and – more cynically – our national interest. Firstly, violent extremism is still a significant issue that is only growing in its potentially devastating impacts. It is essential in any discussion to note that the people caught up in the Christchurch attacks have only just completed a harrowing coronial inquiry. The ongoing impact of the Christchurch attacks is incredibly real, even if it is beginning to fade for some. More than that, the machinery created by the Christchurch Call has actually been activated on a number of occasions globally since 2019, in response to events within its extremely narrow mandate – real-world terrorist attacks that include an online propaganda component intended to amplify the impact of that violence. All indications suggest that we are likely to see more violent extremism in the coming years, rather than less. If you haven’t seen the impact of the Call in your feeds, that is because it has been remarkably effective. Secondly, people who value freedom of expression highly may have reasonable reservations about the Call. In particular, there’s a need to ensure the Call doesn’t unwittingly expand its scope – away from black and white abhorrent violent content, and toward more grey content like disinformation or hate speech. The need to manage these risks is a foundational principle of the Call, and they are issues that should be – and are – repeatedly analysed in a transparent manner. Building trust in the Call is a necessary part of its legitimacy. But I think critics concerned about free speech are missing two key points. To start with, the scope of the Call is intentionally narrow. It has an external advisory group whose members are constantly alert to the risk of scope creep, and include some organisations founded to manage the risks presented by government censorship. In addition, the Call is a voluntary partnership between governments and companies, many of which are US-based and take a strident approach to free expression. These companies often operate in the most repressive countries in the world, navigating takedown requests and requests for information about users from governments in countries such as Vietnam, Turkey, or Thailand. When it comes to government overreach, they know exactly how bad it can get. What’s more, the heat that content moderation draws for those companies is an ongoing headache, drawing ire from users, politicians and journalists. The last thing they want is to undermine trust, or attract controversy, through an ever-creeping scope. Paradoxically for free speech advocates, the Christchurch attacks were identified by some at the 2023 Internet Governance Forum as an event that could have prompted the most repressive crackdown on technology companies we’ve ever seen.
Motivated governments could have ridden a wave of public sentiment of the kind seen in Australia’s Abhorrent Violent Material legislation, or the United Kingdom’s response to child safety cases in the recent Online Safety Act. By contrast, the Call is an example of non-coercive, transparent, and careful action that was by no means guaranteed when it was developed. Advocates for free expression ought to proceed very carefully in calling for the Call’s infrastructure to be disbanded. It’s even more perplexing to hear the Call criticised on the basis that it hasn’t been effective at somehow reducing extremism or polarisation. Any work on assessing the impact of algorithms on radicalisation was always the hardest part of the Call’s mission, and the bit that should be approached with the most caution. We simply don’t know how impactful algorithms are on shaping political behaviour, and the first step needs to be assessing what that impact is, before we start intervening at a political level. It’s absurd to hear advocates for freedom of expression saying the Call has failed because people are still polarised, which assumes that people’s behaviour is driven more by algorithms than their real world values and experiences. If neither of these areas of principle or morality move you, the final perspective that I think we’re missing is more cynical – and that’s the benefit of the Call for our own national geopolitical interests. The world is entering difficult and unstable times in a transition towards a multipolar environment – and one of the core locations of interest is our own place in the Asia-Pacific. This is happening in a century where technology, the way it’s used, and the values that determine how it is governed are potentially the most impactful issues on the global stage short of climate change and direct armed conflict. Even aside from the moral obligation to respond to TVEC in careful and transparent ways, the Call has been New Zealand’s entry point to global conversations we simply would not have been invited to, let alone led, in the years since 2019. To scupper it now would be an act of extraordinary self sabotage in an area of immense soft power. Any government with an interest in the geopolitics of technology and the importance of free speech should weigh this heavily in any cost benefit analysis. Enhanced transparency, an emphasis on real world impact, and ongoing vigilance against unjustified scope creep remain essential, but in addressing those issues, it would be foolish to relinquish our globally influential role. Disclosure: the Christchurch Call has provided funding toward the Transparency Initiatives Portal for the Action Coalition on Meaningful Transparency, a multi-stakeholder coalition where the Brainbox Institute acts as project lead.

Brainbox Institute is a non-partisan organisation that supports constructive policy, governance, and regulation of digital technologies.

Subscribe to our news


© 2023 Copyright Brainbox Ltd. All Rights Reserved. Privacy Policy.
