Taylor Swift, Non-Consensual Deepfake Pornography, and What It Means for New Zealand

Bella Stuart is a Brainbox Fellow and a recent law graduate from the University of Otago. Last year, she wrote her honours dissertation on the need to explicitly criminalise deepfake pornography in Aotearoa New Zealand. Below, she explains why the recent Taylor Swift deepfake images are a timely reminder for New Zealand lawmakers.


Deepfake pornography made headlines last week when Taylor Swift was depicted without her consent in pornography generated using artificial intelligence. New Zealand is not immune from this phenomenon, with Netsafe having noted an increase in reports of deepfake pornography, and the New Zealand Police describing it as a “phenomenon of concern… to be watched closely.”[1] 


Swift’s experience speaks to increasing global concerns regarding whether existing legal systems can control this technology. While the United Kingdom and United States are taking action to address legislative deficiencies, New Zealand remains disappointingly complacent – despite the probability that, if Swift resided here, what happened to her would not be a crime.


What is a Deepfake?

Deepfakes are hyper-realistic manipulated images produced using artificial intelligence. Using existing images of an individual, machine learning programs can create new content depicting that individual doing things they have never done. While deepfakes have some beneficial uses, they have also introduced a treacherous new frontier of image-based sexual abuse when used to create non-consensual pornography.


What is the Harm?

While some question whether this fake content actually harms those depicted, an ever-increasing body of qualitative research demonstrates that victims experience profound psychological, economic, professional and social harms. Victims – ranging from celebrities to journalists to school-aged girls – have described their experiences as “being fetishised”, “digital rape”, and “humiliating, shaming and silencing.” Some experience “memory appropriation”, where they themselves struggle to distinguish between real and fake. Women are disproportionately likely to be depicted in non-consensual deepfake pornography, and experience more extreme harms due to persisting sexual double standards which “enable humiliation, stigma and shame to be visited on women” more readily than men.[2]


The New Zealand Legal System’s Capability to Respond

These extreme harms require a carefully designed, fit-for-purpose legal response – which New Zealand currently lacks. This response must involve the explicit criminalisation of non-consensual pornographic deepfakes. While some victims may benefit from suing the perpetrator for damages, criminal law generally provides a more effective legal response. Specifically, the State’s ability to punish perpetrators both allows the law to respond to the phenomenon and deters prospective perpetrators from distributing this content in the first place.


Unfortunately, while New Zealand has several offences targeting image and communication-based harms, they all fail to adequately capture this emergent phenomenon.


For example, the Films, Videos, and Publications Classification Act 1993 (FVPCA) establishes New Zealand’s content censorship regime by criminalising, among other things, the making and distributing of objectionable publications.[3] An objectionable publication is one that “describes, depicts, expresses, or otherwise deals with matters such as sex… in such a manner that the availability of the publication is likely to be injurious to the public good.”[4] Two issues arise regarding the FVPCA’s application to non-consensual deepfake pornography. Firstly, the Court of Appeal has restricted objectionable publications to those dealing with the activity of sex,[5] meaning that while paradigmatic deepfake pornography could be objectionable, deepfake imagery falling short of sexual activity (such as mere nudity) could not. Secondly, even where content deals with the activity of sex, there may be issues establishing injury to the public good where it targets only an individual.


Further, s 22 of the Harmful Digital Communications Act 2015 (HDCA) criminalises the causing of harm by posting a digital communication, where the posting individual intends to cause the victim harm, the victim actually experiences harm, and the posting would cause harm to an ordinary reasonable person in the victim’s position. While this appears at first glance to capture deepfake pornography, the posting of these images can be motivated by various factors beyond the intention to cause harm – including financial gain, sexual gratification, and notoriety among peers – all of which would prevent the offence from applying. Further, requiring proof that the victim experienced harm, and that this harm was objectively reasonable, is completely inappropriate in a sexual violence context: it forces victims to relive their trauma and have their experiences challenged – and potentially rejected – in court.


Finally, s 216J of the Crimes Act 1961 and s 22A of the HDCA respectively criminalise the distributing and posting of “intimate visual recordings”. Unfortunately, non-consensual pornographic deepfakes are likely neither “visual recordings” nor “intimate”. By nature, a deepfake is not a recording, and Parliament made it disappointingly clear that it intended fake imagery to fall outside this definition. When enacting the s 22A offence in 2021, numerous submissions – including by Brainbox – urged the Justice Committee to clarify that “visual recording” captured fake or manipulated content, but these recommendations were rejected by the Committee and subsequently by the House when proposed as an amendment to the Bill. Further, as these offences are designed to address real-life scenarios, the concept of what is “intimate” does not apply comfortably to situations where content is manufactured – for example, where there is no expectation of privacy (because the events never occurred), or where the intimate areas depicted in an image do not belong to the individual whose face is shown.


A Call to Action

These examples demonstrate the inadequacy of using laws designed for the ‘real’ to address the fake. To vindicate victims’ interests and deter creation of this harmful content, the distribution of non-consensual deepfake pornography must be explicitly and comprehensively criminalised through a fit-for-purpose offence. Reliance on the current piecemeal framework of existing offences is entirely unacceptable. We cannot simply wait and see whether a judge is willing to apply these inadequate existing offences in ways which are both unnatural and inconsistent with Parliamentary intentions. At best, this approach leaves the law unacceptably ambiguous. At worst, it leaves us to discover that non-consensual pornographic deepfakes are legal only once the first victim is told by a court that their interests cannot be vindicated.


Swift’s experience is a timely reminder that New Zealand only has so long to take proactive action before we are left scrambling to respond. Parliament must heed this warning and act quickly to protect New Zealanders from this newest manifestation of image-based sexual abuse.


Bella completed her Bachelor of Laws (Honours, First Class) and Bachelor of Arts at the University of Otago in 2023. While at University, Bella tutored property law and summered in the litigation and corporate teams at Bell Gully. As a graduate, she is now working at the Ministry of Justice. Bella has been a Brainbox Fellow since January 2024.


Feature image photo credit: Eva Rinaldi


[1] Miriam Lips and Elizabeth Eppel Mapping Media Content Harms: A Report Prepared for Department of Internal Affairs (Victoria University of Wellington Te Herenga Waka, 22 September 2022) at 12.

[2] Clare McGlynn and Erika Rackley “Image-Based Sexual Abuse” (2017) 37 OJLS 534 at 544.

[3] Films, Videos, and Publications Classification Act 1993, ss 123–124.

[4] Section 3(1).

[5] Living Word Distributors v Human Rights Action Group [2000] 3 NZLR 570 (CA) at [28] per Richardson P.
