Transparency-based approaches to social media regulation
November 2021 and ongoing
Transparency and social media for the Global Partnership on AI
Brainbox worked with the University of Otago and the Global Partnership on AI's responsible AI working group, which was proposing a collaborative study of how social media recommendation systems deal with terrorist content.
This work is continuing in 2022, and Brainbox has participated in a range of meetings hosted by the Global Internet Forum to Counter Terrorism and others working within the wider Christchurch Call.
GPAI aims to conduct research within one or more social media companies to observe the effect of “recommender systems” on platform user behaviour, and in particular whether such systems have the effect of increasing user consumption of terrorist and violent extremist content. It outlined its proposed study in a separate technical report. Brainbox provided an analysis of the legal and policy issues relevant to conducting research of this kind.
Our analysis was grounded in human rights principles and frameworks, to ensure that any research partnership would be broadly appealing, fair, and respectful of due process for all parties.
It drew on previous work by Brainbox for an investor coalition led by the New Zealand Superannuation Fund on its engagement with Facebook, Alphabet and Twitter over the companies’ responses to the Christchurch terror attacks of 15 March 2019.
It also drew on Brainbox's reports on automated decision making and legislation as code.