The
law, history teaches us, lags inevitably behind technological change. In
respect of no development is this more true than the ‘Information Revolution,’
those sweeping and manifold transformations brought about by popular and
near-instantaneous access to the internet. Proving a particular challenge to
regulators is social media. Compare, by way of illustration, the regulatory
scheme governing more traditional media with that applicable to the media of
this new digital age. Whereas television content is beholden to a comprehensive
set of guidelines and overseen by a government-approved regulator, content
published via social media exists in what may more accurately be described as a
regulatory ‘wild west’. It is my argument that more stringent legal safeguards need
to be built into the online sphere, and that this is a challenge that the state
cannot shy away from.
In my
view, the most insidious by-product of this technological revolution is the
emergence of ‘fake news’. Of course, the dissemination of false information has
long been used as a tool to manipulate public debate, but the confluence of
social media and AI has undoubtedly intensified the phenomenon. Individuals can
now be targeted via sophisticated algorithms that draw on data from our online
activity. Examples of fake news are innumerable, but, according to snopes.com,
some of the most viral false stories from 2017-2018 include: claims that Black
Lives Matter protestors blocked emergency services from reaching hurricane
victims, claims that illegal immigrants started California wildfires, and
claims that the leaders of Islamic State had Barack Obama on speed dial. And the
problem is growing exponentially: evidence of concerted online disinformation
campaigns has been found in forty-eight countries since 2018.
Why
is this so harmful? Partly, the answer lies in our psychology. Social
psychologist Sander van der Linden posits that, as fake news takes advantage
of our pre-existing biases, it often spreads faster and more widely than real
news stories; moreover, once we have been exposed to doctored information, it
is very difficult to remove the impression, even once myths have been debunked.
This has stark implications for our public life. If citizens do not share, as a
starting point, a common appreciation of what is broadly true and false, how
can any semblance of mature, healthy democratic debate take place? This is
before we even begin to consider the foreign interference aspect of fake news
or the more palpable impacts of disinformation; last year, for instance, more
than twenty people were killed in India after false rumours went viral alleging
the presence of child abductors in several villages across the country.
The
legal status quo is essentially toothless in the face of these challenges.
Being a platform as opposed to a publisher, Facebook is not liable for the
content it hosts, and is only obliged to remove patently illegal material –
child sexual exploitation, incitement to violence – once brought to its
attention. As regards harms with a less clear definition, such as
online disinformation, platforms enjoy even greater legal immunity, if not total
discretion. Clarifying Facebook’s policy, Nick Clegg has stated that content
from political campaigns will, by default, be treated as “newsworthy,” even if
otherwise in violation of the platform’s standards, and will thus be exempt
from fact-checking. Politicians will be allowed, indeed tacitly encouraged, to continue lying to us.
The
UK Government has recently published its ‘Online Harms White Paper,’ which aims
to fill this regulatory vacuum. Key to the proposals is the imposition of a
statutory duty of care on social media companies to take reasonable steps to
protect their users, thus shifting responsibility from individuals to the
platforms themselves. This duty of care is to be enforceable via an independent
regulatory body.
Although
welcome, these proposals do not go far enough. Firstly, there is a strong case
for imposing a regime of strict, as opposed to negligence-based, liability: social
media companies have powerful AI tools at their disposal which could enable
harmful content and disinformation to be filtered out before it is ever posted.
Secondly, to truly tackle the problem of disinformation, the response needs to
be not just reactive but also proactive, incorporating a public education
element. For example, researchers at the
University of Cambridge have developed a technique to psychologically
‘inoculate’ people against fake news by exposing them, via an online game, to the
methods used to spread disinformation. Whilst this may seem too
far-fetched to be the stuff of public policy, it is essentially the position
taken in Finland. Following the country’s 2014 Anti-Fake News Initiative, a
response to increasing Russian electoral interference, all high school students
undergo a course designed to build a ‘digital literacy toolkit,’ enabling them
to spot false and inflammatory information. It is worth noting that the Press
Freedom Index ranks Finland first in terms of public trust in both the media
and democratic institutions.
Opponents
of online regulation argue that greater state oversight threatens to have a
chilling effect on freedom of speech. In particular, many are uncomfortable
with the prospect of a government agency setting rules as to what constitutes
‘harmful speech,’ an inherently normative category. However, the problem with
this line of libertarian thinking is that we are in fact already in a situation
where regulators set arbitrary rules as to the balancing of competing rights
online. These regulators are Facebook and other social media giants, and they
are far from neutral actors; the more time we spend on these platforms, the
more profitable they are from an advertising perspective, and there is no
better way to ensure our continuous attention than through emotive content that
plays on our fears. Somebody always sets the rules. We need to ask: who, how
and for what purpose? Surely it is better to have the framework for online
engagement set by a legally reviewable public agency than by Mark Zuckerberg.
More
broadly, I’d ask the White Paper’s critics to reflect on what we mean when we
talk about freedom in this context. From the consumer’s perspective, does
genuine freedom of thought not entail informed choice, which can only be
ensured within a reasonably fair, thorough and unmanipulated media landscape?
From the publisher’s perspective, ought not freedom of the press, historically
fundamental to our public life, be paired with a corresponding duty to exercise
this freedom responsibly?
The writer, Philip Matthews, is an aspiring barrister based in London, with a keen interest in European Law and Tort Law. He previously studied history at the University of Oxford and completed the GDL at City, University of London. He is currently pursuing the BPTC at BPP Law School in London.