Human Rights & Disinformation

“While the emergence of internet technology has brought human rights benefits, allowing a plurality of voices, a new freedom of association and more widespread access to information than ever before, it has also brought distortions to electoral and political processes that threaten to undermine democracy. The rapid pace of technological change has facilitated non-compliance with existing human rights law and related regulation, because the activities are new and because the infrastructure has not been in place to explain, monitor or enforce compliance with existing laws. Urgent action is needed, as the challenges we are currently seeing to our democracies are challenges of the scale being tackled when the UDHR was drafted in the late 1940s.”
Kate Jones [University of Oxford’s Faculty of Law]
The material on this page is derived primarily from three sources (referenced in the text below), and relates to three important international declarations and covenants concerning our basic human rights:
•    the Universal Declaration of Human Rights (UDHR)
•    the International Covenant on Civil and Political Rights (ICCPR)
•    the International Covenant on Economic, Social and Cultural Rights (ICESCR)



1   Why is Disinformation a Human Rights Issue?
The first source is a paper by Richard Wingfield (Global Partners Digital) in which he explains that disinformation is a human rights issue because, “simply put, [its spread] can cause harm to a range of human rights, some more than others”, and he lists the following:
•    the right to free and fair elections (Art. 25, ICCPR);
•    the right to health (Art. 12, ICESCR);
•    the right to freedom from unlawful attacks upon one’s honour and reputation (Art. 17, ICCPR);
•    the right to non-discrimination (Art. 2(1) & 26, ICCPR).

And Wingfield provides the following clarification: “For an election to be free and fair, voters need to have accurate information about the parties, candidates and other factors when they vote. Incorrect information may influence the way that individuals vote... Inaccurate information about health care and disease prevention, such as false information on risks associated with vaccines, may deter people from taking healthcare decisions that protect their health, putting them (and others) at greater risk... Disinformation often relates to a particular individual — particularly political and public figures, as well as journalists — and is designed to harm that person’s reputation.”
“Disinformation sometimes focuses on particular groups in society — such as migrants, or certain ethnic groups — and is designed to incite violence, discrimination or hostility.” And he notes that “inappropriate policy responses to disinformation can, themselves, also pose risks to human rights, particularly the right to freedom of expression (Article 19, ICCPR).” [1]

The paper concludes with advice for policymakers considering developing a policy or law, including some initial ‘guiding questions’, for example:
•    “Does your policy include any restrictions on particular forms of speech or content? If so, are these restrictions set out in law?”
•    “Is there clarity over the precise scope of the law? — general prohibitions based on vague or ambiguous ideas such as ‘false news’ or ‘non-objective information’... would fail this test.”
•    “Do any restrictions in the law account for instances where the individual reasonably believed the information to be true?”
•    “Are any responses or sanctions proportionate?”
2   Online Disinformation and Political Discourse
The second source is a report by Kate Jones (published by Chatham House) which sets out in more detail how disinformation impacts human rights. It clarifies the terms and concepts discussed; provides an overview of cyber activities that may influence voters; summarizes a range of responses by states, the EU and the digital platforms themselves; and discusses relevant human rights law, with specific reference to “the right to freedom of thought, and the right to hold opinions without interference; the right to privacy; the right to freedom of expression; and the right to participate in public affairs and vote.” It concludes by setting out recommendations on how human rights ought to guide state and corporate responses.
Dr Jones argues that: “Online political campaigning techniques are distorting our democratic political processes. These techniques include the creation of disinformation and divisive content; exploiting digital platforms’ algorithms, and using bots, cyborgs and fake accounts to distribute this content; maximizing influence through harnessing emotional responses such as anger and disgust; and micro-targeting on the basis of collated personal data and sophisticated psychological profiling techniques. Some state authorities distort political debate by restricting, filtering, shutting down or censoring online networks.”
She also points out that: “It is important to dispel the misconception that the core challenge posed by disinformation and other election manipulation techniques is the transmission of incorrect information. The veracity of information is only the tip of the challenge. Social media uses techniques not just to inform but also to manipulate audience attention.”[2] She notes that: “At present the boundaries between acceptable and unacceptable online political activity are difficult to discern. The emergence of cyber campaigning in elections has been a phenomenon of the last decade... Political consultancies now focus on digital capabilities as these are regarded as fundamental to effective campaigning. Technology has developed more quickly than the norms that guide it, such that digital platforms are currently operating in largely unregulated fields.”

The report gives some examples of cyber activities that may influence voters: “the creation of disinformation and divisive content; the use of specific methods to maximize the distribution of that material; and the use of personal data in order to maximize the influence of the material over individuals... in some countries politically-motivated state disruption to networks presents a further challenge.” The author notes that: “Digital platforms have reported that they have been the subject of an ‘exponential increase in politically motivated demands for network disruptions, including in election periods’. These can include restrictions, filtering or shutdown as well as censorship.”[3]
3   Freedom & Accountability
“In this moment, the conversation we should be having — how can we fix the algorithms? — is instead being co-opted and twisted by politicians and pundits howling about censorship and miscasting content moderation as the demise of free speech online. It would be good to remind them that free speech does not mean free reach. There is no right to algorithmic amplification.”
Renee DiResta [Director of Research at New Knowledge] (emphasis added)
So how might social media platforms and regulators reduce ‘freedom of reach’ without compromising ‘freedom of speech’? This was the main issue tackled by the Transatlantic High Level Working Group on Content Moderation Online and Freedom of Expression, which reported in June 2020.
One of the group’s members, Peter Pomerantsev, points out that the Working Group were “trying to find a way of imagining what an internet in line with human rights could look like — and which different democracies could agree on... At the start it looked like we could never find common ground: American free speech fundamentalists argued that any control on content was authoritarian; people who’ve experienced how actual authoritarians twist freedom of speech to crush any critical voices despaired. But after much debate a consensus began to emerge. We began to think about ‘disinformation’ not just as a type of content, but as a type of deceptive behaviour.”
“The problem with the cyber militias and troll farms is not so much individual pieces of content they post, but the way they distribute them en masse in a way that looks organic, as if it’s real citizens exercising their freedom of speech, when in reality these are hidden, coordinated campaigns from a single source.”
“This sort of mass, inauthentic campaign actually takes away people’s right to receive information about its origins, to understand how the information environment around them is shaped. I’m not talking about individual anonymity — that’s an important right — but the warping of reality where what seems to be one person saying something online is actually part of a network of accounts all saying the same thing, at the same time, according to lines passed down from a hidden manipulator.”
The group called this ‘viral deception’ and saw it as “the sort of behaviour that can be regulated against. And most importantly this is in the spirit of Article 19 of the [UDHR]: it’s a demand for more information, not less.”[4] As David Kaye, the UN Special Rapporteur on Freedom of Expression, says: “Coordinated amplification… interferes with the individual’s right to seek, receive, and impart information and ideas of all kinds. Conceived this way, it seems to me that solutions could be aimed at enhancing individual access to information rather than merely protecting against public harm.”[ibid]

Notes
1     Wingfield provides the example of the Anti-Fake News Act in Malaysia, which “criminalises the publication and circulation of ‘fake news’, defined to include any information which is ‘wholly or partly false’ — even if no harm is caused — leading the UN Special Rapporteur on Freedom of Expression [David Kaye] to observe that it would lead to ‘censorship and the suppression of critical thinking and dissenting voices’. Indeed, the law has already seen a Danish citizen imprisoned for ‘inaccurate’ criticism of the Malaysian police.” [ibid]
2      “Evidence shows that determining whether a message is appealing and therefore likely to be read and shared widely depends not on its veracity, but rather on four characteristics: provocation of an emotional response; presence of a powerful visual component; a strong narrative; and repetition. The most successful problematic content engages moral outrage and ‘high-arousal’ emotions of superiority, anger, fear and mistrust. Capturing emotion is key to influencing behaviour. As for repetition, the reiteration of a message that initially seems shocking and wrong can come to seem an acceptable part of normal discourse when repeated sufficiently: repetition normalizes.” [ibid]
3     A report published in early 2020 estimates that deliberate government action in 2019 caused more than 18,000 hours of internet shutdowns across 122 countries and cost more than $8 billion. India, Chad and Myanmar were the worst offenders in terms of the amount of time the internet was disrupted, while Iraq was the most economically affected, losing an estimated $2.3 billion.
4     There is nothing in Article 19 of the UDHR about ‘disinformation’ being illegal — the Article states only that people should have the right “to seek, receive, and impart information”.