Cost & Consequences
"We didn’t focus on how you could wreck this system intentionally... You could argue with hindsight that we should have, but getting this thing [the Internet] to work at all was non-trivial.”
Vinton G. Cerf [a Google
Vice President]
[1]
This page explores the cost and consequences for society / democracy of the infodemic of misinformation that has been unleashed on the world in recent years. The page is new and still under construction.
1 The Problem
As we've seen, misinformation is generated by a broad range of actors, both naïve and malign, and spread via the internet and social media, often using encryption and the Dark Web (to hide identity/cover tracks). It has the effect of:
• undermining public trust in science, government, the media, business and civil society;
• damaging economic prospects, confidence and morale;
• destabilizing the political process / democratic government; and
• putting lives at risk — not least by increasing the political tension between peoples / nation states and compromising our ability to tackle existential global threats.
2 How Big is The Problem?
The problems created by misinformation are clearly significant, but just how significant is not easy to say. Indeed, we are not yet able to answer even the most basic of questions, such as:
1 What proportion of online content consumed is misinformation?
2 How much of this is created for financial gain, and how much to exert political influence or simply for malice or revenge?
3 What is the evidence that misinformation adversely affects recipients’ wellbeing, views or behaviour?
And what exactly do we mean by ‘consumed’, and how might we measure it? Also, how close might we get using some combination of the following indicators (assuming we can measure them)?
• the number of items identified as fake or misleading that are in circulation (weighted somehow for significance, perhaps on a scale from titillating to potentially life-threatening)?
• the number of bogus websites taken down, and the significance and reach of their content?
• the number of people or companies that come into contact with different forms of misinformation and how they are affected by it?
One wonders whether it might be possible one day to estimate what proportion of bogus websites / social media accounts actually get spotted; one classic statistical approach is sketched below.
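By way of illustration, ecologists estimate hidden population sizes with 'capture-recapture' methods: run two independent detection sweeps and see how much they overlap. Applied here, a small overlap between, say, a platform's own takedowns and an independent researcher's findings would imply a large undetected population. The sketch below is purely hypothetical: the function, the figures and the assumption of genuinely independent sweeps are all invented for illustration.

```python
# Hypothetical sketch: estimating what share of bogus accounts ever get
# spotted, using the classic Lincoln-Petersen capture-recapture estimator.
# All figures below are invented for illustration.

def lincoln_petersen(n1: int, n2: int, overlap: int) -> float:
    """Estimate the total population from two independent detection sweeps.

    n1      -- accounts flagged by the first sweep (e.g. platform moderators)
    n2      -- accounts flagged by a second, independent sweep (e.g. researchers)
    overlap -- accounts flagged by both sweeps
    """
    if overlap == 0:
        raise ValueError("The two sweeps must share at least one detection.")
    return (n1 * n2) / overlap

platform_flags = 4_000    # invented: bogus accounts the platform found
researcher_flags = 1_500  # invented: accounts an independent study found
found_by_both = 500       # invented: accounts appearing in both lists

estimated_total = lincoln_petersen(platform_flags, researcher_flags, found_by_both)
ever_spotted = platform_flags + researcher_flags - found_by_both
print(f"Estimated bogus accounts in total: {estimated_total:,.0f}")      # 12,000
print(f"Proportion ever spotted: {ever_spotted / estimated_total:.0%}")  # 42%
```

On these made-up numbers, barely two-fifths of the bogus accounts would ever have been spotted.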
Cyber-security guru Ben Nimmo elaborates on the difficulties in this short video. As he says:
“Unless you actually have a very reliable way of judging what people were thinking before and after [being exposed to misinformation] you can't measure the change that went on and you can't separate it out from the impact of all the other things that are happening at the same time.”
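Researchers do try to get around this, typically with a 'difference-in-differences' design: track an exposed group and a comparable unexposed group before and after, and subtract the control group's change to strip out "all the other things that are happening at the same time". The snippet below is a toy illustration; the survey scores are invented, not taken from any real study.

```python
# Toy difference-in-differences illustration of the measurement problem
# Nimmo describes. The trust scores (0-100 scale) below are invented.

exposed_before, exposed_after = 62.0, 54.0   # group that saw the misinformation
control_before, control_after = 61.0, 58.0   # comparable group that did not

# The control group's change stands in for everything else happening at the
# same time; subtracting it isolates the change attributable to exposure.
did_estimate = (exposed_after - exposed_before) - (control_after - control_before)
print(f"Estimated effect of exposure: {did_estimate:+.1f} points")  # -5.0
```

The design still depends on the two groups being genuinely comparable, which is precisely the "very reliable way of judging what people were thinking" that Nimmo notes we rarely have.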
3 Analysis of Costs & Consequences
It is difficult to put a figure on the net harm from misinformation.[2] To do so requires careful consideration of the damage to the reputation of individuals and organisations, and to their ability to function; the kinds of financial losses that may be incurred; and other factors such as the risk to public safety, privacy and people’s mental health — and the opportunity cost of devoting resources to tackling the problem.
I've tried in this section to give a flavour of some of these disbenefits as they affect different sectors of society / the economy, starting with a brief observation on how misinformation undermines democracy. I've illustrated the discussion with real-life examples of:
• Reputational Damage
• Compromised Function
• Market Loss
• Financial Loss
• Community Polarisation
• People Hurt / Compromised
• Lives put at Risk
• Opportunity Costs
The main examples [carried mainly in the pulldowns below] are taken from high-income countries, but similar (or worse) consequences can be expected in less privileged regions of the world.[3]
a) Democracy
The EU has described the challenge posed by bad information in stark terms: “Disinformation,” it says, “erodes trust in institutions and in digital and traditional media and harms our democracies by hampering the ability of citizens to take informed decisions. It can polarise debates, create or deepen tensions in society and undermine electoral systems, and have a wider impact on European security.” [4]
Some misinformation is a direct consequence of mistrust in authority, both governmental and scientific. How much of this is generated by conspiracy-theorist influencers is hard to say, but a significant minority of the followers of Q-Anon, anti-vax and other protest movements believe that 'the elites' are lying to them / us, and they will stop at nothing to prove it!
b) International Agencies
It is not uncommon for international agencies to face bogus claims or accusations about their goals or activities. The United Nations, NATO and the World Health Organisation have all been attacked. Criminals have also been mimicking agency websites and domain names to entrap unwary punters. There are some examples in the pulldown.[5]
How far agencies are at fault for the general failure to regulate big tech and control hateful / anti-social online content and cybercrime is an open question. UN panels do regularly call for such action, but unless their financial backers (member states) get behind them, their calls fall on deaf ears.
c) Governments
"TRUTH is important in politics. Never more so than today, when huge issues are at stake affecting the lives of every voter and the future of the nation and the world. Political deceit is a form of theft. When people or businesses get money by deceit they face criminal charges. When politicians win power by deceit they can do vastly more harm, but face no penalty at all."
Peter Oborne [6]
Politicians are often guilty of ‘spinning’ bad news (i.e. not telling the whole truth), and at times manufacturing or embellishing stories or retweeting misinformation to damage or embarrass political opponents or rival parties. And this is not without consequence. There have also been many examples over the years of autocratic regimes putting out misinformation specifically designed to damage or undermine Western democracies. [See pulldown] We don’t know with any confidence what effect such efforts have had — we only hear of unsuccessful attempts... [7]
Misinformation is also generated when a regime is accused of direct involvement or complicity in a heinous crime — for example, Russia’s alleged involvement in the shooting down of Malaysia Airlines flight MH17 over Ukraine in 2014, where the Kremlin’s continued denials in the face of the evidence are intended to provide a smokescreen. [8]
We can, of course, expect black propaganda [9] when countries are at war, but there is no consensus today, in the digital era, as to when a cyber-attack [like the massive SolarWinds Orion customer attack exposed in Dec 2020] or spreading malicious material constitutes an ‘act of war’ and a justification for hostile action in return — assuming the perpetrator(s) can be identified! [10]
d) Business
All businesses, great and small, are vulnerable to misinformation and can (and do) suffer in diverse ways. Malicious stories about a company, or fake reviews of its products or services, may be put online by a competitor or an aggrieved employee or dissatisfied customer. Rumours can also be spread by speculators seeking to scare the market and deflate a company’s share price. [see pulldown]
One Senior Manager [at Kroll], Betsy Blumenthal, points out that:
“Very often, the perpetrators want to manipulate the market and sow consumer confusion, and disparaging a company is a simple way of doing it. It’s often very difficult to tell whether these stories are mischievous, malicious or the work of professional clickbaiters hired by rival companies. Bringing them to justice can also be difficult as they may not even be operating on the same continent.”
Almost half of businesses and a quarter of charities in the UK reported having cyber security breaches or attacks in 2019. [11]
e) Social Media
Social media is both villain and victim when it comes to misinformation: it provides the means for bad actors to circulate and amplify their voice / ply their nefarious 'trades' over the Internet. Their intention may be to defraud or otherwise damage an individual or company; or it may be to influence public attitudes or voting patterns. What's more, studies show that falsehoods spread faster and more widely on social media than the truth.
Misinformation can damage people and organisations in so many ways. Indeed, some argue that social media actually promotes antisocial human behaviour: it can make people angrier and nudge them towards increasingly extreme online material.[12]
At the same time, mis-/bad information takes a heavy toll on those charged with content moderation: it is their job to seek out and take down material that is malicious, indecent, inflammatory or illegal, and they do this work under intense time pressure. This has resulted in thousands of content moderators being seriously damaged by the experience, with many suffering from PTSD. [Here's one man's story.]
The Impact on Young People
A report from the Commission on Fake News & Critical Literacy in Schools [13] found that only 2% of children and young people in the UK have the critical literacy skills they need to tell whether a news story is real or fake, and that fake news was "driving a culture of fear and uncertainty among young people".
Almost two-thirds of teachers surveyed believed that 'fake news' was "having a harmful effect on children’s well-being by increasing levels of anxiety, damaging self-esteem and skewing their world view." Indeed, teachers gave numerous examples of why they were concerned.
Teachers most often cited how 'fake news' increased pupils’ anxieties and fears, and how it "caused confusion and mistrust and allowed skewed or exaggerated views to be spread." It also affected pupils’ body image and self-esteem. In the focus group sessions run by the Commission, teachers also raised concerns about pupils’ tendency to “believe everything without questioning it.”
The teachers’ collective views are summarised in the adjacent word cloud.
f) Media
It is the goal of some bad actors, especially hostile foreign governments and their proxies, to entrap professional journalists into writing or covering stories for them and in doing so to amplify the volume and reach of their handiwork (whilst hiding their involvement).[14]
The mainstream media have been concerned about the proliferation of misinformation on the Internet for years and have recently stepped up their efforts to create systems whereby the provenance and technical integrity of content can be confirmed, so as to establish a 'chain of trust' from the publisher to the consumer.[15] There are also growing numbers of factcheckers and organisations (like Newsguard) that rate news websites for factual accuracy...
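To make the 'chain of trust' idea concrete, here is a minimal sketch of the underlying technique: the publisher signs a digest of each item, and anyone holding the publisher's public key can check that the content is intact and genuinely theirs. This illustrates the general cryptographic approach only, not Project Origin's actual protocol; key distribution is assumed away, and it relies on the third-party Python 'cryptography' package.

```python
# Minimal sketch of a publisher-to-consumer chain of trust: sign a hash of
# the content at publication, verify it on receipt. Illustrative only; this
# is not Project Origin's actual protocol.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Publisher side: sign the content digest at publication time.
publisher_key = Ed25519PrivateKey.generate()
article = b"Full text of the news item as published..."
signature = publisher_key.sign(hashlib.sha256(article).digest())

# Consumer side: recompute the digest and verify it against the signature,
# using the publisher's public key (assumed distributed out of band).
public_key = publisher_key.public_key()
received = article  # in reality, fetched over the network
try:
    public_key.verify(signature, hashlib.sha256(received).digest())
    print("Provenance check passed: intact and from this publisher.")
except InvalidSignature:
    print("Provenance check FAILED: altered or not from this publisher.")
```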
g) NGOs
It is easy for charities to fall victim to misinformation: misleading reports or false accusations of bad intent or inappropriate behaviour can undermine public trust. Charities involved in human rights and international development are especially vulnerable because of the nature of their work, and this can and does sometimes put frontline staff in real danger.
This issue is explored in 'Faking It'. As the report notes: "The circulation of misinformation about NGOs, malicious or otherwise, carries a reputational risk. It must be carefully monitored, and where necessary, challenged. Misinformation disseminated by NGOs carries an even greater reputational risk. All content has to be accurate, relevant and, if based on information from another source, rigorously verified. Conflicts and natural disasters are increasingly accompanied by rumours and misinformation on social media making humanitarian operations in these areas even more difficult."
One example of disinformation — investigated by Snopes and detailed in the pulldown — involved allegations that the American Red Cross had been charging victims of Hurricane Harvey for its services... By undermining confidence, such attacks can reduce charity donations / income.
h) Netizens
It is becoming increasingly difficult for netizens to know what’s true and what's false, especially when some elements of a bogus story are correct, albeit with information misleadingly presented or key facts omitted. This can lead to people drawing the wrong conclusions and making bad decisions — an extreme example is the false posts and rumours of antifa looting during wildfires in the States, which led some people to ignore evacuation orders and stay put, risking their lives.
Internet users are exposed to a wide range of scams, from phishing emails designed to access sensitive data or steal people’s identities, to fake customer reviews on platforms like Amazon and popular comparison sites. False information can make things look bad, dangerous or unreliable. Here's an example from a well-known consumer magazine (and there are more examples in the pulldown):
In 2019 Which? reported finding "thousands of ‘fake’ customer reviews on popular tech categories on Amazon." In the course of its investigation it found: "A network of Facebook groups set up to reimburse shoppers for Amazon purchases in exchange for positive reviews. Sellers demanding a high or five-star rating in return for a refund on their purchase. Refusal to reimburse costs when ‘honest’ reviews were posted." How many people may have been deceived by these practices over the years, and at what cost to their wellbeing or peace of mind, is impossible to know.
One historian, Janet Abbate [Virginia Tech] puts the dilemma faced by netizens in the following terms: "We’ve ended up at this place of security through individual vigilance... It’s kind of like safe sex. It’s sort of ‘the Internet is this risky activity, and it’s up to each person to protect themselves from what’s out there'... There’s this sense that the [Internet] provider's not going to protect you. The government’s not going to protect you. It’s kind of up to you to protect yourself.”
Notes
1 The quote is from a 2015 article in The Washington Post [part of a serialisation of Craig Timberg's book: 'The Threatened Net: How the Internet Became a Perilous Place']. Vinton Cerf is credited (with Bob Kahn) with inventing the Internet communication protocols we use today and the system referred to as the Internet.
2 This has not stopped a few brave souls from trying: one recent study put the figure at around $78 billion a year. This total includes: health misinformation (including anti-vaccination stories) leading to $9 billion in unnecessary healthcare costs and other expenditures; financial misinformation costing companies $17 billion a year; platforms spending $3 billion a year trying to combat misinformation and increase safety; brands losing $235 million annually by advertising next to fake news items; and companies and individuals spending $9 billion a year trying to repair reputations damaged by 'fake news'.
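It is worth noting that the components itemised above sum to roughly half the headline figure, so the balance of the $78 billion presumably comes from categories not repeated in this note:
\[
9 + 17 + 3 + 0.235 + 9 = 38.235 \;\text{(\$ billions)}
\]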
3 Please note, the examples quoted are only intended as illustrative of the kinds of things that are happening; no attempt has been made to provide a balanced overview of the impact in each of the different categories of disbenefit.
4 Democracy can’t prosper without accurate information and public trust in politicians, government and the media — or more accurately, people need leaders and institutions that are trustworthy, and practices that are transparent. Trustworthiness is a combination of honesty, competence and reliability. Interesting to note that Wikipedia talks, not about ‘truth’, but ‘verifiability’...
5 Of course, there’s always the possibility that some accusations turn out to be true, for example the abuse of local women and girls by UN peacekeepers in Haiti. There also appears to have been a coverup at the Organisation for the Prohibition of Chemical Weapons about a possible chemical gas attack in Douma, Syria (Independent).
6 Peter Oborne has a website entitled 'The lies, falsehoods and misrepresentations of Boris Johnson and his government'.
7 Examples of European or North American agencies’ efforts to use dirty tricks on Russia, China, N Korea and the like are rather less in the news, at least in the West. Until recently, Britain’s favoured approach appears to have been to support (with grants) indigenous NGOs in countries of interest / concern. That said, GCHQ does have a ‘dirty tricks’ unit, the Joint Threat Research Intelligence Group (whose existence was exposed by Edward Snowden), and in April 2020 a new National Cyber Force (run jointly by GCHQ/MOD) was established, “dedicated to offensive action to combat security threats, hostile states, terror groups, extremism, hackers, disinformation and election interference.” The British Army also has a psychological warfare unit (77th Brigade) and is reconfiguring its 6th Division to fight cyber threats. GCHQ, MI6, MI5 and the National Cyber Security Centre are also working to neutralise malefactors and mischief-makers and protect vital infrastructure from cyberattack...
8 Russia’s approach to information has been characterised by one European Commissioner in the following terms: “there are no facts only interpretations” and “the truth is what people believe.” Comment by Věra Jourová, Commissioner for Justice, Consumers & Gender Equality, during an Atlantic Council Fireside Chat [8 Dec 2020].
9 Black propaganda is a form of propaganda intended to create the impression that it was created by those it is supposed to discredit. It is typically used to vilify or embarrass the enemy through misrepresentation. It contrasts with 'grey propaganda' (which does not identify its source) and 'white propaganda' (which does not disguise its origins at all). The major characteristic of black propaganda is that the audience are not aware that someone is influencing them, and do not feel that they are being pushed in a certain direction. Wikipedia cites many examples.
10 The Tallinn Manual (originally the 'Tallinn Manual on the International Law Applicable to Cyber Warfare') is an academic, non-binding study on how international law (in particular the jus ad bellum and international humanitarian law) applies to cyber conflicts and cyber warfare.
11 The UK publishes an annual Cyber Security Breaches Survey of businesses and charities. In its most recent survey (published in Mar 2020) it notes that "cyber attacks have evolved and become more frequent. Almost half of businesses (46%) and a quarter of charities (26%) report having cyber security breaches or attacks in the last 12 months. Like previous years, this is higher among medium businesses (68%), large businesses (75%) and high-income charities (57%)."
12 In Myanmar Facebook has been implicated in genocide — in March 2018 the UN accused the platform of playing “a determining role in stirring up hatred against the Rohingya Muslim minority.” It “had morphed into a ‘beast’ that helped to spread vitriol against them.” It has been reported that some 354 villages were levelled, more than 10,000 people are believed to have been killed, many more were subjected to sexual violence, and almost 700,000 were driven out of Rakhine State.
13 'Fake News and Critical Literacy', final report of the Commission on Fake News and the Teaching of Critical Literacy in Schools, compiled by the National Literacy Trust (13 June 2018).
14 An example is a cluster of websites known as IUVM ('International Union of Virtual Media'), which appears to have been laundering Iranian state messaging by claiming it as their own and passing it on to other users, who reproduce it without showing its ultimate origin...
15 An example is Project Origin, 'protecting trusted media'.