The Cost

Cost & Consequences
“We didn’t focus on how you could wreck this system intentionally... You could argue with hindsight that we should have, but getting this thing [the Internet] to work at all was non-trivial.”
Vinton G. Cerf [a Google Vice President] [1]
This page explores the cost and consequences for society / democracy of the infodemic of misinformation that has been unleashed on the world in recent years. The page is new and still under construction.
1   The Problem
As we've seen, misinformation is generated by a broad range of actors, both naïve and malign, and spread via the internet and social media, often using encryption and the Dark Web (to hide identity/cover tracks). It has the effect of:
•   undermining public trust in science, government, the media, business and civil society;
•   damaging economic prospects, confidence and morale;
•   destabilizing the political process / democratic government; and
•   putting lives at risk — not least by increasing the political tension between peoples / nation states and compromising our ability to tackle existential global threats.
2   How Big is The Problem?
The problems created by misinformation are clearly significant, but just how significant is not easy to say. Indeed, we are not yet able to answer even the most basic of questions, such as:
1   What proportion of online content consumed is misinformation?
2   How much of this is created for financial gain, and how much to exert political influence or simply for malice or revenge?
3   What is the evidence that misinformation adversely affects recipients’ wellbeing, views or behaviour?
And what exactly do we mean by ‘consumed’, and how might we measure it? Also, how close might we get using some combination of the following indicators (assuming we can measure them)?
•   the number of items identified as fake or misleading that are in circulation (weighted somehow for significance, perhaps on a scale from titillating to potentially life-threatening)?
•   the number of bogus websites taken down, and the significance and reach of their content?
•   the number of people or companies that come into contact with different forms of misinformation and how they are affected by it?

One wonders whether it might be possible one day to estimate what proportion of bogus websites / social media accounts actually get spotted...
Cyber-security guru Ben Nimmo elaborates on the difficulties in this short video. As he says:

“Unless you actually have a very reliable way of judging what people were thinking before and after [being exposed to misinformation] you can't measure the change that went on and you can't separate it out from the impact of all the other things that are happening at the same time.”
3   Analysis of Costs & Consequences
 It is difficult to put a figure on the net harm from misinformation.[2] To do so requires careful consideration of the damage to the reputation of individuals and organisations, and their ability to function; the kind of financial losses that may be incurred, and other factors such as the risk to public safety, privacy and people’s mental health — and the opportunity cost of devoting resources to tackling the problem.
I've tried in this section to give a flavour of some of these disbenefits as they affect different sectors of society / the economy, starting with a brief observation on how misinformation undermines democracy. I've illustrated the discussion with real-life examples of:
•    Reputational Damage
•    Compromised Function
•    Market Loss
•    Financial Loss
•    Community Polarisation
•    People Hurt / Compromised
•    Lives put at Risk
•    Opportunity Costs

The main examples [carried mainly in the pulldowns below] are taken from high-income countries, but similar (or worse) consequences can be expected in less privileged regions of the world.[3]
a)   Democracy
The EU has described the challenge posed by bad information in stark terms: “Disinformation,” it says, “erodes trust in institutions and in digital and traditional media and harms our democracies by hampering the ability of citizens to take informed decisions. It can polarise debates, create or deepen tensions in society and undermine electoral systems, and have a wider impact on European security.” [4]
Some misinformation is a direct consequence of mistrust in authority, both governmental and scientific. How much of this is generated by conspiracy theorist influencers is hard to say, but a significant minority of the followers of QAnon, anti-vax and other protest movements believe that 'the elites' are lying to them, and they will stop at nothing to prove it!
  • Trust in Democracy & the Internet

    •    Serious damage has been done to democracy and the USA’s standing in the world by the behaviour of Donald Trump. He is said to have been responsible for more than 20,000 lies and misstatements during his Presidency (see also Anne Applebaum).


    •    According to a survey by the Pew Research Center (in Apr 2018), almost one third (32%) of experts consulted felt that people’s well-being was being “more harmed than helped by the Internet.”[a]


    •    There has been much discussion about interference by Russia in US elections. Here's an example of apparent Chinese interference in Taiwan's electoral processes. On 10 Jan, the day before Taiwan's 2020 election, a rumour started circulating online that a new type of SARS had reached Taiwan and that it would be unsafe for citizens to vote in person. There were also rumours that Taiwan’s Central Election Commission would be using a new form of ink that does not dry easily, which would invalidate votes for one of the candidates, Han Kuo-yu. China made no secret of its concern that the DPP, a party hostile to Beijing, looked likely to win. And it did.[b]


    a]  The survey found that “a sizable majority of online adults (70%) continue to believe the internet has been a good thing for society. Yet the share of online adults saying this has declined by a modest but still significant 6 percentage points since early 2014, when the Center first asked the question.” Interestingly, this worsening perspective on the social benefits of the Internet contrasts with the same respondents’ view that the Internet continued to be a good thing for them individually...


    b]  President Tsai Ing-wen of the DPP ran on the promise that "her people would never relinquish their democratic freedoms" to China.

b)   International Agencies
It is not uncommon for international agencies to face bogus claims or accusations about their goals or activities. The United Nations, NATO and the World Health Organisation have all been attacked. Criminals have also been mimicking agency websites and domain names to entrap unwary punters. There are some examples in the pulldown.[5]

How far agencies themselves are at fault for the general failure to regulate big tech and control hateful / anti-social online content and cybercrime is an open question. UN panels regularly call for such action, but unless their financial backers (member states) get behind them, their calls fall on deaf ears.
  • Attempts to Discredit International Agencies

    •    In July 2020 the United Nations condemned claims shared hundreds of times in multiple Facebook posts purporting to list a series of ‘mission goals’, supposedly part of a ‘new world order’.  It said they were "completely false" and "part of a long-standing far-right misinformation campaign.”


    •    Around the same time, a study claimed that hackers — said to be aligned with Russian security interests — had been engaged in a sustained campaign to compromise selected news websites in Poland and Lithuania to plant false stories aimed at "discrediting NATO and delegitimising the transatlantic alliance." In one example, a false article was inserted into the archive of a Lithuanian publisher claiming that German soldiers serving with NATO had desecrated a Jewish cemetery. (More than a dozen other incidents are referenced in the report.)


    •    The World Health Organisation is regularly accused by conspiracy theorists of being “the lackeys of Big Pharma”. Whether this causes serious reputational damage seems doubtful, but it remains a concern. In the summer of 2020 the WHO reported hackers and cyber scammers taking advantage of the Covid-19 pandemic to send fraudulent email and WhatsApp messages mimicking its websites and news releases, designed to trick people into clicking on malicious links or opening corrupted attachments. [Newsguard has produced a series of reports for the WHO dealing with Covid-19 disinformation.]

c)   Governments
 "TRUTH is important in politics. Never more so than today, when huge issues are at stake affecting the lives of every voter and the future of the nation and the world. Political deceit is a form of theft. When people or businesses get money by deceit they face criminal charges. When politicians win power by deceit they can do vastly more harm, but face no penalty at all." Peter Oborne  [6]
Politicians are often guilty of ‘spinning’ bad news (i.e. not telling the whole truth), and at times manufacturing or embellishing stories or retweeting misinformation, to damage or embarrass political opponents or rival parties. And this is not without consequence. There have also been many examples over the years of autocratic regimes putting out misinformation specifically designed to damage or undermine Western democracies. [See pulldown] We don’t know with any confidence what effect such efforts have had — we only hear of unsuccessful attempts... [7]
Misinformation is also generated when a regime is accused of direct involvement or complicity in a heinous crime — for example, Russia's alleged involvement in the shooting down of Malaysia Airlines flight MH17 over Ukraine in 2014, where the Kremlin's continued denials in the face of the evidence are intended to provide a smokescreen. [8]
We can, of course, expect black propaganda [9] when countries are at war, but there is no consensus today, in the digital era, as to when a cyber-attack [like the massive SolarWinds Orion supply-chain attack exposed in Dec 2020] or the spreading of malicious material constitutes an ‘act of war’ and a justification for hostile action in return — assuming the perpetrator(s) can be identified![10]
  • Reputational Damage to Governments

    •    The Kremlin and its proxies are accused of promulgating an average of around 5 fake claims or stories a day. Russia is also accused of reacting to accusations by obfuscation and denial, for example in response to the downing of flight MH17 over Ukraine, and the Novichok poisoning of the Skripals and dissident Alexei Navalny. There have also been multiple accusations of state-sponsored doping in international sporting competitions which have led to lengthy bans on Russia competing.


    •    China’s Wolf Warrior Diplomacy[a] has done much to damage the Chinese Communist Party’s soft power, especially during the coronavirus pandemic, when China was accused of putting out disinformation about Covid-19. Other examples of skulduggery include: claims that some 80 French lawmakers had co-signed a disparaging statement about the WHO — France was incensed; and the circulation of a doctored image of an Australian soldier holding a bloodied knife to the throat of an Afghan child.


    •    The UK’s reputation was seriously damaged by the ‘Dodgy Dossier’ incident in 2003, when the Blair Government issued a highly suspect briefing document as ‘proof’ that Iraq had weapons of mass destruction. 


    a]  Wolf Warrior Diplomacy describes an aggressive style of diplomacy adopted by Chinese diplomats under President Xi Jinping. The term derives from Wolf Warrior 2, a Rambo-style Chinese action film.

d)   Business
All businesses, great and small, are vulnerable to misinformation and can (and do) suffer in diverse ways. Malicious stories about a company, or fake reviews of its products or services, may be put online by a competitor or an aggrieved employee or dissatisfied customer. Rumours can also be spread by speculators seeking to scare the market and deflate a company’s share price. [see pulldown]
One Senior Manager [at Kroll], Betsy Blumenthal, points out that: “Very often, the perpetrators want to manipulate the market and sow consumer confusion, and disparaging a company is a simple way of doing it. It’s often very difficult to tell whether these stories are mischievous, malicious or the work of professional clickbaiters hired by rival companies. Bringing them to justice can also be difficult as they may not even be operating on the same continent.”
Almost half of businesses and a quarter of charities in the UK reported cyber security breaches or attacks in 2019. [11]
  • Damage to Businesses Caused by Rumour & Fakery

    •    In 2016 shares in €35 billion French construction firm Vinci briefly crashed by almost 20% after a fake press release claimed that the company had sacked its chief financial officer and mis-stated its results. 


    •    In June 2017, Ethereum (an open-source blockchain cryptocurrency) was targeted by someone on 4chan who put about a rumour that its founder, Vitalik Buterin, had been killed in a car crash. After the news was posted online the  market value of Ethereum fell by around $4 billion.


    •    In Jan 2019, a video surfaced purportedly showing a Tesla self-driving vehicle knock over a robot prototype at CES, one of the world’s biggest consumer electronics shows. “The video went viral and media outlets ran sensational headlines declaring that the robot was either ‘killed’ or ‘mowed down’ by a driverless car that failed to spot what could have been a pedestrian. However, Tesla did not have a self-driving model at the time, and the robot that was ‘killed’ was actually part of an elaborate publicity stunt by Promobot, the Russian firm that developed it. Despite subsequent debunking of the hoax, many people still believe it happened.”


    •    In May 2019 a WhatsApp message shared by Metro Bank customers on Twitter, linked to a BBC story about Metro’s falling share price, urged anyone with a Metro Bank account or safety deposit box to “empty [them] as soon as possible”, claiming the lender was facing financial difficulties and may be “shut down … or going bankrupt.” A spokesperson for Metro said  there was no truth to these rumours. But the damage had been done... 


    •    In Aug 2020 it was revealed that over 300,000 malicious links advertising fake get-rich-quick schemes designed to trick people into handing their money to cyber criminals had been taken down in a crackdown by the UK's National Cyber Security Centre.

e)   Social Media
Social media is both villain and victim when it comes to misinformation: it provides the means for bad actors to circulate and amplify their messages and ply their nefarious 'trades' over the Internet. Their intention may be to defraud or otherwise damage an individual or company, or to influence public attitudes or voting patterns. What's more, studies show that falsehoods spread faster and more widely on social media than the truth.
Misinformation can damage people and organisations in many ways. Indeed, some argue that social media actually promotes antisocial behaviour: it can make people angrier and nudge them towards increasingly extreme online material.

One extreme example is false rumours of child abduction posted on WhatsApp which have led to many people being lynched in India and elsewhere in recent years... [12]
At the same time, mis-/bad information takes a heavy toll on those charged with content moderation: it is their job to seek out and take down material that is malicious, indecent, inflammatory or illegal, and they do this work under intense time pressure. Thousands of content moderators have been seriously damaged by the experience, with many suffering from PTSD. [Here's one man's story.]
The Impact on Young People
A report from the Commission on Fake News & Critical Literacy in School [13] found that only 2% of children and young people in the UK have the critical literacy skills they need to tell whether a news story is real or fake, and that fake news was "driving a culture of fear and uncertainty among young people".

Almost two-thirds of teachers surveyed believed that  'fake news' was "having a harmful effect on children’s well-being by increasing levels of anxiety, damaging self-esteem and skewing their world view." Indeed, teachers gave numerous examples of why they were concerned.
Teachers most often cited how 'fake news' increased pupils’ anxieties and fears, and how it "caused confusion and mistrust and allowed skewed or exaggerated views to be spread." It also affected pupils’ body image and self-esteem. In the focus group sessions run by the Commission teachers also raised concerns about pupils’ tendency to “believe everything without questioning it.”
The teachers' collective views are summarized in the adjacent word cloud.
  • Fake Websites & Bogus Accounts Taken Down / People Damaged

    •    In Nov 2019, Facebook revealed that it had shut down 5.4 billion fake accounts on its main platform during the year; that compares with about 3.3 billion fake accounts removed in 2018. Facebook has acknowledged that as much as 5% of its monthly user base of nearly 2.5 billion consists of fake accounts.


    [You can find a long list of Twitter accounts suspended or closed down on Wikipedia, and watch a short video on how to report fake accounts on YouTube, viewed more than 1 m times since it was posted in 2013.]


    •    In May 2020, Facebook agreed to pay $52m to content moderators in the US as compensation for mental health issues they developed — some 11,250 moderators are eligible.


    •    Since the term 'deepfake' was coined (in 2017), the number of detected deepfakes on the internet has been increasing exponentially. In July 2020 the number was estimated to have reached almost 50,000.

f)   Media
 It is the goal of some bad actors, especially hostile foreign governments and their proxies, to entrap professional journalists into writing or covering stories for them and in doing so to amplify the volume and reach of their handiwork (whilst hiding their involvement).[14]
The mainstream media have been concerned about the proliferation of misinformation on the Internet for years and have recently stepped up their efforts to create systems whereby the provenance and technical integrity of content can be confirmed so as to establish a 'chain of trust' from the publisher to the consumer.[15] There are also  growing numbers of factcheckers and organisations (like Newsguard) that rate news websites for factual accuracy...
  • Fact Checking Improving, but Much Still to Do

    •    At the end of 2020 there were ~300 fact-checking organisations operating in 80+ countries. Over 60 of these groups are verified signatories of the International Fact Checking Network.


    [Politics isn’t the only driver for fact-checkers. Many are concentrating their efforts on viral hoaxes and other forms of bad information online — often in coordination with the big digital platforms on which it spreads. The goal of factcheckers is ultimately “to increase the cost of lying.”]


g)   NGOs
It is easy for charities to fall victim to misinformation: misleading reports or false accusations of bad intent or inappropriate behaviour can undermine public trust. Charities involved in human rights and international development are especially vulnerable because of the nature of their work, and this can and does sometimes put frontline staff in real danger.
This issue is explored in 'Faking It'. As the report notes: "The circulation of misinformation about NGOs, malicious or otherwise, carries a reputational risk. It must be carefully monitored, and where necessary, challenged. Misinformation disseminated by NGOs carries an even greater reputational risk. All content has to be accurate, relevant and, if based on information from another source, rigorously verified. Conflicts and natural disasters are increasingly accompanied by rumours and misinformation on social media making humanitarian operations in these areas even more difficult."
One example of disinformation — investigated by Snopes and detailed in the pulldown — involved allegations that the American Red Cross had been charging victims of Hurricane Harvey for its services... By undermining confidence, such attacks can reduce charity donations / income.
  • NGOs' Work Compromised by Bad Information

    Here are a couple of illustrative examples from the 'Faking It' report [there are many more...]:


    •    "The Red Cross in the US came under attack from a raft of fake news stories in the aftermath of Hurricane Harvey in Texas. In one video posted on Facebook which subsequently went viral, it was alleged that the charity had stolen donated items from churches in Houston and then sold some and burned others. The claim was found to be “mostly false” by online fact checking organisation Snopes. Snopes also investigated claims that the Red Cross was charging victims of the hurricane for its services. The fact checker concluded that this was a false claim rooted in the fact that the Red Cross did, at one time, charge WWII soldiers for off-base food and lodging."


    •    Individuals working in the aid sector can be the targets of fake news stories and online abuse. "Girish Menon, Chief Executive of ActionAid, gave an interview to Sky News in the early part of 2017. In it he expressed the charity’s concerns about the planned state visit of President Trump in the light of his views towards women and marginalised communities. At midnight I got a message from my son to say something had popped up on LinkedIn about me being an ISIS agent. I was tired and laughed it off at that stage, but the next morning I had received many messages as had the Chair of ActionAid. We discovered that the message originated from a fake news site hosted in the US." 


    LinkedIn removed the post, but the reputational risk for ActionAid was very clear...

h)   Netizens
It is becoming increasingly difficult for netizens to know what’s true and what's false, especially when some elements of a bogus story are correct, albeit with information misleadingly presented or key facts omitted. This can lead to people drawing the wrong conclusions and making bad decisions — an extreme example is false posts and rumours of antifa looting during wildfires in the States which led some people to ignore evacuation orders and stay put, risking their lives.
Internet users are exposed to a wide range of scams, from phishing emails designed to access sensitive data or steal people's identity, to fake customer reviews on platforms like Amazon and popular comparison sites. False information can make things look bad, dangerous or unreliable. Here's an example from a well-known consumer magazine (and there are more examples in the pulldown):
In 2019 Which? reported finding "thousands of ‘fake’ customer reviews on popular tech categories on Amazon." In the course of its investigation it found: "A network of Facebook groups set up to reimburse shoppers for Amazon purchases in exchange for positive reviews. Sellers demanding a high or five-star rating in return for a refund on their purchase. Refusal to reimburse costs when ‘honest’ reviews were posted." How many people may have been deceived by these practices over the years, and at what cost to their wellbeing or peace of mind, is impossible to know.
One historian, Janet Abbate [Virginia Tech] puts the dilemma faced by netizens in the following terms: "We’ve ended up at this place of security through individual vigilance... It’s kind of like safe sex. It’s sort of ‘the Internet is this risky activity, and it’s up to each person to protect themselves from what’s out there'... There’s this sense that the [Internet] provider's not going to protect you. The government’s not going to protect you. It’s kind of up to you to protect yourself.” 
  • Phishing, Scams & Fake Advice...

    •    An FBI review of internet cybercrime (published in Feb 2020) found that the UK was top of the list for victims outside the United States, and by some margin, with almost 94,000 victims in 2019. [Canada was second, with 'just' 3,721...]


    •    A study estimates that around 5,800 people were admitted to hospital as a result of false information about Covid-19 on social media in the first three months of 2020, and that at least 800 may have died.


    •    Research by Gallup (in 2019) reported that one in three French people believed all vaccines to be dangerous — the highest percentage of the 144 countries surveyed. An Ipsos survey (Nov 2020) suggested that 46% of French adults will refuse — or say they will refuse — the Pfizer jab or any other kind of anti-Covid jab. This compares with 36% in the United States, 30% in Germany, 21% in Britain and 16% in India. [Examples cited in Unherd]


    [More examples to be added...]





Notes
1     The quote is from a 2015 article in The Washington Post [part of a serialisation of Craig Timberg's book: 'The Threatened Net: How the Internet Became a Perilous Place']. Vinton Cerf is credited (with Bob Kahn) with inventing the Internet communication protocols we use today and the system referred to as the Internet.
2    This has not stopped a few brave souls from trying: one recent study put the figure at around $78 billion a year. This total includes: health misinformation (including anti-vaccination stories) leading to $9 billion in unnecessary healthcare costs and other expenditures; financial misinformation costing companies $17 billion a year; platforms spending $3 billion a year trying to combat misinformation and increase safety; brands losing $235 million annually by advertising next to fake news items; and companies and individuals spending $9 billion a year trying to repair damaged reputations due to 'fake news'.
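A quick back-of-the-envelope check on the itemised figures in the note above: they sum to roughly $38 billion, about half of the $78 billion headline, so the balance must come from components the study counts but which are not itemised here. The sketch below uses only the figures quoted in the note (the component labels are paraphrased):

```python
# Sum of the itemised misinformation costs quoted in the note (US$ billions).
# Labels are paraphrased from the note; figures are the study's own.
components = {
    "unnecessary healthcare costs (incl. anti-vax stories)": 9.0,
    "financial misinformation costing companies": 17.0,
    "platform spending on combating misinformation": 3.0,
    "brand losses from advertising next to fake news": 0.235,
    "reputation-repair spending": 9.0,
}

itemised = sum(components.values())
headline = 78.0  # the study's headline annual total

print(f"Itemised: ${itemised:.3f}bn; unitemised balance: ${headline - itemised:.3f}bn")
# → Itemised: $38.235bn; unitemised balance: $39.765bn
```

In other words, the named components account for well under half of the headline figure, which is worth bearing in mind when the $78 billion total is quoted on its own.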
3    Please note, the examples quoted are intended only to illustrate the kind of thing that is happening; no attempt has been made to provide a balanced overview of the impact in each of the different categories of disbenefit.
4    Democracy can’t prosper without accurate information and public trust in politicians, government and the media — or more accurately, people need leaders and institutions that are trustworthy, and practices that are transparent. Trustworthiness is a combination of honesty, competence and reliability. Interesting to note that Wikipedia talks, not about ‘truth’, but ‘verifiability’...
5    Of course, there’s always the possibility that some accusations turn out to be true, for example the abuse of local women and girls by UN peacekeepers in Haiti. There also appears to have been a coverup at the Organisation for the Prohibition of Chemical Weapons over a possible chemical gas attack in Douma, Syria (Independent).
6    Peter Oborne has a website entitled 'The lies, falsehoods and misrepresentations of Boris Johnson and his government'.
7    Examples of European or North American agencies’ efforts to use dirty tricks on Russia, China, N Korea and the like are rather less in the news, at least in the West. Up until recently, Britain’s favoured approach appears to have been to support (with grants) indigenous NGOs in countries of interest / concern. That said, GCHQ does have a ‘dirty tricks’ unit, the Joint Threat Research Intelligence Group (whose existence was exposed by Edward Snowden), and in April 2020 a new National Cyber Force (run jointly by GCHQ/MOD) was established, “dedicated to offensive action to combat security threats, hostile states, terror groups, extremism, hackers, disinformation and election interference.” The British Army also has a psychological warfare unit (77th Brigade) and is reconfiguring its 6th Division to fight cyber threats. GCHQ, MI6, MI5 and the National Cyber Security Centre are also working to neutralise malefactors and mischief-makers and protect vital infrastructure from cyberattack...
8    Russia’s approach to information has been characterised by one European Commissioner in the following terms: “there are no facts only interpretations” and “the truth is what people believe.” Comment by Věra Jourová, Commissioner for Justice, Consumers & Gender Equality, during an Atlantic Council Fireside Chat [8 Dec 2020]
9    Black propaganda is a form of propaganda intended to create the impression that it was created by those it is supposed to discredit. It is typically used to vilify or embarrass the enemy through misrepresentation. It contrasts with 'grey propaganda' (which does not identify its source), and 'white propaganda' (which does not disguise its origins at all). The major characteristic of black propaganda is that the audience are not aware that someone is influencing them, and do not feel that they are being pushed in a certain direction. Wikipedia cites many examples.
10   The Tallinn Manual (originally the 'Tallinn Manual on the International Law Applicable to Cyber Warfare') is an academic, non-binding study on how international law (in particular the jus ad bellum and international humanitarian law) applies to cyber conflicts and cyber warfare.
11  The UK publishes an annual Cyber Security Breaches Survey of businesses and charities. In its most recent survey (published in Mar 2020) it notes that "cyber attacks have evolved and become more frequent. Almost half of businesses (46%) and a quarter of charities (26%) report having cyber security breaches or attacks in the last 12 months. Like previous years, this is higher among medium businesses (68%), large businesses (75%) and high-income charities (57%)."
12  In Myanmar Facebook has been implicated in genocide — in March 2018 the UN accused the platform of playing “a determining role in stirring up hatred against the Rohingya Muslim minority.” It “had morphed into a ‘beast’ that helped to spread vitriol against them.” It has been reported that some 354 villages were levelled, more than 10,000 people are believed killed and many more subjected to sexual violence, and almost 700,000 driven out of Rakhine State.
13   'Fake News and Critical Literacy', final report of the Commission on Fake News and the Teaching of Critical Literacy in Schools, compiled by the National Literacy Trust (13 June 2018)
14  An example is a cluster of websites known as IUVM ('International Union of Virtual Media') which appears to have been laundering Iranian state messaging by claiming it as their own and passing it on to other users, who reproduce it without showing its ultimate origin...
15  An example is  Project Origin, 'protecting trusted media'.