The World Economic Forum (WEF) ranks the spread of misinformation and fake news among the world’s top global risks. It points out that more than 50% of the world’s population (an estimated 4.1 billion people) is now online — with roughly one million more joining each day — and notes that “the current lack of global technology governance and the presence of cybersecurity blind spots increase the risk of a fragmented cyberspace and competing technology regulations”. The Forum goes on to speculate on how “a fragmented cyberspace and differing technological standards could hinder economic growth, exacerbate geopolitical rivalries and further divide societies.”
In 2014 the Global Commission on Internet Governance was launched (by the Centre for International Governance Innovation and Chatham House). Its hard-hitting 2016 report, 'One Internet', argues that “Internet governance is one of the most pressing global public policy issues of our time." It goes on to point out that "to realize its full potential, the Internet of the future will need to be open, secure, trustworthy and accessible to all”.
Here are my six top Internet conundrums — all highly contentious!
1 Should the Internet be regulated to reduce misuse, disinformation and criminal activity, and if so, how and by whom?
2 Should big tech be broken up to increase competition and encourage innovation, and if so, how?
3 How should society weigh the benefits of anonymity and encryption against the social, economic & political costs?
4 Should netizens have rights, including the right to own their data and have protection from prying eyes and surveillance capitalism?
5 Should access to the Internet be a basic human right, and if so, how might this be achieved?
6 What can be done to maintain the Internet as a global resource and prevent its fragmentation?
These conundrums are explored here.
Here are eight challenges we face:
1: False information on the internet helping undermine public trust in government, the media, business and civil society, damaging confidence and morale, and destabilizing the political process.
2: The problem of not knowing what’s true anymore, especially when aspects of ‘fake news’ items are often correct, albeit with the information misleadingly presented.
3: ‘Fake news’ turning out to be ‘stickier’ and more toxic than real news — it can be produced anonymously and at little cost; and it spreads significantly faster, corrupting public understanding and provoking distrust, hatred and violence. And for the victims, it can be difficult, time-consuming and expensive to counter (‘mud sticks’).
4: Social media’s tendency to bring out the worst in us, and to attract trolls, crooks, perverts and other mendacious individuals. Bad bots and cyborgs now infest the web, capitalising on Big Tech’s attention-seeking algorithms, and promoting and amplifying ill-informed or malicious voices.
5: The Tech Giants perfecting ‘surveillance capitalism’ — they use a multiplicity of (unregulated) black box algorithms and business models that involve profiting from our private data and biometrics. They have also shown themselves to be unable or unwilling to purge their platforms of fake, extremist or illegal material, and have become too powerful to control.
6: A vocal minority of conspiracy theorists disseminating a toxic mixture of fabricated content and misleading argument, often in pursuit of some ‘deeper truth’. This promotes polarisation and constrains society’s ability to tackle existential threats, not least threats to public health and the environment.
7: Malign actors, extremists and hostile foreign powers engaging in information warfare, using disinformation to poison social intercourse, damage markets and discredit open society, and in the process putting at risk peaceful coexistence — there is no consensus on when a cyber-attack or spreading malicious material becomes an ‘act of war’.
8: Failure to regulate/control online content and cybercrime and to protect people’s data, privacy and security; and poor coordination between the agencies and organisations that are fighting fakery and seeking the truth.
These issues are discussed in more detail on a separate page.
It is difficult to provide a satisfactory definition of the ‘Digital Economy’ because the boundaries between digital and other economic activities have become increasingly blurred as a result of social media, internet searching, do-it-yourself publishing and a plethora of apps. But in essence the term refers to an economy based on computing conducted over the internet. It includes:
• e-infrastructure — hardware, software, telecom, networks, human capital, etc. over which people and organizations communicate, collaborate and search for information;
• e-enterprise — how work is conducted over computer networks; and
• e-commerce — items sold online and goods transferred.
The sheer size and impact of the digital economy is breath-taking — it is worth trillions of dollars and, according to some estimates, currently accounts for a tenth of the world's electricity consumption (and the concomitant CO2 emissions).
There needs to be close cooperation and engagement between countries — at least those that want the internet to be open and free (as originally envisaged). This will mean inter alia setting goals, prioritising actions / interventions, sharing skills and experience, and putting the question of the ownership and safeguarding of our personal data centre stage. A variety of cooperative arrangements will be necessary, and given the diversity of issues — and differences in organisational size, resources and operating styles — some will need to be led by international organisations or governments, others by the private sector or civil society. The UN Panel on Digital Cooperation has called for a 'systems approach' that is “inclusive and fit for purpose for the fast-changing digital age" and suggested that cooperation be: "consensus-oriented, polycentric, customised, accessible, inclusive, agile, accountable, resilient, open, innovative and tech-neutral, with subsidiarity, clarity in roles and responsibility and equitable outcomes."
There have also been calls for: more countries to adopt privacy laws (like the EU's General Data Protection Regulations) and a ‘common global framework’ (rather than having different laws from country to country); and all internet-related laws and practices to adhere to international human rights law and standards. Last but not least there should be a root-and-branch examination of ‘surveillance capitalism’.
The measures proposed include:
• creating a new regulatory category for big tech companies that combine the functions of platform and publisher, with standards equivalent to those required of public service broadcasters, enforced by third-party bodies;
• developing a global code of ethics which sets down in writing what is and what is not acceptable on social media, with possible liabilities for companies;
• ensuring paid-for political advertising data on social media platforms is transparent [It should identify the source/country of origin, who uploaded it, and who sponsored it].
Big Tech companies should:
• state their terms and conditions — and in language children can understand;*
• embrace ‘freedom of thought’ as a policy commitment and perform due diligence on how their activities may harm it;
• release regular transparency reports which explain how they are tackling hate speech and mis/disinformation.
* This should explain clearly what data is collected and how it will be used. Companies should also be more transparent about how they manipulate websites and conceal identities in advert purchasing.
There are all sorts of things that could be done to reduce the problems being generated by social media. Here are some really useful practical proposals from journalist Helen Lewis:
"You could remove the ability to quote-tweet on Twitter, which is used as a kind of 'dunking mechanism.' You could introduce a pause button, which says 'are you sure you want to Tweet this?' and you have to click down for five seconds to see it. You could stop Facebook's architecture making everything look the same, so that news stories from the BBC or Guardian look the same as something from a sham site that has been created five minutes ago... You could ban the YouTube recommendation engine which is, as I see it, just a driver for more extreme content...
All these things are more or less illiberal and it depends just how much you want to tell private companies what to do... because [they are] actually at this point public utilities, and I would argue that Facebook is now like a water company which pumps out information into our system and it needs a kind of sewerage system."
Action is required by governments to develop a coherent digital / media literacy strategy and establish a major rolling programme of public education and skills training, funded in whole or in part by a levy on Big Tech. The work will help internet users navigate the digital environment, foster a better understanding of and engagement with our fast-evolving information technologies, and alert the public to the tricks and deceptions used by corporations as well as by criminals, troublemakers and political opportunists, including those who spread ‘fake news’ and disinformation. The programme also needs to explain people’s rights over their data, and encourage them to report adverts or digital campaigning that they consider misleading or unlawful.
The UN Panel on Digital Cooperation recommended the establishment of regional and 'global digital help desks' to help governments, civil society and the private sector to understand digital issues and develop capacity to steer cooperation related to social and economic impacts of digital technologies.
Online platforms, news publishers, broadcasters, voluntary organisations and academics should be involved in helping to formulate such a programme. There is much expertise to call on — in the UK more than 50 non-governmental organisations are currently working in some capacity to raise awareness of the threat posed by monopoly capitalism, cyber-criminals and mis/disinformation, and a number of these are actively campaigning on the issue.
Finland has faced down determined Kremlin-backed propaganda campaigns ever since it declared independence from Russia in 1917. The trolling ramped up after Moscow annexed Crimea... President Niinisto called on every Finn to take responsibility for the fight against false information. He brought in experts to advise on how to recognise 'fake news', understand why it goes viral and develop strategies to fight it; and the education system was reformed to emphasise critical thinking. Today Finland excels in international league tables on media literacy (1st), happiness (1st), press freedom (2nd), transparency and social justice (3rd) and gender equality (4th).
To be really effective, an integrated approach is needed across all organisations and platforms, nationally and internationally. Lithuania, where Russia is seen as the main threat, provides a good example.
Demaskuok ['Debunk' in English] is a counter-disinformation campaign supported by more than 4,000 ‘elves’ — volunteer journalists, IT professionals, business people, students and scientists. The elves scan thousands of articles against a database of trigger words and narratives, and Demaskuok sends their findings to interested parties — NGOs, newsrooms, politicians, etc. The Defence Ministry regularly produces written/video ‘debunks’ for the public, including, at times, aggressive debunking.
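The elves’ screening step is, at heart, keyword-and-narrative matching at scale. As a purely illustrative sketch — the trigger phrases, weights and threshold below are invented for illustration and are not Demaskuok’s actual system — a first-pass triage filter might look like:

```python
# Illustrative first-pass disinformation triage: flag articles that match
# known trigger words/narratives so that human volunteers can review them.
# The phrase list, weights and threshold are invented for illustration.

TRIGGER_NARRATIVES = {
    "nato occupation": 2.0,
    "failed state": 1.5,
    "puppet government": 1.5,
    "biolab": 1.0,
}

def triage_score(article_text: str) -> float:
    """Return a crude relevance score: the sum of weights of matched triggers."""
    text = article_text.lower()
    return sum(weight for phrase, weight in TRIGGER_NARRATIVES.items()
               if phrase in text)

def flag_for_review(articles, threshold=1.5):
    """Return (score, article) pairs at or above the threshold, highest first."""
    scored = [(triage_score(a), a) for a in articles]
    return sorted(((s, a) for s, a in scored if s >= threshold), reverse=True)
```

Such a filter only narrows the stream of candidate articles; the actual judgement — deciding whether a flagged piece really is disinformation and how to debunk it — remains with the human volunteers.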
Governments should ensure:
• the same rights online as offline, with strong data protection laws for personal data and clear rules on who's responsible when data is moved from one service to another.
Social media platforms should:
• have a legal duty to inform users of their privacy rights, especially with regard to profiling & automated decision-making. [This should include richer data in ads and notifications on who is communicating with you, and identify / highlight potential bot accounts.]
Trust in the media, our organisations (and democracy) has been badly damaged in recent years (see Edelman’s Trust Barometer). Measures such as the above should, taken together, help facilitate the slow process of rebuilding lost trust.
The Cairncross Review noted that investigative and campaigning journalism and the humdrum task of reporting on the daily activities of public institutions matter greatly and are “essential in a healthy democracy”. But neither of these public services comes cheap — especially investigations into abuses of power in both the public and the private sphere and, at the local level, reporting on the discussions of local councils or proceedings in a Magistrates’ Court — and, as Cairncross observed, “each is often of limited interest to the public.” There is evidence of a “market failure” in the supply of public-interest news, for which "the only remedy may be public intervention."
Those in publishing are: “seeking a fairer deal between news publishers and the digital giants — one which fairly rewards the creators of the content on which these platforms rely.” They also point out that: “The very existence of journalists, who might investigate and write about abuses of power, acts as a threat that keeps the powerful in check... They act as a ‘scarecrow’.”
The Big Tech corporations could (and should) undoubtedly do more to purge their systems and reduce the profile of false or misleading information, for example, by taking on more staff and prioritising (fact-checked) trusted news over suspect news on newsfeeds, and ensuring that dubious material rapidly sinks down the list and out of sight. But the task is immense, and platforms will not willingly change their lucrative business models.
The Cairncross Review proposed the establishment of an Institute for Public Interest News dedicated to amplifying “efforts to ensure the sustainability of public-interest news.” It called for:
• online platforms to set out codes of conduct for commercial agreements with news publishers;
• a market study of the online advertising industry;
• new forms of tax relief on digital publications, and support for public interest journalism;
• an expansion of the local democracy reporting service. [Agreements should be approved and overseen by a regulator “with powers to insist on compliance”.]
In Nov 2018, shortly before Cairncross, the LSE’s Truth, Trust & Technology Commission proposed that:
• the news industry develop a News Innovation Centre to support journalism innovation and quality news, funded by a levy on digital platform revenue; and that
• an Independent Platform Agency [IPA] be established — protected financially and through security of tenure of its governing Board.
The Agency would work closely with Ofcom and the Competition & Markets Authority to monitor the level of market dominance and the impact of platforms on media plurality and quality. It should seek close links with civil society and be transparent in respect of its operation. It should have powers to request data from all the major platforms on the topmost shared news and information stories, referrals, news-sharing trends and case studies; impose fines on platforms if they fail to comply; and provide reports on request to other agencies such as the Electoral Commission, Ofcom and the Information Commissioner’s Office, to support the performance of their duties, according to agreed criteria.
If these measures fail to improve the UK information environment, the IPA should set standards in collaboration with civil society, Parliament and the public. (Until now, standards have been set by the platforms themselves). It should “provide a permanent forum for monitoring and review of platform behaviours, reporting to Parliament on an annual basis. [It] should be asked to conduct annual reviews of ‘the state of disinformation’ that should include policy recommendations to a parliamentary committee. These should encompass positive interventions such as the funding of journalism.”
We’ve seen how the Internet and social media have been weaponised to interfere in elections and party politics...
• Politicians need to be more proactive in responding when interference is detected.
• Counter-measures must be employed which make attacks more expensive politically and economically.
• Security treaties need to be updated to recognise this new form of attack.
Possible counter-measures include: suspending biased media channels (like RT); publicising illicit activities; freezing oligarchs’ assets and restricting their travel [eg the 2012 US Magnitsky Act]; and sanctioning perpetrators’ goods and services.
The UK has a National Cyber Security Centre (established in 2017) which is seen as integral to government and private sector efforts to improve the resilience of Critical National Infrastructure to cyber-attack. It provides a ‘one-stop shop’ for technical advice. And whilst its work has been praised, concerns have also been raised about aspects of its operation — in 2018 a Parliamentary Committee noted that “its effectiveness will be limited unless it has access to the experts it needs in the numbers it requires” and it identified “unresolved tensions derived from its status as part of GCHQ”. The Committee also called on the Government to “publish Annual Reports for the National Cyber Security Programme to improve transparency and aid external scrutiny... It should also share [with other governments] information on risks, vulnerabilities, and best practices to counter Russian interference, and co-ordinate between parliamentarians across the world.” The Committee mentioned that “the current complex arrangements for ministerial responsibility... is wholly inadequate to the scale of the task facing the Government, and inappropriate in view of the Government’s own assessment that major cyber-attacks are a top-tier national security threat.”