Recommendations
This page highlights the recommendations made in selected reports published in the last few years and where appropriate the principles that underlie them. It is important to understand the context in which these recommendations were made, so please consult the reports (which are available online) — and most importantly, read the fine print!


It is a hopeless task trying to keep abreast of all the books and reports now being published that make recommendations for tackling the multitude of problems associated with protecting free speech and the mainstream media, and with internet governance, 'fake news' and disinformation.

I have brought together some of the best proposals on the Battle for Truth Page. The purpose of this page is to provide more information on some of the key underlying documents.
I'll try to keep to just four reports per category...
1    The Online World
Advancing Cyberstability

Nov 2019

The mission of the Global Commission on the Stability of Cyberspace* is "to develop proposals for norms and policies to enhance international security and stability and guide responsible state and non-state behavior in cyberspace." The Commission helps to "promote mutual awareness and understanding among the various cyberspace communities working on issues related to international cybersecurity."

The Commission's Final Report offers "a cyberstability framework, principles, norms of behavior, and recommendations for the international community and wider ecosystem."

*  The Commission was launched at the 2017 Munich Security Conference.
The Commission notes that while some continue to believe that "ensuring international security and stability is almost exclusively the responsibility of states," in practice "the cyber battlefield (i.e., cyberspace) is designed, deployed, and operated primarily by non-state actors" and "their participation is therefore necessary to ensure the stability of cyberspace."
It concludes that these non-state actors should be guided by some basic principles and bound by norms, and argues that: 1) Everyone is responsible for ensuring the stability of cyberspace; 2) No state or non-state actor should take actions that impair the stability of cyberspace; 3) State and non-state actors should take reasonable and appropriate steps to ensure the stability of cyberspace; and 4) Efforts to ensure the stability of cyberspace must respect human rights and the rule of law. The Commission goes on to make six recommendations, focused on "strengthening the multistakeholder model, promoting norms adoption and implementation, and ensuring that those who violate norms are held accountable." [See pulldowns]
  • The GCSC's Four Principles

    The following four principles are critical to ensuring the stability of cyberspace:


    A. The Responsibility Principle: The first principle speaks to the decentralized and distributed nature of cyberspace. It reaffirms the need for a multistakeholder approach to ensuring the stability of cyberspace and, notably, extends “stakeholders” to include every individual. Every individual has responsibilities, in a personal and/or professional capacity, to ensure the stability of cyberspace. While it may be obvious that those responsible for government cyber policies and employees that manage cloud services have a role to play, every individual connected to cyberspace must take reasonable efforts to ensure their own devices are not compromised and, perhaps, used in attacks. Even those who are not connected to the Internet may be dependent upon its capabilities to receive goods and services, and they too have a stake in ensuring that cyberspace policy is being addressed appropriately in their communities.


    B. The Restraint Principle: The second principle contains a general requirement of restraint. For states, this is consistent with the 2018 resolutions of the United Nations General Assembly (UNGA) concerning responsible state behavior in cyberspace and the 2015 UN GGE report which notes that “Consistent with the purposes of the United Nations, including to maintain international peace and security, States should…prevent ICT practices that are acknowledged to be harmful or that may pose threats to international peace and security…” But it is not just about states, as non-state actors can also engage in actions, such as hacking their attackers, that might also undermine the stability of cyberspace.


    C. The Requirement to Act Principle: The third principle contains a general requirement to take affirmative action to preserve the stability of cyberspace. When acting, states should take care to avoid inadvertently escalating tensions or increasing instability. This is consistent with the obligation noted in the 2015 UN GGE report to “cooperate in developing and applying measures to increase stability and security in the use of ICTs.” But again, it is not just about states, as private companies and individuals can also take cooperative steps to help ensure the stability of cyberspace. For example, private companies can work together to mitigate cyber threats, and individuals can ensure they are employing best practices, such as upgrading, patching, and using multi-factor authentication, to reduce the risk that botnets will take over their machines and then be used to launch broad-based attacks that threaten the stability of cyberspace. (A small sketch of one such practice, TOTP-based multi-factor authentication, follows these principles.)


    D. The Human Rights Principle: The fourth principle recognizes the importance of safeguarding human rights as an important element of cyberspace stability. As the reliance of individuals on information and communications technologies increases, the disruptive effect on human activity resulting from threats to its availability or integrity is amplified. Thus, it is imperative that as states pursue their national strategic interests in cyberspace, they give due consideration to the resulting impact on individuals, in particular their human rights. In a similar vein, non-state actors should consider and minimize risks that their activities pose to individuals’ enjoyment of their rights online and offline. At a minimum, compliance with the Human Rights Principle requires that states abide by their human rights obligations under international law as they engage in activities in cyberspace.

    Universally accepted human rights have been enshrined in the Universal Declaration of Human Rights. Additionally, a large number of international agreements providing for a variety of specific human rights have been adopted and create binding legal obligations for state parties. In the context of cyberspace, the applicability of international human rights law has been explicitly confirmed on several occasions by the United Nations General Assembly, the UN Human Rights Council (HRC), as well as the UN GGE reports of 2013 and 2015. Upholding rights and ensuring users trust that their rights are being respected is critical to ensuring the stability of cyberspace.


    We note that the four principles are not intended to be all-inclusive or cover every aspect of cyberspace policy, and there are many organizations that have produced broad-based sets of principles covering a wide variety of issues. There are also other organizations focused on issues relating to Internet governance and human rights online (including privacy, freedom of expression, and freedom of association). Our goal is to achieve widespread acceptance of principles that support the stability of cyberspace, especially in an era of unprecedented and sophisticated hostile activity where rules may be unclear or, even if clear, may be neither embraced nor enforced.
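    As a concrete illustration of the individual-level hygiene mentioned under the Requirement to Act Principle, here is a minimal sketch of time-based one-time passwords (TOTP), one form of multi-factor authentication. It assumes the third-party pyotp package and illustrates the mechanism only; it is not drawn from the Commission's report.

```python
# Minimal TOTP sketch using the "pyotp" package (pip install pyotp).
# Illustrative only: enrolment, secret storage, and rate limiting are
# all omitted here.
import pyotp

secret = pyotp.random_base32()   # shared once with the user's device,
totp = pyotp.TOTP(secret)        # e.g. via a QR code at enrolment

code = totp.now()                # the 6-digit code the device displays
print(totp.verify(code))         # True: the second factor checks out
```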

  • The GCSC's Six Recommendations

    The Commission recommends that:


    1.  State and non-state actors adopt and implement norms that increase the stability of cyberspace by promoting restraint and encouraging action.

    2.  State and non-state actors, consistent with their responsibilities and limitations, respond appropriately to norms violations, ensuring that those who violate norms face predictable and meaningful consequences.

    3.  State and non-state actors, including international institutions, increase efforts to train staff, build capacity and capabilities, promote a shared understanding of the importance of the stability of cyberspace, and take into account the disparate needs of different parties.

    4.  State and non-state actors collect, share, review, and publish information on norms violations and the impact of such activities.

    5.  State and non-state actors establish and support Communities of Interest to help ensure the stability of cyberspace.

    6.  A standing multistakeholder engagement mechanism be established to address stability issues, one where states, the private sector (including the technical community), and civil society are adequately involved and consulted.


Contract for the Web

Jul 2019

"The Web was designed to bring people together and make knowledge freely available. It has changed the world for good and improved the lives of billions. Yet, many people are still unable to access its benefits and, for others, the Web comes with too many unacceptable costs."  The Contract for the Web is "a global plan of action to make our online world safe and empowering for everyone", with the Contract created by representatives from over 80 organizations, representing governments, companies and civil society, and sets out commitments to guide digital policy agendas. To achieve the Contract’s goals, "governments, companies, civil society and individuals must commit to sustained policy development, advocacy, and implementation of the Contract text."
The pulldowns below contain the main proposals, but do read the original for clarification and examples of things that can / should be done.
  • Governments Will...

    PRINCIPLE 01: Ensure everyone can connect to the internet 

    So that anyone, no matter who they are or where they live, can participate actively online.

    1. By setting and tracking ambitious policy goals

    2. By designing robust policy frameworks and transparent enforcement institutions to achieve such goals

    3. By ensuring systematically excluded populations have effective paths towards meaningful internet access

    PRINCIPLE 02: Keep all of the internet available, all of the time 

    So that no one is denied their right to full internet access

    1. By establishing legal and regulatory frameworks to minimize government-triggered internet disruptions, and ensure any interference is only done in ways consistent with human rights law 

    2. By creating capacity to ensure demands to remove illegal content are done in ways that are consistent with human rights law 

    3. By promoting openness and competition in both internet access and content layers

    PRINCIPLE 03: Respect and protect people’s fundamental online privacy and data rights 

    So everyone can use the internet freely, safely, and without fear

    1. By establishing and enforcing comprehensive data protection and rights frameworks to protect people’s fundamental right to privacy in both public and private sectors, underpinned by the rule of law. 

    2. By requiring that government demands for access to private communications and data are necessary and proportionate to the aim pursued, lawful and subject to due process, comply with international human rights norms, and do not require service providers or data processors to weaken or undermine the security of their products and services.

    3. By supporting and monitoring privacy and online data rights in their jurisdictions.


  • Companies Will...

    PRINCIPLE 04: Make the internet affordable and accessible to everyone 

    So that no one is excluded from using and shaping the Web

    1.  By crafting policies that address the needs of systematically excluded groups. 

    2.  By working towards an ever-increasing quality of service. 

    3.  By ensuring full use of the internet by all, through close coordination with government and civil society

    PRINCIPLE 05: Respect and protect people’s privacy and personal data to build online trust 

    So people are in control of their lives online, empowered with clear and meaningful choices around their data and privacy 

    1. By giving people control over their privacy and data rights, with clear and meaningful choices to control processes involving their privacy and data. 

    2. By supporting corporate accountability and robust privacy and data protection by design, carrying out regular and proactive data-processing impact assessments, and making these available to regulators for review and scrutiny, so that companies can be held accountable and understand how their products and services could better support users’ privacy and data rights.

    3. By making privacy and data rights equally available to everyone

    PRINCIPLE 06: Develop technologies that support the best in humanity and challenge the worst 

    So the Web really is a public good that puts people first

    1. By being accountable for their work, through regular reports.

    2. By engaging with all communities in an inclusive way.

    3. By investing in and supporting the digital commons.


  • Citizens Will...

    PRINCIPLE 07: Be creators and collaborators on the Web 

    So the Web has rich and relevant content for everyone 

    By being active participants in shaping the Web, including content and systems made available through it

    PRINCIPLE 08: Build strong communities that respect civil discourse and human dignity 

    So that everyone feels safe and welcome online

    By working towards a more inclusive Web.

    PRINCIPLE 09: Fight for the Web 

    So the Web remains open and a global public resource for people everywhere, now and in the future 

    By being active citizens of the Web.


The Age of Digital Interdependence

Jun 2019

'The Age of Digital Interdependence' was prepared by a High-level Panel on Digital Cooperation convened by the UN Secretary-General to "advance global multi-stakeholder dialogue on how we can work better together to realize the potential of digital technologies for advancing human well-being while mitigating the risks."

The report argues that "our rapidly changing and interdependent digital world urgently needs improved digital cooperation founded on common human values". It makes 11 main recommendations which are grouped under the following headings: 1) Build an inclusive digital economy and society; 2) Develop human and institutional capacity; 3) Protect human rights and human agency; 4) Promote digital trust, security and stability; and 5) Foster global digital cooperation.
The recommendations are listed below together with some key principles of cooperation.
One Internet

Jun 2016

The Global Commission on Internet Governance provides recommendations and practical advice on the future of the Internet. Its primary objective is the creation of 'One Internet' that is protected, accessible to all and trusted by everyone. In its final report, the Commission "puts forward key steps that everyone needs to take to achieve an open, secure, trustworthy and inclusive Internet." As it says: "Half of the world’s population now uses the Internet to connect, communicate and interact. But basic access to the Internet is under threat, the technology that underpins it is increasingly unstable and a growing number of people don’t trust it to be secure."
The statement (in the pulldown below) provides the Commission’s view of the issues at stake and describes in greater detail the core elements that are essential to achieving a social compact for digital privacy and security.
  • Core Elements of New Social Compact

    Core elements that the Commission advocates for the new Social Compact:

    • Fundamental human rights, including privacy and personal data protection, must be protected online. Threats to these core human rights should be addressed by governments and other stakeholders acting both within their own jurisdiction and in cooperation.

    • Interception of communications, collection, analysis and use of data over the Internet by law enforcement and government intelligence agencies should be for purposes that are openly specified in advance, authorized by law (including international human rights law) and consistent with the principles of necessity and proportionality. Purposes such as gaining political advantage or exercising repression are not legitimate.

    • In particular, laws should be publicly accessible, clear, precise, comprehensive and non-discriminatory, openly arrived at and transparent to individuals and businesses. Robust, independent mechanisms should be in place to ensure accountability and respect for rights. Abuses should be amenable to appropriate redress, with access to an effective remedy provided to individuals whose right to privacy has been violated by unlawful or arbitrary surveillance.

    • Businesses or other organizations that transmit and store data using the Internet must assume greater responsibility to safeguard that data from illegal intrusion, damage or destruction. Users of paid or so-called “free services” provided on the Internet should know about, and have some choice over, the full range of commercial uses of their data, without being excluded from the use of software or services customary for participation in the information age. Such businesses should also demonstrate accountability and provide redress in the case of a security breach.

    • There is a need to reverse the erosion of trust in the Internet brought about by the non-transparent market in collecting, centralizing, integrating and analyzing enormous quantities of private information about individuals and enterprises — a kind of private surveillance in the service of “big data,” often under the guise of offering a free service.

    • Consistent with the United Nations Universal Declaration of Human Rights, communications should be inherently considered private between the intended parties, regardless of communications technology. The role of government should be to strengthen the technology upon which the Internet depends and its use, not to weaken it.

    • Governments should not create or require third parties to create “back doors” to access data that would have the effect of weakening the security of the Internet. Efforts by the Internet technical community to incorporate privacy-enhancing solutions in the standards and protocols of the Internet, including end-to-end encryption of data in transit and at rest, should be encouraged. (A minimal encryption sketch follows this list.)

    • Governments, working in collaboration with technologists, businesses and civil society, must help educate their publics in good cyber-security practices. They must also collaborate to enhance the training and development of the software workforce globally, to encourage creation of more secure and stable networks around the world.

    • The transborder nature of many significant forms of cyber intrusion curtails the ability of the target state to interdict, investigate and prosecute the individuals or organizations responsible for that intrusion. States should coordinate responses and provide mutual assistance in order to curtail threats, to limit damage and to deter future attacks.
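    To make the encryption element concrete, here is a minimal sketch of symmetric encryption of data at rest, assuming the third-party Python cryptography package and its Fernet recipe. It shows one privacy-enhancing building block only; it is not a design from the report, and real end-to-end encryption additionally covers key exchange between the communicating parties.

```python
# Minimal data-at-rest encryption sketch using the "cryptography"
# package's Fernet recipe (pip install cryptography). Illustrative only.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # held by the data owner, never the server
f = Fernet(key)

token = f.encrypt(b"private message between the intended parties")
print(f.decrypt(token))       # only a key holder can recover the text
```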

2    The Information Crisis
Working Group on Infodemics

Nov 2020

In Nov 2020, the Forum on Information and Democracy published a report prepared by its Working Group on Infodemics. The work is based on more than 100 contributions from experts, academics and jurists from all over the world and "offers 250 recommendations on how to rein in a phenomenon [the infodemic*] that threatens democracies and human rights, including the right to health."

*  An infodemic is an overload of information, often false or unverified, about a problem, especially during a major crisis.

Main Recommendations
The Transparency Paradox

"What could be perceived as a transparency paradox arises, as on the one hand greater access to information and metadata is recommended, while on the other an attempt must be made to prevent future damaging misuse of data, such as in the Cambridge Analytica scandal. Differential privacy could allow a safe approach to transparency. ‘Differential privacy’ describes a promise, made by a data holder, or curator, to a data subject: ‘You will not be affected, adversely or otherwise, by allowing your data to be used in any study or analysis, no matter what other studies, data sets, or information sources, are available.’ At their best, differentially private database mechanisms can make confidential data widely available for accurate data analysis, without resorting to data clean rooms, data usage agreements, data protection plans, or restricted views.... Differential privacy addresses the paradox of learning nothing about an individual while learning useful information about a population." [p22]
What can be done?
Digital Media Policy Options for Europe (and beyond)

Nov 2019

This report identifies policy options available for the European Commission and EU member states should they wish to "create a more enabling environment for independent professional journalism... Many of these options are relevant far beyond Europe and demonstrate what democratic digital media policy could look like." The authors argue that to thrive "independent professional journalism needs freedom, funding, and a future. To enable this, media policy needs: a) to protect journalists and media from threats to their independence and to freedom of expression; b) to provide a level playing field and support for a sustainable business of news; and c) to be oriented towards the digital, mobile, and platform-dominated future that people are demonstrably embracing – not towards defending the broadcast and print-dominated past."
The report "identifies a number of real policy choices that elected officials can pursue, at both the European level and at the member state level, all of which have the potential to make a meaningful difference and help create a more enabling environment for independent professional journalism across the continent while minimising the room for political interference with the media." In particular, it explores four important areas of traditional and new media policy where policymakers have options available that can help create a more enabling environment for independent professional journalism: 
1. Free expression and media freedom;
2. Disinformation and online harms;
3. Competition and data protection;
4. News media policy.
After detailing the main problems associated with these four areas, the report makes a number of policy suggestions (listed in the pulldowns below):
  • Free Expression & Media Freedom

    • Recognising the positive obligation, established by the European Court of Human Rights, that states must carry out effective investigations following the killing or disappearance of a journalist.

    • Using Article 7 of the Treaty of the European Union to investigate and sanction serious breaches of the fundamental rights and values that the EU (per Article 2) is founded on.

    • Considering a new annual rule of law review of all EU member states (to supplement existing Article 7 procedure) to identify, document, and publicise any backsliding from the norms and values all member states are committed to via the Treaty, with freedom of expression and media freedom as key parts of this review. Outcomes could be tied to the implementation of the draft law passed by the European Parliament, so that member states who do not protect free expression and media freedom risk suspension of EU funds (thus avoiding the reliance on qualified majorities and unanimity in Article 7 proceedings).

    • Addressing the ‘implementation gap’ that exists between the numerous dedicated resolutions adopted by the Council of Europe and various UN bodies, starting with the recommendations on the protection of journalism and safety of journalists and other media actors. 

    • Reviewing existing defamation laws to ensure alignment with ECHR case-law, and providing clear and explicit public-interest defences and protections for independent professional journalists in counter-terrorism, online harms, and surveillance laws.

    • Ensuring that private companies moderating online speech at scale: (a) embrace multi-stakeholder collaboration, including with civil society; (b) provide increased transparency; (c) are subject to human-rights compliant oversight; and (d) moderate speech within the framework of international human rights.

    • Protecting private media from capture through regulation of state advertising, and ownership, through greater transparency in both of these areas, and by protecting the independence of relevant regulators, including by ensuring that media regulators are independent, operate transparently, are accountable to the public, demonstrate respect for the principle of limited scope of regulation, and provide appropriate oversight of private actors. 

    • Protecting the independence of public-service media by ensuring that both governance and funding have actual autonomy from both government and legislative bodies.

    • Considering action at the European level when individual member states fail to protect private media from capture or reduce public-service media to de facto state media.


  • Disinformation & Online Harms

    • Clearly distinguishing between responses to illegal behaviours and forms of content (election interference, terrorism, child sex abuse, hate speech, and the like) and broader problems of different kinds of disinformation which, while problematic and potentially harmful, are often legal and protected by the right to free expression.

    • Avoiding direct forms of content regulation based on broad and amorphous definitions of terms like ‘fake news’, especially when underpinned by assumptions about the intent (‘malicious’), veracity (‘false or misleading’), and/or effect (‘potentially harmful’) of specific types of content that are extraordinarily hard to establish in practice. Safeguards for fundamental communications rights should be built into both internal and external oversight mechanisms to ensure due process and the opportunity to appeal.

    • Incentivising collaborative responses to address different disinformation problems, bringing together public authorities, platform companies, private news media, public-service media, and civil society actors.

    • Encouraging the development of self-regulatory, co-regulatory, or independent regulatory bodies that can oversee these efforts, have greater data access, can analyse performance, and issue guidance, for example, linked to the model of an ‘Independent Platform Agency’ outlined by the LSE Truth, Trust, Technology Commission (2018) or by means of academic oversight in collaboration with independent regulators, such as the oversight of media regulators in ERGA on the self-regulatory Code of Practice on disinformation as envisaged in the EC tender for the ‘European Digital Media Observatory’ (2019/1087). 

    • Increasing funding for research that studies the impact of various kinds of disinformation across the EU, either by setting up dedicated research centres or by creating grants that can support existing ones. A possibility could be to do both, and provide for EU-wide coordination by following up on the initial announcement of a planned ‘European Digital Media Observatory’ that can secure data access and coordinate best practices for researchers. 

    • Investing in independent media literacy efforts to promote media and information literacy to counter disinformation and help users navigate the digital media environment.

    • Furthering societal resilience against disinformation and online harms within the EU by ensuring a future-proof diverse media landscape – pledging significant financial support for independent news media, fact-, and source-checking. Ideally, these, as with media literacy efforts, should emphasise independent initiatives, and be free from potential interference from public authorities or from technology companies.


  • Competition & Data Protection

    • Issuing guidance on the journalism exemption in GDPR Article 85, and clearly reiterating the application of the European Union Charter of Fundamental Rights and the European Convention on Human Rights. 

    • Funding EU-level research on the adtech ecosystem, and possible privacy-preserving ways forward, with a particular focus on helping smaller publishers identify alternative/supplementary revenue sources.

    • More broadly ensuring that relevant authorities have access to data and greater analytical capabilities to be able to assess possible harm both downstream and upstream and act in an evidence-based and timely way.

    • Continuing to pursue measures related to the transparency and fairness of online marketplaces, continuing to develop dispute-resolution mechanisms and avenues for affected parties to pursue recourse.

    • Acknowledging that digital policy measures (including new forms of data protection and competition enforcement), while important issues in themselves, could have various unintended consequences and knock-on effects for journalism and are not in themselves likely to significantly increase investment in independent professional journalism, underlining the need for a holistic news media policy in parallel with steps taken in the data protection and competition space.


  • News Media Policy

    • The European Commission issuing guidance to member states on the considerable discretion they enjoy when it comes to offering state aid for private-sector media and/or support for independent public-service media.

    • Using Creative Europe, Digital Europe, Horizon Europe, and similar programmes to provide more resources for media innovation and research.

    • Instituting indirect and direct forms of support that incentivise investment in news production and innovation in news without giving political actors or public authorities direct leverage over publishers. 

    • Investing in the public-service media – provided they are genuinely independent, adequately funded, can operate across all platforms, have a clear role and remit, and avoid crowding out private competitors – can make a significant difference for European democracy.

    • Recognising that private-sector news media and public-service news media need to be able to compete and coexist, and any interventions that risk distorting their ability to do so – such as requiring third-party platforms to privilege certain designated ‘quality’ news providers or public-service providers – will undermine this competition and co-existence. (Both News Media Europe (2018a) and the European Broadcasting Union have stressed the need for a fair online platform environment.)

    • Recognising the legal status of independent professional journalism as a charitable cause, easing the creation of non-profit news media, and incentivising charitable and foundation support for independent professional journalism.

    • Making independent professional journalism easier and cheaper by providing greater access to data, recordings, and transcripts (at both the member state and EU institutional level) to better enable reporting.


Tackling the Information Crisis

Nov 2018

The report of the LSE's Truth, Trust & Technology Commission makes an important contribution to the debate. The Commission spent more than a year addressing questions such as ‘How should we reduce the amount of misinformation?’, ‘How can we protect democracy from digital damage?’ and ‘How can we help people make the most of the extraordinary opportunities of the Internet while avoiding the harm it can cause?’

The Commission identifies 'Five Giant Evils of the Information Crisis' (shown below) and recommends the formation of an Independent Platform Agency (IPA), a watchdog which would evaluate the effectiveness of platform self-regulation and the development of quality journalism, reporting to Parliament and offering policy advice.
The IPA should be funded by a new levy on UK social media and search advertising revenue. It should be "a permanent forum for monitoring and reviewing the behaviour of online platforms" and provide annual reviews of ‘the state of disinformation’. In addition to its key recommendation, the Commission also proposes a new programme of media literacy and a statutory code for political advertising.

"The UK and devolved governments should introduce a new levy on UK social media and search advertising revenue, a proportion of which would be ring-fenced to fund a new Agency [which] should be structurally independent of government but report to Parliament."
The IPA's initial purpose will be "not direct regulation, but rather an ‘observatory and policy advice’ function, and a permanent institutional presence to encourage the various initiatives attempting to address problems of information." It should be established by legislation and report on trends in online news and information sharing and the effectiveness of self-regulation. In addition:
•    Government should mobilise and coordinate an integrated new programme in media literacy.
•    Legislative change is needed to regulate political advertising.
  • Recommendations for the Short Term

    ■  The news media should continue their important work to develop high-quality and innovative revenue and distribution models. They should also continue to work with civil society and the platforms on signalling the credibility of content.

    ■  The platforms should develop annual plans and transparent, open mission statements on how they plan to tackle misinformation. They should work with civil society and news providers to develop trust marking. 

    ■  The Government should mobilise an urgent, integrated, new programme in media literacy. This could also be funded by the social platform levy and include digital media literacy training for politicians. 

    ■  Parliament should bring forward legislation to reform electoral regulation. The UK should not find itself having to go to the polls again before the legislative framework is modernised. Legislative change is needed to manage political advertising.

  • Recommendations for the Medium Term [3 yrs]

    Once the IPA is established it can help to mobilise efforts to encourage the traditional news industry to develop ways of supporting journalism innovation to combat the information crisis. This crisis has seen mounting numbers of interventions aimed at promoting the circulation of misinformation, disinformation and mal-information that contribute to the undermining of an informed public.


    The IPA would work to encourage the news industry to establish a News Innovation Centre operated by the news industry itself to support journalism innovation and quality news. The Centre would act as a research and networking body, helping connect journalists and news organisations with funders interested in supporting innovation, training and specialist journalism. The Centre would generate and administer funding from philanthropists, the platforms and other sources.

  • Recommendations for Longer Term [5 yrs]

    The IPA would provide vital coordination with all parts of the complex media system that are affected by the information crisis. The IPA is needed to start the short-term measures and to encourage the other measures for the medium term. The issues addressed will remain matters of long-term concern, requiring continuing coordination and assessment of the effectiveness of the recommended actions.

    ■  The IPA should provide a permanent forum for monitoring and review of platform behaviours, reporting to Parliament on an annual basis. 

    ■  The IPA should be asked to conduct annual reviews of ‘the state of disinformation’ that should include policy recommendations to a parliamentary committee. These should encompass positive interventions such as the funding of journalism.

3    Tackling Threats Posed by AI
‘Fake news’ and disinformation are today being facilitated by social media with the aid of artificial intelligence [AI] and machine learning [ML].
Deepfakes

Jul 2020

In late 2018 researchers succeeded in using ML to generate highly realistic fake images and videos known as 'deepfakes.' Artists, pranksters and others (including hostile foreign powers) have subsequently used these techniques to create a growing collection of audio and video depicting high-profile leaders, such as Donald Trump, Barack Obama, and Vladimir Putin, saying things they never said. This trend has driven fears within the national security community that recent advances in ML will enhance the effectiveness of malicious media manipulation efforts like those deployed during the 2016 U.S. presidential election.
This paper examines the technical literature on deepfakes to assess the threat they pose. It draws two conclusions. First, the malicious use of crudely generated deepfakes will become easier with time as the technology commodifies. That said, the current state of deepfake detection suggests that such fakes can be "kept largely at bay". Second, tailored deepfakes produced by technically sophisticated actors will represent the greater threat over time. Even moderately resourced campaigns can access the requisite ingredients for generating a custom deepfake. However, factors such as the need to avoid attribution, the time needed to train an ML model, and the availability of data will constrain how sophisticated actors use tailored deepfakes in practice.
Edited versions of the report's four main recommendations are provided in the pulldowns below:
  • 1 Build a Deepfake 'Zoo'

    Absent some major technical breakthrough, deepfake detection will evolve as a cat-and-mouse game. Novel means of creating synthetic media will be invented, and detection systems trained to account for the new method. Success will therefore depend on how quickly detection systems used by giant social media platforms and smaller entities can account for new methods. Rapid integration means disinformation campaigns will confront a hostile environment where synthetic media is quickly identified and removed before proliferating. If the time between the first use of a new technique and its widespread integration into detection systems is sufficiently narrowed, it may render the use of machine learning for these purposes an unattractive option for malicious actors. Making this integration work will require rapid access to samples of media produced by different deepfake models... In order to accelerate this process, stakeholders—platforms, researchers, companies—should invest in the creation of a deepfake “zoo” that continuously aggregates and makes freely available datasets of synthetic media as they appear online... By lowering the costs of acquiring relevant, up-to-date training data to augment detection algorithms, the ‘zoo’ would make overall detection more robust. This would improve on the infrequently updated set of common datasets in use in the research community.
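    The retraining loop such a 'zoo' would enable can be sketched in a few lines. Everything below is an illustrative assumption: a real detector would learn from image features such as CNN embeddings, not the random stand-ins used here.

```python
# Sketch of a zoo-driven detector update: when samples from a newly
# published generation method appear, fold them into the training pool
# and refit. Feature extraction is faked with random vectors.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def embed(n, shift):
    """Stand-in for real feature extraction (e.g., CNN embeddings)."""
    return rng.normal(loc=shift, size=(n, 64))

# Initial pool: genuine media (label 0) vs. known generators (label 1).
X = np.vstack([embed(500, 0.0), embed(500, 0.8)])
y = np.array([0] * 500 + [1] * 500)
detector = LogisticRegression(max_iter=1000).fit(X, y)

def ingest_from_zoo(X, y, new_fakes):
    """Aggregate freshly shared synthetic samples and refit."""
    X = np.vstack([X, new_fakes])
    y = np.concatenate([y, np.ones(len(new_fakes), dtype=int)])
    return X, y, LogisticRegression(max_iter=1000).fit(X, y)

# A previously unseen generator appears; the zoo distributes samples.
X, y, detector = ingest_from_zoo(X, y, embed(300, shift=0.4))
```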

  • 2 Encourage Better Capabilities Tracking

    Inconsistent documentation poses a significant issue in assessing the current state and future prospects of media manipulation and deep generative models. It is difficult to ascertain the speed at which research advances make it possible for certain actors to produce cutting-edge synthetic media at a low cost, hindering threat assessment and the effective allocation of resources. Research communities, funding organizations, and academic publishers should work toward developing common standards for reporting progress in generative models. This might include raising the bar on documenting the processes used in training a new model, as well as integrating this information in a machine-readable way into the metadata included with published academic papers. Such standardization would improve transparency around the state of the field in ways that facilitate better strategic planning. 
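    A hypothetical example of the machine-readable reporting this recommendation envisions is sketched below; every field name and value is invented for illustration, and no existing standard is implied.

```python
# Hypothetical machine-readable capability record that could accompany
# a paper's metadata. Schema and values are illustrative assumptions.
import json

capability_report = {
    "model_family": "GAN",                # class of generative model
    "task": "face image synthesis",
    "training_compute_gpu_hours": 1200,   # signals reproduction cost
    "training_data": {"source": "public face corpus", "num_samples": 70000},
    "output_resolution": [1024, 1024],
    "code_released": True,
    "pretrained_weights_released": False, # a key proliferation variable
}

print(json.dumps(capability_report, indent=2))
```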

  • 3 Commodify Detection

    Simple deepfakes can still pose a threat in a world where detection systems are widely implemented. While these fakes will be quickly detected and removed on the most popular, mainstream platforms for distributing content, they will still spread in the less monitored spaces of the web. This includes distribution through private messaging platforms, which already serve as channels for false narratives even in the absence of ML-generated fakes. This content will also continue to spread among smaller platforms with a more hands-off approach to synthetic content. In these cases, the spread of a deepfake depends on the receptiveness of the viewer, rather than the effectiveness of a detection algorithm... A trained eye can identify crude deepfakes without any special procedures and processes. It may be important in this context to raise public awareness about deepfakes and to highlight indicative examples. Regular training sessions for journalists and people in professions likely to be targeted may also help limit the extent to which members of the public are duped. In parallel, philanthropic organizations and government agencies should give grants that facilitate the translation of research findings in deepfake detection into user-friendly apps for analyzing media that members of the public might encounter while browsing the web...

  • 4 Proliferate 'Radioactive Data'

    ML researchers have recently demonstrated a method that enables datasets to be made “radioactive,” containing traces non-obvious to the human eye, but later extractable from media produced by models trained on that data. Usage of “radioactive” data can be detected even when it constitutes as little as one percent of the data used to train the model. These subtle modifications do not significantly affect the performance of models trained on marked datasets. Deepfakes trained on “radioactive” data can be easily identified, offering a way to check whether an image or video is synthetically generated by an ML model without elaborate media forensics techniques. The unwary disinformation actor might draw on publicly available data, train a generative model, and produce synthetic media all without knowing that their training corpus has been marked. Even if the tainted dataset is combined with others prior to training, these markers would persist in the resulting media. Stakeholders interested in mitigating the harm from deepfakes should encourage the “radioactive” marking of public datasets likely to be used as raw material for training deep generative models. 
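    A toy illustration of the marking-and-detection idea follows. The published method (Sablayrolles et al., 'Radioactive data: tracing through training', 2020) operates in a network's feature space; the version below is deliberately simplified, and all numbers are illustrative.

```python
# Toy "radioactive data" sketch: shift training vectors slightly along
# a secret direction, then test whether a trained weight vector aligns
# with that direction. A simplified stand-in for the published method.
import numpy as np

rng = np.random.default_rng(0)
DIM = 2048
carrier = rng.normal(size=DIM)
carrier /= np.linalg.norm(carrier)        # secret unit direction u

def mark(vectors, strength=0.2):
    """Embed a faint marker along u in every training vector."""
    return vectors + strength * carrier

def saw_marked_data(weights, threshold=3.0):
    """z-score of the weights' alignment with u (~N(0,1) under the null)."""
    cosine = float(weights @ carrier) / np.linalg.norm(weights)
    return cosine * np.sqrt(DIM) > threshold

# Crude stand-in for "training": weights that average the data.
clean = rng.normal(size=(1000, DIM))
print(saw_marked_data(clean.mean(axis=0)))        # False: no marker
print(saw_marked_data(mark(clean).mean(axis=0)))  # True: marker detected
```

    Because the marker survives into the trained parameters, a dataset owner can later test a published model (or media generated by it) without access to the trainer's pipeline.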

Use of AI in Online Content Moderation

Jul 2019

"In recent years a wide-ranging, global debate has emerged around the risks faced by internet users, with a specific focus on protecting users from harmful content. A key element of this debate has centred on the role and capabilities of automated approaches (driven by AI & ML techniques) to enhance the effectiveness of online content moderation and offer users greater protection from potentially harmful material."

"These approaches may have implications for people’s future use of — and attitudes towards — online communications services, and may be applicable more broadly to developing new techniques for moderating and cataloguing content in the broadcast and audiovisual media industries, as well as back-office support functions in the telecoms, media and postal sectors."
This report, prepared by Cambridge Consultants and commissioned by Ofcom, highlights four potential policy implications:
  • Context
  • Policy Implications