The following four principles are critical to ensuring the stability of cyberspace:
A. The Responsibility Principle: The first principle speaks to the decentralized and distributed nature of cyberspace. It reaffirms the need for a multistakeholder approach to ensuring the stability of cyberspace and, notably, extends “stakeholders” to include every individual. Every individual has responsibilities, in a personal and/or professional capacity, to ensure the stability of cyberspace. While it may be obvious that those responsible for government cyber policies and employees who manage cloud services have a role to play, every individual connected to cyberspace must make reasonable efforts to ensure their own devices are not compromised and, perhaps, used in attacks. Even those who are not connected to the Internet may be dependent upon its capabilities to receive goods and services, and they too have a stake in ensuring that cyberspace policy is being addressed appropriately in their communities.
B. The Restraint Principle: The second principle contains a general requirement of restraint. For states, this is consistent with the 2018 resolutions of the United Nations General Assembly (UNGA) concerning responsible state behavior in cyberspace and the 2015 UN GGE report which notes that “Consistent with the purposes of the United Nations, including to maintain international peace and security, States should…prevent ICT practices that are acknowledged to be harmful or that may pose threats to international peace and security…” But it is not just about states, as non-state actors can also engage in actions, such as hacking their attackers, that might undermine the stability of cyberspace.
C. The Requirement to Act Principle: The third principle contains a general requirement to take affirmative action to preserve the stability of cyberspace. When acting, states should take care to avoid inadvertently escalating tensions or increasing instability. This is consistent with the obligation noted in the 2015 UN GGE report to “cooperate in developing and applying measures to increase stability and security in the use of ICTs.” But again, it is not just about states, as private companies and individuals can also take cooperative steps to help ensure the stability of cyberspace. For example, private companies can work together to mitigate cyber threats, and individuals can ensure they are employing best practices, such as upgrading, patching, and using multi-factor authentication, to reduce the risk that botnets will take over their machines and then be used to launch broad-based attacks that threaten the stability of cyberspace.
D. The Human Rights Principle: The fourth principle recognizes the importance of safeguarding human rights as an important element of cyberspace stability. As the reliance of individuals on information and communications technologies increases, the disruptive effect on human activity resulting from threats to their availability or integrity is amplified. Thus, it is imperative that as states pursue their national strategic interests in cyberspace, they give due consideration to the resulting impact on individuals, in particular their human rights. In a similar vein, non-state actors should consider and minimize risks that their activities pose to individuals’ enjoyment of their rights online and offline. At a minimum, compliance with the Human Rights Principle requires that states abide by their human rights obligations under international law as they engage in activities in cyberspace.
Universally accepted human rights have been enshrined in the Universal Declaration of Human Rights. Additionally, a large number of international agreements providing for a variety of specific human rights have been adopted and create binding legal obligations for state parties. In the context of cyberspace, the applicability of international human rights law has been explicitly confirmed on several occasions by the United Nations General Assembly, the UN Human Rights Council (HRC), as well as the UN GGE reports of 2013 and 2015. Upholding rights and ensuring users trust that their rights are being respected is critical to ensuring the stability of cyberspace.
We note that the four principles are not intended to be all-inclusive or cover every aspect of cyberspace policy, and there are many organizations that have produced broad-based sets of principles covering a wide variety of issues. There are also other organizations focused on issues relating to Internet governance and human rights online (including privacy, freedom of expression, and freedom of association). Our goal is to achieve widespread acceptance of principles that support the stability of cyberspace, especially in an era of unprecedented and sophisticated hostile activity where rules may be unclear or, even if clear, may be neither embraced nor enforced.
The Commission recommends that:
1. State and non-state actors adopt and implement norms that increase the stability of cyberspace by promoting restraint and encouraging action.
2. State and non-state actors, consistent with their responsibilities and limitations, respond appropriately to norms violations, ensuring that those who violate norms face predictable and meaningful consequences.
3. State and non-state actors, including international institutions, increase efforts to train staff, build capacity and capabilities, promote a shared understanding of the importance of the stability of cyberspace, and take into account the disparate needs of different parties.
4. State and non-state actors collect, share, review, and publish information on norms violations and the impact of such activities.
5. State and non-state actors establish and support Communities of Interest to help ensure the stability of cyberspace.
6. A standing multistakeholder engagement mechanism be established to address stability issues, one where states, the private sector (including the technical community), and civil society are adequately involved and consulted.
PRINCIPLE 01: Ensure everyone can connect to the internet
So that anyone, no matter who they are or where they live, can participate actively online.
1. By setting and tracking ambitious policy goals
2. By designing robust policy-frameworks and transparent enforcement institutions to achieve such goals, through
3. By ensuring systematically excluded populations have effective paths towards meaningful internet access
PRINCIPLE 02: Keep all of the internet available, all of the time
So that no one is denied their right to full internet access
1. By establishing legal and regulatory frameworks to minimize government-triggered internet disruptions, and ensure any interference is only done in ways consistent with human rights law
2. By creating capacity to ensure demands to remove illegal content are done in ways that are consistent with human rights law
3. By promoting openness and competition in both internet access and content layers
PRINCIPLE 03: Respect and protect people’s fundamental online privacy and data rights
So everyone can use the internet freely, safely, and without fear
1. By establishing and enforcing comprehensive data protection and rights frameworks to protect people’s fundamental right to privacy in both public and private sectors, underpinned by the rule of law.
2. By requiring that government demands for access to private communications and data are necessary and proportionate to the aim pursued, lawful and subject to due process, comply with international human rights norms, and do not require service providers or data processors to weaken or undermine the security of their products and services.
3. By supporting and monitoring privacy and online data rights in their jurisdictions.
PRINCIPLE 04: Make the internet affordable and accessible to everyone
So that no one is excluded from using and shaping the Web
1. By crafting policies that address the needs of systematically excluded groups.
2. By working towards an ever-increasing quality of service.
3. By ensuring full use of the internet by all, through a close coordination with Government and Civil Society towards
PRINCIPLE 05: Respect and protect people’s privacy and personal data to build online trust
So people are in control of their lives online, empowered with clear and meaningful choices around their data and privacy
1. By giving people control over their privacy and data rights, with clear and meaningful choices to control processes involving their privacy and data.
2. By supporting corporate accountability and robust privacy and data protection by design, carrying out regular, proactive data-processing impact assessments that are made available to regulators for review and scrutiny, so that companies are held accountable and understand how their products and services could better support users’ privacy and data rights.
3. By making privacy and data rights equally available to everyone
PRINCIPLE 06: Develop technologies that support the best in humanity and challenge the worst
So the Web really is a public good that puts people first
1. By being accountable for their work, through regular reports.
2. By engaging with all communities in an inclusive way.
3. By investing in and supporting the digital commons:
PRINCIPLE 07: Be creators and collaborators on the Web
So the Web has rich and relevant content for everyone
By being active participants in shaping the Web, including content and systems made available through it
PRINCIPLE 08: Build strong communities that respect civil discourse and human dignity
So that everyone feels safe and welcome online
By working towards a more inclusive Web.
PRINCIPLE 09: Fight for the Web
So the Web remains open and a global public resource for people everywhere, now and in the future
By being active citizens of the Web.
Core elements that the Commission advocates for inclusion in the new Social Compact:
• Fundamental human rights, including privacy and personal data protection, must be protected online. Threats to these core human rights should be addressed by governments and other stakeholders acting both within their own jurisdiction and in cooperation.
• Interception of communications, collection, analysis and use of data over the Internet by law enforcement and government intelligence agencies should be for purposes that are openly specified in advance, authorized by law (including international human rights law) and consistent with the principles of necessity and proportionality. Purposes such as gaining political advantage or exercising repression are not legitimate.
• In particular, laws should be publicly accessible, clear, precise, comprehensive and non-discriminatory, openly arrived at and transparent to individuals and businesses. Robust, independent mechanisms should be in place to ensure accountability and respect for rights. Abuses should be amenable to appropriate redress, with access to an effective remedy provided to individuals whose right to privacy has been violated by unlawful or arbitrary surveillance.
• Businesses or other organizations that transmit and store data using the Internet must assume greater responsibility to safeguard that data from illegal intrusion, damage or destruction. Users of paid or so-called “free services” provided on the Internet should know about, and have some choice over, the full range of commercial uses to which their data will be put, without being excluded from the use of software or services customary for participation in the information age. Such businesses should also demonstrate accountability and provide redress in the case of a security breach.
• There is a need to reverse the erosion of trust in the Internet brought about by the non-transparent market in collecting, centralizing, integrating and analyzing enormous quantities of private information about individuals and enterprises — a kind of private surveillance in the service of “big data,” often under the guise of offering a free service.
• Consistent with the United Nations Universal Declaration of Human Rights, communications should be inherently considered private between the intended parties, regardless of communications technology. The role of government should be to strengthen the technology upon which the Internet depends and its use, not to weaken it.
• Governments should not create or require third parties to create “back doors” to access data that would have the effect of weakening the security of the Internet. Efforts by the Internet technical community to incorporate privacy-enhancing solutions in the standards and protocols of the Internet, including end-to-end encryption of data in transit and at rest, should be encouraged.
• Governments, working in collaboration with technologists, businesses and civil society, must help educate their publics in good cyber-security practices. They must also collaborate to enhance the training and development of the software workforce globally, to encourage creation of more secure and stable networks around the world.
• The transborder nature of many significant forms of cyber intrusion curtails the ability of the target state to interdict, investigate and prosecute the individuals or organizations responsible for that intrusion. States should coordinate responses and provide mutual assistance in order to curtail threats, to limit damage and to deter future attacks.
• Recognising the positive obligation, established by the European Court of Human Rights, that states must carry out effective investigations following the killing or disappearance of a journalist.
• Using Article 7 of the Treaty of the European Union to investigate and sanction serious breaches of the fundamental rights and values that the EU (per Article 2) is founded on.
• Considering a new annual rule of law review of all EU member states (to supplement existing Article 7 procedure) to identify, document, and publicise any backsliding from the norms and values all member states are committed to via the Treaty, with freedom of expression and media freedom as key parts of this review. Outcomes could be tied to the implementation of the draft law passed by the European Parliament, so that member states who do not protect free expression and media freedom risk suspension of EU funds (thus avoiding the reliance on qualified majorities and unanimity in Article 7 proceedings).
• Addressing the ‘implementation gap’ that exists between the numerous dedicated resolutions adopted by the Council of Europe and various UN bodies, starting with the recommendations on the protection of journalism and safety of journalists and other media actors.
• Reviewing existing defamation laws to ensure alignment with ECHR case-law, and providing clear and explicit public-interest defences and protections for independent professional journalists in counter-terrorism, online harms, and surveillance laws.
• Ensuring that private companies moderating online speech at scale: (a) embrace multi-stakeholder collaboration, including with civil society; (b) provide increased transparency; (c) are subject to human-rights compliant oversight; and (d) moderate speech within the framework of international human rights.
• Protecting private media from capture through regulation of state advertising and ownership, through greater transparency in both of these areas, and by ensuring that relevant media regulators are independent, operate transparently, are accountable to the public, demonstrate respect for the principle of limited scope of regulation, and provide appropriate oversight of private actors.
• Protecting the independence of public-service media by ensuring that both governance and funding have actual autonomy from both government and legislative bodies.
• Considering action at the European level when individual member states fail to protect private media from capture or reduce public-service media to de facto state media.
• Clearly distinguishing between responses to illegal behaviours and forms of content (election interference, terrorism, child sex abuse, hate speech, and the like) and broader problems of different kinds of disinformation which, while problematic and potentially harmful, are often legal and protected by the right to free expression.
• Avoiding direct forms of content regulation based on broad and amorphous definitions of terms like ‘fake news’, especially when underpinned by assumptions about the intent (‘malicious’), veracity (‘false or misleading’), and/or effect (‘potentially harmful’) of specific types of content that are extraordinarily hard to establish in practice. Safeguards for fundamental communications rights should be built into both internal and external oversight mechanisms to ensure due process and the opportunity to appeal.
• Incentivising collaborative responses to address different disinformation problems, bringing together public authorities, platform companies, private news media, public-service media, and civil society actors.
• Encouraging the development of self-regulatory, co-regulatory, or independent regulatory bodies that can oversee these efforts, have greater data access, can analyse performance, and issue guidance, for example, linked to the model of an ‘Independent Platform Agency’ outlined by the LSE Truth, Trust, Technology Commission (2018) or by means of academic oversight in collaboration with independent regulators, such as the oversight of media regulators in ERGA on the self-regulatory Code of Practice on disinformation as envisaged in the EC tender for the ‘European Digital Media Observatory’ (2019/1087).
• Increasing funding for research that studies the impact of various kinds of disinformation across the EU, either by setting up dedicated research centres or by creating grants that can support existing ones. A possibility could be to do both, and provide for EU-wide coordination by following up on the initial announcement of a planned ‘European Digital Media Observatory’ that can secure data access and coordinate best practices for researchers.
• Investing in independent media literacy efforts to promote media and information literacy to counter disinformation and help users navigate the digital media environment.
• Furthering societal resilience against disinformation and online harms within the EU by ensuring a future-proof diverse media landscape – pledging significant financial support for independent news media, fact-, and source-checking. Ideally, these, as with media literacy efforts, should emphasise independent initiatives, and be free from potential interference from public authorities or from technology companies.
• Issuing guidance on the journalism exemption in GDPR Article 85, and clearly reiterating the application of the European Union Charter of Fundamental Rights and the European Convention on Human Rights.
• Funding EU-level research on the adtech ecosystem, and possible privacy-preserving ways forward, with a particular focus on helping smaller publishers identify alternative/supplementary revenue sources.
• More broadly ensuring that relevant authorities have access to data and greater analytical capabilities to be able to assess possible harm both downstream and upstream and act in an evidence-based and timely way.
• Continuing to pursue measures related to the transparency and fairness of online marketplaces, continuing to develop dispute-resolution mechanisms and avenues for affected parties to pursue recourse.
• Acknowledging that digital policy measures (including new forms of data protection and competition enforcement), while important issues in themselves, could have various unintended consequences and knock-on effects for journalism and are not in themselves likely to significantly increase investment in independent professional journalism, underlining the need for a holistic news media policy in parallel with steps taken in the data protection and competition space.
• The European Commission issuing guidance to member states on the considerable discretion they enjoy when it comes to offering state aid for private-sector media and/or support for independent public-service media.
• Using Creative Europe, Digital Europe, Horizon Europe, and similar programmes to provide more resources for media innovation and research.
• Instituting indirect and direct forms of support that incentivise investment in news production and innovation in news without giving political actors or public authorities direct leverage over publishers.
• Investing in the public-service media – provided they are genuinely independent, adequately funded, can operate across all platforms, have a clear role and remit, and avoid crowding out private competitors – can make a significant difference for European democracy.
• Recognising that private-sector news media and public-service news media need to be able to compete and coexist, and any interventions that risk distorting their ability to do so – such as requiring third-party platforms to privilege certain designated ‘quality’ news providers or public-service providers – will undermine this competition and co-existence. (Both News Media Europe (2018a) and the European Broadcasting Union have stressed the need for a fair online platform environment.)
• Recognising the legal status of independent professional journalism as a charitable cause, easing the creation of non-profit news media, and incentivising charitable and foundation support for independent professional journalism.
• Making independent professional journalism easier and cheaper by providing greater access to data, recordings, and transcripts (at both the member state and EU institutional level) to better enable reporting.
■ The news media should continue their important work to develop high-quality and innovative revenue and distribution models. They should also continue to work with civil society and the platforms on signalling the credibility of content.
■ The platforms should develop annual plans and transparent, open mission statements on how they plan to tackle misinformation. They should work with civil society and news providers to develop trust marking.
■ The Government should mobilise an urgent, integrated, new programme in media literacy. This could also be funded by the social platform levy and include digital media literacy training for politicians.
■ Parliament should bring forward legislation to reform electoral regulation. The UK should not find itself having to go to the polls again before the legislative framework is modernised. Legislative change is needed to manage political advertising.
Once the IPA is established it can help to mobilise efforts to encourage the traditional news industry to develop ways of supporting journalism innovation to combat the information crisis. This crisis has seen mounting numbers of interventions aimed at promoting the circulation of misinformation, disinformation and mal-information that contribute to the undermining of an informed public.
The IPA would work to encourage the news industry to establish a News Innovation Centre operated by the news industry itself to support journalism innovation and quality news. The Centre would act as a research and networking body, helping connect journalists and news organisations with funders interested in supporting innovation, training and specialist journalism. The Centre would generate and administer funding from philanthropists, the platforms and other sources.
The IPA would provide vital coordination with all parts of the complex media system that are affected by the information crisis. The IPA is needed to start the short-term measures and to encourage the other measures for the medium term. The issues addressed will remain matters of long-term concern, requiring continuing coordination and assessment of the effectiveness of the recommended actions.
■ The IPA should provide a permanent forum for monitoring and review of platform behaviours, reporting to Parliament on an annual basis.
■ The IPA should be asked to conduct annual reviews of ‘the state of disinformation’ that should include policy recommendations to a parliamentary committee. These should encompass positive interventions such as the funding of journalism
Absent some major technical breakthrough, deepfake detection will evolve as a cat-and-mouse game. Novel means of creating synthetic media will be invented, and detection systems trained to account for the new method. Success will therefore depend on how quickly detection systems used by giant social media platforms and smaller entities can account for new methods. Rapid integration means disinformation campaigns will confront a hostile environment where synthetic media is quickly identified and removed before proliferating. If the time between the first use of a new technique and its widespread integration into detection systems is sufficiently narrowed, it may render the use of machine learning for these purposes an unattractive option for malicious actors. Making this integration work will require rapid access to samples of media produced by different deepfake models... In order to accelerate this process, stakeholders—platforms, researchers, companies—should invest in the creation of a deepfake “zoo” that continuously aggregates and makes freely available datasets of synthetic media as they appear online... By lowering the costs of acquiring relevant, up-to-date training data to augment detection algorithms, the ‘zoo’ would make overall detection more robust. This would improve on the infrequently updated set of common datasets in use in the research community.
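As a sketch of what the aggregation layer of such a “zoo” might look like, the following Python fragment models a registry that detection teams could poll for samples of newly observed generation methods. All names here (`ZooEntry`, `DeepfakeZoo`, `new_since`) are hypothetical, illustrating the idea rather than any existing system:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ZooEntry:
    method: str    # name of the generation technique, e.g. "stylegan3"
    added: date    # when samples of this technique first appeared online
    samples: list  # paths or URLs of synthetic-media samples

class DeepfakeZoo:
    """Toy registry aggregating synthetic-media samples by generation method."""

    def __init__(self):
        self._entries = []

    def register(self, entry):
        self._entries.append(entry)

    def new_since(self, cutoff):
        """Entries for techniques first seen after `cutoff` -- the samples a
        detection team would pull to retrain against novel methods."""
        return [e for e in self._entries if e.added > cutoff]

zoo = DeepfakeZoo()
zoo.register(ZooEntry("stylegan3", date(2023, 6, 1), ["sample_a.png"]))
zoo.register(ZooEntry("early-gan", date(2019, 1, 1), ["sample_b.png"]))
fresh = zoo.new_since(date(2022, 1, 1))  # only the newer technique
```

A production registry would of course add provenance, licensing, and content hashes; the point of the sketch is that narrowing the integration gap is largely a data-plumbing problem of indexing new methods by date of first appearance.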
Inconsistent documentation poses a significant issue in assessing the current state and future prospects of media manipulation and deep generative models. It is difficult to ascertain the speed at which research advances make it possible for certain actors to produce cutting-edge synthetic media at a low cost, hindering threat assessment and the effective allocation of resources. Research communities, funding organizations, and academic publishers should work toward developing common standards for reporting progress in generative models. This might include raising the bar on documenting the processes used in training a new model, as well as integrating this information in a machine-readable way into the metadata included with published academic papers. Such standardization would improve transparency around the state of the field in ways that facilitate better strategic planning.
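A machine-readable reporting standard of the kind proposed above could be as simple as a required-fields check over a JSON record. The field names below are illustrative assumptions, not an agreed standard:

```python
import json

# Hypothetical reporting schema: these field names are illustrative,
# not drawn from any published standard.
REQUIRED_FIELDS = {"model_family", "parameters", "training_data",
                   "compute_budget_gpu_hours", "output_modality"}

def make_model_card(**fields):
    """Serialize a machine-readable record of how a generative model was
    trained, refusing to emit one with required fields missing."""
    missing = REQUIRED_FIELDS - fields.keys()
    if missing:
        raise ValueError("missing required fields: %s" % sorted(missing))
    return json.dumps(fields, indent=2, sort_keys=True)

card = make_model_card(
    model_family="diffusion",
    parameters=860_000_000,
    training_data="publicly scraped image corpus (illustrative)",
    compute_budget_gpu_hours=1500,
    output_modality="image",
)
```

Embedding such a record in the metadata of a published paper would let threat-assessment efforts query, rather than manually reconstruct, how quickly training costs for cutting-edge synthetic media are falling.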
Simple deepfakes can still pose a threat in a world where detection systems are widely implemented. While these fakes will be quickly detected and removed on the most popular, mainstream platforms for distributing content, they will still spread in the less monitored spaces of the web. This includes distribution through private messaging platforms, which already serve as channels for false narratives even in the absence of ML-generated fakes. This content will also continue to spread among smaller platforms with a more hands-off approach to synthetic content. In these cases, the spread of a deepfake depends on the receptiveness of the viewer, rather than the effectiveness of a detection algorithm... A trained eye can identify crude deepfakes without any special procedures and processes. It may be important in this context to raise public awareness about deepfakes and to highlight indicative examples. Regular training sessions for journalists and people in professions likely to be targeted may also help limit the extent to which members of the public are duped. In parallel, philanthropic organizations and government agencies should give grants that facilitate the translation of research findings in deepfake detection into user-friendly apps for analyzing media that members of the public might encounter while browsing the web...
ML researchers have recently demonstrated a method that enables datasets to be made “radioactive,” containing traces non-obvious to the human eye, but later extractable from media produced by models trained on that data. Usage of “radioactive” data can be detected even when it constitutes as little as one percent of the data used to train the model. These subtle modifications do not significantly affect the performance of models trained on marked datasets. Deepfakes trained on “radioactive” data can be easily identified, offering a way to check whether an image or video is synthetically generated by an ML model without elaborate media forensics techniques. The unwary disinformation actor might draw on publicly available data, train a generative model, and produce synthetic media all without knowing that their training corpus has been marked. Even if the tainted dataset is combined with others prior to training, these markers would persist in the resulting media. Stakeholders interested in mitigating the harm from deepfakes should encourage the “radioactive” marking of public datasets likely to be used as raw material for training deep generative models.
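The correlation test at the heart of “radioactive” marking can be illustrated with a deliberately exaggerated toy. In the real technique the perturbation is imperceptible and the trace is recovered from a trained model’s outputs; here the marker strength (`EPSILON`), dimensionality, and detection threshold are all inflated so the effect is visible on a single data point:

```python
import random

DIM = 1000     # toy dimensionality of one data point
EPSILON = 8.0  # marker strength, deliberately exaggerated for the demo

def marker(secret):
    """Secret-keyed pseudorandom unit vector: the 'radioactive' trace."""
    rng = random.Random(secret)
    v = [rng.gauss(0, 1) for _ in range(DIM)]
    norm = sum(x * x for x in v) ** 0.5
    return [x / norm for x in v]

def mark(sample, u):
    """Shift one data point slightly along the secret direction u."""
    return [s + EPSILON * ui for s, ui in zip(sample, u)]

def detect(sample, u, threshold=4.0):
    """Correlation test: marked samples align with u; clean ones do not."""
    score = sum(s * ui for s, ui in zip(sample, u))
    return score > threshold

rng = random.Random(0)
u = marker("dataset-owner-secret")                 # held by the dataset owner
clean = [rng.gauss(0, 1) for _ in range(DIM)]     # unmarked data point
```

Because only the dataset owner knows the secret direction, an actor who trains on the marked corpus cannot strip the trace without knowing where it is; detection reduces to a single dot product against the secret vector.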