By Victoria Coleman, the chief scientist of the U.S. Department of the Air Force, and Janet Napolitano, a professor of public policy and the director of the new Center for Security in Politics at the University of California, Berkeley.
Digital authoritarianism—the use of information technology by authoritarian regimes to surveil, repress, and manipulate domestic and foreign populations—is on the rise. In China, the Great Firewall and other systematic tools of digital oppression define the norms of public and private discourse. In Turkey, Wikipedia was banned for nearly three years before the country’s Constitutional Court ruled that the ban was a violation of freedom of expression. In Myanmar, a military coup has instituted nightly internet shutdowns. The Washington Post, as part of the global Pegasus Project, has uncovered widespread abuse of spyware technology to monitor dissidents.
The list of such abuses of the human right to participate in free and open digital discourse is long. Moreover, the methods of digital authoritarianism are broad: blocking access to the internet, censoring content, flooding the information sphere with disinformation, and co-opting social media and other online platforms. These methods are enabled by an array of tools and technologies, including surveillance, censorship, social manipulation, cyberattacks, and targeted online persecution. Technologies such as artificial intelligence vastly expand the reach of these tools.
While authoritarian regimes have always tried to tightly control their people, the modern means by which we communicate give that control new and frightening dimensions. What is equally disturbing is that democratic governments, including that of the United States, have for the most part been mere observers of digital oppression.
Democracies can no longer afford to stay on the sidelines. We have ample research showing that democracies are not immune. In recent years, we’ve seen political parties, interest groups, and private companies develop and adopt the tools, techniques, and strategies of digital authoritarianism. In April 2018, Facebook CEO Mark Zuckerberg testified in two congressional hearings about his company’s role in the Cambridge Analytica scandal, in which it was revealed that Facebook exposed the data of up to 87 million users to political exploitation.
For democracies, computational propaganda—the use of algorithms, automation, and human curation to enable purposeful distribution of misleading information—is particularly threatening. Trolling, strategic distraction, and conspiracy theories freely flow and often gain prominence in the “marketplace of ideas” where platforms’ recommender systems favor the radical over the rational. The Center for Countering Digital Hate identified just 12 people—called the “Disinformation Dozen”—who produced 65 percent of the anti-vaccine content shared or posted on Twitter and Facebook between February and March 2021. This form of falsehood amplification is unprecedented in human history. Its effects can be weaponized and used to cause disruption and confusion. Russia’s attempts to sow doubt in U.S. elections and foment civil unrest along its borders as a precursor to so-called “police actions” are further chilling examples.
Unlike authoritarian regimes, democracies are not well equipped to combat digital authoritarianism. The very nature of democracy requires striking a balance between individual freedoms and protecting society from abusive practices. Authoritarian regimes are far more organized. The Cyberspace Administration of China (CAC) is but one of a multiplicity of agencies that regulate, censor, and control the internet in the country. Tellingly, the CAC oversees the China Internet Investment Fund, which has ownership stakes in technology companies such as Weibo, ByteDance, SenseTime, and others.
China’s system of digital authoritarianism is not just a danger to those who live within its borders. It is also a peril for the rest of the world. China exports its tools and even legal frameworks of digital authoritarianism to more than 60 countries as part of its Belt and Road Initiative. And by judicious and prolific participation in international standards, China has sought to systematically influence the core network infrastructure that the internet is built on.
This digital conflict is asymmetric. It is fought between individuals on one side and organized, well-resourced groups on the other. And asymmetry favors the aggressor. It is virtually impossible for the average citizen to know when they are being manipulated or how to address the flood of intentionally misleading claims. The United States and its democratic allies need to respond to protect their citizens and promote internet freedom through international cooperation. We have a far more compelling vision to share with the world.
But who is responsible for formulating, implementing, and sharing that vision? Inexplicably, and alarmingly, there is no singular focus within the U.S. government that is tasked with protecting Americans from digital authoritarianism. The internet infrastructure, the citizen protection tools, and the legal and policy frameworks that govern digital civil society are fragmented and uncoordinated. We must act with urgency and alacrity to begin redressing this imbalance between democracies and authoritarian regimes.
First, the systems on which our online discourse is built must be architecturally open to prevent unilateral authority over information. This will require rebuilding, and in doing so, it is important to consider the needs of the average user. We must equip users with tools to identify and combat disinformation. Ubiquitous open-source and anti-censorship solutions are needed. While such tools exist, they are not natively integrated into modern platforms. They are not built into the most popular browsers, for example. Simplification and ubiquitous deployment of protections are essential if we are to generate an effective counterbalance to digital authoritarianism. The internet is built on international standards through the Internet Engineering Task Force. So we must not only develop new tools and methods; we must also advocate for their incorporation into these key standards. And we need to be present deliberately and strategically in these standards bodies.
Second, a comprehensive and coordinated research and development agenda is needed to develop the technology and tools necessary to protect citizens. Without citizen protection in mind, digital authoritarianism countermeasure tools can cause a great deal of unintentional harm to the individual. For example, a common way of circumventing censorship is to use a virtual private network (VPN), which masks the origin of the request for a piece of information. This VPN sits on a person’s device, the so-called client side. So if a citizen of an authoritarian regime is suspected of circumventing censorship, the VPN can be discovered by the authorities and the citizen punished. To offer both a means of circumvention and citizen protection, the masking of the origin of the information request must be pushed upstream in the network, the so-called server side, to dissociate the individual from the request. These tools need to be safe and easy to use for everyone, not just the sophisticated (and few) internet freedom fighters.
Third, developing tools and systems to counter digital authoritarianism, while necessary, is not sufficient for their deployment and use. Take the Tor (short for “The Onion Router”) network, for example. Tor is built on open-source software that enables anonymous communication by directing internet traffic through a free, worldwide volunteer overlay network that conceals a user’s location and prevents network surveillance. That is a boon to the personal privacy of its users. On the other hand, Tor is also the gateway to the so-called dark web, a subset of the internet where drug dealing, child pornography, fraud, and ransomware hide. Are we enabling this dark web and a host of illicit activity unintentionally by supporting Tor in our desire to offer citizens privacy from authoritarian regime surveillance? Technology does not know how it will be used. The context for deciding what’s right and what’s wrong is set by policy and by law. A well-developed policy framework endorsed by government, NGOs, and the tech sector must be created to guide and direct responses to exercises of digital authoritarianism. With such a framework, we could assess when a response is required and what form a proportional response could take. And we could work with our allies and partners to counter digital authoritarianism exports.
This will be no easy task. The reach of digital authoritarianism is as broad as it is deep. It is also new and cross-cutting, which means that the United States lacks the coordinated capacity that is needed to respond systematically. The Department of Homeland Security, the State Department, the Defense Department, the intelligence community, the Federal Communications Commission, the Commerce Department, the Treasury Department, and the National Security Council all have equities. But when everyone is responsible, no one is responsible. Creating a singular point of focus within the U.S. government to be the guardian of internet freedom and democracy-affirming technologies must become a key priority of the United States. An independent federal government Agency for Digital Human Rights must be established and resourced to have primary responsibility for internalizing, planning, coordinating, and executing the digital human rights agenda, with a focus at home but also working with allies and partners to provide development technology and policy aid to protect the most vulnerable.
The rise of digital authoritarianism is a significant risk to the national interests of the United States and democracies around the world. We must act to protect citizens and counterbalance the reach of our adversaries. If we do not, the threat will metastasize and choke off our freedoms. We can meet this 21st-century challenge—but only by taking action today, before this dystopian future becomes the status quo.
The views expressed are those of the authors and do not reflect the official guidance or position of the U.S. government, the U.S. Defense Department, or the U.S. Air Force.
Victoria Coleman is the chief scientist of the U.S. Department of the Air Force, former director of the Defense Advanced Research Projects Agency, and former chief technology officer of the Wikimedia Foundation.
Janet Napolitano is a professor at the Goldman School of Public Policy and the director of the Center for Security in Politics at the University of California, Berkeley, president emerita of the University of California, former U.S. secretary of homeland security, and former governor of Arizona.
This article originally appeared in Foreign Policy.