Internet | November 3, 2013
Written by Jonathan Benney.

The term "Great Firewall", intended to convey that China's Internet is surrounded in its entirety by a border which restricts outside content, was first used in 1996 by technical journalists and was popularized in 1997 by Geremie Barmé and Sang Ye in their article "The Great Firewall of China", in the then trendy magazine Wired. More than fifteen years later, the term, and its implied metaphor, remain nigh universal in popular thought about the Chinese Internet.

The idea of a Chinese "intranet", walled off by an impenetrable barrier, is deceptively straightforward. In fact, the automated barring of certain controversial websites is the least distinctive aspect of Chinese control of the Internet; many countries use or are adopting their own national firewalls. China's automated censorship is broader than that of some of these countries, but not all. Beyond this, circumvention strategies for crossing the Chinese firewall are well known (as the recent case of Apple's removal of the OpenDoor proxy application demonstrates).

Where China really distinguishes itself is in its use of surveillance and psychological strategies for Internet control. Recent research by King, Pan, and Roberts demonstrates that the human resources China devotes to Internet surveillance and manual censorship are unprecedented in scale. Censors examine posts made on public forums and social networks and delete them according to a complex series of directives from higher levels of the state. Because the information about who is posting what is so specific and detailed, censorship can vary between regions, and individuals can be targeted, detained, and punished on the basis of their online behaviour. Hence, the "firewall" process is regional rather than national (as the censorship of information on Tibetan or Uighur independence shows), and as much manual as automatic.
During major international events like the Olympics, Internet censorship is modified in extremely complicated ways, even down to the level of individual wall sockets. Despite this substantial expenditure of time and resources, most Internet users find ways to say what they want online. When their discussion clashes with the aims of the state, users change their language to evade automatic censorship, as shown in the famous case of the Grass Mud Horse, or make their discussions private rather than public (note, for example, Chinese netizens' gradual shift from the public Weibo to the private WeChat).

Hence, the Chinese party-state has adopted psychological strategies as well as strategies based on the control of flows of information. In general terms, the aim of these strategies has been to inculcate attitudes of vigilance and wariness about the content of the Internet, so that users are both less likely to seek controversial information and less likely to believe information contrary to the aims of the state.

From the mid-2000s onwards, the Chinese state media began to "hype" the potential dangers of the Internet and to promote individuals whose response to the Internet was characterized by a prudish sort of shock and vigilance. Two famous cases, broadcast on CCTV, were of a young man presented as a university student (but actually a CCTV intern) who claimed that his roommate had been "disturbed" by pornography found on Google, and of a young girl who claimed that a "very sexy and very violent" (hen huang, hen baoli) image had popped up randomly as she browsed the Internet. These particular cases are well known because they were mocked so thoroughly by Internet users, but it is important to note that more subtle cases of these "Internet exemplars" (to draw on Børge Bakken's characterization of China as an "exemplary society") may well be having an effect on the Chinese public consciousness.
For example, articles about "online life" in the People's Daily refer almost entirely to the use of government-facilitated websites, even when they discuss Internet chat, entertainment, or education. Further, despite the well-known online Chinese subculture which subverts censorship, the inevitable influx of new users less familiar with Internet history and culture may lead to greater public acceptance of this state-promoted exemplary Internet use.

In recent years, the state has amplified its psychological campaign. In 2009, having blocked popular global microblogs such as Twitter, as well as shutting down previously popular Chinese microblogs like Fanfou and Digu, the Chinese state tacitly authorized the launch of a quasi-official microblog service, Sina Weibo. The rapid growth of Sina Weibo, particularly as a forum for commentary on public events, may have taken the state by surprise. Given its large number of users, including commercial and government organizations, shutting Weibo down or blocking it has not been a plausible option. Manual censorship of "sensitive words" and deletion of inappropriate posts, even at a large scale, have not eradicated the rapid transmission of such posts.

In mid-2011, then, the state embarked upon an "anti-rumour" campaign. A People's Daily editorial published on 10 August 2011 assessed the work of a "rumour-busting alliance" (piyao lianmeng), supposedly an independent group of citizens who took it upon themselves to investigate and debunk rumours being spread on microblogs. The editorial did not attack Weibo per se, but it demonstrated a discourse of fear and mistrust of online information which continues to be characteristic of Chinese official media. In 2012, Sina Weibo introduced a community code of practice which places "rumour" (defined vaguely, but potentially including any piece of non-approved information) on the same level as spam and pornography.
"Online rumour" is now often referred to as a "malignant tumour" (duliu) in official discourse, and has become the subject of public education campaigns, as if it were a public health or crime issue. During 2013, after a series of public crackdowns on and arrests of "rumour-mongers", the Supreme People's Court and Supreme People's Procuratorate released a judicial opinion announcing that any online rumour "clicked and viewed more than 5,000 times, or reposted 500 times" would be treated as "serious defamation" and could lead to jail sentences of up to three years. Within ten days of the announcement, several people had been detained, including a teenager.

Legally speaking, the anti-rumour regulation is best described as bizarre. The suggestion that a defamatory piece of information becomes more defamatory the more people see it is inconsistent with defamation law in any jurisdiction. But, in terms of Chinese state morality and state policies for information control, it is logical and strategically consistent. It places the burden on the user rather than the actual publisher of the information (that is, Weibo), and requires users to guess both whether their post will be deemed a "rumour" and whether it will be viewed or distributed by many other users. More popular users are at greater risk.

But perhaps even more importantly, the regulation inculcates in users a sense of hyper-vigilance. The intention is that users second-guess everything they post online, just as the broader rumour discourse makes them second-guess everything they read. If this process achieves its aims, controlling the flow of information would become less of a priority for the state; problematic online behaviour would be stopped before it had even begun.

Jonathan Benney is a Postdoctoral Fellow in the Institute for U.S.-China Issues at the University of Oklahoma, and author of Defending Rights in Contemporary China (Routledge 2012).