What Can Civil Society Learn from Evgeny Morozov’s Critique of Web 2.0?

Regions: Global

Tags: ethics, new media, online, risks, safety, security, video authenticity, web 2.0

It's easy to get excited about the potential power of the internet to fight government impunity, curb human rights abuses, and induce democratic reforms in authoritarian states. New media experts like Clay Shirky see enormous potential in social media. Twitter, Facebook, and YouTube have all gotten a lot of attention for their roles in protest movements from Iran to Burma to Moldova, demonstrating that they can be powerful tools for change. But Evgeny Morozov, Yahoo! Fellow at Georgetown University and contributing editor at Foreign Policy, isn't so sure we should be celebrating such successes so soon. Morozov writes in the summer 2009 issue of Dissent Magazine that the tools of Web 2.0 can just as easily be used to foment hatred and violence or to secure government control in authoritarian states.

In the video below, Morozov discusses his ideas at a TED Conference:


Morozov's views touch upon a number of ethical issues that we here at WITNESS have been thinking and blogging about related to video advocacy in an open, socially-networked world. The broad point is that the internet is a neutral medium that can be used for good or ill. The key is to learn to use it in such a way that promotes positive change and human rights while minimizing the dangers that come along with it.

One important issue Morozov highlights is the danger posed by governments using social networking sites to spy on their own citizens, especially the politically and socially active. Says Morozov:

"[B]oth Facebook and Twitter give Iran's secret services superb platforms for gathering open source intelligence about the future revolutionaries, revealing how they are connected to each other. These details are now being shared voluntarily, without any external pressure. Once regimes used torture to get this kind of data; now it's freely available on Facebook." (Priscila Néri discusses this in her blog post on the Iran protests.)

Indeed, the Iranian government is moving forward along those lines. Last week, as reported in The Guardian, Iran created a special task force to police the internet. The head of the task force, Colonel Mehrdad Omidi, was quoted as saying, "Given the spread of internet use, police must confront crimes taking place in the web atmosphere, a special committee has been set up to monitor the internet and deal with crimes... such as fraud... insults and the spreading of lies." He also pledged to intervene in "political matters... should there be an illegal act." Perhaps more worrisome are reports that the Iranian government this week began sending SMS messages to citizens warning them that they had been identified as past protesters and should stop participating in protest events. Iran is already listed by Reporters Without Borders as one of its 12 'internet enemies,' having blocked an estimated 10 million Web sites deemed politically or socially offensive. In the days after the contested June election, Iran reportedly purchased equipment used to monitor internet and e-mail communications. During last summer's election protests there were also reports of the Iranian security services using fake Twitter accounts to infiltrate protest groups and spread disinformation.

As WITNESS and its partner organizations use more social networking and new media technology to produce and distribute videos, the ability of repressive governments to turn that same technology against those involved with and featured in the videos will have to be weighed along with more traditional security concerns. The structure of social networks means that if one person's data is insecure, his or her connections' data might be as well. Cyber-security risks will have to be taken into account and explained when seeking an individual's informed consent to use footage of them in a video.

While government censorship, monitoring, and infiltration are serious concerns, there are ways for human rights advocates and civil society members to defend against them. In Iran, for example, members of the 'Twitterverse' such as Twitspam outed government infiltrators on Twitter by identifying suspected fraudulent accounts. Other social platforms have built-in means of defense. Wikipedia, for example, features discussion, or Talk, pages, which offer the opportunity to discuss improvements to a particular Wikipedia page. As the Tamil Tigers Talk page demonstrates, such discussion can include whether misinformation is being spread on a page. The Talk pages model might be adapted to social networks to protect against infiltrators and misinformation. Perhaps the most effective way for civil society to overcome government disruption is the use of online anonymity systems like Tor. Born out of a U.S. military project, Tor is an open source system that lets users in countries where certain sites are blocked or online activity is monitored reach those sites while concealing their identity and location. That way, a pro-democracy blogger in China can keep his identity secret from the government, and a user in Syria can access Web sites blocked by the government.

For a quick lesson in how Tor works, watch this video:

Technology Review: Tor
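For readers who prefer code to video, the "onion" in onion routing can be sketched in a few lines. This is a deliberately simplified, hypothetical illustration, not how Tor is actually implemented: the XOR "cipher" stands in for real cryptography, and the keys and message are invented. The point is only the layering: the sender wraps the message once per relay, so no single relay ever holds both the sender's identity and the readable request.

```python
import os

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """Toy stand-in for encryption: XOR with a repeating key (NOT secure)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def build_onion(message: bytes, relay_keys: list) -> bytes:
    """Sender wraps the message in one layer per relay, exit relay innermost."""
    onion = message
    for key in reversed(relay_keys):
        onion = xor_bytes(onion, key)
    return onion

def route(onion: bytes, relay_keys: list) -> bytes:
    """Each relay in turn peels exactly one layer; the exit sees the request."""
    for key in relay_keys:
        onion = xor_bytes(onion, key)
    return onion

# Three hypothetical relays: entry, middle, exit.
keys = [os.urandom(16) for _ in range(3)]
wrapped = build_onion(b"blocked-site request", keys)
assert wrapped != b"blocked-site request"        # entry relay sees ciphertext
assert route(wrapped, keys) == b"blocked-site request"
```

Because each relay holds only its own key, peeling a single layer (as the entry relay does) still leaves ciphertext; only after all three layers are removed does the request emerge, and by then the exit relay no longer knows who sent it.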

Morozov also highlights the rise of internet companies dedicated to removing or burying complaints against other companies so that they do not show up in internet searches. These companies frequently target sites like The Consumerist, but could easily target rights groups as well. On the less legal side of things, distributed denial-of-service (DDoS) attacks can be used to shut down sites hosting content an attacker wants suppressed. It is not hard to imagine a government using similar tactics and technology to bury Web sites and web-based videos alleging that it was committing human rights violations. Indeed, Russia has been the scene of many such attacks. Mass DDoS attacks have been launched against the Web sites of Human Rights in Russia, Memorial, and the newspaper Kommersant, to name a few. In Burma, pro-democracy Web sites were attacked in the run-up to the first anniversary of the monks' uprising. Morozov highlights the story of Georgian blogger CYXYMU, who in 2008 was pushed off numerous blogging platforms by persistent DDoS attacks; Morozov calls him a "digital refugee."

Another problem identified by Morozov, which has important implications for video advocacy, is how to ensure that relevant - and authentic - videos and images rise above the din of extraneous and sometimes fraudulent videos that flood the internet. (See my previous post on Sri Lanka for one example.) How do rights groups and advocates separate the digital wheat from the digital chaff? Morozov illustrates the issue with YouTube videos that promote false claims about vaccine safety. In the human rights world, governments, NGOs, and civil society groups must face the problem of verifying footage shot by anonymous third parties; locating and debunking fraudulent or doctored video is paramount. Returning to his example, Morozov asks "whether a technology company such as YouTube (and ultimately its parent company, Google) should verify scientific claims made in the videos uploaded to the site; if yes, how should they go about it?" (YouTube recently introduced YouTube Direct, which allows media organizations to "request, review, and rebroadcast" video directly from users, providing a level of editorial oversight.) The same question can easily be applied to human rights organizations and claims of rights abuses. Morozov is optimistic that algorithms and programs will eventually form an "electronic lie detector" able to differentiate authentic footage from fraudulent.
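One modest building block such an "electronic lie detector" might use already exists: perceptual hashing, which fingerprints an image so that re-uploaded or recompressed copies of known footage can be matched even when their files differ byte for byte. The sketch below is illustrative, not any tool Morozov names; it implements a difference hash (dHash) over a plain grayscale pixel matrix, and the tiny 2x3 "frames" are invented. A real pipeline would first decode video frames into such matrices.

```python
def dhash(pixels):
    """One bit per horizontal neighbor pair: is the left pixel brighter?
    Small brightness changes (e.g. recompression) rarely flip these bits."""
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left > right else 0)
    return bits

def hamming(h1, h2):
    """Count differing bits; a small distance suggests near-duplicate images."""
    return sum(a != b for a, b in zip(h1, h2))

frame        = [[10, 20, 30], [30, 20, 10]]   # hypothetical original frame
recompressed = [[11, 21, 29], [29, 21, 11]]   # same content, slight noise
unrelated    = [[90, 10, 80], [ 5, 60,  2]]   # different content

assert hamming(dhash(frame), dhash(recompressed)) < \
       hamming(dhash(frame), dhash(unrelated))
```

A fingerprint like this can flag that a "new" atrocity video is actually recycled footage from an earlier event, but it cannot judge whether the original footage was truthful - which is exactly why the human review Morozov and YouTube Direct gesture at remains necessary.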

Perhaps another option would be to create a Web site in the model of Snopes.com that investigates the veracity of human rights-related videos and images that arise on the internet. Such a site could bring to bear the focused expertise of the human rights community to separate verifiable from fraudulent data. Wikipedia's Talk pages model could also be applied here. Indeed, the bulk of the discussion on Talk pages consists of debate over whether information is false or misleading.

As Shirky argues, the social platforms of Web 2.0 offer unprecedented opportunities to create positive change. But such opportunities will not realize themselves. In Morozov's words, "[c]yberspace politics is a zero-sum game." While initiatives like Ushahidi have been used to garner real-time data about conflicts and rights abuses that can be used to save lives and promote positive change, similar initiatives, according to Morozov, are used for ill by xenophobic Russian nationalist groups to identify the locations of ethnic populations in that country. Rights groups can post videos documenting human rights abuses, but hostile groups can flood the web with remixed videos in order to discredit the originals and disrupt advocacy efforts.

The dilemma posed by Web 2.0 is perhaps best framed as an economic equation that encapsulates each of the issues discussed above. On the supply side is the issue of how to ensure access to accurate, authentic data while weeding out bogus information and keeping data providers secure. Morozov is reasonably certain that this side of the equation will ultimately be solved. More problematic is the demand side - how do we use verifiable documentation of human rights violations safely, ethically, and effectively once we have access to it? That, hopefully, is an issue that WITNESS, by empowering human rights groups to use video for change, can help to solve.

We'd like to hear your thoughts on these issues:

  • Do you agree with Morozov or is he too pessimistic about Web 2.0 and social media?

  • How can we use social media platforms more securely?

  • How do we bring order and a standard of veracity to social media platforms while maintaining the freedom of Web 2.0?