Social media and terrorism


Nina Iacono Brown

Assistant Professor,
Communications


What was the focus of the project?

In June 2016, the widow of a government contractor killed in a terror attack abroad sued Twitter, alleging that it shared blame in her husband’s death. The basis of her suit was the material support statutes, federal laws that prohibit individuals and companies from supporting terrorist activities or organizations. It was the first lawsuit against a social network under these laws, and it alleged that Twitter provides material support to terrorists because it allows them to use its platform to organize, recruit, raise funds, and spread propaganda. My early research focused on how the laws might apply to social media, particularly in light of federal laws that immunize websites for content posted by third parties. As new plaintiffs have emerged (there have been six more lawsuits since the first was filed), my subsequent research has examined the application of new legal theories and what it means for social media governance.

What questions did your project seek to address? What were the research questions, hypotheses, etc.?

Could social networks face liability under the material support laws for allowing terrorists to use their platforms to organize, recruit, raise funds, and spread propaganda?

How would the application of Section 230 of the Communications Decency Act affect potential liability when the claim is based not on content posted by third parties, but on allowing those parties to use the service?

A follow-up research project (for Slate) examined whether Section 230 immunity would be lost if a social network profited (through advertising revenue) from terrorist content on its platform.

What were your findings? 

Although social networks could face liability under the material support laws for allowing terrorists to use their platforms to organize, recruit, raise funds, and spread propaganda, Section 230 of the Communications Decency Act should bar these claims. This is the case even though the claims are not based on content posted by third parties: claims that attempt to assign liability for allowing certain users (here, terrorists) to use the service are akin to claims based on the publishing function that Section 230 immunizes.

Section 230 immunity could be lost, however, if a social network profited (through advertising revenue) from terrorist content on its platform. That would not translate to a “win” for plaintiffs; rather, it would allow them to avoid an early dismissal and potentially move forward with litigation.

What do you think are the implications for the discipline/profession?

This is a new area of law that has not been litigated before, so these first few cases will serve as important precedent (even if non-binding) for those that follow. They will also test the durability of Section 230 in a way it has not been tested before.

What do you think are the implications for the public?

Social media has become an incredible tool for many different organizations to communicate—including terrorists. And the response by social networks has been lacking. Social networks may have an ethical responsibility to maintain a safe network, but does that result in civil liability when there is a terrorist attack? Should it?

Social networks are private companies and can freely regulate the speech of their users (a right they largely reserve in their terms of service agreements). The question we must confront is whether we trust social media companies to regulate speech on their platforms and to decide what should be censored.

If there are implications for the future or new directions for the work, what are they?

Victims and families will continue to look for someone to hold accountable when a terrorist attack happens. When the first lawsuits against social media for terrorist attacks failed, subsequent plaintiffs tried a new legal theory. As new cases emerge, so will new theories to analyze. The broader questions, however, are whether social media is an appropriate defendant in these types of cases and how social networks should respond to terrorists on their platforms.

Presentations:

Nina Brown presented this work at the 2016 AEJMC conference and at a symposium at Loyola of Los Angeles Law School. The work has also been published in a law review and by Slate magazine.

Fight Terror, Not Twitter: Insulating Social Media from Material Support Claims, 37 Loyola L.A. Ent. L. Rev. 1 (2016-2017)

Should Social Networks Be Held Liable for Terrorism?, Slate, June 16, 2017

Abstract:

Social media companies face a new threat: as millions of users around the globe use their platforms to exchange ideas and information, so do terrorists. Terrorist groups, such as ISIS, have capitalized on the ability to spread propaganda, recruit new members, and raise funds through social media at little to no cost. Does it follow that when these terrorists attack, social media is on the hook for civil liability to victims?

Recent lawsuits by families of victims killed in terrorist attacks abroad have argued that the proliferation of terrorists on social media, and social media’s reluctance to stop it, violate the Antiterrorism Act. This article explores the dangers of holding social media companies responsible for such attacks, and offers a solution to avoid liability.

This is a new challenge for social media, and there is little to no scholarship on the topic. This article examines in depth the basis for this liability, the Antiterrorism Act, as it relates to suits against social media, as well as Section 230 of the Communications Decency Act, which provides that an interactive computer service (broadly defined to include a variety of websites, including social media platforms) cannot be treated as the publisher or speaker of third-party content.

I argue that Section 230 of the Communications Decency Act should provide social media with immunity from suits based on the actions of its users. This is so even though courts have traditionally interpreted Section 230 to immunize content providers from liability for the content posted by third parties, as opposed to the acts of those parties themselves.