Twitter v. Taamneh: SCOTUS Clarifies Aiding and Abetting in Antiterrorism Act Claims

The Supreme Court of the United States unanimously ruled in favor of social media platforms in a lawsuit brought by the family of an ISIS attack victim who sought to hold Twitter, Google, and Facebook liable for aiding and abetting international terrorism by allowing ISIS to use their platforms. The Court held that plaintiffs’ allegations that these social-media companies aided and abetted ISIS in its terrorist attack fail to state a claim under 18 U. S. C. §2333(d)(2). The Court did not discuss Section 230 of the Communications Decency Act, which generally shields tech companies from liability for content published by users, but reserved its analysis for Gonzalez v. Google, decided the same day.

The Twitter lawsuit relied on the Antiterrorism Act, which allows U.S. nationals to sue anyone who “aids and abets, by knowingly providing substantial assistance,” an act of international terrorism. The Taamneh family argued that Twitter and the other tech companies failed to remove ISIS content from their platforms even though they knew about it. The U.S. Court of Appeals for the Ninth Circuit allowed the suit to continue. The Supreme Court reversed because it found no sufficient link (nexus) between the social media platforms and the attack to make out liability. Plaintiffs would have had to demonstrate that the platforms and their algorithms in some way favored or gave preferential treatment to ISIS, as opposed to how they treated regular users, prior to the attack. At first view, the Court appears to apprehend that such a lawsuit, if allowed to go forward, would open the floodgates to similar aiding-and-abetting claims against anything and anyone on the internet, including email servers and phone companies.

In sum, 18 U. S. C. §2333(d)(2) doesn’t create a new test for aiding and abetting or a new framework for online activity. The common law test remains, which I find reassuring, but I disagree that there is no duty to act (i.e., terminate users) when platforms are notified of illicit activity. Terminating users has little impact overall, because each terminated user is likely to double down with ten new accounts. If platforms were notified and did nothing, thereby allowing the planning of the attack to be carried out on their platforms, then, in my view, that is knowledge enough and willful blindness, both forms of culpable intent sufficient to make out liability. Account termination is also no solution given the risk of arbitrarily terminating plenty of innocent accounts, as witnessed during the pandemic.

As for algorithmic preference being passive assistance (not aiding and abetting), we shouldn’t forget that algorithms are always actively programmed by humans and that corporations own the code. I am not sure how much passive assistance there really is in that. From here, it looks as though algorithms allow social media companies to discriminate against certain users (for example, women who denounce sexual harassment and gender non-conforming individuals) and to favor others who pay for ads or subscribe to a monthly fee, such as corporations, but I see why at this stage it is not seen as active abetting. I hope that other cases will go forward and we’ll begin to compel massive discovery on algorithmic bias. I am certain we will either end up making big tech liable or gather enough evidence to hold big tech liable for everything in the world. What I take away from this judgment at this point is that all moderation issues with extreme content can be excused away through algorithms, because the Supreme Court wrote them off as passive assistance.

Statutory Background and Questions

(a) In 2016, Congress enacted the Justice Against Sponsors of Terrorism Act (JASTA) to impose secondary civil liability on anyone “who aids and abets, by knowingly providing substantial assistance, or who conspires with the person who committed such an act of international terrorism.” §2333(d)(2). The question here is whether the conduct of the social-media company defendants gives rise to aiding-and-abetting liability for the Reina nightclub attack.

(b) The text of JASTA raises two questions: What does it mean to “aid and abet”? And what precisely must the defendant have “aided and abetted”?

Judicial Definitions of Aiding and Abetting, Legal Test and Citations

In Halberstam v. Welch, the D. C. Circuit undertook an extensive survey of the common law with respect to aiding and abetting and synthesized the surveyed cases as resting on three main elements: (1) there must be a wrongful act causing an injury performed by the person whom the defendant aided; (2) at the time assistance was provided, the defendant must have been “generally aware of his role as part of an overall illegal or tortious activity;” and (3) the defendant must have “knowingly and substantially assist[ed] the principal violation.” 705 F. 2d, at 477.

The court then articulated six factors to help determine whether a defendant’s assistance was “substantial.” They are (1) “the nature of the act assisted,” (2) the “amount of assistance” provided, (3) whether the defendant was “present at the time” of the principal tort, (4) the defendant’s “relation to the tortious actor,” (5) the “defendant’s state of mind,” and (6) the “duration of the assistance” given. Id., at 488 (emphasis deleted). Halberstam also clarified that those who aid and abet “a tortious act may be liable” not only for the act itself but also “for other reasonably foreseeable acts done in connection with it.” Id., at 484.

At common law, the basic “view of culpability” animating aiding and abetting liability is that “a person may be responsible for a crime he has not personally carried out if he helps another to complete its commission.” Rosemond v. United States, 572 U. S. 65, 70. However, the concept of “helping” in the commission of a crime or a tort has never been boundless and ordinarily requires some level of blameworthy conduct; those limits ensure that aiding and abetting does not sweep in mere passive bystanders or those who, for example, simply deliver mail that happens to aid criminals.

In tort law, many cases have thus required a voluntary, conscious, and culpable participation in the wrongful conduct to establish aiding and abetting.

Analysis Citations

Plaintiffs have satisfied Halberstam’s first two elements by alleging both that ISIS committed a wrong and that defendants knew they were playing some sort of role in ISIS’ enterprise. But plaintiffs’ allegations do not show that defendants gave such knowing and substantial assistance to ISIS that they culpably participated in the Reina attack. Pp. 21–30.

Plaintiffs allege that defendants aided and abetted ISIS in the following ways: First, they provided social-media platforms, which are generally available to the internet-using public; ISIS was able to upload content to those platforms and connect with third parties on them. Second, defendants’ recommendation algorithms matched ISIS-related content to users most likely to be interested in that content. And, third, defendants knew that ISIS was uploading this content but took insufficient steps to ensure that its content was removed. Plaintiffs do not allege that ISIS or Masharipov used defendants’ platforms to plan or coordinate the Reina attack. Nor do plaintiffs allege that defendants gave ISIS any special treatment or words of encouragement. Nor is there reason to think that defendants carefully screened any content before allowing users to upload it onto their platforms.

None of plaintiffs’ allegations suggest that defendants culpably “associate[d themselves] with” the Reina attack, “participate[d] in it as something that [they] wishe[d] to bring about,” or sought “by [their] action to make it succeed.” Nye & Nissen, 336 U. S., at 619. Defendants’ mere creation of their media platforms is no more culpable than the creation of email, cell phones, or the internet generally. And defendants’ recommendation algorithms are merely part of the infrastructure through which all the content on their platforms is filtered.

At bottom, the allegations here rest less on affirmative misconduct and more on passive nonfeasance. To impose aiding-and-abetting liability for passive nonfeasance, plaintiffs must make a strong showing of assistance and scienter. Plaintiffs fail to do so.

Plaintiffs make no allegations that defendants’ relationship with ISIS was significantly different from their arm’s length, passive, and largely indifferent relationship with most users. (…) Second, plaintiffs provide no reason to think that defendants were consciously trying to help or otherwise participate in the Reina attack, and they point to no actions that would normally support an aiding-and-abetting claim.

The allegations plaintiffs make here are not the type of pervasive, systemic, and culpable assistance to a series of terrorist activities that could be described as aiding and abetting each terrorist act by ISIS.

Ninth Circuit Errors

The Ninth Circuit’s analysis obscured the essence of aiding-and-abetting liability. First, the Ninth Circuit framed the issue of substantial assistance as turning on defendants’ assistance to ISIS’ activities in general, rather than with respect to the Reina attack. Next, the Ninth Circuit misapplied the “knowing” half of “knowing and substantial assistance,” which is designed to capture the defendants’ state of mind with respect to their actions and the tortious conduct (even if not always the particular terrorist act). Finally, the Ninth Circuit appears to have regarded Halberstam’s six substantiality factors as a sequence of disparate, unrelated considerations without a common conceptual core. In doing so, the Ninth Circuit focused primarily on the value of defendants’ platforms to ISIS, rather than whether defendants culpably associated themselves with ISIS’ actions.

There is also one set of allegations specific to Google: that Google reviewed and approved ISIS videos on YouTube as part of a revenue-sharing system and thereby shared advertising revenue with ISIS. But the complaint here alleges nothing about the amount of money that Google supposedly shared with ISIS, the number of accounts approved for revenue sharing, or the content of the videos that were approved. Nor does it give any other reason to view Google’s revenue sharing as substantial assistance. Without more, plaintiffs thus have not plausibly alleged that Google knowingly provided substantial assistance to the Reina attack, let alone (as their theory of liability would require) every single terrorist act committed by ISIS. Pp. 29–30.