4 minute read | June 13, 2023
The U.S. Supreme Court recently decided two cases that left untouched Section 230 of the Communications Decency Act, which provides online platforms immunity from claims based on content that their users create and share on those platforms.
One decision raised the bar for aiding and abetting liability and even included language that may strengthen platforms’ defenses against claims challenging the algorithms that underlie their services, such as those that automatically queue and recommend content. Yet changes are brewing that may affect the protections Section 230 provides down the line.
Here’s what you need to know about the cases’ implications for online platforms and the changes that may loom on the horizon.
The Supreme Court heard cases brought against specific internet platforms by relatives of people killed in ISIS terrorist attacks. They sought to hold internet and social media companies liable for allegedly hosting and recommending content posted by the terrorist group and its supporters that radicalized users, spread propaganda, and encouraged and allowed individual terrorists to carry out the attacks.
In Twitter, Inc. v. Taamneh, the victims’ relatives argued that, by recommending ISIS’s harmful and dangerous content, the platforms were facilitating the attacks and aiding and abetting ISIS in violation of the Anti-Terrorism Act.
The Supreme Court rejected that argument on the grounds that the platforms:
- Treated ISIS like any other user: at “arm’s length, passive, and largely indifferent” and “agnostic as to the nature of the content.”
- Deployed content-neutral algorithms. The court observed that this was the equivalent of standing back and watching, which is not direct or intentional enough to constitute aiding and abetting.
Importantly, the Supreme Court also said that knowing about and failing to stop users from posting offending content did not amount to aiding and abetting terrorism.
The justices punted the Section 230 challenge in Gonzalez v. Google LLC back to the lower court, instructing it to decide the case consistent with the ruling in Taamneh.
The decision in Taamneh included language that internet platforms may use to defend against lawsuits over how their algorithms suggest content.
Writing for the court, Justice Clarence Thomas compared online platforms to other companies that provide communication services: “It might be that bad actors like ISIS are able to use platforms like defendants’ for illegal – and sometimes terrible – ends,” the opinion read. “But the same could be said of cell phones, email, or the internet generally. Yet, we generally do not think that internet or cell service providers incur culpability merely for providing their services to the public writ large.”
Because the Court did not address Section 230, the statute continues to provide internet services and platforms immunity from claims that treat them as publishers or speakers of third-party content.
The rulings may add fuel to action that is already underway in other arenas in the name of online safety. Here are three ways that could play out:
1. Congress May Act
These two decisions may convince Congress to take on Section 230 itself. Members of Congress from both parties are already pushing criminal and civil online safety legislation that would impose restrictions on internet platforms and social media companies. The debate is unfolding against the backdrop of growing concern in Congress and at the Justice Department about the safety of minors online.
2. State Legislators Also Are Advocating for Change
Lawmakers in several states may pass their own online safety legislation, and not just in the arena of content moderation. At least three states have passed online safety bills that will take effect in the next year.
3. Court Battles Continue – Possibly With Fresh Arguments
Some plaintiffs may seize on language from the Supreme Court opinions to argue, for example, that a company could be liable if it “consciously and selectively chose to promote content provided by a particular terrorist group.” More cases challenging Section 230 from different angles are sure to follow.
Meanwhile, other cases are wending their way through the courts, including two involving NetChoice LLC, a coalition of trade associations, eCommerce businesses, and online consumers. The outcomes could inform how the First Amendment, Section 230, and the burgeoning legislation regulating content moderation interact.
Even though the U.S. Supreme Court passed on the opportunity to address Section 230 this time, this topic is far from settled. This turbulent landscape presents risks and challenges for internet companies looking to build their businesses while protecting users and staying on the right side of the law.