Like most other providers of interactive computer services, such as websites or mobile applications that allow users to post or contribute their own content, Twitter has long prohibited its users, through its Terms of Service and community guidelines, from posting or communicating, among other things, defamatory, profane, infringing, obscene, unlawful, exploitative, harmful, racist, bigoted, hateful, or threatening content through its service. Yet for many years, Twitter declined to deactivate or take any other action against President Trump's account, despite tacitly acknowledging that his tirades might well violate these prohibitions, on the ground that the blusterous Tweets were nevertheless newsworthy. Facebook's Mark Zuckerberg has similarly stood by his company's decision not to fact-check politicians on its platform, citing free speech and democratic values and a reluctance to become an "arbiter of truth."
That changed last week. On Wednesday, citing its civic integrity policy, Twitter appended a label to two of President Trump's Tweets advising viewers to "Get the facts about mail-in ballots" via a hyperlink to a page of curated news articles. The Tweets had falsely claimed that California was "sending ballots to millions of people, anyone living in the state no matter who they are or how they got there," seemingly to undermine voter confidence in mail-in voting when, in fact, ballots were being sent only to registered California voters. Then on Friday, Twitter limited the visibility of a Trump Tweet about protestors in Minneapolis that contained the racially inflammatory trope "when the looting starts, the shooting starts," placing it behind a notice stating that the Tweet violated Twitter's rules against glorifying violence before allowing viewers to click through to see it. In neither case did Twitter remove or delete the Tweets.
On Thursday, President Trump channeled his ire at Twitter and other social networking platforms (namely, Facebook, Instagram, and YouTube), which he believes censor speech, particularly conservative speech, into a highly controversial executive order. The order's purpose is to undermine the immunity from civil liability found in Section 230 of the Communications Decency Act (CDA), 47 U.S.C. § 230(c), which protects providers and users of interactive computer services from liability for certain types of content posted or transmitted by users through those services, websites, and apps, and for any actions or harm resulting from that content, so long as the provider or user, as the case may be, does not exercise control over the content akin to that of its publisher or speaker. Specifically, the law says a provider or user of an interactive computer service will not be "treated as the publisher or speaker of any information provided by another information content provider" or be liable for "any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected, or any action taken to enable or make available to information content providers or others the technical means to restrict access to [information provided by another information content provider]." Without this liability shield, operators of websites or mobile apps that host user-generated content or facilitate communication between users would be exposed to civil liability for causes of action such as defamation, invasion of privacy, products liability and negligent design of the service, and failure to screen users' communications and protect users from one another, for the content they allow their millions of users to post, contribute, or transmit through their services, despite perhaps lacking the resources (monetary, technological, personnel, legal, or otherwise) to police all the user-generated content and communications flowing through their services.
The Executive Order on Preventing Online Censorship sets out the federal government's interpretation of CDA Section 230, stating that "the immunity should not extend beyond its text and purpose to provide protection for those who purport to provide users a forum for free and open speech, but in reality use their power over a vital means of communication to engage in deceptive or pretextual actions stifling free and open debate by censoring certain viewpoints." The order goes on to state that the safe harbor should not extend so far as to "provide liability protection for online platforms that—far from acting in 'good faith' to remove objectionable content—instead engage in deceptive or pretextual actions (often contrary to their stated terms of service) to stifle viewpoints with which they disagree." It directs the Federal Communications Commission (FCC) and the Federal Trade Commission (FTC) to propose new administrative regulations narrowing the scope of immunity under the CDA's safe harbor in a manner that would, among other things, subject to greater scrutiny any misalignment between these companies' stated policies and "good faith" enforcement of them, on the one hand, and the algorithms governing which content and users they do or do not promote, on the other. The administration framed this alleged discrepancy as a deceptive trade practice, again harkening back to the notion that social media platforms disfavor conservative voices and viewpoints (despite a lack of evidence of such bias).
The executive order will surely be challenged in court, and the long line of case law reinforcing the safe harbor, in the interest of protecting freedom of expression on the Internet and shielding service providers and their users from liability, along with recent lawsuits alleging political bias by social media platforms, will likely render the executive order unenforceable. Until then, however, the executive order has the force of law, and the FCC and FTC will commence their rulemaking processes. This policy shift is therefore something every website or mobile app provider whose service contains user-generated content or communications, and the lawyers who represent them, should pay close attention to.
Filed in: Digital Media, Legal Blog, Policy and Government Affairs, Social Media
June 4, 2020