In Taiwan, over 80 percent of adults use social media. This widespread usage not only underscores how social media platforms have become the predominant forum for information exchange, but also highlights the susceptibility of Taiwan's democratic ecosystem to Chinese influence.
Although disinformation and misinformation remain an ongoing challenge, particularly given the risk of encroaching on free speech, Taiwan has maintained one of the freest online environments in Asia. In approaching social media regulation, Taiwan has built a sound democratic infrastructure that hinges on collaboration between civil society and the technology sector.
In the U.S., social media regulation remains an unresolved question. Over 80 percent of adults use YouTube, and nearly 70 percent use Facebook. Last week, in reviewing NetChoice v. Paxton and Moody v. NetChoice, the U.S. Supreme Court weighed in on whether state laws regulating big tech social media companies violate the First Amendment's free speech protections. The high court's review of the Texas and Florida social media restrictions offers insight into American regulatory culture and the existing protections shielding big tech from civil liability. Notably, the legal challenge underscores the urgency of collaboration between civil society and the technology sector, a sound democratic infrastructure that Taiwan has effectively established.
Editorial Judgment or Censorship
In Texas and Florida, laws passed in 2021 restrict large social media platforms from moderating users' speech. For instance, the Florida law prevents platforms from banning political candidates or limiting the exposure of their posts. Both laws require platforms to provide individualized explanations when posts are removed or restricted. The issue at hand is whether the First Amendment's free speech protections prohibit these restrictions on content moderation.
The thorny issue of content moderation reflects an embittered partisan divide in regulatory culture. Proponents of the Florida and Texas laws argue that social media companies' content moderation is a form of political censorship. In multiple states, Republican elected officials are concerned that liberal-leaning Silicon Valley seeks to censor conservative ideology under the guise of removing hate speech or misinformation. Critically, Florida and Texas passed their sweeping social media regulations after former President Donald Trump was banned from social media sites over his false claim that the 2020 election was fraudulent, a claim that in turn helped fuel the Jan. 6, 2021 Capitol riot. Both states characterized their laws as efforts to address discrimination by social media platforms.
Florida and Texas contend that their laws do not implicate the First Amendment because they regulate conduct rather than content. They urge the high court to differentiate between a “selective speech” host with editorial discretion and a “common carrier” host. In their view, social media platforms are like utility or telecommunications companies and should therefore be subject to government regulation. Because platforms merely host content in the new “public square,” unlike a newspaper that publishes its own content, the states argue, they should not discriminate based on viewpoint and can be compelled by common-carrier regulation to host all users' content. The states further contend that requiring platforms to provide individualized notice and explanations for content moderation is consistent with Supreme Court precedent, and that the requirement is far from onerous because platforms could adopt an automated process.
The Scope of Free Speech and Section 230
Tech groups, on the other hand, argue that the state laws violate the First Amendment because they do not simply regulate conduct but directly interfere with content-specific editorial decisions. Because social media platforms are privately owned, tech groups argue, the First Amendment protects their editorial discretion to decide what content is objectionable.
The tech groups' assertion harks back to Section 230 of the 1996 Communications Decency Act. Credited as the 26 words that “created the Internet,” Section 230 protects “interactive computer services,” including social media platforms, from being treated as “publishers” or “speakers” of any content they distribute, so long as they are not the content creators. These services are also given immunity for moderating user-generated content. This protection from civil liability has enabled the development of modern Internet services and applications, from social media to advanced search engines, and remains in force today.
In the face of voluminous content, social media platforms argue that they must make billions of editorial decisions daily, chiefly determining whether content should be removed and how the remaining content should be presented. The state laws, they argue, directly contravene free speech because they force platforms to disseminate all speech, even when speakers clearly violate the sites' terms of use.
Just as the government cannot force a newspaper to publish a political candidate's rejoinder to criticism, tech groups reasoned, the First Amendment protects the right to make editorial decisions. States cannot countermand private companies' editorial decisions about what is published or restricted on their sites.
Further, tech groups state that there is no legal tradition of categorizing a private party that exercises editorial judgment as a common carrier. The state laws do not regulate conservative social media sites like Parler, Gab, and Truth Social, demonstrating that they are not common-carrier laws, which would apply to all platforms. Finally, tech groups contend that providing individualized explanations and disclosures for content moderation is a heavy burden that would not stand if imposed on a newspaper: a newspaper editor is not obligated to explain every decision behind each rejected piece of content.
Taiwan as a Case Study and the Future U.S. Policy Outlook
The Supreme Court's review of the two social media laws reveals the justices' skepticism toward highly restrictive regulations. In addition, the First Amendment and Section 230 may together serve as a collective shield against civil liability for big tech.
Critically, the Biden administration's amicus (“friend of the court”) brief supports the tech groups' right to moderate content, while noting that social media sites may eventually be subject to regulation. However, without a universally established definition of disinformation, or of how to identify it, attempts at content moderation would default to self-regulation. Moreover, the Supreme Court cases reveal how big tech may be categorized in the U.S. for the foreseeable future: as private platforms exercising editorial judgments protected by the First Amendment. The question is whether big tech's self-regulation is sufficient to stem the tide of mis- and disinformation, especially given that social media companies profit more from engagement with posts circulating false content.
Under a partisan regulatory culture wary of both big tech's liberal bias and government overreach, Taiwan's approach to social media serves as a case study of an effective solution in lieu of extensive regulation. Social media sites' undeniable role as brokers of information exchange calls for a commensurate requirement of transparency in the public interest. Taiwan has formed successful partnerships among government, third-party fact-checkers, and social media platforms to ensure accountability and improve transparency.
A key example is LINE's Digital Responsibility Plan, a public-private initiative launched in 2019. One of the most widely used messaging apps in Taiwan, with 21 million monthly active users, LINE incorporated LINE Fact Checker, a chatbot that lets users submit links or statements and responds with analyses and verifications against content fact-checked by nonpartisan civil society actors such as CoFacts and MyGoPen. This direct coordination between third-party fact-checkers and social media platforms allows for an automated transparency that mitigates accusations of political bias against the privately owned platforms themselves. For public affairs concerning national security, public health, and disaster prevention, LINE also collaborates with Taiwan's Executive Yuan, verifying information against the Executive Yuan Real-Time News Clarification page.
Private social media platforms may insist on undisclosed content moderation processes. Yet digital responsibility plans like LINE's reflect a willingness among social media sites to take at least a clear step toward self-regulation. Another positive example is Facebook's 2020 partnership with MyGoPen in Taiwan, which centers on content verification and data literacy education for the public. The role of civil society actors in maintaining big tech's accountability and transparency should not be underestimated. Where U.S. lawmakers fall short in technical expertise, third-party civil society actors with advanced data literacy, notably independent of politicization, could take up the objective role of combating false information on social media. Scheduled to be issued by June of this year, the Supreme Court's ruling will help determine the direction of a much-needed regulatory framework.
This article was previously published in CommonWealth Magazine on March 8, 2024.