Opinion

Social Media Is a Mess. Government Meddling Would Only Make It Worse.

This term, the Supreme Court will reconsider America’s laissez-faire approach to regulating the internet, and in doing so it will address novel and vital First Amendment questions. Can states stop social media sites from blocking certain content? Can the federal government pressure platforms to remove content it disagrees with?

In each of these cases, the Supreme Court must decide whether the government can interfere with private companies’ editorial judgments, and I hope the justices will articulate principles clear enough to endure and continue to protect online speech. Despite the unprecedented societal challenges created by the internet, the court should not back away from its firm stance against most government intervention.

In 1997, when fewer than one in five U.S. homes had an internet connection, the court rejected the government’s request to narrow the internet’s First Amendment protections as it had done for television and radio broadcasters. In striking down much of the Communications Decency Act, Justice John Paul Stevens recognized the internet as “a vast platform from which to address and hear from a worldwide audience of millions of readers, viewers, researchers, and buyers.” At the same time, the court left alone Section 230 of the law, which immunizes online platforms from liability for user-generated content. Section 230, combined with strong First Amendment protections, left courts and government agencies with little control over platforms’ content decisions.

Since then, many on the left and the right have questioned that approach, as social media providers and other centralized platforms have gained increasing power over everyday life. Some conservatives, angry at what they view as politically biased moderation decisions, championed the passage of laws in Florida and Texas that limit platforms’ discretion to block user content.

Some liberals, upset that the companies have left up or algorithmically promoted too much constitutionally protected but harmful content such as health misinformation and hate speech, have pressured the companies to become more aggressive moderators.

The Supreme Court will weigh in on the constitutionality of both efforts by this summer. The U.S. Court of Appeals for the 11th Circuit struck down much of the Florida law, ruling that limiting platforms’ discretion over their content is likely unconstitutional. Judge Kevin Newsom, a Trump appointee, wrote last year that “social-media platforms’ content-moderation activities — permitting, removing, prioritizing and deprioritizing users and posts — constitute ‘speech’ within the meaning of the First Amendment.” But later in the year, the Fifth Circuit rejected that reasoning and upheld the Texas law, which similarly limits platforms’ ability to remove user content. (The Florida and Texas laws have been combined into one case.)

In the other case headed to the Supreme Court, the Fifth Circuit concluded that efforts by the White House, the surgeon general and some federal agencies to encourage social media companies to remove constitutionally protected content, such as alleged Covid misinformation and claims of election fraud, likely violated the First Amendment. Officials, the court found, repeatedly “coerced the platforms into direct action via urgent, uncompromising demands to moderate content.”

If the Texas and Florida laws are upheld, 50 state legislatures could inject their political preferences into content moderation, potentially leading to the absurd and unworkable result of different content moderation rules based on an internet user’s home state. And if the Supreme Court gives wide latitude for the government to threaten platforms if they don’t remove constitutionally protected content, such “jawboning” could lead to frequent and indirect government censorship. While the court should allow the government to respond to harmful content — something at which it has not been terribly effective in recent years — it should draw a clear line that prohibits the use of state power to coerce censorship.

I understand the temptation to overhaul Justice Stevens’s approach. The internet is far more pervasive than it was in 1997, so any problems with the internet today have a larger impact. But greater government control of the internet is a cure worse than the disease.

Solutions typically require agreement on the problem. And we don’t have that. Some people think that platforms moderate far too aggressively, and others think that they are not aggressive enough. Under the hands-off approach, platforms are largely free to develop their own moderation policies, and they’ll be rewarded or punished by the free market.

And even if everyone could agree on the One Problem — the shortcoming that causes many to believe that the internet is responsible for society’s problems — diluting First Amendment protections would make things worse. As seen in the many countries where governments have broader power to regulate “fake news,” at some point a judge or elected official will take advantage of power over online speech to suppress dissent or stifle debate. For instance, in 2018, Bangladesh passed the Digital Security Act, which gave the government greater power to prosecute people who spread falsehoods. In March, a reporter for the country’s largest daily newspaper was arrested and jailed for nearly a week for allegedly reporting false information in an article about the nation’s cost of living. (The act was scrapped earlier this year.)

Rather than immediately seeking to prohibit online misinformation, we should examine why people are so eager to believe it. When people lack trust in their government and other institutions, they may be more inclined to accept falsehoods.

Other countries, like Finland, have invested heavily in media literacy programs starting in primary school, equipping citizens with the tools to better evaluate the veracity of online claims. Some research suggests that these efforts have paid off, with such countries showing higher levels of media literacy. It is also hard to consider the rise of online misinformation without looking at the rapid decline of regional and local media outlets.

Revitalizing trusted news sources is a tougher task than allowing the government to arbitrarily forbid “misinformation,” but it avoids the abuse and censorship that we have seen around the world. And decentralized online services such as Mastodon, which give users a greater say over the level of content moderation that they receive, address many of the concerns about concentrating power in the hands of a few large internet companies.

We should be under no illusions that such solutions are anything close to a panacea for the many concerns about the modern internet. But even the most stringent regulations fall short of full solutions, and they often worsen the harms they aim to address. If, for instance, people are concerned about misinformation fueling the spread of authoritarianism, weakening the First Amendment should not be at the top of their agenda. A fire prevention plan should not call for the elimination of the fire department. And widespread government censorship would not lead to greater trust in institutions.

Messy problems arise from speech, and many will continue to exist with or without government intervention. As Justice Stevens recognized, regulation “is more likely to interfere with the free exchange of ideas than to encourage it.” I hope that his successors share this wisdom.

Jeff Kosseff is a senior legal fellow at The Future of Free Speech Project and the author of the new book “Liar in a Crowded Theater: Freedom of Speech in a World of Misinformation.”
