Op-ed: Why social media CEOs like Zuckerberg and Dorsey secretly like fueling debate over fake news

U.S. President Donald Trump is seated prior to signing an executive order regarding social media companies in the Oval Office of the White House in Washington, U.S., May 28, 2020.

Jonathan Ernst | Reuters

The social media content debate has flared in recent days to an intensity we could scarcely have imagined. Earlier this week, President Trump took to Twitter to suggest that the use of mail-in ballots would inevitably bias the election, claims that Twitter deemed misinformation. Twitter, which has long had a mechanism for flagging content disseminated by public figures while keeping it online, used it against the president's tweets for the first time, labeling his claims about mail-in voting as misleading and linking to a page with more information on the subject.

The response from the administration was swift: an executive order targeting the content policies of internet firms. The president's new policy was attacked immediately by legal experts, many of whom argued that the order was, legally speaking, a mess: it attempted to override the longstanding Section 230 of the Communications Decency Act, which grants internet firms immunity for their content-takedown decisions.

The attention to content policy issues only redoubled Friday morning, when Trump and the official White House account tweeted that "when the looting starts, the shooting starts," in reference to the George Floyd protests. Twitter flagged the president yet again, this time on the grounds that the tweet glorified violence. The scrutiny over content issues does not appear to be waning anytime soon.

Online disinformation, hate speech and violence

That scrutiny over content policies is indeed important. The world is focused on passing content policy reforms aimed at keeping offending content off internet firms' platforms. As recent events in Europe, Sri Lanka, India, Brazil, and the United States clearly indicate (and as the genocidal conduct of Myanmar military officials perhaps most starkly illustrates), internet firms need to do more to keep their platforms free of disinformation, hate speech, discriminatory content and violence.

But we must not forget that the discussion of content policy regulation is a black hole of sorts in our present political climate. There are two reasons for this. The first is political conflict, particularly in the U.S., over how, and to what extent, we should maintain the national commitment to free speech on consumer internet platforms. Debates over First Amendment rights are rife with controversy in the U.S., where conservatives have raised deep concerns about the suspension of accounts belonging to such far-right figures as Richard Spencer, Jared Taylor, and Laura Loomer, who have spread the spirit of white supremacy through their tweets, posts, and videos.

Second, and I believe more critically, the standards applied in regulating hate speech, disinformation and other classes of offending content will vary greatly around the world. Cultural norms, ranging from the fairly liberal to the ultra-conservative, differ between countries and within them. It will be a monumental challenge for civil society, national governments, and international organizations to arrive at a set of norms that the internet should apply in perpetuity. This task will necessitate many hard discussions over many years, and even after all that deliberation, we may have no clear path toward harmonized global standards.

We must categorize policy discussions around hate speech, disinformation, terrorism, and the like as matters of content policy and treat them independently of a second class of regulations: economic regulations that take aim at the business models of Silicon Valley internet firms, focused on privacy, transparency, and competition.

Economic regulation versus content moderation

To be sure, both classes of future regulation, content policy and economic policy, are vital and equally important. Society desires peace, but our internet platforms sow chaos by enabling the spread of disinformation, sow hatred by enabling the spread of white supremacists' messaging, and sow violence by giving authoritarian officials a platform to spread racist conspiracies. Society desires fairness, but our internet companies systematically exploit the individual, artificially and unjustly stifle the vibrancy and dynamism of open markets, and make questionable decisions behind our backs.

But the public, politicians, and regulators the world over have focused primarily on content policy regulation. The reason is understandable: politics and public perception are focused on the here and now. At the same time, we cannot leave by the wayside the matter of economic regulation: policies that target the corrosive business model of the consumer internet firms.

We cannot allow our consternation over the Russian disinformation campaign, and over the industry's ill-placed adjudications about what should or should not stay online, to override the deeper concern at hand: that it is the business model of the consumer internet that engendered and maintains these harms. We cannot let our deliberations over content policy regulation stay our hand on the deeper-lying problems. To treat these problems and contain them at their source, we must not lash only at the leaves of the weed. We must poison its evil roots.

While it is important to address content in order to limit discrimination, protect elections, and save lives, these are largely administrative concerns that will ultimately be settled at the discretion of regional politics and culture. It is not an intellectual debate; drawing the lines of content acceptability is a matter of gauging the collective attitudes of users in a given locality. In the meantime, internet firms can hire content policy executives charged with exploring users' concerns and reflecting them in the platforms' governance.

Mark Zuckerberg and ‘arbiters of truth’

The consumer internet firms have decided that determining what constitutes offending content is a responsibility that will eventually have to be moved out of the industry's hands. Consider, for instance, Facebook CEO Mark Zuckerberg's seemingly benevolent proclamation that he does not wish to be the arbiter of truth.

Did he say this out of concern for humanity? The answer is likely no: he does not want to be the arbiter of truth because he does not want the weighty responsibility to rest on his and his company’s shoulders. Why should he take the blame for the Russians’ activity on his platforms and its impact on the 2016 U.S. presidential election when we as a society cannot even determine what kinds of content should be considered fake news? Whatever the negative externality, he wishes to pass on the responsibility of making such determinations.

But pass on to whom? That does not seem important to industry executives, so long as it is a third party — an entity external to the firm — that has the public’s trust. That third party could be a governmental agency, a civil society organization, an industry consortium, a review committee, or a nonprofit set up exclusively to resolve questions concerning offensive content. The organization should, in the industry’s view, simply have authority and the public’s trust in its local jurisdiction. It should be seen as the source of truth by the users of the platforms.

The industry knows that the many questions such arrangements to address content policy challenges would necessarily raise will take an eternity to resolve: who should have authority over content policy, how involved regional and national governments should be in the decision-making, how to prevent political influence, and, perhaps most critically, just where to draw the line.

Consider the situation in the United States, where Democrats and Republicans cannot even resolve to pass the commonsense policy advocated in the Honest Ads Act, which simply proposes transparency requirements for the provenance and dissemination of digital political ads. If we cannot find resolution on that issue after four years of deliberation, we are unlikely anytime soon to develop the content policy standards that Twitter, Google, Snapchat, Facebook, and Microsoft should follow.

The industry knows this well. Its leaders are aware that heated debates around content policy will persist for a very long time given our political circumstances, and that while they persist, we will be less focused on the more fundamental problem of economic regulation.

Their greatest fear is economic regulation. They fear true privacy, competition, and transparency standards that would force changes to their business practices, because such regulations, if earnestly designed to curb the exploitation of consumers, would cut deeply into their business models. That would jeopardize both their personal wealth and their shareholders' interests. Any curbing of the business model would significantly diminish the firms' profit margins; how deep that reduction goes would depend on how stringent the regulatory standards turn out to be.

Consumer internet executives will secretly encourage the public debates over fake news and hate speech; they will add fuel to these flaring deliberations for as long as they can, drawing our eyes away from the subtler, more fundamental problems at the heart of the industry's commercial regime. Facebook's proposed new "oversight board" is the perfect example: the board should be designed by the company to tackle not only questions of content policy violation but also economic overreach by the company itself. Therein lie society's true demons.

All this is to say that we must always prioritize the question of economic regulation; let the war room for strategizing the passage of comprehensive privacy law be our point d'appui. Let us not die on the battlefield of content policy regulation, which will entail a series of global debates unlikely ever to yield a clear unifying international norm, given varying political views even within countries like the United States. For the more our attention is diverted to the problem of content policy, the less we will focus on curing society of the virus that lurks beneath, and the more its malevolence will spread.

Dipayan Ghosh is co-director of the Digital Platforms & Democracy Project at the Harvard Kennedy School. He was a privacy and public policy advisor at Facebook and, before that, an economic advisor in the Obama White House. He is the author of "Terms of Disservice," a forthcoming book on the future of technology from the Brookings Institution. This commentary has been excerpted from the book and adapted for publication.
