Why Meta, X and TikTok face more pressure from Europe than the U.S. on Israel-Hamas war disinformation

Thierry Breton, a former CEO of French IT consulting firm Atos, is seen as a key architect of the European Union’s digital reforms.

Days after the Israel-Hamas war erupted last weekend, social media platforms like Meta, TikTok and X (formerly Twitter) received a stark warning from a top European regulator to stay vigilant about disinformation and violent posts related to the conflict.

The messages, sent by European Commissioner for the Internal Market Thierry Breton, warned that failure to comply with the bloc's rules on illegal online content under the Digital Services Act could affect the companies' businesses.

“I remind you that following the opening of a potential investigation and a finding of non-compliance, penalties can be imposed,” Breton wrote to X owner Elon Musk, for example.

The warning goes beyond the kind that would likely be possible in the U.S., where the First Amendment protects many kinds of abhorrent speech and bars the government from stifling it. In fact, the U.S. government's efforts to get platforms to moderate misinformation about elections and Covid-19 are the subject of a current legal battle brought by Republican state attorneys general.

In that case, the AGs argued that the Biden administration was overly coercive in its suggestions to social media companies that they remove such posts. An appeals court ruled last month that the White House, the Surgeon General's office and the Federal Bureau of Investigation likely violated the First Amendment by coercing content moderation. The Biden administration is now waiting for the Supreme Court to decide whether the lower court's restrictions on its contact with online platforms will stand.

Based on that case, Electronic Frontier Foundation Civil Liberties Director David Greene said, “I don’t think the U.S. government could constitutionally send a letter like that,” referring to Breton’s messages.

The U.S. has no legal definition of hate speech or disinformation because neither is punishable under the Constitution, said Kevin Goldberg, First Amendment specialist at the Freedom Forum.

“What we do have are very narrow exemptions from the First Amendment for things that may involve what people identify as hate speech or misinformation,” Goldberg said. For example, some statements one might consider to be hate speech might fall under a First Amendment exemption for “incitement to imminent lawless violence,” Goldberg said. And some forms of misinformation may be punished when they break laws about fraud or defamation.

But the First Amendment means some provisions of the Digital Services Act likely wouldn't be viable in the U.S.

In the U.S., “we can’t have government officials leaning on social media platforms and telling them, ‘You really should be looking at this more closely. You really should be taking action in this area,’ like the EU regulators are doing right now in this Israel-Hamas conflict,” Goldberg said. “Because too much coercion is itself a form of regulation, even if they don’t specifically say, ‘we will punish you.'”

Christoph Schmon, international policy director at EFF, said he sees Breton's calls as "a warning signal for platforms that the European Commission is looking quite closely at what's going on."

Under the DSA, large online platforms must have robust procedures for removing hate speech and disinformation, though they must be balanced against free expression concerns. Companies that fail to comply with the rules can be fined up to 6% of their global annual revenues.

In the U.S., by contrast, a government threat of penalties over content could itself be legally risky.

“Governments need to be mindful when they make the request to be very explicit that this is just a request, and that there’s not some type of threat of enforcement action or a penalty behind it,” Greene said.

A series of letters sent Thursday by New York Attorney General Letitia James to several social media companies exemplifies how U.S. officials may try to walk that line.

James asked Google, Meta, X, TikTok, Reddit and Rumble for information on how they’re identifying and removing calls for violence and terrorist acts. James pointed to “reports of growing antisemitism and Islamophobia” following “the horrific terrorist attacks in Israel.”

But notably, unlike Breton's messages, James' letters do not threaten penalties for failing to remove such posts.

It’s not yet clear exactly how the new rules and warnings from Europe will impact how tech platforms approach content moderation both in the region and worldwide.

Goldberg noted that social media companies have already dealt with restrictions on the kinds of speech they can host in different countries, so it's possible they will choose to confine any new policies to Europe. Still, the tech industry has in the past applied policies like the EU's General Data Protection Regulation (GDPR) more broadly.

It’s understandable if individual users want to change their settings to exclude certain kinds of posts they’d rather not be exposed to, Goldberg said. But, he added, that should be up to each individual user.

With a history as complicated as that of the Middle East, Goldberg said, people “should have access to as much content as they want and need to figure it out for themselves, not the content that the government thinks is appropriate for them to know and not know.”
