Supreme Court appears wary of restricting government action to control misinformation
The justices' skepticism in the case has state and local election officials breathing a little easier as they worry about misinformation efforts ahead of November elections.
A majority of U.S. Supreme Court justices on Monday appeared highly skeptical of the argument by Louisiana's solicitor general that the federal government should be barred from asking social media platforms to remove false information, a practice he said violated the free speech rights of those who made the posts.
During oral arguments in the case, Murthy v. Missouri, both conservative and liberal justices appeared to doubt that federal officials had acted unlawfully. Their skepticism may help state and local election officials, who have been following the case closely, breathe a little easier. They worry that a ruling in favor of the plaintiffs could hamper efforts to combat misinformation on social media, particularly from artificial intelligence, ahead of a critical and contentious election.
“We go into this election cycle expecting bad actors to use misinformation turbocharged through AI to divide, deceive and deter voter participation throughout our country,” said Michigan Secretary of State Jocelyn Benson at a U.S. Senate hearing on election security last week. “I'm not just talking about deepfakes,” she said, referring to the use of technology to impersonate elected officials. During New Hampshire’s presidential primary last month, a robocall impersonating President Joe Biden discouraged voters from going to the polls.
“AI will also make it easier to mislead voters about the voting process or even conditions at a polling place,” she continued. “Imagine a voter receiving a text saying there are long lines at a precinct or another voter seeing a social media post showing a polling location changing because of flooding. All of these can be false and all of these could deter participation.”
The case before the high court involves a lawsuit filed by the attorneys general of Louisiana and Missouri and five people who said their free speech rights were violated when social media platforms like Facebook deleted their posts complaining about mask and vaccine requirements during the pandemic.
While the government never required that the posts be removed, Louisiana Solicitor General Benjamin Aguiñaga argued that federal officials had contentious conversations with social media companies over the misinformation. Although there was no evidence of coercion, he argued, the companies could have feared that the government would take action against them.
“The government has no right to persuade platforms to violate Americans' constitutional rights. And pressuring platforms in back rooms shielded from public view is not using the bully pulpit at all,” Aguiñaga argued. “That's just being a bully.”
However, justices at times appeared confused by Aguiñaga’s argument, asking how the government violated First Amendment rights when it never forced the platforms to remove the posts. Justices also asked whether the government has a right to push platforms to remove posts that could endanger others.
State and local elections officials have been following the case since U.S. District Judge Terry Doughty ruled last July that the federal government had “apparently engaged in a massive effort to suppress disfavored conservative speech.”
Doughty barred the White House and several other government agencies, including the FBI, from communicating with social media platforms for “the purpose of urging, encouraging, pressuring or inducing in any manner the removal, deletion, suppression or reduction of content containing protected free speech.”
Doughty’s ruling caused confusion about whether it also applied to state and local election officials because the ruling also extended to those “acting in concert with” the federal government, said Gowri Ramachandran, deputy director of the Brennan Center for Justice's Elections & Government Program.
But the Fifth Circuit Court of Appeals, which largely affirmed Doughty’s ruling, removed the “acting in concert with” provision, eliminating any direct implications for state and local governments.
Still, election officials have remained concerned that federal intelligence agencies would be barred from sharing information with the platforms about misinformation efforts. And indeed, The Washington Post reported last November that the federal government has stopped warning some social media companies about foreign disinformation campaigns, even though the injunction has been stayed while the Supreme Court hears the case.
In an amicus brief filed on behalf of current and former state elections officials, the Brennan Center for Justice argued that not allowing the federal government to communicate with social media companies could allow disinformation to disrupt the right to vote.
“False information about elections has proliferated on social media in recent years, leading to voter confusion and sowing mistrust in the public about the integrity of the nation’s elections,” the brief said.
For the government’s part, Brian Fletcher, principal deputy solicitor general for the Department of Justice, argued that the government had not coerced the platforms into removing posts. He also said the circuit court decision not only applied to removing posts but also the government sharing information about potential threats with the platforms. He pointed to communications sent by the FBI that said, “For your information, it has come to our attention that the following URLs or email addresses are being used by malign foreign actors like Russian intelligence operatives to spread disinformation on your platforms. Do with it what you will.”
Fletcher said the federal government does not disagree with the circuit court barring it from “coercing” the platforms to remove posts. However, he said officials didn’t cross that line in urging platforms to act against misinformation as the nation dealt with the pandemic. The government never threatened any action, leaving it up to the companies to decide.
The court, he said, should “reaffirm that government speech crosses the line into coercion only if, viewed objectively, it conveys a threat of adverse government action. And because no threats happened here, the court should reverse [the lower court’s ruling].”
If the Supreme Court were to rule in favor of the state attorneys general, Fletcher continued, “the FBI would have to think very hard about whether it could continue to [send communications about disinformation].”
Justices also questioned whether the government has a right to intervene with social media posts when, like those opposing the vaccine or mask mandates, the posts could endanger others.
“Suppose someone started posting about a new teen challenge that involves teens jumping out of windows at increased elevations. And kids all over the country start doing it,” Justice Ketanji Brown Jackson asked Aguiñaga. “There is an epidemic. Children are seriously injuring or even killing themselves. Is it your view that the government authorities could not declare those circumstances of public emergency and encourage social media platforms to take down the information that is instigating this problem?”
“Some might say that the government actually has a duty to take steps to protect the citizens of this country,” Jackson continued. “And you seem to be suggesting that that duty cannot manifest itself in the government.”
Chief Justice John Roberts picked up Jackson’s example and asked whether it would violate the Constitution if the government sent the platforms a message saying, “We encourage you to stop that.”
Aguiñaga responded that he believes, as a policy matter, the government might want to intervene. “But the moment that the government identifies an entire category of content that it wishes to not be in the modern public sphere,” he said, “that is a First Amendment problem.”
This is the second case before the Supreme Court this term that could affect the ability of social media platforms to remove content. The other involves challenges to laws passed by Florida and Texas in 2021 to protect residents from what lawmakers decried as the censorship of conservative viewpoints on social media. The laws were designed to prevent what Texas Gov. Greg Abbott called a “dangerous movement … to silence conservative viewpoints and ideas.”
The Murthy v. Missouri case also drew concerns from senators at the election security hearing last week. “We are potentially less protected as we go into 2024 in terms of the security of our elections than we were during 2020,” said Sen. Mark Warner, a Virginia Democrat and chairman of the intelligence committee. “That's a pretty stunning fact.”
In part, he said, it’s because foreign adversaries to the U.S. learned from the Russian interference in the 2016 elections “how cheap and effective it is to interfere in our elections” through social media.
Warner also said, “AI brings, at a scale and speed, tools to interfere, misinform, disinform.”
At the Senate hearing, Benson urged Congress to pass a bipartisan bill that would bar the use of deepfakes in federal elections. Michigan, she said, requires political ads that use AI to disclose that the technology is involved. The state has also made it illegal to “knowingly and deceptively” spread deepfakes.
While Warner said he appreciates the work of the secretaries of state, he agreed that more action is needed because efforts to influence elections “can happen at a scale and speed […] that there's no way you can keep up with.”
Sen. Michael Bennett, a Colorado Democrat and a member of the intelligence committee, said the lower court’s decision in Murthy v. Missouri has had “a disastrous effect on our ability to combat foreign influence operations and protect the integrity of our elections.”
Kery Murakami is a senior reporter for Route Fifty, covering Congress and federal policy. He can be reached at kmurakami@govexec.com. Follow @Kery_Murakami