Lawmakers guilty of ‘do something disease’ with social media bans
While experts agree that something must be done to prevent the worst effects of the platforms on minors, they caution that the current efforts are technologically unsound and undermine civil liberties.
State efforts to regulate social media and mitigate its worst effects on minors are well-meaning, but will ultimately fail because they violate civil liberties and lack sound technology to back them up, experts said this week.
Already, Arkansas, Louisiana and Utah have passed measures mandating various controls on social media platforms, including age verification requirements, a curfew to prevent minors from using the platforms at night and a prohibition on collecting data about young users.
But those pieces of legislation—as well as a separate bill in Montana to ban TikTok in the state—have received heavy criticism for infringing on the First Amendment and for relying on technology that is neither advanced nor secure enough to properly verify users’ ages.
Panelists at an event this week on age verification technology for social media agreed that while something must be done to protect young people from social media’s most harmful content, these state approaches are ineffective.
The legislators behind these laws are guilty of “do something disease,” said Nicole Saad Bembridge, associate counsel at the open internet advocacy group NetChoice. She said these lawmakers want to appear to be acting, but like prior attempted bans on video games or curbs on certain types of music, their approach is doomed to fail.
“We know that hundreds of millions of people just in the United States alone use social media to share and receive political speech, to express themselves artistically and even to engage in religious exercise,” Bembridge said at the event hosted by the Information Technology and Innovation Foundation. “This is the core of what the First Amendment protects, and the government can't make those activities contingent on willingness to share personal data.”
NetChoice is suing Arkansas, alleging in its lawsuit that the state’s law violates the U.S. Constitution by tying users’ First Amendment rights to the sharing of private information with the state government. Legal actions against Montana’s law banning TikTok have made similar arguments. Those lawsuits have also accused the state of violating the Constitution’s provisions against so-called “bills of attainder,” which declare a party or business is guilty of a crime without allowing them to defend themselves in court.
NetChoice’s lawsuit against Arkansas also alleges that the law threatens state residents’ safety and privacy by using third-party software to “track, verify and store information on minors.” The group said the law means Arkansans would have to hand over personal information to a company they have no relationship with, similar to the one Louisiana uses to verify users’ ages before they can view adult websites.
Academics have started to wrestle with the question of how social media companies can determine a user’s age safely and effectively. A recent policy paper from The Center for Growth and Opportunity at Utah State University acknowledged that verifying age online is “hard,” and that “no perfect solution exists.”
The paper calls on regulators to be specific about what is expected of platforms when it comes to verifying users’ ages and how they should do so securely. It also says that bodies like the National Institute of Standards and Technology and the Federal Trade Commission should be tasked with releasing guidance on how age verification should work, including instituting a voluntary certification program for vendors that verify ages.
Scott Brennen, the head of online expression policy at the Center on Technology Policy at the University of North Carolina at Chapel Hill and a co-author of the paper, said the lack of specificity in the legislation leaves social media companies and vendors in the dark about what is expected of them. For example, language that calls for companies to use a “commercially reasonable method” to verify users’ ages is wholly inadequate because that phrase is not defined in the bill, or anywhere else.
“I think the big problem here is that a lot of these bills don't seem to acknowledge that this is actually kind of a hard problem, and that there are these trade-offs that we need to consider,” Brennen said. “[In] a lot of the bills, there's minimal reference to the actual techniques that platforms should be [using] to do age verification.”
State laws with language “passing the buck” to a commission or working group to work out how to verify ages are inadequate, Brennen said.
The impact of social media on young people has come into sharper focus in recent years. U.S. Surgeon General Vivek Murthy issued a health advisory in 2021 warning that the platforms have helped exacerbate the current teen mental health crisis. Brennen and Bembridge echoed other experts’ calls that improving teens’ digital literacy could be a better approach.
“We need to find a way forward here, whether it’s age verification or not,” Brennen said.
But, Bembridge noted, “civil liberties cannot be the price of solving these problems.”