‘More research needed’ on regulating human and AI relationships

Chatbots can provide companionship in certain situations, but speakers at a recent event said much more work is needed to understand and mitigate the risks, as well as educate users.
The 2013 movie “Her,” which follows a man who develops a relationship with his sentient artificial intelligence device, introduced the notion of humans relying on chatbots for companionship and complex relationships.
But the science fiction of the silver screen, which incidentally some argue was set in 2025, could be becoming a reality as people use generative AI for more than just productivity assistance, turning to it for life advice and even therapy. That creates a host of issues for policymakers, who are debating how best to regulate this nascent space and the relationships that may be developing between humans and AI.
“This issue, particularly in the public policy context, regulating chatbots and companion chatbots is about regulating the form and function of human software interaction,” Taylor Barkley, director of public policy at the Abundance Institute, a pro-tech nonprofit, said during a recent event hosted by the Information Technology and Innovation Foundation. “Humans and software are complex; they don't respect borders. Humans are everywhere, software is everywhere, and therefore it's easy to get regulation wrong, so that can lead to foreclosing all the positive things that AI can bring to humanity the world over.”
Some research has suggested that humans are unlikely to develop relationships with AI. A joint paper published in March by the MIT Media Lab and OpenAI, the company behind generative AI tool ChatGPT, found that emotional engagement is “rare” and that only a “small sub-population of users” had emotionally expressive interactions with the tool’s voice mode.
Panelists argued that more study is needed, however, especially to understand how young people are interacting with these tools.
“We need much, much more research … to really start to understand the differences between human to human social relationships and the ways that humans, especially young humans, develop their sense of attachment, healthy attachment in these relationships, healthy communication styles within these relationships, and can effectively communicate their needs and develop a sense of self,” said Melodi Dinçer, policy counsel at the Tech Justice Law Project.
“We're not at that place yet as a society where we fully understand how these somewhat well studied and understood aspects of social relationality tie into these technological interactions and interfaces.”
It means lawmakers must strike a difficult balance. New York Assemblyman Clyde Vanel, a Democrat who claimed in 2023 to have introduced the first AI-written bill, has since introduced various bills on the technology. He said he first became aware of AI companionship programs through a South Korean company that was piloting an effort in New York to provide companionship to the elderly through AI.
But the allegation that Florida teenager Sewell Setzer III killed himself after being convinced to do so by the Character.AI chatbot stopped many people in their tracks, including Vanel, who said he and his staff “saw how easy it was for the platform to be able to engage with someone, knowing that the person is underage, in inappropriate conversation.”
But Barkley said regulation can be tricky, because it is difficult to know how platforms are actually used. Some may not use certain generative AI platforms for companionship, for example, while others might find a way to do so. It “adds to the complexity here,” he said.
Dinçer said it may be easier to regulate “the companies themselves” rather than specific companion chatbots, as they operate in such a “gray space.”
Regulations cannot be the only answer, however. Vanel said every user must be educated on the role of AI in companionship so they can make the best decisions for themselves.
“Regulations won't address everything,” he said. “We have to make sure that we educate folks. We have to educate parents with their children's uses of these technologies; we have to make sure we educate seniors. We also have to make sure the government will be able to provide resources to organizations to be able to close that gap also. Government can't reach everyone well but there are great organizations out there that also can make sure that we provide the resources so that they can close that gap.”
Barkley said it is important for legislators to get the balance right between regulating this sector and allowing it to flourish. Preventing harm is a noble endeavor, he said, but that cannot come at the expense of preventing technology’s advancement.
“AI is a general-purpose technology, much like electricity,” Barkley said. “When I think about this issue in regulation, I think what if we had, for the sake of preventing harms of electricity in the early days, maybe a requirement that users of electric applications need to submit proper paperwork or be certified or have their age verified before plugging in an application, or factories need to go through an amount of paperwork or have requirements for what electric tools they can or can't use. It's complex, there are real issues out there, the benefits, the harms.”