A top state official used AI to draft public policy. The AI hallucinated.
False citations in a policy document from Alaska's education commissioner show how AI misinformation can influence state policy.
This story was originally published by the Alaska Beacon.
Alaska’s top education official relied on generative artificial intelligence to draft a proposed policy on cellphone use in Alaska schools, which resulted in a state document citing supposed academic studies that don’t exist.
The document did not disclose that AI had been used in its conception. At least some of that AI-generated false information ended up in front of state Board of Education and Early Development members.
Policymakers in education and elsewhere in government rely on well-supported research. The commissioner’s use of false, AI-generated content points to a lack of state policy around the use of AI tools, when public trust depends on knowing that the sources used to inform government decisions are not only right, but real.
A department spokesperson first called the false sources “placeholders.” They were cited throughout the body of a resolution posted on the department’s website in advance of a state board of education meeting, which was held in the Matanuska-Susitna Borough this month.
Later, state Education Commissioner Deena Bishop said they were part of a first draft, and that she used generative AI to create the citations. She said she realized her error before the meeting and sent correct citations to board members. The board adopted the resolution.
However, mistaken references and other vestiges of what’s known as “AI hallucination” remain in the corrected document later distributed by the department — the version Bishop said was voted on by the board.
The resolution directs the Department of Education and Early Development, known as DEED, to craft a model policy for cellphone restrictions. The resolution published on the state’s website cited supposed scholarly articles that cannot be found at the web addresses listed and whose titles did not show up in broader online searches.
Four of the document’s six citations appeared to be studies published in scientific journals, but were fabricated. The journals the state cited do exist, but the titles the department referenced are not printed in the issues listed. Instead, the listed links lead to work on different subjects.
Ellie Pavlick, an assistant professor of computer science and linguistics at Brown University and a research scientist for Google DeepMind, reviewed the citations and said they look like other fake citations she has seen AI generate.
“That is exactly the type of pattern that one sees with AI-hallucinated citations,” she said.
A hallucination is the term used when an AI system generates misleading or false information, usually because the model doesn’t have enough data or makes incorrect assumptions.
“It’s just very typical that you would see these fake citations that would have a real journal, sometimes even a real person, a plausible name, but not correspond to a real thing,” she said. “That’s just like the pattern of citations you would expect of a language model — at least, we’ve seen them do something like that.”
The document’s reference section includes URLs that lead to scholarly articles on different subjects. Instead of “Banning mobile phones improves student performance: Evidence from a quasi-experiment” in the journal Computers in Human Behavior, the state’s URL led to “Sexualized Behaviors on Facebook,” a different article in the publication. A search for the cited title did not yield any results. The same was true for two studies the state said were to be found in the Journal of Educational Psychology.
After the Alaska Beacon asked the department to produce the false studies, officials updated the online document. When asked if the department used AI, spokesperson Bryan Zadalis said the citations were simply there as filler until correct information could be inserted.
“Many of the sources listed were placeholders during the drafting process used while final sources were critiqued, compared and under review. This is a process many of us have grown accustomed to working with,” he wrote in a Friday email.
Zadalis wrote the draft resolution, and Bishop “then put it into a generative AI platform just to see if it could be helpful in finding additional sources,” he said in an email on Monday. Ultimately, they found it was not helpful.
Bishop later said a first draft had been posted in error, and that it was subsequently corrected.
But vestiges of the AI-generated material are still found throughout the document Bishop said the board reviewed and voted on.
For example, the department’s updated document still refers readers to a fictitious 2019 study from the American Psychological Association to support the resolution’s claim that “students in schools with cellphone restrictions showed lower levels of stress and higher levels of academic achievement.” The new citation leads to a study that looks at mental health rather than academic outcomes. Notably, that study did not find a direct correlation between cellphone use and depression or loneliness.
While that claim is not correctly sourced in the document, there is a study showing that smartphones affect course comprehension and well-being — but among college students rather than adolescents. Melissa DiMartino, the researcher and professor at New York Tech who published that study, said that even though she has not studied the effects of cellphones on adolescents, she thinks her findings would only be amplified in that population.
“Their brains are still developing. They’re very malleable. And if you look at the research around smartphones, a lot of it is mirroring that of a substance addiction or any other type of addictive behavior,” she said.
She said the tricky part about actually studying adolescents, as the titles of the state’s false studies suggest was done, is that researchers must get permission from schools to study their students.
The department updated the document online on Friday, after multiple inquiries from the Alaska Beacon about the origin of the sources. The updated reference list replaced the citation of a nonexistent article in the more-than-100-year-old Journal of Educational Psychology with a real article from the Malaysian Online Journal of Educational Technology.
Bishop said there was “nothing nefarious” at play with the mistakes and that no discernible harm came from the incident.
The false citations do point to how AI misinformation can influence state policy, however — especially if high-level state officials use the technology as a drafting shorthand that causes mistakes that end up in public documents and official resolutions.
The statement from the education department spokesperson suggests the use of such “placeholders” is not unusual in the department. This kind of mistake could easily be repeated if those placeholders are typically AI-generated content.
Pavlick, the AI expert, said that the situation points to broader reckonings with where people get their information and the circulation of misinformation.
“I think there’s also a real concern, especially when people in positions of authority use this, because of this kind of degrading of trust that’s already there, right?” she said. “Once it comes out a few times that information is fake, whether intentionally or not, then it becomes easy to dismiss anything as fake.”
In this example, scientific articles — long accepted forms of validating an argument with research, data and facts — are in question, which could undermine the degree to which they remain a trusted resource.
“I think for a lot of people, they think of AI as the substitute for search in the same way, because in some ways it feels similar. Like, they’re at their computer, they’re typing into a text box, and they’re getting these answers,” she said.
She pointed to a legal case last year in which an attorney used an AI chatbot to write a filing. The chatbot cited fake cases that the lawyer then used in court, which led the judge to consider punishing the attorney. Pavlick said the errors in that case remind her of what happened in the DEED document.
She said it is concerning that the technology has become so widely used without a corresponding increase in public understanding of how it works.
“I don’t know whose responsibility this really is — it probably falls more on us, the AI community, right, to educate better, because it’s kind of hard to fault people for not understanding, for not realizing that they need to treat this different than other search tools, other technology,” she said.
She said boosting AI literacy is one way to avoid misuse of the technology, but there aren’t universally acknowledged best practices for how that should happen.
“I think a few examples of stuff like this, hopefully will escalate so that the whole country, the world, gets a little more interested in the outcomes of this,” she said.
Alaska Beacon editor’s note: This article has been updated to note that Zadalis wrote the resolution.
Route Fifty editor's note: This article's original headline was edited by Route Fifty.