AI tools and student data: Teachers can endanger kids’ privacy without robust training
Artificial intelligence is helping teachers save time. But popular AI platforms can also significantly endanger student privacy. The risks are prompting school districts and others to respond.
This story was originally published by Chalkbeat. Sign up for their newsletters at ckbe.at/newsletters.
High school teacher Armando Martinez is an enthusiastic adopter of artificial intelligence in his teaching.
For his English and media literacy classes at a charter school in Albuquerque, New Mexico, he frequently uses ChatGPT to brainstorm ideas for lesson plans and to write short poems or song lyrics about the topics he’s teaching.
Just as important, though, are the many tasks for which he intentionally does not use AI. These include creating Individualized Education Programs and grading student work. A main reason he doesn’t lean on AI for those purposes is to protect student privacy: he knows he has to be careful.
“I don’t put any student information into it, I just use it to perform mundane tasks,” Martinez said. “As teachers, our biggest resources are time and energy. And we never have enough of them. For me, AI is a tool to streamline some processes.”
As AI companies have proliferated, many have offered services like AI-powered tutors for students, and AI chatbots and platforms that serve as teaching assistants. But many of them do not sufficiently protect students’ personal data.
Depending on what data teachers provide to the model, they may run afoul of student privacy laws. Without guidance from their districts or elsewhere, teachers who experiment with AI could lack crucial understanding of these platforms’ privacy risks, and expose personal student information in ways that could have repercussions for years.
Earlier this year, the Los Angeles Unified School District rolled out Ed, an AI-powered assistant for students, only to discontinue it quickly after the company that developed it ran into financial trouble. The abrupt shutdown left parents and advocates without answers about what happened to the student data the platform held.
These challenges aren’t wholly new. In recent years, teachers have been investigated for accidentally sharing students’ grades on social media, and even fired for sharing students’ information via email.
“Many of the risks posed by AI are similar to the ones other ed-tech tools already presented, but on a much larger scale,” said Calli Schroeder, senior counsel and global privacy counsel at the Electronic Privacy Information Center.
At the same time, many school districts appear to have been slow to prepare teachers for the new learning environment. In an Education Week survey of 1,135 educators conducted in September and October, 58% said they had received no training on AI. That represents some progress: in a previous edition of the survey last spring, only 29% reported having had any training.
The risks vary widely depending on both the platform and how teachers use it. The most common AI platforms, such as OpenAI’s ChatGPT and Google’s Gemini, are built by big technology companies and were not designed specifically for use in education.
Other tools created specifically for education, like MagicSchool or Khan Academy’s Khanmigo, have more safeguards in place, yet still depend heavily on teachers being cautious about what information they input.
But in many cases, both types of AI platforms incorporate the information users provide into their models. That means a different user of the platform might later retrieve that information and share it, according to Schroeder.
Before teachers enter any information into a platform, they should know whether it will use the data to target ads at students, whether it shares data with third parties, and even how long it will keep student data, said Anjali Nambiar, an education research manager at Learning Collider whose research on student privacy and AI has been published by MIT’s RAISE Initiative.
Then there’s the chance that giving AI platforms personally identifiable data, such as attendance records, grades, or even student work, could expose students to discrimination in adult life, for instance when they apply for jobs.
And private information uploaded to these platforms, like parents’ names or Social Security numbers, can enable identity theft.
In short, Nambiar said, “Having this information out there can harm students going forward.”
Despite these unsettling scenarios, there are approaches school districts can use — and are using — to safeguard student privacy.
Educators use AI safely to improve student relationships
A review of AI ed-tech tools by the investment fund Reach Capital found over 280 platforms, with AI tutors and teacher assistants the two most common types.
The adoption of AI by teachers has not moved as fast as the proliferation of these tools, but it has grown steadily. In a survey of 1,020 teachers conducted by the nonprofit RAND Corporation in the fall of 2023, 18% of teachers reported using AI for teaching. Among those who did, 53% said they used chatbots like ChatGPT at least once a week, and another 16% said they used AI grading tools just as often.
Some platforms that are designed specifically for education include mechanisms to reduce privacy risks.
Khanmigo and MagicSchool, for example, display multiple messages warning teachers not to disclose students’ personal data. They also try to detect sensitive information that teachers enter into the platform and delete it.
“We have an agreement that no student or teacher data is used to train their model,” said Kristen DiCerbo, Khan Academy’s chief learning officer. “We also anonymize all the information that is sent to their model.”
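Neither company has published how that anonymization works, but the step DiCerbo describes is, at its core, automated redaction: scanning text for personally identifiable details and stripping them before anything reaches the underlying model. As a purely illustrative sketch in Python, not the platforms’ actual implementation, a simple pattern-based redactor might look like this:

```python
import re

# Purely illustrative patterns; production systems rely on far more
# sophisticated detection (named-entity recognition, ML classifiers).
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[A-Za-z]{2,}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace anything matching a known PII pattern with a placeholder tag."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Jane's mother (jane.mom@example.com, SSN 123-45-6789) called about her grade."
print(redact(prompt))
# Jane's mother ([EMAIL REDACTED], SSN [SSN REDACTED]) called about her grade.
```

Notice what even this sketch misses: the student’s name sails straight through, which is one reason purpose-built tools still depend heavily on teachers being cautious about what they input.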
Various federal and state laws protect student data such as students’ names and family information, as well as attendance and behavioral records, disabilities, and disciplinary history. But the relevant statutes vary from state to state, and some states are discussing bills to regulate AI.
Congress passed the Family Educational Rights and Privacy Act, or FERPA, in 1974. Calls to update FERPA date back many years, and concerns about AI have reinforced them.
Under the law, schools are ultimately responsible for student data. FERPA sets out a series of conditions under which schools may disclose students’ information to third parties like contractors or technology vendors, including that those parties be “under the direct control of the agency or institution with respect to the use and maintenance of education records.”
Experienced, tech-savvy teachers sometimes know the law well enough to use AI tools without breaking it. But understanding the potential risks is a very complex issue that teachers shouldn’t be expected to navigate by themselves, according to Randi Weingarten, president of the American Federation of Teachers.
“Ensuring the ethical and successful integration of AI in education is vital but cannot become the responsibility of only a few teachers,” Weingarten said in a statement.
The union’s Commonsense Guardrails for Using Advanced Technology in Schools states that it is fundamental that school and district technology departments take the lead in vetting the tools that educators can use.
It is very simple to sign up for ChatGPT, Google Gemini, or Microsoft Copilot with a personal email. And even platforms focused on education allow teachers to create accounts without any prior authorization.
“I feel like this just falls into one of those weird gaps in people’s knowledge where they think, for some reason … that ChatGPT doesn’t really count as outside technology,” said Schroeder, from the Electronic Privacy Information Center.
Even though “AI can be very helpful,” Schroeder said, “school districts need to be more explicit and address this topic to inform teachers.”
Balancing compliance with the law and the need to safeguard student data against educators’ autonomy and curiosity about AI has been a big challenge for schools. But the education technology department in Moore Public Schools, just outside Oklahoma City, is up for it.
One pillar of its work is providing teachers and school administrators with training to understand the risks and responsibilities involved in using these platforms, said Brandon Wilmarth, the district’s director of educational technology.
Right after the release of ChatGPT, as the AI frenzy was starting, one of the first such sessions the department held focused on how principals could use the language model to help them write behavioral reports.
“We very openly said: You must omit any personally identifiable information,” Wilmarth recalled. “You can write down students’ behaviors, but do not include their real names.”
After using the tool, principals would transfer the AI response into the district’s template for these documents, reviewing the quality of the output and making any necessary adjustments along the way. The AI assistance made the process faster and gave principals helpful insight.
“We found that a lot of times, the AI analysis of the cases was really spot on,” Wilmarth said. “A lot of principals struggle with their subjective relationships with students. So whenever they would enter the facts, just the facts, it was nice to have something to bounce that back in a very objective way.”
Since then, training sessions have become more common in the district. Many professional development sessions are devoted to exploring the potential, limitations, and risks of using AI.
Moore Public Schools also gives teachers easy access to a list of AI platforms vetted as safe for use, along with a process for submitting new software to the district for vetting before teachers start using it.
“Everyone is pretty aware that you don’t sign up for an account with your Moore Schools email unless it’s an approved tool and that you never get students to sign up for anything that hasn’t been approved,” Wilmarth said.
Wellington Soares is Chalkbeat’s national education reporting intern based in New York City. Contact Wellington at wsoares@chalkbeat.org.
Chalkbeat is a nonprofit news site covering educational change in public schools.