What Do We Know About the Edtech Services That Watch Students?

Worries persist about companies keeping tabs on students through school-issued technology.

By Daniel Mollenkamp     Oct 14, 2025

Last year, journalism students at Lawrence High School, a public school in Kansas, persuaded the district to exempt them from the monitoring service it pays to keep tabs on their classmates.

The district had plunked down more than $162,000 for a contract with Gaggle, looking for a way to bolster student mental health and “crisis management,” according to documents posted online. At a time when school shootings and teen mental health crises are proliferating, the district hoped that Gaggle’s digital monitoring service would help.

But “heated discussions” with the journalism students convinced the district that their activity had to be exempt from the edtech-enabled spyware as part of their First Amendment rights, according to coverage from The Lawrence Times.

Along with other companies such as GoGuardian and Bark, Gaggle belongs to the school surveillance category of edtech. Concerns over teen mental health are high, especially given the tragic prevalence of suicide. Short on mental health staff, schools continue to turn to these companies to fill the gap. The companies rely on artificial intelligence to comb through student messages and search histories, notify school districts if students appear to be at risk of bullying or self-harm, and block students from visiting websites schools haven’t approved.

But skeptics and students worry. In recent conversations, teens described the ways these tools sometimes hinder learning, and why they have lobbied against uses of artificial intelligence they believe impede their education. And the Electronic Frontier Foundation rated Gaggle an “F” for student privacy, pointing to the AI’s trouble understanding context when flagging student messages.

In fact, this isn’t new. Concerns over digital surveillance have kicked around for some time, says Jim Siegl, senior technologist with The Future of Privacy Forum’s Youth and Education Privacy Team.

Like other measures schools feel pushed to adopt in the name of student safety, such as active shooter drills, digital surveillance has raised questions about its efficacy and the trade-offs it brings.

The Age of Surveillance

There are about a dozen companies that specialize in school surveillance, according to an article published earlier this year in the Journal of Medical Internet Research. That monitoring reaches into students’ lives beyond school hours, with all but two of those companies monitoring students around the clock. (Devices provided by schools tend to be monitored more heavily than students’ personal devices, raising concerns that students from low-income families, who are more likely to rely on school-issued devices, get less privacy than their higher-income peers, according to a report from the Center for Democracy and Technology.)

During the COVID-19 pandemic and the switch to remote instruction, schools turned to these kinds of tools, says William Owen, communications director for the Surveillance Technology Oversight Project, a nonprofit that advocates against surveillance technologies. They were useful at the time for proctoring exams and other school needs.

But the problem, in Owen’s view, is that the services rely on biased algorithms that have made spying on students — watching their every move — normal. And the services disproportionately target students with disabilities, students who are neurodivergent and LGBTQ students, flagging them far more often than their peers, Owen says.

The tools examined in the research study rely on a mix of artificial intelligence and human moderators. But while most of the companies use artificial intelligence to flag student activity, only six of them, fewer than half, have a human review team, the report notes.

Surveillance firms are really good at selling these technologies to schools, Owen says. They claim that the services will help students, so it can be hard for administrators and parents to fully understand the extent of the possible harm, he adds.

In recent years, concerns over these tools’ impact on student privacy have grown.

Several of these companies, including Gaggle, were signatories to edtech’s “privacy pledge,” a voluntary commitment to uphold best practices for handling student data. The Future of Privacy Forum “retired” the pledge earlier this year. At the time, John Verdi, senior vice president for policy at the group, told EdSurge that privacy concerns in edtech had shifted, in part, toward the fast-moving world of AI. GoGuardian, another student monitoring service and signatory to the pledge, said the retirement would have no effect on its practices.

All this has led some people to worry about the rise of “digital authoritarianism” in an ecosystem where students are constantly surveilled.

Meanwhile, companies argue that they have saved thousands of lives, pointing to internal data on their alerts around possible student self-harm and violence. (Gaggle did not respond to an interview request from EdSurge.)

Some researchers are skeptical that the monitoring services deliver the safety they promise schools: There’s little evidence that these surveillance services are effective at identifying suicidal students, wrote Jessica Paige, a racial inequality researcher at RAND, in 2024. But the services raise privacy risks, exacerbate inequality and can be difficult for parents to opt out of, she added.

In 2022, a Senate investigation into four of the most prominent of these companies raised many of these issues, and also found that the companies had not taken steps to determine whether their tools were furthering bias. And parents and schools weren’t adequately informed about potential abuse of the data, the investigation found.

In response, companies shared anecdotes and testimonials of their products safeguarding students from harm.

In 2023, in response to claims that its services perpetuate discrimination against LGBTQ students, Gaggle stopped flagging words associated with the LGBTQ community, like “gay” and “lesbian,” a change the company attributed to “greater acceptance of LGBTQ youth.”

Next Steps for Schools to Consider

This summer, EdSurge spoke with students who have lobbied to limit the ways they feel artificial intelligence is harming their education. The students described how AI tools blocked educational websites such as JSTOR, preventing them from accessing academic articles, and also blocked sites such as the Trevor Project, which LGBTQ students use as a suicide-prevention resource. The students also described how their school districts struggle to anticipate or explain precisely which websites will get caught by the web filters they pay companies for, causing confusion and generating murky rules.

They have called on education leaders to listen to student concerns while crafting policies related to AI tools and surveillance systems and to prioritize preserving students’ rights.

Some commentators also worry that these tools feed fear of punishment in students, leaving them unwilling to explore or express ideas, and therefore limiting their development. But perhaps most concerning for skeptics of the industry is that these platforms can increase student interactions with the police.

Districts may not realize they are authorizing these companies to act on their behalf, and to hand over student data to police, if they do not review the contracts carefully, according to Siegl of FPF, who was previously a technology architect for Fairfax County Public Schools in the suburbs outside Washington, D.C. It's one of the riskiest and most concerning issues these tools raise, he says.

In practice, the tools are often used to control student behavior, collecting data that’s used to discipline students and manage the limited bandwidth schools have, he says.

Schools need clear policies and procedures for handling student data in a way that preserves privacy and accounts for bias, and they need to review contracts carefully, Siegl says. Parents and students should ask what districts are trying to achieve with these tools and what measures are in place to support those goals, he adds.

Others think these tools ought to be avoided in schools, or even banned.

Schools should not contract with surveillance firms that put students, especially students of color, at risk of dangerous police interactions, Owen argues.

New York, for example, bans facial recognition technology in the state's schools, but schools are free to use other biometric technology, like fingerprint scanners in lunch lines.

But for some, the problem is categorical.

“There's no correcting the algorithm, when these technologies are so biased to begin with, and students [and] educators need to understand the degree of that bias and that danger that is posed,” Owen says.
