At a Glance
- AI detection tools are now common on college campuses.
- Students use “humanizers” both to evade detectors and to avoid false flags on their own writing.
- The debate has led to lawsuits and campus policy changes.
Why it matters: Students face academic penalties and emotional distress when AI detectors misclassify their work.
AI detection has become a central issue in higher education, with professors running papers through software that flags text as likely produced by a large language model. The technology, first introduced a few years ago, has been criticized as unreliable and prone to disproportionately flagging non-native English speakers. Students have sued universities, alleging emotional distress and punitive discipline after their work was misidentified as AI-generated.
Humanizers are a new class of generative-AI tools that scan essays and suggest edits so the text no longer reads as if it were produced by a chatbot. Some are free; others cost around $20 a month. Students use them to evade detection, or to prove they did not use AI when a detector flags their work. Turnitin and GPTZero have responded by upgrading their software to catch text modified by a humanizer and by offering tools that let students record keystrokes or browsing history as evidence of authorship.
“Students now are trying to prove that they’re human, even though they might have never touched AI ever,” said Erin Ramirez, associate professor of education at California State University, Monterey Bay. “So where are we? We’re just in a spiral that will never end.”
The conflict has put faculty, administrators, and software companies in an escalating race. Some professors believe that a detector’s score alone is insufficient, and that a conversation with the student is required. Annie Chechitelli, chief product officer at Turnitin, said the company tells schools never to use its tools as the sole basis for deciding whether a student cheated; a flag should instead prompt a conversation about how and why AI was used.
Students’ stories illustrate the stakes. Brittany Carr, a Liberty University student, received failing grades on three assignments flagged by a detector. She provided revision histories and handwritten drafts, but the social-work school still required her to take a “writing with integrity” class. “I spoke about my cancer diagnosis and being depressed and my journey and you believe that is AI?” she wrote in a December 5 email. The experience led Carr to run every document through Grammarly’s AI detector and edit any highlighted passages until the software reported a human author. She ultimately left Liberty, uncertain where to transfer.
Other students, like Aldan Creo, a graduate student at the University of California San Diego, feel pressure to “dumb down” their writing. After a teaching assistant accused him of using AI in November, Creo began running all of his assignments through a detector pre-emptively. “I have to do whatever I can to just show I actually write my homework myself,” he said.
Legal and policy responses vary. Liberty University said it does not comment on individual students but that all academic-integrity concerns are handled with “care and discretion”; its statement emphasized that students are afforded an exhaustive process for addressing concerns about unfair treatment. Eric Wang, vice president of research at Quillbot, argued that the fear will persist unless educators move away from automatically deducting points. “If it’s an unsupervised assessment, don’t bother trying to ban AI,” said Tricia Bertram Gallant, director of the academic-integrity office at UC San Diego. “And don’t bother trying to prove AI was used, because you end up spending more time doing that.”

The humanizer industry has grown rapidly. Turnitin’s August update detects text altered by more than 150 such tools, some of which charge up to $50 for a subscription. Joseph Thibault, founder of Cursive, tracked 43 humanizers that drew a combined 33.9 million website visits in October. He warned that the industry is shifting toward heavier monitoring of students as they complete assignments. “There is a new agreement that needs to be made,” he said. “What level of surveillance are you willing to subject yourself to so that we can actually know that you’re learning?”
Superhuman, the company behind Grammarly, introduced Authorship, a tool that logs keystrokes and pasted text in Google Docs or Microsoft Word. Jenny Maxwell, head of education at Superhuman, said the tool was inspired by a TikTok video from Marley Stevens, a student placed on academic probation after a false AI accusation. Maxwell said as many as 5 million Authorship reports were created in the past year, though most were never submitted.
Student pressure on institutions is also mounting. In upstate New York, a petition calling on the University at Buffalo to drop AI detectors gathered more than 1,500 signatures last year. The university said it has no institution-wide rule on AI use and that instructors must have evidence beyond a detector score before reporting academic dishonesty.
The debate over AI detection, humanizers, and campus policy reflects broader uncertainty about how to integrate emerging technology into education. Some experts argue that detection software should never be used to punish students; others see it as a necessary tool for maintaining academic integrity. The likely path forward involves more direct conversations between students and faculty, clearer institutional policies, and continued evolution of both detection and humanization tools.
Contact reporter Marcus L. Bennett
