TECHNOLOGY

AI Detection Software

The class of algorithmic tools deployed by schools and universities in 2025–2026 to identify AI-generated student writing, and the paradigmatic case of Skenazy's overprotection trap: a protective measure whose harms exceeded the risk it was designed to mitigate.
AI detection software (products like Turnitin's AI writing detection, GPTZero, Originality.AI, and dozens of competitors) purported to distinguish human-written from AI-generated text with high accuracy, and was deployed across thousands of educational institutions between 2023 and 2026. The documented reality was different: the detection was unreliable, systematically biased against non-native English speakers and students with unusually formal writing styles, and prone to false accusations that subjected students to humiliating administrative processes. The software became Skenazy's paradigmatic case of institutional safetyism in the AI age: a protection whose harms exceeded the risk it addressed, deployed for institutional liability rather than student welfare.

In The You On AI Encyclopedia

The technical problem with AI detection is structural rather than contingent. Detection tools work by measuring statistical properties of text (perplexity, burstiness, token-distribution patterns) that correlate with AI generation in the detectors' training data. The correlations are noisy. Human writers who produce unusually regular, syntactically precise, or low-perplexity text trigger false positives, and this population is not random: it is drawn disproportionately from non-native English speakers (whose English was learned with deliberate attention to grammatical regularity), students with unusually academic writing styles (often from families with academic backgrounds), and students with certain neurodivergent profiles. The tools were effectively profiling the students most likely to have carefully crafted their prose and flagging them as cheaters.
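A minimal sketch of these signals, assuming a toy add-one-smoothed bigram model as a stand-in for the large language model a real detector would score against. The burstiness definition (standard deviation of per-sentence perplexity) and the thresholds in flag_as_ai are illustrative assumptions for this example, not any vendor's published method:

```python
import math
from collections import Counter


def bigram_model(corpus: str):
    """Train add-one-smoothed bigram probabilities over whitespace tokens."""
    tokens = corpus.split()
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    vocab = len(unigrams)

    def prob(prev: str, word: str) -> float:
        return (bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab)

    return prob


def perplexity(text: str, prob) -> float:
    """Per-token perplexity: exp of the mean negative log-probability."""
    tokens = text.split()
    if len(tokens) < 2:
        return float("inf")
    nll = [-math.log(prob(p, w)) for p, w in zip(tokens, tokens[1:])]
    return math.exp(sum(nll) / len(nll))


def burstiness(sentences: list[str], prob) -> float:
    """Spread of per-sentence perplexity; low spread reads as 'AI-like'."""
    scores = [perplexity(s, prob) for s in sentences]
    mean = sum(scores) / len(scores)
    return math.sqrt(sum((s - mean) ** 2 for s in scores) / len(scores))


def flag_as_ai(text: str, sentences: list[str], prob,
               ppl_threshold: float = 120.0,
               burst_threshold: float = 15.0) -> bool:
    """Flag text whose perplexity AND burstiness both fall below thresholds.

    Illustrative thresholds: careful, grammatically regular human prose
    can land below both cutoffs, which is the false-positive mechanism
    described above.
    """
    return (perplexity(text, prob) < ppl_threshold
            and burstiness(sentences, prob) < burst_threshold)
```

The key design point is that nothing in this pipeline observes authorship; it observes only how predictable the text is under some reference model, so any writer whose prose is unusually predictable is indistinguishable, to the detector, from a machine.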

The institutional response to the tools' unreliability was revealing. Most schools that deployed AI detection did not pair it with procedures that acknowledged its error rate. Students flagged by the software were subjected to administrative processes — required handwritten drafts, mandatory interviews, temporary suspension — that assumed the detection was approximately correct. The burden was placed on students to prove their innocence, with institutional resources weighted against them. For students from disadvantaged backgrounds, the process was particularly harsh: the students with the fewest resources to contest the accusation were the students most likely to be falsely accused.
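Why ignoring the error rate mattered can be shown with Bayes' rule. A sketch of the base-rate arithmetic, using illustrative rates (the 5% prior, 80% sensitivity, and 1% and 5% false positive rates below are assumptions for the example, not measured figures for any product):

```python
def posterior_ai(prior_ai: float, sensitivity: float,
                 false_positive_rate: float) -> float:
    """P(text is AI-written | detector flags it), by Bayes' rule."""
    true_flags = sensitivity * prior_ai                  # AI essays correctly flagged
    false_flags = false_positive_rate * (1 - prior_ai)   # honest essays wrongly flagged
    return true_flags / (true_flags + false_flags)


# If 5% of submissions are AI-written and the detector catches 80% of them:
print(posterior_ai(0.05, 0.80, 0.01))  # ~0.81: about 1 flag in 5 is a false accusation
print(posterior_ai(0.05, 0.80, 0.05))  # ~0.46: for a subgroup with a 5% false positive
                                       # rate, most flags point at innocent students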

The parallel to Skenazy's physical-world documentation is exact. The tools functioned as institutional safety theater: visible action that demonstrated the institution's seriousness about AI, regardless of whether the action produced the outcomes it claimed. No administrator was fired for deploying AI detection software; many were praised for taking the issue seriously. The incentive structure rewarded visible deployment, not results. Meanwhile, the students who bore the cost of the tools' failures had no institutional voice comparable to the parents and advocates who had fought the overprotection battles in physical-world contexts.

The Skenazy response was characteristically direct. The problem was not that schools were trying to address AI use in student work. The problem was that they had chosen a response that substituted algorithmic theater for the harder work of reforming assessment — shifting from output evaluation to process evaluation, asking students questions instead of running their writing through software, designing assignments that could not be completed through uncritical AI use in the first place. The detection software was a lazy institutional answer to a problem that required harder institutional thinking.

Origin

AI detection software proliferated rapidly after the public release of ChatGPT in November 2022. By early 2023, major educational technology vendors had deployed detection products. Documentation of the tools' unreliability began appearing in academic papers and journalism within months, with the bias against non-native English speakers being the most extensively studied failure mode.

Key Ideas

Algorithmic bias as systematic harm. Detection tools systematically misidentify writing from non-native English speakers and students with unusually formal styles as AI-generated.

Institutional liability over student welfare. The tools' deployment was driven by institutional risk management rather than evidence of efficacy or student benefit.

Safety theater diagnosis. The tools performed the appearance of addressing AI in schools while producing harms that exceeded the problem they addressed.

Process over detection. The Skenazy alternative is assessment reform — grading questions rather than essays, designing assignments resistant to uncritical AI use, treating AI engagement as subject to educational scaffolding rather than algorithmic surveillance.
