The Express Gazette
Saturday, December 27, 2025

AI changed how students cheat, not how much, study finds

New high-school data indicate generative AI shifts cheating methods toward idea generation and editing, while overall prevalence remains high and policy questions proliferate.

Technology & AI

New longitudinal data from U.S. high schools indicate that generative AI has changed how students cheat, not how much. After ChatGPT entered classrooms, cheating remained widespread, but the patterns shifted toward using AI for ideation and editing rather than outsourcing entire assignments. In 2024, 11 percent of students reported using AI to complete an entire paper, project, or assignment; by 2025, that figure had risen to 15 percent. More than half of students in 2024 used AI to generate ideas, and by 2025 about 40 percent used AI to improve work they had produced themselves, such as revising drafts or checking answers.

Cheating has long existed in educational settings, with researchers noting high baseline levels before AI arrived. Don McCabe, a business-school professor, and colleagues documented substantial self-reported cheating among college students in the 1990s and 2000s, using surveys that asked about specific behaviors, such as using electronic devices to look up information during tests. Across the literature, researchers have argued that anonymous, behavior-specific surveys tend to yield higher estimates of cheating than a direct question about whether a student has ever cheated. The historical picture thus shows cheating as a persistent challenge, not a new phenomenon tied solely to the digital age. In high schools during the 2010s, multiple studies reported cheating rates above 80 percent across diverse regions, with students citing pressure to perform, time constraints, and the perception that others were cheating as contributing factors.

The past two school years offer a more concrete look at AI's role in cheating behaviors. In the 2022–2023 school year, researchers returned to a set of high schools they had surveyed before the pandemic (a private school, a charter school, and a public school) to assess how cheating behaviors evolved once ChatGPT became widely accessible. The data suggested that overall cheating did not spike in the immediate wake of the tool. Before the pandemic, 61.3 to 82.7 percent of students reported engaging in some cheating behavior in the prior month; in late spring 2023, the range was 59 to 64.4 percent. For behaviors involving copying from other sources, including paraphrasing without attribution, pre-AI figures ranged from 21 to 30.6 percent; after ChatGPT, the range was 24.8 to 31.2 percent. About 11 percent of students at the public school reported using AI to write all of a paper, project, or assignment, suggesting that AI had carved out a niche in complete submissions even as broader cheating patterns persisted.

In 2024, researchers expanded the sample dramatically, collecting data from more than 28,000 students at 22 public and charter high schools; in 2025 the sample grew to more than 39,000 students across 24 schools. The newer data show a shift in how AI is used. About 11 percent of students were still writing entire papers with AI in 2024, a share that rose to 15 percent in 2025. At the same time, more than half of students reported using AI to generate ideas in 2024, and by 2025 roughly 40 percent were using AI to improve work they had already begun, whether by revising drafts, verifying information, or polishing their own writing. Researchers caution that these figures reflect the use of AI as a tool rather than a simple yes-or-no on whether a piece of work was machine-written. For many students, AI serves as a facilitator rather than a replacement for effort.

Focus-group conversations with high school students reveal a nuanced picture of how and why AI is used. One student described staying up late to finish assignments and turning to AI because “it’s 11:30 and the assignment is due at 11:59, and I don’t know what else to do.” Another reported feeling pushed toward AI after being accused of plagiarism on work she insists she wrote without it; such cases feed a perception that AI use is policed unevenly and that accusations can damage a student’s reputation. Some students noted that adults, including teachers, parents, and even college professors, also rely on AI, which can feel hypocritical and erode trust when students are sanctioned for the same behavior. Others described mixed messages from teachers about what counts as acceptable AI assistance, with some instructors explicitly encouraging AI to generate code or kick-start projects while others prohibit or restrict its use.

Researchers also found that the policy landscape is uneven. A study of more than 1,400 teachers across middle and high schools found that only about 10 percent had explicit, written policies about AI in their classes. District-level guidance remains variable, and schools face challenges in enforcing rules across diverse subjects, assignments, and technology access. The result is a climate in which students navigate unclear expectations about what AI use is permissible, what counts as cheating, and how to balance efficiency with the development of critical thinking.

The questions facing educators, policymakers, and families are not simply about banning or allowing AI. Some researchers advocate embracing AI as a teaching tool while redesigning assignments to emphasize skills suited to a world in which automation is common. Others warn that over-restriction may push students toward covert, poorly understood forms of AI use. An MIT study often cited in the debate suggested that composing with AI could alter neural engagement and recall, though the researchers acknowledged the study's artificial task setting, and real-world writing tasks differ significantly from laboratory conditions. The takeaway is not a verdict on AI's cognitive impact but a reminder that educational contexts and expectations must evolve in tandem with technology.

Looking ahead, the study's authors propose four core questions to guide policy and practice: why students cheat, and how high-stakes workloads and life stress contribute to such behavior; whether educators practice what they preach regarding AI; whether schools clearly communicate which uses are acceptable and why; and what essential knowledge students should carry into a future in which AI is ubiquitous. The overarching message is pragmatic: AI is unlikely to disappear, and the education system may need to adapt by teaching students to use AI responsibly and to evaluate information from AI sources, and by designing assessments that emphasize reasoning, synthesis, and analysis over rote completion. As the landscape evolves, education researchers will continue to track cheating behaviors, AI usage patterns, and the implications for learning and for trust between students and teachers.

In sum, the current data do not support the sensational claim that AI has caused a cheating crisis across the board. Instead, they portray AI as changing the technique of cheating rather than its prevalence, a shift that demands thoughtful policy, clearer expectations, and a reimagined curriculum that prepares students for a world where artificial intelligence is a regular part of problem solving.

