    Fortune Herald
    Education

    These Students Asked ChatGPT to Write Their Thesis. Their University Just Changed Its Entire Grading System

    By News Team · 15/04/2026 · 6 Mins Read

    One finding from a 2024 University of Reading study merits more attention than it has received. Researchers there submitted AI-generated work to their institution’s existing assessment systems to see whether it would be caught. It was not, 94% of the time. Not occasionally. Not with elaborate evasion strategies or only in edge cases. Almost always. The implication of that figure is not subtle: most AI-generated work now passing through university assessment systems goes unnoticed, and the cases generating headlines are, as Dr. Peter Scarfe of Reading told The Guardian, “the tip of the iceberg.”

    Over the past two years, the scale of what is happening in higher education has become clearer. In a Higher Education Policy Institute survey conducted in February 2025, 88% of UK students said they had used generative AI tools for assessments, up from 53% the previous year. This is not fringe behavior.

    According to the Digital Education Council, 86% of students worldwide use AI in their studies, and more than half do so weekly. This is the predominant practice across the UK’s university system. The question universities are still debating, whether students are using AI to do their coursework, is already settled: most are. The harder question is what to do with a grading system built to evaluate individual human writing when individual human writing is no longer reliably what is being submitted.

    Important Information

    Scale of AI Use in Assessments: HEPI 2025 survey found 88% of UK students reported using generative AI tools for assessments, up from 53% the year prior; the Digital Education Council found 86% of students globally use AI in their studies.
    UK AI Misconduct Surge: Times Higher Education (Freedom of Information data) found leading UK universities recording up to a fifteenfold increase in suspected AI-related cheating cases between 2022–23 and 2023–24.
    University of Sheffield: 92 suspected AI-related misconduct cases in 2023–24, up from 6 the previous year; 79 students penalised.
    Queen Mary University of London: 89 suspected AI cheating cases in 2023–24, all resulting in penalties, up from 10 the year before.
    Detection Failure Rate: University of Reading study found AI-generated work was not detected by existing assessment systems 94% of the time.
    Australia’s Assessment Regulator: TEQSA (Tertiary Education Quality and Standards Agency) warned in 2025 that AI-assisted cheating is “all but impossible” to detect consistently; it urged universities to redesign assessments rather than depend on AI detectors.
    How Universities Are Responding: a return to in-person handwritten exams; mandatory oral defenses; process-based assessment (requiring documented drafts and revision histories); AI literacy integration; some institutions requiring disclosure of AI tool usage.
    UC Berkeley Blue Books: campus store saw an 80% jump in blue-book purchases over two academic years as professors reverted to handwritten in-class essays.
    The False Positive Problem: Turnitin’s own data shows a 4% sentence-level false-positive rate; a Temple University evaluation found Turnitin only 77% accurate at detecting AI text, meaning innocent students are being flagged.
    China’s Response: during the 2025 gaokao national exams, DeepSeek and ByteDance reportedly shut down access to their services during exam hours; radio signals were blocked in some exam halls.

    The University of Sheffield recorded six suspected cases of AI-related misconduct in the 2022–23 academic year, the year ChatGPT launched. The next year the figure was 92, and 79 of those students were formally penalised. At Queen Mary University of London, all 89 suspected cases resulted in penalties. Times Higher Education obtained these figures through Freedom of Information requests to Russell Group universities.

    The differences between institutions were so large (some reporting hundreds of cases, others claiming zero) that the data itself demonstrated both the scale of the problem and the inconsistency of institutional responses. Nearly a quarter of universities did not distinguish AI cheating from other forms of misconduct in their records. In 2025, Australia’s higher education regulator TEQSA warned that AI-assisted cheating has become “all but impossible” to detect consistently with current tools, and advised institutions to redesign assessments from the ground up rather than keep investing in ineffective detection technology.

    The redesign emerging in universities is not as futuristic as it might sound. Much of it is a return to older formats that AI cannot perform: a student sitting in a room without a phone. Over two academic years, sales of blue books, the traditional handwritten exam booklets used on American campuses for generations, rose 80% at UC Berkeley’s campus store as instructors switched back to in-class essays that require students to be present, think in real time, and write in their own hand.


    Oral defenses, long a feature of doctoral programs in Europe but less common at the undergraduate level, are being added to assessment frameworks: a student who submitted a well-written thesis must also explain, in live conversation, what they meant by each section and why they made specific analytical choices. In that kind of conversation, a thesis that ChatGPT produced in an afternoon usually shows its seams.

    The harder institutional problem is one that detection tools made worse before universities understood they were doing so. By Turnitin’s own data, its AI-detection feature has a 4% sentence-level false-positive rate, which means genuine student writing is being flagged as AI-generated at significant scale.

    A Temple University independent evaluation found Turnitin only 77% accurate at recognizing genuine AI material, with a 7% misflagging rate for human-written content. This has consequences: students who did not use AI are being investigated for misconduct and, in some cases, receiving grade reductions, on the strength of a tool that is wrong roughly once in every fourteen flags. As Dr. David Grundy of Newcastle University Business School notes, this raises “just cause” issues central to fair discipline, because you cannot open a misconduct investigation on the basis of a technology that routinely flags innocent work.
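    The “once in fourteen times” figure follows directly from the quoted misflag rate. A minimal sketch of the base-rate arithmetic, using a hypothetical cohort size (the article does not report one):

    ```python
    # Base-rate arithmetic for the Turnitin figures quoted above.
    # The 7% misflag rate is from the Temple University evaluation;
    # the cohort size is a hypothetical example, not a reported number.

    false_positive_rate = 0.07   # share of human-written work flagged as AI
    honest_submissions = 10_000  # hypothetical cohort of genuine submissions

    expected_false_flags = honest_submissions * false_positive_rate
    print(round(expected_false_flags))        # 700 honest students flagged
    print(round(1 / false_positive_rate, 1))  # 14.3, i.e. "once in fourteen"
    ```

    The point of the calculation is that even a single-digit error rate, applied to thousands of honest submissions, produces false accusations at a scale no disciplinary process can absorb.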

    Assessment redesign will not happen quickly or uniformly. Some universities prohibit AI entirely. Others demand full disclosure and citation. Still others are moving to process-based assessment, which requires students to submit documented draft histories showing how their thinking evolved from first notes to finished argument. That last approach has a quietly revealing quality: students are asked to show their work, the thinking behind it as well as the final product. One could argue that education was always meant to do exactly that. AI has simply forced universities to make it explicit in ways they never had to before a language model made it possible to outsource the finished product entirely.
