South African universities are seeing a rise in incidents of generative artificial intelligence (GenAI)-related academic misconduct and inappropriate use of AI tools in completing assessments and online exams.
Higher learning institutions, the University of Johannesburg (UJ), the University of South Africa (Unisa), the University of Pretoria (UP) and the University of Cape Town (UCT), tell ITWeb they are concerned about incidents of AI-assisted plagiarism, amid a rise in the number of students caught misusing chatbots and other related AI tools.
This, as the rapid adoption of GenAI-integrated chatbots, such as Google Gemini, ChatGPT and Microsoft Copilot, has put long-standing academic integrity frameworks to the test, raising questions about their long-term impact on the field of education.
While institutions emphasise that AI can play a constructive role in learning, the rising incidents have led them to tighten their AI policies and reform assessment methods to counter AI-assisted cheating.
“With access to AI tools becoming almost ubiquitous, especially via mobile phones and browsers, so has the reliance on these technologies for learning and assessment,” says professor Sehaam Khan, deputy vice-chancellor for UJ’s academic affairs.
“Like universities globally, UJ has seen an uptick in suspected and confirmed cases in which students appear to have relied on generative AI tools, such as ChatGPT, Gemini and similar systems to generate text, code, or answers in ways that are inconsistent with assessment instructions.
“Most of our cases to date have been concentrated in take-home written work, online quizzes, and coding or problem-solving tasks, with comparatively fewer confirmed cases in tightly invigilated, on-campus examinations,” Khan says.
Rikus Delport, spokesperson for UP, says AI-related misconduct has been recorded across different forms of assessment.
“UP has definitely seen an increase in AI-related cheating during exams and tests,” Delport states.
“This misconduct has also been traced in assessments such as assignments and projects. While the incidents mostly arise in written form assessments, we have periodically received complaints emanating from calculative assessments and within the area of coding.”
Delport adds that the university has already handled dozens of such cases.
“Based on our records, we received approximately 53 AI-related disciplinary matters during the course of 2024 and 2025. These have been referred to the legal department.”
Unisa says the spread of GenAI across society means universities must assume students are encountering these technologies regularly.
Professor Boitumelo Senokoane, acting executive director of the Department of Institutional Advancement at Unisa, says: “There have been incidents of AI-related academic misconduct and inappropriate use of AI tools in completing assessments, identified through colleges and departments’ academic integrity committees.
“These reports are then escalated through the examinations committee up to Senate. The existence of such incidents speaks not to a failure of our systems, but rather to the strength of our detection and monitoring systems and processes to identify and address misconduct, rather than feigning ignorance.”
Senokoane notes these incidents reflect a broader global reality: as generative AI becomes more sophisticated and accessible, the line between permissible AI assistance and impermissible AI substitution can be difficult for students to navigate without clear guidance.
“Many incidents involve students who did not fully appreciate the boundaries, which is why our guidelines place such strong emphasis on transparency, disclosure and academic integrity education, rather than punitive responses alone,” he comments.
UCT says it has encountered cases of inappropriate AI use in assessments, although it has not observed a rise in exam-related incidents due to strict invigilation procedures.
“As with other universities globally and locally, UCT has encountered instances where students have inappropriately used generative AI tools in assessments,” says UCT spokesperson Elijah Moholola. “These cases typically involve failure to declare AI use where required, submitting AI-generated content in assessments where AI use was explicitly prohibited, or misrepresenting AI-assisted work as entirely original.”
The University of the Witwatersrand says it has not seen incidents of AI-related academic misconduct or misuse of AI tools during exams or assignments.
“Our focus is not to detect cheating but to promote the ethical and fair use of AI,” says Nicole De Wet-Billings, associate professor and senior director of academic affairs at the University of the Witwatersrand, Johannesburg.
Local universities previously told ITWeb they are updating their plagiarism policies, in line with the rising use of AI tools.
Detecting AI-assisted cheating
Universities say identifying AI-generated work is far more complex than traditional plagiarism detection, as GenAI systems can produce original text that does not match existing sources.
At UP, Delport says AI detection tools alone cannot be relied on to prove misconduct.
“AI-assisted cheating is more difficult to detect reliably than conventional copy-and-paste plagiarism, because AI-generated text can be ‘original’ in form,” he says.
“AI detection tools are not sufficiently reliable for high-stakes decisions and risk both false positives and false negatives.”
Instead, the university relies on a combination of evidence sources to determine whether misconduct has occurred.
“The preferred evidence-based approach is to triangulate using the student’s AI declaration, drafts and version history, working notes, code or source checks, and where needed a brief oral clarification to confirm authorship,” Delport explains. “The analysis goes beyond looking at a Turnitin report and still requires the input of the subject matter expert.”
Moholola says UCT has also taken a cautious stance on AI detection technologies.
“So-called AI detectors are widely regarded as unreliable and prone to both false positives and false negatives,” he says. “For this reason, UCT discontinued use of the Turnitin AI Score feature from 1 October 2025. Continued reliance on such tools risks undermining academic fairness and student trust.”
Instead, the institution focuses on redesigning assessments to make misuse more difficult.
At Unisa, detection is approached through multiple layers of monitoring and system controls.
“Unisa has invested in a multi-pronged detection strategy to identify unethical and irresponsible AI use in assessments,” Senokoane says. “These include AI-enabled proctoring tools that monitor student behaviour during online assessments, and AI-sensitive similarity checkers that analyse writing patterns, linguistic markers and statistical signatures associated with machine-generated outputs.”
Enforcing disciplinary measures
When misuse is identified, universities say existing academic misconduct frameworks are applied, often with a focus on education as well as punishment.
At UJ, AI misuse is treated as a form of fraudulent authorship if students fail to disclose the use of such tools.
“Our policy developments explicitly recognise that the core offence in AI misuse is fraudulent authorship and failure to disclose, not the mere presence of technology,” Khan says. “AI-related misconduct is handled under UJ’s existing academic misconduct framework, with sanctions that are commensurate to the severity, intent and level of study.”
Unisa applies a tiered disciplinary approach that distinguishes between different levels of misconduct.
“Our AI guidelines establish a tiered consequence management framework,” Senokoane says. “Minor violations, such as first-time failures to disclose AI use, are addressed as learning opportunities, while repeated or deliberate attempts to deceive can result in serious consequences, including failing grades, suspension, or further disciplinary measures.”
He adds that the university maintains strict rules around academic dishonesty.
“The university has zero tolerance for any form of academic dishonesty,” Senokoane asserts. “Students’ marks are withheld and 0% is awarded for certain contraventions, while more serious cases may be referred to the Student Disciplinary Unit to initiate formal proceedings.”
At UP, disciplinary processes follow established academic integrity procedures once sufficient evidence is gathered.
“The appropriate disciplinary process, in accordance with the disciplinary rules for students, follows once the allegation is founded on substantiated evidence,” according to Delport. “Suspected misuse is handled through academic integrity and academic dishonesty processes, with a fair fact-finding approach and a proportionate response.”