
ChatGPT forces SA varsities to rethink plagiarism policies

By Sibahle Malinga, ITWeb senior news journalist.
Johannesburg, 15 Jun 2023

South African universities are updating their plagiarism policies, or devising a set of guidelines, to maintain academic integrity, as more students use generative artificial intelligence (AI) tools, such as ChatGPT.

Since ChatGPT emerged in November 2022, questions have been raised about the impact on education of AI-assisted chatbots that can generate high-level text on a wide range of topics.

Since Microsoft-backed OpenAI released the bot, it has gone viral, with some praising its ability to interact in conversational dialogue and others raising fears over risks relating to infringement of user rights, copyright protection and its ability to write essays for students.

Academia and industry have also expressed concern about ChatGPT’s role in enabling AI-assisted plagiarism, which the average plagiarism detection program cannot detect.

As more South African students turn to the chatbot to gain a better understanding of their respective modules, local universities tell ITWeb the potential of ChatGPT and similar generative AI as an educational tool far outweighs its risks. As such, they are working on policies that will allow the use of such tools in a guided and controlled manner.

The University of Pretoria (UP) says it recently approved a guideline paper on leveraging generative AI and ChatGPT for teaching and learning enhancement, which contains sections on addressing plagiarism in a generative AI context and safeguarding the integrity of assessments and assignments.

Rikus Delport, director of Institutional Advancement and spokesperson for UP, notes: “In addition, an advisory group has been established to advise the executive on generative AI, focusing on its implications for higher education regarding teaching and learning, research, ethical and legal impact, and the future of work. The advisory group will also formulate a generative AI policy for the university.”

In its guide paper, UP highlights: “ChatGPT should only assist learning and not act as a substitute for human creativity and critical thinking. The use of generative AI like ChatGPT should be aligned with the goals of teaching and learning, and the purpose should be clearly defined.”

For online assignments and assessments, plagiarism checker Turnitin is used by the university to identify potential plagiarism, even in cases where AI-generated text doesn't directly copy from a specific source, adds Delport.

Questions have been raised about whether the use of an AI program in school essays can really be interpreted as “plagiarism” in the traditional sense, as the text is generated by a bot and may not have a human author.

Cheating the learning system

Dictionary.com defines plagiarism as an act or instance of using or closely imitating the language and thoughts of another author without authorisation, and the representation of that author's work as one's own, by not crediting the original author.

According to Business Insider, schools and colleges across various parts of the globe, including the US, Australia, France and India, have banned the use of generative AI tools on their networks and devices amid fears of plagiarism.

An academic research paper titled “Chatting and cheating: Ensuring academic integrity in the era of ChatGPT”, published by the UK-based University of St Mark & St John, shows how AI makes plagiarism harder to detect by “cheating the system” often used by colleges.

To prove the point, the co-author of the research paper later admitted the entire paper was written by ChatGPT.

Professor Diane Grayson, senior director of academic affairs at the University of the Witwatersrand, tells ITWeb that last year the higher learning institution approved a new student academic misconduct policy that goes beyond plagiarism.

Where students have used generative AI, they are required to acknowledge this, as they would any other information source, and indicate which text was AI generated, she notes.

“We cannot and should not prevent students from using generative AI, which can be a very useful tool for generating ideas, any more than we can ban the use of grammar-checking software.

“Instead, we must help students to develop critical AI literacy by, for example, incorporating AI-generated text into assignments and requiring students to demonstrate critical thinking in working with the text,” says Grayson.

Professor Diane Grayson, senior director of academic affairs at the University of the Witwatersrand.

Stellenbosch University (SU) says anecdotal feedback from lecturers and academics suggests they have had some cases of suspected irregularities with respect to the use of generative AI tools.

“The university is devising a set of guidelines, together with students, for responsible and allowable use of generative AI tools in assessment,” says Dr Hanelie Adendorff, senior advisor at the university’s centre for teaching and learning.

“Using learning as the point of departure, the guidelines offer principles, based on existing SU policies, such as accountability, transparency, authenticity and fairness. By unpacking the implications, and offering advice and questions related to each principle, for both lecturers and students, we acknowledge that responsible and ethical use of generative AI tools is a shared endeavour.”

The University of Johannesburg says a policy is in development, which will require students to declare whether they have used any AI tools and to identify where and for what purpose.

Nelson Mandela University says it is reviewing all its learning and teaching policies, including the plagiarism policy, to align its practices, procedures and guidelines with the different online learning measures.
