The problem with AI in education may not be the technology itself – it may be that students are using it incorrectly, treating it not as a tool, but as a solution. And because educators tend to see it as a threat, they aren’t guiding students along a learning path that would let them use GenAI as an enabler rather than a cheating tool.
That is a genuine concern, because without guided instruction on ethical, critical and creative use, students are left to experiment on their own – often reinforcing misuse and widening the gap between how AI could support learning and how it is actually used in practice.
By outsourcing their work to GenAI, students risk missing the critical thinking building blocks they will need to solve practical problems independently later in life.
AI faces the same suspicion that calculators did when they entered classrooms in the 1960s and 1970s. While electronic calculators removed the need to manually apply functions like sin and cos, they didn’t take away the need to understand when and why to use them.
Just because the tool changes doesn’t mean thinking must end. Tools exist to extend human capabilities; the step change with AI lies in the kind of capability being extended.
An enabler
GenAI solutions like ChatGPT are designed to help people process information faster – running calculations, supporting thinking rather than substituting for it, and expanding what one can explore, test and create.
This is where things get complicated. How does one “support thinking”? If a student hands in a paper composed entirely of GenAI output, can they credibly argue the tool supported their thinking? Or should educators be asking whether GenAI crunching the data to enable a conclusion is more defensible than ChatGPT simply providing the conclusion?
There needs to be a distinction between students using AI to outsource thinking (getting it to do their homework) and using it to interrogate ideas, test assumptions and refine their own reasoning. The former erodes understanding, while the latter makes AI a tool for deeper learning.
Changing teaching methods
In computer science, which is my area of specialisation, the use of GenAI to generate code on students’ behalf is a significant concern.
A 2025 study by Kaléu Delphino for Georgia Tech found that one in four students in the field anonymously admit to using ChatGPT for plagiarism. Crucially, though, Delphino frames this as a threat to computer science “in its current form”.
When we look beyond GenAI as a threat and start seeing it as a tool, we can adjust how we teach in a way that will enable our students to work alongside it. It becomes an enhancement and we move beyond subjects “in their current form”.
A sound way to do this is to help students develop strong prompting skills that enhance their analytical and problem-solving abilities. When educators teach students to craft strong prompts and to review AI-generated responses critically, they empower students to use AI as a tool for thinking – not just for producing content and cheating.
ChatGPT sees itself as a thinking partner in education. It specifically told me that if AI is “treated as a thinking partner, it can strengthen analysis”. This GenAI tool also pointed out that “the danger isn’t that I analyse instead of humans. It’s that people stop analysing because the answer is already there.”
This is why we need to start teaching students how to prompt AI. Misuse of AI asks for an answer. Correct prompting opens a conversation in which the relative merits of a topic can be debated to strengthen a student’s understanding – and enable them to argue a point against the machine.
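To make the contrast concrete, here is a minimal sketch of the two styles of prompt side by side. It assumes the openai Python client (v1.x) with an API key set in the environment; the model name and both prompts are purely illustrative.

# A minimal sketch contrasting an answer-seeking prompt with a
# dialogue-opening one. Assumes the openai Python client (v1.x) and
# OPENAI_API_KEY in the environment; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

# Misuse: asking the machine to hand over the finished answer.
answer_seeking = "Write my 1,000-word essay on recursion."

# Correct prompting: asking the machine to interrogate the student's reasoning.
dialogue_opening = (
    "I think recursion is just a loop in disguise. Challenge that claim: "
    "ask me one question at a time that tests where it breaks down, and "
    "wait for my reply before continuing."
)

for prompt in (answer_seeking, dialogue_opening):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any chat model would do
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content, "\n---")

The first prompt produces a finished submission; the second produces a conversation the student still has to think their way through.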
Adapt curricula
Rick Holbeck, writing for Faculty Focus, argues that integrating prompting into higher education requires structured engagement – assignments with clearly defined AI-related tasks, AI used for formative feedback before final submission, and assessment that evaluates how students engage with AI output rather than whether they used it at all.
This is precisely the kind of teaching that is not yet happening consistently and needs to find its way into curricula.
For AI to play a meaningful role in education, we can’t treat it purely as a threat. Instead, we must redefine what learning looks like in an environment where answers are readily available.
This means placing less emphasis on the answer itself and more on the process used to arrive at it. It means teaching students how to question, guide and challenge the outputs they receive.
In this context, prompting is not merely a technical skill, but a form of structured thinking – which is the point of education anyway.

