
27 November 2025
The launch of ChatGPT in 2022 was a watershed moment for higher education (HE), with generative AI tools producing text that looks human-authored whilst mimicking the demonstration of knowledge, independent research, evidence gathering and argument building that are fundamental to our learning outcomes. As Stuart Fox explains below, the concerns this raises about academic integrity – particularly for essays, often seen as the staple of humanities and social sciences assessments – are not easily addressed, but can be managed by reviewing and rethinking our assessment approaches.
Ultimately, the challenge posed by AI is not simply a question of assessment integrity, but a more fundamental matter of what and how we teach and assess in an AI-integrated world that expects university graduates to be AI literate. In the short term, our immediate concern is with ensuring that the assessments we are using – but cannot change in-year due to accreditation requirements – remain valid.
One way of tackling this is to look to the details of our assessment instructions. Module accreditation commits us to a particular form of assessment (such as a 2,000-word essay), but it doesn’t commit us to the specific requirements of that essay or to how we apply the marking criteria. We can be innovative about just what is meant by ‘essay’, and that gives us room to implement tweaks that won’t solve the threat to integrity posed by AI, but will reduce it.
There are two fundamental questions behind this: first, what do students need to know or do? The answer will be grounded in the learning outcomes you need to assess. Second, what is AI not good at doing? Our best way of making ‘essays’ work is to find ways of assessing whether students can do or know what we need them to, whilst exploiting the weaknesses or limitations of AI.
Based on that logic, and following a lot of reading about and experimentation with AI, I’ve identified some tips that we can use to modify ‘essays’ or essay-like assessments. By using one or more of them, and in various combinations, it’s possible to come up with a series of assessment instructions that improve the validity of our assessments without the need for reaccreditation.
1. Critical Article Comparison
I teach a second-year undergraduate module on political behaviour. It is organised around a major theoretical debate that divides the literature. My first assessment requires students to compare two articles – one from each side – and critique each one using evidence from the other.
These are the changes I made to strengthen integrity for this year:

2. Policy Evaluation Essay
Whereas previously I required students to write a traditional 2,000-word essay in response to a question from a list I provided, I now ask students to evaluate the likely success of a specific policy (lowering the voting age) in achieving an outcome central to module content (increasing youth political participation). Having spent some time asking AI to answer my assessment questions, I learned that AI can draft policy evaluations, but is less effective at combining academic material with grey literature and integrating module concepts into its answer.
I also require students to provide a 400-word critical reflection explaining how their experience of politics influenced their research, and reflecting on links to module content. Again, as AI hasn’t taken the module or experienced politics, it can’t do this well.

What next?
These strategies are not permanent solutions: they’re stop-gaps while we all try to adapt to an AI-integrated world and what that means for our teaching. They’re also not foolproof. But they will help maintain assessment validity and integrity without major module redesigns. Most importantly, they encourage students to develop skills AI cannot replicate: critical thinking, reflection, and nuanced application of knowledge. And in the meantime, I continue to learn about and reflect on how to design assessments that mitigate the risks of misconduct while requiring students to actively engage with the task at hand and use it to demonstrate and extend their learning.
This post was written by Dr Stuart Fox, Senior Lecturer in Politics in the departments of Humanities, Arts and Social Sciences, Cornwall.