
EduExe blog


How Can I Make My Assessments Work in an Age of AI? How to Keep Integrity Without Rewriting Your Modules

27 November 2025

4 minutes to read
The launch of ChatGPT in 2022 was a watershed moment for HE, with Generative AI tools producing text that looks human-authored whilst mimicking the demonstration of knowledge, independent research, evidence gathering and argument building that are fundamental to our learning outcomes. As Stuart Fox explains below, the concerns this raises about academic integrity – particularly for essays, often seen as the staple of humanities and social sciences assessments – are not easily addressed, but they can be managed by reviewing and rethinking our assessment approaches.

How Can I Update My Assessments in an Age of AI?

Ultimately, the challenge posed by AI is not simply a question of assessment integrity, but a more fundamental matter of what and how we teach and assess in an AI-integrated world that expects university graduates to be AI literate. In the short term, our immediate concern is with how we ensure the assessments we are using but cannot change in-year due to accreditation requirements remain valid.

Modifying ‘Essays’

One way of tackling this is to look to the details of our assessment instructions. Module accreditation commits us to a particular form of assessment (such as a 2,000 word essay), but it doesn’t commit us to the specific requirements of that essay or our application of marking criteria. We can be innovative about just what is meant by ‘essay’, and that gives us room to implement tweaks that won’t solve the threat to integrity posed by AI, but will reduce it.

There are two fundamental questions behind this: first, what do students need to know or do? The answer to this one will be fundamental to the learning outcomes you need to assess. Second, what is AI not good at doing? Our best way of making ‘essays’ work is to find ways of assessing whether students can do or know what we need them to whilst exploiting the weaknesses or limitations of AI.

Based on that logic, and following a lot of reading about and experimentation with AI, I’ve identified some tips that we can use to modify ‘essays’ or essay-like assessments. By using one or more of them, and in various combinations, it’s possible to come up with a series of assessment instructions that improve the validity of our assessments without the need for reaccreditation.

  • Use tables, charts, or graphs: AI can generate polished prose and even cartoons, but it struggles with accurate labelling and formatting of visual material. Requiring students to present data analysis, literature reviews, concept maps, critical comparisons etc. in tabular or graphical form plays to the weaknesses of AI and allows creative and innovative expressions of academic work.
  • Critical reflection: AI is rubbish at critical reflection – because it can’t experience, critique or reflect. Ask students to write about how module content relates to real-world issues, their own experiences, or their process of completing a research project.
  • Limit literature scope: Gen AI tools are more effective at finding and using written material accurately if the source has been around for a long time (giving the tools more time to be trained on it). Require students to use articles published in the last three years to limit the capacity of GenAI to do the heavy lifting for them, allowing you to assess their research skills.
  • Reference specific module content: AI can mimic theory and ideas, but it hasn’t taken your module. Require students to explicitly reference and discuss module concepts – ideally, several of them – to assess their grasp of the material and make it harder for AI to give the appearance of understanding through bland summaries of ideas.
  • Break essays into smaller tasks: Instead of one long essay that could be written in various ways, use multiple short tasks targeting specific skills – e.g., a concept application essay, a reflective piece, and a conceptual chart. Some of these will be more ‘AI-resistant’ than others, requiring deep thinking and original work on the part of the student.
  • Assess effective AI use: Rather than trying to minimise AI, go the other way and assess students’ effective use of it. Require them to use AI to aid their research and provide evidence of its use, such as through copies of prompts or (better still) a reflective journal. You can then reward effective AI use in the marking criteria.

Examples from My Practice

1. Critical Article Comparison

I teach a second-year undergraduate module on political behaviour. It is organised around a major theoretical debate that divides the literature. My first assessment requires students to compare two articles—one from each side—and critique each one using evidence from the other.

These are the changes I made to strengthen integrity for this year:

  • Students are restricted to using only the two articles as evidence for their work (making it harder to use AI to easily find other sources to name-drop).
  • Students must use articles published in the last three years (which AI would be less familiar with).
  • Students must provide explicit links to content from the module (which no GenAI model has taken).

2. Policy Evaluation Essay

Whereas previously I required students to write a traditional 2,000 word essay in response to a question from a list I provided, I now ask students to evaluate the likely success of a specific policy (lowering the voting age) in achieving an outcome central to module content (increasing youth political participation). Having spent some time asking AI to answer my assessment questions, I learned that it can draft policy evaluations, but it is less effective at combining academic material with grey literature, or at integrating module concepts, in its answer.

I also require students to provide a 400-word critical reflection explaining how their experience of politics influenced their research, and reflecting on links to module content. Again, as AI hasn’t taken the module or experienced politics, it can’t do this well.

What next?

These strategies are not permanent solutions: they’re stop-gaps while we all try to adapt to an AI-integrated world and what that means for our teaching. They’re also not foolproof. But they will help maintain assessment validity and integrity without major module redesigns. Most importantly, they encourage students to develop skills AI cannot replicate: critical thinking, reflection, and nuanced application of knowledge. And in the meantime, I continue to learn about and reflect on how to design assessments that mitigate the risks of misconduct while requiring students to actively engage with the task at hand and use it to demonstrate and extend their learning.

This post was written by Dr Stuart Fox, Senior Lecturer in Politics in the Department of Humanities, Arts and Social Sciences, Cornwall.
