11 October 2023
Online assessments provide flexibility but also opportunities for misconduct. Some reports estimate that 1 in 6 students have cheated (Wonkhe, 2022); others suggest the figure is 1 in 14, with 3 in 5 engaging in poor academic conduct (THE, 2022). This cheating often involves prohibited collaboration in non-invigilated settings.
Misconduct is a particular problem where a question has a single correct answer: the online environment enables students to check answers together, work jointly on questions, pay for solutions, or have someone else take the exam. The key question is: who really did the work?
In our module BIO2090 Analytical Techniques in Biochemistry, the exam includes a data handling section worth 40% of the marks. Students perform calculations based on an experiment, each leading to a unique correct answer. In a 24-hour open-book exam, checking or sharing answers would be straightforward.
Dr Alison Hill, Associate Professor (Chemistry and Biochemistry), and Dr Nicholas Harmer, Associate Professor in Biochemistry and Co-Director of Business Engagement and Innovation, set out to address this. They created individualised datasets that each student downloaded with their exam paper. This undercut the incentive to share responses: there were now 60 unique sets of solutions rather than one. They also generated complete worked solutions for grading.
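The post does not detail how these datasets were built, but a minimal sketch of the idea in Python might look like the following. It assumes each dataset is derived deterministically from the candidate number, so the same data (and the correct answers) can be regenerated at marking time; the quantities and the question are hypothetical placeholders, not the real BIO2090 experiment.

```python
import random


def make_dataset(candidate_number: str) -> dict:
    # Seeding with the candidate number makes the dataset reproducible:
    # the marker can regenerate it (and the correct answers) on demand,
    # with no need to store 60 separate files.
    rng = random.Random(candidate_number)
    return {
        "stock_conc_mg_per_ml": round(rng.uniform(1.0, 5.0), 2),
        "dilution_factor": rng.choice([10, 20, 50, 100]),
        "absorbance": round(rng.uniform(0.1, 0.9), 3),
    }


def worked_answer(dataset: dict) -> float:
    # Worked solution for an illustrative question: concentration of
    # the diluted sample in mg/ml.
    return dataset["stock_conc_mg_per_ml"] / dataset["dilution_factor"]
```

Making the dataset a pure function of the candidate number keeps the pipeline simple: generation at exam time and checking at marking time are guaranteed to agree.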
The personalised data approach was very effective. Analysis detected no collusion, confirming independent work. However, manual grading time doubled given the individualised answers. With 100 students expected, this workload was unsustainable. An automated solution was needed.
Nic developed a custom grading algorithm to evaluate the data handling responses. Students now enter numerical solutions into an online form. The software compares submissions to the correct answers for each personalised dataset.
This automated approach provides key benefits: it scales to large cohorts, removes the incentive to collude, returns immediate and targeted feedback, and builds a cohort-wide picture of where students struggle.
Developing the algorithm required significant initial effort, but the investment has paid off through enormous time savings and a reduction in cheating. We can now assess 100 students effectively without being buried in papers.
With traditional exams, students showed their working and gave a final numerical answer, and graders had to check each response manually. Now, students simply input their final value into the online form. The grading programme pulls up that student's personalised dataset, calculates the expected result, and automatically compares it with the submitted value.
If a student makes a mistake, the software can identify the error and provide customised feedback. For example, a response that is off by a factor of 10 suggests a slipped power of ten, perhaps in a dilution or unit conversion. The programme deducts partial marks and provides targeted guidance.
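Neither the marking tolerances nor the exact feedback rules are published; the sketch below illustrates the approach with assumed values: a small relative tolerance for full marks, and a factor-of-ten check that awards partial credit with a targeted message.

```python
import math


def grade(submitted: float, expected: float,
          rel_tol: float = 0.01) -> tuple[float, str]:
    # Full marks when the submission matches the expected answer for
    # this student's personalised dataset (tolerance is an assumption).
    if math.isclose(submitted, expected, rel_tol=rel_tol):
        return 1.0, "Correct."
    # A power-of-ten discrepancy usually means a slipped decimal place
    # or unit conversion, so award partial credit with targeted feedback.
    for power in (10, 100, 1000):
        if (math.isclose(submitted, expected * power, rel_tol=rel_tol)
                or math.isclose(submitted, expected / power, rel_tol=rel_tol)):
            return 0.5, (f"Your answer is off by a factor of {power}. "
                         "Check your unit conversions and dilution factors.")
    return 0.0, "Incorrect. Revisit the calculation steps."
```

For example, with an expected answer of 0.25, a submission of 0.025 would earn half marks and a prompt to check unit conversions.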
Over time, the data gathered provides invaluable insight. We can analyse patterns in wrong answers to understand where students struggle and improve our teaching.
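As a simple illustration of this kind of analysis, the feedback categories issued by the grader could be tallied across a cohort; the log format below is an assumption for illustration only.

```python
from collections import Counter

# Hypothetical log of feedback categories issued by the grader.
feedback_log = [
    "correct", "factor_of_10", "factor_of_10", "incorrect", "correct",
]

# Surface the most common mistakes first, to guide where teaching
# should focus next year.
for category, count in Counter(feedback_log).most_common():
    print(f"{category}: {count}")
```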
To test the system, Nic, Alison, and a colleague deliberately cheated on an exam. Using fake candidate numbers, they entered their results alongside the students'. The grading programme correctly identified them as cheating. As Alison notes, “we were not trying to evade detection. We were doing what we would think students might try if they weren’t aware that we were trying to check this.”
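The post does not explain how the programme spots collusion. One plausible mechanism, given the personalised datasets, is to check each submission against the expected answers for every other candidate's dataset: a match with someone else's solution suggests a copied value. A sketch under that assumption:

```python
import math


def flag_collusion(submissions: dict[str, float],
                   expected: dict[str, float],
                   rel_tol: float = 0.001) -> list[tuple[str, str]]:
    # Flag a pair when one candidate's submitted answer matches the
    # expected answer for a different candidate's dataset; with 60
    # unique datasets, such a match is unlikely to be coincidence.
    flags = []
    for candidate, value in submissions.items():
        for other, answer in expected.items():
            if other != candidate and math.isclose(value, answer,
                                                   rel_tol=rel_tol):
                flags.append((candidate, other))
    return flags
```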
This example shows how education technology, thoughtfully applied, can help address challenges facing modern universities. While risks like cheating exist, instructors can leverage automation and data to mitigate these issues. With creativity, we can use online tools to enhance, not hinder, assessment and learning.
The key is striking the right balance between flexibility and oversight. Although substantial initial investment is required, the long-term benefits make it worthwhile. As class sizes rise, more schools will need to rethink conventional assessment models. With careful implementation, technology can be part of the solution.
This blog post was developed by Jo Sutherst, following an interview with Dr Alison Hill and Dr Nic Harmer.