
30 October 2025
Seventy-five years after Alan Turing asked if machines could think, Stephen Hickman and Lisa Harris attended a remarkable event at the Royal Society where a new generation gathered to examine how imagination, ethics, and vigilance must guide AI’s next chapter.
From Newton's theory of gravity to the development of Generative AI, the Royal Society in London has always been at the forefront of scientific endeavour. At a recent event organised by the Web Science Institute to celebrate 75 years of the "Turing Test" (can machines think?), a group of experts channelled the spirit of Alan Turing to challenge the rhetoric and consider what a responsible way forward for society might look like. Here are a few highlights from an amazing day of special moments, one of which was generated by a surprise guest…
It was a moment that fused art and science in a way few academic gatherings ever do. In front of a large and attentive audience, Peter Gabriel (musician, activist, and founding member of Genesis) took the stage not to perform, but to introduce his friend, Dr. Gary Marcus, a leading voice in AI.
Gabriel’s presence was more than ceremonial—it was symbolic. A creative visionary ushering in a scientific provocateur, reminding us that the future of intelligence must be shaped not only by algorithms, but by imagination, ethics, and scrutiny. It was a moment that echoed the spirit of genesis itself: the birth of something new, tempered by the wisdom of experience.
Gary Marcus, arguing for the limitations of current AI, asserted that the very idea of Artificial General Intelligence (AGI) may not be the right goal. He emphasised that the Turing Test is a test of human gullibility, not a test of machines, and illustrated how Large Language Models (LLMs) get things wrong, underlining how much we still have to learn about AI. Marcus's scepticism – about AI scalability claims and the surrounding hype – was infectious. "Turing had intellectual humility, we need this now," exclaimed Marcus.
Computing pioneer Dr. Alan Kay and Professor Sir Nigel Shadbolt also urged the audience to look beyond the hype. We should be "masters not servants" by focusing on augmenting human intelligence with AI rather than replacing it. Professor Anil Seth warned of "confabulation", whereby AI fills gaps in its knowledge with plausible but incorrect information. Collectively, their message was clear: AI is powerful, but it is not infallible. And in the context of UK Higher Education, this means we must not only use AI; we must also be vigilant about synthetically generated data. For example, we need to watch out for autophagy (Greek for self-eating), whereby Generative AI models are trained, generation after generation, on the outputs of earlier models.
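To make that risk concrete, here is a toy numerical sketch (our own illustration, not something presented on the day). Each "generation" fits a simple statistical model to the previous generation's samples, then replaces the data entirely with synthetic samples drawn from that fit:

    import numpy as np

    # Toy sketch of "autophagy" / model collapse (illustrative only,
    # not a real LLM pipeline). Generation 0 is "human" data; every
    # later generation is "trained" (here: a Gaussian fit) on the
    # previous generation's samples, then regenerated synthetically.
    rng = np.random.default_rng(seed=42)
    data = rng.normal(loc=0.0, scale=1.0, size=200)

    for generation in range(1, 11):
        mu, sigma = data.mean(), data.std()      # "train" on current data
        data = rng.normal(mu, sigma, size=200)   # replace with synthetic data
        print(f"generation {generation:2d}: mean={mu:+.2f}, std={sigma:.2f}")

Because each fit is made from a finite sample, small estimation errors compound, and over enough generations the spread of the data tends to drift towards zero: the models quietly "eat" the variation in the original data. Real generative models are vastly more complex, but the underlying dynamic is the same concern.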
Here are six reflections that emerged from the day, each a call to action for educators, institutions, and policymakers.
AI systems are trained on historical data, and that data is rarely neutral. In education, this means that biases – gendered, racial, colonial – can be quietly perpetuated under the guise of objectivity. If we’re not careful, we risk embedding yesterday’s prejudices into tomorrow’s pedagogy.
AI doesn’t need to fool us; we’re quite capable of fooling ourselves. The allure of intelligent machines can lead to a kind of academic delusion – where we overestimate AI’s capabilities and underestimate the complexity of human learning. Universities must resist the temptation to outsource critical thinking to systems that cannot truly think.
Much of AI’s promise lies in its ability to generalise and transfer knowledge. But education is not just about transfer – it’s about transformation. If we reduce learning to pattern recognition and replication (mimicking), we risk losing the creative, constructive essence of scholarship.
Should universities establish their own AI Safety Departments? This is not a rhetorical question. Trinity College Dublin has already launched an AI Accountability Lab. UK universities must ask themselves: are we merely using AI, or are we also watching it? Establishing AI Safety Departments would signal a commitment not just to innovation and AI literacy, but to vigilance – guarding against delusion, misuse, and unintended consequences.
AI can simulate reasoning, but it struggles with genuine critical thinking. It doesn’t question its own assumptions, and it’s prone to confirmation bias. This is a golden opportunity for educators: to teach students not just how to use AI, but how to critique it.
There’s a growing scepticism about the role of “big tech” in shaping the future of education. The same companies that flooded our classrooms with tools are now offering fixes for the very disruptions they caused. In higher education, we must learn hard lessons from the growth of social media and ask: who benefits when we adopt their solutions wholesale?
The Turing Test was never just about machines – it was about us: what we believe, what we value, and what we're willing to delegate. As AI becomes more embedded in UK Higher Education, we must remain not just users, but critics. Genius and Genesis reminds us that the birth of new technological ideas, and our prospective productivity expectations, must be accompanied by the wisdom (research!) to allow them to grow – and the vigilance to watch them closely before we release them. As the recently published Science in the Age of AI report concludes, we need realistic, ongoing AI evaluation and regulatory oversight to minimise security risks and other real-world harms.
In a twist of irony not lost on us, while this blog critiques and reflects on the rise of AI, we also enlisted its help — not to write the whole thing (we promise!) but to assist with research and structure. Consider it a case of practising what we’re preaching… cautiously.
This post was written by Stephen Hickman and Lisa Harris from the University of Exeter Business School.