How I Learned to Stop Worrying and Love AI
Authors: Woodfall, A.
Conference: FLIE Learning and Teaching Conference
Date: 7 July 2025
Abstract: In case of any confusion, my Dr Strangelove (that's the 1964 film) reference in the title doesn't mean I see the rise of AI as the equivalent of the development and deployment of nuclear weapons. And I don't assume that it will all end in a bomb-dropping montage underscored by Vera Lynn's 'We'll Meet Again' (as Dr Strangelove did), or that Skynet (that's another film reference, this time to Terminator from 1984) will decide that humanity is an existential threat and plan to systematically wipe us all out - perhaps starting with the university sector? Dystopian assumptions like that would of course be very silly (…wouldn't they?).
But anyway, here we are, and here I am. And it seems that, after a couple of years of hesitant watching and much procrastinating, I have decided to jump - wholeheartedly - into the AI-in-education Matrix (and yet another dystopian AI reference there).
And by AI, of course, I am as lost as most in the terminology soup of Machine Learning, Natural Language Processing, Deep Learning, Artificial Intelligence (AI), Generative AI, Artificial Narrow Intelligence (AKA Weak AI), Artificial General Intelligence (AKA Strong AI), and the potentially dystopian big bad boy itself, Artificial Superintelligence (ASI). Those last two are what science fiction folk (not necessarily under those terms) have explored in one way or another on-screen, through the likes of Metropolis (from as long ago as 1927), the malfunctioning spaceship computer HAL 9000 in 2001: A Space Odyssey (1968), the theme park AI robots gone badly wrong in Westworld (1973), and the military supercomputer near-nuclear apocalypse of WarGames (1983), which preceded the military supercomputer actual nuclear apocalypse of Terminator by a year.
And moving on to a very interesting on-screen sub-genre of organic/machine hybridisation, as seen in the likes of Doctor Who's Cybermen (initially from 1966), RoboCop - he's a cop, he's a robot! - (from 1987), and Star Trek's the Borg (initially from 1989).
And then, looking at the contemporary, real-world-fears-led proliferation of AI antagonists across film, TV and gaming, maybe we just pause on a lot of Black Mirror - the 'Metalhead' episode of 2017, with its cyberdogs, particularly giving many the 'oh, that looks believable' shivers.
All that complexity set aside, and catching up with many others who are already working with AI in their teaching (and I like that phrase 'working with', while being careful not to anthropomorphise), I see AI as having two, of course interconnected, sides: simplistically perhaps, the lecturer (preparatory and administrative) side and the student side.
So by 'learning to stop worrying and love AI', I plan on asking (I should probably say prompting) AI to help me with teaching planning, research and even, and I say this carefully, with feedback. Indeed, the list of potential uses is intimidatingly long when you get out there and look at what other people are doing. But the real interest - and challenge - appears to be what we do with students in terms of genuinely effective pedagogy, and preparing students for the AI skillsets they, we assume, will need for the times ahead - which we hope, of course, won't be too dystopian. (And a quick reality-check pause here, on the data suggesting that entry-level jobs are already being hit by ready access to AI.)
A starting point for me, in learning to stop worrying and love AI, will include setting tasks where students are asked specifically to deploy Generative AI - but building in plenty of time and energy to decode and critique the process and outcomes, in ways that allow us to address the ethical and any (possible) academic integrity issues. At a bare minimum, to help students consider the ways in which ChatGPT, for example, doesn't replace a search engine, and how fully handing over unit work to Generative AI would seriously hamper all those good things about doing a degree, like learning or creativity…
So the balance I hope to foster, for students and for me, is not about using AI to shortcut hard thinking and work, but about working with AI to enhance thinking and open up options. To help with the research - and the criticality! For example, as a lecturer in TV, asking students to use AI to help generate a slate of programme ideas (for, perhaps you guessed it, a short sci-fi dystopian AI-takeover film), and then having them develop and pitch the idea they feel works best - making plenty of room along the way for dialogue, fun, human creativity and critical thinking.
And on that note, I've tried hard, for a long time, and sometimes successfully, to leverage a deeply dialogic pedagogy that foregrounds student agency and co-creation. But perhaps that 'dialogue' can no longer be seen as just a student-with-academic one; it might also productively include AI as a (near) actor (and I've been dusting down my Actor Network Theory reading here).
And if things do go a little dark - and I really don't trust the Tech Billionaire AI evangelist bros here - I'm hoping any AI-human dialogue will be with the Arnie-in-Terminator 2 version of things, and preferably not the T-1000 version.
Thank you to my friend, TV Producer and Academic Colin Ward at the University of York, for the Dr Strangelove inspiration.
Source: Manual