
Code Acts in Education: Critical Keywords of AI in Education

Notes for a keynote talk presented at the event Digital Autonomy in Education: a public responsibility, convened by the Governing the Digital Society initiative at Utrecht University and Kennisnet, 7 November 2024, for an audience of school leaders, teachers, teacher educators, academics, and school sector organizations.

The development of AI for education has a long history, but it has only become a matter of mainstream excitement and anxiety since so-called “generative AI” arrived. If you want a flavour of the excitement about generative AI in education, there are by now plenty of conferences, opinion articles, guidebooks, showcase events and so on about the most recent technical developments, applications, best practices, and forecasts of the future of AI for schools. My approach is different, because while generative AI is undoubtedly impressive in many ways – and may prove to have specific use cases for teaching and learning – it’s also a big problem for education.

To take one example – last month the US press reported that wealthy parents had launched a legal case against a school where a teacher had penalized their child for using AI to complete an assignment. The school, they and their lawyer argued, had no AI policy in place at the time. It’s a compelling example of the problems with AI in education. At issue here is not whether the technology ‘works’, but – as the family’s lawyer has put it – whether using AI is plagiarism at all, or just ‘an output from a machine’. 

It also reveals the difficult position schools are in when trying to manage its use while AI remains ‘underregulated, especially in a school setting’. It shows how AI is running up against expectations of academic integrity, which are historically central to education systems and to systems of credentialing and qualification. And it surfaces the unintended consequences of AI in educational settings, with schools now potentially pitted against students, parents and their lawyers because of it.

Maybe this will prove to be an edge case, or it could set a ‘legal precedent’. Whatever the outcome, it clearly demonstrates that treating AI simply as a bundle of innovative technologies with beneficial effects, to which schools need to bend themselves, is highly naïve.   

As I have argued before, we should instead see AI in education as a public problem.

The sociologist Mike Ananny wrote an essay earlier this year suggesting that AI needs to be understood as a public concern in the same ways we treat the climate, the environment and – in fact – childhood education itself as public problems. These are issues that affect us all, even if indirectly. Generative AI, Ananny argues, is fast emerging as a medium through which people are learning, making sense of their worlds, and communicating. That makes it a public problem that requires collective debate, accountability, and management.

‘Truly public problems,’ Ananny argues, ‘are never outsourced to private interests or charismatic authorities’. Instead we should convene around AI as a public problem in education, deliberate on its consequences and discuss creative, well-informed responses. 

Keywords of AI in education

My approach is to highlight some ‘keywords’ for engaging in discussion and deliberation about the contemporary public problem of AI in education. I take inspiration from other efforts to define and discuss the keywords that help us describe, interpret, conceptualize and critique dominant features of our cultures, societies and technologies. Keywords provide vocabularies for engaging with current issues and problems.

So my aim with the following keywords is not to offer technical definitions or describe AI features, but to offer a critical vocabulary centring AI as a public problem that might help provoke further discussions about the practical applications of AI in schools and the AI futures that are said to be coming. 

Speculation. The first critical keyword about AI in education is ‘speculation’. This is to do with hype, visions, imaginaries and expectations of AI futures. AI speculation related to education did not appear with ChatGPT, but has certainly been a significant feature of edtech marketing, education press coverage, consultancies’ client pitches and more over the last two years. The significance here is that such speculative claims are mobilized to catalyse actions in the present, as if the future is already known.

At the Centre for Sociodigital Futures, Susan Halford and Kirsten Cater have recently written about how speculative ‘futures in the making’ are often actively mobilized to produce conviction in others and incite them to act. But, they argue, the futures being claimed and made about AI and related technologies are often characterized by a taken-for-granted technological inevitability and determinism that erases expertise in the social aspects of any technology, and by thin evidence and linear assumptions that simply take current technical R&D trends as straightforward signals of what is to come.

This is also the case with many speculative claims about the future of AI in education. They erase the long history of research showing that technologies are rarely as transformative as some make out, and are based on conjecture rather than evidence.

Intensification. While speculation might be one issue, another is that actually-existing technologies are already interweaving with school settings and practices. Rather than speculation about teacherbots coming to save education systems in the future, we have things like data analytics and generative AI interfaces helping to intensify and amplify existing trends and problems in schools. We can detect this in current demands for teachers to dedicate their labour to integrating AI into pedagogy and curriculum content, with the implicit threat that they will be ‘left behind’ and fail to educate their students appropriately for the ‘AI future’ unless they ‘upskill’. This demand on teachers, leaders and administrators to undertake professional upskilling represents an intensification of teachers’ work, with consequences including even more teachers leaving the profession.

It also intensifies the role of external experts, consultants and various edu-influencers in setting goals for schools and determining teachers’ professional development. External influence in schools isn’t new of course, but AI has proven to be a big opportunity for consultants and tech experts to sell their expertise and guidance to schools. As Wayne Holmes has argued in a recent report for Education International, a failure by such external authorities to anticipate the unintended consequences of introducing AI into education can lead to a further intensification of workload demands, as new challenges and problems have to be addressed in schools.

As such, we should be examining how AI does not transform schooling in the beneficial ways often imagined, but interweaves with and intensifies trends and logics that are already well in train.

Contextlessness. As intensification already indicates, how AI is actually used, and what effects it has, will be highly context-sensitive. Sociological studies of tech have long insisted that technologies are not just technical but ‘sociotechnical’ – they are socially produced, and socially adopted, used, adapted, and sometimes refused in specific settings.

But the majority of commentary about AI in education tends towards context-free assertions of AI benefits. This glosses over how technologies actually get taken up (or not) in social settings. It also ignores how AI can be politically appropriated for potentially regressive purposes – one example being US schools using AI to identify books to ban from libraries in the context of conservative mandates to ban books with any sexual content.         

Additionally, many AI advocates tend to pick evidence and data that suit their narratives and interests without considering whether they would apply in other contexts. The best example here is tech entrepreneurs like Sal Khan, Bill Gates and Sam Altman routinely citing Benjamin Bloom’s ‘2 sigma achievement effect’ study of one-to-one tutoring, which found that tutored students performed around two standard deviations better than students taught in conventional classrooms, to support AI in schools. Despite the fact that this 40-year-old research has never been fully replicated, and applied only to human tutoring in very specific curricular areas, ‘2 sigma’ is routinely cited to support the contextless ideal of personalized learning chatbots.

Likewise, it’s common to see modest evidence from highly controlled studies exaggerated to support generalized claims of AI benefits for learning, part of a widespread evidence problem in relation to edtech products. And more broadly, AI research itself tends towards over-optimism, is often not reproducible, can’t be verified, and focuses on engineering problems rather than highly context-specific social factors and implications.

Standardization. Related to contextlessness is the significant risk that AI amplifies further standardization. Standardization, of course, seeks to make contexts irrelevant – the idea is that the standard model can work everywhere. This again isn’t a new trend in education, but the issue is that AI reinforces it through the reproduction of highly standardized formats of teaching and learning. Such formats and templates might include partially scripted lessons, bite-sized tutorials, multiple-choice quizzes, or standardized assignments – all things that AI can do quite easily.

But, as Philippa Hardman has recently observed, there is also an increasing move with AI towards the ‘buttonification’ of pedagogic design and curriculum content creation. You can design a course or plan a lesson ‘at the push of a button’ with new AI functions that are built into education platforms. This is accelerating automated standardization. This AI-enabled standardization, argues Marc Watkins, risks ‘offloading instructional skills uncritically to AI’, leaving us with ‘watered-down, decontextualized “lessons”’ that are devoid of a teacher’s knowledge and give students a ‘disjointed collection of tasks’ to complete rather than a pedagogically ‘structured experience’.

Buttonified education may be a streamlined, efficient, time-saving and cost-saving approach, but such standardization risks degrading teachers’ autonomy in planning and students’ experience of a coherent curriculum.

Outsourcing. Indeed, this standardization works in concert with the next keyword – outsourcing. AI does not only involve outsourcing to external technology vendors. As Carlo Perrotta argues in his new book, Plug and Play Education, it also implies the outsourcing of teachers’ professional pedagogic autonomy itself.

It means delegating professional judgment to AI’s mechanisms for measuring, clustering and classifying students – for example, if we allow it to perform assessment tasks or to measure a student’s progress and then generate ‘personalized’ recommendations about the next steps to take. As Perrotta argues, in ‘a best-case scenario’ these ‘automated classifications may prove to be erroneous or biased and require constant oversight’. This is outsourcing in which the role of the teacher is reduced to that of a quality assurance assistant.

But in ‘a worst-case scenario’, Perrotta adds, ‘teachers may become unable to exercise judgment [at all], as multiple automated systems operate synchronously behind the scenes … leading to a fragmentation of responsibility’. In this sense, then, outsourcing should be understood not simply in terms of vendor contracts but in terms of the offloading of professional discretion, judgment, decision-making and, potentially, control over the processes by which students are assessed, ranked and rewarded. 

Bias. The example of outsourcing already indicates the next problem with AI – ‘bias’. AI biases may manifest in several ways. One is the use of historic data in analytics systems to discriminate against students in the present, as Perrotta indicates: because the past data tell us that students clustered in this or that group tend towards underachievement, automated discriminations can be made about what content or tasks to personalize for them, to prescribe to them, or to proscribe them from accessing. The real risk here is excluding some students from access to material due to latent biases in the systems.

An interesting study from the Stanford Human-AI Interaction lab recently also found that generative AI produces ‘representational harms’. The researchers tested how generative AI systems represent diverse student populations, and found significant biases of considerable magnitude, owing to the ways such groups are represented, or underrepresented, in AI training data. They reported that such representational biases can lead to the erasure of underrepresented groups, reinforcement of harmful stereotypes, and the triggering of various psychosocial harms. The headline issue here is that a chatbot tutoring application built on top of an AI model with these training data might be biased against already marginalized groups and individuals.

Pollution. Besides bias in the training data, there is also the possibility that the data reproduced by generative AI systems are already polluted by automatically generated text. A couple of weeks ago it turned out, for example, that Wikipedia editors had been forced to identify and remove AI-generated material that could endanger the veracity of its content.

Last year Matthew Kirschenbaum memorably wrote that ‘the “textpocalypse” is coming’ – by which he meant that the internet itself could become overrun with ‘synthetic text devoid of human agency or intent’. It could contain outright hoaxes and misinformation, or just AI-generated summaries that misrepresent their original sources.

If this textpocalypse is now unfolding, then AI could exert degenerative effects on the information environment of schools too – as teachers come to rely on AI-generated teaching resources whose content has not been vetted or evaluated. Students’ processes of knowledge construction could be undermined by encountering synthetic text that’s polluted with plausible-sounding falsehoods.

As some of you might have noticed, all language models come with ‘small print’ disclaimers saying that you should always independently verify the information provided. The implication is that the role of students is no longer to synthesize material from well-selected authoritative sources, but merely to check the plausibility of the automated summaries produced by AI, while teachers spend their time ‘cleaning up’ any polluted text.

Experimentation. Perhaps the best way to characterize the last couple of years is as a global technoscientific experiment in schools. Schools have been treated as petri dishes with squirts of AI injected into them, then left alone to see what happens as it spreads and mutates. As a keyword, ‘experimentation’ captures a number of developments and issues.

One is the idea that we are witnessing a kind of experiment in educational governance, as government departments have contracted with AI firms to run hackathons and build prototypes, often as live experiments involving teachers and schools. The sociologists Marion Fourcade and Jeff Gordon have called this kind of public-private arrangement the ‘cyberdelegation’ of governance authority to tech firms. It’s experimental ‘digital statecraft’ that often results in the private sector profiting from public sector contracts.

An example here is the tech firm Faculty AI, which has run a hackathon and produced an AI marking prototype for the Department for Education in England. It was awarded a further £3 million contract last month to build a ‘content store’ of official educational materials for AI model training and use by edtech companies. As such, we now have an AI firm doing the work of government – cyberdelegated to perform digital statecraft on behalf of the Department for Education.

One aspect of this work by Faculty AI, it has suggested, is the need for a ‘codification of the curriculum’ to fit the demands of AI. What this means is that for the AI to work as intended, the materials it is trained on need to ‘incorporate AI-friendly structures … that AI tools can recognize and interpret’. So what we have here is a live experiment in AI-enabled schooling that requires the adaptation of official curriculum documents, learning outcomes and so on to be machine-readable. It’s making education AI-ready.
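To make the idea of ‘AI-friendly structures’ a little more concrete, here is a purely hypothetical sketch of what a machine-readable curriculum unit might look like. This is not Faculty AI’s or the Department for Education’s actual format; the schema and field names are invented for illustration only.

```python
# Hypothetical illustration only: one way a curriculum unit might be
# "codified" so that AI tools can recognize and interpret it.
# The schema and field names below are invented for this example.
import json

curriculum_unit = {
    "key_stage": "KS3",
    "subject": "History",
    "unit_title": "The Industrial Revolution",
    "learning_outcomes": [
        {"id": "HIS-3.2", "statement": "Explain causes of urbanisation"},
        {"id": "HIS-3.3", "statement": "Evaluate primary sources on factory labour"},
    ],
    "assessment_types": ["multiple_choice_quiz", "structured_essay"],
}

# Exporting to JSON makes the unit parseable by downstream tools,
# which is roughly what "machine-readable" amounts to in practice.
print(json.dumps(curriculum_unit, indent=2))
```

The point of the sketch is simply that ‘codifying the curriculum’ means translating documents written for human teachers into structured data written for machines.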

This initiative is also part of efforts by the UK government to reduce teacher workload – by reducing their lesson planning and marking demands. But you could see this as a kind of experiment in what I’ve previously called ‘automated austerity schooling’. By this I mean that common problems in schools, like teacher shortages, overwork, and classroom crowding, all of which are results of more than a decade of austerity funding, are now being treated as problems that AI can solve.

It’s an experiment in techno-solutionism, through publicly-funded investments in private tech actors, rather than investment in the public schooling sector itself, perpetuating austerity through automation.

Infrastructuring. This kind of experimentation is also helping Big Tech and Big AI companies embed themselves in education. If we are embedding AI into education, then we are embedding it into the existing digital systems of schooling – the edtech platforms, the learning management systems, the apps, and all the practices that go with them.

These edtech systems and platforms in turn depend on the ‘stack’ of services provided by the ‘Big AI’ companies like Amazon Web Services, Microsoft and Google. This means the digital systems of schooling become nested in Big Tech and AI infrastructures, potentially enabling these companies to exert influence in everyday school routines and processes while schools lose autonomy and control over critical systems, operations and processes.

So the keyword of ‘infrastructuring’ here refers to an ongoing techno-economic structural transformation in the digital substratum of schooling. It will integrate AI ever more tightly into pedagogic practices, learning processes, and administrative and leadership practices, with unknown consequences.

Habituation. Laying down the infrastructural conditions for AI to operate in schools also necessitates accustoming users to the systems so that they function smoothly. This is what in infrastructure studies is termed ‘habituation’ – getting systems to work by getting users to synchronize their practices with them. This is why we might view many efforts to make teachers, leaders and students ‘AI literate’ or ‘AI skilled’ as infrastructure habituation programs. If you’re a Big AI vendor like Google looking to ensure your new AI applications are widely used in schools, then you need to invest in training habitual users.

Radhika Gorur and Joyeeta Dey have described this as ‘making the user friendly’ to what the technology offers so that they use it as its proprietor hopes. It involves seeking ‘alliances’ with educators and institutions, ‘making friends’ and changing the habitual ways they work. As Gorur and Dey note, ‘systems and products carry scripts for the ways users are expected to engage with them’. But these expected uses also depend on teachers and students having the right AI skills and literacies to do so habitually, as companies like Google know well enough to be investing millions in AI training for schools.

Assetization. Why would companies like Google be spending so lavishly on this kind of habituation of users? It’s because AI is a tremendous value proposition. The language of financial ‘assetization’ is useful here. Simply put, a product or a platform can be understood as a financial asset when processes are in place to ensure it returns economic benefits into the future. 

Almost all big tech and edtech companies can be understood to be engaged in assetization processes. Big tech, venture capital investors and edtech companies are all seeking asset value from AI-driven platforms and products, provided they can unlock continuous income streams, as Janja Komljenovic and colleagues have shown in recent research on assetization in education. There are two main routes to financial returns from owning assets.

First, by collecting ‘monetary’ payments as license fees and subscriptions from schools for access to services – where the platform or product is the asset being monetized. Second, by collecting data about institutions’, staff and students’ interactions with the platform for future feature design, upgrades and products that can be re-sold to schools – where the data is an asset that can be monetized in the future.

Through these dual income processes, schools may be locked in to long-term subscriptions and licensing contracts. Such long-term lock-ins serve as a business model for AI in education, as companies can generate income streams by increasing the scale of their user base and extracting value from the data.

Non-accountability. A significant risk of all this infrastructuring, habituation and assetization is that we end up with AI-driven schooling systems that lack accountability. As Dan McQuillan has argued, most commercial AI is opaque, black-boxed, nontransparent, uninterpretable and unaccountable, and its decisions and outputs are hard or impossible to understand or challenge.

If this is the case, then embedding AI in schools means that neither teachers nor administrators might be able to understand, explain, or justify the conclusions the programs reach, or audit or document their validity. School leaders and teachers may be unable to exercise judgment, provide a rationale for what the AI has done, or take responsibility for classroom and institutional decisions if black box AI is integrated into administrative systems and processes. 

School as a Service?

So what does all of this mean? What kind of schooling systems lie ahead of us if the current trajectory of AI integration into education continues? A recent article by Matthew Kirschenbaum and Rita Raley on AI in the higher education sector may offer a warning here. They have suggested that ‘AI may ruin the university as we know it’ – and their argument may stand for schooling too.

With the newest wave of edtech, they argue,  learning becomes ‘autosummary on demand, made possible by a vast undifferentiated pool of content that every successive use of the service helps to grow’. And they suggest that the university itself is now becoming a ‘service’.

‘The idea of the University as a Service extends the model of Software as a Service to education,’ they argue, where ‘Software as a Service refers to the practice of businesses licensing software and paying to renew the license rather than owning and maintaining the software for themselves. For the University as a Service, traditional academic institutions provide the lecturers, content, and degrees (for now). In return, the technological infrastructure, instructional delivery, and support services are all outsourced to third-party vendors and digital platforms’.

We could see the ‘School as a Service’ in similar terms. School as a Service refers to institutions providing the steady flow of users and data that AI demands. It requires well-habituated, friendly users. It extracts data from every interaction, and treats those aggregated data as assets with future value prospects. It also integrates schools into continuous forms of experimentation, which might include the successive introduction of polluted or biased information into educational materials and systems. The School as a Service is a system of outsourcing, of context-free standardization, and of an intensification of some of the most troubling aspects of contemporary schooling. The school could become a service for AI.

Some might say these conclusions are too speculative, and too critical, but I think it’s important to develop a critically speculative orientation to AI in education to counter the futures that are already being imagined and built by industry, entrepreneurs, investors, and solutionist policy authorities.

I hope these critical keywords have helped offer a vocabulary for contending with AI in education as a public problem that urgently requires our deliberation if we want to build other AI futures for the sector. Could we come up with other keywords, informed by other visions, and underpinned by different values, to orient our approach to AI in schools, and build other kinds of AI futures for education?

 


Ben Williamson

Ben Williamson is a Chancellor’s Fellow at the Centre for Research in Digital Education and the Edinburgh Futures Institute at the University of Edinburgh.