BOULDER, CO (March 5, 2024) – Despite their own widely publicized appeals to regulate and slow the implementation of artificial intelligence (AI), leading tech giants such as Google, Microsoft, and Meta are racing to evade regulation and incorporate AI into their platforms.
A new NEPC policy brief, Time for a Pause: Without Effective Public Oversight, AI in Schools Will Do More Harm Than Good, warns of the dangers of unregulated AI in schools, highlighting democracy and privacy concerns. Authors Ben Williamson of the University of Edinburgh and Alex Molnar and Faith Boninger of the University of Colorado Boulder examine the evidence and conclude that the proliferation of AI in schools jeopardizes democratic values and personal freedoms.
Public education is a public good that is essential to democratic civic life. The public must, therefore, be able to provide meaningful direction over schools through transparent democratic governance structures. Yet important discussions about AI’s potentially negative impacts on education are being overwhelmed by relentless rhetoric promoting its purported ability to positively transform teaching and learning. The result is that AI, with little public oversight, is on the verge of becoming a routine and pervasive presence in schools.
Years of warnings and precedents have highlighted the risks posed by the widespread use of pre-AI digital technologies in education, which have obscured decision-making and enabled student data exploitation. Without effective public oversight, the introduction of opaque and unproven AI systems and applications will likely exacerbate these problems.
The authors explore the harms likely to result if lawmakers and others do not step in with carefully considered regulations. Integration of AI can degrade teacher-student relationships, corrupt curriculum with misinformation, introduce bias into assessments of student performance, and lock schools into a system of expensive corporate technology. Further, they contend, AI is likely to exacerbate violations of student privacy, increase surveillance, and further reduce the transparency and accountability of educational decision-making.
The authors warn that without responsible development and regulation, these opaque AI models and applications will become enmeshed in routine school processes. Students and teachers would then be forced to serve as involuntary test subjects in a giant experiment in automated instruction and administration, one sure to be rife with unintended and potentially harmful consequences. Once AI is enmeshed, the only way to disentangle schools from it would be to dismantle those systems completely.
The policy brief concludes by suggesting measures to address these extensive risks. Perhaps most importantly, the authors urge school leaders to pause the adoption of AI applications until policymakers have had sufficient time to thoroughly educate themselves and to develop legislation and policies ensuring effective public oversight and control of AI's school applications.
Find Time for a Pause: Without Effective Public Oversight, AI in Schools Will Do More Harm Than Good, by Ben Williamson, Alex Molnar, and Faith Boninger, at:
http://nepc.colorado.edu/publication/ai