Critical Studies of Education & Technology: Start Where the Pain Is: Notes on Topic-Selection in Technology Studies (Vinsel 2024)
Lee Vinsel has recently written to lament the poor topic selection of ‘critical’ tech researchers. In particular, he calls out the research choices of progressive academics who regularly make a point of publicly decrying technology-related harms, injustices, and similar social problems. Vinsel notes that such researchers often fail to follow through on these concerns in their actual research, preferring instead to scrutinise and investigate the latest hyped ‘emerging’ technologies that are not yet in widespread use and are certainly not the cause of actual harms at the present time (indeed, Vinsel wryly describes these at one point as ‘barely existing’ technologies).
Many critical researchers therefore seem not especially interested in practising what they preach. Critically anticipating speculative future harms may well be conceptually stimulating, but it is hardly worthy of sustained attention for anyone genuinely concerned with improving the lives of the currently marginalised and vulnerable. Vinsel reasons that this is largely due to the technology-led ways in which critical researchers choose their research topics: focusing on the conceptual threat of emerging innovations (such as facial recognition or self-driving cars) without much consideration of how common these technologies (or their harms) actually are. The announcement of a new product launch or future scenario – especially when framed in tech industry hype – will often pique the interest of even the most jaded tech critic and trigger all manner of counterarguments and theoretical connections. As well as being a source of intellectual stimulation, focusing one’s writing and research on these hyped technologies can prove highly lucrative, quickly attracting citations, book contracts and money from research funders eager to break new ground and tackle cutting-edge issues.
The limitations of this approach are obvious. By taking evidence-free tech hype seriously, we inevitably run the risk of developing our own evidence-free hyperbolic counterarguments. For example, in my own research around educational technologies, it is tempting to add to critiques of emotion-detection AI and EEG bands in classrooms without stopping to check the extent to which any of this tech is actually being taken up in schools (spoiler: hardly at all). Similarly, the emergence of online exam proctoring became a minor cause célèbre in critical EdTech circles during the pandemic, triggering numerous op-eds, articles, and other hot takes. In the few universities where online proctoring was being used, this tech certainly was egregious and required local push-back. Yet online proctoring remains a niche technology – one that most universities have not implemented and that many education leaders and administrators remain highly sceptical of.
Vinsel’s point is that critical tech scholars are too busy looking for fresh instances of emerging tech to be intellectually offended by, and so look past the mundane, familiar tech practices and processes that continue to cause widespread harm and suffering across education systems and people’s everyday lives. A more consistent approach, Vinsel reasons, would be to address the ways in which technologies are currently implicated in “the pains of ordinary human beings happening right now”.
This certainly makes sense in terms of critical studies of education and technology. For sure, it might be argued that there is value in keeping our critical eyes keenly on the horizon of novel tech-related threats – anticipating the directions in which EdTech might be headed and then making efforts to prevent possible harms occurring on a widespread scale. Yet we are perhaps fooling ourselves that our scathing takedowns of online proctoring and affective monitoring are the real reason that such tech is not coming to fruition. Instead, as with most examples of tech industry hustling and huckstering, these are simply speculative pitches put forward by companies keen to raise more venture capital. The aim of many EdTech firms is not to actually develop, build and deploy these technologies at scale. The aim is to develop the idea of these technologies in a manner that attracts further investment. In devoting sustained critical attention to these ideas we are also buying into – and perpetuating – the hype (albeit in dystopian rather than utopian tones).
So, what critical research is required when it comes to the world of technology and education? Vinsel’s argument is that most human torment comes from the existing mundane technologies in people’s lives. Instead of being distracted by the hyped ‘emerging’ technologies described above, Vinsel recommends that critical researchers choose their topics by starting with pain rather than starting with technologies. This involves basing our work in the long-running conversations and concerns taking place within established scholarly literatures around issues such as inequality, injustice, disadvantage, and other problems that are causing human affliction. In this sense, we should initially be guided toward instances of social inequality, injustice, and disadvantage, and only then look at where technologies are implicated. After this work of tracing backwards from people’s lived experiences, we can then set about producing accounts of technology that make sense within these literatures.
In terms of critical studies of education and technology, then, the pains and actual human suffering afflicting students, teachers, and others in education today are far less driven by the likes of OpenAI, McKinsey, the OECD, and angel investors, and much more by the mundane ways that crappy software and rotting platforms bump up against everyday factors such as student economic hardship, institutional regimes of surveillance and discrimination, teacher burnout and mental health. There is a rich critical education literature that documents the persistence of such harms, hardships, afflictions, and problems. As such, the job of critical EdTech scholarship is to start from these entrenched issues and then see where technologies are present and implicated.
Vinsel therefore sketches out a form of materialist technology studies that ‘starts where the pain is’ and then explores where and how technologies are implicated alongside many other material causes of that pain:
“… things like lack of access to safe housing, clean water, clean air, healthy food, energy, medicine, waste management (e.g. avoiding exposure to human shit and piss), accessible transportation, leisure, headspace, and so on as well as a whole host of factors that induce stress, which leads to all kinds of terrible physical and mental health outcomes”.
As a tangible example of what this might look like, Vinsel highlights Julia Ticona’s work on how precarious workers “use their digital technologies to navigate insecure and flexible labour markets”. Instead of starting with the highly hyped ‘sharing economy’ apps of the moment, Ticona started by recruiting research subjects in the places where they were hanging out (phone stores, gas stations, convenience stores) and slowly getting to know how mobile phones fitted into their everyday work lives. This bottom-up, lived-experience approach shifted focus away from the ‘digital divide’ costs and harms associated with lack of access to the latest apps and internet-connected digital technologies. Instead, Ticona’s research participants pointed her toward the real harms of being digitally included (or perhaps more accurately enmeshed) in a technologically enabled “system that is fundamentally unequal and designed to profit from being so”. As Vinsel enthuses, this study “shows us so much about how digital technology use fits within working-class struggles and tribulations”.
It is easy to imagine how this sort of study would translate over into educational contexts. If you talk to university students about their pains and hardships, a wide range of issues is likely to come to the fore – from the ongoing grind of juggling multiple work obligations, to the present rise of on-campus authoritarianism around the Palestinian conflict. Digital technologies will be implicated in all of these lived experiences. For example, there is a definite need at the moment for research into how students have been using digital technologies to coordinate campus activism and, conversely, how universities have been using tech to suppress protest, control space and minimise institutional risk.
There is also a definite need for research into how students navigate their universities’ crappy digital platforms on a daily basis – constantly having to find ‘hacks’ and workarounds that highlight the inequity and inaccessibility of the design of these ostensibly ‘customer-facing’ systems. And there is a definite need for research into how digital technologies are implicated in the ways university teachers navigate their ever-increasing workloads and related stress. Such lines of enquiry might not seem the most appealing in terms of the technologies they lead us to study. Yet, to paraphrase Vinsel’s concluding point, “if we claim that human suffering is one of the things that we care most about in life, we actually have to study actual human suffering”.
REFERENCES
Lee Vinsel (2024). Start where the pain is: Notes on topic-selection in technology studies. People & Things, April 5. https://peoples-things.ghost.io/start-where-the-pain-is-notes-on-topic-selection-in-technology-studies/
Julia Ticona (2022). Left to our own devices: Coping with insecure work in a digital age. Oxford University Press.