Hack Education: Ed-Tech Agitprop
This talk was delivered at OEB 2019 [the largest international Ed-Tech conference] in Berlin. Or part of it was. I only had 20 minutes to speak, and what I wrote here is a bit more than what I could fit in that time-slot. You can find the complete slide-deck (with references) here.
I am going to kick the hornet's nest this morning.
Before I do, let me say thank you very much for inviting me to OEB. I have never been able to make it to this event -- it's always coincided with the end-of-the-year series I've traditionally written for Hack Education. But I've put that series to rest while I focus on book-writing, and I'm thrilled to be able to join you here this year. (I'm also thrilled that the book is mostly done.)
I did speak, a few years ago, at the OEB Midsummit event in Iceland. It was, I will confess, one of the strangest experiences I've ever had on stage. A couple of men in the audience were obviously not pleased with my talk. They didn't like it when I admonished other speakers for using the "Uber for education" catchphrase, or something. So they jeered. I can't say I've ever seen other keynote speakers at ed-tech events get heckled. But if nothing else, it helped underscore for me not only how the vast majority of ed-tech speakers give their talks with a righteous fury about today's education system -- a fury that echoes the school-as-factory scene in Pink Floyd's The Wall, a movie that is 40 years old, mind you -- while remaining starry-eyed about a future of education that will be (ironically) more automated, more algorithmic; but also how too many people in the audience at ed-tech events want to chant "hey teacher! leave those kids alone" and then be reassured that, in the future, magically, technology will make everything happier.
I shared the stage that day in Iceland with a person who gave their talk on how, in the future, robots will love us -- how they will take care of us in our old age; how they will teach our classes; how they will raise our children. And while my criticism of exploitative "personalized learning" technology was booed, their predictions were cheered.
As this speaker listed the marvels of new digital technologies and a future of artificial intelligence, they claimed in passing that we can already "literally 3D print cats." And folks, let me assure you. We literally cannot.
But no one questioned them. No one batted an eye. No one seemed to stop and think, "hey wait, if you've made up this claim about 3D printing felines, perhaps you're also exaggerating the capabilities of AI, exaggerating our desires for robot love. Why should we trust what you tell us about the future?"
Why should an audience trust any of us up here? Why do you? And then again, why might you distrust me?
I've been thinking a lot lately about this storytelling that we speakers do -- it's part of what I call the "ed-tech imaginary." This includes the stories we invent to explain the necessity of technology, the promises of technology; the stories we use to describe how we got here and where we are headed. And despite all the talk about our being "data-driven," about the rigors of "learning sciences" and the like, much of the ed-tech imaginary is quite fanciful. Wizard of Oz pay-no-attention-to-the-man-behind-the-curtain kinds of stuff.
This storytelling seems to be quite powerful rhetorically, emotionally. It's influential internally, within the field of education and education technology. And it's influential externally -- that is, in convincing the general public about what the future of teaching and learning might look like, should look like, and making them fear that teaching and learning today are failing in particular ways. This storytelling hopes to set the agenda. Hence the title of my talk today: "Ed-Tech Agitprop" -- ed-tech agitation propaganda.
Arguably, the most powerful, most well-known story about the future of teaching and learning looks like this: [point to slide]. You can talk about Sugata Mitra or Ken Robinson's TED Talks all you want, but millions more people have watched this tale.
This science fiction creeps into presentations that claim to offer science fact. It creeps into promises about instantaneous learning, facilitated by alleged breakthroughs in brain science. Take Nicholas Negroponte, for example, the co-founder of the MIT Media Lab, who in his 2014 TED Talk predicted that in 30 years' time (which is, I guess, 25 years from now), you will swallow a pill and "know English," swallow a pill and "know Shakespeare."
What makes these stories appealing or even believable to some people? It's not science. It's "special effects." And The Matrix is, after all, a dystopia. So why would Matrix-style learning be desirable? Because of its speed? Its lack of teachers?
What does it mean in these stories -- in both the Wachowskis' and Negroponte's -- to "know"? To know Kung Fu or English or Shakespeare? It seems to me, at least, that knowing and knowledge here are decontextualized, cheapened. This is a hollowed-out epistemology, an epistemic poverty in which human experience and human culture are not valued.
I'm finishing up my book, as some of you know, on teaching machines. It's a history of the devices built in the mid-twentieth century (before computers) that psychologists like B. F. Skinner believed could be used to train people (much as he trained pigeons) and that would, in the language of the time, "individualize" education.
I am going to mostly sidestep a discussion about Teaching Machines for now -- I'll save that for the book tour. But I do want to quickly note the other meanings of the phrase -- the ones that show up in the "Google Alert" I have set for the title. In this formulation, it is the machines that are, ostensibly, being taught. Supposedly computer scientists are now teaching machines -- conditioning and training algorithms and machines and, as Skinner also believed, eliciting from them optimum learning behaviors. From time to time, my "teaching machines" Google Alert brings up other references to our fears that the students we are teaching are being reduced to robots, a suspicion about the increasing mechanization of students' lives -- of all of our lives, really. (The alert never points to the long history of instructional technology -- although my book will change that, I hope -- because we act like history doesn't matter.) Epistemic poverty once again.
This is my great concern with much of technology, particularly education technology: not that "artificial intelligence" will in fact surpass what humans can think or do; not that it will enhance what humans can know; but rather that humans -- intellectually, emotionally, occupationally -- will be reduced to machines. We already see this when we talk on the phone with customer support; we see this in Amazon warehouses; and we see this in adaptive learning software. Humans being bent towards the machine.
Why the rush to mechanize? Why the glee? Why the excitement?
I think the answer in part lies in the stories that we tell about technology -- "the ed-tech imaginary." "Ed-tech agitprop."
And when I say "we," I do mean we -- those of us in this room, those of us behind the microphones at events like this, those of us whose livelihoods demand we tell or repeat stories about the future of education.
Agitprop is a portmanteau -- a combination of "agitation" and "propaganda," the shortened name of the Soviet Department for Agitation and Propaganda, which was responsible for explaining communist ideology and convincing the people to support the party. This agitprop took a number of forms -- posters, press, radio, film, social networks -- all in the service of spreading the message of the revolution, in the service of shaping public beliefs, in the service of directing the country towards a particular future.
To suggest that storytelling in ed-tech is agitprop is not to suggest that it's part of some communist plot. But it is, I hope, to make clear that there is an agenda -- a political agenda, and a powerful one at that -- and an ideology to our technologies, and these come intertwined with an incredible sense of urgency. "We must adopt this or that new technology" -- or so the story goes -- "or else we will fall behind, or else we'll lose our jobs, or else our children won't be able to compete." "We must adopt this or that new technology" -- or so the story goes -- "in order to bring about the revolution."
Although agitprop is often associated with the Soviet control and dissemination of information, there emerged in the 1920s a strong tradition of agitprop art and theatre -- not just in the USSR. One of its best-known proponents was my favorite playwright, Bertolt Brecht. Once upon a time, before I turned my attention to education technology, I was working on a PhD in Comparative Literature that drew on Brecht's Verfremdungseffekt and on the Russian Formalists' concept of ostranenie -- "defamiliarization." Take the familiar and make it unfamiliar. A radical act, or so these artists and activists believed, one that would destabilize what has become naturalized, normalized, taken for some deep "truth." Something to shake us out of our complacency.
Perhaps nothing has become quite as naturalized in education technology circles as stories about the inevitability of technology, about technology as salvation.
One of my goals with this talk, then, is to "defamiliarize" these stories, to turn ed-tech agitprop back on itself. This is my theatre for you this morning. Politically, I want to make you rethink your stories. Practically, if nothing else, I want to make you rethink your slide decks. And that sucks, I guess. That'll be what gets me booed this time. But we must think more carefully about the stories that we are helping powerful corporations and powerful political forces to push. We need to stop and recognize what's propaganda, what's mis- and disinformation, what's marketing, what's bullshit, what's potentially dangerous and damaging storytelling, what's dystopian world-building.
I want to run through a series of stories -- story snippets, really. It's propaganda that I'm guessing you have heard and will hear again -- although hopefully not here at this event. I contend that these stories are based on no more science than Neo's Kung-Fu lesson or Negroponte's Shakespeare pill.
Now, none of these stories is indisputably true. At best -- at best -- they are unverifiable. We do not know what the future holds; we can build predictive models, sure, but that's not what these are. Rather, these stories get told to steer the future in a certain direction, to steer dollars in a certain direction. (Alan Kay once said "the best way to predict the future is to invent it," but I think, more accurately, "the best way to predict the future is to issue a press release," "the best way to predict the future is to invent statistics in your keynote.") These stories might "work" for some people. They can be dropped into a narrative to heighten the urgency that institutions simply must adapt to a changing world -- agitation propaganda.
Many of these stories contain numbers, and that makes them appear as though they're based on research, on data. But these numbers are often cited without any sources. There's often no indication of where the data might have come from. These are numerical fantasies about the future.
I don't have a lot of time up here, but I do want to quickly refute these stories -- you can certainly, on your own time, try to verify them. Look for the sources. Look at the footnotes. And then, of course, ask why this is the story that gets told, that gets repeated -- how the story functions. Benjamin Doxtdator has provided a great example of how to do this in an article titled "A Field Guide to 'jobs that don't exist yet'," in which he investigates the origins of the claim that "65% of children entering primary school today will end up in jobs that don't exist yet." He traces it through its appearances in OECD and World Economic Forum materials; through its invocation in Cathy Davidson's 2011 book Now You See It and in an Australian jobs report that now no one can find; and all the way back to a quip made by President Bill Clinton in 1996. It's not a fact. It's a slogan. It's an invention. It's a lie.
The "half life of skills" claim seems to have a similarly convoluted and dubious origin. If you search for this claim, you'll find yourself tracing it through a long list of references -- EAB cites the World Economic Forum (an organization which seems to specialize in this sort of storytelling and it's worth asking why the fine folks who gather in Davos might craft these sorts of narratives). For its part, the World Economic Forum cites a marketing blog. That marketing blog cites John Seely Brown's book The New Culture of Learning. But if you search the text of that book, the phrase "half life of skills" doesn't appear anywhere. It's fabricated. And if you stop and think about it, it's nonsense. And yet the story fits so neatly into the broader narrative that we must all be engaged in constant "lifelong" learning, perpetually retraining ourselves, always in danger of being replaced by another worker or replaced by a machine, never questioning why there is no more social safety net.
We have to keep retraining ourselves (often on our own dime and our own time), so the story goes, because people no longer remain in the same job or in the same career. People are changing jobs more frequently than they ever have before, and they're doing more different kinds of work than they ever have before. Except they're not. Not in the US, at least. Job transitioning has actually slowed in the past two decades -- yes, even for millennials. While the occupational structure of the economy has changed substantially in the last hundred years, the pace at which new types of jobs emerge has also slowed since the 1970s. Despite all the talk of the "gig economy," about 90% of the workforce is still employed in a standard job arrangement -- about the same percentage as it's been since 1995. More people aren't becoming freelancers; more people aren't starting startups.
Certainly there are plenty of people who insist that this occupational stagnation is poised to be disrupted in the coming years because "robots are coming for your jobs." A word, please. Robots aren't coming for your jobs, but management may well be. Employers will be the ones to make the decision to replace human labor with automated labor. But even this talk about the impending AI revolution is speculative. It's agitation propaganda. The AI revolution has been impending for about 65 years now. But this time, this time I'm sure it's for real.
Another word: "robots are coming for your jobs" is one side of the coin; "immigrants are coming for your jobs" is the other. That is, it is the same coin. It's a coin often used to marshal fear and hatred, to make us feel insecure and threatened. It's the coin used in a sleight of hand to distract us from the profit-driven practices of capitalism. It's a coin used to divide us so we cannot solve our pressing global problems for all of us, together.
Is technology changing faster than it's ever changed before? It might feel like it is. Futurists might tell you it is. But many historians would disagree. Robert Gordon, for example, has argued that rapid economic growth began in the late 19th century and took off in the early 20th century with the invention of "electricity, the internal combustion engine, the telephone, chemicals and plastics, and the diffusion to every urban household of clear running water and waste removal." Rapid technological change -- faster than ever before. But he argues that the growth from new technologies slowed by the 1970s. New technologies -- even new digital technologies -- are, he contends, incremental changes rather than the wholesale alterations to society we saw a century ago. Many new digital technologies, Gordon argues, are consumer technologies, and these will not -- despite all the stories we hear -- necessarily restructure our world. Perhaps we're compelled to buy a new iPhone every year, but that doesn't mean that technology is changing faster than it's ever changed before. That just means we're trapped by Apple's planned obsolescence.
As historian Jill Lepore writes, "Futurists foretell inevitable outcomes by conjuring up inevitable pasts. People who are in the business of selling predictions need to present the past as predictable -- the ground truth, the test case. Machines are more predictable than people, and in histories written by futurists the machines just keep coming; depicting their march as unstoppable certifies the futurists' predictions. But machines don't just keep coming. They are funded, invented, built, sold, bought, and used by people who could just as easily not fund, invent, build, sell, buy, and use them. Machines don't drive history; people do. History is not a smart car."
We should want a future of human dignity and thriving and justice and security and care -- for everyone. Education is a core part of that. But dignity, thriving, justice, and care are rarely the focus of how we frame "the future of learning" or "the future of work." Robots will never care for us. Unbridled techno-solutionism will never offer justice. Lifelong learning isn't thriving when it is a symptom of economic precarity, of instability, of a disinvestment in the public good.
When the futures we hear predicted on stages like this turn so casually towards the dystopian, towards an embrace of the machine, towards an embrace of efficiency and inequality and fear -- and certainly that's the trajectory I feel that we are on with the narratives underpinning so much of ed-tech agitprop -- then we have failed. This is a massive failure of our politics, for sure, but it is also a massive failure of imagination. Do better.