The Data-Driven Education Movement
In the education community, many proclaim themselves to be “completely data-driven.” Data-Driven Decision Making (DDDM) has been a buzz phrase for a while now, and it continues to be a badge many wear with pride. And yet, every time I hear it, I cringe.
Let me explain. During my first year in graduate school, I was taught that excessive attention to quantitative data impedes – rather than aids – in-depth understanding of social phenomena. In other words, explanations cannot simply be cranked out of statistical analyses without some precursor theory; the attempt to do so – a.k.a. “variable sociology” – constitutes a major obstacle to the advancement of knowledge.
I am no longer in graduate school, so part of me says: Okay, I know what data-driven means in education. But then, at times, I still think: No, really, what does “data-driven” even mean in this context?
At a basic level, it seems to signal a general orientation toward making decisions based on the best information that we have, which is a very good thing. But there are two problems here. First, we tend to have an extremely narrow view of the information that counts – that is, data that can be quantified easily. Second, we seem to operate under the illusion that data, in and of themselves, can tell stories and reveal truth.
But the thing is: (1) numbers are not the only type of data that matter; and (2) all data need to be interpreted before they can be elevated to the status of evidence – and theory should drive this process, not data.
Remember the parable about the drunk man searching for his wallet under a streetlight? When someone comes to help, they ask “Are you sure you dropped it here?” The drunk says, “I probably dropped it in the street, but the light is bad there, so it’s easier to look over here.” In science, this phenomenon – that is, researchers looking for answers where the data are better, “rather than where the truth is most likely to lie” – has been called the “streetlight effect.”
As David Freedman explains it in a Discover Magazine article that asks why scientific studies are so often wrong, researchers “don’t always have much choice. It is often extremely difficult or even impossible to cleanly measure what is really important, so scientists instead cleanly measure what they can, hoping it turns out to be relevant.”
As Freedman says, “We should fully expect scientific theories to frequently butt heads and to wind up being disproved sometimes as researchers grope their way toward the truth. That is the scientific process: Generate ideas, test them, discard the flimsy, repeat.”
But what if researchers develop their ideas to fit the data they happen to have, rather than finding the data to test the most important ideas?
So, as yawn-inducing as the word theory may sound to a lot of people, theory rationalizes the search for your wallet or anything else, helping to focus attention on the areas where it is most likely to be found. In education, we often seem too preoccupied with the convenient and well-lit. And while it seems like we are drowning in education data, are they the data that we need to make sound decisions?
Sociologists Peter Hedström and Richard Swedberg (1996) wrote:
Quantitative research is essential both for descriptive purposes and for testing sociological theories. We do, however, believe that many sociologists have had all too much faith in statistical analysis as a tool for generating theories, and that the belief in an isomorphism between statistical and theoretical models [...] has hampered the development of sociological theories built upon concrete explanatory mechanisms.
Something similar could be said about the data-driven education movement: Excessive faith in data crunching as a tool for making decisions has interfered with the important task of asking fundamental questions in education, such as whether we are looking for answers in the right places and not just where it is easiest to look (e.g., standardized test data).
As education scholar (and blogger) Bruce Baker has shown (often humorously), data devoid of theory can suggest ridiculous courses of action:
Let’s say I conducted a study in which I rented a fleet of helicopters and used those helicopters to, on a daily basis, transport a group of randomly selected students from Camden, NJ to elite private day schools around NJ and Philadelphia. I then compared the college attendance patterns of the kids participating in the helicopter program to 100 other kids from Camden who also signed up for the program but were not selected and stayed in Camden public schools. It turns out that I find that the helicopter kids were more likely to attend college – therefore I conclude logically that “helicopters improve college attendance among poor, minority kids.”
As preposterous as this proposal may sound, the Brookings report he mentions argues somewhat along these lines – only the helicopters are vouchers. The study, says Baker, “purports to find [or at least the media spin on it] that vouchers as a treatment, worked especially for black students.” A minimal understanding of the mechanisms involved here should have made it obvious that vouchers are likely no more relevant than helicopters to children’s educational attainment.
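To make the logic of the parable concrete, here is a minimal simulation sketch in Python. It is my own illustration, not Baker’s: the simulate_student helper, the baseline rate, and the assumption that the elite school (not the helicopter ride) drives the entire effect are all invented for the example.

```python
import random

random.seed(0)  # reproducible illustration

# Hypothetical bundled "treatment": a helicopter ride plus enrollment in an
# elite private school. By assumption, only the school moves the outcome.
def simulate_student(treated: bool) -> bool:
    """Return True if the (simulated) student attends college."""
    base_rate = 0.40                           # assumed control-group rate
    school_effect = 0.25 if treated else 0.0   # assumed effect of the school
    helicopter_effect = 0.0                    # the ride itself does nothing
    return random.random() < base_rate + school_effect + helicopter_effect

n = 100  # matches the 100-student comparison group in the parable
helicopter_group = sum(simulate_student(True) for _ in range(n))
control_group = sum(simulate_student(False) for _ in range(n))

print(f"College attendance, helicopter group: {helicopter_group / n:.0%}")
print(f"College attendance, control group:    {control_group / n:.0%}")
# The gap is real and assignment was random, yet "helicopters improve college
# attendance" is the wrong causal story: the mechanism is the school, and
# only theory - not the numbers alone - can tell the two apart.
```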
A second example: About a year ago at the United Nations Social Innovation Summit, Nicholas Negroponte suggested that the “One Laptop Per Child” program might, “literally or figuratively, drop out of a helicopter with tablets into a village where there is no school,” and then come back after a year to see how children have taught themselves to read.
This faith in the power of new technology to bring about fundamental educational transformation is not new, but I think it could be minimized if we reflected on more basic questions, such as: What is it that helicopter-dropped tablets might actually do to increase children’s educational gains?
My colleague recently wrote that NCLB “has helped to institutionalize the improper interpretation of testing data.” True. But I would go even further: NCLB has helped to institutionalize not just how we handle data, but also, and more importantly, what counts as data. The law requires schools to rely on scientifically based research but, as it turns out, case studies, ethnographies, interviews, and other forms of qualitative research seem to fall outside this definition – and, thus, are deemed unacceptable as a basis for making decisions.
Since when are qualitative data unacceptable in social and behavioral science research and as a guide in policy-relevant decision-making?
Our blind faith in numbers has ultimately impoverished how (and what) information is used to address real-world problems. We now apparently believe that numbers are not just necessary, but sufficient, for making research-based decisions.
The irony, of course, is that this notion is actually contrary to the scientific process. Being data-driven is only useful if you have a strong theory by which to navigate; anything else can leave you heading blindly toward a cliff.
- Esther Quintero
This blog post has been shared by permission from the author.
Readers wishing to comment on the content are encouraged to do so via the link to the original post.
The views expressed by the blogger are not necessarily those of NEPC.