Teaching Strategies That Work! (Just Don't Ask "Work to Do What?")
So here's the dilemma for someone who writes about education: certain critical cautions and principles need to be mentioned again and again because policymakers persist in ignoring them, yet faithful readers will eventually tire of the repetition.
Consider, for example, the reminder that schooling isn't necessarily better just because it's more "rigorous." Or that standardized test results are such a misleading indicator of teaching or learning that successful efforts to raise scores can actually lower the quality of students' education. Or that using rewards or punishments to control people inevitably backfires in multiple ways.
Even though these points have been made repeatedly (by me and many others) and supported by solid arguments and evidence, the violation of these principles remains at the core of the decades-old approach to education policy that still calls itself "reform." Hence the dilemma: will explaining in yet another book, article, or blog post why its premises are dead wrong have any effect, other than to elicit grumbles that the author is starting to sound like a broken record?
Another axiom that has been offered many times (but to no apparent effect) is that it means very little to say that a given intervention is "effective" -- at least until we've asked "Effective at what?" and determined that the criterion in question is meaningful. Lots of educators cheerfully declare that they don't care about theories; they just want something that works. But this begs the (unavoidably theoretical) question: What do you mean by "works"?
And once you've asked that, you're obligated to remain skeptical about simple-minded demands for evidence-, data-, or research-based policies. At its best, and on those relatively rare occasions when its results are clear-cut, research can only show us that doing A has a reasonably good chance of producing result B. It can't tell us whether B is a good idea, and we're less likely to talk about that if the details of B aren't even clearly spelled out.
To wit: there's long been evidence to demonstrate the effectiveness of certain classroom management strategies, most of which require the teacher to exercise firm control from the first day of school. But how many readers of this research, including teacher educators and their students, interrupt the lengthy discussion of those strategies to ask what exactly is meant by "effectiveness"?
The answer, it turns out, is generally some variation on compliance. If you do this, this, and this, you're more likely to get your kids to do whatever they're told. Make that explicit and you'd then have to ask whether that's really your paramount goal. If, on reflection, you decide that it's most important for students to become critical thinkers, enthusiastic learners, ethical decision-makers, or generous and responsible members of a democratic community, then the basic finding -- and all the evidence behind it -- is worth very little. Indeed, it may turn out that proven classroom management techniques undermine the realization of more ambitious goals because those goals call for a very different kind of classroom than the standard one, which is designed to elicit obedience.
An even more common example of this general point concerns academic outcomes. In scholarly journals, in the media's coverage of education, and in professional development workshops for teachers, any number of things are described as more or less beneficial -- again, with scant attention paid to the outcome. The discussion about "promising results" (or their absence) is admirably precise about what produced them, while swiftly passing over the fact that those results consist of nothing more than scores on standardized tests, often of the norm-referenced, multiple-choice variety.
We're back, then, to one of those key principles, enunciated -- and ignored -- repeatedly, that I mentioned earlier. Standardized tests tend to measure what matters least about intellectual proficiency, so it makes absolutely no sense to judge curricula, teaching strategies, or the quality of educators or schools on the basis of the results of those tests. Indeed, as I've reported elsewhere, test scores have actually been shown to be inversely related to deep thinking.
Thus, "evidence" may demonstrate beyond a doubt that a certain teaching strategy is effective, but it isn't until you remember to press for the working definition of effectiveness -- which can take quite a bit of pressing when the answer isn't clearly specified -- that you realize the teaching strategy (and all the impressive sounding data that support it) are worthless because there's no evidence that it improves learning. Just test scores.
Which leads me to a report published earlier this year in the Journal of Educational Psychology. A group of researchers at the City University of New York and Kingston University in London performed two meta-analyses -- a meta-analysis being a technique for statistically combining the results of many studies in order to estimate an overall effect. The title of the article was "Does Discovery-Based Instruction Enhance Learning?", which is a question of interest to many of us.
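For readers who haven't encountered the technique, here is a minimal sketch of the arithmetic, written in Python with entirely invented numbers (it is emphatically not these authors' data or code). The core idea of a fixed-effect meta-analysis is to weight each study's effect size by the inverse of its variance, so that larger, more precise studies count for more in the pooled estimate:

    # Illustrative only: hypothetical effect sizes (Cohen's d) and sampling
    # variances for five imaginary comparisons of two teaching approaches.
    effects = [0.40, 0.15, 0.55, 0.10, 0.30]
    variances = [0.02, 0.05, 0.01, 0.04, 0.03]

    # Fixed-effect pooling: weight each study by the inverse of its variance,
    # so more precise studies contribute more to the overall estimate.
    weights = [1 / v for v in variances]
    pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)

    print(f"Pooled effect size: {pooled:.2f}")  # about 0.40 with these numbers

Notice that nothing in this arithmetic asks what the effects are effects on; the pooling works identically whether the outcome is deep understanding or rote recall.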
Would you like to know the much-simplified answer that the meta-analyzers reported? The first review, of 580 comparisons from 108 studies, showed that completely unassisted discovery learning is less effective than "explicit teaching methods." The second review, of 360 comparisons from 56 studies, showed that various "enhanced" forms of discovery learning work best of all.
There are many possible responses one might have to this news. One is "Duh." Another is "Tell me more about those enhanced forms, and which of them is most effective." Another is "Why did 108 groups of scholars bother to evaluate laissez-faire discovery given that, as these reviewers acknowledge, it constitutes something of a straw man since it's not the way most progressive and constructivist educators teach?" Yet another: "How much more effective are we talking about?" -- since a statistically significant difference can be functionally meaningless if the effect size is small.
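On that last question, it's worth seeing concretely how "statistically significant" and "educationally meaningful" can come apart. A quick back-of-the-envelope sketch (Python again, with invented numbers) shows a difference in test scores far too small to matter in any classroom that nonetheless sails past the conventional p < .05 threshold simply because the samples are large:

    import math
    from statistics import NormalDist

    # Hypothetical: two very large groups whose mean scores differ trivially.
    n = 20000                      # students per group
    mean_a, mean_b = 100.0, 100.7  # group means on some standardized test
    sd = 15.0                      # common standard deviation

    d = (mean_b - mean_a) / sd              # Cohen's d: about 0.05, negligible
    se = sd * math.sqrt(2 / n)              # standard error of the difference in means
    z = (mean_b - mean_a) / se
    p = 2 * (1 - NormalDist().cdf(abs(z)))  # two-tailed p-value

    print(f"d = {d:.3f}, z = {z:.2f}, p = {p:.6f}")  # tiny effect, yet p < .05

In other words, "it works (p < .05)" can coexist comfortably with "the difference is too small for anyone to notice."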
But I took my own advice and asked "What the hell did all those researchers, whose cooking was tossed into a single giant pot, _mean_ by 'effective'?" Pardon my italics, but it's astonishing how little this issue appeared to matter to the review's authors. There was no discussion of it in the article's lengthy introduction or in the concluding discussion section. Yes, "dependent variable" (D.V.) was one of the moderators employed to allow more specificity in crunching the results -- along with age of the students, academic subject being taught, and so on. But D.V. -- what discovery learning does or doesn't have an effect on -- was broken down only by the type of measurement used in the studies: post-test scores vs. acquisition scores vs. self-ratings. There wasn't a word to describe, let alone analyze, what all the researchers were looking for. Did they want to see how these different types of instruction affect kids' scores on tests of basic recall? Their ability to generalize principles to novel problems? Their creativity? (There's no point in wondering about the impact on kids' interest in learning; that almost never figures in these studies.)
Papers like this one are peer-reviewed and, as was the case here, are often sent back for revision based on reviewers' comments. Yet apparently no one thought to ask these authors to take a step back and consider what kind of educational outcomes are really at issue when different instructional strategies are compared. Never mind the possibility that explicit teaching might be much better than discovery learning... at producing results that don't matter worth a damn, intellectually speaking.
In fact, the D.V. in education studies is often quite superficial, consisting only of (yup) standardized test scores or a metric like the number of items taught that were correctly recalled. And if one of these studies makes it into the popular press, that fact about it probably won't. In January I wrote about widespread media coverage of one such study -- "Take a Test to Really Learn, Research Suggests," as the New York Times headline put it. Except that you had to read the study itself, and read it pretty carefully, to discover that "really learn" just meant "stuff more facts into short-term memory."
But the problem isn't just an over-reliance on outcome measures -- rote recall, test scores, or obedience -- that some of us regard as shrug-worthy and a distraction from the intellectual and moral characteristics that could be occupying us instead. The problem is that researchers are, as a journalist might put it, burying the lead. And too many educators don't seem to notice.
If this situation doesn't improve, please accept my apologies in advance because it's likely that I'll feel compelled to write another essay about it in the near future.
This blog post has been shared by permission from the author.
The views expressed by the blogger are not necessarily those of NEPC.