Policy by Algorithm (Jeff Henig), Part 2
Jeff Henig is a professor of political science and education at Teachers College, Columbia University. This post appeared July 27, 2011 on Rick Hess’s blog in Education Week.
There is a satisfying solidity to the term “data-based” decision-making. But basing decisions on data is not the same thing as basing them on knowledge. Data are collections of nuggets of information. Compared with “soft” rationales for action–opinion, intuition, conventional wisdom, common practice–they are hard, descriptive, often quantitative.
When rich, high-quality data sets are mined by sophisticated, dynamically adjusted algorithms, the results can be powerful. Google's search engine is the prime example here. Google scores web pages based on indicators like the number of other websites that link to the page, the popularity and selectivity of those linking sites, how long the target site has existed, and how prominently on the site the search keywords appear. The resulting score determines the order in which sites are listed in response to Google searches–and listing position is critical. According to one source, the top spot typically attracts 20 percent to 30 percent of the search page's clicks, with clicks falling off sharply for sites listed further down.
A February 2011 change in the Google algorithm was estimated to shift about $1 billion in revenue.
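To make the mechanics concrete, here is a toy sketch of that kind of indicator-based scoring. The feature names, weights, and data below are invented for illustration; they are not Google's actual signals or formula.

```python
# Purely illustrative scoring: the indicators and weights are invented,
# loosely modeled on the signals described above.

def rank_score(page):
    """Combine a few hand-picked indicators into a single relevance score."""
    return (
        0.5 * page["inbound_links"]         # sites linking to the page
        + 0.3 * page["linker_reputation"]   # popularity/selectivity of linkers
        + 0.1 * page["site_age_years"]      # how long the site has existed
        + 0.1 * page["keyword_prominence"]  # how prominently keywords appear
    )

pages = [
    {"url": "a.example", "inbound_links": 120, "linker_reputation": 8,
     "site_age_years": 10, "keyword_prominence": 3},
    {"url": "b.example", "inbound_links": 40, "linker_reputation": 2,
     "site_age_years": 1, "keyword_prominence": 9},
]

# Listing order is simply the scores sorted from highest to lowest; position
# is everything, given how steeply clicks drop off below the top result.
for page in sorted(pages, key=rank_score, reverse=True):
    print(page["url"], round(rank_score(page), 1))
```

The point of the sketch is that the ranking is nothing more than a weighted sum of whatever indicators happen to be available, which is what gives such formulas both their power and their blind spots.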
Little wonder that policy technocrats are drawn to the algorithm as a way to improve governmental performance. In the education world, well-tuned algorithms promise to tell us which students need what kind of interventions, which schools are good candidates for closure, which teachers should get tenure, and how much a teacher should be paid. I have come to think of this as policy by algorithm.
Policy by algorithm relies on statistical formulas that sift through existing indicators to generate a predicted outcome score, then assign automatic rewards or penalties to individuals or organizations that fail to meet the expected targets. In education, this can work by penalizing teachers whose value-added scores leave them in the bottom 10 or 20 percent over a one-, two-, or three-year period.
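As a minimal sketch of that mechanical logic, consider a hypothetical rule that averages each teacher's value-added scores over a multi-year window and flags whoever lands in the bottom tenth. The names, scores, and cutoff below are all invented.

```python
# Hypothetical automatic-penalty rule: average each teacher's value-added
# scores over a multi-year window, then flag whoever lands in the bottom
# 10 percent. All names, scores, and the cutoff are invented.

def flag_bottom_decile(scores_by_teacher):
    """Return teachers whose multi-year average falls in the bottom 10%."""
    averages = {t: sum(s) / len(s) for t, s in scores_by_teacher.items()}
    n_flagged = max(1, len(averages) // 10)      # bottom tenth, at least one
    ranked = sorted(averages, key=averages.get)  # lowest averages first
    return ranked[:n_flagged]

scores = {
    "teacher_a": [0.2, -0.1, 0.3],   # three years of value-added scores
    "teacher_b": [-0.8, -0.5, -0.6],
    "teacher_c": [0.9, 1.1, 0.7],
}
print(flag_bottom_decile(scores))  # ['teacher_b']
```

Note how little the rule sees: scores in, flags out, with nothing about how those scores came to be.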
Education is not the only sector where policy by algorithm is currently in vogue.
The Obama administration in May announced a new plan to hold hospitals more accountable for outcomes involving Medicare patients. The formula to be applied in judging their efficiency would look not only at the cost of the services while the patient is hospitalized, but also at the cost of services performed by doctors and other health care providers in the 90 days after the patient leaves the hospital. Under the plan, a hospital that performed, say, a hip replacement would get a lower reimbursement rate if the patient later needed follow-up care for an infection, even if the infection developed weeks after the original operation.
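A rough sketch of the arithmetic such a formula implies: the 90-day window comes from the plan as described, while the dollar amounts, cost target, and penalty rate below are invented for illustration.

```python
# Illustrative bundled-cost arithmetic: the inpatient cost plus any provider
# costs within 90 days of discharge count against the episode. The dollar
# figures, target, and penalty rule are invented; only the 90-day window
# comes from the plan described above.

from datetime import date

discharge = date(2011, 5, 1)
WINDOW_DAYS = 90

inpatient_cost = 14_000
followups = [
    (date(2011, 5, 20), 1_200),  # infection treated 19 days after discharge
    (date(2011, 9, 15), 900),    # later visit, outside the 90-day window
]

episode_cost = inpatient_cost + sum(
    cost for when, cost in followups
    if (when - discharge).days <= WINDOW_DAYS
)

# Hypothetical penalty: reimbursement shrinks if the episode runs over target.
target = 15_000
rate = 1.0 if episode_cost <= target else 0.97
print(episode_cost, rate)  # 15200 0.97
```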
But the high promise of policy by algorithm mutates into cause for concern when data are thin, algorithms theory-bare and untested, and results tied to laws that enshrine automatic rewards and penalties. Current applications of value-added models for assessing teachers, for example, treat standardized tests in reading and math as the outcomes of import primarily because those are the indicators on hand. A signature element of many examples of contemporary policy by algorithm, moreover, is their relative indifference to the specific processes that link interventions to outcomes. There is much we do not know about how, and how much, individual teachers contribute to their students' long-term development, but legislators convince themselves that this ignorance does not matter as long as the algorithm spits out a standard with a satisfying gleam of technological precision.
Google makes up for what it might lack in theory and process-knowledge by continually tweaking its formula. The company makes about 500 changes a year, partly in response to feedback from organizations complaining that they have been unjustly "demoted," but largely out of a continuing need to stay ahead of others who keep trying to game the system in ways that benefit their companies or clients. State laws are unlikely to be so responsive and agile.
Both data and algorithms should be an important part of the process of making and implementing education policy, but they need to be employed as inputs into reasoned judgments that take other important factors into account. The last thing we need is accountability policies that undermine education as a profession or erode the elements of community and teamwork that mark and make good schools. And when law and policy outrun knowledge, the results are likely to be unanticipated, paradoxical, and occasionally perverse.
This blog post has been shared by permission from the author.
Readers wishing to comment on the content are encouraged to do so via the link to the original post.
The views expressed by the blogger are not necessarily those of NEPC.