School Finance 101: When School Finance Research Died & Why It Matters #MSFRGA
As I continue work on my forthcoming book on school finance, I find myself reflecting on the state of the field: where we stand, how we got here, and, perhaps most importantly, how we should move forward.
I began studying school finance around 1994 and completed my doctoral work in 1997 at Teachers College, Columbia University. In 1995, while a doctoral student, and still largely unaware of the breadth of literature in my field, I received an invitation (actually, I went in someone’s place) to attend a symposium sponsored by the New York State Regents. The symposium was on the topic of cost-effectiveness in education and included research papers presented by academic researchers from universities across the state. The papers were released as a report to the New York State Board of Regents in March of 1996.[i] These were state-supported research studies, including research advancing the application of conceptual models and statistical methods for studying the cost-effectiveness and efficiency of local public school districts. The studies were at the highest levels of empirical and conceptual rigor and were conducted by leading researchers in the field (something I totally failed to appreciate at the time, as a cocky and naïve doctoral student). They were part of an ongoing research consortium among scholars from Cornell, Syracuse, SUNY Albany and NYU, supported by the Regents of the State of New York. Over time, research emanating from this group would break ground in analyses of equity,[ii] efficiency,[iii] resource allocation and the use of state longitudinal data systems to study teacher labor markets.
Similar efforts were under way in the State of Texas, where state agencies and academic researchers were collaborating to better understand variations in labor costs in order to inform recalibration of the state school finance formula. In the early 2000s, the Texas legislature established the Texas School Finance Project, in which I was involved along with researchers from Texas A&M University.[iv] State-supported efforts in Texas, like those in New York, significantly advanced our knowledge of education costs, cost analysis, cost variation and efficiency through the production of numerous prominent and frequently cited reports.[v]
Research emanating from these states also found its way into national symposia sponsored by the U.S. Department of Education and was released in two recurring report series – Developments in School Finance and Selected Papers in School Finance. These series, which ran mainly between 1995 and 2005, have long since been discontinued. Like the state efforts in New York and Texas, these reports tackled with empirical rigor such issues as the development of national indices to capture variation in teacher wages from region to region, labor market to labor market and district to district,[vi] the application of statistical modeling techniques to estimate the costs of achieving common outcome goals,[vii] and statistical tests of the reliability and validity of estimates of school performance and efficiency.[viii]
These were the very types of analyses needed to inform state school finance policies and to advance the art and science of evaluating educational reforms for their potential to improve equity, productivity and efficiency. Yet these efforts largely disappeared over the next decade. More disconcerting, they were replaced by far less rigorous, often purely speculative policy papers, free of any substantive empirical analysis and devoid of conceptual frameworks.
This shift was largely brought about under the leadership of Secretary of Education Arne Duncan. As Kevin Welner of the University of Colorado and I explained, first in a report for the National Education Policy Center and subsequently in shorter form in the journal Educational Researcher, Secretary Duncan had begun to pay lip service to improving educational productivity and efficiency, but accompanied that lip service with wholly insufficient resources. Specifically, we wrote:
“the materials provided on the Department’s website as guiding resources present poorly supported policy advisement. The materials listed and recommendations expressed within those materials repeatedly fail to provide substantive analyses of the cost effectiveness or efficiency of public schools, of practices within public schools, of broader policies pertaining to public schools, or of resource allocation strategies.” [ix]
Among other issues, the materials provided on the website failed to acknowledge even the existence of the relevant conceptual frameworks and rigorous empirical methods that had risen to prominence in state-supported and federally documented research in the years prior.
Not surprisingly, a similar shift occurred in the states. In 2011, John King, then New York State Education Commissioner, a close ally of (and eventual successor to) Arne Duncan, took a “different” approach to the annual Regents symposium. Prominent researchers were invited to sit in the audience and be subjected to presentations by the authors of many of the materials from Duncan’s productivity website, including, but not limited to, the baseless graph I have presented in several previous posts. Here it is again (as much as it pains me):
Researchers in attendance that day forwarded to me their critique of that graph:
Marguerite Roza made claims that the service delivery programs she discussed could increase the productivity of educational spending several fold and illustrated this point with a graph. It is hard to know where to start in responding to this claim. First, she did not explain how the productivity of current spending was measured, so it is difficult to assess the claims she made about that. However, we are familiar with the literature on educational productivity and do not believe the productivity of current service delivery models can be estimated as precisely as she claimed, and suspect that the basis for the figures presented is questionable. Concerning the productivity of the innovations she was advocating, there are several problems. It is not at all clear what the innovations are that she claims would create such large productivity improvements. Nor is it clear what research those productivity estimates are based on. During the Q&A session, she indicated that these innovations were less than a year old. How can the productivity gain produced by service models used in a very small number of sites for a very short time be determined? It can’t. It is not an overstatement to say that the claims about productivity improvement were simply made up. [emphasis added]
Below is a second, equally problematic graph that was presented in that same symposium, also later used by John King in presentations to school administrators across New York. Here, the presenter (using a graph from an organization called Educational Resource Strategies) argues that only a very small share of teacher salaries actually goes into “responsibility for results.” Thus, all of the salary differences above base salaries – and by extension much of school funding in total – is squandered inefficiently and can and should be reallocated, presumably toward “responsibility for results,” whatever that may mean or however that should be measured.
Researchers in attendance that day forwarded to me this critique:
Dr. Fisher [the presenter of this graph] made the claim that 30 to 40 percent of school district dollars were spent on resources that had no relation to student achievement. Again, the basis for this claim was not presented, so it is difficult to assess. However, the little explanation that was presented suggests that the analysis suffered from serious conceptual errors. Arguments can be made for a flatter teaching salary schedule; however, those arguments are more complicated than the speaker acknowledged. In particular, the suggestion that any spending on teacher salaries above the starting salary is unproductive is, well, wrong.
It is likely that most of the people in the audience did not take these claims very seriously. Nonetheless, it was disappointing to see such claims being made by researchers in a forum like this one.
I raise these issues because:
- It is vital that we return to the application of relevant frameworks and rigorous methods for studying productivity and efficiency; and
- It is vital that the U.S. Department of Education and State Education Agencies play a role in supporting this research and enabling the highest quality research and data to inform policy – specifically, state school finance policy and the federal role.
To reiterate a take-home point of many previous posts: equitable and adequate financing are prerequisite conditions for our education systems, regardless of how we choose to deliver those systems. Delivery models may alter what is equitable or adequate. But without rigorous and relevant analyses, we can never know how or to what extent.
NOTES
[i] Berne, Robert. Study on Cost-effectiveness in Education. University of the State of New York, State Education Department, 1996.
[ii] University of the State of New York. Board of Regents. Supporting Cost-effective School Reform: New York State Board of Regents 1996-97 Detailed Proposal On School Aid. Albany, N.Y.: University of the State of New York, State Education Dept., 1996.
[iii] Duncombe, William, and Jerry Miner. “Productive Efficiency and Cost-Effectiveness: Different Approaches to Measuring School Performance.” Study on Cost-Effectiveness in Education: Final Report, ed. R. Berne (Albany: State Education Department, New York State Board of Regents, 1996) (1996): 141-156.
[iv] http://bush.tamu.edu/research/faculty/TXSchoolFinance/
[v] Alexander, Celeste D., et al. “A Study of Uncontrollable Variations in the Costs of Texas Public Education.” Summary report prepared for the 77th Texas Legislature. Austin: Charles A. Dana Center, University of Texas at Austin, 2000. Available at http://www.utdanacenter.org/research/reports/ceireport.pdf (last accessed 3/4/04).
Taylor, Lori L., et al. “Updating the Texas cost of education index.” Journal of Education Finance 28.2 (2002): 261-284.
Taylor, Lori L., and Harrison Keller. “Competing perspectives on the cost of education.” Developments in School Finance: 2002 (2003): 111-26.
Baker, Bruce D., Lori Taylor, and Arnold Vedlitz. “Measuring educational adequacy in public schools.” Report prepared for the Texas Legislature Joint Committee on Public School Finance, The Texas School Finance Project (2004). Retrieved August 17, 2006.
[vi] Chambers, Jay G. “Public school teacher cost differences across the United States: Introduction to a teacher cost index (TCI).” Developments in School Finance (1995): 19-32.
[vii] Reschovsky, Andrew, and Jennifer Imazeki. “The development of school finance formulas to guarantee the provision of adequate education to low-income students.” Developments in School Finance (1997): 121-48.
[viii] Bifulco, Robert, and William Duncombe. “Evaluating School Performance: Are We Ready for Prime Time?” Developments in School Finance, 1999–2000 (2002): 9.
Rubenstein, Ross, et al. “Distinguishing good schools from bad in principle and practice: A comparison of four methods.” Developments in School Finance: 2003 (2004): 53.
Stiefel, Leanna, Hella Bel Hadj Amor, and Amy Ellen Schwartz. “Best schools, worst schools, and school efficiency: A reconciliation and assessment of alternative classification systems.” Developments in School Finance: 2004 81 (2005).
[ix] Baker, Bruce, and Kevin G. Welner. “Evidence and rigor: Scrutinizing the rhetorical embrace of evidence-based decision making.” Educational Researcher 41.3 (2012): 98-101.
This blog post has been shared by permission from the author.
Readers wishing to comment on the content are encouraged to do so via the link to the original post.
The views expressed by the blogger are not necessarily those of NEPC.