NEPC Resources on International and Comparative Education
Five Myths About Education
NEPC Review: Beyond the Mirage: How Pragmatic Stewardship Could Transform Learning Outcomes in International Education Systems (June 2019)
A report, Beyond the Mirage: How Pragmatic Stewardship Could Transform Learning Outcomes in International Education Systems, prescribes a shift in the leadership role of education ministers – from providers and guarantors of education to pragmatic stewards of education systems. Focusing on the organization of education sectors in the Global South, the report contends that this shift will address the need for higher quality education, rather than simply providing access to education. The “pragmatic stewardship” advocated in the report involves strategies that increasingly incorporate private actors. Accordingly, the report draws on four case studies of different types of private-sector involvement in education as examples of a broader shift by education ministers. However, each case contains limitations – some discussed, others not – that undermine its suitability as a successful example of divesting public education systems of their primary role as guarantors and providers of education. While the report claims to be “non-ideological” and “beyond the mirage” of the education privatization debate, the funders of the report (no publisher is listed) have a material stake in a main program cited as evidence, raising concerns about conflicts of interest. The use of questionable evidence and the conflicts of interest combine to render the report’s recommendations unsubstantiated.
NEPC Review: SchoolGrades.org (Manhattan Institute for Policy Research, September 2015)
The Manhattan Institute's SchoolGrades.org evaluates U.S. schools and assigns them grades using reading and math test scores, comparing schools across their respective states and to other countries. The site apparently uses a four-step process: (1) average two state test scores; (2) “norm” these results to the NAEP exam; (3) adjust this national “normed” measure using free and reduced price lunch data to account for SES; and (4) “norm” these results to the international PISA exam. The claim is that this process allows a parent to compare a local school to schools in their state and to schools in other countries such as South Korea and Lithuania. But the unsubstantiated norming chain is too tenuous, and the results too far extrapolated, to be of any value. The website does not explain how international scores are “normed” (equated) to the national standard the site developed, how letter grades were determined, or how free and reduced price lunch counts are used to make socioeconomic adjustments. While there is considerable equating research available, none is cited. Further, the reliance on aggregated test scores is far too narrow a base to serve as a useful evaluation of schools. Thus, the website’s approach to evaluating schools fails on technical grounds and, just as importantly, it fails to understand and consider the broader purposes of education in a democratic society.
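To make the review's critique concrete, the four-step norming chain can be sketched as a toy calculation. Everything below is hypothetical: the linear-equating function, all scale parameters, and the SES adjustment are illustrative assumptions, since the website discloses none of its actual methods.

```python
# Toy sketch of the four-step norming chain; all numbers and the
# linear-equating assumption are hypothetical, not SchoolGrades.org's
# actual (undisclosed) procedure.

def linear_equate(score, src_mean, src_sd, dst_mean, dst_sd):
    """Map a score from one scale to another by matching mean and SD."""
    return dst_mean + (score - src_mean) / src_sd * dst_sd

# Step 1: average two state test scores (assumed 0-100 state scale).
state_score = (72 + 78) / 2  # 75.0

# Step 2: "norm" the state result to the NAEP scale (assumed parameters).
naep_score = linear_equate(state_score, src_mean=70, src_sd=10,
                           dst_mean=280, dst_sd=35)  # 297.5

# Step 3: adjust for SES using the school's free/reduced-price lunch
# rate (an assumed linear bonus relative to a 50% baseline).
frl_rate = 0.65
adjusted = naep_score + (frl_rate - 0.50) * 40  # 303.5

# Step 4: "norm" the adjusted result to the PISA scale (assumed parameters).
pisa_score = linear_equate(adjusted, src_mean=280, src_sd=35,
                           dst_mean=500, dst_sd=90)
print(round(pisa_score, 1))  # → 560.4
```

Each arrow in the chain adds equating error, and the SES adjustment and letter-grade cutoffs are further undisclosed choices – which is why the review calls the chain too tenuous to support cross-country comparisons.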
Divining the Meaning of the Test Scores
Review of The Efficiency Index
A new report that scores and ranks national education systems based on their efficiency has been receiving considerable media attention on both sides of the Atlantic. Efficiency is measured based on test scores, and resource use is analyzed in terms of teacher wages and pupil-teacher ratios. Looking across the 30 countries, the model predicts that, in order to get a 5% increase in PISA scores, teacher wages would have to go up by 14% or class sizes would have to go down by 13 students per class. But the model's “optimal” wages and class sizes for any given country can demand implausible increases or decreases in either factor. For Switzerland, for example, reaching the optimal teacher salary would require wages to be cut by almost half; for Indonesia, wages would have to be increased more than three-fold. For four countries, the optimal class size is estimated at fewer than two students per teacher. These extreme findings are due, in large part, to weaknesses in each of the study’s three key elements: the output measure is questionable, the input measures are unclear, and the econometric method by which they are correlated does not have a straightforward economic interpretation. The report may satisfy an apparent keenness for reports that rank countries, and especially for reports that castigate low-ranking countries. But it does not advance our understanding of how to make the provision of education more efficient.
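The asserted trade-off (a 14% wage increase for a 5% score gain) can be restated as an implied elasticity under an assumed constant-elasticity relationship between wages and scores. The functional form and arithmetic below are illustrative assumptions, not the report's actual econometric specification; they only show how modest score gains translate into large input changes under such a model.

```python
import math

# Illustrative only: assume score ∝ wage^a (constant elasticity).
# The report's own econometric method is different and, as the review
# notes, lacks a straightforward economic interpretation.

# Implied wage elasticity if a 14% wage rise buys a 5% score gain:
a = math.log(1.05) / math.log(1.14)
print(round(a, 2))  # → 0.37

# At that elasticity, even a 10% score gain would require a wage
# multiplier of:
wage_multiplier = 1.10 ** (1 / a)
print(round(wage_multiplier, 2))  # → 1.29, i.e., roughly a 29% raise
```

With so small an elasticity, pushing any country toward a score "optimum" quickly demands the extreme wage cuts and increases the review highlights for Switzerland and Indonesia.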
Data-Driven Improvement and Accountability
Review of Middle Class or Middle of the Pack
In Middle Class or Middle of the Pack: What Can We Learn When Benchmarking U.S. Schools Against the World’s Best?, America Achieves draws attention to what the group describes as the relatively low achievement of U.S. middle class students on the mathematics and science portions of the 2009 Program for International Student Assessment (PISA) test and, based on this “wake up call to America’s middle class,” urges U.S. high schools to participate in a new OECD test so schools can compare their 15-year-old students’ performance with the average performance of 15-year-old students in other countries. The message America Achieves promotes is that such comparisons are valid and can help improve high school performance. The report does not provide evidence supporting this message; nor do PISA reports or the broader literature on school reform. Overall, the report is not grounded in research but rather is an assertion that measurement, by itself, is an effective reform tool. The report makes no attempt to reveal how this particular test would be connected to specific curricula, strategies for teaching mathematics and science, or teacher professional development strategies. Thus, the report is of no utility to policymakers.
NEPC Review: The School Staffing Surge: Decades of Employment Growth in America’s Public Schools, Part II (Friedman Foundation for Educational Choice, February 2013)
The School Staffing Surge, Part II is a companion report to a 2012 report called The School Staffing Surge. The earlier report argued that between 1992 and 2009, the number of full-time-equivalent school employees grew 2.3 times faster than the increase in students over the same period. It also claimed that despite these staffing increases, there was no progress on test scores or dropout reductions. The new report disaggregates the trends in K-12 hiring for individual states and responds to some of the criticisms leveled at the original report. Yet this new report, like the original, fails to acknowledge that achievement scores and dropout rates have steadily improved. What it does instead is present ratios comparing the number of administrators and other non-teaching staff to the number of teachers or students, none of which has been shown to bear any meaningful relationship to student achievement. Neither the old report nor this new one explores the causes and consequences of employment growth. When a snapshot of hiring numbers is not benchmarked against the needs and realities of each state, it cannot illuminate the usefulness or wastefulness of hiring. The new companion report, much like the original one, is devoid of any important policy implications.
Review of The School Staffing Surge
The School Staffing Surge finds that between 1992 and 2009, the number of full-time-equivalent school employees grew 2.3 times faster than the increase in students over the same period. The report claims that despite these staffing and related spending increases, there has been no progress on test scores or dropout reductions. The solution, therefore, is school choice. However, the report fails to adequately address the fact that achievement scores and dropout rates have actually improved. If the report had explored the causes and consequences of the faster employment growth, it could have made an important contribution; it does not do so. Unless we know the duties and responsibilities of the new employees, any assertion about the effects of hiring them is merely speculative. Further, the report’s recommendations are problematic in their uncritical presentation of school choice as a solution to financial and staffing increases. The report presents no evidence that school choice – whose record on improving educational outcomes is mixed – will resolve this “problem.” The report's advocacy of private school vouchers and school choice seems even odder given that private schools have smaller class sizes and charter schools appear to allocate a substantially greater portion of their spending to administrative costs – two of the main practices attacked in the report.
NEPC Review: "Cross-Country Evidence on Teacher Performance" and "Merit Pay International" (February 2011)
The primary claim of this Harvard Program on Education Policy and Governance report and the abridged Education Next version is that nations “that pay teachers on their performance score higher on PISA tests.” After statistically controlling for several variables, the author concludes that nations with some form of merit pay system have, on average, higher reading and math scores on this international test of 15-year-old students. Although the author lists numerous caveats, his broad conclusions do not heed these cautions. The fundamental differences among countries in the types of performance pay systems are not properly considered; nations are simply lumped together as having or not having a performance pay plan. Moreover, neither the length of time each program had been in place nor the intensity of its implementation is addressed, which further argues against drawing lessons from this study. The small sample size of 28 observations requires extreme caution in interpretation: the inclusion or exclusion of a single country results in large shifts in the size of the reported relationships. That is, the numbers become unreliable and invalid. As with any correlational study, attributing causality is problematic; the differences among nations could be due to any number of factors. Finally, the regression-based analyses used to support the performance pay conclusion do not properly account for the fact that the background variables in these analyses can vary in their relationships with student scores and are defined differently across the countries under study. Therefore, drawing policy conclusions about teacher performance pay on the basis of this analysis is not warranted.
Suggested Citation: von Davier, M. (2011). Review of “Cross-Country Evidence on Teacher Performance Pay.” Boulder, CO: National Education Policy Center. Retrieved [date] from http://nepc.colorado.edu/thinktank/review-pisa-performance-pay
NEPC Review: U.S. Math Performance in Global Perspective: How Well Does Each State Do at Producing High-Achieving Students? (November 2010)
A report from Harvard’s Program on Education Policy and Governance and the journal Education Next finds that only 6% of U.S. students in the high school graduating class of 2009 achieved at an advanced level in mathematics compared with 28% of Taiwanese students and more than 20% of students in Hong Kong, Korea, and Finland. Overall, the United States ranked behind most of its industrialized competitors. The report compares the mathematics performance of high achievers not only across countries but also across the 50 U.S. states and 10 urban districts. Most states and cities ranked closer to developing countries than to developed countries. However, the study has three noteworthy limitations: (a) internationally, students were sampled by age and not by grade, and countries varied greatly on the proportion of the student cohort included in the compared grades; in fact, only about 70% of the U.S. sample would have been in the graduating class of 2009, which makes the comparisons unreliable; (b) the misleading practice of reporting rankings of groups of high-achieving students hides the clustering of scores, inaccurately exaggerates small differences, and increases the possibility of error in measuring differences; and (c) the different tests used in the study measured different domains of mathematics proficiency, and the international measure was limited because of relatively few test items. The study’s deceptive comparison of high achievers on one test with high achievers on another says nothing useful about the class of 2009 and offers essentially no assistance to U.S. educators seeking to improve students’ performance in mathematics.
Suggested Citation: Kilpatrick, J. (2011). Review of “U.S. Math Performance in Global Perspective: How Well Does Each State Do at Producing High-Achieving Students?” Boulder, CO: National Education Policy Center. Retrieved [date] from http://nepc.colorado.edu/thinktank/review-us-math
Update: Paul Peterson, one of the report's authors, has posted a response to the review. The response can be found at: http://educationnext.org/no-matter-how-hard-you-try-you-cannot-deny-u-s…
Please also download and read Jeremy Kilpatrick's reply to that response, which is posted at the bottom of this page.
NEPC Review: 2010 State School Report Card (October 2010)
This review examines the Heartland Institute's report ranking states on student achievement, education expenditures, and adherence to learning standards, as well as an overall ranking based on an average of the first three. The rankings are based on indices created by the report's authors, and the report highlights the top- and lowest-performing states for each of the indices. The report assigns letter grades to each of the states (plus DC) using a forced distribution: 10 states each are assigned A's, B's, C's, and D's, and the remaining 11 must get F's. The report explains how the indices were devised but does not cite any research or provide rationales to support the methodological approach used in their creation. The report acknowledges that it does not control for state variations in demographics or other factors. It nevertheless presents conclusions concerning quality, and it recommends school choice as a remedy. The report's policy recommendations are undermined by the flaws in its methodological approaches, its limited and partisan selection of research references, and a clear disconnect between the recommendations and the report's findings.
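The forced distribution described above can be sketched directly. The state names below are placeholders; only the quota scheme (10 states each for A through D, 11 for F, across the 51 jurisdictions) comes from the report's description.

```python
# Sketch of the forced grade distribution: jurisdictions are ranked on
# the index and grades are assigned by position alone, so 11 F's are
# handed out regardless of absolute performance.

def forced_grades(ranked_states):
    """Assign A-D to successive blocks of 10 and F to the final 11."""
    quotas = [("A", 10), ("B", 10), ("C", 10), ("D", 10), ("F", 11)]
    grades, i = {}, 0
    for grade, n in quotas:
        for state in ranked_states[i:i + n]:
            grades[state] = grade
        i += n
    return grades

# Placeholder names standing in for the 50 states plus DC.
ranked = [f"State{n:02d}" for n in range(1, 52)]
grades = forced_grades(ranked)
print(grades["State01"], grades["State41"])  # → A F
```

Because grades depend only on rank position, the 41st-place jurisdiction draws an F even if its absolute scores are strong, and the cutoffs shift whenever any index component is re-weighted.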
NEPC Review: Education Olympics 2008: The Games in Review (Thomas B. Fordham Institute, August 2008)
This review examines the recently released Thomas B. Fordham Institute report, Education Olympics: The Games in Review. Published just after the completion of the 2008 Beijing Summer Olympics, Education Olympics strategically parallels the international competition by awarding gold, silver and bronze medals to top-performing countries based on indicators including scores from international assessments in reading, mathematics, and science. The report contrasts American students’ unimpressive performance on international assessments with the United States’ success in the Olympics. However, the report fails to substantiate its primary claim: that American students’ relatively low rankings on these tests will weaken the U.S. economy and jeopardize its future global standing. It also fails to substantiate secondary claims, set forth in various sidebars throughout. The report recognizes its numerous methodological weaknesses, but it nonetheless bases its conclusions primarily on findings produced by this flawed process. In addition, the research meant to bolster the report’s position is very limited. Ultimately, its conclusions lack a basis in argument or evidence, and its attempt to link test scores to the nation’s economic standing fails.
Suggested Citation: Fierros, E.G. & Kornhaber, M.L. (2008). Review of “Education Olympics 2008: The Games in Review.” Boulder and Tempe: Education and the Public Interest Center & Education Policy Research Unit. Retrieved [date] from http://epicpolicy.org/thinktank/review-education-olympics
NEPC Review: Markets vs. Monopolies in Education: A Global Review of the Evidence (September 2008)
The Cato Institute report examines international evidence on outcomes from public and private education. The paper makes three key claims: private schools outperform public schools in “the overwhelming majority of cases”; private schools’ superiority is greatest in countries where the education system has more market features; and “the implications for U.S. education policy are profound.” Each claim is problematic. The first is based on an atypical method of summarizing academic literature and excludes two important research studies. The claim also fails to adequately account for selection bias arising, for instance, when academically focused parents choose private schools for their children. The second claim oversimplifies a very complex issue, namely the optimal application of market forces to improve education. And the third claim is dubious as well: even if the report’s first two claims are legitimate (based on international evidence), there may be no practical implications for U.S. education policy.
Suggested Citation: Belfield, C.R. (2008). Review of "Markets vs. Monopolies in Education. A Global Review of the Evidence.” Boulder and Tempe: Education and the Public Interest Center & Education Policy Research Unit. Retrieved [date] from http://epicpolicy.org/thinktank/review-markets-vs-monopolies