NEPC Resources on Reading Instruction
NEPC Review: Teacher Prep Review: Strengthening Elementary Reading Instruction (National Council on Teacher Quality (NCTQ), June 2023)
An NCTQ report evaluates 693 of the 1,146 elementary teacher preparation programs in the U.S., claiming to identify how well candidates are prepared to teach elementary reading based on NCTQ’s Reading Foundations standards for scientifically based reading practices. While teacher preparation for initial reading instruction is a high priority as states increasingly adopt new reading legislation grounded in the “science of reading,” this report repeats patterns identified in external reviews of NCTQ reports over the past two decades. For instance, it again relies on flawed research methodology grounded in the selective use of evidence to promote NCTQ’s narrow education reform agenda. Policymakers and the media are strongly cautioned to view this report as narrowly constructed reform advocacy rather than a valid or scientific analysis of the quality of reading content in elementary teacher preparation programs.
Suggested Citation: Thomas, P.L. (2023, September). NEPC review: Teacher prep review: Strengthening elementary reading instruction. Boulder, CO: National Education Policy Center. Retrieved [date], from https://nepc.colorado.edu/review/teacher-prep
NEPC’s Most Popular Publications of the Year
NEPC Review: SchoolGrades.org (Manhattan Institute for Policy Research, September 2015)
The Manhattan Institute's SchoolGrades.org uses reading and math test scores to evaluate and assign grades to U.S. schools, comparing schools both across their respective states and to schools in other countries. The site apparently uses a four-step process: (1) average two state test scores; (2) “norm” these results to the NAEP exam; (3) adjust this national “normed” measure using free and reduced-price lunch data to account for socioeconomic status; and (4) “norm” these results to the international PISA exam. The claim is that this process allows a parent to compare a local school to schools in their state and in other countries such as South Korea and Lithuania. But the unsubstantiated norming chain is too tenuous, and the results too far extrapolated, to be of any use. The website does not explain how international scores are “normed” (equated) to the national standard the developers created, how letter grades are determined, or how free and reduced-price lunch counts are used to make socioeconomic adjustments. While there is considerable equating research available, none is cited. Further, the reliance on aggregated test scores is far too narrow a base to serve as a useful evaluation of schools. Thus, the website’s approach to evaluating schools fails on technical grounds and, just as importantly, it fails to understand and consider the broader purposes of education in a democratic society.
NEPC Review: Urban Charter School Study Report on 41 Regions 2015 (Center for Research on Education Outcomes (CREDO), March 2015)
Following up on a previous study, researchers sought to investigate whether the effect on reading and math scores of being in a charter school differed in urban areas compared with other areas, and to explore what might contribute to such differences. Overall, the study finds a small positive effect of charter school attendance on both math and reading scores and finds that this effect is slightly stronger in urban environments. There are significant reasons to exercise caution, however. The study’s “virtual twin” technique is insufficiently documented, and it remains puzzling why the researchers used this approach rather than the more widely accepted method of propensity score matching. Consequently, the study may not adequately control for the possibility that families selecting a charter school differ substantially from those who do not. Other choices in the analysis and reporting, such as the apparent systematic exclusion of many lower-scoring students from the analyses, the estimation of growth, and the use of “days of learning” as a metric, are also insufficiently justified. Even setting aside such concerns over analytic methods, the actual effect sizes reported are very small, explaining well under a tenth of one percent of the variance in test scores. To call such an effect “substantial” strains credulity.
NEPC Review: Whole Language High Jinks: How to Tell When 'Scientifically-Based Reading Instruction' Isn't (Thomas B. Fordham Institute, January 2007)
In Whole Language High Jinks: How to Tell When 'Scientifically-Based Reading Instruction' Isn't, Louisa Moats contends that she provides "the necessary tools to distinguish those [programs] that truly are scientifically based... from those that merely pay lip service to science" (p. 10). This review finds that Moats exaggerates the findings of the National Reading Panel (NRP), especially the effects of systematic phonics on reading achievement. She also ignores research completed since the NRP report was issued seven years ago. Perhaps most disturbingly, she touts primarily commercial curriculum products distributed by her employer – products that have far fewer published studies of effectiveness than the products and methods she disparages.
These flaws pervade the report's subsequent discussion of what "scientifically based reading instruction" should look like. In the end, the Fordham report works more effectively as promotional material for products and services offered by Moats's employer, Sopris West, than as a reliable guide to effective reading instruction.
Suggested Citation: Allington, R. (2007). Review of "Whole Language High Jinks: How to Tell When 'Scientifically-Based Reading Instruction' Isn't." Boulder and Tempe: Education and the Public Interest Center & Education Policy Research Unit. Retrieved [date], from http://epicpolicy.org/thinktank/review-whole-language-high-jinks-how-tell-when-scientifically-based-reading-instruction-is