Students are calling for colleges and universities to prioritize equity in the admissions process and adopt test-optional policies for all applicants.
NEPC Resources on High-Stakes Testing and Evaluation
Four things to know about grading, accountability measures, NAEP, and equity.
In this Q&A, NEPC Fellow Edward Garcia Fierros sheds light on the use and abuse of 504 plans, which played a prominent role in this year’s college admissions scandals.
NEPC Fellow John Yun responds to a Mackinac Center critique of his review of their report, The Michigan Context and Performance Report Card: High Schools 2018.
A Consumer’s Guide to Testing under the Every Student Succeeds Act (ESSA): What Can the Common Core and Other ESSA Assessments Tell Us?
Between May and August of 2018, the federal government approved 44 proposals submitted by state departments of education to meet testing and accountability requirements under the Every Student Succeeds Act (ESSA).
A single test is the sole criterion for determining admissions to New York City’s highly ranked specialized high schools.
State-Level Assessments and Teacher Evaluation Systems after the Passage of the Every Student Succeeds Act: Some Steps in the Right Direction
Federally mandated standardized testing (i.e., in core subject areas and certain grade levels), as an element of educational accountability, began in 2002 with the No Child Left Behind Act.
Education Interview of the Month: Greg Smith Interviews Alyssa Hadley Dunn on Viral Teacher Resignation Letters
Lewis and Clark College Emeritus Professor of Education Gregory A. Smith interviews Alyssa Hadley Dunn on viral teacher resignation letters.
This brief investigates whether closing schools and transferring students for the purpose of remedying low performance is an option educational decision makers should pursue.
The Every Student Succeeds Act (ESSA) replaced the No Child Left Behind Act with great fanfare and enthusiasm.
Research-based policies that provide sustained support can transform struggling schools into effective schools.
NEPC Review: Lessons From State Performance on NAEP: Why Some High-Poverty Students Score Better Than Others
“When I use a word,” Humpty Dumpty said in rather a scornful tone, “it means just what I choose it to mean – nothing more nor less.”
“The question is,” said Alice, “whether you can make words mean so many different things.”
A Policy Memo by NEPC Director Kevin Welner
Kevin Welner provides a commentary on this morning’s release of results from the National Assessment of Educational Progress (NAEP). The lower grades on the Nation’s Report Card are not good news for anyone, but they are particularly bad news for those who have been vigorously advocating for “no excuses” approaches — standards-based testing and accountability policies like No Child Left Behind.
Kevin Welner, (303) 492-8370, firstname.lastname@example.org
Reauthorization of the Elementary and Secondary Education Act: Time to Move Beyond Test-Focused Policies
In this Policy Memo, Kevin Welner and William Mathis discuss the broad research consensus that standardized tests are ineffective and even counterproductive when used to drive educational reform. Yet the debates in Washington over the reauthorization of the Elementary and Secondary Education Act largely ignore the harm and misdirection of these test-focused reforms. As a result, the proposals now on the table simply gild a demonstrably ineffective strategy, while crowding out policies with proven effectiveness.
The U.S. test-based accountability model holds schools and teachers accountable for student outcomes with little attention to school improvement processes. The authors examine an approach used in several European countries that entails more school-centered accountability efforts, such as school self-evaluation followed by inspection (SSE/I), to assess school quality.
This brief examines policies and practices concerning the use of data to inform school improvement strategies and to provide information for accountability. This twin-pronged movement, termed Data-Driven Improvement and Accountability (DDIA), can lead either to greater quality, equity, and integrity, or to deterioration of services and distraction from core purposes. The question addressed by this brief is what factors and forces can lead DDIA to generate more positive and fewer negative outcomes with respect to both improvement and accountability.
This piece was originally published in the peer-reviewed journal Teachers College Record. It explains that when tests are used as drivers of policy, their validity depends on whether the measure, as a policy tool, accomplishes what it is intended to accomplish. More pointedly, the article argues that the recent use of student test scores to evaluate teacher effectiveness has not been validated.
Learning Gains Depend on Joining Outcome Goals to Sufficient and Smart Inputs
This brief discusses how three recent popular educational reform policies move teaching towards or away from professionalization. These reforms are (1) policies that evaluate teachers based on students’ annual standardized test score gains, and specifically, those based on value-added assessment; (2) fast-track teacher preparation and licensure; and (3) scripted, narrowed curricula. These particular policy reforms are considered because of their contemporary prominence and the fact that they directly influence the way teaching is perceived.
Democratic policymaking and democratic education have been undermined by the passage of No Child Left Behind. This brief offers guidelines for future federal education policy that addresses the loss of local control brought on by recent reforms.
A video of Howe and Meens discussing the policy brief is available here: http://www.youtube.com/watch?v=n_OODyDYi8Y.
An Analysis of the Use and Validity of Test-Based Teacher Evaluations Reported by the Los Angeles Times: 2011
For the second time, the Los Angeles Times has published results of statistical analyses examining variation in teacher and school performance in the Los Angeles Unified School District. Although this year's Teacher Ratings allow the reader to take into account variability and model sensitivity issues, an improvement over the 2010 report, the resulting ranking system was found to be inaccurate because its methodology is unreliable.