Larry Cuban on School Reform and Classroom Practice: Donors Reform Schooling: Evaluating Teachers (Part 2)
In Part 1, I described a Gates Foundation initiative aimed at identifying effective teachers as measured in part by their students’ test scores, rewarding such stellar teachers with cash, and giving poor and minority children access to their classrooms. Called the Intensive Partnerships for Effective Teaching (IPET), the initiative mobilized sufficient political support for the Foundation’s huge grant to find and fund three school districts and four charter school networks across the nation. IPET launched in 2009 and closed its doors (and funding) in 2016.
A brief look at the project’s largest partner, Florida’s Hillsborough County district, over the span of the grant shows how early exhilaration over the project morphed into opposition as rising program costs had to be absorbed by the district’s regular budget, and as key district and school staff grew disillusioned with the project’s direction and its disappointing results for students. Consider what the Tampa Bay Times, a local paper, found in 2015 after a lengthy investigation into the grant. [i]
- The Gates-funded program — which required Hillsborough to raise its own $100 million — ballooned beyond the district’s ability to afford it, creating a new bureaucracy of mentors and “peer evaluators” who no longer work with students.
- Nearly 3,000 employees got one-year raises of more than $8,000. Some were as high as $15,000, or 25 percent.
- Raises went to a wider group than envisioned, including close to 500 people who don’t work in the classroom full time, if at all.
- The greatest share of large raises went to veteran teachers in stable suburban schools, despite the program’s stated goal of channeling better and better-paid teachers into high-needs schools.
- More than $23 million of the Gates money went to consultants.
- The program’s total cost has risen from $202 million to $271 million when related projects are factored in, with some of the money coming from private foundations in addition to Gates. The district’s share now comes to $124 million.
- Millions of dollars were pledged to parts of the program that educators now doubt. After investing in an elaborate system of peer evaluations to improve teaching, district leaders are considering a retreat from that model. And Gates is withholding $20 million after deciding it does not, after all, favor the idea of teacher performance bonuses — a major change in philosophy.
- The end product — results in the classroom — is a mixed bag. Hillsborough’s graduation rate still lags behind other large school districts. Racial and economic achievement gaps remain pronounced, especially in middle school. And poor schools still wind up with the newest, greenest teachers.
Not a pretty picture. RAND’s formal evaluation, covering the life of the grant across the three districts and four charter networks, used less judgmental language but reached a conclusion about school outcomes similar to what the Tampa Bay Times found for these county schools.
Overall, the initiative did not achieve its stated goals for students, particularly LIM [low-income minority] students. By the end of 2014–2015, student outcomes were not dramatically better than outcomes in similar sites that did not participate in the IP initiative. Furthermore, in the sites where these analyses could be conducted, we did not find improvement in the effectiveness of newly hired teachers relative to experienced teachers; we found very few instances of improvement in the effectiveness of the teaching force overall; we found no evidence that LIM students had greater access than non-LIM students to effective teaching; and we found no increase in the retention of effective teachers, although we did find declines in the retention of ineffective teachers in most sites. [ii]
As has happened with such innovative projects in public schools over the past century, RAND evaluators found that the districts and charter school networks fell short of achieving IPET’s goals because of uneven and incomplete implementation of the program.
We also examined variation in implementation and outcomes across sites. Although sites varied in context and in the ways in which they approached the levers, these differences did not translate into differences in ultimate outcomes. Although the sites implemented the same levers, they gave different degrees of emphasis to different levers, and none of the sites achieved strong implementation or outcomes across the board. [iii]
But an absolutist judgment of “failure” in achieving the aims of this donor-funded initiative hides the rippling effects of this effort to reform teaching and learning in these districts and charter networks. For example, during the Obama administration, U.S. Secretary of Education Arne Duncan’s Race to the Top initiative invited states to compete for grants of millions of dollars if they committed themselves to the Common Core standards (another Gates-funded initiative) and included, as did IPET, different ways of evaluating teachers. [iv]
Over 40 states and the District of Columbia have now adopted plans to evaluate teachers on the basis of student test scores. How much those scores should weigh in the overall judgment of a teacher’s effectiveness varies by state and local district, as does the autonomy local districts have to put their own signature on state evaluation requirements: test scores may count for half of the total judgment in one place and a third or a quarter in another, but they have become a significant variable in assessing a teacher’s effectiveness. Even as testing experts and academic evaluators have raised red flags about the instability, inaccuracy, and unfairness of district and state evaluation policies based upon student scores, those policies remain on the books and have been implemented in various districts. Because implementation takes time, states will proceed by trial and error as they put these policies into practice, which may lead to more (or less) political acceptance from teachers and principals, the key participants in the venture.[v]
While the reform glow around evaluating teachers on the basis of student performance has noticeably dulled (note the Gates Foundation’s retreat from using test scores to evaluate teachers as part of the half-billion-dollar Intensive Partnerships for Effective Teaching), the rise and fall of enthusiasm for test scores has, intentionally or unintentionally, focused policy discussions on teachers as the source of school “failure” and of inequalities among students. In pressing for teachers to be held accountable, policy elites have largely ignored other factors that influence both teacher and student performance and that are deeply connected to economic and social inequalities outside the school, such as poverty, neighborhood crime, discriminatory labor and housing practices, and lack of access to health centers.
By helping to frame an agenda for turning around “failing” U.S. schools or, more generously, for improving equal opportunity for children and youth, these philanthropists (unaccountable to anyone and subsidized through federal tax exemptions) act as members of policy elites who spotlight teachers as both the problem and the solution to school improvement. Teachers are indeed the most important in-school factor, accounting for perhaps 10 percent of the variation in student achievement. Yet over 60 percent of that variation is attributed to out-of-school factors such as the family. [vi]
This Gates-funded Intensive Partnerships for Effective Teaching is an example, then, of policy elites shaping a reform agenda for the nation’s schools using teacher effectiveness as a primary criterion and having enormous direct and indirect influence in advocating and enacting other pet reforms.
Did, then, Intensive Partnerships for Effective Teaching “fail”? Part 3 answers that question.
[i] Marlene Sokol, “Sticker Shock: How Hillsborough County’s Gates Grant Became a Budget Buster,” Tampa Bay Times, October 23, 2015.
[ii] RAND evaluation.
[iii] RAND evaluation; implementation quote, p. 488.
[iv] William Howell, “Results of President Obama’s Race to the Top,” Education Next, 2015, 15 (4), at: https://www.educationnext.org/results-president-obama-race-to-the-top-reform/
[v] Eduardo Porter, “Grading Teachers by the Test,” New York Times, March 24, 2015; Rachel Cohen, “Teachers Tests Test Teachers,” American Prospect, July 18, 2017; Kaitlin Pennington and Sara Mead, For Good Measure? Teacher Evaluation Policy in the ESSA Era, Bellwether Education Partners, December 2016; Edward Haertel, “Reliability and Validity of Inferences about Teachers Based on Student Test Scores,” William Angoff Memorial Lecture, Washington, D.C., March 22, 2013; Matthew Di Carlo, “Why Teacher Evaluation Reform Is Not a Failure,” August 23, 2018, at: http://www.shankerinstitute.org/blog/why-teacher-evaluation-reform-not-failure
[vi] Edward Haertel, “Reliability and Validity of Inferences about Teachers Based on Student Test Scores,” William Angoff Memorial Lecture, Washington D.C., March 22, 2013
This blog post has been shared by permission from the author.
The views expressed by the blogger are not necessarily those of NEPC.