Big Data, Algorithms, and Professional Judgment in Reforming Schools (Part 1)

The crusade among reformers for data-driven decision-making in classrooms, schools, and districts didn’t just begin in the past decade. Its roots go back to Frederick Winslow Taylor’s “scientific management” movement a century ago. In the decade before World War I and through the 1930s, borrowing from the business sector where Taylorism reigned, school boards and superintendents adopted wholesale ways of determining educational efficiency, producing a Niagara Falls of data that policymakers used to drive the practice of daily schooling.

Before there were IBM punchcards, before there were the earliest computers, there were city-wide surveys, school scorecards, and statistical tables recording the efficiency and effectiveness of principals, teachers, and students. And, yes, there were achievement test scores as well.

In Education and the Cult of Efficiency (1962), Raymond Callahan documents Newton (MA) superintendent Frank Spaulding telling fellow superintendents at the 1913 annual conference of the National Education Association how he “scientifically managed” his district. The crucial task, Spaulding told his peers, was for district officials to measure school “products or results” and thereby compare “the efficiency of schools in these respects.” What did he mean by products?

I refer to such results as the percentage of children of each year of age [enrolled] in school; the average number of days attendance secured annually from each child; the average length of time required for each child to do a given definite unit of work…(p. 69).

Spaulding and other superintendents measured in dollars and cents whether the teaching of Latin was more efficient than the teaching of English or history. They recorded how much it cost to teach vocational subjects vs. academic subjects.

What Spaulding described in Newton for increased efficiency (and effectiveness) spread swiftly among school boards, superintendents, and administrators. Academic experts hired by districts produced huge amounts of data in the 1920s and 1930s describing and analyzing every nook and cranny of buildings, how much time principals spent with students and parents, and what teachers did in daily lessons.

That crusade for meaningful data to inform policy decisions about district and school efficiency and effectiveness continued in subsequent decades. The current resurgence of this “cult of efficiency,” or the application of scientific management to schooling, appears in today’s romance with Big Data and the onslaught of models that use algorithms applied to grading schools, rating individual teacher performance, and customizing online lessons for students.

Just as efficiency-driven management began in the business sector a century ago, so too have contemporary business-driven practices in “analytics.” Harnessing computer capacity to process, over time, kilo-, mega-, giga-, tera-, and petabytes of data has filled policymakers determined to reform U.S. schools with confidence. Big Data, the use of complex algorithms, and data-driven decision-making in districts, schools, and classrooms have entranced school reformers. The use of these “analytics” and model-driven algorithms for grading schools, evaluating teachers, and finding the right lesson for the individual student has, sad to say, pushed teachers’ professional judgment off the cliff.


The point I want to make in this and subsequent posts on Big Data and models chock-full of algorithms is that using data to inform decisions about schooling is (and has been) essential to policymakers and practitioners. For decades, teachers, principals, and policymakers have used data, gathered systematically or on the run, to make decisions about programs, buildings, teaching, and learning. The data, however, had to fit existing models, conceptual frameworks–or theory, if you like–to determine whether the numbers, the stories, the facts explained what was going on. If they didn’t fit, some smart people developed new theories and new models to make sense of those data.

In the past few years, tons of data about students, teachers, and results have come to surround decision-makers. Some zealots for Big Data believe that all of these quantifiable data mean the end of theory and models. Listen to Chris Anderson, writing in Wired in 2008:

Out with every theory of human behavior, from linguistics to sociology…. Who knows why people do what they do? The point is they do it, and we can track and measure it with unprecedented fidelity. With enough data, the numbers speak for themselves.

But numbers do not speak for themselves any more than facts from the past do: just as historians must interpret those facts, so must policymakers interpret the numbers.

The use of those data to inform and make decisions requires policymakers and practitioners to have models in their heads that capture the nature of schooling, teaching, and learning. From these models and Big Data, algorithms–mathematical rules for making decisions–spill out. Schooling algorithms derived from these models often aim to eliminate wasteful procedures and reduce costs–recall the “cult of efficiency”–without compromising quality. Think of computer-based algorithms to mark student essays. Or value-added measures to determine which teachers stay and which are fired. Or Florida grading each and every school in the state.
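
To make the idea concrete, here is a minimal sketch in Python of what such a school-grading rule might look like. Everything in it is invented for illustration: the indicators, weights, and cutoffs are hypothetical, and no state’s actual formula is implied.

# A toy school-grading algorithm in the spirit of state A-F report cards.
# Every number here (the weights, the cutoffs) is a hypothetical policy
# choice baked into the rule, not something the data decide on their own.

def grade_school(proficiency, growth, graduation):
    """Combine three 0-100 indicators into one letter grade."""
    # Assumed weighting: 40% proficiency, 40% growth, 20% graduation.
    score = 0.4 * proficiency + 0.4 * growth + 0.2 * graduation
    # Assumed cutoffs mapping the composite score to a letter.
    for cutoff, letter in ((90, "A"), (80, "B"), (70, "C"), (60, "D")):
        if score >= cutoff:
            return letter
    return "F"

print(grade_school(proficiency=72, growth=65, graduation=88))  # prints "C"

Notice that the judgments that matter–which indicators count, how much each weighs, where a “B” ends and a “C” begins–live inside the rule itself; the computation merely executes them.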

The next post takes up making school policy by algorithm.

This blog post has been shared by permission from the author.
Readers wishing to comment on the content are encouraged to do so via the link to the original post.

The views expressed by the blogger are not necessarily those of NEPC.

Larry Cuban

Larry Cuban is a former high school social studies teacher (14 years), district superintendent (7 years) and university professor (20 years). He has published op-...