
Algorithms, Accountability, and Professional Judgment (Part 3)

So much of the public admiration for Big Data and algorithms avoids answering basic questions: Why are some facts counted and others ignored? Who decides what factors get included in an algorithm? What does an algorithm whose prediction might lead to someone getting fired actually look like? Without a model, a theory in mind, every table, each chart, each datum gets counted, threatens privacy, and, yes, becomes overwhelming. A framework for quantifying data and making algorithmic decisions based on those data is essential. Too often, however, such frameworks are kept secret or, sadly, missing in action.

Here is the point I want to make. Big Data are important; algorithmic formulas are important. They matter. Yet without data gatherers and analyzers using frameworks that make sense of the data, that ask questions about the what and why of phenomena, all the quantifying and all the regression equations and analyses can send researchers, policymakers, and practitioners down dead ends. Big Data become worthless and algorithms lead to bad decisions.

Few champions of Big Data have pointed to its failures. All the finely crafted algorithms available to hedge fund CEOs, investment bankers, and Federal Reserve officials before 2008, for example, were of no help in predicting the popping of the housing bubble, the near-death of the financial sector, the spike in unemployment, or the very slow recovery after the financial crisis erupted.

So Big Data, as important as it is in determining which genes trigger certain cancers, shaping strategies for marketing products, and identifying possible terrorists, is still hardly a solution in itself to curing diseases, stemming losses in advertising revenue, or stopping terrorist actions. Frameworks for understanding data, asking the right questions, constant scrutiny, if not questioning, of the algorithms themselves, and professional judgment are necessities in making decisions once data are collected.

In the private sector, the business model of decision-making (i.e., profit-making and returns on investment) drives interpretations of data, the asking of questions, and organizational changes. It works most of the time, but when it fails, it fails big. That business model has migrated to public schools.

In the past half-century, the dominant model for local, state, and federal decision-making in schools has become anchored in student performance on standardized tests. It is the “business model” grafted onto schools. If students score above the average, the model says that both teachers and students are doing their jobs well. If test scores fall below average, then changes have to be made in schools.

State and federal accountability regulations and significant penalties (e.g., No Child Left Behind) have been put into place that have set this model of test-score-driven schooling in concrete. Algorithms that distribute benefits and penalties to individual students, teachers, and schools are the steel rods embedded in the concrete that strengthen the entire structure, leaving little room for teachers, principals, and superintendents to exercise their professional judgment.
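What does such an algorithm actually look like? A deliberately simplified sketch follows, written in Python. The comparison to an average mirrors the test-score-driven model described above, but the cutoff, labels, and sanctions are hypothetical illustrations, not any state's or district's actual formula.

```python
# Hypothetical sketch of a test-score-driven accountability rule.
# The 90% cutoff and the labels are invented for illustration; real
# formulas (e.g., under No Child Left Behind) were far more elaborate,
# but rested on the same above/below-average logic.

def grade_school(school_mean: float, state_mean: float) -> str:
    """Label a school by comparing its mean test score to the state mean."""
    if school_mean >= state_mean:
        # Above average: teachers and students judged to be doing their jobs.
        return "meets expectations"
    elif school_mean >= 0.9 * state_mean:
        # Below average: changes must be made.
        return "needs improvement"
    else:
        # Far below average: candidate for sanctions such as closure.
        return "failing"

print(grade_school(school_mean=72.0, state_mean=75.0))  # -> needs improvement
```

The point of the sketch is how little room it leaves for judgment: the label, and whatever penalty is attached to it, follow mechanically from two numbers.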

Nonetheless, in fits and starts, the entire regulatory model of performance-driven schooling has slowly come under scrutiny from some policymakers, researchers, practitioners, and parents. Teachers, administrators, and parents have spoken out against too much standardized testing and the constricting of what students learn. These protests point to fundamental reasons why criticism of Big Data and algorithmic decision-making has taken hold and is slowly spreading.

First, unlike private sector companies, tax-supported schools are a public enterprise and accountable to voters. If high-stakes decisions (e.g., grading a school "F" and closing it) are driven by algorithms, those decisions need to be made in public, and the algorithm-driven rules on, say, evaluating teacher effectiveness (e.g., value-added measures in Los Angeles and Washington, D.C.) need to be transparent, easily understandable to voters and parents, and open to public scrutiny.

Google, Facebook, and other companies keep their algorithms secret because they say revealing the formulas they have created would give their competitors valuable information that would hurt company profits. School districts, however, are public institutions and cannot keep algorithms buried in jargon-laden technical reports that are released months after consequential decisions on schools and teachers are made (see Measuring Value Added in DC 2011-2012).
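For readers unfamiliar with value-added measures, the core idea is a comparison of students' actual scores with statistically predicted ones, averaged by teacher. The sketch below is a bare-bones illustration of that idea only; the actual Los Angeles and D.C. formulas are multi-variable regression models, and the use of each student's prior-year score as the "prediction" here is an assumption made for brevity.

```python
# Bare-bones illustration of a value-added-style estimate.
# Real systems regress current scores on prior scores plus student and
# classroom characteristics; here the "prediction" is simply each
# student's prior-year score, a simplification for illustration.

from statistics import mean

def value_added(prior_scores: list[float], current_scores: list[float]) -> float:
    """Average gain of a teacher's students over their predicted scores."""
    residuals = [cur - prior for prior, cur in zip(prior_scores, current_scores)]
    # Positive: students grew more than predicted; negative: less.
    return mean(residuals)

# One hypothetical class: prior-year and current-year scores per student.
prior = [60.0, 72.0, 81.0]
current = [64.0, 71.0, 86.0]
print(f"Value-added estimate: {value_added(prior, current):+.1f} points")  # +2.7
```

Even this toy version makes the transparency argument concrete: once the formula is published, parents and voters can see exactly what is being rewarded and penalized.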

Second, within a regulatory, test-driven structure, teacher and principal expertise about students (how much and how they learn), school organization, innovation, and district policies has been miniaturized and shrink-wrapped into making changes in lessons based on test results delivered to individual schools.

Teacher and principal judgments about the academic and non-academic performance of students matter a great deal. Such data appear in parent-teacher conferences, in retention decisions when teachers meet with principals, and in the portfolio of observations about individual students that teachers compile over the course of a school year. Teachers and principals make algorithm-like decisions, but those decisions are seldom quantified and put into formulas. It is called professional judgment. Yet such data and thinking seldom, if ever, show up in official judgments about individual students, a class, or a school, and they are absent from the mathematical formulas that judge student, teacher, and school performance.

Yet there are instances when professional judgments about regulations and tests make news. Two high school faculties in Seattle recently refused to administer the Measures of Academic Progress (MAP) test. New York principals have lobbied the state legislature against standardized testing.

Such rebellions, and there will be more, are desperate measures. They reveal how the professional expertise of those hired to teach and lead schools has been ignored and degraded. They also reveal the political difficulties facing professionals who decide to take on the regulatory, test-driven model that uses Big Data and algorithmic decision-making: protesters can appear to be against being held accountable and merely out to preserve their jobs.

That is a must-climb political mountain, and it can be conquered. In questioning policymakers' use of standardized tests to determine student futures, grade schools, and judge teacher effectiveness, teachers and principals end up questioning the entire model of regulatory accountability and algorithmic decision-making borrowed from the private sector. It is about time.

This blog post has been shared by permission from the author.
Readers wishing to comment on the content are encouraged to do so via the link to the original post.

The views expressed by the blogger are not necessarily those of NEPC.

Larry Cuban

Larry Cuban is a former high school social studies teacher (14 years), district superintendent (7 years) and university professor (20 years). He has published op-...