
Moneyball, Baseball, Teaching & Learning: Is There a Relationship?

Moneyball: A book and a movie based on real events in which a baseball team is assembled using analytical, evidence-based, and sabermetric methods. Sabermetrics is derived from the acronym SABR, which stands for the Society for American Baseball Research.

GA AWARDS: An acronym that stands for Georgia’s Academic and Workforce Analysis and Research Data System. GA AWARDS comprises data collected through Georgia’s Race to the Top (RT3) Statewide Longitudinal Data System (SLDS).

Baseball

According to David Grabiner, Bill James developed sabermetrics, which is “the search for objective knowledge about baseball.” Here is an interesting quote from Grabiner’s manifesto (Grabiner, n.d.):

Sabermetrics attempts to answer objective questions about baseball, such as “which player on the Red Sox contributed the most to the team’s offense?” or “How many home runs will Ken Griffey hit next year?” It cannot deal with the subjective judgments which are also important to the game, such as “Who is your favorite player?” or “That was a great game.”

Using sabermetrics, a baseball team’s management can predict how well a player should do during the next season. What happens to a player if he doesn’t produce according to the “objective data” collected on him over past years? Predicting how well a player will do in the future is analogous to using a teacher’s value-added model (VAM) score to predict his or her future performance, don’t you think?

Figure 1. Jackie Robinson’s Hitting Statistics, 1947–1956.

Instead of paying attention just to runs scored, hits, runs batted in, and batting average, as shown for Jackie Robinson in Figure 1, sabermetrics expands the categories of data collection by adding variables such as these: base runs, batting average on balls in play, defensive runs saved, equivalent average, late inning pressure situations, Pythagorean expectation, runs created, ultimate zone rating, value over replacement player, and so on. Thus sabermetrics applies mathematical tools to analyze baseball, which officials use to make decisions about their teams (Wikipedia, 2013).
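To give a flavor of one of these variables, here is a minimal sketch of the Pythagorean expectation, Bill James’s formula for estimating the share of games a team should win from its runs scored and runs allowed; the season totals in the example are hypothetical, purely for illustration.

```python
# Minimal sketch of the Pythagorean expectation:
# expected winning percentage = RS^2 / (RS^2 + RA^2),
# where RS is runs scored and RA is runs allowed.
def pythagorean_expectation(runs_scored: float, runs_allowed: float) -> float:
    return runs_scored ** 2 / (runs_scored ** 2 + runs_allowed ** 2)

# Hypothetical season totals, for illustration only.
print(f"{pythagorean_expectation(750, 680):.3f}")  # roughly 0.549
```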

For teachers, however, the situation is a bit different. Most states in the U.S. are moving toward pinning teachers’ worth and value on just one variable: student achievement scores on high-stakes tests. It seems to me that baseball players might have the edge here.

Statistics have always been a part of baseball. Baseball cards showed a picture of our favorite players on one side, and on the flip side was the player’s complete batting or pitching record. But it was nothing like the spreadsheets that are now used in the age of sabermetrics. Just look at Figures 2 and 3, which are spreadsheets of data used by sabermetricians.

As one author stated, “sabermetrics dig deep into raw data to answer questions such as: Do pitching coaches actually make a difference? Or, what’s the best way to measure a hitter’s value to the team?” (J. Silverman, How Stuff Works). Figures 2 and 3 show some of the data used by baseball officials to make decisions about their players.

Figure 2. Micro-View of MLB 2011 Season Data.

Figure 3. Enlarged view of the data shown in Figure 2.

Many baseball front offices have adopted sabermetrics, based on the work of Bill James, who had been publishing books on baseball history and statistics before he was discovered by Major League Baseball. The Oakland A’s were the first team to apply and adopt the principles of sabermetrics. The movie Moneyball was based on the book of the same title, chronicling the A’s general manager, Billy Beane, who applied the method to his team.

Much of the MLB data is online, and you can follow this link to Bill James Online.

Do you see any parallels with what is happening in education? Is there any connection between sabermetrics and the current data collection and analysis strategies that have been adopted by all state education departments, and the U.S. Department of Education? As you will see, education has a long history of collecting data, but nothing compared to what is happening in 2013.

Education

Like the statistics on the back of a baseball card, statistics on education have been collected since 1867, when Congress established a department of education for the purpose of collecting data on the condition and progress of education in the states and territories (Grant, 1993). As Grant recalls, the department was very small and, as an entity, was moved from one federal agency to another until it was separated from the Department of Health, Education, and Welfare in 1980 to become the U.S. Department of Education (ED). The collection of data at the federal level really began in 1870, when Congress authorized the department of education to hire its first statistician.

With time, the statistics arm of the department, which is now the National Center for Education Statistics, expanded, so that by the 1960s the center was collecting and publishing large quantities of data.

In 1969, the center began the National Assessment of Educational Progress, which has since surveyed nationwide samples of students at ages 9, 13, and 17 in mathematics, reading, science, writing, the arts, civics, economics, geography, U.S. history, and, beginning in 2014, technology and engineering literacy (National Center for Education Statistics, 2013).

In 2001, the U.S. Congress passed the No Child Left Behind Act, which required all states to develop assessments in basic skills and to give these assessments to all students in order to receive federal school funding.

Figure 4. Race to the Top Winners. Blue: Winners; Green: Losers; Yellow: Did Not Submit.

In 2009, the U.S. Department of Education used $4.35 billion to fund the Race to the Top, a contest in which states competed for the money in two rounds of proposal writing. Those states that received funding had to agree to use statistics as a method to evaluate teachers and to devote a major portion of the funding to statewide longitudinal data systems built to improve instruction and to evaluate schools and teachers.

Whether they received funding or not, many states changed their policies so as to position themselves to become more competitive for Race to the Top funding. Some states, at the last minute, agreed to tie teacher evaluations to student test scores and to adopt the Common Core State Standards.

GA AWARDS

Georgia was one of the winners in the Race to the Top competition and, based on its proposal, was required to develop a longitudinal data system. Georgia’s system is GA AWARDS, or Georgia’s Academic and Workforce Analysis and Research Data System.

According to the GA AWARDS website, the data system:

has been made available to researchers with the high-level analytical skills and research training needed to mine the data and answer critical educational policy and evaluation questions. (emphasis mine)

Researchers will be asked to focus on key topics and advocacy areas including:

  • effectiveness of educator preparation programs
  • effectiveness of strategies and interventions implemented within the State’s RT3 proposal
  • education background of students who experience the least difficulty in transitioning to college

There’s a lot of data available to the researchers. According to the Georgia Department of Education, the Race to the Top data will be combined from data sets in these state agencies:

  • Bright from the Start: Department of Early Care & Learning – DECAL
  • Department of Education – DoE
  • Georgia Student Finance Commission – GSFC
  • Governor’s Office of Student Achievement – GOSA
  • Georgia Professional Standards Commission – PSC
  • Technical College System of Georgia – TCSG
  • University System of Georgia – USG

This past week, the Georgia Department of Education released a data-driven, 110-point grading or report card system that published scores for individual schools, districts, and the state as a whole. Although the grading system doesn’t use as many categories as sabermetrics, the principles are in place to use the data to make crucial decisions about students, teachers, and administrators. The educational front offices of the state and each school system will be able to make decisions that may or may not be advisable.

The selection of variables that the department of education thinks are the most important in measuring student learning is highly questionable. For example, for more than a decade, critics have questioned the use of end-of-year high-stakes tests as the major variable for assessing student learning. Yet in Georgia, the state’s teacher evaluation system, which uses a teacher’s “value-added” score, will be based largely on student test scores. Much of the drive to put this far-reaching, data-driven system of education into place can be traced to the Race to the Top. In a letter to the Georgia Department of Education, scholars at some of Georgia’s universities have recommended that the state not use this method to evaluate teachers because there is no evidence that it is valid.

However, the Department of Education forges ahead.

CCRPI

Georgia just unveiled a new data system. CCRPI, or the College and Career Ready Performance Index, is the equivalent of Bill James’s sabermetrics in baseball. The index is actually a score on a scale from 0 to 110, called the CCRPI score. The score is a sum of achievement, progress, achievement gap, and challenge points, kind of like runs batted in, Pythagorean expectation, runs created, and ultimate zone rating in baseball. From this kind of data, the state classifies schools as Reward Schools, Priority Schools, Focus Schools, and Alert Schools. Guess which variable is associated with these categories of schools?

When I dug deeper into the CCRPI, I realized that the mathematics being used to sort out differences among schools was along the lines of sabermetrics.

At the high school level, the CCRPI is an amalgam of 19 items, including (a) content mastery, the percentage of students meeting or exceeding content test criteria; (b) post-high-school readiness, measured by the percentage of graduates, the percentage of AP courses, the percentage passing certain national industry credential tests, and so forth; and (c) the graduation rate.

Figure 5. College and Career Ready Performance Index for the State of Georgia.

Figure 5 is an image of the CCRPI home page. This page shows an average score for all of Georgia’s schools. As you can see, the CCRPI score for the state is 83.4 out of 110 points. The total score is the sum of achievement points (57.5), progress points (9.8), achievement gap points (10.5), and challenge points for exceeding the bar (5.6).
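As a quick check of the arithmetic behind that headline number, here is a minimal sketch that simply adds the four component scores reported in Figure 5; the component names are paraphrased from the index page.

```python
# Minimal sketch: the statewide CCRPI score shown in Figure 5 is just
# the sum of its four component scores.
components = {
    "achievement": 57.5,
    "progress": 9.8,
    "achievement gap": 10.5,
    "challenge (exceeding the bar)": 5.6,
}

total = sum(components.values())
print(f"CCRPI score: {total:.1f} out of 110")  # 83.4
```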

Using the website, we can find the CCRPI scores for every school in the state, including public and charter schools. For example, here are CCRPI scores for a few school districts that I have worked with in the past:

  • Atlanta Public Schools: 68.4
  • Clayton County: 70.2
  • Cobb County Schools: 85.4
  • Decatur City Schools: 93.7
  • Dekalb County Schools: 71.2
  • Douglas County: 80.8
  • Fulton County: 85.7
  • Walker County: 88.7

The CCRPI scoring system (follow this link to a slide show on the system) was part of Georgia’s Flexibility Request to obtain waivers from some aspects of the No Child Left Behind Act. I’ve discussed this request in some detail here. The new scoring system also prepares for the state’s adoption of the Common Core State Standards in mathematics and English/language arts, with assessments based on the Common Core in place by 2014.

The CCRPI scoring system reduces the nature of teaching and learning to a single number that people really believe. Unfortunately, the system does not tell us anything about the arts program of a school. It says nothing about the participation level of students in school activities. It doesn’t tell us anything about the kind of work that students do, nor does it tell us anything about the values and aspirations that are in place in the school.

Is there any kind of relationship between the method used to evaluate baseball players and the method used to evaluate schools, teachers and students? There seems to be, but baseball and education are based on very different value and compensation systems. To use the sabermetrics type of evaluation to judge schools and teachers is problematic. It sets up league standings of schools based on CCRPI scores. The rankings and scores are used to make comparisons, establish rewards and impose punishments.
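To illustrate what such “league standings” look like, here is a minimal sketch that ranks the districts listed earlier by their published CCRPI scores; the ranking itself is my illustration, not a table the state publishes in this form.

```python
# Hypothetical illustration: sorting districts by CCRPI score produces a
# league-standings-style table, much as teams are ranked in baseball.
# Scores are the district values listed earlier in this post.
districts = {
    "Atlanta Public Schools": 68.4,
    "Clayton County": 70.2,
    "Cobb County Schools": 85.4,
    "Decatur City Schools": 93.7,
    "Dekalb County Schools": 71.2,
    "Douglas County": 80.8,
    "Fulton County": 85.7,
    "Walker County": 88.7,
}

standings = sorted(districts.items(), key=lambda kv: kv[1], reverse=True)
for rank, (name, score) in enumerate(standings, start=1):
    print(f"{rank}. {name}: {score}")
```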

When you sit in front of a computer screen and see the data that is at your fingertips, it makes you wonder just what is going on here. Will education use statistics in the same way that some front-office managers are using them in baseball?

What do you think?

References

Grabiner, D. J. (n.d.). The Sabermetric Manifesto. In SeanLahman.com. Retrieved May 15, 2013, from http://www.seanlahman.com/baseball-archive/sabermetrics/sabermetric-man….

Grant, W. V. (1993). 120 Years of American Education: A Statistical Portrait. In National Center for Education Statistics. Retrieved May 12, 2013, from http://0-nces.ed.gov.opac.acc.msmc.edu/pubs93/93442.pdf

Silverman, J. (n.d.). How Sabermetrics Works. In How Stuff Works. Retrieved May 14, 2013, from http://www.howstuffworks.com/sabermetrics.htm

Wikipedia. (May 7, 2013). Sabermetrics. In Wikipedia. Retrieved May 15, 2013, from http://en.wikipedia.org/wiki/Sabermetrics.

This blog post has been shared by permission from the author.
Readers wishing to comment on the content are encouraged to do so via the link to the original post.

The views expressed by the blogger are not necessarily those of NEPC.

Jack Hassard

Jack Hassard is a former high school science teacher and Professor Emeritus of Science Education, Georgia State University. While at Georgia State he was coordina...