Years ago, I met Larry Berger at a conference. I had been impressed with the digital tools his company, Wireless Generation, had developed to assess student learning and increase teacher efficiency. We talked briefly at the time. My hunch is that he remembers neither the conversation nor my name.
Since that time, his career has soared; he is now CEO of Amplify, a technology company once owned by Rupert Murdoch’s News Corporation but since sold to Amplify executives, who now run it. The company creates and develops curricular and assessment software for schools.
Rick Hess, educational policy maven at the American Enterprise Institute, had invited Berger to a conference on the meaning of “personalized learning.” Berger could not attend, so he asked a colleague who did to read a “confession” he had to make about his abiding interest in “personalized learning.” Hess shared Berger’s letter with the conferees, and it appears below.
Until a few years ago, I was a great believer in what might be called the “engineering” model of personalized learning, which is still what most people mean by personalized learning. The model works as follows:
You start with a map of all the things that kids need to learn.
Then you measure the kids so that you can place each kid on the map in just the spot where they know everything behind them, and in front of them is what they should learn next.
Then you assemble a vast library of learning objects and ask an algorithm to sort through it to find the optimal learning object for each kid at that particular moment.
Then you make each kid use the learning object.
Then you measure the kids again. If they have learned what you wanted them to learn, you move them to the next place on the map. If they didn’t learn it, you try something simpler.
If the map, the assessments, and the library were used by millions of kids, then the algorithms would get smarter and smarter, and make better, more personalized choices about which things to put in front of which kids.
I spent a decade believing in this model—the map, the measure, and the library, all powered by big data algorithms.
Here’s the problem: The map doesn’t exist, the measurement is impossible, and we have, collectively, built only 5% of the library.
To be more precise: The map exists for early reading and the quantitative parts of K-8 mathematics, and much promising work on personalized learning has been done in these areas; but the map doesn’t exist for reading comprehension, or writing, or for the more complex areas of mathematical reasoning, or for any area of science or social studies. We aren’t sure whether you should learn about proteins then genes then traits—or traits, then genes, then proteins.
We also don’t have the assessments to place kids with any precision on the map. The existing measures are not high enough resolution to detect the thing that a kid should learn tomorrow. Our current precision would be like Google Maps trying to steer you home tonight using a GPS system that knows only that your location correlates highly with either Maryland or Virginia.
We also don’t have the library of learning objects for the kinds of difficulties that kids often encounter. Most of the available learning objects are in books that only work if you have read the previous page. And they aren’t indexed in ways that algorithms understand.
Finally, as if it were not enough of a problem that this is a system whose parts don’t exist, there’s a more fundamental breakdown: Just because the algorithms want a kid to learn the next thing doesn’t mean that a real kid actually wants to learn that thing.
So we need to move beyond this engineering model. Once we do, we find many more compelling and more realistic frontiers of personalized learning opening up.
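The map-measure-library loop Berger describes can be rendered as a toy sketch. Everything below — the skill names, the library contents, the function names, and the crude “try something simpler” rule — is an illustrative assumption of mine, not any real Amplify product or algorithm:

```python
# Toy sketch of the "engineering" model of personalized learning.
# All names and data here are hypothetical illustrations.

SKILL_MAP = ["counting", "addition", "subtraction", "multiplication"]

LIBRARY = {
    "counting": ["count-blocks video"],
    "addition": ["addition drill", "simpler addition game"],
    "subtraction": ["subtraction drill"],
    "multiplication": ["times-table practice"],
}

def next_skill(mastered):
    """Place the student on the map: the first skill not yet mastered."""
    for skill in SKILL_MAP:
        if skill not in mastered:
            return skill
    return None  # everything on the map is mastered

def choose_object(skill, attempt):
    """The 'algorithm': later attempts get progressively simpler objects."""
    objects = LIBRARY[skill]
    return objects[min(attempt, len(objects) - 1)]

def run_cycle(mastered, learns_on_attempt):
    """One measure-assign-remeasure loop for a single student.

    `learns_on_attempt` is a stand-in for whether the kid actually
    learned from the object -- the part Berger notes is the real gap.
    """
    skill = next_skill(mastered)
    attempt = 0
    while skill is not None and attempt < 5:
        obj = choose_object(skill, attempt)
        if attempt >= learns_on_attempt:  # re-measure: did they learn it?
            mastered.add(skill)           # advance on the map
            return skill, obj
        attempt += 1                      # didn't learn it: try something simpler
    return None, None
```

The sketch makes Berger’s critique concrete: the loop only works if `SKILL_MAP` is complete, the assessment inside `run_cycle` is precise, and `LIBRARY` is fully stocked and indexed — the three things he says do not exist.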
Berger’s confession about believing in “engineering” solutions such as “personalized learning” to school and classroom problems joins, of course, a long history of policy elites in the 20th and 21st centuries watching technical solutions to school governance, organization, curriculum, and instruction flop. After the post-Sputnik reforms introduced new curricula in math and the natural and social sciences, cheerleaders for those reforms confessed that what they had hoped would occur didn’t materialize (see here). After No Child Left Behind became law in 2002, for example, one-time advocates confessed that the law imposed too much testing and left too little flexibility for districts and schools (see here).
“Buyer’s remorse” is an abiding tradition.
I have a few observations about contrition and public confessions over errors in thinking about “personalized learning.”
First, those confessing their errors about solving school problems seldom looked at previous generations of reformers seeking major changes in schools. They were ahistorical. They thought that they knew better than other very smart people who had earlier sought to solve problems in schooling.
Second, the confessions seldom go beyond blaming the confessors’ own flawed thinking (or others who failed to carry out their instructions) to arrive at the obvious realization: schooling is a far more complex human institution than they had ever considered.
Finally, few of these confessions step back to consider not only the complexity of schooling and its many moving parts but also the political, social, and economic structures that keep it in place (see Audrey Watters here). As I and many others have said often, schools are political institutions deeply entangled in American society, culture, and democracy. Keeping both the macro- and micro-perspectives in sight is a must for those seeking major changes in how teachers teach or how schools educate. Were that to occur, the incidence of after-the-reform regret might decrease.
This blog post has been shared by permission from the author.
Readers wishing to comment on the content are encouraged to do so via the link to the original post.
Find the original post here:
The views expressed by the blogger are not necessarily those of NEPC.