
Curmudgucation: Fordham: Teachers Are Downloading Junk

Fordham Institute, the right-tilted thinky tank and tireless ed reform advocacy group, just released a new study that actually raises some interesting questions. "The Supplemental Curriculum Bazaar" takes a look at the materials teachers are downloading, and it finds them, well, not delightful. While I'm only going to argue with their findings a little, there are aspects of their methods that I find, well, not delightful.



The Playing Field



The study looked specifically at materials for high school English (ELA for the core-trained), which is right in my wheelhouse (39 years of secondary English classrooms). So I was prepared to predict the results of this study before reading it. My prediction: the stuff available online today is much like the stuff available in catalogs and teacher stores twenty years ago-- kind of mediocre, but occasionally helpful. So now we can see if Fordham's findings match my prediction.



They looked at two main sets of questions: 1) what types of materials are being downloaded, and 2) how do "experts" rate the quality? Also, how do the "expert" ratings compare to teacher ratings?



Yeah, about that-- who are these experts, and why is "expert" different from "teacher"?



Experts include:



Morgan Polikoff, from USC Rossier; we've seen him before as a big-time testing supporter. His background is ed policy and math.

 


 

Jennifer Dean, freelance educational assessment consultant. She's worked for Student Achievement Partners and was involved in K-12 assessment for "a large assessment company" (looks like maybe ETS). At SAP she developed standards alignment guidelines and sample assessments. She supposedly started her career teaching secondary ELA.



Jenni Aberli is an ELA instructional lead. It's not clear how much actual teaching she does, but she does work hard at that old Core alignment and she is National Board certified.



Sarah Baughman is an educational consultant "with over a decade of middle and high school English teaching experience in public, charter, online and international schools." That's a lot of variety for a decade. She's also associated with Student Achievement Partners.



Dr. Bryan Drost. An administrator in Ohio and a big testing guy, including national supervisor of edTPA (boo).



Joey Hawkins is a national literacy consultant. She taught in rural Vermont. Lots of writing background.



So, some actual teachers, heavy on the love of assessment and the core. It is not quite clear to me why they, and not the classroom teachers, are the experts here, but let's forge ahead.



How Were the Materials Judged



The study used the list of commonly consulted sites from a 2015 RAND survey, omitting Google and Pinterest (because there's no way to track what's found there) and Readworks and Newsela (because they are narrowly focused on reading). So that left Teachers Pay Teachers, ReadWriteThink, and Share My Lesson.



For the last two, the study simply used the list of most-downloaded material, sticking to reading, writing, speaking and/or listening lessons. For Teachers Pay Teachers, they used the top 15 free units, top 15 free lessons, top 47 paid units, and top 47 paid lessons. Then the evaluators whipped out their rubrics and went to work. Here are the ten areas covered.



1) Basic descriptive data. What's in it, metadata, title, etc.



2) Alignment to Standards. I'm a little fuzzy here. The question is "does it include standards it aligns to," but the rating is based on whether or not it aligns to the target standard. I'm not sure why I care about either one. We really don't talk often enough about how "alignment to standards" has nothing to do with the level of instructional material suckitude.



3) Depth of Knowledge. Oh, lordy, spare me. DOK is a bunch of baloney that fastened itself, barnacle-like, to the back of the standards.



4) Text complexity and quality. In other words, is this unit for Grapes of Wrath and not Diary of a Wimpy Kid? Unfortunately, it includes Lexile scores, and according to those, Diary is high school appropriate and Grapes is not.



5) Close Reading and Evidence from the Text. If there's a reading comprehension task, it has to require "close reading and analysis" as well as the use of "evidence from the text." Also, it must focus on the main idea and important details in the text. All of this might be a good sign, if we don't wander too far into the David Coleman version of close reading.



6) Writing Task Quality. Does the task involve writing to a text? Because, of course, writing that isn't about a text is just a waste of time? SMH. This also includes an overall measure of task quality, in which the experts had to decide whether the task was too easy or too unimportant. How long has it been since those experts were in a classroom, I wonder.



7) Speaking and Listening Task Quality. Again, it's only high quality if it's to a text or recording, and it can't be too easy or frivolous.



8) Usability. Is it interesting? Is it free from errors? Is it visually attractive? Does it give clear guidance for teachers? Does it support diverse learners? Is it going to be used by some gormless drone with no personal knowledge of teaching? No, I just made that last one up. But seriously-- if the teacher using this can't figure out how to use it or spot errors in it, then we are already in more trouble than a lesson from an online vendor can solve.



9) Assessment quality. There are plenty of assessment experts on this case, so I'll give them this one.



10) Knowledge Building and Cultural Responsiveness. If this is a full unit, does it involve students showing that they know something? And does it include diverse perspectives, authors, and topics?



What They Found Out



The report has some good news and bad news.



Good news: The text quality is good, with only a few hitting the "very low quality" score. The materials also mostly involved text-based evidence.



Good news: The materials are generally error-free and well designed. So, pretty and not filled with goofs. The report takes no position on the use of Comic Sans.



Bad news: The reviewers thought the materials were mostly mediocre. They particularly criticized the instructional guidance. "If busy teachers are going to take the time to look for supplemental materials," says the report, "they certainly want to know (quickly) how to use them." I am not so sure. Or rather, I'm not so sure that they need someone to spell it out for them. I suspect a teacher, particularly one who knows just what she's looking for, doesn't need someone else to tell her how the material is "supposed" to be used. And as it turns out, the expert ratings were a bad predictor of whether teachers seemed to think the materials were any good or not. Almost as if the academic experts and the actual classroom teachers are using different criteria.



Bad news: Materials have weak alignment to standards they say they're meeting. Well, duh. At this point, "alignment" is a pointless piece of paperwork, and the standards have spread out and been reinterpreted a hundred different times by a hundred different experts. They're a set of numbers that you have to slap on unit and lesson plans to make administrators happy, and few districts are teaching exactly what each standard means because nobody is certain they know. The good news for all those folks is that after ten years, there isn't a lick of evidence that the standards are actually linked to life outcomes or college and career readiness.



Bad news: Writing, speaking and listening task quality is weak. One sign that we're in Coleman territory: an assignment is judged weak if it is "largely focused on personal feelings." The evil of all evils is to not be text-dependent, as if the only way a piece of writing can be legitimate is for it to be about a text. Writing in response to a text is an excellent path to take-- I won't argue that for a minute. But to argue that it is the only path to a good writing assignment is nuts, and again without a shred of evidence.



Bad news: Assessments are weak because they don't always cover key content and "rarely provide the supports needed to score student work." A recurring issue in this study is that it seems to assume that the end user of these materials is some shmoe off the street and not an actual classroom teacher. The study says that quality material includes an assessment rubric. No, thanks. If I'm bringing it into my classroom, I'll decide how I want to assess it.



Bad news: Lessons do a bad job of building content knowledge and are not cognitively demanding. They looked for units to introduce and sequence content knowledge, which means if you are downloading a unit to use as review, you're in trouble here. They also slam "skill building," which has, of course, been the major focus under Common Core and its attendant testing regime. Given the close association these experts have had with Core-pushing groups, it seems a bit disingenuous for them to get religion about rich content and background knowledge without acknowledging that they helped fuel the movement that drove these things underground in the first place.



This carries through with their examples of "bad" units (e.g., a unit on Romeo and Juliet without any reference to Elizabethan England or a unit on The Great Gatsby without any reference to the Roaring Twenties). These are exactly the sort of units that David "Stay Within The Four Corners of the Text" Coleman endorsed when he was pushing his ELA standards. Exactly.



Bad news: Super-lousy job on the whole support for diverse learners thing. Few units offered differentiation.



Bad news: Low potential to engage students, and low on cultural diversity. Once again, the end user here is a classroom teacher, and I can't remember a single thing I taught in 39 years for which I said, "Oh well, this unit is so super-engaging on its own I can just nap." The engaging part comes from the human who presents it. Nor am I sure how you rate how interesting something is without asking "interesting to whom?" Diversity? Yes, that's an issue.



What they didn't ask about



The report actually does pull in some teacher quotes about things like why teachers search out these sites for supplemental material, but there are still some things they don't/can't know.



In particular, it's impossible to know how much modifying teachers do to these materials. Do they pick and choose parts? Do they modify activities and assessments to suit their classes? I'll bet they do plenty of both. I'd also be curious to know how often teachers adopt the materials ("I'm going to use this again next year") and how often not ("Well, that was worth a try, but never again").



What they want to do about it.



The researchers have some "policy implications."



1) Supplemental ELA materials on these sites have "a long way to go." Yeah, we got that from the rest of the findings.



2) The supplemental materials market "begs curation." Somebody needs to sort this stuff out. I'm not sure who would be in a position to do that, really. The teacher knows her classes better than anyone.



3) More materials need to provide "soup-to-nuts" supports. Meh. Maybe. But the thing is, these are supplemental materials, and so it's going to make a difference exactly what the teacher is supplementing. One size fits all is not going to be helpful. More materials to adapt could be.



4) More cultural diversity, diverse authors, culture of pluralism. Well, yeah. Not exactly an issue confined to supplemental materials.



5) School and district leaders need to decide whether and how to monitor this stuff. Nah. They need to leave their teachers alone, or maybe even ask how they can help. Further teacher surveillance and micro-management is not helpful ("Can't get this run off until I pass it through the Office of Worksheet Review").



So in the end



Teachers look for supplemental materials wherever they can find them, because all schools hand us are textbook sets and those are almost universally mediocre (I had one good lit series in my career, and it went out of print when the company was bought). One of the ongoing jobs of teaching is to make silk purses out of sows' ears, and the ears don't have enough material, so you're always looking for some sows' nose lining and cows' liver casing and whatever else you can get your hands on, then modifying it for the students you have.



This is an ongoing process. No teacher worthy of the title teaches exactly the same stuff exactly the same way two years in a row (this is just one of the reasons that scripts and teacher-proof programs in a box are absolutely junk). The dream is not supplemental material that does the work for you; it's material that includes little gems that you can use. If teaching is house-hunting, teachers never expect to buy a place that's move-in ready; they're just looking for a place with good bones and a nice floor plan that they can renovate without too much trouble.



What we used to have to depend on-- supplemental materials from publishers via teacher stores, catalogs, and the filing cabinet that the last lady who taught in this room left behind-- were never super. Created by people nowhere near a classroom, they were always an ill fit, always in need of adapting, trimming, cutting. The Common Core revolution brought a tidal wave of this crap-- workbook after workbook of short reading excerpts bundled with multiple choice quizzes. Your best source for extra material was always the teacher up the hall who used to teach this same thing, who knows the kinds of students you have, who had field tested the material in her own classroom laboratory. The internet just made that hall a lot bigger. Internet teachers are less familiar with your school and classroom, and not all of them are necessarily teachers you would aspire to imitate. But some teaching ideas from a fellow professional are at least as useful as publisher junk.



We could have a whole other discussion about why so much-- so very, very much-- of what teachers are given is junk (that it's created by non-teachers who think that alignment to bad standards has something to do with quality is probably on the list). But the question this study didn't ask might be the most important one-- if the online materials are junk, are they any more junky than what publishers crank out? Maybe next time.

This blog post has been shared by permission from the author.
Readers wishing to comment on the content are encouraged to do so via the link to the original post.
Find the original post here:

The views expressed by the blogger are not necessarily those of NEPC.

Peter Greene

Peter Greene has been a high school English teacher in Northwest Pennsylvania for over 30 years. He blogs at Curmudgucation. ...