
Reports on College Student Learning Leave Out Passion

Thursday’s David Brooks column talks simple-mindedly about wanting colleges to be held accountable for student learning, somewhat misstates a key claim of Academically Adrift, and is written in a generally David Brooksish style, which should only be alarming if you were expecting a Krugman column. To someone deep in the territory controlled by the Southern Association of Colleges and Schools, the response by some seems… well, not exactly on point. Neither was Brooks’ column, but I rarely expect clarity from newspaper columnists.1

Here are some of the fundamental limits on holding higher education institutions “accountable” in the way politicians have imposed test-based accountability on K-12:

  • Because college students take different programs, assessing progress in those individual programs in a rigorously comparative way is impractical for all but the largest majors (psychology, business, maybe a few others).
  • If there are common elements of what we expect students to learn in college, the options for assessing that knowledge or set of skills are in their infancy, at best.
  • Many of the public and private goals for college are either noncognitive or otherwise difficult to assess.2
  • Students progress through college at very different speeds and often enroll in or take courses at two or more institutions in their undergraduate years.3
  • Not only do colleges and universities have different levels of resources, but they frequently have different missions in terms of the student populations they serve.4
  • The common rhetoric that college is essential for individual economic advancement, along with cost-shifting from public funds to students and families, implies that (because they will be receiving the primary benefits of college) students are responsible for their own costs and effort.
  • Corollary of the last point: as college is seen as a private rather than a public good, students and families make choices based on their perceived desires, which may be far from specific cognitive outcomes.

The way that regional accreditation agencies have responded to the accountability discourse has not helped much. In the South, SACS members must select program goals, identify objective measures for those goals, report on the measures, and “close the loop” by discussing program improvement. It is a classic iterative program cycle, perfectly rational and all too unlikely to accomplish much beyond occupying people’s time. It’s not because of rubrics: SACS does not require the creation of rubrics to evaluate the fuzzy and complex world of advanced student work (yes, SACS wants assessments for doctoral programs).5

The problem is the poor fit between a very generic process and the organization, work patterns, and passions of individual disciplines. At almost any level, the process can turn into paper-pushing for any of several reasons, and then the whole point is lost. Three degree programs in my department have national accreditation or approval specific to those programs, and the specificity in those reports is orders of magnitude beyond what SACS expects. There is something at least mildly disorienting about the generic questions when your mind is at the level of the very specific. No one has yet phrased the closing-the-loop report as the program equivalent of an elevator pitch, but maybe it belongs there (and the forms need to reflect that).

At its root, even if we could get programs to think about SACS assessment as an elevator pitch, the problem is that the SACS-assessment elevator pitch is largely irrelevant to prospective students and the real choices made about colleges: “In the last year, we know that well over 80% of students competently analyzed primary sources in a 3-page paper assignment.” You know that no one chose a college based on information like this. It just doesn’t happen.

What got me excited about majoring in history at Haverford was walking into Special Collections and thinking about working with archival documents, and talking with history majors who had worked or were working with archival materials. What sells Evergreen State College to many prospective students is listening to current students talk about the interdisciplinary programs they are in and the work they have been doing. What gets some potential students excited about attending Reed College is being walked into the tower of the library, with its hundreds of student theses, and being invited to take any one off the shelf and read through it. What gets some potential students excited about attending George Washington University is being walked through the business school and imagining oneself in a small seminar analyzing current business cases. And, yes, what sells Ohio State University to many potential students is the thought of painting one’s face and getting smashed on Saturday afternoons in the fall.

What sells individual colleges and programs is the dopamine response that comes from students’ and parents’ internal thoughts of, “I could imagine [my child] studying here and being happy, and I like the idea.” Sometimes it’s a little less ambitious: “I could imagine [my child] completing a degree here without going crazy or $100K into debt, and I need that.” No one has ever chosen to attend a particular college or university because of SACS assessment data. Even if admissions offices do not see their job as making dopamine receptors light up on campus tours, they know their job is to encourage prospective students and their families to fall in love with a college or university. That is also why parents generally do not ask, “How much do students here learn? How do you know?” (Brooks’ suggested questions): “learn” is less concrete than individual subjects and less of a draw than “fall in love with their major.”

The hidden bet of the Lumina Foundation and the Tuning projects is that one could align general statements of academic goals with the key passions of a discipline. Oh, I know that all of the project documents refer to student learning outcomes and discipline-based expectations, and while I have some concerns about the national history Tuning project, creating a list of objectives tied to disciplinary conventions is doable in subjects such as history and physics. But while a better-grounded set of program goals is a good thing for many reasons, assessing progress towards those goals omits many reasons why students love or hate specific subjects. How could an assessment of college be complete without the passion we hope students will have?6

Notes

  1. If you disliked Brooks, you would positively hate John Stossel. Probably do, in fact.
  2. Also true for K-12 schools.
  3. A less intense equivalent is true for K-12 schools.
  4. This is much more intense than in K-12: if there is anything USNWR rankings have taught us, it is the perverse incentives of rewarding an elite mission with even more prestige.
  5. Rubrics will often serve the purpose, at least for paper compliance. But that’s not what anyone really wants.
  6. No, no solutions proposed… yet.

This blog post has been shared by permission from the author.
Readers wishing to comment on the content are encouraged to do so via the link to the original post.

The views expressed by the blogger are not necessarily those of NEPC.

Sherman Dorn

Sherman Dorn is the Director of the Division of Educational Leadership and Innovation at the Arizona State University Mary Lou Fulton Teachers College, and editor...