Simple, general solutions to college problems (just add data)
“Get some burgers, get some beers data, a few laughs, Dude, our troubles are over.”
http://www.metacafe.com/watch/an-KDCB4bbmbh24m/the_big_lebowski_1998_walters_plan/
I thought of Walter’s line in The Big Lebowski while reading Dylan Matthews’s piece in the Washington Post Wonkblog on college costs (part X: How do we fix it?) this morning. Matthews seems to think that better data on student outcomes, combined with a few readjustments of federal incentives, will drive down college costs and tuition. I am not as well acquainted with college costs and financial aid as Matt Reed, Sara Goldrick-Rab, or Sherman Dorn. But I do know enough to know that it is way more complicated than most pundits realize. I often invoke Archibald and Feldman for linking higher ed costs to the trajectory of costs in the rest of the service economy. Certainly higher education does have some differences from other services that require lots of education, like dentistry. But it is far more complicated than Matthews realizes. Matt Reed makes that point quite eloquently here in response to part VI of Matthews’s series. In this most recent installment, though, Matthews steps into an area that I do know a little more about: assessment. While Walter tells the Dude that “we’ll go out there, brace the kid, should be a pushover, we’ll get our million dollars back,” Matthews has a similarly unreasonable expectation of how easy it should be to measure student learning outcomes:
The worry with proposals like the super Pell Grant or student loan caps, as with all price controls, is that it risks damaging the quality of the institutions. That’s a valid concern. But it raises a troubling point: we, at the present moment, have literally no idea how good different higher education institutions are. [emphasis mine] We don’t know anything about which are better at imparting given bodies of knowledge, which are better at getting their students paying jobs, which are better at producing voters and soldiers and other contributors to civic life, or any number of other outcomes.
This is a huge problem with how we regard higher education. When Matthews writes “how good,” he is imagining one Platonic test that compares schools on “imparting given bodies of knowledge,” and perhaps another for “contributors to civic life,” or any other [finite] number of [interval scale, ranked] indices for student outcomes. He then goes on to describe NSSE and CLA data, and what a shame it is that they are kept secret, because they are the best data we have.
I am no data nihilist. In fact, I am the chair of my college’s assessment committee. I am curious about and devoted to measuring learning outcomes in my own classes, and to helping others in my college use measurement to be reflective about curricula and pedagogy, and to improve them. But the more experience I get grappling with doing responsible assessment while leaving space for teacher creativity and student transformation, the more skeptical I am of the CLA and other national standardized tests of critical thinking skills, writing skills, etc. I get the feeling that Matthews and his ilk think that we at colleges are looking at a big bowl of ingredients, and if we just mixed it properly we would get a uniform dough that could be measured and weighed properly (“but we could be sophisticated about it, we wouldn’t just weigh it, we could test plasticity, or moistness”). But instead, we have the ingredients for many different meals, and there is value to keeping dimensions separate, and incomparable. Which is better, kale with olives or a red velvet cupcake? Which is better, arts education or history? Which is better, a college with a tradition of local service projects or one with great graduate school placement rates?
Saying that we have “literally no idea how good different institutions of higher education are” is like saying we have literally no idea how good restaurants are. People enter restaurants with different values and expectations, not just expecting calorie counts and one dimension of taste. And sometimes they know that a better one is across town but they feel a lot more comfortable closer to home, so they’ll deal with that rude waiter.
Matthews closes his piece by placing all his bets on better data (which he assumes starts with the CLA and NSSE):
Without better data, there’s no way to defend the contribution that college makes to our economy and our society, and no way to make that benefit cheaper for those who need it.
I love data. I’m all for better data. But let’s not confuse data with value. Most people don’t. We won’t get any closer to solving our problems by insisting people act more like economists. Maybe we’d do better if we started insisting that economists act more like people.
Yeah, I’m also very skeptical of this idea that even the best data could be used to produce a meaningful metric for the quality of universities. Solving that problem for K-12 is hard enough, where the constraints are much stricter: you don’t have a self-selected student population (mostly), and you don’t have tremendous variation in fields of study undertaken by those students (mostly). Where you do have those factors present in a K-12 setting (e.g. magnet vs. public vs. private vs. special-needs schools, specialized/vocational high school curricula), they render the K-12 evaluations meaningless.
Better data could, I suppose, make it possible to do certain specific, narrow comparisons: does school X produce better outcomes than school Y, where X and Y have extremely similar curricula and extremely similar incoming student populations? But anything beyond that really is meaningless. Does Caltech produce better outcomes than Juilliard? Does UDC do something more impressive with its resources and its incoming student population than Harvard does with its own? What would such questions even mean?
Unfortunately, this rhetoric strikes me as one more chapter in the ongoing campaign in the American media to deny the role of social class and entrenched wealth in shaping young people’s lives. When I started at Columbia Law School, I was overwhelmed by the sensation that 90 to 95 percent of the students there were going to be very successful regardless of what they learned or didn’t learn in class: sure, most of them were smart and capable, but more importantly, most of them came from backgrounds that guaranteed them, at the very least, a nice fat sinecure somewhere for the rest of their lives. I suspect you wouldn’t find the same to be true of the students at, say, SUNY. If that’s the largest single determinant of post-graduation outcomes, how do you factor it into the evaluation of a school’s performance?