Thursday, June 4, 2009

Sorting, Selection, and Success

Cross-posted from Brainstorm, over at the Chronicle of Higher Education.

The latest report from the American Enterprise Institute, Diplomas and Dropouts, hits one nail on the head: plenty of students starting college do not finish a degree. Access does not equate with success, and partly as a result, U.S. higher education is perpetuating a lot of inequality.

What do we do about this? The authors identify a key fact: “analysis of graduation rates reveals wide variance among institutions that have similar admissions standards and admit students with similar track records and test scores.” They interpret this to mean that “while student motivation, intent, and ability matter greatly when it comes to college completion, our analysis suggests that the practices of higher education institutions matter, too.”

This is a pretty common argument made by many policy institutes and advocacy organizations, including but not limited to the Education Trust and the Education Sector. I understand their goal: to make sure that colleges and universities can’t hide behind the excuse of “student deficits” in explaining low graduation rates, and instead focus on the things they can do something about. In some ways that mirrors efforts over the last fifty years to focus on “school effects” in K-12 education; witness the continuing discussion of class size and teacher quality despite evidence that overall variation in student achievement is much more attributable to within-school differences in student characteristics than to between-school differences (school characteristics). Like many others, I read those findings to say that if we really want to make progress in educational outcomes, we must address significant social problems (e.g., poverty, segregation) as well as educational practices. Don’t misinterpret me: it’s not that I think teachers don’t matter. It’s simply a matter of degree: where and how we can make the biggest difference for kids, and under what circumstances.
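To make the within-school/between-school point concrete, here is a minimal sketch, using made-up numbers rather than any real achievement data, of the kind of variance decomposition those school-effects findings rest on: simulate students nested in schools where school-level differences are modest relative to student-level differences, then ask how much of the total variation sits between schools.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes and effect scales (assumptions, not estimates):
# school-to-school differences are small relative to the spread of
# student achievement within any one school.
n_schools, students_per_school = 200, 50
school_sd, student_sd = 0.3, 1.0

school_effects = rng.normal(0, school_sd, n_schools)
scores = (school_effects[:, None]
          + rng.normal(0, student_sd, (n_schools, students_per_school)))

# Crude moment-based decomposition of total variance.
between = scores.mean(axis=1).var()   # variance of school means
within = scores.var(axis=1).mean()    # average variance within a school
icc = between / (between + within)    # share of variance between schools

print(f"between-school share of variance: {icc:.2f}")
print(f"within-school share of variance:  {1 - icc:.2f}")
```

With these invented parameters, roughly ninety percent of the variation sits within schools, which is the general shape of the pattern the school-effects literature keeps turning up. The point isn’t the exact number; it’s that the between-school slice is the only part that school-level policies can act on directly.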

Unlike in K-12, access in higher education isn’t universal, and competitive admissions processes and pricing structures result in lots of sorting of kids into colleges and universities. As a result, institutions differ tremendously in the students they serve. In turn, as the AEI report admits, this necessarily shapes their outcomes.

The problem is, all this sorting (selection bias) has to be properly accounted for if you want to isolate the contributions that colleges make to graduation rates. (I’ll qualify that briefly to add that the role college enrollment management plays in the sorting process, through tuition setting, financial aid, and admissions, is quite important, and is under colleges’ control.) But if you want to isolate institutional practices that ought to be adopted, you first have to get your statistical models right.
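To see what that means in practice, here is a minimal sketch with entirely simulated numbers (nothing here comes from the AEI data or any real institution): two hypothetical colleges that do nothing different for their students end up with very different raw graduation rates, purely because admissions sorts better-prepared students into one of them. A naive comparison of the raw rates attributes the gap to the institutions; conditioning on the sorting variable makes it largely disappear.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# One hypothetical student trait driving both admission and graduation.
prep = rng.normal(0, 1, n)

# Sorting: better-prepared students are more likely to land at College A.
attends_a = rng.random(n) < 1 / (1 + np.exp(-2 * prep))

# Graduation depends only on preparation -- the colleges are identical.
graduated = rng.random(n) < 1 / (1 + np.exp(-(0.5 + 1.5 * prep)))

print(f"College A raw graduation rate: {graduated[attends_a].mean():.0%}")
print(f"College B raw graduation rate: {graduated[~attends_a].mean():.0%}")

# Compare students with similar preparation and the "institutional"
# gap mostly vanishes.
band = np.abs(prep) < 0.1
print(f"Similar-prep students: "
      f"A = {graduated[attends_a & band].mean():.0%}, "
      f"B = {graduated[~attends_a & band].mean():.0%}")
```

In this toy setup the raw gap is entirely an artifact of who enrolls where. Real data are messier, of course; the variables doing the sorting are many, and plenty of them never get measured, which is exactly why the modeling has to be done carefully.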

Unfortunately, I don’t think the AEI authors have done that. To be sure, they try to be cautious, pointing out colleges that look “similar” but have extremely different graduation rates (rather than modestly different ones). But how they determined “similarity” leaves a lot to be desired. It seems to rest entirely on level of selectivity and geographic region. Their methods don’t begin to approach the gold-standard tools needed to figure out what works (say, a good quasi-experimental design). Important student-level characteristics (socioeconomic background, high school preparation, need for remediation, etc.) aren’t taken into account. Nor are many key school-level characteristics (e.g., resource levels and allocations). In sum, we are left with no empirical evidence that the numerous other plausible explanations for the findings have even been explored.

I’m not surprised by this, but have to admit that I’m a bit bummed. Yes, I “get” that AEI and places like it aren’t research universities. Folks don’t want to spend long periods of time conducting highly involved quantitative research before starting to talk policy and practice. But I don’t see how this approach moves the ball forward; sure, it gets people’s attention, but it’s not compelling to the educated reader, the one who votes and takes action to change the system. Moreover, it doesn’t get us any closer to the right answers, or provide any confidence that if we follow the recommendations we can expect real change.

There have been solid academic studies of the causes of variation in college graduation rates (here’s one example). They struggle with how hard it is to deal with the many differences among students and colleges that are not recorded, and thus not detectable, in national datasets. If we want better answers, we need to start by investing in better data and better studies. In the meantime, I think skipping the step of sifting and winnowing for the most accurate answers is inadvisable. Though, sadly, unsurprising…