KCH: Your firm, Higher Ed Strategy, does a great deal of work in the area of university rankings. I understand that rankings are important to institutions, but do they serve students? Do they direct energy and resources to things that actually help students?
AU: I think it depends in large part on what indicators they choose to measure. Obviously, ones which rely heavily on indicators solely focused on resources or research probably aren’t that helpful in helping undergraduates choose a school. I think another drawback of many rankings is the focus on a horse race: which is the best school. Obviously that’s nonsense; schools are good at different things, and most schools have the capacity to be among the best at something. The trouble is knowing how to disaggregate data in such a way as to capture those different strengths and weaknesses and present them in a way that is intuitive for prospective students and parents.

If you can do that, and if you can get parents out of this mindset that there is a single “best” school and into one which says there are many very good schools, one of which is probably a “best fit” for their son or daughter, then I think that various types of rankings – especially of the “personalized” sort which are increasingly popular in Europe and which are done in Canada by The Globe and Mail’s “Navigator” system – can be quite useful and won’t incentivize institutions to invest in the wrong kinds of things.
KCH: One of the characteristics of your writing on higher education that I particularly enjoy is the attempt to challenge commonly held assumptions (myths?) about higher education. If you had to pick one assumption/myth that you find particularly irksome, what would it be?
AU: The idea that tuition fees have a major effect on access to post-secondary education (PSE). All the evidence suggests that in Canada at least, at present levels of tuition, student aid, and tax credits, changing net tuition by a few hundred dollars will have no effect at all on participation. Even at a couple of thousand, the effects would be minimal, and could be offset by better student aid. At a time when institutions are facing significant cutbacks, ruling out tuition increases because of entirely imaginary concerns about access is irresponsible.
KCH: The lack of meaningful data about university performance is often discussed. I suggest that access to better information by students and other stakeholders is one of the lynchpins for truly substantive changes in higher ed. While we seem to be finally making some headway in terms of generating data about school performance, we have a ways to go. What’s holding this back?
AU: Three reasons, I think. The first is that there is genuine and very healthy debate about the true purposes of universities. When you choose a set of indicators to measure institutional performance, whether you’re a government or a newsmagazine, implicitly you’re saying “Good universities are good because they are adept at the following things,” and there is genuine disagreement about what those things are.

The second is an outgrowth of the first: does it make sense to compare all institutions on all indicators? If institutions have different histories and missions, does it even make sense to be comparing them on some axes? I think the answer we are slowly shuffling towards is, “yes, we need comparable data from all institutions, but different institutions might have different benchmarks to be considered ‘successful’.” That’s the sensible way to go from a government performance-indicator point of view, anyway – though it’s hard to incorporate into rankings.

The third reason is maybe the most important – there’s genuinely no consensus on how to collect or interpret data on the educational experience. People have tried to get “inside the box” by asking questions about satisfaction or about class sizes, but that doesn’t really tell you much about whether any learning is going on. So we’re stuck with a bunch of proxies which are susceptible to manipulation, and not surprisingly, that’s souring a lot of people on the usefulness of “performance data.”

But I think what the university community needs to recognize is that the alternative to good learning indicators isn’t “nothing,” it’s “a lot of bad indicators.” So throwing your hands up and saying “we can’t measure this” is effectively an invitation for rankers to do whatever they want. I firmly believe that universities get the rankings they deserve; if they choose not to engage on the issue of how to measure their own outputs, they’re going to be stuck with some bad ranking systems, simple as that.
KCH: Much of your work is in Canada. What is the most significant difference between the Canadian higher education system and systems in other OECD countries?
AU: Like much of the OECD, we have a mostly public university system. The differences are that it receives a substantial proportion of its resources from non-government sources like tuition (less than Japan, Korea, and the US, but much more than anywhere else) and as a result is much better resourced than most systems, allowing us to spend much more money on academic staff than other countries. It is much more open to mature students than most other systems, providing lots of different pathways to degrees, but it is among those least open to having university-level activity done in non-university settings.