College Graduation Rate Data are Flawed, but Do We Need that Information?


People who are intent on planning and controlling social phenomena are almost always great advocates of data. They want data so they can figure out how to change and (to their thinking) optimize things. A recent article on Ed Central, “The College Graduation Rate Flaw That No One’s Talking About” puts me in mind of that.

The author, Ben Miller, is a Senior Policy Analyst at the New America Foundation, a "liberal" group that generally supports an expanded government role. The gravamen of his complaint about the graduation rate data now available is that they make it difficult for politicians and analysts to accurately compare completion rates across various kinds of educational institutions.

For example, IPEDS (Integrated Postsecondary Education Data System) data create the impression that for-profit institutions have a higher graduation rate than do community colleges. That's badly misleading, Miller proceeds to show, because it means "judging the rate at which students earn associate degrees against shorter certificate programs."

For the sake of argument, let’s concede that our higher education data are flawed. Making apples to apples comparisons between institutions is difficult. And the reason why that matters so much to education analysts is that they’re concerned about schools that have low graduation (or attainment) rates. To higher education policy types, it’s a sign of trouble when schools “graduate their students” at too low a rate.

One of the main ingredients in the Obama Administration’s proposed college rating system would be each school’s graduation rate. Getting higher ed policy right calls for lots of data – reliable data.

Here’s the problem I see with this. Data (on education and almost everything else) simply encourage more government meddling. They appear to identify “problems” and politicians and policy experts immediately jump in with proposed solutions. Such solutions, however, usually cause new problems and deepen the government’s interference in the workings of the free market. Therefore, it is better not to collect data at all.

That heretical position was espoused by Sir John Cowperthwaite, the British Financial Secretary of Hong Kong during its rapid economic rise. Perhaps because he did not have to run for office and pretend to be busy "improving" conditions, he famously left the people alone to succeed (or fail) without government intervention. If you're not familiar with this wonderful man, read the tribute that the Cato Institute's Marian Tupy wrote after Sir John's death in 2006.

Tupy, who studied in Hong Kong, got to know Sir John well. In a conversation, Tupy asked him what policy reform he was most proud of. “I abolished the collection of statistics,” was his reply. Tupy explains, “Sir John believed that statistics are dangerous, because they enable social engineers of all stripes to justify state intervention in the economy.”

That is perfectly true – consider how politicians claim, for example, that if unemployment statistics go up, that calls for more federal “stimulus” spending.

It is equally true with regard to higher education. Suppose we had “perfect” (or merely good) statistics so we could make apples to apples comparisons on graduation rates. Then what? Politicians and bureaucrats would devise new regulations to reward institutions that seem to be doing a fine job of “graduating their students” and to punish those that are not.

But schools do not “graduate their students.” Students either do what is required of them to graduate or they don’t. Those who do graduate may benefit from their actions. Equally, those who do not might be making the best decision, since higher education is not a wise choice for everyone. Governmental meddling is unnecessary, and if it occurs, it’s apt to alter incentives in a way that makes matters worse.

No one demands statistics about activities that the government has nothing to do with – gyms and health clubs, for example. Some consumers make great use of them and rapidly improve in whatever metrics most concern them – weight loss, stamina, lifting, and so on. Others rarely go, or fritter away their time when they do. Whatever statistics may be collected are collected by individuals for themselves. Policy wonks do not insist on data that might show which clubs are more “effective” than others.

Why not? Because no government money is spent on subsidizing health club memberships. Even the most interventionist politicians seem to understand that individuals will act as they think best with regard to fitness since they’re spending their own money. No health club data could make any difference.

Rather than fretting about our flawed higher ed statistics, I suggest taking a page from Sir John Cowperthwaite's book and stopping their collection entirely. They just encourage further meddling in a field where such meddling has already created a terrible mess.


George Leef is director of research for the John William Pope Center for Higher Education Policy.
