All things excellent are as difficult as they are rare.

SED OMNIA PRAECLARA TAM DIFFICILIA QUAM RARA SUNT

30 November 2011

Defining "Success" and Graduation Rates

Today, over at Joanne's, there is (by way of Joanne's other blog, Community College Spotlight) a link to a somewhat disturbing story:
At its final meeting on Tuesday in Washington, D.C., the 15-member Committee on Measures of Student Success (CMSS)—which includes several community college leaders and individuals who have served public two-year colleges—voted to approve its 26-page report. Among the recommendations: including part-time, degree-seeking students in the federal Integrated Postsecondary Education Data System (IPEDS), and collecting data on federal student aid recipients and students who are not academically ready for college.

The American Association of Community Colleges (AACC) was especially pleased that the committee urged the Education Department to calculate and publicize a single completion rate that includes students who receive degrees and certificates, as well as those who subsequently enroll in another higher education institution. The combined graduation-and-transfer rate would vastly improve the student success rate, AACC said in a statement, noting that the combined rate is required by federal statute but has not been implemented.

“The community college completion rate would immediately increase to 40 percent from the current 22 percent if this single recommendation were adopted,” AACC said.

The first, most disturbing aspect of the story is this: there's an organization in the government (in the Dept. of Education, specifically) called "The Committee on Measures of Student Success". You can read a little about it here. Their sole job, as far as I can tell, is to write reports clarifying how the provisions of another law should be implemented. All hail the regulatory state, I suppose. When I was young, I was under the impression that the courts were the ones who interpreted the law when it wasn't clear. Silly me.

That's not really what I wanted to write about, though. I want to write about success.

Anyone with a pulse should be able to spot the logical problem with a statement like, "The completion rate would immediately increase from 22 to 40 percent." It's not the same rate if you change what you're measuring. Now, I don't mean to say that the people at the AACC are stupid -- they mean "the rate measured by law", and they're just using the term "completion rate" somewhat inartfully. I didn't come here today to pick on perfectly smart people speaking casually. That's allowed.
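Just to make the arithmetic concrete, here's a minimal sketch with hypothetical cohort numbers, chosen to match the quoted figures (and assuming, as I read the AACC statement, that the "combined" rate simply adds transfer-outs to the numerator while the cohort stays the same):

    # Hypothetical cohort of 1,000 first-time, degree-seeking students
    cohort = 1000
    completed = 220      # earned a degree or certificate (the current ~22% rate)
    transferred = 180    # enrolled at another institution without completing here

    completion_rate = completed / cohort                  # 0.22
    combined_rate = (completed + transferred) / cohort    # 0.40

    print(f"completion only:       {completion_rate:.0%}")   # 22%
    print(f"completion + transfer: {combined_rate:.0%}")     # 40%

Same students, same cohort -- all that changed is which of them count as a "success" in the numerator.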

But this concern over how to define completion rates brings up an interesting set of issues. In this case, the concern seems motivated by the reporting requirements of federal legislation. Those requirements are in turn motivated by an apparent belief that measuring "successful outcomes" at a school is how to determine if a school is doing its job.

Which is sort of true, I suppose, but problematic if you don't look outside the school's own standards for determining "successful outcomes." Let me explain what I mean.

Navy SEAL training is a school. It's an excellent school. We can tell it's an excellent school because its graduates go out and proficiently defend their country in a wide variety of extremely difficult situations. Their "outcomes" are damn good. But their outcomes are measured in terms of actual, real success -- not just success at school. Indeed, the graduation rate at the school is somewhat abysmal, something on the order of 20%.

But here's the interesting thing about SEAL school failures: most of them are "successful outcomes", too. Washing out someone who isn't ready is the right call. If you put someone into a high-intensity combat situation who isn't ready for it, and they slip, miss, stumble, or just choke and get people killed, that's a failure for the school. It's not a success at all, even though the school's graduation rate would be higher because they passed.

If you take a community college and judge its success by its graduation rate, something entirely within the control of the school, then a perverse situation develops. The school is going to be able to increase its number of "successful outcomes" (as we use the term officially) by lowering its standards and shuffling more people out the door, diploma in hand. More graduates, yes, but they could be less skilled than they otherwise might be.

In other words, the schools will be able to increase their official success rate by decreasing the number of real successful outcomes. Which isn't to say that they would do that -- but if that's even a possibility, it's a pretty strong clue that our way of measuring success is all messed up.

But, you might say, schools like Navy SEAL training "weed out" people. That shouldn't be the job of second grade!

Well, yes. It should. Second grade should "weed out" the people who aren't ready for third grade. And third grade should "weed out" the people who aren't ready for fourth grade. And so on.

The reason for this is that at some point the school system is going to have to weed people out, and it's not fair to put someone in twelfth grade and ask them to demonstrate high school academic proficiency if they weren't ready for the training in the first place. The lower grades are supposed to prepare you for the upper grades, which are supposed to prepare you for "life" or something like that. That's the theory. And if people are going into sixth grade unable to do sixth grade work, then the fifth grade teacher is failing to generate "successful outcomes", no matter how blisteringly high the graduation rate is.

If you look at graduation rates to determine success -- or even transfer rates, which, while a little better, can similarly be affected by academic fraud on the community colleges' part -- you're not looking at anything substantive at all. You're looking merely at process: how many people are being approved by this school? The answer, of course, is always going to be "As many as the school approves of." Think about that for a second and ask yourself what substantive standard is involved there.

We could, of course, look at the schools themselves and what they are actually doing. Are the classes filled with interesting, useful, and challenging information or are they busywork? Do the professors/teachers demand excellence or are they just marking time? Is failure of various academic sorts frowned upon or cavalierly tolerated? Is the environment supportive, competitive, combative, or apathetic? These are all substantive questions about what the school is doing and how it is doing it. And they have no necessary connection whatsoever to graduation rates. A graduation is only a "successful outcome" if it's an accurate signal for a certain kind of competence.

Let me distill my thoughts down to a short paragraph, something you can take away and quote:

Success might be its own justification, and it might have many fathers. But when you're teaching in a school, your success is your students' success. And their success is out there, not in here with you.
