July 1, 2006

Semipenultimate thoughts on graduation rates (includes Losen response to Mishel)

I'm acting one more (and last) time as a messenger/archivist for the discussion over graduation rates. Attached is a response by Dan Losen to Larry Mishel, after both of them (and Joydeep Roy) had commented extensively on a prior entry on graduation rates.

Now that we've come to the end of this round of debate, let me separate the different issues and lay out my judgment. (And, I promise, I'll discuss practical solutions tomorrow.)


  1. National Educational Longitudinal Survey (NELS) as a data source. Mishel and Roy use NELS:88 as evidence that high-school graduation rates are likely higher than what Greene, Swanson, et al., have been reporting. Like any data collection, it has some flaws. I'm apparently more concerned than others are about the exclusion of students with disabilities and others from the baseline, and about possible cohort effects, with this group more likely to graduate than the next half-decade or so of eighth-grade cohorts. But I think those issues budge the graduation rate for the NELS:88 cohort by 5-6 percentage points, maybe a bit more, not dramatically. Disaggregating by population group is more hazardous, I think. Big picture: NELS is counter-evidence against the dramatically low national graduation rates implied by grade-enrollment-based data, not a substitute for keeping tabs on graduation more recently. So in the end, NELS:88 doesn't tell us much about graduation rates in 2003.
  2. Current Population Survey (CPS) as a data source. Because CPS does not survey institutionalized populations (the biggest exclusions: prisons and the military), it's difficult to tell how that restricted universe biases estimates for subpopulations (the bias is largest for African-American males, as Mishel et al. acknowledge, and much smaller for most other subgroups). The questions CPS asks about graduation have changed over the years, as have the sampling frames, making comparable estimates across long stretches of time more difficult. CPS cannot get at geographic areas smaller than states, and as for within-state subpopulation estimates? Eeek. Don't bet your life on their accuracy.
  3. Common Core of Data as a data source. As I've discussed before with the example of Detroit's 2002-03 enrollment data, the Common Core of Data is an unverified, unaudited database. Enough said, right?
  4. Using ninth-grade enrollments in graduation-rate formulae. As Rob Warren's 2005 article (PDF) and Larry Mishel and Joydeep Roy (2006) each explain, using ninth-grade enrollment in the rate formula conflates first-time enrollment in high school with ninth-grade retention. The direction of that bias is unclear. On the one hand, heavy retention can inflate the apparent first-time ninth-grade population and thus bias graduation rates downward, when the preceding cohort(s) had higher retention rates or larger cohort sizes. On the other hand, retention can bias graduation rates upward under certain conditions: when there is substantial eighth-grade (or earlier) retention and when the preceding cohorts had lower retention rates or smaller cohort sizes. Essentially, it's a question of where the "lagging" part of the cohort is accounted for and the relative sizes of those lags. (A toy sketch follows this list.)
  5. Longitudinal graduation rates. In theory, they're better than the quasi-cohort measures proposed by Greene and Winters, the Boston College/Harvard group, or Warren, or the quasi-period measure proposed by Swanson. But that's in theory. As I explained Thursday with regard to Florida, there are plenty of ifs in the trustworthiness of attempts at true cohort measures, from the definitions of which exit codes count as a "transfer" to the confirmation/auditing of records.
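To make the accounting in item 4 concrete, here's a toy sketch in Python. Every number is invented for illustration; the point is only that repeaters in the denominator pull an enrollment-based rate below the true first-time cohort rate:

    # Toy illustration of ninth-grade retention bias. All numbers invented.
    first_time_9th = 1000   # students entering grade 9 for the first time
    repeaters = 150         # ninth-graders held over from the prior cohort
    diplomas = 700          # diplomas awarded four years later

    # An enrollment-based denominator conflates first-timers with repeaters.
    enrollment_rate = diplomas / (first_time_9th + repeaters)   # 60.9%

    # The true cohort rate uses only first-time ninth-graders.
    cohort_rate = diplomas / first_time_9th                     # 70.0%

Flip the relative sizes (a smaller or less-retained preceding cohort, plus substantial eighth-grade retention) and the same arithmetic biases the rate upward instead.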

The whole thing is making folks like Miami Herald reporter Matt Pinzur cry AAAARRRRGH!!, but there are some solid statements we can make:

  • The current system of data collection is inadequate for providing a trustworthy graduation rate. (This should, incidentally, make us very nervous about relying on school statistics in general for high-stakes decisions. In theory, a graduation rate should be among the easiest statistics to calculate.) Even states with experience running student-level databases, such as Florida or Texas, have problems.
  • Using ninth-grade enrollment data is a poor decision. Even using eighth-grade enrollment, as Warren does, needs to be checked against evidence about eighth-grade retention. I favor using birth year rather than first year in ninth grade as the cohort basis.
  • The whole ball o' wax (statistically speaking) goes down the drain if you don't have accurate migration statistics for the student population. This hasn't been part of the debate thus far, but it's at the heart of the article manuscript I submitted last week. And the bias can go either way, incidentally: dropouts reported as transfers will inflate the graduation rate, while students who move over the summer without the receiving school ever requesting a transcript (possible for ninth-graders who fail every course) will artificially deflate it. (A second toy sketch follows this list.)
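Again with invented numbers, here is how those two migration errors push a true cohort rate in opposite directions:

    # Toy illustration of migration misclassification. All numbers invented.
    cohort = 1000
    movers_out = 150    # students who genuinely left the system
    graduates = 650

    true_rate = graduates / (cohort - movers_out)                     # 76.5%

    # Dropouts miscoded as transfers shrink the denominator: inflation.
    miscoded_dropouts = 50
    inflated = graduates / (cohort - movers_out - miscoded_dropouts)  # 81.2%

    # Genuine movers never confirmed (no transcript request) stay in the
    # denominator as presumed dropouts: deflation.
    unconfirmed = 50
    deflated = graduates / (cohort - (movers_out - unconfirmed))      # 72.2%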

The best system would be an individual student-level database with built-in editing and confirmation steps, an annual system-wide audit of accuracy, and surveys of population migration. From that, you can build almost any accurate rate and account for transfers, migration, and retention. Because people will disagree over whether GEDs and other non-standard diplomas should count, states should provide multiple rates (including and excluding non-standard diplomas).
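As a minimal sketch (the field names and exit codes below are hypothetical, not any state's actual schema), one clean student-level record can support several defensible rates at once:

    # Hypothetical student-level records; exit codes are illustrative only.
    from dataclasses import dataclass

    @dataclass
    class StudentRecord:
        student_id: str
        birth_year: int    # cohort basis, per the argument above
        exit_code: str     # "diploma", "ged", "transfer", or "dropout"

    def graduation_rates(records):
        """Rates including and excluding non-standard completions;
        confirmed transfers are excluded from the denominator."""
        denom = [r for r in records if r.exit_code != "transfer"]
        standard = sum(r.exit_code == "diploma" for r in denom)
        nonstandard = sum(r.exit_code == "ged" for r in denom)
        return {
            "standard_only": standard / len(denom),
            "with_nonstandard": (standard + nonstandard) / len(denom),
        }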

As I promised, either tomorrow or at the end of the NEA Representative Assembly (where I'm volunteering at microphone 34, in the middle of the California delegate seating in the bleachers), I'll talk about solutions—what we can do to improve the likelihood of teens graduating, without waving a magic wand.

Posted in Research on July 1, 2006 5:05 PM