January 4, 2010

And Bill Brass would run through the streets of Edinburgh shouting "Eureqa!"

If you are a social scientist and haven't checked out Eureqa, you should spend a few hours playing with it in the next few months, because it's entirely different from your prior experiences with statistical software. Featured last month in a Wired article, Download Your Own Robot Scientist, Eureqa is not a scientist but a statistical engine that generates potential formulae to solve a defined problem from data, evaluates the formulae as it goes, and does so using a set of operations defined by the user. The usual (somewhat tedious) method for those of us trained in the social sciences is to think very clearly about the problem, define a potential model (or, in reality, the form of a function and the variables that would go into that function), and let software estimate parameters to minimize some definition of error or to maximize the likelihood of having observed the collected data. If the "think very clearly" sounds remarkably Cartesian, so be it. In the best of worlds, that a priori modeling can lead to interesting and useful findings, even if you're also exposed to John Tukey-like practicality (such as his 53H smoothing). There's also the "churn it out" school of automated stepwise regressions that used to be an excuse for researcher laziness, though I have recently accepted a manuscript for Education Policy Analysis Archives with precisely that tool used at one step (and for very justifiable and practical reasons--the authors were not being lazy one whit).

So into this world of "try out one well-justified family of models at a time" rushes Eureqa, threatening either to upset the applecart or to lead to some very interesting possibilities. Instead of comparing a set of nested models, where model summaries often allow inferential judgments of the utility of additional variables, Eureqa compares some very different models, and the conclusions one can draw in comparing them are restricted to the sample (where many people would argue we're always restricted, but I'll skip the metatheoretical discussion of inferential statistics). So what the heck is the use of Eureqa?

To get a glimpse of the possibility, let me tell you about my experience. Looking at one of the images in the tutorials, I saw a sine curve whose magnitude diminished, and I thought, "Okay, let's see how quickly Eureqa recognizes that." I synthesized numbers in a spreadsheet to fit a formula whose amplitude diminished to 0 asymptotically (i.e., as the independent variable headed to infinity) and plugged them into Eureqa, telling it that it could add, subtract, multiply, divide, and use sines and cosines in any combination. In a few minutes, Eureqa spat out an optimal formula that was identical to the one I had used. Okay, so far so good, but I had made it easy the first time out.
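If you want to replicate the experiment, here is a minimal sketch of the kind of synthetic data I mean--in R rather than a spreadsheet, and with a decay rate and frequency that are arbitrary illustrations, not the values I actually used:

    # A sine wave whose amplitude decays to 0 as x heads to infinity.
    # The decay rate and frequency here are arbitrary illustrative choices.
    x <- seq(0, 50, by = 0.1)
    y <- exp(-0.05 * x) * sin(2 * x)
    write.csv(data.frame(x, y), "eureqa_test.csv", row.names = FALSE)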

Next, I added an error term. Eureqa asked me if I wanted to smooth the data first. No, I said, and Eureqa had some problems, so I went back and checked the "smooth" box. Eureqa dutifully chugged away, and one of the candidate formulae was almost the same as the one I had used (minus the error term), but it wasn't the prime candidate after several minutes. Instead, Eureqa proposed a sum of two sine curves with slightly different periods. I thought about it and realized, oh, yes, of course. One way to produce a diminishing-amplitude sine wave is to build the diminishing amplitude in directly, but another is to sum two sine waves with almost but not quite identical periods. As time goes on (or x increases), the waves shift from constructive to destructive interference, and the amplitude of the sum decreases. In a real-world environment, we would need to extend the time (or observe at higher x's) to disconfirm one of the two candidates--increasing amplitude after some time would lend evidence to the two-wave interference formula. Eureqa had neatly forced me to think of another way to see the data.
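For those who want the trigonometry behind Eureqa's alternative: summing two sines with nearly equal frequencies is equivalent to a single sine carrying a slowly oscillating amplitude envelope,

    \sin(\omega_1 x) + \sin(\omega_2 x)
        = 2 \sin\!\left(\frac{(\omega_1 + \omega_2)x}{2}\right) \cos\!\left(\frac{(\omega_1 - \omega_2)x}{2}\right)

When the two frequencies are nearly equal, the cosine factor has a very long period, so over a short observation window the amplitude appears to decay toward zero--and only a longer window reveals it growing again, which is exactly why extending the observations discriminates between the two candidate models.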

And that is the obvious first-order value of Eureqa: to generate different ways of seeing data. But it isn't the only value. And to make the argument for the second value--generating reliable models for complex social data--I'll ask for some help from the late Bill Brass, a Scottish medical demographer I encountered in graduate school through the Brass logit relational model of life tables, a 1971 method for transforming a single model life table into the life tables of real countries in real time using two parameters (alpha and beta--okay, so he wasn't exactly stellar in the naming-parameters department, but he was a brilliant practical demographer otherwise). The Brass logit model has some problems at extreme age ranges and for countries with unique mortality conditions, but given the complexity of mortality experiences through time and across continents, having any simple model that can take a model set of age-specific measures and transform it into something anywhere close to real experiences is ... well, amazing. And Brass did it without the help of microcomputers.
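For readers who have never met the model, it is compact enough to fit in a few lines of R. Here is a minimal sketch, assuming a standard life table's survivorship column l(x) scaled so that l(0) = 1; the alpha, beta, and l(x) values below are arbitrary illustrations, not estimates for any real country:

    # Brass logit relational model: Y(x) = 0.5 * ln((1 - l(x)) / l(x)),
    # and a fitted table satisfies Y(x) = alpha + beta * Y_standard(x).
    brass_logit <- function(lx) 0.5 * log((1 - lx) / lx)
    brass_inverse <- function(y) 1 / (1 + exp(2 * y))
    l_std <- c(0.95, 0.93, 0.90, 0.85, 0.75, 0.55, 0.25, 0.05)  # standard l(x) for x > 0, illustrative
    alpha <- -0.2; beta <- 1.1   # illustrative parameters; alpha = 0, beta = 1 returns the standard table
    l_fitted <- brass_inverse(alpha + beta * brass_logit(l_std))

Shifting alpha moves the overall level of mortality; beta tilts the balance between child and adult mortality--two parameters to map one standard table onto a wide range of real ones.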

Since the early 1970s, a number of demographers have tinkered with the Brass logit model, and they have had the benefit of microcomputers, though not of Eureqa. Before microcomputers and fast computing using individual-level data, demographers had to use a combination of mathematical models and the type of statistical insight that Brass brought to life tables. So could demographers and other social scientists use Eureqa to generate this type of relational model for a range of data? Possibly, and they could certainly use Eureqa to generate candidate models. I'd be curious to see if Eureqa could come up with anything close to the Brass logit model if fed an appropriately prepared set of data. Demography grad students, here's a great project--see if Eureqa can beat Bill Brass!

In the next month I'm finishing up my EPAA editorial duties (or coming as close to it as I can in preparing approved articles) and delving more intensively into unfinished projects. But there's a small project that's perfect for Eureqa. I have no idea if it'll come up with anything useful, but because Eureqa's proposed solutions are sample-dependent and Eureqa splits the sample into training and validation sets (and uniquely per run), Eureqa gives me a perfect routine for dull tasks: do work, take break to see how Eureqa is running, capture proposed solutions, restart the run with a new training/validation split, go back to dull task, rinse, repeat. It doesn't require the intensity of concentration I'll need for unfinished projects this spring.

November 12, 2009

Race to the Top: review, revise, redux

I am in California this weekend for the Social Science History Association annual meeting, where we get to talk about Maris Vinovskis's book on the last quarter century of school reform, and since one of my copanelists Saturday morning is Jennifer Jennings, I finally get to meet the sociologist-formerly-known-as-Eduwonkette face to face. Because several family members live in Costa Mesa, I also get to enjoy Kean Coffee about 20 miles south of the conference hotel/cruise ship (when the heck did the SSHA officers decide to book the Queen Mary??!).

While the focus of the book panel will be ... well, Maris's book, I'm sure we'll be talking about Obama education policy at some point, including Race to the Top. I was rushing around last night not getting enough done, so I didn't have a chance to do more than casually skim the stuff that's now available on the revised final guidelines. A few initial thoughts:

  • Bottom line? No idea. I traveled west and had coffee (see above), so I don't have a bad case of jet lag, but I've been on planes for 7 hours today. 
  • I very much like the competitive priority on STEM fields. That uses a standard device for focusing grant-writers' minds in USDOE competitions (bonus points for meeting a competitive priority). (Disclosure: it looks like my state's department of education is following the push a bunch of us have been making to use Race to the Top funds for end-of-course exams, especially in science.)
  • From the list of changes, it looks like there were a lot of political calculations about what had to change to keep stakeholders in the game and what had to stay the same to satisfy policy goals.
  • Duncan is not anal retentive enough to make the points add up to a "nice round number." I have a suspicion this is deliberate, and if so I think I know the reason why.
  • People who focus on the total potential range of points for each section are missing an important feature of point distributions in scoring systems: it's the actual range, not the potential range, that matters for rankings. If the potential range on one component is 58 points from top to bottom but the scoring leaves a real-life range of 10 points, it doesn't matter that the potential total is 58; it could have been anything from 10 to 58 without changing the rankings. So what matters is how the reviewing panel looks at everything.

If we have time, I'll try to persuade Jennings to put on her Eduwonkette cape and save the state where I grew up. But I think California's problems are beyond what even a brilliant sociologist can solve. At least I get to see family members, which is worth the jet lag I'll be fighting in the next week.

October 14, 2009

R!

A graduate student in biostats here is willing to tutor me in R (the open-source implementation of the S language, and thus a free counterpart to S-Plus). Yes, faculty need to learn additional things, and sometimes that requires tutoring/coaching. For me, those realms include music, martial arts, and now new stats packages. And because my college wants guinea pigs for its exercise-training classes, I'm doing that as well this semester.

July 13, 2009

Hechinger Institute hypes the obvious -- this is a role model for reporters??

I received an e-mail advertisement for a "webinar" on "The Dropout Crisis" from the Hechinger Institute. This is an organization that claims it "exists to equip journalists with the knowledge and skills they need to produce fair, accurate and insightful reporting."

Both the e-mail and the webpage for the seminar claim that "new research shows that 17 states produce some 70 percent of the students who don't graduate." Is that a mundane claim being hyped to drive seminar enrollment, or is it truly interesting? A quick check of the relevant table from the 2007 Digest of Education Statistics reveals that--voila!--the 17 states with the highest high school enrollment also contain about 70 percent of all 9-12 enrollment in the 50 states and DC (70.4%, if you want a third significant digit). In other words, this fact would be entirely expected simply from the pattern of school enrollment across the states.
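The arithmetic deserves a spelled-out version: if every state lost students at the same rate, each state's share of non-graduates would exactly equal its share of enrollment, whatever the rate. A two-minute check in R, with invented enrollment numbers:

    # Under a uniform non-graduation rate, each state's share of
    # non-graduates equals its share of enrollment, for any rate.
    enroll <- c(2000, 1500, 900, 400, 200)  # invented state enrollments (thousands)
    rate <- 0.25                            # any uniform non-graduation rate
    nongrads <- rate * enroll
    all.equal(nongrads / sum(nongrads), enroll / sum(enroll))  # TRUE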

So... is the Hechinger Institute modeling the type of "fair, accurate and insightful" publications that they wish reporters to produce, or are they trying to jack up enrollment with scary and misleading statistics? C'mon, folks: high school graduation and dropout patterns are of serious concern as it is, without modeling patently bad reporting.

June 18, 2009

The world is complicated, part 752

So the Center for Research on Education Outcomes has a report on charter-school performance, the Center on Education Policy has released a report on student achievement trends, NAEP released art-education data, and the spin has begun. Missing from almost all the reporting: statements about the extent of peer review for any of these reports. I'm not too worried about the professionalism of these reports, since I know that the Department of Education always has an internal review process, CEP usually asks researchers in the area to review draft reports, and I would be surprised if CREDO did not have a pre-publication review process. However, the failure to report on the extent of peer review is a continuing and glaring omission in the reporting of education research.

In terms of the substance of the reports, I'm up to my eyeballs in prior commitments, but it's clear from the brief reading I have been able to do that the findings for all three reports are more complicated than the spin emanating from many of The Usual Suspects.* That's not news, I know, but I am the King of Things That Are Obvious Once He States Them, and I have a job to do.

* a great name for an a cappella group, if you happen to be starting one up.

June 13, 2009

On graduation rates and auditing state databases

I sympathize with Florida's Deputy Commissioner of Education Jeff Sellers, who found himself defending the state's official graduation rate the week that Education Week published its Swanson-index issue and pointed to Florida as a low-graduation state, using numbers far below the state's official ones.

Some perspective: Florida's official graduation rate is inflated, but it's still better than Swanson's. Florida's graduation rate does more than Swanson (i.e., does anything) to adjust for student transfers and the fact that ninth-grade enrollment numbers overestimate the number of first-time ninth graders. 

Because of Florida's state-level database and the programming/routines that already exist, Florida is much closer to the new federal regulatory definition of a graduation rate than many other states, and Commissioner Eric Smith has been preparing the state board and other interested parties for the likely effect of the change on the official published rate -- i.e., that the rate will be a visible quantum lower than the currently published rates (and largely for the reasons I have explained in the 2006 paper linked above). So in a few years we'll get an estimate closer to a lay understanding of graduation (the proportion of ninth graders who graduate 4, 5, or 6 years later).

The point in the St Pete Times interview where I winced was Sellers's answer to the question of how the state (and the general public) knows that the exit codes entered for a student are accurate: Sellers said that his department conducts an "audit from a data perspective."

That statement is misleading. It is technically true that there is an audit in two senses: each school district is required to check its data for accuracy before sending the data to the state's servers, and the state conducts a search of students reported as withdrawn in one county to see if they entered another county system before labeling them dropouts. But while I have seen reference to checking that the withdrawal codes are correct, I have not seen any evidence that such checks have actually occurred, and I have been unable to find that evidence anywhere on the Florida Department of Education website. That doesn't mean that it doesn't happen, but call me a touch skeptical. Without random checks, there is no guarantee that a 16-year-old coded as a transfer to another school actually was a transfer.

Given Florida's long experience with a state-managed education database, the lack of published audits of this process should caution us about the magic of state databases. They are important, but they need to be done properly. It makes sense to talk about the internal and external checks that should happen as other states construct databases and all states start to conform to the mandated longitudinal graduation rate:

  • Districts will need to be the first party to check accuracy, both in preventing mistakes/fraud and in conducting consistency checks--are there any records claiming that a 45-year-old is attending kindergarten, for example? (There's a sketch of such a check after this list.) The first is supposed to happen in Florida, and I suspect that counties catch the low-hanging fruit in terms of errors. But the accuracy check on withdrawal codes is the type of check that requires extensive follow-up to document whether a student identified as a transfer did in fact enroll in another school.
  • States will also need to conduct accuracy and consistency checks, though a state will necessarily be far less likely than school districts to catch outright fraud in claiming students transferred when they did not. 
  • States will also have to conduct the cross-checking that Florida currently performs every year and that I describe above: identifying students who move between districts in the same state but would otherwise be counted as dropouts because a county only looks at its own students.
  • Finally, the auditing of transfer records would be MUCH easier if there were a standard way for school districts and individual schools to request the transfer of a student record and simultaneously use that authenticated request as verification that a transfer code is appropriate.
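To illustrate the consistency checks in the first item, here is a minimal sketch in R against a hypothetical student-record extract; the field names and the withdrawal code are invented for illustration, not Florida's actual survey layout:

    # Hypothetical student-record extract, one row per student.
    recs <- data.frame(
      id    = 1:4,
      age   = c(6, 45, 16, 17),
      grade = c("K", "K", "10", "11"),
      wdraw = c(NA, NA, "TRANSFER", "TRANSFER")  # invented withdrawal code
    )
    # Consistency check: implausible age-for-grade combinations.
    subset(recs, grade == "K" & (age < 4 | age > 8))
    # Accuracy check: coded transfers still need follow-up outside the
    # database, e.g., matching each to an authenticated records request.
    subset(recs, wdraw %in% "TRANSFER")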

This is an incomplete list, but it's a start.

May 29, 2009

Unhappy with my brain right now

  • Fuzzy logic
  • Responders/nonresponders
  • Donald Rubin and multiple imputation
  • Dichotomous variables
  • Record linkage: whether a linkage allows one to determine outcome
  • Limits
  • Category theory

You have now been infected. That is all (for now).

May 11, 2009

Elsevier and qui tam lawsuits?

Is anyone else horrified by the Elsevier journal scam and waiting not just for academic righteousness but for legal action in the U.S. based on the False Claims Act? If federal money was entangled at all in any of the journal nonsense, statements made by any of the fraudulent Elsevier journals could conceivably implicate the publisher if the publisher was knowingly complicit. Because the False Claims Act allows private parties to sue on the government's behalf (the qui tam provision), anyone who knows of the shenanigans could get a lawyer to work on this very quickly.

April 18, 2009

Research blog started

For those who want to walk into the weeds with me on a new research project, feel free to follow my new research blog hosted at USF. Dorn's dangerously public research blog has the subtitle "conducting research without a net," and I am likely to fail in public view. [Update 4/20/09: the blog server's database had a problem over the weekend, but it's fixed this morning. I swear, my entry did not break the internets.] See today's entry for an example of a "duh, this is why you don't look at your project at 9:30 pm" story. That's not quite true: looking at the project at 9:30 on Saturday showed me something I didn't pick up the last time I worked on the data at a perfectly sane time. But that's what being a tenured faculty member is supposed to allow and even encourage: taking greater risks either in terms of potential failure or the time required for a project.

For those who are curious about the background for this project, we currently don't have a good way to translate administrative reports of enrollment by grade into trustworthy measures of graduation. Chris Swanson's work doesn't manage that without considerable assumptions, but that's no shame at all, since no one else's does either, with the exception of measures adjusted for interstate migration (such as Rob Warren's), and those are not feasible except with states and other large population units. Longitudinal measures such as the NGA and federal regulatory graduation statistics will go a long way to fixing this, but there will continue to be an important need to be able to work with administrative data. And it's an interesting intellectual puzzle.

In my spare time in the past few years I've been trying an analytical approach using whatever meager skills I have in formal demography. There are limits to that, and I've decided to try a different approach, simulating a range of conditions of potential high schools and looking at relationships that way. This'll start with the simplest approach, a hypothetical world where the student population at schools never changes, each ninth-grade cohort has identical experiences, and no one transfers in or out. If I can look at that artificial world, I might be able to relax those assumptions one at a time.

But I need to be able to generate data for that world that is plausible, as opposed to something I could generate from my imagination. So I'm playing around with data from the National Longitudinal Survey of Youth cohort beginning in 1979 (NLSY79) to have a set of nationally sampled data from real, historical adolescents with a year-by-year longitudinal record of school attendance and high school graduation. From that, I'll generate a set of synthetic (or Monte Carlo/simulated) cohorts with a range of grade retention and graduation. Consider it a pilot, or proof-of-concept, or just playing around.
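A minimal sketch of what one synthetic cohort might look like, in R; the retention and graduation probabilities here are invented placeholders, not NLSY79 estimates:

    set.seed(42)
    n <- 1000                 # students in one entering ninth-grade cohort
    p_repeat <- 0.15          # invented probability of repeating ninth grade
    p_grad <- 0.80            # invented probability of eventual graduation
    repeats <- rbinom(n, 1, p_repeat)   # extra year in ninth grade?
    grads <- rbinom(n, 1, p_grad)       # eventually graduates?
    # Graduates take 4 years plus any repeated year; dropouts leave after
    # 1-3 years of high school (a crude placeholder assumption).
    years <- ifelse(grads == 1, 4 + repeats, sample(1:3, n, replace = TRUE))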

If your spectator sport of choice is not baseball or opera, follow the new blog. As I've said, I'm as likely to fall flat on my face as not.

April 16, 2009

Migration and graduation

I'm experimenting with publishing working papers on the Social Science Research Network. The first one, Migration and Graduation Measures, is freely downloadable and covers some technical issues with graduation rates. The gist: without accurate information about migration (and transfers), non-longitudinal graduation rates are going to be inherently problematic.

March 22, 2009

Grokking social-science statistics

Several comments in the past few weeks have expressed some wonder that I use statistics when I am publicly skeptical of several policy-related uses of education statistics. I am a little confused by the comments (and implicit accusation of inconsistency), since many of the most articulate critics of high-stakes testing are assessment experts, but for the record, here are a few of my personal stances towards social-science statistics:

  • If for no other purpose than to engage in political debates in a conscientious and credible fashion, adults need to have some rudimentary knowledge of statistics and probability and also be able to listen to and discuss essential concepts without doing enormous violence to them. This is on the same order as needing to have some rudimentary knowledge of Newtonian motion, thermodynamics, electricity, algebra, natural selection, etc., to engage in public policy debates in a constructive fashion. Know why perpetual-motion machine patents require extraordinary (and highly improbable) evidence; know why regression to the mean invalidates many change-over-time claims when the baseline comes from a sample of outliers (there's a short demonstration after this list).
  • If you're tempted to be proud that you don't know statistics, see what happens to the following sentence if you replace "in French" with "using statistics" and "French history" with your current interest: "Yes, I'm writing about French history; what do you mean, I need to read stuff that's written in French?"
  • One of the reasons why one needs that basic knowledge is to know the limits of statistics and be able to ask probing questions of the claims that are made in public debates. Probing questions are not of the formalist type that could be applied to any claim, "You can say what you want by picking a statistic" or "It's unethical to use statistics without talking about the meta-use of statistics." Probing questions engage the specific claims made in debate: "Politician Yodel says we saw a 102% increase in the incidence of Echoing Disease last year, but I want to know what the incidence was the year before so I know if this is a serious problem."
  • Though social-science statistics are inherently constructed objects, they can nonetheless be enormously useful. For a thoughtful and useful discussion of social-constructionist arguments, see Ian Hacking's The Social Construction of What? (1999). (Michael Berube and I both very much like Hacking's discussion of dolomite, though I suspect I am closer to Hacking's end view than is the Paterno Family Professor of American Airspace and Dangeral Studies.)
  • To work with social-science statistics, I find it tough (at least) to criticize every character I type into a statistics program while also working the darned program and thinking about what I'm doing. So I engage in a form of suspension of disbelief: work the statistics, pause and think about the larger meaning and doubts, work again, doubt, work, doubt, etc. I know I'm embedded in the statistical machinery when I hear, "Sherman, are you going to get any sleep tonight?" And I know I've doubted enough when I realize I've forgotten the syntax for calling up multiple regression.
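Since regression to the mean (from the first item above) is the concept that trips up the most readers, here is the promised demonstration--a few lines of R with purely simulated scores, nothing empirical about them:

    set.seed(1)
    true_score <- rnorm(10000)               # stable underlying achievement
    year1 <- true_score + rnorm(10000)       # noisy measurement, year 1
    year2 <- true_score + rnorm(10000)       # noisy measurement, year 2
    bottom <- year1 < quantile(year1, 0.10)  # baseline: a sample of low outliers
    mean(year2[bottom] - year1[bottom])      # a positive "gain" with no real change

The lowest-scoring tenth "improves" the next year even though the underlying achievement never changed--exactly the trap in change-over-time claims built on outlier baselines.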

And tomorrow morning, because of the idiosyncrasies of the USF IRB-02 records, I need to write and print an IRB protocol so I can finish a long-delayed project ... assuming I can climb the learning curve for the R-Project language.

February 24, 2009

Stationary population models and graduation

In odd moments in the last few weeks, I've been playing around with a standard demographic concept, the stationary population model. This is one of those things that don't really exist in reality, a population with constant mortality and fertility rates, with no migration in or out, and where the population is the same every year (no natural increase). In essence, a stationary population model is like a stripped-down car, something with all the extras out of the way so you can look at the engine while it's running. The question I've had is, if one looks at a stationary population model of high school, what can one say about a high school if one observes the total enrollment, the ninth-grade enrollment, the number of graduates, and the distribution of graduates by years in high school?

A few minutes of scribbling shows that the crude graduation rate (the number of graduates divided by the total enrollment) is equal to the probability of graduating times the rate of new ninth graders entering every year. The probability of graduating and the number of new ninth graders are both interesting and unobserved quantities. Unfortunately, they're also dependent on a crucial third unobserved quantity, the difference between the entering-ninth-grade rate and the proportion of the high school in ninth grade. (One way of interpreting this difference is as the overestimate of entering ninth graders. Another is as the proportion of total school life spent repeating ninth grade.)
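In symbols (my notation, nothing standard): let E be total enrollment, N the number of new ninth graders each year, G the annual number of graduates, and p the probability that an entrant eventually graduates. In a stationary school, each year's graduates are the survivors of an entering cohort, so

    G = pN \quad\Longrightarrow\quad \frac{G}{E} = p \cdot \frac{N}{E}

That is, the crude graduation rate G/E is the graduation probability times the entering-ninth-grade rate N/E.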

Because my life is now booked, I've spent only odd moments away from a computer on this exercise, but the obvious next step is to generate some simulated stationary populations (e.g., bootstrap samples of NELS:88, constrained to conform to a range of graduation probabilities) and then look for regularities in the relationships between the underlying population measures and what would normally be observed from published data. Given the inherent constraints on the true value of the entering-ninth-grade rate (between 0.25 and the observed ninth-grade proportion), and a few other things, I suspect that regularities exist. Update: the problem with writing when sick is that one forgets obvious things--such as that NELS data sets do not have year-by-year information on enrollment. On the other hand, the Bureau of Labor Statistics longitudinal surveys (starting in 1979 and 1997) run every year...

Then the next step is to move on to a stable population model, where you relax the zero-growth assumption and assume a constant growth rate. That's important because school populations do not remain constant. (Neither does growth remain constant, but a stable population model introduces one level of complexity, and it's loads easier to understand than the full-blown, "let the population do what it wants to" model.) The problem here is that one crucial number in a stable population model is a term that normally corresponds to the mean length of a generation. This has no clear interpretation in a model of high school enrollment, so that's an interesting hurdle.

Incidentally, if anyone wants to jump ahead of me on this research program, feel free to dive in. The water's fine; I'm not likely to follow up for some months, and there are some interesting payoffs. Among other things, in a stationary population model, the product of life expectancy at birth and the birth rate is always one. In the school parallel, if you multiply the entering-ninth-grade rate by the average time spent in high school, you will always get one. From there and the data on graduates, it's simple to calculate the average time spent in high school by those who eventually drop out.
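To make that last calculation concrete, here is the arithmetic as a minimal R sketch; the observed values are invented for illustration:

    n_rate <- 0.27   # invented entering-ninth-grade rate (new 9th graders / enrollment)
    g_rate <- 0.20   # invented crude graduation rate (graduates / enrollment)
    e_grad <- 4.2    # invented mean years in school among graduates
    e_all <- 1 / n_rate         # stationary identity: entry rate * mean years = 1
    p <- g_rate / n_rate        # probability of graduating (from the earlier identity)
    e_drop <- (e_all - p * e_grad) / (1 - p)  # mean years among eventual dropouts

With these made-up numbers, eventual dropouts average about 2.3 years of high school.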

December 26, 2008

(Effect) size matters

Nathan Yau is not Edward Tufte. Yau is a doctoral student in statistics. Tufte is a Yale professor emeritus. Yau's list of his 5 best data visualization projects of 2008 has a common missing element (from four of the listed projects) that E.T. would pull tufts of hair out over: the images have no quantification. To Tufte, that is a cardinal sin, along with the "chartjunk" that infects so many graphs in USA Today and other newspapers.

I am generally on the side of Tufte on this issue: unless you're a topologist, quantity matters and units matter. A common fallacy in manuscripts (and sometimes in published articles and books) is the confusion of statistical significance with practical meaning. If you are working with a sample size of 50,000 or more (common with large epidemiological studies or census microdata extracts), it is hard for many relationships not to be statistically significant. But whether a relationship is meaningful depends on its size.

And here, the units matter! If you know that the multiple-regression coefficient between income and achievement is 1.5, that may or may not be notable. If you're measuring income in tens of thousands of dollars and achievement in scale-score points where the range is 0-1000 and the standard deviation is 150, that's a meaningless relationship (going up 15 points, or 0.10 of a standard deviation, when income increases by $100,000). If you're measuring income by natural log and achievement in standard-deviation units, that's a substantial relationship (essentially moving a standard deviation up or down when income doubles or is halved).
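The unit arithmetic, as a two-line check in R using the numbers above:

    coef <- 1.5
    sd_score <- 150
    # Income in tens of thousands of dollars, achievement in scale points:
    coef * 10 / sd_score  # a $100,000 increase moves 0.10 standard deviation
    # Income in natural logs, achievement already in SD units:
    coef * log(2)         # doubling income moves about 1.04 standard deviations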

In part stemming from the literature on meta-analysis, it is becoming more common for individual studies to report effect sizes. While I still want a sense of the concrete relationships, pushing authors to put quantified relationships in perspective is always good. The same should be true for "data visualization." Quantify, folks!

(For the record, I don't think Tufte is infallible. Far from it.)

July 9, 2008

Can reporters raise their game in writing about education research?

I know that I still owe readers the ultimate education platform and the big, hairy erratum I promised last month, but the issue of research vetting has popped up in the education blogule*, and it's something I've been intending to discuss for some time, so it's taking up my pre-10:30-am time today. In brief, Eduwonkette dismisses the new Manhattan Institute report on Florida's high-stakes testing regime as thinktankery, drive-by research with little credibility because it hasn't been vetted by peer review. Later in the day, she modified that to explain why she was willing to promote working papers published through the National Bureau of Economic Research or the RAND Corporation: they have a vetting process for researchers or reports, and their track record is longer. Jay Greene (one of the Manhattan Institute report's authors and a key part of the think tank's stable of writers) replied with probably the best argument against Eduwonkette (or any blogger) and in favor of releasing unvetted research through PR firms: as with blogs, publicizing unvetted reports involves a tradeoff between review and publishing speed, a tradeoff that reporters and other readers are aware of.

Releasing research directly to the public and through the mass media and internet improves the speed and breadth of information available, but it also comes with greater potential for errors. Consumers of this information are generally aware of these trade-offs and assign higher levels of confidence to research as it receives more review, but they appreciate being able to receive more of it sooner with less review.

In other words, caveat lector.


We've been down this road before with blogs in the anonymous Ivan Tribble column in fall 2005, responses such as Timothy Burke's, a second Tribble column, another round of responses such as Miriam Burstein's, and an occasional recurrence of sniping at blogs (or, in the latest case, Laura Blankenship's dismay at continued sniping). I could expand on Ernest Boyer's discussion of why scholarship should be defined broadly, or Michael Berube's discussion of "raw" and "cooked" blogs, but if you're reading this entry, you probably don't need all that. Suffice it to say that there is a broad range of purpose and quality in blogging: some blogs, such as The Valve or the Volokh Conspiracy, have become lively places for academics, while others, such as The Panda's Thumb, are more of a site for the public intellectual side of academics. These are retrospective judgments that are only possible after many months of consistent writing in each blog.

This retrospective judgment is a post facto evaluation of credibility, an evaluation that is also possible for institutional work. That judgment is what Eduwonkette is referring to when making a distinction between RAND and NBER, on the one hand, and the Manhattan Institute, on the other. Because of previous work she has read, she trusts RAND and NBER papers more. (She's not alone in that judgment of Manhattan Institute work, but I'm less concerned this morning with the specific case than with the general principles.)

If an individual researcher needed to rely on a track record to be credible, we'd essentially be stuck in the intellectual equivalent of country clubs: only the invited need apply. That exists to some extent with citation indices such as Web of Science, but it's porous. One of the most important institutional roles of refereed journals and university presses is to lend credibility to new or unknown scholars who do not have a preexisting track record. To a sociologist of knowledge, refereeing serves a filtering purpose to sort out which researchers and claims to knowledge will be able to borrow institutional credibility/prestige.

Online technologies have created some cracks in these institutional arrangements in two ways: reducing the barriers to entry for new credibility-lending arrangements (i.e., online journals such as the Bryn Mawr Classical Review or Education Policy Analysis Archives) and making large banks of disciplinary working papers available for broad access (such as NBER in economics or arXiv in physics). To some extent, as John Willinsky has written, this ends up in an argument over the complex mix of economic models and intellectual principles. But its more serious side also challenges the refereeing process. To wit: in judging a work, how much are we to rely on pre-publication reviewing and how much on post-publication evaluation and use?

To some extent, the reworking of intellectual credibility in the internet age will involve judgments of status as well as intellectual merit. Avoiding those judgments risks the careers of new scholars and status-anxious administrators, which is why Harvard led the way on open-access archiving for "traditional" disciplines and Stanford has led the way on open-access archiving for education, and I would not be surprised at all if Wharton or Chicago leads in an archiving policy for economics/business schools. Older institutions with little status at risk in open-access models might make it safer for institutions lower in the higher-ed hierarchy (or so I hope). (Explaining the phenomenon of anonymous academic blogging is left as an exercise for the reader.)

But the status issue doesn't address the intellectual question. If not for the inevitable issues of status, prestige, credibility, etc., would refereeing serve a purpose? No serious academic believes that publication inherently blesses the ideas in an article or book; publishable is different from influential. Nonetheless, refereeing serves a legitimate human side of academe, the networking side that wants to know which works have influenced others, which are judged classics, ... and which are judged publishable. Knowing that an article has gone through a refereeing process comforts the part of my training and professional judgment that values a community of scholarship with at least semi-coherent heuristics and methods. That community of scholarship can be fooled (witness Michael Bellesiles and the Bancroft Prize), but I still find it of some value.

Beyond the institutional credibility and community-of-scholarship issues, of course we can read individual works on their own merits, and I hope we all do. Professionally educated researchers have more intellectual tools to bring to bear on working papers, think-tank reports, and the like. And that's our advantage over journalists: we know the literature in our area (or should), and we know the standard methodological strengths and weaknesses in the area (or should). On the other hand, journalists are paid to look at work quickly, while I always have competing priorities the day a think-tank report appears.

That gap provides a structural advantage to at least minimally-funded think tanks: they can hire publicists to push reports, and reporters will always be behind the curve in terms of evaluating the reports. More experienced reporters know a part of the relevant literature and some of the more common flaws in research, but the threshold for publication in news is not quality but newsworthiness. As news staffs shrink, individual reporters find that their beats become much larger, time for researching any story shorter, and the news hole chopped up further and further. (News blogs solve the news-hole problem but create one more burden for individual reporters.)

Complicating reporters' lack of time and research background is the limited pool of researchers who carve out time for reporters' calls and who understand their needs. In Florida, I am one of the usual suspects for education policy stories because I call reporters back quickly. While a few of my colleagues disdain reporting or fear being misquoted, the greater divide is cultural: reporters need contacts to respond within hours, not days, and they need something understandable and digestible. If a reporter leaves me a message and e-mails me about a story, I take some time to think about the obvious questions, figure out a way of explaining a technical issue, and try to think about who else the reporter might contact. It takes relatively little time, most of my colleagues could outthink me in this way, and somehow I still get called more than hundreds of other education or history faculty in the state. But enough about me: the larger point is that reporters usually have few contacts who have both the expertise and the time to read a report quickly and provide context or evaluation before the reporter's deadline. Education Week reporters have more leeway because of the weekly cycle, but when the goal of a publicist is to place stories in the dailies, the publicist has all the advantages with general reporters or reporters new to the education beat.

In this regard, the Hechinger Institute's workshops provide some important help to reporters, but everything I have read about them suggests they are oriented to current topics, ideas for stories, and general context and "what's hot" rather than to helping reporters respond to press releases. Yet reporters need help from a research perspective that's still geared to their needs. So let me take a stab at what should appear in reporting on any education research, at least from my idiosyncratic perspective as a reader. I'll use the reporter's 5 W's, split into publication and methods issues:

  • Publication who: authors' names and institutional affiliations (both employer and publisher) are almost always described.
  • Publication what: title of the work and conclusions are also almost always described. Reporters are less successful in describing the research context, or how an article fits into the existing literature. Press releases are rarely challenged on claims of uniqueness or what is new about an article, and think-tank reports are far less likely than refereed articles or books to cite the broadly relevant literature. When reporters call me, they frequently ask me to evaluate the methods or meaning but rarely explicitly ask me, "Is this really new?" My suggested classification: entirely new, replicates or confirms existing research, or is counter to existing research. Reporters could address this problem by asking sources about uniqueness, and editors should demand this.
  • Publication when: publication date is usually reported, and occasionally the timing context becomes the story (as when a few federal reports were released on summer Fridays).
  • Publication where: rarely relevant to reporters, unless the institutional sponsor or author is local.
  • Publication why: usually left implicit or addressed when quoting the "so what?" answer of a study author. Reporters could explicitly state whether the purpose of a study is to answer fundamental questions (such as basic educational psychology), to answer applied questions (as with teaching methods), to attempt to influence policy, etc.
  • Publication how: Usually described at a superficial level. Reporters leave the question of refereeing as implicit: they will mention a journal or press, but I rarely see an explicit statement that a publication is either peer-reviewed or not peer-reviewed. There is no excuse for reporters to omit this information.
  • Content who: the study participants/subjects are often described if there's a coherent data set or number. Reporters are less successful in describing who is excluded from studies, though this should be important to readers, and reporters could easily add this information.
  • Content what: how a researcher gathered data and broader design parameters are described if simple (e.g., secondary analysis of a data set) or if there is something unique or clever (as with some psychology research). More complex or obscure measures are usually simplified. This problem could be addressed, but it may be more difficult with some studies than with others.
  • Content when: if the data is fresh, this is generally reported. Reporters are weaker when describing reports that rely on older data sets. This is a simple issue to address.
  • Content where: Usually reported, unless the study setting is masked or an experimental environment.
  • Content why: Reporters usually report the researchers' primary explanation of a phenomenon. They rarely write about why the conclusion is superior to alternative explanations, either the researchers' explanations or critics'. The one exception to this superficiality is on research aimed at changing policy; in that realm, reporters have become more adept at probing for other explanations. When writing about non-policy research, reporters can ask more questions about alternative explanations.
  • Content how: The details of statistical analyses are rarely described, unless a reporter can find a researcher who is quotable on it, and then the reporting often strikes me as conclusory, quoting the critic rather than explaining the issue in depth. This problem is the most difficult one for reporters to address, both because of limited background knowledge and also because of limited column space for articles.

Let's see how reporters did in covering the new Manhattan Institute report, using the St Petersburg Times (blog), Education Week (blog thus far), and New York Sun (printed). This is a seat-of-the-pants judgment, but I think it shows the strengths and weaknesses of reporting on education research:


Criterion            Times (blog)     Ed Week (blog)   Sun
Publication
  Who                Acceptable       Acceptable       Acceptable
  What               Weak             Acceptable       Weak
  When               Acceptable       Acceptable       Acceptable
  Where              N/A              N/A              N/A
  Why                Implicit only    Implicit only    Implicit only
  How                Acceptable       Absent           Absent
Content
  Who                Acceptable       Acceptable       Acceptable
  What               Weak             Weak             Weak
  When               Acceptable       Acceptable       Acceptable
  Where              Acceptable       Acceptable       Acceptable
  Why                Weak             Acceptable       Weak
  How                Weak             Weak             Weak

Remarks: I rated the Times and Sun items as weak in "publication what" because there was no attempt to put the conclusions in the broader research context. All pieces implied rather than explicitly stated that the purpose of the report was to influence policy (specifically, to bolster high-stakes accountability policies). Only the Times blog noted that the report was not peer-reviewed. All three had "weak" in "content what" because none of them described the measures (individual student scale scores on science adjusted by standard deviation). Only the Ed Week blog entry mentioned alternative hypotheses. None described the analytical methods in depth.

While some parts of reporting on research are hard to improve on a short deadline (especially describing regression-discontinuity analysis or evaluating the report without the technical details), the Ed Week blog entry was better than the others in several areas, with the important exception of describing the non-refereed nature of the report. So, education reporters: can you raise your game?

* - Blogule is an anagram of globule and connotes something less global than blogosphere. Or at least I prefer it. Could you please spread it?

June 4, 2008

Can we count graduation??

Ed Week has its annual graduation special issue online, Diplomas Count 2008. Joydeep Roy and Larry Mishel have a brand-new article out today criticizing Swanson's measures as well as those of others, available at the ASU server or the epaa.info server for Education Policy Analysis Archives. (Disclosure: I'm the editor. And I've done some research on graduation myself.)

May 16, 2008

The ethics of expert mumbling

I started this entry a few days ago, when I wrote, "I probably should be doing something else," and obviously I did. But now that I'm back for a few minutes, I want to think aloud (or in text) about policy hubris. I've been batting a book idea back and forth recently, based on the encouragement of a series editor, to explore how a school system has responded to growing demographic diversity over several decades. Like many school systems I have encountered or studied, its key officials have been and are fairly proud of the work the system has done. But while that pride was justified in some circumstances, pride also became a substantial blind spot, allowing officials to ignore problems that festered and to lash out at critics. Pride became institutional hubris.

When we talk about hubris, it's usually in a personal context. Some weeks or months or years ago, I listened to a key legislator or legislative aide talking about education policy. She was sharp: smart, knowledgeable, and quick-witted. How she used that tremendous skill set bugged me; while listening to her respond to questions from the audience, I thought,

She's immersed herself in the reports and materials available at the 40,000-foot level. She knows all of the arguments, and she knows the counter-arguments to push back at others in the conversation. She's cocksure and unaware of where she might be terribly wrong. And she's alienating almost everyone in the room.

I'm obscuring her identity to protect the guilty, but the hubris I witnessed in the room is a personal version of the institutional hubris in a large school system. No one should be allowed to be that cocksure without an occasional whack upside the ego, for the good of the individual (or school system) as well as the world. There's a point to all of the Shakespearean tragedies: hubris hides your flaws, including flawed reasoning.

Some weeks ago, I heard an interview with a scientist who explained his professional epistemology of being open to evidence. He took some of his reasoning from Karl Popper, the giant "science is falsifiable" philosopher of the twentieth century. But to this scientist, falsifiability was not just a stance about testing hypotheses. It was a matter of ethics: you have to be willing to be wrong to be a good scientist.

I think that's true of most disciplines. If you don't get some sense of wonder when a small fact turns over a preconception, you shouldn't call yourself a researcher. If you only go out to prove a prejudice, you're not a researcher. If you ignore evidence that undermines your claims, leave the field.

Unless, of course, you're one of the exceptions whose life story is going to leave me wondering if I should have been so definite in that last paragraph. There's always that possibility...

April 21, 2008

College graduation

The new Ed Sector report by Kevin Carey, Graduation Rate Watch, summarizes some of the material available from the IPEDS 6-year graduation measures for four-year colleges and universities. The main point is that there are vast differences within different higher-ed sectors not only in 6-year graduation stats but also in Black-White differences in graduation. He correctly points out that some institutions such as Florida State have programs that appear at first glance to provide substantial support to first-generation college students, support that increases the likelihood of graduating.

Kudos: the interesting slice of IPEDS rates, with the appropriate hedges/caveats; the nod to Vincent Tinto's work; the acknowledgment of Cliff Adelman's suggestion for improving the IPEDS measures; the observation that U.S. News & World Report rankings largely diss graduation rates as ways to distinguish institutions; the recommendation that financial aid be shifted away from its merit-based emphasis today and back towards means-testing; the observation that funding enrollment does not provide a strong incentive for retention programs.

Kumquats: the continued push for a national unit records database. I think that's the only DOA suggestion in a compact, complex report. I may disagree with some other ideas, but the report on the whole is thoughtful and presents issues in a clear way. I might want a bit more use of the current college-retention literature, but I can't point to specifics because that's outside my area of expertise.

Some broader issues that complicate efforts to increase undergraduate graduation:

  • A large proportion of college students are in community colleges, and programs that focus on first-time-in-college students at universities are great... and limited to that sector of higher education.
  • Part-time students are a serious puzzle in terms of retention and even measurement. In many states, part-time students have a much harder time getting aid (in part because they are often older, and in part because of minimal-credit requirements). They also have competing obligations, are on campus less frequently, etc. I love older students in my classes for very selfish reasons (they are more mature, they help teach their classmates simply by being there and talking about their lives), but I'm not sure who has cracked the practical challenges that part-time students present for themselves and for their colleges.
  • Health crises can turn a student with marginal success into a student who has dropped out, and young adults are among the least likely Americans to have adequate health insurance.
  • Institutional pecking orders are hard to pinpoint, and they can shift rapidly: witness Florida, where reduced funding is pushing most of the state's public universities into being far more selective. My guess is that graduation rates will rise in 4-5 years, but while some institutions (including mine) are figuring out some concrete steps to increase student success, some part of that will be a selection effect. So making comparisons with "peer institutions" may be a difficult enterprise.
  • Measures focused on undergraduates make it somewhat more difficult for graduate-focused institutions in any incentive system. States need to be flexible and negotiate the systems with institutions, or they are likely to provide odd advantages to some institutions over others, advantages that will only be discovered after the fact.
And those are the issues that are apparent to me without knowing the higher-ed attrition/retention/graduation literature. There is one faculty colleague at USF who focuses on higher-ed attrition, and there are IR gurus for whom this is an occupational focus, so I do have local resources... now I really need that Time-Turner. But for now, it's 11:30 pm, and I still need to provide feedback on a student thesis...

April 20, 2008

Sketching a course 6

Habits and experience
Today I'm trying desperately to finish a paper that is far too late. Part of the delay is the craziness that is my professional and union life, but another part is that I am delving into two subjects that I have not been diligent in keeping up with. I am keenly interested in them, but they are on the margins of my main research interests, and when one's time is short...

The consequence is that I now have to play catch-up. If I weren't pressed for time in other ways, I would enjoy this process more, because over my life I have repeatedly been required to undergo a "drink from the firehose" experience in reading. It is an exhausting short-term experience, and it challenges me to engage all sorts of skills simultaneously, with a mental effect like nothing so much as keeping a number of balls in the air at the same time. No, not juggling balls: more like a lit torch, a chef's knife, a soap bubble, and a ceramic bowl filled with yogurt. All of them. If you can keep them up there, it's quite a thrill.

Usually, graduate students have these experiences in high-stakes environments, as major papers at the end of a course. Or, rather, if they do have drink-from-the-firehose feelings, they're not likely to be successful. Is there a way to give them that experience in a strongly positive sense, with far lower stakes?

In more mundane news, I've been suckered into a new exercise regime. No, not suckered: quite enjoyable. But it's another thing I need to schedule. Anyone have a working Time-Turner I can borrow?

April 19, 2008

Silence on AT&T Aspire an "Ed in '08" parallel?

Is it my imagination, or has there been a deafening silence in both news outlets and the blogosphere after the AT&T Foundation's announcement Wednesday of a $100 Million Dropout Prevention Program? I wrote about the initiative Thursday, but given the lead-in publicity (the America's Promise report with Ed Week about low graduation rates?), I expected more commentary than just my note about the history of outcries about and responses to dropping out.

I know that $100 million is a drop in the bucket compared to public-school spending, but it's a big splash in philanthropy and usually gets more attention than the Ed Week article (linked above) or the Chronicle of Philanthropy note on the announcement.

April 17, 2008

$100 million... how will it be used?

I'm looking at today's New York Times story on AT&T Aspire, the $100 million effort to reduce dropping out. In reality, $100 million is a visible splash and not chump change, but it's a small amount compared to all the money spent on high schools every year. That effort to get a visible splash to serve as a lever is common in educational philanthropy these days. After all, Bill and Melinda Gates's entire fortune is only a few billion dollars more than what California taxpayers spend every year on education.

There's very little information about this on the AT&T Foundation website, other than working with Colin and Alma Powell's organization America's Promise to create local partnerships through "dropout summits." At that level, it looks remarkably like the early 1960s efforts I chronicled in Creating the Dropout. It doesn't have to be as ineffective as those efforts, and I hope this time around, it works out better.

April 12, 2008

Organizational psychosis?

Yesterday's New York Times article on 'credit recovery' puts the Bloomberg-Klein years in New York in perspective, as one Manhattan principal explains:

I think that credit recovery and the related topic independent study is in lots of ways the dirty little secret of high schools. There's very little oversight and there are very few standards.

The NYC Department of Education said one decent thing in its defense (that the plural of anecdote is not data), but it would be relatively easy to look at the students who earn credit through credit recovery and at other data about their achievement... that is, if the Department of Education will release information about it.

I see the same thing in Florida to a lesser degree, in Florida's calculation of graduation in a way that calls it a success when a student drops out of school and immediately enrolls in a GED program. That's why I am not celebrating Margaret Spellings's announcement that regulations to define graduation rates are in the works: the devil's in the details.

Even more broadly, there's something fundamentally at odds with reality in creating a system that keeps ratcheting up pressure on both students and educators and then addresses one of the resulting problems in a facile way. When individuals experience a substantial gap between their perceptions and reality, we term that experience psychosis (which I know covers a broad range, and plenty of people have psychotic experiences such as hallucinations without being mentally ill).

There is no organizational term to capture a gap between what we would consider reality and institutionally recognized reality, but maybe there should be something akin to organizational psychosis. And at least according to the Times article, the credit recovery system is one likely candidate for that category.

April 3, 2008

A dozen questions for an official graduation rate

When the OMB clears the draft regs on counting dropouts, we can expect another wave of stories on graduation rates and what they all mean. Sharp reporters and other observers will ask the following questions of the draft regs:

  1. Does the definition of graduation include or exclude non-standard completion categories such as GEDs and "certificates of completion"?
  2. How does the definition of graduation handle students with disabilities with a modified curriculum (that is, with an emphasis on functional rather than academic goals)?
  3. Is the mandatory measure a longitudinal statistic such as the NGA compact or a synthetic measure such as Chris Swanson's Cumulative Promotion Index? (I will assume until proven wrong that it is a longitudinal measure.)
  4. Regardless of the measure proposed, how many states have data systems that can produce the statistics required?
  5. How does the measure address transfers, homeschooling, migration, and mortality?
  6. For the adjustments proposed for transfers, homeschooling, migration, and mortality, are there any requirements that states audit the corresponding codes in their data systems?
  7. How does the proposed measure handle grade retention (e.g., multiple years in ninth grade)?
  8. Does the proposed measure forbid a state from using the Florida tactic of calling a dropout a transfer if the dropout immediately enrolled in a GED program?
  9. How does the proposed measure handle students who graduate in five years?
  10. Do the proposed regs require that school districts and schools meet benchmarks in graduation in the same way that they must meet benchmarks for % 'proficient'?
  11. If there are such required benchmarks, is there any supporting research to suggest that the status or improvement benchmarks are realistic?
  12. In crafting the draft regs, did the Department of Education consult with more than two of the researchers recognized to have published in the relevant area, such as Chris Swanson, Rob Warren, Melissa Roderick, Russell Rumberger, Bob Hauser, Michelle Fine, or Gary Orfield? I'm an historian, and we're generally trotted out as mantel decorations for such affairs, if at all, but there are plenty of solid researchers in the area who could be consulted. And if you're a reporter, you need to line up a few of those folks to be ready to respond to draft regs.
I'm exhausted from a third straight fragmented day, looking forward to a fourth one... but I suspect the above set of questions covers much of the ground on the anticipated draft regs defining an official graduation rate.

April 1, 2008

Gradu[r]ated

So U.S. Secretary of Education Margaret Spellings Announces Department Will Move to a Uniform Graduation Rate, Require Disaggregation of Data (the true title of the press release today announcing imminent-but-not-published draft regs defining a graduation rate and only a few words away from the type of book title that would cure almost any insomnia). And George Miller huffs some that it wasn't bipartisan (hat tip to David Hoff on the Miller statement). So what's the buzz about?

  1. Spellings is channeling Adlai Stevenson's approach to governance, proudly announcing bold action on issues that are nearly consensus positions and would happen without her intervention.
  2. Especially for this particular issue, the devil is in the details. Florida has a longitudinal graduation measure, but that doesn't mean it's accurate. If the regulatory language released in draft form would allow Florida to keep doing what it's doing officially, you won't see much in the form of transparency (and at least with two issues, you may see things get worse).
  3. Spellings is hoping the gravitas and charm of Colin Powell rub off. Admittedly, Powell hasn't (yet) been on NPR's Wait, wait, ...

Maybe this is more evidence that Spellings will run for elected office in Texas and claim that she created growth measures, differentiated consequences, and airtight graduation rates. At least she's not claiming to have invented the Internet...

March 25, 2008

AERA brief note

I'm in the Delta terminal of JFK, waiting to go home to Tampa. Presented. Listened. Laughed. Bought things in both the AERA exhibit hall and the Juilliard School bookstore (which was having a 30%-off sale on a bunch of CDs). I will be blogging later this week on a nutty NYC policy and on the 20th-anniversary retrospective session on Jim Anderson's 1988 book, The Education of Blacks in the South, 1865-1935.

March 19, 2008

Florida ed policy and politics

The legislative session is in full swing (or a more colorful noun), and a bunch of things are in the air either in Tallahassee or elsewhere:

1. Both houses of the state legislature are considering bills to change the role of state testing (FCAT), either by adding other information to the labeling of high schools (the senate's approach) or by a compromise bill that discourages test-prep and sets more specific grade-level standards (the proposal in the house).

2. The ACLU sues Palm Beach County for its low high school graduation rate. Superintendent Art Johnson suggests it's the state's fault for not providing enough money (scroll down for "But the superintendent..."). (Disclosure: A 2006 paper of mine is mentioned in both stories.)

3. Something that wasn't covered in my local papers in January: Holmes County administrators have banned students from displaying anything related to gay pride. The ACLU of Florida sued. I suspect this one's a no-brainer in a bench trial: in the majority opinion in Morse v. Frederick, Chief Justice Roberts made a distinction between what he thought of as the political speech of Tinker and the display of "Bong Hits 4 Jesus."

The only interest the Court discerned underlying the school's actions [in Tinker] was the "mere desire to avoid the discomfort and unpleasantness that always accompany an unpopular viewpoint," or "an urgent wish to avoid the controversy which might result from the expression." Tinker, 393 U. S., at 509, 510. That interest was not enough to justify banning "a silent, passive expression of opinion, unaccompanied by any disorder or disturbance." Id., at 508.

I think that reasoning clearly applies in this case.

March 13, 2008

A rescue for Swanson's CPI?

(Note: I've changed a few things in the first graph, and someone pointed out to me that I had fallen prey to one of the many Excel glitches, but I'll show the changes below...)

I've written informally here on graduation measures, expressing my concern about Chris Swanson's Cumulative Promotion Index, known among the grad-rate numerati as the CPI (e.g., 4/16/06, 6/21/06, 6/12/07). One of the CPI's weaknesses is its reliance on the annual numbers reported to the Common Core of Data. Another is the assumption that the ratio of enrollment in 10th grade in 2008 to 9th grade enrollment in 2007 is a meaningful gauge of cohort retention/promotion. In 2005, Rob Warren explained (PDF, iPaper) why that is problematic: 9th grade retention and student transfers pollute the measure. Transfers pollute all of the components of the CPI (net in-transfers artificially inflate CPI), but 9th grade retention is particularly problematic (deflating CPI).

Smoothing down the 9th-grade bump and year-to-year jiggles

I've been returning to the issue of measuring attainment (my Holy Grail, I suppose), and there are a few ways I've thought of to improve on the CPI. Two obvious ones are to smooth the data and remove 9th grade as an issue (a rough sketch follows this list):

  • Use three years of enrollment and diploma data, to smooth over single-year bumps and reporting problems.
  • Start in 8th grade and use a two-year 10th-to-8th grade enrollment ratio as the first term in CPI.
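In code form, the combination might look like the following minimal sketch (my reconstruction of the two tweaks with hypothetical data structures, not a vetted implementation):

    def pooled(series, t):
        """Three-year sum centered on year t, to smooth single-year bumps."""
        return series[t - 1] + series[t] + series[t + 1]

    def smoothed_cpi(enroll, grads, t):
        """enroll: {grade: {year: count}}; grads: {year: diploma count}.
        The first term is the two-year 10th-to-8th enrollment ratio; the
        remaining terms follow the usual CPI chain of promotion ratios."""
        r = pooled(enroll[10], t) / pooled(enroll[8], t - 2)
        r *= pooled(enroll[11], t) / pooled(enroll[10], t - 1)
        r *= pooled(enroll[12], t) / pooled(enroll[11], t - 1)
        r *= pooled(grads, t) / pooled(enroll[12], t - 1)
        return r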

I tried that on state-level data from the beginning of the Common Core of Data to the latest year (1986-2005), and the effect of smoothing is what I had hoped: the measures of central tendency for each state are similar, but the "bumpiness" of the data is dramatically reduced. And if one looks at data within each state over the entire time series, state medians for the Swanson CPI are highly correlated with state medians for the smoothed measure (r2=.92, N=51). By starting with 8th grade, state medians for CPI tend to rise between 3% and 8%. That's not surprising.

Reframing CPI

Then I had another thought: what if we looked at grade level not as a proxy for time in high school but as a set of gateways, requirements to meet on the path to graduation? Then the concepts behind the CPI terms could be thought of as a standard probability problem. Through a few razzle-dazzle maneuvers, I snatched some cross-sectional data from the CCD, took the natural log of everything, and tossed it through a regression to see if the cross-sectional data could predict the smoothed CPI, at least with the state-level data. Here's the result, with the regression prediction on the X axis and the smoothed, skip-9th-grade CPI estimate on the Y axis:

[Figure: More-smoothed-for-states-March-13-08.JPG -- regression prediction (X axis) vs. smoothed, skip-9th-grade CPI (Y axis), state-level]

The previous graph has the same points but a more ambiguous indication of r2. For the geeks, I ran the regression on the logs of everything (there's a clear reason tied to the background for all this), and the top r2 refers to that regression. But you can also look at the translation back into percentages/ratios, and the second r2 is for the plotted graph. In this case, they're virtually identical. N=867 (17 x 51). Pretty snazzy, eh? No, I'm not releasing the details. Not until it's been highly vetted...

And I'm not going to break out the champagne, either (especially after a bit of embarrassment with the Excel glitch). At the lower end of the range for states, the prediction underestimates the smoothed CPI, and there are no guarantees how it'll perform at that low range. The different points for each "year" are not truly independent within a state, since we're working with multiple years of data (closely related to a moving average). And, as noted above, student migration can easily bias CPI, leading to CPIs above 100% with substantial migration.

States don't hit extremely low levels of graduation or high levels of migration/transfers, but districts do. I took California districts with an average enrollment of at least 3,333 in grades 8-12 over several recent years, removed a few elementary-only or high-school-only districts (California has that odd arrangement) as well as others with data anomalies, and ended up with 127 districts that account for 59% of the 8th-12th grade enrollment over the years in question earlier this decade. I snagged the same cross-sectional elements from the CCD. How well does that idea work? (The following graph is from before I fixed the spreadsheet formula error.)

[Figure: Smoothed-for-California-recent-March-13-08.JPG -- the same plot for California districts, before the spreadsheet fix]

That was clearly not nearly as nice as the state-level picture. A few things are important to note here: different levels of aggregation tend to look different in any statistical analysis, and you'll see here a broader range of smoothed CPI predictions and estimates, including the dreaded and improbable over-100% measure.

But there was an error in the spreadsheet (caused by copying a column instead of a formula). Here's the new graph:

[Figure: Corrected-smoothed-for-California-recent-March-13-08.JPG -- California districts, after correcting the spreadsheet formula error]

Much better, no? Again, I've noted both the r2 for the regression model and the r2 for the translated figure, a little lower than with states and not quite as close to each other. As before, smaller entities are going to have broader variation, but I'm a little more encouraged. Yes, I have a few ideas on how to attack that over-100% CPI, but I think that's enough for saving the world today. I have chauffeuring and journal editing to do in the next few hours... (The driving is done, but I'll see if I have a bit more energy tonight.)

One last point: The implications of this are a bit subtle, apart from the utility of smoothing the data with multiple years and starting with 8th grade. The ability to connect a few cross-sectional data elements tightly to the synthetic CPI does not mean that CPI is without flaws but rather that the roots of any bias in CPI are at least parallel to and probably identical to the biases in the cross-sectional data. If you slide up and down the potential biases in the cross-sectional elements, you also slide up and down the biases for the CPI.

February 19, 2008

Wake up and go to sleep with AERA

This says nothing about my personal life but instead about my often-fragmented day: I started work early this morning at the auto dealer's, working on a paper I'm co-presenting at AERA with Stephen Provasnik, and I'm closing out my local coffeehouse by... working on the paper. I only wish I could have spent hours and hours of consecutive time on it. It's a fun analysis that no one else looking at teacher attrition (neither Richard Ingersoll nor anyone else) appears to have thought of, and if you're free Monday, March 24, between 2:15 and 3:45, head to the Soho Room on the 7th floor of the New York Marriott Marquis in Times Square (Soho Complex), and you'll learn some very interesting stuff about entrances into and exits from teaching. The key finding is...

Ah, heck, that would be giving away the show. Come, listen, and learn. We'll even pass around some very cool tables and figures.

January 26, 2008

Punching through

This morning, I drove my son to a workshop with a ninth-degree black belt in tae kwon do. While he was learning how to punch more effectively, I was writing some critical paragraphs in the article MS I've returned to. Quite pleased, and I've set up the rest of the argument's structure reasonably well. I need to add some more relevant data, revise the last section and abstract to match the revisions, and then I think I can send it out.

And then create a new manuscript based on this technical work. I've never done that before (written one manuscript that depends on another), but I juggle enough things now; why not add another?

(I originally wrote paragraphs in as paragraph sin. Hmmn...)

January 22, 2008

It was 20 years ago today, Green's theorem began to play... (oh, how WRONG)

I don't do paid work or shop on legal holidays. (Today, I did a bit of civic duty and a bit of personal stuff.) Or I try not to. This is the story of how my effort not to work on MLK Day both succeeded and failed.

Not today, quite, but around 20 years ago, I was in the middle of my first year in college, taking the second-year calculus sequence (linear algebra, multivariate calculus, and ordinary differential equations). Except for linear algebra, it was a fairly smooth (second-order differentiable) experience. (Lame calculus joke, there.) Last night, I happened onto an online multivariate calculus text and the description of Stokes's theorem. I looked at it, thought, "Well, that's vaguely familiar, but what the heck is a curl again?" So I backtracked, and today I tried to follow the bit about line integrals and Green's theorem. (Both Green's and Stokes's theorems are generalized multi-dimensional versions of the fundamental theorem of calculus. That part I remembered.) I had to reread the explanation of how one can derive the formula for the area of a circle using different choices for P and Q, and then I saw a connection to one of the issues I'm working on for a grant proposal resubmission due in March. Briefly, can one take a Lexis diagram in demography and use Green's theorem?
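For anyone who wants the same refresher, here is the statement I was rereading and the circle trick, in my own notation (standard textbook material, nothing original):

    \oint_{\partial D} (P\,dx + Q\,dy)
      = \iint_D \left( \frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y} \right) dA

Choosing P = -y/2 and Q = x/2 makes the double integrand exactly 1, so the area of D is

    A = \frac{1}{2} \oint_{\partial D} (x\,dy - y\,dx),

and parameterizing a circle of radius r as (x, y) = (r cos t, r sin t) reduces the right-hand side to one-half the integral of r^2 dt from 0 to 2*pi, or pi*r^2.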

So I brave Work Land, take out a piece of paper, draw a rectangle, and confirm that the line integral of N(a,t) (population at age a, time t), taken over the boundary of a Lexis-diagram rectangle, is the net number of deaths in the period and age interval. Yep, Sherman, you just re-derived the demographic balancing equation, and couldn't get any further. In reality, I think there is something more to be done here, especially since I'm working with a puzzle that I haven't yet solved (why estimates of the proportion of school life spent in certain grades are more robust to migration/transfer misspecifications than estimates of graduation).
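For the curious, here is roughly what was on that piece of paper, in continuous notation (my sketch, with migration omitted). Because life lines on a Lexis diagram have unit slope, the density N(a,t) serves as the flux across both coordinate lines, and it satisfies

    \frac{\partial N}{\partial t} + \frac{\partial N}{\partial a} = -d(a,t),

where d(a,t) is the density of deaths. Integrating over a Lexis rectangle R = [t_1, t_2] x [a_1, a_2] and applying Green's theorem with P = -N and Q = N gives

    \oint_{\partial R} N\,(da - dt)
      = \iint_R \left( \frac{\partial N}{\partial t} + \frac{\partial N}{\partial a} \right) dt\,da
      = -\text{Deaths}(R).

The vertical edges of the boundary contribute the two census counts, and the horizontal edges contribute the flows across ages a_1 and a_2, so the identity rearranges to

    P(t_2) = P(t_1) + \text{Entrants}(a_1) - \text{Exits}(a_2) - \text{Deaths}(R),

which is the balancing equation for that rectangle, and nothing more.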

But maybe there is a lesson here in just letting the mind wander away from Work Land when it should. It's just past midnight on the East Coast, so the holiday's over. With luck I'll be able to sleep without having this keep me awake, since I'm not capable of relearning this on the fly after midnight. (I suspect I never was able to do that, even in college.)

November 24, 2007

Zotero - love at first byte

This tells you something about my semester: it's taken until this weekend for me to try out Zotero, essentially an open-source citation-database system. (I do wait until others on the bleeding edge show a new tool is useful, but I try to be in the second wave of adopters of useful tools.) It's free, thanks to the Center for History and New Media, and while I have been frustrated with the expensive software my university purchased a site license for some years ago, at first blush Zotero is elegant and workable, including such things as snagging citations from WorldCat, JSTOR, and my own university's library catalog.

It took me about five minutes to set up, five minutes to play with, and four minutes to use to send a citation to students this afternoon. There is nothing in Zotero that you couldn't do manually with about 10 times the effort. But in the same way that learning a word processor's style system eventually pays off in hours, days, and weeks of time saved, so will Zotero. Goodbye, EndNote and ProCite. I have forsaken you for Zotero.

(Extra credit: how many pop-culture references exist for that phrase in the title, love at first byte?)

November 3, 2007

American Journal of Sociology review of "Schools as Imagined Communities"

I just found online the book review of Schools as Imagined Communities that Scott Davies wrote in the American Journal of Sociology. It's positive, and it includes the following in the conclusion:

As a whole, the book offers sociologists several themes to ponder, such as the uneasy relation between ideals of school community and formal equality, the tension between legal initiatives and subjective experiences of belonging, and the meandering path from political battle to institutionalized practice. This Canadian reader was particularly alerted to the tacit influence of the American Civil Rights movement and its legal landmarks, such as Brown v. Board of Education, on contemporary notions of educability and rights that are spreading around the globe.

For a variety of reasons, I'm very happy with this review: getting some confirmation from academics you've never met is always pleasant (the ego part), I can see about putting it in my promotion file (the professional part), and the visibility in one of the top sociology journals means the book is more likely to be purchased and assigned in courses, which will propagate the ideas and send royalties to the non-profits benefiting from the book (the idea and professional-society nachas part).

August 14, 2007

Graduation measure workshop for AERA?

Earlier in the summer, I toyed with the idea of putting together one of the pre-AERA intensive workshops on measuring graduation. I didn't have time at the end of July to finish it, but AERA's program chairs have kept the proposal window open through early September.

Practical question: would anyone be interested in such a workshop, focusing on measures of graduation and some practical tips for research projects?

July 22, 2007

More on inflated Florida graduation stats

Today's article by Margaret Susca on GED withdrawals in Florida (hat tip) again raises issues about an official measure that inflates graduation stats by making schools and districts unaccountable for students who drop out and enter a GED program.

I'm back in my office this morning, continuing reading student papers. Harry Potter is downstairs in the car. Incentive, anyone?

July 5, 2007

Wild young Danes

The last night of my trip to the Society for the History of Children and Youth conference, I walked through the center of Copenhagen and kept coming across young Danes in white hats, riding in decorated trucks, climbing the fountains of one of the central squares, and whooping it up (see the pictures below the cut).

When I asked some of them, they explained that they had just graduated from secondary school (high school), and this was the way they celebrated. With bullhorns, they were raucous, and they waved and shouted out to passersby on the street.

So high school graduation rituals exist in other countries, even if they differ from ours. I think this is better than high school prom drunks, but maybe they get drunk later. In any case, the growth of these rituals shows how graduation has become a standard expectation for European teens. I don't know how this evolved, but to an historian of dropping out, it's fascinating. Institutional life often becomes attached to rites of passage, and graduation rituals are part of that.


[Photo: Danish graduates climbing statuary in King's New Square, Copenhagen, June 30, 2007]

[Photo: Trucks in which Danish secondary graduates rode, Copenhagen, June 30, 2007]

July 2, 2007

Norrköping trip photos, set 1

You can now see photos of the trip to the conference and of its surroundings. I'm doing my best to stay up a few hours longer so I can get the pain of jet lag out of the way by tomorrow morning. So while I have loads to catch up on, a little travelogue:

June 26: Tampa-Atlanta-Copenhagen flights. Overnight flights to Europe are not designed to be fun. I anticipated getting little sleep, so at least I wasn't disappointed by the occasional and incomplete napping. Zonked in Copenhagen, shocked to discover something in the airport that's in the photo album linked above, frustrated that there was no Swedish train-line agent there to see if my ticket for the 12:44 train could be changed to the 10:44 train I might have been able to make, disoriented when my train was canceled and I had to luck into finding out how we were supposed to handle it (hop a commuter train, unpaid, to Malmö and get the ticket reservation changed there, in Sweden), and relieved to get the reservation changed in Malmö, where I found a quaint and very pleasant coffeehouse.

But my adventure wasn't over: heavy rains had warped or otherwise damaged tracks over a small stretch, so everyone had to get off the train and onto buses. I was probably the only passenger happy with the detour: As Bengt Sandin confirmed, tourists often pay high prices for precisely the rural-Sweden bus tour I got without any extra charge. I had a dermatologist sitting next to me, and we talked about how our 15-year-olds are environmentally conscious. This was either foreshadowing for something the hotel did (show Al Gore's An Inconvenient Truth over and over again on one of the movie channels) or just a common concern. In any case, I finally arrived at the hotel late at night, a little before sunset. I met some colleagues on the deserted main street at 10 pm (2200) and discovered the dearth of nightlife. Ah, well.

On the whole, the conference was quite good, in part because the organizers had arranged for an online upload, so we could read a bunch of papers before the conference. There were several I discovered I hadn't read but wanted to download after the conference. The repository is a nice feature that many conferences are now using. I heard two of the three plenary sessions, and they were linked thematically, though without any conspiracy. Kriste Lindenmeyer argued that Americans haven't shown the capacity to understand and cope with dependency as a concept, and Linda Gordon argued that innocent-child rhetoric has been damaging to children's interests when applied to public policy. Both are firmly rooted in U.S. historiography, but Gordon's message is the one I suspect is hardest to swallow, in part because of the deep roots of "child saving" and other patronizing reform movements.

One of the very nicest experiences was a conversation I had with one of the other presenters, after his session, and a Major Scholar who I knew strongly disagreed with the presenter's perspective. The Major Scholar didn't try to browbeat but just asked factual "how did this happen?" and "what happened to this?" questions, listening intently, finally asking a few questions designed to prod the author into rethinking a basic perspective. I don't know if the author picked up on the clues, but it was one of the gentlest acts of intellectual criticism by a peer I've seen in years. For those who encounter intellectual sadists: there are better ways and better colleagues.

The return trip by train was much smoother, and I had enough time to visit the center of Copenhagen, having dinner and then walking briskly as far as I could in the 3 hours before sunset. Those photos aren't up yet, and I'll have a bit more to say, because a few are directly connected to one of my areas of research. All I will say is that I saw plenty of cows, white hats, and European architecture, and I ate well. It was good.

When I returned to the hotel, I heard about the Glasgow car bombing attempt. I also saw the short clip of new British PM Gordon Brown talking to camera from a hallway in 10 Downing Street. Definitely not the glitz of Tony Blair, but I suspect the British public will welcome Brown not as the dour Scot but as the sensible Scottish PM. It doesn't hurt the impression I received of him that he has a history Ph.D.

The plane flight back yesterday: Copenhagen-Paris (Charles de Gaulle)-Atlanta-Tampa. Charles de Gaulle is a horribly confusing airport, and I'm one who takes O'Hare, Atlanta, and Dallas-Ft. Worth in stride. I made the plane without fuss, but I saw the panic in other passengers' eyes. Air France is definitely a different airline. Delta flight attendants on the way over announced that passengers over 21 could have one complimentary (alcoholic) beverage with dinner. Air France was willing to give you a glass of wine whenever. (I had two glasses of red wine on the nine-hour flight. Gasp. Horrors!)

I returned with photos, one scholarly book, two Swedish folk-rock albums, two postcards, and two newspapers (an edition each of the International Herald-Tribune and the Daily Mail). I have a few dozen e-mails and a bunch of tasks to organize, and it's back into the fray.

June 30, 2007

Leaving Norrköping

I had some very nice comments about my paper yesterday, so I hit the 'isn't this obvious?' sweet spot I target when presenting research. Or I think I did. Thus far, the questions raised about the intercensal estimating technique I'm using are the ones I've expected, so I'm getting more comfortable with the technique and wondering less when I'll be blindsided by things I haven't thought about. (Example: To avoid migration issues, I use only people born in the country, with a few exceptions. What I don't and can't avoid is the assumption that mortality is unrelated to educational attainment, which is fine for many parts of the world unless you're talking about countries with very high HIV-positive rates.)

My train back to Copenhagen leaves in about an hour, and I'll either be digesting the conference or napping. (Some of you may be able to do both, but not me.) And trying to read a book I've assigned for one of my fall classes. Or maybe student work that's on one of my small electronic devices. I'm still not done with jet lag, and I get another dose heading back to the states.

June 28, 2007

In Norrköping

I'm in the city's Scandic Hotel, a few blocks from the train station. I'll explain a lot more when I get home, but I need to get off the hotel lounge computer so others can use it, so all I'll say is, Wow. I go away for a few days, and the comments are flying fast!

Learning lots. Walking a lot. Taking lots of pictures. Had a wonderful and long detour because of flooding in mid-east Sweden yesterday that I'm probably the only person to appreciate in any way (got a bus tour through rural countryside that others probably pay lots for, just because rail lines were flooded).

Oh, and the hotel is playing Al Gore's An Inconvenient Truth back to back to back on one of the movie stations (without guests having to pay). You think the Swedes are trying to send a message to overseas tourists? Nah...

June 23, 2007

Travel-time Saturday

Profgrrrrl is off to Thailand for a month. My son and mother are coming back from Arizona, where they spent a week on and around the Colorado River. (My mother just called me from the Houston airport on layover.) I have been grading student work today so I have a little less on my plate when heading to the Society for the History of Children and Youth conference next week.

I'm reasonably well-prepared for this conference (though I'll try to condense the results of my paper down to two sides of a single sheet of paper), and I think I'm prepared for the travel, having acquired my power converter and a set of Koss "the plug" headphones and figured out one of the Koss mods. I haven't taken an overnight plane flight since I was in my early 20s, and I'm hoping I get at least a little sleep, as I'll be getting into Copenhagen at 9 am local time, or Middle of the Night, Tampa time.

Is anyone else heading somewhere interesting this week?

June 12, 2007

New Swanson CPI numbers: still flawed

The new Ed Week graduation numbers are out, based on 2003 and 2004 Common Core of Data figures. Among the 50 largest districts, the lowest Swanson CPI (Cumulative Promotion Index) in the nation is still Detroit's, at 24.9%. And the CPI is still flawed, partly because it relies on the unaudited Common Core of Data figures. As I said almost a year ago, Detroit's figures make no sense (and please accept my apologies for the long scroll you'll have to make below to get to the table):

Detroit CPI calculations, 2000-2004

Measure                   2000     2001     2002     2003     2004
9th grade enrollment    13,723   14,494   20,025   17,837   16,832
10th grade enrollment    8,860    9,291   11,275    9,899    9,326
11th grade enrollment    6,355    6,382    7,795    7,421    6,581
12th grade enrollment    5,329    4,618    6,020    5,244    5,604
Prior-year graduates        --    6,068    5,540    5,975    4,975
CPI                         --    40.4%    73.9%    21.7%    24.9%

Source: Common Core of Data.

Is there any chance that Detroit's numbers can fluctuate so wildly? Garbage in, garbage out. 
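For anyone who wants to check the arithmetic, here is the calculation behind the table in compact form (a sketch of the CPI as I understand Swanson's definition: three grade-to-grade promotion ratios times a diploma ratio, each comparing adjacent years). It reproduces the CPI row above:

    # Detroit enrollment and diploma counts from the table above.
    enroll = {
        9:  {2000: 13723, 2001: 14494, 2002: 20025, 2003: 17837, 2004: 16832},
        10: {2000: 8860,  2001: 9291,  2002: 11275, 2003: 9899,  2004: 9326},
        11: {2000: 6355,  2001: 6382,  2002: 7795,  2003: 7421,  2004: 6581},
        12: {2000: 5329,  2001: 4618,  2002: 6020,  2003: 5244,  2004: 5604},
    }
    prior_grads = {2001: 6068, 2002: 5540, 2003: 5975, 2004: 4975}

    def cpi(t):
        # Product of grade-to-grade promotion ratios, year t over year t-1...
        r = 1.0
        for g in (9, 10, 11):
            r *= enroll[g + 1][t] / enroll[g][t - 1]
        # ...times prior-year diplomas over prior-year 12th-grade enrollment.
        return r * prior_grads[t] / enroll[12][t - 1]

    for t in (2001, 2002, 2003, 2004):
        print(t, format(cpi(t), ".1%"))   # 40.4%, 73.9%, 21.7%, 24.9%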

May 26, 2007

Americas' secondary enrollment traces, late 20th c.

Thanks to a great Excel chart tip, I can now provide one way of summarizing synthetic-cohort educational attainment data from the following countries, using census data from the second half of the 20th century:

  • Brazil
  • Chile
  • Colombia
  • Costa Rica
  • Ecuador
  • Those born in Mexico and enumerated in either Mexico or the U.S. in 1960, 1970, 1990, or 2000
  • United States
  • Venezuela

All of this is courtesy of IPUMS International (the Integrated Public Use Microdata Series), a wonderful resource available free of charge to any researcher in the world. You can download this Excel file with the relevant chart and use the scroll bar on the right to highlight the key data from any country, period, and sex combination. More in the full entry...


Very roughly, each line indicates the proportion at each age that would have completed secondary education but only secondary education (no university degree), if a hypothetical cohort went through ages 15-35 with the same educational experiences implied for the intercensal period by the census microdata at each end of the period in question.

There are the usual number of quirks and quibbles—quirkles?—embodied in this chart, from some key model issues to the algorithmic details:

  • The census estimates at the base of this chart start with only those born in the country, with the exception of Mexico (explained below)
  • I assume that there is no substantial differential mortality by educational attainment for the years in question
  • I assume similarly that out-migration does not substantially affect attainment (again with the exception of Mexico)
  • I estimate the cross-sectional proportion with a credential at an exact age as the average of the proportions in the surrounding single-year age intervals, smoothed for the Latin American countries at many ages with three-year averages across age intervals. Many of the increments are again smoothed with moving three-year averages and then fixed at 0 if slightly negative. (A rough sketch in code follows this list.)
  • The model I'm using (from Carl Schmertmann's 2002 article [$]) is an estimate of intercensal increments without weighting by person-years, unlike most intercensal estimate techniques.
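For the algorithmically curious, here is a rough sketch of those last two smoothing steps in code (my reconstruction, not the actual worksheet):

    def moving3(xs):
        """Centered three-year moving average; endpoints left unsmoothed."""
        return [xs[i] if i in (0, len(xs) - 1)
                else (xs[i - 1] + xs[i] + xs[i + 1]) / 3
                for i in range(len(xs))]

    def exact_age(props):
        """Proportion with a credential at exact age a, averaged from the
        surrounding single-year age intervals [a-1, a) and [a, a+1)."""
        return [(props[i] + props[i + 1]) / 2 for i in range(len(props) - 1)]

    def increments(p_start, p_end):
        """Intercensal increments, smoothed and fixed at 0 if slightly negative."""
        raw = [b - a for a, b in zip(p_start, p_end)]
        return [max(0.0, x) for x in moving3(raw)]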

Of all these issues, the migration assumptions are the ones that will raise the most eyebrows, and I hope that if you've read this far, you're wondering why I combined the U.S. and Mexico census data. The basic answer to the latter question is because I could. Both Mexico and the U.S. conducted censuses in 1960, 1970, 1990, and 2000, and I was curious if the results would be affected by including U.S. residents born in Mexico. I discovered that for some ages (older teens and those in their 20s), more than 10% of those born in Mexico were residing in the U.S. for some of the censuses. That's a fascinating statistic in itself, and the existence of the same-year censuses suggests a potential for cross-national social histories using the censuses in question. I'm still puzzling over questions of "effects," since we don't know who spent which years where from the census stats, just the end result for the population as a whole.

I used the secondary-and-only-secondary-attainment line because it shows both secondary and college attainment. The up-slope shows secondary attainment, and the downslope shows college attainment (absent some late secondary graduation).

Bon appetit!

Oh, yes: For those following these things, my son's team won their first tournament game today, 14-2 (ten-run rule after four innings). By doing so, they've saved their #2 and #3 pitchers for tomorrow's game. Then everything gets harried, regardless of the results, and all teams go through their experienced pitchers. Spahn and Sain, then pray for rain?

May 25, 2007

Holding my fingers back, regretfully

I'd love to type a few hundred words on the major testing story in Florida this week, the Department of Education's acknowledgment Wednesday that they blew the scores on last year's tests. But here's why I haven't:

  • This summer I'm teaching in Sarasota Wednesday evenings, and all that afternoon and evening I was either prepping, in class, or driving between Sarasota and Tampa.
  • Yesterday was the last school day for my children, and I picked up one child at school and then had my beautiful, wonderful children with me for the rest of the day. My daughter decided that she wanted me to drive her to the afternoon martial-arts class instead of going with her mom to the evening class, and my son had baseball practice a little later. (His Little League team is in the county's championship, having won their local league.)
  • I'm on deadline with a paper for the meeting of the Society for the History of Children and Youth, "Comparative Educational Attainment Portraits 1940-2002." The paper has to be uploaded by a week from today.

That, plus a few other obligations, has meant that I've chosen not to engage in link sausage or discussion. And when I discovered this morning that I had the same attainment figures for Venezuelan natives in Venezuela and Mexican natives in the U.S., that gave me some confirmation that I need to concentrate on the task at hand. (Short explanation: I put a duplicate copy of the Venezuela SAS file in the folder where I kept the other materials and mixed them up. All sorted out now.)

(For the 2.5 readers keeping track of my research, this is an extension of last year's Social Science History Association paper, where I tried out some new estimation techniques on U.S. census data. That worked quite well, so now I'm using openly-available historical international census data.)

May 21, 2007

Indulging in number-crunching on teacher flows

A friend and I have been batting some research on teacher demographics back and forth over a few years, and it's my turn this month to do some number-crunching involving the Current Population Survey (CPS) and simulations of what would happen to those entering and leaving teaching assuming a stationary population (as if the size of the population and all the age-specific rates remained constant). This is a way to check Richard Ingersoll's claims about early attrition against some national data (putting the Schools and Staffing Survey and CPS together).
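To illustrate what 'stationary' means here, a toy sketch with made-up numbers (not our model or data): hold the number of entrants at each age and the age-specific exit rates constant, and the age distribution of teachers settles into a fixed profile, with total entries equal to total exits.

    # Toy stationary-population sketch; every number here is hypothetical.
    ENTRY = {23: 800, 28: 300, 35: 150, 45: 50}    # entrants per year, by age
    EXIT = {a: 0.08 if a < 30 else 0.04 for a in range(22, 66)}  # leave rates

    pop = {a: 0.0 for a in range(22, 66)}
    for _ in range(300):                           # iterate years to stationarity
        nxt = {a: 0.0 for a in range(22, 66)}
        for a in range(22, 65):
            nxt[a + 1] = pop[a] * (1 - EXIT[a])    # survivors age one year
        for a, n in ENTRY.items():
            nxt[a] += n                            # constant inflow of entrants
        pop = nxt                                  # the age-65 group retires

    print(round(sum(pop.values())), "teachers in the stationary state")

The payoff of the age-based look is that entries and exits can be compared at each age without following any individual career.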

To put it briefly, what we're going to come up with is going to be a very different look at the flow into and out of teaching, based on age rather than time in the profession. There are some very good reasons to use this approach rather than what one might assume is the "natural" way of asking what happens after one enters teaching. Both are perfectly fine if you have the data for it. We have the data to look at age, and the results are going to surprise a bunch of people (but probably not you, dear reader). No, no numbers to put up here, since I need to finish the work and get, uh, my co-author's feedback/vetting/check. Oh, yeah, and we're hoping to publish this in a journal. 

I did the last bit of data downloading this evening to finish up the analysis with one of the four panels of data we're working on. It's simple in concept but very detail-oriented in practice and required a few hours of uninterrupted time... at one of my regular near-home "offices" (i.e., a coffee shop). Since the last week has been incredibly fragmented, I'm treating this as indulgence.  Yes, research is indulgence. And a blast.

May 15, 2007

Graduation news

Several items of note related to high school graduation:

  1. Yesterday, the Texas House of Representatives initially approved the elimination of the 22-year-old requirement that high school graduates pass a state exam. The state senate's proposal would replace the generic achievement tests with end-of-course exams. The House proposal faces another vote today. (My personal prediction: whether or not the House approves it, the complete elimination of the requirement is unlikely. The Senate's proposal will probably win the day.)
  2. Florida's Office of Program Policy Analysis and Government Accountability has issued a report on what happened to graduation in Florida after the shift in the state's graduation exam from an older exam to the FCATs. The basic story: there isn't clear evidence that the shift in exams led to decreased graduation, since the cohort analysis suggests a moderate rise in graduation across the boundary years (before and after the switch). This is not a surprise to me, since the research literature is mixed, and one wouldn't expect clear evidence from a single state or from the relatively crude measures available in Florida. Subtler and more interesting information about passing rates at different grades (i.e., the original test-taking vs. retakes) is buried in Tables B-2 and B-3, on p. 10. Note: the retakes are, in effect, different tests because they don't contain the performance items that are in the 10th grade FCAT, and there are the usual caveats on the cohort analysis given Florida's W26 problem.
  3. Secretary Spellings and Senator Kennedy co-wrote (or had co-ghostwritten?) an op-ed on the "high school dropout crisis." Flashback to my book on the subject (check available libraries): Yes, dropping out is a problem, but not necessarily for the human-capital reasons Spellings and Kennedy use. Dropping out became a headline issue in the 1960s as graduation trends were rising, because it marked when graduation became the norm for teenagers. While there are human-capital effects of education, there is no clear threshold when failing to graduate from high school has a different individual or social effect. We should be worried about the differential graduation rates for civil-rights reasons more than human-capital reasons. On that front, it's clear that the differences in graduation reflect and are part of broader inequalities in educational outcomes.

And for those who don't want to buy my 11-year-old book, there's also my article, High-Stakes Testing and the History of Graduation (2003). (Incidentally, someone on Fordham's The Gadfly staff misread the article when it came out 4 years ago, claiming that I tried to use 20th-century history to predict the effects of high-stakes tests that must be passed to earn high-school diplomas. You may not be surprised to learn that the author thinks they will cause graduation rates to decline, dropouts to rise, and confusion to persist over the "social meaning of diplomas." No, folks: That's not what I said. B- on comprehension. Aaargh, indeed.)

May 5, 2007

Grades and puns

As I left the house this afternoon for Chain Cafe, I told my dear spouse, "I'm leaving for a few hours, hopefully to finish my grading." I swear that the pun was unintentional (and she didn't catch it until I groaned).

I only had to wait half an hour before getting my pun-ishment: the rejection of an article manuscript ... on migration and graduation. The reason was a lack of fit with the journal's scope, which I had almost half-expected: I had chosen the journal for the audience and not because it was the best fit. So it's time to hunt for another outlet or five.

(Incidentally, while the permalink for this entry is 900, I only have 848 entries.  No, I don't understand the math, either.)

March 4, 2007

Sweden ho! (in June)

In late June, I'm headed outside North America for the first time since 1973. It's for the 2007 meeting of the Society for the History of Children and Youth, and appropriately enough I committed myself to looking at educational attainment internationally.

Even though I'm late with the registration, it's a virtual steal, only 1,000 kronor, and I can get a hotel room for 540 kronor/night (apart from VAT, of course). Cool! So I have my plane reservations, registration, and hotel accommodations in Linköping, and I think I have everything I need (visas apparently don't need to be acquired before travel), with two exceptions...

I can't make train reservations between the Copenhagen airport and Linköping University yet, and I need to find a hotel in Copenhagen for the last night before I fly back. I just need to wait a few weeks for the train reservations, but if someone has suggestions for a hotel that's a little different from the Danish version of corporate hotels (e.g., Scandic Sydhavnen), please let me know!

November 22, 2006

Reading of the day on graduation rates

Strongly recommended (if belatedly found!): Charles Hirschman, Nikolas Pharris-Ciurej, and Joseph Willhoft, How Many Students Really Graduate from High School? The Process of High School Attrition. This working paper is a revision of a poster session I saw at the Population Association of America meeting in March, and it's very solidly done, including a clever way of estimating the true out-migration/out-transfer rate of students (i.e., sorting out who is really transferring versus dropping out).

October 30, 2006

New paper on attainment using census records

My fall conference paper for the Social Science History Association, Long-Term Educational Attainment Trends in the US: A New Look (2.0 MB), is available as a working paper. Lots of small-multiples. I'm sure there are lots of omissions and errors. (Feel free to point them out in comments!) The text is fairly bare-bones, because of the 23 pages of figures.

Incidentally, it was a blast doing the number-crunching.

October 6, 2006

Statistical magic and record linkage

Highly recommended link on a way-cool statistical technique in record linkage: The Bristol Observatory, where Steven Banks and John Pandiani have developed probabilistic population estimation, using two data sets with just birthdates. It's not really magic but relies on a classic puzzle in probability (and an elementary one to solve, apparently).

Banks and Pandiani developed this technique to solve a serious evaluation problem with mental health programs: how do you identify who used two services, or showed up in two different places, if the two agencies cannot reveal personally-identifiable information for privacy reasons? 

They went around that problem by rephrasing it: the operative question for program evaluation is not who shows up in two places but how many. The first requires invading privacy to some extent; the second, not at all. Their technique requires information only about birthdates and such other non-identifiable information as would allow them to subdivide a population for greater accuracy, but no names, addresses, phone numbers, or Social Security numbers. They don't even need to know the unduplicated birthdates. It also bypasses all the attendant problems of keeping separate databases up-to-date.
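Here's my back-of-the-envelope version of the underlying puzzle (a sketch of the idea as I understand it, with a made-up number of birthdate categories; the patented procedure surely handles the details more carefully). Among n people drawing from k equally likely birthdates, the expected number of distinct dates is k(1 - (1 - 1/k)^n), and that formula can be inverted to estimate n from a distinct-date count alone:

    import math

    K = 365.25 * 30   # hypothetical: birthdates spread over a 30-year span

    def expected_distinct(n, k=K):
        """Expected number of distinct birthdates among n people."""
        return k * (1 - (1 - 1 / k) ** n)

    def estimate_people(distinct, k=K):
        """Invert expected_distinct: how many people imply this many dates?"""
        return math.log(1 - distinct / k) / math.log(1 - 1 / k)

    # The evaluation trick: pool the birthdates reported by agency A and
    # agency B, count the distinct dates, estimate the distinct people
    # behind them, and compare with n_A + n_B. The shortfall estimates
    # how many clients used both services -- no names required.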

Is this applicable to education research and my own work? Well, suppose you want to know whether a specific intervention leads kids to graduate from high school, but the local school district (or some relevant agency) won't release identifiable information. All you need are the birthdates, sex, and maybe ethnicity of those who graduate from the district (though since ethnicity is more malleable than sex, that's a problem), and you can estimate the number of graduates who also came from your participants (or a segment of your participants).

Banks and Pandiani have patented their work, so someone wanting to use this specific procedure needs to work with them, but there is another technique that is similar and publicly usable. I'll post on that one after I've had a chance to absorb it. (I have a demography master's and can read statistical explanations, but sometimes I need more time to absorb them.)

But definitely go to Banks and Pandiani's website.  And check out the video, which explains the principles of their technique!

October 2, 2006

The answer to the ultimate question of dropping out, graduating, and unit records

As someone who blogs about K-12 and higher education, I should perhaps do something productive with this crossover, right? And since at the conclusion of my next solar circumnavigation I'm going to be the answer to the ultimate question of Life, the Universe, and Everything (at least according to Douglas Adams), while I may not know the ultimate question of Life, the Universe, and Everything, at least I can frame the ultimate question of dropping out, graduating, and the unit-record database controversy:

Is there a way to collect information efficiently that would allow us to track high school graduation and also college attainment given transfers among institutions and address the student-privacy concerns of those who oppose a federal unit-record database?

I think the answer is a fortunate yes. Oh, wait.  That's not the ultimate question.  The ultimate question is How?

And for that answer, you'll have to wait until I give a talk, "The Graduation-Rate Debate, Higher-Education Unit Records, and Public Policy: Serious Academic Discussions, Political Tragedy, or Farce?" at the Minnesota Population Center Monday, November 6, 2006, 12:15-1:15 pm. (I'll also be gut-checking the higher-ed piece with a few institutional-research folks beforehand.)

August 24, 2006

Comparative studies in special education: a brain-bursting exercise

This evening, I'm finishing up my reading and note-taking on the non-U.S. history of special education placement for a review article on inclusion. I'm writing a small section, and I knew it would come to this: someone who knows just about the U.S. (me!) has to search for and read the secondary literature on comparative perspectives. And it was just as I feared: big enough that I couldn't quickly grasp it, and small enough that I really could read the entire field or close to it. I'd been hunting and pecking away for over a month with moments stolen here and there, but the primary author came down with the hammer (properly so) and told me, Thou shalt redo your section and do it quickly. That means now or yesterday, whichever is earlier. (Un)fortunately, I found an extra book or three today and also realized I needed to scavenge a three-volume reference work to really flesh it out. (The problem with wonderful online resources is that when a field is mostly journals, you sometimes forget to check books... silly historian who should know better: me, again.)

You didn't think it would be simple, did you?


So I headed to the library this afternoon to do that. Shortly after I entered the library, I realized a few things: I had forgotten my laptop for notetaking in the reference section, I didn't have an umbrella, and the skies had just opened.  Great, just great.  Oh, yes, and I had forgotten my reading glasses, so I had about an hour of useful reading time before a headache was inevitable.

So I tried a technological crutch I've never used before: the cell phone. Scrounging through the encyclopedia, I'd find and read an article and then call my own voicemail and leave a minute-long message: author, title, pages, and the bit I wanted to extract. I left the reference section sometime later having made 10-11 calls. So far, so good. Then I headed up to the stacks, gathered the volumes, and brought them down again. Still pouring. Okay, check out the books, extract cash from the in-library ATM, and get a latte in the foyer Starbucks. (I get the decaf, nonfat version, or what one wag barista tells me is the "why bother" drink there.) I sit down, skim through a third of one of the books, and realize it's down to a drop every 10 seconds, so it's time to rush to my building. Whew!

... until I got to my office and realized I'd have to transcribe my own dictation. Note to self: never torture a dictation secretary with your talking. Please. By the time that was done and I had finished notes on all but one volume, I headed out (with that last volume) to pick my son up from school. The poor guy had had a headache coming on since 1 and didn't know he could go to the school nurse and beg the nurse to get approval from me to dose him with ibuprofen. So we stopped by a pharmacy and came home so he could rest (which worked, in combination with the Motrin), and after I told the story of the parent-teacher conference that morning (at my daughter's high school), I realized I still needed to finish that reading. Off to the local ChainCafé (where I sit tonight almost, almost done).

And so, after all is said and done, I'm left with about 4-1/2 single-spaced pages of notes on various countries and a few broader models. I suspect I'll need to condense this to about 2 pages of double-spaced text (and also delete some of the U.S. material that was in the earlier draft). And I hope to return it to the primary author tomorrow, despite two meetings.

The practical problem is that this has sufficiently engaged me that I want to puzzle the patterns out. The most sophisticated comparative model I've read, by Rosemary Putnam in 1979 (Comparative Education, vol. 15, pp. 83-98), addresses the generic size of special education, not placement issues, and different looks at her data suggest either a stage theory of national development or a wealth effect. (The data are a little different, suggesting by the relationship with health expenditures that it may be a matter of welfare-state development as well.) A book by Mazurek and Winzer in 1994 (Comparative Studies in Special Education) specifically suggests a stage theory of inclusion, except for some pesky countries that have well-developed, "mature" special education systems that are largely segregated (or were at the time): Japan, Russia, Taiwan, Hong Kong, and Czechoslovakia. So that throws a wrench in the comparative developmentalist (i.e., stage) model.

All right: time to get another drink, ponder my notes, and think of a way to organize this material.

August 11, 2006

Migration and graduation, from NIH reviewer standpoint

Reviewer comments on my NIH proposal have come back, and the substantive comments make me smile. One of the weaknesses: potential problems associated with the measurement of population mobility. Bingo. I had submitted this proposal before I had finished the article manuscript I submitted in the summer, which sketches the relationship between mobility and graduation measures to a greater degree than my proposal.

Fortunately, one key measure I intend to use for the Georgia historical work—proportion of school life spent in high school—is not nearly as sensitive to migration as either graduation or net flow estimates. In addition, I could look at the estimates of the various items as curves (in a migration-to-indicator plane) instead of point estimates.

The priority score (221) and the percentile (41.5) are consistent with the text recommendation for further consideration with a "very good" label. The program-level decision on funding comes in October, so I wouldn't hear back (on a rejection) soon enough to turn around a revision for the February deadline.

Historical note: see August 2004 entry where I discuss the first round of reviews.

And now it's off to do various mundane things...

July 13, 2006

RAND researchers estimate Pittsburgh graduation, call for confirmed/audited withdrawal codes

John Engberg and Brian Gill's new study of Pittsburgh graduation rates estimates a longitudinal 5-year graduation rate by assuming that transfers have the same risk of graduating or dropping out as those who stay in the system. They also repeat my call for documentation and auditing of transfer exit codes. And their report is getting flak from the Pittsburgh board.

Hat-tip to Andrew Rotherham.

July 1, 2006

Semipenultimate thoughts on graduation rates (includes Losen response to Mishel)

I'm acting one more (and last) time as a messenger/archivist for the discussion over graduation rates. Attached is a response by Dan Losen to Larry Mishel, after both of them (and Joydeep Roy) had commented extensively on a prior entry on graduation rates.

Now that we've come to the end of this round of debate, let me separate the different issues and lay out my judgment. (And, I promise, I'll discuss practical solutions tomorrow.)


  1. National Education Longitudinal Study (NELS:88) as a data source. Mishel and Roy use this as evidence that high-school graduation is likely to be higher than what Greene, Swanson, et al., have been saying. As any data collection would, it has some flaws. I'm more concerned than others apparently are with the exclusion of students with disabilities and others from the baseline, as well as the possibility of cohort effects, with this group more likely to graduate than the next half-decade or so of eighth-grade cohorts. But I think that budges the graduation rate for the NELS:88 cohort by 5-6%, maybe a bit more, in general, but not dramatically. Disaggregating by population group is more hazardous, I think. Big picture: NELS is counter-evidence against claims of dramatically low national graduation rates drawn from grade-enrollment-based data, not a substitute for keeping tabs on graduation more recently. So in the end, NELS:88 doesn't tell us much about graduation rates in 2003.
  2. Current Population Survey (CPS) as a data source. Because CPS does not survey institutionalized populations (biggest sources: prisons and the military), it's difficult to tell how that restricted universe biases data for subpopulations (most for African-American males, as Mishel et al. acknowledge; much less for most other subgroups). The questions that CPS has asked about graduation have changed over the years, as have the sampling frames, making comparable estimates across long stretches of time more difficult. CPS cannot get at geographic areas smaller than states, and the within-state subpopulation groups—eeek. Don't bet your life on their accuracy.
  3. Common Core of Data as a data source. As I've discussed before with the example of Detroit's 2002-03 enrollment data, the Common Core of Data is an unverified, unaudited database. Enough said, right?
  4. Using ninth-grade enrollments in graduation-rate formulae. As Rob Warren's 2005 article (PDF) and Larry Mishel and Joydeep Roy (2006) each explain, using ninth-grade enrollment in the rate formula conflates first-time enrollment in high school with ninth-grade retention. The direction of that bias is unclear. On the one hand, a large amount of retention might lead to an overestimate of the first-time ninth-grade population and thus a downward bias on graduation rates—when the preceding cohort(s) either had higher retention rates or larger cohort sizes. But there are certain conditions when retention might lead to an upward bias in graduation—when there is substantial eighth-grade (or earlier) retention and when the preceding cohorts had lower retention rates or smaller cohort sizes. Essentially, it's a question of where the "lagging" part of the cohort is accounted for and the relative sizes of those lags.
  5. Longitudinal graduation rates. In theory, they're better than the quasi-cohort measures proposed by Greene and Winters, the Boston College/Harvard group, or Warren or the quasi-period measure proposed by Swanson. But that's in theory. As I explained Thursday with regard to Florida, there are plenty of ifs in the trustworthiness of attempts at true cohort measures, from the definitions of what "transfer" codes count to the confirmation/auditing of records.

The whole thing is making folks like Miami Herald reporter Matt Pinzur cry AAAARRRRGH!!, but there are some solid statements we can make:

  • The current system of data collection is inadequate right now for providing a trustworthy graduation rate. (This should, incidentally, make us very nervous about relying on school statistics in general for high-stakes decisions. In theory, a graduation rate should be among the easiest statistics to calculate.) Even in states with student-level database experience, such as Florida or Texas, there are problems.
  • Using ninth-grade enrollment data is a poor decision. Even using eighth-grade enrollment, such as Warren does, needs to be checked with evidence about eighth-grade retention. I favor using birth year rather than first year in ninth grade as the cohort basis.
  • The whole ball o' wax (statistically speaking) goes down the drain if you don't have accurate migration statistics for the student population. This hasn't been part of the debate thus far, but it's something at the heart of the article manuscript I submitted last week. And the bias can go either way, incidentally: dropouts reported as transfers will inflate the graduation rate, but students who move in the summer without ever having the receiving school request a transcript (possible for ninth-graders who fail every course) would artificially deflate the graduation rate.

The best system would be an individual student-level database with built-in editing and confirmation steps as well as an annual system-wide audit of accuracy and surveys for population migration. From that, you can build almost any accurate rate and account for various things. Because people will disagree whether GEDs and other non-standard diplomas should count, states should provide multiple rates (including and excluding non-standard diplomas).

As I promised, either tomorrow or at the end of the NEA Representative Assembly (where I'm volunteering at microphone 34, in the middle of the California delegate seating in the bleachers), I'll talk about solutions—what we can do to improve the likelihood of teens graduating, without waving a magic wand.

June 30, 2006

NYC graduation rates

In today's NY Times story on school graduation rates by David Herszenhorn, there's an important tidbit: the schools hired a firm to audit records. I didn't know that when I wrote up auditing as one recommendation for Florida's graduation-rate calculation, but I'm delighted to hear it now. A preventive step to ensure the accuracy of school records? Who woulda thunk it? I'll be curious to see what the actual audit process was.

It's unfortunate that the chancellor's office chose to release the rates selectively, just for 15 small schools (part of the city's small-school initiative). I'm happy that graduation is up at those schools, but it would be a smarter, more credible move to release all of the school stats on the same day, so it looks less like cherry-picking the results. It'll be a few more years before we know whether the small-schools initiative is really an improvement or is too much of "big schools in drag," a phrase I've seen attributed to Michelle Fine in Washington Post, Philadelphia Public School Notebook, and InsideSchools.org articles on small schools.

June 29, 2006

Florida graduation rates inflated

The paper linked from this blog entry describes how Florida's Department of Education inflates the official longitudinal graduation rate by approximately 9-10% through two statistical definitions:

  • It excludes from public-school responsibility all those dropouts who immediately enroll in GED programs.
  • It includes GED and special-education diplomas in the general graduation rate.

I have discussed this problem before in this space, but until recently I did not have data to quantify how these definitional quirks affect the actual numbers. In the last two weeks, staff from the Department of Education sent me additional (if limited) information on the cohort calculations at the state and county levels since 1999, and they allowed me to correct at least one of the problems (the excusal of so-called W26 withdrawals from school responsibility) and take a decent guess at the effect of the other.

Part of this information appears in the manuscript I've sent off to a peer-reviewed journal, and usually I frown on touting research publicly until it's been reviewed. Yet the reports released last week by Ed Week and the U.S. DOE have given me an opportunity to point out some of the problems. And much of the detail here would never be publishable in a national refereed research journal—it's too specific to Florida in some instances. Yet it should be available publicly. So, what's the ethical stance?

What I've chosen to do is release it here on my blog and send it to Florida education reporters, but no announcement to national reporters. And I think I'm careful in the paper itself to explain that it is not a refereed publication. The analysis is pretty simple, and anyone can do it.

June 26, 2006

Migration and graduation MS

Well, the MS is now submitted to a journal that accepts papers online. Hurrah! No copying-several-times-over-and-mailing. In the end, I added an additional section overnight, shoved some things around, did major tinkering in the Microsoft Equation frames (when plain text just doesn't handle integral or summation signs).

The conclusion of the paper: "Become my minions and let me rule Graduation Rate Land. Bwahahaha!" No, that's not the conclusion of the paper, and I do not intend to join Evil Overlords Anonymous (but see a song and Harry Potter fan fiction for more on that). I do try to make a mild case in a few spots for the superiority of measures not based on grade level, but the main point of the paper was to demonstrate how sensitive most graduation-rate formulas are to migration, and the problems that will be encountered as states try to meet their obligations under the National Governors Association compact.

Blogging about my research raises an interesting question, since the target journal has anonymous reviewing: would someone who reads my blog have to recuse herself or himself from being a referee? It's akin to walking by and reading a conference poster closely and then receiving a review assignment of the later paper. But sometimes you just can't get around these things, and besides, you can be wrong about your assumptions.

June 25, 2006

Article manuscript on graduation and migration drafted

Well, the draft is done. Time to go through it with a fine-toothed comb for grammatical kerfuffles, confusing terms and symbols, and citation errors. Right now, the conclusions are fairly strong: you can't get accurate graduation rates without accurate information about migration. And that means you just can't have accurate graduation rates at the school level, even though NCLB demands it.

June 24, 2006

Grant applications, one rejection and one improvement

This is not only the week of graduation-rate reports, it's also the week of receiving news on two grant applications. The short précis sent to the U.S. Department of Education's unsolicited research competition was rejected (i.e., they didn't want a full proposal), and unfortunately a review of the recommended regular competition for the next fiscal year confirms that it wouldn't fit any of the priorities. That was a longshot proposal.

The shorter odds are on the revision of my 2002 NIH proposal. This time around, I had changed the measure to one based on age, had narrowed the scope, and had gone to some trouble with data collection in the intervening years. Last time around, the percentile score for the study section (demography) was 52.0, which in NIH tradition is the reverse of normal percentile scores, meaning that a slight majority of proposals in the prior year had been scored superior to my proposal. This time, the percentile is 41.6, a moderate but definite improvement. If the funding cutline is generous (unlikely!), it'll get funded automatically. I could also get funded by a program-officer recommendation for select pay, but that usually happens on the last revision. My prediction: no funding this time, but this has a good shot at funding with another revision.

At least two other grant proposals to be written this summer. One is another longshot, and another is another revision (or maybe it'll count as a new proposal, depending on what the program officers advise), this time for NSF.

June 23, 2006

It's graduation week!

One more belated entry in the grad-rate report sweepstakes: the National School Boards Association's Center for Public Education guide on graduation rates, released yesterday. (Hat tip: Andrew Rotherham.) Sometime overnight, my brain convinced me that this slew of reports was distributed over two weeks, not one. But it really has been just one week.

And on the way back from a music lesson today, we listened to Garrison Keillor's 1998 "Graduation" (one of his News from Lake Wobegon stories, in a collection purchased on Father's Day, last Sunday). How appropriate...

Migration and graduation

When two major releases in two weeks highlight graduation measures in the press, it's time to kick the writing of my paper on migration and graduation into high gear. The book MS will wait. I need to read one revised paper for the journal today, and take my son to a music lesson, but other than those issues, it's time to roll up my sleeves, ice that leg, and get cracking. Incidentally, for everyone who wants to know: unmeasured migration pollutes any graduation measure. There's no mystery in that. I hope to quantify that relationship, or, rather, I have quantified it for a few cases and will use those cases to discuss the general problem. I know the target journal, and I hope to finish it by early next week and send it off.

(Regarding the leg-icing, the orthopedist yesterday said the bones are fine, but the spot where my son's line drive hit my leg will be swollen for about 4 weeks, so I now have the wonderful clothing option of either a beige compression stocking on one leg or a beige compression stocking on one leg, just in time for the summer fashion season. But it eliminates the add-on swelling that was causing increasing discomfort Tuesday and Wednesday. I can now concentrate for more than half a minute at a time!)

June 22, 2006

Mishel on Swanson

In correspondence, Larry Mishel sent me material over the last few days. With his permission, I am posting his material below. In the meantime, of course, the USDOE put out its report today on the "Averaged Freshman Graduation Rate", which is also based on CCD figures (hat tip: Andrew Rotherham). Of those, the numbers for Nevada (dropping precipitously in one year) don't look right.

But, to Mishel's comments, below the fold. I'll say this about using grade-based data from the CCD: one of the problems we've discovered this week is how friable that data is (and that's something that Chris Chapman and his coauthors at NCES noted in the AFGR publication today). But the other is a problem Mishel notes with using 9th grade enrollment data, which conflates first-time high school students with those who are repeaters. When I first came back to the quantification of attainment a few years ago, I tried to model retention issues, using both state-provided data (for a few states) and then thinking about it as a stable-population problem. Neither approach was satisfactory. That's why Warren uses 8th grade enrollment. I'm sticking to age-based data where available.

In any case, you can judge the issues for yourself, on the jump.

From Mishel:


The Bulge and Retention


Let me see if I can convince you that retention and the ninth-grade bulge are serious problems for Swanson. Recall that his formula iterates declines in enrollment back from diplomas to 9th grade. Every decline in enrollment from year to year counts as dropping out. It is well known (by Jay Greene, who acknowledges it, and among everybody else but Swanson) that enrollment in 9th grade is far above that of eighth grade (therefore a 'bulge') because there are a lot of students, especially minorities, retained in 9th grade; the bulge is about 12-13% overall and 25% for minorities. This means that 9th grade enrollment is far above the count of entering ninth graders, giving Swanson's formula the equivalent of a far too large denominator. It is easy to see how large the bias is by just extending his formula back to 8th grade: it shows an eight-percentage-point higher graduation rate and twelve-percentage-point higher minority graduation rates (see our book, Table 10, page 64).

The Texas example shows how misleading his formula can be. I chose Texas both because it has very high retention rates and because Texas provides data on retention by grade by race. [He attached a file related to retention data in Texas.] The first table (page) basically shows that the ninth-grade bulge—the extent to which 9th grade enrollment exceeds 8th grade enrollment—is fully explained by retention (in some states there may be issues of transfers into public schools from private schools).

The following table shows the impact on Texas rates and on the comparison of Texas to the nation. The first column reproduces Swanson's published rates for 2001, which assume that all ninth-grade enrollment is 'first-time'. The second column uses published Texas data on retention to recompute graduation rates per 'first-time' ninth grader (we are employing a straight diploma-to-9th-grade ratio to keep things simple). These calculations show that Swanson understates graduation rates in Texas by wide margins (13 and 14 percentage points for blacks and Hispanics) and overstates the race/ethnic gaps by 6-8 percentage points (increasing them by more than half their value). These are pretty large errors.

Because states and districts vary so much in the extent of retention, these errors in Swanson's formula are larger in some places than others; therefore, Swanson's measure generates faulty comparisons across jurisdictions. Consider a comparison of Texas to the nation in the last columns. By Swanson's measure, Texas is below the national average, but with a corrected measure, Texas is substantially above the national average and has smaller race/ethnic gaps (though these national numbers are biased as well, because of retention, but not as much as Texas).

Table 2. Bias in Swanson Measure from Ignoring Retention in Texas

                            Swanson     Corrected for              National    Texas relative to national average
Population                  Measure*    retention**      Bias     average     Swanson measure    Corrected measure

Total
  Diplomas                  65.0        65.0
  9th-grade enrollment      100
  First-time 9th graders                84.8
  Graduation rate           65.0%       76.7%            11.7%    68.0        -3.0               8.7

Black
  Diplomas                  55.3        55.3
  9th-grade enrollment      100
  First-time 9th graders                81.0
  Graduation rate           55.3%       68.3%            13.0%    50.2        5.1                18.1

Hispanics
  Diplomas                  55.9        55.9
  9th-grade enrollment      100
  First-time 9th graders                79.3
  Graduation rate           55.9%       70.5%            14.6%    53.2        2.7                17.3

Whites
  Diplomas                  73.5        73.5
  9th-grade enrollment      100
  First-time 9th graders                91.4
  Graduation rate           73.5%       80.4%            6.9%     74.9        -1.4               5.5

Gaps
  Black-White               18.2%       12.2%            -6.0%    24.7        -6.5               -12.5
  Hispanic-White            17.6%       9.9%             -7.7%    21.7        -4.1               -11.8

* Column Source: Christopher B. Swanson, Who Graduates? Who Doesn't? A Statistical Portrait of Public High School Graduation, Class of 2001 (Washington, DC: Education Policy Center, The Urban Institute), Table 4.
** Corrected for retention: uses first-time 9th graders rather than total 9th-grade enrollment.
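[Sherman here: for anyone who wants to check Table 2's arithmetic, this minimal Python sketch reproduces the corrected column from the diplomas and first-time-ninth-grader figures above. The code is my own illustration, not Mishel's.]

```python
# Diplomas and first-time 9th graders per 100 ninth graders (Table 2 above)
rows = {
    "Total":     (65.0, 84.8),
    "Black":     (55.3, 81.0),
    "Hispanics": (55.9, 79.3),
    "Whites":    (73.5, 91.4),
}
for group, (diplomas, first_time) in rows.items():
    swanson_style = diplomas / 100     # all 9th graders in the denominator
    corrected = diplomas / first_time  # first-time 9th graders only
    print(f"{group}: {swanson_style:.1%} -> {corrected:.1%}")
# Total: 65.0% -> 76.7%; Black: 55.3% -> 68.3%;
# Hispanics: 55.9% -> 70.5%; Whites: 73.5% -> 80.4%
```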

Swanson versus NYC Longitudinal Data

One way to check whether Swanson's measure correctly estimates graduation rates is to compare it to other measures for the same location using the same underlying student records. New York City provides such a possibility. Several newspaper articles point to differing estimates of graduation from the city data, the state data, and Swanson.

Our purpose here is to create an apples-to-apples comparison between Swanson and the city school district data. So, since Swanson's measure of graduation counts all diplomas, no matter when earned, we use a comparable measure from the school district data. To avoid issues of whether to count GEDs or not, we make comparisons with diplomas only and exclude GEDs.

The school district data are based on following individual students through a longitudinal data system. Swanson bases his estimates on enrollment counts in 9th and other grades and on counts of diplomas each year.

New York City Longitudinal Data

I start from the fact that NYC reports graduation rates, excluding GEDs, of about 60%, as calculated below. The rate with GEDs would be 7.0% higher. Swanson reports 39% and Greene 43% for the class of 2001. That's a huge difference. It is easy for me to identify ways that Swanson's and Greene's estimates inappropriately and artificially lower the measured graduation rate.

One can get the necessary information for constructing the following table from the report for 2001:

Population          N        % Total    % Grand total
Dropouts            19,748   32.0%      25.2%
Graduates           37,542   60.9%      47.9%
GED                 4,339    7.0%       5.5%
Total               61,629   100.0%     78.6%
Other discharges    16,827              21.4%
Grand Total         78,456              100.0%
Source: pages 3 and 5

This table shows where we get our figure from. We take the reported graduates from Figure 1 (page 3) and subtract the GEDs from Table 1 (page 5). This gives us a rate of 60.9%, the longitudinal rate that eliminates GEDs. This rate includes graduation in 3, 4, 5, 6, and 7 years (almost all within five years). However, so do Swanson's diploma counts!

Some people have questioned whether some of the students identified as ‘other discharges’ are really dropouts, but not counted as dropouts in the NYC data. This is the tricky part for school districts in compiling longitudinal graduation rates—they essentially have to determine how many students left their system and which ones should be considered dropouts or legitimate transfers to other districts, etc.

I'm not sure how one can identify how many are falsely labeled a discharge versus a dropout. We can assess the extent of any possible bias by making the extreme assumption that all the 'discharges' are dropouts. If so, the graduation rate would be 47.9%, according to my calculations in the table. That is still above Greene's rate of 43% and way above Swanson's 39%. Yet the rate must be somewhere between the 47.9% rate and the official 60.9% rate, since surely some of the discharges are appropriately classified as such.

We can also adjust these data to include special education. Figure 5 says that there are 1,092 students in city-wide special education, who graduated at a 35.5% rate, and 4,359 in self-contained classes, who had a 38.3% rate. If we include the special education graduates among the graduates and add the total special education enrollment to the grand total, we get a graduation rate of 47.2%.
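[Sherman here: the 47.2% checks out if one reads the 1,092 and 4,359 as special-education enrollments graduating at the quoted rates. That reading, which is my assumption, reproduces the figure; a minimal sketch:]

```python
# NYC class-of-2001 figures from the table above
graduates, grand_total = 37_542, 78_456

# Special education (Figure 5): enrollments graduating at the quoted rates
# (this reading of the 1,092 and 4,359 is my assumption; it reproduces 47.2%)
se_grads = 1_092 * 0.355 + 4_359 * 0.383
se_total = 1_092 + 4_359

# Extreme assumption: every "other discharge" (already inside grand_total)
# counts as a dropout
rate = (graduates + se_grads) / (grand_total + se_total)
print(f"{rate:.1%}")  # 47.2%
```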

This suggests to me that the NYC grad rates are substantially higher than Greene's and Swanson's. My calculation is that when one includes all of the special education students and assumes all other discharges are dropouts, one still finds a grad rate of 47.2%. The graduation rate actually lies somewhere between 47.2% and about 60%. Unfortunately, we do not have the data to make calculations by race and ethnicity. Anyway, a grad rate between 47.2% and about 60% may be nothing to brag about, but it still shows that Greene's and Swanson's estimates are way off base.

I'm struck by your statement that Swanson's biggest problem is no migration adjustment. [Sherman here: in correspondence, I explained that my initial impression was that was the major problem with Detroit. The major problem with Detroit is awful data.] I became skeptical about these population adjustments once I realized that they incorporate new immigrants along with transfers in and out, not by design but because there's no way to separate out immigrants (correct?). At the national level the population adjustment is only immigration. That's why I don't understand why you think a Warren estimate at the national level is at all valid (or at least can be compared to longitudinal data of students starting in 8th grade or some other starting point, such as NLSY).

June 21, 2006

Ed Week grad rates: GIGO for Detroit

Paul Gazzerro of S&P's School Matters data compilation service thought I was wrong in asserting that the major problem with the Ed Week Detroit graduation "rate" for 2003 was not accounting for migration. True, I used enrollment data from all of PK-12, not just high school,* but Gazzerro then made the error of assuming that Swanson was using the 2001-02 and 2002-03 sets from the US DOE's Common Core of Data. He wasn't. He was using 2002-03 and 2003-04. But I'm glad Gazzerro pushed me to look at the CCD Detroit data, because it shows exactly how bollixed up the Swanson method can be. Added Thursday: the main issue here is the original data. It may not be a true test of the algorithm, because the data from schools can be so unreliable. See below on procedural issues.

The Swanson formula has the following in the numerator:

Diplomas (end of year 1) * 12th-grade enrollment * 11th-grade * 10th-grade enrollment (all enrollments from the fall of year 2).

In the denominator is the following:

12th-grade enrollment * 11th-grade * 10th-grade * 9th-grade enrollment (all from the fall of year 1).

So for Detroit for the 2003 Swanson CPI, year 1 is 2002-03 and year 2 is 2003-04, and here are the details:

Numerator: 5,975 * 5,244 * 7,421 * 9,899 = 2,301,729,842,459,100
Denominator: 6,020 * 7,795 * 11,275 * 20,025 = 10,595,017,688,062,500

For Detroit the prior year, year 1 is 2001-02 and year 2 is 2002-03, and here are the details:

Numerator: 5,540 * 6,020 * 7,795 * 11,275 = 2,931,155,954,650,000
Denominator: 4,618 * 6,355 * 9,291 * 14,494 = 3,952,029,707,502,060

The ratio is 74.2%. How in the heck could Detroit go from 74.2% graduation to 21.7% graduation in a single year? In reality, Detroit reported a bulge in enrollment at all high-school grades in 2002-03, and the 22% rate is an artifact of that. I don't know whether the bulge came from some amazing (and unbelievable) transient surge in population or just lousy record-keeping by Detroit or the state of Michigan. Update: Okay. I'm fairly certain it's bad data.
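If you want to replicate the arithmetic, here is a minimal sketch of the chained-ratio calculation with the CCD figures above. The function is my own rendering of the published formula, not Swanson's code:

```python
def cpi(diplomas_yr1, fall_yr1, fall_yr2):
    """Swanson's Cumulative Promotion Index: a chained product of
    grade-to-grade promotion ratios across two successive falls.

    fall_yr1: year-1 fall enrollments for grades 9-12 (dict keyed by grade)
    fall_yr2: year-2 fall enrollments for grades 10-12
    """
    return (diplomas_yr1 / fall_yr1[12]) * \
           (fall_yr2[12] / fall_yr1[11]) * \
           (fall_yr2[11] / fall_yr1[10]) * \
           (fall_yr2[10] / fall_yr1[9])

# Detroit 2003 CPI: year 1 = 2002-03, year 2 = 2003-04
detroit_03 = cpi(5975, {9: 20025, 10: 11275, 11: 7795, 12: 6020},
                 {10: 9899, 11: 7421, 12: 5244})
# Detroit 2002 CPI: year 1 = 2001-02, year 2 = 2002-03
detroit_02 = cpi(5540, {9: 14494, 10: 9291, 11: 6355, 12: 4618},
                 {10: 11275, 11: 7795, 12: 6020})
print(f"2003 CPI: {detroit_03:.1%}")  # 21.7%
print(f"2002 CPI: {detroit_02:.1%}")  # 74.2%
```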

But I'll repeat: the Detroit data is useless, and I'm surprised no one in Swanson's shop even took the basic step of looking at the prior year's CPIs to see if, maybe, possibly, there might be some instability in the numbers. Added Thursday: Part of this problem comes from the nature of the Common Core of Data (CCD) as an unaudited database, or rather one where each state is responsible for correcting its own figures. Bad record-keeping by a state = bad data. Recent years of data (including 2002-03, with the should-be-infamous Detroit enrollment) are explicitly noted as preliminary by the CCD, but those are the figures that everyone uses to update CCD-based measures. In sociology and demography, you always look at time series of the raw numbers and assume that you need to smooth the data to some extent, or you're likely to be tripped up by the vagaries of administrative record-keeping. (Age-heaping, for example, is the phenomenon of people rounding their ages to the nearest 5 in areas without birth registration systems or cultural celebrations of birthdays.) Big lesson here: be wary of CCD figures. My instinct was to pile on about the failure to accommodate migration, but I was wrong. Yes, I think there's still a problem with not adjusting for migration (and Larry Mishel and Joydeep Roy would point to the 9th-grade enrollment figures as a problem), but I may have caught the problem with Detroit only because I was sensitive to the implications there. In reality, it's a problem of bad data.

Update (2:30 pm Thursday): I just received an e-mail from Chris Swanson: "We're taking a closer look at Detroit and a couple other places. One of the things I would like to build into our online database is a set of flags or notes to call attention to situations like this." Good.

Update (2:50 pm Thursday): Thanks to an informant with information about Michigan, it turns out that 2002-03 was the first year of a new data-collection system, and the CCD data are a bit different than the corrected figures released in 2005 for that year:

Grade   Enrollment reported to CCD   Enrollment reported in 2005
9th     20,025                       18,406
10th    11,275                       10,470
11th    7,795                        7,371
12th    6,020                        5,748

On the other hand, that correction only raises the 2003 CPI to 28.2% and lowers the 2002 CPI to 62.2%. There's still stuff wrong with the data. (Given that this is Detroit, as one correspondent put it, the arrival of a newly installed school CEO in 2002-03 may have led to exaggerated pupil counts.)

* — It's quite true that migration rates vary by age, and one would not want to use elementary-aged data and extrapolate to adolescents without a huge caveat. On the other hand, there is no way to separate attrition and returns from transfers out and in at the high school ages without an audit of the records. In the case of Detroit, I suspect we just have bad, unaudited data, not a migration issue. But this provides a pretty good example of how sensitive these formulas can be to misstatements of migration.

June 20, 2006

Graduation rates, redux

Today, Education Week published Chris Swanson's new round of estimates of graduation. Lawrence Mishel also posted a new set of comments in a discussion of rates that's now considerably longer than the original entry. Andrew Rotherham points to my own state, Florida (which according to Swanson ranks as fifth worst), and Ron Matus's quick story in the St. Pete Times notes the differences between Florida's official calculation and Swanson's.

For those who need a scorecard for the grad-rate "players" (i.e., newly-coined methods of calculating graduation)...


  • The Boston-area researchers (Walt Haney, Gary Orfield, Jing Miao, etc.) use a straight diploma:9th-grade or diploma:8th-grade quasi-longitudinal rate (going from graduation back in a pseudo-cohort line to 9th grade 4 falls before or 8th grade 5 falls before). The diploma:9th-grade measure conflates grade-retention issues with graduation.
  • Warren uses the diploma:8th-grade rate plus a migration/mortality correction (using smoothed Census Bureau state population estimates by age); see the sketch after this list. In my opinion, this is the best-justified method using administrative records (such as the Common Core of Data). It's also useless at the local level because of the need for some source of data for the migration/mortality adjustment. (Mortality is so low for teens that it's not a serious concern, but I mention it for completeness.)
  • The US DOE has an "averaged freshman graduation rate" that is akin to the Boston-area uncorrected quasi-longitudinal rate, except that it averages the pseudo-cohort's 8th-, 9th-, and 10th-grade enrollments. This is an attempt to address 9th-grade retention, but Mishel and Roy are correct that it's jerry-built rather than theoretically justified.
  • Greene and Winters use the USDOE averaged-freshman rate plus a migration/mortality adjustment that is almost identical to Warren's (which Warren proposed in a 2003 paper).
  • Swanson's (and now Ed Week's) method chains together proportions of nth to (n-1)th graders in two successive years to get a quasi-period measure. It's entirely uncorrected for grade-retention and migration/mortality issues.
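For the Warren entry above, here is a rough sketch of the shape of the calculation, as promised. The survival-ratio adjustment and all numbers are my own simplified stand-ins, not Warren's actual procedure:

```python
def warren_style_rate(diplomas, grade8_enroll, pop_age13, pop_age17):
    """Diploma:8th-grade rate with a crude migration/mortality adjustment.

    Scale the 8th-grade cohort (five falls before graduation) by the change
    in the state's age-specific population between roughly age 13 and age 17,
    taken from Census estimates. A sketch of the idea only; Warren's
    published adjustment is more refined than this.
    """
    survival_ratio = pop_age17 / pop_age13  # net migration; mortality negligible
    return diplomas / (grade8_enroll * survival_ratio)

# Hypothetical state where in-migration grew the cohort 4% between 13 and 17
print(f"{warren_style_rate(90_000, 120_000, 125_000, 130_000):.1%}")  # 72.1%
```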

Swanson's production of numbers down to the county and district level will get large play in the broadcast and print media in the next few days, even though it has some serious technical problems. In some places those technical problems are swamped by large differences in graduation (South Carolina probably does have close to the lowest graduation rate if not the lowest, as Swanson claims), but the actual numbers are going to be inaccurate, especially in cases with significant net migration or 9th-grade retention. And it's at the local level where you are likely to see such cases. For example, consider Detroit, which Swanson says had 22% graduation for 2003. Detroit's PK-12 enrollment also shrank from 173,742 (in 2002-03) to 150,604 (2003-04), a net outmigration that would lead to an artificial deflation of Swanson's measure. That doesn't mean that Detroit is a great school system. It means that Swanson's measure is largely useless for Detroit.

Migration is the great problem with trying to estimate graduation at the local level. Without an audited trail of transfers in and out, there is no conceivable way of calculating an accurate graduation rate for a school or district.

June 19, 2006

Detour into data quirks

I've received something from a public agency this afternoon that threw me into a quandary on one of my research projects, but I'm afraid I can't say anything about it until I give the agency a chance to respond to some questions and, potentially, some analysis. It's very interesting, and it's worth keeping track of. Sorry for the mystery, but I want to be fair to the agency.

June 8, 2006

How graduation estimates vary by migration

It's a bit late tonight for me to continue with some of the obligations I haven't gotten to yet this week (several manuscripts to read/evaluate for EPAA, an EPAA article to begin preparing for publication, a student paper from an incomplete a while ago, a book review, a commentary on a lecture, and probably a few other things), so I'll write a bit about the stuff that's almost ready to put in article form, about graduation. You can see some details of my approach, more formally, but the crucial bit in this piece is that the demographic approach I'm using allows for seeing how estimates of graduation change depending on what migration conditions hold.


My example here is from Virginia statistical reports from the late 90s through 2003-04, because the state gives generally plausible estimates of enrollment by age and grade as well as total numbers for graduates in different categories (which I have collapsed into standard academic diplomas, special-education diplomas, and miscellaneous other certificates—including GEDs and certificates of completion). Given the data provided, one can see how graduation estimates (for this entry, the likelihood of earning a standard diploma from age 14 up) change as assumptions about net migration change. If I had much more time and programming skills on my hands, I could see how the estimates change as one changed the migration estimates age-by-age (the model is that detailed), but I've taken the simple road and seen what happens if one assumes constant migration and changes that hypothetical migration rate. If there's a high net in-migration rate, for example, then the straight-up (zero-migration) estimates of graduation will be biased upward, because the algorithm wouldn't make a distinction between cohort-size changes and migration. But the question is how much does migration affect estimates of graduation?

[Figure: Regular-grad-rate-by-migration.JPG]

The answer is: quite a bit! The figure above shows biennial estimates of standard-diploma graduation rates for Virginia between 1996 and 2004 (each set of estimates is a different curve), showing how the estimates (y-axis) change depending on the hypothetical migration rate (x-axis). As explained above, net in-migration drops the estimate to account for the (hypothetical) bias, and net out-migration raises the estimate.

If one looks at the 1997-98—1998-99 estimate (and one needs two years of data for this calculation), a change of migration of just 0.01 is fairly dramatic. A zero-migration assumption leads to a rate of 74.0%; a migration rate of 0.01, 70.6%; a migration rate of 0.02, 67.5%. Now, there's a world of difference between zero migration and a 0.02 rate (which is substantial if not world-changing). But the fact that the graduation-rate estimate changes about 3% for every 1% change in the net migration rate is rather amazing. I'd be hesitant to claim significant changes in Virginia's standard-diploma graduation rate without pretty good evidence about real net-migration (and not just unaudited transfer statistics from administrative records). And this raises even more questions about whether any graduation measure could be sufficiently robust to rate individual schools based on improvement in graduation. Or, rather, you could, but the costs of auditing transfer stats might be on the same order of magnitude as the consequences of the statistics.
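To see roughly where that sensitivity comes from without rerunning the full age-by-age model, here is a toy illustration: treat the estimate as the zero-migration figure divided by (1 + m)^T, where m is a constant net in-migration rate and T is a rough span of exposure in years. The T here is fitted to the quoted numbers, not derived from the model:

```python
# Toy model only, not the actual age-by-age calculation: the zero-migration
# estimate (74.0% for 1997-98/1998-99) shrinks by (1 + m)^T as the assumed
# constant net in-migration rate m rises.
g0, T = 0.740, 4.7   # T chosen to fit the quoted figures, purely illustrative
for m in (0.00, 0.01, 0.02):
    print(f"m = {m:.2f}: {g0 / (1 + m)**T:.1%}")
# m = 0.00: 74.0%; m = 0.01: 70.6%; m = 0.02: 67.4% (vs. the quoted 67.5%)
```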

Update (6/17/06): Via Eduwonk comes the link to a USDOE Inspector General report on South Dakota's graduation-rate method (such as it is).

May 25, 2006

Hot grad-rate discussion in comments!

We have the first extended thread of comments in this blog, on the grad-rate debate, with an active debate between Dan Losen and Joydeep Roy. My thanks to both of them for continuing the discussion. (I usually let comments stand on their own but wanted to point to this, since it's extended and quite substantive.)

May 24, 2006

Peace! Peace!

True to form, my May has been utterly crazy-busy, enough that I am both relieved I am not teaching this summer and a bit ashamed that I haven't done more writing. (It comes from my children's birthdays, my wife's and my anniversary, the end of the K-12 school year in Tampa, and my need to repay my wife for her having done the runaround while I was out of town several times in the spring.) Today, I went to my son's fifth-grade award ceremony (mostly for awarding good-citizenship certificates to the majority of the students), missed the department meeting with the dean about our chair vacancy (because our chair is being pulled downstairs as an associate dean), talked with colleagues a bit, rushed to the central administration building for a short meeting with the president's representative on union affairs over a grievance that I'm the representative for (not my grievance, in other words), then ran to my daughter's school to pick her up from school and drive her to my wife's school, where she's currently playing violin for a post-graduation reception, and then back here to work on a few tidbits before my son gets home, at which point I'll clean the public space of the house and start baking the cake for my daughter's birthday party.

The month's been like that.


That fact may explain why I begged and pleaded for a few hours of peace in a café last night, just to get one significant task done uninterrupted. I didn't even care what it was. So I spent the time analyzing Virginia's enrollment and graduation data from 1996-97 through 2003-04, since they have kindly posted online all sorts of useful information, including age-grade tables (which I use in my current project). There was one obviously incorrect estimate (I think 15-year-old enrollment for 1997-98, which had to have been about 5,000 students short of the real figure), but the rest is going to allow me to demonstrate how changing migration estimates affect graduation statistics. There's overlapping data on Virginia, as well, from others studying graduation rates.

All in all, a nice three hours of work. I wonder when I'll get that uninterrupted time next...

And I hear my son's bus pulling up outside.

May 23, 2006

More grad rates debates

Via NCLBlog comes the tip of Jay Mathews's story today on the grad-rate debate, along with sidebars by Mishel and Roy and Greene and Winters, respectively. Greene and Winters are pegging most of their rebuttal on an accounting argument that asks where about half a million graduates are, if you accept their claims about how many should be graduating given Mishel's and Roy's arguments.

I finally had a chance to listen to the April 27 live debate between Mishel and Greene at the National Press Club. For most of it, I think Mishel ate Greene's lunch. But that's about debating points.

In the long run, Mishel's argument has some serious weaknesses. In terms of the National Education Longitudinal Study that started in 1988, it's well known that the sample exclusion was about 5 percent of the population. Add a plausible attrition problem of 1-2 percent, and the NELS graduation stats could be overestimated for that cohort by 6-8 percent. Then there's the question of cohort effects. The federal statistics suggest that for its semi-official rates (event dropout rate, status dropout rate, and completion rate), this cohort was unusually likely to graduate and unusually unlikely to drop out. So I'm not sure that looking to NELS really says that much.

Here is where being slow and cautious is a professional disadvantage. I haven't had the time to turn any analysis from my perspective into an article, but I think I need to, at least with regard to the effects of migration on estimates of graduation.

April 27, 2006

Books: accountability or academic freedom?

Well, no publisher thus far has bitten on my proposal for an academic-freedom book. I guess David Horowitz's self-inflicted publicity wounds plus the existing books have made it less appealing. But I have some interest in a book on accountability, so I may tap that one out over the summer.

My first stab at explaining the basic argument of one chapter—the tension between democratic and technocratic issues in accountability—appeared to go over the heads of my undergraduates today. Hmmn... time to regroup and see how to present it a bit differently.

April 24, 2006

Dueling grad rates

A Jay Greene dropout/graduation report, and then comes Lawrence Mishel's counterargument! Then Edwize gets into the game! And then the AFT's NCLBlog says there will be a debate between the primary authors hosted by the Center on Education Policy. It sounds a bit like a WWF match, but I need to confirm the existence of this debate with CEP staff.

I've been crazy-busy for about a week and haven't had time to read either one, especially the Mishel/Roy book. I have my own approach to measuring graduation, but I'm a little concerned this is becoming the Jay-and-Lawrence show, when I think neither has published their graduation-rate research in refereed journals. (I haven't and won't tout my stuff as anything but exploratory at the moment.) I hope CEP calls on a few more people, such as Jing Miao, Rob Warren, or Robert Kominski, to provide commentary.

April 16, 2006

Details count for graduation rates

You ever realize belatedly that you should have paid more attention to something when you were distracted? I'm definitely getting that feeling today, as I've been working on projects that don't require the data-that-with-luck-is-in-recovery from the hard disk crash Thursday morning. One of those is an article manuscript that will be up at Education Policy Analysis Archives in a few days, and another is a closer look at Florida's official graduation-rate calculation.

More on the jump...


When Florida changed its method of calculating graduation rates in the late 1990s, I didn't pay too much attention, largely because these measures are a dime a dozen, because I was focusing on other projects at the time, and because the first few numbers seemed in line with other figures at the time. But now Florida's measure is the model for the National Governors Association-approved measure (see p. 18), and a sharp rise in the graduation rate between the late 1990s and 2003 (the latest published rate) is being touted as evidence of success in Governor Bush's education policy.

The empirical question observers have noted about the official rate is that it is considerably higher than the other measures researchers have proposed, whether the Boston College simple methods, Jay Greene's, or Rob Warren's. I'll illustrate this with one of the standard measures used for years, one that doesn't depend on counting students in each grade: the ratio of high school graduates to the older-teen population that one might expect would graduate that year. I've calculated these measures in four different ways for Florida: with and without private-school graduates included, comparing both to the 17-year-old population estimate in Florida for the graduating year and also to the average of the 17-year-old population the year before and the 18-year-old population in the graduating year (in other words, an estimate of the number of 18th birthdays in the academic year of graduation). More details after the image (a larger version behind the thumbnail):

[Figure: Fla-grad-rates.JPG]

A comparison of different graduation rates for Florida, 1989-2003

The figure above shows four trend lines for Florida from 1989 in contrast to the higher ratio for the U.S. as a whole (including private-school graduates) and then, starting in 1998-99, the official Florida graduation rate.

The trend-line for the graduate:teen ratio has been heading up for Florida and the country for the last few years, so the upward trend in the official rate isn't surprising. What is notable is the implicit claim from the official rate that the class of 2003 was about 9 percent more likely to graduate than the class of 1999, and that dramatic multi-year trend is inconsistent with every other data source available.

Looking at the official manual for the state, I can see a few troubling issues:

  • The inclusion of alternatives to standard diplomas in the graduation numbers, with no public disaggregation
  • The exclusion of alleged transfers and movers from the base (creating an adjusted cohort) without any data quality checks to ensure that transfers really show up at a private school or in another state
  • The exclusion from the base (adjusted cohort) of students who drop out and immediately enroll in GED programs (as transfers to adult programs)

The last one is especially troubling and highly misleading. Note: I am not claiming that there is deliberate fraud involved in either the construction of this definition (which was piloted before Jeb Bush became governor) or in schools' manipulation of student records. But I think I need to follow up on this and see if the state has kept decent records on what adjustments have been made, precisely, on which basis.

(Gory details for the chart: Public-school details from Common Core of Data; private-school graduates for various years from the Private School Survey, with other years interpolated with a spline function and extrapolated before 1992 and after 2001 with linear trend lines; estimates of 17- and 18-year-old populations from the Census Bureau.)
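And for concreteness, the 18th-birthday denominator described above works like this; the function and numbers are illustrative only, not the actual Florida series:

```python
def grad_to_teen_ratio(graduates, pop17_prior_year, pop18_grad_year):
    """Graduates over an estimate of 18th birthdays in the academic year:
    the average of the prior year's 17-year-olds and the graduating
    year's 18-year-olds."""
    return graduates / ((pop17_prior_year + pop18_grad_year) / 2)

# Hypothetical figures, for illustration only
print(f"{grad_to_teen_ratio(120_000, 190_000, 188_000):.1%}")  # 63.5%
```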

Update (4/17/06, 3:40 pm): Andrew Rotherham takes a few minutes from caring for babies to comment on things, but gave the subscription URL for the Greene, Winters, and Swanson piece in Ed Week, a column which is also available for free at the Manhattan Institute site (yeah, at that link above with their names). More stuff at the Ed Week forum on dropout rates and stuff, something to which I contributed just now, almost three weeks later. Pay attention to the remarks of Cliff Adelman—while I disagree that he's made a convincing case that NELS shows that the CPS figures are correct (and he doesn't actually say that, though a sloppy reader might assume it), it's important to note what type of documentation NELS's staff had available. Neither the CCD nor CPS has such a check on data quality.

April 4, 2006

PAA conference handout

The PAA conference handout (PDF) is cryptic and dense, but it captures the methods side of what I'm doing right now. I don't think it replaces the other attempts to measure graduation, in part because few states can produce age-based statistics at the moment. On the other hand, the explicit modeling can show the sensitivity of graduation measures to migration/transfer rates (considerable), and it's a way to get at other issues (including good measures of retention and possibly dropping out, though that's also sensitive to migration/transfer rate misspecification).

April 2, 2006

The real entry on the conference

In case you were wondering, the last item was an April Fool's joke. I saw my mom and the family of one of my sisters. The Population Association of America conference was great. While no one had the energy or concentration to vet my stuff in the poster session (gee, why wouldn't people be able to think about integral equations in a room with several hundred people?), I did give away most of the handouts and had a bunch of people interested in the work. I found out about the work of several others working on education (especially since I guess the PAA serves as a spring outlet for sociologists). I found a bunch of sessions and resources on migration I need to pass on to my colleagues who specialize in it. I saw Susan Watkins, a fertility specialist whose course I took at Penn. And the person who sponsored me through Penn's demography masters, Sam Preston, won the PAA mathematical demography award for his work in variable-rate modeling—which most people assumed he had won long ago, as the presenter noted.

Incidentally, it's that work I'm exploiting. I need to clean up the poster and handouts and upload it here as well as to the PAA conference website.

April 1, 2006

Terrible conference

Well, I'm back in Tampa after a 14-hour journey that you really don't want to know about. (Hint: Southwest shouldn't ever try to fly to Cancun.) The Population Association of America was an awful conference where I didn't learn anything, where everyone who visited my poster session ripped it to shreds, where there was no one else really interested in work similar to mine, where I didn't see any of my demography profs from grad school at Penn (where I earned a masters at the same time as my history Ph.D.), and where I didn't have a chance to see any of my family in the L.A. area.

And I've come back to too much work. I've decided to chuck all writing assignments for the rest of the semester for my undergraduates and replace it all with multiple-choice tests. At least as of this date.

February 17, 2006

Viking raid on Georgia archives

With a family and Florida's odd school year (end in late May, teachers return in late July), I don't get to archives that often and schedule them in negotiations with my wonderful spouse. So after consulting our schedules, our kids' schedules, and the phases of Venus, I figured out about 2 years ago that I might just get away this weekend to the Georgia state archives. Okay, it was more like a month ago that we negotiated this, but here I am.

Completely, absolutely, totally exhausted. And satisfied...


Because of my reluctance to just spirit myself away for weeks at a time, I view visits to archives similar to Viking raids: get in, get stuff, and get out quickly (or you'll settle there). When I was in grad school and had a five-ton Toshiba T1000 with me, I spent every possible minute in the archives, squeezing every drop of concentration from my brain while the reading room was open. It was thrilling, and I have a cabinet-drawer full of notes from my dissertation with everything I took notes on. But the emphasis was on concentrated time in an archive.

About 13 months ago, I began the process again with a new project and a focused set of records, in the Georgia archives. Every few months or so, I've come back, getting a few more snippets of the necessary records. (Brief explanation: I'm collecting annual enrollment and graduation data for four clusters of counties in Georgia from the late 1930s through the mid-1960s: coastal lowlands, Black Belt near the Alabama border, northeastern hills, and urban counties. From this, I hope to get a better picture of how and where the attainment and secondary-ed experience gap between Whites and African-Americans shrank in the middle of the 20th century.) This was scheduled to be my last trip funded by a mini-grant from my college, with the promise that it'll turn into several grant proposals (which it already has, in combination with a few other items). I knew I had to scarf down four years' worth of data on 23 counties (plus 3 city systems inside those counties), and then catch up with four years in one county I had missed my last time here (ouch!).

A day and a half, I figured. Fly up late on a Friday morning, hope that the archives staff will pull the boxes I start on, and then work madly through Friday afternoon and all Saturday. Sleep over in Atlanta Saturday night and hop a plane back Sunday. That's enough time even if things don't break my way.

Well, things broke my way. One of the archivists familiar with my work answered the phone this morning when I was at the rental-car counter (yes, I had called earlier in the week, but no one had answered), and he agreed to have the first five boxes pulled. Then the formats for these years are more amenable to photocopying than in the mid- to late 1950s. Then there were precious few examples of reports with paper clips I pointed out to staff to replace. (Rusting paper clips are evil things for records with archival value.) And I had a bit more energy for working efficiently.

The result is that I got a boatload of work done today. With any luck, I can finish the job early enough tomorrow that I can spend a few hours consulting with a colleague who lives and works in Atlanta. That's the good part. The frustrating bit is that because I went all-out at the end of the day to get through one last box, I'm completely exhausted right now. I can type brainlessly (witness this entry), but do anything that requires concentration? Not a chance!

January 6, 2006

Back in the archives

I'm back at the Georgia Archives today and tomorrow (started yesterday), with luck finishing the photographing of the reports I've chosen from the late 1950s and early 1960s. (This series starts in the late 1930s.) Usually, historians don't use the term data collection, because sifting through primary documents is a much more active cognitive process when you're in an archive. But when you're standing there taking snapshots and turning pages, what else do you call it? If I have time tomorrow, I'll also go through the state reports and see how early there is decent age-grade data.

This is the historical pilot for the tools I'm developing on graduation and attrition. Here, the historical question is where and how secondary-school experience grew in the South at mid-century. Did it grow primarily in urban areas or in both urban and rural areas? So I've selected about 20 (of Georgia's 159) counties to follow in a time series, divided into groups: urban counties, Georgia northeastern rural counties, coastal counties south of Savannah, and rural counties in the Black Belt along the Alabama border.

There are a few related measures I'm using. One is a growth-adjusted estimate of the proportion of time schoolchildren spend in secondary school. A second is an estimate of grade progression rates for elementary ages. A third is a profile of attrition. The last is the trickiest, because it depends so highly on accurate migration data. I'm using an average of a residual flow for ages 7-10 as the assumed average for the student population, but that doesn't capture age-to-age differences, and that's likely to make things tricky for ages 16-17. For many of the rural historical populations, the attrition begins well before 16. Because I have the other measures now, I'm less worried about this one, but it's such a nasty thing to get right, and it affects everything about graduation estimates as well as attrition.
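A minimal sketch of the residual-flow calculation I mean, with made-up counts; it ignores mortality (negligible at these ages) and assumes essentially universal enrollment at ages 7-10:

```python
def residual_net_flow_rate(age_counts_yr1, age_counts_yr2, ages=range(7, 11)):
    """Average residual net-flow rate for the given ages (default 7-10).

    Children aged a in year 1 should reappear aged a+1 in year 2; any
    residual is treated as net migration. A simplification that ignores
    mortality and assumes near-universal enrollment at these ages.
    """
    rates = []
    for a in ages:
        start = age_counts_yr1[a]
        rates.append((age_counts_yr2[a + 1] - start) / start)  # + = net in-flow
    return sum(rates) / len(rates)

# Hypothetical county enrollment counts by age, two successive years
yr1 = {7: 410, 8: 402, 9: 395, 10: 388}
yr2 = {8: 404, 9: 399, 10: 390, 11: 383}
print(f"{residual_net_flow_rate(yr1, yr2):+.2%}")  # -1.19%
```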

December 25, 2005

Done and not done

Many of my friends, neighbors, and colleagues have been preparing (perhaps frantically) for either of the holidays today (and Merry Christmas and Happy Hannukah to those for whom it's relevant). And, from both popular culture and personal testimony, I gather that there is a certain point at which one says (or thinks), "It's not done, but it's as done as it's going to be for now."

The publication a few days ago of John Robert Warren's State-Level High School Completion Rates gave me a similar feeling as an editor, researcher, and observer of research. Last year, EPAA published Jing Miao and Walt Haney's High School Graduation Rates, which compared several methods for estimating high school graduation. Since then, Jay Greene has adjusted his method to account for population change with census data (though he should've acknowledged Rob Warren's work on this point, which has been available from Warren's web site at the University of Minnesota for a few years). And Warren polished his method, which I think appears in a refereed journal for the first time in EPAA.

More below the jump.


(For what it's worth, getting Warren's article out was the easiest process I've had yet with an article with tables or figures: I make a template document available to authors when I accept a manuscript, and he took full advantage of that opportunity. So all I needed to do was ask him for slightly different versions of two figures, work a bit with the formatting of tables, and do a few search-and-replace commands for typographical reasons, and it was ready to go. Well, until the errata, at which point I would create an updated version. So the article is done and not done.)

The substantive research, however, will go on, and here is where I'm sure any author feels that "it's as done as it's going to be, for now." The renewed interest in measuring graduation comes from No Child Left Behind, which includes a graduation rate as a key measure but without really defining it well. So in step academic entrepreneurs with their suggestions (and, for some, with the additional motivation of judging reforms by graduation rates—Warren has a number of pieces that use his measure for other purposes, so he is done and not done).

Part of the problem with measuring graduation has been school officials' and statisticians' continued publication of data based on administrative dropout counts (an awful idea and something inherited from the first headline-level concerns over dropping out as such in the 1960s). The recent research has focused properly on measuring graduation instead, and I think Warren has a pretty good approach on measuring cohort graduation at the state level. By definition, it certainly is the latest approach.

But work will continue. I have my own ideas, focused on statistics reported by age rather than grade. You can see a partial draft of that approach, with the introduction of the central concept and one illustration. The real sticking point for all of us here is estimating migration at anything below the state level. Warren's approach is good at the state level, but things get gnarly very quickly at local levels, which is where NCLB's graduation rate becomes very important, and where we'd like to have a reasonable method. In individual school districts and schools, net migration rates can be high enough to make an unadjusted cohort or period measure highly inaccurate.

And, at the level of a journal, I'm also done and not done. Warren's piece is the last article for the year in Education Policy Analysis Archives, and this is roughly the end of the first year I've been editor. It's been an intriguing transition (full of things to learn about post-acceptance processes!), and I'm delighted to have ended with a piece in my own area of interest and that continues a small series that EPAA has published on it over the years. So the article is done and the field is not done.

For now, I'm headed out of town for a week, with little if any e-mail access, having just sent the first editor's draft of the first article for next year to its authors. It's provocative and continues the journal's history of using an electronic journal for things that a print journal could never pull off: in this case, publishing a 58-page article, turning it around from acceptance to publication in short order (compared to the post-acceptance process at many hardcopy journals), and publishing a set of appendices that's longer than the article and longer than many entire issues of hardcopy journals. But having polished the 58 pages of the article and done a once-through on the appendices, it's time for me to send my version to the authors with minor queries and head out of town, done and not done.

December 4, 2005

How to use variable-rate stuff in individual-level analyses?

Reading is the theme for me for the last week—reading student work, a minor truckload of submissions that came in recently for EPAA, and some other stuff. In about an hour, I'll drive down to where my children are in the last chess tournament for the year, so there isn't enough time to get into the databases I just got permission to use (or rather, an IRB exemption because they're anonymous data).

But after writing about growth earlier this week (if this is the end of the week), I've been thinking about the use of so-called variable-rate demographic models for populations. (See that entry's reference to a Preston et al. text, which has the relevant citations. Yes, I took several courses from Sam Preston at Penn. He's a very smart mortality expert, and I love taking advantage of his and others' work.)

Can I use that same principle for analyzing individual-level data? Is it possible to take population-based information on age-varying rates for a parameter (say, school attendance) and use that in an analysis of cross-sectional data (e.g., the annual current population survey), to partially eliminate the conflation of age- and cohort-related effects on the item of interest?

Once again, an idea hits at the end of a semester. Ai!

November 3, 2005

Conference time!

Second public presentation of the net-flow stuff tomorrow, with a poster in the main book exhibit at the Social Science History Association, meeting this weekend in Portland. It's rainy in Portland. I'm not quite crazy enough to laminate my papers to get them safely between hotels. Or maybe I'm crazy enough not to...

In any case, the SSHA is full of serious quantitative folks, so I'll probably be asked to show my work: the spreadsheets with the calculations.

I'm still figuring out what to do with the places and times where things don't quite look right. It could be a huge set of typographical errors, or maybe problems in how carefully officials wrote down the figures one year. (There's a great example in Union County in 1938-39, where every child in a grade is also the same age. How amazing! How unbelievable.) Or maybe instability with small populations, which I can believe with African American students in White County (all 125 of them in one year), but not whites in Terrell County the same year. Hmmn...

(Yes, this is all in Georgia.)

October 28, 2005

More on "big social science"

Richard Steckel sent me the following response to my contrarian view of Big Social Science:

I have read your comments with interest and find that I agree with most of them. Big projects are a matter of degree, in both funding and numbers of people. Much very good work involves small budgets (or no external money whatsoever) and a handful of people. I am certainly not trying to marginalize this type of work; among other things I participate regularly in this type of research.

But you would be surprised on how many big budget projects are now underway in social science history, much less the physical and biological sciences. I don't think the funding prospects for SSH [social-science history] are as dim as you paint them. And I have ideas for increasing the funds, which I will cover in my talk.

It's very nice of him to reply thoughtfully (if briefly), and we'll see what discussion evolves next week in Portland. I look forward to it!

October 19, 2005

A contrarian definition of big social-science history

In crafting the call for papers for this year's Social Science History Association annual meeting, incoming SSHA President Richard Steckel asked SSHA members and networks to think about the meaning of "big social science history," defined in the call as "large collaborative research projects within and across disciplines" roughly tied to social-science history. In some ways, this call was a reflection of the original mission of SSHA, to which this year's call for papers referred, and perhaps asking us to evaluate those large research projects.

But Steckel also asked us to dream big. In network meetings, he referred to multimillion-dollar grants in medicine and other fields and framed the call for papers as a thought experiment: "Networks are encouraged to imagine the research program they would conduct with a multi-million dollar grant."

Since I've recently finished a collaborative project among 5 historians of education, 3 sociologists, 1 criminologist, several grad students, and a partridge in a pear tree (though the partridge is not a coauthor in the book that will be coming out), and because I have benefitted indirectly from other collaborative (data-collection) projects, I think I have some experience with today's collaboration, including the prospects for multimillion-dollar grants. And while I will not discount the possibilities of getting large grants, I think Steckel framed the issue too narrowly at last year's meeting. Because the SSHA annual meeting is half a month away, I'm putting out this contrarian definition in hopes of starting a dialogue before the meeting (and one I hope will extend through the meeting).

Framing the issue as one of multimillion-dollar grants is inapt for several reasons and conflicts with the questions raised elsewhere in the call for papers:

  1. Multimillion-dollar grants have large price tags for very specific reasons tied to the needs of the projects, not to the intellectual integrity of the work. Below, I'll describe multimillion-dollar social-science research projects worth every penny and more, but size only matters to the spam in our inbox and ambitious institutional officers who look at federal funding figures. Medical research requires labs, technicians, physicians and nurses for treatment studies, and so forth. Engineering research requires labs, expensive equipment that has a limited life, and technicians. There are social-science history projects that require such funding, but they're generally data-collection efforts. Those are incredibly important, but that requires a different definition of "big social-science history," one I propose below.
  2. Multimillion-dollar grants in social-science history are inconsistent with the current research funding environment, for the most part. Maybe other countries are more generous, but the big federal funding agencies in the U.S. (NSF, NIH) aren't as free with their money as we might like in our fantasies. Funded NSF project budgets are routinely shrunk in negotiation. And while I love the NIH's modular budget philosophy, that only applies for small and moderate grants (I think $250,000 is the cap for modular budgeting at NIH). The last time that the major funder in my area (the Spencer Foundation, sponsoring disciplinary research in education) dangled a few million dollars to several groups, it was in 1999 and early 2000, and the grants that eventually came out of that initiative shrank to shoestring size.
  3. The type of collaborative work funded by multimillion-dollar grants is frequently targeted at specific projects with well-defined research questions. I love well-defined research questions, but is this the only definition of fruitful collaboration? I'm not speaking of the normal development of an area of literature but of unusual projects (topical conferences, summer workshops, collaborative volumes) that can move a field but neither need huge gobs of cash nor the type of research question that focuses grant proposals.
  4. The multimillion-dollar model is inappropriate for most faculty and other researchers we want to engage in SSHA in the future. In the past 30-40 years, more teaching faculty across the nation have been expected to carry on active research, and a far higher proportion are on regional state campuses of public university systems. SSHA is like most other academic bodies and draws disproportionately from institutions that give faculty significant time for research. While we talk about significant research, there is a growing body of scholars who face research demands with little infrastructure on their own campuses apart from an office, a computer, and maybe a few hundred dollars of travel funds per year. Few of them have the institutional resources necessary to draw such grants, and yet they can contribute greatly to social-science history.
  5. A multimillion-dollar model will preferentially affect some disciplines and tools, by the argument I presented above in #1. The tool for which money can most easily and legitimately be requested is GIS. I love GIS as a tool. I want it well-funded for basic data collection, such as the National Historical Geographic Information System, as well as good individual projects. But not every good research project is a GIS project, and not every collaboration requires or can feasibly use GIS. This is suggested by the abstracts available with the preliminary program for the meeting. Apart from the roundtable sessions (which are skewed a bit towards GIS), I could only identify one or two paper abstracts not associated with GIS where a multimillion-dollar investment seemed to be part of the research agenda. Abstracts are not papers, and I hope to be proved wrong in Portland.

Given these concerns, I hope that the discussion of big social-science history will veer away from the size of desired grants and instead towards the environment necessary for fruitful interdisciplinary collaboration. Let me start with an abstract but serviceable definition. Big social-science history is interdisciplinary collaboration in history that can create, develop, or support a research agenda that would not be possible by researchers acting alone. Big social-science history should focus on collaboration and infrastructure that makes research possible. Big social-science history makes the tools and end results widely available to researchers and other readers worldwide.

Let me give some ideas that look like big social-science history to me. Some of these exist already and will be discussed in sessions at the SSHA annual meeting. Some don't.

  1. Data-collection and archiving projects such as the Integrated Public Use Microdata Sample projects at the University of Minnesota. A few weeks after the National Science Board published its Long-Lived Data Collections report, we should see data collection as the foundation of big social-science history. Any faculty member with skills in SAS or SPSS can sit in a tiny office, download huge datasets, and manipulate them on today's computers. Today, I can replicate in a few hours what took me months to do with a mainframe in 1990-91. In essence, any time I download a data set, I'm involved in a collaborative relationship with those who collect and maintain the data. Or, rather, I'm benefitting from that infrastructure. With these huge collections, any scholar around the globe with a decent computer can engage in big social-science research that would have taken hundreds of thousands of dollars in the 1970s.

    The reason why one should focus on these large-scale data collection projects is because they require a certain amount of expertise in organizing the work effectively, and because local projects can still be done using this model. I hope Steve Ruggles and others of his ilk might be interested in spreading the Secrets of Data Collection and Management for projects of smaller scope... or might be willing to take on the digitizing of local data.

  2. Data "digesting" projects with end results free on the web. These exist with contemporary data (e.g., Current Population Survey reports and data), and it's essential to create professional approbation (or a brownie-point market) for these in social-science history. They require multi-year, large grants, with the clear expectation that the resulting data sets and reports will be available online, free to anyone. This expectation will require a change in the norms of historical scholarship dissemination, which currently favor books over all other ways of disseminating research. Why is it important to create a new norm? There is currently a long-delayed project of this sort in social-science history that Amazon lists (pre-publication) for $825. Who will buy it, other than libraries? Who will read and use it, other than those of us who still venture to libraries? The editors are well-meaning researchers who started the project with a model of big social-science history that would have worked well in the 1980s because there were no other options then. But there are now, and deadtree statistical compilations that you and I can never have at home or in our office are truly dinosaurs.
  3. Online scholarly encyclopedias. For some years, I was surprised at the fad of encyclopedias among some publishers, and then I became irritated. This type of work is precisely the collaborative scholarship that should be online, refereed, and updated. Typically, scholarly encyclopedias are highly mixed in quality, because editors can't get writers for all entries without dredging for authors. Then you're stuck with an encyclopedia with a major entry that ignores huge swaths of historiography. And then it's obsolete within five years. But with online publication, everything changes. If you don't like an entry? Write a competing one that gets refereed! There are unmediated (or semi-mediated) versions of this on the internet, commonly known as wikis (such as Wikipedia). But we can do better! And we should.
  4. Working-papers archives for historians, with the infrastructure necessary for archiving commentaries and making metadata available. Some version of this exists for physicists and economists, though I'm not sure if they have commenting and metadata attached that would allow such archives to be used by academic library software. Similarly, someone needs to collect dissertation abstracts and metadata in a publicly-available site that could be folded into academic library software.
  5. Online communities centered around areas of interest, where scholars around the globe can discuss topics of mutual interest and ... hey! That's H-Net. (Speaking of which, you can donate to support this infrastructure for Big Social-Science History with just a few clicks.)

None of these look like the "big social-science history" projects that were legends when I was in grad school. I don't know what the budget for the Philadelphia Social History Project was, but the time for that type of project is probably over. Its data collection was important, but that's different from the project as a whole. We need to conceive of big social-science history in ways that faculty around the globe can engage in it. I take Professor Steckel at his word in the gist of the call for papers—we need to evaluate and think about it as a whole, with large ambitions—and hope that this is a reasonable prod for the debate.

October 8, 2005

Age-specific graduation rates

After too many months, I've finally carved out a few hours to play with the data Florida's DOE sent me, a collection of individual-level data on enrollment, age, grade level, sex, race/ethnicity, lunch status, and withdrawal code (including different types of diplomas). The data isn't clean (especially when looking at the birth years), but it's important to see what can be done with the different file structures, and an initial, very rough, and obviously not-quite-accurate graph of age-specific graduation rates is instructive:

[Image: grate_10-5-05.gif (age-specific graduation rates)]

(Click picture for larger version. PDF version of graduation-rate graph is also available.)

(For those not used to event-exposure rates, the rates over 1 are not errors. An event-exposure rate has as its denominator the collective exposure to a certain event, usually measured in person-years. So if an eighteen-year-old graduated two months after her birthday, she only added a sixth of a year (0.1666... person-years) to the total exposure. If a majority of 18-year-olds in school graduate, and they only contribute on average half a year of exposure, then the rate is going to be over 1.)
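
To make the arithmetic concrete, here's a minimal sketch with invented numbers (not the Florida data):

```python
# Event-exposure rate with invented numbers: each graduate contributes
# exposure only up to the moment of graduating, so the denominator shrinks.
graduates = 700                             # hypothetical 18-year-old graduates
stayers = 300                               # hypothetical non-graduates

exposure = graduates * 0.5 + stayers * 1.0  # person-years: graduates average
                                            # half a year, stayers a full year
rate = graduates / exposure
print(round(rate, 3))                       # 700 / 650 = 1.077, a rate over 1
```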

There are a few notable patterns. First, the non-standard diplomas become an important feature only with 19-year-olds and those older. In other words, most people don't get either GEDs or attendance certificates until after 19. Second, the bump at age 22 (and the increasing gap between standard and other diplomas) is from the small number of students with disabilities who receive services until the end of the school year after they turn 22.

More work needed... much more work. But it's time this weekend to turn to other tasks in EPAA and grading.

August 29, 2005

Dropout statistics in political use

Diane Cardwell's story this morning in the New York Times discusses the political uses of graduation and dropout statistics in the New York City mayoral race. As is common, the use of educational statistics here implies normative judgment: a 44-percent graduation rate must be awful, according to Fernando Ferrer (one of the challengers of Bloomberg), in comparison to the 54-percent rate Bloomberg's campaign cites as evidence of improvement. Assuming accuracy, context is everything: Both are much better than 100 years ago and still absolutely unacceptable in comparison to the country as a whole.

Keep the limits of dropout and graduation statistics in mind, though: There is no universally agreed-upon method of measuring graduation and dropping out. Even skipping the old method of measuring dropping out (divide counted dropouts by total 9th-12th enrollment), you'll find many problems with what I call quasi-longitudinal methods of looking at enrollment in 9th grade one year and graduation numbers three years and nine months later. Such quasi-longitudinal methods need to adjust for migration paths to have any chance of accuracy. Of the three ways I've seen in the last few years—Jay Greene's, John Robert Warren's, and Haney et al.'s—Warren's is the soundest methodologically. Miao and Haney argue that the Greene stats and theirs show similar results in terms of trends, and that may well be true for recent years at large scales (i.e., states).
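
For readers who just want the flavor of the quasi-longitudinal arithmetic, here is a bare-bones sketch with invented numbers; it shows the general form only, not Greene's, Warren's, or Haney et al.'s actual formulas:

```python
# Quasi-longitudinal graduation rate, sketched with invented numbers:
# diplomas divided by 9th-grade enrollment four school years earlier,
# with a crude net-migration adjustment to the cohort denominator.
ninth_graders_fall_1999 = 60000    # hypothetical cohort start
diplomas_spring_2003 = 42000       # hypothetical, 3 years and 9 months later
net_migration = -1500              # hypothetical net out-migration of the cohort

unadjusted = diplomas_spring_2003 / ninth_graders_fall_1999
adjusted = diplomas_spring_2003 / (ninth_graders_fall_1999 + net_migration)
print(round(unadjusted, 3), round(adjusted, 3))  # 0.7 vs. roughly 0.718
```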

However, I am reluctant to trust correlation statistics to judge the soundness of measures that are amenable to demographic analysis. Those correlations will likely not hold for extremely low levels of graduation and for small scales, such as individual districts and schools. Unfortunately, Warren's approach, which adjusts for migration, is only usable at large scales.

Some aspects of the technical debate are political and accessible to anyone, though: Should graduation rates include GEDs (which leads to higher graduation rates)? Should we exclude expelled students and students with disabilities from the calculation (which would lead to lower graduation rates)? These questions are not technical at all and go to the heart of what we expect from schools.

July 6, 2005

In the Georgia Archives (again)

I'm back in the Georgia Archives again this week, with a digital camera taking images of the local superintendents' reports to the state department of education from the late 30s through the early 60s. At the moment, they don't allow use of flash or tripods, so it's an interesting challenge to hold my arms still enough to take decent images. I'm doing a good job thus far, with the help of the camera's timer. It's just taking a long time even to do one year, because there are 170-180 school districts in Georgia. I spent last night thinking about streamlining this to a set of counties by category: major cities, Atlantic coast, Black Belt near the Alabama border, and the northeastern mountains near SC and TN.

I'll see if I can get permission to post some of the photographs, less for the age-grade tables I'm grabbing than some images that illustrate ... well ... it's an obvious bureaucratic tip-off to segregation once you see the images, but I don't want to spoil the surprise.

June 5, 2005

Educational Reform in Florida

Educational Reform in Florida: Diversity and Equity in Public Policy, a collection of essays edited by Kathy Borman and me, has been accepted for publication by SUNY Press. The collaborative work was supported for several years by the Spencer Foundation, and with sociologists and historians as the authors, it covers a broad range of perspectives on the last six years or so of school reform in Florida. The chapters:

  1. Issues in Florida Educational Reform (Kathryn Borman and Sherman Dorn)
  2. The Legacy of Desegregation in Florida (Deidre Cobb-Roberts and Barbara Shircliffe)
  3. The Legacy of Educational Finance Reform in Florida (Sherman Dorn and Deanna Michael)
  4. Accountability as a Means of Improvement: A Continuity of Themes (Deanna Michael and Sherman Dorn)
  5. Diversity, Desegregation, and Accountability in Florida Districts (Tamela McNulty Eitle)
  6. Equity, Disorder, and Discipline in Florida Schools (David Eitle and Tamela McNulty Eitle)
  7. Competing Agendas for University Governance: Placing the Conflict between Jeb Bush and Bob Graham in Context (Larry Johnson and Kathryn Borman)
  8. One Florida, the Politics of Educational Opportunity, and the Language of White Advantage (Larry Johnson and Deidre Cobb-Roberts)
  9. Florida’s A+ Plan: Education Reform Policies and Student Outcomes (Reginald Lee, Kathryn Borman, and William Tyson)

The book does not cover every possible topic, and I wish we had a chapter covering vouchers, among other things. But I'm happy to have this accepted, and I look forward to its publication (I assume towards the end of the year or, more probably, early in 2006).

May 4, 2005

Grant!

I just got word today that my college's mini-grant program awarded me an amount of money that most people would consider small but that is enough for me to spend a few weeks in Atlanta this summer collecting age-grade tables and other data from the Georgia State Archives for the historical educational attainment and attrition study that's in my head. So it won't be in my head any more. Hurrah!

Okay, now I have to figure out my summer schedule for real... and the schedule for the rest of my family, since it makes no difference to a hotel if there are one or four people staying in a room (if two are children).

April 30, 2005

Failure rates

The St Pete Times story on high-school graduation-test failures in Pinellas County (which includes St. Petersburg) reports that approximately 10 percent of seniors in the county have failed their last-chance try at the high-school graduation test in Florida. They will be receiving "certificates of completion" rather than a standard diploma—an exit document that is useless to adults.

It is important to keep in mind that the 10 percent failure rate is not for all students but just those who have stuck it out through their senior year. The research is murky on whether graduation tests add a serious obstacle beyond course requirements for graduation (e.g., the requirement that students pass an algebra class to receive a standard diploma), but if so, the greatest effect would probably be encouraging students to drop out well before they are seniors. The best measure available, which is Florida's state official graduation rate, is longitudinal rather than calculated year by year, so it's difficult to track what happens over time.

Yes, this is one of the motivations for my starting up the dropout research again.

February 18, 2005

IRB exemption!

This week the IRB exemption came that allowed me to open up the package with individual-level data from the Florida Department of Education. This is anonymous enrollment and graduation information from 1999-2000 and 2000-01, so I can test working with age-specific data. Hurrah! Now, where do I find the time for this? (That's okay: good dilemma.)

February 8, 2005

Directions, directions, ...

Okay, to reprise the burgeoning research on student net flows for my 2.5 readers ...

In December 2003, I had one of those painful epiphanies, realizing that one could estimate dropping out effectively by looking at everything else in a demographic balancing equation: population starting point and ending point, entries into an age or grade (through birthdays or promotion), exits out of an age or grade (through birthdays or promotion), and exits through graduation. A bit of adjustment for mobility and mortality, using the students below the typical ages of dropping out, and voila! you have a way of estimating dropping out (and graduation, incidentally) in a way that should be sensitive to year-to-year changes and not require longitudinal record-keeping.
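
In symbols, the balancing relationship might be written like this (my shorthand, not the notation in the working paper):

```latex
% Growth-balance identity for grades x and up (shorthand rendering):
%   r_{x+} : growth rate of enrollment in grades x and up
%   f_x    : first-time-in-grade rate for grade x
%   g_{x+} : graduation rate for the population in grades x and up
%   m_{x+} : residual net-flow rate (transfers, migration, deaths, dropout)
\[
r_{x+} = f_x - g_{x+} + m_{x+}
\]
% Everything except m_{x+} can be computed from enrollment counts, graduate
% counts, and retention rates, so m_{x+} falls out by subtraction.
```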

First idea for application: look at school systems in states that provide promotion/retention data (necessary for grade-based estimates). I submitted grants to NIH and NSF in 2004, both of which were turned down with comments suggesting the projects were fundable if revised.

Second idea for application: historical records. I visited Harvard's education library, found a bunch of public school records with stats by age, and wrote a paper for the History of Education Society meeting last year. That led to the ...

Third idea for application: a huge set of age-grade tables produced yearly in Georgia by every school system from 1938-1968 (separated by race until the mid-1960s). This is a way to look at John Rury's argument about growing high-school enrollment and attainment for African-American southerners. The obvious question here is whether the change we can identify in that era was concentrated in city school systems or spread throughout the state. It's a fairly important question because Rury's assumption is that the change was concentrated in cities (as the places most likely to be under pressure to desegregate). When I was in Georgia in early January, I collected age-grade tables selectively from 6 districts around the state, and I could bring a digital camera and just take pictures of every such sheet.

Fourth idea for application: look again at contemporary records, except on a school-by-school basis. Florida's Department of Ed has sent me individual-level records for 1999-2000 and 2000-01 (without identifiers), noting all the relevant information to construct the school-specific statistics by age that don't rely on grade-level retention. That's sweet, and as soon as I have an exemption from the IRB, I'm delving into it, as it can lead to an effective reformulation of the grant proposals.

In the meantime, I ran across one more article (recent this time, as opposed to the 1980s methods articles I have been relying on) showing how I can use age-grade tables to look at attainment in early elementary years. It's not tripping off my tongue at the moment, but it's clever and makes sense to me.

I've never had a "methods" idea before, so the way this is expanding out in different directions is surprising. Now I just need the time to develop it!

January 10, 2005

Sheer pleasure in an archive

The last occasion I spent time in the Georgia State Archives was in 1991, when I was in the midst of dissertation research. I was there last week looking at a series of local school-district reports from the late 1930s through the 1960s that had detailed numbers on the age and grade of students, exactly what I need for my current research obsession. My experience over a few days was even more pleasurable than 14 years ago (when my dominant emotions were excitement and relief at what my huge plunge into unknown archives turned up). This time, I knew the materials were there.

In part, what was new was the building, now housing the archives close to the Atlanta airport (and the new Southeast National Archives and Records Administration building). They have a whole room of lockers for personal effects, computer bags, coats, etc., as well as a break room for lunch and snacks. But the extra bonus is the beeper system they have when you request original documents. Talk with staff, tell them the series and boxes, and they hand you a disk that looks remarkably like the oversized beepers that restaurants give out to diners waiting for a table:


(sample picture and page, in case I'm not being clear—not an endorsement)

So I had two and a half days of luxuriating in state reports with the tables I needed and then enough time to scarf down selected years for Atlanta and five rural counties spread around the state, from the Sea Islands to the northern Georgia mountains. When I leave an archive as satisfied as I was Friday evening, I know I'm in the right career.

December 26, 2004

Thank you, Steven Ruggles

Sometimes, there are ways to conduct research that would be impossible without the internet. In the last few days, I've culled key data sets to get a better picture of 20th century graduation and educational attainment than I was able to put in Creating the Dropout (1996), from a collected set of data that one can simply download from the project generally known as the Integrated Public Use Microdata Sample (or IPUMS) group at the University of Minnesota.

Let me focus a bit on what I produced this evening, in a few hours. I've been struggling for years with how to put together a decent portrait of high-school graduation. For my dissertation and first book, I spent months getting access to public use microdata samples on mainframes, programming them, and waiting for the results, often for hours late at night in my first apartment. Looking for possible new or arcane techniques was fairly painstaking.

This week, while looking again at some mid-1980s techniques I've been pondering for about a year, I did a "citation search" to see who had cited a key article from 1985. Lo and behold, I discovered the following:

Carl P. Schmertmann, “A Simple Method for Estimating Age-Specific Rates from Sequential Cross-Sections," Demography 39 (2002):287-310.

Within a few minutes, I had found a copy through my library's electronic subscriptions, downloaded it, and puzzled out the key points. Then I went to IPUMS, downloaded census data from 1940 through 1980 (I'll need to get 1990 and 2000 separately to get the right education variables), and took a first stab. Then, tonight, I turned to the Current Population Surveys done every year in March, which IPUMS now has available from 1962. Except for 1963, there is an educational attainment question for everyone 15 and up, and that's enough for me to take about 2 million cases, put them in a data set, get some simple summary measures by survey year and age, and then turn it into the following graph:

[Graph: synthetic-cohort graduation probabilities at 18, 19, and 20 years old, 1962-2003]

There are a number of things I need to check here, from the problems of estimating exact-age proportions by averaging the proportions in the surrounding intervals to the assumptions made by lumping GEDs and regular diplomas together. But on first glance, it appears that this data confirms my previous claims that high school graduation has plateaued since 1970, and that people are graduating on average a little later as teenagers now.
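
For the curious, the core tabulation is simple enough to sketch. This assumes a hypothetical extract already saved with columns named year, age, and hs_credential (a 0/1 flag lumping diplomas and GEDs together); those are my stand-in names, not IPUMS variable names:

```python
# Sketch of the synthetic-cohort tabulation (stand-in column names).
import pandas as pd

cps = pd.read_csv("cps_extract.csv")  # hypothetical March CPS extract

# Proportion holding a high school credential, by survey year and age.
props = cps.groupby(["year", "age"])["hs_credential"].mean().unstack("age")

# Rough proportion at exact age 18: average the surrounding age intervals,
# one of the approximations flagged in the paragraph above.
approx_exact_18 = (props[17] + props[18]) / 2
print(approx_exact_18.head())
```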

Now, to summarize how this five-hour analysis was possible: The federal government gave IPUMS money to make the data available to researchers all over the globe. I set up my data extract in about 90 seconds, waited about 2 minutes for the extract to be prepared and another 3 minutes for the download, and then processed it and set up the graph, about 2 hours of work when all was said and done. The longest step on my laptop was waiting for the computer to read the raw data, about 90 seconds. There are other things I'm not explaining, about recoding of variables, etc., but the larger point is that many of the things that would have taken months and enormous frustration were gone, letting me focus on key issues that do matter substantively.

No, I don't expect this graph to appear as is. This is, after all, a very first draft of work. But it's enormously fun to get this far this quickly on something.

Oh, and Steven Ruggles? He's the head of the IPUMS group, one of those changing how research gets done—and done more easily—with the internet.

November 4, 2004

The second Bush administration and educational research

One of the sotto voce points of the AFT study on charter schools is that the Bush administration suppressed the information (and thus AFT staff members went looking for it). This parallels the criticisms that the Bush administration has politicized physical and biological science. The irony is that the language of the No Child Left Behind Act repeatedly refers to scientific research in education and prioritizes quasi-experimental research.

With a second Bush term, researchers should be alert to cases where political appointees or their direct underlings might be using bureaucratic tools to make research more difficult. Over the last few years, there has been substantial criticism of the reorganization of ERIC (the Educational Resources Information Center), and specifically the end of dedicated clearinghouses for specific topics within ERIC, funded by contract with specific organizations. At the time, I was skeptical of the approach taken by some dissenters on ERIC (not the site linked to above, but you can see a June 2003 archived version of the "Save ERIC" web site), because the rhetoric seemed paranoid and because I thought it might have been aimed more at saving the several clearinghouse contracts than at preserving research access. The ERIC digests never seemed to be worth all that much to me, and ERIC was falling increasingly behind the curve of Internet research distribution.

But if there is clear evidence that scientific research and advice is being thwarted in other areas of the federal government, it is something to be alert to in education. After all, you can't just be in favor of scientific research when that research agrees with your predispositions!

October 26, 2004

Age-specific data from 20th c.

In working on my paper for the History of Education Society meeting in 12 days, I've been thinking about the work involved in gathering the historical data here. There are snippets of data in Harvard's Gutman Library (or otherwise gathered) from Arizona, Delaware, Georgia, New York, Louisiana, Massachusetts, and Pennsylvania. Some are from states (Arizona, Delaware, and Georgia), while others are from cities (Phoenix, Atlanta, NYC, Boston, Philadelphia). I expect I can get some very long runs of data from Georgia, NYC, and Philadelphia, and probably Arizona and Delaware. Not sure about Louisiana. But it involves quite a bit of travel.

Not that I mind the travel, but juggling various schedules will be interesting. I suspect I can get to the Georgia state archives in early January, and the rest will require a bit of work.

What I'd love in addition to the age-specific historical data would be current age-specific data from states, so I wouldn't have to worry about the retention rates. Ages are nice and clean. Retention/promotion rates are much messier. I suspect I might be able to extract some help from here in Florida. I'm not so sure about other states. More legwork!

August 20, 2004

Comments from NIH!

The NIH program director sent me the comments of the study section on the net-flow grant proposal. On the whole, they were very encouraging, in two ways. First, they generally agreed that the approach I had in January (when I submitted the proposal) was sound and interesting, and there was nothing inherently problematic. Second, their criticisms were all about mildness—it seemed a mild innovation, and with a few mild weaknesses. I had applied for an R03 grant, which is for pilot projects with some innovative promise, so obviously that plays an important role in the evaluation of projects within the study group.

What to do? With NIH, I can revise and resubmit two times, so I will. My first instinct is to change the following:

  • Update the methods section to what I have now
  • Address whether or how I'd estimate the net flows for specific schools, and how this might be accomplished. (This was a specific weakness identified in the study-group comments.) If I decide it's impractical at this point, explain why the research is still valuable without it. (Why is it okay to look at districts? Many small districts only have one high school. So then I can discuss large-district issues.) Or discuss the hope that modeling retention rates might allow the choice of retention rates for a high school based on an aggregate figure (which is often available). This last will be acceptable to demographers, who often must choose a set of model mortality rates when estimating population parameters or projecting populations.
  • Change the focus of the proposal to something that emphasizes the innovation and the immediate intellectual results, which may compensate for the mildness. The historical materials here might be very useful, since a number of school systems have either age-grade tables (Delaware) or both age tables and retention data for grades (Boston, from 1884 through the 1950s). Thus, there might be some real hope for establishing relationships between the age-derived estimates and grade-derived estimates.

The trick, I suspect, is not to overpromise. I can't promise to look at all districts for multiple years in Texas, Florida, North Carolina, and Massachusetts, make school-level estimates for a large state, estimate many series of data from historical records, and also conduct the analysis I've proposed to NSF. I've embarked on a potentially long-term series of research projects stemming from this method. So the question is how to frame it as a good pilot study.

I sometimes plan articles and other pieces from the reaction I want to get from a reader. Usually it's "I hadn't thought of that and, with a few seconds' reflection, it makes a lot of sense." Here, I need something different: "Wow, he's continued to work on this project, he's addressed our concerns, and I'd give my next sabbatical to see the results." Well, not really the last one, but I do want to give them the impression that this is an incredibly promising idea.

August 12, 2004

Grant proposal away!

The proposal to NSF based on the net-flow estimates is now submitted. We'll see what happens. (I asked for a primary review by the Sociology program.) Thanks to a staff member in my college, I now know how to get the parallel data (not by grade but in toto) to estimate net flows for graduate programs at USF, divided into advanced and other (mostly masters) programs. Hmmn...

August 11, 2004

More net-flow SAS files

More test SAS files for state-level net flow from North Carolina in 2000-01, Massachusetts in 2000-01, and Massachusetts from the mid-1990s. As I had seen even before I got the algorithm right, there's a significant downtick for Mass. 2000-01 for 10th grade (or a higher-magnitude negative flow). You need several years of data after that point to know precisely how to interpret that datum. Is it a response to the implementation of either the high-stakes system with MCAS or the exit exam in Massachusetts? Hard to tell, exactly. Given anecdotal evidence of school-system responses to high-stakes testing, it might be evidence of a massive burp of triage/purging or of a student response. It's the broader pattern that's really necessary to tease things out.

One methods issue: there are two ways to think of smoothing data for small systems. One is to take annual data and smooth it. The other is to estimate two-year net-flows. Have to work on appropriate SAS statements for that.
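
A sketch of the two options, with invented rates and Python standing in for the eventual SAS statements:

```python
# Two ways to steady net-flow estimates for small systems (invented rates).
annual_net_flow = [-0.01, 0.03, -0.04, 0.02, -0.02]

# Option 1: smooth the annual estimates (three-year moving average).
smoothed = [sum(annual_net_flow[i - 1:i + 2]) / 3
            for i in range(1, len(annual_net_flow) - 1)]

# Option 2: estimate over two-year windows instead. Averaging adjacent
# single-year rates approximates this; in practice you would recompute the
# estimate directly from two-year enrollment changes.
two_year = [(a + b) / 2 for a, b in zip(annual_net_flow, annual_net_flow[1:])]

print(smoothed, two_year)
```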

July 8, 2004

When you don't have all grades...

One more idea: what to do with districts that aren't unified—do not have students in all grades? There are bunches of districts in Texas and Massachusetts, for example, that have only elementary or only secondary grades. The iterative process for estimating student net flows relies on the whole grade span in two different ways—you need the upper grades to estimate the lower grades properly, and you need the lower grades to have a baseline net-migrant rate against which to compare the net-flow rates for high-school grades.

So the inverse (or converse) of a jackknife approach is called for.

The jackknife is a statistical procedure that allows one to capture how robust a summary measure is by selectively removing points and recalculating the measure with different points left out. If that set of jackknife measures clusters around the estimate for the whole sample (or population), then it's a fairly robust measure. (There are other uses for the jackknife, but they're beside the point here.)

Here, we can use a jackknife-like procedure to get at the reverse: what is the measure for the deleted population? If we can find the net flows for a large area (like a state) and then the net flows for the state with a limited-gradespan district deleted, I think we can then find the net flows for the district. That takes care of the first problem. I'm still not sure how to get at the baseline net-migrant rate, though I suspect that in most places, a secondary-only district will have some elementary or unified districts clustering around it geographically, and one can probably use those figures as a reasonable baseline for the secondary district.
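
Here's the gist of the complement idea as a toy calculation (my illustration, with invented numbers, assuming net flows add across districts):

```python
# Backing out a secondary-only district's net flow from two larger runs:
# the whole state, and the state with the district deleted (invented counts).
state_net_flow = 4200              # hypothetical net in-flow, whole state
state_without_district = 3900      # hypothetical net in-flow, district deleted

district_net_flow = state_net_flow - state_without_district
print(district_net_flow)           # 300: implied net in-flow for the district
```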

Reference

Efron, Bradley. 1981. Nonparametric estimates of standard error: The jackknife, the bootstrap and other methods. Biometrika 68: 589-599.

Net-flow SAS test files

After talking with my colleague John Ferron, I've tried to use the SAS DATA step to calculate the net-flow rates and then to vary the retention-rate estimates randomly around the official figures, to see how that changes the results.
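
In Python rather than SAS, the experiment looks roughly like this; the retention rates and the net-flow function are placeholders, not the actual iterative algorithm:

```python
# Monte Carlo perturbation of retention rates (placeholder estimator).
import random

official_retention = {9: 0.25, 10: 0.12, 11: 0.10, 12: 0.08}  # hypothetical

def net_flow_estimates(retention):
    # Stand-in for the real iterative net-flow calculation.
    return {g: -0.05 + r * 0.1 for g, r in retention.items()}

results = []
for _ in range(1000):
    jittered = {g: max(0.0, r + random.gauss(0, 0.02))
                for g, r in official_retention.items()}
    results.append(net_flow_estimates(jittered))

# Spread of the grade-9 estimate across replications: median, rough 95% band.
grade9 = sorted(r[9] for r in results)
print(grade9[500], grade9[25], grade9[975])
```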

I'm not surprised that the most fragile estimates are net flows at 8th and 9th grades, because retention rates are typically highest in 9th grade and it's in 8th and 9th grades when students start to drop out of school in larger numbers. The retention rate affects the net-flow estimates most for that grade and the grade below it. (The algebraic expressions for the estimates only include data from the grades surrounding the grade in question, but the iterative process creates a larger influence down the grades from a specific retention rate. Regression on the Monte Carlo data sets strongly suggests that the influence is highest on the grade below, then on the same grade, and then the retention rate's influence on net-flow estimates sharply decreases for other grades.)

The most surprising feature for the Florida 2000-01 case is the estimated net in-flow during 8th grade. I suspect that's an artifact of the data to some extent—an underestimate in 9th grade retention would boost the implied net in-flow for 8th and increase the implied net out-flow for 9th. But the official retention rate for 9th graders in Florida for 2000-01 is 25% (calculated from end-of-year rolls to the beginning of the next year). Could it be higher? The other moderate influence could be the distance between promotion time and enrollment-counting time. I've been assuming that Florida's August start time is about 85% of the way through an October-to-October enrollment-counting cycle (which is what the federal government wants for its Common Core of Data, my source for enrollment). But if the enrollment is a beginning-of-year count, the figures are a little less anomalous.


July 6, 2004

More on student net flows and dropping out

Sometimes it's hard to sit on the sidelines during a public-policy debate when an ongoing research project is relevant. I've seen that twice this year, first as arguments developed in Massachusetts over whether student dropout rates had increased after the creation of a graduation test and more recently when the Florida Department of Education gave the St. Petersburg Times erroneous figures on the ages of students taking the GED tests. From the first figures produced by Florida's government, the Times wrote an article implying that the new graduation test had pushed a large number of teenagers to drop out of school and take the GED instead.

I've said nothing other than to tell a few people about my project and say, "I'm pretty confident, but it's still in development." And sometimes I'm quite happy not to have overpromised things. Over the weekend, I discovered a notational error in the working paper I previously posted and a substantive error in something I sent a colleague, the latter a modification of that paper to adjust for mid-year promotions (or, rather, promotions between the end points of enrollment-count intervals).

This adjustment is important because students move into a new grade at the beginning of a school year, which can range from early August to September, depending on the state and district. But the fall enrollment data sent to the U.S. Department of Education is from October. I'm assuming that the absolute net-migrant count is evenly distributed over a year, and then all that's necessary is to provide a scaling factor and an additional term in one equation. I'll put something up sometime in the next week or so to fix the notational error and add the adjustment factor.
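
To give a sense of what I mean, here is one way such an adjustment might be written; this is my back-of-envelope rendering, and the corrected working paper will have the authoritative version:

```latex
% Back-of-envelope rendering (mine, not the working paper's). Let f be the
% fraction of the enrollment-count interval elapsed when promotion occurs,
% and let M be the net-migrant count, assumed evenly spread over the year.
\[
M_{\text{pre}} = f\,M, \qquad M_{\text{post}} = (1 - f)\,M
\]
% The pre-promotion share of the flow is attributed to the old grade
% assignments and the post-promotion share to the new ones: f is the
% scaling factor, and the second piece is the additional term.
```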

I also have been thinking about one bit of advice I received in May, about the stability of these estimates. Can I come up with maximum likelihood estimations of the net flows and then look at key figures (the diagonals in the information matrix, if anyone's interested)? The colleague I needed to apologize to about the substantive error volunteered to see if he could help me with that. But can I also use some Monte Carlo or bootstrap procedures? The key thing is to think about where an estimate might be wrong. There might be some errors in the assigned grade level or some missing students (or those who have dropped out but are still on the rolls). I suspect the biggest source of potential error is in the retention rate, so that should get the closest attention.

References

Ron Matus. 2004. State: FCAT may fuel big GED numbers. St. Petersburg Times, June 14, 2004. Retrieved July 6, 2004, from http://www.sptimes.com/2004/06/14/news_pf/State/FCAT_may_fuel_big_GED.shtml.

From FCAT to GED [editorial]. St. Petersburg Times, June 18, 2004. Retrieved July 6, 2004, from http://www.sptimes.com/2004/06/18/news_pf/Opinion/From_FCAT_to_GED.shtml.

GED article based on inaccurate state statistics. St. Petersburg Times, June 28, 2004. Retrieved July 6, 2004, from http://www.sptimes.com/2004/06/28/State/Article_on_GED_based_.shtml.

May 26, 2004

First paper on net-flow rates

My first paper on the net-flow rates, Indirect Estimation of Student Net Flows and Dropout Rates: Concepts and Examples, is now available (PDF version). I need to work on a few things—okay, more than a few things—but here's the gist.

April 15, 2004

Kentucky points the way

When I wrote "Alternatives for Florida's Assessment and Accountability System" (available in Word or PDF, in a new window) for the Reform Florida briefs on educational policy in the state, I hadn't known that Kentucky requires educational audits for schools labeled low-performing. That had been one of my key recommendations, and I wish I had been aware of Kentucky beforehand. I'm not sure if there's been solid research on Kentucky's practice, but I'll check before I turn the briefs into any article(s).

Sometimes, that's the risk with writing policy stuff under time pressure: you can't know everything that's happening. But Kentucky's example at least shows that the recommendation is workable.

March 20, 2004

Simplify, simplify!

Today, I figured out how to simplify the explanation of the net-flow rate. It makes no real difference either in calculations or in the complexity of the spreadsheet, but it'll reduce the difficulty of explaining it to non-demographers.

Here's the gist: the increase in enrollment for grades x on up between any two points in time is equal to

  • the number of those who enroll in grade x for the first time between those two points
  • minus
  • those who graduate between the two points
  • plus
  • those who enter for all other reasons (moving into an area, transferring from other schools, returning to school)
  • minus
  • those who leave for all other reasons (moving away, transferring to other schools, dying, dropping out).

The last two terms can be collected as a residual net flow, which can be estimated; then, from the cumulative raw numbers, you can infer grade-specific raw net-flow numbers (and thus rates).
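
A minimal sketch of that accounting, with invented counts and Python standing in for the spreadsheet:

```python
# Cumulative accounting identity for grades x and up (invented counts):
# change in enrollment = first-time entries - graduates + residual net flow,
# so the residual falls out by subtraction; differencing adjacent cumulative
# residuals then gives grade-specific net flows.
grades = [9, 10, 11, 12]
enroll_t1 = {9: 1000, 10: 950, 11: 900, 12: 850}
enroll_t2 = {9: 1010, 10: 940, 11: 880, 12: 840}
first_time = {9: 1020, 10: 960, 11: 905, 12: 860}  # first-time-in-grade counts
graduates = 820                                    # graduations between t1 and t2

def cum(counts, x):
    """Enrollment in grades x and up."""
    return sum(v for g, v in counts.items() if g >= x)

net_cum = {x: (cum(enroll_t2, x) - cum(enroll_t1, x)) - first_time[x] + graduates
           for x in grades}
net_by_grade = {x: net_cum[x] - net_cum.get(x + 1, 0) for x in grades}
print(net_by_grade)  # {9: -50, 10: -65, 11: -65, 12: -50}: net leavers besides graduates
```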

February 29, 2004

Presentation next week

I was invited to give a presentation in Phoenix next week, on a topic of my choice. No, it's not a job interview (or at least not that I'm aware of). Fortunately, I was intending to be in California anyway, and my brother Ron's family is in Tempe, so this'll be fine. I just need to figure out how to flesh out my talk on "Sideways Policy Analysis: An Historian Looks at High-Stakes Testing."

February 5, 2004

Hurrah for keeping notes!

I unearthed my dissertation notes and copies of archival materials from 1990 and 1991 and discovered that—as I had hoped—I had copied student enrollment information from Atlanta and New York City schools from the mid-20th century. I'm not sure if it has all the information necessary to calculate student net flows, but I can hope!

January 21, 2004

Grant proposal drafted

The proposal for NIH on indirect measures of dropping out is drafted. Whew! Now to proof it until there are no errors left.

January 5, 2004

Behind already!

It would be extraordinarily silly to be bothered about this, but I just didn't get done with everything I hoped to accomplish over break. But I suppose break is not for getting everything done but for setting yourself up for a big to-do list just when students want and deserve your full attention. Ah, well.

Met today with Srinivas about our project. Meeting with him is a real pleasure—we gab a bit, we get our business done, and we then agree on when we're meeting next and what we hope to do then. Fabulous!

December 16, 2003

Great ideas, no time

A week ago, more or less, I had one of the clearest epiphanies I’ve ever had in academe, right in the middle of finals week. Do I have time to work on this right now? No! Sheesh. But I can describe it, and it’ll sit until I do have time, or I’ll peck away at it. Here’s the gist:

Dropout and graduation rates are notoriously unreliable. There are decent measures of graduation, if you use population-based data from the Census Bureau (for the U.S.), but numbers from school systems are awful. Part of the reason why they’re bad is because counting dropouts relies on accurate identification of school-leavers as dropouts (as opposed to transferees), something that’s tough even when there isn’t evidence of intentional fraud. Part of the reason is because students transfer between schools at such a high rate that looking at raw numbers longitudinally is seriously problematic. Part of the problem is relying on information by grade when that number is fuzzy as you get to secondary school and when you never know how long someone can stay in a grade, especially with what Robert Hauser has called an epidemic of grade retention (PDF file—look at the figures starting on p. 55 of the file). Demographers like to work with age, because you can generally rely on someone’s age going up by one year for every year of time. (There’s a phenomenon of misestimation called age heaping, but I’ll ignore that for the moment, and it’s considerably less evident at younger ages and in countries where knowing your birthday is common.) But school systems do not publish information by student age.

So here’s my epiphany, inspired by some wonderful work by demographers Sam Preston, Ansley Coale, and Ken Hill: one version of the demographic balancing equation is that the rate of growth in any population is equal to the birth rate minus the death rate plus the net migration rate. That’s pretty simple. Here’s a corollary: if you take any age x, the rate of growth in any population for that age up (from x to infinity) is equal to the “birthday rate” at age x (the rate at which birthday x is happening in the population) minus the death rate for the population from x on up plus the net migration rate for the population from age x on up. Most demographers don’t use this, because you can get good estimates of mortality and fertility directly from birth and death registration systems combined with census figures (estimated or actual full census).

But here’s the application to school systems. For any grade x, the growth rate for students grade x and up is equal to the “first time in grade” rate for grade x minus the graduation rate plus a residual “net flow” rate that includes transfers in and out, student deaths, dropping out, and returning to school. You can calculate all of that for a single year just by knowing the enrollment counts by grade for two successive years (or any two points in time), the number of graduations between the two points in time, and the retention rate for each grade. Or, rather, by a bit of algebraic magic, from that data one can directly calculate the growth rate, the first-time-in-grade rate, and the graduation rate, allowing one to infer the net flow rate.

And if you can calculate the net flow rate for grade x on up for every grade, then you can get the net flow rate grade by grade. Since children of school age move around at a fairly even rate across the age span, you can take the average net-flow rate for the earlier grades (grades 2-7 look pretty good) and then calculate an adjusted net-flow rate that should be pretty close to the sum of students flowing in and out of schools because of deaths (pretty small), in- and out-flows that are for specific grades (most commonly flowing to public schools in 9th grade), and dropping out.
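
In code, the baseline adjustment amounts to something like this sketch with invented rates:

```python
# Adjusting grade-specific net-flow rates against an early-grade baseline
# (invented rates). The early grades should reflect ordinary mobility only.
net_flow = {2: -0.010, 3: -0.012, 4: -0.009, 5: -0.011, 6: -0.010,
            7: -0.008, 8: -0.015, 9: -0.020, 10: -0.045, 11: -0.060,
            12: -0.050}

baseline = sum(net_flow[g] for g in range(2, 8)) / 6    # grades 2-7 average
adjusted = {g: round(net_flow[g] - baseline, 3) for g in range(8, 13)}
print(adjusted)  # residual flows: deaths, grade-specific transfers, dropping out
```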

I’ve done this tentatively for Massachusetts 1996-2001 (skipping 1998) and for Texas for 2000-2001, since the states post grade-by-grade retention rates. But it was pretty simple; there’s a clear dip in the 10th grade net-flow rate for Mass. in the last year that might be attributable to the MCAS graduation requirement; and I can easily imagine how to write grants for this for NICHD and NSF (with an extension to analyzing retention in higher ed). Now, if only I didn’t already have the following on my plate:

  • two edited book projects that are in process
  • some papers I’m committed to working on as a fellow of ASU’s Ed Policy Studies Unit
  • a book on academic freedom that I should write, given events at USF and my knowledge of them
  • an historiography book tied to the history of education
  • everything else academics typically do

So it’ll sit there until I can figure out how to carve out time. Ideas are welcome!

June 5, 2001

Posing as Expert

Yesterday, a reporter from WCPN public radio in Cleveland asked me if I'd be willing to be interviewed on-air tomorrow morning about the Ohio testing system. Ohio's state department of education released some test results last week, and I gather the station will run a story summarizing them and then asking me some questions. I've been interviewed a few times by journalists, and it's one of the public services academics do (in addition to the ego-boost and minimal public attention).

The trick for preparation is to draft some short questions you think a journalist might ask and then think about your answers. It's much like guessing questions that would be on a final exam, except your answers on the exam are not read or heard by thousands of people. I have my "cheat sheet" on my desk right now.

I'm also revising the article manuscript that the History of Education Quarterly has accepted for publication (next year, probably in the fall). I've tried to make the first few paragraphs more readable.

Between two trips last week and the two-week trip starting June 12, I have a lot to do this week, playing catch-up and getting the ball rolling before I leave town.

April 27, 2001

Writing

The last class is done and now I grade. In the meantime, I've finally read the March 2001 issue of Educational Researcher, to find the following in one article:

Freire's theory appears to be insufficiently historicized, even though he places a historical and cultural praxis at its core. As we will see, this leads to a connected group of ontological and epistemological quandaries that require substantially different responses than Freire provides. In addition, because of the structure of his arguments, these problems impact Freire's ethical and political positions since he supports them by ontological appeals to human nature and by epistemic claims about situations (including self-understandings).
(Ronald David Glass, "On Paulo Freire's Philosophy of Praxis and the Foundations of Liberation Education," Educational Researcher 30 [March 2001]: 20)

In plain English, Paulo Freire naively assumed that human nature pushes peasants and poor people toward embracing radical democracy and redistribution of resources. I wish there had been a neon sign in front of the article: "We apologize for the incomprehensibility." Fortunately, immediately afterwards is an article co-written by one of my favorite authors on writing, Mike Rose. "A Call for the Teaching of Writing in Graduate Education" (Mike Rose and Karen A. McClafferty, Educational Researcher 30 [March 2001]: 27-33) describes a graduate seminar in writing at the University of California, Los Angeles. I have no idea if the editors of Educational Researcher intended the issue itself to help make the case for teaching academics how to write!

Mike Rose, author of Lives on the Boundary (1989, available from an alliance of local independent bookstores), has taught me a great deal about how adults become socialized into writing. When returning the first batch of written work, I explain to my undergraduate students that I don't know whether the mistakes I see result from sloppiness, from never having been taught, or from desperately trying to work with new ideas (which, I realized thanks to Rose, often happens with students). They are responsible for figuring out what happened and, if they need help, for asking me or finding other resources. But I forget that my thick skin is the result of my own experiences, and too many students see comments on their writing as comments on their personal character.

March 30, 2001

Starting a research project

Well, we didn't exactly start a research project, but we held the first meeting where we began putting together short-term timelines for who does what.

The Spencer Foundation, an organization that funds educational research, gave the University of South Florida money for two years to start three projects as part of a new consortium on educational research in Florida (or CERF, if you want to use an acronym). This afternoon, several of the researchers met (or phoned in) to discuss concrete next steps.

This part—dividing up responsibilities—is a new experience for me. I've worked on research projects where I was the flunky and given responsibilities, and I've run my own research projects, generally just myself or where I give graduate students clearly delineated responsibilities. But being in a group of researchers, each of us with our own agenda and professional needs, and starting to work out what to do with limited time and resources, is going to be interesting.

One weakness of mine as a researcher is hiring research assistants and training them in a way that both gets the work done with a minimum of fuss and respects their needs as students. I'm reasonably sure I treat research assistants decently as human beings, but it's the judgment of what they can do and what they need training in that I'd love to get better at. Fortunately, since I'm not the boss of this consortium, I get to watch others do that.

March 25, 2001

Faster transcription and going-out-of-business sales

I went to the configuration file and stripped out half of the things that were starting up with the computer, and that solved the problem: transcription is much faster now (the desktop is working on it right now). Yes, I'm back in the office on a Sunday. The chapter is dictated, mostly transcribed, and needs references and editing. Whew!

Yesterday, after I came home, the kids and I went shopping for a compass Kathryn could use (found, at Wal-Mart), a frog piggy bank (a froggy bank?) (found, at a Learning Express store; more about that later), horses (same), and an egg timer (not found). Elizabeth (my spouse) does not like the clockwork timer I use to keep my showers short during the drought Florida has had for two years, and we're not going to bring an electronic timer into the bathroom, so we've been looking for one of those three-minute egg timers. Wal-Mart didn't have one, the local grocery didn't, and we have no idea where to look. Any ideas?

We don't usually buy toys at chic places like Learning Express, but this was the children's money, so they got to make the choice. (They had been feeding a neighbor's pets, so they had enough to have some fun.) We also found out that Learning Express is going out of business; well, this particular store is, in about a month. Everything is 30% or more off. Is this the upside of the recession?