June 30, 2006

Questia spam!

I just received an e-mail offer that's clearly spam:

After reading your blog, it is clear that education is an issue you care about. As your commentary becomes a part of history, you are well aware that it is intrinsic to stay informed and up to date.

This is where we come in. Questia provides an online library service with full-text access to over 30 million pages of academic books and articles.
We're reaching out to a select number of bloggers like you who write on topics within our library by offering a complimentary 3-month subscription to Questia.

Our site is set up so that you can create a direct link into the full text of any page from any one of our 66,000 books and 1.4 million articles.
Below you will find a few materials that might interest you: [the rest cut—SJD]

My response:
It's interesting to receive spam based on my blogging. You may note that I teach at a university, which provides me plenty of resources, and I'm unlikely to want to pay for more.

Look, folks—in case you haven't noticed, bloggers can ridicule you when you do something foolish! Update: My university carries Questia. You'd think someone writing a spam-generation routine might at least check for that. Hmm...

Posted in Random comments at 10:17 AM (Permalink) |

NYC graduation rates

In today's NY Times story on school graduation rates by David Herszenhorn, there's an important tidbit: the schools hired a firm to audit records. I didn't know about that practice when I suggested it as one recommendation for Florida's graduation-rate calculation, but I'm delighted to hear of it now. A preventive step to ensure the accuracy of school records? Who woulda thunk it? I'll be curious to see what the actual audit process was.

It's unfortunate that the chancellor's office chose to release the rates selectively, just for 15 small schools (part of the city's small-school initiative). I'm happy that graduation is up at those schools, but it would be a smarter, more credible move to release all of the school stats on the same day, so it looks less like cherry-picking the results. It'll be a few more years before we know whether the small-schools initiative is really an improvement or is too much of "big schools in drag," a phrase I've read attributed to Michelle Fine in Washington Post, Philadelphia Public School Notebook, and InsideSchools.org articles on small schools.

Posted in Research at 7:03 AM (Permalink) |

June 29, 2006

Testing as technology

I've spent my book-time today pondering the nature of testing as technology. For the first time, I'm using concept-mapping software (CMAP), because, well, I haven't been able to get my mind to think about this rigorously, and anything's worth a shot.

So it's been fruitful. One thought in my head before today, which I knew was incomplete, was the technical focus on consistency: consistency among different levels of objects (content standards and item specifications, item specifications and items, items and total tests) as well as the type of consistency we think of as the technical term reliability. Item response theory (which I only grasp in a general way, having had no practice in it at all) is a tool in service to this consistency.
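To make the reliability sense of consistency concrete, here is a minimal sketch with entirely invented item scores of one standard internal-consistency coefficient (Cronbach's alpha); the function and data are my own illustration, not anything drawn from a particular test:

```python
# A minimal sketch (invented data) of one technical meaning of "consistency":
# Cronbach's alpha, a standard internal-consistency reliability coefficient.

def cronbach_alpha(items):
    """items: one list per test item, each aligned across the same test-takers."""
    k = len(items)          # number of items
    n = len(items[0])       # number of test-takers

    def var(xs):            # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    sum_item_vars = sum(var(item) for item in items)
    # total score per test-taker
    totals = [sum(items[i][j] for i in range(k)) for j in range(n)]
    return (k / (k - 1)) * (1 - sum_item_vars / var(totals))

# Three items, five test-takers, 0/1 scoring -- all invented:
scores = [
    [1, 1, 0, 1, 0],
    [1, 1, 0, 1, 1],
    [1, 0, 0, 1, 0],
]
alpha = cronbach_alpha(scores)
print(round(alpha, 2))  # 0.79
```

The same "do the parts hang together?" logic is what the consistency among standards, specifications, and items is reaching for at other levels.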

Trying to wrap my head around this made me think of the obvious criticism of this focus on consistency—the way our real-life skills are not consistent, despite the conformity of tests to this consistency standard. But that doesn't really touch on the core tensions between technologies (such as testing) and democratic politics (no matter what your political theory). Consistency is more a matter of minimum quality, a key concept in the same way that optimization is a driving force in engineering. It doesn't really tell you anything about the politics of technocracy.

So there are two thoughts that are running through my head this evening, after this exercise:

  • The control and oversight in testing has some interesting professional characteristics—internal accountability (within an organization) is critical, the standards are technical, market demands shape behavior (even if the buyers are generally public agencies), and the public legitimacy of testing is crucial to the survivability of the whole enterprise.
  • I need to discuss political-science notions of the iron triangle of industry, Congress, and regulation.

I have nothing more useful or synthetic right now, other than just wanting to jot these down to ponder overnight. Tomorrow is my son's last morning in the summer chess camp (more about chess and history after that's done), so I'll spend the morning in some place trying to ponder/write, pick him up, and then head to campus for a few errands. But this is enough to ponder.

Posted in Accountability Frankenstein at 10:18 PM (Permalink) |

Florida graduation rates inflated

The paper linked from this blog entry describes how Florida's Department of Education inflates the official longitudinal graduation rate by approximately 9-10% through two statistical definitions:

  • It excludes from public-school responsibility all those dropouts who immediately enroll in GED programs.
  • It includes GED and special-education diplomas in the general graduation rate.
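The arithmetic of the two definitions can be sketched with a back-of-the-envelope example; all of these numbers are invented for illustration, not Florida's actual figures, though they are chosen so the gap lands near the range described above:

```python
# Invented illustration of how the two definitional choices move a
# longitudinal graduation rate.
cohort = 1000            # hypothetical entering 9th graders
standard_diplomas = 600
ged_and_special = 50     # GED and special-education diplomas (hypothetical)
w26_withdrawals = 70     # dropouts who immediately enroll in GED programs

# Official-style rate: W26 withdrawals excused from the denominator,
# GED/special diplomas counted in the numerator.
official = (standard_diplomas + ged_and_special) / (cohort - w26_withdrawals)

# Stricter rate: keep W26 students in the cohort, count only standard diplomas.
strict = standard_diplomas / cohort

print(round(official * 100, 1))  # 69.9
print(round(strict * 100, 1))    # 60.0
```

With these invented inputs the two quirks together add almost ten points, each one shaving the denominator or padding the numerator.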

I have discussed this problem before in this space, but until recently I did not have data to quantify how these definitional quirks affect the actual numbers. In the last two weeks, staff from the Department of Education sent me additional (if limited) information on the cohort calculations at the state and county levels since 1999, and they allowed me to correct at least one of the problems (the excusal of so-called W26 withdrawals from school responsibility) and take a decent guess at the effect of the other.

Part of this information appears in the manuscript I've sent off to a peer-reviewed journal, and usually I frown on touting research publicly until it's been reviewed. Yet the reports released last week by Ed Week and the U.S. DOE have given me an opportunity to point out some of the problems. And you will find that much of the detail here would never be publishable in a national refereed research journal—it's too specific to Florida in some instances. Yet it should be available publicly. So, what's the ethical stance?

What I've chosen to do is release it here on my blog and send it to Florida education reporters, but no announcement to national reporters. And I think I'm careful in the paper itself to explain that it is not a refereed publication. The analysis is pretty simple, and anyone can do it.

Posted in Research at 12:21 PM (Permalink) |

June 28, 2006

The first Garcetti v. Ceballos fallout, in education

Well, it was bound to come, but I didn't expect it so quickly. This month, a federal judge in Tampa, Florida, dismissed the complaint of a fired Polk County principal directly because of Garcetti v. Ceballos.

Former Kathleen High School principal Mike D'Angelo was fired after an outstanding evaluation, and D'Angelo claims it was because he was investigating the possibility of turning the high school into a charter school. In explaining his decision, Judge Richard Lazzara said (according to a transcript quoted in the Lakeland Ledger article):

You know, I've always been taught that for every wrong there's a remedy. Well, there was a remedy here, which turned out to be hollow. So I say to you, Mr. D'Angelo and Mrs. D'Angelo, I think this is the first time I've ever done this—I apologize to you. I don't like making this decision, but the law compels it.
Posted in Education policy at 8:57 PM (Permalink) |

Commentary on Spellings Commission draft report

I just sent off an op-ed column on the Spellings Commission draft report to one of the major higher-ed news outlets. In it, I acknowledge the misleading statements in the draft but focus on a deeper problem in the report, a fundamental inconsistency that I think is a fatal error (even if you agreed with the factual claims).

We'll see if there is any interest.

Posted in Academic freedom at 1:31 PM (Permalink) |

Growth models, technocracy, democracy, and algebra

I had thought that my being anxious and out of sorts for several days was from that line-drive hit into my leg two weeks ago and the resulting swelling and discomfort. But no—it was from the interruption of the book-writing for other things that are quite valuable (the journal, the article on migration and graduation estimates, and a few other tidbits) but that stopped my writing momentum. I think I have it back now, and I'm much happier. I know—I should be delighted that I've accomplished so much with even a minor injury. But I have a compulsion (hopefully not a disorder!) to get this book done.

Current status: I have a contract I need to sign with one publisher (hurrah! I'll add that information as soon as I receive my completely-signed copies), and I'm on chapter 2 right now. That's the chapter on the relationship between technical expertise and democracy, one that explores the politics of accountability statistics and how that is rooted in a long-term tension between technocracy and democracy. The first section of the chapter explores the Progressive-Era origins of prestigious technical expertise and our society's ambivalence toward expertise (with IQ testing as a prominent example). The second section of the chapter explains the organizational life of testing and how the fragility of the world of testing undermines our ability to use high-stakes testing with confidence. I suspect I need to add a separate section on some of the stuff that didn't fit in the first chapter, on the civil-rights meme that's only just emerged as a major rationale for high-stakes testing.

This morning, I'm working on the last section, on growth modeling. I've discussed growth models before. It's a paradigm* of the dilemma in balancing both technocracy and democracy. My goal is to describe both the technical difficulties with growth measures and the way that the holy grail of growth has obscured the political questions involved: how we set expectations for schools and students.

Addendum (added after a few minutes' thought waiting in a coffee line): the tricky part of this chapter is figuring out what technical discussion is necessary without going over the heads of the potential audience. What can I assume? Practically speaking, I think I need to assume some knowledge of plain multivariate regression. These days, principals and superintendents need some statistical reading skills (though I won't call it statistical literacy) not to be bowled over by waves of school accountability statistics and bad research.

And there's another place where algebra has some use in everyday life. If you understand a linear equation, you can understand multivariate regression. This means that administrators, and any teacher (at any level!) who wants to be an administrator, needs to know algebra.
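As a sketch of that claim, the following fits a multivariate regression by ordinary least squares on invented school-level numbers; the point is that the fitted object is just y = b0 + b1*x1 + b2*x2, the same algebra as y = mx + b with extra slopes. The variables and data are hypothetical, not from any real accountability system:

```python
import numpy as np

# Hypothetical rows: a column of ones (intercept), prior score, poverty rate.
X = np.array([[1.0, 50, 0.9],
              [1.0, 60, 0.8],
              [1.0, 70, 0.5],
              [1.0, 80, 0.3]])
# Current scores for the same hypothetical schools:
y = np.array([52.0, 63.0, 74.0, 85.0])

# Ordinary least squares: find b so that y is as close as possible to
# b0 + b1*x1 + b2*x2 across all rows.
b, *_ = np.linalg.lstsq(X, y, rcond=None)

# Reading a prediction off the fitted "line" is plug-and-chug algebra:
pred = b[0] + b[1] * 65 + b[2] * 0.6
print(round(float(pred), 1))  # 68.5
```

Anyone who can substitute into a linear equation can read such output, which is the statistical reading skill administrators actually need.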

* Paradigm is now associated with Thomas Kuhn's (mis)usage of the word to mean "social model." I'm using the older meaning.

Posted in Accountability Frankenstein at 9:57 AM (Permalink) |

June 27, 2006

Bargaining spotlight in Miami-Dade

Matt Pinzur discusses in his blog today the non-salary negotiation issues described by the United Teachers of Dade regarding the three-year contract currently in negotiations. As someone who's a higher-ed union activist, I don't know that much about the K12 negotiating field, but Pinzur takes a stab at tea-leaf reading. It's a fascinating fleshing out, and I'll be curious to see how accurate he is.

Posted in Education policy at 4:30 PM (Permalink) |

Spellings commission draft shows signs of distortion

Sigh. Sometimes, it's hard to have perspective. Reports of the Spellings commission's draft are not exaggerated, and it appears that Miller is hell-bent on claiming that college costs have risen and need to be clamped down on.

Oh, yes, and quality needs to rise.

What is true, and what I explained to a key staff member last summer at the NEA/AFT Higher-Ed Conference, is that tuition has gone up dramatically. But tuition has risen far more than actual costs, especially at public institutions, because states have cut the proportion of higher education funding provided from state coffers. In the public sector, at least, tuition costs are a reflection of cost-shifting from the state to students and their families.

And in the private market, the "list price" is as much a marketing tactic as a real cost. It's the snob factor that causes Princeton's tuition, room, and board to top $40,000, even though its endowment could allow everyone to get an undergraduate education without any tuition at all.

So what the heck is Miller getting at, and does he understand that maintaining this stance in the final report would be as close to an outright lie as one can get without stepping in it?

Posted in Random comments at 8:05 AM (Permalink) |

June 26, 2006

Migration and graduation MS

Well, the MS is now submitted to a journal that accepts papers online. Hurrah! No copying-several-times-over-and-mailing. In the end, I added an additional section overnight, shoved some things around, did major tinkering in the Microsoft Equation frames (when plain text just doesn't handle integral or summation signs).

The conclusion of the paper: "Become my minions and let me rule Graduation Rate Land. Bwahahaha!" No, that's not the conclusion of the paper, and I do not intend to join Evil Overlords Anonymous (but see a song and Harry Potter fan fiction for more on that). I do try to make a mild case in a few spots for the superiority of measures not based on grade level, but the main point of the paper was to demonstrate how sensitive most graduation-rate formulas are to migration, and the problems that will be encountered as states try to meet their obligations under the National Governors Association compact.

Blogging about my research raises an interesting question, since the target journal has anonymous reviewing: would someone who reads my blog have to recuse herself or himself from being a referee? It's akin to walking by and reading a conference poster closely and then receiving a review assignment of the later paper. But sometimes you just can't get around these things, and besides, you can be wrong about your assumptions.

Posted in Research at 10:18 PM (Permalink) |

June 25, 2006

Article manuscript on graduation and migration drafted

Well, the draft is done. Time to go through it with a fine-toothed comb for grammatical kerfuffles, confusing terms and symbols, and citation errors. Right now, the conclusions are fairly strong: you can't get accurate graduation rates without accurate information about migration. And that means you just can't have accurate graduation rates at the school level, even though NCLB demands it.
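A minimal numerical sketch of why migration matters (invented figures, not from the manuscript): a cohort-style rate divides graduates by the entering class adjusted for net migration, so any unmeasured migration shifts the rate directly.

```python
# Hypothetical cohort-style rate: graduates / (entering cohort + net in-migration).
entering = 500
graduates = 350

def grad_rate(net_in_migration):
    # net_in_migration: in-migrants minus out-migrants over the cohort's years
    return graduates / (entering + net_in_migration)

print(round(grad_rate(0) * 100, 1))    # 70.0  (migration ignored)
print(round(grad_rate(-50) * 100, 1))  # 77.8  (50 net out-migrants counted)
```

Nearly eight points of difference from the same diploma count, and at the school level migration is larger and worse measured than at the state level.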

Posted in Research at 10:42 AM (Permalink) |

Online encyclopedias, but not wikis

Scott McLemee has joined Sage Storrs, Jeremy Boggs, Alun Salt, and Roy Rosenzweig in discussing both how to teach students how to read Wikipedia skeptically and also how to colonize Wikipedia and other open-source secondary materials. I will leave the teaching angle alone for the moment and head directly to the question of what we write and how.

As the editor of an online journal, I am biased in favor of open access, and one of the frustrating aspects of the Historical Statistics of the United States—Millennial Edition (HSUS-ME) is its hard-copy business model, which relied on an old assumption: you get an agreement with a publisher for an expensive project, maybe enough of an advance to pay contributors a pittance, and then sell hard copies to libraries at exorbitant rates to justify the project. Cambridge University Press added on a pay-per-view model. For example, if you head towards the historical statistics on education, you get asked for $6 for 48-hour access.

For many, that's not a bad deal. Pay, zip in, suck up the tables while you have access, and go. But think of who this leaves out: schoolkids whose parents don't have access to $6. You want to give students the sense that history is beyond their reach? Put things behind a subscription wall. Fortunately, the American Memory Project of the Library of Congress, along with dozens of other sites, shows that there is a way to provide access to historical materials (and I consider statistics part of that—see the Integrated Public Use Microdata Sample website for an example—their business model is "get grants; do work; make it available to the world for free"). For goodness' sake: many of the source materials for HSUS-ME come from the Census Bureau and other agencies which throw terabytes of data online for public consumption. And HSUS-ME is behind a subscription wall? To quote from a Christine Lavin song, "What were they thinking?"

But that's blood under the bridge (to quote a German professor from many years ago at another institution). I hope that the HSUS-ME is the very last semi-definitive compilation of statistics that operates under this model.

So, from a philosophical and professional standpoint, I'm in favor of open access. And yet I know from my one attempt at contributing to Wikipedia how frustrating collective writing is. Fortunately, there are other options. Online refereed encyclopedias for narrow topics, the reference equivalent to online journals, could allow anyone to submit an article that would be vetted by an editorial board. The versions would be (like Wikipedia) open to comment and discussion, but there would be editorial control. But the wonderful thing about online encyclopedias is that there would be openness to multiple perspectives. If you don't like an article's stance, just write a competing version! And with an online encyclopedia, obsolete articles aren't a problem.

I have my hands full with current obligations, but I'd love for others to run with this one. Go collect enough of an editorial board to run the project, get someone to fund the copyediting if you can (such as scholarly societies), and then visit the Open Journal Systems website (free online-journal system) and see if that might work for you. (I can think of at least two ways to tweak that into an encyclopedia-friendly form.)

Posted in History at 8:00 AM (Permalink) |

June 24, 2006

Grant applications, one rejection and one improvement

This is not only the week of graduation-rate reports, it's also the week of receiving news on two grant applications. The short précis sent to the U.S. Department of Education's unsolicited research competition was rejected (i.e., they didn't want a full proposal), and unfortunately a review of the recommended regular competition for the next fiscal year confirms that it wouldn't fit in any of the priorities. That was a longshot proposal.

The shorter odds are on the revision of my 2002 NIH proposal. This time around, I had changed the measure to one based on age, had narrowed the scope, and had gone to some trouble with data collection in the intervening years. Last time around, the percentile score for the study section (demography) was 52.0, which in NIH tradition is the reverse of normal percentile scores, meaning that a slight majority of proposals in the prior year had been scored superior to my proposal. This time, the percentile is 41.6, a moderate but definite improvement. If the funding cutline is generous (unlikely!), it'll get funded automatically. I could also get funded by a program-officer recommendation for select pay, but that usually happens on the last revision. My prediction: no funding this time, but this has a good shot at funding with another revision.

At least two other grant proposals to be written this summer. One is another longshot, and another is another revision (or maybe it'll count as a new proposal, depending on what the program officers advise), this time for NSF.

Posted in Research at 11:27 PM (Permalink) |

It's the idea, stupid, not the network

And this morning, I get to combine two major topics of this blog, academic freedom and (mostly K-12) education policy, with the following message: a social network is not a conspiracy, but it can embody a worldview.

Case 1: NCLBlog (of the AFT) has pointed out that the response to a Freedom of Information Act request, about Richard Berman (a modern, suited, and not trigger-happy sort of Pinkerton agent) coordinating an anti-union publicity campaign, names a whole lot of people at least tenuously involved in Berman's activities, including bloggers.

Case 2: Last Friday (June 16th), Alan Jones penned a column at Inside Higher Ed, trying to "connect the dots," as the headline put it, in funding of conservative groups who criticize higher ed. His central claim: "the same funding sources that brought Horowitz’s organization into being, also created and sustain a large and integrated network of ideologically defined think tanks and centers both outside of and within the higher education establishment.... The relentlessness with which columnists and experts with direct funding relationships with Olin, Scaife, Bradley, Koch and Coors level charges of academic bias and assert the need for legislative reform of higher education is remarkable. The goal of this narrowly focused and ideologically driven public relations campaign can only be understood in terms of its fostering of a political climate in which federal regulatory “reform” of what is universally recognized as the finest system of higher education in the world, will be tolerated."

Sample response in case 1: Andrew Rotherham (who was apparently mentioned in the documents) writes, "I'm shocked! Next they'll tell us that the NRA and the Republicans are in cahoots!"

Sample response in case 2: After pointing out some egregious factual errors, Jones's target Cathy Young writes, "Jones's diatribe resembles nothing so much as David Horowitz's attempt to sniff out George Soros's money behind every left-wing venture."

A FOIA request is not a conspiracy theory, but I hope that the AFL-CIO (with which I'm affiliated because of my membership in an AFT local) is careful in describing the social networking, because it's embarrassing when someone like Alan Jones goes way overboard. Social networking and funding are not, in themselves, evidence of some conspiracy or inappropriate twisting of a democratic polity. It seems that conservative funding agencies and think tanks (and wannabes) have not tried to hide their links. This is different from the efforts of very wealthy families to establish front organizations deliberately to fight the estate tax. The key officers and staff members in the Reason Foundation, Manhattan Institute, American Enterprise Institute, Hoover Institution, etc., are not fronts. They're true believers or serious intellectuals (or both). And there are serious intellectual differences among the think-tankie types (to borrow from a Tom Chapin song).

The way to respond to the arguments of a social network with a coherent worldview (when there is one) is ... to respond to the arguments. If you're truly afraid that a network of funding will wash over the efforts of hundreds of folk, remember that we have the internet now. Michael Bérubé can now wield almost as much force through his blog as David Horowitz can with his minions at Front Page. We'll be in trouble if Horowitz ever hires a fact-checker, but the nice thing about the wild ones is that they generally think they don't need fact-checking.

So how have social networks blinded conservatives and others in talking about either teacher unions or academic freedom?

In the area of education and teachers unions, the key blind spot is the inconsistency surrounding material interest. Whenever you hear someone describe unions as "interest groups" or as organizations devoted to "adult interests," not "children's interests," listen carefully to arguments made about the motivation of teachers. In many cases, the same folks who disparage unions for looking after the material collective well-being of teachers want to impose some rationalistic carrot-and-stick system to motivate teachers through ... oh, yeah, material interest.

In the area of academic freedom, the key blind spot is the conflation of political perspective with academic perspectives. (Not incidentally, this is their common criticism of faculty.) While many of the voter-card-check studies of faculty are methodologically weak, they're also beside the point. Who cares whether a majority of faculty in a department are liberal, conservative, libertarian, socialist, whatever? What matters in an academic context are the academic perspectives. Physicist Alan Sokal is a pretty liberal guy, from what I remember, and yet had no problem poking fun at postmodernists in the Great Social Text Hoax of the 90s. A neo-Marxist perspective (and my brain is spinning right now trying to think of a true Marxist in sociology these days) is very different from a Weberian (e.g., David Labaree) who is very different from a Tocquevillean (e.g., Theda Skocpol), but you can find Democrats among them all. Yet they have no problem disagreeing on the fundamental ideas. Incidentally, the best exchange to come out of the Alan Jones imbroglio is between Timothy Burke and KC Johnson about the larger issues involved in academic freedom and the judgment of different arguments.

And, finally, if worst comes to worst, you can always turn to your friendly Think Tanky Type and say, "Oh, so that's now the conventional wisdom?" They hate being identified as spouting conventional wisdom. No, that's not quite true, but it's a guaranteed conversation starter, if you happen to bump into them. I'm jesting, but the best ground for fighting ideas is still in the realm of ideas, not poking away at funding sources.

Posted in Education policy at 7:30 AM (Permalink) |

June 23, 2006

The decline of a profession

Well, here comes another article talking about the attrition from the profession and the waste of so-called professional education. Yep. 40% attrition within 4 years, 60% within 6 years. One wonders if we shouldn't close the doors of the ignoble preparatory institutions and throw away the key. We might even open up the profession to those who want to switch careers in midlife. They'd bring in a new perspective, unhampered by the brainwashing that happens in the obviously dumbed-down classes where Democrats are far more likely to be the instructors than Republicans. Why, the professors in these so-called "schools" don't even have to do the type of hard empirical research folks in the real disciplines have to. These burned-out practitioners get away with "thought pieces" about policy and even linguistic analysis of their subjects. Didn't they ever learn about experimentation? Yeah. Get rid of the law schools, I say.

Oh... you think I meant ed-school matters?

(Hat-tip: University Diarist.)

Posted in Education policy at 6:00 PM (Permalink) |

It's graduation week!

One more belated entry in the grad-rate report sweepstakes: the National School Boards Association's Center for Public Education guide on graduation rates, released yesterday. (Hat tip: Andrew Rotherham.) Sometime overnight, my brain convinced me that this slew of reports was distributed over two weeks, not one. But it really has been just one week.

And on the way back from a music lesson today, we listened to Garrison Keillor's 1998 "Graduation" (one of his News from Lake Wobegon stories, in a collection purchased on Father's Day, last Sunday). How appropriate...

Posted in Research at 12:57 PM (Permalink) |

Migration and graduation

When two major releases in two weeks highlight graduation measures in the press, it's time to kick the writing of my paper on migration and graduation into high gear. The book MS will wait. I need to read one revised paper for the journal today, and take my son to a music lesson, but other than those issues, it's time to roll up my sleeves, ice that leg, and get cracking. Incidentally, for everyone who wants to know: unmeasured migration pollutes any graduation measure. There's no mystery in that. I hope to quantify that relationship, or, rather, I have quantified it for a few cases and will use those cases to discuss the general problem. I know the target journal, and I hope to finish it by early next week and send it off.

(Regarding the leg-icing, the orthopedist yesterday said the bones are fine, but the spot where my son's line drive hit my leg will be swollen for about 4 weeks, so I now have the wonderful clothing option of either a beige compression stocking on one leg or a beige compression stocking on one leg, just in time for the summer fashion season. But it eliminates the add-on swelling that was causing increasing discomfort Tuesday and Wednesday. I can now concentrate for more than half a minute at a time!)

Posted in Research at 8:31 AM (Permalink) |

June 22, 2006

Mishel on Swanson

In correspondence over the last few days, Larry Mishel sent me material. With his permission, I am posting his material below. In the meantime, of course, the USDOE put out its report today on the "Averaged Freshman Graduation Rate", which is also based on CCD figures (hat tip: Andrew Rotherham). Of those, the numbers for Nevada (dropping precipitously in one year) don't look right.

But, to Mishel's comments, below the fold. I'll say this about using grade-based data from the CCD: one of the problems we've discovered this week is how friable that data is (and that's something that Chris Chapman and his coauthors at NCES noted in the AFGR publication today). But the other is a problem Mishel notes with using 9th grade enrollment data, which conflates first-time high school students with those who are repeaters. When I first came back to the quantification of attainment a few years ago, I tried to model retention issues, using both state-provided data (for a few states) and then thinking about it as a stable-population problem. Neither approach was satisfactory. That's why Warren uses 8th grade enrollment. I'm sticking to age-based data where available.

In any case, you can judge the issues for yourself, on the jump.

From Mishel:

The Bulge and Retention

Let me see if I can convince you that retention and the ninth grade bulge are serious problems for Swanson. Recall that his formula iterates declines in enrollment back from diplomas to 9th grade. Every decline in enrollment from year to year counts as dropping out. It is well known (by Jay Greene, who acknowledges it, and among everybody else but Swanson) that enrollment in 9th grade is far above that of eighth grade (therefore a 'bulge') because there are a lot of students, especially minorities, retained in 9th grade; the bulge is about 12-13% overall and 25% for minorities. This means that 9th grade enrollment is far above the count of entering ninth graders, giving Swanson's formula the equivalent of a far too large denominator. It is easy to see how large the bias is by just extending his formula back to 8th grade: it shows an eight-percentage-point higher graduation rate and a twelve-percentage-point higher minority graduation rate (see our book, Table 10, page 64).

The Texas example shows how misleading his formula can be. I chose Texas both because it has very high retention rates and because Texas provides data on retention by grade and by race. [He attached a file related to retention data in Texas.] The first table (page) basically shows that the ninth-grade bulge (the extent to which 9th-grade enrollment exceeds 8th-grade enrollment) is fully explained by retention (in some states there may also be issues of transfers into public schools from private schools).

The following table shows the impact on Texas rates and on the comparison of Texas to the nation. The first column reproduces Swanson's published rates for 2001, which assume that all ninth-grade enrollment is 'first-time'. The second column uses published Texas data on retention to recompute graduation rates per 'first-time' ninth grader (we use a simple diploma-to-9th-grade ratio to keep things simple). These calculations show that Swanson understates graduation rates in Texas by wide margins (13 and 14 percentage points for blacks and Hispanics) and overstates the race/ethnic gaps by 6-8 percentage points (increasing them by more than half their value). These are pretty large errors.
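The mechanism is easy to see with made-up numbers (a sketch only; the real Texas retention figures are in the attached file):

```python
# Illustrative sketch of the retention bias described above. All counts here
# are hypothetical, not the actual Texas figures.

ninth_enrollment = 1130   # fall 9th-grade count, including repeaters
retained = 130            # students repeating 9th grade (the 'bulge')
diplomas = 700            # diplomas four years later

first_time = ninth_enrollment - retained   # entering (first-time) 9th graders

# Simple diploma-to-9th-grade ratios:
naive_rate = diplomas / ninth_enrollment   # denominator inflated by the bulge
corrected_rate = diplomas / first_time     # per first-time 9th grader

print(round(naive_rate, 3), round(corrected_rate, 3))
```

Under these made-up counts the uncorrected ratio understates graduation by about eight percentage points; with real retention data the gap is larger for minority students, as the Texas table shows.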

Because states and districts vary so much in the extent of retention, these errors in Swanson's formula are larger in some places than others; therefore, Swanson's measure generates faulty comparisons across jurisdictions. Consider a comparison of Texas to the nation in the last columns. By Swanson's measure Texas is below the national average, but with a corrected measure Texas is substantially above the national average and has smaller race/ethnic gaps (though these national numbers are biased by retention as well, just not as much as Texas's).

Table 2. Bias in Swanson Measure from Ignoring Retention in Texas

Columns: Population | Swanson Measure* | Corrected measure (uses first-time 9th graders, corrected for retention) | Bias | National average | Texas relative to national average (Swanson measure; corrected measure)

Rows (repeated for each population group): 9th grade enrollment; first-time 9th graders; graduation rate

* Column Source: Christopher B. Swanson, Who Graduates? Who Doesn’t? A Statistical Portrait of Public High School Graduation, Class of 2001 (Washington, DC: Education Policy Center, The Urban Institute), Table 4.

Swanson versus NYC Longitudinal Data

One way to check whether Swanson's measure of graduation correctly estimates graduation rates is to compare it to other measures for the same location that draw on the same underlying student records. New York City provides such a possibility. Several newspaper articles point to differing estimates of graduation from the city data, the state data, and Swanson.

Our purpose here is to create an apples-to-apples comparison between Swanson and the city school district data. Since Swanson's measure of graduation counts all diplomas, no matter when earned, we use a comparable measure from the school district data. To avoid issues of whether to count GEDs, we make comparisons with diplomas only and exclude GEDs.

The school district data are based on following individual students through a longitudinal data system. Swanson bases his estimates on enrollment counts in 9th and other grades and on counts of diplomas each year.

New York City Longitudinal Data

I start from the fact that NYC reports graduation rates, excluding GEDs, of about 60%, as calculated below. The rate with GEDs would be 7.0% higher. Swanson reports 39% and Greene 43% for the class of 2001. That's a huge difference. It is easy for me to identify ways that Swanson's and Greene's estimates inappropriately and artificially lower the measured graduation rate.

One can get the necessary information for constructing the following table from the report for 2001:

Population | N | % of total | % of grand total
Dropouts | 19,748 | 32.0% | 25.2%
Other discharges | 16,827 | | 21.4%
Grand total | 78,456 | | 100.0%
Source: pages 3 and 5

This table shows where we get our figure. We take the reported graduates from Figure 1 (page 3) and subtract the GEDs from Table 1 (page 5). This gives us a rate of 60.9%, the longitudinal rate that eliminates GEDs. This rate includes graduation in 3, 4, 5, 6, and 7 years (almost all within five years). However, so do Swanson's diploma counts!

Some people have questioned whether some of the students identified as ‘other discharges’ are really dropouts, but not counted as dropouts in the NYC data. This is the tricky part for school districts in compiling longitudinal graduation rates—they essentially have to determine how many students left their system and which ones should be considered dropouts or legitimate transfers to other districts, etc.

I’m not sure how one can identify how many students are falsely labeled discharges rather than dropouts. We can assess the extent of any possible bias by making the extreme assumption that all the ‘discharges’ are dropouts. If so, the graduation rate would be 47.9%, according to my calculations in the table. That is still above Greene’s rate of 43% and way above Swanson’s 39%. Yet the rate must be somewhere between the 47.9% rate and the official 60.9% rate, since surely some of the discharges are appropriately classified as such.
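As a rough check on this arithmetic (a sketch only: the graduate count is backed out of the reported 60.9% rate rather than taken directly from the report, so the result is approximate):

```python
# Rough reconstruction of the 'extreme assumption' calculation above.
# The graduate count is backed out of the reported 60.9% rate, so this
# is an approximation, not the report's own figures.

grand_total = 78_456        # from the table above
other_discharges = 16_827
dropouts = 19_748

total_excl_discharges = grand_total - other_discharges   # 61,629
graduates_no_ged = round(0.609 * total_excl_discharges)  # about 37,500

# Treat every 'other discharge' as a dropout: the denominator grows to
# the grand total, pushing the rate down.
extreme_rate = graduates_no_ged / grand_total
print(round(extreme_rate, 3))  # close to the 47.9% quoted above
```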

We can also adjust these data to include special education. Figure 5 says that 1,092 students in city-wide special education graduated at a 35.5% rate and 4,359 in self-contained classes graduated at a 38.3% rate. If we include the special education graduates among the graduates and add the total special education enrollment to the grand total, we get a graduation rate of 47.2%.

This suggests to me that the NYC grad rates are substantially higher than Greene's and Swanson's estimates. When one includes all of the special education students and assumes all other discharges are dropouts, one still finds a grad rate of 47.2%; the graduation rate actually lies somewhere between 47.2% and about 60%. Unfortunately, we do not have the data to make calculations by race and ethnicity. A grad rate between 47.2% and about 60% may be nothing to brag about, but it still shows that Greene's and Swanson's estimates are way off base.

I’m struck by your statement that Swanson’s biggest problem is no migration adjustment. [Sherman here: in correspondence, I explained that my initial impression was that this was the major problem with Detroit. The major problem with Detroit is awful data.] I became skeptical about these population adjustments once I realized that they incorporate new immigrants along with transfers in and out, not by design but because there’s no way to separate out immigrants (correct?). At the national level the population adjustment is only immigration. That’s why I don’t understand why you think a Warren estimate at the national level is at all valid, or at least can be compared to longitudinal data of students starting in 8th grade or some other starting point (NLSY).

Listen to this article
Posted in Research at 4:08 PM (Permalink) |

Theo Bell, RIP

Former Pittsburgh Steeler and Tampa Bay Buccaneer Theo Bell died yesterday after a long battle with kidney disease. I knew him from my brief involvement with a local GEAR UP project a few years ago. Some years after leaving the NFL in 1985, he earned a master's degree and devoted his time to following kids in school in different counselor-type jobs, including the one funded by the GEAR UP grant. He had all the money he ever needed, and he spent his time in schools instead of partying away his money like some other athletes have done. Maybe it was because the kids loved him, or the reverse. Maybe it was because, according to him, a mentor had saved him at the same age from the life some of his relatives went into.

But regardless of the origins, I know several hundred teens from the center of Tampa will be sorely missing a football star they never saw play.

Listen to this article
Posted in Random comments at 8:10 AM (Permalink) |

June 21, 2006

Ed Week grad rates: GIGO for Detroit

Paul Gazzerro of S&P's School Matters data compilation service thought I was wrong in asserting that the major problem with the Ed Week Detroit graduation "rate" for 2003 was not accounting for migration. True, I used all the enrollment data from all of PK-12, not just high school,* but Gazzerro then made the error of assuming that Swanson was using the 2001-02 and 2002-03 sets from the US DOE's Common Core of Data. He wasn't. He was using 2002-03 and 2003-04. But I'm glad Gazzerro pushed me to look at the CCD Detroit data, because it shows exactly how bollixed up the Swanson method can be. Added Thursday: the main issue here is the original data. It may not be a true test of the algorithm, because the data from schools can be so unreliable. See below on procedural issues.

The Swanson formula has the following in the numerator:

Diplomas (end of year 1) * 12th-grade enrollment * 11th * 10th (all but the diplomas taken from the fall of year 2).

In the denominator is the following:

12th-grade enrollment * 11th * 10th * 9th (all from the fall of year 1).

So for Detroit for the 2003 Swanson CPI, year 1 is 2002-03 and year 2 is 2003-04, and here are the details:

Numerator: 5,975 * 5,244 * 7,421 * 9,899 = 2,301,729,842,459,100
Denominator: 6,020 * 7,795 * 11,275 * 20,025 = 10,595,017,688,062,500

For Detroit the prior year, year 1 is 2001-02 and year 2 is 2002-03, and here are the details:

Numerator: 5,540 * 6,020 * 7,795 * 11,275 = 2,931,155,954,650,000
Denominator: 4,618 * 6,355 * 9,291 * 14,494 = 3,952,029,707,502,060

That ratio is 74.2%. How in the heck could Detroit go from 74.2% graduation to 21.7% graduation in a single year? In reality, Detroit reported a bulge in enrollment at all high-school grades in 2002-03, and the 22% rate is an artifact of that bulge. I don't know if the bulge came from some amazing (and unbelievable) transient surge in population or just lousy record-keeping by Detroit or the state of Michigan. Update: Okay. I'm fairly certain it's bad data.
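The arithmetic reduces to a chained product of grade-to-grade promotion ratios, which is easy to check in a few lines (a sketch of the formula as described in this post, not Swanson's actual code):

```python
# Sketch of Swanson's Cumulative Promotion Index (CPI) as chained
# grade-to-grade promotion ratios, using the Detroit CCD counts quoted above.

def swanson_cpi(diplomas_yr1, fall_yr1, fall_yr2):
    """fall_yr1 / fall_yr2: dicts of fall enrollment by grade.
    CPI = (diplomas / 12th yr1) * (12th yr2 / 11th yr1)
        * (11th yr2 / 10th yr1) * (10th yr2 / 9th yr1),
    which equals the numerator/denominator products in the post."""
    return (diplomas_yr1 / fall_yr1[12]
            * fall_yr2[12] / fall_yr1[11]
            * fall_yr2[11] / fall_yr1[10]
            * fall_yr2[10] / fall_yr1[9])

# 2003 CPI: year 1 = 2002-03, year 2 = 2003-04
cpi_2003 = swanson_cpi(5975,
                       {12: 6020, 11: 7795, 10: 11275, 9: 20025},
                       {12: 5244, 11: 7421, 10: 9899})

# 2002 CPI: year 1 = 2001-02, year 2 = 2002-03
cpi_2002 = swanson_cpi(5540,
                       {12: 4618, 11: 6355, 10: 9291, 9: 14494},
                       {12: 6020, 11: 7795, 10: 11275})

print(round(cpi_2003, 3), round(cpi_2002, 3))  # 0.217 0.742
```

Note how sensitive the index is: the inflated 2002-03 enrollment counts sit in the numerator of the 2002 CPI and the denominator of the 2003 CPI, inflating one and deflating the other.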

But I'll repeat: the Detroit data is useless, and I'm surprised no one in Swanson's shop even took the basic step of looking at the prior year's CPIs to see if, maybe, possibly, there might be some instability in the numbers. Added Thursday: Part of this problem comes from the nature of the Common Core of Data (CCD) as an unaudited database, or rather one in which each state is responsible for correcting its own figures. Bad record-keeping by a state = bad data. Recent years of data (including 2002-03, with the should-be-infamous Detroit enrollment) are explicitly noted as preliminary by the CCD, but those are the figures that everyone uses to update CCD-based measures. In sociology and demography, you always look at time series of the raw numbers and assume that you need to smooth the data to some extent, or you're likely to be tripped up by the vagaries of administrative record-keeping. (Age-heaping, for example, is the phenomenon of people rounding their ages to the nearest 5 in areas without birth registration systems or cultural celebrations of birthdays.) Big lesson here: be wary of CCD figures. My instinct was to pile on about the failure to accommodate migration, but I was wrong. Yes, I think there's still a problem with not adjusting for migration (and Larry Mishel and Joydeep Roy would point to the 9th-grade enrollment figures as a problem), but I may have caught the problem with Detroit only because I was sensitive to the implications there. In reality, it's a problem of bad data.

Update (2:30 pm Thursday): I just received an e-mail from Chris Swanson: "We're taking a closer look at Detroit and a couple other places. One of the things I would like to build into our online database is a set of flags or notes to call attention to situations like this." Good.

Update (2:50 pm Thursday): Thanks to an informant with information about Michigan, it turns out that 2002-03 was the first year of a new data-collection system, and the CCD data are a bit different than the corrected figures released in 2005 for that year:

Grade | Enrollment reported to CCD | Enrollment reported in 2005

On the other hand, that correction only raises the 2003 CPI to 28.2% and lowers the 2002 CPI to 62.2%. There's still stuff wrong with the data. (Given that this is Detroit, as one correspondent put it, the newly-installed school CEO in 2002-03 may have resulted in exaggerated pupil counts.)

* — It's quite true that migration rates vary by age, and one would not want to use elementary-aged data and extrapolate to adolescents without a huge caveat. On the other hand, there is no way to separate attrition and returns from transfers out and in at the high school ages without an audit of the records. In the case of Detroit, I suspect we just have bad, unaudited data, not a migration issue. But this provides a pretty good example of how sensitive these formulas can be to misstatements of migration.

Listen to this article
Posted in Research at 9:37 PM (Permalink) |

Full-time in the summer

No new writing commitments for me for about two or three years. I'm working full time right now, without being paid, given the book I'm trying to write, a few grants I'd like to submit this summer, some other research, the journal, etc. I realized this morning as I was resting (my leg is still in pain and, yes, I'm headed to the orthopedist tomorrow morning) that I'm feeling under pressure for more than the journal, and that's just silly. There aren't enough hours in the day to do everything I'd like, but there's a difference between keeping things on the back burner so you'll never be bored, on the one hand, and getting stressed about it, on the other.

So I'm going to trim my wife's hair (and cut my son's, if he wants), then pack up the laptop and head off to a cafe to work and enjoy the day.

Listen to this article
Posted in Random comments at 10:10 AM (Permalink) |

Work with the teachers you have

[Note: A slightly longer version of this is at the group blog The Wall of Education, where I'm a participant.]

Is there anyone else who just doesn't understand Kevin Carey's blog entry at the end of last month? It included a reasonable caveat that current research on teacher effectiveness had low R-square figures and high residual variance (i.e., evidence that the model in question accounted for little of the existing variation in student achievement). Then Carey jumped from that to a nullification of research on teachers:

[Sanders' study is part of] the ongoing search for the characteristics of the effective teacher. A definitive list of such characteristics is the holy grail of teacher policy. If we only had that list, so the thinking goes, we could do all kinds of important and useful things. We could reshape education schools to impart those characteristics. We could set up certification systems to filter out teachers who don't have those characteristics. We could design compensation systems that pay teachers with those characteristics more money.... My strong suspicion is that this whole way of thinking will ultimately turn out to be profoundly wrong.... [W]e could double, triple, or magnify tenfold our efforts to refine and expand things like the NBPTS and still never get close to identifying the effective teacher, for the simple reason that she doesn't exist.

That reasoning conflates screening instruments with teacher education and professional development, and it fails to address the fundamental weakness of much of this research: the search for a general qualification of teachers based on a global credential (not usually immutable characteristics). Then Carey went into some weird stuff about Dell's reversal of the usual production-before-sales process. (Never mind that car customers who were willing to wait could custom-order cars years before.) I think it has something to do with being satisfied with identifying effective teachers and not worrying about helping teachers (and prospective teachers) get better. Maybe I'm misreading that entry, but it sure sounded like that.

And, if so, Carey is wrong. Suppose we could identify with 100% accuracy who the good math teachers are. (Incidentally, neither Bill Sanders nor I will ever claim this, regardless of our differences otherwise.) Do we then fire those who are weaker and pray that their replacements are better, on average? As far as I'm aware, there has never been a time when a system had 100% perfect teachers, when it didn't need to work with the teachers it had because, well, they were the teachers there at the moment. It makes no sense from a decency, fairness, civil rights, morale, or human resources standpoint to sit there and let an inexperienced, less-skilled, or overwhelmed teacher flounder just because the research on national certification or masters degrees isn't conclusively in favor of those as screening/pay-increment policies.

To borrow from a certain Crosby, Stills, Nash, & Young classic, if you can't have the ones you want, help the ones you have.

Listen to this article
Posted in Education policy at 9:51 AM (Permalink) |

June 20, 2006

Graduation rates, redux

Today, Education Week published Chris Swanson's new round of estimates of graduation. Lawrence Mishel also posted a new set of comments in a discussion of rates that's now considerably longer than the original entry. Andrew Rotherham points to my own state, Florida (which according to Swanson ranks as fifth worst), and Ron Matus's quick story in the St. Pete Times notes the differences between Florida's official calculation and Swanson's.

For those who need a scorecard for the grad-rate "players" (i.e., newly-coined methods of calculating graduation)...

  • The Boston-area researchers (Walt Haney, Gary Orfield, Jing Miao, etc.) use a straight diploma:9th-grade or diploma:8th-grade quasi-longitudinal rate (going from graduation back in a pseudo-cohort line to 9th grade 4 falls before or 8th grade 5 falls before). The diploma:9th-grade measure conflates grade-retention issues with graduation.
  • Warren uses the diploma:8th-grade rate plus a migration/mortality correction (using smoothed Census Bureau state population estimates by age). In my opinion, this is the best-justified method using administrative records (such as the Common Core of Data). It's also useless at the local level, because the migration/mortality adjustment requires population data that aren't readily available below the state level. (Mortality is so low for teens that it's not a serious concern, but I mention it for completeness.)
  • The US DOE has an "averaged freshman graduation rate" that is akin to the Boston-area uncorrected quasi-longitudinal rate except that it averages the pseudo-cohort's 8th, 9th, and 10th grade enrollments. This is an attempt to address 9th-grade retention, but Mishel and Roy are correct that it's jerry-built rather than having a theoretically-justified basis.
  • Greene and Winters use the USDOE averaged-freshman rate plus a migration/mortality adjustment that is almost identical to Warren's (which Warren proposed in a 2003 paper).
  • Swanson's (and now Ed Week's) method chains together proportions of nth to (n-1)th graders in two successive years to get a quasi-period measure. It's entirely uncorrected for grade-retention and migration/mortality issues.
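For readers who want the mechanics, here is a toy comparison of the uncorrected measures in the scorecard, with made-up pseudo-cohort counts (the Warren and Greene-Winters methods would add a migration/mortality adjustment on top of these):

```python
# Toy pseudo-cohort counts (all hypothetical) to contrast the uncorrected
# measures in the scorecard above.

grade8 = 1000    # 8th-grade enrollment, 5 falls before graduation
grade9 = 1120    # 9th-grade enrollment, 4 falls before (retention bulge)
grade10 = 980
diplomas = 700

# Boston-area quasi-longitudinal rates:
rate_9th = diplomas / grade9   # conflates 9th-grade retention with dropout
rate_8th = diplomas / grade8   # Warren starts here, then adjusts for migration

# USDOE averaged freshman graduation rate (AFGR): diplomas over the mean
# of the pseudo-cohort's 8th, 9th, and 10th grade enrollments.
afgr = diplomas / ((grade8 + grade9 + grade10) / 3)

print(round(rate_9th, 3), round(rate_8th, 3), round(afgr, 3))
```

With these numbers the diploma:9th-grade rate is the lowest (the retention bulge inflates its denominator), the diploma:8th-grade rate the highest, and the AFGR in between, which is exactly the pattern the scorecard describes.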

Swanson's production of numbers down to the county and district level will get large play in the broadcast and print media in the next few days, even though it has some serious technical problems. In some places those technical problems are swamped by large differences in graduation (South Carolina probably does have close to the lowest graduation rate, if not the lowest, as Swanson claims), but the actual numbers are going to be inaccurate, especially in cases with significant net migration or 9th-grade retention. And it's at the local level where you are likely to see such cases. For example, consider Detroit, which Swanson says had 22% graduation for 2003. Detroit's PK-12 enrollment also shrank from 173,742 (in 2002-03) to 150,604 (2003-04), a net outmigration that would artificially deflate Swanson's measure. That doesn't mean that Detroit is a great school system. It means that Swanson's measure is largely useless for Detroit.

Migration is the great problem with trying to estimate graduation at the local level. Without an audited trail of transfers in and out, there is no conceivable way of calculating an accurate graduation rate for a school or district.

Listen to this article
Posted in Research at 6:12 PM (Permalink) |

June 19, 2006

The leisurely pace of editing

I spent over half of my worktime today doing editing stuff—sending out most of the disposition e-mails to authors I owed (still have two or three left for tomorrow) and seeing if the new article is up yet (it is, but I'm too tired to vet an e-mail announcement, so that's tomorrow's first task).

Reading manuscripts, whether incoming or after reviews, is one of the more demanding parts of editing, and it's the second-most enjoyable task I have as an editor. Polishing an article's look is detail work, I'm not perfect at it, and it's not about the ideas for the most part. Helping an author improve a piece is definitely the most rewarding part of editing. But after that, the initial read and then the re-read after reviews are returned are a fascinating exercise in listening to perspectives and reading with three lenses on: Would it contribute something? Do I see what the reviewers saw? Am I seeing everything I can here? That attempt to keep multiple perspectives in the air is harder than reading dense prose.

It also means I can't quickly skim the type of article I was going over today—papers where the reviewers were generally positive but mixed. Would it contribute something? became Would it contribute enough, given everything else? Do I see what the reviewers saw? became How do I sort these issues in order of importance? Am I seeing everything I can here? is of course the hardest one, and it explains why I spent more than an hour rereading and deciding how to write a revise-and-resubmit letter for the first MS I tackled this morning. That was relatively quick, too.

NBPTS gets snookered on evaluation

What was I going to say after waiting several weeks to discuss the hullaballoo over the Sanders report for the National Board for Professional Teaching Standards? It's been a quiet month in Lake Wobegon... no, that's not it. I've been busy. Yes, that's it. Let's get the links out of the way first, from Eduwonk to AFT, and Barnett Berry (and you can follow the links snowball-wise from there). Here's the gist: The NBPTS wanted to commission a study of the effects of certification on student achievement—or, more causally, evidence on whether nationally-certified teachers were more effective than non-certified teachers—and they chose someone with a certain amount of cachet nationally because he's been effective at promoting growth analysis (and his statistical model, specifically). Then the NBPTS looked foolish for appearing to quash the study, so they released it, after first releasing a summary and general criticism. They've got ostrich egg on their faces, collectively.

Before I read the study, I was prepared to say something like the following, given my prior criticism of Sanders: There's a difference between accountability and general research. Any accountability algorithm has to be public and transparent, to be fair and to be consistent with the goals of accountability (which include public and transparent information about student achievement). But while Sanders' model is proprietary (generally a bad thing, in my view), a version of it exists in SAS Institute's PROC MIXED, and statisticians have been able to play around with that enough to know how it behaves (and can reasonably extrapolate to Sanders's model, even if we'd all prefer he'd come clean). So let's treat this study as we treat all research: respect rigor and look for incremental contributions even if we might quibble about method.

I've read the study report now, and while I still see a difference between what we should expect for an accountability mechanism and what we might see as valuable in a single research project, I'm disappointed with the public version of the paper itself. As usual, Sanders included no demographic information as covariates, and the paper itself has very little information otherwise on methods. There is far more information in Dan Goldhaber's paper on national certification.

Here's the rub, from a reader's perspective: Sanders claims in large part (as he has before) that the primary difference between the papers is that he has random effects for teachers. Theoretically pure, I guess—I don't mean to ridicule the rationale for mixed models (there's good reason to use them, if you have the data and the estimates converge), but unless you rework the same data in different ways, you can't tell that it's the multilevel, mixed-effects model that makes the difference. Maybe it's in how you work with the scales (as some have suggested), or the covariates included, or the sample sets. I'd love to see each set of researchers hand their data over to the others. Then you'd get a better idea of how this all works (or doesn't).

In the end, NBPTS suckered themselves into hiring a researcher for the wrong reasons, without a solid contract requiring that the resulting report have enough information to be credible, and then they compounded the secrecy of Sanders's shop by not releasing the report immediately. Bad move, guys, all around.

Incidentally, don't ask me to comment much on the ABCTE study that has occasionally been called evidence of its effectiveness. Balderdash. The participants were already-certified teachers. If anything, it's just a validity study for the test itself.

Policy research is hard. It's very tempting to exaggerate the importance of any study and to forget the incremental nature of research. The Coleman study in 1966 didn't prove anything. Neither did Goldhaber last year or Sanders et al. this year. And if you read someone's work without seeing any discussion of limitations, caveat lector. But that's always been true.

Listen to this article
Posted in Education policy at 11:39 PM (Permalink) |

Detour into data quirks

I've received something from a public agency this afternoon that threw me into a quandary on one of my research projects, but I'm afraid I can't say anything about it until I give the agency a chance to respond to some questions and, potentially, some analysis. It's very interesting, and it's worth keeping track of. Sorry for the mystery, but I want to be fair to the agency.

Listen to this article
Posted in Research at 11:11 PM (Permalink) |

June 18, 2006

Miami school board bans a book on Cuba

It's now well-known that the Miami-Dade school board banned a series of elementary books because one, "Vamos a Cuba," presented an uncritical portrait of contemporary Cuba under Castro. The ban came after a complaint from one parent and two levels of review of just Vamos a Cuba (not the series) before the board discussion this week.

The facts are fairly clear:

  • The book presented an uncritical portrait of life in Cuba.

  • The book was in school libraries, not in the curriculum's required reading.

  • The school board members had evidently not seen the whole series.

  • The Florida ACLU is preparing a legal challenge, one they have a good shot at winning.

Beyond that, there's a whole load of perceptions. There's a clear difference between selecting material for the curriculum and material for a school library, and on principle I think that Vamos a Cuba should not be yanked from any library. It will join plenty of other mediocre books on the shelves, and part of our job as educators is to help children be critical of what they read.

To put this in perspective, Miami is certainly not alone in the U.S. as a place where lots of people feel comfortable censoring books from school libraries. The fact that this is shaped by Miami Cuban-American politics doesn't really change that fact. There are Cuban Americans on both sides of the censorship issue (though probably not many thinking the book is well-written), and I hope comments will acknowledge that. See Miami Herald reporter Matt Pinzur's blog for a set of links and reactions.

(Cross-posted at DailyKos.)

Listen to this article
Posted in Education policy at 7:03 AM (Permalink) |

June 17, 2006

In NSA and McGraw-Hill, we face problems of both technology and democracy

An Associated Press story today on using cryptography to allow more data-mining while (at least in theory) protecting privacy has an important point buried deep in the story: many of us just don't trust the cryptography. Without some external, independent, transparent review of the technology, a good many of us don't want the NSA or other federal agencies having access to our phone or credit-card records without a court-approved search warrant. This is a problem where issues of technology and democracy collide.

So, too, with testing. Whenever stories appear, such as the Florida Commissioner of Education's see-no-evil approach to the qualifications of test graders, we see a collision between issues of technology and democracy. Democracy demands transparency, independence, and accountability (precisely those qualities that lead proponents to defend high-stakes standardized tests). But the tests are produced and graded in secret, and every scoring error and other foul-up that is revealed from behind the veil creates the clear impression that these folks just can't be trusted with something as important as accountability.

The problem is that testing and accountability are a case where both the technological and the democratic issues are important. This is something that is hard for both defenders of high-stakes testing and some opponents to grasp. You can't go full-bore with high-stakes accountability without understanding the serious limits of any assessment. But it's close to Luddism to reject any attempt at assessment. It's tempting, certainly, given the sad history of test misuse. But this is a dilemma we have to tackle, both democratically and technocratically.

Listen to this article
Posted in Accountability Frankenstein at 6:49 PM (Permalink) |

Longitudinal student database glitches

It's rare that you can combine SAS (stats package) geekery and education policy analysis, so I have to take advantage of this opportunity. This morning, I had a discussion with a staff member of the Florida Department of Education, specifically in its data-warehouse unit. Very kindly, the staff in the data warehouse allow state researchers to use the data (once identifying information such as the real school ID, name, etc., is removed). Over the past 17 months, I've been playing off and on with several data sets they sent me from the 1999-2000 and 2000-01 school years, as I've been tinkering with my ideas for measuring graduation and other attainment indicators. Someone pointed out at some point that the enrollment numbers I was working with for 1999-2000 were a chunk smaller than the next year's (by over 10%). That's embarrassing! I finally did some follow-up (checking through the monthly figures) and discussed this with my acquaintance in the FDOE.

I learned today that the data set I was using (what's called the attendance file, which has enrollment and disenrollment dates) is not what the FDOE uses. For their annual enrollment count, they use a database of students uploaded by each district from those students enrolled in the relevant week (e.g., Oct. 11-15, 1999). But this enrollment file doesn't have dates of entry and doesn't always have an exit date. And the attendance file (that I was using) isn't as reliable as the enrollment file (according to my informant). Practically speaking, after I merge the sets of data, I'm left with a record of students for whom I sometimes have enrollment/disenrollment dates and codes and sometimes don't.

My strategy is fairly simple at this point: after merging the data, I impute one of the end-points for the enrollment interval and then impute the enrollment length. Because of the structure of the data (monotone, for my readers who know multiple imputation), I'm first imputing the withdrawal date and then the length of enrollment. I'm too tired to follow up with the analysis tonight, so that will wait until tomorrow.

But the gaps in enrollment coverage are significant for anyone who thinks building a longitudinal database of individual-student records is remotely easy. Florida has been at this longer than anyone (I think a few years longer than Texas), and we still have problems. Essentially, the data is split among many tables, and key information is entered by poorly-paid data-processing clerks at each school without significant edit checks in the software. Sometimes, that leads to records that are just silly: I found a few individuals whose records show they were born before 1900 or after 2000, including one child born in 2027 whose enterprising parents or grandparents enrolled her or him about a quarter-century before her or his birth. Now, that problem could be solved with a simple software check on dates, including an explicit question along the lines of "The data you entered indicated that this student is X years old. Is that correct?" Other problems are harder: as my acquaintance told me, records that should be uploaded (attendance records for students who are enrolled in a school) sometimes aren't.
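A date-sanity check of that kind takes only a few lines. Here's a minimal sketch; the age cutoffs and the prompt wording are my own illustrative choices, not anything in the FDOE's software:

```python
from datetime import date

def check_enrollment_age(birth, enrollment, min_age=3, max_age=22):
    """Flag an implausible age at enrollment; cutoffs are illustrative."""
    # Age at enrollment, subtracting one if the birthday hasn't occurred yet.
    age = enrollment.year - birth.year - (
        (enrollment.month, enrollment.day) < (birth.month, birth.day))
    if min_age <= age <= max_age:
        return None
    return ("The data you entered indicated that this student is "
            f"{age} years old. Is that correct?")

# A child "born in 2027" enrolled in October 1999 trips the check:
print(check_enrollment_age(date(2027, 5, 1), date(1999, 10, 11)))
```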

And that doesn't touch the questions of auditing the withdrawal codes (how do we know someone showed up at another school when they said they were transferring, not dropping out?) or anything that touches on the longitudinal record of achievement. Please remember that Florida is one of the best-case scenarios for data integrity, as there's considerable investment in this data in terms of infrastructure, training, and an incremental approach to adding elements. Even with that, it's clunky and prone to errors—errors that might appear small but affect everything we have come to assume about schools (i.e., the official statistics).

Update: I forgot the SAS geeking. Last night I discovered PROC MI and PROC MIANALYZE, two procedures that make the type of multiple imputation Rubin's (1987) book describes much easier. I realized this morning that I had made an error in the merging by including records for which one of the other variables clearly indicated the student had not attended, and so there was a spurious set of rare cases with withdrawal but not entry dates. Removing those cases means that the missing date data occurs in the same cases for both variables. Technically, I can impute either variable first and then impute the length of enrollment. (Quick logic puzzle for the reader: why wouldn't I just want to impute the two dates independently?)

The other information I have: school (and county), school year, race, gender, ethnicity, birth year and month, lunch-program participation, and grade (first grade, second, etc.). Obviously, the imputation has to be done separately by year (otherwise I might have starting and ending dates in the wrong academic year), and I could have separate imputations by county. I'm using predictive mean matching for the endpoint date (to avoid dates that are beyond the ends of the school year—I'm so glad my campus's version of SAS has that option), and I'm not sure whether to use predictive mean matching or straight regression for the interval. The obvious thing is to try it different ways and see if it makes a difference.

Further update: Oh, rats. Imputing dates doesn't work, because either a regression or a predictive mean matching system gives me dates that are about 250 days apart (give or take a few days), no matter what, because the vast majority of students are in the same school for the whole academic year. That gives me less variation than calculating the variable of interest (was the person in school on day X in that year) and imputing that variable directly, so I'm going with that. But the nasty bit is that there is a lower proportion of 1999-2000 records needing such imputation than 2000-01 records. This doesn't take care of the undercoverage in the 1999-2000 record set. Yes, it's a problem.
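The real runs use SAS's PROC MI, but the flavor of imputing the indicator directly can be sketched in a few lines of Python. This is a simplified hot-deck stand-in (sampling a donor record with matching covariates), not the model-based multiple imputation the entry describes, and the field names are invented:

```python
import random

def hot_deck_impute(records, key=("grade", "county"), seed=2006):
    """Fill a missing 0/1 'enrolled_on_day_x' flag by sampling a donor
    record with the same covariates. A crude stand-in for model-based
    multiple imputation, for illustration only."""
    rng = random.Random(seed)
    donors = {}
    for r in records:
        if r["enrolled_on_day_x"] is not None:
            donors.setdefault(tuple(r[c] for c in key), []).append(
                r["enrolled_on_day_x"])
    for r in records:
        if r["enrolled_on_day_x"] is None:
            pool = donors.get(tuple(r[c] for c in key))
            if pool:  # leave the flag missing if no donor matches
                r["enrolled_on_day_x"] = rng.choice(pool)
    return records

students = [
    {"grade": 9, "county": "13", "enrolled_on_day_x": 1},
    {"grade": 9, "county": "13", "enrolled_on_day_x": 1},
    {"grade": 9, "county": "13", "enrolled_on_day_x": None},
]
hot_deck_impute(students)
print(students[2]["enrolled_on_day_x"])  # → 1 (both donors are 1)
```

In a real multiple-imputation run you would draw several completed data sets and combine the analyses across them, which is what PROC MIANALYZE handles.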

Posted in Education policy at 12:24 AM (Permalink) |

June 16, 2006

Standardized testing and accountability have a complicated history

As I've caught up on other work tasks, caught a line drive in my leg, and caught up with some reading, I've only pecked away at the remaining tasks on chapter 1 (on the political roots of accountability). Well, not really—I've received quite a bit of advice on the history of standardized testing and accountability and skimmed a lot of sources. As usual when I hit this type of passage, one subtopic (the history of standardized testing) deserves a full-length history in itself, and while there are plenty of books on parts of it (Gould on IQ, Lemann on the SATs [with plenty of qualms by historians], etc.), there isn't a well-researched academic history of standardized testing as far as I'm aware. But that's not the book I'm writing. I only have a few pages for it.

The themes that are popping up are going to be familiar to historians:

  • the way that experiences with testing establish a "grammar of schooling" for standardized tests (see Tyack and Cuban's book for a discussion of that term)
  • long-term business relationships between testing firms and states before the 1960s established an easy route to accountability
  • social networks from early-20th-century researchers to late-20th-century folks in the guts of state departments of education (e.g., the three-degree difference from Charles Judd to Tom Fisher, the head of state testing in Florida who retired just a few years ago)
  • resistance to the use of testing to judge schools in the 1960s (with the National Assessment of Educational Progress) and later
  • two key researchers outside colleges of education who carried ideas into the policy realm (James Coleman and Pat Moynihan)

To what extent were civil-rights motives important to the start of the modern accountability movement? I'm seeing considerably mixed evidence, and I have to make a judgment call on that, obviously. It's one of those big-picture questions that, as an historian, I prefer to sleep on. I suspect it requires a nuanced reading of the evidence that just isn't in my head yet.

Posted in Accountability Frankenstein at 11:49 AM (Permalink) |

June 14, 2006

Well, that was fine as far as it went

I don't think I'm going to be doing any more substantive writing or focusing for the rest of the evening, thanks to the growing batting skills of my 11-year-old son. The batting cage didn't have a screen, and I (stupidly) threw him BP. He walloped the ball several times, and one of the line drives hit my left leg about two inches below the knee. For those two inches, I'm very grateful right now. (Okay. I'm grateful it didn't hit my head.) I'm also smarting from my foolishness, now that the melon-ish lump has shrunk under a cold pack and 400 mg of ibuprofen has turned the inevitable pain from a stab in the consciousness into a very effective reminder of what the screens in batting cages are for.

Posted in Random comments at 9:18 PM (Permalink) |

Inconsistency (again) between Florida school grades and AYP judgments

The 2006 letter grades assigned to Florida's public schools—and AYP classifications—came out today, and as expected the governor talked about the increasing number of schools labeled A and B, while the press has thus far not picked up on the increasing inconsistency between Florida's grades and the AYP classifications (PDF). In the pie-chart comparisons, note that "provisional" is simply the category of inconsistency between a state grade of A or B and an AYP failure classification. Robert Linn used Florida's inconsistency as one of his paradigmatic examples of the silliness of the current scheme. ("Silliness" is a technical term here...)

We'll see how long it takes reporters to realize this... Update: I should've checked Google News earlier. Ron Matus's story in the St. Pete Times had that as the lead.

Posted in Education policy at 5:41 PM (Permalink) |

More on Churchill

The full report of the standing committee on research misconduct at the University of Colorado Boulder should answer the questions raised about the political context of the Churchill investigation, both from those critical of the investigation for proceeding at all and from those who wondered whether UCB would address the question of why Churchill was hired in the first place. Once again, the faculty involved have done themselves quite a bit of credit.

Incidentally, it is not true (if anyone was wondering) that faculty eschew concerns about academic integrity in searches. I am aware of a search some years ago that removed a candidate from the interview pool because of bona fide concerns about plagiarism.

Posted in Academic freedom at 7:39 AM (Permalink) |

June 13, 2006

A true defender of academic freedom

No, I'm not talking about Ward Churchill, whom a University of Colorado committee today recommended be dismissed for research misconduct. I mean Michael Bérubé, whose speech at the AAUP annual meeting today is another thoughtful essay on the needs of universities. Another must-read. (Incidentally, someone who cannot tell the difference between Bérubé and Churchill is incompetent to comment on universities; someone who cannot tell the difference between Bérubé and Horowitz is drunk on Purim.)

And now, if you'll excuse me, I'll be showering after playing some baseball with Vincent.

Posted in Academic freedom at 8:44 PM (Permalink) |

June 12, 2006

No writing yesterday, and we'll see about today

I've spent the past 36 hours doing non-writing work when I've had the chance (which is complicated by dropping my daughter at camp, then worrying about said daughter because of Tropical Storm [soon-to-be Hurricane] Alberto), plus other things. So with regard to chapter 1, I've been going through the 10 books I brought home to read as well as an article. Then there's other work stuff, of course. I suspect I'll get at least an hour or two to write in the evening.

Posted in Accountability Frankenstein at 12:58 PM (Permalink) |

A new metaphor for school reform

My thanks to the artists of today's Shoe comic strip, who provide a wonderful metaphor for school reform (though today's strip is not about schooling). You try to avoid quick fixes, you're in it for the long haul, you try to make adjustments along the way, even though you know it's expensive, the whole thing is traumatic, and often when you're done, someone else tells you you're out of date and need to start all over again.

It's just like remodeling your kitchen.

Posted in Education policy at 12:55 PM (Permalink) |

June 11, 2006

Constitution Day and academic freedom

On my union's activist e-mail list this weekend is a thread on the Constitution Day mandate starting last year. Some are miffed by it, but there's more than one interpretation of that requirement. From section 111(b) of the Consolidated Appropriations Act of 2005 comes the following threat to our nation's students, at least if you're David Horowitz:

Each educational institution that receives Federal funds for a fiscal year shall hold an educational program on the United States Constitution on September 17 of such year for the students served by the educational institution.

(See the Notice of Implementation for additional information about implementation.)

What this does, of course, is give cover to all teachers to violate their academic responsibilities by inserting all sorts of irrelevant material into their classes. Biology and philosophy teachers now have "a bulletproof excuse" (to quote my colleague Roy Weatherford) to discuss the constitutionality of the current war, wiretapping, etc. How careless of Congress to give all of my irresponsible colleagues an ironclad alibi for their politicization of classes.

Unfortunately, those of us who teach history, government, political science, etc., can't use Constitution Day for these purposes, but, heck, it's our subject anyway.

Posted in Academic freedom at 7:18 AM (Permalink) |

June 10, 2006

Ban on academic travel to Cuba hurts U.S. scientific needs

Wonderful entry yesterday from Archives of the Scientific Activist on why the new law forbidding Florida public-university faculty from traveling to Cuba on any outside grant, let alone state funds, is foolish: it would prevent my marine-science colleagues from keeping track of pollution in future years when Cuba is expected to start drilling for oil.

Posted in Academic freedom at 8:08 PM (Permalink) |

Chapter 1 with identifiable holes

I spent a few hours writing and then more time organizing what remains to be done with chapter 1 (on the political origins of accountability) and then diving into the library for resources. So I'll take home 10 books to read and an additional list of things to finish. But the first chunk of the book may be done by the end of next week.

Posted in Accountability Frankenstein at 1:56 PM (Permalink) |

June 8, 2006

How graduation estimates vary by migration

It's a bit late tonight for me to continue with some of the obligations I haven't gotten to yet this week (several manuscripts to read/evaluate for EPAA, an EPAA article to begin preparing for publication, a student paper from an incomplete a while ago, a book review, a commentary on a lecture, and probably a few other things), so I'll write a bit about the stuff that's almost ready to put in article form, about graduation. You can see some details of my approach, more formally, but the crucial bit in this piece is that the demographic approach I'm using allows for seeing how estimates of graduation change depending on what migration conditions hold.

My example here is from Virginia statistical reports from the late 90s through 2003-04, because the state gives generally plausible estimates of enrollment by age and grade as well as total numbers for graduates in different categories (which I have collapsed into standard academic diplomas, special-education diplomas, and miscellaneous other certificates—including GEDs and certificates of completion). Given the data provided, one can see how graduation estimates (for this entry, the likelihood of earning a standard diploma from age 14 up) change as assumptions about net migration change. If I had much more time and programming skills on my hands, I could see how the estimates change as one changed the migration estimates age-by-age (the model is that detailed), but I've taken the simple road and seen what happened if one assumes constant migration and changes that hypothetical migration rate. If there's a high net in-migration rate, for example, then the straight-up (zero-migration) estimates of graduation will be biased upward, because the algorithm wouldn't make a distinction between cohort-size changes and migration. But the question is: how much does migration affect estimates of graduation?


The answer is: quite a bit! The figure above shows biennial estimates of standard-diploma graduation rates for Virginia between 1996 and 2004 (each set of estimates is a different curve), showing how the estimates (y-axis) change depending on the hypothetical migration rate (x-axis). As explained above, net in-migration drops the estimate to account for the (hypothetical) bias, and net out-migration raises the estimate.

If one looks at the 1997-98 to 1998-99 estimate (and one needs two years of data for this calculation), a change in migration of just 0.01 is fairly dramatic. A zero-migration assumption leads to a rate of 74.0%; a migration rate of 0.01, 70.6%; a migration rate of 0.02, 67.5%. Now, there's a world of difference between zero migration and a 0.02 rate (which is substantial if not world-changing). But the fact that the graduation-rate estimate changes by about 3 percentage points for every 1-percentage-point change in the net migration rate is rather amazing. I'd be hesitant to claim significant changes in Virginia's standard-diploma graduation rate without pretty good evidence about real net migration (and not just unaudited transfer statistics from administrative records). And this raises even more questions about whether any graduation measure could be sufficiently robust to rate individual schools based on improvement in graduation. Or, rather, you could, but the costs of auditing transfer stats might be on the same order of magnitude as the consequences of the statistics.
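The scale of that sensitivity is easy to see with a toy model. This is my own back-of-the-envelope version, not the demographic algorithm behind the Virginia estimates: if a constant net in-migration rate m inflates each annual cohort ratio by a factor of (1 + m), then over roughly five grade transitions the zero-migration estimate is biased up by about (1 + m)^5, and deflating for it moves the estimate by roughly three points per percentage point of migration. The choice of five transitions is my assumption.

```python
def migration_adjusted(zero_migration_estimate, m, transitions=5):
    """Deflate a zero-migration graduation estimate for an assumed constant
    net in-migration rate m compounding over the grade transitions.
    Toy model for illustration; transitions=5 is an assumption."""
    return zero_migration_estimate / (1.0 + m) ** transitions

base = 0.740  # the zero-migration estimate quoted above
for m in (0.0, 0.01, 0.02):
    print(f"m = {m:.2f}: {migration_adjusted(base, m):.1%}")
# → m = 0.00: 74.0%
#   m = 0.01: 70.4%
#   m = 0.02: 67.0%
```

These toy numbers land close to the 70.6% and 67.5% quoted above, which suggests simple compounding captures most of the sensitivity.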

Update (6/17/06): Via Eduwonk comes the link to a USDOE Inspector General report on South Dakota's graduation-rate method (such as it is).

Posted in Research at 10:14 PM (Permalink) |

Dis-positioning NCATE and teacher-ed (again)

There's been a spate of online stories and blog entries about NCATE president Art Wise's announcement that the definition of social justice will be removed from NCATE's accreditation documents. The announcement came at a hearing where critics were ready to suggest removing NCATE as a federally approved accreditor because, they argued, the combination of requiring certain evidence of dispositions for teacher candidates with some institutional missions of "social justice" represented potential "viewpoint discrimination." Commentary: Samantha Harris on the public-private status of NCATE as an accreditor, Jim Horn's response to Wise's statement, Margaret Soltan, Robert Shibley on social justice, and KC Johnson on his battle over dispositions at Brooklyn College (among other things).

I have a few comments here and there in those entries. One of my colleagues down the hall in the school psychology program was not convinced by my argument that trying to gauge the virtue of teacher candidates was unwise. In addition to my concerns about litmus tests and feeding into the historical denigration of teaching, this commitment to judging teacher candidates may come from the domination of education by psychology. (One of my fellow historians on the same hallway would call it colonization.) I just don't think, absent behavior we witness, we can predict with that much accuracy who's likely to be prejudiced in a classroom situation.

Then again, we do have students (and when your institution graduates hundreds a year, you have a good handful every once in a while) who behave egregiously in various ways, and we kick them out. No, you're not allowed to date students or sexually harass them. No, you're not allowed to plagiarize if you want to be an English teacher. Yes, you have to tell us if you've ever been arrested, and that would cause some serious difficulties for your being hired by a school. That's an institutional burden by itself; I'd hate to see what would happen if we tried to guess who would be poor teachers by other means in addition.

Posted in Academic freedom at 9:37 PM (Permalink) |

More on incrementalism or tinkering

Since I discussed incrementalism a few days ago, let me continue in that vein here and both praise and criticize the most prominent historians who advocate incremental school reform. Eleven years ago, David Tyack and Larry Cuban's Tinkering toward Utopia appeared as a short but pithy analysis of top-down school reform efforts. It was not strictly aimed at accountability as such (the book came out 7 years before NCLB!), but it is still one of the most popular books on education reform, both in course assignments and in popular outlets.

I'll gladly admit to being one of those who assigns it in a course. There is nothing else in print that makes such a tightly-constructed argument about the dangers of utopian education reform. They pick wonderful examples from the past and make solid historical arguments that many readers see as subtle (such as the point that schools change reforms as much as the other way around). The book easily deserved the praise it has received and more.

And yet...

There is something quintessentially late-20th-century in the book, not in the explicit arguments but in the basic assumption that incremental reform is more likely to stick and be successful than dramatic reform. Far from criticizing tinkering, they see real value in it and deliberately choose something as mundane as minor changes to classroom design to illustrate what they see as the ideal scale for school reform.

The fly in the ointment here—only made possible because the book is so fine—is desegregation. Desegregation is a perfect counter-example of a reform that was far from incremental in its plans or eventual execution (even if you or I might have wanted it to be different in several ways). There was really no way to "tinker" towards desegregation. Well, okay, there was, but that was the reason why there was no substantive desegregation for the first ten years after Brown. In her 1984 book The New American Dilemma, Jennifer Hochschild argued that desegregation was most effective (and had the least disruptive implementation) when it was sudden, without compromise, and affected the lives of the youngest children immediately. She pointed out that these traits conflicted with classic pluralist doctrine in political science, which emphasizes compromise and incrementalism as a fundamental feature of the American political system.

In Tinkering toward Utopia, at least, Tyack and Cuban write as pluralists, emphasizing incrementalism and compromise. Yet I think they would acknowledge that desegregation was a good thing, a necessary event. But I don't know how they would reconcile the two.

Posted in Accountability Frankenstein at 12:19 AM (Permalink) |

June 7, 2006

Calendars and conferences

I have to make a choice soon about where I'll be spending my spring 2007 conference time. It's either the Population Association of America (PAA) March 29-31 (Thu-Sat) or the American Educational Research Association (AERA) April 9-13 (Mon-Fri). Actually, I don't think I have a choice—as a journal editor, I should go to AERA when I can. But I think I could put together a very nice substantive panel on graduation rates for PAA, when some of those folks would probably not come to AERA! Ah, well. I'll put together a different panel for AERA, at least.

No, I can't go to both. It's less a cash issue (though going to New York for PAA would be very expensive) than a question of how many times I can leave my spouse with the burden of the afternoon runs for kid pick-up, etc. Unless I can bring one of my children with me to New York while I'm at PAA. Hmmn...

My only regret about AERA—okay, not the only regret—is that it's again on weekdays only. Sheesh.

Posted in Random comments at 7:42 AM (Permalink) |

The choice of standardized testing

Last night, I was thinking about the moment in the early- to mid-1970s when state legislatures began experimenting with some form of accountability (as they then termed it, perhaps borrowing from Leon Lessinger, an associate commissioner of education in the Nixon administration). For example, Florida passed something called an accountability act in 1971, Governor Reubin Askew talked about it in speeches in his first term, and the legislature tried different things over the next several years, including requiring more detailed reporting of spending (from the fiscal metaphor of accountability) and choosing standardized testing as the main measure of academic achievement.

I don't think anyone has adequately explained that choice. In the 1970s, standardized testing was coming under fairly harsh criticism for both its construction (claims that the tests were generally biased in content) and its use (especially the group administration of IQ tests frequently used as screening devices for special education). It was in the 1970s that Congress changed the requirements for special-education assessment. Given the general criticism, I sometimes wonder if one of the motivations for ETS's famous 1975-76 "blue-ribbon" panel analyzing the SAT decline was a subtle way of relegitimizing the SATs. (No, I don't have time to look into ETS's archives for that.) So why did legislatures such as Florida's choose standardized testing? Last night, my wife gave the usual answers (it's cheaper and easier to number-crunch with them), but that doesn't quite satisfy me as an historian, in part because those are ahistorical claims and in part because I'm not sure where I'd find evidence to confirm those hypotheses.

Any other possible answers?

Posted in Accountability Frankenstein at 6:51 AM (Permalink) |

June 5, 2006

The politics of incrementalism

In the last few hours, I've been thinking quite a bit about the political viability of incrementalism in school reform. Tyack and Cuban argued for it in Tinkering toward Utopia (1995), and while that's still very popular, their argument didn't win the day in school reform. The AYP requirements of No Child Left Behind look incremental, until you think about the 100%-proficiency deadline of 2014. Proponents of NCLB certainly haven't touted it as incremental, either.

Could incrementalism survive the political cauldron of education politics and the organizational cauldron of school systems?

Let's consider both the incrementalism of summative evaluation, such as what Robert Linn has proposed, and also the incrementalism of formative evaluation, such as progress monitoring (formerly known as curriculum-based measurement). Linn proposed that targets for student achievement have a foundation in what real schools were achieving rather than figures pulled out of a hat (to use the gentler analogy). Those who work on progress monitoring, such as Stan Deno (who wrote the germinal article in the 1980s), argue that long-term outcomes for students with disabilities are dramatically improved if teachers make inductive decisions based on trends from frequent assessment. This is all based in solid research and the observations of many gray eminences.
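To make the progress-monitoring idea concrete: the classic decision rule compares the trend of frequent probe scores against an aimline and changes instruction when growth falls short. A minimal sketch, where the weekly scores, the goal slope, and the decision wording are all invented for illustration:

```python
def trend_slope(scores):
    """Least-squares slope of equally spaced (e.g., weekly) probe scores."""
    n = len(scores)
    mean_x = (n - 1) / 2
    mean_y = sum(scores) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(scores))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

# Invented weekly scores (say, words read correctly per minute) and an
# invented aimline of 1.5 points of growth per week.
weekly = [22, 24, 23, 27, 28, 30]
goal_growth = 1.5
decision = ("change instruction" if trend_slope(weekly) < goal_growth
            else "stay the course")
print(f"slope = {trend_slope(weekly):.2f}: {decision}")
```

The inductive part is that the teacher lets the trend, not a one-shot score, drive the instructional decision.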

And yet I worry about the political or organizational viability, for two reasons.

  1. Incrementalism is not part of most adults' mental images of a real school (to borrow from Mary Haywood Metz) or the grammar of schooling (from Tyack and Cuban), both of which are instead deeply impressed with abstract notions of absolute standards (schools give grades, and 90% is an A, even if we don't know what the scale referent is).
  2. School systems are now deeply enmeshed in behavior that is summative and non-incremental in nature and that takes gaming the system (or test-prep) as a legitimate response to virtually any policy.
The mental images of school

I've been struggling to find a counter-image or contrary meme to "schooling is grading." I am less concerned in this context with the utility of grading than with the way it suffocates alternative concepts of evaluation. I think I have a tentative idea for Linn's summative incrementalism: what we're really trying to do in education reform is boost the average knowledge of this generation of children above that of their parents, and this will require a generation to accomplish.

But let's try on a few for size specifically for progress monitoring:

  • The stock market. Daily stock prices allow one to track the market value of a company and respond accordingly—except day trading is a foolish activity, and I'd like to keep the question of incrementalism separate from issues of competition in schooling.
  • Baseball (or other sports) training stats. This morning, my son started a summer baseball camp, and the coaches there used a radar gun to test his bat speed and throwing speed. By the end of the week, I'm sure he'll have improved, and that's a parallel to progress monitoring... except that the stats in baseball camp are used primarily to confirm the value of the camp and boost confidence. I'm not sure if coaches would use the trends in these stats to change strategy.
  • Weather and other environmental data. We certainly keep track of all sorts of weather data daily (and even hourly) and respond accordingly (changing one's clothing or deciding whether to take an umbrella depending on the forecast). But we don't generally assume we can shape the weather in a short-term way (and the debates about human influence on global warming have no real parallel to education reform).
  • American can-do problem-solving. There is plenty of literature on American positive attitudes, especially problem-solving ones, such as David Potter's People of Plenty (though he thought this had a serious downside). Maybe one can look at progress monitoring, and ideally a teacher's use of the data, as the educational equivalent of problem-solving... except that the techie connotation of that is one-time problem-solving: someone sees a technical problem and quickly figures out a clever workaround. That doesn't really have a parallel with progress monitoring.
  • A right to know/forewarning. As a parent of an 8th grader, I've had a few moments in the last 2-3 years when I really wanted more information at my fingertips, as my daughter heads into the age where she no longer babbles everything that happened that day. Most of her academic teachers in middle school posted homework assignments online—something for which I was very grateful. And in the last year, most also posted grades online, and I had a similar reaction. Transparency is certainly an American value... but this doesn't quite get at the inductive response to data that's the key to progress monitoring.

As you can tell, I'm still looking for hooks for progress monitoring. More ideas are welcome!

Summative, non-incremental, and gaming behaviors

I deeply fear that test-prep and other behaviors are genies that are out of the bottle. I have heard second- or third-hand that teachers have told the Florida Center for Reading Research staff of their efforts to prep students for DIBELS by teaching lessons geared specifically for the format, that they want to use DIBELS data to retain some kindergarteners, and that some principals want to use DIBELS data to reward or punish teachers professionally. I understand the staff was (properly) horrified by these ideas. And yet they come directly from current behaviors in schools. Formative, incremental, and inductive behavior is neither rewarded nor experienced very frequently in schools. There needs to be a way to explain the difference. Something Elizabeth said this evening suggested the following: "We're testing the instruction, not the kids." But I just don't know if it would stick.

Ideas welcome here, too!

Posted in Accountability Frankenstein at 11:51 PM (Permalink) |

June 4, 2006

Arrests in murder case of Ronald Stem

Three teenagers have now been arrested in the murder of my former student Ronald Stem. Two were arrested in April and one more was arrested this past week. One known suspect is still at large.

Posted in Teaching at 5:23 PM (Permalink) |

What I did on my summer vacation

Well, this last week I did something unusual on a vacation. I went to prison...

Sherman Dorn behind bars

More in the extended entry...

Yes, I went to prison—Eastern State Penitentiary in Philadelphia, to be exact.

Eastern State Penitentiary, external view, May 2006

When I was a graduate student, the prison was an abandoned site; it has since been undergoing reconstruction. When it originally opened in 1829, it was one of the models of the new penal-reform movement, with isolated cells...

reconstructed hallway in Eastern State Penitentiary

... and inside the one-person chambers ...

bed in reconstructed cell at Eastern State Penitentiary

... one could find the things that prisoners would do to focus their mind on repentance and rehabilitation:

table in reconstructed cell at Eastern State Penitentiary

As many social historians have documented, this rehabilitative ideal quickly deteriorated, and Eastern State eventually became a mass warehouse, with long hallways in the cell blocks...

hallway in Eastern State Penitentiary, May 2006

... and where the cells were no longer just for one person and, if you were really lucky, you had two skylights instead of one.

cell ceiling in Eastern State Penitentiary, May 2006

And so one should not be too surprised that the great move for rehabilitative prisons (perhaps best known through Jeremy Bentham's work, or maybe Foucault's arguments about Bentham) ended up being far from rehabilitative. I only visited for an hour or two...

Sherman Dorn behind another set of bars at Eastern State Penitentiary

Posted in History at 2:39 PM (Permalink) |

Accountability jingoists and nihilists

I've been searching for a while for how to explain the Manichean-style debate we've been having on accountability, and I think I have it. Those who advocate for high-stakes testing as currently wrought (or who seek an intensification of it) are accountability jingoists. As with foreign-policy jingoism, accountability jingoism is belligerent in tone and identifies disagreement as either foolish or undermining American values. On the other hand, some of those who disagree with the current moment in high-stakes testing are accountability nihilists, who turn in frustration to a denial of anything associated with the current regime.

You can see this "the other side is full of villains and fools" rhetoric in a recent Eduwonk post and in the song in Kathy Emery and Susan Ohanian's Why Is Corporate America Bashing Our Public Schools? (on p. 4, singable to "If You're Happy and You Know It, Clap Your Hands"):

If you cannot find Osama, test the kids.
If the market hurts your Mama, test the kids.
If the CEOs are liars,
putting schools on funeral pyres,
screaming, "Vouchers, we desire!"
test the kids.

I understand the value of snarky comments in blogs and outlandish words in songs, and I have agreed with both Andrew Rotherham and Susan Ohanian about various matters, but these are not isolated examples of Manichean rhetoric. In the long run, I don't think that either jingoism or nihilism helps us get to saner accountability policies. While my sensibilities in many ways are close to the nihilists'—I see the current system of accountability as out of balance—I disagree with a broader worldview that fails to acknowledge that there might be something of value, or politically rooted (in a positive sense), about accountability politics.

Posted in Accountability Frankenstein at 11:18 AM (Permalink) |

Syndication details

For those who have been frustrated reading my blog on an aggregator because you don't see the links, try subscribing with the Atom index (at http://www.shermandorn.com/mt/atom.xml). That should have the links.

Posted in Random comments at 10:44 AM (Permalink) |

When superintendents become defensive

The contrast in tone between Pinellas Superintendent Clayton Wilcox and Hillsborough Superintendent MaryEllen Elia is notable in their columns in today's St. Petersburg Times. Several weeks ago, the paper ran a series on a poll of teachers showing very low morale in Pinellas County when compared with Hillsborough, though morale in Hillsborough isn't exactly great. Elia (from Hillsborough) wrote the following in the third and fifth paragraphs of her column:

Though Hillsborough County teachers differed from Pinellas County teachers in significant ways, the overall message was unmistakable: We must do everything in our power to support teachers or risk losing them when we need them most.... After reading the stories in the Times, I decided to use the education survey data as a guide and a reminder.

Contrasted with this "we can do better" response is Wilcox's muffled language in the first paragraph:

Employee morale in our public schools is a very complicated and difficult issue to understand and much more difficult to influence.

The first nine paragraphs of Wilcox's column were a litany of pressures on teachers and reasons why he's not solely responsible for low morale. And that is true... but he is responsible for keeping the nutty pacing calendar and other schemes that are justified primarily by the consequences of the FCAT, not by instructional need. If I were a teacher or parent in Pinellas, I would be dissatisfied with Wilcox's response, which has more in common with Mikhail Gorbachev's obscurantist ramblings than with the clear response that the county's poll of teachers should have provoked.

(For those who don't know Florida's geography, Pinellas County is a peninsula on the gulf side of Tampa Bay, and it includes St. Petersburg and many other smaller towns. Hillsborough County is a sprawling mix of Tampa, a few other small towns, unincorporated suburbia, and farmland to the east of the bay.)

Posted in Education policy at 9:00 AM (Permalink) |

June 3, 2006

The critics of standards and accountability

I have started to write a new book, titled Accountability Frankenstein, that I hope to finish by the end of the summer and get to a press shortly afterwards. If I can stay disciplined and write a few hours every day, I should accomplish my goal. And, on the way, I'll be putting in a few teasers of short excerpts or paraphrases of some material. Today's is the first...

Susan Ohanian, Alfie Kohn, Marion Brady, and many others were criticizing accountability systems well before I did. But, unlike these critics, I do not see standards as evil in themselves. Others, though, see the technocracy of accountability as unnatural. Ohanian, for example, calls advocates of high-stakes testing standardistos. For years, Kohn has railed against the use of rewards and punishments at all in schools, either for individual students or for educators. Brady has argued that standards themselves, commonly rooted in conventional disciplinary definitions, are inappropriate. I think I understand these humanistic critics of standards and accountability. Using technocratic tools to improve education threatens to remove the individualism, spontaneity, and joy of the best education many of us have experienced. The real world, to Brady and others, is more complicated than our compartmentalized curriculum. Good education, to Ohanian and Kohn, relies on something more than scripting. Most of us have had our "aha" or eureka moments, when something a teacher said gave us a new perspective, or when we finally understood a concept we had struggled with. And in most cases, these moments did not come in scripted lessons or during fill-in-the-bubble tests. To many critics of high-stakes accountability, trying to improve education by standardizing it is an obscene marriage of technocracy and democracy.

Posted in Accountability Frankenstein at 7:17 PM (Permalink) |

Smart NATFHE boycott commentary

Jon Pike has the best commentary I've seen on the Israeli-boycott resolution of the former lecturers' union, NATFHE. The resolution isn't binding on the newly created University and College Union (UCU) (into which NATFHE and AUT merged), and I suspect that there will be a decisive anti-boycott vote next year within UCU.

Posted in Academic freedom at 10:29 AM (Permalink) |

June 2, 2006

Petulance and its discontents

George Leef reified ACTA's logical slip yesterday by saying that Ward Churchill's plagiarism, fabrication, and falsification were not "the main thing" in terms of his transgressions and by repeating the claim that courses are replete with ideological bias (based, as usual, on a few anecdotes). ACTA's blog says it's "good ... to see the petulance with which some academics respond to legitimate criticism named for what it is."

Hmmn... I guess it's fair to infer tone from a written document, but we don't know who the petulants are. John Wilson? Me? Well, I have a thick-enough skin that calling me petulant doesn't bother me, but Leef could've had the decency to provide a link to us miscreants to drive up our visitor counts... oh, wait. That would be petulant griping.

In any case, the ad hominem remarks don't address the substantive criticism by Hiram Hover or Timothy Burke, among others. And it's just mind-boggling for anyone to say with a straight face that evidence of research misconduct isn't the "main thing" when it appears.

Posted in Academic freedom at 4:14 PM (Permalink) |

Connecticut, NCLB, and the state NAACP: where symbolic politics don't get you far

Some around the blogosphere and newspaper editorial rooms have been crowing about the Connecticut NAACP chapter's intervention in the Connecticut v. Spellings lawsuit this year. This is proof that NCLB is on the side of kids and the American way! seems to be the cry.

Well, sort of. Attorney William Taylor's testimony at the May 9, 2006, meeting of the NCLB Commission highlighted one concrete problem—Connecticut's request to have much lower participation of students with disabilities and English-language learners in annual assessments—and a whole bunch of "we distrust the bastards" language. I understand the distrust—Connecticut is a collection of towns, with disproportionate numbers of them being either very poor or very wealthy, and guess who controls state politics?—but I think Taylor's rhetorical argument on mandates (that the federal constitution is an unfunded mandate) is insufficient to respond to CT Attorney General Richard Blumenthal's focus on specific statutory language. The state Commissioner of Education, Betty Sternberg, is also clever in referring to the practical problems of the mushrooming test requirements, something I suspect a court will have a hard time ignoring. Essentially, Connecticut is saying that it insists on a certain quality of test, and that Spellings's refusal to grant a waiver violates the federal statutory language.

But that's not all...

It turns out that the national NAACP is a signatory on a 2004 joint statement about the changes needed to NCLB, one of which is precisely Connecticut's main request to allow testing in every other grade (rather than every grade 3-8) so the state can use high-quality tests.

The arguments about testing in special education and for students whose first language is not English are much harder. Connecticut's legal complaint not only asks the court to require that Spellings grant a waiver for every-other-year state-level testing but also (in a roundabout way) that Spellings grant the requested waivers to allow testing of students with disabilities at their instructional level and to let students whose first language is not English wait until they have been in Connecticut schools for three years before participating in state-level testing. On the one hand, my impression of the literature is that testing is more accurate when it is on a student's instructional level, but whoever determines that level can also set expectations lower than appropriate for a student with disabilities. (E.g., a student in 9th grade whose reading is at a 6th-grade level might still be put at a 4th-grade instructional level. Bad move.) And there is no easy solution for testing with summative purposes with students who are learning English. Waiting three years is not a great idea, and I don't think any court looking at the substance of this will want to get involved in those issues.

Right now, I think the parties are waiting for a judge to decide on the U.S. DOE's motion for a summary dismissal. My best prediction: no dismissal, and eventually the court will agree with Connecticut on the every-other-year testing (on the unfunded-mandates issue) and boot the other issues back to the U.S. DOE, but without much direction.

But the symbolic politics get weird. Many NCLB opponents cheered the state's lawsuit because they dislike NCLB's testing mandates (and on that, they agree with the state's position) but have ignored the fact that Connecticut is raising no legal challenges to NCLB's mandate of high-stakes testing (whoops). The Connecticut NAACP is intervening largely because its members distrust the state DOE (right attitude) but without a clear legal argument that they've articulated (poor tactics). And Andrew Rotherham is cheering the state NAACP for holding the state's feet to the fire (right attitude) without acknowledging that the state is trying to keep the quality of tests high, something Rotherham has written about and his colleague Thomas Toch wrote about in the Margins of Error report.

There is no grand lesson here, except that the symbolic politics of education reform don't always pay attention to the details. But I expect you knew that, anyway.

Update: Andrew Rotherham responded the same day, but my Bloglines subscription didn't show it to me until yesterday. Go read his response, especially since he noted my jailtime in Philadelphia.

Posted in Education policy at 8:16 AM (Permalink) |