August 31, 2009

EPAA acceptance rate, 2007 and 2008 (English language articles): 12-13%

I have been doing some tidying up with EPAA, and looked at the stats for 2008. I don't check the statistics often (and especially not early in a calendar year, when manuscripts from the prior year may well still be pending), but authors occasionally need the information for tenure and promotion purposes. While there are still a few revise-and-resubmits out, it looks like the acceptance rate for the English-language section has been fairly steady for the last few years: 12-13% in both 2007 and 2008.

Bad social criticism 'r' us

Kay Hymowitz's piece on dating in the City Journal reminds me in one way of a horrible newsmagazine story in the late 1980s or early 1990s, one that proclaimed with certainty that women who were single in their early 30s were doomed never to marry. In another way, it reminds me of Sex and the City: this bears no resemblance to the New Yorkers I personally know, many of whom have been in monogamous relationships quite happily or otherwise are dating without putting their angst on display for the world. And it is in the display that this article reminds me of the selectivity bias in whoever calls the Laura Schlessinger show: "Hi, Doctor Laura, I love your show and I love my kids and I am my kid's mom, and my moral dilemma is whether it's okay to let this cute ex-con sleep in my house on the couch when my 12-year-old is in the house, and I know that manslaughter isn't good but it isn't murder, either, and we're just dating and I've just let him peck me on the cheek." Thank you, Kay Hymowitz, for your interviewees' TMI. Next time, if I want rigor with the salaciousness, I'll at least head to Lillian Rubin's Erotic Wars.

Dear people who are dating and don't have it together enough to know what you want, explain it, and then accept the choices potential or real dates make in response: it's just a phase you're going through, called "life." Don't worry: it'll be over before you know it, but in the meantime, make the best choices you can.

Posted in Random comments at 11:36 AM (Permalink) |

Structure, choice, or just good books to read?

Several years ago, at the prompting of then-Governor Jeb Bush, Florida's legislature mandated reading classes for middle-school students and ninth graders. On the first day of seventh grade a few years ago, a teenager I know discovered that his reading teacher had assigned Sean Covey's The Seven Habits of Highly Effective Teens. Yes, this was a stellar choice as mandated reading for the hard-bitten cynical set: time-management skills. You can imagine the snide comments from students ("great work of fiction"), and it was truly a double facepalm moment in the annals of reading instruction.

So when debate erupts over the New York Times "we have no news to report so we will make up a trend" story on letting students pick their own books, I hope you will pardon my bewilderment. I am not too old to remember the structure of middle-school English classes: you read some books together as a class, and then you read others on your own and wrote a book report. In my day, the "let's try to engage the students" ploy was letting students create dioramas. Today, I guess it's keeping a reading journal. Those are both fine as long as teachers understand that no trick-du-jour works with all students. 

There is a role for both mandated reading and choice, and maybe 36 weeks of school allow both, as long as there is a reasonable chance that the mandated books will not be treacly. Less Sean Covey, more Gordon Korman: "Go to the library and pick out a book with an award sticker and a dog on the cover. Trust me, that dog is going down."

Posted in Education policy at 10:30 AM (Permalink) |

August 30, 2009

Race to the Top comment sausage

A friend of mine from Chicago introduced me to the term link sausage for a blog entry that is not much more than a set of links. Here are links to various comments on Race to the Top (a tiny slice of the well over a thousand comments submitted):

As I expected, others have started to chime in on the NEA comments. The New York Times took the comments as a sign of obstinacy. Former Park Ridge Education Association president Fred Klonsky wrote,

While it seems to me that it is late in coming, the letter from Brilliant is well deserved, and [Sherman] Dorn's comments notwithstanding, I think it reflects the views of the NEA membership. At least among those who have been following the debate.

I think that was my point: the comments reflected the views of a large slice of the NEA membership, but not in a productive fashion, and I fear that on balance they will harm the concrete interests of teachers (both in and out of the NEA) no matter how you want to define those interests.

Note: As Klonsky points out in comments, he's not an ex-president (yet). The error is all mine, from sloppy reading of his about page.

Posted in Accountability Frankenstein at 12:34 PM (Permalink) |

August 28, 2009

I'm commenting on Race to the Top, and I want a pony, too!

Impressions from a quick skim of 20 or so comments on the draft Race to the Top regs:

  • I couldn't find the national AFT comments anywhere.
  • Thus far, the two sets of technical comments by the Learning Disabilities Association of America and the group of academics with Kane, Staiger, and several others (uploaded by Thomas Kane), respectively, earn my "okay, you guys read the regulations and targeted your comments" award. Whether you agree with them or not, the comments were shrewd and focused. (I happen to like most of the comments, which are practical and sensible on the whole.)
  • The New Teacher Project signed onto the multi-organization letter that was essentially a vague "okay, we agree with this" note (with the advice for the USDOE to be selective in the first round), and then submitted comments that were, ahem, nowhere near as far in the opposite direction as the NEA's but bewildering in their unbridled confidence in the suggestions made. TNTP staff, please read the comments written by Kane et al. You're smart, and they're smart, and they're much closer to the mark than you were this week. At least you don't come close to winning the second "I'm commenting on Race to the Top, and I want a pony, too!" award (the first went to the NEA).
  • I think that the California Teachers Association (the NEA affiliate in California) avoided the factual blunder in the NEA comments of asserting that Race to the Top is a mandate. Instead, they asked what states would have to give up in return for the money. In this case, they were deeply, deeply concerned about the threat to federalism embedded in asking that a state be able to link teacher and student records. That concern would be more plausible if TNTP's comments were enacted, but either the draft regs or Kane et al.'s suggestions are reasonable in an imperfect world.
  • One state department of education, in the place where it was supposed to upload its comments, accidentally sent the USDOE the cover letter to a national organization telling that organization it was sharing its reg comments. No signs of actual comments on the regs (thus far today). Ouch! I suspect there are similar technical glitches in other places.

I didn't comment. This is the first week of classes, and I'm a firm believer in the biggest bang for my buck (or hour).

Posted in Accountability Frankenstein at 8:13 PM (Permalink) |

Charlie Crist, George LeMieux, and higher ed searches

Well, it looked like a foregone conclusion early on, and though the hiring authority promised a wide-ranging search for the best talent available and went through all the motions of a search, in the end the inside candidate won out, just as a number of people predicted.

Yep: Florida Governor Charlie Crist picked his alter ego, his political shadow George LeMieux, to replace Mel Martinez and become Florida's Interim Senator until the 2010 election. 

There are two reasons why Crist picked LeMieux: he can rely on LeMieux to act in ways beneficial to Crist's own bid for election to the seat, and, because Crist is human, he is susceptible to the availability heuristic. Like all of us, he is biased towards the most easily thought-of explanation or solution. In this case, it's his good friend and confidant George LeMieux.

Posted in Higher education at 11:37 AM (Permalink) |

Greg Mankiw provides the laugh of the day

Economist Greg Mankiw provides the unintentional humor of the day: "Smart parents make more money and pass those good genes on to their offspring."

Smart parents years ago miraculously picked employers who survived the Great Recession without laying them off?

Smart parents are dumb enough to spend money on expensive private schools and expensive private test-prep services when according to Mankiw's claim their kids would do well anyway?

Smart parents who choose public schools are dumb enough to spend far too much for houses in wealthy areas because it's really not necessary for their kids to have a decent education?

Historical perspective: I think that Greg Mankiw is living in the past, a time when wealthy people would accept the argument that they're wealthy because they're smart rather than the argument that they're wealthy because they work their tails off. The second that we became a workaholic society, the arguments of Charles Murray, Greg Mankiw, and the like became dinoideologies. Wealthy people no longer need to argue that their wealth derives from their being smarter than other people in the sense of algorithmic cognition. And it's been years since I've heard any of that crap from actual wealthy people who don't fancy themselves as part of the chattering class. They and their close admirers will talk about their being "whip-smart," sure, but also working very, very hard and having the luck to have good mentors, the right opportunity at the right time, and so forth. 

Posted in Out of Left Field Friday at 11:14 AM (Permalink) |

August 24, 2009

John Yoo and academic freedom

On the Balkinization blog, Deborah Pearlstein copies a memo from Berkeley (Boalt Hall) Law School dean Christopher Edley regarding John Yoo, the tenured faculty member at Boalt who wrote the torture-justification memos for the Bush administration. In it, Edley makes clear his disdain for Yoo's reasoning and then argues that those who want Yoo gone have a very high bar to clear to make that argument. See an interview transcript with Edley here.

Brad DeLong disagrees. DeLong argues that Yoo's intellectual inconsistencies form the type of misconduct that justifies firing a tenured faculty member. DeLong is wrong, and Edley is correct. I have no more taste for Yoo's views than DeLong does, but I value academic freedom and academic due process more than I value my desire to see Yoo kicked out of his job for prevarication in the service of torture.

If he is prosecuted by the Department of Justice, then that's a different matter, and we'll see what happens in court. But he hasn't been indicted, and the current arguments for his ouster strike me as very reminiscent of the calls in late 2001 for USF to immediately fire Sami Al-Arian.

Posted in Academic freedom at 10:19 AM (Permalink) |

August 23, 2009

NEA's comments: righteousness over responsibility to members?

I'm an NEA member, through my membership in the United Faculty of Florida. I'm a skeptic and critic of high-stakes accountability. Wrote a book and a few articles on the topic. And I am astounded at the NEA's comments on the Race to the Top draft regulations. (Hat tip.)

It is one thing to submit a righteous objection to the entire program if you are an individual with no responsibilities but to your conscience and your personal judgment of posterity. It is an entirely different thing when you represent several million teachers and you submit a document that for all intents and purposes appears to be written for an internal audience inside the NEA. That's nice, in the worst sense of the word "nice," because NEA staff had a responsibility to protect and advance their members' interests, not indulge any of our fantasies. To put it bluntly, on what planet would this regulatory comment have any effect on the final regs?

Let me be clear on my perspective as an NEA member and as an observer of political processes: There are lots of reasonable individual passages within the document, but you don't submit a manifesto when you comment on regs as an organization. You don't submit a manifesto that covers up any potential for effectiveness with what amounts to political poison. And you don't submit a manifesto that undermines your credibility. 

Two examples will have to suffice, because there's only so much I can wince at publicly: "we cannot support yet another layer of federal mandates" (from p. 2), or, with regard to the creation of statewide longitudinal data systems, opposition to "[i]gnoring states' rights to enact their own laws and constitutions" (p. 24). The problem with these claims (and the attendant tone of outrage) is that Race to the Top is not a mandate. Love it or hate it, it's something states must apply for.

There were certainly alternatives available to the NEA, including the following choices:

  • Realpolitik: nudge the regs a bit to help state and local affiliates.
  • Legal: set up a legal challenge after final publication.
  • Abstinence: if you need to make a statement of conscience, declare that "we have serious doubts that this program will substantially help schools and will not participate in the regulatory comment process." 

I may be dead wrong about this, and there may be some uber-secret strategy behind this comment, but from where I sit at the end of the summer, it looks like the first major move by the new president of one of my national affiliates has been a bunch of wasted electrons.

Posted in Accountability Frankenstein at 2:38 PM (Permalink) |

August 22, 2009

H1N1-motivated (and very brief) reviews of CamStudio, Captivate, and Elluminate

Preparation is better than panic, or so they say. With questions about the expected wave of H1N1 infections this fall and winter, my administration is trying to gently prod faculty into thinking about what might happen if 30-40% of students in a class are infected and absent. (Well, I'd hope they're absent if they're infectious.) While I've heard from one colleague who thinks that the admin is being unreasonable, at least my first impression is that they're not being heavy-handed (and certainly not as heavy-handed as the "gee, that's lawyerly rather than educational" response over the summer to FERPA complaints, but I think the faculty will solve the latter problem quickly, now that the fall's upon us). One issue is the question of whether and how to adjust attendance policies (see the quick survey responses of about 100 USF faculty here). Another is the issue of making material available to students. Students would like classroom capture, and apart from the fact that the technology isn't there yet for a lot of situations, we need to address some intellectual-property issues before that becomes widespread.

But then there's distance education. Hi, Margaret! For many classes it is far from ideal, but it may be a backstop in case H1N1 develops a more virulent strain (and here, virulent might well mean "upchucking for a day, followed by fevers, chills, and no capacity to read or do work for a week" rather than high mortality). At USF, as at many institutions, staffing in many offices is already skeletal, and it's the spread of H1N1 through staff more than students that could cause a university to close for a week or two.

So... what's a faculty member to do? One week? "Let's see where we are when we reopen" is as good for a short-term flu-related closure as for a hurricane or earthquake closure. More than two weeks? Hmmn... the options there are mixed. Today, while preparing a few things for a class I designed to be online, I tried out the latest versions of three technologies, one geared for tutorials (CamStudio's capture/narration of Things You Do While Computing), one geared for one-way presentations (Captivate, which has some interactive features but is probably most quickly learned as a way to narrate presentations), and one geared for recording live online sessions (Elluminate, which has a tool [Elluminate Publish] that can export the recording to mp3, mp4, etc.).

CamStudio: Best for tutorials. I suppose this might be modified to work for an MST3K version of commentary on video, but I'd recommend other software for that. Nothing else, I think.

Captivate: Produces a very slick Flash file, and if you have a decent microphone, it'll work fine for one-way presentations sure to put your students to sleep, which they might need anyway with the flu. I happened to be using my onboard mic inside my office, and until I discovered a way to improvise a pop screen for the computer's built-in mic, my recordings were echoey, harsh, and POPoPOPped far too much. Pop screen workaround: multi-folded kleenex over the tiny hole leading to the mic's diaphragm. Now the recordings were just echoey and harsh. Much better. The interface is relatively easy to master, but I found it very clunky to use, in part because when I talk to students I am talking to real, live students, not to a hole on the top margin of the inside of the laptop's clamshell. For those who point to the inestimable Scott Simon as the paragon of radio storytelling, I can only say, Scott Simon is also talking to real, live human beings, the recording engineers in the studio!

Elluminate is a clunkyish way of connecting to students live, while everyone is miles away. I spoke with two students using it today and used Elluminate's publishing tool to turn a test recording into an mp3, an mp4, and a few other items whose purpose I couldn't quite figure out. The benefit: Ah, real students who can respond, ask questions, and keep me feeling a little as if I'm not alone in the world! The cost: oh, the pain of audio compression! Ugh. It's bad enough when you have a crackly connection and you know you're coping because, well, you have to cope. But while my voice in the Captivate Flash file was uncomfortable to hear, at least it didn't have the type of quality you'd associate with mid-20th century AM nighttime radio bounced off the ionosphere. 

My personal plans if USF closes for any reason this semester? Captivate for anything I really want to record (and figure out how to work that without feeling like I'm talking to a computer), Elluminate for connecting with students who know how to log into our CMS and almost nothing else, and then pray that students start to learn how to use Skype.


Posted in Teaching at 5:45 PM (Permalink) |

August 17, 2009

The principal as eunuch?

I will confess to tremendous confusion about the actual skill level of principals, especially after reading a bunch of stuff about teacher evaluation and teacher labor markets (and whether principals should be able to hire teachers directly). Many of those who criticize the current state of teacher evaluation note that principals usually are poorly trained in evaluation, engage in drive-by observations, and give satisfactory ratings to almost every teacher. Yet the same people who criticize principals' evaluation of teachers are often in favor of dramatic autonomy for principals in selecting teachers. I am confused: principals are incompetent at evaluating teachers, so we should give them as much authority as possible in hiring?

In the comments on my entry about teacher evaluation policy debate ground rules, New York math teacher Jonathan writes, "Our current evaluation does not work well because administrators implement it poorly, not because it is inherently flawed. Why should teachers be punished (for that's what the reformers' schemes would do) for something that people other than teachers have messed up?" And then he proposes a vague transition to holistic review of teachers. Again, I'm confused: principals have screwed up massively, so let's move from a checklist to an unstructured process where they'll be important partners?

The common phrasing here sounds remarkably like rhetoric that frames the principal as simultaneously a doofus and an entrepreneur, someone in a 1980s suburban-teen flick or a turnaround artist, a petty tyrant or a serious partner in collaboration, or, let's face it, a cipher moldable into whatever you think is necessary at the moment. In the early 1990s, Lynn Beck and Joe Murphy wrote a book about the changing metaphors used for principals, and I'm wondering how to think about the inconsistencies in how people write about principals. They should be functionaries in a giant bureaucracy aimed at achievement, I guess, and they can rise as far as their talents take them, but not to the top, which should be reserved for visionaries, and they can be trusted with all sorts of tasks.

The principal as eunuch, in other words.

It's probably a saner goal to see principals as human beings with a certain set of skills; like all of us, they have a set of skills that can be impressive but is always finite. The concern I always have about systems and proposals that rely heavily on a single role within a school is that people are variable, including principals. Setting up the principal as a hero is no better than setting up the teacher as a hero.

Posted in Education policy at 11:18 AM (Permalink) |

August 16, 2009

What "multiple measures" looks like in reality

Friday's Sun-Sentinel article on the new evaluation scale for Florida high schools shows what happens when a state moves away from general-assessment test scores as the be-all and end-all of accountability. In this case, Florida's new scale for high schools rewards schools for graduating more students, especially those who have problems with the state assessments, for enrolling students in challenging courses, for student success in those challenging courses, and for student success in voc-ed certification programs.

How are Broward County schools responding?

At South Broward High School in Hollywood, students will get the chance to take additional AP classes, such as human geography, world history, music theory and macroeconomics, in addition to more traditional offerings such as AP English and biology, said principal Alan Strauss.

They're also ready to better monitor performance of at-risk students and ensure the entire senior class is ready to graduate, Strauss said. "I say overall I would hold myself accountable for grad rate and preparing my kids for college," Strauss said. "I don't find a problem with that. I think that's what my job should be."

Surprise, surprise! A more balanced accountability mechanism leads to planning a more balanced set of programs for students. I can quibble with loads of details on the new scale, but the direction is the right one, and I think we'll know in a few years how this is going. I'll stick my neck out and predict the evidence will be reasonably good (in terms of outcomes). A small step for a single state, a giant step for accountability options.

Posted in Accountability Frankenstein at 10:29 AM (Permalink) |

August 13, 2009

How can we use bad measures in decisionmaking?

I had about 20 minutes of between-events time this morning and used it to catch up on two interesting papers on value-added assessment and teacher evaluation--the Jesse Rothstein piece using North Carolina data and the Koedel-Betts replication-and-more with San Diego data. 

Speaking very roughly, Rothstein used a clever falsification test: if the assignment of students to fifth grade is random, then you shouldn't be able to use fifth-grade teachers to predict test-score gains in fourth grade. At least with the set of data he used in North Carolina, you could predict a good chunk of the variation in fourth-grade test gains knowing who the fifth grade teachers were, which means that a central assumption of many value-added models is problematic.

Cory Koedel and Julian Betts's paper replicated and extended the analysis using data from San Diego. They were able to confirm with different data that using a single year's worth of data led to severe problems with the assumption of close-to-random assignment. They also claimed that using more than one year's worth of data smoothed out the problems.
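
For readers who want the mechanics, here is a minimal sketch of the kind of falsification regression described above. It is my own illustration, not code from either paper; the data frame and column names (gain_g4, teacher_g5) are hypothetical stand-ins for a student-level file with prior-year gains and later teacher assignments.

    # A rough sketch of a Rothstein-style falsification test (my illustration, not
    # code from the papers). Assumes a student-level pandas DataFrame with
    # hypothetical columns: 'gain_g4' (grade-4 test-score gain) and 'teacher_g5'
    # (the teacher to whom the student was later assigned in grade 5).
    import pandas as pd
    import statsmodels.formula.api as smf

    def falsification_test(df: pd.DataFrame):
        # Regress prior-year (grade-4) gains on future (grade-5) teacher indicators.
        # Under (close-to-)random assignment, future teachers should have no joint
        # explanatory power for past gains.
        results = smf.ols("gain_g4 ~ C(teacher_g5)", data=df).fit()
        # A large F statistic (small p-value) means future teachers "predict" past
        # gains, which undercuts the random-assignment assumption.
        return results.fvalue, results.f_pvalue, results.rsquared

The R-squared from a regression like this is the "good chunk of the variation" I mentioned above.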

Apart from the specifics of this new aspect of the value-added measure debate, reading these papers pushed my nose once again into the fact that any accountability system has to grapple with messy data.

Let's face it: we will never have data that are so accurate that we can worry about whether the basis for a measure is cesium or ytterbium. Generally, the rhetoric around accountability systems has been either "well, they're good enough and better than not acting" or "toss out anything with flaws," though we're getting some new approaches, or rather older approaches introduced into national debate, as with the June Broader, Bolder Approach paper and this morning's paper on accountability from the Education Equality Project.

Now that we have the response by the Education Equality Project to the Broader, Bolder Approach on accountability more specifically, we can see the nature of the debate taking shape. Broader, Bolder is pushing testing-and-inspections, while Education Equality is pushing value-added measures. Incidentally, or perhaps not, the EEP report mentioned Diane Ravitch in four paragraphs (the same number of paragraphs I spotted with references to President Obama) while including this backhanded, unfootnoted reference to the Broader, Bolder Approach:

While many of these same advocates criticize both the quality and utility of current math and reading assessments in state accountability systems, they are curiously blithe about the ability of states and districts to create a multi-billion dollar system of trained inspectors--who would be responsible for equitably assessing the nation's 95,000 schools on a regular basis on nearly every dimension of school performance imaginable, no matter how ill-defined.

I find it telling that the Education Equality Project folks couldn't bring themselves to acknowledge the Broader, Bolder Approach openly or the work of others on inspection systems (such as Thomas Wilson). Listen up, EEP folks: Acknowledging the work of others is essentially a requirement for debate these days. Ignoring the work of your intellectual opponents is not the best way to maintain your own credibility. I understand the politics: the references to Ravitch indicate that EEP (and Klein) see her as a much bigger threat than Broader, Bolder. This is a perfect setup for Ravitch's new book, whose title is modeled after Jane Jacobs's fight with Robert Moses. So I don't think in the end that the EEP gang is doing themselves much of a favor by ignoring BBA.

Let's return to the substance: is there a way to think coherently about using the mediocre data that exist while acknowledging we need better systems and working towards them? I think the answer is yes, especially if you divide the messiness of test data into separate problems (not exhaustive categories, but my first stab at this): problems when data cover too small a part of what's important in schooling, and problems when the data are of questionable trustworthiness.

Data that cover too little

As Daniel Koretz explains, no test currently in existence can measure everything in the curriculum. The circumscribed nature of any assessment may be tied to the format of a test (a paper-and-pencil test cannot assess the ability to look through a microscope and identify what's on a slide), to test specifications (which limit what a test measures within a subject), or to the subjects covered by a testing system. Some of the options:

  • Don't worry. Don't worry about or dismiss the possibility of a narrowed curriculum. Advantage: simple. Easy to spin in a political context. Disadvantage: does not comport with the concerns of millions of parents concerned about a narrowed curriculum.
  • Toss. Decide that the negative consequences of accountability outweigh any use of limited-purpose testing. Advantage: simple. Easy to spin in a political context. Disadvantage: does not comport with the concerns of millions of parents concerned about the quality of their children's schooling.
  • Supplement. Add more information, either by expanding the testing or by expanding the sources of information. Advantage: easy to justify in the abstract. Disadvantages: requires more spending for assessment purposes, either for testing or for the type of inspection system Wilson and BBA advocate (though inspections are not nearly as expensive as the EEP report claims without a shred of evidence). If the supplementation proposal is for more testing, this will concern some proportion of parents who do not like the extent of testing as it currently exists.

Data that are of questionable trustworthiness

I'm using the term trustworthiness instead of reliability because the latter is a term of art in measurement, and I mean the category to address how accurately a particular measure tells us something about student outcomes or any plausible causal connection to programs or personnel. There are a number of reasons why we would not trust a particular measure to be an accurate picture of what happens in a school, ranging from test conditions or technical problems to test-specification predictability (i.e., teaching to the test over several years) and the global questions of causality.

The debate about value-added measures is part of a longer discussion about the trustworthiness of test scores as an indication of teacher quality and a response to arguments that status indicators are neither a fair nor accurate way to judge teachers who may have very different types of students. What we're learning is a confirmation of what I wrote almost 4 years ago: as Harvey Goldstein would say, growth models are not the Holy Grail of assessment. Since there is no Holy Grail of measurement, how do we use data that we know are of limited trustworthiness (even if we don't know in advance exactly what those limits are)?

  • Don't worry. Don't worry about or dismiss the possibility of making the wrong decision from untrustworthy data. Advantage: simple. Easy to spin in a political context. Disadvantage: does not comport with the credibility problems created by the history of testing errors and the considerable research on the limits of test scores.
  • Toss. Decide that the flaws of testing outweigh any use of messy data. Advantage: simple in concept. Easy to spin in a political context. Easy to argue if it's a partial toss justified for technical reasons (e.g., small numbers of students tested). Disadvantage: does not comport with the concerns of millions of parents concerned about the quality of their children's schooling. More difficult in practice if it's a partial toss (i.e., if you toss some data because a student is an English language learner, because of small numbers tested, or for other reasons).
  • Make a new model. Growth (value-added) models are the prime example of changing a formula in response to concerns about trustworthiness (in this case, global issues about achievement status measures). Advantage: makes sense in the abstract. Disadvantage: more complicated models can undermine both transparency and understanding, and claims about superiority of different models become more difficult to evaluate as the models become more complex. There ain't no such thing* as a perfect model specification.
  • Retest, recalculate, or continue to accumulate data until you have trustworthy data. Treat testing as the equivalent of a blood-pressure measurement: if you suspect that a measurement is not to be trusted, test the student again in a few months or another year, just as you would retake a suspect blood-pressure reading in a few minutes. Advantage: can wave hands broadly and talk about "multiple years of data" and refer to some research on multiple years of data. Disadvantage: Retesting/reassessment works best with a certain density of data points, and the critical density will depend on context. This works with some versions of formative assessment, where one questionable datum can be balanced out by longer trends. It's more problematic with annual testing, for a variety of reasons, though accumulating more years of data can reduce uncertainties.
  • Model the trustworthiness as a formal uncertainty. Decide that information is usable if there is a way to accommodate the mess. Advantage: makes sense in the abstract. Disadvantage: The choices are not easy, and the way you choose to model uncertainty has consequences: adjusting cut scores or data presentation by measurement/standard errors, using fuzzy-set algorithms or Bayesian reasoning, or using political mechanisms to reduce the influence of a specific measure when trustworthiness decreases. (A minimal sketch of one such approach follows this list.)
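
To make the last option concrete, here is a minimal sketch of one way to model trustworthiness as formal uncertainty: reliability-weighted shrinkage of the sort that sits underneath many Bayesian and empirical-Bayes approaches. This is my illustration, not a method from any of the reports discussed above, and the numbers in the example are made up. The idea is that an estimate with a large standard error gets pulled toward the system-wide mean, so a less trustworthy measure carries less weight in any decision.

    # A minimal sketch (my illustration, not from any cited report) of treating
    # trustworthiness as formal uncertainty: noisier estimates are shrunk harder
    # toward the system-wide mean, so they influence decisions less.
    def shrink_estimate(raw_estimate: float, standard_error: float,
                        system_mean: float, system_variance: float) -> float:
        # Reliability: the share of observed variation attributable to true
        # differences rather than measurement noise.
        reliability = system_variance / (system_variance + standard_error ** 2)
        # Weighted average: precise estimates stay put; noisy ones move to the mean.
        return system_mean + reliability * (raw_estimate - system_mean)

    # Made-up example: a raw gain of 12 points measured with a large standard error
    # (8), in a system where true gains average 5 with variance 25.
    print(shrink_estimate(12.0, 8.0, 5.0, 25.0))  # about 7, pulled from 12 toward 5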

Even if you haven't read Accountability Frankenstein or other entries on this blog, you have probably already sussed out my view that both "don't worry" and "toss" are poor choices in addressing messy data. All other options should be on the table, usable for different circumstances and in different ways. Least explored? The last idea, modeling trustworthiness problems as formal uncertainty. I'm going to part from measurement researchers and say that the modeling should go beyond standard errors and measurement errors, or rather head in a different direction. There is no way to use standard errors or measurement errors to address issues of trustworthiness that go beyond sampling and reliability issues, or to structure a process to balance the inherently value-laden and political issues involved here. 

The difficulty in looking coldly at messy and mediocre data generally revolves around the human tendency to prefer impressions of confidence and certainty over uncertainty, even when a rational examination and background knowledge should lead one to recognize the problems in trusting a set of data. One side of that coin is an emphasis on point estimates and firmly-drawn classification lines. The other side is to decide that one should entirely ignore messy and mediocre data because of the flaws. Neither is an appropriate response to the problem.

* A literary reference, not an illiteracism.

Posted in Education policy at 4:18 PM (Permalink) |

August 12, 2009

Belated kudos to Broader, Bolder and to Fordham

In the whirlwind of my obligations this year, my reading has lagged, and I am late in recommending and praising two reports published in the first half of 2009:

  • The Broader, Bolder Approach's accountability report, published in late June. This report suggests combining the use of achievement test data and on-site school inspections for school-level accountability. Those who have read Accountability Frankenstein will know that I agree with those ideas. This report addresses the central gap in the original Broader, Bolder manifesto, and I am delighted to have read the proposal.
  • In March, the Fordham Institute published a report recommending a scaled approach to accountability when private schools take public dollars. Their proposal is roughly that the more dependent a private school is on public funding, the more the school has to provide data and be accountable in a way similar or parallel to local public schools.

Both are thoughtful, well-reasoned brief arguments, and they move each debate in interesting directions. Whether or not you agree with the conclusions, you'll have things to think about.

Updated: Aaaaargh! Six days later, I realize I've been calling the group the Bolder, Broader Approach instead of the other way around. Dear readers: when I make a stupid error, please point it out as soon as you see it.

Posted in Education policy at 9:53 AM (Permalink) |

Proposed ground rules on teacher evaluation and test discussion

Seeing how too many writers about Race to the Top, tests, and teacher evaluation would have taken actions in the Cuban Missile Crisis that would have led to nuclear war--i.e., seeing the worst in opponents, or maybe seeing posturing as the best path forward for themselves personally or for their positions (sound like the health-care debate-cum-food-fight?)--I am hereby proposing the following ground rules/stipulations:

  1. The modal forms of teacher evaluation used in K-12 schools are not useful.
  2. Some aspect of student performance (abstracted from all measurement questions and concerns about flawed tests) should matter in teacher evaluation.
  3. At least one problem of including student performance in teacher evaluation is how to use messy and flawed data. This comes from the fact that current tests are flawed. Heck, all tests are going to be imperfect and create the dilemma that Diane Ravitch referred to this morning. But plenty of today's tests should embarrass anyone who approved their use.
  4. Yes, people who disagree with you have used inane arguments, and some of them might even have gotten some provisions through a legislature by logrolling. I know I can say the same about your putative allies. Let's call each other out on those moves, and then move on to the substantive issues. Doing more than calling people out on that at the time (i.e., holding grudges) is playing the game of "your side is dirtier than mine," and you will inevitably lose that game, especially if there's an historian in the room (and in addition to me, there are also Diane Ravitch, Larry Cuban, Maris Vinovskis, and others who can quickly point out where folks have played dirty political pool for decades, though many of us will just call it the standard operating procedure in education politics). See reference above to Cuban Missile Crisis. If Reagan could make an arms-control treaty with Gorbachev, we can all be a little more mature in disagreements.

Anyone who has broken these ground rules or is going to break the ground rules in the near future is currently in a grace period thanks to my staying away from blogging much in the past few weeks. But if I have time in the fall, I'll write a weekly entry on who's doing the best and worst jobs of fighting fairly on this issue.

Posted in Accountability Frankenstein at 9:27 AM (Permalink) |

August 7, 2009

Logjams in time... er, timejams?

For the last few years the pace of my life has been such that I've rarely had the chance to look at the ebb and flow of various things at work, largely because there hasn't been such a thing as an ebb. And while I'm not seeing an ebb right now, I see at least the smallest possible glimmer of hope that the logjam in my head and schedule is not getting worse. I have the next two articles for EPAA ready to go, the following two English articles for the journal mostly prepared, a slew of new submissions with their initial readings finished, several revisions read and decided on, and now I'm into decisions with reviews in hand and assignment of reviewers for new submissions. At least for the last year, revise-and-resubmit letters have been the hardest for me to write and have taken the longest time, because each one requires explicit advice on what to change, as opposed to reasons why the manuscript is not being accepted or minor additions to my standard "your article is wonderful! and here's what to do while preparing a final copy" template.

And, on top of that, I have one manuscript revision promised by the middle of this month, plus assorted other tasks. Oh, yes, and the semester begins on the 24th. But I'm chugging along on backlog and should get through the worst of it by the start of the semester, assuming disasters don't plop themselves in the middle of my life.

In other words, my foreseeable free time is feeling much like the economy: not heading downwards, bobbing along, and possibly looking up in the next three months. And, speaking of which, time to organize the logistics for the weekend. This afternoon and tomorrow: EPAA, maybe some manuscript revision when my editorial batteries run low. Sunday: visit to (quite wonderful) mother-in-law!

Posted in The academic life at 10:59 AM (Permalink) |

August 4, 2009

Your personal, homemade commission on tenure and test scores

Sick of finger-pointing in the absence of a New York state commission to study how to use test scores in teacher evaluation (including tenure) decisions? Look no further! In this space, we will be conducting our own homegrown commission over the next three months. No need for the New York Assembly and Senate to act! We'll do it ourselves.

What? you say. You're in Florida. Well, yes, but everyone knows that Florida is just the Southern branch of New York. My father grew up on Flatbush Avenue and graduated from Lincoln High School. He was in New York City for his residency in pediatrics (with an office in Bellevue, but that's another story). The Yankees' spring training home? Eight miles from my house. 

And if that doesn't convince you, you should know that Alexander Russo runs his blog on Chicago schools from Long Island. If he can do that, I can run a citizens' commission for New York from here (and then someone in Chicago can run something in Florida).

Apply in comments: name, role in New York education, what you'll bring to the table.

Posted in Accountability Frankenstein at 8:52 AM (Permalink) |

August 2, 2009

The liberal arts and narratives of declension

There is a teacher's voice in my head, asking the logical question about New York Times reporter Patricia Cohen, who speculated whether the humanities are in decline (perhaps because of the Great Recession) and whether older history subdisciplines are also in decline: "where did she go to school, and who were her teachers?" Evidently, the Times is hiring reporters who either never had good history teachers, never paid attention to them, or forgot one of the basic lessons in a good college history class: beware narratives of climbing societies, falling societies, or any society-wide "rise and fall." The February article brought the expected number of letters to the editor of a newspaper that might just depend on readers who want to read (you know, that humanities-ish activity), Timothy Burke had some words, and Michael Berube had solid things to say in early June and late June. About the second article, again see Burke as well as Mary Dudziak, Mark Grimsley, Claire Potter, and David Silbey. I am months late on this, so I will do what I can.

First, before panicking, it probably makes sense to sort out which parts of the proportionate decline of humanities majors in the past few decades are attributable to different factors: the growth of undergraduate professional degrees, the growth of higher-education enrollments more generally, the decline of GI Bill-related enrollment as a proportion of undergraduates, and any leftover changes that just might be related to the nature of the disciplines themselves. In part because the expansion of higher education came side by side with growing credential requirements for jobs and the belief that a college degree's main utility was getting a job, enrollment grew faster in professional majors than in the humanities.

Maybe I should cry over the fact that a lower proportion of students are history majors than there used to be (though the percentages bounce up and down), or maybe I should celebrate the dramatic expansion of college attendance in the past 70 years and the fact that even if the proportion of history majors has dropped, there are still more graduates with history majors living in the country today than were living in 1950. Remember the new "old saw" about the total population of China and India; apply as balm to humanities woes. Not only does the general expansion of college attendance make me less concerned than others are, but my guess is that today's students are more likely to be exposed to teaching that asks important historiographical questions and that uses primary sources. I didn't say immersed in: exposed.

Those perspectives do not completely eliminate concern about the future of humanities teaching and humanities departments in colleges and universities. Though regionally accredited colleges and universities have some version of a distribution/breadth requirement or general-education program (depending on your regional accreditor), that fact does not mean that a department has to be anything more than a "service outlet," the higher-ed equivalent of the quick-lube shop tucked in between the strip malls of Finance and Psychology. "Shakespeare while u wait! Fulfill writing requirement in 30 mins or ur money back." 

On the other hand, while the standard choice of academe has been greater adjunct use in all high-enrollment areas (and that is true whether they're called adjuncts or graduate students), the reality is that humanities classes are cheap in comparison with science and math if one looks at course credit earned. High failure rates in algebra and the costs of maintaining labs add up in a pragmatic sense, and that's only looking at credit courses. What about community college remedial classes? As DeanDad has noted, developmental courses in math are a death march in comparison with other noncredit classes. Teaching-heavy institutions may short the humanities in individual places, but the combination of gen-ed/distribution requirements makes it virtually impossible for college students to graduate without some liberal-arts classes and thus virtually impossible for colleges to eliminate liberal-arts programs entirely.

And then, if you look at the costs of maintaining the research capacity of faculty, the humanities look even better: no lab animals to house, fewer research assistants to hire, and for many scholars the primary needs are a computer, some travel funds for conferences or research trips, and time. The big difference is in universities with doctoral programs, where the expectation of support for doctoral students has both direct costs (tuition waivers, which are on top of the pitiful stipends for TAs and RAs) and indirect costs (in terms of the classes that graduate faculty are not teaching while they are running seminars and advising students). What I'm seeing in Florida universities is a combination of closing small doctoral programs and some atrocious decisions about closing departments.

The probable consequences of the first type of decision--closing down small doctoral programs in the liberal arts and in other areas--are a change in the doctoral-education opportunities in those fields, somewhat different workloads for those faculty, and perhaps a bit of status shift back to traditionally elite programs. It's not as though small-program closures are going to bump the publication trends in any significant manner, and Cohen's articles presume that the rolling crisis in academic publishing is in an entirely different universe from the mythical status decline she posits. In her February article, the world of publishing is entirely ignored, and the June article only discusses a presumed shift in journal publishing. In the real world where I live, as opposed to the make-believe world of the New York Times reporter, the long-term crisis in the liberal arts is in academic publishing and questions about the economics of monographs and the long-form argument.

(Among the atrocious departmental closure decisions, the University of Central Florida almost shut down its statistics department the same year it's opening up a new medical school, and Florida Atlantic University reorganized its engineering college into the Department of Tenured Faculty We Like, Department of Tenured Faculty We Hate, Department of Tenure-Track Faculty, and Department of Non-Tenurable Faculty Who Teach Boatloads of Undergraduates. Those weren't the official names of the reorganized units, but that's the central function of the reorganization. Guess which "department" was closed, with the tenured faculty told to leave by August 7?)

Posted in Education policy at 2:36 PM (Permalink) |