March 31, 2008

Tacit knowledge and the AERA program hustle

Eduwonkette has commented on the heterogeneous quality of sessions at the American Educational Research Association annual meeting, quoted someone saying it was a tenure hustle, and suggested that the IES-funded Society for Research on Educational Effectiveness is a rival to AERA. (Oh, yes, and her friend skoolboy is right in recommending Topaz Thai.) I've commented on the oversized aspect of the conference, but I waited until after AERA to gripe about one feature of AERA that is fundamentally inequitable:

AERA's reviewing system provides structured advantages to groups of researchers who collaborate on submitted proposals. Researchers from disciplines with solitary traditions face inherent disadvantages in such a system.

Because of its size, AERA has for years rationed session slots to divisions and SIGs by the number of submissions in prior meetings. (I don't know the formula, but I suspect it includes the number of prior submissions, number of prior panels, total membership in the division/SIG, phase of the moon at AERA two years ago, together with a cosine function tied to the inverse proportional pressure wave created by a size-10 shoe dropped from the AERA executive director's office to the street below.) So there is a huge incentive for division leaders to encourage large numbers of submissions, which produces a low acceptance rate.

Theoretically, that should mean a better overall quality of sessions, but it doesn't turn out that way, because with a large number of submissions (which are heterogeneous in quality), you also need a large number of reviewers, and program folks literally go begging for reviewers in the second half of the year, after submissions are in.

If you think the quality of reviewing is significantly more consistent than the quality of submissions, I have a swamp or bridge to sell you. If you're looking for reviewers, you don't have much of a choice. And AERA's reviewing system has a one-size-fits-all quantitative rating scheme (rubric), regardless of the methodological or epistemological traditions of the scholar. "Data Sources" may be irrelevant to a philosopher, but it's a required criterion for every review. And the quality of feedback varies as well. Here are the (quite positive) reviews my coauthor and I received for the proposal that was accepted:

Criterion    Rating
Choice of Problem/Topic    4 / 5
Theoretical Framework    4 / 5
Methods    4 / 5
Data Sources    4 / 5
Conclusions/Interpretations    4 / 5
Quality of Writing/Organization    5 / 5
Contribution to Field    4 / 5
Membership Appeal    4 / 5
Would You Attend This Session?    4 / 5
Overall Recommendation    5 / 5

Comments to the Author
This is a well-written, clear, and very focused proposal. It offers new perspectives on the oft-talked about teacher shortage problem, providing new evidence from data on re-entry into the profession and analyzing entry and exit by age. The data, methods and conclusions all appear to be solid. However, I do feel that more critical issues related to teacher shortages emerge if we consider the distribution of teacher shortages--for example, shortages of teachers willing to teach in urban areas, and subject-specific teacher shortages in math and science. Nevertheless, this paper makes an important contribution to the overall teacher shortage debate.

Criterion    Rating
Choice of Problem/Topic    5 / 5
Theoretical Framework    4 / 5
Methods    5 / 5
Data Sources    5 / 5
Conclusions/Interpretations    5 / 5
Quality of Writing/Organization    4 / 5
Contribution to Field    4 / 5
Membership Appeal    4 / 5
Would You Attend This Session?    3 / 5
Overall Recommendation    4 / 5

Comments to the Author
A strong, well-designed proposal on a clearly important topic.

I'm not sure if the second reviewer was exhausted from 17 prior reviews (my hat's off to her or his service in that case) or just had little to say, but I've had reviews that are all over the map in terms of ratings and amount/quality/relevance of comments. I pity the poor program volunteer who has to sort the reviews and figure out what to do with submissions that receive disparate splits (4s and 5s from one reviewer, 1s and 2s from another, with either or both reviews having either many or no comments). But there's one conclusion I take as a member of AERA who submits proposals:

Whether your AERA proposal is accepted is substantially a game of craps. This doesn't mean that horrid proposals are accepted; rather, plenty of very decent proposals are shot down because there is no way to create a consistent system of reviewing, and there is probably no way to predict which good proposals will be accepted and which will be rejected. (I wonder if anyone has asked permission to look at a set of proposal ratings to calculate reliability...)
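If someone did get access to a set of ratings, the check itself would be straightforward. Here's a minimal sketch, using invented reviewer scores (no real AERA data) and a plain Pearson correlation plus an exact-agreement rate as rough reliability measures:

```python
# Sketch of an inter-rater reliability check on AERA-style ratings.
# All numbers below are invented for illustration.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation between two equal-length lists of ratings."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical "Overall Recommendation" scores (1-5) from two
# reviewers rating the same ten proposals.
reviewer_a = [5, 4, 2, 5, 3, 1, 4, 2, 5, 3]
reviewer_b = [4, 2, 4, 5, 1, 3, 5, 2, 3, 4]

r = pearson(reviewer_a, reviewer_b)
agreement = sum(a == b for a, b in zip(reviewer_a, reviewer_b)) / len(reviewer_a)
print(f"correlation: {r:.2f}, exact agreement: {agreement:.0%}")
```

A real study would want something sturdier (Cohen's kappa or an intraclass correlation across all rubric items), but even this crude version would tell us whether two reviewers of the same proposal agree any better than chance.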

I suppose I could make money by running a side-bet system (but I don't live in Vegas or Atlantic City). There's a more pragmatic strategy, though, that some researchers use to increase their odds of being placed on the program (often a requirement for getting travel funds from your institution): agree with colleagues or graduate students to collaborate on submissions. The more submissions your name is attached to (either as main presenter or coauthor), the greater your chances of having a proposal accepted and thus being on the program (see the "tenure hustle" comment above).
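The arithmetic behind that incentive is simple. If you (unrealistically) treat each submission as an independent draw against the division's acceptance rate p, the chance of landing on the program at least once is 1 - (1 - p)^n. The 40% rate below is an invented figure, not an actual AERA statistic:

```python
# Odds of appearing on the program at least once, assuming each of n
# submissions is accepted independently with probability p.
# The 0.4 acceptance rate is a made-up number for illustration.
def p_at_least_one(p, n):
    return 1 - (1 - p) ** n

for n in (1, 2, 3, 5):
    print(f"{n} submission(s): {p_at_least_one(0.4, n):.0%} chance")
# With p = 0.4: one submission gives 40%, two give 64%, five give 92%.
```

Real submissions aren't independent, of course (reviewer pools and topics overlap), but the independence assumption is generous in the right direction: even a crude model shows why getting your name on several proposals beats polishing one.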

This consequence is obvious to some, but it's the type of tacit knowledge that isn't passed on to others as part of their grad-school socialization. Many of us work in relatively solitary fields (philosophy, history, etc.), where being on the margins of someone else's work doesn't seem to deserve the intellectual credit of coauthorship. So someone coming from such a field would probably not be told by her or his advisor that to maximize one's chances of appearing on an AERA program, you need to network and increase the number of submissions your name is attached to. In my subfield, the usual advice is to collaborate with others on a coherent panel, which is supposed to have a higher chance of acceptance because of the quality and relevance of the complete panel. That works at some conferences, where there are advantages to complete panels, but in most divisions at AERA it is unlikely to be true.

I'm not griping about the system this year, since I did the logical thing: I collaborated with a colleague where we could ethically submit two proposals (one emphasizing my side of the work and another emphasizing his), agreed to be put on as a panelist by a third colleague, and took another role for a fourth person. The proposal where I was the presenter happened to get accepted. Was that because my proposal was the most qualified? Not likely. Just having several proposals with my name on them helped, and the odds worked in my favor. But if you haven't learned this and your single proposal to AERA was rejected, you now know what you need to do: get your name on multiple submissions to the next AERA program. The submission deadline is in the summer, so it's time now to start networking for next year. Don't be unethical: network where you really can be a contributor. But if your tenure depends on AERA appearances, it's (sadly) in your interest to play this game.

Ultimately, I suppose AERA could be overwhelmed if researchers decided to band together in 100-person units, each member submitting 2 paper proposals as primary presenter with the other 99 listed as coauthors. That is unlikely to happen, but the ad absurdum thought experiment should make my point clear: increased numbers of submissions do not inherently improve the quality of accepted panels at AERA, even with lower acceptance rates, and those who work in large research groups have an inherent advantage in a metastasized conference like AERA.

There are some potential fixes I can imagine:

  • Divide single-authored proposals from multiple-authored proposals in the reviewing process, so single-authored proposals are compared only to single-authored proposals, and likewise with multiple-authored proposals.
  • Have some metric of reviewer trust within AERA. No, I have no clue how this could be done feasibly.
  • Subdivide the reviewing process so program volunteers only have a limited number of submissions to work with and can read and filter the reviews without going bonkers.

There are two reasons why AERA should care about this problem. First, it's an issue of equity. AERA's annual meeting is already designed in a way that benefits faculty at better-funded institutions that can support travel to, and several nights' stay in, expensive hotels in New York, San Diego, Chicago, etc., and those are the same faculty who are likely to have research groups (i.e., grad students) that foster a multiple-submission system and increase the odds of appearing on the program.

Second, it's also an issue of program quality. AERA is the best evidence I know that a high rejection rate does not by itself improve a program. A rejection rate is only meaningful if the process reliably keeps stronger proposals and filters out weaker ones. Apart from disciplinary and other differences in what you or I may consider stronger or weaker proposals, I just don't think the system is working at AERA. That doesn't mean I'm going to abandon AERA entirely (I've reviewed dozens of proposals over the years, regardless of my own participation), but I am one of those on the margins of AERA in large part because the current annual-meeting structure is dysfunctional.

Posted in Reading on March 31, 2008 2:46 PM