June 8, 2010
The value of college III
Part of the value of a good college education is that much of it is surplus. In the same way that the early nineteenth-century education of women could have been perceived as superfluous, a good deal of what students learn could be seen as not directly or immediately useful in their lives. To some economists, this may smack of inefficiency: why should we educate anyone beyond what we can see as an immediate payback on the job or in life? To others, this gets absorbed in a metastatic notion of human capital, where everything good in life is redefined as investment. (Read the new introduction in the 1993 edition of Gary Becker's Human Capital if you doubt me: not only are schooling and standalone job training considered human capital, so is love from one's parents.) Claudia Goldin and Larry Katz refer generically to education as critical to handling changing technology on the job, which makes a certain amount of sense as long as you're not operating a picture-based point-of-sale register (technology can deskill jobs as well as require greater skills). Goldin, Katz, and Uwe Reinhardt are definitely well-meaning, and I'd want them all at my back in an unlit economics-department hallway. But at some level, the economic justification of surplus education is troublesome because it is a black box (how the extra education works exactly isn't modeled); the slop between formal schooling and economic utility (which I've termed surplus) is a fundamental problem for how economists approach education.
An inefficient education as useful play
So let's turn from economics to anthropology for some help. In 1973, American Anthropologist published Stephen Miller's "Ends, Means, and Galumphing," which explored the social and evolutionary purposes of play. It's reasonably well-cited for a social-science article, but more importantly it's widely cited in areas as diverse as educational and social psychology (where you might expect it to be cited) and... well, it's cited in "Marketing in Hypermedia Computer-Mediated Environments" (1996, in the Journal of Marketing). In other words, it's got legs. Miller argues that one can define play within multiple species as activity that is deliberately inefficient and where the individuals involved gain pleasure from facing challenges that stem directly from the inefficiency, whether we're talking formal inefficiencies such as the rules of baseball and chess or informal make-believe... or activities one might find in college such as analyzing a real or fictional company's operations, writing a history paper, spending ten or more hours talking about a single play of Shakespeare, and so forth.
More importantly, Miller argues that play has some advantage for a species in that it turns specific skills into general problem-solving capacity. In play, one uses skills repeatedly and in a range of combinations. (One could argue a little differently about some videogames I know, but I'm describing his argument, not making my own, and the point would still be important even if you removed videogames that require nothing but exactly-repetitive behavior.) Play looks remarkably inefficient in one way, but it has important adaptive value in another.
So too with much of formal education. I could make the same faculty-psychology arguments on behalf of studying history that many people do: not only does it provide specific knowledge of certain times and places, it also prepares you for any career that requires the presentation of linear arguments with specific time- and place-bound evidence. (Legal brief, anyone?) It teaches you about human foibles and prepares you for situations where you have to suspend antipathy towards individuals to identify potential motives and key interests. David Brooks makes all of those arguments in his column today.
But that type of argument has always struck me as beside the point, not because history majors do not have practice in those skills but because any faculty-psychology argument is easily turned into a nebulous "this will help you learn critical thinking" claim, which my time-and-place-specific training makes me skeptical of. Yes, majoring in history will help you in a lot of fields more than not going to college at all, but it's hard to argue that a history major is better suited to a professional biochem lab's gruntwork than a math or physics major, even if the gruntwork has occasional public presentations attached to it requiring linear arguments with detailed evidence (see above on that refrain).
(Margaret Soltan argues a different point today, asserting that the value of the humanities is in the embodiment of human frailty, not its rational analysis. She writes, "For [William Arrowsmith], a prolonged encounter with the humanistic tradition amounts to a more and more sensate anguish at the recognition of our own chaos." I'm not going to argue with her or Arrowsmith, since I'm sure many a student in a Milton seminar has probably had crises of faith, and I had the odd experience of The Painted Bird as a soothing read at the end of my first semester in college. I'm just making a different point that can stretch beyond the humanities.)
An honest explanation of the value of college acknowledges that when college accomplishes what it can, a good part of that achievement is teaching students how to play with ideas in thoughtful ways and follow up that play in a reasonable, rigorous manner. This is neither a comprehensive nor exclusive way of thinking about college: formal schooling doesn't guarantee this result, and there are plenty of wise people in this world who can play with ideas without having finished secondary school, let alone college. But you're far more likely to get adults who can play with ideas in a productive sense if some critical mass of them have attended formal schooling where that was one of the outcomes.
I think Stanley Fish and gaming-for-learning enthusiasts are some of the more extreme proponents of this view, though they may not like being put in the same bin. Sometimes eloquently and at other times inarticulately, Fish argues (or just implies, as in yesterday's piece) that playing with ideas is the purest and highest aim of college and university life. That's a good part of the reason why he is allergic to some other conceptions of teaching (such as passionate engagement in the world). Those who have pushed for the insertion of game design in teaching likewise see value in gaming in and of itself, and they have the well-intentioned goal of spreading that joy to students through games in the classroom.
I do not think the promotion of intellectual play is the sole purpose of higher education, which is why I do not agree with Fish on his "save the world on your own time" refrain, which would place a wall between classes and any concern with what happens off a campus. Nor do I think that constructing game-like structures inside classes is the only way to promote intellectual play, which is why I have only experimented in a tiny way (and not that well) with game-like structures inside classes. Instead, what a good college (and many a good high school course) provides is the foundation, tools, and time and space for students to play with ideas.
This play needs to be rooted in specifics: some critical mass of specific knowledge in an area, which includes stuff we might call factual information and also knowledge about important questions that have been and continue to be asked in the discipline or field. In most (but not all) colleges and for most (but not all) students in those colleges, that foundation and set of tools require some breadth and some depth. You can't be a great student of history without knowing a sufficient amount about some critical mass of places and times, or without knowing a sufficient amount about some critical mass of other fields that bring other questions to bear on the ideas you're playing with.
And then you need the opportunities and encouragement to play with ideas in important ways. Sometimes these come in structured assignments that look playful, sometimes in serious assignments that engage students in the flow that positive psychologists write about, and sometimes the opportunity comes in extracurricular activities. Again, none of this necessarily requires formal schooling, but the playful autodidact must discipline herself or himself, and a formal school can provide structures to encourage this type of engagement. The institutional nature of a school can often grate on those within its walls, but it can also provide helpful structures. From an historical standpoint, the amazing feature of non-mandatory secondary and postsecondary education is not that one-quarter of teenagers leave high school and two-thirds of young adults do not complete a B.A. but that so many finish when there is no law requiring it. Normative expectations play an important role, and that is as true for shaping behavior within a school as for standing outside it and pushing students towards school.
Justifying public subsidies
Okay, some of you must be thinking, I'll follow this argument about the play of ideas as long as formal schooling doesn't cost much. But why should taxpayers subsidize this, and why should someone incur more than $100,000 in debt to learn how to play with ideas? Taxpayers should subsidize surplus education because it's worked for society in the past, which may seem highly unsatisfying but is true with one caveat (below). More pragmatically, the obviously-useful parts of higher education easily justify the subsidy, and what appear to be "frills" are comparatively cheap: try to tell a provost that the English department or history department is a money-waster, and she or he will laugh in your face with good reason: humanities faculty are generally the cheapest dates in any place, in part because of their low salaries and in part because even at the ritziest research universities they don't require several hundred thousand dollars in start-up money each. Doubt me? Go ask your local university the annual maintenance costs per student of an intro-chem lab and an intro-languages lab.
Costs to students: the car rule-of-thumb
Student debt is a different issue. I don't think someone should incur more than $100,000 in debt for an undergraduate education. However, that issue is complicated by stories about new college graduates with mountains of debt that come from enrollment in private schooling, either non-profit colleges and universities or for-profit programs. We need to watch the debt issue, but the largest debts originate disproportionately outside public colleges and universities (i.e., they are not what the solid majority of students face). There are plenty of public colleges and universities where the average debt for graduates carrying debt is under $20,000, and that's a reasonable debt to incur for the part of a college education with likely immediate payoffs in the job market (assuming that there's a job market in the next few years). In addition, the creation of income-based repayment plans is a buffer against college debt peonage if debt begins in the federal loan programs that are covered by income-based repayment. Again, that's easy when you're talking about public colleges and universities. Fortunately, a very large majority of high school seniors and their families are skeptical of mountains of debt, which is why (for example) two of my daughter's closest friends are going to the University of Florida next year rather than Rensselaer, Rutgers, or Georgia Tech (some of the other places one or the other was accepted, where they would have paid out-of-state or private tuition).
(As I've noted, private loans and gigantic debt coming from attendance at private institutions comprise a different matter, in addition to credit card debt. Part of the role of Pell grants, the new GI Bill, and federal loans is to encourage families to take on both subsidized and unsubsidized loans. That may sound remarkably like the type of public-private partnership that's become common in economic development, except that here, families and students incur substantial risk. Private non-profits and for-profits are in the same boat here, receiving a federal subsidy that's often bundled in with additional unsubsidized loans that families and students carry forward, something NYU is struggling to respond to, at least. And all university administrators who approve privacy-invading deals with credit-card companies should rot in Purgatory for a very, very long time.)
There is another way in which student debt is taken out of context: for full-time students and a number of part-time students, a significant part of the cost of college is the opportunity cost of not being in the labor market (or giving up some job opportunities, for part-time students). That can end up in debt if students borrow to pay for living expenses while going to school, and in any case, it reduces income and the accumulation of job experience. For a few years, that's more than balanced by expected greater earnings. The opportunity cost of not gaining job experience becomes a larger issue for someone who is out of the job market for an extended period, as happens with longer graduate programs (such as programs that have an average time-to-degree of nine years for students who finish, and that would be on top of the time spent in an undergraduate program).
A few rules of thumb, to summarize on debt and opportunity costs of attending college: if the direct debt incurred by going to college is on the order of magnitude of an economy or low-priced midsize car, it's justified by the anticipated concrete returns, so the chance to play with ideas isn't a giant financial risk. Don't go into debt on the order of a house note unless the degree leads directly to a lucrative career (e.g., medicine or law, and even there I have some questions). And if you're going to spend more than ten years out of the labor market as part of getting an education, definitely get that economy-car-sized education.
The assessment dilemma
Let me return now to the issue of public subsidies in part for what might look like surplus education. Part of the justification for public subsidy (concerned with value) is taken care of by the parts of college you can identify concretely as human capital, specific bits of skills and knowledge with clear social benefits. Part of the justification for subsidy (concerned with cost) is taken care of by the fact that the more expensive parts of college and university academic programs are concentrated where you see more clearly identified returns (the "humanities are cheap dates" principle). (Athletic programs and student affairs are different subjects.)
That might be enough from the perspective of some faculty (and Stanley Fish and David Brooks, at least this week), but the push for accountability in learning outcomes in higher education can easily be turned into the type of mechanism that squeezes out opportunities and structures for playing with ideas. For the foreseeable future, there will be key actors in several states who would be willing to impose reductive standardized testing on colleges and universities. That is the alternative to the current set of assessment mechanisms embedded in regional accreditation. So let's look at assessment and accreditation with regard to playing with ideas.
The black hole of accreditation-centered assessment
Assessment in the context of regional accreditation is best thought of as meta-assessment, where accreditors hold colleges and universities responsible for having a curriculum and assessing how well students learn it. That putatively gives institutions the freedom to create a structure consistent with a unique mission as long as there is assessment of student learning. In reality, this type of meta-game can be difficult to navigate, and the default behavior leans heavily towards mimesis: many colleges and universities hire consultants familiar with a particular regional accreditor, and they tend to suggest whatever structure has enabled similar institutions to pass muster. In addition, because consultants (or former consultants) are sometimes brought in-house to handle the logistics, they focus on the parts of the process that are most easily managed and cause the least hiccups internally... and that often turns into a small universe of reductive measures available commercially, especially for general-education goals. (Want to assess writing? Let's try the ABCXYZ. Want to assess problem-solving? Let's try the ABCXYZ. Want to assess critical thinking? Let's try the ABCXYZ. Yes, of course we can create our own in-house assessment, but we'd also have to justify its use to our accreditor, and it's just easier to use the ABCXYZ; why don't we at least try that as we're developing our own...) There's a reason why the Voluntary System of Accountability specified one of three cognitive measures: it piggybacked on existing trends in accreditation and institutional inertia.
My general concern is that the mechanisms of assessment through regional accreditation can become the black hole of faculty time, absorbing everything around it and making it difficult to plan a structure for more engaged projects or the type of activity I have described as intellectual play. In addition to what else I could say about that narrow range of measures, the long-term problem with institutional meta-gaming is that the rules of the game can change, sometimes with nasty consequences for faculty time. Every time that an accrediting body changes the rules by which institutions have to set rules for students (i.e., the curriculum), faculty have to rework their lives and often entire programs of studies to accommodate the changes. Every time my state reworks licensing requirements for college-based teacher education, or changes the rules for state review, faculty in my college have their time stolen by the logistics of meeting the rules. (Please don't ask a Florida dean of education to describe the double-standard between the rules for college-based teacher education and alt-cert unless you have a few hours.) One of the consequences is an overburden on both faculty and student time. Let me stop talking about faculty time and focus instead on student time: Look at a few random programs of study for baccalaureate programs in nursing or education. Count the number of elective courses. Compare with a program of studies in any social-science or humanities major. Then pick your jaw up off the floor.
On the one hand, the licensure requirements make a certain amount of sense from the perspective of professional training: you want teachers, social workers, and nurses to have the tools to do the job. On the other hand, an undergraduate education that is devoid of anything but instrumentalist technical courses is job-training and nothing else. And especially for teachers, that is inconsistent with one central purpose of college and dangerous for what we'd like them to do on the job. And the Holmes Group's proposal to shift all teacher training to the master's level is unrealistic for working-class students if you apply the car-cost limit to student debt for future teachers. I am not sure there is a good way out of this problem for elementary teacher education, and it is on the extreme end of the "no room for thought" problem we face with accreditation-based assessment.
Outside elementary teacher education, there are a few escapes, but none are palatable. Ignoring assessment requirements of accreditors is either fatally brave or foolish, so what's left? Assessing intellectual play. You can stop groaning now. Yes, attempts to assess "creativity" make you tear your hair out, and the thought of assessing intellectual play makes you want to punch me out, either for the oxymoron or for the threat of one of these projects unmoored from substance and rigor. But from an institutional standpoint, a faculty member in one of those regions with an accreditor that threatens micromanagement can either tilt at windmills or see what the power might be used for. I've got a limited appetite for windmill-tilting, and I've got enough blunted spears in my garage for a lifetime, thank you very much. This may sound like squaring the circle or getting out from within the horizon of a black hole, but the ability to assess intellectual play would allow faculty to justify all sorts of projects within an existing accreditation framework.
Defining and assessing a challenge
First, a reminder of Miller's notion of galumphing, or play: pleasurable activity that is deliberately inefficient and encourages the combination of existing skills to accomplish the self-defined or agreed-upon goals over and around the obstacles presented by the constructed inefficiencies. The tricky part of assessing such activity is to focus not on the issue of pleasure but on the meta-rules that characterize the nature of the activity. For this purpose, it's best to think about a circumscribed type of intellectual play: a challenge that is at least partially well-defined, based in considerable part on what others have done (i.e., not entirely reinventing the wheel), and that requires putting together at least a few skills. Then the assessment of the student activity has two levels: the level of the meta-game, where you assess how well the student defines the challenge, shows where and how the project relies on other work or is new, and how well the student used multiple skills; and the level of the project itself, where disciplinary conventions come into play...
And for history, at least, the disciplinary conventions match fairly well with the first level: having an appropriate historical topic, using the historiography in a sensible way, and handling a range of evidence and argument structures. The guts of most undergraduate history papers are in that last catch-all category: "handling a range of evidence and argument structures." There are a number of more idiosyncratic and less comparable assessment frames (such as student reflection on engagement), and this short essay is about the larger picture, not a detailed (let alone a tested!) framework for assessing intellectual play. And this sketch is about a narrowly-defined type of challenge, with lots left out. But it's a way to think a bit about the issue... or play with the idea of assessing playing with ideas.
Tools to explore
A few words about some recent developments to watch in this vein. The Lumina Foundation's Tuning project could have begun within a regional accreditation context, but it's geared instead towards a proof of concept that a faculty-driven definition of outcomes and assessments can simultaneously honor disciplinary conventions and satisfy external constituencies (thus the term "tuning," to get everyone singing in the same key: I've got to ask Cliff Adelman sometime whether it's harmonic or tempered tuning). If I remember correctly, the first discipline-specific reports should have been available on the foundation website sometime this spring, but they're not there now (just a cutesy cartoonish presentation of the idea along with Cliff Adelman's concept paper and other materials from 2009). At first glance, it looks like an application of the accountability framework of the Association of American Colleges and Universities (i.e., the liberal-arts office in One Dupont Circle). But without sample exemplar projects, it's hard to judge at the moment.
Then there's the movement for undergraduate research. When my daughter and I were visiting colleges over the past few years, it was clear that every institution, public or private, devoted resources specifically to undergraduate research. Then again, these were generally small colleges where undergraduates were the only research assistants that faculty would be getting. On the third hand, undergraduate research is a type of operation that both liberal-arts colleges and universities are trying to develop and promote, albeit with different understandings of student engagement. I think my alma mater (a small liberal-arts college) now requires seniors to engage in a major thesis-like project. At my current university, that's expected only of Honors College students, and the resources of the Undergraduate Research office are available to all in theory and would be totally swamped if every student asked to be involved. Again, neither Tuning nor undergraduate research is a model in any practical sense of the word, but they're something to watch and, if nothing else, they provide a few rocks on which to stand and survey the landscape of playing with ideas.
May 23, 2010
A hexadecimalful for hacking the academy
I do not regret not applying for THATCamp Prime (The Humanities and Technology unconference) this year, as it fell on the weekend of my anniversary, but I do miss the conversation as I woke up this morning reading the tweets (#thatcamp if you're curious), and I hope those participating in the game jam write up their notes for more public consumption. One of the side projects is Tom Scheinfeldt and Dan Cohen's call for contributions to Hacking the Academy, which they hope to collect within a week. A book in a week? I'd consider that a wee bit ambitious if I didn't know them. And I'm glad I'm not teaching this summer, so I have the time to write a short essay.
I am an incrementalist radical, certain that change can happen, good change, without enormous discontinuity. So my vision of hacking the academy is less disruptive than what others imagine. In many ways, the academy has been in the process of being hacked for decades. My own experience as a student and academic illustrates that history. I was in the generation of college students who often enough began high school with typewriters and ended college or graduate school with computers (mine: a Leading Edge XT bought when I entered grad school in 1987). In college, I took a Greek literature in translation course from someone who was a founder half a decade later of the Bryn Mawr Classical Review, one of the first online humanities journals. In graduate school I was among the third generation of social-science historians to learn SPSS or SAS; I hope I was among the last to do part of my dissertation research on a mainframe VAX. At the end of grad school, I searched for job openings using a Gopher connection to the Chronicle's job database. I created my first webpage in 1996, and I've been blogging jobwise since 2001 (though that first entry on March 24 could best be used to point out the frequent tedium of an assistant professor's life, not Hello, world so much as I'm in the office on another Saturday, world). In H-Net's first few years of existence, I was one of the hundreds of active participants on its collection of humanities e-mail lists. According to Google Scholar this morning, my five most-cited publications include my first book, two articles in standard journals (one sponsored by a scholarly society, one stand-alone), and two articles in an online-only journal. The most cited? My first article in an online journal. My education and professional life have been touched in fundamental ways by previous efforts to "hack the academy."
So what does this history mean? Some would say such incremental change is insufficient, and we should blow up higher education. In the Twitter stream tied to THATCamp this year, Mark Sample argued, "[H]igher ed is terminally ill and hacking it only prolongs its stay in the hospice." Historians of education are familiar with this rhetoric of radical reform, though the literature on it generally focuses on K-12 rather than higher education. There are plenty of attempts to "blow up" higher education by creating institutions that veer off from the plurality of practices. Sometimes those survive on the margins, or die out, and sometimes they become a model (if not necessarily an ideal). Antioch College, New College of Florida, Evergreen State College, and UC Santa Cruz have their origins in such attempts. Two of those started life as public institutions, one became public after a funding crisis, and Antioch's future is still in doubt. Or, if you want to go back further in time, both Clark University and Johns Hopkins University were founded with the intent to stay away from undergraduate education, and instead Johns Hopkins became the model for the modern large university, including high-priced tuition for undergraduates.
As an historian of education, I'd caution my more utopian colleagues that both institutions and people have to pay bills regularly, and that being able to pay bills allows innovation to thrive and spread. The wonderful thing about the internet is that you can do all sorts of intellectual work with minimal infrastructure. The damned thing about the internet is that resources still matter, especially if you want to foster a community of practice. Life involves compromise, even for hackers. With that in mind, here is a digital handful of ideas for hacking the academy, starting with the necessary and institutionalist and moving to what I would like to think of as the more inspirational:
0. Tenure-track faculty at research universities need to demonstrate competence in conventional ways. The bad news (if you were hoping to gain tenure at a research-oriented place with an experimental form of scholarship): if you are an assistant professor at a place that requires scholarship for tenure, you have roughly five years to get stuff published in a recognized way, and that means ways that external reviewers will recognize. The corollary: if you're a graduate student wanting to be a faculty member at a place that values research, you need to develop those competencies. The bittersweet news: because so few faculty are tenure-track at research universities, the vast majority of scholars are free to be innovative. That includes tenured faculty but also librarians, museum staff, and anyone who can find an alternative academic career path. If you are on the tenure track, you need to think about a career that lasts 20-30 years. If you demonstrate your chops in conventional ways in your first 5 years, you have the vast majority of your career to take greater risks.
1. We can build a broader coalition for reforming promotion considerations (but probably not tenure criteria) by discussing the value of taking risks in scholarship. If you're an aspiring digital humanist and are frustrated that a curated online website is not valued as scholarship in the same way as a university-press monograph, even if it's used by hundreds of classes or scholars worldwide, look at your colleagues who are conducting engaged scholarship in communities, where projects take years to get funded, help communities, and become translated into refereed articles. Or look at your colleagues who worked their tails off to earn tenure, only to find themselves as associate professors caring simultaneously for children and aging parents. Yes, the vast majority of them are women, and they find themselves with tenure but also with a gap in their scholarship record. The best way out of all these dilemmas is to argue that institutions should value long-term, risky projects when they demonstrate their value to the broader scholarly community. One could argue that the obligation of a scholar with tenure is not to continue doing the same work that earned the tenure but to take greater intellectual risks. Let's find common cause by appealing to broader values.
2. The transition to post-publication review is in process. arXiv is leading the way as a recognized outlet for working papers in an entire discipline, and somehow physicists don't agonize about the peer-review process, even as journal publication still conveys an imprimatur of quality. How post-publication review develops is something I cannot predict, but there are a number of reasons why we are likely to head in a different direction, from the expenses of humanities journals to the diversification of bibliometrics and the weak ethics of author-fee journals with high acceptance rates, or what some hard-sciences faculty refer to as "write-only" journals. Assistant professors may not be happy that a provost wants to see their h-index, but they should be happy that Google Scholar will find a good chunk of the people (if not everyone) who cited their conference paper from three years ago.
3. Senior scholars have an obligation to advocate for the ideas explained above. I expect to be around for another few decades, and I want my university to be a place where I like to work. What's the value of being a tenured full professor if we don't help colleagues and encourage risk-taking in scholarship? This involves both the realistic advice we have to give new scholars and ways to nudge academic administrators with arguments we know are more likely to appeal to them. If we don't speak up, we let the most powerful and conventional win by default, and we fail in our obligation to make the "codes of power" (see Lisa Delpit) explicit and open.
4. Expect large universities to abandon good initiatives on a regular basis unless there are forceful incentives that inhibit double-crossing. In April 2010, Yale University stopped contributing to the Public Library of Science journal system, despite a symbiotic relationship (where Yale scholars have increasingly contributed to PLoS journals). Institutional support: great idea. But there was nothing to inhibit Yale's withdrawal apart from reputational risk. There's a reason why Elsevier is hated: they're very effective at rent-seeking. Don't become Elsevier, but if you run an innovative project, don't avoid or hate the time you spend thinking up how to diversify income. It'll keep your people employed when Yale kicks your project to the curb.
5. Reputational markets are the tip of the iceberg in academic economies, and expanding or creating new economies is one route to hacking the academy in both peer review and funding. The Berkeley Electronic Press system has a formalized credit system for authors and reviewers in the form of its A&R bank. In the Twitter stream for THATCamp Prime 2010, Jo Guldi suggested a pledge-support system for creative scholarly initiatives. This payback collaboration is a viable, sustainable model in other environments; for example, one early-childhood intervention program in Tennessee relies on a reciprocal-obligation model for services, where parents in the program are obliged to pay back services by becoming volunteers after their children exit.
6. Tight networks should raise red flags. The network of self-labeled digital humanists comprises mostly white academics, library and museum staff, and independent scholars. That is broader than disciplinary societies in one sense but misses lots of people who might consider themselves digital humanists if exposed to the idea, including the growing population of people connected to cultural heritage sites. That omission is a missed opportunity to make tools and conversations more useful as well as to make digital humanities more sustainable in the long term. There is a solid reason for departments and similar structures to exist inside an organization, but your good sense should prompt occasional trips outside your hallway. Periodically ask, Who is missing from the conversation?
7. Some projects are going to be ephemeral; either plan for obsolescence or plan for periodic rebuilding. Archivists remind us regularly that formats are not forever. The same is true for individual projects that require continuous maintenance, whether specific intellectual enterprises or the infrastructure (such as base code). The earliest online journals began as e-mail lists; those which survive are now on the web. H-Net has atrophied in part because it has never undergone a complete rebuild, despite internal advocacy for one.
8. Some projects should be ephemeral. This is not necessarily a bad thing: like a sand mandala of Buddhist monks, an ephemeral project can teach us much during its existence even while and perhaps because everyone involved knows it is time-limited. If you work on a project that will most likely burn brightly for eighteen months and no more, be happy and up-front about that fact. Make your fans miss the project when it's gone.
9. Not everyone will or should be on the bleeding edge. Especially for specific tools, there is some maturity threshold before a piece of technology becomes more broadly usable (if it ever does). For example, online conference software exists, and particularly adept scholars can put together a virtual conference if they are willing to invest more effort than a lot of people might. With some effort I could probably create a one-day workshop using Google Wave. But how many would participate? In a few years, there might be a package that is closer to turnkey status, and then virtual conferences will be more feasible because organizing them will require less effort for the infrastructure.
A. A critical mass of users enables not only a rapid change of practice but the breaking of barriers. The corollary of not everyone's being on the bleeding edge is that one needs to know when enough people have a technology to assume its availability and to push hard at barriers. For example, now is the time to push unwieldy scholarly organizations to negotiate members' wifi in conference hotel contracts. Twitter may not exist in a few years, but internet access for attendees will make conferences more useful for whatever exists, to connect people and enable more engagement than listening to 20-minute paper readings.
B. Students need rules made explicit, and these include the hidden rules of life and scholarship, especially when a faculty member is trying something new and risky for students as well as the teacher. By rules I mean, "Here's how you get stuff done with minimal pain." And also the meta-stuff: "Here's what I'm trying that's new, here's why and what I expect you to learn, and please tell me when I'm screwing up." The immediate corollary inside a university or college is love your librarians, for they will often teach students what you forget to. The second corollary is reveal the hidden secrets in bits and pieces. I have very long undergraduate syllabi, but I know the students who most need the information are least likely to read and remember everything, so I expect to repeat the same information at key points in a term. The head of the martial-arts center I attend regularly introduces corrections with, "Here's a black-belt secret..." Everyone loves to know secrets, especially students.
C. Surplus time is necessary for students to be creative and rigorous. The explanation is left as an exercise for the reader just before going to bed. If you're working too late tonight to be able to think as you brush your teeth, please reread the first sentence of this paragraph.
D. We make our teaching more effective if we can figure out how the class can seduce students. My first year at USF, I was hoarse halfway through each semester, and I decided I needed to take voice lessons if I wanted my career to last without ruining my voice. One of the most important concepts I learned was that every time I took a breath, it was a chance to start a beautiful phrase. Every group of students has at least one wonderful new scholar, and on the first day of the term, you have not yet bored them. That doesn't mean hacking our teaching should focus on entertainment. But it should make the experience irresistible.
E. Can you explain what you do to your neighbors, and have you invited them to look at your website and at the websites of people and projects you admire? Academic freedom means that you do not cater to political whims of the moment, but higher education should not throw away the enormous benefit of being perceived as a public good. Since so much of hacking the academy results in public work, that should be public in a broader sense of being known to the general public.
F. Make time to dance. Everyone gets grumpy on occasion, but it's hard to sustain scholarship or creativity (and get others to support you!) if you're permanently grumpy. If you are no longer motivated by the joy or beauty of what you're doing, rediscover it or reinvent what you're doing until you discover a new source of joy.
August 13, 2009
How can we use bad measures in decisionmaking?
I had about 20 minutes of between-events time this morning and used it to catch up on two interesting papers on value-added assessment and teacher evaluation--the Jesse Rothstein piece using North Carolina data and the Koedel-Betts replication-and-more with San Diego data.
Speaking very roughly, Rothstein used a clever falsification test: if the assignment of students to fifth grade is random, then you shouldn't be able to use fifth-grade teachers to predict test-score gains in fourth grade. At least with the set of data he used in North Carolina, you could predict a good chunk of the variation in fourth-grade test gains knowing who the fifth grade teachers were, which means that a central assumption of many value-added models is problematic.
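Rothstein's falsification logic can be sketched in a few lines. What follows is a toy simulation with made-up numbers, not his data or code: if fifth-grade classroom assignment tracks prior achievement, then fifth-grade teacher dummies will "predict" fourth-grade gains, whereas under random assignment they carry essentially no information about the past.

```python
import numpy as np

rng = np.random.default_rng(0)

def falsification_r2(gains4, teacher5):
    """R^2 from 'predicting' fourth-grade gains with fifth-grade teacher
    dummies; with dummies only, OLS predictions are teacher-group means."""
    pred = np.empty_like(gains4)
    for t in np.unique(teacher5):
        mask = teacher5 == t
        pred[mask] = gains4[mask].mean()
    ss_res = ((gains4 - pred) ** 2).sum()
    ss_tot = ((gains4 - gains4.mean()) ** 2).sum()
    return 1 - ss_res / ss_tot

n, n_teachers = 2000, 40
gains4 = rng.normal(0.0, 1.0, n)  # hypothetical fourth-grade score gains

# Random assignment to fifth-grade classrooms: the future teacher
# carries no information about past gains.
random_t = rng.integers(0, n_teachers, n)

# Tracked assignment: students sorted into classrooms by prior gains.
tracked_t = np.argsort(np.argsort(gains4)) * n_teachers // n

r2_random = falsification_r2(gains4, random_t)    # close to zero
r2_tracked = falsification_r2(gains4, tracked_t)  # large: assignment "predicts" the past
print(r2_random, r2_tracked)
```

Under random assignment the R-squared is only the small overfitting you expect from 40 group dummies; under tracking it is large, which is the red flag Rothstein's test looks for.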
Cory Koedel and Julian Betts's paper replicated and extended the analysis using data from San Diego. They were able to confirm with different data that using a single year's worth of data led to severe problems with the assumption of close-to-random assignment. They also claimed that using more than one year's worth of data smoothed out the problems.
Apart from the specifics of this new aspect of the value-added measure debate, it pushed my nose once again into the fact that any accountability system has to address the fact of messy data.
Let's face it: we will never have data that are so accurate that we can worry about whether the basis for a measure is cesium or ytterbium. Generally, the rhetoric around accountability systems has been either "well, they're good enough and better than not acting" or "toss out anything with flaws," though we're getting some new approaches, or rather older approaches introduced into national debate, as with the June Broader, Bolder Approach paper and this morning's paper on accountability from the Education Equality Project.
Now that we have the response by the Education Equality Project to the Broader, Bolder Approach on accountability more specifically, we can see the nature of the debate taking shape. Broader, Bolder is pushing testing-and-inspections, while Education Equality is pushing value-added measures. Incidentally, or perhaps not, the EEP report mentioned Diane Ravitch in four paragraphs (the same number of paragraphs I spotted with references to President Obama) while including this backhanded, unfootnoted reference to the Broader, Bolder Approach:
While many of these same advocates criticize both the quality and utility of current math and reading assessments in state accountability systems, they are curiously blithe about the ability of states and districts to create a multi-billion dollar system of trained inspectors--who would be responsible for equitably assessing the nation's 95,000 schools on a regular basis on nearly every dimension of school performance imaginable, no matter how ill-defined.
I find it telling that the Education Equality Project folks couldn't bring themselves to acknowledge the Broader, Bolder Approach openly or the work of others on inspection systems (such as Thomas Wilson). Listen up, EEP folks: acknowledging the work of others is essentially a requirement for debate these days. Ignoring the work of your intellectual opponents is not the best way to maintain your own credibility. I understand the politics: the references to Ravitch indicate that EEP (and Klein) see her as a much bigger threat than Broader, Bolder. This is a perfect setup for Ravitch's new book, whose title is modeled after Jane Jacobs's fight with Robert Moses. So I don't think in the end that the EEP gang is doing themselves much of a favor by ignoring BBA.
Back to the substance: is there a way to think coherently about using the mediocre data that exist while acknowledging that we need better systems and working towards them? I think the answer is yes, especially if you divide the messiness of test data into separate problems (which are not exhaustive categories but are my first stab at this): problems when data cover too small a part of what's important in schooling, and problems when the data are of questionable trustworthiness.
Data that cover too little
As Daniel Koretz explains, no test currently in existence can measure everything in the curriculum. The circumscribed nature of any assessment may be tied to the format of a test (a paper-and-pencil test cannot assess the ability to look through a microscope and identify what's on a slide), to test specifications (which limit what a test measures within a subject), or to subjects covered by a testing system. Some of the options:
- Don't worry. Don't worry about or dismiss the possibility of a narrowed curriculum. Advantage: simple. Easy to spin in a political context. Disadvantage: does not comport with the concerns of millions of parents concerned about a narrowed curriculum.
- Toss. Decide that the negative consequences of accountability outweigh any use of limited-purpose testing. Advantage: simple. Easy to spin in a political context. Disadvantage: does not comport with the concerns of millions of parents concerned about the quality of their children's schooling.
- Supplement. Add more information, either by expanding the testing or by expanding the sources of information. Advantage: easy to justify in the abstract. Disadvantages: requires more spending for assessment purposes, either for testing or for the type of inspection system Wilson and BBA advocate (though inspections are not nearly as expensive as the EEP report claims without a shred of evidence). If the supplementation proposal is for more testing, this will concern some proportion of parents who do not like the extent of testing as it currently exists.
Data that are of questionable trustworthiness
I'm using the term trustworthiness instead of reliability because the latter is a term of art in measurement, and I mean the category to address how accurately a particular measure tells us something about student outcomes or any plausible causal connection to programs or personnel. There are a number of reasons why we would not trust a particular measure to be an accurate picture of what happens in a school, ranging from test conditions or technical problems to test-specification predictability (i.e., teaching to the test over several years) and the global questions of causality.
The debate about value-added measures is part of a longer discussion about the trustworthiness of test scores as an indication of teacher quality and a response to arguments that status indicators are neither a fair nor accurate way to judge teachers who may have very different types of students. What we're learning is a confirmation of what I wrote almost 4 years ago: as Harvey Goldstein would say, growth models are not the Holy Grail of assessment. Since there is no Holy Grail of measurement, how do we use data that we know are of limited trustworthiness (even if we don't know in advance exactly what those limits are)?
- Don't worry. Don't worry about or dismiss the possibility of making the wrong decision from untrustworthy data. Advantage: simple. Easy to spin in a political context. Disadvantage: does not comport with the credibility problems of historical error in testing and the considerable research on the limits of test scores.
- Toss. Decide that the flaws of testing outweigh any use of messy data. Advantage: simple in concept. Easy to spin in a political context. Easy to argue if it's a partial toss justified for technical reasons (e.g., small numbers of students tested). Disadvantage: does not comport with the concerns of millions of parents concerned about the quality of their children's schooling. More difficult in practice if it's a partial toss (i.e., if you toss some data because a student is an English language learner, because of small numbers tested, or for other reasons).
- Make a new model. Growth (value-added) models are the prime example of changing a formula in response to concerns about trustworthiness (in this case, global issues about achievement status measures). Advantage: makes sense in the abstract. Disadvantage: more complicated models can undermine both transparency and understanding, and claims about superiority of different models become more difficult to evaluate as the models become more complex. There ain't no such thing* as a perfect model specification.
- Retest, recalculate, or continue to accumulate data until you have trustworthy data. Treat testing as the equivalent of a blood-pressure measurement: if you suspect that a measurement is not to be trusted, take the blood pressure again in a few minutes; in schooling terms, test the student again in a few months or another year. Advantage: can wave hands broadly and talk about "multiple years of data" and refer to some research on multiple years of data. Disadvantage: retesting and reassessment work best with a certain density of data points, and the critical density will depend on context. This works with some versions of formative assessment, where one questionable datum can be balanced out by longer trends. It's more problematic with annual testing, for a variety of reasons, though accumulating years of data can reduce uncertainties.
- Model the trustworthiness as a formal uncertainty. Decide that information is usable if there is a way to accommodate the mess. Advantage: makes sense in the abstract. Disadvantage: The choices are not easy, and there are consequences of the way of modeling uncertainty you choose: adjusting cut scores/data presentation by measurement/standard errors, using fuzzy-set algorithms, Bayesian reasoning, or political mechanisms to reduce the influence of a specific measure when trustworthiness decreases.
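One concrete version of the last option (adjusting classifications by standard errors) can be sketched as a toy decision rule. The function name, cut score, and all numbers below are invented for illustration, not drawn from any actual accountability system:

```python
def classify(score, se, cut, z=1.96):
    """Hypothetical rule: act on a score only when its ~95% interval
    clears the cut score; otherwise withhold judgment."""
    lo, hi = score - z * se, score + z * se
    if lo > cut:
        return "above"
    if hi < cut:
        return "below"
    return "uncertain"  # trustworthiness too low to classify

print(classify(310, 5, 300))  # "above": whole interval clears the cut
print(classify(302, 5, 300))  # "uncertain": interval straddles the cut
print(classify(280, 5, 300))  # "below": whole interval under the cut
```

The "uncertain" band is where the other mechanisms in the list, from fuzzy-set or Bayesian reasoning to political processes that discount a measure, would take over.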
Even if you haven't read Accountability Frankenstein or other entries on this blog, you have probably already sussed out my view that both "don't worry" and "toss" are poor choices in addressing messy data. All other options should be on the table, usable for different circumstances and in different ways. Least explored? The last idea, modeling trustworthiness problems as formal uncertainty. I'm going to part from measurement researchers and say that the modeling should go beyond standard errors and measurement errors, or rather head in a different direction. There is no way to use standard errors or measurement errors to address issues of trustworthiness that go beyond sampling and reliability issues, or to structure a process to balance the inherently value-laden and political issues involved here.
The difficulty in looking coldly at messy and mediocre data generally revolves around the human tendency to prefer impressions of confidence and certainty over uncertainty, even when a rational examination and background knowledge should lead one to recognize the problems in trusting a set of data. One side of that coin is an emphasis on point estimates and firmly-drawn classification lines. The other side is to decide that one should entirely ignore messy and mediocre data because of the flaws. Neither is an appropriate response to the problem.
* A literary reference, not an illiteracism.
June 26, 2009
How to steer CYA-oriented bureaucracies, or why NCLB supporters need to think about libel law
Someone at USDOE sent me an invitation to listen to the June 14 phone conference where Arne Duncan explained how disappointed he was in Tennessee, Indiana, and other states with charter caps, let alone states such as Maine with no charter law, and how that disappointment might be reflected in the distribution (or lack of distribution) of "Race to the Top" funds (applications available in October, due in December, with the first round of funding out in February 2010). There are a few details that reporters didn't ask about (Duncan's somewhat surprising statement that a good state charter law would set some barriers for entry rather than establish a "Wild West of charter schools," and the way that small charter schools and charter schools with grade configurations outside state testing programs can stay off the radar for accountability purposes), but I was not surprised that two Tennessee reporters were called on for questions.
But apart from the selection of reporters for questions, the phone presser and other DOE moves made me think about the various uses of power in education-policy federalism. In limited ways, explicit mandates can be effective, if there is a sustained willingness within the USDOE (and esp. OCR) to make painful examples of the nastier school systems that try to evade those mandates. Offering technical assistance is another method, and despite the massive conflict-of-interest problems in Reading First, I agree with one of the researchers in the field who thinks that Reading First did improve primary-grade reading instruction, on balance. (Thumbnail version: hours-long scripts, ugh; explicit instruction in phonemic awareness and some other fluency components, obviously necessary.)
But neither heavy-handed mandates nor technical assistance can do everything, and neither works with the greatest motivation for both defensive and hubris-oriented bureaucracies: risk management. If you are a public school teacher or administrator, my guess is that you can identify some fairly silly action by your district that was motivated almost entirely by CYA motives, and if you can marry those CYA activities to pedagogy, you've been lucky or have a black belt in administrative maneuvering. (If you have such victories, please describe them in comments! Otherwise, we'll all wallow in the shared misery of observing defensive administering and the all-too-frequent ensuing silliness.)
I think the federal government can shape bureaucratic behavior to the good by using that risk management and structuring accountability policies around it. And here's the lesson I take from my high-school journalism class in ninth grade 30 years ago: libel law in the U.S. generally recognizes the truth as a positive defense against libel allegations. That seems like a backwards way to frame the legal issue -- after all, isn't it common sense that a publication is libelous only if it's false? -- but the notion of a legal positive defense gives an individual or organization a way to organize behavior that is both professionally appropriate and aligned with a legal defense that matches professional expectations. Because the truth is a positive defense against libel claims, even an idiotic general counsel for a newspaper or publisher looks to the professionally appropriate standard: is there documentation that the published work is true?
Sometimes a positive defense is not explicitly part of jurisprudence but evolves as practical guidance for clinical legal work and internal advice for school systems. Observing procedural and professional niceties creates exactly that type of positive defense in special education law. There is nothing in federal special education law to carve out an explicit positive defense for school-system behavior, but many articles written by Mitchell Yell over the past few decades constitute a convincing case that school systems now have a de facto positive defense: professional documentation of decision-making and scrupulous adherence to procedural requirements are a positive defense against a broad range of allegations by parents of and advocates for students with disabilities.
Yell has argued (persuasively) that due-process hearing officers and judges use procedural adherence and professional documentation as a filter in special education cases. If a school district can document that it has paid attention to procedural mandates and has met professional standards for documenting decision-making, then hearing officers and judges are extremely reluctant to look at the substantive merits of those decisions. But if a school district has ignored standard procedural expectations that most districts meet, or if it has kept no or inadequate documentation of its decision-making rationale, then all bets are off, and a hearing officer or judge will be much less likely to defer to the school district on professional judgments.
In essence, Yell implies, school districts can avoid adverse judgments if they pay attention to timelines and other procedural niceties and if they keep teachers and principals on their toes about current "best practices" as well as deadlines, notices, etc. Not all districts are aware of this positive defense, and I suspect that some enterprising special education researchers could make a mint running seminars: "How never to get sued again."
More broadly, I'm beginning to think that the construction of a positive defense against charges of incompetence would be healthy for school systems and state policies. The devil would definitely be in the details, but instead of being frustrated by a consistently observed school system behavior, maybe we should take advantage of that consistency.
December 21, 2008
Student debt, social investment in education and the search for a basketful of school
At the Social Science History Association conference this year, there were "author meets critic" sessions on two important books, Kathryn Neckerman's Schools Betrayed: Roots of Failure in Inner-City Education and Claudia Goldin and Lawrence Katz's The Race between Education and Technology. Together, the two books represent solid new work in understanding urban education (with Neckerman) or arguments about the relationship between education and the economy (with Goldin and Katz). In particular, Goldin and Katz's argument is both a brief in favor of investment in education and a reply to skeptics such as Alison Wolf, author of Does Education Matter? Myths about Education and Economic Growth (2003). (Wolf updates the older arguments along the lines of Berg, Freeman, and Braverman.)
In Neckerman's book, we see the behavior of parents and cities (or one city, Chicago), embedded in a very specific historical context. In Goldin and Katz's book, we see the behavior of parents and societies more generally, across more than a century. I suspect most of the reviews of Goldin and Katz will focus on their human-capital assumptions and their claims that the ratio of skilled-worker wages to unskilled-worker wages (and thus wage inequality) will drop if we move more of the workforce to the skilled (i.e., educated) end. I hope that at least a little of the discussion will make things a bit more complicated, not because Goldin and Katz are entirely wrong but because we need a better way to talk about how schooling works. Yes, education builds human capital, but it does a lot more, and even within a human capital lens, a focus on education and only education ignores a few other things. The rest of this post addresses some of the problems of social investment in education from within a human capital perspective. Criticism of that perspective and alternatives waits for another post.
Let's start with a family-strategy question: if you're a parent, what is the best strategy to make sure that your kids are healthy and happy adults and that they can raise their own children (your grandchildren!) in a life that makes you proud? A human-capital perspective says that education is the best investment, almost universally. Well, that's not quite right. If you happen to have five million dollars to invest in your child by age 25, you certainly can spend a good chunk of that money on what you could call human-capital investment: private schools, tutors, great experiences, colleges, grad school, etc. (You could also spend some of that money working less so you can spend quality time with your child; economists would still call that a good investment in human capital.) But you wouldn't spend all five million dollars that way: you'd invest the majority so that your child (and grandchildren) can have a safety net. (Let's assume that not all of that was invested in Lehman stock.) So for the very wealthy, education as human capital is part of a family strategy. If you're wealthy enough, your child will survive and do quite well almost no matter how foolishly she or he behaves as a young adult. But education is a good thing, too. In this framework, education is part of a diversification strategy. Even if you did invest $4 million in Madoff's enterprise or Lehman stock (along with other large chunks of the portfolio in WorldCom and Enron), your kid still has an education to fall back on. The one security of an education is that no one can foreclose on the knowledge in your head. In other words, education as human capital in part is a hedge for the very wealthy.
If you're extremely poor, your choices are much more limited. You worry about whether you can put food on the table and take your child to the doctor long before you worry about how to pay for college. There's no such thing as a nest egg you can put away for either yourself or your child, and everything is a matter of (often cruel) tradeoffs. The choice is sometimes between investing resources in immediate survival (absolutely necessary) or in education (a long-term investment with an inherently uncertain return). So in contrast with very wealthy families, formal education is both the best long-term investment and the riskiest one... not because there are less risky alternatives but because there is no other option.
The majority of Americans are neither very poor nor extraordinarily wealthy; most of us have enough to live on but not enough where our children's education is a hedge against other investments. For many parents, the choices are between approximately equally valued options, but they're often framed as avoiding harm: not making our children pay for us when we retire, not losing a house, not having our children on bread lines, etc. And all of the options have some risk and require tradeoffs. Do you save more for your retirement fund or save for your child's college? Do you pay for tutoring in middle school, knowing that doing so has a harsh long-term penalty for college savings, or do you hope that she or he gets straightened out and justifies socking away more for college? Do you get a new roof or save for college or get tutoring or stuff more money into the cash fund in case you're laid off ...? Oh, yes, and do you put in overtime and thus spend less time with your child? On the one hand, being "middle class" provides far more options than being very poor. On the other hand, the options are not necessarily easy choices or ones with great certainty.
Thinking about education as a family strategy should put a spotlight on the gap between a microeconomic perspective (that the rate of return on education makes it a good idea) and an individual or family perspective: individuals don't have a smooth return-on-investment (ROI) curve. You're employed, or not, or have part-time work, or work overtime. You have only one job (or two), and one salary (or two). Abstractions such as ROI make sense when you're speaking of populations, and millions of Americans understand that abstraction: that's why mutual funds have expanded so dramatically in the past few decades (well, expanded in investments before they shrank in value...). In buying a mutual-fund share, you're buying a basket of property, getting diversification on the cheap (well, if you watch the fees). But you can't diversify your family that much: "I'll send 5% of my son into manufacturing, 10% into financial services, 10% into information technology..." You make investment choices for one child at a time. And there are no guarantees for that child (or for you). On the whole, investing in education is a good choice. But you're still trusting to a great deal of luck.
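The population/individual gap can be made concrete with a toy simulation. Every payoff number below is an invented, stylized assumption, not an estimate from the literature: education pays off well on average, but each family realizes only one draw, not the average.

```python
import random

random.seed(20081221)

# Hypothetical lifetime payoff per dollar invested in education:
# most draws pay off well, a minority disappoint.
def lifetime_return():
    if random.random() < 0.8:
        return random.gauss(1.5, 0.3)  # investment pays off
    return random.gauss(0.6, 0.2)      # investment disappoints

draws = [lifetime_return() for _ in range(100_000)]
mean_return = sum(draws) / len(draws)                  # the population-level "ROI"
share_losing = sum(d < 1.0 for d in draws) / len(draws)

print(mean_return)   # comfortably above 1: a good investment on average
print(share_losing)  # yet a sizable minority of individual draws lose
```

A mutual fund effectively averages over thousands of such draws; a family gets one, which is why a healthy population ROI coexists with real individual risk.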
And even if you look at populations, behavior can look inconsistent with the incentives microeconomists assume. Sociologist Roz Mickelson focused on such an inconsistency in her classic article, "Why Does Jane Read and Write So Well?" (1989), and her follow-up, "Gender, Bourdieu, and the Anomaly of Women's Achievement Redux" (2003) (both subscription based/$$ required). Why have women dramatically expanded college attendance in the past half-century, even as the return on that investment has lagged behind the value of college for men? Her argument five years ago was that women are more likely to try to balance the social value of different spheres in life: work, family, etc.
We'll come back to Mickelson and Bourdieu another time. Today, let's focus on the individual-population gap. To a great extent, the problem of student debt is that it concentrates the risk at the level of individuals and families. In contrast with purchasing private insurance or a social insurance program, either of which spreads risk, parents or college students take on substantial parts of the risk that the college education will not pay off, because of dumb luck either in the economy of the moment (cross-sectional dumb luck) or in the lifetime of the student (cohort dumb luck).
As states have withdrawn support from undergraduate instruction, this privatization of risk has accelerated. If you care about equity, you should be worried by the consequences. But even if you don't care at all about fairness, you should still recognize that the assumption of greater risk will change the behavior of college students. (I won't call it distortion because I am not likely to be convinced that there is any theoretically neutral behavior of college students.) To be honest, I do not pretend to know with certainty how the behavior of college students changes with the assumption of greater debt. I will leave that empirical question to sociologists and economists.
I am not sure how to spread the risk across either individuals or cohorts. A tuition-free undergraduate education supported by public taxes would be one way, but I suspect we're not headed there as a society. Among other reasons, people think that college students should bear some of the burden of their own education, a result of the vocational rhetoric surrounding college education (including the human-capital rationale itself). But even in a world with tuition and debt, there should be some way to create a "basketful of school," creative mechanisms that spread risk so that students from families of moderate means can attend college with the reasonable security that their futures are not going to be shackled to student debt.
October 17, 2008
A few months ago, I became a ringer in an August 19 Ed Week chat with David Figlio and Jennifer Jennings. I've known economist David Figlio for about a decade, I've respected his work on Florida and accountability, and I've wanted to see how he'd respond to an argument from the young-Turk subfield of behavioral economics. (For one taste of this approach, see Dan Ariely's comment on market fundamentalism this week.) While Figlio is very clever in thinking up eye-catching projects as well as solid substantive work, it's from a fairly standard microeconomic perspective. So I sent in a question before the chat, and it was the first one out of the chute.
Let's see how he responded:
Q: The general theory of action for NCLB and other high-stakes accountability systems appears to assume the existence of magister economicus, the theoretically rational school employee. On the other hand, critics of NCLB, Florida's systems, and others are concerned with the potential harms of irrational responses, unintended consequences such as narrowing the curriculum or teaching to the test. The critics seem closer to the mindset of behavioral economists. Is there any research currently going on to determine if teachers are magisters economici, irrational actors, or a mix (and what type of mix)?
A: I think that the evidence is becoming clearer that many of the hopes of high-stakes accountability advocates and many of the fears of high-stakes accountability critics are correct -- school administrators and teachers can and do respond to accountability pressures, at least at the margins.
A number of recent studies have shown that schools subject to greater accountability pressure tend to improve student test performance in reading and mathematics to a meaningful degree -- my recent study of Florida with Cecilia Rouse, Jane Hannaway and Dan Goldhaber (working paper on the website of the National Center for the Analysis of Longitudinal Data in Education Research), for instance, suggests test score gains of one-tenth of a standard deviation in reading and math associated with a school getting an "F" grade relative to a "D" grade. We find that these test score gains persist for several years after the student leaves the affected school. Jonah Rockoff of Columbia University has a new working paper studying New York City's rollout of school grades that suggests that responses to grading pressure seem to happen immediately -- grades released in November were manifested in test score changes in the same winter/spring.
In the case of my study with Rouse, Hannaway and Goldhaber, we try to look inside the "black box" by studying a wide variety of potentially productive school responses, and it appears that Florida schools responded to accountability pressures by changing some of their instructional policies and practices, rather than "gaming the system."
The rapid and apparently productive response of school personnel to school accountability pressure suggests that educators are, at least to some degree "magisters economici," responding to the incentives associated with the system. And this makes getting the system right so important, because if schools and teachers respond quickly to incentives, the incentives had better be what society/policymakers want.
Many people raise concerns about teaching to the test, and there is certainly evidence of this -- consistently, estimated effects of accountability on high-stakes tests are larger than those on low-stakes tests -- though the low-stakes test results tend to be meaningful still, especially with respect to math. Harder to get a handle on is the narrowing of the curriculum to concentrate on the measured subjects; there is a lot of suggestive evidence that this is taking place to a small degree at the elementary level, though studies of the effects of accountability on performance in low-stakes subjects typically don't find that performance on these subjects suffers -- but of course, those subjects are still being measured with tests. Still, there is certainly the incentive to reduce focus on "low-stakes" subjects. One possible solution for those concerned about low-stakes subjects being given short shrift would be to impose requirements such as minimum time spent on instruction or portfolio reviews.
There is a lot of evidence that accountability systems can have unintended consequences that are predicted by the magister economicus model. Derek Neal and Diane Whitmore Schanzenbach at the University of Chicago note that accountability systems based on getting students above a given performance threshold tend to induce schools to focus on the kids on the "bubble." I've found that that type of system may lead schools to employ selective discipline in an apparent attempt to shape the testing pool, or even to utilize the school meals program to artificially boost student test performance by "carbo-loading" students for peak short-term brain activity. These types of unintended consequences are much more likely in accountability systems based on the "status" model of getting students above a proficiency threshold, rather than the "gains" model of evaluating schools based on how much these students gain.
But there's a tradeoff here. The more we evaluate schools based on test score gains, where gaming incentives are lower, the more the focus is taken off of poorly-performing students whom society/policymakers would like to see attain proficiency. How the system is designed is crucially important.
I was hoping that Jennings (known then only as Eduwonkette) would respond, in part because I suspected she was a sociologist (she is, an ABD at Columbia University) and because there are some very interesting critiques of the homo economicus assumption from sociology, most notably Viviana Zelizer's work on the nonfungible, social meaning of money. But it looked like the chat had a structure that didn't allow a back-and-forth discussion between Figlio and Jennings, instead being a two-person panel, with questions alternately answered by each.
But back to the central question: to what extent are teachers and administrators people who respond to financial incentives? Figlio argues that they are, though we have to be wary of the consequences of a poorly-designed incentive system. I am not entirely convinced; while I agree that people respond to incentives, they don't necessarily do so in the way Figlio assumes (i.e., to maximize their gain). First, there is the phenomenon Zelizer noted, which is the social meaning of money. For a number of teachers in Florida, bonuses tied to the state system of assigning grade labels to schools are dirty money. That doesn't mean that teachers won't respond to the system in Florida (Cecilia Rouse, Jane Hannaway, Dan Goldhaber, and Figlio make a pretty good argument that they do respond in ways that raise test scores). But that changed behavior may be tied to the reputational threat/promise of school grades rather than the bonuses. (Also see the Damian Betebenner review of their paper.)
Even if money meant the same thing to everyone regardless of context, there is the broader question of money in the context of other motivations. Here, behavioral economics is the tip of the iceberg; there are plenty of other nonfinancial reasons that drive people's behavior. That doesn't mean money is entirely unimportant but that it is one of many motivations. I suspect Figlio et al. would agree with me but point out that their analysis concerns the marginal effect of a change in incentive -- that is, people's behavior can be driven by relatively small economic motives when that is the possible change they will attend to.
But in reality, you can't hold everything else constant. Given the resource and time constraints in the real world, you have to choose whether to try financial motivation for behavior or something else. What most arguments favoring public policies built on financial incentives ignore is a fundamental economic concept: opportunity cost. What is the opportunity cost of trying to drive teacher (or student!) behavior by offering financial incentives? That is more than a thought experiment, and the cost is not only tangible. But that discussion is for another evening.
See comments: Jennifer Imazeki takes me to task for viewing economists' work too narrowly, with some justification.
September 29, 2008
On Wendy Kopp, TFA, and Linda Darling-Hammond
Back in June, I wrote a long entry on Teach for America and Linda Darling-Hammond's critique of the Kopp organization and model. I had been puzzled at the claim by Kevin Carey and others that Darling-Hammond simply hated TFA with the type of bile that is usually attributed to Karl Rove, Bill Belichick, and others with a take-no-prisoners approach to civic life and sports.
I don't recall who e-mailed me and pointed me to the 1994 Kappan article on TFA by Darling-Hammond, and the description of its aftermath in Kopp's book. But I went back and read the article carefully, then the relevant passage by Kopp. And I will freely and openly admit that I was wrong: I now know why some describe LDH as having a visceral opposition to TFA. I think the description is wrong, but it's understandable enough, since a vivid conflict often is frozen in people's memory as an enduring symbol of a relationship. I'm sure Frank Zappa and Tipper Gore quickly got tired of being asked what they thought of the other's latest initiative, life events, whatever. But because they clashed over the labeling of popular music, that became etched in people's memories. (Well, the memories of some of us.)
This summer we've had another Kappan issue focusing on Teach for America, with both Darling-Hammond and Kopp contributing. TFA has been around long enough, with enough scars and criticisms of it, that I can make some long-term observations. I suppose that it is the unique prerogative of an historian to live long enough that he or she can proclaim that, no, I wasn't ignoring things; I was just waiting for the dust to settle.
So let me start with some general observations about worldviews: in the late 1980s and early 1990s, Darling-Hammond and Kopp worked from two very different views of teaching and teachers. (I'll do my best to present their perspectives from the best vantage point.) For Darling-Hammond, teaching is inherently a complex occupation, with the best teaching full of nuanced judgments that require deep and complex knowledge. The consequence of this perspective would be requirements for teachers to have a good deal of content knowledge, a good deal of pedagogical knowledge, and a great deal of what has come to be known as pedagogical content knowledge (or a repertoire of how to teach specific subjects).
In contrast, Kopp worked from the assumption that the greatest gap in poor districts is an insufficient supply of young, enthusiastic teachers with a minimum threshold of intellectual authority. The consequence of this perspective would be her initial recruiting model for Teach for America: the "best and brightest" new graduates from the liberal arts. Later, she acknowledged that teachers do need some basic pedagogical skills, and TFA's greatest public challenge over the past two decades has been getting its recruits up to speed fast enough to survive their classrooms (and let their students survive, too).
One irony of these perspectives is that each woman's professional life embodies the qualities the other deems essential for teachers. Darling-Hammond's professional life at Teachers College and then Stanford is full of the type of intellectual authority (refereed publications, confirmed recognition from her colleagues, a connection to a top-notch faculty) that Kopp asserts is necessary for teachers. For Kopp, her work as a social entrepreneur is absolutely full of the type of occupational complexity that Darling-Hammond claims is the life of a great teacher. Of course, being a professor and the leader of a non-profit is not the same thing at all as being a teacher. But I am a bit surprised that neither of them has said, "Well, in some ways I have the qualities that my opponent thinks are necessary for teachers. Let me explain why that is the wrong perspective on teaching, from my role that is removed from the K-12 world."
Darling-Hammond has been consistently skeptical of TFA's activities, but she has moved from her early 1990s writings that portrayed TFA simply as an almost fraudulent organization (as in the 1994 article) to a more careful focus on the new-teacher issues (in her research on student outcomes). The 1994 article relies on a considerable amount of anecdotal evidence to portray TFA as sloppy and possibly quite dangerous to schoolchildren. Darling-Hammond has continued to be skeptical, but since I started reading this stuff (about nine years ago), I don't recall anything in print where she has veered away from a fairly strict focus on, "Okay, let's look at what's happened from the data..." You may or may not agree with her conclusions, but is there anyone who likes TFA's work who thinks that's a bad focus? (As I've noted elsewhere, I am not sure that is the only potential value in TFA, but it's a legitimate question.)
As far as I can tell, Kopp's focus for the last 15 years has been on organizational growth, shifting much of her effort to shoring up organizational operations, especially fundraising, recruitment, and connections with districts. Several times, TFA has reworked its programs for supporting recruits after placement, but my sense is that those are still in flux, while the other pieces are more stable.
To be fair to Darling-Hammond, I think she had some evidence to support the claim that TFA was an organizational mess in the early years, and Kopp has pretty much admitted as much. Looking at what was available in print in 1993 and 1994, TFA's reputation for shoddy work really was fair game. The recent audit of TFA's use of federal funds may raise those questions again, but the point is that it wasn't an obviously wrong concern. The occasional problems with TFA's organizational reputation are inconsistent with Kopp's entrepreneurial reputation as a go-getter and someone who cares about poor children. I'm not surprised it's that image clash that raises the hackles of TFA supporters; Kopp's brand is as a social entrepreneur, not someone who runs alternative certification programs.
I think Kopp may have fed a bit of the Schadenfreude here, because she helped propagate the myth that TFA tells us anything about teaching in general, that TFA is a model for the New Teaching. Instead, if she had focused on less millennial and more defensible claims -- that stopgaps are ethically defensible and bolster the public system -- she probably would have found a more ready audience among those who should recognize the value in finding new ways of bolstering public support for the public sector. That's a missed opportunity, I think.
The organizational woes of TFA should tell us something, but it hasn't been discussed much among the social entrepreneurial crowd or the critics of TFA. Let's suppose for the moment that TFA's fans are absolutely correct, that TFA really is a new model both for recruiting new teachers and also for generating social entrepreneurs. Given what we know about TFA's organizational history, that means that one of the most successful social entrepreneur organizations required almost a decade for this Great Hope to become a competent organization. We should be skeptical that any similar Great Hope could become competent in a shorter period of time.
September 27, 2008
Both Fish and Bérubé are wrong
Some years ago, I ran across someone who was so firmly convinced that schools were heterosexist, he thought that K-12 teachers should be forbidden from mentioning anything about their private lives lest they reinforce heteronormative assumptions. I asked, "Okay, so that means you can't have a picture of your spouse or children on your desk?" "Of course not!" was the reply. That took my breath away, and I was thinking of asking whether we should just give up this parental childrearing idea entirely and have state-run creches. But I thought better of my time and his and just shook my head and walked away.
That type of foolishness has its parallel in higher education with the biennial arguments about Bumper Stickers and Buttons. Along with the foolishness this week in Illinois whereby faculty and staff were told they could not have political bumper stickers on cars they parked on campus (All faculty must leave their classes right now and scrape the "Harry Potter for President" stickers off their cars, or so I imagined), I received an e-mail from a colleague asking about candidate buttons worn on campus. I explained the usual distinction between public and private resources -- you can't use public property to support candidates, but I assume faculty buy their own clothes, so they're festooning personal property -- and the distinction between sense and propriety. Not everything that is unwise is unprofessional: you're not going to impress your students if you wear a huge McCain or Obama button, but telling a faculty member not to wear campaign buttons is a violation of a faculty member's rights. Yes, faculty and students have rights to do foolish things as well as brilliant things.
And, yes, I included both faculty and students in that statement. When he was on campus Tuesday, Michael Bérubé said that students do not have academic freedom and that he agrees with Stanley Fish's argument that academic freedom is a guild concept. Because I agree with Bérubé on a great deal in terms of academic politics, in some ways it is a relief to find something on which we disagree; otherwise, I'd worry that I was a figment of his imagination. (Please don't explain in comments that he could surely imagine someone with whom he disagrees and thus I am still a figment of his imagination. I know that argument, it ignores the ineffability of English professors, and I'm just holding onto this thin reed of intellectual autonomy as is, so will you stop with the Jesuitical reasoning already?)
More seriously, Fish's argument is an understandable but narrow view of academic freedom, and despite what he thinks, it is weak ground on which to make the case for academic freedom.
Fish asks, Is academic freedom a philosophical concept tied to larger concepts of individual dignity and autonomy, or is it a guild concept developed in an effort to insulate the enterprise from the threat of a hostile takeover? That's a great start, a combination of a false dichotomy and a straw-man argument. Beyond the fact that there are arguments in favor of academic freedom rooted neither in a priori concepts of intellectual freedom nor in guild protections, the term guild is not very specific. This is fairly typical of Fish's ex cathedra pronouncements of Academic Truth, full of elisions that make me want to tear my hair out.
Fortunately for my sanity, if nothing else, Michael Bérubé put flesh on Fish's frisson in his talk Tuesday. He argued that Fish's guild concept was rooted in the academic's search for truth, whose path is unpredictable. Because of that unpredictability, faculty could not be restricted in the direction their inquiries took. Faculty are confirmed in their expertise, so they get this freedom. Students are not, so they don't have academic freedom.
This sounds like a clean distinction until you poke below the surface. Do I have academic freedom because I engage in research but my colleagues who are just instructors do not have academic freedom because they don't publish? Wait: maybe we let teachers have academic freedom because you never know where class may go in a field like mine. So do instructors have academic freedom in the humanities but not in calculus, because intro calc is well defined? Or suppose you tie it to the stability of the job because you don't want some full-time faculty to be excluded or have there be arguments about which field has academic freedom. Then you have the question of whether full-time faculty have academic freedom but adjuncts don't. What about graduate students, who are learning but also teach and engage in research? Ah, but they're not yet confirmed experts. But in some fields doctoral students commonly publish before their dissertation, while in other departments new assistant professors sometimes are hired as ABDs without publications. So does the ABD and unpublished assistant professor have academic freedom at a university where the published advanced doctoral student doesn't? Or suppose you have a doctoral student at a university who also teaches and has tenure at a nearby community college. Does she have academic freedom or not? According to the guild concept, she might have it when at work at the community college (where she has tenure), but not at the university, even though her work at a university may contribute more to the body of knowledge in her field. If your brain is about to explode from these problems, follow my advice: don't root academic freedom in a guild concept.
The other problem with the guild notion of academic freedom is its political viability: today, not only is it dangerous to imply that faculty should have academic freedom while you don't because we're special, it fails a basic reality check. A high enough proportion of the general population has a college education that we just aren't that special. Maybe only one percent of the American population has a Ph.D., but we've done a pretty darn good job of educating our neighbors so that they can think for themselves. That's a good thing, on the whole. Maybe you're not a trained scientist, but some of you participate in the annual Christmas bird count, or you're an amateur astronomer, or you know Lilium columbianum when you see it. For me to claim that only I have the academic freedom to be protected when I talk about those things while you don't is guilding the lily (the Tiger lily, if you're curious, though I can't guarantee I could spot it in a field). When defenders of academic freedom use arguments that are as fallacious as they are pretentious, they are not helping defend the professoriate from political interference.
A far better route is to take part of Bérubé's commentary on Fish -- that academic freedom is rooted in the job we do -- and expand the way we look at the job of faculty and universities. Maybe Stanley Fish thinks the academic is interested in an abstract, decontextualized search for truth (see Steven Kellman's Chronicle column for a nice response to that claim), but many of the historical academic freedom controversies are rooted firmly in politics. I suspect that for those whose academic freedom was violated thanks to the economics of the dairy economy or the politics of the Cold War, Fish's defense of them as only in search of the (defenestrated, lifeless) truth would be cold comfort. We may academicize the world because that's the modus operandi of analysis, but we can be motivated by the same passions as our neighbors.
The search for truth isn't as ascetic as Fish would hope. It is emotional, personal, and often a matter of sensitive politics. As higher education has evolved in the U.S. and elsewhere, college and university faculty look for truth and are general social critics. The rhetoric and reality of academic freedom is a political construct, tied to our institutional role as social whistleblower. Sometimes that's "social" in an ascetic-truth sense, and sometimes it's social in a very political sense. To divorce faculty from the development of political rights in American history is to ignore the real history of academic freedom controversies and the growing recognition of general free-speech rights. Of course, Stanley Fish doesn't believe in free speech, either. But I do, I bet you do, and that means that we can and should talk about academic freedom in a political context.
To make that case means that we have to acknowledge that students have academic freedom in an institutional context (i.e., when they're at a public university). If we tell students that they have no academic freedom, we're inviting them to care less about the academic freedom of faculty once they leave us. If we invite them into the sphere of protection we'd like enlarged, they'll be far more likely to support academic freedom as older adults. So for all sorts of selfish and historical reasons, I hereby proclaim that college students have academic freedom, and it's a good thing, too.
September 23, 2008
Critical thinking and cultural work
I have another hour or so of work to do before bed, out of a combination of weekend-long computer woes, an uncooperative body, scheduling near-misses, and a delayed plane. But as a result of Michael Bérubé's visit this week, I've been thinking about What's Liberal about the Liberal Arts? and his discussion of his classes. I know that his explicit intention is to show how a liberal professor can teach (and usually does teach) literature without using it as an excuse to propagandize, because the other issues swamp anything that might stem from a professor-as-policy-liberal as opposed to a professor-as-procedural-and-intellectual-liberal.
But there are a bunch of other things in there, and one of them is how classroom discussion is cultural work. A seminar discussion about The Rise of Silas Lapham involves a great deal of give and take between students and faculty and among students. I get the sense from reading Bérubé that he works very hard to engage students and push them to think, about the book and related ideas about literature and humanity. And students work hard as well. I suppose somebody might say that they're engaged in critical thinking, but that's wrong on several levels. At one level, it's wrong from the perspective of cognitive psychologists who have tried but failed to identify the modules that are connected to this mysterious entity. That doesn't mean that there is no such thing as critical thinking but that it may not be what we think it is, or we have to look at it differently.
So, back to the students who are struggling to grasp what a Penn State English professor is saying. He's pushing them to examine the implications of Silas's ethical choices, forcing them (the students, readers) to decide what's right and wrong, to make connections. And they begin to (or so MB describes, and I have no reason to doubt his account). It's not a brilliant eureka moment that stems from cognitive growth, or at least not in any coherent sense that my friends the cognitivists can point to. But there is something going on, in the classroom space that has discussion, open questions, leading questions, pushy questions, pushback, and occasionally silence. Hundreds of thousands of students go through that process each semester; they may not go through it with Silas, and their epiphanies may not be original except to them and their classmates, but in the type of classroom that I hope all of us experience at least once, they do a type of work that can only happen in or with groups: cultural work.
Yeah, yeah, Peter McLaren wrote that a few decades ago, I know: the classroom is a performance space. But I mean something a bit deeper and more problematic: some of the best opportunities for cultural work are in a functional, engaging classroom. For a whole variety of reasons I won't go into detail about, beyond cognitive psychology, I am very skeptical of broad generalized claims about critical thinking when posed as a cognitive-psychology question. Usually, that turns the college curriculum into a sort of faculty-psychology jungle gym, much as the 1828 Yale Report claimed in its defense of the classical curriculum. But there is stuff going on in a good liberal-arts classroom, and that's inherently hard to capture because cultural work can simultaneously be local and universal, even at the mundane level of the individual, personalized classroom discussions that are going on about Moby Dick this fall, not at one university but at hundreds. To put it in a concrete sense, there are probably hundreds of students in different high schools, colleges, and universities who are talking this week about the fact that "The Cassock" (chapter 95, I think) is about the disposal/use of the whale's penis and foreskin, either giggling or being taken aback at it. Widespread, but very personal and local.
I think this cultural work is what distinguishes a liberal-arts college from lots of other educational experiences. I think it is why the Amethyst Initiative signatories are disproportionately from liberal-arts colleges: Despite the research suggesting that the age-21 threshold for alcohol saves lives, and in addition to the legal/political liability issues, liberal-arts college presidents are less devoted to a certain definition of "critical thinking" than to a common sense that college is for discursive, social learning.
I have still been unable to find a work by an anthropologist of education who studies the type of cultural work that happens in college seminars. So maybe instead of hoping that an anthropologist of education takes this up, I'll issue a challenge to cognitive psychologists: surely you can do better than my social-science history-ish writing in capturing the cultural work that happens inside seminar classes, of finding more specific and narrow stuff than the global claims of "critical thinking" might suggest.
September 12, 2008
Shared responsibilities III: The next ESEA
Over the summer, Charles Barone challenged me to put up or shut up on NCLB/ESEA. I immediately said that was fair; Accountability Frankenstein had a last chapter that was general, not specific to federal law. I'm stuck in an airport lounge waiting for a late flight, so I have an occasion to write this now. Because I'm on battery power, I'm going to focus on the test-based accountability provisions rather than other items such as the high-quality teaching provisions. Let me identify what I find valuable in No Child Left Behind:
- Disaggregation of data
- Public reporting
So where do we go from here? I don't think trying to tinker with the proficiency formula makes sense: none of the alternatives look like they'll be that much more rational. What needs more focus is what happens when the data suggest that things are going wrong in a school or system. On that, I think the research community is clear: no one has a damned clue what to do. There are a few turnaround miracles, but these are outliers, and billions of dollars are now being spent on turnaround intervention with scant research support. To be honest, I don't care what screening mechanism is used as long as (a) the screening mechanism is used in that way and in that way only: to screen for further investigation/intervention; (b) the screening mechanism has a reasonable shot at identifying a set of schools that a state really does have the capacity to help change -- if 0 schools are identified, that's a problem, but it's also a problem if 75% of schools are identified for a "go shoot the principal today" intervention; (c) we put more effort and money into changing instruction than into weighing the pig or putting lipstick on it. Never mind that I'm vegetarian; this is a metaphor, folks.
So, to the mechanisms:
- A "you pick your own damned tool" approach to assessment: States are required to assess students in at least core academic content areas in a rigorous, research-supported manner and use those assessments as screening mechanisms for intervention in schools or districts. Those assessments must be disaggregated publicly, disaggregation must figure somehow into the screening decisions, and state plans must meet a basic sniff test on results: if fewer than 5-10% of schools are identified as needing further investigation, or more than 50%, there's something obviously wrong with the state plan, and it has to be changed. The feds don't mandate whether proficiency or scale scores are used; as far as the feds are concerned, it's a state decision whether to use growth. But a state plan HAS to disaggregate data, that disaggregation HAS to count, and the results HAVE to meet the basic sniff test.
- A separate filter on top of the basic one to identify serious inequalities in education. I've suggested using the grand-jury process as a way for even the wealthiest suburban district to be held to account if it's screwing around with racial/ethnic minorities, English language learners, or students with disabilities. I suspect that there are other workable mechanisms, but I think the bottom line here is the following: an independent makeup, independent investigatory powers (as far as I'm aware, grand juries in all states have subpoena power), and public reporting.
- Each state has to have a follow-up process when a school is screened into investigation, either by the basic tool noted above or through the separate filter on inequality. That follow-up process must address both curriculum content and instructional techniques and include a statewide technical-support process. At the same time, the federal government needs to fund a large research program to figure out what works in intervention. We have no clue, dear reader, and most "turnaround consultants" are the educational equivalents of snake-oil peddlers. That shames all of us.
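The "sniff test" in the first item above is mechanical enough to write down. Here is a minimal sketch (my own illustration, not anything in statute; the function name and the exact 5% floor and 50% ceiling, taken from the text above, are treated as parameters):

```python
def passes_sniff_test(flagged, total, floor=0.05, ceiling=0.50):
    """Basic sanity check on a state screening plan: the share of
    schools flagged for further investigation should fall between a
    floor (here 5%) and a ceiling (here 50%). The cutoffs are a
    policy choice, not a statistical law."""
    share = flagged / total
    return floor <= share <= ceiling

# A plan flagging 12 of 300 schools (4%) fails the floor;
# one flagging 60 of 300 (20%) passes;
# one flagging 200 of 300 (67%) fails the ceiling.
```

The point of parameterizing the cutoffs is that the feds could set the outer bounds while leaving the screening formula itself to the states.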
Doing so will also allow the federal government to focus on what it has largely ignored for years: no one knows how to improve all schools in trouble (and here I mean the organizational remedies -- there's plenty of research on good instruction). Instead of pretending that we do and enforcing remedies with little basis in research, maybe we should leave that as an open, practical question and... uh... do some research?
August 1, 2008
A higher-ed unionist's view of the performance-pay debate
Perhaps the most ridiculous thing that Alter writes -- and the statement that gives away the ideological underpinnings of his argument if anybody wasn't already aware -- is that unions "still believe that protecting incompetents is more important than educating children." Unions are far from perfect, and this is far from the most inflammatory rhetoric that I've read about them, but it's still sheer and utter nonsense.... Though more polite, it's the intellectual equivalent of calling somebody with whom you disagree a [N]azi or a terrorist.
If I were a union leader, however, I would mull over Alter's final point.... the general idea that unions could view submitting their members to more scrutiny in exchange for higher pay is something on which both sides might find some common ground.
I suppose I qualify as a union leader, albeit in higher ed, so I'll take the bait. Disclosure: my faculty union was the one to propose merit pay at the table many years ago, and university faculty are more likely to approve of something called merit pay because there is a tradition of peer review for tenure/promotion. (Our collective bargaining agreement provides for general due process and substantive standards but leaves specific procedures for annual reviews to department votes.) So while I am skeptical of several top-down proposals for, and policies encouraging, performance pay in K-12, that skepticism comes from seeing problems with the specifics rather than from visceral opposition to merit pay. As the car ads say, your mileage may vary.
There are two policy issues here: one is how to think about teacher pay and working conditions in general, and the other is the question of collective bargaining at the local level (and the centralization/local question more generally). In Accountability Frankenstein, I wrote about high-stakes accountability advocates' simplistic and often flawed grasp of motivation. To put it briefly, even if we had a Holy Grail measure of "teacher contribution to learning," that wouldn't be a sufficient justification for relying on test scores for teacher pay. No one knows what works best, and a top-down approach would short-circuit even the most rabid merit-pay advocate's interest in finding out what works, in much the same way that NCLB's proficiency measure aborted alternative ways to examine student achievement (including quantitative measures such as average scale score, medians, percentile splits, etc.). Essentially, those interested in performance pay have to make the policy choice between experimentation and a crusade. So to all 0.379 Capitol Hill staffers and campaign advisors reading this blog, you should be wary of federal mandates: if you mandate the wrong formula, everyone will pay the price for Beltway arrogance, and you'll endanger the political legitimacy of the idea for the long term.
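The point about proficiency cutoffs crowding out other summaries can be made with a toy calculation (all scores invented for illustration): two schools can rank differently depending on whether you summarize achievement with a proficiency rate or with a scale-score average.

```python
from statistics import mean

# Hypothetical scale scores for two schools -- invented numbers,
# purely to show how the choice of summary changes the picture.
school_a = [298, 299, 299, 300, 301, 340, 360, 380]
school_b = [310, 312, 315, 318, 320, 322, 325, 328]

def proficiency_rate(scores, cutoff=300):
    """Share of students at or above an arbitrary proficiency cutoff."""
    return sum(s >= cutoff for s in scores) / len(scores)

# Proficiency (cutoff 300): School A = 0.625, School B = 1.0.
# Mean scale score: School A is higher (322.1 vs. 318.75).
# The cutoff says B is clearly better; the average says A is.
```

Neither summary is "the truth"; the policy problem is that a mandated proficiency formula forecloses even looking at the alternatives.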
Caution about top-down mandates also fits with the local nature of collective bargaining and the affiliate structure in American unions. Despite what people may claim about the NEA's visceral opposition to merit pay, the big picture is more complicated: locals have negotiated performance pay or merit pay or whatever you want to call it, and the governance structures of both the NEA and the AFT commit the national affiliates to support collective bargaining at the local level. (There are also the merged locals and state affiliates that belong to both national affiliates.) That federal structure means that the NEA and AFT support what local leaders decide in terms of bargaining strategy and the agreements that the parties ratify at the local level. Where local leadership negotiates performance pay, the state and national affiliates support that. And where local leadership decides not to negotiate performance pay, the affiliates support that, too. (See a March 2008 column from NEA Today for an example of recent rhetoric that illustrates this complexity.) The more accurate policy position of both the NEA and AFT is that they oppose top-down mandates of performance pay, including how it is structured. The AFT is not officially skeptical of performance pay, but both national affiliates work with and for the locals. If you believe that either national teachers union can dictate bargaining positions to locals, e-mail me about my deep-discount sale price on the Brooklyn Bridge.
The second question about performance pay is thus the degree to which there should be centralized decision-making in education, and that is true for collective bargaining as well as for other matters of policy. It is not necessarily a matter of offering a grand bargain to Randi Weingarten and Dennis Van Roekel, because the bargain for some segments of a national union may be anathema to others. Let me put forward a pro-performance-pay, pro-union person's pipe-dream proposal that would serve someone's interests as a union leader, and you may see why: if I were a K-12 union leader in Florida, I would definitely listen to a national policy proposal that would tie some incentives for performance pay (bargained at the local level) to the degree to which a state had the following in place:
- Collective-bargaining rights for public employees
- Card-check procedures for certification of public employee unions
- Binding arbitration for first contracts after a certain length of bargaining (say, 6-12 months)
- Fair share in a bargaining unit that is represented by a union
Because different circumstances lead local union leaders to different views of policy, you can have leaders in different places, each with a deserved reputation for being able to craft a deal with administrators, who nonetheless hold very different views of specific proposals. Ultimately, someone who wants performance pay in K-12 schools has to understand that national affiliates support locals, and that the needs of locals will vary by state environment.
July 23, 2008
Crisis rhetoric, attention seeking, and capacity building
Berliner and Biddle's The Manufactured Crisis was the independent reading choice of several students in my summer doctoral course, and as they have been writing comments on the book in the last week, I have been thinking about the split retrospective view of the 1983 A Nation at Risk report, produced by the National Commission on Excellence in Education. The report has been on the receiving end of a tremendous amount of criticism by Berliner, Biddle, Jerry Bracey, and many others.
Of the various criticisms of the report, two stick fairly well: the report was thin on legitimate evidence of a decline in school performance, and the declension story is ahistorical. First, the report relied on a poor evidentiary record, using problematic statistics such as the average annual decline in SAT scale scores from 1964 to 1975, statistics the report's authors claimed were proof of declining standards in schools. (Why this was flawed is left as an exercise for the reader.) Using this evidence, the report claimed that
... the educational foundations of our society are presently being eroded by a rising tide of mediocrity that threatens our very future as a Nation and a people. What was unimaginable a generation ago has begun to occur--others are matching and surpassing our educational attainments.
If an unfriendly foreign power had attempted to impose on America the mediocre educational performance that exists today, we might well have viewed it as an act of war. As it stands, we have allowed this to happen to ourselves. We have even squandered the gains in student achievement made in the wake of the Sputnik challenge. Moreover, we have dismantled essential support systems which helped make those gains possible. We have, in effect, been committing an act of unthinking, unilateral educational disarmament.
Where do I start with the problems here: the war-like rhetoric, the implication that we don't want the rest of the world's education to improve, the bald assertion that there is any solid evidence of student achievement gains post-1958 that can be attributed to Sputnik, or the assumption that if there were low expectations observable in the early 1980s it must have been a decline from previous times instead of a generally anti-intellectual culture?
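For one common answer to the "exercise for the reader" above: the average-SAT-decline statistic says nothing about who was taking the test. When the test-taking pool broadens, a composition effect (Simpson's paradox) can pull the overall average down even while every subgroup improves. A toy calculation, with all numbers invented to show the arithmetic:

```python
# Composition effect: every subgroup's average can rise while the
# overall average falls, if the mix of test-takers shifts.

def overall(groups):
    """Weighted overall mean from {group: (mean, percent_of_takers)}."""
    return sum(m * pct for m, pct in groups.values()) / 100

# Year 1: a small, selective test-taking pool.
year1 = {"traditional pool": (500, 90), "new test-takers": (440, 10)}
# Year 2: both groups score higher, but the pool has broadened.
year2 = {"traditional pool": (505, 50), "new test-takers": (445, 50)}

# overall(year1) == 494.0 and overall(year2) == 475.0: the average
# falls 19 points even though both groups improved by 5.
```

So a falling average, by itself, is evidence of nothing about school quality without knowing the composition of the test takers.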
But 25 years after the report's release, it is easy to poke holes in the report and poke fun at its hyperbolic rhetoric. What the last few weeks have brought home for me is how differently the report has been perceived. Berliner, Biddle, Bracey, and other critics are absolutely right that the report is factually and conceptually flawed. And yet many people involved with the commission not only thought they were factually correct, they thought that the report's purpose was to help public schooling. If you read various accounts of the commission's work, it is clear that they thought the report was necessary to build political support for school reforms.
Part of the report's creation lies in President Ronald Reagan's campaign promise to abolish the federal Department of Education. In this regard, his first Secretary of Education, Terrel Bell, brilliantly outmaneuvered Reagan, and within a few months of the report's release, it was clear that the report had resonated with newspaper editorial boards and state policymakers. Even without it, given the Democratic majority in the House and the presence of several moderate Republicans in the Senate, it was unlikely that Congress would abolish the department. After it, the idea was largely unthinkable.
But the motives of Bell and the commission members were clearly not about saving an administrative apparatus. They were true believers in reform, and if all of the recommendations had been followed, today we would have a much more expansive school system. (The recommendations included 200- or 220-day school calendars and 11-month teacher contracts.) Some of the recommendations were followed, primarily expanding high school course-taking requirements and standardized testing, as well as the experiments in teacher career ladders in several states. But the guts of the implemented recommendations were already in the works or in the air: I remember that California state Senator Gary Hart had been pushing an increase in graduation requirements, a bill that passed in 1983. (This is not the same Gary Hart as the famous one from Colorado.) While I could have graduated from high school in 1983 with one or two semesters of math (I forget which), students in my former high school now must take several years of math. (As others have pointed out, one of the unintended beneficial consequences of raising course-taking requirements was dramatically reducing the gender differences in math and science course taking. Richard Whitmire, take note: Terrel Bell is the villain!)
Lest some people not know or have forgotten, A Nation at Risk was not the only major mid-80s report on public schooling. Others were written from a variety of perspectives: Ernest Boyer's High School, Ted Sizer's Horace's Compromise, Arthur Powell et al.'s The Shopping-Mall High School, and John Goodlad's A Place Called School. All were published in 1983 or 1984. All were earnest. All were more thoughtful than A Nation at Risk. I suspect that if Two Million Minutes had been made and released at the same time (if with different non-U.S. countries and different students), it would have fit into that cache of reform reports very well.
Those other reports did not gain the same attention as A Nation at Risk, and I am not certain that any of the reports dramatically changed the policy options discussed at the state level. Changed course requirements and testing were prominent parts of the discussion before the reports, and they were the primary consequences of state-level reforms in the 1970s and 1980s. What the body of reports did instead was push the idea that schools needed reforming. On that score, I think they succeeded, even if several of the report writers (Sizer and Goodlad) became horrified at the direction of reform policies.
Today, we have a new set of actors making similar claims about the need to reform schools: did you receive the e-mail from Strong American Schools/Ed in '08 that I did yesterday? If you didn't, here's the text:
We are only as strong as our schools, and our schools are failing our children.
We know that the nations with the best schools attract the best jobs. If those jobs move to other countries, our economy, our lives and our children will suffer.
- Almost 70% of America's eighth-graders do not read at grade level.
- Our 15-year-olds rank 25th in math and 21st in science.
- America showed no improvement in its post-secondary graduation rate between 2000 and 2005.
For that reason, Strong American Schools launched a new campaign this week to combat the crisis in our public schools.
Click on the image below to view our television advertisement:
Please join us. Tell your governors, your state and national representatives and senators that you want a change for stronger schools.
Make your voice heard.
The ad's rhetoric is definitely in line with A Nation at Risk, down to the tagline: "As our schools go, so goes our country." It's tired rhetoric at this point, and I think it's important to understand why the folks behind Strong American Schools are keeping at it, though they've gained no traction in making education a highly visible part of the presidential campaign thus far: as with the major figures in A Nation at Risk, they are true believers in reform to increase the capacity of regulators.
But Strong American Schools has now become a shadow of A Nation at Risk, itself the least substantive of the mid-1980s reports on American schooling. Instead of making specific claims or recommendations, they're pushing "a change for stronger schools," or rather attention. To do so, they claim a crisis, though this is probably the worst time to claim that weak education is the cause of what Phil Gramm calls our "mental recession": to anyone who looks at the current state of the world, our economic woes are the consequences of the subprime mortgage crisis and energy prices (which themselves are driven by the growing Chinese and Indian economies). In 1983, by contrast, the economy was coming out of recession. I just don't think the world will realign itself in the same way as in the 1980s. That doesn't mean that there isn't a tie between education and the economy in the long term, but it's diffuse rather than mechanical.
And there's another question here: is it ethical or even helpful to claim that a long-term problem is an acute crisis, just to gain public attention for an issue? We've gone down this road many times before, and I just don't see where it helps in the long term.
July 9, 2008
Can reporters raise their game in writing about education research?
I know that I still owe readers the ultimate education platform and the big, hairy erratum I promised last month, but the issue of research vetting has popped up in the education blogule*, and it's something I've been intending to discuss for some time, so it's taking up my pre-10:30-am time today. In brief, Eduwonkette dismisses the new Manhattan Institute report on Florida's high-stakes testing regime as thinktankery, drive-by research with little credibility because it hasn't been vetted by peer review. Later in the day, she modified that to explain why she was willing to promote working papers published through the National Bureau of Economic Research or the RAND Corporation: they have a vetting process for researchers or reports, and their track record is longer. Jay Greene (one of the Manhattan Institute report's authors and a key part of the think tank's stable of writers) replied with probably the best argument against Eduwonkette (or any blogger) and in favor of using PR firms for unvetted research: as with blogs, publicizing unvetted reports involves a tradeoff between review and publishing speed, a tradeoff that reporters and other readers are aware of.
Releasing research directly to the public and through the mass media and internet improves the speed and breadth of information available, but it also comes with greater potential for errors. Consumers of this information are generally aware of these trade-offs and assign higher levels of confidence to research as it receives more review, but they appreciate being able to receive more of it sooner with less review.
In other words, caveat lector.
We've been down this road before with blogs in the anonymous Ivan Tribble column in fall 2005, responses such as Timothy Burke's, a second Tribble column, another round of responses such as Miriam Burstein's, and an occasional recurrence of sniping at blogs (or, in the latest case, Laura Blankenship's dismay at continued sniping). I could expand on Ernest Boyer's discussion of why scholarship should be defined broadly, or Michael Berube's discussion of "raw" and "cooked" blogs, but if you're reading this entry, you probably don't need all that. Suffice it to say that there is a broad range of purpose and quality in blogging: some blogs, such as The Valve or the Volokh Conspiracy, have become lively places for academics, while others, such as The Panda's Thumb, are more of a site for the public intellectual side of academics. These are retrospective judgments that are only possible after many months of consistent writing in each blog.
This retrospective judgment is a post facto evaluation of credibility, an evaluation that is also possible for institutional work. That judgment is what Eduwonkette is referring to when making a distinction between RAND and NBER, on the one hand, and the Manhattan Institute, on the other. Because of previous work she has read, she trusts RAND and NBER papers more. (She's not alone in that judgment of Manhattan Institute work, but I'm less concerned this morning with the specific case than the general principles.)
If an individual researcher needed to rely on a track record to be credible, we'd essentially be stuck in the intellectual equivalent of country clubs: only the invited need apply. That exists to some extent with citation indices such as Web of Science, but it's porous. One of the most important institutional roles of refereed journals and university presses is to lend credibility to new or unknown scholars who do not have a preexisting track record. To a sociologist of knowledge, refereeing serves a filtering purpose to sort out which researchers and claims to knowledge will be able to borrow institutional credibility/prestige.
Online technologies have created some cracks in these institutional arrangements in two ways: reducing the barriers to entry for new credibility-lending arrangements (i.e., online journals such as the Bryn Mawr Classical Review or Education Policy Analysis Archives) and making large banks of disciplinary working papers available for broad access (such as NBER in economics or arXiv in physics). To some extent, as John Willinsky has written, this ends up in an argument over the complex mix of economic models and intellectual principles. But its more serious side also challenges the refereeing process. To wit, in judging a work how much are we to rely on pre-publication reviewing and how much on post-publication evaluation and use?
To some extent, the reworking of intellectual credibility in the internet age will involve judgments of status as well as intellectual merit. To avoid doing so risks the careers of new scholars and status-anxious administrators, which is why Harvard led the way on open-access archiving for "traditional" disciplines and Stanford has led the way on open-access archiving for education, and I would not be surprised at all if Wharton or Chicago leads in an archiving policy for economics/business schools. Older institutions with little status at risk in open-access models might make it safer for institutions lower in the higher-ed hierarchy (or so I hope). (Explaining the phenomenon of anonymous academic blogging is left as an exercise for the reader.)
But the status issue doesn't address the intellectual question. If not for the inevitable issues of status, prestige, credibility, etc., would refereeing serve a purpose? No serious academic believes that publication inherently blesses the ideas in an article or book; publishable is different from influential. Nonetheless, refereeing serves a legitimate human side of academe, the networking side that wants to know which works have influenced others, which are judged classics, ... and which are judged publishable. Knowing that an article has gone through a refereeing process comforts the part of my training and professional judgment that values a community of scholarship with at least semi-coherent heuristics and methods. That community of scholarship can be fooled (witness Michael Bellesiles and the Bancroft Prize), but I still find it of some value.
Beyond the institutional credibility and community-of-scholarship issues, of course we can read individual works on their own merit, and I hope we all do. Professionally-educated researchers have more intellectual tools that we can bring to bear on working papers, think-tank reports, and the like. And that's our advantage over journalists: we know the literature in our area (or should), and we know the standard methodological strengths and weaknesses in the area (or should). On the other hand, journalists are paid to look at work quickly, while I always have competing priorities the day a think-tank report appears.
That gap provides a structural advantage to at least minimally-funded think tanks: they can hire publicists to push reports, and reporters will always be behind the curve in terms of evaluating the reports. More experienced reporters know a part of the relevant literature and some of the more common flaws in research, but the threshold for publication in news is not quality but newsworthiness. As news staffs shrink, individual reporters find that their beats become much larger, time for researching any story shorter, and the news hole chopped up further and further. (News blogs solve the news-hole problem but create one more burden for individual reporters.)
Complicating reporters' lack of time and research background is the limited pool of researchers who carve out time for reporters' calls and who understand their needs. In Florida, I am one of the usual suspects for education policy stories because I call reporters back quickly. While a few of my colleagues disdain reporting or fear being misquoted, the greater divide is cultural: reporters need contacts to respond within hours, not days, and they need something understandable and digestible. If a reporter leaves me a message and e-mails me about a story, I take some time to think about the obvious questions, figure out a way of explaining a technical issue, and try to think about who else the reporter might contact. It takes relatively little time, most of my colleagues could outthink me in this way, and somehow I still get called more than hundreds of other education or history faculty in the state. But enough about me: the larger point is that reporters usually have few contacts who have both the expertise and time to read a report quickly and provide context or evaluation before the reporter's deadline. Education Week reporters have more leeway because of the weekly cycle, but when the goal of a publicist is to place stories in the dailies, the publicist has all the advantages over general reporters or reporters new to the education beat.
In this regard, the Hechinger Institute's workshops provide some important help to reporters, but everything I have read suggests the workshops are oriented to current topics, ideas for stories, and general "what's hot" context rather than helping reporters respond to press releases. Yet reporters need help from a research perspective that's still geared to their needs. So let me take a stab at what should appear in reporting on any research in education, at least from my idiosyncratic reader's perspective. I'll use the reporter's 5 W's, split into publication and methods issues:
- Publication who: authors' names and institutional affiliations (both employer and publisher) are almost always described.
- Publication what: title of the work and conclusions are also almost always described. Reporters are less successful in describing the research context, or how an article fits into the existing literature. Press releases are rarely challenged on claims of uniqueness or what is new about an article, and think-tank reports are far less likely than refereed articles or books to cite the broadly relevant literature. When reporters call me, they frequently ask me to evaluate the methods or meaning but rarely explicitly ask me, "Is this really new?" My suggested classification: entirely new, replicates or confirms existing research, or runs counter to existing research. Reporters could address this problem by asking sources about uniqueness, and editors should demand it.
- Publication when: publication date is usually reported, and occasionally the timing context becomes the story (as when a few federal reports were released on summer Fridays).
- Publication where: rarely relevant to reporters, unless the institutional sponsor or author is local.
- Publication why: Usually left implicit or addressed when quoting the "so what?" answer of a study author. Reporters could explicitly state whether the purpose of a study is to answer fundamental issues (such as basic education psychology), applied (as with teaching methods), attempting to influence, etc.
- Publication how: Usually described at a superficial level. Reporters leave the question of refereeing as implicit: they will mention a journal or press, but I rarely see an explicit statement that a publication is either peer-reviewed or not peer-reviewed. There is no excuse for reporters to omit this information.
- Content who: the study participants/subjects are often described if there's a coherent data set or number. Reporters are less successful in describing who is excluded from studies, though this should be important to readers, and reporters could easily add this information.
- Content what: how a researcher gathered data and broader design parameters are described if simple (e.g., secondary analysis of a data set) or if there is something unique or clever (as with some psychology research). More complex or obscure measures are usually simplified. This problem could be addressed, but it may be more difficult with some studies than with others.
- Content when: if the data is fresh, this is generally reported. Reporters are weaker when describing reports that rely on older data sets. This is a simple issue to address.
- Content where: Usually reported, unless the study setting is masked or an experimental environment.
- Content why: Reporters usually report the researchers' primary explanation of a phenomenon. They rarely write about why the conclusion is superior to alternative explanations, either the researchers' explanations or critics'. The one exception to this superficiality is on research aimed at changing policy; in that realm, reporters have become more adept at probing for other explanations. When writing about non-policy research, reporters can ask more questions about alternative explanations.
- Content how: The details of statistical analyses are rarely described, unless a reporter can find a researcher who is quotable on it, and then the reporting often strikes me as conclusory, quoting the critic rather than explaining the issue in depth. This problem is the most difficult one for reporters to address, both because of limited background knowledge and also because of limited column space for articles.
Let's see how reporters did in covering the new Manhattan Institute report, using the St Petersburg Times (blog), Education Week (blog thus far), and New York Sun (printed). This is a seat-of-the-pants judgment, but I think it shows the strengths and weaknesses of reporting on education research:
| Criterion | Times (blog) | Ed Week (blog) | Sun |
| --- | --- | --- | --- |
| Why | Implicit only | Implicit only | Implicit only |
Remarks: I rated the Times and Sun items as weak in "publication what" because there was no attempt to put the conclusions in the broader research context. All pieces implied rather than explicitly stated that the purpose of the report was to influence policy (specifically, to bolster high-stakes accountability policies). Only the Times blog noted that the report was not peer-reviewed. All three had "weak" in "content what" because none of them described the measures (individual student scale scores on science adjusted by standard deviation). Only the Ed Week blog entry mentioned alternative hypotheses. None described the analytical methods in depth.
While some parts of reporting on research are hard to improve on a short deadline (especially describing regression discontinuity analysis or evaluating the report without the technical details), the Ed Week blog entry was better than the others in several areas, with the important exception of describing the non-refereed nature of the report. So, education reporters: can you raise your game?
* - Blogule is an anagram of globule and connotes something less global than blogosphere. Or at least I prefer it. Could you please spread it?
May 23, 2008
Default policy frames, rationality, and Mr. Bayes
About five weeks ago, Kevin Carey wrote a longish blog entry about null hypotheses, the status quo, and decision-making about policy. The gist of Carey's argument was that we should be willing to make policy changes with a preponderance of evidence in favor of change.
Carey claimed that academic skepticism aimed at various policy proposals was a legacy of frequentist notions of the null hypothesis, where you have to prove that a result was unlikely to have occurred by chance (usually stated as a p < .05 threshold, though that's a value choice and convention, not carved into tablets). In contrast, he said, policy options need to be chosen on an epistemological equivalent of first-past-the-post voting -- i.e., based on the preponderance of evidence on which was the best option at the time.
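The two decision rules can be contrasted in a toy sketch (my own illustration with made-up effect estimates, not anything from Carey's post): a null-hypothesis rule adopts only options that clear p < .05, while a preponderance rule simply picks the option with the largest point estimate.

```python
import math

def p_value(effect, se):
    """Two-sided p-value for a null of zero effect, assuming normality."""
    z = abs(effect) / se
    return math.erfc(z / math.sqrt(2))

# Hypothetical (effect, standard error) estimates for three policy options.
options = {"A": (0.10, 0.06), "B": (0.15, 0.09), "C": (0.04, 0.02)}

# Null-hypothesis rule: adopt only what clears the conventional threshold.
significant = {k for k, (eff, se) in options.items() if p_value(eff, se) < 0.05}

# Preponderance ("first-past-the-post") rule: adopt the single option with
# the largest point estimate, precision be damned.
best = max(options, key=lambda k: options[k][0])

print(significant, best)  # → {'C'} B
```

Note the disagreement: the precisely estimated small effect (C) survives the significance filter, while the preponderance rule picks the noisier but larger B.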
I think Carey has at least a few people pegged wrong in the reasons for skeptical views of reform, including me, and I think he has the causality backwards for the few social science-y folks for whom he might be right on surface rhetoric. The reason why the null hypothesis exists in the disciplines where it does is because academics (and I hope others!) are conservative in accepting new claims of Truth (or truth). We're socialized to be skeptical, to begin with the caveats as the main story, and the null-hypothesis framework is just one operationalization of that broader academic culture. (Minor bit of evidence: usually the first advice given academics in media training is to reverse the order of presentation, to start with a main positive claim and only later get to caveats.) Because academics are conservative on changing views about their disciplinary reality, the most popular type of article with a new factual claim is the plausible surprise, the small twist on disciplinary convention that makes the reader go, "Hmmmn.... not what I had thought, but I can see it." (There's also a danger in that socialization: a scholar can create a professionally attractive claim by heading for that plausible-surprise sweet spot. Witness the Bancroft Prize given to Michael Bellesiles' Arming America before the fabrication/falsification charges were investigated, his resignation, and the embarrassed withdrawal of the prize.)
Back to the core of Carey's argument instead of the straw-man argument he had created: Carey was responding to criticisms of value-added approaches to accountability (by the anonymous Eduwonkette, but I've made similar criticisms). Over at Eduwonkette's blog, skoolboy argued in rebuttal that policy conservatism exists because policies are always enacted in specific times and places, and the real costs of implementation as well as the existence of unintended consequences mean that the a priori preponderance of evidence is not always a good prediction of what would happen in practice. This is very close to the default framework I remember from cross-examination team debates in high school, where the negative wins by default unless the affirmative team overcomes the predisposition towards the status quo. (For the life of me, I can't remember the early-1980s term used for this, though I think it started with a p.) But the default position in high school debate is a faux default created to hone the competition with ground rules rather than a great Rule in the Sky.
There are two broader perspectives I have on this question about warrants and evidentiary evaluation, and then an idea for someone else's dissertation. First, the status quo v. reform framework is itself fictive. There ain't no such thing as a monolithic status quo or monolithic reform, policy rhetoric is fluid, and evidence about practices isn't stagnant, either. I don't even think many people use that specific framework as the set of mental bins in which they store the various policy proposals floating in the ether. C'mon, Kevin and skoolboy, fess up: where would you slot "performance pay"? "Collective bargaining"? "High-stakes testing"? [insert whole-school curriculum plan here]? You can think of the counterarguments as well as I can. We can talk about the policy frameworks people work with, but they're likely to be much more earthy than "I work with a preponderance standard" or "I'm waiting for a representative sample before I'm convinced." Well, unless you're one of Russ Whitehurst's in-house methodological purists (and I doubt even Whitehurst is his own purist). That fictive framework doesn't mean that people don't ask questions about "school reform," but the more useful work takes the term as much as a problem as a foundation (e.g., Tyack and Cuban's work).
It may be useful here to separate the evaluation of factual claims from the evaluation of policy options. In my relatively limited experience in the world, both inside and outside academe people have separate ways of judging claims. In academe, these are very roughly divided into questions of procedural warrants and questions of substantive warrants. Procedural warrant debates are often called methodology, especially in experimental disciplines, but the procedural warrant does not always require a section called methods: the historian's standard procedural warrant is the footnote, and it's a pretty serious matter if you screw that one up (see the case of Bellesiles, referred to above). Substantive warrants revolve around the interpretation of evidence and how that dovetails with previous disciplinary knowledge and substantive frameworks (i.e., "the literature"). Herbert Gutman's response to/critique of Fogel and Engerman's Time on the Cross is full of such substantive arguments about what he claimed were Fogel and Engerman's misinterpretations of the evidence.
In a similar way, we can (and do) have all sorts of debates about what the right substantive questions are on policy as well as what evidence we will accept about a particular factual claim. The last time elected officials took a very-well-designed study as the sole basis for creating policy was California's class-size initiative. STAR was a great study. I suspect Kevin Carey would admit that California's policymakers didn't ask enough questions after being convinced of the factual claim that in Tennessee, a pretty-darned-close-to-random-assignment study documented both short- and long-term benefits to very low elementary class sizes.
So you know by now that I'm an advocate of separating the evaluation of factual claims from making policy. On the narrow question of evaluating factual claims, I'm going to be even more iconoclastic: there is a difference between confirmation bias and outright irrationality. We all have confirmation bias, and moreover, there's a pretty good case to be made that it's okay to have a confirmation bias if you're honest. I'm not much into Bayesian probability theory, but there's some pretty famous philosophical stuff which starts from the premise that we have preconceived ideas about what truth is before we come across any chunk of evidence pertaining to a factual claim. If I understand the Bayesian perspective, that's not irrational because our personal (or subjective) judgment about reality before we come across a chunk of evidence should be affected by the evidence to push us towards post-encounter judgments of reality (or, more formally, posterior probability estimates). Or, in more gutsy language, it's okay to have preconceptions as long as you're willing to change them based on the evidence. What's not kosher is to entirely ignore evidence that's been reasonably vetted. (Holocaust denial claims and the like can be dismissed because their advocates have violated this test of rationality.)
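A minimal sketch of the updating described above, with numbers I have made up: two readers with different priors see the same evidence, and both move toward it without either being irrational.

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' rule: update a prior belief in hypothesis H after evidence E."""
    numerator = prior * p_e_given_h
    return numerator / (numerator + (1 - prior) * p_e_given_not_h)

# A skeptic (prior = 0.1) and a believer (prior = 0.7) see the same evidence,
# which is four times likelier if H is true than if it is false.
for prior in (0.1, 0.7):
    print(round(posterior(prior, 0.8, 0.2), 3))  # → 0.308, then 0.903
```

Both end up more convinced than they started; what violates the rationality test in the text is refusing to update at all.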
While I was driving around central Florida over the past few weeks, I've been thinking about the Carey-skoolboy posts and trying to think through a formal approach to work backwards from Bayes' theorem to an identification of assumptions, something like this: "After reading a lay description of research claiming benefits for prescribing watermelon juice to ADD-identified adolescent boys, a reader is still skeptical and believes that it's highly unlikely (say, only a 2% probability) that the claim is true. From that posterior gut-level belief and the research evidence, can we infer an a priori assumption about the claim?" Unfortunately, my glorious plans for a simple article that would win me the Nobel Prize for Mathematical Political Philosophy Written by Historians were chewed up by a rabid pack of math and common sense (to paraphrase Berkeley Breathed). (For this reason, please do not put me in charge of planning any post-invasion occupation of a country. Or planning the Great Hydrogen Economy. Or the Best School Uniform Policy. I tend to be... oh, yeah, academically skeptical.)
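The backwards inference itself is just algebra in odds form. A sketch, assuming (hypothetically) that the watermelon-juice evidence is four times likelier under the claim than under its negation:

```python
def implied_prior(posterior, likelihood_ratio):
    """Invert Bayes' rule in odds form: prior odds = posterior odds / LR."""
    post_odds = posterior / (1 - posterior)
    prior_odds = post_odds / likelihood_ratio
    return prior_odds / (1 + prior_odds)

# The reader's posterior gut-level belief is 2%; if the evidence carried a
# likelihood ratio of 4, the implied prior was about half a percent.
print(round(implied_prior(0.02, 4.0), 4))  # → 0.0051
```

The hard part, and presumably where the rabid pack of math attacked, is that the likelihood ratio for a lay description of research is itself a subjective judgment, so the inferred prior is only as firm as that guess.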
But in my late-night wanderings through lit databases, I came across a fascinating 2004 article by Drazen Prelec in Science that argues for a much better way of processing subjective judgment than well-known approaches such as the Delphi process. And then there's a more concrete demonstration paper he wrote with H. Sebastian Seung. Prelec's search for a "Bayesian truth serum" is wonderfully outlandish, but the basic stuff seems to be sensible, which is to use an individual's own set of judgments as a filter with which to identify particularly common or uncommon judgments in a data set and particularly accurate or inaccurate judgments of individuals about the distribution of judgments in the population. That's pretty abstract, but it strikes me as a definite improvement on the Delphi process and possibly very useful for research on sociometrics... or subjective judgments of education. Last doc student in is a rotten egg!
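As best I can reconstruct it from the Science article, the heart of the method is an information score: an answer earns a positive score when it is more common in the sample than respondents collectively predicted it would be. This toy sketch implements only that piece (the full Bayesian truth serum adds a prediction-score term, and the data here are invented):

```python
import math

def information_scores(endorse_freq, predicted_freqs):
    """For each answer option, compare its actual frequency with the
    geometric mean of respondents' predicted frequencies. Positive scores
    mark 'surprisingly common' answers -- the core of Prelec's information
    score, per my reading of the 2004 article."""
    scores = {}
    for option, actual in endorse_freq.items():
        preds = [p[option] for p in predicted_freqs]
        geo_mean = math.exp(sum(math.log(x) for x in preds) / len(preds))
        scores[option] = math.log(actual / geo_mean)
    return scores

# Toy data: 40% answer "yes," but respondents predict only 20% will,
# so "yes" is surprisingly common and "no" surprisingly scarce.
actual = {"yes": 0.4, "no": 0.6}
predictions = [{"yes": 0.2, "no": 0.8}, {"yes": 0.2, "no": 0.8}]
scores = information_scores(actual, predictions)
print(scores["yes"] > 0, scores["no"] < 0)  # → True True
```

The intuition: people who hold a minority view tend to assume it is even rarer than it is, so an answer beating its own predicted frequency is a signal worth weighting.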
(No, not really, but if you can understand the math of both papers, there are some obvious applications here.)
May 11, 2008
Sterility or psychodrama vs. untimed engagement or intellectual drama
Margaret Soltan is not a Luddite -- far from it: she has used her University Diaries blog to become one of American academic letters' premier public intellectuals. But as an observer of college life, she has a well-reasoned hatred of what she calls technolust. She regularly links to stories about students who abuse cell phones and laptops in class and professors who abuse students with PowerPoint. Her argument is that at its best, the classroom is the best environment for the drama of learning, and that technology is too tempting a draw for poor teaching:
...my focus is not on occasional courses in which clever and restrained use of this and other visual technologies makes a better class. My focus is on student (and other audience) response to PowerPoint in general, and on the clear trend toward the overuse of this technology and other technologies in settings in which direct human interaction should be primary. [emphasis added]

I assume that she is working off the same mental model of intensive interactions that's in my head: you walk into class, and you cannot wait to see what ideas suddenly come into conflict, which people realize what's happened to the ideas they've always held, and who change their minds as you watch and participate! ("Survivor" and other reality shows have nothing on a great seminar, because involvement of the audience on a "reality" show is vicarious at best.) To Soltan, presentation software, clickers, and online course management systems are the processed carbs of higher education: easy to digest, but not very nutritious. [The extension of this metaphor to identify academic equivalents of fiber, proteins, fats, and MSG is left as an exercise for the reader, who should instead read Howard Becker's warning about metaphors in Writing for Social Scientists.]
The reality of instruction is far more diluted: even in a small seminar, the great, life-changing moments are rare. To her credit, Soltan recognizes that but holds up the ideal as the standard against which parallel-play* online classes, reading from PowerPoint slides, and constant-clicker lectures are found wanting. No shinola, Sherman. Take the worst from any format and it will be found wanting against the best of another format. The worst of online classes is the electronic equivalent of a correspondence class, where students proceed at their own pace in their personalized and isolated bubbles, at best watching their peers in an adult form of parallel play. The worst of either bad PowerPoint or bad clicker-based lecturing is a sterile reading of bullet points and faux interactivity. But the worst of in-class drama would also cause Soltan to cringe: the unprepared/psychodrama professor leading her or his students through a semester's equivalent of drowning in emotions, an academic waterboarding.
Maybe a better comparison is among the everyday exchanges in a highly-competent class taught in different formats. In the hands of a skilled lecturer, a PowerPoint or a clicker is a tool used to keep the class engaged, not a crutch for bad teaching. For decades, Bryn Mawr professor Brunilde Ridgway kept her beginning archaeology classes engaged with the old set of lantern slides, chugging through centuries of sculpture until, just as she was pointing out the development of articulated knees carved in Greek funerary sculpture, onscreen would appear Magic Johnson, larger than life, running downcourt with... superbly articulated knees. Everyone laughed, the point was carved in our brains, and she moved on. No one took her class expecting to fall asleep, and I suspect today's skilled equivalents of Bruni Ridgway use PowerPoint stacks in similar ways.
The everyday exchange in a competently-run small discussion class is what Soltan claims it is, an intellectual drama. The adrenaline isn't pumping every minute, but even when the tension ebbs, there is always a flow, a set of themes that the faculty member reinforces through the term, the possibility of a quick turn of thought, a sudden connection with material remembered from several weeks before, and regularly a softly-spoken "aha!" that marks a minor epiphany.
The problem with online education is not that you can find bad online classes, because you can run a poor class in any environment. The problem with online education is that we don't have a strong sense of what broad engagement looks like online. I've been struggling with this issue for some time. When I can make the class synchronous (an awful term implying that we're somehow in our bathing caps and in an Olympic pool), there is some drama that helps, but synchronous online classes have to be pretty small to work well with equipment commonly available. Asynchronously? There's the great challenge, and the fact that I don't have an answer may mean that Margaret Soltan is right: Maybe there is no way to engage students consistently in an online class that doesn't have a live (synchronous) component.
But I suspect that there is a way to have an engaging intellectual exchange online. The terms social presence and transactional distance are awkward ways of talking about how to engage students outside a live setting. It would not be the same thing as a face-to-face seminar, but it may have some compensating advantages: the student who participates more when she or he has more time to think through a response, or the working parent who is able to take the class and thereby injects a mature perspective that changes the way 20-year-old classmates think about the world. Those changes are more likely when the message comes from a peer instead of a teacher. It would not be the live intellectual drama that Soltan and I value, but it would not necessarily be of lesser value.
I am certainly not There yet. I am not sure if anyone is in terms of deliberate course design, though I am certain it appears in spots and for some students. But it is incorrect to assume that distance education is technolust just because faculty are not practiced in a relatively new format in the same way that they can be in a centuries-old format.
April 3, 2008
Jim Anderson retrospective, part 2
A few days ago I described the 20th-anniversary Jim Anderson retrospective at AERA. Now it's my turn to address some of the topics raised in that session, in a personal historiography, or my reading of The Education of Blacks in the South, originally published in 1988.
For me, the thesis of the book was not particularly a surprise, for several reasons. First, my undergraduate advisor Paul Jefferson had exposed me to a broad variety of historical arguments from the very first course I took with him, which used Herbert Aptheker's documentary collection, to a seminar course where I wrote an historiographical essay on W.E.B. DuBois's Black Reconstruction. Bryn Mawr College sociologist David Karen had exposed me to both structural-functionalists and radical education critics in a course I took with him when I was a junior (or at least I vaguely recall its being spring 1986). Then in grad school I had Michael Katz as an advisor.
But probably the teacher who laid the groundwork the most for Anderson was Bob Engs, for whom I read C. Vann Woodward's Origins of the New South. Because Engs and Anderson use the same material to arrive at very different interpretations of the role of foundations in Southern education, it says a great deal about Engs as a teacher that he made Anderson make sense for me even while he was telling me that Anderson's book was polemical. I like both men a great deal, so perhaps a broader explanation is in order.
Engs and Anderson were both pioneers as African American historians in elite majority-white universities at the same time (the early 1970s), Engs at Penn and Anderson at Illinois. I wish I could say they were part of a continuous record going back decades, but in any case they've become part of diverse faculties for the past several decades.
Engs turned his first research project into a book ten years before Anderson's, with Freedom's First Generation about the Hampton, Virginia, community. Anderson took a decade and a half to write his first book (something Vanessa Siddle Walker called "lingering with an idea," but I thought of as "a darned good example of a leader in my field who didn't write 7 articles a year before tenure"). And they are different books: While Anderson writes only of education, Engs writes a local history, focusing on the contingent conditions that allowed Hampton's African American community to thrive after the Civil War and hang on to wealth in the very late 19th century even while the curtain of segregation and disfranchising was closing in from all sides.
Engs saw the Hampton Institute as one of those contingencies, and Samuel Chapman Armstrong (Hampton's first leader) as a friend of the Hampton African American community. Where Anderson saw a conspiracy to undermine equality, Engs saw irony with Armstrong's showing one face to the white community and another to Hampton's African Americans. Where Engs saw opportunity that some grabbed in the midst of oppression, Anderson saw structural limitations that were covered up by a tamed education system. Let me make clear that their views of the Southern political economy and educational structure are similar; the great interpretive differences lie in the role of the foundations.
Despite those deep differences in the interpretation of late 19th century Southern education, Engs laid the groundwork for my "oh, yes, of course" reading of Anderson in several ways. First, he made me and other graduate students read Willie Lee Rose's Rehearsal for Reconstruction, C. Vann Woodward, Jacqueline Jones, Exodusters, and several others in a way that raised important questions about the South's history after the Civil War. I was also his teaching assistant one semester for his Southern postwar history class (that's postwar as in post-Civil War), and apart from his tolerance for the awkward naive grad student I was then, I figured out how he could say the most outlandish things in a lecture and get the southern white male students to recommend that all of their friends take his classes. With a light baritone, he stood at the front of class, uttering outrageous interpretations in a quiet, patient manner, as if they wouldn't ruffle anyone's feathers. The students loved him (and I presume students still love him at Penn).
So in many ways, I bought much of Anderson's argument because of Engs. If it's any comfort, Bob, it's because of Anderson's discussion of communities that I bought your argument, too. Ultimately, the best scholarship in each book is about different levels of action. Anderson effectively demonstrates that white philanthropists did conspire to impose a certain type of education on the South. Yet in his work on community efforts, Anderson bolsters Engs's argument that at the local level, there was a lot more going on. I'm not sure we have to establish the moral worth of Samuel Chapman Armstrong to evaluate either book. (Some years ago, Engs wrote a biography of Armstrong, and it's much more sympathetic than I expect Anderson's version would be.)
I have both learned from Anderson's work and also failed to give it credit in one case. It was because of his book that my own dissertation research on graduation in the 20th century involved looking at the extent of high school availability in the 1950s and 1960s. And like John Rury, I am returning to the scope of high school education in the 20th century South. In Schools as Imagined Communities, Deirdre Cobb-Roberts, Barbara Shircliffe, and I could have enriched the introduction by discussing Anderson's work. Mea culpa.
As those at the AERA panel said, Anderson's book helped open up the history of Southern education after the Civil War, giving the subject the gravitas that it deserves and momentum that has served many other historians well. The rest of us in the field can only hope to leave an intellectual legacy as significant as Jim Anderson's.
March 1, 2008
You can write a very nice article describing train wrecks
The budget situation for Florida is pitiful and deteriorating. I'm on the Florida Education Association's governance board, and we're meeting this weekend. I think the students in the Florida Student Education Association and the occasional younger teacher were probably among the few who were truly partying last night at the reception. Part of it is addiction: As I told one activist who's on the NEA national board, what the heck were we doing talking shop at 10 pm? But part is being disheartened at the emerging picture in the state.
At one level, it's my emotions that are engaged, in part because I represent over 1700 faculty and professional employees at USF, and the idea of any one of them receiving a layoff notice is upsetting. Someone not being reappointed or failing to make tenure is a different issue; in principle, those should be merit-based decisions. But with a layoff, you're telling someone who's worked hard and met the institution's standards that they're gone. I hate that, and a large part of my time and energies in the last few months has gone towards addressing that.
And yet there's a part of me that knows that a budget crisis is a remarkable opportunity for studying organizations. Almost a quarter century ago, David Tyack, Robert Lowe, and Elizabeth Hansot wrote Public Schools in Hard Times, looking at how the Depression changed public education. Some in my institution look instead to Naomi Klein's The Shock Doctrine, which argues that restructuring ideas float out in the political ether, and people who advocate those ideas use crises as opportunities to push dramatic change that would never be considered otherwise. I haven't read Klein, but the representation of her argument strikes me as a more conspiratorial version of John Kingdon.
The world is more complicated, at least with regard to education. Several years ago, Iowa's plan for performance pay got knocked for a loop when a budget crisis led the state to cut those dollars, and given the realities of budgeting in most states, innovative programs funded with discretionary dollars are often the first on the chopping block. That's the dynamic whether the programs represent good, bad, or ugly ideas.
But this is clearly an area where I'm relatively ignorant. Putting school and budget crisis into my favorite academic search hopper gives us a few pieces to examine, including the following ones that look promising:
- Andrew Glassberg, Organizational Responses to Municipal Budget Decreases (JSTOR $)
- Ted Schwinden, School Reorganization in Montana: A Time for Decision? (free)
- Patricia Gumport, The Contested Terrain of Academic Program Reduction (sub, though the link is to Worldcat)
- Steven Sheffrin, State Budget Deficit Dynamics and the California Debacle (JSTOR $)
You can then snowball outward from those first entries by looking for who cites Glassberg and others. These are two of the essential tools of the academic researcher: leveraging one's interest/passion in a topic to begin crafting questions and discovering what others have already written. And I suppose this is all to say that someone else can write some fine articles on what is currently giving me nightmares.
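The snowballing described above is essentially a breadth-first walk over the citation graph. A sketch, with a made-up `cited_by` lookup standing in for a real citation index's "cited by" listing:

```python
from collections import deque

def snowball(seeds, cited_by, max_papers=50):
    """Breadth-first 'snowball' search: start from seed papers and expand
    outward through whatever cites them. `cited_by` is a hypothetical
    lookup mapping a paper to the papers that cite it."""
    found, queue = set(seeds), deque(seeds)
    while queue and len(found) < max_papers:
        paper = queue.popleft()
        for citing in cited_by.get(paper, []):
            if citing not in found:
                found.add(citing)
                queue.append(citing)
    return found

# Toy citation graph: two later papers cite Glassberg; one cites Sheffrin.
graph = {"Glassberg": ["Later A", "Later B"], "Sheffrin": ["Later C"]}
print(sorted(snowball({"Glassberg", "Sheffrin"}, graph)))
```

The `max_papers` cap matters in practice: citation graphs fan out quickly, and the point of snowballing is a manageable reading list, not the whole literature.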
February 25, 2008
NCLB and where we sit
In my undergraduate social foundations class, I spend some time explaining the politics of accountability. For the last few years, a critical mass of students (either a majority or a vocal minority) have consistently opposed accountability, taking on the mantle of professionalism, and it's my job to rattle their cages and make them see things using at least one other lens.
I usually explain things in words something like the following:
Views of accountability depend dramatically on where you are. At the classroom level, teachers trust what they do and would like to trust parents but aren't exactly sure. Parents may want to trust teachers, if their children's experiences have generally been decent, or may be entirely untrusting if not. Principals generally trust their own judgment and would like to trust teachers but have a supervisory responsibility (and the level of supervision they exercise will depend rather dramatically on a variety of factors).
Once you get above the level of the school, each level tends to want to impose some accountability on the level below it. For NCLB purposes, the key issue is the state/feds split: in a number of states, officials in the state capitol don't trust local districts and feel that it is their responsibility to regulate the districts, while a number of federal officials are skeptical that states will do the right thing unless there is a federal level of accountability.
NCLB forced states to define a variety of measures and set targets for those measures. At the local level, the state plan is often viewed as onerous, unreasonable, and inflexible. But the state plans are inherently compromises, and so various parties in Washington have looked at the state plans with skepticism.
For example, let's take a look at graduation, which states often defined to mean one minus the proportion of high school students identified as dropouts. That too-easily-falsifiable "dropout rate" is very low in many places, for reasons largely unrelated to the actual proportion of teenagers who graduate from high school, and the official graduation rate if defined as the complement will be wildly inflated.
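The arithmetic of that inflation is easy to see with invented numbers: an official annual "dropout rate" that misses most leavers, versus a cohort followed for four years.

```python
# Hypothetical district: only 3% of students per year are officially coded
# as dropouts (transfers, GED-seekers, and "whereabouts unknown" students
# are never coded that way).
official_annual_dropout = 0.03
official_grad_rate = 1 - official_annual_dropout  # the state's claim
print(round(official_grad_rate, 2))  # → 0.97

# A cohort view tells another story: follow 9th-graders for four years,
# assuming 6% per year actually leave without graduating.
cohort = 100.0
for year in range(4):
    cohort *= 1 - 0.06
print(round(cohort / 100, 2))  # → 0.78
```

A "97% graduation rate" and a cohort in which barely three-quarters finish can describe the same district, which is exactly why federal observers look at state-defined rates with skepticism.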
To local residents and some educators, it looks like the state is hiding a sizable dropout rate, which many view as a consequence of out-of-control accountability systems. That's the type of local or educator-centered view many of you have described.
But you also need to look at it from a federal perspective, from those who see state plans and state commitments with enormous skepticism. To them, what would be the logical conclusion drawn about such graduation rates?
Linda McNeil et al.'s recent article on high-stakes accountability in Texas and Charles Barone's entry today, The Games States Play: Graduation Rates, are Exhibits A and B the next time I have this discussion.
February 17, 2008
On eprints at Harvard and Full Monty open-access
I'm still trying to figure out the consequences of Harvard's Arts and Science faculty voting last week to push open-access publication of faculty work. This is fundamentally different from the occasional individual boycott of subscription-based journals. Harvard's faculty move is closer to Congress's push for a mandate that all grant-funded articles etc. be accessible to the public within a year of original publication. It is from these institutional moves that the publishing world will change. There is a simple, digestible explanation for the open-access moves related to grants (the public pays, so the public should be able to read) and the Harvard A&S faculty (we're established enough not to have to worry about the reputational economy of subscription journals). What flows from that is not necessarily clear, but we can reasonably assume that something will flow.
Reputational economies and the refereeing process
There are two broader issues here that need to be untangled. One is the reputational economy of academe, which is partly tied to the referee process and partly to post-publication reputational measures, such as citations. As physics has shown with arXiv, a discipline can survive quite nicely with a much fuzzier boundary between working paper and publication. But maybe that's because of the established reputation of physics. Similarly, I think history, classics, math, and other disciplines that have relatively high intellectual status (if not in resources) have nothing to fear from loosening up the refereeing process.
But what about other disciplines, including education? Education research already has a number of unrefereed publications that receive a lot of attention, largely because of differential access to publicity. Unlike medicine, where the top-reputed journals have publicists that distribute press releases (and you will see those regularly reported in the press), education has a different distribution of publicity. If you look at the indispensable Fritzwire, you'll see oodles of announcements for think-tank-based research symposia, and the ability to hire publicity folks does have an impact on what gets reported. As one colleague in another institution explained, when I asked why his work received far less attention in his area than the think-tank-based work of X and Y, which I thought was of lower quality, "Sociology departments don't usually hire publicists."
This is not to say that all think-tank-funded research is of poor quality, or that articles in refereed journals are of high quality: you don't know until you read the stuff. Nor am I suggesting that think tanks fire their publicists or stop doing the legwork to get attention. My point is rather that given the existing visibility of nonrefereed work in education, in addition to the status issues in education already, I suspect that faculty in education will be far more reluctant to let go of a peer-refereed model. Even though the notion of peer refereeing is historically and geographically bounded (see Einstein versus the Physical Review for one example), it is wrapped up in status issues. For Harvard's A&S faculty to vote for an open-access preference is one thing. For even Harvard's education faculty to go the same route? We'll see.
Economic models for open access
Since EPAA is described by John Willinsky as a "zero-budget journal," I'm living the tensions involved in open-access publishing. We don't charge either readers or authors for anything, though I have no compunction about asking authors to review other manuscripts as part of a reviewing ecology, and I've shifted the submission checkoff to alert authors that very long manuscripts or manuscripts with a number of tables may involve some paid preparation of an article post-acceptance. (I haven't yet asked authors to pay for such preparation, but it's a recent move.) Apart from the administrative issues involved, I am not philosophically inclined towards allowing advertising on EPAA. Maybe I should be, but I and many editorial board members would be uncomfortable with that. As a result, though, the burden of making the journal work falls largely on volunteer labor, or labor borrowed from other tasks. Even if I were to accept advertising into EPAA, I suspect that we would not receive much revenue from it, and it may not be worth the headaches involved.
The most visible open-access journal system, the Public Library of Science, relies on publication fees charged to authors, currently starting at $1,250. Here is the PLoS explanation of publication fees:
It costs money to produce a peer-reviewed, edited, and formatted article that is ready for online publication, and to host it on a server that is accessible around the clock. Prior to that, a public or private funding agency has already paid a great deal more money for the research to be undertaken in the interest of the public. This real cost of "producing" a paper can be calculated by dividing your laboratory's annual budget by the number of papers published. We ask that, as a small part of the cost of doing the research, the author, institution, or funding agency pays a fee, to help cover the actual cost of the essential final step, the publication. (As it stands, authors now often pay for publication in the form of page or color charges.) Many funding agencies now support this view.
For largely grant-funded disciplines, that's doable. For others? Not possible, either because an institution will not pay publication fees or because an author may be an independent scholar.
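PLoS's back-of-the-envelope calculation is easy to make concrete. Here is a minimal sketch of that arithmetic; the budget and paper-count figures are hypothetical, not PLoS's, and only the $1,250 fee comes from the post:

```python
# Sketch of the PLoS back-of-the-envelope calculation: divide a lab's
# annual budget by its papers to get a per-paper research cost, then
# compare the publication fee to that cost. Numbers are hypothetical.
annual_lab_budget = 1_500_000   # hypothetical lab budget in dollars
papers_per_year = 10            # hypothetical publication count
publication_fee = 1250          # PLoS fee cited above

cost_per_paper = annual_lab_budget / papers_per_year
fee_share = publication_fee / cost_per_paper

print(f"Cost per paper: ${cost_per_paper:,.0f}")      # Cost per paper: $150,000
print(f"Fee as share of that cost: {fee_share:.1%}")  # Fee as share of that cost: 0.8%
```

The point of the sketch is the asymmetry: for a grant-funded lab, the fee is a rounding error on the cost of the research itself, which is exactly why the model breaks down for fields with no lab budget to divide.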
Here's the bottom-line concern: For journals in non-grant fields that are currently subscription-based and where paid staff work on the journal, the transition to subscription-free operation is fraught with risk, and I suspect that forcing all currently-operating journals to go subscription-free would result in the closure of hundreds of journals. I don't think anyone wants that to happen, but there is no secure economic model for open-access journals right now. We'll see the development of hybrids for some time (such as the Teachers College Record in education research), and that will work to some extent. And my guess is that a number of journals would have no problem with open access for a substantial number of country-specific domains, to help scholars in countries that do not generally have institutional subscriptions to expensive journals. But that's different from the "Full Monty" open-access journal.
Where to go from here
Of the two issues, my guess is that the reputational-economy question is easier to answer. I suspect citation harvesting will be the basis of future reputation economies in academic publication. Google Scholar is incomplete and inaccurate, but so is ISI's Web of Science, and as long as academics don't treat bibliometrics as carved in stone, things should work out (or at least the problems are of a much lower magnitude than other problems we face). Unlike David Rothman, I do not see online comment forums and rating algorithms working, in part because few researchers can afford the time to invest in such forums or devices. Institutions that care about research will still use external reviews at promotion gates, and those will supplement other information.
The economic model of "full Monty open-access" is going to be harder to achieve. Maybe I should state what I would love, as an editor: for someone to figure out how to provide me great copyediting and compositing. Make it so I don't have the headaches of economic administration and post-acceptance detail work, and I'll probably swing towards accepting advertising or a sliding-scale manuscript-processing fee. That's going to be a bit of a challenge, since I have very particular ideas about how an article should look. But a clearinghouse that manages advertising, moderate manuscript-processing and publication fees, copyeditors and compositors, and has a quality-control mechanism for the copyeditors and compositors would do me a huge favor. And if this finicky editor will accept it, and if you can make it work economically, you just might make open-access work on a sustainable basis.
February 12, 2008
On excuses for unintended consequences
Oh, my: I head out of town for a week, and when I get back there's a trail of tears of blog posts on curriculum narrowing:
- Charles Barone, January 17
- Robert Pondiscio, January 18
- Eduwonkette, January 18
- Eduwonk, January 17-18
- Eduwonkette, February 4
- Eduwonk, February 6
- Ken DeRosa, February 6
- Robert Pondiscio, February 7
- Joanne Jacobs, February 7
- Eduwonkette, February 8
- Eduwonk, February 8
- Charles Barone, February 12
While there is some question about the extent of curriculum narrowing that followed NCLB (see: no causal language there), the basic argument in these entries is over whether NCLB creates incentives to narrow the curriculum and the extent to which the variation in curriculum narrowing shows that schools don't have to narrow the curriculum to do well on tests.
(...except for Eduwonk's red herring about low bars, which essentially claims that because states can set relatively low thresholds for proficiency, there is no incentive to narrow curriculum, stuff test prep into the kids up the wazoo, etc. No economist or behaviorist would accept an argument of "hey, the marginal change required is low, so that doesn't create an incentive for changed behavior." Either would reply that this is a question to be settled by evidence, not speculation. I'm not an economist or a behaviorist, but I don't buy the hand-waving about low bars, either. And, as 'kette points out, isn't NCLB supposed to change behavior? You can't claim that NCLB is changing behavior you like without acknowledging that it has the potential to provoke behavior we don't like.)
If we agree that thousands of schools are making poor decisions in response to the pressure of test-based accountability, then the operative question is, How do we help schools and educators make better decisions? Charles Barone and others suggest we hold up exemplars and say, "Follow them." That's the effective-schools-literature strategy, and we've paddled that boat since the late 1970s without getting where we want, so we know at least that it's not enough. Robert Pondiscio and other core-knowledge or other-curriculum standards folks would say, "Build the curriculum, and they will follow." That's a step towards regulating input more than outcomes, which I suspect will not be politically viable, but I may be wrong. George Miller, Ted Kennedy, and others propose to increase the number of measures used, with legislative language that assumes that AYP can be finely tuned. I don't buy that argument: test-based accountability is a cudgel, not a scalpel. My instinct is to say, Watch the decision-making, but that's because I distrust black-box handwaving, and I know it's hard to operationalize a procedural standard within a test-prep culture.
The meta-political question is deeper and one that I think most people understand in spots if not generally: you either own reform or you lose the reformer label. If you do not acknowledge the problems that surface in implementation and own them, you give up a huge chunk of credibility. Whether I agree with them on an issue or not, I give credit to Ed Trust for occasionally identifying problems with implementation and deciding to own the issue (e.g., growth models). They haven't done that with 100%-proficiency goals or test prep (yet), but it's a healthy dynamic where they have done it. You could say the same of Fordham and curriculum narrowing (or Diane Ravitch with the same issue plus test prep). Or Miller and Kennedy and 100% proficiency (though their concrete ideas on those points are Rube-Goldbergesque).
I haven't seen that nearly as much with Barone, Eduwonk, or some others, and the failure to own problems with NCLB ignores the fundamental fact of post-NCLB politics: Parents of public-school children are far more skeptical of test-based accountability than they were 5 years ago. Own the problems or lose control.
January 14, 2008
Teaching about what humans do
I've been tagged by Craig Smith, who asks, Why Do You Teach and Why Does It Matter? after reading Dr. Crazy's explanation of why she teaches literature. This comes on the heels of Stanley Fish's boldly hedonistic Epistle to Philistines and its expansion, last night's Epistle to Dumb-Ass Colleagues. (Okay, the posts were properly called The Uses of the Humanities, parts 1 and 2, but I agree with Margaret Soltan's reading of Fish Epistles I.) Fish's essays are in his typical eliding style, with just enough substance to frustrate me when he misses the obvious.
And here is one part of the obvious: an academic education requires the study of a variety of disciplines, including science, math, and also what humans do. Understanding "what humans do" requires behavioral sciences, social sciences, and humanities. While the configuration of disciplines is not carved in stone, the humanities will give a student a pretty good education in the culture that humans produce. One way to think about the value of any discipline or area is to think about the institutions that leave that area out.
Here is the other part of the obvious: you don't learn how to think in the abstract but in bumping up against ideas in specific contexts. That "bumping up against" phrase is important to me, because you don't learn anything if you are not challenged. Some subjects appear easier to you or me than others, but that perception is about subjects that are under a threshold of difficulty, not the absence of new ideas and challenges. Teachers can make learning easier, but that fact doesn't eliminate the need for challenge. And the specific context matters. As my favorite high school English teacher told us at the beginning of AP English, she taught writing, and she did it in the context of teaching about literature. She also taught us an enormous amount about literature in the course of that year. Even philosophers talk about topics. Care for a casual game of penny-ante Ontology?
In my case, I teach social-science and humanities perspectives on education, with a focus on history and sociology. The majority of my students come to me to fulfill exit requirements or in the midst of pre-professional training that reinforces psychological assumptions, and I have most of them for only one semester. I provide students with an additional set of lenses: humanities and social-science perspectives with which to examine schooling. When students leave my classroom, they should be able to explain how people fight over the purposes of schooling and the different models of how schools function as organizations (or don't).
In many ways, I am lucky to be in a field where I get paid for navel-gazing. My neighbors and fellow citizens should want me to teach students who want to teach that the world may not agree with their reasons for teaching or their view of the purpose of schooling; that the world's range of schools includes places that provide a very different education from their own experiences as they grew up; and that the job of teaching involves more than going into a room, shutting the door, and letting the gorgeous lesson plans unfold without interruption or difficulty. That's a fairly practical purpose. There is also the specific example of the argument above: Formal schooling is what humans do today, and studying the social context of formal schooling is a reasonable way to study what humans do.
In addition, when students are in my course, they have to write extensively and coherently about schooling. Over my career, I have taught over 2,000 students, most of them at USF, where I have never written a multiple-choice final exam and where I have always required that students write papers. Before my colleagues and I agreed to craft a single paper assignment across all of the undergraduate social-foundations sections, I assigned a "perspectives" paper where I collected sources on two or three recent "hot topics" in education and told my students, "This is not a research paper. I've collected all of the background you should need. Your job is to apply the concepts you have learned in the course to these hot topics." (I gave students the option of proposing a topic of their own, as long as I approved it in the first month of the course. Almost no students took me up on the offer, and as a result I stopped letting students propose topics, which tended to focus more on psychology than on the topics in my course.) In most cases, the common readings for the course never directly addressed the hot topics, so students couldn't just regurgitate ideas. I was mean! (See the bit about challenges above.)
Some of these assignments were more successful than others. I am still aghast that a few years ago, the majority of students who wrote about the "intelligent design" controversy in Dover supported teaching it alongside evolution in a science class. I graded them on the merits of the assignment (which is not synonymous with the question of what should be in the curriculum), and then explained my point of view in comments separate from the grading. But I challenge students' beliefs about education, no matter what beliefs they carry into the classroom, and I push students to justify their conclusions with plausible arguments.
And to continue this meme, I tag...
January 7, 2008
Credentialism, human capital, and ahistoricism
I recently had occasion to review a very small slice of the economic literature on the value of education, and it struck me that while both sociologists and economists struggle with arguments about the value of education in contrast with the value of a credential, they do so almost in mirrored ways. The economic argument about credentialism comes from some conservative economists such as Richard Vedder, who asks,
Can the strictly credentialing function be performed much cheaper through alternative approaches --examinations, IQ tests, etc.?... How much of "learning" in college is the attainment of needed skills (e.g., accounting, engineering skills) that are not readily learned on the job? And how much of it is merely an academic form of some endurance race, where the mere completion of the race denotes certain desirable character traits?
To Vedder and a few others, educational credentials signal employers about the inherent traits of potential employees. Thus, to Vedder, the Griggs v. Duke Power Co. (1971) case was horrible because it discouraged employers from using what he thinks is direct evidence of intrinsic personal value (IQ tests) and thus encourages the use of educational credentials as a proxy.
(Even apart from Vedder's misplaced faith in IQ tests, his interpretation of Griggs is a substantial misreading of the case on two important grounds. First, the Supreme Court also struck down the use of educational credentials by Duke at the time (in this case, high school diplomas) because they were not tied to bona fide job requirements. Second, Vedder ignores the important historical context: while the district court and appeals court decided that the plaintiffs had not demonstrated evidence of discriminatory intent to deny opportunities to African-American employees, the use of IQ tests and credential requirements maintained an uneven playing field: "Under the [1964 Civil Rights] Act, practices, procedures, or tests neutral on their face, and even neutral in terms of intent, cannot be maintained if they operate to 'freeze' the status quo of prior discriminatory employment practices.")
Vedder and some other economists are skeptical of the intrinsic value of education, seeing the use of credentials as a poor proxy or signal of some intrinsic values. In this story, people who enroll in and complete college essentially have the same value at the end of college as at the beginning, but the process performs a sorting function on traits that employers find valuable. Because of this argument, mainstream economists exploring the value of a diploma have spent enormous effort trying to disentangle the value of a degree from what they vaguely call ability.
Far to the left of Vedder, a number of sociologists (and some economists and historians of education) have also criticized the argument that education is primarily an investment in human capital. The social-reproduction argument claims that schooling is provided on an unequal basis, and that these unequal opportunities essentially confirm a preexisting social hierarchy. Thus, for example, Sam Bowles and Herbert Gintis provided evidence that even among young adults with similar ranges of scores on IQ tests, those from wealthier families were far more likely to attend college. The history of tracking provided a wealth of evidence of unequal curriculum opportunities and low expectations for students from poor families, and the conclusion drawn by the mirror image of the conservative credentialists was, don't bother reforming education. To them, we should change the economic system instead.
At a retrospective panel at the Social Science History Association in 2000 or 2001 (I don't remember which year), Herb Gintis said that he saw no conflict between these mirror images. Gintis was referring not to economists but instead to structural-functionalist sociologists such as Robert Dreeben as the mirror of his and Bowles's argument. Yes, he said, his 1970s version of social reproduction was as determinist as Dreeben's argument that schools served primarily to inure students to their largely predetermined place in the social order. Gintis said he and Bowles had just turned Dreeben's argument on its head.*
Some writers on credentialism have used historical trends to make their case. Thus, Richard Freeman's The Overeducated American (1976) and Thomas Green's Predicting the Behavior of the Educational System (1980) addressed the changing value of educational credentials. The classic sociological treatise on the topic is David Labaree's How To Succeed in School without Really Learning (1997). More recent is Claudia Goldin and Lawrence Katz's work, such as their recent NBER paper The Race between Education and Technology (2007). To Goldin and Katz, the relative value of educational credentials has changed in different directions over time, and the mid-20th century was simultaneously a period of rapidly increasing high school attainment, of wage compression (lower wage inequality), and of low relative value to education.
This link between inequality and the growing value of education over the late 20th century should not be treated as a post-WW2 trend, though. Goldin and Katz argue that at the beginning of the 20th century, the relative value of a high school education was quite high. (In some ways, this mirrors Green's analysis, but with fundamentally different mechanisms. Among other matters, Green treats the economic value of a diploma as a credential function, while Goldin and Katz are talking about their estimates of the human-capital value of completing a high school education.)
So how do we treat the credential value vs. the non-instrumental value of education? It is not a simple human-capital issue, but students do learn stuff in school. It is not just credentialism, but there is a "sheepskin effect" to a diploma. Over the past few years, I have explained to my classes that there are different layers to the relationship between schools and the economy. One is human capital. A second is the use of schools as sites for sponsorship, either at the individual level (what James Rosenbaum has talked about as networking) or at the mass level (credentialing). A third is the more mundane, lay understanding of networking: learning about and with others in a way that extends beyond one's own skills. (For a variety of reasons, I am not going to lump this with cultural capital or Coleman's concept of social capital.) A fourth level is at the level of social and political beliefs about opportunity (or the connections that Jennifer Hochschild and Nathan Scovronick describe between public schooling and the "American dream"). Cutting across these different levels are differences between the use of schooling for private purposes (individual or family competition) and the use of schooling for public purposes.
While this static sketch serves its teaching purpose reasonably well (and it's a lot easier to teach and more satisfying than Bourdieu's notion of cultural capital), it is not satisfying as a template for an historian. How did these different layers and purposes evolve?
And now, dear readers, I'm going to leave you in suspense, for I cannot answer that question to my satisfaction. Or at least not yet. But I'll take suggestions!
December 12, 2007
What would Nation X do?
Today, my morning paper had a column by Susan Taylor Martin, Finns set teachers free, with enviable results, discussing the secular, largely-standardized-testing-free Finnish schools that have enviable student outcomes by almost any measure.
On the one hand, this argument is extraordinarily tempting: See what the Finns do? We need to do that: provide substantial social welfare, provide higher status for teachers, then leave them to do their jobs without the corrosive testing regime we have in the United States.
But the historian in me says something different: Wait. This argument has been made before: no, not the one about Finland but the one about needing to follow Nation X, whatever that country happened to be in a particular decade. At the end of the 18th century, a strong push inside the new country said, "We're different from Europe ["Old Europe," as Donald Rumsfeld might put it]. This new nation is a fresh start. We need to be as different from Europe as possible." As David W. Noble argued years ago in Historians against History, that was a dominant theme among 19th century amateur history writers.
But there has also been a counter-argument: other countries have model systems of education, and we need to learn from them. (If you want the academic jargon, you can call it mimetic isomorphism when the rhetoric is all about national anxiety and panic and normative isomorphism when the rhetoric is "this is professionally best.") The most famous 19th century argument along those lines was that of Horace Mann, who traveled to Prussia before writing his seventh school report. While he noted the flaws of Prussian schools, he also thought they treated students much better than schools in Massachusetts. You don't have to beat your students to teach them, he argued, and Prussia is the proof. Why Mann went to Prussia to make that case is an interesting question. He should have known that one of the responses would be the reference to American exceptionalism, and he could have found reasonably kind teachers somewhere in North America if maybe not in Boston.
You can find the "we should do what Nation X is doing" argument sprinkled through the rest of the 19th and 20th centuries. In the post-WW2 era, the comparison nation was whoever our military or economic adversary was at the time, from the Soviet Union in the 1950s and 1960s to Japan and Germany in the 1980s. In the last half-century, many of these comparative arguments were projections of adult anxieties onto children. As many have pointed out over the years, most notably David Berliner and Bruce Biddle, schools are carrying the rhetorical water for adult failings. In almost all cases, the comparison is superficial, omitting information about context and structure. So the blithe suggestions for us to copy Japan in the 1980s often failed to mention the juku market (of private cram schools) or the common Japanese parenting repertoire of letting preschools socialize children through group pressures. Even academics fall into this trap: James Rosenbaum et al. wrote in Market and network theories of the transition from high school to work that professional networking between schools and work was great, and they pointed to Japan as a model... right before the Japanese economy dove into a 10-year downturn. Oops.
There are plenty of wonderful comparative education analyses one can make, but the standard rhetoric you see in American political discourse is usually shallow. Caveat lector.
December 7, 2007
Whose values would be valued in a neoliberal education world: Michelle Rhee's or Marc Dean Millot's?
What I see in Chancellor Rhee's approach, abetted, permitted or endorsed by Mayor Fenty, is 1) insensitivity and arrogance towards others, combined with 2) a reliance on fear to control staff, and 3) a considerable willingness not to apply analogous performance criteria and public criticism to themselves. Managers cannot be harder and harsher with others than they are on themselves and expect support from their staff, respect from their board, or trust from the public. And managers without all three cannot succeed in a turn-around.
There are three points here. One is the immediate and obvious one: Humiliation and denigration are not great motivators, nor is "making an example of" a significant proportion of the people you work with. I don't know Rhee, but this is not the first time I've seen reports of her approach to people being problematic. And Millot is right on the general principle.
The second point is that mayoral control of schools is no panacea and often a fig-leaf reform. As Monday's Washington Post story on the matter indicates, politics don't disappear with mayoral control. And that's why I was disappointed to see the brief mention of David Tyack's One Best System in Wong, Shen, Anagnostopoulos, and Rutledge's new book, The Education Mayor. Tyack showed how governance reformers in the early 20th century claimed to be "taking politics out of school" in changing ward-based urban school boards to nonpartisan boards often appointed by courts or mayors. Wong et al. seriously misread Tyack in claiming that the historical lesson is that we need to keep politics out of school. Tyack documented how the new boards may have been nonpartisan but were certainly political, elitist, highly connected, and contributors to instead of brakes on bureaucracy. We have seen plenty of the last (continuing bureaucracy) in Chicago and New York City, where mayoral control appears to have changed the address of the bureaucracy instead of the basic facts. Beyond the obscuring of bureaucratic continuation, the arguments in favor of mayoral control contain a romantic view that is all too familiar to historians: change the structure and you can reduce if not eliminate the presumably nasty consequences of education politics. There are at least two fallacies in this romantic view: an unrealistic view of structural change as a panacea, and the blithe assumption that we'd want public education without politics. As long as education is tied to citizenship, politics will inevitably be involved, and that's not a bad thing. (You think Brown v. Board of Education and Title VI of the Civil Rights Act of 1964 weren't political??)
The third point is obvious today but subtler when looking at the long term (or longue durée if you're a devotee of the French Annales school): there is a distinction between policy and approaches to handling people, and you don't know which will win out in the end. You can agree with the policy orientations of people whom you'd never trust (Millot's response to Rhee), and you can see and admire the human qualities of people with whom you have fundamental policy disagreements (me and Mike Huckabee, to take one example; I mean my view of him, not the converse). Often, the historical perspective focuses on the policy issues instead of the person, in part because extant records that focus on personality are often sensationalist instead of subtle. One exception is the record of a few common-school reformers from the early 19th century, whose views on "school management" were an intimate and conscious part of their oeuvre. While one or two of the crankier education historians from the 1970s portrayed Horace Mann and his ilk as 19th century Darth Vaders, top-down class-oriented stealers of democracy, the truth that good historians of various stripes recognize is that a number of class-conscious reformers had a serious argument about the need to be kinder to students. One of the arguments for women as teachers was that they'd be more nurturing. (Sexist? Yes. Motivated by some understanding that beating kids isn't great? Absolutely. Ignores the fact that in the 19th century, women as well as men beat students? You bet.) And Mann is famous for pointing out that Massachusetts teachers regularly beat and humiliated students... and for his argument that such mistreatment was unnecessary and wrong.
That fact notwithstanding, Mann, Henry Barnard, and others still fit into a broad movement of 19th century social reformers who held a set of overlapping traits, which in retrospect we associate with northern Whig parties, the growth of merchant capitalism, concerns about poverty and social disorder, a belief in the ability of the state to address such concerns, and an environmentalist analysis of social problems. When most educational historiography mentions Michael Katz's The Irony of Early School Reform, it is usually in reference to the vote abolishing the high school in Beverly, Massachusetts, but the Beverly story is only the first of three parts. The other two sections emphasize the rise and fall of environmental thinking in the mid-19th century. By the 1870s and 1880s, the optimistic environmentalism of a few decades before had become overshadowed by Social Darwinism and "scientific charity." Katz argued that reformatories and other social reforms overpromised, ignoring the corrupting influences of institutions and the expenses of running truly beneficial programs. (Disclosure: I'm a Katz student, or I was in grad school.)
Mann's twelve reports are the most interesting body of common-school reform writing to me, in part because there is so much complexity to them. He wanted teachers to be kinder to kids and to use more effective teaching methods. He certainly fit comfortably into the world of early- and mid-19th century Whig reformers, belonging to a temperance society and playing a key role in the creation of a state asylum while in the Massachusetts legislature. That reformist attitude was perfectly consistent with the background fear of social disorder. In a letter to a friend, Mann explained his acceptance of the Board of Education secretary position by saying, "Having found the present generation composed of materials almost unmalleable, I am about transferring my efforts to the next. Men are cast-iron; children are wax." Maybe he was influenced by religious riots in Massachusetts in the prior few years, but in any case that fear lasted until his very last report in 1848, which resonated with the news of revolution in Europe and the publication of the Communist Manifesto. We had to have common schooling, Mann said, or else we would have classes bent on mutual conflict:
Now, surely, nothing but Universal Education can counter-work this tendency to the domination of capital and the servility of labor. If one class possesses all the wealth and the education, while the residue of society is ignorant and poor, it matters not by what name the relation between them may be called; the latter, in fact and in truth, will be the servile dependents and subjects of the former.
For students of 19th century history, this should be familiar; it is an echo of the developing free-labor ideology in the North. And as Maris Vinovskis has pointed out, Mann had an approach to education that approximated human capital arguments:
But if education be equably diffused, it will draw property after it, by the strongest of all attractions; for such a thing never did happen, and never can happen, as that an intelligent and practical body of men should be permanently poor. Property and labor, in different classes, are essentially antagonistic; but property and labor, in the same class, are essentially fraternal.
Educate the tykes, and they'll all have some prosperity and a stake in society. But Mann's fear is less about the South than events across the Atlantic:
The people of Massachusetts have, in some degree, appreciated the truth, that the unexampled prosperity of the State,-its comfort, its competence, its general intelligence and virtue,-is attributable to the education, more or less perfect, which all its people have received; but are they sensible of a fact equally important?-namely, that it is to this same education that two thirds of the people are indebted for not being, to-day, the vassals of as severe a tyranny, in the form of capital, as the lower classes of Europe are bound to in the form of brute force.
To Mann, poverty and conflict lurk under the surface of an industrial economy, something that only education can forestall. This was not the naked instrumentalism that Bowles, Gintis, and others claimed in the 1970s, but neither were common-school reformers unconnected to early 19th century industrialization: they were intimately vested in it and saw education's connections to it in multiple ways, including ameliorating social tensions.
In the long run, the more child-friendly views of Mann did not become a part of bureaucratic school culture. As hundreds of my students have pointed out to me over the years, common school reforms were far more successful in changing the structure of schools than in directly affecting the cultural practices inside a classroom. Some things changed, certainly: as other historians (e.g., David Tyack and Larry Cuban) note, chalkboards slowly became institutionalized in school construction, and by the early 1960s, Mann's view of an 'unvarnished' Bible reading instead of sectarian instruction had become the norm. But those were compartmentalized practices, the type of add-on that Larry Cuban has frequently noted is easier for schools to accommodate. (Note: I am dramatically underestimating the issues involved in shifting away from sectarian instruction. Nonetheless, the broader point about compartmentalized change stands.)
One operative question that 1970s and 1980s historians wrestled with is the extent to which the growth of bureaucracy and the decline of early 19th century environmentalism were the consequence of early industrial capitalism. We have a much richer and more complex picture of 19th century school history today, and yet that question remains (or should remain) interesting. The truly large-factory model of education tried in early 19th century cities died as many schools shifted from monitorial schools to smaller, self-contained classes and choral recitation. One could argue that the organization of graded elementary schools in many ways mirrored the less-mechanized and smaller factories in the U.S. better than it did some of the much larger factories in England, where monitorial instruction was invented. But the argument emphasizing the parallel between graded elementary schools and factories overemphasizes the importance of larger cities, when much of early industrialization happened in towns rather than the largest cities.
And that city-town distortion ignores rural places. As Nancy Beadie's recent research uncovers, the building of schools in small towns and rural places may have been as important a part of local economic development in indirect terms as in any human capital effects. The marshaling of local resources for something as simple as church or school buildings required a complex web of economic and social relationships, quasi-private loan networks and reciprocal property relationships that helped incorporate small towns and rural places into a regional economic watershed. ("Watershed" is an unfortunately naturalized metaphor, but I'm not sure there are better alternatives: web and ecology are as inapt.) There's far more to industrialization than building schools, but Beadie's work shows the potential subtlety of schooling's effects and the relationship between economic life and formal education.
And even the subtler views skip some important topics, including the role of mid-19th century higher education, a fuzzily-bordered sector that included institutions called academies, high schools, normal schools, and colleges. And then there's the growth of Sunday schools, and the links between Northern missionary groups and Reconstruction education. So I'm still feeling a bit at sea, wanting a more synthetic interpretive history of 19th century education that wrestles with the bigger economic questions.
What is unquestionable is that Mann's kinder, gentler school didn't survive in the nascent bureaucracy that he helped build. School bureaucracies were easily corrupted into hierarchies that held low expectations for the poorest students. We have the historical example of a structurally-oriented school reformer who still held complex views about what should happen inside the classroom, views that did respect the potential and humanity of children in ways that we should not ignore. Yet his humane vision of schools lost out, at least for most of a century. The structure he imagined did not require humane treatment of its inhabitants.
So today, as we witness another experimental phase in the structure of American education, I read Marc Dean Millot's blogging with both a smile and heartache. Millot writes with passion about treating people with respect. Yet he is in favor of building the same type of structure that Michelle Rhee favors. Whose ways of treating humans would win out in that structure?
November 4, 2007
A twofer on Delaware student program and social justice, or "Let's not confuse institutional prerogatives with students' propensity to make mistakes"
I normally don't waste bytes just to point to someone else's blog and say, "What (s)he said!" In this case, though, Timothy Burke's engagingly garrulous entry on the University of Delaware student orientation controversy serves double duty: it describes the obvious about the University of Delaware program and also helps explain my discomfort with official statements by colleges of education that they want students to foster social justice:
... with the Delaware residential life program, there's nothing wrong per se with asking straights when they first realized their orientation or when they came out as straights. That is, nothing wrong if that's a sly or mischievious aside in a personal conversation about sexuality, or a subversive question directed at a public figure who is intensely anti-gay, or as a way in an intellectual discussion about the history of sexuality to illustrate what the ten-dollar word 'heteronormativity' actually means. Turning the question into a set part of a pseudo-mandatory workshop (there's some confusion at Delaware about how strongly students are encouraged to attend) takes everything valuable out of it. It turns something sly into dogma.
Burke is putting this observation in the context of a nuanced discussion of the institutional context of resident student activists and the role of college as a place where young adults learn by being bold and frequently making mistakes. What makes sense for student activists or activists engaged in civic life often becomes self-parody when oversolemnified in an institutional context.
Such oversolemnification is all too typical in the debate over dispositions and social justice in teacher education. In several contexts, I have heard colleagues in social foundations or at my institution upset at the attack on the demand that students display a disposition towards social justice... a term now closely associated with the National Council for the Accreditation of Teacher Education (NCATE). Because NCATE referred to social justice in a glossary item that mentioned it as a potential disposition that colleges might assess students on, and because some colleges did some patently stupid things when students expressed dissenting political views, that term became a magnet for critics of college policies that appeared to infringe on students' rights to political expression. Respondents in education have sometimes interpreted that attack as a neoconservative attack on teacher education more broadly.
The truth is that the attack on social justice and dispositions is both a floor wax and a dessert topping. Some of those who have attacked teacher education's and NCATE's move towards dispositions have been social conservatives upset with the nature of teacher education. At a June 2006 hearing in front of the National Advisory Committee on Institutional Quality and Integrity, critics of NCATE included the National Association of Scholars and the American Council of Trustees and Alumni. But that's not the entire picture. Critics also have included the Foundation for Individual Rights in Education (see FIRE's statement on NCATE and dispositions). FIRE's staff and supporters have included conservatives, but they have also included people from across the political spectrum, a group of those who are reasonably described as academic libertarians.
Academic libertarians focus on campuses as a site of debate, where the job of a university is to encourage a discourse of disputation. In this environment, assessing the alignment of one's thoughts with any template with ideological overtones strikes academic libertarians as obnoxious, an affront to students' freedom of thought. While many defenders of assessing dispositions point to the evaluation of behavior rather than thought and the interplay of that behavior with professional expectations, critics are skeptical, especially when some places (such as LeMoyne College) have been caught with their hands in the cookie jar... or the brains of their students.
The vulnerability of teacher education to such criticism is not just the visibility of a few outrageous idiocies by specific teacher education programs. To some extent, the coalition between social conservatives and academic libertarians has focused criticism in a way that often dissipates when the criticism comes from just one quarter. But the internet is also partly responsible, because that copper or fiber-optic cable is a double-edged sword, bringing visibility in both good and bad measure. In addition, teacher education is more vulnerable because of the historical disrespect for teachers in general and for teacher education within colleges and universities.
But there are a few other issues to consider, issues that schools and colleges of education control. One issue under the control of teacher education programs is the way faculty and administrators address the inherent tensions of trying to stuff a professional preparation program into a relatively short period, at most three or four years in an undergraduate program. We'd like teachers to leave college with a fantastically well-rounded liberal-arts education, professional information about educational psychology, historical and social-science perspectives on education, professional ethics, assessment, teaching the methods of their field, content expertise in their field, something about the practical matters of running a classroom, field experiences while learning everything else, and a capstone experience with a final internship and structured feedback and reflection.
To put the problem bluntly, if you can do all that for all students in undergraduate teacher education, I also want a pony. The telling choice is what you give up in professional programs, more than in almost any other type of education. That's not even considering the newer demands in areas such as special education, where "highly qualified teachers" now have to demonstrate content expertise in every curriculum area. So the curriculum discussions in teacher education inevitably revolve around the desire to somehow stuff more into less. If someone could extract the essence of half of our curriculum and put it in a pill, I know a bunch of education deans who would be very happy.
In the midst of this perennial stretch, teacher education stakeholders and institutions talk about accountability as outcomes. Outcomes? Sure. We'll be responsible for what happens with our teachers. So what does that mean, in an era when tracking graduates is a bit tough? Well, we'll certainly be responsible for the passing rates for graduates on state exams, and their meeting our state standards, and ... hmmn... something else. Someone must have suggested dispositions (the history of that would be a great dissertation topic!), and the idea met multiple needs. Stakeholders in the NCATE orbit were reasonably satisfied that teacher education programs were at least addressing accountability. Within teacher education, dispositions met several needs, and the concept could be used to justify both keeping some things in the curriculum and removing others, depending on how one phrased one's goals and preferred dispositions.
Dispositions have also neatly coincided with a psychological approach to education. Kurt Danziger has explained how the history of psychology is intertwined with the bureaucratization of public schooling in the early 20th century U.S. That psychologization continues, far beyond the knowledge of educational psychology that is the bread and butter of my department colleagues. (As my fellow historian Erwin V. Johanningmeier has noted, there is some considerable irony in the fact that one of the most well-known educational psychologists, David Berliner, has written more about the social conditions of schools in the last 15 years than about educational psychology.) I am not sure if any professional field outside education or social services would ever frame their competencies as anything close to dispositions -- do business, legal, medical, engineering, or architecture programs have anything similar? Part of the difference is the much shorter formal apprenticeship that teacher education has, but some is due to the role of psychology within education.
Both the University of Delaware residency program and the existence of dispositions border on a therapeutic approach to education, implying that part of the job of college is the reconstruction of behavior and personality. I am not one to believe in the fairy tale that education only touches the intellect; college is a life-changing experience, no matter the outcome. Yet there are reasons to be very cautious about how we engage in the deliberate process of social engineering that is inherent in education.
To some extent, I am sympathetic with part of the idea of dispositions: it is extraordinarily hard to assess the fit of a student with professional expectations, and at some level one has to find proxies for professional competence while people are still in the program. The notion of assessing dispositions is an attempt to find some proxy for that fit apart from course grades. And given the relative flexibility of dispositions, some colleges of education do a much better job of treating them reasonably than other teacher education programs. But there is a foundation of psychological assumptions behind them, and the same flexibility that allows reasonableness also allows LeMoyne and its ilk.
Given that set of psychological (and almost therapeutic) assumptions, a set of dispositions geared to social justice is an oxymoron. Any definition of social justice I have seen talks about the social context, the broader structures of society. To imagine that one can accomplish social justice by changing the personalities of teachers ignores the theoretical arguments involved in social justice. To change the broader structures of society, you have to change the broader structures of society, and teacher goodwill doesn't really enter into it (though teachers' acting ethically towards their students does matter, just in a different sense). Mandating that students demonstrate a disposition towards social justice is likely to be a sloppy description of an institutional mission at best and an effective generator of cynicism at worst.
There is some other stuff that needs to be said here, about how an ethic of teachers' being at the heart of social justice is a potential form of exploitation. (Brief form: those who think KIPP schools are the solution for education and those who want teacher education programs to revolve around social justice have the same assumption about the broader role of teachers.) But this entry is far too long as it is, and I should just finish with this: I desperately want the world to have more justice, and I work towards that end, but I am a better teacher if I model those beliefs than if I try to get my students to parrot them.
November 1, 2007
Social annotation for teaching how to read difficult material
A few days ago, I raved about the possibilities of social annotation. What I barely touched were the teaching purposes of social annotation. Let me provide an example from my masters course in social foundations of education. Below is the root to a discussion thread over the past week on the Seattle and Louisville desegregation cases that the Supreme Court ruled on this spring. The following contains my comments to students, links to the opinions that have my annotations (hold your cursor over the underlined passages to see the annotations), and a few starting questions.
Parents Involved in Community Schools v. Seattle School District No. 1 (2007) was as fragmented as the Gratz and Grutter cases. Below are links to the annotated pages of the opinions.
- Plurality opinion (Roberts)
- Thomas concurrence
- Kennedy concurrence
- Stevens dissent
- Breyer dissent
Roberts's opinion is called a plurality because a majority of justices agreed to the decision but only four (Roberts and three others) agreed on the same reasoning; Kennedy agreed with the decision but for his own reasons. This is a particularly difficult set of opinions to read -- in this case, it is Breyer's dissent that is long-winded (not Thomas's), and then the plurality opinion and the concurrences both refer to the dissent.
A few questions:
- Does this case shut the door on voluntary desegregation? If not, what other options are available?
- Regardless of whether there are options available in the future, the decision will make districts think three or four times before including racial classifications in formal plans to create more diversity in schools. Is that a good or bad outcome?
In my during-semester survey, a few students offered the following comments about Diigo when asked what had helped them learn in the course:
- The Diigo annotation technology has made reading the court cases far more enriching. It [is] as though you are in the room while I am reading the cases.... I wish there were a way you could do the same for all the other readings.
- It really helps to bring clarity to the court cases by reading your comments. I would be confu[s]ed on some judgements or miss important points without the comments. It is the next best thing than [to] sitting in a lecture and discussing interpretations.
Let me be honest: providing this annotation requires a lot of time, and that is time sucked away from other activities (being more proactive on the discussion board, or creating more formal presentations). But I know from prior experience that some readings such as court opinions desperately require some assistance for students, and I was gratified to have my judgment confirmed by students who felt the effort helped them.
October 30, 2007
Social annotation and the marketplace of ideas
David Rothman has a wonderful idea from the growth of social annotation tools and the development of an open e-book format:
How long until savvy writers pester publishers to let them do interactive e-books? -- where readers' comments can appear in relevant places in the texts or elsewhere in the books. Imagine the possibilities for smart nonfiction writers and those in dream-with-me genres like romance fiction.
I am experimenting this semester with using Diigo to show students in one course my annotations on Supreme Court desegregation opinions. I've been able to provide translations of legal terms (certiorari, de jure, de facto, etc.), tell students where they can skip (e.g., issues of standing, which are tangential to the topics at hand for the course), what passages to read in depth, and some questions to think about specific passages.
There is already BookGlutton's idea for Unbound Reader, based on the epub standard. For those wondering what the One Laptop Per Child initiative is for, imagine an eight-year-old reading a copy of a story and seeing and replying to the comments of other eight-year-olds around the world on the same passage.
For those who wonder about the monetization of this -- how can anyone make money off free books? -- Rothman has an obvious answer:
A community approach is worthwhile in itself, but along the way would reduce losses to piracy. You're less likely to steal from someone whom you and your friends respect. What's more, forum participation could be among the rewards for those who paid voluntarily for books distributed under Creative Commons licenses.
I suspect that savvy musicians think of mp3-sharing in similar ways, and if we're headed back to the days when vinyl records were a way to get musicians concert gigs, maybe free books are a way to draw people into other ways to remunerate authors. For those in genre fields (romance, science fiction and fantasy, mystery, etc.), midlist authors might find that approach enormously attractive. And those of us in academe? There are some obvious possibilities that appeal to me: models that provide open access to reading while preserving some possibility of revenue where appropriate, such as books that are free online but carry a Creative Commons license requiring a "binding license" fee, so anyone can read a book but publishers or copy shops must pay to distribute bound copies. This idea adds to that imaginary repertoire.
As Rothman notes, this potential requires a standard for annotation to be folded into the next generation of epub standards.
October 23, 2007
Poor teaching != indoctrination
The response to the AAUP's statement Freedom in the Classroom (released September 11) has been fascinating, from Peter Wood and Stephen Balch's tendentious attempt to fisk the report (thereby burying the legitimate criticisms) to Erin O'Connor's more focused criticism to Stanley Fish's column this Sunday, where he takes the statement (rightly) to task for an inane example. First, let me quote Fish's distinction between teaching with controversial subjects and indoctrination:
Any subject -- pornography, pedophilia, genocide, scatology -- can be introduced into an academic discussion so long as the perspective from which it is analyzed is academic and not political.
This is Fish's "academicizing" (see the end of an August 2006 article about Kevin Barrett), and apart from the suggestion that properly teaching a subject requires anaesthetizing the student, it is one reasonable slice at the definition of indoctrination.
The AAUP subcommittee made its largest mistake in choosing a horrible example of teaching that should be protected from political scrutiny:
Might not a teacher of nineteenth-century American literature, taking up Moby Dick, a subject having nothing to do with the presidency, ask the class to consider whether any parallel between President George W. Bush and Captain Ahab could be pursued for insight into Melville's novel?
In contrast with Fish, I think that this choice of examples should be protected from claims of indoctrination, because faculty should be allowed and even encouraged to insert passion into the classroom, even when an attempt fails. But a teacher using such an example should not be protected from claims that this is simply an awful instructional choice. One of my college teachers claimed that Dostoevsky's portraits of psychological imbalance predicted Hitler's rise and the Holocaust. I suspect that he was trying to enliven the class, not indoctrinate us (and what would he have been indoctrinating us into, the Cult of Heterodox Dostoevsky Social Criticism?). We stared at him, mouths agape, wondering what he had been smoking. Great books, mediocre class.
So, like Timothy Burke (both in talking about ACTA's "How Many Ward Churchills?" screed and in discussing teaching in general), I am more concerned with inept teaching than indoctrination, in part because I strongly suspect that most students read crass political didacticism as incompetence as well as or rather than indoctrination.
The practical question is what no one (including the AAUP) has addressed. Suppose that a student complains about the Ahab/Bush comparison. What do we do? I agree with Stanley Fish that the comparison is not professional. Does that mean we toss the professor out on his or her ear? The AAUP statement refers vaguely to academic due process:
When that [allegation of improper conduct] happens, sound professional standards of proper classroom conduct should be enforced in ways that are compatible with academic due process. Over the last century the profession has developed an understanding of the nature of these standards. It has also developed methods for enforcing these standards that allow for students to file complaints and that afford accused faculty members the right fully to be heard by a body of their peers.
That's all fine and pretty, but while the statement seems to imply that universities have developed ways of addressing improper instruction, such a conclusion is simply unwarranted. We know how to handle allegations of research misconduct (or at least we think we do until politicians get involved), there are reasonable guidelines from the AAUP on extramural utterances and behavior, and I suspect most universities have formal academic grievance procedures (where a student can appeal an academic decision), but we professors don't have a clue how to handle allegations of teaching misconduct except where there are bright-line standards such as showing up to class and not hitting (on) students.
I don't mean that faculty always stand idly by when they observe or discover a peer's teaching behavior that they find troubling in a variety of ways. But in terms of formal investigations -- what warrants special attention apart from annual reviews and how to gather and evaluate evidence -- I suspect most institutions have absolutely no procedural guidelines. And therein lies the problem: without procedures set down somewhere, administrators under pressure will resort to ad-hoc decisions and processes, which will inevitably violate academic freedom and erode institutional integrity.
The first line of defense against ad-hoc-ism is some proactive evaluation of teaching, the type of thoughtful peer observation and probing that Timothy Burke advocates. Yes, that requires some time and resources. Many good things do, and in many places (such as my institution right now, under enormous budget pressures), that ideal is unlikely to evolve quickly. Most institutions have some annual evaluation, which has an indirect evaluation of teaching through student surveys and materials submitted by the faculty member. This is better than nothing from a variety of perspectives and much worse than the ideal.
The second line of defense is a procedure for screening and evaluating allegations of serious teaching misconduct and incompetence. Here is where most institutions are susceptible to pressures. While most institutions have established procedures when students gripe about a grade, no one has thought through all the other grievances and griping. Even the vaunted-by-ACTA University of Missouri-Columbia Ombudsman program has "Under Development" as the entire content for the Grievance Procedures of Academic Units page. The world will have to see if and how such procedures develop or if they remain largely ad-hoc.
The third line of defense is a system to coach students on reasonable assertiveness, how to raise issues in a course that expand discussion and educational opportunity. This coaching is necessary both for the shy and the brash student. I try to give students opportunities every semester to give me early feedback on a course in an anonymous way, and while I provide that structure and generally try not to bite students' heads off, some students will not tell me their concerns until long after they become worried about an issue (whether it is instruction or assignments or grades or something else). Other students are simply brusque, either with me or other students, and while (I hope) I'm fairly easygoing about criticism, some faculty are thin-skinned or may misinterpret student expressions of concern. There are right and wrong ways to point out that a class omitted an important perspective, and we do students a disservice in assuming that they come to college knowing the right way to criticize class.
This need for education starts with the usual front-line "ears" in a university: chairs and the secretarial staff of university presidents. My chairs have always tried to redirect the student back to me and also let me know when a student raised a concern with them. Presidents' secretaries don't often have the professional experience to tell students to go back to the professor, and when the presidential staff sends a heads-up message down the line through a provost, dean, and chair back to the faculty member, carelessness with the wording and inevitable gaps in communication sometimes turn an intended "here's a heads-up" into a perceived "you better deal with this or else."
The fourth line of defense is a bright-line standard for when administrators should even be thinking about intervening in the middle of a term, in contrast to gathering evidence about an allegation at the end of a term. Starting an investigation in the middle of a class is a serious step that can interfere with the learning environment as much as many of the practices that students might complain about; think about what would happen if the Proper Instruction Police interviewed students in a class regularly, asking what they thought of the politics of the instructor and the assignment du jour. I don't think any administrator would ever imagine letting that happen, but starting an investigation about a class in the middle of the term always carries the risk of educational iatrogenesis. Here are my suggested standards:
- Investigate when the allegation is of behavior that is dangerous to students.
- Investigate when a prudent and yet reasonably thick-skinned person would agree that a student's right to education is jeopardized by the alleged behavior (e.g., screaming at students, racial discrimination, etc.), if allegations come from several sources that are credible. Thus, if the majority of a class complains that the instructor is swearing a blue streak and failing to teach physiology when the course is a required part of the nursing sequence, someone needs to look into those allegations, but one student's complaint should not trigger a full-blown set of interviews with all students in a course.
- Gather evidence passively during a term if the allegations are serious but the claims come from isolated sources. By passive data collection, I mean planning how to gather evidence at the end of the semester and waiting to see if there are other complaints from other credible sources.
- Refuse to use evidence that is gathered illegally or without provenance. For example, Florida law prohibits audio recordings of people who have a reasonable expectation of privacy without their permission--thus, I have been told that surreptitious video of Florida classrooms posted to YouTube would almost always be illegal unless the faculty member agreed to such guerrilla recording and the student used a shotgun microphone so that no fellow student's voice was picked up.
- In all cases, the faculty member must be told promptly of student concerns, and where the administrator has decided that no immediate intervention is required (i.e., in the vast majority of cases), that decision should be stated explicitly.
Comments are most welcome on this sketch.
October 19, 2007
On metaphors and people
A few days ago I commented on an Eduwonk entry about Michelle Rhee's wanting more convenient dismissal options for non-unionized central-office staff... and teachers, in part to give some positive reinforcement for the decision to allow comments and in part because there are some interesting ideas in the entry that I wanted to follow up on. (You'll have to go there to see the comments.)
But I looked back at the entry last night, and upon rereading, the last paragraph stuck in my craw:
In the case of D.C., this debate is actually larger than whether Michelle Rhee will be able to fire some people from the central office and some low-performing teachers. It's a proxy for how hard she (and Mayor Fenty) will push on the schools. If they lose this one it's an enormous setback and the wait them out game will start in earnest. If they win, they might not have to fire so many people anyway because it will be a clear signal that business as usual is over. For Rhee, a lot riding on this. Insert your own metaphor here.
While we may think partly in metaphors, I'd prefer to think of debates over the terms and conditions of work in something other than a metaphorical sense. Maybe this is because I like the second formulation of Kant's categorical imperative (the one about treating people as ends in themselves, never merely as means), and if so, I'm a softie for unreadable German philosophers. But I don't think either children or adults are metaphorical vehicles. They're people, and we should talk about them as such.
Beyond that, I think Andy Rotherham is mistaken here about the use of power. I've known plenty of people in academe and the K-12 world who have paid far too much attention to symbols of power, from the all-too-important in-person brush-off to inflating the importance of a particular goal far beyond what it can possibly mean in reality. Power is also more subtle than the imposition of one's will through forceful means. The principal who inspires and convinces a school's teachers to work their tails off is more powerful than any petty tyrant who might occupy the same office. The true setback in DC would be if Rhee focused more on acquiring power than on using it wisely.
Addendum: I realized a fast read of this entry may lead readers to erroneously conclude I think Andy Rotherham is into power games. That's not my argument or assumption at all; I suspect that in his own work environment, Andy pays attention to the interpersonal touch and not to imposition of his will on the people who report to him. Maybe the same should be true in school systems...
September 15, 2007
Department of Something
I've wanted to respond to an early August post of Timothy Burke's for a few months, but I've been swamped with a number of other tasks, and it took an early Saturday morning of some other mundanities to justify my splurging on this reply.
This really started with Mark Bauerlein, so you can complain to him about having to read this. Back in July, Mark Bauerlein wrote An Anti-Progressive Syllabus as an IHE column, suggesting a rather eclectic set of conservative readings he wanted included in literary theory anthologies/courses (such as the Norton Anthology of Theory and Criticism). That column prompted a long discussion in The Valve, to which Bauerlein responded (in part):
Luther [Blisset] raises a significant point that goes deeper to the heart of what is and is not relevant in a Theory course. He says that this course should teach students the ideas and approaches that have prevailed in the discipline for the last 30 years or so. But what if the problem lies in precisely what the discipline has considered important? That's the real issue. For me, literary/cultural theory has traveled so far into itself, so far into advanced humanistic study, that it has lost touch with both the basic undergraduate classroom and with cultural policy decision-making in the public sphere.
Later in the thread, Adam Kotsko suggested that the problem with the items list was that they had not been used in literary theory:
English professors aren't using "conservative" figures as sources for literary theory. The syllabus of the Theory course is not the place to make this change--rather, [Bauerlein] should be arguing for the deployment of [Francis] Fukayama (or whoever) in literary scholarship. "Hitler Studies After the End of History: A Fukayaman Reading of White Noise." In fact, MB should be out there doing Fukayaman ... scholarship himself.
To which Tim Burke added in his blog,
So don't tell people they ought to make their students read Hayek or Horowitz. Explain what a hermeneutics that riffs off of Hayek actually looks like. Illustrate it. Do it.
Burke then riffs off of a comment in the original The Valve post about English morphing into a department of Everything Studies (an idea that also appears in Bauerlein's comment quoted above, albeit in the context of teaching and cultural policy decision-making in the public sphere, a term that's just a tad obscure), to argue that, in fact, there should be such a department:
I want to collapse all departments concerned with the interpretation and practice of expressive culture into a single large departmental unit. I'd call it Cultural Studies, but I don't want it to be Cultural Studies as that term is now understood in the American academy. Call it Department of the Humanities, or of Interpretation, or something more elegant and self-explanatory if you can think of it. I want English, Modern Languages, Dance, Theater, Art History, Music, the hermeneutical portions of philosophy, cultural and media studies, some strands of anthropology, history and sociology, and even a smattering of cognitive science all under one roof. I want what John [Holbo] is calling Everything Studies, except that I want its domain limited to expressive culture.
Burke is making an argument for something beyond interdisciplinarity; I've heard a colleague describe it as transdisciplinary. The nub of this argument is how Burke acknowledges that he has a limited set of skills: "But the limits on our research and interpretation of expressive media are provisional and personal. There's no reason to turn them into prescriptive claims about the nature of interpretative work for everybody else." Burke is arguing that disciplinary boundaries are constructed.
Well, that's not quite it. Burke is frustrated with disciplinary parochialism:
What I'm sick of is people who want a "conservative tradition" picking only the neo-Arnoldian parts of this list and then thumbing their nose at the rest as if it is self-evident that no self-respecting critic would want to talk about the cognitive, historical, economic, ideological questions that surround expressive culture, that all that crap is some social scientist's dreary business and get it the fuck out of my English Department. Just as I'm sick of a historicist refusing to take hermeneutics seriously, or some Franksteinian Frankfurter regarding the practical questions involved in actually doing cultural production as some sort of low-class consorting with the hegemonic beast.
I've written about academic parochialism and my own frustrations with it. I doubt whether that parochialism justifies the destruction of departmental boundaries, but let me focus on Burke's stronger argument, that we should be aware of how our disciplinary boundaries are institutionalized for practical or political reasons, not issues of fundamental divisions in knowledge:
The problem of course is to have world enough and time. We cannot write everything, read everything, teach everything. Scholars and publishers have to make decisions about what they value: which graduate student should advance or be rewarded, which work should be published, who makes the cut in a syllabus, which courses do we offer and not offer? Canons and disciplines are a pragmatic shorthand that keep us from having to rehearse our wanderings through Everything every time we set out to teach and research Everything. But that's all they are. They're not complete ontologies, not totalizing politics, not comprehensive philosophies.
I've heard arguments in favor of some version of transdisciplinarity before, from vapid progressive educational philosophies to conversations with my campus colleagues. Burke's is the most thoughtful argument on this point that I've read or heard, and I don't think anyone who's aware of a smattering of the sociology of knowledge would disagree with his basic point: disciplines are malleable entities, in theory and in practice. If playing along disciplinary boundaries is useful--what we call interdisciplinary work--then maybe destroying the boundaries would be even better.
Burke is wrong on several grounds, and I state my disagreement as someone whose academic work is necessarily interdisciplinary as a teacher (of an interdisciplinary field), as a researcher, as a member of several interdisciplinary communities (education policy, social-science history), and as a university employee (an historian in a college of education). In my own career, I don't think I've behaved as a parochial academic.
Yet Burke is wrong from three perspectives. First, in colleges and universities, departments are essential to organizing the professional life of academics. Every so often, institutions attempt to abolish departments, and these are usually unhappy experiments (such as in Peabody College for Teachers in the 1970s, before its absorption into Vanderbilt University). There is a certain amount of support that faculty need in the practical life of running an institution, and beyond a certain size, large collections of faculty are unwieldy to support or administer. In a more practical sense, however, the peer relationships among colleagues that are part of evaluation, tenure, and promotion decisions require enough common understandings that decisions avoid the capricious quality inherent when you just don't understand someone else's work. As an historian in a department with a majority of psychologists, I've seen the hard work that such interdisciplinary structures require. My colleagues and I all putatively focus on education, but that doesn't eliminate the frictions that occasionally need to be smoothed about our research traditions, the different types of questions we raise, and the ways we answer those questions. I've worked in a Department of Everything Studies (Education Division), and Burke is glossing over the hard work required in such an arrangement.
Even if we could ignore the institutional needs for departmental structures, we should not ignore the importance of providing depth of experience in research education. What would a Ph.D. produced by a Department of Everything Studies look like? Even if disciplinary traditions are constructed, and even if disciplinary boundaries are movable, they are sufficiently coherent to provide a foundation for advanced research education. Graduate students need to focus on something, both in terms of interests and also in terms of scholarly tools.
I suspect that Burke would want people with different sets of skills and interests in his Department of Everything Studies, but they'd have to have graduate education in something, and I suspect that couldn't happen in a Department of Everything Studies, or a Ph.D. produced by such a department would carry enormous risks of having eaten a thin intellectual gruel rather than consuming something of substance. I often worry about that risk in colleges of education. (In our college, a formulaic program structure is the institutional answer to such concerns, with required courses in ed psych, social foundations, statistics, and research design, as well as a certain number of courses in one's specialization and in a cognate field. Of course that doesn't guarantee a coherent, sensible program; advisors still need to provide considerable guidance.)
Finally, if we could wish away the institutional need for departmental structures and a graduate student's need to study something, there is the question of whether undergraduate students should study something instead of everything. At least in one context, we can wish away those departmental needs: In a small liberal-arts college, where the problems of scale and graduate education recede, one could experiment with an undergraduate Department of Everything. Those who worry about students' ability to think critically and develop other generalizable intellectual skills might approve of such a department, and I suppose it would fit into the views of others who want some sort of universal assessment of what students learn from college (such as those who like the Collegiate Learning Assessment).
But I do not think that we learn anything as general as critical thinking or even subdivisions of it (such as essay-writing) absent studying something specific. Yes, our intellectual skills are generalizable, but we don't develop them absent topics. Each topic then invites its own set of approaches, including ways of categorizing the subject, raising important questions, and answering those questions. One of the reasons Kotsko and Burke could call Bauerlein on the carpet for failing to show what a Fukuyamaesque literary analysis would look like is that there exists a mental model of what good scholarly tools for literary analysis should look like, and they have a sense that tossing off names doesn't fit the bill. Where did that mental model come from?
Maybe my point about needing to study something would be useful with a contrast. What makes history different from biology is a set of limits to the topic, the questions that historians and biologists raise, and the ways that they answer them. And while there are interesting overlaps between the two (such as how humans have shaped the environment, and vice versa), even in the overlap environmental historians such as William Cronon and Michael L. Lewis are going to ask questions different from the questions their biology colleagues ask and have different ways of answering them. Part of what we learn from interdisciplinary work is how to ask questions differently, something that can change our own disciplines, but that can only happen when there are differences in approaches.
Well, responds Timothy Burke from the Devil's counsel table, wouldn't an interdisciplinary area such as environmental studies then develop its own somewhat coherent set of topic boundaries, categorizations, questions, and tools, akin to the canonical disciplines? Yes, of course. Point granted. It could, and it does. Apart from the campus politics of interdisciplinary areas and new departments (such as the history of SUNY Buffalo's women's studies program), there is nothing in what I've said that dictates what configuration of disciplines would be necessary. Disciplinary boundaries evolve, and there are plenty of undergraduate and graduate programs that live in the boundaries of two or more disciplines... or have evolved out of that interdisciplinary state into their own entities. My own program area and department are examples of such evolutions and noncanonical configurations.
But the fact of intellectual change and the constructed nature of disciplines doesn't mean that disciplinarity doesn't exist, isn't healthy, and isn't necessary for undergraduate curricula, graduate education, or academic institutions. Thus, my bottom line is not the current constellation of disciplines but some configuration, not a Collegium of What Exists Now but a Collegium of Somethings. In higher education, everyone needs a Department of Something.
August 28, 2007
Parents change their minds on teaching to the test
Since 2002, the annual fall release of results from the Phi Delta Kappa/Gallup Poll of public attitudes towards public education has become increasingly focused on NCLB. Today's release (hat tip) is no exception, and my guess is that most reporters will run with the results of the first section on NCLB and accountability.
My nomination for most significant result is from Table 14, asked of those who agreed in a prior question that "standardized tests encourage teachers to 'teach to the test,' that is, concentrate on teaching their students to pass the tests rather than teaching the subject." The majorities answering yes to that first question (in Table 13) haven't changed much between 2003 (when 68% of public-school parents and 64% of adults without children in school said yes, standardized testing encouraged teaching to the test) and 2007 (with 75% and 66% of each group saying testing encouraged teaching to the test).
While a clear majority has always seen testing as encouraging teaching to the test, American adults have changed their mind on whether that is good or not. In 2003, 40% of surveyed parents with children in public schools thought that teaching to the test was a good thing. This fits in well with arguments by David Labaree, Jennifer Hochschild, and Nathan Scovronick that a good part of the appeal of public schooling is to serve private purposes, giving children a leg up in a competitive environment. In that context, it makes enormous sense to value teaching to the test, since many parents understand how college admissions tests are related to access to selective institutions and scholarships. While 58% of public-school parents thought that teaching to the test was a bad idea in 2003, a sizable minority thought it was just fine.
That opinion has changed, dramatically. In the 2007 poll, only 17% of public-school parents thought that teaching to the test was a good thing. Fewer than one-half of one percent had no opinion, and 83% of public-school parents thought that teaching to the test is a bad thing. Adults who did not have children in school also have changed their minds, with 22% of those surveyed this year thinking that teaching to the test is a good thing.
This question was asked separately from the issue of narrowing the curriculum. While there may be some spillage or confusion of issues, I think the sea change is a warning to advocates of high-stakes test-only accountability: Few parents see benefits in sending their children to test-prep factories. Fix that consequence or see the political foundations of accountability crumble.
August 16, 2007
Multiple issues in multiple measures
In his July 30 statement at the National Press Club, House Education and Labor Committee Chair George Miller said that his plans for reauthorizing the No Child Left Behind Act included the addition of multiple measures, an incantation that has provoked more Sturm und Drang in national education politics than if Rep. Miller had stood at the podium and revealed he was a Visitor from space. While Congress is in recess this month, the politics of reauthorization continue. I'll parse the debate over multiple measures or multiple sources of evidence, and then I'll foolishly predict NCLB politics over the next month or so.
The different issues
At one level, the discussion appears to focus entirely on the determination of adequate yearly progress. Add measures and you "let schools off the hook," according to Education Trust (with similar noises from the Chamber of Commerce's Arthur Rothkopf [RealAudio file--hat tip]). No escape hatch, promised Miller when asked. Maybe if you add measures, there are more ways to fail AYP, as one reporter noted at the press conference; not so, said Miller, for we'll figure out some way so that the extra measures only get you over the hump if you're almost there. Since AYP is the largest chunk of NCLB politics, all of the talking points are familiar. In the end, this piece of the debate will get bundled into the most likely package that includes growth measures.
Teaching to the test
As the Forum on Educational Accountability and last week's letter from civil rights groups have argued, narrow measures of learning tend to distort how schools behave in several ways, from narrowing the taught curriculum to teaching test-taking skills and engaging in various forms of triage. One argument in favor of multiple sources of evidence is Lauren Resnick's old one, that a better test is likely to encourage better behavior by schools, both in terms of better assessments and school indicators that penalize schools for triage. To the extent that more input dilutes the incentive for systems to attend to single indicators, that may be true. On the other hand, multiple sources of evidence by themselves will not eliminate the corrupting effect of brain-dead accountability formulas, and to some extent the resolution of the debate over AYP can blunt the effect of multiple sources of evidence. On the third hand, I suspect most of those who support multiple sources of evidence are adults and prefer some improvement over none. Including multiple sources of evidence will not eliminate the deleterious side effects of high-stakes testing, but it should ameliorate them.
Improving the quality of exams and their cost
Connecticut's NCLB lawsuit is based on the claim that the federal government has not provided enough support for the state to develop its performance-heavy exam for all the required grades. The feds allegedly told Connecticut that it doesn't need to use the performance-heavy exams, claiming that an off-the-shelf commercial test system would work just fine. After investing state money and political capital in the performance exams, Connecticut officials were rather peeved. The Title I Monitor nailed this issue in May, noting that the argument over multiple measures is in part a matter of the quality of assessments and cost. The Monitor also noted a level of denial in the US Department of Education that should be familiar to Bush-watchers:
[A] senior ED staffer acknowledged the benefits of states using varying assessment formats compared to a single test, but challenged the idea that costs and timelines are a barrier to states developing tests with multiple formats.
And the escalation in Iraq is currently providing an environment conducive to the reconciliation of factions. Right. Officials from a variety of states and a number of players in Washington agree that NCLB has essentially stressed if not broken the testing industry's credibility and infrastructure, and the inclusion of multiple measures is part of the negotiations over how much Washington will pay for better assessments.
One doesn't have to agree with George Lakoff's version of framing to recognize that the politics of accountability are driven by assumptions about the need for centralization and authoritarian/bureaucratic discipline. These themes are obvious in the dominant inside-the-Beltway narrative about NCLB: We can't trust the states. The best argument for this position is Jennifer Hochschild's thesis in The New American Dilemma (1984), a claim that sometimes we need a non-pluralistic tool to advance democratic aims, a contradiction she saw in desegregation. But we don't have an open debate about this dilemma. We didn't have it about desegregation, and we certainly don't have it about accountability.
Instead of reflecting some honesty about policy dilemmas, the arguments defending No Child Left Behind today are generally at the soundbite level. A common metaphor used by many supporters of NCLB relies on time, such as the Education Trust's organizing an administrators' letter several years ago warning against a thinly veiled attempt to turn back the clock. A step forward is another phrase that the same letter uses to describe NCLB, and Education Trust's response to the Forum on Educational Accountability proposals describes them as a giant step backward. This is an ad hominem metaphor: It says, "Our opponents are Luddites. They are not to be trusted to defend anything except their own narrow and short-sighted interests."
The other language commonly used by NCLB supporters is a simple assertion that they own accountability. Anyone who disagrees with them is against accountability. Together, these bits of accountability language imply that there is one true accountability and that NCLB skeptics like me are apostates or blasphemers. Pardon me, but I don't believe in an accountability millennium.
To shift the debate away from accountability millennialism, critics of NCLB have to provide a counter-narrative. Both the August 7 civil rights-group letter and the August 13 researchers' letter (or the letter signed by mostly researchers) describe the current NCLB implementation with words such as discourage, narrowed, and fail. In its August 2 recommendations for reauthorization, the Forum on Educational Accountability uses the words build, support, and strengthen. The Forum and the August 7 letter also use a single word to describe the best use of assessment: tool. In their recommendations, the Forum and its allies use an architectural metaphor: we need to strengthen the system while keeping it mostly intact. The criticisms directed against multiple-choice testing aren't part of that story, though I suppose a purist would insist on including them, somehow described as undermining the foundations, eroding the footings, blowing out a window, or some such.
I don't know to what extent the argument over multiple measures will shift the larger debate, but it is potentially the most far-reaching of the consequences of the letter.
Where we're headed in the short term
My guess is that Miller's September draft will bless consortia of states that develop assessments with more performance, authorize funding for more (but not all) of that test development if small states work in consortia, and promise to pay for almost all of the infrastructure needed to track student data.
We will also see the true character of high-stakes advocates in Education Trust and the Chamber of Commerce. The Education Trust is now under the greatest pressure of its existence over both growth measures and the issue of multiple measures. In Washington, almost no one gets their way all the time. How people negotiate and handle compromise reveals their true character.
August 6, 2007
Raul Hilberg, 81
Holocaust historian Raul Hilberg died over the weekend. I never met him, but he deeply affected my understanding of history and human cruelty. As a child raised in an American Jewish household in the 1960s and 1970s, I was exposed to the first generation of Holocaust education. (I didn't know until a few years ago that American Jews took a few decades after World War II to start that project seriously, and the NY Times article linked above notes that Hilberg's advisor tried to discourage him from the subject as unvalued in history.) That first wave of Holocaust education hadn't yet absorbed Hilberg's ideas, and so the dominant arguments were that Hitler was evil and that we must never forget. Fortunately, I also met several survivors, including Mel Mermelstein, people whose specificity was a useful antidote to the oversimplification of early Holocaust education. (I met Mermelstein before his legal battles with the Holocaust-denying Institute for Historical Review.) For my bar mitzvah, I looked closely at the trial of Adolf Eichmann.
When I was in college, I took several classes from Jane Caplan, including a German history course. I don't remember whether I found the Hilberg volumes in my history intro class or in one of Caplan's classes; I suspect the latter. I read large chunks of his opus, though I'll readily admit I skimmed significant portions. (Someone who claims to have read every word of all three volumes as a sideline to undergraduate course requirements with a full liberal-arts college load... well, I'd be skeptical.)
Hilberg's account was meticulous, detailed, horrific, and mesmerizing. His description of the bureaucracy of genocide answered questions that had lain unformed in my mind for years. I had little understanding of historiographical dynamics, but I knew this was important. I cannot imagine that anyone who has read Hilberg could simplify the Holocaust or other genocides with any shred of historical conscience.
June 11, 2007
National Standards as Policy Machismo
Alexander Russo and I agree on National (Yawn) Standards (Again) (his title), regarding last week's CEP report on trends in state proficiency percentages, the NCES comparison of state proficiency cut-scores with NAEP cut-scores, and the politics of the double-report week. In a different way, I also agree with U.S. Secretary of Education Margaret Spellings in her dissing of national standards. Same (in yet a third way) with the Education Sector's Danny Rosenthal. And I disagree with all of them.
Russo is right on the politics of national standards: dead for now. He's at his best in pegging the accountability politics, and since that's his focus in the last few weeks, I'll give him a pass for now on where I disagree with him. Spellings is right that the federal government does a better job of collecting data than telling the states what to do. She's wrong that the federal government does a better job of telling the states what to do when it's labeled NCLB. Rosenthal is correct that there is a difference between setting curriculum standards and setting cut scores. He's wrong in asserting that the cut scores are what is important.
The cut-score debate would be a silly one except for the stakes involved in states and the way that cut scores frame the education policy debate inside the Washington, D.C., beltway. As anyone who has taken elementary statistics should know, the division of an interval scale into several tiers creates an ordinal scale. Whether one labels the tiers Expert, Proficient, Basic, and Below Basic; Red, Orange, Yellow, Green, and Blue; or Venti, Grande, and Tall, tying values to ordinal tiers doesn't tell us anything about the tiers themselves other than that someone wanted to label them.
Confusing cut scores with rigor is an act of policy machismo, not common sense. "Yo Mama's so wimpy, she's satisfied with Mississippi's cut scores."
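The interval-to-ordinal point is easy to demonstrate in a few lines of code. Here is a minimal sketch in Python; the cut scores and tier labels are hypothetical, not any state's actual values.

```python
# Hypothetical cut scores on an interval scale; the numbers and
# labels are illustrative only, not any state's actual values.
CUT_SCORES = [(0, "Below Basic"), (40, "Basic"), (60, "Proficient"), (80, "Expert")]

def tier(scale_score):
    """Map an interval-scale score onto an ordinal tier label."""
    label = CUT_SCORES[0][1]
    for cut, name in CUT_SCORES:
        if scale_score >= cut:
            label = name
    return label

print(tier(75))   # Proficient
print(tier(82))   # Expert

# Relabeling the tiers (Tall/Grande/Venti, Red/Yellow/Green) changes
# nothing about the underlying scale: the ordinal categories carry no
# information beyond where someone chose to draw the lines.
```

The point of the sketch is that the function throws away the interval information; once scores are binned, the labels are arbitrary, which is why moving a cut score tells us nothing about rigor.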
May 14, 2007
Mis-Remembering Title IX
The debate over the 2005 reinterpretation of college athletic applications of Title IX tends to avoid acknowledging the truth: the higher-ed athletic application of Title IX is only one part of what Title IX's prohibition on gender discrimination touched, and it's probably the least important large chunk of Title IX's effects. In the decade before Title IX's passage as part of the Education Amendments of 1972, ...
- Schools could slot students into programs by sex and could make programs differentially available by sex (classically, home economics for girls and shop for boys, or higher math and science just for boys)
- High schools expelled pregnant students and students who had given birth (I know: it still happens today, but it's clearly illegal)
- Administrators were not held responsible for looking the other way when teachers discriminated based on sex
- A small fraction of administrators were female
- Many K-12 schools had no athletic programs for girls
Unless I explain these facts to students, many assume that Title IX only affects athletics and the debate over single-sex education. But in the broad sweep, Title IX has been remarkably successful in the core areas of academics and providing professional opportunities. So I'm torn over the current debate. Yes, athletic opportunities matter, but not as much as academic opportunities.
April 28, 2007
The question that Reading First's Chris Doherty never asked
(Disclosure: I have had colleagues who are involved in phonemic-awareness or phonics-based reading research. It pains me to see anyone I know involved even tangentially in conflict-of-interest problems, let alone people I highly respect. No, I don't know the principals from Oregon who are at the center of this, and I'm not going to be any more specific.)
"Do you have a sister I could date?"
Well, no: I don't think he needed to ask that question, and since I've never met the man, I'm really not qualified to know anything about his personal life (nor would I share details about it if I did).
But there is a connection between the old saw and the Reading First scandal. Reading the Title I Monitor's OIG Refers Reading First To U.S. Justice Department, I get the sense that everyone in Washington involved in Reading First has no clue about academe. Here's the nugget about the alleged conflict of interest in having close business relationships tied up in the people who were critical in the Reading First program:
Supporters of Reading First have claimed that the pool of experts in the nascent field of scientifically based reading research was small and that conflicts were therefore inevitable; the OIG, in its reports, said the department did not do enough to prevent the process from becoming incestuous.
The supporters of Reading First's business practices cannot have it both ways: either the University of Oregon faculty were experienced researchers with years of work under their belts or the field is "nascent." Take your pick. But we don't have to; while some of the research on phonemic awareness is new, the University of Oregon faculty involved in Reading First are the second generation of Oregon faculty involved in phonics-based instruction techniques.
"Second generation" is a deliberately chosen phrase there. Apart from a few parent-offspring pairs, college faculty don't generally reproduce new professors in a biological sense, but there is often a chain of intellectual descendants from mentor to graduate-student advisee, who in turn mentors other graduate students at other institutions, and so on. In a field with 20-30 years of research, any small group of researchers will have mentored at least several dozen doctoral students, of whom some will remain close to their mentors' fields and others will go off in different directions.
And thus, while there are still conflict-of-interest issues involved in a small field where people know each other, there is no justification for using the exact same group of people who benefited financially from the program. So here is the question Chris Doherty should have asked: "Do you have any former students who might serve on a panel?"
Education history and school renewal
Today's brief N.Y. Times story, Massachusetts Acts To Save the Country's First Public High School, is about efforts to revive academics in the first public high school, which opened in the 1820s. The school is troubled, far from its origins as part of the 19th-century expansion of the public sphere into secondary education.
English High was neither the first secondary school nor the first "public school" (a concept that only became clearly distinguished from private schooling later in the 19th century). There were plenty of schools called academies, seminaries, colleges, and so forth, and their curriculum, academic intensity, and potential student pool all overlapped. As Nancy Beadie and Kim Tolley have documented, New York state provided partial support of academies for part of the 19th century.
But English High represented an idea that was controversial for much of the 19th century: using public funds to provide more advanced education for a small group of students. Today, we think of high school as a universal adolescent experience, but it didn't become one until the middle third of the 20th century. In Massachusetts, the legislature repeatedly required towns of a certain size to have high schools, a requirement that was generally ignored until the 1850s. In one case famous among education historians, the town of Beverly, Massachusetts, first started a high school when the state sued. Then a few years later, the town voted to abolish the school. (The reasons why have been argued over for the last 40 years: see Michael B. Katz's The Irony of Early School Reform and Maris Vinovskis's The Origins of Public High Schools.)
Part of the reason for the controversy in Beverly and elsewhere was the limited enrollment in high schools; few could afford to keep their kids out of work long enough to attend, but taxes still supported the schools. Part also came from competition for legitimacy (and students) from academies. High schools didn't really acquire political legitimacy until after an 1873 lawsuit filed to block public tax support of the Kalamazoo Union High School. The suit failed, and while the state supreme court only had precedential power over Michigan, it essentially knocked the legs out from under the anti-high-school movement.
Many interpret the growth of high schools in the late 19th and early 20th century as a direct outgrowth of the Kalamazoo case, and the webpage linked above includes a similar argument:
Although this issue had been heard by other courts, Justice Cooley's prestige helped to make the Kalamazoo School Case a leading decision that was cited in many courts in surrounding states. In Michigan the effect was profound. The number of high schools in the state increased from 107 in the early 1870's to 278 by 1890.
But that's not quite true. As David Labaree notes in The Making of an American High School, the growing credential value of high schools gave people in cities a powerful incentive to push for more access to high schools. That growth in high schools eliminated the institutional prestige of the earlier high schools, such as English High in Boston or Central High in Philadelphia. Central High reacquired higher status in the 1940s when the city differentiated its high schools, creating an elite tier.
The history of English High would make a wonderful dissertation project, from its origins through various phases. A quick search of WorldCat reveals enough secondary materials to make a go of it.
March 29, 2007
Unions and masters degrees
Kevin Carey asks, Why do unions support pay steps for masters degrees? As an historian, I think the question is a bit backwards, but then I think the same of many other policy questions. After all, bureaucratically-oriented salary schedules existed long before the UFT's successful NYC strike in the 1960s and still exist in plenty of districts without union representation.
As an historian, I wonder how salary schedules developed to include steps for masters degrees; I wonder how steps for masters degrees benefited school systems as well as unions; and I wonder if those dynamics remain the same today.
Only then will I entertain the question of whether it continues to serve union interests to help maintain masters-linked increments. And I will admit I don't know enough about K-12 unionism to have a solid understanding of K-12 union positions on salary schedules. I mean, I've been initiated into the Order of the SMOF Hoodies (SMOF = secret masters of fandom, or maybe secret masters of faculty), but that doesn't give me entree into the more subtle K-12 perspectives.
But there are reasons why districts pay the increment, even in nonunion states.
Update 1: See Ed Muir's response.
Update 2: And more from Kevin Carey (before Ed's response, I assume). Kevin asks explicitly about the teacher-education response, perhaps assuming that colleges of education are only interested in degree programs or are happy when people enroll for the next step. For the record, one of my colleagues who has taught here for more than 30 years told me years ago that part of my job in graduate classes was to turn enrollees into students. Yes, I've seen students who were in it for the degree and pay bump, but I don't know any of my colleagues at the faculty level who enjoy such interactions or look to create the "here to fill an empty seat and enrollment profile" program.
And many institutions compromise in various ways to make various programs shorter and more convenient for students, sometimes to the detriment of program quality. So maybe we shouldn't say that all masters programs are alike.
There are other possible configurations apart from masters programs. I know the old Miami-Dade contract included temporary pay bumps for teachers who had acquired a graduate certificate (a subdegree program) and worked in certain schools. That section is no longer in the contract, but I don't know why. Let's just say that colleges of education are not necessarily the "structure it only one way" organization that some may assume.
Update 3: Leo Casey responds to Kevin, who fires back. Apart from the overheated rhetoric (c'mon, guys; the testosterone bar is around the corner near the ballpark, not here in the blogosphere), both have points. Leo notes that the study Kevin cites is from North Carolina, with a different distribution of educational credentials from other states. Maybe masters degrees have little association with student achievement in North Carolina because of things intrinsic to the state. The broader issue is that state and local policies often determine the distribution of masters degrees, the rationale, the structure, etc.
For his part, Kevin is right to question the 'professionalism' argument that Leo makes: masters degrees are good because they'll raise the status of teaching. But Kevin's argument isn't the best one, I think, though I'm biased because I've written about it before:
Professionalism, however, is not likely to be a successful gambit in schooling, for several reasons. Most importantly, professional ideology is politically unpalatable in the late twentieth century. Trying to use professionalism misunderstands the historical context for the ideology of expertise and its widespread (political) success a century ago. Professionalism in the form of high-status, science-based occupations like medicine and engineering was one response to the chaos of industrialization and changing class structure (Wiebe 1967). Its early proponents argued that the complexities of modern life required technical expertise to solve public policy and practical problems. However, professions include more than high-status jobs, with occupations as diverse as architecture and craft work like plumbing.
A profession typically involves three dimensions: a claim to specialized expertise, some informal or formal credentialing to control entry into the occupation, and autonomy on the job (Friedson 1984). Classroom teaching falls partway among all three dimensions. Classroom teaching does involve some skills that few could walk in off the street with, but the general public has far more knowledge of what happens in classrooms (and is more willing to make second judgments of teaching) than fields like surgery. Long-term teaching requires credentials, but many school systems hire uncredentialed personnel on an emergency basis. Finally, public schools operate as loosely coupled organizations (Weick 1976): Most teachers can shut their doors in the face of some supervisory directives, but material conditions (such as the textbooks available) circumscribe their autonomy on the job, and they face other demands they cannot ignore, such as the official curriculum and standardized tests. We should see the ideology of professionalism thus as attempting to emulate a relatively small slice of all occupations with professional traits rather than, as is typically assumed, making teaching a "real" profession. Teaching already is a real profession, though one with less claim to specialized expertise and less autonomy than advocates of teacher professionalism would want.
Education did professionalize at the same time as law and medicine, but administrators and not teachers were the beneficiaries.
February 27, 2007
Of Diane Ravitch and presentism
In an amended entry earlier today, I noted my being a Michael Katz student and somehow still not having fits at the sight of Diane Ravitch's name. (As far as I'm aware, Michael doesn't, either.) That doesn't mean that I agree with her substantive scholarship, and I'll repeat here a 2004 contribution I made to the H-Education e-mail list on H-Net. While its intended topic is the historiographical concept of presentism and not Ravitch's Left Back, I do make my views of the book clear:
I've just read Derrick Aldridge's commentary in the December 2003 Educational Researcher, and in it he describes how he's wrestled with the issue of presentism after being warned about it at a conference. What he describes afterward (on pp. 27-29) is a plausible professional approach, but I'm becoming more and more dissatisfied with the term itself. Ravitch used the label many years ago to criticize what she called revisionist historians [Katz among them], and John Rury then pasted the same label on Ravitch's book Left Back.
I think we should ban the term presentist from our vocabulary as a red herring, full of sound and professional jargon and signifying nothing of substance. Good history has the same characteristics, whether it's making an argument about the development of educational policy in the late 20th century or witchcraft trials of the late 17th, and I challenge anyone to show me differently. Yet presentism is one of the chief bogeymen of historiography. This is especially true with educational history, where we're often caught between educationists who want everything to be immediately relevant and our colleagues in regular history departments who can be skeptical of our subfield.
So what does the term presentist refer to? Most historians would define presentism to include taking events and materials out of context, stretching the interpretation with an eye to the modern implications of the argument. It is a close cousin to teleology, and its red flag sits at our desk right next to the warning flags ready to be waved at the first sign of Whiggish history or the myth of the Golden Age. Maybe an example will illustrate my discomfort. Take John Rury's lambaste of Left Back:
[I]t is largely a history without context, and one that telescopes past ideas about education into a single-minded concern about educational standards, one of Ravitch's pet peeves in current debates about educational policy. In this work we find a classic example of history turned to the purpose of supporting a political agenda.
But that larger description hides more substantive concerns of Rury's: Ravitch's oversimplification of Progressive advocates, the limiting scope of her mini-biographies, the focus on just a few locations, the inconsistency between her critique of Teachers College as an institution and her hagiography of William Bagley (a Teachers College faculty member), the misleading use of a statistic about Kilpatrick's teaching, the exaggeration of evidence about classroom instructional practices in the 20th century, and the inconsistency between her championing disciplinary approaches early in the 20th century and then ignoring the professional judgment of historians in the war over the history standards.
Now, I could add some additional criticisms after wading through the book last year. She acknowledges in the prefatory matter that there was no Golden Age of education (p. 13), and then proceeds to describe the justifiable pride of earlier ages on pp. 19, 21, 25, 30, and 89 (and probably elsewhere). She describes the Committee of Ten report as the first to make curriculum recommendations on secondary education to the country (p. 42), ignoring the legacy of the Yale Report earlier in the 19th century. She claims that the book focuses on the curriculum, but she has a large chunk of material on the reading methods wars in the last few decades. She complete[ly] ignores David Labaree's work on high schools, and while she notes Tyack and Kliebard's work, they appeared to have no influence on the book (either shaping it actively or as serious arguments to counter). The margins of my copy are filled with specific comments, and I found it as frustrating a read as I expect John Rury did, from his review.
And yet I am reluctant to slap a label on it. It is frustrating in part because of the sloppiness of the historical argument and the handling of evidence. But it is also frustrating because I can see the construction of a popularly appealing book. She mixes detail inside each chapter and the patina of careful history with overblown rhetoric at the beginning and end of most chapters. My fear is that many readers will pay more attention to the rhetoric than to the rest of the book.
The problem with Left Back is not that it is "history turned to the purpose of supporting a political agenda," as Rury claims. There is plenty of wonderful, provocative history motivated by political or social beliefs; my favorite is C. Vann Woodward's Origins of the New South. Those good works are just as presentist as Left Back. They're just better at handling evidence and the nuances of writing an historical argument. I've decided that presentism is a label, not a useful analytical concept in historiography.
Someday soon, I'll tackle another perennial bogeyman of history, the number of history doctoral programs in the U.S.
February 9, 2007
And now, a defense of boutique education
At the cost of alienating my new blog-buddy Alexander Russo, who a few days ago said, "Once in a while, Sherman Dorn and I agree about something," I'm going to go in a slightly different direction from my gripe about the NCLB-and-gifted-ed argument Wednesday. Then, I argued that the defense of gifted education was an inappropriate argument against NCLB for a variety of reasons.
Today, let me address the reverse: Are complaints about NCLB good for gifted education and other specialized programs?
Without addressing specifics, I'll say that there are places for specialized programs within education, including gifted education, though not necessarily with the standard rationale you'll hear. I mentioned one yesterday: addressing students who are not only bored in school but bored and likely to get into trouble through that boredom. In addition, gifted education (among other places) provides a legitimate opportunity for trying out challenging material. As long as there's an understanding that the challenging material will eventually be made available more broadly if successful and when hammered out, that "laboratory" environment raises far fewer equity concerns. And when giftedness is thought of in a dynamic way and not as a static quantity, there's less of a danger of reifying it and conflating it with social class, race, ethnicity, etc. A number of researchers in special education, including gifted education, have been working on the elitism/inequality issues for a number of years. I'm confident I'm not breaking any new ground here, and I'm sure my colleagues in my college can go much further in describing current research. The fact that much current gifted education is bounded by a variety of practices and institutional legacies does not mean that it can't be different.
There are other specialized programs that have some justification, as long as we acknowledge tensions between specialization and choice, on the one hand, and common purpose, on the other. In programs without gatekeeping, the within-school choice issue justifies considerable specialization. Yet there's a counter-argument, best explained in The Shopping Mall High School: "boutique shops" (or specialized programs) give the illusion of choice without depth and without a common mission. The real experiences of millions of adults (i.e., former students) provide natural constituencies to maintain various education boutiques in their historical forms.
Now, to an uncomfortable fact: apparently the (DOA) Bush budget proposal includes an interesting shift to fund NCLB: take funds away from federal special-education appropriations (see Public Agenda's blog). So perhaps it looks like NCLB does threaten some specialized programs (those with high mandated costs and an historically underfunded federal commitment)... or at least the proposed budget does. But what about the complaints?
First, to gifted education: The article in the New York Times (and others) gives the impression that gifted educators are whining. Whining generally is not a successful tactic, and it's only a tactic, not a strategy. Regarding the strategy implicit in the article, relying on the political power of wealthy families who are more likely to have children in gifted programs imprisons gifted education in its current structure and practices. Again, that's not wise in the long term.
Second, to most special education and English-language learning instruction: In both cases, NCLB fails to address the dilemma of assessment (to wit, that one must insist on high standards while acknowledging that age-peer "grade-level" testing will often be insensitive to improvement). Only to complain about the assessment problem risks failing to address the underlying need for accountability. There are some complex (and often unsolved) technical problems, but the political problem is explaining those technical problems without appearing to excuse schools for low expectations.
Third, to career and technical education (also part of the rob-Peter-to-pay-Paul maneuver of the president's proposed budget): I haven't read that much related to CTE/vocational education in the last few years, at least insofar as NCLB is concerned. So I'm not going to hazard a guess. But if the other two "shops" discussed above are any indication, standard criticism of NCLB will not necessarily help CTE.
February 7, 2007
Ugly arguments against NCLB
There are plenty of ways I can criticize NCLB and its implementation, but to whine that it drains resources for the gifted is one of the more disturbing arguments I've read (and today's story by Joseph Berger isn't the first time it's appeared in the New York Times). Particularly wince-inducing passages...
Even critics of No Child Left Behind say there is no educational goal more important than helping the nation's poorly performing students read and calculate competently. But in a world of scarce resources, a balance has to be struck so that programs for the gifted are not frozen out. After all, many students nurtured by such programs will one day concoct the technology and dream up the ideas that will keep America competitive.
Apart from the blatant editorializing (which source said "a balance has to be struck"?), is there any evidence that adults who were in gifted programs years earlier are the primary source of tech innovation and that, to the extent that they are, it was the existence of gifted ed that's responsible?
Michael J. Petrilli, vice president for national programs at the Thomas B. Fordham Foundation, which supports educational research, said cuts in programs for the gifted hurt "low-income children with tons of potential who may not be getting the attention they deserve."
First, the wording above suggests that Fordham is like the Spencer Foundation (which really does fund education research), but Fordham is a think tank. Second, Petrilli implies that if gifted programs weren't cut, they'd be serving millions of poor students. No, they wouldn't: gifted programs serve a very low percentage of students, and the vast majority of "low-income children with tons of potential" are outside elementary and middle-school gifted programs. The better bet for advancing these children's interests is to improve general academics, not nurture boutique programs for a few.
Survival of gifted programs is not just a matter of money; they have long been a target of complaints that they are elitist, and violate the bedrock egalitarianism that created public schools in the first place.
Yes, Joseph Berger is right on the general criticism, but the history is off. I'm not Paul Violas, but to claim that schools were founded entirely on an egalitarian ethic ignores much of the historical research on the topic over the past 40 years.
Lost in the debate, champions of the gifted say, is that exceptional intellects need hand-holding as much as those below average, that they get restless or disheartened working with material they long ago conquered. Jane Clarenbach, public education director of the National Association for Gifted Children, said research shows that 20 percent of the nation's three million gifted students will drop out before graduating from high school.
There is a grain of truth here hidden by dunes of slipshod reasoning. The grain of truth is that there are plenty of children in school who are bored because they face no challenge at the moment, and some proportion of them get into trouble as a result. My spouse calls this group "Devil's workshop children," and we've known a few. An absolutely legitimate purpose of any gifted education program is to identify those children and make sure they don't have idle hands. (For those who know special education, this redefinition would be the gifted version of response to intervention eligibility criteria.)
But that's just a grain of truth. One fundamental problem with the "gifted kids will drop out if we don't give them extra services" argument is that resources devoted solely to students labeled gifted through so-called IQ and other testing programs are commonly concentrated in the elementary and middle-school years, long before anyone drops out of school.
Then there's the argument I made earlier: the better route to serving these students (and all others) is by improving the general education curriculum so that no one is bored or alienated.
Nancy Eastlake, coordinator for West Hartford's gifted programs, points out that so-called pullout programs are often criticized "as fluffy activities." Yet she argues that "when you have children research a topic of great personal interest, that's solid, good learning."
Yes... and students outside the gifted programs don't want to learn about a "topic of great personal interest"?
I understand the parents' dilemma when gifted-education programs exist: do you hold your individual child's interests hostage to the larger principles? In general, the answer is going to be no. I know enough relatives who have been in gifted programs or have placed their children in gifted programs to understand the reasoning.
But many of the opportunities that draw parents into such programs should be available more broadly. One of my daughter's best friends is in all advanced coursework this year, but she was not in a gifted program. Another friend from elementary school (also not in its gifted program) shifted to advanced math in middle school. ("About time!" was my thought at the time.)
Irony: This story appeared one day after the release of the latest statistical report on the nation's largest challenging general-curriculum program, Advanced Placement testing. While I do not think AP programs are the be-all and end-all of academic challenges, their recent history demonstrates that a school can open up challenging opportunities by having counselors broaden rather than narrow the funnel in their gatekeeping role.
In summary, critics of NCLB need the "NCLB hurts gifted ed" argument like we need Charles Murray's "help." And now, if you'll excuse me, I'll return to correcting the first page proofs of Accountability Frankenstein.
Update: See my defense of boutique education, including gifted education, written a few days after this entry. Can't say I'm not finessing things...
January 17, 2007
Report: Three of every two government statistics are flawed
Okay, I'm joking. The real headline from the Guardian newspaper is One in five Home Office statistics are unreliable, says department head. Maria Farrell at Crooked Timber makes the point that non-neutral claims of facts degrade public discourse. But I wonder whether even a putatively independent body can create trustworthy facts when they're subject to subtle pressures (budgetary, etc.). Those who look for such independence are correct to criticize obvious warping of data, but those who think that nominal independence is great (as opposed to better) have never read political theory from the iron triangle forward.
Even at the level of shaping a study, negotiation can often decide what is studied. The National Reading Panel is a case in point. The report trumpets the extensive public hearings that shaped the priorities of the panel. There are two conclusions one can draw from that fact. Either the panel had decided in advance what would be studied, and the hearings were a sham, or the panel was sincere in letting the public input shape the substudies. If the first is true, the definition of research was political by exclusion. If the second is true, the definition of research was political by inclusion.
(This conclusion about the negotiability of research is true whether or not you agree with the NRP conclusions.)
January 2, 2007
Boundaries, agendas, and meta-narratives
Kevin Carey has an interesting discussion about policy perspectives and POV boundaries in the context of a broader discussion about the role of teacher unions. (Minor point here: to his good, bad, and good and bad perspectives on unionization, I'd add look at the d***ed specifics. Also see Michele McLaughlin's response, which I'll just respond to as editor of Education Policy Analysis Archives: Hey, submit stuff for peer review here! Disclosure: I'm a union member affiliated with both the NEA and AFT as well as an education [and maybe even an educational] historian.)
Carey's looking at it from a policy wonk's (and think tank staffer's) perspective: how do you move ideas? In the long term, you try to reshape political agendas, and Carey's argument about pushing perspective boundaries around is about agenda shaping...
... which brings me to two political books on NCLB published in 2006, Paul Manna's School's In and Patrick McGuinn's No Child Left Behind and the Transformation of Federal Education Policy, 1965-2005. Despite the fact that schools are part of the unrecognized welfare state in the U.S., education politics have gotten precious little attention from grand(ly)-theorizing political scientists. I'm an historian, not a political scientist, but I think Jennifer Hochschild's The New American Dilemma (1984) and Ira Katznelson and Margaret Weir's Schooling for All (1985) were the last books that took school politics as important, serious evidence about American political structures. Manna and McGuinn's books should end that drought and spark interesting dialog.
To put it briefly (and do great violence to their arguments), McGuinn's and Manna's books are part of ongoing arguments about what shapes agendas, something that has been challenged/reworked by Frank Baumgartner and Bryan Jones's Agendas and Instability in American Politics (1993). McGuinn argues that NCLB came about with a change in policy regimes, which I read as a dominant meta-narrative about policy. To him, federal policymakers were finally fed up with state intransigence on accountability in the late 1990s, and members of both parties were happy to jump on board the NCLB bandwagon, an event that would have been unthinkable 7-8 years before. To McGuinn, the underlying story about education policy shifted over 7-8 years, a change that involved partisan politics as well as the arguments of key players in Washington. McGuinn's focus is at the national level, and most of his evidence is there.
In contrast to McGuinn, Manna explicitly focuses on the interrelationship of federal and state actors, and as a result his story is different. To him, states were active in the 1990s, and they were willing to borrow strength from the federal government in building an agenda and let the feds borrow it from them as well, either in the political rationale for action or the capacity for action. So to Manna, NCLB represents the hidden strength of governors, subtly letting the federal government claim all sorts of honors as long as it served their purposes. The reverse is true, at least in theory, but Manna tends to write his story from a state POV, while McGuinn's POV is clearly at the federal level.
Each book has some strengths in terms of detail. McGuinn's interviews with selected key federal actors provide retrospectives that I don't think you'll get anywhere else. The description of the AYP-definition train wreck in 2001 is Manna's surprise contribution. But the larger clash is one of levels of government and emphasis on meta-narratives vs. initiative. McGuinn's eye is on the federal level, while Manna's is on the interplay between federal and state. McGuinn focuses on the policy regime (what I think of as meta-narrative), while Manna focuses on who has the initiative in agenda-setting.
Each book also has some irritating flaws. For McGuinn, the national teacher union affiliates are shadowy figures who are recalcitrant, anti-reform, and anti-accountability, but he never provides any details even though he had an NEA lobbyist as an informant. In McGuinn's account, Shanker's activism in the late 80s and most of the 90s is invisible, Bob Chase doesn't exist, and he must not have asked his NEA interviewee any hard questions. For his part, Manna's depiction of the importance of education at the federal level relies on one of the more trite types of political-science evidence: mentions of words in presidential speeches. Someone looking at both books would wonder why Manna failed to look at legislation (which McGuinn at least touches on in some depth, even if he ignored the issue of classroom space from the 1950s). In a book devoted to the interplay of different levels of government, that reliance on symbolic speech is... well, odd.
One last thing: neither discusses the other's ideas much, though I suspect they know of each other's work (McGuinn had read Manna's dissertation, at least). I would love to get both of them in a room, have them talk through the issues, decide what they really disagree on and why, and get the recording online. But both books should be required reading in education policy programs, in part for the substantive background on NCLB and in part for their very different and interesting uses of federal education policy to illuminate political dynamics.
December 21, 2006
Theodore Porter, Trust in Numbers, and picking the right fights
On the way to and from my mother-in-law's house today, I finished Theodore Porter's Trust in Numbers (1995). (I should say that I finished it while my spouse was driving!) Though Porter's style left me distraught this morning, I slogged through a book I knew was important. And the book has plenty of food for thought. But the (dis)organization remained problematic, and not surprisingly, the book reviews varied fairly dramatically in how they read the main argument. In particular, the reviews in the Economic History Review and Technology and Culture read Porter's book as less deterministic than I thought he was in the end.
That determinism is a critical question. Is autonomy such a driving force that weak disciplines and administrative apparatuses under political threat will resort to statistics as a buffering mechanism to protect autonomy, even while higher-status disciplines or bureaucracies can still turn to networks of trust and rely on elite status? If so, then test-score accountability was inevitable, as Stephen Turner suggests. But I think the details in Porter's book belie that argument of virtual inevitability (which Porter makes clear, I think, in the second-to-last chapter). As Porter notes, weights and measures have historically been more negotiable than we assume today, and his description of the origins of the Chicago futures market is a fascinating tale of contingent events. There was nothing inevitable in it.
We don't have to look at NCLB and debates over NAEP to see how flexible truth is and how porous the factual claims that permeate education are. Evidence of how negotiable education "facts" are lies in the current debate over measuring graduation (or, as is more common, mismeasuring graduation). There is no agreement on how to measure graduation, the sides are frequently identified as biased on other issues (support of public schools v. vouchers), and even the terms of the debate are vigorously argued, all of which suggests that education facts do not sit safely behind the boundaries of expertise.
The debatability of education facts suggests another way of looking at accountability: given the fact that accountability systems will produce arguments, maybe one way of thinking about them is to structure the system so you get the argument that you want. If proponents of high-stakes accountability are sick of educators responding to accountability by blaming parents, maybe they should look in the mirror: didn't the system predictably set up that argument? And if so, what's the argument that you want to have?
Maybe it's because my father grew up on Flatbush Avenue, but I don't think there's anything wrong with a good argument, as long as it's about the right things. Do we really want to keep arguing about whether the scores mean something or who's responsible? I can predict continuing arguments on precisely these issues for as long as accountability is based entirely on test scores. I know of one commendable accountability mechanism, Rhode Island's site-visit system, that produces enormous discomfort in schools that are judged wanting, and some arguments, but I think they're arguments worth having: about the nature of the school, what isn't happening, and what could be happening. Those arguments can only happen if you get beyond test scores.
December 20, 2006
Danziger, Constructing the Subject, and the dangers of following the trail
Thanks to a trail of other readings, I'm now delving into Theodore Porter's Trust in Numbers (1995) and Kurt Danziger's Constructing the Subject (1990), both relatively dense books discussing topics on the edges of my concerns with testing and professional expertise. While reading the page proofs of a book that will be coming out in just a few months, I've already had one basic assumption rattled (it's a minor point in the book, David, but it forces me to rethink the question of psychometrics as a profession and how we treat teachers). Then I picked up Stephen Turner's Liberal Democracy 3.0 (2003), about whose provocative arguments about expertise and democratic political theory I've written elsewhere (on Education Policy Blog).
So in this trail of expertise, professional history, and our social trust in test scores, I've come to two very different chunks of the literature. Theodore Porter has written two books on the social history of statistics, one on The Rise of Statistical Thinking (1986) in the 19th century and a second one (Trust in Numbers) that is broader and more ambitious in its argument. I've left that fairly early to tackle the Danziger book, a brilliant little book that rocks you with a gem of insight in every chapter. Danziger argues that Wundt's laboratory circle in Leipzig established the concept of the subject but embodied an alternative version of it (in which the experimenter and observer frequently exchanged roles), in contrast to the later, more common notion of the subject as someone of a different social status and knowledge position than the experimenter (and report author).
One point that is both suggestive and devastating is Danziger's suggestion that schools may have influenced the path of psychology as much as the other way around, for three reasons: first, schools created a huge resource of subjects once those became defined as a separate social group from experimenters; second, schools became a target of marketing of applied research; and third, in their dramatic expansion in the late 19th century and the organization around bureaucratic forms (graded multi-classroom schools, for example), the new bureaucratic school systems both produced and consumed huge numbers of the type of population statistics that are akin to censuses, creating the idea that one could capture the sense of schools and children with a sort of social census. That statistical consumption may have shaped psychology's turn from reporting the introspective observations of individuals to the reporting of aggregate statistics, what Danziger calls a "psychological census."
In turn, this broad (and ironic) argument brings me to two other issues: John Dewey and Daniel Calhoun. Most people in education describe Dewey as a sort of demi-god, creating a humane vision of education. What my colleague Erwin Johanningmeier argues is that Dewey used schools as a way to inform his writings on pragmatism more than attempting to define what schools should do. I suspect this may be a matter of different perspectives on the same writings, but Johanningmeier's argument parallels Danziger's.
The second is that Danziger cites Calhoun's The Intelligence of a People (1973), of which Dorothy Ross aptly said, "Any reader who spends a few minutes with Calhoun's ... book will learn that it is infuriatingly difficult of access." She also noted, again accurately, "But it will repay the reader's persistence." So I need to delve back into that (which I haven't touched since grad school). There are two copies on the shelf in my library: BF431 .C256. Please don't grab both, as I need them.
December 16, 2006
The "New" Commission and Jurgen Herbst
Achieve, Inc.-National Center on Education and the Economy smackdown program notes: Has anyone else noticed that the redesign recommendations of the New Commission on the Skills of the American Workforce are just a wee bit inconsistent with Achieve Inc.'s American Diploma Project? End high school as we know it at the end of grade 10 versus boosting the academic demands of high school. Maybe it's time for that Marc Tucker-Lou Gerstner WWF headliner. But let's get to the more substantive comments on the New Commission report...
The New Commission's structural recommendations are close to the shift that Jurgen Herbst recommended in The Once and Future School (1995), the high-school history that came out about the same time as Bill Reese's Origins of the American High School (1995). Herbst said the period of state-subsidized universal schooling should start and end earlier and, lo and behold, that's what the New Commission recommends, too (or maybe "recommends 2," since it's the Commission Mark II). I don't expect to see any citation of Herbst in the full report (which I haven't seen, since it's not online). But you shouldn't be surprised at the failure to know the historical literature, since this type of commission usually has a faux historical perspective, if any.
The best argument in favor of such a shift is not that globalization requires restructuring (these commissions never recommend economic policy changes) but rather that it conforms better to the needs of families. A much larger proportion of mothers of preschool-age children are working than several decades ago, and so preschool and daycare are the experiences of the vast majority of children in the U.S. Given that, and the downward shift in academic expectations, state-subsidized preschool would piggyback on the expectations of families anyway (which is that children will be in institutionalized environments earlier than decades ago). On the upper end of the age range, a substantial proportion of 17- and 18-year-olds work part-time during the school year, and a substantial minority of juniors and seniors work long enough hours to interfere with serious schoolwork. You can fight that in a number of ways (reducing the hours that minors can work, for example), or you can "go with the flow" and eliminate the pseudo-universal claims of high school's last two years.
Herbst's proposal for a downward age shift was informed by his comparative perspective (he's also written a comparative book, School Choice and School Governance, which came out this year on, well, ... just reread the title, okay?), and I suspect that the New Commission Mark II's was, too, but from a more superficial angle. (Hey, Marc: Put your full report online so I don't have to guess!) In Herbst's case, it's "the current system isn't inevitable; get over yourself and structure the system to work better." In the case of most commissions, the comparative perspective is phrased as a "let's see what our competitors do and then respond to them" argument; the 1980s were full of shallow "let's mimic Japanese education" arguments. (Note: does anyone on these commissions know what the social role of Japanese preschools has been? Those in the U.S. will probably be surprised, if you don't know already.) This set of recommendations isn't quite as crass, and the New Commission's staff-produced and -commissioned papers (a Commission's commission? hmm...) are decent descriptive pieces, if a bit pedestrian, but I have the sense that they were used to flesh out a predetermined structure rather than to inform discussion. For example, Lynne Sacks and Betsy Brown Ruzzi's Overview of Education Ministries in Selected Countries contains the important note that not all countries have a national curriculum, but does that inform the recommendations, and will anyone pay attention?
There are a variety of concerns many will raise about the recommendations of the New Commission Mark II, Junior. (For one of the first out of the chute, see AFT's response.) One that was rehearsed in the 1980s (when other comparatively-derived proposals looked at European tracking practices as a model) is that tracking students' educational careers at age 16 will make the inequalities in the current system harder to root out. Right now, many systems have a semi-soft tracking system: a substantial minority of students are actively encouraged to take challenging courses, while others are either actively discouraged (or encouraged to take non-challenging or remedial courses) or just not told about opportunities. That's the underlying purpose of Jay Mathews and other advocates of AP classes: to push schools to actively encourage students to take challenging academic classes in high school. Some school systems have started to soften that implicit tracking, encouraging a broader range of students to take AP and other challenging classes. If we end high school at age 16 and then track students into different types of institutions, we will risk increasing the inequalities in educational opportunities. As Fairtest head Monty Neill wrote yesterday,
If 16 year olds will be separated based on test scores, barring not only changes in school and in preschool but also in a wide range of other societal aspects, low income kids, kids of color, those whose first language are not English, those with disabilities, will be sorted out into some pretense of voc training (like McDonalds as was previously posted).
I don't know if Education Trust has weighed in yet on the recommendations of the New and Improved Commission Mark II, Junior, but if I were a betting man, I'd predict that they'll oppose it on these grounds.
A second argument against the structural recommendation for adolescents is about adult supervision of minors. If 16-year-olds are not only not required to attend school but are signaled, "Here's where school ends," then you'll have a much larger proportion of teenagers who will end their schooling. In the last several decades, young adults have had higher unemployment rates than adults over 25, so one possible consequence is a larger number of teens (maybe a little larger, maybe much larger) having nothing to do. I suspect that school boards will argue that if the common curriculum ends at age 16, crime will increase.
The appeal of a third argument is superficial, but I suspect it will be more important than the others in sinking the recommendations of the New and Improved 5% More Free! Commission Mark II, Junior: If you end the standard school program at 16, there go high school athletics and much of the extracurricular activities that millions of Americans remember as the best part of high school. That common experience helps create what David Tyack and Larry Cuban have called the grammar of schooling, or what Mary Metz called the "real school" script. To many adults, a "real school" has a football team, cheerleaders, a high school newspaper, senior prom, a yearbook, etc. Efforts to end the common academic program at 16 will have to fight the positive memories of millions of Americans and a century-plus discourse on the need to appeal to (and sometimes appease) the tastes of teens.
This is not to say that we shouldn't rethink the structure of schooling: we certainly should, regularly. As I've written before, there are significant historiographical flaws in Tyack and Cuban's Tinkering toward Utopia (1995), an historical brief for incrementalism. But at a first glance (i.e., the executive summary and some of the attached papers), the New and Improved 5% More Free! Commission Mark II, Junior, has gone about the redesign effort in the all-too-common ahistorical and narrowly-framed way.
November 28, 2005
All right: having beaten a future article for Education Policy Analysis Archives halfway into shape, I'm taking some time for relaxation and my sidewise way of looking at education policy, or at what passes for it.
Since the announcement this month that the Department of Education would promote the piloting of so-called growth models of accountability, there have been a number of reactions, many of them skeptical, from George Miller and Ted Kennedy, the Citizens' Commission on Civil Rights (a private organization, despite the similarity in names to the official U.S. Civil Rights Commission), Education Trust, and Clintonista Andrew Rotherham, who points out that only a few states have anything close to the longitudinal-database elements sufficient to carry this off.
While a few journalists have had a reaction-fest with this, there has been no acknowledgment of the existing literature on so-called growth models, their political implications, or the gaps in that literature....
I'll state up front that it's fine to focus on political questions; indeed, I've argued in The Political Legacy of School Accountability Systems that the political questions are ultimately the important ones, and that it's impossible to have a technocratic solution to political problems. Just don't ignore the technical issues (and for those, see Linn, 2004). Haycock of the Education Trust is ultimately right about the focus on philosophical questions, regardless of whether I agree with her on specifics.
Big political questions
So what are the policy/political questions? A few to consider:
- The dilemma between setting absolute standards and focusing on improvement. As Hochschild and Scovronick (2003) have pointed out, there's a real tension between the two, and it's impossible to resolve it completely. On the one hand, there are concrete skills adults need to be decent citizens (yea, even productive ones). On the other hand, focusing entirely on absolute standards without acknowledging the work that many teachers do with students with low skills is unfair to the teachers who voluntarily choose to work in hard environments. And, no, I'm not going to take BS from either side claiming, on the one hand, that we need to be kind to kids (and deny them the skills they need??) or, on the other hand, that we need to take a No Excuses approach toward those lazy teachers (and who are you going to find to teach in high-poverty schools when the teachers you've insulted have left??).
- The question of how much improvement to expect. Here, Bill Sanders' model (we'll take it on faith for the moment that he's accurately representing his model; more later on this point) comes close to an average of one year's growth per year in school (see Ballou, Sanders, & Wright, 2004, for the most recent article on his approach). But for students who are behind either their peers or where we'd like them to be, Haycock is right: one year's growth is not enough (see Fuchs et al., 1993, for a more technical discussion and the National Center on Student Progress Monitoring for resources).
- The tension between the public equity purposes of schooling and the private uses of schooling to gain or maintain advantages. Here's one thought experiment: try telling wealthy suburban parents, "We want your kids to improve this year, but not too much, because we want poor kids in the city or the older suburb nearby to catch up with your children in achievement and life chances." If anyone can keep a straight face while claiming that parents so told would just sit back and say, "Sure," well... that's right, I have some land to sell you in Florida.
- Where is intervention best applied? Andrew Rotherham's false dichotomy between demographic determinists and accountability hawks aside, arguments by David Berliner are about where to intervene to improve children's learning, not about giving up. (I should state here that of course I have heard teachers and some of my students fall into the trap of this dichotomy, but that's a constructed dynamic from which we can and must escape. To dismiss Berliner and others as if they fall into the trap is to shut off one escape route. Shame on those who carelessly elide the two.)
- Assumptions that technocratically-triggered sanctions based on (either) growth or absolute formulae work. I have yet to be convinced that such a kick-in-the-pants effect is strong enough or without side effects. This is not to say that I don't believe in coercion. I am just a believer in shrewd coercion, not the application of statistical tubafors (you'll have to search for the term on that page).
Statistical issues with multilevel modeling
Among education researchers, the tool of choice right now for measuring growth is probably so-called multilevel modeling. Why it became the tool of choice is probably an accident of recent educational history (the more recent pushes for accountability combined with the development of multilevel statistical tools), but it allows a variety of accommodations to the real life of schools, where students are affected not only by a teacher but also by a classroom environment shared with other kids, by the school, and by their own characteristics (and family characteristics). That's a mouthful and only skims the surface.
Of multilevel modeling pioneers, the best of the bunch by far (beyond Bryk and Raudenbush, whose names are most familiar in the U.S.) is Harvey Goldstein, whose downloadable papers are a treasure-trove of introductory material for those who have some statistical background. The Centre for Multilevel Modelling (which he founded) is one broader source, as are UCLA's multilevel modeling page and Wolfgang Ludwig-Mayerhofer's. A Journal of Educational and Behavioral Statistics (Spring 2004) special issue on value-added assessment is now required reading for anyone looking at multilevel modeling and the question of adjustment for demographic factors.
But there are both technical and policy/political issues with the use of multilevel modeling software (and I use that more generic term rather than referring to specific software packages or procedures). Let me first address some of the technical issues:
- Vertical scaling. Some statistical packages need a uniform scale in which the achievement of students at different grades and ages can be compared directly: a 7-year-old's score can then be set against an 8-, 9-, or 10-year-old's. This is not necessary with packages that use prior scores as covariates, but anything that measures growth in some way strongly begs for a uniform (or vertical) scale. There are two problems with such vertical scaling, stemming from the fact that it is very, very difficult to do the type of equating across different grades (and equivalent curricula!) that is necessary to put students on a single scale. Learning and achievement are not like weight, where you can put a 7-year-old and a 17-year-old on the same scale. Essentially, equating is a piecemeal process of pinning together a few points of separate scales (each more closely normed). At least two consequences follow:
- Measurement errors in a vertical scale will be larger than errors in a single-grade scale, which test manufacturers have far more experience norming.
- The interpretation of differences on a vertical scale will be rather difficult. One reason is the change in academic expectations across grades, unless you narrow testing to a limited range of skills. But the other reason is subtler: the construction of a vertical scale can only be guaranteed to be monotonic (higher scores on a single-grade test will map to higher scores on the cross-grade, vertical scale), not linear. There will almost inevitably be some compression and expansion of the scale relative to single-grade test statistics. That nonlinearity is not a problem for estimation (since models of growth can easily be nonlinear). But the compression/expansion possibility makes interpretation of growth difficult. Does 15-point growth between ages 10 and 11 mean the same thing as 15-point growth between ages 15 and 16? Who the heck knows!
- Swallowing variance. As Tekwe et al. (2004) point out in a probably-overlooked part of their article, the more complex models of growth swallow a substantial part of the available variance before getting to the "effects" of individual schools and teachers. This is inevitable with any statistical estimation technique with multiple covariates (or factors, independent variables, or whatever else you want to call them), but it has some serious consequences for using growth models for accountability purposes. It erodes the legitimacy of such accountability models among statistically-literate stakeholders, who see that most variance is accounted for (even if in a noncausal sense) by issues other than schools and teachers. In addition, this process leaves the effect estimates for individual teachers and schools very close to zero and to each other. Thus, with Sanders' model used in Tennessee, the vast majority of effects for teachers (in publicly-released distributions) are statistically indistinguishable. Never mind all my other concerns about judging teachers by technocracy: this just isn't a powerful tool even for summative judgments.
- Convergence of estimates. In the packages I know, the models don't always converge (result in stable parameter estimates), given the data. Researchers with specific, focused questions will often fiddle manually with equations and variables to achieve convergence, but you can't really make idiosyncratic adjustments in an accountability system that claims to be stable and uniform over time; or, rather, you shouldn't make such idiosyncratic adjustments and keep a straight face while claiming that the results are uniform and stable over time.
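To make the compression/expansion worry about vertical scales concrete, here is a toy sketch in Python. The logarithmic mapping below is entirely invented (it stands in for no real test's scaling); the point is only that a mapping guaranteed to be monotonic but not linear makes equal raw gains mean different things at different points on the scale.

```python
# Toy illustration of a monotonic-but-nonlinear vertical scale.
# The mapping below is hypothetical, not any real test's scaling.
import math

def vertical_scale(raw):
    """Hypothetical monotonic mapping that compresses the top of the range."""
    return 400 + 200 * math.log1p(raw / 50.0)

# Two students each gain 15 raw points, at different places on the raw scale.
low_growth = vertical_scale(40) - vertical_scale(25)     # gain near the bottom
high_growth = vertical_scale(140) - vertical_scale(125)  # same raw gain near the top

print(round(low_growth, 1), round(high_growth, 1))
```

The same 15-point raw gain translates into a much larger vertical-scale gain near the bottom of the range than near the top, which is exactly why "15 points of growth" is uninterpretable without knowing where on the scale it happened.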
Political complications of multilevel models
In addition to the technical considerations, there are issues with multilevel modeling that are more political in nature than technical/statistical:
- Omissions of student data. This is true of any accountability system that allows exemptions, but it's especially true of any model of growth that omits students who move between test dates. It's a powerful incentive for schools to perform triage on marginal students in high school, either subtly or openly. I've heard of such triage efforts in Florida, though it's hard to demonstrate intentionality. But even apart from the incentive for triage, it's hard to claim that any accountability system targets the most vulnerable when those are frequently the students who move between schools, systems, and states. And the more years included in a model, the less that movers count in accountability.
- The complexity factor. Technical issues with complex statistical models are, well, complex and difficult to understand without some statistical background, and such complexity requires sufficient care with educating policymakers. That's especially important with growth models, which are pretty easy to sell to lawmakers who may be looking for a technocratic model that they don't have to think too hard about. Here's a reasonable test: will Andrew Rotherham's blog ever mention the technical problems with growth models? Will the briefs put out by various education policy think tanks explain the technical issues, or will they prove the term to be an oxymoron?
- Proprietary software. I think that William Sanders still holds all data and the internal workings of his package to be proprietary trade secrets, even though they're used as public accountability mechanisms in Tennessee, at least (anywhere else, dear readers?) (Fisher, 1996). How can anyone justify using a secret algorithm for public policy in an environment (education) where everyone (and the justification for accountability itself) expects transparency? (For other commentaries about Sanders' model, see Alicias, 2005; Camilli, 1996; Kupermintz, 2003, and an older description of my own involvement in the earlier discussions of Tennessee's system. For his own description, see Ballou, Sanders, & Wright, 2004; Sanders & Horn, 1998.)
One of my concerns with the increasingly complex world of statistical models of growth is their amazing disconnect from fields that should be natural allies. We have great statistical packages that are incredibly complex, but some days they seem more like solutions in search of problems than a logical outgrowth of the need to model growth and development in children.
As stated earlier, one problem is the attempt to put student skills, knowledge, and that vague thing we call achievement in an area on one scale. Unlike weight, there isn't a cognitive measuring tool I'm aware of in which all children would have interpretable scores (nonzero measures on an equal-interval scale, to choose one goal). But for now, let's assume that someday psychometricians find the Holy Grail of vertical scales (or maybe that would be a Holy Belay Line to climb down after scaling the...). Even waving away that problem, I'm still troubled by the almost gory use of statistical packages without some thought about the underlying models.
Even if one were interested largely in describing rather than modeling growth, you could start with nonparametric tools such as locally-weighted regression (or LOESS) and move on to functional data analysis. Those areas of statistics seem logical ways to approach the types of longitudinal analysis that the call for modeling growth seems to require.
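As a sketch of what the descriptive route looks like, here is a bare-bones locally-weighted regression in plain Python (tricube weights and a local linear fit, the core idea behind LOESS). The ages and scores below are fabricated for illustration, and a production analysis would use an established implementation rather than this hand-rolled version.

```python
# Minimal LOESS-style smoother: a weighted linear fit around each target point,
# using tricube weights. Data are fabricated for illustration.

def loess_point(x0, xs, ys, bandwidth):
    """Estimate the smoothed value at x0 via a tricube-weighted linear fit."""
    ws = []
    for x in xs:
        d = abs(x - x0) / bandwidth
        ws.append((1 - d**3) ** 3 if d < 1 else 0.0)
    # Closed-form weighted least squares for y = a + b*x.
    sw = sum(ws)
    sx = sum(w * x for w, x in zip(ws, xs))
    sy = sum(w * y for w, y in zip(ws, ys))
    sxx = sum(w * x * x for w, x in zip(ws, xs))
    sxy = sum(w * x * y for w, x, y in zip(ws, xs, ys))
    b = (sw * sxy - sx * sy) / (sw * sxx - sx * sx)
    a = (sy - b * sx) / sw
    return a + b * x0

# Fabricated ages and test scores showing decelerating growth.
ages = [6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
scores = [210, 230, 255, 270, 280, 292, 300, 305, 311, 314]
smoothed = [loess_point(a, ages, scores, bandwidth=4.0) for a in ages]
print([round(s, 1) for s in smoothed])
```

The appeal for growth description is that nothing here assumes a linear (or any parametric) trajectory; the smoother simply follows the data, which is what a descriptive account of children's growth needs.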
Then there is demography. I'll admit I'm a bit partial to it (having a masters from Penn's demography group), but few education researchers have any formal training in a field whose model assumptions are closer to epidemiology and statistical engineering analysis than to psychometrics. In demography, the basic conceptual apparatus revolves around analyzing the risk of events to which a population is exposed. The bread and butter of demography are births and deaths, or fertility and mortality. The fundamental measure is the event-occurrence rate, and the conceptual key to mathematical demography is the assumption that behind any living population is a corresponding stationary population equivalent: a hypothetical or synthetic cohort that one can conceive of as exposed to the conditions in a population during a period of time, rather than the conditions a birth cohort experiences. It's as if you had a time machine at the end of December 31, 1997, and a group of 1,000 babies born at the first instant of January 1, 1997, were flipped back to the beginning of the year each time they survived to its end. It's an imaginary, lifelong version of Groundhog Day, but one with the happy consequence that the synthetic cohort would never hear of Monica Lewinsky. What happens to that synthetic cohort never happens to a real birth cohort, but it does capture the population characteristics of 1997. You can find the U.S. period life table for 1997 online in a PDF file, with absolutely no mention of Monica Lewinsky. (There is much I'm omitting in this description of a stationary population equivalent, I know!)
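The synthetic-cohort idea can be sketched in a few lines of Python. The death probabilities below are invented (they are emphatically not 1997 U.S. rates), and real life tables involve much more (unequal age intervals handled properly, conversion of rates m(x) into probabilities q(x), person-years lived); this is only the survivorship skeleton, l(x), applied to a hypothetical cohort of 1,000 births.

```python
# Toy period life table: apply hypothetical age-specific death probabilities
# q(x) to a synthetic cohort of 1,000 births. All q(x) values are invented.

qx = {0: 0.007, 1: 0.001, 5: 0.001, 15: 0.003, 45: 0.015, 65: 0.080, 85: 1.0}

def survivorship(radix=1000.0):
    """Return l(x): survivors to the start of each age interval."""
    lx = {}
    survivors = radix
    for age in sorted(qx):
        lx[age] = survivors
        survivors *= (1.0 - qx[age])  # apply the interval's death probability
    return lx

lx = survivorship()
for age in sorted(lx):
    print(age, round(lx[age], 1))
```

The resulting l(x) column describes what would happen to a cohort that lived its whole life under one period's conditions, which is precisely the Groundhog Day fiction described above.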
Demography offers a few aids to this business of modeling growth, because its bailiwick is looking at age-associated processes. Or, as a program officer for the National Institute on Aging explained at a conference session I attended a few weeks ago, aging is a lifelong process. Trite, I know, but it's something that the growth-modeling wannabes should learn from, for two reasons.
One is the equally obvious (almost Yogi Berra-esque) observation that as children grow older, their ages get bigger. Unfortunately, most school statistics are reported by administrative grade, not age, which makes comparability on almost any subject (from graduation to achievement) virtually impossible. The only reputable source of national information about achievement that I'm aware of based on age, not grade, is the set of NAEP Long-Term Trend reports, pegged to 9-, 13-, and 17-year-olds tested in various years from 1971 to 2004. Some school statistics used to be reported by age (age-grade tables), which I'm finally figuring out how to use reasonably. But you could have some achievement testing conducted by age and ... well, enough of that rant.
The broader use of demography should be the set of perspectives and tools that demographers have developed for measuring and modeling lifelong processes. Social historians have an awkward term for this: life-course analysis. What changes and processes occur over one's life, and how do you analyze them? Some education researchers acknowledge at least a chunk of this perspective, most notably in the literature on retention, where you cannot take achievement in a specific grade's curriculum as evidence of the (in)effectiveness of retention in improving achievement. You can only find out the answer by looking at what happens to children as they grow older.
Some of the more sophisticated mathematical models of population processes have direct parallels in education that could be explored fruitfully. To take one example unrelated to achievement growth, parity progression (women's moves from having 0 children to 1 to 2 to ...) is an analog of progression through grades, and more could be done using parity-progression-ratio estimates to see what happens with grade progression.
But, to growth... variable-rate demographic models hold considerable promise, at least in theory, for analyzing changes from cross-sectional data. In the standard (multilevel model) view, you focus on longitudinal data and toss cross-sectional information, because (you think) there is no way to separate cohort effects from real growth. Aha! But here demography has an idea (stationary population equivalents) and a tool (variable-rate modeling). While the risk model of demography requires proportionate changes, natural logs, and e to the power of ... well, you get the idea. I'm going to provide a brief sketch and two possible directions. For more details, see Chapter 8 of Preston, Heuveline, and Guillot (2001). (And remember, we're magically waving away all psychometric concerns. We'll get back to that a bit later.)
We're going to consider the measured achievement of 10-year-olds in 2006 (on a theoretically perfect vertically-scaled instrument) in two different ways: first, in relation to changes among 10-year-olds across years, and second, in relation to the experience of a cohort. We can then use those two statements to relate observed information from two cross-sectional testing administrations to the underlying population dynamics (in this case, achievement growth through childhood).
First, let's compare the achievement of 10-year-olds in 2006 to 10-year-olds in 2005. It doesn't matter whose is better (or if they're equal). My son is now 10 years old (and will still be 10 for the next round of annual tests here in Florida), so let's suppose that the achievement of 10-year-olds in 2006 is higher than for 10-year-old students the year before. Then we could think of achievement as follows: the achievement of 10-year-olds in 2006 = the achievement of 10-year-olds in 2005 and some growth factor in achievement among 10-year-olds between 2005 and 2006. For now, it doesn't matter whether the and refers to an additive growth factor, a proportionate one, or some other function. And if the 10-year-olds in 2005 did better, the growth factor is negative, so it doesn't matter who did better.
Second, let's compare the achievement of 10-year-olds in 2006 to 9-year-olds in 2005 in a parallel way: the achievement of 10-year-olds in 2006 = the achievement of 9-year-olds in 2005 and some growth factor in achievement between the ages of 9 and 10 for 2005-06. Note: this "growth factor" is part of the underlying population characteristic that we are interested in (implied growth in achievement between ages, across the ages of enrollment).
Now, let's combine the two statements into one: the achievement of 10-year-olds in 2005 and some growth factor in achievement among 10-year-olds between 2005 and 2006 = the achievement of 9-year-olds in 2005 and some growth factor in achievement between the ages of 9 and 10 for 2005-06. Without assuming any specific function here, this statement expresses the relationship between cross-sectional information across ages as one that combines changes within a single age (across the period) and changes across ages (within the period). Demographers' models of population numbers and mortality are proportional, so the and in both cases would be multiplicative functions. But one could assume an additive function, too, or something else (a variety of functions), and the concept would still work. Once one estimates the changes within single years of age, one can then accumulate those differences and, within the model, estimate the underlying achievement growth between ages, which is the critical information of interest. When the interval between test administrations is equal to the interval between the ages (four years, for NAEP long-term trends), the additive version with linear interpolation of age-specific change measures is identical to the change between 9-year-olds in 1980 and 13-year-olds in 1984, and so on. But this method allows estimating those period-specific rates when the test dates aren't as convenient, and the exponential estimates are different.
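A sketch with made-up scores may make the bookkeeping concrete. It shows that the combined statement holds whether the and is read additively or proportionately: the cross-sectional age gap plus (or times) the within-age period change recovers the implied growth between ages 9 and 10. The three scores are invented, not NAEP values.

```python
# Made-up mean scores on a hypothetically flawless vertical scale
a9_2005 = 210.0   # 9-year-olds tested in 2005
a10_2005 = 222.0  # 10-year-olds tested in 2005
a10_2006 = 224.0  # 10-year-olds tested in 2006

# Statement 1: change within age 10 across the period 2005 to 2006
period_add = a10_2006 - a10_2005    # additive reading of "and"
period_mult = a10_2006 / a10_2005   # proportionate reading of "and"

# Combined statement: cross-sectional gap combined with the period change
# yields the implied growth between ages 9 and 10 for 2005-06
age_growth_add = (a10_2005 - a9_2005) + period_add
age_growth_mult = (a10_2005 / a9_2005) * period_mult

# Statement 2 directly: A10(2006) = A9(2005) "and" age growth.
# Both readings agree with the direct comparison.
assert abs(age_growth_add - (a10_2006 - a9_2005)) < 1e-9
assert abs(age_growth_mult - a10_2006 / a9_2005) < 1e-9
```

With annual single-age testing the identity is trivial; the payoff of the demographic framing comes when the testing intervals and age intervals do not line up and the single-year changes must be estimated and accumulated within the model.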
Of course, this assumes perfect measurement, something that I'd be very cautious of, especially given the paucity of data sets apart from the NAEP long-term trends tables. I've played around with those, and the additive and proportionate models come up with virtually identical results with national totals, assuming linear change in the age-specific growth measures (since we only have measures for 9-, 13-, and 17-year-olds).
(Changing the interpolation of age-specific growth rates to a polynomial fit doesn't change the additive model much. It shrinks the estimates of growth in the exponential model a bit but doesn't change the trends. And, yes, I'm aware of the label problem: arithmetic should be additive or linear.)
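Here is a sketch of the interpolate-and-accumulate step with invented numbers (the growth measures and the base score are made up; the real age-specific figures would be derived from the NAEP long-term-trend tables). Growth is measured only at ages 9, 13, and 17, so single-year values are linearly interpolated and then accumulated, additively in one model and exponentially in the other.

```python
import math

# Invented annualized growth measures at the three tested ages,
# in two parameterizations:
add_growth = {9: 4.0, 13: 2.5, 17: 1.0}        # scale points per year of age
prop_growth = {9: 0.018, 13: 0.010, 17: 0.004}  # proportionate rate per year

def interp(table, age):
    """Linear interpolation of an age-specific growth measure."""
    ages = sorted(table)
    for lo, hi in zip(ages, ages[1:]):
        if lo <= age <= hi:
            t = (age - lo) / (hi - lo)
            return table[lo] * (1 - t) + table[hi] * t
    raise ValueError("age outside measured range")

base = 230.0  # made-up mean score at age 9

# Accumulate single-year growth across ages 9-13, evaluating each
# interpolated measure at the midpoint of the year of age
additive = base + sum(interp(add_growth, a + 0.5) for a in range(9, 13))
proportionate = base * math.exp(
    sum(interp(prop_growth, a + 0.5) for a in range(9, 13))
)
```

With linear interpolation and matching four-year intervals, the additive accumulation reproduces the simple 9-to-13 difference, while the exponential accumulation gives a slightly different estimate, which is the pattern described above.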
There are odd results (does anyone know of reasons why the reading results were unusually high in 1992, or whether the results for 17-year-olds in 2004 are unusually low for any reason? I was using the bridge results), and there are all sorts of caveats one should apply to this type of analysis, from the complexity of estimating standard errors of derived data to changes in test administration for students with disabilities to the comparability of the 2004 results and, oh, I'm sure there's more. The point is that demographic methods provide some feasible tools precisely for looking at age-related processes, if we'd only look.
Alicias, E. R. Jr. (2005). Toward an objective evaluation of teacher performance: The use of variance partitioning analysis, VPA. Education Policy Analysis Archives, 13(30).
Ballou, D., Sanders, W., & Wright, P. (2004). Controlling for student background in value-added assessment of teachers. Journal of Educational and Behavioral Statistics, 29(1), 37-65.
Camilli, G. (1996). Standard errors in educational assessment: A policy analysis perspective. Education Policy Analysis Archives, 4(4).
Fisher, T. H. (1996, January). A review and analysis of the Tennessee Value-Added Assessment System. Part II. Nashville, TN: Comptroller of the Treasury.
Fuchs, L. S., Fuchs, D., Hamlett, C. L., Walz, L., & Germann, G. (1993). Formative evaluation of academic progress: How much growth can we expect? School Psychology Review, 22, 27-48.
Hochschild, J.L., & Scovronick, N.B. (2003). The American dream and the public schools. New York: Oxford University Press.
Kupermintz, H. (2003). Teacher effects and teacher effectiveness: A validity investigation of the Tennessee value-added assessment system. Educational Evaluation and Policy Analysis, 25(3), 287-298.
Linn, R. L. (2004). Accountability models. In S. H. Fuhrman & R. F. Elmore (Eds.), Redesigning accountability systems for education (pp. 73-95). New York: Teachers College Press.
Preston, S.H., Heuveline, P., & Guillot, M. (2001). Demography: Measuring and modeling population processes. Malden, MA: Blackwell Publishers.
Sanders, W. L., & Horn, S. P. (1998). Research findings from the Tennessee value-added assessment system (TVAAS): Implications for educational evaluation and research. Journal of Personnel Evaluation in Education, 12(3), 247-256.
Tekwe, C. D., Carter, L. R., Ma, C., Algina, J., Lucas, M. E., Roth, J., Ariet, M., Fisher, T., & Resnick, M. B. (2004). An empirical comparison of statistical models for value-added assessment of school performance. Journal of Educational and Behavioral Statistics, 29(1), 11-35.
Update! (12/2): Today, the Financial Times is publishing an article on the UK system of league tables, and reporter Robert Matthews cites Harvey Goldstein extensively. Thanks to Crooked Timber for the tip.
Update (12/8): I foolishly forgot to mention a 2004 RAND publication, Evaluating Value-Added Models for Teacher Accountability, which describes the limits of growth models for accountability. Thanks to UFT's Edwize blog for pointing it out (though I have a few bones to pick with the larger post; I don't have enough time right now...).
Update (12/13): Andrew Rotherham discusses two technical issues with growth models (longitudinal databases and vertical scaling of measures), to his credit.
November 14, 2005
Al-Arian and academic freedom, redux
The article Greg McColm and I wrote, A University’s Dilemma in the Age of National Security (PDF), is now out in the National Education Association's Thought and Action, Fall 2005 issue (pp. 163-77). We've been working on this for over two years or, rather, Greg has done the vast bulk of the work and I've been putting in chip shots, academically speaking. He deserves any credit for clever turns of phrase as well as a persistence that many other academics don't have. It's a little different from what we submitted, but that's life with editing. Among other things, I've learned in working on this article that some disciplines don't have standard citation styles because the rival proprietary journals have different ones, so the standard is to use the citation style of the source that material came from. But I'm sure that's not why you're going to read the full entry, which is about the criminal trial that's entering its concluding stage this week. Note: the article itself is unrelated to the trial, since it was written well before the trial started. Its appearance at the close of the trial is just coincidence.
This week, the prosecution will rebut the case raised by the lawyers for the four defendants. Al-Arian's team rested without presenting witnesses, but the others presented a few witnesses each before the summations. Journalists have described the summations in essence as a battle over circumstantial evidence. Are the disparate pieces suggesting funding links between the defendants and the Palestinian Islamic Jihad enough to show that they raised money for PIJ with the intent to support the specific organizational mechanics of terrorism (and not just ancillary activities of PIJ)? There are bound to be appeals upon any convictions (and the several hundred pages of jury instructions, along with the dozens of decisions Judge Moody made in the course of the trial, will be fodder for them), but this seems to be the central question of the conspiracy and terrorism charges. (The other charges, about fraudulent immigration applications, are a whole other kettle of fish, and journalists haven't touched those at all, at least as far as I can tell.)
I haven't been sitting in the jury box, so I don't know the full evidence and won't comment on the key question. I'm sure that if there is a conviction, many will claim that the conviction is proof that the administration of USF did the right thing by trying to fire Al-Arian before indictment and by firing him right afterwards. That is essentially an argument that the end result of a criminal trial justifies the employment actions of a university. In some cases, where the basic facts are known before trial, that might well be the case. But I'm not so sure it holds here with Al-Arian, and not because he's anything like an angel. Far from it. But there are a few points that remain, specific to the trial:
- The firing of Al-Arian after the indictment was a purely symbolic and political act. There was no payroll difference for the university between an unpaid leave of absence during a trial, at the end of which a conviction ends the job, and firing a professor after indictment. In both cases, the defendant is unpaid.
- Many of the factual assumptions of Al-Arian's critics turned out to be in error, especially if you agree with the prosecution's case. In the early 1990s, Al-Arian wasn't adding to PIJ coffers, from all reports of the prosecution case that I've read. Instead, he was desperately seeking to raid PIJ accounts to support the think-tank he had co-founded. This prosecution claim doesn't necessarily obviate their central point, but it is related to the criticism of Al-Arian that he was using his employment at USF as a cover to legitimate the funding of terrorism. He may well have used his employment at USF as a mechanism to start a think-tank with delusions of Palestinian intelligentsia gravitas, eventually willing to propose various financial mechanisms to keep it afloat. (This is detail from the prosecution's case, detail that Al-Arian's lawyers may or may not dispute.)
- The immigration-fraud charges are a safety-valve for the federal government. If Al-Arian and the other defendants are acquitted on the more serious charges but are convicted on the fraud charges (which I am guessing have a lower threshold to prove), those convictions will be powerful tools at deportation hearings, which (I am also guessing) would proceed on a track parallel to the appeals of any criminal convictions. A fraud charge may not carry lengthy prison time beyond what the defendants have already served before trial, but such convictions could be used in deportation hearings. The end result might well be an even more complicated legal mess than some of my friends and colleagues are predicting.
- If there is no conviction or deportation order left standing at the end of the day, there is still Al-Arian's grievance against his firing. The USF administration's decision to fire Al-Arian on his indictment hinges on the legitimacy of that indictment, whose counts changed before trial, and (if it comes back to an employment case) the charges would necessarily count as not proven in a court of law, at least as far as the law is concerned. The machinations to fire Al-Arian before indictment might well be used by Al-Arian's civil lawyer(s) as evidence that the termination decision was pretextual. And Al-Arian's civil lawyer in 2003 filed a pro-forma grievance under administrative rules passed by our Board of Trustees under the assumption that the Collective Bargaining Agreement with the faculty union was void, an assumption that Florida's courts have since ruled invalid. One more mess to consider.
- If there is no standing conviction or deportation order at the end of all this, and there is a university grievance process that results in upholding Al-Arian's dismissal, the AAUP investigation of USF will probably become active again. In the summer of 2003, staff and members of Committee A reported to the annual meeting that under AAUP procedures, universities were given more due process than they usually give faculty: a university's hearing process had to be concluded (or none started) for AAUP to officially censure the administration. Because USF's administration and Al-Arian's lawyer agreed to suspend the process for a post-termination grievance pending the outcome of a criminal trial, AAUP's staff and Committee A leadership concluded that the annual meeting could not fairly consider the censuring of USF's administration. But if Al-Arian is freed and the grievance proceeds, then that stoppage on AAUP action is lifted (at least as I read the AAUP process). That doesn't guarantee censure, but it does make some discussion within AAUP highly likely, at least in the annual meeting.
- For those who long argued for Al-Arian's termination, before an indictment, I wonder if they considered the likely results (at least until an indictment): a man as a cause celebre, with loads of time on his hands to raise funds for Palestinian causes. If those causes included terrorism,...
- For those who long argued for Al-Arian's termination, and who are delighted that Al-Arian is on trial, I wonder if they thought that federal agents were better or worse at investigation than university administrators, or if in retrospect they preferred that the administration hire private investigators, who could possibly have interfered with or discovered the clandestine wiretaps of the feds.
Since Al-Arian's lawyer filed his post-termination grievance in 2003 using non-union procedures, the United Faculty of Florida (my faculty union) is officially out of the loop regardless of the results of the trial, any deportation hearings, or the grievance process. Of course, I'm not ruling anything out given the twists and turns of all this. My longstanding concern here has been with the long-term consequences of administration actions on faculty morale and the university environment, and while there are many things that are operating significantly better today than almost four years ago, this episode is another patch of tarnish on USF's history. The administrators and trustees who served in late 2001-early 2003 may not have been responsible for all of the things coming at them, but they made enough errors to contribute to problems. Until someone convinces me otherwise, I think the university would have been better off waiting until an indictment and putting Al-Arian on unpaid leave until the end of the trial (and subsequent proceedings) or waiting for evidence that would clearly justify discipline or termination on its face. The guy is no model of university citizenship, but that's not the entire question here.
Correction (7:30 a.m., Tuesday): It looks like the jury instructions only took three hours for the judge to read. Deliberations start today.
October 19, 2005
A contrarian definition of big social-science history
In crafting the call for papers for this year's Social Science History Association annual meeting, incoming SSHA President Richard Steckel asked SSHA members and networks to think about the meaning of "big social science history," defined in the call as "large collaborative research projects within and across disciplines" roughly tied to social-science history. In some ways, this call was a reflection of the original mission of SSHA, to which this year's call for papers referred, and perhaps asking us to evaluate those large research projects.
But Steckel also asked us to dream big. In network meetings, he referred to multimillion-dollar grants in medicine and other fields and framed the call for papers as a thought experiment: "Networks are encouraged to imagine the research program they would conduct with a multi-million dollar grant."
Since I've recently finished a collaborative project among 5 historians of education, 3 sociologists, 1 criminologist, several grad students, and a partridge in a pear tree (though the partridge is not a coauthor in the book that will be coming out), and because I have benefitted indirectly from other collaborative (data-collection) projects, I think I have some experience with today's collaboration, including the prospects for multimillion-dollar grants. And while I will not discount the possibilities of getting large grants, I think Steckel framed the issue too narrowly at last year's meeting. Because the SSHA annual meeting is half a month away, I'm putting out this contrarian definition in hopes of starting a dialogue before the meeting (and one I hope will extend through the meeting).
Framing the issue as one of multimillion-dollar grants was inapt for several reasons and conflicts with the questions raised elsewhere in the call for papers:
- Multimillion-dollar grants have large price tags for very specific reasons tied to the needs of the projects, not to the intellectual integrity of the work. Below, I'll describe multimillion-dollar social-science research projects worth every penny and more, but size only matters to the spam in our inbox and ambitious institutional officers who look at federal funding figures. Medical research requires labs, technicians, physicians and nurses for treatment studies, and so forth. Engineering research requires labs, expensive equipment that has a limited life, and technicians. There are social-science history projects that require such funding, but they're generally data-collection efforts. Those are incredibly important, but that requires a different definition of "big social-science history," one I propose below.
- Multimillion-dollar grants in social-science history are inconsistent with the current research funding environment, for the most part. Maybe other countries are more generous, but the big federal funding agencies in the U.S. (NSF, NIH) aren't as free with their money as we might like in our fantasies. Funded NSF project budgets are routinely shrunk in negotiation. And while I love the NIH's modular budget philosophy, that only applies for small and moderate grants (I think $250,000 is the cap for modular budgeting at NIH). The last time that the major funder in my area (the Spencer Foundation, sponsoring disciplinary research in education) dangled a few million dollars to several groups, it was in 1999 and early 2000, and the grants that eventually came out of that initiative shrank to shoestring size.
- The type of collaborative work funded by multimillion-dollar grants is frequently targeted at specific projects with well-defined research questions. I love well-defined research questions, but is this the only definition of fruitful collaboration? I'm not speaking of the normal development of an area of literature but of unusual projects (topical conferences, summer workshops, collaborative volumes) that can move a field but neither need huge gobs of cash nor the type of research question that focuses grant proposals.
- The multimillion-dollar model is inappropriate for most faculty and other researchers we want to engage in SSHA in the future. In the past 30-40 years, more teaching faculty across the nation have been expected to carry on active research, and a far higher proportion are on regional state campuses of public university systems. SSHA is like most other academic bodies in drawing disproportionately from institutions that give faculty significant time for research. But there is a growing body of scholars who face research demands with little infrastructure on their own campuses apart from an office, a computer, and maybe a few hundred dollars of travel funds per year. Few of them have the institutional resources necessary to draw such grants, and yet they can contribute greatly to social-science history.
- A multimillion-dollar model will preferentially affect some disciplines and tools, by the argument I presented above in #1. The tool for which money can most easily and legitimately be requested is GIS. I love GIS as a tool. I want it well-funded for basic data collection, such as the National Historical Geographic Information System, as well as good individual projects. But not every good research project is a GIS project, and not every collaboration requires or can feasibly use GIS. This is suggested by the abstracts available with the preliminary program for the meeting. Apart from the roundtable sessions (which are skewed a bit towards GIS), I could only identify one or two paper abstracts not associated with GIS where a multimillion-dollar investment seemed to be part of the research agenda. Abstracts are not papers, and I hope to be proved wrong in Portland.
Given these concerns, I hope that the discussion of big social-science history will veer away from the size of desired grants and instead towards the environment necessary for fruitful interdisciplinary collaboration. Let me start with an abstract but serviceable definition. Big social-science history is interdisciplinary collaboration in history that can create, develop, or support a research agenda that would not be possible by researchers acting alone. Big social-science history should focus on collaboration and infrastructure that makes research possible. Big social-science history makes the tools and end results widely available to researchers and other readers worldwide.
Let me give some ideas that look like big social-science history to me. Some of these exist already and will be discussed in sessions at the SSHA annual meeting. Some don't.
- Data-collection and archiving projects such as the Integrated Public Use Microdata Series (IPUMS) projects at the University of Minnesota. A few weeks after the National Science Board published its Long-Lived Data Collections report, we should see data collection as the foundation of big social-science history. Any faculty member with skills in SAS or SPSS can sit in a tiny office, download huge datasets, and manipulate them on today's computers. Today, I can replicate in a few hours what took me months to do with a mainframe in 1990-91. In essence, any time I download a data set, I'm involved in a collaborative relationship with those who collect and maintain the data. Or, rather, I'm benefitting from that infrastructure. With these huge collections, any scholar around the globe with a decent computer can engage in big social-science research that would have taken hundreds of thousands of dollars in the 1970s.
The reason one should focus on these large-scale data-collection projects is that they require a certain amount of expertise in organizing the work effectively, and that local projects can still be done using this model. I hope Steve Ruggles and others of his ilk might be interested in spreading the Secrets of Data Collection and Management for projects of smaller scope... or might be willing to take on the digitizing of local data.
- Data "digesting" projects with end results free on the web. These exist with contemporary data (e.g., Current Population Survey reports and data), and it's essential to create professional approbation (or a brownie-point market) for these in social-science history. They require multi-year, large grants, with the clear expectation that the resulting data sets and reports will be available online, free to anyone. This expectation will require a change in the norms of historical scholarship dissemination, which currently favor books over all other ways of disseminating research. Why is it important to create a new norm? There is currently a long-delayed project of this sort in social-science history that Amazon lists (pre-publication) for $825. Who will buy it, other than libraries? Who will read and use it, other than those of us who still venture to libraries? The editors are well-meaning researchers who started the project with a model of big social-science history that would have worked well in the 1980s because there were no other options then. But there are now, and deadtree statistical compilations that you and I can never have at home or in our office are truly dinosaurs.
- Online scholarly encyclopedias. For some years, I was surprised at the fad of encyclopedias among some publishers, and then I became irritated. This type of work is precisely the collaborative scholarship that should be online, refereed, and updated. Typically, scholarly encyclopedias are highly mixed in quality, because editors can't get writers for all entries without dredging for authors. Then you're stuck with an encyclopedia with a major entry that ignores huge swaths of historiography. And then it's obsolete within five years. But with online publication, everything changes. You don't like an entry? Write a competing one that gets refereed! There are unmediated (or semi-mediated) versions of this on the internet, commonly known as wikis (such as Wikipedia). But we can do better! And we should.
- Working-papers archives for historians, with the infrastructure necessary for archiving commentaries and making metadata available. Some version of this exists for physicists and economists, though I'm not sure if they have commenting and metadata attached that would allow such archives to be used by academic library software. Similarly, someone needs to collect dissertation abstracts and metadata in a publicly-available site that could be folded into academic library software.
- Online communities centered around areas of interest, where scholars around the globe can discuss topics of mutual interest and ... hey! That's H-Net. (Speaking of which, you can donate to support this infrastructure for Big Social-Science History with just a few clicks.)
None of these look like the "big social-science history" projects that were legends when I was in grad school. I don't know what the budget for the Philadelphia Social History Project was, but the time for that type of project is probably over. Its data collection was important, but that's different from the project as a whole. We need to conceive of big social-science history in ways that faculty around the globe can engage in it. I take Professor Steckel at his word about the gist of the call for papers (we need to evaluate and think about big social-science history as a whole, with large ambitions) and hope that this is a reasonable prod for the debate.
September 16, 2005
"Declining (x) literacy"
Now that Constitution Day is nearly upon us (tomorrow, folks, the 218th anniversary of the signing of the Constitution, not that its signing had much historical significance; quick, what was the criterion for ratification?), the New York Times has its obligatory obeisance-and-reflection, which turns out to be a bunch of historically-unreflective pap about "historical literacy":
The new law takes effect as many historians are voicing alarm over the dimming historical memory of the nation.
James Rees, executive director of George Washington's Mount Vernon estate, said that in his 22-year tenure he has seen a growing historical ignorance among visitors. ...
Some educators believe that young people's history proficiency is declining because they watch too much television,...
Some experts say the problem is worsening because history and civics are receiving less attention in public schools, the result of a nationwide focus on reading and math....(emphases added)
One of the great signs of historical ignorance is when people take on one of the great trend-y myths of popular belief, like the myth of declension. Whoops. Looks like Sam Dillon, the reporter for the Times, catered to that myth in the article. While Americans may not know enough of their own history, I doubt that such ignorance is greater now than at any time in the past, except maybe September 17, 1787, when there was very little national history to regurgitate on standardized tests.
Moreover, the usual definition of "historical literacy" is focused on fragmentary bits of information that look remarkably like trivia. In the case of the Times article, the obligatory quiz question was the commander of colonial forces at Yorktown in 1781 (hint: not William Sherman, Ulysses Grant, or Douglas MacArthur). While I'd like folks to have a clue about who the commander was, I'd also like them to know a little more: why the battle was important and who else was involved besides the colonials and the royalists, for starters.
As historian of education Harvey Graff has noted, we use the word literacy when we want a trump card to tout the importance of a topic, even though any particular literacy concept is historically contingent and constructed. To computer literacy, economic literacy, math literacy, and physics literacy, among others, we can now add historical literacy. And, after a bit of searching, I've discovered that there are at least 31 pages referring to condom literacy. That puts the dispensers in gas station men's rooms in a whole other perspective. Postmodernists should be happy that we are constantly commanded to read the world in so many ways.
June 1, 2005
So the news today is filled with the revelation that former FBI No. 2 Mark Felt was Woodward and Bernstein's Deep Throat, the anonymous source for much of their Watergate coverage. I expect that we'll first see the lionization of Felt for being a whistleblower. Regardless of the putative motivation (in one version, revenge at Nixon for having targeted the FBI), Felt helped unravel a conspiracy to suppress an investigation of a political crime. I wonder how soon we'll see discussion of the rest of Felt's record: his work as J. Edgar Hoover's right-hand man, his tenure as the person in charge of internal inspections who raised no ethical questions about COINTELPRO, his conviction related to one COINTELPRO operation (against the Weathermen), and his pardon by Reagan. Does his service as Deep Throat mitigate his undermining of American democracy through COINTELPRO? But Felt's record isn't the only story of redemption in popular consciousness this month. There's Anakin Skywalker, after all, ...
Discovering that my children don't remember anything about the original Star Wars (you know, "Episode IV," originally released in 1977), I showed the DVD to them last week and discovered a few uncomfortable things, like Princess Leia's complete failure to mourn the deaths of millions on the planet where she grew up. But let's skip the fairy-tale elements here and get to the myth of the broader Star Wars story-arc (see Alex Soojung-Kim Pang's commentary for the best critique of Lucas's general movie-making): The six films together are far more about Anakin than about Luke, Obi-Wan, or the political struggle in the SW universe. As many others have noted, Anakin is a tragic figure, falling into the depths of savagery before being redeemed at the last minute, quite literally, in Return of the Jedi.
But is it really the saving of his soul, or whatever in the SW universe is akin to that? Anakin's redemption in SW VI: RotJ consists primarily of his heaving Palpatine to his death to save Luke. Given his role in millions of deaths in the prior twenty years or so, seeing Anakin as redeemed would be like celebrating Lavrenti Beria if he had killed Stalin in the early 50s. (Please, don't tell me in comments that he really did! I'm not a Soviet historian and don't wish to be swamped with various conspiracy theories. The question would still remain.) Without having seen SW III, I think I can safely say that Anakin/Vader was responsible for much of the harm of the Empire in Lucas's mythical long-ago, far-away galaxy. One good moment doesn't wipe out hundreds of crimes.
In history, though, there is a broader question: is redemption individual or collective? I recall more than twenty years ago a similar storyline about redemption when George Wallace was elected Alabama's governor one last time in 1982, after announcing he had become born-again and apologized to civil rights leaders for his recalcitrant segregationist stance in the 1960s. Yes, it is true that Wallace turned himself around in many ways. But the redemption story ignored in 1982 was that the American political system had been redeemed in significant ways with the Voting Rights Act of 1965. Facing newly-enfranchised Black voters, a whole bunch of white segregationists suddenly discovered religion (or at least civil rights). Some of them were heartfelt. Some, like Strom Thurmond, you really couldn't quite believe. Some, like Jesse Helms, betrayed the lip service to civil rights with their actions (the "white hands" ads in Helms' first campaign against Harvey Gantt). But the true story of George Wallace's election in 1982 was the redemption of a region (and a country), not just of one individual.
Whether Mark Felt's whistleblowing as Deep Throat is a similarly broad redemption is much more questionable. But we're taken with stories of individual redemption; thus the appeal of the SW movies and what I expect will be an eventual journalistic judgment that Felt really was redeemed by his maybe-not-so-well-intentioned semi-whistleblowing.
May 21, 2005
Al-Arian and symbolism
Now that Judge James Moody has identified a jury for the trial of Sami Al-Arian and three codefendants, it's time for a little reflection on the choices made by USF administrators over the past decade and a little perspective on the meaning of the trial for academic freedom, if there is any. The book I'm writing, Scholar-Citizen, will discuss the case at USF (what do you think started the idea of writing a book in the first place?), but that's one of many incidents in the book. I'll put some broader thoughts together about the case here and over the course of the trial.
A few disclaimers first: I will not take a position on Al-Arian's guilt or innocence. In the next week, I'll explain the relevance of any verdict for the university, but this is about the case on the campus. In addition, my colleague Greg McColm has a much stronger grasp than I on the details of Al-Arian's time at USF (or maybe I should say the details of and allegations about his employment), and is still keeping up a huge repository of information at the on-campus faculty union web site. He'll be the primary author of an article on Al-Arian, USF, due process, and faculty governance that will be appearing in a journal sometime in the next year.
So, to the issue today: Why did some faculty and many Tampa residents want USF to fire Al-Arian long before his indictment on charges that he helped finance the Palestinian Islamic Jihad? In 1994 and 1995, a PBS video and newspaper articles in the Tampa Tribune alleged that Al-Arian had helped finance terrorism, and that USF's formal relationship with the World Islam Studies Enterprise (WISE) (which Al-Arian had co-founded a few years before) legitimized a slew of activities that were mixed up with the direct and indirect financing of terrorism. USF cut off that relationship in June 1995 and suspended Al-Arian with pay in 1996 during a formal investigation, and then reinstated him to teaching in 1998 after the investigation stated that there was insufficient cause to fire Al-Arian (as part of a broader report into the relationship with WISE). While the controversy over Al-Arian heated up again in 2001 when Al-Arian appeared on the FoxNews "O'Reilly Factor," in reality the local controversy had begun several years before. (See Greg's chronology for a summary timeline of related events.)
I do not think that my neighbors and the minority of my colleagues who wanted Al-Arian fired before 2001 really thought that doing so would end his activities. They're smart enough to know that he would have become a cause célèbre, with plenty of time on his hands to raise more money if that's what he did with his free time. Nor do I think that anyone really thought that USF was better equipped to investigate criminal activity than the FBI. Nor, for that matter, was anyone willing to say that anything goes in trying to stop Al-Arian's non-work activities, or at least no one behaved as if they believed it. As far as I'm aware, no one tried to assassinate him (which would be the logical step to take if you really thought Al-Arian financed terrorism and if you really believed that someone other than the justice system should take direct responsibility for action). And, in the end, after all the pretexts put forward in 2001 and 2002, USF fired Al-Arian after he was imprisoned.
So no one could have thoughtfully proposed that USF fire Sami Al-Arian as one step to fight terrorism. Instead, the pressure to fire Al-Arian was largely symbolic: To many who live in Tampa, the presence of Al-Arian shamed USF in some way, and paying him a salary was a violation of USF's moral obligations. There are two pieces of this claim, both the argument that a publicly-funded institution has additional obligations not to hire shady characters as faculty, simply because they are publicly funded, and also the argument about a university's moral obligations.
The first argument is about public funding and has been used repeatedly as a justification to fire unpopular faculty at state universities, from William Schaper at the University of Minnesota in 1917 to Al-Arian to Ward Churchill in 2005. (See Carol Gruber's Mars and Minerva for Schaper's case.) But there are both legal and ethical reasons why this purse-strings argument is wrong. Legally, governments are under First Amendment restrictions in their hiring and firing practices. A private university has far more legal leeway to fire a faculty member for the faculty member's public statements and off-campus activities than a public university (even where doing so would violate principles of academic freedom).
More fundamentally, however, he who pays the piper does not call the tune in professional relationships. When we bought our home, we had a buyer's agent. The buyer's agent was paid 2.5 or 3 percent of the purchase price, and that money came from the seller. Even though the seller was paying our agent, our agent had no obligation to maximize the purchase price. In fact, her fiduciary obligation was ethically and legally to us: to give us the best advice on buying a good house at the lowest cost. In many areas of business, fiduciary obligations are guided by a professional-client relationship, not by who pays the professional.
Even if we were to accept that professors should pay heed to whoever pays the bills, it is unclear how we should decide matters where there might be a conflict. The legislature gives operating money to the university, but my students also pay fees. What should I do when a student performs poorly in a course? The student might say, "I paid tuition. I should get the course credit." But most of my fellow taxpayers would probably say the opposite. Both are paying for my salary. If I followed the money, I would have no clear guidance.
So the identity of whoever pays my check is generally irrelevant to my professional obligations as a faculty member and irrelevant to the central institutional obligations of a college or university. (I am not arguing that there are not accountability issues with public funding in terms of tracking the funding, following state laws, following the state and federal constitutions, etc. Nor am I saying that administrators should ignore state legislators. This matter is about the core principles of any college or university.)
After disposing of the claims that USF could have fought terrorism by firing Al-Arian or that it was obliged by its publicly-funded status to fire Al-Arian, we are left with the argument that a university has a moral obligation to maintain clean hands in its hiring practices and to be willing to fire faculty against whom there are serious allegations of immoral actions off campus. The difference between me and my colleagues and neighbors who believe this argument is that we have different ideas about a university's basic obligations. It is not about fighting terrorism, and it is not about public funding. At bottom, the argument about Al-Arian and USF is an argument over a university's core principles.
February 3, 2005
The shameful voting record of academics
Well, the bloom's off the rose, definitely, for the view of academics as politicized. It turns out that, if we trust the methods in one study of academe's party registrations, the greatest threat to the patriotism of universities is the apathy of the faculty, not its politicization. Daniel Klein and Andrew Western's study of voter registrations at Stanford and Berkeley shows that a surprising number of faculty aren't registered as either Republicans or Democrats! Almost 50% of the academics whose records Klein and Western scoured were either not found or otherwise didn't fit into a Republican-Democratic dichotomy. From the accompanying Excel file, we find that the most apathetic departments must be in business disciplines. In the marketing and accounting departments at these two universities, for example, more than two-thirds of the faculty were either not found or didn't have major-party affiliations (19 just not found). In general, professional schools and disciplines are the "worst:" out of 346 faculty the study looked for, they couldn't find major-party registrations for 186 (or 54%). But the Music Department at Stanford shouldn't be cut any slack, either, as only 4 of 13 had major-party registrations. How awful!
Let's take a step back and look at the methods, though: this study relies on what social historians know quite well as the imperfect, often atrocious, attempts at matching individuals across different databases. In the 1970s, there was a small cottage industry in matching census records to city directories and other databases, and what historians found out is that matching is a very hard business indeed. Names change, they're listed in variant forms, and so forth. Other names are so common that you can't reliably assume that the Tom Smith you've seen in the census is the same person you found in the city directory.
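To see how fragile this kind of matching is, here is a toy sketch of fuzzy name matching using Python's standard-library difflib. The names and the 0.8 threshold are invented for illustration; this is not Klein and Western's actual procedure, just a demonstration of how variant spellings produce both misses and confident false matches.

```python
# Toy illustration (invented names, arbitrary threshold) of why matching
# individuals across two databases by name is error-prone.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Similarity ratio in [0, 1] between two case-normalized name strings."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

faculty_roster = ["Thomas Smith", "Kathryn O'Neil", "J. Robert Chen"]
voter_rolls = ["Tom Smith", "Thomas Smyth", "Katherine ONeill", "Robert Chen"]

for name in faculty_roster:
    # Pick the closest voter-roll entry; treat low scores as "not found".
    best = max(voter_rolls, key=lambda v: similarity(name, v))
    score = similarity(name, best)
    match = best if score >= 0.8 else None
    print(f"{name!r} -> {match!r} (best score {score:.2f})")
```

Note that "Thomas Smyth" (a different person with a variant spelling) scores higher against "Thomas Smith" than "Tom Smith" (plausibly the same person using a nickname) does, which is exactly the kind of false positive and missed match that plagued the census-to-city-directory linkage projects of the 1970s.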
Klein and Western acknowledge some of the difficulties, but they generally gloss over them (in part because they're not historians or from fields with similar work experience). The discussion that I found most painful to read is this not-quite-acknowledgment of the flaws when they discuss disciplines outside the liberal arts:
The matter of the business school is important because when claims of political lopsidedness are raised, people often suggest that the business school leans in the opposite direction and helps balance things out. Our investigation provides evidence to the contrary, but we did not get as good a reading as we had hoped to. (p. 24)
When the clear majority of faculty are simply not found, it's hard to make any claim, and certainly not anything like an "established fact" (p. 31) as the authors write at the end of the paper. I don't think anyone should be surprised that there is disproportionate party registration across fields, nor that the liberal arts outside the sciences are disproportionately liberal at Berkeley and Stanford. That's a far cry from discussing "the campus" as a monolithic entity on such data, assuming that Berkeley and Stanford are representative of colleges and universities more broadly, or describing it in such quasi-conspiratorial ways as I've seen in the more hysterical forums. Why not conduct the same study (with more caveats about the matching, of course) at Santa Clara University (where Klein works)?
April 2, 2001
A former student e-mails
Last night, a student from the fall e-mailed me. Over the course of the last semester, she and I had a running correspondence about the teaching of the course as well as its substance. She's an older student who, for a variety of reasons, was justifiably irritated at some of the structures I impose for younger undergraduates. I learned a great deal from her comments, which pushed me to think about what I do and why. One exchange, first her:
Here's my take on your comments. First that you detected a lack of respect for the writers you are asking us to read, and observed little or weak paraphrasing. Cool, you're right. Both counts. Writers should be respected even if they're wrong and it's an attitude that is easily corrected. It seems to me that you are defining respect for a writer by direct references and this paraphrasing you've requested. To me that was all a waste of words because you read the stuff, I've read the stuff and the paper only serves to show how we relate and respond. Another student pointed out to me that your aim might just be to have our papers readable by some outside person who possibly hasn't read the article in question. Ok, I'll buy that; surely that will improve with awareness. . . .
Looks to me so far that my papers are associational with the readings, but you're probably hearing way more than you care to about what we already know, and not enough about what difference it makes 'by direct reference/paraphrase'. To be honest, reading these articles hasn't presented me with much new information, but then we're back to the 'respect the writer of the readings ;-) and the teacher, aren't we? :-) By the way, there were a couple of things I'd have included in those papers, but couldn't fit all that into 600 words and pull off paraphrasing too!
And my response:
I know that at times a focus on the "text" makes teachers seem like we most value the pinning of authors down for close study, maybe making me an academic-as-lepidopterist (or butterfly collector). Please have pity on this poor reader, though, as I try both to gauge each student's understanding of the material and to respect the nuances of all your thoughts. Specific references help with both of these tasks, because I face many times each week a passage of student writing that is ambiguous. Is the ambiguity intended, a reflection of writing late at night, or a sign of some misunderstanding? Discussing details helps me understand writing.
Teachers with small numbers of students—and students who have time to reflect and respond—have a great advantage in this effort at communication, in that a conversation can often sort out what the student is understanding and thinking. With more than 100 students, I do not have that option (I refuse to call it a luxury). The result is a horrific Catch-22: the students in medium to large classes who most lack patient guidance and coaching are precisely those who most desperately need the skills that only such close teaching can provide.
As I wrote to another teacher afterwards, "My wife, who is taking classes for a masters, often chides me about this 'textuality' of academics, and at least I have an answer now for her (and myself)."