June 8, 2010
The value of college III
Part of the value of a good college education is that much of it is surplus. In the same way that the early nineteenth-century education of women could have been perceived as superfluous, a good deal of what students learn could be seen as not directly or immediately useful in their lives. To some economists, this may smack of inefficiency: why should we educate anyone beyond what we can see as an immediate payback on the job or in life? To others, this gets absorbed in a metastatic notion of human capital, where everything good in life is redefined as investment. (Read the new introduction in the 1993 edition of Gary Becker's Human Capital if you doubt me: not only are schooling and standalone job training considered human capital, so is love from one's parents.) Claudia Goldin and Larry Katz refer generically to education as critical to handling changing technology on the job, which makes a certain amount of sense as long as you're not operating a picture-based point-of-sale register (technology can deskill jobs as well as require greater skills). Goldin, Katz, and Uwe Reinhardt are definitely well-meaning, and I'd want them all at my back in an unlit economics-department hallway. But at some level, the economic justification of surplus education is troublesome because it is a black box (how the extra education works exactly isn't modeled); the slop between formal schooling and economic utility (which I've termed surplus) is a fundamental problem for how economists approach education.
An inefficient education as useful play
So let's turn from economics to anthropology for some help. In 1973, American Anthropologist published Stephen Miller's "Ends, Means, and Galumphing," which explored the social and evolutionary purposes of play. It's reasonably well-cited for a social-science article, but more importantly it's widely cited in areas as diverse as educational and social psychology (where you might expect it to be cited) and... well, it's cited in "Marketing in Hypermedia Computer-Mediated Environments" (1996, in the Journal of Marketing). In other words, it's got legs. Miller argues that one can define play within multiple species as activity that is deliberately inefficient and where the individuals involved gain pleasure from facing challenges that stem directly from the inefficiency, whether we're talking formal inefficiencies such as the rules of baseball and chess or informal make-believe... or activities one might find in college such as analyzing a real or fictional company's operations, writing a history paper, spending ten or more hours talking about a single play of Shakespeare, and so forth.
More importantly, Miller argues that play has some advantage for a species in that it turns specific skills into general problem-solving capacity. In play, one uses skills repeatedly and in a range of combinations. (One could argue a little differently about some videogames I know, but I'm describing his argument, not making my own, and the point would still be important even if you removed videogames that require nothing but exactly-repetitive behavior.) Play looks remarkably inefficient in one way, but it has important adaptive value in another.
So too with much of formal education. I could make the same faculty-psychology arguments on behalf of studying history that many people do: not only does it provide specific knowledge of certain times and places, it also prepares you for any career that requires the presentation of linear arguments with specific time- and place-bound evidence. (Legal brief, anyone?) It teaches you about human foibles and prepares you for situations where you have to suspend antipathy towards individuals to identify potential motives and key interests. David Brooks makes all of those arguments in his column today.
But that type of argument has always struck me as beside the point, not because history majors do not have practice in those skills but because any faculty-psychology argument is easily turned into a nebulous "this will help you learn critical thinking" claim, which my time-and-place-specific training makes me skeptical of. Yes, majoring in history will help you in a lot of fields more than not going to college at all, but it's hard to argue that a history major is better suited to a professional biochem lab's gruntwork than a math or physics major, even if the gruntwork has occasional public presentations attached to it requiring linear arguments with detailed evidence (see above on that refrain).
(Margaret Soltan argues a different point today, asserting that the value of the humanities is in the embodiment of human frailty, not its rational analysis. She writes, "For [William Arrowsmith], a prolonged encounter with the humanistic tradition amounts to a more and more sensate anguish at the recognition of our own chaos." I'm not going to argue with her or Arrowsmith, since I'm sure many a student in a Milton seminar has probably had crises of faith, and I had the odd experience of The Painted Bird as a soothing read at the end of my first semester in college. I'm just making a different point that can stretch beyond the humanities.)
An honest explanation of the value of college acknowledges that when college accomplishes what it can, a good part of that achievement is teaching students how to play with ideas in thoughtful ways and follow up that play in a reasonable, rigorous manner. This is neither a comprehensive nor exclusive way of thinking about college: formal schooling doesn't guarantee this result, and there are plenty of wise people in this world who can play with ideas without having finished secondary school, let alone college. But you're far more likely to get adults who can play with ideas in a productive sense if some critical mass of them have attended formal schooling where that was one of the outcomes.
I think Stanley Fish and gaming-for-learning enthusiasts are some of the more extreme proponents of this view, though they may not like being put in the same bin. Sometimes eloquently and sometimes inarticulately, Fish argues (or just implies, as in yesterday's piece) that playing with ideas is the purest and highest aim of college and university life. That's a good part of the reason why he is allergic to some other conceptions of teaching (such as passionate engagement in the world). Those who have pushed for the insertion of game design in teaching likewise see value in gaming in and of itself, and they have the well-intentioned goal of spreading that joy to students through the use of gaming in teaching.
I do not think the promotion of intellectual play is the sole purpose of higher education, which is why I do not agree with Fish on his save the world on your own time refrain, which would place a wall between classes and any concern with what happens off a campus. Nor do I think that constructing game-like structures inside classes is the only way to promote intellectual play, which is why I have only experimented in a tiny way (and not that well) with game-like structures inside classes. Instead, what a good college (and many a good high school course) provides is the foundation, tools, and time and space for students to play with ideas.
This play needs to be rooted in specifics: some critical mass of specific knowledge in an area, which includes stuff we might call factual information and also knowledge about important questions that have been and continue to be asked in the discipline or field. In most (but not all) colleges and for most (but not all) students in those colleges, that foundation and set of tools require some breadth and some depth. You can't be a great student of history without knowing a sufficient amount about some critical mass of places and times, or without knowing a sufficient amount about some critical mass of other fields that bring other questions to bear on the ideas you're playing with.
And then you need the opportunities and encouragement to play with ideas in important ways. Sometimes these come in structured assignments that look playful, sometimes in serious assignments that engage students in the flow that positive psychologists write about, and sometimes the opportunity comes in extracurricular activities. Again, none of this necessarily requires formal schooling, but the playful autodidact must discipline herself or himself, and a formal school can provide structures to encourage this type of engagement. The institutional nature of a school can often grate on those within its walls, but it can also provide helpful structures. From an historical standpoint, the amazing feature of non-mandatory secondary and postsecondary education is not that one-quarter of teenagers leave high school and two-thirds of young adults do not complete a B.A. but that so many finish when there is no law requiring it. Normative expectations play an important role, and that is as true for shaping behavior within a school as for standing outside it pushing students towards school.
Justifying public subsidies
Okay, some of you must be thinking, I'll follow this argument about the play of ideas as long as formal schooling doesn't cost much. But why should taxpayers subsidize this, and why should someone incur more than $100,000 in debt to learn how to play with ideas? Taxpayers should subsidize surplus education because it's worked for society in the past, which may seem highly unsatisfying but is true with one caveat (below). More pragmatically, the obviously-useful parts of higher education easily justify the subsidy, and what appear to be "frills" are comparatively cheap: try to tell a provost that the English department or history department is a money-waster, and she or he will laugh in your face with good reason: humanities faculty are generally the cheapest dates in any place, in part because of their low salaries and in part because even at the ritziest research universities they don't require several hundred thousand dollars in start-up money each. Doubt me? Go ask your local university the annual maintenance costs per student of an intro-chem lab and an intro-languages lab.
Costs to students: the car rule-of-thumb
Student debt is a different issue. I don't think someone should incur more than $100,000 in debt for an undergraduate education. However, that issue is complicated by stories about new college graduates with mountains of debt that come from enrollment in private schooling, either non-profit colleges and universities or for-profit programs. We need to watch the debt issue, but the largest streams of student debt originate outside public colleges and universities (i.e., they are not what the solid majority of students face). There are plenty of public colleges and universities where the average debt for graduates carrying debt is under $20,000, and that's a reasonable debt to incur for the part of a college education with likely immediate payoffs in the job market (assuming that there's a job market in the next few years). In addition, the creation of income-based repayment plans is a buffer against college debt peonage if debt begins in the federal loan programs that are covered by income-based repayment. Again, that's easy when you're talking about public colleges and universities. Fortunately, a very large majority of high school seniors and their families are skeptical of mountains of debt, which is why (for example) two of my daughter's closest friends are going to the University of Florida next year rather than Rensselaer, Rutgers, or Georgia Tech (some of the other places one or the other was accepted, where they would have paid out-of-state or private tuition).
(As I've noted, private loans and gigantic debt coming from attendance at private institutions comprise a different matter, in addition to credit card debt. Part of the role of Pell grants, the new GI Bill, and federal loans is to encourage families to take on both subsidized and unsubsidized loans. That may sound remarkably like the type of public-private partnership that's become common in economic development, except that here, families and students incur substantial risk. Private non-profits and for-profits are in the same boat here, receiving a federal subsidy that's often bundled in with additional unsubsidized loans that families and students carry forward, something NYU is struggling to respond to, at least. And all university administrators who approve privacy-invading deals with credit-card companies should rot in Purgatory for a very, very long time.)
There is another way in which student debt is taken out of context: for full-time students and a number of part-time students, a significant part of the cost of college is the opportunity cost of not being in the labor market (or giving up some job opportunities, for part-time students). That can end up in debt if students borrow to pay for living expenses while going to school, and in any case, it reduces income and the accumulation of job experience. For a few years, that's more than balanced by expected greater earnings. The opportunity cost of not gaining job experience becomes a larger issue for someone who is out of the job market for an extended period, as happens with longer graduate programs (such as programs that have an average time-to-degree of nine years for students who finish, and that would be on top of the time spent in an undergraduate program).
A few rules of thumb, to summarize on debt and opportunity costs of attending college: if the direct debt incurred by going to college is on the order of magnitude of an economy or low-priced midsize car, it's justified by the anticipated concrete returns, so the chance to play with ideas isn't a giant financial risk. Don't go into debt on the order of a house note unless the degree leads directly to a lucrative career (e.g., medicine or law, and even there I have some questions). And if you're going to spend more than ten years out of the labor market as part of getting an education, definitely get that economy-car-sized education.
The assessment dilemma
Let me return now to the issue of public subsidies in part for what might look like surplus education. Part of the justification for public subsidy (concerned with value) is taken care of by the parts of college you can identify concretely as human capital, specific bits of skills and knowledge with clear social benefits. Part of the justification for subsidy (concerned with cost) is taken care of by the fact that the more expensive parts of college and university academic programs are concentrated where you see more clearly identified returns (the "humanities are cheap dates" principle). (Athletic programs and student affairs are different subjects.)
That might be enough from the perspective of some faculty (and Stanley Fish and David Brooks, at least this week), but the push for accountability in learning outcomes in higher education can easily be turned into the type of mechanism that squeezes out opportunities and structures for playing with ideas. For the foreseeable future, there will be key actors in several states who would be willing to impose reductive standardized testing on colleges and universities. That is the alternative to the current set of assessment mechanisms embedded in regional accreditation. So let's look at assessment and accreditation with regard to playing with ideas.
The black hole of accreditation-centered assessment
Assessment in the context of regional accreditation is best thought of as meta-assessment, where accreditors hold colleges and universities responsible for having a curriculum and assessing how well students learn it. That putatively gives institutions the freedom to create a structure consistent with a unique mission as long as there is assessment of student learning. In reality, this type of meta-game can be difficult to navigate, and the default behavior leans heavily towards mimesis: many colleges and universities hire consultants familiar with a particular regional accreditor, and they tend to suggest whatever structure has enabled similar institutions to pass muster. In addition, because consultants (or former consultants) are sometimes brought in-house to handle the logistics, they focus on the parts of the process that are most easily managed and cause the least hiccups internally... and that often turns into a small universe of reductive measures available commercially, especially for general-education goals. (Want to assess writing? Let's try the ABCXYZ. Want to assess problem-solving? Let's try the ABCXYZ. Want to assess critical thinking? Let's try the ABCXYZ. Yes, of course we can create our own in-house assessment, but we'd also have to justify its use to our accreditor, and it's just easier to use the ABCXYZ; why don't we at least try that as we're developing our own...) There's a reason why the Voluntary System of Accountability specified one of three cognitive measures: it piggybacked on existing trends in accreditation and institutional inertia.
My general concern is that the mechanisms of assessment through regional accreditation can become the black hole of faculty time, absorbing everything around it and making it difficult to plan a structure for more engaged projects or the type of activity I have described as intellectual play. In addition to what else I could say about that narrow range of measures, the long-term problem with institutional meta-gaming is that the rules of the game can change, sometimes with nasty consequences for faculty time. Every time that an accrediting body changes the rules by which institutions have to set rules for students (i.e., the curriculum), faculty have to rework their lives and often entire programs of studies to accommodate the changes. Every time my state reworks licensing requirements for college-based teacher education, or changes the rules for state review, faculty in my college have their time stolen by the logistics of meeting the rules. (Please don't ask a Florida dean of education to describe the double-standard between the rules for college-based teacher education and alt-cert unless you have a few hours.) One of the consequences is an overburden on both faculty and student time. Let me stop talking about faculty time and focus instead on student time: Look at a few random programs of study for baccalaureate programs in nursing or education. Count the number of elective courses. Compare with a program of studies in any social-science or humanities major. Then pick your jaw up off the floor.
On the one hand, the licensure requirements make a certain amount of sense from the perspective of professional training: you want teachers, social workers, and nurses to have the tools to do the job. On the other hand, an undergraduate education that is devoid of anything but instrumentalist technical courses is job-training and nothing else. And especially for teachers, that is inconsistent with one central purpose of college and dangerous for what we'd like them to do on the job. And the Holmes Group's proposal to shift all teacher training to the master's level is unrealistic for working-class students if you apply the car-cost limit to student debt for future teachers. I am not sure there is a good way out of this problem for elementary teacher education, and it is on the extreme end of the "no room for thought" problem we face with accreditation-based assessment.
Outside elementary teacher education, there are a few escapes, but none are palatable. Ignoring assessment requirements of accreditors is either fatally brave or foolish, so what's left? Assessing intellectual play. You can stop groaning now. Yes, attempts to assess "creativity" make you tear your hair out, and the thought of assessing intellectual play makes you want to punch me out for the oxymoron or the threat of one of these projects unmoored from substance and rigor. But from an institutional standpoint, a faculty member in one of those regions with an accreditor that threatens micromanagement can either tilt at windmills or see what that power might be used for. I've got a limited appetite for windmill-tilting, and I've got enough blunted spears in my garage for a lifetime, thank you very much. This may sound like squaring the circle or getting out from within the horizon of a black hole, but the ability to assess intellectual play would allow faculty to justify all sorts of projects within an existing accreditation framework.
Defining and assessing a challenge
First, a reminder of Miller's notion of galumphing, or play: pleasurable activity that is deliberately inefficient and encourages the combination of existing skills to accomplish the self-defined or agreed-upon goals over and around the obstacles presented by the constructed inefficiencies. The tricky part of assessing such activity is to focus not on the issue of pleasure but on the meta-rules that characterize the nature of the activity. For this purpose, it's best to think about a circumscribed type of intellectual play: a challenge that is at least partially well-defined, based in considerable part on what others have done (i.e., not entirely reinventing the wheel), and that requires putting together at least a few skills. Then the assessment of the student activity has two levels: the level of the meta-game, where you assess how well the student defines the challenge, shows where and how the project relies on other work or is new, and how well the student used multiple skills; and the level of the project itself, where disciplinary conventions come into play...
And for history, at least, the disciplinary conventions match fairly well with the first level: having an appropriate historical topic, using the historiography in a sensible way, and handling a range of evidence and argument structures. The guts of most undergraduate history papers are in that last catch-all category: "handling a range of evidence and argument structures." There are a number of more idiosyncratic and less comparable assessment frames (such as student reflection on engagement), and this short essay is about the larger picture, not a detailed (let alone a tested!) framework for assessing intellectual play. And this sketch is about a narrowly-defined type of challenge, with lots left out. But it's a way to think a bit about the issue... or play with the idea of assessing playing with ideas.
Tools to explore
A few words about some recent developments to watch in this vein. The Lumina Foundation's Tuning project could have begun within a regional accreditation context, but it's geared instead towards a proof of concept that a faculty-driven definition of outcomes and assessments can simultaneously honor disciplinary conventions and also satisfy external constituencies (thus the term "tuning" to get everyone singing in the same key: I've got to ask Cliff Adelman sometime whether it's harmonic or tempered tuning). If I remember correctly, the first discipline-specific reports should have been available on the foundation website sometime this spring, but it's not there now (just a cutesy cartoonish presentation of the idea along with Cliff Adelman's concept paper and other materials from 2009). At a first glance, it looks like an application of the accountability framework of the Association of American Colleges and Universities (i.e., the liberal-arts office in One Dupont Circle). But without sample exemplar projects, it's hard to judge at the moment.
Then there's the movement for undergraduate research. When my daughter and I were visiting colleges over the past few years, it was clear that every institution devoted resources specifically to undergraduate research, whether public or private. Then again, these were generally small colleges where undergraduates were the only research assistants that faculty would be getting. On the third hand, undergraduate research is a type of operation that both liberal-arts colleges and universities are trying to develop and promote, albeit with different understandings of student engagement. I think my alma mater (a small liberal-arts college) now requires seniors to engage in a major thesis-like project. At my current university, that's expected only of Honors College students, and the resources of the Undergraduate Research office are available to all in theory and would be totally swamped if every student asked to be involved. Again, neither Tuning nor undergraduate research is a model in any practical sense of the word, but they're something to watch and, if nothing else, they provide a few rocks on which to stand and survey the landscape of playing with ideas.
Posted in Higher education on June 8, 2010 10:30 PM