The Weekly Standard on the New-Old Dating Game, Hooking Up, Daughter-Guarding, and much, much more

In “The New Dating Game: Back to the New Paleolithic Age,” Charlotte Allen describes the relatively widespread hookup culture:

Welcome to the New Paleolithic, where tens of thousands of years of human mating practices have swirled into oblivion like shampoo down the shower drain and Cro-Magnons once again drag women by the hair into their caves—and the women love every minute of it. Louts who might as well be clad in bearskins and wielding spears trample over every nicety developed over millennia to mark out a ritual of courtship as a prelude to sex: Not just marriage (that went years ago with the sexual revolution and the mass-marketing of the birth-control pill) or formal dating (the hookup culture finished that)—but amorous preliminaries and other civilities once regarded as elementary, at least among the college-educated classes.

She sees such a culture as a result and driver of devaluing marriage, feminism, and biology, citing as evidence Tucker Max, evolutionary psychology, Roissy in DC (who is despicable yet hilarious), women complaining publicly about their husbands, very long-term educations (medical residencies and PhDs now routinely stretch into the early 30s), and delayed marriage. This is mostly a bad thing in Allen’s eyes. Maybe it is mostly a bad thing, but even if it is, I don’t think the hookup culture being described is likely to stop, for basically economic reasons: the equilibrium appears to lean toward hooking up for most people, and technology is lowering the “cost” of casual sex.

The second reason is probably the more interesting, and the first can mostly be understood by reading Tim Harford’s The Logic of Life and Kathleen Bogle’s Hooking Up: Sex, Dating, and Relationships on Campus. The basic pattern both describe is that situations in which women outnumber men tend to lead to hooking up, while situations in which men outnumber women tend to favor traditional courtship. But I wonder how much of this is due to technological development driving social change, rather than vice-versa. “From shame to game in one hundred years: An economic model of the rise in premarital sex and its de-stigmatisation” shows that parental and institutional attitudes towards premarital sex have softened over time, and that “Contraception has reduced the chance of unwanted pregnancies from premarital sex, and this in turn has changed social attitudes.”

The parental attitudes issue can be seen in Perilloux, Fleischman, and Buss’ 2008 journal article, “The Daughter-Guarding Hypothesis: Parental Influence on, and Emotional Reactions to, Offspring’s Mating Behavior” (Evolutionary Psychology, 6, 217-233). The short version: parents work harder to control and limit their daughters’ sexuality than their sons’, perhaps for evolutionary reasons. They don’t say whether this effect has declined over time, but based on the research in “From shame to game,” I would guess that the answer is yes. Still, if the evolutionary incentive of parents is toward controlling and limiting their daughters’ sexuality, this would help explain why the stigma against extensive sexuality still exists, especially among younger women. And parents might want to limit sexuality because they have to deal with potential costs, like pregnancy, but don’t experience the obvious pleasures. Younger men, on the other hand, don’t get pregnant, and their perceived sexual value doesn’t seem to decline with the number of partners, which helps explain why the double standard persists, even though, as Allen points out, it is weakening. Technology is probably hastening that weakening, which leads to laments like Allen’s.

One other technology-related issue is at work here too: porn, and the near-zero cost of its dissemination (cell phones, and “sexting,” can now make anyone a pornographer in under a minute, including those under 18). I remember reading about a study-in-progress in which the lead researcher said,

“We started our research seeking men in their twenties who had never consumed pornography. We couldn’t find any,” says Simon Louis Lajeunesse, a postdoctoral student and professor at the School of Social Work.

Although I doubt porn has the power that some of its detractors imply, it is also hard to believe that pornography’s sheer ubiquity hasn’t had some effect on how women and men treat sexuality—and, presumably, the effect is lowering the stigma of sex by showing that, regardless of what authority figures say, plenty of people are doing it.

In The Unbearable Lightness of Being, Milan Kundera writes about “… the profound moral perversity of a world that rests essentially on the nonexistence of return, for in this world everything is pardoned in advance and therefore nothing is permitted.” The quote is hilariously out of context but nonetheless gets closer to expressing something essential about modern sexual politics (and it seems like Europe got there first, as it often does socially): sex changes “things,” that nebulous word, but in its consensual form it isn’t fundamentally harmful. Everything is pardoned in advance, except maybe pleasure for its own sake, and everything is permitted, contra Kundera. Sex is becoming less harmful all the time. Consequently and perhaps not surprisingly, people are having a lot more of it, since it’s probably still as fun as it used to be (although we all know that there’s nothing like forbidden fruit to spark an appetite: hence the pleasure of novels whose impetus is a love that exists even though it can’t or shouldn’t). Who can blame the Manhattan woman who’s had on average “20 sex partners during her lifetime,” according to Allen? I’m reminded of Tony Judt describing early-60s Britain in “Girls! Girls! Girls!:”

Even if you got a date, it was like courting your grandmother. Girls in those days came buttressed in an impenetrable Maginot Line of hooks, belts, girdles, nylons, roll-ons, suspenders, slips, and petticoats. Older boys assured me that these were mere erotic impedimenta, easily circumnavigated. I found them terrifying. And I was not alone, as any number of films and novels from that era can illustrate. Back then we all lived on Chesil Beach.

Now very few of us, unless we have unusual religious convictions without the usual hypocrisy those convictions entail (think of Margaret Talbot’s article “Red Sex, Blue Sex,” in which she asks, “Why do so many evangelical teen-agers become pregnant?”), live on Chesil Beach. Instead, we live in Roissy’s carnival, in a world of options, and the real question is whether we understand that world and our own choices in it. The bigger problem than the sex other people might be having is the gap between our behavior and our understanding of our behavior, which, at least to this observer, seems as wide as ever.

The validity of grades

Marco Arment writes at Marco.org (proving that he was prescient when it comes to domain names) and created the awesome web service Instapaper, which I use solely for its Kindle export feature. One of his favorite posts is “School grades are hopelessly broken.” It’s worth reading, and Marco is probably right: school grades are too high and don’t reflect real knowledge, but like healthcare or the military, there’s no easy way to fix them.

Although Marco’s essay is mostly right, it also doesn’t propose any real solutions to the problem he describes—because there aren’t any. The incentive for parents in high school is to want their kid to get the highest grade possible; consequently, they will often fight for their kids, leading to an overall negative equilibrium, and one that I benefited from in late middle and early high school when I decided to effectively fail math as an ill-conceived protest against my parents. At the time I didn’t consciously realize this dynamic, or that protesting in ways that chiefly hurt me isn’t terribly wise, but I was also 13 to 15 at the time and didn’t know any better.

For most teachers, the easiest thing to do on an individual level is inflate grades, which reduces complaints from both parents and students. This isn’t optimal on a societal level, which generates posts like Marco’s, but it is on an individual level, and I don’t see an easy way to generate incentives to change this (more on that later). Marco says:

Grades don’t reflect your aptitude, intelligence, or understanding of the subject matter. You don’t need to actually learn much useful material to get good grades. (And many of those who learn exceptionally well don’t get good grades.)

This is probably true. But if grades don’t reflect all this, then imagine what the people with really low GPAs are like. Grades aren’t good at stratifying the high end of the curve, but they at least show some of where the low end is. And I suspect that at the really high end, especially in hard college majors like engineering, CS, and so forth, grades are still reasonably good guides to aptitude. Marco says, “You can understand why I don’t trust the validity of grades.” You shouldn’t trust grades fully—but that’s because grades aren’t supposed to be the full measure of a man. Nothing is, except maybe life, and what does that really mean?

I see a lot of comments about colleges, high schools, grades, and how to fix those kinds of problems on sites like Hacker News and Slashdot. Most are good at identifying problems; Chris Smeder, for example, tells us how to improve college teaching in three major ways. He’s probably right about all of those, but he misses an important point: most universities are not set up (or, if you like buzzwords, “incentivized”) to reward teaching.

Smeder misses the main point, which isn’t identifying the problem; a gazillion people in the Chronicle of Higher Education have said virtually the same thing at various times. The real problem is solving the problem, which requires changing the incentives that drive professors. At the moment, hiring and tenure decisions at virtually all universities (and all the big ones you’ve heard of) are made mostly on research and publication. Teaching simply doesn’t count for much. Therefore, the people who succeed in getting hired and getting tenured optimize for what they’re being judged on: research and publication. Teaching is secondary. Heroic individuals and people who want to practice better teaching will help somewhat, but they aren’t enough to change the system as a whole.

Once you’ve realized this incentive problem, the question becomes, “How do you change the incentives?” I have no good answers for that, but it’s the real question you should be asking if you’re genuinely interested. And it might keep you from generalizations like this one, which is back to Marco’s essay:

Most people from my generation can’t really do anything else in the real world except bullshit jobs because nobody ever held them to very high standards.

This is probably true of all generations, and it echoes the rhetoric most generations have used about the ones that follow (consider, for example, the New York Times’ “Generation Me vs. You Revisited”). I suspect that not all the jobs Marco assumes are bullshit are bullshit. And even if all this is true, schools aren’t going to be offering the kind of information he’d presumably like GPAs to show.

I’d get near-zero homework grades because I’d never do it, so I needed (and usually got) near-100% test grades to make up the difference. I’d barely pull through and get a C most of the time.

This works for some people, especially those who start their own businesses. Most people don’t and never will. So their grades count. I’m reminded of Paul Graham’s comment from “What You’ll Wish You’d Known:”

In retrospect this was stupid. It was like someone getting fouled in a soccer game and saying, hey, you fouled me, that’s against the rules, and walking off the field in indignation. Fouls happen. The thing to do when you get fouled is not to lose your cool. Just keep playing.

By putting you in this situation, society has fouled you. Yes, as you suspect, a lot of the stuff you learn in your classes is crap. And yes, as you suspect, the college admissions process is largely a charade. But like many fouls, this one was unintentional. So just keep playing.

Rebellion is almost as stupid as obedience. In either case you let yourself be defined by what they tell you to do. The best plan, I think, is to step onto an orthogonal vector. Don’t just do what they tell you, and don’t just refuse to. Instead treat school as a day job. As day jobs go, it’s pretty sweet. You’re done at 3 o’clock, and you can even work on your own stuff while you’re there.

The right thing to do is your homework, because it’s presumably easy, and then do the rest of your work on your own time. And although GPAs are broken, they’re also the best we’ve got. As Joel Spolsky says in his advice to Computer Science majors:

Never underestimate how big a deal your GPA is. Lots and lots of recruiters and hiring managers, myself included, go straight to the GPA when they scan a resume, and we’re not going to apologize for it. Why? Because the GPA, more than any other one number, reflects the sum of what dozens of professors over a long period of time in many different situations think about your work. SAT scores? Ha! That’s one test over a few hours. The GPA reflects hundreds of papers and midterms and classroom participations over four years. Yeah, it’s got its problems. There has been grade inflation over the years. Nothing about your GPA says whether you got that GPA taking easy classes in home economics at Podunk Community College or taking graduate level Quantum Mechanics at Caltech. Eventually, after I screen out all the 2.5 GPAs from Podunk Community, I’m going to ask for transcripts and recommendations. And then I’m going to look for consistently high grades, not just high grades in computer science.

Marco can be largely right in a micro sense and still be wrong in a macro sense, or at least fail to deal with what should happen at that level. If you’re the principal of a high school, or a college president, or an individual employer, or any number of other positions, what can you do to change the presumed brokenness of grades? How can you transform the system producing those grades? Until you’ve answered that, you’ve merely repeated work that’s already been done (see, e.g., here for an older view of school problems) without facing the hardest part of the task.

Trolls, comments, and Slashdot: Thoughts on the response to Avatar

The vast majority of the comments attached to “Thoughts on James Cameron’s Avatar and Neal Stephenson’s ‘Turn On, Tune In, Veg Out’” are terrible. They tend toward mindless invective and avoid careful scrutiny of what I actually wrote; they’re quite different from the comments this blog normally gets, which is largely because I submitted the Avatar post to Slashdot, home of the trolls. One friend noted the vitriol and in an e-mail said, “Okay, the Slashdot link explains the overall tone of the comments your ‘Avatar’ post is attracting.”

Part of the reason the comments are so bad is their hit-and-run nature, especially on larger sites. If you have something substantial to say, and particularly if you regularly have something substantial to say, you tend to get a blog of your own. I wrote about this phenomenon in “Commenting on comments:”

In “Comment is King,” Virginia Heffernan writes in the New York Times, “What commenters don’t do is provide a sustained or inventive analysis of Applebaum’s work. In fact, critics hardly seem to connect one column to the next.” She notes that comments are often vitriolic and ignorant, which will hardly surprise those used to reading large, public forums.

Furthermore, it’s easier and demands less thought to post hit-and-run comments than it is to really engage an argument. I deleted the worst offenders and sent e-mails to their authors with a pointer to Paul Graham’s “How to Disagree”; none responded, except for one guy who didn’t understand the point I was trying to make even after three e-mails, at which point I gave up (“never argue with fools because from a distance people can’t tell who is who”). The hope is that by consciously cultivating better comments and by not responding to random insults, the whole discussion might improve.

(Paul Graham has given the subject a lot of thought too: he even wrote an essay about trolls. As he says, “The core users of News.YC are mostly refugees from other sites that were overrun by trolls.”)

Not every comment I got was terrible—this one, from a person named “Dutch Uncle,” was probably the best argued of the lot, and it mostly avoided ad hominem attacks. It was, however, very much the exception.

Most comments tended to deal in generalities and not to cite specific parts of my argument. In this respect, they have the same problems I see in freshman papers, which often want to make generalizations and abstractions without the necessary concrete base. This happens so often that I’ve actually begun keeping a list of all the things freshmen have told me are “human nature,” with a special eye toward placing contradictory elements next to each other, and in class I now ceaselessly emphasize specifics in arguments.

Since I’ve seen this disease before, I’ve already thought about it, and I think the generalization problem is linked to the problem of close reading, which is a really hard skill to develop and one I didn’t develop in earnest till I was around 22 or 23. Even then it was only with a tremendous amount of effort and practice on my part. Close reading demands that you consider every aspect of a writer’s argument, that you pay attention to their word choices and their sentences, and that you don’t attribute to them opinions they don’t necessarily hold. Francine Prose wrote a whole book on the subject called Reading Like a Writer, but the book is a paradox: in order to develop the close reading skills she demonstrates, you have to be able to closely read her book in the first place, which is hard without good teaching.

Mentioning Francine Prose brings up one other common point I saw in the comments: few pointed to sources or ideas outside themselves, and allusions were rare. In the best writing I see, such elements are common. That isn’t to say every time you post a comment, you should cite four peer-reviewed sources and a couple of blog posts, but ideas are often stronger when they show evidence of learning and synthesis from others. In my Avatar post, I brought together Greg Egan, a New Yorker article, Alain de Botton citing Wilhelm Worringer, Robert Putnam’s Bowling Alone, the Neal Stephenson essay, and Star Trek. Now, my argument about Avatar could still be totally wrong, like an essay with a hundred citations, but at the very least other writers’ thoughts usually show that more thought has gone into an essay, or a comment. Almost every newspaper article and magazine piece worth reading cites at least half a dozen and often many more sources: quotes, other articles, journals, books, and more. That’s part of what makes The Atlantic and The New Yorker so worth reading.

Citations are common because the things really worth arguing about require incredible background knowledge to say anything intelligent about. My big response to many of the comments, especially the deleted ones, has been a suggestion to read more: read How Fiction Works, The Art of Criticism, and Reading Like a Writer, then post your angry Internet screeds after you’ve thought more about what you’re arguing. These kinds of pleas probably fall on proverbially deaf ears, but at least now I have somewhere to point bad commenters in the future.

I think one reason I find Slashdot conversations much less interesting than I did as a teenager isn’t that the nature of the site has changed, but that I’ve learned enough to know how hard it is to really know something. Now I’m often more engaged by pure information and less by invective and pure opinion, especially when that opinion isn’t backed up by much. The information/opinion binary is of course false, especially because the kind of information one presents often leaves pointers to one’s opinion, but it’s nonetheless useful to consider when you’re posting on Internet forums—or writing anywhere.

Incidentally, one reason I like reading Hacker News so much is that the site consciously tries to cultivate smarter, deeper conversation, much as I wish to; it’s trying to meld technical and cultural forces into a system that rewards and encourages high-content comments of the sort I mostly didn’t get regarding Avatar. I submitted the Avatar post to Hacker News before Slashdot, and the first, relatively good comment came from a Hacker News reader.

The problem of trolls is also very old, and probably goes back to the Internet’s beginnings—hence the need for a word like “troll,” with a definition in the Jargon File. As a result, I’m probably not going to change much by writing this, and to judge from my e-mail correspondent, trying to do so via e-mails and blog posts is mostly hopeless. But a part of me is an optimist who thinks or hopes change is possible and that by having a meta conversation about the nature of trolling, one can avoid the behavior in general, at least on a small scale. At Slashdot or Reddit scales, however, the hope fades, and one simply experiences the tragedy of the commons.

EDIT: Robin Hanson has an interesting alternative, though not incompatible, theory in “Why Comments Snark”:

Comments disagree more than responding posts because post, but not comment, authors must attract readers. Post authors expect that reader experiences of a post will influence whether those readers come back for future posts. In contrast, comment authors less expect reader experience to influence future comment readership; folks read blog posts more because of the post author than who they expect to author comments there.

Thoughts on James Cameron's Avatar and Neal Stephenson's "Turn On, Tune In, Veg Out"

Despite reading Greg Egan’s brilliant review of Avatar, I saw the movie. The strangest thing about Avatar is its anti-corporate, anti-technological argument. Let me elaborate: there are wonderful anti-corporate, anti-technological arguments to be made, but it seems contrived for them to be made in a movie that is, for the time being, apparently the most expensive ever made; virtually all mainstream movies are now greenlit solely on the basis of their profit-generating potential. So a vaguely anti-corporate movie is being made by… a profit-driven corporation.

The movie is among the most technically sophisticated ever made: it uses a crazy 2D and 3D camera, harnesses the most advanced computer animation techniques imaginable, and has advanced the cinematic state-of-the-art. But Avatar’s story is anti-technological: humans destroyed their home world through environmental disaster and use military might to annihilate the locals and steal their resources. Presumably, if Avatar’s creators genuinely believed that technology is bad, the movie itself would never have been made, leading to a paradox not dissimilar to those found in time-travel movies.

Avatar also has a bunch of vaguely mythical elements, including some scenes that look like the world’s biggest yoga class. The Na’vi, an oppressed people modeled on American Indians, or at least American Indians as portrayed in 20th-century American movies, fight against an interstellar military using bows, arrows, horses, and flying lizards. They live in harmony with the world to an extent that most Westerners can probably barely conceive of, given that more people probably visit McDonald’s than national parks in a given year.

So why are we fascinated with the idea of returning to nature, as though we’re going to dance with wolves, when few of us actually do so? Alain de Botton’s The Architecture of Happiness may offer a clue: he cites Wilhelm Worringer’s essay, “Abstraction and Empathy,” which posits that art emphasizes, in de Botton’s words, “[…] those values which the society in question was lacking, for it would love in art whatever it did not possess in sufficient supply within itself.” We live (presumably) happy lives coddled in buildings that have passed inspection, with takeout Chinese readily available, and therefore we fantasize about being mauled by wild beasts and being taken off the omnipresent grid, with its iPhones and wireless Internet access. We live in suburban anomie and therefore fantasize about group yoga. We make incredibly sophisticated movies about the pleasures of a world with no movies at all, where people still go through puberty rituals that don’t involve Bar Mitzvahs, and mate for life, like Mormons.

Neal Stephenson wrote a perceptive essay called “Turn On, Tune In, Veg Out,” which examines the underlying cultural values in the older and newer Star Wars films. I would’ve linked to it earlier but frankly can’t imagine anyone returning here afterwards. Therefore I’ll quote an important piece of Stephenson:

Anakin wins that race by repairing his crippled racer in an ecstasy of switch-flipping that looks about as intuitive as starting up a nuclear submarine. Clearly the boy is destined to be adopted into the Jedi order, where he will develop his geek talents – not by studying calculus but by meditating a lot and learning to trust his feelings. I lap this stuff up along with millions, maybe billions, of others. Why? Because every single one of us is as dependent on science and technology – and, by extension, on the geeks who make it work – as a patient in intensive care. Yet we much prefer to think otherwise.

Scientists and technologists have the same uneasy status in our society as the Jedi in the Galactic Republic. They are scorned by the cultural left and the cultural right, and young people avoid science and math classes in hordes. The tedious particulars of keeping ourselves alive, comfortable and free are being taken offline to countries where people are happy to sweat the details, as long as we have some foreign exchange left to send their way. Nothing is more seductive than to think that we, like the Jedi, could be masters of the most advanced technologies while living simple lives: to have a geek standard of living and spend our copious leisure time vegging out.

The tedious particulars of modern technological life are both embraced and avoided in Avatar too. The villain, rather than being political chaos, organized oppression, ignorance, entropy, or weak and ineffective institutions, to name a few of the real but abstract contemporary bad guys, is instead a mercenary army commander who might be at home in Xe Services (formerly Blackwater USA). The military villainy and disdain for superior firepower in Avatar are especially odd, given that the United States has held the technological advantage in major wars for at least a century; the people watching Avatar are probably also the ones who support our troops. The studio that made Avatar probably cares more about quarterly statements than about the environment. The movie’s villains, however, apparently aren’t being restrained by an intergalactic EPA.

Avatar is really a Western about the perils of modernity, but it gets contemporary politics utterly wrong—or perhaps it would be more accurate to say that contemporary politics are utterly absent. There is no intergalactic criminal court or committee for the protection of indigenous peoples, which seems like a probable development for a race nursed on Star Trek and post-colonialism and advanced enough to travel the stars. In the contemporary United States, a bewildering array of regulations governs activities that might have an environmental impact on communities; the National Environmental Policy Act (NEPA), for example, requires federal agencies to monitor and report on their activities. Such regulations are growing, rather than shrinking. They’re a staple bogeyman of right-wing radio.

But in Avatar, decisions aren’t made at the future equivalent of the Copenhagen summit. Instead, they’re fought in battles reminiscent of World War I, or the Civil War, leavened with some personal combat. The battles are jarring and anachronistic; maybe Iraq War II: The Sequel would’ve turned out better if George Bush and Saddam Hussein had dueled with swords, but that’s not how wars are fought any more. And when one side has machine guns and the other side doesn’t, you get something as nasty as World War I, where all the élan, spirit, and meditation in the world didn’t stop millions of people from dying.

My implicit argument isn’t perfect: Avatar does criticize our reliance on oil through the parable of the cleverly named “unobtainium,” but the thrust of the movie is unambiguous. We want to fantasize that solutions are as simple as putting a hole in the right guy, which will make things right again. That’s probably a comforting notion, and an easy one to fit into a two- to three-hour movie with a three-part arc, but it’s also a wrong one, one that ignores or abstracts away the world’s complexity. The people who tend to rule the world are the ones who pay attention to how the world really is, rather than how it was, or how they would like it to be. The real question is whether we are still people who see how the world is.

Ian McEwan and "The Use of Poetry"

The main use of poetry in “The Use of Poetry” is seduction: specifically, the seduction of the liberal artist Maisie (recalling shades of Henry James’s What Maisie Knew) by the scientist Michael Beard in the late 60s. Michael learns enough Milton to impress Maisie, with her artistic tendencies, a feat I doubt I’d have the discipline for despite being another liberal artist; they go out, Michael realizes his disdain for what seems the foppish laziness of the liberal arts, and he reinforces the inferiority complex many English majors feel in the face of hard science.

Or maybe not: when we think we see Michael’s perspective on how easy it is to read “four of the best essays on Milton,” McEwan drops this in by airmail:

Many years later, Beard told this story and his conclusions to an English professor in Hong Kong, who said, “But, Michael, you’ve missed the point. If you had seduced ninety girls with ninety poets, one a week in a course of three academic years, and remembered them all at the end—the poets, I mean—and synthesized your reading into some kind of aesthetic overview, then you would have earned yourself a degree in English literature. But don’t pretend that it’s easy.”

That’s the only mention of the “English professor in Hong Kong,” who appears, nameless, only long enough to correct us. He or she disappears: there is no wrapping up, no coming together of the English professor and some deeper meaning. He or she is there to tell us, and “The Use of Poetry” seems like a rebuke to the “Show, don’t tell” school of writing: it is all telling, or nearly all, and it teasingly plays with real world correspondences. “The Use of Poetry” says:

This understanding was the mental equivalent of lifting very heavy weights—not possible at first attempt. He and his lot were at lectures and lab work nine till five every day, attempting to grasp some of the hardest things ever thought. The arts people fell out of bed at midday for their two tutorials a week.

A February 2009 profile of McEwan, also in the New Yorker, says:

McEwan enjoyed studying calculus—“It was like trying to lift a weight that was a little too heavy”—but he settled on literature, and showed enough promise that he was urged to apply for a scholarship at Cambridge.

Maybe McEwan fears the limits of our cognition, or his own cognition. Or maybe I am engaging the intentional fallacy. Surely the editors of The New Yorker noticed this correspondence in their earlier nonfiction piece and this later work of fiction. What, if anything, did they make of it? Were they as uncertain as me?

Finally, what to make of the title: “The Use of Poetry,” rather than “uses”? Apparently poetry has only one use, seduction, as I unfairly said in the first line of this post. But maybe it is not asking, “What is poetry used for?” but rather, “How and why is poetry used by a particular person—Michael—or people in general?” The title probably has other meanings too, like most poems, with their rascally habit of evading a single interpretation.

For some reason, I am reminded of Kundera’s The Unbearable Lightness of Being: both that novel and this story are highly directive, allusive, focusing on what love means in a modern context, using love to examine ideas and ideas to examine love. They both end, not with a statement or feeling of wholeness, but with a feeling of new sight but perpetual incompleteness, like that is our fate, no matter the math we learn or the poems we study. Could “The Use of Poetry” be to remind us of what we can never fully grasp, like Michael trying to understand the liberal arts, or Milton, who was in turn trying to understand us? Hard to say. But then, a lot in life is hard to say. The best we can do with it is try. Maybe with a poem.

Or a story.

EDIT: If you’re here because you’ve been assigned a paper on McEwan, you might find this post to be of great interest.

Malcolm Gladwell on Harper Lee's To Kill A Mockingbird

I have two fundamental problems with Malcolm Gladwell’s piece in the New Yorker concerning To Kill a Mockingbird: one is philosophical/moral, and the other aesthetic. The philosophical/moral problem is that incrementalism is not necessarily an invalid approach to major social injustice. Gladwell says:

Old-style Southern liberalism—gradual and paternalistic—crumbled in the face of liberalism in the form of an urgent demand for formal equality. Activism proved incompatible with Folsomism.

That’s true: but it doesn’t mean that James Folsom’s approach—he was progressive by southern standards in the first half of the twentieth century—wasn’t an improvement over the unjustified backlash that came later. Gradual change can set the stage for radical change, as it did with the Civil Rights movement, and pragmatism is sometimes more effective than attempting to radically alter social, economic, or political life.

The Stanford Encyclopedia of Philosophy describes the philosopher Richard Rorty this way: “Rorty is a self-proclaimed romantic bourgeois liberal, a believer in piecemeal reforms advancing economic justice and increasing the freedoms that citizens are able to enjoy.” Rorty gives a convincing defense of those piecemeal reforms in his various books, and I’m not wholly convinced of Gladwell’s interpretation that To Kill a Mockingbird is problematic for that reason.

And this idea applies to more than politics. Megan McArdle just posted a piece on Federal Reserve Chairman Ben Bernanke that ended, “As it says in To Kill a Mockingbird, Bernanke did the best he could with what he had. It was not perfect. But looking around at the mostly employed people on the streets, I’m glad he was there.” From what I understand of the recent financial crisis, I basically agree with her assessment: Bernanke and the other players in Washington did the best they could given the information they had at the time, an understanding I base on pieces like “The Final Days of Merrill Lynch” in The Atlantic and “Inside The Crisis: Larry Summers and the White House economic team” in the New Yorker.

The second problem is aesthetic: like Nabokov, I don’t think novels need to play the role of social arbiter or champion. A novel that is sufficiently abhorrent—like one that actively praises segregation in the fashion that Soviet novels advanced the inaptly named socialist realism, or one that shills for retrograde religious ideals—would probably be bad by virtue of its social commentary, but I think To Kill a Mockingbird is subtler than that, and to me the novel’s most interesting component is the development of Scout as a person. That’s inherently tied up with morality and politics, of course, but how and whether the novel succeeds in that respect ought to be the major consideration in evaluating it.

In other words, once the novel passes the relatively low bar of not being actively abhorrent, it should be judged on other principles than whether it conforms to what appear to be a person or age’s moral norms.

Philip Greenspun's Why I'm Not a Writer and Hacker News

I submitted a Hacker News (HN) link to Philip Greenspun’s essay Why I’m Not a Writer, which begins:

I’m not a writer. Sometimes I write, but I don’t define myself as a career writer. And that isn’t because I couldn’t tolerate the garret lifestyle of an obscure writer. It is because I couldn’t tolerate the garret lifestyle of a successful writer.

He’s right. The garret lifestyle is one reason (there are many others too) why so many writers are now affiliated with universities, as detailed in Mark McGurl’s excellent book The Program Era. In fact, university affiliation has become so pervasive that Neal Stephenson told this hilarious story on the subject in a Slashdot interview:

[… A] while back, I went to a writers’ conference. I was making chitchat with another writer, a critically acclaimed literary novelist who taught at a university. She had never heard of me. After we’d exchanged a bit of small talk, she asked me “And where do you teach?” just as naturally as one Slashdotter would ask another “And which distro do you use?”

I was taken aback. “I don’t teach anywhere,” I said.

Her turn to be taken aback. “Then what do you do?”

“I’m…a writer,” I said. Which admittedly was a stupid thing to say, since she already knew that.

“Yes, but what do you do?”

I couldn’t think of how to answer the question—I’d already answered it!

“You can’t make a living out of being a writer, so how do you make money?” she tried.

“From…being a writer,” I stammered.

At this point she finally got it, and her whole affect changed. She wasn’t snobbish about it. But it was obvious that, in her mind, the sort of writer who actually made a living from it was an entirely different creature from the sort she generally associated with.

And once I got over the excruciating awkwardness of this conversation, I began to think she was right in thinking so. One way to classify artists is by to whom they are accountable.

In the HN thread, another poster named Quantumhobbit linked to Orson Scott Card dealing with the same subject. As Quantumhobbit says, “Basically his advice is make sure you have another source of income, such as a rich uncle, before you decide to become a full-time writer. There is no guaranty that you will make enough to support yourself, even in genre writing.”

But the most interesting response comes from Gwern, who said, “I note that [Greenspun’s essay is] from 1996, when the bubble was getting hot; are you suggesting that the web has not panned out for writers and that they are equally screwed online as off?” In reply, I said:

I think that the date of Greenspun’s essay is indicative of how little has changed, rather than how much. Most writers didn’t make very much money then, and they still don’t, which many people don’t seem to realize; one writer friend who also teaches university classes recently wrote to me and said that a colleague had asked, in all seriousness, if he was rich now that he’d written a book. Writers often work like astronauts to achieve relatively modest financial success, which people like the poster in the original HN thread might want to know before getting started in earnest at trying to write for the book market. Take a look at these posts from a guy who works in the sales department of a major publishing house regarding current advances for most types of fiction.

“are you suggesting that the web has not panned out for writers and that they are equally screwed online as off?”

Depends on what you mean by “panned out” and “screwed”; I can’t really tell from the nature of the question. If you mean, “Do I think writers can make enough from the Internet to support themselves?” the answer is yes; if you mean, “Will many of them do so, especially relative to the number who would like to?” the answer is “no.” In fact, I even wrote a blog post at Grant Writing Confidential on the subject of how unlikely it is for people to make money from blogging.

(Note: the above is slightly edited from the original.)

Gwern replied:

But to expand on what I meant: I remember that back in the dot-com bubble, the bubble Greenspun wrote that essay in, there was a lot of enthusiasm and hype about how the future would be so much better for authors and artists than the old world of offline publishing – the Web would empower creators, cut out the middlemen, and channel tons of money to them, via the magic of 0-cost publishing, micropayments, and other things like search engines or aggregators. Greenspun’s essay seems to buy into that zeitgeist, albeit relatively modestly.

Of course, that vision has largely failed to come true (spectacularly so in the case of micropayments and agents). I wondered if the point of your linking this old essay was to emphasize the contrast and make clear that writing is still a marginal business regardless of where it’s being distributed or what neat technical gadgets are involved.

That wasn’t my point, but if I’d been smarter it would’ve been. Half the 1996 equation Gwern describes has come true: the web has vastly empowered writers’ ability to reach readers (and consultants’ ability to reach clients). But it definitely hasn’t channeled vast amounts of money to most writers, and many kinds of writers—like professional journalists—are being laid off en masse.

In the world of the web, as in the 1849 California gold rush, the people who make real money aren’t the people panning for gold, but the people selling equipment to and building infrastructure for the people panning for gold. So too with online writing: Matt Mullenweg, the founder of WordPress, which drives this blog, probably makes or will make far more than anyone writing on it.

All of this could probably be appended to advice for a very very beginning writer. I think that knowledge for its own sake is valuable, even, or maybe especially, for artists.

$20 Per Gallon: How the Inevitable Rise in the Price of Gasoline will Change Our Lives for the Better — Christopher Steiner

One major problem of $20 Per Gallon isn’t just the book itself, but its ancestors. Christopher Steiner argues that a) oil prices will rise like an Atlas rocket and b) such a rise will result in people flocking to dense cities, the return of manufacturing to the United States, and a host of cultural changes. But neither proposition is as certain as he implies, and Steiner comes from a long line of environmental doom-sayers. Books like Paul R. Ehrlich’s The Population Bomb—a best-seller in the 1970s—make Malthusian arguments that have proven wrong over the last 40 years. They predicted catastrophe, not iPods and the Internet.

Still, just because someone was incorrect about a past prediction doesn’t mean that a current prediction will be wrong; there’s probably a name for this kind of bias beyond “boy-who-cried-wolf syndrome.” But the argument that $20 Per Gallon might be wrong goes deeper, as shown in Tad Friend’s “Plugged In: Is the electric car the future?” from this week’s New Yorker. Friend’s answer is “maybe,” which isn’t much of a surprise given the technological, infrastructure, and economic challenges surrounding electric vehicles. But if oil prices spike high enough, the switch might be painful and rapid—which could drive oil prices back down as demand drops. We saw something similar happen in the summer of 2008, when oil usage plummeted in response to higher prices. And judging by the amount of investment going into electric and hybrid vehicles, it’s not impossible to imagine that climbing oil prices will lead people beyond those who want to show their environmental conscientiousness to buy them, resulting in continued exurban sprawl and a lifestyle not so different for most people, rather than the wholesale urban changes Steiner predicts.

Predictions about the end of the world or drastic changes to it have been so popular that Simon Pearson even wrote A Brief History of the End of the World: Apocalyptic Beliefs from Revelation to UFO Cults, which covers the history of people who predict the end of the world, or at least civilization (so far, their track record isn’t so hot, but many post-apocalyptic novels are fun to read). Steiner is more upbeat, seeing higher gas prices improving the world, and that part is refreshing and makes his work different from Ehrlich’s.

Still, oil prices might not climb all that high in the immediate future. Although Steiner says “We have hit what’s popularly known as peak oil, meaning that global production of crude is at a zenith that will never again be realized,” Friend says, “It troubles [Elon] Musk [founder of Tesla Motors] that while few people know that the world’s oil supply could plateau by 2020 and run out as early as 2050, nearly everyone knows that electric cars suck.” Given the two sources, I would tend to trust the New Yorker’s famously fastidious fact-checkers over Steiner. Still, the Wall Street Journal reports today that Oil Prices Hit 2009 High. Based on this flurry of recent news, is Steiner more right or wrong? It depends on what happens to the market. People who think they know what will happen and bet accordingly will win or lose big. Some will presumably end up demonstrably wrong, like Ehrlich. Steiner cites an airline consultant who says “oil […] is bound to reach [eight dollars per gallon] within three or four years.” I wonder if someone will remember to call him on it then.

So the inevitability Steiner argues for just isn’t there. I’ve come to that conclusion in part because the book doesn’t break new ground or bring enough existing information together to make a compelling and new argument. If you’re familiar with the work of economist Edward Glaeser or writer Richard Florida, both of whom have often been cited in The Atlantic, you know where Steiner’s coming from. Florida even writes for the magazine, while Glaeser contributes to the New York Times’ Economix blog. Too much of $20 Per Gallon is going to be redundant or superfluous for anyone familiar with Glaeser and Florida’s work. To be worthwhile, a book needs enough depth and a strong enough animating idea that it requires hundreds of pages to flesh out. Lately I’ve criticized a number of nonfiction books for failing that test, including Rapt, America’s War on Sex, and The Secret Currency of Love.

In $20 Per Gallon, there’s also a troublesome undercurrent of snobbery that runs through it, and a sense that Steiner looks down on the proles who like kitsch and SUVs for reasons other than economics, though those views are cloaked in economic arguments. In an aesthetic sense I’m more or less with Steiner, but he makes poorly supported arguments like this one:

According to some of American automakers’ own market researchers, the type of people who tend to buy SUVs are insecure and vain. They’re people who frequently are nervous about their marriages and uncomfortable about having become parents. They have little confidence in their skills as drivers.

The source for this? Two writers who also have a strong enough point of view to make me doubt their own research: Brian Hicks and Chris Nelder, who wrote Profit from the Peak: The End of Oil and the Greatest Investment Event of the Century. As I tell freshmen: you have to go back and find the primary research material if you’re going to cite extravagant or unusual claims. I want to believe Steiner’s argument about people who drive SUVs in part because I don’t, and his argument flatters my own prejudices, which is nice. But the analytic side of my mind doesn’t buy it. He also says that the vast McMansions that were in vogue until February 2009 “will be an entrapment, an entrapment to giant utility bills and the attachment to a dwelling unit that will, with time, become a kind of pariah.” His financial argument is probably sound: spending vast quantities of money on a signaling device like a distant house isn’t playing smart financial defense. I don’t want to live in one. But because of the hybrid and electric car argument above, Steiner might be wrong on the basic affordability of McMansions, even if he remains right in his unstated view that they’re gaudy, ugly, and likely to fall apart.

The basic problem with $20 Per Gallon is that if you’ve read this post and followed most of the links, you now know more about the issues the book describes than the book itself tells you. Someone would probably be better off subscribing to The Atlantic and The New Yorker than reading $20 Per Gallon, since those magazines do a better job of dealing with issues surrounding oil prices and their consequences than Steiner does here. A lot of that work is online. Go find it there. Once you have a map to finding it, you don’t need Steiner to do the work for you.

Thy Neighbor's Wife — Gay Talese

To read the new edition of Gay Talese’s Thy Neighbor’s Wife as someone who grew up in the era of American Pie and its considerably less tame Internet cousins is to step backwards into a time that, for many people, still exists. To judge from the nattering both on- and off-line, the debate goes on, despite the sense of inevitability that Thy Neighbor’s Wife imparts; perhaps, as Jamais Cascio quotes William Gibson as saying in The Atlantic article “Get Smart,” “The future is already here, it’s just unevenly distributed.”

But it’s not at all clear that the vision implied by Talese will ever arrive for most people, or even that Thy Neighbor’s Wife is the “Timeless Classic” promised by the cover. The book is more an essay collection than a book, and it suffers from the same malady as Joan Didion’s Slouching Towards Bethlehem: age. To me, the mores of the 1950s seem quaint, Bill O’Reilly’s silliness and faux outrage notwithstanding, and erotic hypocrisy in the media and culture at large is both well-known and documented, as it long has been. That brings one to the obvious question: what purpose does Thy Neighbor’s Wife still serve in an age of Bonk and The Book of Vice?

One can see predecessors to Thy Neighbor’s Wife in books ranging from Madame Bovary upwards; in John Barth’s The Floating Opera and The End of the Road, adulterous triangles form with consequences that are serious chiefly because of the seriousness of their participants. The “other man” in The Floating Opera says that “Being intelligent people, they were able to talk about the matter frankly, and they tried hard to articulate their sentiments, and decide how they really felt about it.” The issue had already burbled toward popular consciousness when Barth’s novel was published in 1956. Many of Bellow’s novels spoke with bracing linguistic and intellectual clarity to issues around sexuality. Given that, one should try to read Thy Neighbor’s Wife not just as a chronicle of a time that now seems ancient, but as a guide to what undergirds social relations beyond the particulars of what is forbidden and why.

Social change and perspective

The most arresting sections of Thy Neighbor’s Wife deal with larger social changes rather than the strictly sexual—for example, the sense of anomie and rootlessness that seems reflected in sexuality rather than being the cause of it. Talese says, for instance, that “The emphasis on youth made many Americans in their thirties feel older, particularly those junior executives who, having identified with corporations and having associated wisdom with seniority, now felt suddenly uncertain and outmoded in this age of new personalities and vacillating values.” That could have emerged from a Paul Graham essay on startups or a thousand banal pop sociology books of the last several decades. Still, it is effective in reminding one of the pattern of change being played out across lives.

Likewise, Talese says that “Southern California’s characteristic disregard of traditional values, its relatively rootless society, its mobility and lack of continuity […] were accepted easily by [Diane Webber’s family].” Replace “Southern California” with “Silicon Valley,” and the comparison still holds, as does the idea that the larger phenomenon might have been the continuing undermining of seniority and “traditional values,” which began in the Enlightenment and continues at this moment, as argued by Louis Dupre in The Enlightenment and the Intellectual Foundations of Modern Culture. From Dupre’s vantage, the larger social changes that emphasize youth, sexuality, fluid movement, and independence have been ongoing for centuries, making Talese’s wave a small part of a larger social tide.

Diane played a still smaller role; her place in Thy Neighbor’s Wife springs from her work as a nude model in the 1950s—a role that, later, she would come to downplay, as if the earlier Webber were completely distinct from the later Webber. Her larger symbolic function in Thy Neighbor’s Wife isn’t obvious—Talese seems to view her as someone who didn’t go all the way, or as someone who isn’t as much a seeker as others. Books often play a prominent role in this seeking; in eventual free-love guru John Williamson’s apartment, “the many books he owned dealing with psychology, anthropology, and sexuality represented not only intellectual curiosity on his part but also a growing professional interest.” Twenty pages later, another John, this time surnamed Bullaro, “petulantly reminded himself that he must revive and broaden his education, must read more books…” Another man who becomes a pornographer “had matured in the Army, had done considerable reading during many lonely nights in the barracks…”

Williamson gets a starring role in many mini-essays. He sought to create an island of open sexuality that now seems more mocked than practiced. This took the form of a retreat named Sandstone, where the “living room at times resembled a literary salon, [while] the floor below remained a parlor for pleasure-seekers, providing sights and sounds that many visitors, however well versed they may have been in erotic arts and letters, had never imagined they would encounter under one roof during a single evening.” That’s all very nice, but the detached and yet voyeuristic prose feels silly and stilted, even if the idea is an important one, especially since the major qualities required to participate in the events of places like Sandstone—and there I go with my euphemistic phrases—are ones that probably help with success across broader avenues of life than just sexuality, like confidence, tenacity, fortitude, and, as Talese writes approvingly of Barbara Cramer, “not [being] intimidated by the possibility of rejection.”

Weakness and Strength

In one section we learn of a rebellious girl named Sally Binford, who “…lured young men with an ease that was the envy of her female contemporaries, who regarded her as bold and shameless.” They sound unable to compete, and another reading of Thy Neighbor’s Wife might more closely examine the evolutionary, social, and economic competitive forces swirling around it. But if Binford was envied, why didn’t the other girls emulate her? When one business finds success with a particular product, one can often count on a swarm of imitators. But when one person finds social success using a particular method, others tend to downplay that person’s success. Why? There are a variety of explanations, but perhaps the most interesting is to conceive of that refusal to reject convention as a weakness.

Books like Leora Tanenbaum’s Slut! Growing Up Female with a Bad Reputation echo how the dominant social structures—the “Goliaths,” if you will—use scorn against those who outcompete them. I’m reminded of Malcolm Gladwell’s recent New Yorker article, “How David Beats Goliath: When underdogs break the rules,” which says:

Insurgents work harder than Goliath. But their other advantage is that they will do what is “socially horrifying”—they will challenge the conventions about how battles are supposed to be fought… The price that the outsider pays for being so heedless of custom is, of course, the disapproval of the insider… Goliath does not simply dwarf David. He brings the full force of social convention against him; he has contempt for David.

That’s what Binford feels from her female contemporaries, and many women continue to feel that heat from their contemporaries today, as Tanenbaum shows.

One other fascinating aspect in Gladwell’s study could apply to Talese’s description:

When an underdog fought like David, he usually won. But most of the time underdogs didn’t fight like David. Of the two hundred and two lopsided conflicts in Arreguín-Toft’s database, the underdog chose to go toe to toe with Goliath the conventional way a hundred and fifty-two times—and lost a hundred and nineteen times.

Gladwell refers to military conflicts. The analogy to sex and dating is not hard to grasp: most people feel like romantic underdogs, at least to judge from cultural production, but they play like Goliaths and lose. In Talese’s descriptions, many constricting social forces are abrogated or elided by discarding conventional rules as a path toward romantic success and satisfaction. Sally Binford’s story expressed that. Yet most of us don’t play like Davids, preferring to simmer in dissatisfaction rather than face the disapproval of insiders. When put that way, or in the sexual way Talese presents it, this habit of acquiescence to social forces sounds like a weakness. Put another way, as respect for other people, it might sound like a strength, and the temptation is to announce that a middle road exists. Grasping that middle road, however, requires understanding both extremes, as well as one’s place in larger historical and social forces.

Larger Meaning and The Atlantic

The reissue of Thy Neighbor’s Wife caught my eye after “A Nonfiction Marriage” appeared in New York Magazine, which chronicles the Talese hidden behind the story of Talese. It seems that he and his wife, Nan, had as much tension, uncertainty, and ambivalence in their marriage as the subjects about whom Gay wrote. It has no resolution.

Maybe this obsessive study of sexuality and change means something, and maybe it means maybe. Perhaps it means nothing, or that we have all the options open to us and still don’t know what we want or how to resolve the mutually incompatible desires within us. The Thy Neighbor’s Wife solution of radical openness doesn’t appear to have gained ground; as Sandra Tsing Loh writes in “Let’s Call the Whole Thing Off: The Author is Ending her Marriage. Isn’t It Time You Did the Same?” for the July/August 2009 issue of The Atlantic (not yet online as of this writing): “But as we all know, the Sexually Open Marriage fizzled with the lava lamp, because it is just downright icky for most people” (it is for this kind of scintillating insight and incisive analysis that I subscribe to The Atlantic).

Nonetheless, Tsing Loh’s comment does illustrate that, for all the swapping and coupling Talese describes, social norms haven’t moved as Williamson and Hugh Hefner might have once imagined they would. We’re now free to negotiate the kinds of arrangements we want, but they don’t tend to be of the free-love style that Talese implies might have been plausible as the dominant social position. Consider as evidence both Tsing Loh’s article as well as Lori Gottlieb’s “Marry Him!” and “The XY Files.” Now, as in our jobs, we are all moving toward free agency. Judging by the timescales present in The Enlightenment and the Intellectual Foundations of Modern Culture, the consequences won’t be apparent for a long time yet. With that perspective, maybe the waves made by Thy Neighbor’s Wife are even smaller than they appear.

More words of advice for the writer of a negative review

Nigel Beale quotes Helen Gardner:

“Critics are wise to leave alone those works which they feel a crusading itch to attack and writers whose reputations they feel a call to deflate. Only too often it is not the writer who suffers ultimately but the critic…”

Beale asks: “Which is great and poetic and all, however, is silence enough?”

To me, the chief function of the critic ought to be to explore a work as honestly as possible and to illuminate it to the best of her abilities. This means openness, and it means being willing to say that a work is weak (and why), as well as showing how it is weak. In other words, you should be able to answer the who, what, where, when, why, and how of it, with an emphasis on the last two.

One should squelch “a crusading itch to attack and writers whose reputations they feel a call to deflate” if one is attacking merely to attack, or merely because someone’s balloon is overinflated. For example, Tom Wolfe seems a frequent and, to my mind, unfair object of ridicule among critics. But if you’re rendering a knowledgeable opinion that happens to be negative, you’re doing what you should be doing, and what I strive to do. Often this means writing about why a book fails—perhaps too frequently.

Good reviews and Updike

Every attempt at review and criticism ought to be good—but that doesn’t mean positive. A review that is “good” in the sense of well-done and engaging might be a negative one. In an ideal world, the book should decide that as much as the critic.

John Updike’s rules for reviewing are worth following to the extent possible. I would emphasize three of them:

1. Try to understand what the author wished to do, and do not blame him for not achieving what he did not attempt.

2. Give him enough direct quotation–at least one extended passage–of the book’s prose so the review’s reader can form his own impression, can get his own taste.

5. If the book is judged deficient, cite a successful example along the same lines, from the author’s oeuvre or elsewhere. Try to understand the failure. Sure it’s his and not yours?

In the end, I think such rules are designed to keep the reviewer as honest as the reviewer can be. I keep coming back to the word “honesty” because it so well encapsulates the issues raised by Beale, Updike, Orwell, and others.

I especially like the “direct quotation” comment because there are no artificial word limits on the web, meaning that you should give the reader a chance to disagree with your assessment through direct experience. Quoting a sufficient amount of material gives others a chance to make their own judgments. Merit can be argued but not proven; thus a critic can avoid both silence and unfair attack.

As the above shows, I like Beale’s answer—”no”—which seems so obvious as to barely need stating. I’d rephrase Gardner’s assertion to this: “beware of relentlessly and thoughtlessly attacking.”

The Aeron, The Rite of Spring, and Critics

In Malcolm Gladwell’s book Blink: The Power of Thinking Without Thinking, he quotes Bill Dowell, who was the lead researcher for Herman Miller during the development and release of the now-famous Aeron in the early 1990s; I’m sitting in one as I type this. The Aeron eventually sold fantastically well and became a symbol of boom-era excess, aesthetic taste, ergonomic control, excessive time at computers, and probably other things as well. But Dowell says that the initial users hated the chair and expressed their displeasure in focus groups and testing sites. According to him, “Maybe the word ‘ugly’ was just a proxy for ‘different.’ ”

That’s a long wind-up for an analogy: Helen Gardner might be telling us that when we instinctively dislike something, we may be reacting against its novelty rather than judging its real merit, as critics and listeners notoriously did at the premiere of Stravinsky’s The Rite of Spring. She’s wise to warn us about that danger, because it’s how people who pride themselves on taste and knowledge become conservative, stuffy critics. If we say something is “bad” merely because it’s “different,” then we’ve effectively died aesthetically, because we’re no longer able to expand what “good” means. One thing I like about Terry Teachout’s criticism and his blog, About Last Night, is that he has strong opinions but still very much seems to have aesthetic suppleness.

But the Aerons and Ulysses of the world are exceedingly rare. Dune and Harry Potter aren’t among them. Joseph O’Neill’s Netherland at least might be, which I concede obliquely in my post about it.

Most works of art are, by definition, average.

The question is: to what extent is that a bad thing? Maybe none at all: an average novel doesn’t cause the death or disfigurement of children, or propagate social inequality, or do any number of other pernicious things. Its chief ill is that it wastes the time of the person who reads it and perceives it as average (as opposed to the person who reads it and judges it extraordinary, as many Harry Potter readers evidently have).

Milan Kundera thinks otherwise—in The Curtain, he writes, “… a mediocre plumber may be useful to people, but a mediocre novelist who consciously produces books that are ephemeral, commonplace, conventional—thus not useful, thus burdensome, thus noxious—is contemptible.” He gives himself a key out here: the word “consciously.” I doubt many writers consciously set out to produce commonplace books, and so most may be rescued from the burden of Kundera’s scorn. Like the criminal justice system, Kundera separates those who knowingly commit a crime from those who do so accidentally.

You need to have read widely, however, to be capable of distinguishing the average from the incredible, and those whose effusive praise for Harry Potter and Dan Brown splatters the web show they haven’t. Hence, perhaps, the hesitance many Amazon reviewers show toward low scores, which one of Beale’s commenters observes.

The Aerons of Art

I now see the Aeron as beautiful, and to me the over-stuffed office chairs that used to symbolize lawyerly and corporate status look as quaint as black-and-white photos of Harvard graduating classes without women or minorities. If we stay open to seeing the new, I think we’ll be safe enough in condemning the indifferent and pointing toward the genuinely astonishing works that are very much out there.

Edit: The Virginia Quarterly Review weighs in.