Summary Judgment: The War of the Sexes — Paul Seabright

The War of the Sexes: How Conflict and Cooperation Have Shaped Men and Women from Prehistory to the Present isn’t a bad book, but you’ve already in effect read it if you have a cursory knowledge of the vast evolutionary biology literature—or if you’ve read books like Roy Baumeister’s Is There Anything Good About Men?: How Cultures Flourish by Exploiting Men, or Tim Harford’s The Logic of Life, or Sarah Blaffer Hrdy’s The Woman That Never Evolved. If you have read those books—especially the first—you don’t need to read this one, and that’s why I’m not linking directly to it. There are too many better books.

Given a choice between The War of the Sexes or Jonathan Haidt’s The Righteous Mind, choose the latter. You’ll learn more about topics like this one, from The War of the Sexes:

Much of the elusive, infuriating, and enchanting nature of what we feel and why we feel it. Far from being a flaw in our makeup, it is a testimony to the complexity of the problems natural selection had to solve to enable us to handle sexual reproduction at all.

Although this is true, it also feels perilously close to being banal; by now, it’s well established that emotions/feelings and “intelligence” or “logic” aren’t really separable entities in the human cognitive makeup. What we might think of as “a flaw” is actually an adaptation. Haidt discusses this in far more detail. Seabright also points, again correctly, to the way our own desires are really trade-offs and tensions rather than absolutes:

All individuals, men and women, will also want contradictory things: to be successful and to be protected, to choose our partners and to be chosen by them, to be passionate and to be reasonable, to be forceful and to be tender, to make shrewd choices and to be seduced. With such contradictory impulses, all of us will sometimes make choices we regret. Sex is about danger as well as about tenderness: the two are inseparable, and they are what has made us such a tender and dangerous species.

Our romantic lives aren’t immune to trade-offs, which might be why we find those romantic lives so frustrating so much of the time: they’re hugely important and simultaneously impossible to do perfectly “right.” But, again, this doesn’t feel like news. It feels like olds.

The writing is competent and the research reasonably thorough, but, again, the book as a whole is only useful if you’ve read little or no evolutionary biology; as it went on, I skipped steadily more pages. It isn’t bad. I feel like I’m witnessing a guy burst into a room the day after a big game, breathlessly wanting to celebrate his team’s victory, only to find the rest of the group expunged its impulse the night before.

Cars and generational shift

In The Atlantic, Jordan Weissmann asks: Why Don’t Young Americans Buy Cars? He’s responding to a New York Times article about how people my age don’t want or like cars. The NYT portrays the issue as one of marketing (“Mr. Martin is the executive vice president of MTV Scratch, a unit of the giant media company Viacom that consults with brands about connecting with consumers.” Ugh.) But I don’t think marketing is really the issue: the real problem is that we’ve reached the point where cars suck as a mode of transportation for the marginal person.

Until the 1990s, car culture made sense, to some degree: space was available, exurbs weren’t so damn far from cities, and traffic in many cities wasn’t as bad as it is today. By now, we’ve seen the end-game of car culture, and its logical terminus is Southern California, where traffic is a perpetual nightmare. Going virtually anywhere can take 45 minutes or more, everyone has to have a car because everyone else has a car, and cars are pretty much the only transportation game in town. Urban height limits and other zoning rules prevent the really dense development that might encourage buses or rail. In Southern California, you’re pretty much stuck with lousy car commutes—unless you move somewhere you don’t have to put up with them. And you’re stuck with the eternal, aggravating traffic. Given that setup, it shouldn’t surprise us that a lot of people want to get away from cars (I’ve seen some of this dynamic in my own family—more on that later).

The hatred of traffic and car commuting isn’t unique to me. In The New Yorker, Nick Paumgarten’s There and Back Again: The soul of the commuter reports all manner of ills that result from commuting (and, perhaps, from time spent alone in cars more generally):

Commuting makes people unhappy, or so many studies have shown. Recently, the Nobel laureate Daniel Kahneman and the economist Alan Krueger asked nine hundred working women in Texas to rate their daily activities, according to how much they enjoyed them. Commuting came in last. (Sex came in first.) The source of the unhappiness is not so much the commute itself as what it deprives you of. When you are commuting by car, you are not hanging out with the kids, sleeping with your spouse (or anyone else), playing soccer, watching soccer, coaching soccer, arguing about politics, praying in a church, or drinking in a bar. In short, you are not spending time with other people. The two hours or more of leisure time granted by the introduction, in the early twentieth century, of the eight-hour workday are now passed in solitude. You have cup holders for company.

“I was shocked to find how robust a predictor of social isolation commuting is,” Robert Putnam, a Harvard political scientist, told me. (Putnam wrote the best-seller “Bowling Alone,” about the disintegration of American civic life.) “There’s a simple rule of thumb: Every ten minutes of commuting results in ten per cent fewer social connections. Commuting is connected to social isolation, which causes unhappiness.”

I doubt most people my age are consciously thinking about how commuting makes people unhappy, or how miserable and unpredictable traffic is. But they probably have noticed that commuting sucks—which is part of the reason rents are so high in places where you can live without a car (New York, Boston, Seattle, Portland). Those are places a lot of people my age want to live—in part because you don’t have to drive everywhere. Services like Zipcar do a good job filling in the gap between bus/rail and cars, and much less expensively than single-car ownership. In my own family, it’s mostly my Dad who is obsessed with cars and driving; he’s a baby boomer, so to him, cars represent freedom, the open road, and possibility. To me, they represent smog, traffic, and tedium. To me, there are just too damn many of them in too small a space, and that problem is only going to get worse, not better, over time.

(For more on cities, density, and ideas, see Triumph of the City, The Gated City, and Where Good Ideas Come From.)

Why I write fewer book reviews

When I started writing this blog I mainly wrote book reviews. Now, as a couple readers have pointed out, I don’t write nearly as many. Why?

1) I know a lot more now than I did then and have lived, read, and synthesized enough that I can combine lots of distinct things into unique stories that share non-obvious things about the world. When I started, I couldn’t do that. Now my skills have broadened substantially, and, as a result, I write on different topics.

2) For many writers, reviewing books for a couple of years is extremely useful because it introduces a wide array of narratives, styles, and so forth, forcing you to develop, express, and justify your opinions if you’re going to write anything worthwhile. Few other environments force you to do this; in academia, the books you’re assigned are already supposed to be “great,” so you’re not asked to say whether they’re crap, even though many of them are. After going through dozens or hundreds of books and explaining why you think they’re good or bad or in between, you should end up developing at least a moderately coherent philosophy of what you like, why you like it, and, ideally, how you should implement it. You shouldn’t let that philosophy become a set of blinders, but it does help to think systematically about tastes and preferences and so forth.

You might not be saying much about the books you’re reviewing, but you are saying a lot about what you’ve come to think about books.

3) No one cares about book reviews. If people in the aggregate did care about book reviews, virtually every newspaper in the country wouldn’t have shuttered what book review section it once had. What a limited number of people do want to know is what books they should read and, to a lesser extent, why. Having established, I’d like to imagine, some level of credibility by going through 2), above, I think I’m better able to do this now than I was when I started, and without necessarily dissecting every aspect of every book.

It’s also very hard and time-consuming to write a great review, at least for me.

Lev Grossman also points out a supply/demand issue in an interview:

There was a time not long ago when opinions about books were a scarce commodity. Now we have an extreme surplus of opinions about books, and it’s very easy to obtain them. So if you’re in the business of supplying opinions about books, you need to get into a slightly different business. Being a critic becomes much more about supplying context for books, talking about new ways of reading, sharing ways in which it can be a rich experience.

He’s right, and his economic perspective is useful: when something is plentiful, easy to produce, and thus cheap, we should do something else. And I’m doing more of the “something else,” using as my model writers like Derek Sivers and Paul Graham.

To return to Grossman’s point, we might also treat what we’re doing differently. Clay Shirky says in Cognitive Surplus: Creativity and Generosity in a Connected Age:

Scarcity is easier to deal with than abundance, because when something becomes scarce, we simply think it more valuable than it was before, a conceptually easy change. Abundance is different: its advent means we can start treating previously valuable things as if they were cheap enough to waste, which is to say cheap enough to experiment with. Because abundance can remove the trade-offs we’re used to, it can be disorienting to people who’ve grown up with scarcity. When a resource is scarce, the people who manage it often regard it as valuable in itself, without stopping to consider how much of the value is tied to its scarcity.

Lots of people are writing lots of reviews, some of them good (I like to think some of mine are good) but most not. Most are just impressionistic or empty or garbage. By now, opinions are plentiful, which means we should probably shift towards greater understanding and knowledge production instead of raw opinion. That’s what I’m doing in point 1). I’m no longer convinced that book reviews are automatically to be regarded “as valuable in [themselves],” as they might’ve been when it was quite hard to get ahold of books and opinions about those books. Today, for any given book, you can type its name into Google and find dozens or hundreds of reviews. This might make pointing out lesser known but good books useful—which I did with Never the Face: A Story of Desire—and the New York Review of Books is doing on a mass scale with its publishing imprint. Granted, I’ve found few books in that series I’ve really liked aside from The Dud Avocado, but I pay attention to the books published by it.

4) It’s useful to keep When To Ignore Criticism (and How to Get People to Take Your Critique Seriously) by John Scalzi in mind; he says critics tend to have four major functions: consumer reporting, exegesis, instruction, and polemic (details at his site). The first is useful but easily found across the web, and it’s also of less and less use to me because deciding what’s “worth it” is so personal, like style. My tastes these days are much more refined and specific than they were, say, 10 years ago (and I suspect they’ll be more refined still in 10 years). The second is basically what academic articles do, and I’d rather do that for money, however indirectly. The third is still of interest to me, and I do it sometimes, especially with bad reviews. The fourth is a toss-up.

When I started, I mostly wanted to do one and two. Now I’m not that convinced they’re important. In addition, books that I really love and really think are worth reading don’t come along all that frequently; maybe I should make a list of them at the top. Every week, there’s an issue of the New York Times Book Review with a book on the cover, but that doesn’t mean every week brings a fabulous book very much worth reading by a large number of people. Having been fooled by cover stories a couple of times (Angelology being the most salient example), I’m much warier of them now.

Unfortunately, academic writing is also usually less fun, less intelligent, windier, and duller than writing on the Internet. Anything it accomplishes rhetorically or intellectually is usually done through a film of muck thrown on by the culture of academic publishing, peer reviewers, and journal editors. There’s a very good reason no one outside of academia reads academic literary criticism, although I hadn’t appreciated why until I began to read it.

5) Professionalization. To spend the time and energy writing a great review for this blog, I necessarily have to give up time that I would otherwise spend writing stuff for grad school. There could conceivably be tangible financial rewards from publishing literary criticism, however abstruse or little read. There are no such rewards in blogging, at least given academia’s current structural equilibrium.

(If you’re going to argue that this equilibrium is bad and the game is dumb, that’s a fine thing to do, but it’s also the subject for another day.)

6) People, including me, care more about books than book reviews. I’m better off spending more time writing fiction and less time writing about fiction. So I do that, even if the labors are not yet evident. A book might, conceivably, be important and read for a long period of time. Book reviews, on the other hand, seldom are. So I want to work toward the more important activity; instead of telling you what I think is good, I’d rather just do it.

Here’s T.C. Boyle:

What I’d like to see more of are the sort of wide-ranging and penetrating overviews of a given writer’s work by writers and thinkers who are the equals of those they presume to analyze. This happens rarely. Why? Well, what’s in it for the critic? Is he/she going to be paid? By whom? Harper’s runs in-depth book essays, as does the New York Review of Books and other outlets. Fine and dandy. There would be more if there were more of an audience. But there isn’t.

For a long time, I did it free, though perhaps not at the level Boyle would desire; now I don’t, per the professionalization issue.

7) A great deal of art and art criticism does, in the end, reduce to taste, and the opinions and analyses of critics are basically votes that, over time, accumulate and lift some few works out of history’s ocean. But I’m not sure that book reviews are the optimal means of performing that work: better to do it by alluding to older work in newer work, by integrating ideas into more considered essays, or by otherwise using artistic work in some larger synthesis.

8) In Jonathan Strange & Mr. Norrell, Norrell is having a debate with two toadies and says, “I really have no desire to write reviews of other people’s books. Modern publications upon magic are the most pernicious things in the world, full of misinformation and wrong opinions.” Lascelles, who has become a kind of self-appointed, high-status servant, says:

[I]t is precisely by passing judgements upon other people’s work and pointing out their errors that readers can be made to understand your opinions better. It is the easiest thing in the world to turn a review to one’s own ends. One only need mention the book once or twice and for the rest of the article one may develop one’s theme just as one chuses. It is, I assure you, what every body else does.

And because everybody else does it, we should do it too. Modern publications about literature probably feel the same as Norrell’s view of 1807 publications of magic, because it’s hard to tell what constitutes true information and right opinions in literature—making it seem that everyone else’s writing is “full of misinformation and wrong opinions.” (Norrell, of course, thinks he can right this, and in the context of the novel he may be right.) Besides, even if we are confronted by facts we don’t agree with, we tend to ignore them:

Recently, a few political scientists have begun to discover a human tendency deeply discouraging to anyone with faith in the power of information. It’s this: Facts don’t necessarily have the power to change our minds. In fact, quite the opposite.

Opinions are probably much the same, which explains how we get to where we are. Opinions about books even more so, which is how Lev Grossman came to say what he said above.

Anyway, Norrell realizes that book reviewing is often a waste of time, and Lascelles likes book reviewing not because of its intrinsic merit but because he thinks of it as high status (which it might’ve been in 1807). In 2011 or 2012, reviewing books might still be a waste of time and is a much lower-status activity, so that even the Lascelles of the world (whom I’ve met) are unlikely to be drawn to it.

As I said above, the best review of a book isn’t a review of it, but another book that speaks back to it, or incorporates its ideas, or disagrees with it, or uses it as a starting point. Which isn’t a book review at all, of course: It’s something more special, and more rare. So I’m more interested now in doing that kind of review, like Norrell is interested in doing magic instead of writing about other people’s opinions of doing magic, rather than writing about whether a book is worth reading or not. I’ll still do that to some extent, but I’ve been drifting away for some time and am likely to do so further. If Lev Grossman is remembered beyond his lifetime, I doubt it will be for his criticism, however worthy it might be: he’ll be remembered for The Magicians and his other literary work. I’d like to follow his example.

EDIT: Here’s Henry Bech in The Complete Henry Bech:

That a negative review might be a fallible verdict, delivered in haste, against a deadline, for a few dollars, by a writer with problems and limitations of his own was a reasonable and weaseling supposition he could no longer, in the dignity of his years, entertain.

Yet this is the supposition artists need to entertain; critics’ opinions are as cacophonous and random as a jungle, and listening to them is hard, and the writers who react most vituperatively to critics are probably doing so because they fear the critic or critics might be right.

Updike is also writing close to home here: the better known the writer, the more critics he’s naturally going to attract. So the volume of critical attacks might also be linked to success.

Is There Anything Good About Men?: How Cultures Flourish by Exploiting Men — Roy Baumeister

I would emphasize this, from Arnold Kling, about Is There Anything Good About Men?: How Cultures Flourish by Exploiting Men:

1. If you are a zero-tolerance reader (“I stopped reading on page 9, because he said X, which is obviously wrong, so I figured there was no point in going any further”), then don’t pick up this book. If you are going to finish it, you have to follow almost the complete opposite approach. “Even if a lot of this is wrong, what insights can I take away?”

And there are a lot of ideas per word and little wasted space, especially because Baumeister goes out of his way to avoid dogmatic thinking, which he says overtly:

This book is not about the “battle of the sexes.” I’m not trying to score points for men against women, or vice versa. I don’t think the “battle” approach is healthy. In fact, I think the idea that men and women are natural enemies who conspire deviously to exploit and oppress each other is one of the most misguided and harmful myths that is distorting our current views about men and women.

That being said, Is There Anything Good About Men? has an unfortunate title but many of those deep “insights” worth exploring—and perhaps an equally large amount of unsupported bullshit. It’s frustrating, for example, to see issues like the one on page 54 of the hardcover edition, where Baumeister’s claims about sex-drive differences between men and women have no citations to actual underlying research. Nonetheless, it’s hard to conclude that men don’t, on average, desire sex more often and with more partners than women do; the very structure of dating markets points to this idea. He does cite work later in the book, but why not cite it when the issue is first raised?

But most of the ideas and implications are better; it’s hard to choose among his many observations to discuss in a short blog post, but here’s one I find intriguing; apologies for the length of the quote:

Mostly, men had recognized that dangerous jobs fall to them and, more important, that to be a man they have to accept them. Whether this will continue is not entirely clear. Today’s men are brought up on a rhetoric of equality, and at some point they may balk at letting women be exempted from certain unpleasant tasks.

Even more important, the psychological processes that enable men to do the dangerous jobs may be weakened. Men of past eras were famously out of touch with their feelings. Today’s men are brought up to be more like women, and that includes becoming more conversant with their own emotions. But might that undermine the ability to make themselves do what needs to be done?

To do the dirty or dangerous jobs, you have to put your feelings aside. Being a man in that sense meant that you focused on the task at hand. It meant others could count on you not to let your emotions interfere with getting the job done. One reason traditional societies put those jobs on men was that women might be too fearful or squeamish or tentative to do them. Traditional men weren’t supposed to admit to having such feelings. Yet nowadays we encourage young men to revel in their feelings. Having uncorked the emotional bottle, can we count on the men to stuff the feelings back inside and cork them away when we need them to do so?

The traditional male role has had definite privileges, but it also had duties and obligations. Our culture has come far along in doing away with those privileges. It has been slower about equalizing the duties and obligations. (to quote [Warren] Farrell once more, ‘Women have rights. Men have responsibilities.’) As we make men more like women and remove their traditional privileges, they may begin to object more strenuously to the duties and responsibilities. The obligations of fatherhood weigh far less on today’s man than on earlier generations, as indicated not least by the increasing numbers of men who abandon pregnant girlfriends or small children.

In other words, whatever the rhetoric that gender writers may espouse, when men and women face real problems and dangerous situations, men still tend to get the dirty and dangerous jobs. Equality is fine when it only means the good stuff, but when there’s a strange noise downstairs or coal mines to be stripped, guys still end up there. On the flipside, however, it may also be that society is evolving away from a state in which men are expected not to have feelings and toward one where men having feelings is more beneficial than it was in the past.

We may be seeing cultural evolution, live, even as people fight over whether it’s happening and, if it is, what it might mean. The “traditional male role” might be changing or evolving, and its supposed “privileges” or lack thereof too. See, for example, “Sex Is Cheap: Why young men have the upper hand in bed, even when they’re failing in life” from Slate.com. Given the choice between coal mining and war or video games and babes in skirts, I suspect most men would rather get in touch with whatever their feelings might be and choose the latter.

You can see other examples of cultural evolution: I’ve been watching The Sopranos lately, and the tension between the “do what needs to be done” aspect and Tony’s supposed feelings and nostalgia for the maybe good-old-days, when men were men, makes the show intriguing: Tony continually hearkens back to his father’s time, when men didn’t have (or at least show) feelings; by contrast, he’s being treated by a female therapist, who helps him explore repressed feelings that manifest themselves in dreams and panic attacks.

For whatever this passage might be worth, however, I don’t love the writing itself: vague mentions of “corking” and “uncorking” feelings among “the men” are too abstract for my taste. If this were a freshman’s paper, I’d write as much in the margins and encourage the writer to think about what, precisely, this means for individuals. Even if I know what it means, I can see reasons why it might help for men to uncork their feelings. Consider the experience of World War I, which shows the problems of men being unwilling to express fear or tentativeness and instead walking to their own deaths for no cause at all: that stupid, destructive, largely pointless war occurred in part because men let themselves be mass-brainwashed into marching into their own deaths, directed by ignoramuses who failed to realize that the nature of warfare had changed and that 19th-century infantry tactics would not merely fail, but fail spectacularly, against 20th-century weaponry. So before we romanticize a lost era of male stoicism, let’s remember some of its costs, too, and the fact that turning off feelings and empathy may also allow men to do the many barbaric and cruel things men do.

There are other social changes, too: notice that the state is far more willing to pick up the slack for “pregnant girlfriends and small children,” which changes incentives for men and women; in addition, women appear to be much more willing to dump men who don’t suit their needs than they once might’ve been. They write long articles that get turned into books like Marry Him: The Case for Settling for Mr. Good Enough, all about female unwillingness to compromise. It’s also become much more obvious that women do not always tell the truth about fatherhood, and it’s hard to read articles like “How DNA Testing Is Changing Fatherhood” and not realize what’s at stake:

Over the last decade, the number of paternity tests taken every year jumped 64 percent, to more than 400,000. That figure counts only a subset of tests — those that are admissible in court and thus require an unbiased tester and a documented chain of possession from test site to lab. Other tests are conducted by men who, like Mike, buy kits from the Internet or at the corner Rite Aid, swab the inside of their cheeks and that of their putative child’s and mail the samples to a lab. Of course, the men who take the tests already question their paternity, and for about 30 percent of them, their hunch is right.

It’s possible in many states for a man who signs a child’s birth certificate to be responsible for paying that child’s mother for eighteen years even if that child isn’t his. That’s not an optimal way to encourage male responsibility or eagerness to support Baumeister’s pregnant girlfriends. But Baumeister doesn’t go quite this far.

Nonetheless, his central insight about the sexes facing different trade-offs that guide median preferences is fascinating and possibly true. Notice the language in the previous sentence: “trade-offs” and “median preferences,” rather than claims that all people are this way or that way. From that one can extrapolate to current cultural conditions.

I would guess that Baumeister, like me, wants equal opportunities in all parts of life, but he would also point out that equal opportunities don’t mean people will want the same things. Men, in his view, are optimized toward risk-taking; DNA analyses indicate that we’re descended from 40% of the men who ever lived but 80% of the women, which means the median man died without reproducing while the median woman did reproduce. The median man therefore had an evolutionary incentive to take risks, given that his outcome if he lost the gamble was zero, but so was his outcome if he didn’t take the gamble at all. Hence the hierarchies in all parts of life that men love to set up; Baumeister eventually says: “The pyramid of success is steep and cruel. Nature dooms most of the males to fail but impels each of them to try to be the one.”
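Baumeister’s point about risk incentives is at bottom a claim about expected value under a winner-take-all distribution, and a toy simulation makes it concrete. To be clear, this is my own illustrative sketch, not Baumeister’s model: the 40% reproduction cutoff echoes the DNA figures above, but the 10% gamble odds and the payoff of two offspring are arbitrary numbers chosen only to show the shape of the argument.

```python
# Toy model (my illustration, not Baumeister's): compare two strategies
# for a median-ranked male in a winner-take-all mating market where only
# the top 40% of males reproduce at all.

import random

random.seed(0)

def reproductive_success(strategy, trials=100_000):
    """Average offspring for a median male under a given strategy.

    'safe'  -> keep your rank: the median sits below the 40% cutoff,
               so reproductive success is locked at zero.
    'risky' -> gamble: a small (assumed 10%) chance of vaulting into
               the top tier; losing the gamble costs nothing extra,
               because the baseline payoff was already zero.
    """
    total = 0
    for _ in range(trials):
        if strategy == "safe":
            total += 0  # median male never clears the cutoff
        elif random.random() < 0.10:
            total += 2  # reproduces; arbitrary payoff of two offspring
    return total / trials

print(reproductive_success("safe"))   # 0.0
print(reproductive_success("risky"))  # roughly 0.2
```

The point of the sketch is the asymmetry: the safe strategy’s payoff is pinned at zero for a below-cutoff male, so any gamble with nonzero upside dominates it, however long the odds.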

I do not think most women appreciate that. Which isn’t to say most men appreciate what it’s like to experience female incentives, costs, and desires. One of the more unusual nonfiction books I’ve read attempts to do exactly that: Norah Vincent’s Self-Made Man, in which she (a lesbian in “real life,” for lack of a better term) dresses and goes about life as a man for about a year. Baumeister says:

One of the most interesting books about gender in recent years was by Norah Vincent. She was a lesbian feminist who with some expert help could pass for a man, and so she went undercover, living as a man in several different social spheres for the better part of a year. The book, Self-Made Man, is her memoir. She is quite frank that she started out thinking she was going to find out how great men have it and write a shocking feminist expose of the fine life that the enemy (men) was enjoying.

Instead, she experienced a rude awakening of how hard it is to be a man. Her readings and classes in Women’s Studies had not prepared her to realize that the ostensible advantages of the male role come at high cost. She was glad when it was over, and in fact she cut the episode short in order to go back to what she concluded was the greatly preferable life as a woman. The book she wrote was far different from the one she planned, and any woman who thinks life is better for men will find it a sobering read.

He goes on to say that men and women don’t have it “better” than each other per se; they have it different, and his book is, among other things, an attempt to explain why.

Baumeister also said something that, incidentally, reminded me of a potential weakness of the novel as a genre, and that I’ve been thinking a lot about lately: “If you consider the problems facing the world today (e.g., global warming, terrorism, pandemics), you can see that they are not likely to be handled by single persons—more likely by large and complex networks of organizations.” One problem for novels is that they focus on individuals and small groups; it’s very hard for a novel to address very large-scale issues save in the context of an individual or small group. Think of how Ian McEwan’s Solar uses Michael Beard and his foibles to discuss some of the technical challenges around global warming.

This may explain why many men prefer nonfiction to fiction: nonfiction is more easily dedicated to large, abstract ideas and organizations potentially involving thousands or millions of people. Fiction is intimate, seldom has more than a half dozen major characters, and often focuses on one or a small number of very intimate relationships. The fiction that men prefer on average—Elmore Leonard, murder mysteries, and so forth—often involves a single protagonist who is matching wits and brawn with a single antagonist or series of antagonists, whom he must confront using an array of shallow connections to many people.

College graduate earning and learning: more on student choice

There’s been a lot of talk among economists and others lately about declining wages for college graduates as a group (for example: Arnold Kling, Michael Mandel, and Tyler Cowen) and males in particular. Mandel says:

Real earnings for young male college grads are down 19% since their peak in 2000.
Real earnings for young female college grads are down 16% since their peak in 2003.

See the pretty graphs at the links. These accounts are interesting but don’t emphasize, or don’t emphasize as much as they should, student choice in college majors and how that affects earnings. In “Student choice, employment skills, and grade inflation,” I said that colleges and universities are, to some extent, responding to student demand for easier classes and majors that probably end up imparting fewer skills and paying less. I’ve linked to this Payscale.com salary data chart before, and I’ll do it again; the majors at the top of the income scale are really, really hard and have brutal weed-out classes for freshmen and sophomores, while those at the bottom aren’t that tough.

It appears that students are, on average, opting for majors that don’t require all that much effort.

From what I’ve observed, even naive undergrads “know” somehow that engineering, finance, econ, and a couple other majors produce graduates who earn more, yet many end up majoring in simple business (notice the linked NYT article: “Business majors spend less time preparing for class than do students in any other broad field, according to the most recent National Survey of Student Engagement [. . .]”), comm, and other fields not noted for their rigor. As such, I wonder how much of the earnings picture in those graphs is about declining wages as such and how much of it is really about students choosing majors that don’t impart job skills or knowledge (cf Academically Adrift, etc.) but do leave plenty of time to hit the bars on Thursday night. Notice too what Philip Babcock and Mindy Marks found in “The Falling Time Cost of College: Evidence from Half a Century of Time Use Data:” “Full-time students allocated 40 hours per week toward class and studying in 1961, whereas by 2004 they were investing about 26 to 28 hours per week. Declines were extremely broad-based, and are not easily accounted for by compositional changes or framing effects.”

If students are studying less, maybe we shouldn’t be surprised that their earnings decline when they graduate. I can imagine a system in which students are told that “college” is the key to financial, economic, and social success, so they go to “college” but don’t want to study very hard or learn much. They want beer and circus. So they choose majors in which they don’t have to. Schools, in the meantime, like the tuition dollars such students bring—especially when freshmen and sophomores are often crammed into 300- to 1,000-person lecture halls that are extraordinarily cheap to operate because students are charged the same amount per credit hour for a class of 1,000 as they are for a seminar of 10. Some disciplines increasingly weaken their offerings in response to student demand.

Business appears to be one of those majors. It’s in the broad middle of Payscale.com’s salary data, which is interesting given how business majors presumably go into their discipline in part hoping to make money—but notice too just how many generic business majors there are. The New York Times article says “The family of majors under the business umbrella — including finance, accounting, marketing, management and “general business” — accounts for just over 20 percent [. . .] of all bachelor’s degrees awarded annually in the United States, making it the most popular field of study.” That’s close to what Louis Menand reports in The Marketplace of Ideas: “The biggest undergraduate major by far in the United States is business. Twenty-two percent of all bachelor’s degrees are awarded in that field. Ten percent of all bachelor’s degrees are awarded in education.” If all these business majors graduate without any job skills, maybe we shouldn’t be all that surprised at their inability to command high wages when they graduate.

I’d like to know: has the composition of majors changed over the years Mandel documents? If so, from what to what? Menand has some coarse data:

There are almost twice as many bachelor’s degrees conferred every year in social work as there are in all foreign languages and literatures combined. Only 4 percent of college graduates major in English. Just 2 percent major in history. In fact, the proportion of undergraduate degrees awarded annually in the liberal arts and sciences has been declining for a hundred years, apart from a brief rise between 1955 and 1970, which was a period of rapidly increasing enrollments and national economic growth. Except for those fifteen unusual years, the more American higher education has expanded, the more the liberal arts sector has shrunk in proportion to the whole.

But he’s not trying to answer questions about wages. Note too that my question about composition is a genuine one: I have no idea what the answer is.

One other major point: if Bryan Caplan is right about college being about signaling, then there might also be a larger composition issue than the one I’ve already raised: people who aren’t skilled learners and who don’t have the willingness or capacity to succeed after college may be increasingly attending college. In that case, the signal of a college degree isn’t as valuable because the people themselves going through college aren’t as good—they’re on the margins, and the improvement to their skillset is limited. Furthermore, colleges and universities aren’t doing all that much to improve that skillset—see again Academically Adrift.

I don’t know what, if anything, can be done to improve this dynamic. Information problems about which college majors pay the most don’t seem to be a major issue, at least anecdotally; students know that comm degrees are easy and other, more lucrative degrees are hard. There may be Zimbardo / Boyd-style time preference issues going on, where students want to consume present pleasure in the form of parties and “hanging out” now at the expense of earnings later, and universities are abetting this in the form of easy majors.

This is the part where I’m supposed to posit how the issues described above might be improved. I don’t have top-down, pragmatic solutions to this problem—nor do I see strong incentives on the part of any major actors to solve it. Actually, I don’t see any solutions, whether top-down or bottom-up, because I don’t think the information asymmetry is all that great and consumption preferences mean that, even with better information, students might still choose comm and generic business.

Mandel ends his post by saying, “Finally, if we were going to design some economic policies to help young college grads, what would they be?” The answer might be something like, “make university disciplines harder, so students have to learn something by the end,” but I don’t see that happening. That he asks the question indicates to me he doesn’t have an answer either. If there were one, we wouldn’t have a set of interrelated problems regarding education, earnings, globalization, and economics, which aren’t easy to disentangle.

Although I don’t have solutions, I will say this post is a call to pay more attention to how student choices and preferences affect education and earnings discussions.

EDIT: See also College has been oversold, and pay special attention to the data on arts versus science majors. I say this as someone who majored in English and now is in grad school in the same subject, but by anecdotal observation I would guess about 75% of people in humanities grad schools are pointlessly delaying real life.

Mac OS 10.7 is out today, and I don't care because "In the Beginning was the Command Line"

A few days ago, I was reading Neal Stephenson’s incredible essay “In the Beginning was the Command Line,” which you can download for free at the link. His work can’t really be summarized because the metaphors he develops are too potent and elaborate to flatten into a single line that describes what he does with them; by the time you finish summarizing, you might as well recreate the whole thing. Despite the folly in attempting summarization, I want to note that he’s cottoned on to the major cultural differences between Windows, Macs, and Unixes like Linux, and by the time you’re done with his essay you realize the fundamental divide in the world isn’t between right and left or religious and secular, but between contemporary “Morlocks” and “Eloi,” the former being the ones who run things and the latter being the ones who mostly consume them (you can see similar themes running through Turn On, Tune In, Veg Out and Anathem). In the meantime, visual culture has become a poorly understood but highly developed global force bathing virtually everyone in its ambiance, and that might not be such a bad thing most of the time. The last issue doesn’t have that much to do with this particular post, but if you want to understand it, and hence an aspect of the world, go read “In the Beginning was the Command Line.”

He wrote the essay in 1999, and the problem with operating systems, or “OSes” in nerd parlance, was that none of them were very good. They crashed frequently or were incredibly hard to use, especially for Eloi, or both. In the last ten years, they’ve gotten much less crashy (OS X, Windows) or much easier to use (Linux) or both, to the point where a random user who wants to write e-mails, look at YouTube videos, browse for adult material, and read Facebook status updates probably won’t notice the quirks of each operating system. Games are a major difference, since OS X and Windows have lots of modern games available and Linux doesn’t, but if you don’t care about games either—and I don’t—you’ll want to discount those.

Some of the major technical differentiators have shrunk: on OS X, you can now communicate with your machine using the Terminal; on mine, I’ve changed the color scheme to trendy green-on-black. Windows has a system called PowerShell, and Linux has various ways to hide the stuff underneath it. But the cultural differences remain. Windows machines still mostly come festooned with ugly stickers (“These horrible stickers are much like the intrusive ads popular on pre-Google search engines. They say to the customer: you are unimportant. We care about Intel and Microsoft, not you”) and a lot of crap-ware installed. OS X machines look like they were designed by a forward-thinking 1960s science fiction special effects person for use by the alien beings who land promising peace and prosperity but actually want to build a conduit straight into your mind and control your thoughts. Linux machines still sometimes want you to edit files in /src to get your damn wireless network working. Given the slowness of cultural change relative to technical change, it shouldn’t be surprising that many of Stephenson’s generalizations hold up even though many technical issues have changed.

This throat clearing leads to the subject of today’s much-hyped launch of Apple’s latest operating system, which is an incremental improvement to the company’s previous operating system. I’ve been using Macs since 2004. I started with an aluminum PowerBook that you can see in this appropriately messy picture. In that time, I’ve steadily upgraded from 10.3 to 10.6, but the move from 10.5 to 10.6 didn’t bring any tangible benefits to my day-to-day activities. It did, however, mess up some of the programs I used and still use regularly, which made me more gun-shy about OS updates than I have been previously. Now 10.7 is out, and you can read the best review of it here. It’s got a bunch of minor new features, most of which I won’t use and are overhyped by Apple’s ferocious marketing department, which most people call “the press.”

I’ve looked at those features and found nothing or nothing compelling. Many are aimed at laptops, but I don’t use a laptop as my primary computer or have a trackpad on my iMac, and it seems like the “gestures” that are now part of OS X, while useful, aren’t all that useful. Apple is also integrating various Internet services into the operating system, but I don’t really care about them either and don’t want to pay for iCloud. I don’t see the point for the kinds of things I do, which mostly tend towards various kinds of text manipulation and some messing around with video. It’s not that I can’t afford the upgrade—Apple is only charging $30 for it. I just don’t need it and simultaneously find it annoying that Apple will only offer it through their proprietary “app store,” which means that when I need to reinstall because the hard drive dies I won’t be able to use disks to start the machine.

Still, those are all quibbles of the kind that start boring flame wars among nerds on the Internet. I’ve saved the real news for the very bottom of the page: it’s not about Apple’s OS upgrade, which, at one point, I would’ve installed on Day 1. I remember when OS X 10.4 came out, offering Spotlight, and I was blown away. Full-text search anywhere on your machine is great. It’s magical. I use it every day. Even 10.5 finally had integrated backup software. But 10.6 had a lot of developer enhancements I don’t use directly. Now, 10.7 has improved things further, but in a way that’s just not important to me. The real news is about how mature a lot of computer technology has become. By far the most useful hardware upgrade I’ve seen in the last ten years is a solid state drive (SSD), which makes boot times minimal and applications launch quickly. Even Word and Photoshop, both notorious resource hogs, launch in seconds. New OS versions used to routinely offer faster day-to-day operation as libraries were improved, but it’s not important to move from “fast enough” to “faster.” The most useful software upgrades I’ve seen were moving from the insecure early versions of Windows XP to OS X, and the move from 10.3 to 10.4. The move to 10.7 is wildly unexciting. So much so that I’m going to skip it.

If you look at the list of features in 10.7, most sound okay (like application persistence) but aren’t essential. I rather suspect I’m going to skip a lot of software and hardware upgrades in the coming years. Why bother? The new iterations of OSes aren’t likely to enable me to do something substantial that I wasn’t able to do before, which, in my view, is what computers are supposed to do—like most of the things we make or buy. If you’re an economist, you could call this something like the individual production possibility curve. Installing Devonthink Pro expanded mine. Scrivener might have too. Mac Freedom definitely has, and I’m going to turn it on shortly after I post this essay. The latest operating system, though? Not so much. The latest software comes and goes, but the cultural differences—and discussions of what those differences mean—endure, even as they shrink over time.

EDIT: Somewhat relevant:

The Gottlieb Paris Review interview

There’s a great Paris Review interview with editor Robert Gottlieb filled with quotable stuff. A sample:

* “I have fixed more sentences than most people have read in their lives.”

* “Your job as an editor is to figure out what the book needs, but the writer has to provide it. You can’t be the one who says, Send him to Hong Kong at this point, let him have a love affair with a cocker spaniel. Rather, you say, This book needs something at this point: it needs opening up, it needs a direction, it needs excitement.”

* From Toni Morrison, whom Gottlieb edited: “I never wrote a line until after I became an editor, and only then because I wanted to read something that I couldn’t find. That was the first book I wrote.” This, incidentally, is also what keeps me writing—wanting to read books that no one else has written, though I suspect Morrison and I have very different tastes.

* A long excerpt from John le Carré:

Negotiations were always tight with Bob. He was celebrated for not believing in huge advances, and it didn’t matter that three other houses were offering literally twice what he was offering. He felt that for half the money, you got the best. Most publishers, when you arrive in New York with your (as you hope) best-selling manuscript, send flowers to your suite, arrange for a limo, maybe, at the airport, and then let you go and put on the nosebag at some great restaurant. The whole idea is to make you feel great. With Bob you did best to arrive in jeans and sneakers, and then you lay on your tummy side-by-side with him on the floor of his office and sandwiches were brought up.

After I finished one book, I think it was A Perfect Spy, my agent called me and said, Okay, we’ve got x-zillion yen and whatnot, and I said, And lunch. My agent said, What? I said, And lunch. When I get to New York I want to be taken, by Bob, to a decent restaurant for once and not eat one of those lousy tuna sandwiches lying on my tummy in his room. Bob called me that evening and said, I think we have a deal; and is that true about lunch? And I said, Yup, Bob, that’s the break point in the deal. Very well, he said. Not a lot of laughter. So I arrived in New York, and there was Bob, a rare sight in a suit, and we went to a restaurant he had found out about. He ate extremely frugally, and drank nothing, and watched me with venomous eyes as I made my way through the menu.

* Gottlieb again:

I happen to be a kind of word whore. I will read anything from Racine to a nurse romance, if it’s a good nurse romance. Many people just aren’t like that. Some of my closest friends cannot read anything that isn’t substantial—they don’t see the point. I don’t, however, like a certain kind of very rich, ornate, literary writing. I feel as if I’m being choked, as if gravel is being poured down my throat. Books like Under the Volcano, for instance, are not for me.

And this again describes me, despite my post On books, taste, and distaste. I think my penchant for Carlos Ruiz Zafón falls into the “word whore” category.

* The main thing he gets wrong is regarding science fiction, where he says of Doris Lessing’s The Sentimental Agents that “like all space fiction, or science fiction, it is underlain by a highly moralistic, utopian impulse.” This isn’t true: a lot of science fiction might be, but it’s hard to argue that, say, Stanislaw Lem’s is, or any number of dark, contemporary SF writers who want to describe things, not change them.

Books as civilization

“Far more than any other medium, books contain civilizations, the ongoing conversation between present and past. Without this conversation we are lost. But books are also a business. . .”

Jason Epstein, “Books: Onward to the Digital Revolution.”

Literary fiction and the current marketplace

Literary agent Betsy Lerner posted on the business of selling novels. I’d shorten this quote if I could, but what Lerner writes is too compelling for paraphrase or a one-sentence excerpt:

A lot of painful conversations lately about literary fiction and its demise.

Was it ever any different?

When I was an assistant at Simon and Schuster 25 years ago, there was exactly one literary fiction editor. And his position was rumored to be precarious as a result of focusing exclusively on the literary stuff. (In fact, he was let go a year later.) Of course, this was especially true at a house like S&S where monster political and celebrity books ruled. I can still recall an anxious conversation between a senior editor and a publicist because they couldn’t remember if Jackie Collins preferred white roses or red.

I understood at that tender age that to focus entirely on fiction was to jeopardize my hope of becoming an editor.

This implies that nonfiction is the more secure field, which jibes with what I’ve seen on many literary agents’ websites and blogs; there seem to be almost none who work solely with fiction but many who work exclusively or almost exclusively with nonfiction.

Which makes me wonder: why? Part of the reason might simply be that more nonfiction books move through stores in a given year than fiction, but I wonder also if part of the reason is that nonfiction simply has a shorter shelf life. I can’t imagine many pop nonfiction titles from, say, the 1930s to the 1960s are still read much because whatever fields those authors covered have changed sufficiently that their work is no longer useful save in a historical sense. Obviously, there are exceptions—both presidential candidates in the recent election cited Reinhold Niebuhr as an influence—but the general trend seems to hold.

But the novels of Bellow, Roth, and so forth are still fresh as the day they were published; I have ancient copies of For Whom the Bell Tolls and Tennyson’s Idylls of the King that are delightful. My used copy of John Barth’s Giles Goat-Boy is an original hardback. New copies of those works still sell. That’s a boon for readers but probably not so good for new writers, who have to compete with the masters. The result: a literary marketplace where it’s harder to break in as the number of established predecessors grows, leading to an equilibrium that favors nonfiction over fiction. “Monster political and celebrity books” flare brightly like supernovae, while the literary stars are dimmer but give persistent light to those who would see them; writers, meanwhile, become more dependent on university and other forms of patronage to make it in a marketplace that, rightly or wrongly, doesn’t much value their work in a financial sense.

More words of advice for the writer of a negative review

Nigel Beale quotes Helen Gardner:

“Critics are wise to leave alone those works which they feel a crusading itch to attack and writers whose reputations they feel a call to deflate. Only too often it is not the writer who suffers ultimately but the critic…”

Beale asks: “Which is great and poetic and all, however, is silence enough?”

To me, the chief function of the critic ought to be to explore a work as honestly as possible and to illuminate it to the best of her abilities. This means openness, and it means being willing to say that a work is weak (and why), as well as showing how it is weak. In other words, you should be able to answer the who, what, where, when, why, and how of it, with an emphasis on the last two.

One should squelch “a crusading itch to attack and writers whose reputations they feel a call to deflate” if you’re attacking merely to attack, or merely because someone’s balloon is overinflated. For example, Tom Wolfe seems a frequent and, to my mind, unfair object of ridicule among critics. But if you’re rendering a knowledgeable opinion that happens to be negative, you’re doing what you should be, and what I strive to do. Often this means writing about why a book fails—perhaps too frequently.

Good reviews and Updike

Every attempt at review and criticism ought to be good—but that doesn’t mean positive. A review that is “good” in the sense of well-done and engaging might be a negative one. In an ideal world, the book should decide that as much as the critic.

John Updike’s rules for reviewing are worth following to the extent possible. I would emphasize three of them:

1. Try to understand what the author wished to do, and do not blame him for not achieving what he did not attempt.

2. Give him enough direct quotation–at least one extended passage–of the book’s prose so the review’s reader can form his own impression, can get his own taste.

5. If the book is judged deficient, cite a successful example along the same lines, from the author’s oeuvre or elsewhere. Try to understand the failure. Sure it’s his and not yours?

In the end, I think such rules are designed to keep the reviewer as honest as the reviewer can be. I keep coming back to the word “honesty” because it so well encapsulates the issues raised by Beale, Updike, Orwell, and others.

I especially like the “direct quotation” comment because there are no artificial word limits on web servers, meaning that you should give the reader a chance to disagree with your assessment through direct experience. Quoting a sufficient amount of material will give others a chance to make their own judgments. Merit can be argued but not proven: thus, a critic can avoid silence and unfair attack.

As the above shows, I like Beale’s answer—”no”—which seems so obvious as to barely need stating. I’d rephrase Gardner’s assertion to this: “beware of relentlessly and thoughtlessly attacking.”

The Aeron, The Rite of Spring, and Critics

In Malcolm Gladwell’s book Blink: The Power of Thinking Without Thinking, he quotes Bill Dowell, who was the lead researcher for Herman Miller during the development and release of the now-famous Aeron in the early 1990s; I’m sitting in one as I type this. The Aeron eventually sold fantastically well and became a symbol of boom-era excess, aesthetic taste, ergonomic control, excessive time at computers, and probably other things as well. But Dowell says that the initial users hated the chair and expressed their displeasure in focus groups and testing sites. According to him, “Maybe the word ‘ugly’ was just a proxy for ‘different.’ ”

That’s a long wind-up for an analogy that explains how Helen Gardner might be telling us that when we instinctively dislike, we might be reacting against novelty rather than judging a work’s real merit, as critics and listeners notoriously did during Stravinsky’s The Rite of Spring. She’s wise to warn us about that danger, because it’s how people who pride themselves on taste and knowledge become conservative, stuffy critics. If we’re saying something is “bad” merely because it’s “different,” then we’ve already effectively died aesthetically because we’re no longer able to expand what “good” means. One thing I like about Terry Teachout’s criticism and his blog, About Last Night, is that he has strong opinions but still very much seems to have aesthetic suppleness.

But the Aerons and Ulysses of the world are exceedingly rare. Dune and Harry Potter aren’t among them. Joseph O’Neill’s Netherland at least might be, which I concede obliquely in my post about it.

Most works of art are, by definition, average.

The question is: to what extent is that a bad thing? Maybe none at all: an average novel doesn’t cause the death or disfigurement of children, or propagate social inequality, or do any number of other pernicious things. Its chief ill is that it wastes time for the person who reads it and perceives it as average (as opposed to the person who reads it and judges it extraordinary, which many Harry Potter readers have evidently done).

Milan Kundera thinks otherwise—in The Curtain, he writes, “… a mediocre plumber may be useful to people, but a mediocre novelist who consciously produces books that are ephemeral, commonplace, conventional—thus not useful, thus burdensome, thus noxious—is contemptible.” He gives himself a key out here: the word “consciously.” I doubt many writers consciously set out to produce commonplace books, or do so with that intent, and so may be rescued from the burden of Kundera’s scorn. Like the criminal justice system, Kundera separates those who knowingly commit a crime from those who do so accidentally.

You need to have read widely, however, to be capable of knowing the average from the incredible, and those whose effusive praise for Harry Potter and Dan Brown splatters the web show they haven’t. Hence, perhaps, the hesitance many Amazon reviewers show toward low scores, which one of Beale’s commenters observes.

The Aerons of Art

I now look at the Aeron as beautiful, and to me the over-stuffed office chairs that used to symbolize lawyerly and corporate status look as quaint as black and white photos of Harvard graduation classes without women or minorities. If we’re open to seeing the new, I think we’ll be safe enough in condemning the indifferent and pointing towards the genuinely astonishing works that are very much out there.

Edit: The Virginia Quarterly Review weighs in.