Institutional hypocrisy enabled by wealth, part 2, gambling edition

My comment from July 18 regarding Daniel Okrent’s Last Call: The Rise and Fall of Prohibition:

[…] hypocrisy regarding victimless crimes is a luxury good. It can be indulged when a society has sufficient wealth that it can afford to be hypocritical, signaling that its members want to be perceived as virtuous even when many of them as individuals would prefer to indulge in alcohol, other drugs, or sex-for-money.

Today’s New York Times:

With pressure mounting on the federal government to find new revenues, Congress is considering legalizing, and taxing, an activity it banned just four years ago: Internet gambling.

Lesson: moralizing is a vice enabled by excess wealth. Now that our societal wealth is not quite so vast, maybe we’ll consider lowering the prison population; as the Economist wrote, America locks up too many people, some for acts that should not even be criminal.

Releasing some of them would be morally justified, as the Economist makes clear, and probably economically rational. This assumes such people are not zero marginal product workers, as Tyler Cowen discusses at the link; that assumption might be too large to sustain, and I can’t evaluate the arguments about zero marginal product workers very well.

The dangers of over-reliance on evolutionary biology and evolutionary psychology, courtesy of Ernest Gellner and Henry Farrell

Primitive man has lived twice: once in and for himself, and the second time for us, in our reconstruction. Inconclusive evidence may oblige him to live such a double life forever. Ever since the principles of our own social order have become a matter of sustained debate, there has been a persistent tendency to invoke the First Man to settle our disputes for us. His vote in the next general election is eagerly solicited. It is not entirely clear why Early Man should possess such authority over our choices.

That’s from Ernest Gellner’s Plough, Sword, and Book: The Structure of Human History. Today we wouldn’t say “primitive man”: “primitive” is prejudicial, and “people” is usually used instead of “man” because it explicitly includes all humans. We would instead speak of the “pre-agrarian world” or “evolutionary times” and continue from there. But the point Gellner is making about our habitual “reconstruction” of what that world looked like, undertaken in large part to serve the prejudices of the present, is well-taken and worth remembering in light of books like Sex at Dawn, The Mating Mind, or the entire oeuvre of evolutionary biology and psychology, which has undergone tremendous revision over the past three decades and will no doubt continue to undergo tremendous revision over the next three and beyond.

How we reconstruct that time and “invoke the First Man” should be remembered as a reconstruction and not as the last word; he shouldn’t necessarily “possess such authority over our choices” today, because what was good for people living before agriculture or before the Industrial Revolution may not be good for us now.

It helps to understand the kinds of things that influence us, but we need to be wary of cherry picking evidence to support whatever kinds of social views we already hold.

I’m reading Gellner thanks to Henry Farrell at Crooked Timber.

David Shields’ Reality Hunger and James Wood’s philosophy of fiction

In describing novels from the first half of the 19th Century, David Shields writes in Reality Hunger: A Manifesto that “All the technical elements of narrative—the systematic use of the past tense and the third person, the unconditional adoption of chronological development, linear plots, the regular trajectory of the passions, the impulse of each episode toward a conclusion, etc.—tended to impose the image of a stable, coherent, continuous, unequivocal, entirely decipherable universe.”

I’m not so sure; the more interesting novels didn’t necessarily have “the unconditional adoption of chronological development” or the other features Shields ascribes to them. Caleb Williams is the most obvious example I can immediately cite: the murderers aren’t really punished in it and madness is perpetual. Gothic fiction of the 19th Century had a highly subversive quality that didn’t feature “the regular trajectory of the passions.” To my mind, the novel has always had unsettling features and an unsettling effect on society, producing change even when that change isn’t immediately measurable or apparent, or when we can’t get away from the fundamental constraints of first- or third-person narration. Maybe I should develop this thought more: but Shields doesn’t in Reality Hunger, so maybe innuendo ought to be enough for me too.

Shields is very good at making provocative arguments and less good at making those arguments hold up under scrutiny. He says, “The creators of characters, in the traditional sense, no longer manage to offer us anything more than puppets in which they themselves have ceased to believe.” Really? I believe if the author is good enough. And I construct coherence where it sometimes appears to be lacking. Although I’m aware that I can’t shake hands with David Kepesh of The Professor of Desire, he and the characters around him feel like “more than puppets” in which Roth has ceased to believe.

Shields wants something made new. Don’t we all? Don’t we all want to throw off dead convention? Alas: few of us know how to do so successfully, and that word “successfully” is especially important. You could write a novel that systematically eschews whatever system you think the novel imposes (this is the basic idea behind the anti-novel), but most people probably won’t like it—a point that I’ll come back to. We won’t like it because it won’t seem real. Most of us have ideas about reality that are informed by some combination of lived experience and cultural conditioning, and that culture shifts over time. Shields starts Reality Hunger with a premise that is probably less contentious than much of the rest of the manifesto: “Every artistic movement from the beginning of time is an attempt to figure out a way to smuggle more of what the artist thinks is reality into the work of art.” I can believe this, though I suspect that artists begin getting antsy when you try to pin them down on what reality is: I would call it this thing we all appear to live in but that no one can quite represent adequately.

That includes Shields. Reality Hunger doesn’t feel as new as it should; it feels more like a list of N things. It’s frustrating even when it makes one think. Shields says, “Culture and commercial languages invade us 24/7.” But “commercial languages” only invade us because we let them: TV seems like the main purveyor, and if we turn it off, we’ll probably cut most of the advertising from our lives. If “commercial languages” are invading my life to the extent I’d choose the word “invade,” I’m not aware of it, partially because I conspicuously avoid those languages. Shields says, “I try not to watch reality TV, but it happens anyway.” This is remarkable: I’ve never met anyone who’s tried not to watch reality TV and then been forced to, or had reality TV happen to them, like a car accident or freak weather.

Still, we need to think about how we experience the world and depict it, since that helps us make sense of the world. For me, the novel is the genre that does this best, especially when it bursts its perceived bounds in particularly productive ways. I can’t define those ways with any rigor, but the novel has far more going on than its worst and best critics imagine.

Both the worst and best critics tend to float around the concept of reality. To use Luc Sante’s description in “The Fiction of Memory,” a review of Reality Hunger:

The novel, for all the exertions of modernism, is by now as formalized and ritualized as a crop ceremony. It no longer reflects actual reality. The essay, on the other hand, is fluid. It is a container made of prose into which you can pour anything. The essay assumes the first person; the novel shies from it, insisting that personal experience be modestly draped.

I’m not sure what a “crop ceremony” is or how the novel is supposed to reflect “actual reality.” Did it ever? What is this thing called reality that the novel is attempting to mirror? Its authenticity or lack thereof has, as far as I know, always been in question. The search for realism is always a search and never a destination, even when we feel that some works are more realistic than others.

Yet Sante and Shields are right about the dangers of rigidity; as Andrew Potter writes in The Authenticity Hoax: How We Get Lost Finding Ourselves, “One effect of disenchantment is that pre-existing social relations come to be recognized not as being ordained by the structure of the cosmos, but as human constructs – the product of historical contingencies, evolved power relations, and raw injustices and discriminations.”

Despite this, however, we feel realism—if none of us did, we’d probably stop using the term. Our definitions might blur as we approach precision, but that doesn’t mean something isn’t there.

Sante writes, quoting Shields, that “‘Anything processed by memory is fiction,’ as is any memory shaped into literature.” Maybe: but consider these three statements, if I were to make them to you (keep in mind the context of Reality Hunger, with comments like “Try to make it real—compared to what?”):

Aliens destroyed Seattle in 2004.

I attended Clark University.

Alice said she was sad.

One of them is, to most of us, undoubtedly fiction. One of them is true. The other I made up: no doubt there is an Alice somewhere who has said she is sad, but I don’t know her and made her up for the purposes of example. The second example might be “processed by memory,” but I don’t think that makes it fiction, even if I can’t give you a firm, rigorous, absolute definition of where the gap between fact and interpretation begins. Jean Bricmont and Alan Sokal give it a shot in Fashionable Nonsense: “For us, as for most people, a ‘fact’ is a situation in the external world that exists irrespective of the knowledge that we have (or don’t have) of it—in particular, irrespective of any consensus or interpretation.”

They go on to observe that scientists actually face some problems of definition that I see as similar to those of literature and realism:

Our answer [as to what makes science] is nuanced. First of all, there are some general (but basically negative) epistemological principles, which go back at least to the seventeenth century: to be skeptical of a priori arguments, revelation, sacred texts, and arguments from authority. Moreover, the experience accumulated during three centuries of scientific practice has given us a series of more-or-less general methodological principles—for example, to replicate experiments, to use controls, to test medicines in double-blind protocols—that can be justified by rational arguments. However, we do not claim that these principles can be codified in a definite way, nor that the list is exhaustive. In other words, there does not exist (at least at present) a complete codification of scientific rationality, which is always an adaptation to a new situation.

They lay out some criteria (beware of “revelation, sacred texts, and arguments from authority”) and “methodological principles” (“replicate experiments”) and then say “we do not claim that these principles can be codified in a definite way.” Neither can the principles of realism. James Wood does as good a job of exploring them as anyone. But I would posit that, despite our inability to pin down realism, either as convention or not, most of us recognize it: when I tell people that I attended Clark University, none have told me that my experience is an artifact of memory, or made up, or that there is no such thing as reality and therefore I didn’t. Such realism might merely be convention or training—or it might be real.

In the first paragraph of his review of Chang-Rae Lee’s The Surrendered, James Wood lays out the parameters of the essential question of literary development or evolution:

Does literature progress, like medicine or engineering? Nabokov seems to have thought so, and pointed out that Tolstoy, unlike Homer, was able to describe childbirth in convincing detail. Yet you could argue the opposite view; after all, no novelist strikes the modern reader as more Homeric than Tolstoy. And Homer does mention Hector’s wife getting a hot bath ready for her husband after a long day of war, and even Achilles, as a baby, spitting up on Phoenix’s shirt. Perhaps it is as absurd to talk about progress in literature as it is to talk about progress in electricity—both are natural resources awaiting different forms of activation. The novel is peculiar in this respect, because while anyone painting today exactly like Courbet, or composing music exactly like Brahms, would be accounted a fraud or a forger, much contemporary fiction borrows the codes and conventions—the basic narrative grammar—of Flaubert or Balzac without essential alteration.

I don’t think literature progresses “like medicine or engineering.” Using medical or engineering knowledge as it stood in 1900 would be extremely unwise if you’re trying to understand the genetic basis of disease or build a computer chip. Papers tend to decay within five to ten years of publication in the sciences.

But I do think literature progresses in some other, less obvious way, as we develop wider ranges of techniques and as social constraints allow for wider ranges of subject matter or more direct depiction: which is why Nabokov can point out that “Tolstoy, unlike Homer, was able to describe childbirth in convincing detail,” and I can point out that mainstream literature effectively couldn’t depict explicit sexuality until the 20th Century.

While that last statement can be qualified some, it is hard to miss the difference between a group of 19th Century writers like Thackeray, Dickens, Trollope, George Eliot, George Meredith, and Thomas Hardy (who J. Hillis Miller discusses in The Form of Victorian Fiction) and a group of 20th Century writers like D.H. Lawrence, James Joyce, Norman Rush, and A.S. Byatt, who are free to explicitly describe sexual relationships to the extent they see fit and famously use words like “cunt” that simply couldn’t be effectively used in the 19th Century.

In some ways I see literature as closer to math: the quadratic equation doesn’t change with time, but I wouldn’t want to be stuck in a world with only the quadratic equation. Wood gets close to this when he says that “Perhaps it is as absurd to talk about progress in literature as it is to talk about progress in electricity—both are natural resources awaiting different forms of activation.” The word “perhaps” is essential in this sentence: it gives a sense of possibility and realization that we can’t effectively answer the question, however much we might like to. But both question and answer give a sense of some useful parameters for the discussion. Most likely, literature isn’t exactly like anything else, and its development (or not) is a matter as much of the person doing the perceiving and ordering as anything intrinsic to the medium.

I have one more possible quibble with Wood’s claim that much contemporary fiction borrows “the basic narrative grammar […] of Flaubert or Balzac without essential alteration.” I wonder if it really hasn’t undergone “essential alteration,” and what would qualify as essential. Novelists like Elmore Leonard, George Higgins, or that Wood favorite Henry Green all feel quite different from Flaubert or Balzac because of how they use dialog to convey ideas. The characters in Tom Perrotta’s Election speak in a much more slangy, informal style than do any in Flaubert or Balzac, so far as I know. Bellow feels more erratic than the 19th Century writers and closer to the psyche, although that might be an artifact of how I’ve been trained by Bellow and writers after Bellow to perceive the novel and the idea of psychological realism. Taken together, however, the writers mentioned make me think that maybe “the basic narrative grammar” has changed for writers who want to adopt new styles. Yes, we’re still stuck with first- and third-person perspectives, but we get books that are heavier on dialog and lighter on formality than their predecessors.

Wood is a great chronicler of what it means to be real: his interrogation of this seemingly simple term runs through the essays collected in The Irresponsible Self: On Laughter and the Novel, The Broken Estate: Essays on Literature and Belief, and, most comprehensively, the book How Fiction Works. Taken together, they ask how the “basic narrative grammar” of fiction works or has worked up to this point. In setting out some of the guidelines that allow literary fiction to work, Wood is asking novelists to find ways to break those guidelines in useful and interesting ways. In discussing Reality Hunger, Wood says, “[Shields’] complaints about the tediousness and terminality of current fictional convention are well-taken: it is always a good time to shred formulas.” I agree and doubt many would disagree, but the question is not merely one of “shred[ding] formulas” but of how and why those formulas should be shredded. One doesn’t shred the quadratic formula: it works. But one might build on it.

By the same token, we may have this “basic narrative grammar” not because novelists are conformist slackers who don’t care about finding a new way forward: we may have it because it’s the most satisfying or useful way of conveying a story. I don’t know that this is true, but it might be. Maybe most people won’t find major changes to the way we tell stories palatable. Despite modernism and postmodernism, fewer people appear to enjoy the narrative confusion and choppiness of Joyce than enjoy the streamlined feel of the latest thriller. That doesn’t mean the latter is better than the former—by my values, it’s not—but it does mean that the overall thrust of fiction might remain where it is.

Robert McKee, in his not-very-good-but-useful book Story: Substance, Structure, Style and the Principles of Screenwriting, gives three major kinds of plots, which blend into one another: “arch plots,” which are causal in nature and finish their story lines; “mini plots,” which he says are open and “strive for simplicity and economy while retaining enough of the classical […] to satisfy the audience”; and “antiplots,” where absurdism and the like fall.

He says that as one moves “toward the far reaches of Miniplot, Antiplot, and Non-plot, the audience shrinks” (emphasis in original). From there:

The atrophy has nothing to do with quality or lack of it. All three corners of the story triangle gleam with masterworks that the world treasures, pieces of perfection for our imperfect world. Rather, the audience shrinks for this reason: Most human beings believe that life brings closed experiences of absolute, irreversible change; that their greatest sources of conflict are external to themselves; that they are the single and active protagonists of their own existence; that their existence operates through continuous time within a consistent, causally interconnected reality; and that inside this reality events happen for explainable and meaningful reasons.

The connection between this and Wood’s “basic narrative grammar” might appear tenuous, but McKee and Wood are both pointing towards the ways stories are constructed. Wood is more concerned with language; although plot and its expression (whether in language or in video) can’t be fully separated, they can still be analyzed independently enough to make a distinction.

The conventions that underlie the “arch plots,” however, can become tedious over time. This is what Wood is highlighting when he discusses Roland Barthes’ “reality effect,” which fiction can achieve: “All this silly machinery of plotting and pacing, this corsetry of chapters and paragraphs, this doxology of dialogue and characterization! Who does not want to explode it, do something truly new, and rouse the implication slumbering in the word ‘novel’?” Yet we need some kind of form to contain story; what is that form? Is there an ideal method of conveying story? If so, what if we’ve found it and are now mostly tinkering, rather than creating radical new forms? If we take out “this silly machinery of plotting and pacing” and dialog, we’re left with something closer to philosophy than to a novel.

Alternately, maybe we need the filler and coordination that so many novels consist of if those novels are to feel true to life, which appears to be one definition of what people mean by “realistic.” This is where Wood parts with Barthes, or at least makes a distinct case:

Convention may be boring, but it is not untrue simply because it is conventional. People do lie on their beds and think with shame about all that has happened during the day (at least, I do), or order a beer and a sandwich and open their computers; they walk in and out of rooms, they talk to other people (and sometimes, indeed, feel themselves to be talking inside quotation marks); and their lives do possess more or less traditional elements of plotting and pacing, of suspense and revelation and epiphany. Probably there are more coincidences in real life than in fiction. To say “I love you” is to say something at millionth hand, but it is not, then, necessarily to lie.

“Convention may be boring, but it is not untrue simply because it is conventional,” and the parts we think of as conventional might be necessary to realism. In Umberto Eco’s Reflections on The Name of the Rose, he says that “The postmodern reply to the modern consists of recognizing that the past, since it cannot really be destroyed, because its destruction leads to silence, must be revisited: but with irony, not innocently.” That is often the job of novelists dealing with the historical weight of the past and with conventions that are “not untrue simply because [they are] conventional.” Eco and Wood both use the example of love to demonstrate similar points. Wood’s is above; Eco says:

I think of the postmodern attitude as that of a man who loves a very cultivated woman and knows he cannot say to her, ‘I love you madly,’ because he knows that she knows (and that she knows that he knows) that these words have already been written by Barbara Cartland. Still, there is a solution. He can say, ‘As Barbara Cartland would put it, I love you madly.’ At this point, having avoided false innocence, having said clearly that it is no longer possible to speak innocently, he will nevertheless have said what he wanted to say to the woman: that he loves her, but he loves her in an age of lost innocence. If the woman goes along with this, she will have received a declaration of love all the same. Neither of the two speakers will feel innocent, both will have accepted the challenge of the past, of the already said, which cannot be eliminated […]

I wonder if every age thinks of itself as “an age of lost innocence,” only to be later looked on as pure, naive, or unsophisticated. Regardless, for Eco postmodernism requires that we look to the past long enough to wink and then move on with the story we’re going to tell in the manner we’re going to tell it. Perhaps Chang-Rae Lee doesn’t do so in The Surrendered, which is the topic of Wood’s essay—but like so many essays and reviews, Wood’s starts with a long and very useful consideration before coming to the putative topic of its discussion. Wood speaks of reading […] “Chang-Rae Lee’s new novel, “The Surrendered” (Riverhead; $26.95)—a book that is commendably ambitious, extremely well written, powerfully moving in places, and, alas, utterly conventional. Here the machinery of traditional, mainstream storytelling threshes efficiently.” I haven’t read The Surrendered and so can’t evaluate Wood’s assessment.

Has Wood merely overdosed on the kind of convention that Lee uses, as opposed to convention itself? If so, it’s not clear how that “machinery” could be fixed or improved on, and the image itself is telling because Wood begins his essay by asking whether literature is like technology. My taste in literature changes: as a teenager I loved Frank Herbert’s Dune and now find it almost unbearably tedious. Other revisited novels hold up poorly because I’ve overdosed on their conventions and start to crave something new—a lot of fantasy flattens over time like opened soda.

Still, I usually don’t know what “something new” entails until I read it. That’s the problem with saying that the old way is conventional or boring: that much is easier to observe than the fix. Wood knows it, and he’s unusually good at pointing to the problems of where we’ve been and pointing to places that we might go to fix it (see, for example, his recent essay on David Mitchell, who I now feel obliged to read). This, I suspect, is why he is so beloved by so many novelists, and why I spend so much time reading him, even when I don’t necessarily love what he loves. The Quickening Maze struck me as self-indulgent and lacking in urgency, despite the psychological insight Adam Foulds offers into a range of characters’ minds: a teenage girl, a madman, an unsuccessful inventor.

I wanted more plot. In How Fiction Works, Wood quotes from Adam Smith writing in the eighteenth century regarding how writers use suspense to maintain reader interest and then says that “[…] the novel [as an art form; one could also say the capital-N Novel] soon showed itself willing to surrender the essential juvenility of plot […]” Yet I want and crave this element that Wood dismisses—perhaps because of my (relatively) young age: Wood says that Chang-Rae Lee’s Native Speaker was “published when the author was just twenty-nine,” older than I am. I like suspense and the sense of something major at stake, and that could imply that I have a weakness for weak fiction. If so, I can do little more than someone who wants chocolate over vanilla, or someone who wants chocolate despite having heard the virtues of cherries extolled.

When I hear about the versions of the real, reality, and realism that get extolled, I often begin to think about chocolate, vanilla, and cherries, and why some novelists write in such a way that I can almost taste the cocoa while others are merely cardboard colored brown. Wood is very good at explaining this, and his work taken together represents some of the best answers to the questions that we have.

Even the best answers lead us toward more questions that are likely to be answered best by artists in a work of art that makes us say, “I’ve never seen it that way before,” or, better still, “I’ve never seen it.” Suddenly we do see, and we run off to describe to our friends what we’ve seen, and they look at us and say, “I don’t get it,” and we say, “maybe you just had to see it for yourself.” Then we pass them the book or the photo or the movie and wait for them to say, “I’ve already seen this somewhere before,” while we argue that they haven’t, and neither have we. But we press on, reading, watching, thinking, hoping to come across the thing we haven’t seen before so we can share it again with our friends, who will say, like the critics do, “I’ve seen it before.”

So we have. And we’ll see it again. But I still like the sights—and the search.

Sex at Dawn — Christopher Ryan and Cacilda Jethá

EDIT: This review, from the journal Evolutionary Psychology, is the one I would’ve written if I’d been better read in the field and had more time to read extensively in it. Read the linked review if you really want to understand the problems with Sex at Dawn.

Furthermore, “The Myth of Promiscuity: A review of Lynn Saxon, Sex at Dusk: Lifting the Shiny Wrapping from Sex at Dawn” discusses the (many) problems with Sex at Dawn in a more complete fashion than I did. So if you’re looking for a deeper discussion than the one I can offer, consider Sex at Dusk.


My bottom-line assessment of Sex at Dawn: The Prehistoric Origins of Modern Sexuality is that the book would never get past peer review because so many of its descriptions of existing research and ideas are wrong or skewed. The book argues that humans are not “naturally” monogamous. That might be true. But Sex at Dawn doesn’t prove it. The data are ambiguous.

The biggest problem with the book starts on page 46, with the chapter “A Closer Look at the Standard Narrative of Human Sexual Evolution.” But there is no standard narrative of human sexual evolution: there is a wide array of people who have made inferences about the evolutionary basis of sexuality, but their narratives aren’t consistent, and new papers and ideas constantly jostle with or replace old ones. Ryan and Jethá don’t cite anyone else who claims a “standard” narrative, because to my knowledge no one has, and the standard narrative they cobble together is just that: cobbled together from a variety of sources with a variety of views.

I mentioned the lack of citations as a problem in their chapter on the standard narrative; it continues throughout the book. On page 293, Ryan and Jethá say that “To avoid the genetic stagnation that would have dragged our ancestors into extinction long ago, males evolved a strong appetite for sexual novelty and a robust aversion to the overly familiar.” But they don’t have any evidence for that. Similarly, they accuse scientists and others of claiming that monogamy is “natural” or inborn, and cite the anthropologist Owen Lovejoy as saying, “The nuclear family and human behavior may have their ultimate origin long before the dawn of the Pleistocene” (34). And he’s right: such behaviors may have their origins there. Or they may not. Good scientists tend to be more tentative than polemicists because scientists recognize the fragility of so much human knowledge.

In The Evolution of Childhood, Melvin Konner writes:

A double standard of sexual restriction is common across cultures; still, most human marriages have been mainly monogamous, owing either to environmental constraint or cultural principle. Modern cultures are monogamous in principle, but both adultery and serial monogamy are common. In at least thirty-seven countries, men express preference for women several years younger than themselves and place more emphasis on appearance, while women prefer men several years older and emphasize status and wealth (41).

The “environmental constraint” is important because it takes a lot of resources to support multiple spouses; this means that most men in most places and most conditions cannot afford to support multiple women. One woman might be able to support or be supported by multiple men, but polyandry is far less common than polygyny, as Konner points out. This is probably as close to accurate as one is likely to get regarding the historical or anthropological record on the subject of polygamy. It also has the advantage of coming from someone who spent his entire career on the subject of childhood development and who is deeply familiar with the vast literature surrounding evolution, anthropology, and childhood.

Ryan and Jethá also have many sections where they ask rhetorical questions or pit themselves against imaginary foes of great power; the page after the Lovejoy quote, they say, “This is what we’re up against. It’s a song that is powerful, concise, self-reinforcing, and playing on the radio all day and all night . . . but still wrong, baby, oh so wrong” (35). Enough with the polemics: if you’re right, show us that you’re right and leave the judgment up to the reader.

Dan Savage called Sex at Dawn “the single most important book about human sexuality since Alfred Kinsey unleashed Sexual Behavior in the Human Male on the American public in 1948.” The statement is hyperbolic and unlikely but nonetheless demonstrates the power of the book, especially when America’s most famous sex columnist is pimping it, so to speak.

In addition, Kinsey was at least doing original research by taking and compiling sexual histories. Ryan and Jethá aren’t: they’re rehashing a variety of other people’s research, and in doing so regularly misrepresenting that research. Furthermore, Kinsey was reacting to a much, much different culture than ours today; Sexual Behavior in the Human Male had essentially no real forerunners, while Sex at Dawn is a weak entry in a crowded field of evolutionary biologists and psychologists like Geoffrey Miller (The Mating Mind), Sarah Blaffer Hrdy (The Woman Who Never Evolved), and David Buss. All three get cited, but out of context, and their deeper arguments are never really engaged. I don’t think it a coincidence that all three are academics.

For another example of imprecision in Sex at Dawn, Ryan and Jethá point out that men are only 10%–20% larger than women (in polygynous species, the larger the size difference between the sexes, the greater the number of sex partners). But raw size or height differences badly understate how that size translates into muscle. Consider David Puts’ work:

When fat-free mass is considered, men are 40% heavier (Lassek & Gaulin, 2009; Mayhew & Salm, 1990) and have 60% more total lean muscle mass than women. Men have 80% greater arm muscle mass and 50% more lower body muscle mass (Abe, Kearns, & Fukunaga, 2003). Lassek and Gaulin (2009) note that the sex difference in upper-body muscle mass in humans is similar to the sex difference in fat-free mass in gorillas (Zihlman & MacFarland, 2000), the most sexually dimorphic of all living primates.

These differences in muscularity translate into large differences in strength and speed. Men have about 90% greater upper-body strength, a difference of approximately three standard deviations (Abe et al., 2003; Lassek & Gaulin, 2009). The average man is stronger than 99.9% of women (Lassek & Gaulin, 2009). Men also have about 65% greater lower body strength (Lassek & Gaulin, 2009; Mayhew & Salm, 1990), over 45% higher vertical leap, and over 22% faster sprint times (Mayhew & Salm, 1990).

(That’s from Puts, David A. “Beauty and the Beast: Mechanisms of Sexual Selection in Humans.” Evolution and Human Behavior 31.3 (2010): 157-75.)
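As a back-of-the-envelope check on that last figure: assuming, purely for illustration, that strength within each sex is roughly normally distributed with equal variance (an assumption the quoted passage doesn’t spell out), a gap of about three standard deviations is exactly what puts the average man above roughly 99.9% of women. A minimal sketch under that assumption:

```python
from statistics import NormalDist

# Model female upper-body strength as a standard normal distribution
# (hypothetical units: mean 0, standard deviation 1).
female_strength = NormalDist(mu=0.0, sigma=1.0)

# Per the Lassek & Gaulin figure quoted above, the male mean sits about
# three female standard deviations above the female mean, so the share
# of women weaker than the average man is the normal CDF evaluated at 3.
share_below_average_man = female_strength.cdf(3.0)

print(f"{share_below_average_man:.4%}")  # ~99.8650%, i.e. roughly 99.9%
```

The exact normality and equal-variance assumptions matter less than the shape of the arithmetic: any roughly bell-shaped distribution puts a three-sigma gap deep into the tail.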

The weird thing is that this information supports their assertion that humans are polygynous but hurts their assertion that early societies were mostly kind and peaceful, which they probably weren’t, per Lawrence Keeley’s War Before Civilization. Both the Puts paper and the Keeley book are the kinds of things that peer reviewers would be apt to point out.

Even when they aren’t simplifying the research others have done or selectively quoting writers without fully engaging their arguments, Ryan and Jethá are merely poor writers. Take this: “For better or worse, the human female’s naughty bits don’t swell up to five times their normal size and turn bright red just to signal her sexual availability,” as happens in many species of apes. Note how bad this writing is: the sentence starts with a cliche, moves on to a childish description of women more appropriate to 14-year-olds than to a serious book, a description that also reinforces the very cultural forces the authors are trying to counteract, and then proceeds to restate something already said earlier in the chapter. The writing in much of the book is equally bad, the reasoning sloppy, and the thought underdeveloped. Which isn’t to say the book doesn’t have interesting or useful elements—it does—but those tend to get subsumed by its flaws.

The more I read about humanity, history, and the rhetoric of authenticity, naturalness, human instinct, and the like, the more I think there aren’t such things, and that claims about what is “natural” reveal more about the person making the claim than about humanity itself. I would say that it’s natural for people to make claims about what is natural, but relatively little else is; circumstances affect so much that it’s hard to perceive many higher-order behaviors as anything other than reflecting bizarre combinations of self and environment.

People simply vary widely in their preferences, and most appear to view whatever society and subculture they grew up in as normal and natural. I posit that it’s not normal or abnormal to be polygamous or monogamous: in some circumstances one might make more sense, and in others the other strategy would. And people are too variable to say one mode is completely correct for all people under all circumstances.

I had actually begun this post before I read Paul Graham’s latest essay, “The Top Idea in Your Mind.” This part especially resonated:

I’ve found there are two types of thoughts especially worth avoiding—thoughts like the Nile Perch in the way they push out more interesting ideas. One I’ve already mentioned: thoughts about money. Getting money is almost by definition an attention sink. The other is disputes. These too are engaging in the wrong way: they have the same velcro-like shape as genuinely interesting ideas, but without the substance. So avoid disputes if you want to get real work done. [3]

To really catalog everything that’s wrong with Sex at Dawn, I’d have to go back through at least five or six books (and probably more) and at least a dozen papers. It would take me all day. Why spend that much time on a book that’s not very good? A while ago I promised myself that I wasn’t going to write many more posts on books that are bad in a generic, unremarkable way, because doing so isn’t an optimal use of my time. And reading Sex at Dawn is unlikely to be an optimal use of yours.

Video Games Live — concert review

A friend and I saw Video Games Live, the concert featuring primarily music from video games; the show was emphatically so-so, mostly because the music kept being interrupted for banal reasons, chiefly related to defending the idea of video games as an art form. The structure of the concert went like this: the musicians would play for five to ten minutes, then a guy would show up to declare that video games are ART, DAMMIT!, or run a contest, or show a video game, or pick his nose, or whatever. Then the music would resume. But is a show devoted to the music of games really an ideal venue for trying to show that video games are art? In other concerts I’ve been to, no one comes out to defend Beethoven or The Offspring as art: it’s merely assumed. You’ll know video games are art when people stop claiming they are and merely assume that they are.

I feel the worst for the musicians themselves, who presumably didn’t spend their 10,000-plus hours of practice in order to play underdeveloped pieces that, to highly trained ears, probably sound bombastic or manipulative, the way bad romances seem to literary critics. You could see them looking at one another when the conductor / showman stopped to extol the virtues of video games and drench himself in glory for putting the show together.

You may notice that I haven’t mentioned much about the music: that’s because the show wasn’t really about music. Some video game music is interesting and deserves serious attention; Final Fantasy is particularly famous for its soundtracks. The Mario theme music has become a pop culture cliche. But you won’t find attention to music at Video Games Live: look elsewhere for that.

Without being able to discuss much of the music, someone writing about the concert is left to discuss what the nominal concert really engages. As he has with a dizzying array of other phenomena, Tyler Cowen has asked similar questions about the status of video games as art, which he engages a little bit here regarding a New York Times piece and also here. Salon.com is asking the same questions, but is more rah-rah about video games. I don’t think anyone has argued that video games don’t “matter,” whatever that means in this context. It seems unlikely to me that games will have a strong claim to art until they can deal with sexuality in a mature way—which paintings, novels, poetry, and movies have all accomplished.

We’ll know video games are art when their defenders stop saying that video games are art and merely assume they are while going about their business. This change happened in earnest with novels around the late nineteenth and early twentieth centuries, as Mark McGurl argues in The Novel Art: Elevations of American Fiction after Henry James. Maybe it’s happening now with video games. If so, I don’t think Video Games Live is helping.

One good thing: my friend won tickets. So the only cost of the show was opportunity, not money.

Anathem — Neal Stephenson

I read Anathem when it came out and tried it again recently because I’m a literary masochist. It concerns a giant graduate school/university where really smart people gather in seclusion from the rest of humanity, who are busy running around distracted by cell phones (now called “jeejahs”), futuristic TV, and religious-style demagogues. Erasmus (get it?) is in the middle of this and realizes something bad is going to happen. He’s a low-ranked “avout” who lives in one of those cloisters, which demands a kind of autarky of ideas, a bit like Vermont without Internet access. There are a lot of passages like this, taken from the beginning:

Guests from extramuros, like Artisan Flec, were allowed to come in the Day Gate and view auts from the north nave when they were not especially contagious and, by and large, behaving themselves. This had been more or less the case for the last century and a half.

It’s unfair to take this out of context and not explain what the hell is going on. But for the first quarter to half of the book, there is no context until you’ve created your own.

Confused yet? Hopefully not too much; if you pick up Anathem, you will be more so. The novel famously comes with a glossary, which reads like code with too many GOTOs in it. And if I make the novel sound ridiculous, I’m doing so intentionally, picking up the flavor of those lofty New Yorker reviews whose greatest tactic against the manufactured noise and lights that sometimes pass as popcorn movies is ridicule. In Stephenson’s case, the noise is highbrow and intellectual, or maybe pseudo-intellectual, but noise nonetheless, regardless of the number of philosophic references put in it.

The biggest problem with the silly vocabulary is that it a) makes the novel harder to take seriously, even in a humorous way, and b) makes it more likely that readers will abandon the novel before finishing it, and in turn badmouth it to their friends (and on their blogs, as I’m doing). I wanted to like the novel, but Neal Stephenson is beginning to feel like Melville: someone who peaked before he stopped writing novels, to the detriment of his readers, but who nonetheless still writes a lot of unconventional and interesting stuff. Stephenson’s Moby-Dick is Cryptonomicon, a novel still justifiably beloved, and his earlier novels The Diamond Age and Snow Crash are both unusually strong science fiction.

By now, one gets the sense that no one restrains Stephenson’s grandest impulses: a long, well-done novel is a unique beauty, but a poorly done long novel is more likely to be abandoned than finished, and all the more so a poorly done series of long novels like The Baroque Cycle, which is destined never to be collected in a single physical volume thanks to its heft.

In Further Fridays, John Barth writes of great thick books that “One is reminded that the pleasures of the one-night stand, however fashionable, are not the only pleasures. There is also the extended, committed affair; there is even the devoted, faithful, happy marriage. One recalls, among several non-minimalist Moderns, Vladimir Nabokov seconding James Joyce’s wish for ‘the ideal reader with the ideal insomnia.’ ” Neal Stephenson answers this call for heft and then some: Cryptonomicon is a marvelous book that would demand more than a single night of insomnia to read, and yet none of it seems extraneous, or at least not in a way that deserves to be cut. Even the several-page description of how one should eat Cap’n Crunch seems apt to the minds of the hackers and proto-hackers Stephenson follows. So it is again with Anathem, a novel whose demands are much greater.

Stephenson has made steadily greater demands of his readers, and I wonder if those demands were most justified for Cryptonomicon. Midway through Quicksilver I gave up, and what The Baroque Cycle demands in sheer length, Anathem demands in depth. As has often been mentioned in reviews, it has a glossary, and the dangers of one are well-expressed by this graph:

(I will note, however, that one of my favorite novels of all time has not just made-up words but an entire made-up language embedded in it: Lord of the Rings. So it’s important to note that the probability of a book being good descends but never reaches zero, at least as far as we can tell from this graph.)

One other point: as Umberto Eco said of The Name of the Rose:

But there was another reason [beyond verisimilitude to the perspective of a 14th C. monk] for including those long didactic passages. After reading the manuscript, my friends and editors suggested I abbreviate the first hundred pages, which they found very difficult and demanding. Without thinking twice, I refused, because, as I insisted, if somebody wanted to enter the abbey and live there for seven days, he had to accept the abbey’s own pace. If he could not, he would never manage to read the whole book. Therefore those first hundred pages are like penance or an initiation, and if someone does not like them, so much the worse for him. He can stay at the foot of the hill.
Entering a novel is like going on a climb in the mountains: you have to learn the rhythm of respiration, acquire the pace; otherwise you stop right away.

Or, worse, you might think you get to the mountain’s summit and then intellectually die during the descent (and yes, the link embedded in this sentence is highly relevant to the issue at hand).

In the novel, Stephenson is dealing with the potential for an increasingly bifurcated society with supernerds on one side and proles on the other. You can see the same ideas in 800 words instead of 120,000 in his essay Turn On, Tune In, Veg Out, which I assign to my freshmen every semester and which almost none of them really get.

Still, the concern that smart people are going to rule others does have a certain pedigree, and the idea of a cerebral superclass detached from the material world is hardly a new one; monks were an expression of it in a religious context for centuries if not longer. H.G. Wells thought of something not dissimilar in his idea of an “Open Conspiracy,” through which leading scientists and philosophers would form a benevolent world government. In The Making of the Atomic Bomb, Richard Rhodes describes physicist Leo Szilard’s similar conception of an improbably super-competent elite ruling the world:

[…] we could create a spiritual leadership class with inner cohesion which would renew itself on its own.

Members of this class would not be awarded wealth or personal glory. To the contrary, they would be required to take on exceptional responsibilities, “burdens” that might “demonstrate their devotion.”

Sounds great. Keep them away from me.

People who like second-hand philosophy and who need a superiority complex, or need to feed one that’s developing, might like Anathem. Mastering it is perhaps as esoteric as being able to quote at will from Hegel’s The Phenomenology of Spirit, and about as fun. I’ve barely talked about the novel, the text, and the story because the story feels like a skeleton for the novel’s concerns. Again, like Melville, Stephenson seems to have forgotten about the pull of story in his later work.

Umberto Eco, in contrast, is another writer of enormous books filled with ideas. His two best—The Name of the Rose and Foucault’s Pendulum—contrast with weaker efforts like The Island of the Day Before and The Mysterious Flame of Queen Loana, which become obsessed with how story is told rather than with the story itself. The “how” is a fine subject to address in novels, as many postmodernist novels do, but the “what” can’t be subjugated to the “how”—otherwise one isn’t writing a novel; one is writing literary criticism. Trying to shoehorn the latter into the former isn’t going to create anything but boredom, with characters who aren’t characters but Vessels of Great Meaning. Erasmus in Anathem isn’t a person—he’s a convenient way to explore ideas. I’d like a character who explores the idea of why ideas must be integrated into characters rather than vice-versa.

Is there something wrong with story? For a novel to work, its meaning has to be at most equal to, but more likely subsumed beneath, its story and the language used to convey that story. But Anathem is too busy preening to let that happen. I’m reminded of something Philip Pullman said regarding the His Dark Materials trilogy: for every page he wrote he threw five away, and he concentrated ceaselessly on moving the story along. The result has our hero, Lyra, in a closet, where she’s hiding because she’s broken a rule, and where she sees someone attempt to poison her father, a returning hero. The novel moves ever faster from there.

It’s a beginning so forceful that I’m recalling it from memory. Where does Anathem begin again? I can’t remember, and I look at the tome on my desk and consider finding out. If I were to force myself to remember, I would do so with all the joy of memorizing for school. His Dark Materials, in contrast, I remember for pure joy, and for its impact.

This is, to be sure, an overlong post, but it suits an overlong novel. Let this serve as a warning regarding and substitute for Anathem.

Hypocrisy as enabled by wealth: a lesson from Daniel Okrent’s Last Call

In Daniel Okrent’s Last Call: The Rise and Fall of Prohibition, he writes: “As businesses came apart, as banks folded, as massive unemployment and homelessness scoured the cities and much of the countryside, any remaining ability to enforce Prohibition evaporated.”

One can extract a larger point from this passage relating the Great Depression’s effects on Prohibition: hypocrisy regarding victimless crimes is a luxury good. It can be indulged when a society has sufficient wealth that it can afford to be hypocritical, signaling that its members want to be perceived as virtuous even when many of them as individuals would prefer to indulge in alcohol, other drugs, or sex-for-money. The same basic dynamic is playing out in California with weed: the state is broke; willing buyers buy from willing sellers; enforcement and imprisonment are pointless costs; and the potential tax revenue increases the temptations of legalization.

The Economist has recently reported on this dynamic regarding California: “Another big topic in a state with a $19 billion budget hole is the fiscal impact of legalisation. Some studies have estimated savings of nearly $1.9 billion as people are no longer arrested and imprisoned because of marijuana.”

A lesson Last Call offers is that societies can afford to become more hypocritical as they become wealthier. But when we have to confront the trade-offs that pointless policing of personal behavior entails, the costs of various kinds of prohibition loom larger, and those prohibitions no longer look as appealing as they once did. Drug prohibition is a salient example.
