Literary fiction and the current marketplace

Literary agent Betsy Lerner posted on the business of selling novels. I’d shorten this quote if I could, but what Lerner writes is too compelling for paraphrase or a one-sentence excerpt:

A lot of painful conversations lately about literary fiction and its demise.

Was it ever any different?

When I was an assistant at Simon and Schuster 25 years ago, there was exactly one literary fiction editor. And his position was rumored to be precarious as a result of focusing exclusively on the literary stuff. (In fact, he was let go a year later.) Of course, this was especially true at a house like S&S where monster political and celebrity books ruled. I can still recall an anxious conversation between a senior editor and a publicist because they couldn’t remember if Jackie Collins preferred white roses or red.

I understood at that tender age that to focus entirely on fiction was to jeopardize my hope of becoming an editor.

This implies that nonfiction is the more secure field, which jibes with what I’ve seen on many literary agents’ websites and blogs; there seem to be almost none who work solely with fiction but many who work exclusively or almost exclusively with nonfiction.

Which makes me wonder: why? Part of the reason might simply be that more nonfiction books move through stores in a given year than fiction, but I wonder also if part of the reason is that nonfiction simply has a shorter shelf life. I can’t imagine many pop nonfiction titles from, say, the 1930s to the 1960s are still read much, because whatever fields those authors covered have changed sufficiently that their work is no longer useful save in a historical sense. Obviously, there are exceptions—both presidential candidates in the recent election cited Reinhold Niebuhr as an influence—but the general trend seems to hold.

But the novels of Bellow, Roth, and so forth are as fresh as the day they were published; I have ancient copies of For Whom the Bell Tolls and Tennyson’s Idylls of the King that are delightful. My used copy of John Barth’s Giles Goat-Boy is an original hardback. New copies of those works still sell. That’s a boon for readers but probably not so good for new writers, who have to compete with the masters. The result: a literary marketplace where it’s harder to break in as the number of established predecessors grows, leading to an equilibrium that favors nonfiction over fiction. “Monster political and celebrity books” flare brightly like supernovae, while the literary stars are dimmer but give persistent light to those who would see them. Meanwhile, writers become more dependent on universities and other forms of patronage to make it in a marketplace that, rightly or wrongly, doesn’t much value their work in a financial sense.

Highly recommended: the perils of pop philosophy

Julian Sanchez has a brilliant post regarding some Perils of pop philosophy, which he uses as a synecdoche for blogging, journalism, and other forms of expression/knowledge that can easily be reduced to facile dilettantism rather than genuine knowledge acquisition and, ultimately, extension.

If the first paragraph confuses you, skip to the second, although by doing so you’ll probably be committing one of the sins Sanchez discusses, some of which are implied in this paragraph, which I would consider his money shot for this post:

This brings us around to some of my longstanding ambivalence about blogging and journalism more generally. On the one hand, while it’s probably not enormously important whether most people have a handle on the mind-body problem, a democracy can’t make ethics and political philosophy the exclusive province of cloistered academics. On the other hand, I look at the online public sphere and too often tend to find myself thinking: “Discourse at this level can’t possibly accomplish anything beyond giving people some simulation of justification for what they wanted to believe in the first place.” This is, needless to say, not a problem limited to philosophy. And I think it may contribute to the fragmentation and political polarization we see online, which are generally explained in sociological terms as an “echo chamber” effect or “groupthink.”

Those are real enough, but there’s also the problem that the general glut of information and opinion makes it disconcertingly easy to kid yourself about how well you understand a particular topic.

As should be obvious, the whole post is highly recommended. If you haven’t read it by now, do so.

More words of advice for the writer of a negative review

Nigel Beale quotes Helen Gardner:

“Critics are wise to leave alone those works which they feel a crusading itch to attack and writers whose reputations they feel a call to deflate. Only too often it is not the writer who suffers ultimately but the critic…”

Beale asks: “Which is great and poetic and all, however, is silence enough?”

To me, the chief function of the critic ought to be to explore a work as honestly as possible and to illuminate it to the best of her abilities. This means openness, and it means being willing to say that a work is weak (and why), as well as showing how it is weak. In other words, you should be able to answer the who, what, where, when, why, and how of it, with an emphasis on the last two.

One should squelch “a crusading itch to attack” if one is attacking merely to attack, or merely because someone’s balloon is overinflated. For example, Tom Wolfe seems a frequent and, to my mind, unfair object of ridicule among critics. But if you’re rendering a knowledgeable opinion that happens to be negative, you’re doing what you should be doing, and what I strive to do. Often this means writing about why a book fails—perhaps too frequently.

Good reviews and Updike

Every attempt at review and criticism ought to be good—but that doesn’t mean positive. A review that is “good” in the sense of well-done and engaging might be a negative one. In an ideal world, the book should decide that as much as the critic.

John Updike’s rules for reviewing are worth following to the extent possible. I would emphasize three of them:

1. Try to understand what the author wished to do, and do not blame him for not achieving what he did not attempt.

2. Give him enough direct quotation–at least one extended passage–of the book’s prose so the review’s reader can form his own impression, can get his own taste.

5. If the book is judged deficient, cite a successful example along the same lines, from the author’s oeuvre or elsewhere. Try to understand the failure. Sure it’s his and not yours?

In the end, I think such rules are designed to keep the reviewer as honest as the reviewer can be. I keep coming back to the word “honesty” because it so well encapsulates the issues raised by Beale, Updike, Orwell, and others.

I especially like the “direct quotation” comment because the web imposes no artificial word limits, meaning that you should give the reader a chance to disagree with your assessment through direct experience. Quoting a sufficient amount of material will give others a chance to make their own judgments. Merit can be argued but not proven: thus, a critic can avoid both silence and unfair attack.

As the above shows, I like Beale’s answer—”no”—which seems so obvious as to barely need stating. I’d rephrase Gardner’s assertion to this: “beware of relentlessly and thoughtlessly attacking.”

The Aeron, The Rite of Spring, and Critics

In Malcolm Gladwell’s book Blink: The Power of Thinking Without Thinking, he quotes Bill Dowell, who was the lead researcher for Herman Miller during the development and release of the now-famous Aeron in the early 1990s; I’m sitting in one as I type this. The Aeron eventually sold fantastically well and became a symbol of boom-era excess, aesthetic taste, ergonomic control, excessive time at computers, and probably other things as well. But Dowell says that the initial users hated the chair and expressed their displeasure in focus groups and testing sites. According to him, “Maybe the word ‘ugly’ was just a proxy for ‘different.’ ”

That’s a long wind-up for an analogy that explains how Helen Gardner might be telling us that when we instinctively dislike a work, we might be reacting against novelty rather than judging its real merit, as critics and listeners notoriously did at the premiere of Stravinsky’s The Rite of Spring. She’s wise to warn us about that danger, because it’s how people who pride themselves on taste and knowledge become conservative, stuffy critics. If we’re saying something is “bad” merely because it’s “different,” then we’ve already effectively died aesthetically, because we’re no longer able to expand what “good” means. One thing I like about Terry Teachout’s criticism and his blog, About Last Night, is that he has strong opinions but still very much seems to have aesthetic suppleness.

But the Aerons and Ulysses of the world are exceedingly rare. Dune and Harry Potter aren’t among them. Joseph O’Neill’s Netherland at least might be, which I concede obliquely in my post about it.

Most works of art are, by definition, average.

The question is: to what extent is that a bad thing? Maybe none at all: an average novel doesn’t cause the death or disfigurement of children, or propagate social inequality, or do any number of other pernicious things. Its chief ill is that it wastes time for the person who reads it and perceives it as average (as opposed to the person who reads it and judges it extraordinary, which many Harry Potter readers have evidently done).

Milan Kundera thinks otherwise—in The Curtain, he writes, “… a mediocre plumber may be useful to people, but a mediocre novelist who consciously produces books that are ephemeral, commonplace, conventional—thus not useful, thus burdensome, thus noxious—is contemptible.” He gives himself a key out here: the word “consciously.” I doubt many writers consciously set out to produce commonplace books, or do so with that intent, and so may be rescued from the burden of Kundera’s scorn. Like the criminal justice system, Kundera separates those who knowingly commit a crime from those who do so accidentally.

You need to have read widely, however, to be capable of knowing the average from the incredible, and those whose effusive praise for Harry Potter and Dan Brown splatters the web show they haven’t. Hence, perhaps, the hesitance many Amazon reviewers show toward low scores, which one of Beale’s commenters observes.

The Aerons of Art

I now look at the Aeron as beautiful, and to me the over-stuffed office chairs that used to symbolize lawyerly and corporate status look as quaint as black and white photos of Harvard graduation classes without women or minorities. If we’re open to seeing the new, I think we’ll be safe enough in condemning the indifferent and pointing towards the genuinely astonishing works that are very much out there.

Edit: The Virginia Quarterly Review weighs in.

Architects of Fear: Conspiracy Theories and Paranoia in American Politics — George Johnson

Umberto Eco’s novel Foucault’s Pendulum is both more fun to read and more informative than George Johnson’s Architects of Fear: Conspiracy Theories and Paranoia in American Politics, which promises an in-depth explanation of conspiracy theories and theorists but doesn’t really deliver.

Johnson’s central claim is that conspiracy theorists see sinister links between a variety of unrelated or barely related occurrences while simultaneously lacking the ability to deal with ambiguity and change. They lack the critical rigor necessary to separate cause and effect, correlation and causation, coincidence and connection. It’s an intriguing idea that he should have explored more, at the expense of vapid histories of mostly right-wing conspiracy theorists. The John Birch Society and Lyndon LaRouche both get prominent billing, but both now seem dated; the pinnacle of their ideas’ power came with the Oklahoma City Bombing, after which conspiracy theorists of that style receded into very low-level background cultural noise—especially after 9/11 revealed real problems, as opposed to the invented ones Johnson chronicles.

Still, Architects of Fear is amusing for its depiction of the bogus reasoning used by conspiracy theorists. For example, Adam Weishaupt was a Bavarian university professor who “wanted to bring the spirit of rationalism and the philosophical Age of Enlightenment to his benighted land.” To do so, he founded a group he called the Illuminati, who have provided fodder for lousy Dan Brown-style novels ever since (along with the aforementioned Foucault’s Pendulum, which is excellent, showing that cultural flowers do sometimes spring forth from the most unusual places). In turn, conspiracy theorists have cited the Illuminati, the Knights Templar, and others as possessing secret, hermetic knowledge, which is “proven” in a variety of absurd ways. For example, one section from Architects of Fear says:

As conspiracy theorists are fond of pointing out, Weishaupt structured [the Illuminati] like a pyramid […] Eventually, thirteen ranks were established. Thirteen levels, as on the dollar-bill pyramid. As initiates learned new powers and secrets, they ascended the step of the pyramid, coming increasingly closer to the light.

But virtually all organizations are structured as pyramids, with a relatively small number of leaders at the top and a larger number of functionaries below them. The United States itself functions like this, with a President as the leader, and most corporations have a CEO who is blamed, fairly or not, for what goes well or poorly in an organization, despite the amount of control she might or might not have.

Alas: Johnson didn’t point this out, and it’s one of the many examples of where his analysis is flat or inadequate. He does sometimes hit useful points, as when he says, “Many of the founding fathers were Freemasons and sympathized with Masonic aims of universal brotherhood, but sharing symbols and ideas is different from participating in a plot.” It is, and I would’ve liked to hear more on the subject.

Thin research might prevent Johnson from saying more; most of the research he does have comes from newspaper articles, and most of the chapters consist of rehashes of those articles rather than original observations built on substantial knowledge. Architects of Fear could have been a better book, but it shows the weakness of journalists-turned-book-writers, as opposed to something like Dave Cullen’s Columbine, which shows the strengths. Along those lines, in another section Johnson says that:

Modern historians […] believe the Antichrist predicted in Revelation refers to Roman emperor Nero. The book apparently was written after Christ’s death to comfort Christians persecuted by Nero’s “one-world government,” the Roman empire.

But he cites no sources for this claim in the bibliography. I have no idea whether it’s actually true because I know little about historical scholarship surrounding the Bible. He also gives no citation for his “one-world government” quote, meaning that it might come from an actual source or might merely be set off in quotation marks to show how conspiracy adherents view the Roman Empire. As far as I can tell, however, no one has come along to do it better; books like Jane Parish and Martin Parker’s The Age of Anxiety: Conspiracy Theory and the Human Sciences sound too narrow, while Daniel Pipes’ Conspiracy: How the Paranoid Style Flourishes and Where It Comes From is more promising but still reminiscent of an amorphous genre. Nonetheless, they seem better alternatives than Architects of Fear.

Worth keeping? No.
Worth buying? No.
Worth reading? No.

Dune and its laughable honor code relative to Beowulf and Fast & Furious

Note: this is an addendum to an earlier post on Dune.

In Faculty Towers: The Academic Novel and Its Discontents,* Elaine Showalter quotes a letter that Kingsley Amis wrote as a student regarding the Old English requirement at Oxford: “The warriors and broken-down retainers who strut bawling across its pages repel by their childish fits of self-glorification and self-pity. The cheapest contemporary novel has more to teach us than those painful reminders of what we have long outgrown.” Although I think Old English has more merit than Amis grants it here, the sentiment regarding the values of that time is one I can get behind, and one of my major criticisms of Frank Herbert’s Dune is essentially that it is guilty of the same sins: childish warriors, ceaseless strutting, and the acceptance/embrace of retrograde cultural ideals regarding the roles of women and the need for killing.

You can see the worship of honor in Seamus Heaney’s translation of Beowulf, when the eponymous warrior’s death is occasion for twelve warriors to ride around the king and for them to “extoll… his heroic nature and exploits / and [give] thanks for his greatness; which was the proper / thing.” This scene wouldn’t be out of place in Dune, which is a problem for a novel written in 1965 rather than, say, the tenth century.

That’s not to say that these problems are limited to Dune, or to novels. Take the recent movie Fast & Furious, which is astonishingly good when measured by decibel. In it, Paul Walker is compared unfavorably to Vin Diesel when a character implies, with a completely straight face, that Walker has no “code.” It was one of many unintentionally funny moments, because the creators of the movie apparently missed, say, the last two hundred years of cultural development away from the idea of rigid masculinity codes and toward a greater sense of irony and fluidity. If your code of honor forces you to kill someone because they’ve disrespected your MacGuffin, or whatever, your most likely destination is jail, which is appropriate, and your code is likely to prevent or hamper you from adapting to new social or environmental situations. But Dune and Fast & Furious both present having codes and whatnot as positive. In that respect they resemble Beowulf.

I would like to imagine that at some point the culture as a whole will move beyond its silly obsession with tit-for-tat internecine identity fighting that causes people, usually of the male persuasion, to behave like moose who ceaselessly charge against one another because it’s mating season. Still, given the deep cultural, and maybe even biological, roots of this disorder, I’m not counting on this happening anytime soon, but maybe recognizing the malady, as Amis did, is a step toward dialectically surpassing it.


* Which I’m reading in preparation for a conference. More perhaps on that later.

Spent: Sex, Evolution, and Consumer Behavior — Geoffrey Miller

Spent: Sex, Evolution, and Consumer Behavior is worth reading, but only with a skeptical eye that will keep you from passively imbibing ideas like, “In a complex, media-rich society, perhaps only people with very good mental health can tolerate a high degree of openness without losing their equilibrium” (emphasis added). I suspect many if not most people would ignore “perhaps” and take away the larger message without questioning whether it has real backing. Like Malcolm Gladwell’s Outliers, Spent should be read but read with a doubter’s wariness of the false or ridiculous. Both Outliers and Spent tend to overstate their cases and exaggerate the power of the ideas they impart, and knowing that makes the books a better (and less misleading) read.

If I were in marketing or public relations, I would make sure to read Spent, if for no other reason than its unusual erudition relative to other pop science books and its delivery of a widely ignored framework for understanding products, branding and the like—including how individuals are turned off by branding and advertising as a reaction to it. I would like to imagine myself in the latter category but probably am not to the extent I would prefer. Spent might make me more so by acting as an inoculation against marketing.

One other structural note: Spent is probably three books: one about marketing, one about evolutionary mating theory, and one about consumerism. They’re not always integrated, but three good discrete books jumbled together definitely beat one indifferent standalone book.

I’ll begin with some of Spent’s problems:

1) Ignore the hokey dialog in Spent’s opening pages.

If I had read the first few pages of Spent in a book store, that might have turned me off it. The gimmick is annoying, yes, but don’t discard the book for that reason.

2) Miller puts too much stock in IQ testing and ignores or belittles the vast (and justified) controversy around it.

In All Brains Are the Same Color, Richard E. Nisbett discusses evidence regarding the mutability of IQ test scores in a racial context, but that context can be generalized to a broader domain. Malcolm Gladwell wrote about similar issues in None of the above: What I.Q. doesn’t tell you about race in The New Yorker, where he discusses the many problems of tests used to ascertain intelligence. He also wrote Outliers, which popularizes the “10,000 hours to mastery” idea. If the path to mastery is practice, people who conscientiously work toward improving IQ-like skills through schooling will in turn improve their scores. That most people don’t might be more indicative of motivation or of institutional problems than of genetic intelligence, especially since we still can’t get much beyond correlation in measurements of it. If you want more support for Miller’s perspective, William Saletan’s Created Equal offers some in Slate. Miller says:

Human intelligence has two aspects that make it a bit confusing at first. There is a universal aspect: intelligence as a set of psychological adaptations common to all normal humans… Then there is an individual-differences aspect: intelligence as a set of correlated differences in the speed and efficiency of those natural human capacities…

But he again leaves out intelligence as a function of skill and training.

In any event, this post isn’t meant to be a rehashing or literature review of knowledge on intelligence testing; to perceive the arguments in full is practically a Ph.D. in itself given the history, breadth, and depth of such arguments. The evidence for absolute IQ heritability and genetic intelligence is far weaker than Miller presents it, and it’s frustrating that he doesn’t recognize this.

3) Some statements are vacuous (if interesting).

Miller writes:

Like most reasonable people, I feel deep ambivalence about marketing and consumerism. Their power is awe-inspiring. Like gods, they inspire both worshipful submission and mortal terror

That’s more than a little contrived, and whatever power marketing and consumerism have is power that we give them. Most people probably never or seldom consider either, at least not in the academic terms Miller uses. Still, he uses the section to comic effect, as when he notes the things “exciting and appalling” about consumerism and marketing, including “frappuccinos, business schools, In Style magazine, Glock handguns, Jerry Bruckheimer movies, Dubai airport duty-free shops… the contemporary art market, and Bangkok.”

4) Elitism runs through the book, even when it’s disguised.

This is in part a continuation of the second point. Take, for example, this:

If we do choose to ignore the marketing revolution, we do so because we are terrified of a world in which our elite ideals lose their power to control the fruits of technology. (If you have the leisure time, education, and inclination to read this book, you are obviously a member of the elite.)

The marketing revolution is only as important as we let it be. Much of marketing comes to us through TV and the Internet, but not owning a TV (preferably without being this guy) and Firefox’s Adblock Plus plugin go a long way toward neutering marketing.

I am reminded of a comment from Asher Lev’s uncle in My Name is Asher Lev: “I read. A watchmaker does not necessarily have to be an ignoramus.” So too with people in general.

Sometimes I’m susceptible to nodding through the elitist comments when they flatter my preconceived ideas, as with this statement:

People indoctrinated in hedonistic individualism, religious fundamentalism, or patriarchal nationalism—that is, 99 percent of humanity—are not accustomed to thinking imaginatively about how to change society through changing its behavioral norms and institutional habits.

That might be true, but might there also be a less snide way of stating it?

5) Maybe, maybe not.

I’m not convinced that “Marketing is central to culture,” which is the title of Spent’s third chapter, or at least not unless we’re to stretch marketing beyond a useful definition. I do like the way Miller calls marketing “… ideally, a systematic attempt to fulfill human desires by producing goods and services that people will buy.” Not that the actual marketing often lives up to that, but it’s impressive that Miller is willing to concede that given his ambivalence about the subject and his knowledge of how prone marketing and consumerism are to abuse.

Nations aren’t exactly marketing or signaling in all the examples Miller gives in his chapter “Flaunting Fitness,” like when he says that they “compete to show off their socioeconomic strength through wasteful public ‘investments’ in Olympic facilities, aircraft carriers, manned space flight, or skyscrapers.” Some of that is there for humorous effect, but aircraft carriers and manned space flight both improve their associated technologies enormously, giving us modern-day marvels like GPS and massive cruise ships, while skyscrapers allow denser human interactions of the sort that my perhaps favorite economist, Edward Glaeser, describes in his many papers on the subject.

Strengths

The book is filled with ideas, which ought to be evident even from the weaknesses. Brilliant summations occur in places, as when Miller writes, “… plausible deniability and adaptive self-deception allow human social life to zip along like a maglev monorail above the ravines and crevasses of tactical selfishness, by allowing the most important things to go unsaid—but not unimagined.” The metaphor is overwrought, yes, but the sentiment reinforces the “Games People Play” chapter of Steven Pinker’s The Stuff of Thought. One can see ideas from his book reaching into others and vice-versa, which I consider a strength.

Humor

In talking about “Narcissism and Capitalism,” Miller says that the “core symptoms” of narcissism “lead narcissists to view themselves as stars in their own life stories, protagonists in their own epics, with everyone else a minor character. (They’re like bloggers in that way.)” The dig about bloggers too frequently rings true, even when given in jest.

Some of the funny parts of Spent might not be intended as such, as when Miller deadpans, “The typical Vogue magazine ad shows just two things: a brand name, and an attractive person.” Someone must think this is effective, and I wonder if those ads are part of the fifty percent of one’s advertising budget that’s wasted.

Another Brick

Nonfiction books like this one, most of Gladwell’s (questionable) work, Pinker’s, Ariely’s, and Zimbardo’s, along with the other recent pop professor books, are bricks in the road to greater understanding. They remind us of and help us correct our foibles, and even those of us who consider ourselves virtuous would do well to remember that “the renouncers [of materialism] remain awesomely self-deceived in believing that they have left behind the whole castle of self-display just by escaping the dungeon of runaway consumerism.” Instead, they take to other displays of taste, of artistic creation, of intellectual prowess, and the like, perhaps by writing book/literary blogs. Nonetheless, those activities are probably more socially productive than, say, McMansions, yachts, and SUVs. Spent helps us engage and grapple with those phenomena and our society as a whole, and even some of the weaknesses I enumerate above aren’t as weak as I imply, or else I wouldn’t spend as much time on them as I do.

(See also my earlier post about Spent and vacuous movies.)

(The New York Times also has a vacuous article about the book in the Times’ Science section. If I were one of those irritating triumphalist bloggers, I might point to this as an example of the superiority of Internet reporting.)

On marketing, movies, and Geoffrey Miller's Spent: Sex, Evolution, and Consumer Behavior and more

In “Why are so many movies awful?“, I quoted the fascinating New Yorker story “The Cobra: Inside a movie marketer’s playbook,” which says:

One of the oldest jokes in the business is that when a studio head takes over he’s given three envelopes, the first of which contains the advice “Fire the head of marketing.” Nowadays, though, former marketers, such as Oren Aviv, at Disney, and Marc Shmuger, at Universal, often run the studios. “Studios now are pimples on the ass of giant conglomerates,” one studio’s president of production says. “So at green-light meetings it’s a bunch of marketing and sales guys giving you educated guesses about what a property might gross. No one is saying, ‘This director was born to make this movie.’ ”

Geoffrey Miller’s book Spent: Sex, Evolution, and Consumer Behavior says:

That a company should produce what people desire, instead of trying to convince people to buy what the company happens to make, was a radical idea that seems obvious only in retrospect.

But maybe that theory works better in consumer goods purchases than in artistic or aesthetic fields, which movies are nominally supposed to be. The book so far intrigues even if its claims seem overstated; you can read more about it courtesy of Marginal Revolution here and here, which inspired me to get the book.

My guess so far at 40 pages in is that Spent will have lots of new ideas that don’t extend as far as Miller wants them to, but that it’s still a nice way to avoid mindless materialism (for more, see Paul Graham’s “Stuff” or Alain de Botton’s The Consolations of Philosophy) without resorting to overwrought pieces like Marx’s “Commodity Fetishism” or Horkheimer and Adorno’s The Dialectic of Enlightenment. As Miller says on page 16, “Evolutionary psychology can offer a deeper, more radical critique of consumerist culture than anything developed by Marx, Nietzsche, Veblen, Adorno, Marcuse, or Baudrillard,” as if rattling off the humanities’ intellectual grad school dream team. I’m not fully convinced but will happily hear the case.


EDIT: I wrote a full post about Spent here.


Commenting on comments

In “Comment is King,” Virginia Heffernan writes in the New York Times, “What commenters don’t do is provide a sustained or inventive analysis of Applebaum’s work. In fact, critics hardly seem to connect one column to the next.” She notes that comments are often vitriolic and ignorant, which will hardly surprise those used to reading large, public forums.

She’s right. But part of the issue is that newspapers seem to encourage hit-and-run commenting because of their sheer size and, because of their attempt to be universal, also often hit the lowest common denominator. The latter is also one reason why Hacker News has a vastly better signal-to-noise ratio than, say, Digg.com.

In addition, think about this: if you’re going to incisively, laboriously, and knowledgeably comment on someone’s post or column, you’re probably better off getting your own blog and linking to the person’s post, thus developing a following of your own. It’s not really worth spending forty-five minutes or an hour on an extensive critique that, as a comment, is not likely to be read or remembered by many people. When it becomes part of an ongoing narrative, however, it becomes more meaningful and important to the person who is writing.

That’s not to say comments have no place in blogs or newspapers, and I always read the comments on The Story’s Story and Grant Writing Confidential with care and attention. But I also understand the incentives against careful commenting and for trolling. Furthermore, in a typical comments section, it’s hard to tell who is a lunatic, who is worth listening to, who has background on the subject, and so forth. “Comment is King” now has five pages of comments attached, and I don’t feel like wading through them. With a single blog, however, I can relatively easily evaluate a handful of posts and decide if the rest are worth reading. Therefore I’m more likely to invest in a blog post replying to a story than in a comment on that story.

You might notice that I’m not responding to Heffernan’s article in the comments section of the New York Times—but I might post a link to this response. Or maybe I’ll send her an e-mail. Heffernan might want to hear from me.

As a tangential point, comments that cite books or substantive articles are almost always better than blue-sky comments; maybe encouraging people to cite their sources would improve online discourse.

EBook Monday: Steven Berlin Johnson, Google Books, and more

* Steven Berlin Johnson speculates on “How the E-Book Will Change the Way We Read and Write: […] a future with more books, more distractions — and the end of reading alone.”

* I keep being tempted by the Amazon Kindle, despite my many posts on Digital Restrictions Management (DRM) and other problems with the device. Then I see a post like “Amazon has banned my account – my Kindle is now a (partial) brick” and all those bad feelings return. The poster in question apparently returned too many items to Amazon, causing the company to suspend his account and his Kindle to stop working.

* In other electronic news, a warning: Google Book Search settlement gives Google a virtual monopoly over literature. What am I, a random Joe, supposed to do about it besides joining the Electronic Frontier Foundation? I have no idea. Still, the headline might be more sensationalistic than it should be, as this paragraph shows:

But the real risk is that Google could end up as the sole source of ultimate power in book discovery, distribution and sales. As the only legal place where all books can be searched, Google gets enormous market power: the structure of their search algorithm can make bestsellers or banish books to obscurity. The leverage they attain over publishing and authors through this settlement is incalculable.

(Emphasis added.)

I added a comment pointing out that the real response to this should lie with Congress and copyright law: at the moment, virtually everything published after 1923 is effectively under copyright. The solution is to start rolling the copyright year forward, so that 86 years (2009 – 1923) after a work is published, it automatically enters the public domain. Actually, 70 years would be nice, but the various Senators from Disney passed the Mickey Mouse Protection Act, making that seem unlikely to happen, so I stick to the (slightly) more pragmatic hope for 86 years as a possible reasonable length for copyright.

If the material in question isn’t in copyright, Google has no special power over it. Two problems solved at once.

* Speaking of all things Google, Nick Carr’s post “Google in the middle” has some brilliant parts and some absolutely wrong parts. Being the kind of person I am, I like to start with the wrong parts:

For much of the first decade of the Web’s existence, we were told that the Web, by efficiently connecting buyer and seller, or provider and user, would destroy middlemen. Middlemen were friction, and the Web was a friction-removing machine.

We were misinformed. The Web didn’t kill mediators. It made them stronger.

But Carr misses the fact that (a) mediators are easier to replace than ever, since I only have to click on another one, and (b) point (a) has made other mediators ever easier to find: Hacker News has become my chief aggregator, for example, and Google has nothing to do with it. Furthermore, if I want to use a different search engine, it’s only a click away.

The web is still a friction-removing machine, even if Google has an unusual amount of (probably temporary) power.

On the other hand, this bit is brilliant:

As I’ve written before, the essential problem facing the online news business is oversupply. The cure isn’t pretty. It requires, first, a massive reduction of production capacity – ie, the consolidation or disappearance of lots of news outlets. Second, and dependent on that reduction of production capacity, it requires news organizations to begin to impose controls on their content. By that, I don’t mean preventing bloggers from posting fair-use snippets of articles. I mean curbing the rampant syndication, authorized or not, of full-text articles. Syndication makes sense when articles remain on the paper they were printed on. It doesn’t make sense when articles float freely across the global web. (Take note, AP.)

Once the news business reduces supply, it can begin to consolidate traffic, which in turn consolidates ad revenues and, not least, opens opportunities to charge subscription fees of one sort or another – opportunities that today, given the structure of the industry, seem impossible. With less supply, the supplier gains market power at the expense of the middleman.

Newspapers are engaged in an almost Marxian race to the bottom in terms of production, and the more efficient the Internet makes news gathering and dissemination, the worse this race will become. It was obvious to me in 2002 (which I wrote about in “Media myopia and the New Yorker”), when I graduated from high school, that newspapers were bound to contract enormously (and catastrophically for those employed by newspapers); I was tempted to go to a big-time journalism school and try to make it as a journalist, but a rare bout of good sense stopped me. This is why.

(Incidentally, the New York Times has also noticed that J-Schools are Playing Catchup because of changes in journalism. Strangely enough, the Times seems to imply that journalism might become more like something akin to Grant Writing Confidential: people who find niches and then write the hell out of their subject.)