What happened with Deconstruction? And why is there so much bad writing in academia?

“How To Deconstruct Almost Anything” has been making the online rounds for 20 years for a good reason: it’s an effective satire of writing in the humanities and some of the dumber currents of contemporary thought in academia.* It also usually raises an obvious question: How did “Deconstruction,” or its siblings “Poststructuralism” or “Postmodernism,” get started in the first place?

My take is a “meta” idea about institutions rather than a direct comment on the merits of deconstruction as a method or philosophy. The rise of deconstruction has more to do with the needs of academia as an institution than the quality of deconstruction as a tool, method, or philosophy. To understand why, however, one has to go far back in time.

Since at least the 18th Century, writers of various sorts have been systematically (key word: before the Enlightenment and Industrial Revolution, investigations were rarely systematic by modern standards) asking fundamental questions about what words mean and how they mean them, along with what works made of words mean and how they mean them. Though critical ideas go back to Plato and Aristotle, Dr. Johnson is a decent place to start. We eventually began calling such people “critics.” In the 19th Century this habit gets a big boost from the Romantics and then writers like Matthew Arnold.

Many of the debates about what things mean and why have inherent tensions, like: “Should you consider the author’s time period or point in history when evaluating a work?” or “Can art be inherently aesthetic or must it be political?” Other tensions can be formulated along similar lines. Different answers predominate in different periods.

In the 20th Century, critics start getting caught up in academia (I. A. Richards is one example); before that, most of them were what we’d now call freelancers who wrote for their own fancy or for general, educated audiences. The shift happens for many reasons, and one is the invention of “research” universities; this may seem incidental to questions about Deconstruction, but it isn’t, because Deconstruction wouldn’t exist, or wouldn’t exist in the way it does, without academia. Anyway, research universities get started in Germany, then spread to the U.S. through Johns Hopkins, which was founded in 1876. Professors of English start getting appointed. In research universities, professors need to produce “original research” to qualify for hiring, tenure, and promotion. This makes a lot of sense in the sciences, which have a very clear discover-and-build model in which new work is right and old work is wrong. This doesn’t work quite as well in the humanities, and especially in fields like English.

English professors initially study words—these days we’d primarily call them philologists—and where they come from, and there is also a large contingent of professors of Greek or Latin who also teach some English. Over time English professors move from being primarily philological in nature towards being critics. The first people to really ratchet up the research-on-original-works game were the New Critics, starting in the 1930s, when they are young whippersnappers who can ignore their elders in part because getting a job as a professor is a relatively easy, relatively genteel endeavor.

New Critics predominate until the 1950s, when Structuralists seize the high ground (think of someone like Northrop Frye) and begin asking what sorts of universal questions literature might ask, or what universal qualities it might possess. After 1945, too, universities expand like crazy due to the G.I. Bill, and then baby boomers go to college. Pretty much anyone who can get a PhD can get a tenure-track job teaching English. That lets into academia waves of people with new ideas who want to overthrow the ideas of their elders. In the 1970s, Deconstructionists (otherwise known as Post-structuralists) show up. They’re the French theorists who are routinely mocked outside of academia for obvious reasons:

The move from a structuralist account in which capital is understood to structure social relations in relatively homologous ways to a view of hegemony in which power relations are subject to repetition, convergence, and rearticulation brought the question of temporality into the thinking of structure, and marked a shift from a form of Althusserian theory that takes structural totalities as theoretical objects to one in which the insights into the contingent possibility of structure inaugurate a renewed conception of hegemony as bound up with the contingent sites and strategies of the rearticulation of power.

That’s Judith Butler, quoted in Steven Pinker’s witty, readable The Sense of Style, in which he explains why this passage is terrible and how to avoid inflicting passages like it onto others. Inside of academia, she’s considered beyond criticism.

In each generational change of method and ideology, from philology to New Criticism to Structuralism to Poststructuralism, newly-minted professors needed to get PhDs, get hired by departments (often though not always in English), and get tenure by producing “original research.” One way to produce original research is to denounce the methods and ideas of your predecessors as horse shit and then set up a new set of methods and ideas, which can also be less charitably called “assumptions.”

But a funny thing happens to the critical-industrial complex in universities starting around 1975: the baby boomers finish college. The absolute number of students stops growing and even shrinks for a number of years. Colleges are left with all these tenured professors who, because of tenure, can’t be fired. So colleges stop hiring (see Menand’s The Marketplace of Ideas for a good account of this dynamic).

Colleges never really hired en masse again.

Other factors also reduced or discouraged the hiring of professors. In the 1980s and 1990s, changes to federal age-discrimination law ended mandatory retirement. Instead of getting a gold watch (or whatever academics get), professors could continue being full profs well into their 70s or even 80s. Life expectancies lengthened throughout the 20th Century, and by now a professor who gets tenure at, say, 35 could still be teaching at 85. In college I had a couple of professors who should have been forcibly retired at least a decade before I encountered them, but forcing them out is no longer possible.

Consequently, the personnel churn that used to produce new dominant ideologies in academia stops around the 1970s. The relatively few new faculty slots from 1975 to the present go to people who already believe in Deconstructionist ideals, though those ideals tend to go by the term “Literary Theory,” or just “Theory,” by the 1980s. When hundreds of plausible applications arrive for each faculty position, it’s very easy to select for comfortable ideological conformity. As noted above, the humanities don’t even have the backstop of experiment and reality on which radicals can base major changes. People who are gadflies like me can get blogs, but blogs don’t pay the bills and still don’t have much pull inside the academic edifice itself. Critics might also write academic novels, but those don’t seem to have had much of an impact on those inside. Perhaps the most salient example of institutional change is the rise of the MFA program for both undergrads and grad students, since those who teach in MFA programs tend to believe that it is possible to write well and that it is possible and even desirable to write for people who aren’t themselves academics.

Let’s return to Deconstruction as a concept. It has some interesting ideas, like this one: “he asks us to question not whether something is an X or a Y, but rather to get ‘meta’ and start examining what makes it possible for us to go through life assigning things to ontological categories (X or Y) in the first place” and others, like those pointing out that a work of art can mean two opposing things simultaneously, and that there often isn’t a single best reading of a particular work.

The problem, however, is that Deconstruction’s sillier adherents—who are all over universities—take a misreading of Saussure to argue that Deconstruction means that nothing means anything, except that everything means that men, white people, and Western imperialists oppress women, non-white people, and everyone else, and hell, as long as we’re at it capitalism is evil. History also means nothing because nothing means anything, or everything means nothing, or nothing means everything. But dressed up in sufficiently confusing language—see the Butler passage from earlier in this essay—no one can tell what if anything is really being argued.

There has been some blowback against this (Paglia, Falck, Windschuttle), but the sillier parts of Deconstructionist / Post-structuralist nonsense won, and the institutional forces operating within academia mean that that victory has been depressingly permanent. Those forces show no signs of abating. Almost no one in academia asks, “Is the work I’m doing actually important, for any reasonable value of ‘important?'” The ones who ask it tend to find something else to do. As my roommate from my first year of grad school observed when she quit after her M.A., “It’s all a bunch of bullshit.”

The people who would normally produce intellectual churn have mostly been shut out of the job market, or have moved to the healthier world of ideas online or in journalism, or have been marginalized (Paglia). Few people welcome genuine attacks on their ideas and few of us are as open-minded as we’d like to believe; academics like to think they’re open-minded, but my experience with peer review thus far indicates otherwise. So real critics tend to follow the “Exit, Voice, and Loyalty” model described by Albert Hirschman in his eponymous book and exit.

The smarter ones who still want to write go for MFAs, where the goal is to produce art that someone else might actually want to read. The MFA option has grown for many reasons, but one is as an alternative for literary-minded people who want to produce writing that might matter to someone other than other English PhDs.

Few important thinkers have emerged from the humanities in the last 25 or so years. Many have in the sciences, as should be apparent from the Edge.org writers. As John Brockman, the Edge.org founder, says:

The third culture consists of those scientists and other thinkers in the empirical world who, through their work and expository writing, are taking the place of the traditional intellectual in rendering visible the deeper meanings of our lives, redefining who and what we are.

One would think that “the traditional intellectual” would wake up and do something about this. There have been some signs of this happening—like Franco Moretti or Jonathan Gottschall—but so far those green shoots have been easy to miss and far from the mainstream. “Theory” and the bad writing associated with it remain king.

Works not cited but from which this reply draws:

Menand, Louis. The Marketplace of Ideas: Reform and Resistance in the American University. New York: W.W. Norton, 2010.

Paglia, Camille. “Junk Bonds and Corporate Raiders: Academe in the Hour of the Wolf.” Arion, Third Series, 1.2 (1991): 139-212.

Paglia, Camille. Sex, Art, and American Culture: Essays. 1st ed. New York: Vintage, 1992.

Falck, Colin. Myth, Truth and Literature: Towards a True Post-modernism. 2nd ed. New York: Cambridge University Press, 1994.

Windschuttle, Keith. The Killing of History: How Literary Critics and Social Theorists are Murdering Our Past. 1st Free Press ed. New York: Free Press, 1997.

Star, Alexander. Quick Studies: The Best of Lingua Franca. 1st ed. Farrar, Straus and Giroux, 2002.

Cusset, François. French Theory: How Foucault, Derrida, Deleuze, & Co. Transformed the Intellectual Life of the United States. Trans. Jeff Fort. Minneapolis: University of Minnesota Press, 2008.

Pinker, Steven. The Sense of Style: The Thinking Person’s Guide to Writing in the 21st Century. New York: Viking Adult, 2014.


* Here is one recent discussion, from which the original version of this essay was drawn. “How To Deconstruct Almost Anything” remains popular for the same reason academic novels remain popular: it is often easier to criticize through humor and satire than direct attack.

Bad boy Amazon and George Packer’s latest salvo

Until five or so years ago, every time I read yet another article about the perilous state of literary fiction I’d see complaints about how publishers ignore it in favor of airport thrillers and stupid self-help and romance and Michael Crichton and on and on. On or about December 2009 everything about the book business and human nature changed. Today, I read about how publishers are priestly custodians of high culture and the Amazon barbarians are knocking at the gate. Although George Packer doesn’t quite say as much in “Cheap Words: Amazon is good for customers. But is it good for books?”, it fits the genre.

Packer is concerned that Amazon has too much power and that it is indifferent to quality. By contrast, the small publisher Melville House “puts out quality fiction and nonfiction,” while “Bezos announced that the price of best-sellers and new titles would be nine-ninety-nine, regardless of length or quality” and “Several editors, agents, and authors told me that the money for serious fiction and nonfiction has eroded dramatically in recent years; advances on mid-list titles—books that are expected to sell modestly but whose quality gives them a strong chance of enduring—have declined by a quarter.”

Maybe all of this is true, but here’s another possibility: thanks to Amazon, people writing the most abstruse literary fiction possible don’t have to beg giant multinational megacorps for a print run of 3,000 copies. Amazon doesn’t care if you’re going to sell one million or one hundred copies; you still get a spot, and now midlist authors aren’t going to be forcibly ejected from the publishing industry by publishing houses.

Read Martha McPhee’s novel Dear Money. It verges on annoying at first but shifts to being delightful. The protagonist, Emma Chapman, is a “midlist” novelist sinking towards being a no-list novelist; pay attention to her descriptions of “the details of how our lives really were,” how “not one of my novels had sold more than five thousand copies,” and how “the awards by this point had been received long ago.” She makes money from teaching, not fiction, and her money barely adds up to rent and private schools and the rest of the New York bullshit. Under the system Packer describes, Emma is a relative success.

Since Dear Money is a novel, everything works out in the end, but in real life for many writers things don’t work out. Still, I would note that self-publishing as the norm has one major flaw: the absence of professional content editors, who are often key to writers’ growth and can often turn a mess with potential into a great book (here’s one example of a promising self-published book that could’ve been saved; there are no doubt others).

Still, Amazon must save more books than it destroys. If you read any amount of literary criticism, journalism, or scholarly articles, you’ve read innumerable sentences like these: “[Malcolm] Cowley persuaded Viking to accept ‘On the Road’ after many publishers had turned it down. He worked to get Kerouac, who was broke, financial support.” How many Kerouacs and Nabokovs didn’t make it to publication, and are unknown to history because no Cowley persuaded a publisher to act in its own best interests? How many will now, thanks to Amazon?

Having spent half a decade banging around on various publishers’ and agents’ doors I’m not convinced that publishers are doing a great job of gatekeeping. I’d also note that it may be possible for many people to sell far fewer copies of a work and still be “successful”: a publisher apparently needs to sell at least 10,000 copies of a standard hardcover release, at $15 – $30 per hardcover and $9.99 – $14.99 for each ebook, to stay afloat. If I sell 10,000 copies of Asking Anna for $4 to $10 each, I’ll be doing peachy.
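To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. The royalty rates are my own illustrative assumptions (roughly 12.5% of list price for a traditionally published hardcover, roughly 70% of sale price for a self-published ebook in Amazon’s usual pricing band), not figures from Packer’s article or from any publisher:

# Rough comparison of what an author keeps under each route.
# All rates are assumptions for illustration, not quoted figures.

def traditional_take(copies, list_price, royalty_rate=0.125):
    """Author's cut when a publisher sells the hardcover (assumed ~12.5% of list)."""
    return copies * list_price * royalty_rate

def self_published_take(copies, sale_price, royalty_rate=0.70):
    """Author's cut when self-publishing an ebook (assumed ~70% of sale price)."""
    return copies * sale_price * royalty_rate

print(traditional_take(10_000, 25.00))    # 31250.0 on a $25 hardcover
print(self_published_take(10_000, 7.00))  # 49000.0 on a $7 ebook

Under those assumed rates, 10,000 self-published copies at a midrange price already beat 10,000 hardcovers sold the traditional way, which is the sense in which “doing peachy” seems plausible.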

Amazon has done an incredible job setting up a fantastic amount of infrastructure, physical and electronic, and Packer doesn’t even mention that.

Amazon also offers referral fees to anyone with a website; most of the books linked to in this blog have my own referral tag attached. Not only does Amazon give a fee if someone buys the linked item directly, but Amazon gives out the fee for any other item that person buys the same day. So if a person buys a camera lens for $400 after clicking a link in my blog, I get a couple bucks.

It’s not a lot, and I doubt anyone is quitting their day job to live on referral links, but it’s more than zero. I like to say that I’ve made tens of dollars through those fees; by now I’ve made a little more, though not so much that it’ll pay for both beer and books.
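For concreteness, here is a similar sketch of the referral arithmetic. The commission rates below are placeholders I have assumed for illustration; Amazon’s actual rates vary by product category and have changed over time:

# Illustrative referral-fee arithmetic; the rates are assumptions, not Amazon's published schedule.

def referral_fee(order_total, commission_rate):
    """Fee paid to the referring site for a qualifying order."""
    return order_total * commission_rate

print(referral_fee(400.00, 0.01))  # 4.0  -- a couple bucks on a $400 lens at an assumed 1%
print(referral_fee(400.00, 0.04))  # 16.0 -- noticeably more if the category rate were 4%

Either way, the per-order amounts are small, which is why the running total stays in the tens of dollars rather than anything resembling a salary.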

Publishing’s golden age has always just ended. In 1994, Larissa MacFarquhar could write in the introduction to Robert Gottlieb’s Paris Review interview that in the 1950s—when Gottlieb got started—”publishers were frequently willing and able to lose money publishing books they liked, and tended to foster a sense that theirs were houses with missions more lofty than profit.” Then Gottlieb is quoted directly:

It is not a happy business now [. . .] and once it was. It was smaller. The stakes were lower. It was a less sophisticated world.

Today publishers are noble keepers of a sacred flame; before December 2009 they were rapacious capitalists. Today writers can also run a million experiments in what people want to read. Had I been an editor and had 50 Shades of Grey crossed my desk, I would’ve rejected it. Oops.

But the Internet is very good at getting to revealed preferences. Maybe Americans say they want to read high-quality books but many want to read about the stuff they’re not getting in real life: sex with attractive people; car chases; being important; being quasi-omniscient; and so on. Some people who provide those things are going to succeed.

More than anything else, the Internet demonstrates that a lot of people really like porn (in its visual forms and its written form). People want what they want, and while I, not surprisingly, think that a lot of people would be better off reading more, and reading more interesting stuff, on a fundamental level everyone lives their own life how they see fit. A lot of people would also be better off if they ran more, watched reality TV less, ate more broccoli, and did the other usual stuff. The world is full of ignored messages. In the end each individual suffers or doesn’t according to the way they live their own life.

I don’t love Amazon or any company, but Amazon and the Internet more generally have enabled me to do things that wouldn’t have been possible or pragmatic in 1995. Since Amazon is ascending, however, it’s the bad guy in many narratives. Big publishers are wobbling, so they’re the good guys. We have always been at war with Eastasia and will always be at war with Eastasia.

Packer is a good writer, skilled with details and particularities, but he can’t translate those skills into generalities. He fits stories into political / intellectual frameworks that don’t quite fit, as happened with his Silicon Valley article (I responded: “George Packer’s Silicon Valley myopia“). Packer’s high quality makes him worth responding to. But Packer presumably ignores his critics on the uncouth Interwebs, since he occupies the high ground of the old-school New Yorker. Too bad. There are things to be learned from the Internet, even about the past.

Exploring the limits in art, writing, and science

In the poorly-titled but otherwise interesting essay “The Disquiet of Ziggy Zeitgeist: Unsettled by the sense that reality itself is dwindling, fading like sunstruck wallpaper,” Henry Allen says that “For the first time in my 72 years, I have no idea what’s going on,” because a lot of culture has splintered, for lack of a better term, and as a result “I don’t know what’s going on. I doubt that anyone does.”

That sense is a result of reaching boundaries or borders in many if not most artistic fields. In music, for example, John Cage famously “recorded” a track that is entirely silent. Composers have created songs or symphonies or whatever that seem indistinguishable from noise. Popular music’s last major style shift was the early 90s, with rap and grunge; since then, we’ve mostly heard dance-disco-hip-hop variations.

In the fine arts, the avant-garde is probably dead, as Camille Paglia has, perhaps not surprisingly, argued in various places for twenty years. What people call concept art or non-art or art from life appears indistinguishable from noise or pranks. Or, as Allen says, “Now I go to New York and look at a work of art in Chelsea and say: ‘Oh, that’s one of those.’ (Dripping, elephant dung, monochrome, squalor, scribbling.)”

Literature in some ways “got there” first, with Joyce (Finnegans Wake) and Beckett (whose novels are the whole of boredom) about which I wrote more in “Martin Amis, the essay, the novel, and how to have fun in fiction.” If you’re trying to write a novel that truly pushes the boundaries of the novel, you’re going to have a very hard time doing so while being comprehensible to readers.

Sexual mores have fallen too: this weekend I’ve been reading Katherine Frank’s book Plays Well in Groups: A Journey Through the World of Group Sex, in which she describes gang bangs involving hundreds of participants, along with BDSM and assorted other sex adventures. Most people in developed countries have nothing between them and that, provided they want it. As a side note, she describes swingers who were featured on a TV show called Swing; the swingers talked to the show’s crew, who said, in this rendition, “We don’t know how you’ve done it but most people would kill to have this life.” But you don’t have to kill for that life: you only have to love for it, and most people probably could have it, or a version of it, if they want it. No murder necessary!

Porn has also reached limits or gotten asymptotically close to them. The market has devolved from the monolithic Playboy to innumerable small, online outlets, some commercial and some not, and porn faces the same issue any information now does: perpetual availability. Although I’m not an expert, porn videos or pictures from, say, 2005 are still being passed around and viewed in 2013 and may continue to be in 2023. There is already more out there than a single person can digest and the amount is growing over time. Curating, searching, and sorting become the problem amid what is effectively infinite supply. If you want it, you can probably already find it, and if you don’t like what you find, you can probably make it for a couple hundred to a couple thousand dollars.

Video games are an intriguing exception to the trends described above. They’re a young medium, since they’ve only been popular in the last 30 to 40 years and have consequently seen a tremendous explosion in sophistication: compare Pong to a modern game versus a novel published in 1980 to a novel published in 2012. Video games also piggyback on growing computational capabilities. Video games, like the Internet, are still in relative infancy, and they appear to be very far from technical or comprehensibility limits.

I’m not saying that art or artists or culture is dead, but I am saying that the boundaries of comprehensibility have been reached in many fields. If I were more of a blowhard I would also pontificate about the role of the Internet in this—Allen picks 1993 by coincidence, perhaps, but 1993 was also just before the Internet reached the masses in the developed world. Within the next decade or two more than half of the people on the planet will probably get access, and that may further splinter culture. Already it’s possible for people with weird, niche interests to easily explore those interests, like Borgen, in the absence of social feedback.

Some fields, like math, appear inexhaustible. Others, like delivering things people want (which goes by the otherwise dull name “business”) appear if not inexhaustible then nearly so, since material desires keep expanding with GDP. I also doubt that art per se will ever be exhausted; the limits of comprehensibility don’t mean people will stop making art, only that we have to find ways to make it meaningful without being able to push constantly against a conservative establishment, which has been the animating force since Romanticism and now makes little sense.

The critic’s temperament and the problem of indifference: Orwell, Teachout, and Scalzi

In “Confessions of a Book Reviewer,” George Orwell points to an idea that almost any critic, or any person with a critical / systematic temperament, will eventually encounter:

[. . . ] the prolonged, indiscriminate reviewing of books is a quite exceptionally thankless, irritating and exhausting job. It not only involves praising trash–though it does involve that, as I will show in a moment–but constantly INVENTING reactions towards books about which one has no spontaneous feelings whatever. The reviewer, jaded though he may be, is professionally interested in books, and out of the thousands that appear annually, there are probably fifty or a hundred that he would enjoy writing about.

He’s not the only one; in 2004 Terry Teachout wrote:

[. . . ] I reviewed classical music and jazz for the Kansas City Star. It was great fun, but it was also a burden, not because of the bad concerts but because of the merely adequate ones–of which there were far more than too many.

Teachout uses the term “adequate.” Orwell says reviewers are “INVENTING reactions towards books about which one has no spontaneous feelings whatever.” Together, they remind me of what I feel towards most books: neutrality or indifference, which is close to “no spontaneous feelings.” Most books, even the ones I don’t especially like, I don’t hate, either. Hatred implies enormous emotional investment of the sort that very few books are worth. Conventionally bad books are just dull.

Still, writing about really bad books can be kind of fun, at first, especially when the bad books are educational through demonstrating what not to do. But after a couple of delicious slams, anyone bright and self-aware has to ask: Why bother wasting time on overtly bad books, especially if one isn’t being paid?

That leaves the books one loves and the books that don’t inspire feelings. The books one loves are difficult to praise without overused superlatives. The toughest books, however, are Teachout’s “merely adequate ones,” because there’s really nothing much to say and less reason to say it.

Critics may still write about indifferent books for other reasons; John Scalzi describes some purposes criticism serves, and he includes consumer reporting, exegesis, instruction, and polemics among the critic’s main purposes.* Of those four, I try to shoot for numbers two and three, though I used to think number one exceedingly valuable. Now I’ve realized that number one is almost entirely useless for a variety of reasons, the most notable being that literary merit and popularity have little if any relationship, which means that critics asking systematic questions about what makes good stuff good and bad stuff bad are mostly wasting their time. Polemics can be fun, but I’d rather focus on learning and understanding than on invective.


* Scalzi also says:

there are ways to be negative — even confrontational — while at the same time persuading others to consider one’s argument. It’s a nice skill if you have it, and people do. One of my favorite critiques of Old Man’s War came from Russell Letson in the pages of Locus, in which he described tossing the book away from him… and then grabbing it up to read again. His review was not a positive review, and it was a confrontational review (at least from my point of view as the author) — and it was also a good and interesting and well-tooled critical view of the work.

All of which is to note that the act of public criticism is also an act of persuasion. If a critic intends a piece to reach an audience, to be heard by an audience and then to have that audience give that critical opinion weight, then an awareness of the audience helps.

I think that one challenge for most modern writers, and virtually all self-published writers, will be finding people like Russell Letson, who are capable of producing “a good and interesting and well-tooled critical view.” Most Amazon.com reviews default to meaningless hate or praise, both of which can be discounted; getting someone who can “give that critical opinion weight” is the major challenge, since most people are lightweights. Even the heavyweights don’t waste their energy on weak opponents who aren’t even worth engaging.

Why I write fewer book reviews

When I started writing this blog I mainly wrote book reviews. Now, as a couple readers have pointed out, I don’t write nearly as many. Why?

1) I know a lot more now than I did then and have lived, read, and synthesized enough that I can combine lots of distinct things into unique stories that share non-obvious things about the world. When I started, I couldn’t do that. Now my skills have broadened substantially, and, as a result, I write on different topics.

2) For many writers, reviewing books for a couple years is extremely useful because it introduces a wide array of narratives, styles, and so forth, forcing you to develop, express, and justify your opinions if you’re going to write anything worthwhile. Few other environments force you to do this; in academia, the books you’re assigned are already supposed to be “great,” so you’re not asked to say if they’re crap—even though many of the assigned books in school are crap, you’re not supposed to say so. After going through dozens or hundreds of books and explaining why you think they’re good and bad and in between, you should end up developing at least a moderately coherent philosophy of what you like, why you like it, and, ideally, how you should implement it. You shouldn’t let that philosophy become a set of blinders, but it does help to think systematically about tastes and preferences and so forth.

You might not be saying much about the books you’re reviewing, but you are saying a lot about what you’ve come to think about books.

3) No one cares about book reviews. If people in the aggregate did care about book reviews, virtually every newspaper in the country wouldn’t have shuttered what book review section it once had. What a limited number of people do want to know is what books they should read and, to a lesser extent, why. Having established, I’d like to imagine, some level of credibility by going through 2), above, I think I’m better able to do this now than I was when I started, and without necessarily dissecting every aspect of every book.

It’s also very hard and time consuming to write a great review, at least for me.

Lev Grossman also points out a supply / demand issue in an interview:

There was a time not long ago when opinions about books were a scarce commodity. Now we have an extreme surplus of opinions about books, and it’s very easy to obtain them. So if you’re in the business of supplying opinions about books, you need to get into a slightly different business. Being a critic becomes much more about supplying context for books, talking about new ways of reading, sharing ways in which it can be a rich experience.

He’s right, and his economic perspective is useful: when something is plentiful, easy to produce, and thus cheap, we should do something else. And I’m doing more of the “something else,” using as my model writers like Derek Sivers and Paul Graham.

To return to Grossman’s point, we might also treat what we’re doing differently. Clay Shirky says in Cognitive Surplus: Creativity and Generosity in a Connected Age:

Scarcity is easier to deal with than abundance, because when something becomes scarce, we simply think it more valuable than it was before, a conceptually easy change. Abundance is different: its advent means we can start treating previously valuable things as if they were cheap enough to waste, which is to say cheap enough to experiment with. Because abundance can remove the trade-offs we’re used to, it can be disorienting to people who’ve grown up with scarcity. When a resource is scarce, the people who manage it often regard it as valuable in itself, without stopping to consider how much of the value is tied to its scarcity.

Lots of people are writing lots of reviews, some of them good (I like to think some of mine are good) but most not. Most are just impressionistic or empty or garbage. By now, opinions are plentiful, which means we should probably shift towards greater understanding and knowledge production instead of raw opinion. That’s what I’m doing in point 1). I’m no longer convinced that book reviews are automatically to be regarded “as valuable in [themselves],” as they might’ve been when it was quite hard to get ahold of books and opinions about those books. Today, for any given book, you can type its name into Google and find dozens or hundreds of reviews. This might make pointing out lesser-known but good books useful—which I did with Never the Face: A Story of Desire, and which the New York Review of Books is doing on a mass scale with its publishing imprint. Granted, I’ve found few books in that series I’ve really liked aside from The Dud Avocado, but I pay attention to the books published by it.

4) It’s useful to keep When To Ignore Criticism (and How to Get People to Take Your Critique Seriously) by John Scalzi in mind; he says critics tend to have four major functions: consumer reporting, exegesis, instruction, and polemic (details at his site). The first is useful but easily found across the web, and it’s also of less and less use to me because deciding what’s “worth it” is so personal, like style. My tastes these days are much more refined and specific than they were, say, 10 years ago (and I suspect they’ll be more refined still in 10 years). The second is basically what academic articles do, and I’d rather do that for money, however indirectly. The third is still of interest to me, and I do it sometimes, especially with bad reviews. The fourth is a toss-up.

When I started, I mostly wanted to do one and two. Now I’m not that convinced they’re important. In addition, books that I really love and really think are worth reading don’t come along all that frequently; maybe I should make a list of them at the top. Every week, there’s an issue of the New York Times Book Review with a book on the cover, but that doesn’t mean every week brings a fabulous book very much worth reading by a large number of people. Having been fooled by cover stories a couple of times (Angelology being the most salient example), I’m much warier of them now.

Unfortunately, academic writing is also usually less fun, less intelligent, more windy, and duller than writing on the Internet. Anything it accomplishes rhetorically or intellectually is usually done through a film of muck thrown on by the culture of academic publishing, peer reviewers, and journal editors. There’s a very good reason no one outside of academia reads academic literary criticism, although I hadn’t appreciated why until I began to read it.

5) Professionalization. To spend the time and energy writing a great review for this blog, I necessarily have to give up time that I would otherwise spend writing stuff for grad school. There could conceivably be tangible financial rewards from publishing literary criticism, however abstruse or little read. There are no such rewards in blogging, at least given academia’s current structural equilibrium.

(If you’re going to argue that this equilibrium is bad and the game is dumb, that’s a fine thing to do, but it’s also the subject for another day.)

6) People, including me, care more about books than book reviews. I’m better off spending more time writing fiction and less time writing about fiction. So I do that, even if the labors are not yet evident. A book might, conceivably, be important and read for a long period of time. Book reviews, on the other hand, seldom are. So I want to work toward the more important activity; instead of telling you what I think is good, I’d rather just do it.

Here’s T.C. Boyle:

What I’d like to see more of are the sort of wide-ranging and penetrating overviews of a given writer’s work by writers and thinkers who are the equals of those they presume to analyze. This happens rarely. Why? Well, what’s in it for the critic? Is he/she going to be paid? By whom? Harper’s runs in-depth book essays, as does the New York Review of Books and other outlets. Fine and dandy. There would be more if there were more of an audience. But there isn’t.

For a long time, I did it free, though perhaps not at the level Boyle would desire; now I don’t, per the professionalization issue.

7) A great deal of art and art criticism does, in the end, reduce to taste, and the opinions and analyses of critics are basically votes that, over time, accumulate and lift some few works out of history’s ocean. But I’m not sure that book reviews are the optimal means of performing that work: better to do it by alluding to older work in newer work, or integrating ideas into more considered essays, or otherwise using artistic work in some larger synthesis.

8) In Jonathan Strange & Mr. Norrell, Norrell is having a debate with two toadies and says, “I really have no desire to write reviews of other people’s books. Modern publications upon magic are the most pernicious things in the world, full of misinformation and wrong opinions.” Lascelles, who has become a kind of self-appointed, high-status servant, says:

[I]t is precisely by passing judgements upon other people’s work and pointing out their errors that readers can be made to understand your opinions better. It is the easiest thing in the world to turn a review to one’s own ends. One only need mention the book once or twice and for the rest of the article one may develop one’s theme just as one chuses. It is, I assure you, what every body else does.

And because everybody else does it, we should do it too. Modern publications about literature probably strike us much as 1807 publications on magic strike Norrell, because it’s hard to tell what constitutes true information and right opinions in literature—making it seem that everyone else’s writing is “full of misinformation and wrong opinions.” (Norrell, of course, thinks he can right this, and in the context of the novel he may be right.) Besides, even if we are confronted by facts we don’t agree with, we tend to ignore them:

Recently, a few political scientists have begun to discover a human tendency deeply discouraging to anyone with faith in the power of information. It’s this: Facts don’t necessarily have the power to change our minds. In fact, quite the opposite.

Opinions are probably much the same, which explains how we get to where we are. Opinions about books even more so, which is how Lev Grossman came to say what he said above.

Anyway, Norrell realizes that book reviewing is often a waste of time, and Lascelles likes book reviewing not because of its intrinsic merit but because he thinks of it as high status (which it might’ve been in 1807). In 2011 or 2012, reviewing books might still be a waste of time and is a much lower status activity, so that even the Lascelles of the world—whom I’ve met—are unlikely to be drawn to it.

As I said above, the best review of a book isn’t a review of it, but another book that speaks back to it, or incorporates its ideas, or disagrees with it, or uses it as a starting point. Which isn’t a book review at all, of course: it’s something more special, and more rare. So I’m more interested now in doing that kind of review than in writing about whether a book is worth reading or not, just as Norrell is more interested in doing magic than in writing about other people’s opinions of doing magic. I’ll still do the latter to some extent, but I’ve been drifting away for some time and am likely to drift further. If Lev Grossman is remembered beyond his lifetime, I doubt it will be for his criticism, however worthy it might be: he’ll be remembered for The Magicians and his other literary work. I’d like to follow his example.

EDIT: Here’s Henry Bech in The Complete Henry Bech:

That a negative review might be a fallible verdict, delivered in haste, against a deadline, for a few dollars, by a writer with problems and limitations of his own was a reasonable and weaseling supposition he could no longer, in the dignity of his years, entertain.

Yet this is the supposition artists need to entertain; critics’ opinions are as cacophonous and random as a jungle, and listening to them is hard, and the writers who react most vituperatively to critics are probably doing so because they fear the critic or critics might be right.

Updike is also writing close to home here: the better known the writer, the more critics he’s naturally going to attract. So the volume of critical attacks might also be linked to success.

Bullshit politics in literary criticism: an example from Deceit, Desire, and the Novel

I’m reading Rene Girard’s great book Deceit, Desire, and the Novel (1961) and came to this:

Dostoyevsky [was] convinced [. . .] that Russian forms of experience were in advance of those in the West. Russia has passed, without any transitional period, from traditional and feudal structures to the most modern society. She has not known any bourgeois interregnum. Stendhal and Proust are the novelists of this interregnum. They occupy the upper regions of internal mediation, while Dostoyevsky occupies its lowest (44).

By 1961, it was pretty damn obvious that Stalin had murdered millions of his own citizens in the 1920s and 1930s. It was pretty damn obvious that Russia was a totalitarian country, which I don’t really buy as a form of “the most modern society.” The political reality is simpler: Russia hasn’t really passed “from traditional and feudal structures.” It’s still a dictatorship, only this time it’s softer: Vladimir Putin doesn’t rule with an iron fist and direct gulags, but by co-opting putatively democratic institutions and controlling TV stations. Except for a period in the 1990s and perhaps the early 2000s, before Putin had completely solidified control, Russia has been an autocracy or something close to it.

So a sentence like “Russia has passed, without any transitional period, from traditional and feudal structures to the most modern society” is about as wrong as one can get outside of the hard sciences, if a phrase like “most modern society” is to have any meaning at all. Given the choice between Russia and countries with “bourgeois interregnums” that manage not to murder their citizens, I’ll choose the latter any time. Most of the analysis in Deceit, Desire, and the Novel is so good that I pass over the occasional gaffe like the one above, but it’s symptomatic of where literary criticism goes wrong, which most often happens when it touches politics or economics in a naive or uninformed way.

If you’re interested in this sort of criticism, read Alan Sokal and Jean Bricmont’s Fashionable Nonsense: Postmodern Intellectuals’ Abuse of Science.

Edit: On the subject of Russia’s slide into autocracy, see also Russia’s Economy: Putin and the KGB State.

George Eliot’s Daniel Deronda and Graham Handley’s description of it

In his introduction to George Eliot’s Daniel Deronda, Graham Handley writes:

Yet if all the research and criticism of Daniel Deronda, including scholarly articles of the type which discover but do not evaluate, were put together, a consensus would doubtless reveal that it is generally thought of either as a remarkable failure, a flawed success, or even an aberration unredeemed by incisive insights or distinguished writing. The character of Gwendolen is always praised; those of Mirah, Mordecai, and Daniel are often denigrated.

It is my opinion, as someone who regularly miscegenates evaluation and discovery, that the critical consensus is correct. I particularly like the description of the novel as an “aberration unredeemed by incisive insights or distinguished writing.” I’m also still amused that Handley would announce this in the introduction, as if inviting us to agree with the consensus and not his defense.

Reading Handley’s defense, it’s hard not to like the critical consensus more:

It is my contention that Daniel Deronda needs no apology. [. . .] Its greatness consists in its artistic integrity, its moral and imaginative cohesion, its subtle and consistent presentation of a character with psychological integration as its particular strength, together with what Colvin called the ‘sense of universal interests and outside forces.’

Most of those words and phrases don’t mean anything on their own. What is “moral and imaginative cohesion?” Do you get it or them with glue and spackle? And how does the “subtle and consistent presentation of a character” work? Those sound like code words for “nothing happens,” other than that characters talk to each other about who’s going to boff who after they get married or, if we’re lucky, before.

The introduction goes on:

The form is fluid and vital, not static and diagrammatic, and the sophisticated and studied use of image and symbol is tremulous with life, with the feelings, responses, and pressures of the individual moral and spiritual experience of fictional character registering with the factual reader.

Spare me “sophisticated and studied use of image and symbol” when they aren’t deployed to tell much of a story. “Moral and spiritual experience” sounds remarkably tedious. Once again, with accolades like these, who needs haters?

I will say, however, that Daniel Deronda makes me feel incredibly virtuous for having read it, or at least parts of it. This is more or less true of every novel I’ve read whose title consists solely of a name.

David Shields’ Reality Hunger and James Wood’s philosophy of fiction

In describing novels from the first half of the 19th Century, David Shields writes in Reality Hunger: A Manifesto that “All the technical elements of narrative—the systematic use of the past tense and the third person, the unconditional adoption of chronological development, linear plots, the regular trajectory of the passions, the impulse of each episode toward a conclusion, etc.—tended to impose the image of a stable, coherent, continuous, unequivocal, entirely decipherable universe.”

I’m not so sure; the more interesting novels didn’t necessarily have “the unconditional adoption of chronological development” or the other features Shields ascribes to them. Caleb Williams is the most obvious example I can immediately cite: the murderers aren’t really punished in it and madness is perpetual. Gothic fiction of the 19th Century had a highly subversive quality that didn’t feature “the regular trajectory of the passions.” To my mind, the novel has always had unsettling features and an unsettling effect on society, producing change even when that change isn’t immediately measurable or apparent, or when we can’t get away from the fundamental constraints of first- or third-person narration. Maybe I should develop this thought more: but Shields doesn’t in Reality Hunger, so maybe innuendo ought to be enough for me too.

Shields is very good at making provocative arguments and less good at making those arguments hold up under scrutiny. He says, “The creators of characters, in the traditional sense, no longer manage to offer us anything more than puppets in which they themselves have ceased to believe.” Really? I believe if the author is good enough. And I construct coherence where it sometimes appears to be lacking. Although I’m aware that I can’t shake hands with David Kepesh of The Professor of Desire, he and the characters around him feel like “more than puppets” in which Roth has ceased to believe.

Shields wants something made new. Don’t we all? Don’t we all want to throw off dead convention? Alas: few of us know how to do so successfully, and that word “successfully” is especially important. You could write a novel that systematically eschews whatever system you think the novel imposes (this is the basic idea behind the anti-novel), but most people probably won’t like it—a point that I’ll come back to. We won’t like it because it won’t seem real. Most of us have ideas about reality that are informed by some combination of lived experience and cultural conditioning. That culture shifts over time. Shields starts Reality Hunger with a premise that is probably less contentious than much of the rest of the manifesto: “Every artistic movement from the beginning of time is an attempt to figure out a way to smuggle more of what the artist thinks is reality into the work of art.” I can believe this, though I suspect that artists begin getting antsy when you try to pin them down on what reality is: I would call it this thing we all appear to live in but that no one can quite represent adequately.

That includes Shields. Reality Hunger doesn’t feel as new as it should; it feels more like a list of N things. It’s frustrating even when it makes one think. Shields says, “Culture and commercial languages invade us 24/7.” But “commercial languages” only invade us because we let them: TV seems like the main purveyor, and if we turn it off, we’ll probably cut most of the advertising from our lives. If “commercial languages” are invading my life to the extent I’d choose the word “invade,” I’m not aware of it, partially because I conspicuously avoid those languages. Shields says, “I try not to watch reality TV, but it happens anyway.” This is remarkable: I’ve never met anyone who’s tried not to watch reality TV and then been forced to, or had reality TV happen to them, like a car accident or freak weather.

Still, we need to think about how we experience the world and depict it, since that helps us make sense of the world. For me, the novel is the genre that does this best, especially when it bursts its perceived bounds in particularly productive ways. I can’t define those ways with any rigor, but the novel has far more going on than its worst and best critics imagine.

Both the worst and best critics tend to float around the concept of reality. To use Luc Sante’s description in “The Fiction of Memory,” a review of Reality Hunger:

The novel, for all the exertions of modernism, is by now as formalized and ritualized as a crop ceremony. It no longer reflects actual reality. The essay, on the other hand, is fluid. It is a container made of prose into which you can pour anything. The essay assumes the first person; the novel shies from it, insisting that personal experience be modestly draped.

I’m not sure what a “crop ceremony” is or how the novel is supposed to reflect “actual reality.” Did it ever? What is this thing called reality that the novel is attempting to mirror? Its authenticity or lack thereof has, as far as I know, always been in question. The search for realism is always a search and never a destination, even when we feel that some works are more realistic than others.

Yet Sante and Shields are right about the dangers of rigidity; as Andrew Potter writes in The Authenticity Hoax: How We Get Lost Finding Ourselves, “One effect of disenchantment is that pre-existing social relations come to be recognized not as being ordained by the structure of the cosmos, but as human constructs – the product of historical contingencies, evolved power relations, and raw injustices and discriminations.”

Despite this, however, we feel realism—if none of us did, we’d probably stop using the term. Our definitions might blur when we approach a precise definition, but that doesn’t mean something isn’t there.

Sante writes, quoting Shields, that “‘Anything processed by memory is fiction,’ as is any memory shaped into literature.” Maybe: but consider these three statements, if I were to make them to you (keep in mind the context of Reality Hunger, with comments like “Try to make it real—compared to what?”):

Aliens destroyed Seattle in 2004.

I attended Clark University.

Alice said she was sad.

One of them is, to most of us, undoubtedly fiction. One of them is true. The other I made up: no doubt there is an Alice somewhere who has said she is sad, but I don’t know her and made her up for the purposes of example. The second example might be “processed by memory,” but I don’t think that makes it fiction, even if I can’t give you a firm, rigorous, absolute definition of where the gap between fact and interpretation begins. Jean Bricmont and Alan Sokal give it a shot in Fashionable Nonsense: “For us, as for most people, a ‘fact’ is a situation in the external world that exists irrespective of the knowledge that we have (or don’t have) of it—in particular, irrespective of any consensus or interpretation.”

They go on to observe that scientists actually face some problems of definition that I see as similar to those of literature and realism:

Our answer [as to what makes science] is nuanced. First of all, there are some general (but basically negative) epistemological principles, which go back at least to the seventeenth century: to be skeptical of a priori arguments, revelation, sacred texts, and arguments from authority. Moreover, the experience accumulated during three centuries of scientific practice has given us a series of more-or-less general methodological principles—for example, to replicate experiments, to use controls, to test medicines in double-blind protocols—that can be justified by rational arguments. However, we do not claim that these principles can be codified in a definite way, nor that the list is exhaustive. In other words, there does not exist (at least at present) a complete codification of rationality, which is always an adaptation to a new situation.

They lay out some criteria (beware of "revelation, sacred texts, and arguments from authority") and "methodological principles" ("replicate experiments") and then say "we do not claim that these principles can be codified in a definite way." Neither can the principles of realism. James Wood does as good a job of exploring them as anyone. But I would posit that, despite our inability to pin down realism, whether as convention or something else, most of us recognize it: when I tell people that I attended Clark University, none have told me that my experience is an artifact of memory, or made up, or that there is no such thing as reality and therefore I didn't. Such realism might merely be convention or training—or it might be real.

In the first paragraph of his review of Chang-Rae Lee’s The Surrendered, James Wood lays out the parameters of the essential question of literary development or evolution:

Does literature progress, like medicine or engineering? Nabokov seems to have thought so, and pointed out that Tolstoy, unlike Homer, was able to describe childbirth in convincing detail. Yet you could argue the opposite view; after all, no novelist strikes the modern reader as more Homeric than Tolstoy. And Homer does mention Hector’s wife getting a hot bath ready for her husband after a long day of war, and even Achilles, as a baby, spitting up on Phoenix’s shirt. Perhaps it is as absurd to talk about progress in literature as it is to talk about progress in electricity—both are natural resources awaiting different forms of activation. The novel is peculiar in this respect, because while anyone painting today exactly like Courbet, or composing music exactly like Brahms, would be accounted a fraud or a forger, much contemporary fiction borrows the codes and conventions—the basic narrative grammar—of Flaubert or Balzac without essential alteration.

I don’t think literature progresses “like medicine or engineering.” Using medical or engineering knowledge as it stood in 1900 would be extremely unwise if you’re trying to understand the genetic basis of disease or build a computer chip. Papers tend to decay within five to ten years of publication in the sciences.

But I do think literature progresses in some other, less obvious way, as we develop wider ranges of techniques and as social constraints allow for wider ranges of subject matter or more direct depiction: that is why Nabokov can point out that "Tolstoy, unlike Homer, was able to describe childbirth in convincing detail," and why I can point out that mainstream literature effectively couldn't depict explicit sexuality until the 20th Century.

While that last statement can be qualified somewhat, it is hard to miss the difference between a group of 19th Century writers like Thackeray, Dickens, Trollope, George Eliot, George Meredith, and Thomas Hardy (whom J. Hillis Miller discusses in The Form of Victorian Fiction) and a group of 20th Century writers like D.H. Lawrence, James Joyce, Norman Rush, and A.S. Byatt, who are free to describe sexual relationships as explicitly as they see fit and famously use words like "cunt" that simply couldn't be used effectively in the 19th Century.

In some ways I see literature as closer to math: the quadratic equation doesn’t change with time, but I wouldn’t want to be stuck in a world with only the quadratic equation. Wood gets close to this when he says that “Perhaps it is as absurd to talk about progress in literature as it is to talk about progress in electricity—both are natural resources awaiting different forms of activation.” The word “perhaps” is essential in this sentence: it gives a sense of possibility and realization that we can’t effectively answer the question, however much we might like to. But both question and answer give a sense of some useful parameters for the discussion. Most likely, literature isn’t exactly like anything else, and its development (or not) is a matter as much of the person doing the perceiving and ordering as anything intrinsic to the medium.

I have one more possible quibble with Wood's description, when he says that contemporary fiction borrows "the basic narrative grammar" of Flaubert or Balzac "without essential alteration." I wonder whether it really hasn't undergone "essential alteration," and what would qualify as essential. Novelists like Elmore Leonard, George Higgins, or that Wood favorite Henry Green all feel quite different from Flaubert or Balzac because of how they use dialog to convey ideas. The characters in Tom Perrotta's Election speak in a much more slangy, informal style than do any in Flaubert or Balzac, so far as I know. Bellow feels more erratic than the 19th Century writers and closer to the psyche, although that might be an artifact of how I've been trained by Bellow and writers after Bellow to perceive the novel and the idea of psychological realism. Taken together, however, the writers mentioned make me think that maybe "the basic narrative grammar" has changed for writers who want to adopt new styles. Yes, we're still stuck with first- and third-person perspectives, but we get books that are heavier on dialog and lighter on formality than their predecessors.

Wood is a great chronicler of what it means to be real: his interrogation of this seemingly simple term runs through the essays collected in The Irresponsible Self: On Laughter and the Novel, The Broken Estate: Essays on Literature and Belief, and, most comprehensively, the book How Fiction Works. Taken together, they ask how the "basic narrative grammar" of fiction works or has worked up to this point. In setting out some of the guidelines that allow literary fiction to work, Wood is asking novelists to find ways to break those guidelines in useful and interesting ways. In discussing Reality Hunger, Wood says, "[Shields'] complaints about the tediousness and terminality of current fictional convention are well-taken: it is always a good time to shred formulas." I agree and doubt many would disagree, but the question is not merely one of "shred[ding] formulas," but of how and why those formulas should be shredded. One doesn't shred the quadratic formula: it works. But one might build on it.

By the same token, we may have this "basic narrative grammar" not because novelists are conformist slackers who don't care about finding a new way forward: we may have it because it's the most satisfying or useful way of conveying a story. I don't know whether that is true, but it might be. Maybe most people won't find major changes to the way we tell stories palatable. Despite modernism and postmodernism, fewer people appear to enjoy the narrative confusion and choppiness of Joyce than enjoy the streamlined feel of the latest thriller. That doesn't mean the latter is better than the former—by my values, it's not—but it does mean that the overall thrust of fiction might remain where it is.

Robert McKee, in his not-very-good-but-useful book Story: Substance, Structure, Style and The Principles of Screenwriting, gives three major kinds of plots, which blend into one another: the "arch plot," which is causal in nature and finishes its story lines; the "mini plot," which he says is open and "strive[s] for simplicity and economy while retaining enough of the classical […] to satisfy the audience"; and the "antiplot," where absurdism and the like fall.

He says that as one moves “toward the far reaches of Miniplot, Antiplot, and Non-plot, the audience shrinks” (emphasis in original). From there:

The atrophy has nothing to do with quality or lack of it. All three corners of the story triangle gleam with masterworks that the world treasures, pieces of perfection for our imperfect world. Rather, the audience shrinks for this reason: Most human beings believe that life brings closed experiences of absolute, irreversible change; that their greatest sources of conflict are external to themselves; that they are the single and active protagonists of their own existence; that their existence operates through continuous time within a consistent, causally interconnected reality; and that inside this reality events happen for explainable and meaningful reasons.

The connection between this and Wood's "basic narrative grammar" might appear tenuous, but McKee and Wood are both pointing towards the ways stories are constructed. Wood is more concerned with language; although plot and its expression (whether in language or in video) can't be separated from one another, they can still be analyzed independently enough to make the distinction worthwhile.

The conventions that underlie the “arch plots,” however, can become tedious over time. This is what Wood is highlighting when he discusses Roland Barthes’ “reality effect,” which fiction can achieve: “All this silly machinery of plotting and pacing, this corsetry of chapters and paragraphs, this doxology of dialogue and characterization! Who does not want to explode it, do something truly new, and rouse the implication slumbering in the word ‘novel’?” Yet we need some kind of form to contain story; what is that form? Is there an ideal method of conveying story? If so, what if we’ve found it and are now mostly tinkering, rather than creating radical new forms? If we take out “this silly machinery of plotting and pacing” and dialog, we’re left with something closer to philosophy than to a novel.

Alternately, maybe we need the filler and coordination that so many novels consist of if those novels are to feel true to life, which appears to be one definition of what people mean by "realistic." This is where Wood parts with Barthes, or at least makes a distinct case:

Convention may be boring, but it is not untrue simply because it is conventional. People do lie on their beds and think with shame about all that has happened during the day (at least, I do), or order a beer and a sandwich and open their computers; they walk in and out of rooms, they talk to other people (and sometimes, indeed, feel themselves to be talking inside quotation marks); and their lives do possess more or less traditional elements of plotting and pacing, of suspense and revelation and epiphany. Probably there are more coincidences in real life than in fiction. To say “I love you” is to say something at millionth hand, but it is not, then, necessarily to lie.

“Convention may be boring, but it is not untrue simply because it is conventional,” and the parts we think of as conventional might be necessary to realism. In Umberto Eco’s Reflections on The Name of the Rose, he says that “The postmodern reply to the modern consists of recognizing that the past, since it cannot really be destroyed, because its destruction leads to silence, must be revisited: but with irony, not innocently.” That is often the job of novelists dealing with the historical weight of the past and with conventions that are “not untrue simply because [they are] conventional.” Eco and Wood both use the example of love to demonstrate similar points. Wood’s is above; Eco says:

I think of the postmodern attitude as that of a man who loves a very cultivated woman and knows he cannot say to her, ‘I love you madly,’ because he knows that she knows (and that she knows that he knows) that these words have already been written by Barbara Cartland. Still, there is a solution. He can say, ‘As Barbara Cartland would put it, I love you madly.’ At this point, having avoided false innocence, having said clearly that it is no longer possible to speak innocently, he will nevertheless have said what he wanted to say to the woman: that he loves her, but he loves her in an age of lost innocence. If the woman goes along with this, she will have received a declaration of love all the same. Neither of the two speakers will feel innocent, both will have accepted the challenge of the past, of the already said, which cannot be eliminated […]

I wonder if every age thinks of itself as “an age of lost innocence,” only to be later looked on as pure, naive, or unsophisticated. Regardless, for Eco postmodernism requires that we look to the past long enough to wink and then move on with the story we’re going to tell in the manner we’re going to tell it. Perhaps Chang-Rae Lee doesn’t do so in The Surrendered, which is the topic of Wood’s essay—but like so many essays and reviews, Wood’s starts with a long and very useful consideration before coming to the putative topic of its discussion. Wood speaks of reading […] “Chang-Rae Lee’s new novel, “The Surrendered” (Riverhead; $26.95)—a book that is commendably ambitious, extremely well written, powerfully moving in places, and, alas, utterly conventional. Here the machinery of traditional, mainstream storytelling threshes efficiently.” I haven’t read The Surrendered and so can’t evaluate Wood’s assessment.

Has Wood merely overdosed on the kind of convention that Lee uses, as opposed to convention itself? If so, it’s not clear how that “machinery” could be fixed or improved on, and the image itself is telling because Wood begins his essay by asking whether literature is like technology. My taste in literature changes: as a teenager I loved Frank Herbert’s Dune and now find it almost unbearably tedious. Other revisited novels hold up poorly because I’ve overdosed on their conventions and start to crave something new—a lot of fantasy flattens over time like opened soda.

Still, I usually don’t know what “something new” entails until I read it. That’s the problem with saying that the old way is conventional or boring: the diagnosis is easier than the fix. Wood knows it, and he’s unusually good at pointing to the problems of where we’ve been and to places we might go to fix them (see, for example, his recent essay on David Mitchell, whom I now feel obliged to read). This, I suspect, is why he is so beloved by so many novelists, and why I spend so much time reading him, even when I don’t necessarily love what he loves. The Quickening Maze struck me as self-indulgent and lacking in urgency, despite the psychological insight Adam Foulds offers into a range of characters’ minds: a teenage girl, a madman, an unsuccessful inventor.

I wanted more plot. In How Fiction Works, Wood quotes Adam Smith, writing in the eighteenth century about how writers use suspense to maintain reader interest, and then says that “[…] the novel [as an art form; one could also say the capital-N Novel] soon showed itself willing to surrender the essential juvenility of plot […]” Yet I want and crave this element that Wood dismisses—perhaps because of my (relatively) young age: Wood says that Chang-Rae Lee’s Native Speaker was “published when the author was just twenty-nine,” older than I am. I like suspense and the sense of something major at stake, and that could imply that I have a weakness for weak fiction. If so, I can do little more about it than someone who prefers chocolate to vanilla, or who wants chocolate despite having heard the virtues of cherries extolled.

When I hear about the versions of the real, reality, and realism that get extolled, I often begin to think about chocolate, vanilla, and cherries, and about why some novelists write in such a way that I can almost taste the cocoa while others serve up mere cardboard colored brown. Wood is very good at explaining this, and his work taken together represents some of the best answers we have to these questions.

Even the best answers lead us toward more questions that are likely to be answered best by artists in a work of art that makes us say, “I’ve never seen it that way before,” or, better still, “I’ve never seen it.” Suddenly we do see, and we run off to describe to our friends what we’ve seen, and they look at us and say, “I don’t get it,” and we say, “maybe you just had to see it for yourself.” Then we pass them the book or the photo or the movie and wait for them to say, “I’ve already seen this somewhere before,” while we argue that they haven’t, and neither have we. But we press on, reading, watching, thinking, hoping to come across the thing we haven’t seen before so we can share it again with our friends, who will say, like the critics do, “I’ve seen it before.”

So we have. And we’ll see it again. But I still like the sights—and the search.

Video Games Live — concert review

A friend and I saw Video Games Live, the concert featuring primarily music from video games; the show was emphatically so-so, mostly because the music kept being interrupted for banal reasons, chiefly related to defending the idea of video games as an art form. The structure of the concert went like this: the musicians would play for five to ten minutes, then a guy would show up to declare that video games are ART, DAMMIT! or run a contest, or show a video game, or pick his nose, or whatever. Then the music would resume. But is a show devoted to the music of games really an ideal venue for trying to show that video games are art? At other concerts I’ve been to, no one comes out to defend Beethoven or The Offspring as art: it’s merely assumed. You’ll know video games are art when people stop claiming they are and merely assume that they are.

I feel the worst for the musicians themselves, who presumably didn’t spend their 10,000-plus hours of practice in order to play underdeveloped pieces that, to highly trained ears, probably sound bombastic or manipulative, the way bad romance novels seem to literary critics. You could see them looking at one another when the conductor / showman stopped to extol the virtues of video games and drench himself in glory for putting the show together.

You may notice that I haven’t mentioned much about the music: that’s because the show wasn’t really about music. Some video game music is interesting and deserves serious attention; Final Fantasy is particularly famous for its soundtracks. The Mario theme music has become a pop culture cliche. But you won’t find attention to music at Video Games Live: look elsewhere for that.

Without being able to discuss much of the music, someone writing about the concert is left to discuss what the show really engages. As he has with a dizzying array of other phenomena, Tyler Cowen has asked similar questions about the status of video games as art, which he engages a little bit here, regarding a New York Times piece, and also here. Salon.com is asking the same questions, but is more rah-rah about video games. I don’t think anyone has argued that video games don’t “matter,” whatever that means in this context. It seems unlikely to me that games will have a strong claim to art until they can deal with sexuality in a mature way—which paintings, novels, poetry, and movies have all accomplished.

We’ll know video games are art when their defenders stop saying that video games are art and merely assume they are while going about their business. This change happened in earnest with novels around the late nineteenth and early twentieth centuries, as Mark McGurl argues in The Novel Art: Elevations of American Fiction after Henry James. Maybe it’s happening now with video games. If so, I don’t think Video Games Live is helping.

One good thing: my friend won tickets. So the only cost of the show was opportunity, not money.
