What happened with Deconstruction? And why is there so much bad writing in academia?

“How To Deconstruct Almost Anything” has been making the online rounds for 20 years for a good reason: it’s an effective satire of writing in the humanities and some of the dumber currents of contemporary thought in academia.* It also usually raises an obvious question: How did “Deconstruction,” or its siblings “Poststructuralism” and “Postmodernism,” get started in the first place?

My take is a “meta” idea about institutions rather than a direct comment on the merits of deconstruction as a method or philosophy. The rise of deconstruction has more to do with the needs of academia as an institution than the quality of deconstruction as a tool, method, or philosophy. To understand why, however, one has to go far back in time.

Since at least the 18th Century, writers of various sorts have been systematically asking fundamental questions about what words mean and how they mean them, along with what works made of words mean and how they mean them (“systematically” is the key word: before the Enlightenment and Industrial Revolution, investigations were rarely systematic by modern standards). Though critical ideas go back to Plato and Aristotle, Dr. Johnson is a decent place to start. We eventually began calling such people “critics.” In the 19th Century this habit gets a big boost from the Romantics and then from writers like Matthew Arnold.

Many of the debates about what things mean and why have inherent tensions, like: “Should you consider the author’s time period or point in history when evaluating a work?” or “Can art be purely aesthetic, or must it be political?” Many other such tensions could be listed. Different answers predominate in different periods.

In the 20th Century, critics start getting caught up in academia (I. A. Richards is one example); before that, most of them were what we’d now call freelancers who wrote for their own fancy or for general, educated audiences. The shift happens for many reasons, and one is the invention of “research” universities; this may seem incidental to questions about Deconstruction, but it isn’t, because Deconstruction wouldn’t exist, or wouldn’t exist in the way it does, without academia. Anyway, research universities get started in Germany, then spread to the U.S. through Johns Hopkins, which was founded in 1876. Professors of English start getting appointed. In research universities, professors need to produce “original research” to qualify for hiring, tenure, and promotion. This makes a lot of sense in the sciences, which have a very clear discover-and-build model in which new work is right and old work is wrong. This doesn’t work quite as well in the humanities, and especially in fields like English.

English professors initially study words and where they come from—these days we’d primarily call them philologists—and there is also a large contingent of professors of Greek or Latin who also teach some English. Over time English professors move from being primarily philological in nature towards being critics. The first people to really ratchet up the research-on-original-works game are the New Critics, starting in the 1930s, when they are young whippersnappers who can ignore their elders in part because getting a job as a professor is a relatively easy, relatively genteel endeavor.

New Critics predominate until the 1950s, when Structuralists seize the high ground (think of someone like Northrop Frye) and begin asking what sorts of universal questions literature might ask, or what universal qualities it might possess. After 1945, too, universities expand like crazy due to the G.I. Bill, and then baby boomers go to college. Pretty much anyone who can get a PhD can get a tenure-track job teaching English. That lets waves of people with new ideas who want to overthrow the ideas of their elders into academia. In the 1970s, Deconstructionists (otherwise known as Post-structuralists) show up. They’re the French theorists who are routinely mocked outside of academia for obvious reasons:

The move from a structuralist account in which capital is understood to structure social relations in relatively homologous ways to a view of hegemony in which power relations are subject to repetition, convergence, and rearticulation brought the question of temporality into the thinking of structure, and marked a shift from a form of Althusserian theory that takes structural totalities as theoretical objects to one in which the insights into the contingent possibility of structure inaugurate a renewed conception of hegemony as bound up with the contingent sites and strategies of the rearticulation of power.

That’s Judith Butler, quoted in Steven Pinker’s witty, readable The Sense of Style, in which he explains why this passage is terrible and how to avoid inflicting passages like it on others. Inside academia, she’s considered beyond criticism.

In each generational change of method and ideology, from philology to New Criticism to Structuralism to Poststructuralism, newly-minted professors needed to get PhDs, get hired by departments (often though not always in English), and get tenure by producing “original research.” One way to produce original research is to denounce the methods and ideas of your predecessors as horse shit and then set up a new set of methods and ideas, which can also be less charitably called “assumptions.”

But a funny thing happens to the critical-industrial complex in universities starting around 1975: the baby boomers finish college. The absolute number of students stops growing and even shrinks for a number of years. Colleges have all these tenured professors who can’t be gotten rid of, because tenure prevents them from being fired. So colleges stop hiring (see Menand’s The Marketplace of Ideas for a good account of this dynamic).

Colleges never really hired en masse again.

Other factors also reduced or discouraged the hiring of professors by colleges. In the 1980s and 1990s, changes in the law end mandatory retirement. Instead of getting a gold watch (or whatever academics gave), professors could continue being full profs well into their 70s or even 80s. Life expectancies lengthened throughout the 20th Century, and by now a professor who gets tenure at, say, 35 could still be teaching at 85. In college I had a couple of professors who should have been forcibly retired at least a decade before I encountered them, but forcing them out is no longer possible.

Consequently, the personnel churn that used to produce new dominant ideologies in academia stops around the 1970s. The relatively few new faculty slots from 1975 to the present go to people who already believed in Deconstructionist ideals, though those ideals tend to go by the term “Literary Theory,” or just “Theory,” by the 1980s. When hundreds of plausible applications arrive for each faculty position, it’s very easy to select for comfortable ideological conformity. As noted above, the humanities don’t even have the backstop of experiment and reality on which radicals can base major changes. People who are gadflies like me can get blogs, but blogs don’t pay the bills and still don’t have much pull inside the academic edifice itself. Critics might also write academic novels, but those don’t seem to have had much of an impact on those inside. Perhaps the most salient example of institutional change is the rise of the MFA program for both undergrads and grad students, since those who teach in MFA programs tend to believe that it is possible to write well and that it is possible and even desirable to write for people who aren’t themselves academics.

Let’s return to Deconstruction as a concept. It has some interesting ideas, like this one: “he asks us to question not whether something is an X or a Y, but rather to get ‘meta’ and start examining what makes it possible for us to go through life assigning things to ontological categories (X or Y) in the first place,” and others, like those pointing out that a work of art can mean two opposing things simultaneously, and that there often isn’t a single best reading of a particular work.

The problem, however, is that Deconstruction’s sillier adherents—who are all over universities—take a misreading of Saussure to argue that Deconstruction means that nothing means anything, except that everything means that men, white people, and Western imperialists oppress women, non-white people, and everyone else, and hell, as long as we’re at it capitalism is evil. History also means nothing because nothing means anything, or everything means nothing, or nothing means everything. But dressed up in sufficiently confusing language—see the Butler passage from earlier in this essay—no one can tell what if anything is really being argued.

There has been some blowback against this (Paglia, Falck, Windschuttle), but the sillier parts of Deconstructionist / Post-structuralist nonsense won, and the institutional forces operating within academia mean that that victory has been depressingly permanent. Those forces show no signs of abating. Almost no one in academia asks, “Is the work I’m doing actually important, for any reasonable value of ‘important?'” The ones who ask it tend to find something else to do. As my roommate from my first year of grad school observed when she quit after her M.A., “It’s all a bunch of bullshit.”

The people who would normally produce intellectual churn have mostly been shut out of the job market, or have moved to the healthier world of ideas online or in journalism, or have been marginalized (Paglia). Few people welcome genuine attacks on their ideas, and few of us are as open-minded as we’d like to believe; academics like to think they’re open-minded, but my experience with peer review thus far indicates otherwise. So real critics tend to follow the model Albert O. Hirschman describes in Exit, Voice, and Loyalty, and they exit.

The smarter ones who still want to write go for MFAs, where the goal is to produce art that someone else might actually want to read. The MFA option has grown for many reasons, but one is as an alternative for literary-minded people who want to produce writing that might matter to someone other than other English PhDs.

Few important thinkers have emerged from the humanities in the last 25 or so years. Many have emerged from the sciences, as should be apparent from the Edge.org writers. As John Brockman, the Edge.org founder, says:

The third culture consists of those scientists and other thinkers in the empirical world who, through their work and expository writing, are taking the place of the traditional intellectual in rendering visible the deeper meanings of our lives, redefining who and what we are.

One would think that “the traditional intellectual” would wake up and do something about this. There have been some signs of it happening—Franco Moretti or Jonathan Gottschall, for example—but so far those green shoots have been easy to miss and far from the mainstream. “Theory” and the bad writing associated with it remain king.

Works not cited but from which this reply draws:

Menand, Louis. The Marketplace of Ideas: Reform and Resistance in the American University. New York: W.W. Norton, 2010.

Paglia, Camille. “Junk Bonds and Corporate Raiders: Academe in the Hour of the Wolf.” Arion, Third Series 1.2 (1991): 139-212.

Paglia, Camille. Sex, Art, and American Culture: Essays. 1st ed. New York: Vintage, 1992.

Falck, Colin. Myth, Truth and Literature: Towards a True Post-modernism. 2nd ed. New York: Cambridge University Press, 1994.

Windschuttle, Keith. The Killing of History: How Literary Critics and Social Theorists are Murdering Our Past. 1st Free Press ed. New York: Free Press, 1997.

Star, Alexander. Quick Studies: The Best of Lingua Franca. 1st ed. New York: Farrar, Straus and Giroux, 2002.

Cusset, François. French Theory: How Foucault, Derrida, Deleuze, & Co. Transformed the Intellectual Life of the United States. Trans. Jeff Fort. Minneapolis: University of Minnesota Press, 2008.

Pinker, Steven. The Sense of Style: The Thinking Person’s Guide to Writing in the 21st Century. New York: Viking Adult, 2014.


* Here is one recent discussion, from which the original version of this essay was drawn. “How To Deconstruct Almost Anything” remains popular for the same reason academic novels remain popular: it is often easier to criticize through humor and satire than direct attack.

Bad boy Amazon and George Packer’s latest salvo

Until five or so years ago, every time I read yet another article about the perilous state of literary fiction I’d see complaints about how publishers ignore it in favor of airport thrillers and stupid self-help and romance and Michael Crichton and on and on. On or about December 2009 everything about the book business and human nature changed. Today, I read about how publishers are priestly custodians of high culture and the Amazon barbarians are knocking at the gate. Although George Packer doesn’t quite say as much in “Cheap Words: Amazon is good for customers. But is it good for books?”, his article fits the genre.

Packer is concerned that Amazon has too much power and that it is indifferent to quality. By contrast, the small publisher Melville House “puts out quality fiction and nonfiction,” while “Bezos announced that the price of best-sellers and new titles would be nine-ninety-nine, regardless of length or quality” and “Several editors, agents, and authors told me that the money for serious fiction and nonfiction has eroded dramatically in recent years; advances on mid-list titles—books that are expected to sell modestly but whose quality gives them a strong chance of enduring—have declined by a quarter.”

Maybe all of this is true, but here’s another possibility: thanks to Amazon, people writing the most abstruse literary fiction possible don’t have to beg giant multinational megacorps for a print run of 3,000 copies. Amazon doesn’t care if you’re going to sell one million or one hundred copies; you still get a spot, and now midlist authors aren’t going to be forcibly ejected from the publishing industry by publishing houses.

Read Martha McPhee’s novel Dear Money. It verges on annoying at first but shifts to being delightful. The protagonist, Emma Chapman, is a “midlist” novelist sinking towards being a no-list novelist. Pay attention to her descriptions of “the details of how our lives really were,” of how “not one of my novels had sold more than five thousand copies,” and of how “the awards by this point had been received long ago.” She makes money from teaching, not fiction, and that money barely covers rent and private schools and the rest of the New York bullshit. Under the system Packer describes, Emma is a relative success.

Since Dear Money is a novel, everything works out in the end, but in real life things don’t work out for many writers. Still, I would note that self-publishing as the norm has one major flaw: the absence of professional content editors, who are often key to writers’ growth and can often turn a mess with potential into a great book (here’s one example of a promising self-published book that could’ve been saved; there are no doubt others).

Still, Amazon must save more books than it destroys. If you read any amount of literary criticism, journalism, or scholarly articles, you’ve read innumerable sentences like these: “[Malcolm] Cowley persuaded Viking to accept ‘On the Road’ after many publishers had turned it down. He worked to get Kerouac, who was broke, financial support.” How many Kerouacs and Nabokovs didn’t make it to publication, and are unknown to history because no Cowley persuaded a publisher to act in its own best interests? How many will now, thanks to Amazon?

Having spent half a decade banging around on various publishers’ and agents’ doors, I’m not convinced that publishers are doing a great job of gatekeeping. I’d also note that it may be possible for many people to sell far fewer copies of a work and still be “successful”; a publisher apparently needs to sell at least 10,000 copies of a standard hardcover release, at $15 – $30 per hardcover and $9.99 – $14.99 for each ebook, to stay afloat. If I sell 10,000 copies of Asking Anna at $4 to $10 each, I’ll be doing peachy.
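To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch. It assumes a traditional hardcover royalty of roughly 12.5% of list price and Amazon’s 70% royalty tier for ebooks priced between $2.99 and $9.99; both rates are illustrative assumptions rather than figures from Packer’s article or from my own contracts.

```python
# Illustrative, hypothetical comparison: assumed royalty rates, not figures
# from the article or from any real contract.

def traditional_hardcover_take(copies, list_price, royalty_rate=0.125):
    """Author's earnings under an assumed ~12.5% hardcover royalty."""
    return copies * list_price * royalty_rate

def self_published_ebook_take(copies, price, royalty_rate=0.70):
    """Author's earnings under Amazon's (assumed) 70% ebook royalty tier."""
    return copies * price * royalty_rate

# 10,000 hardcovers at a $25 list price vs. 10,000 self-published ebooks at $5:
print(traditional_hardcover_take(10_000, 25.00))   # 31250.0
print(self_published_ebook_take(10_000, 5.00))     # 35000.0
```

On those assumed numbers, an author selling the same 10,000 copies comes out slightly ahead self-publishing at a fraction of the cover price, which is the sense in which “doing peachy” is plausible.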

Amazon has done an incredible job setting up a fantastic amount of infrastructure, physical and electronic, and Packer doesn’t even mention that.

Amazon also offers referral fees to anyone with a website; most of the books linked to in this blog have my own referral tag attached. Not only does Amazon give a fee if someone buys the linked item directly, but Amazon gives out the fee for any other item that person buys the same day. So if a person buys a camera lens for $400 after clicking a link in my blog, I get a couple bucks.

It’s not a lot and I doubt anyone quits their day job to get rich on referral links, but it’s more than zero. I like to say that I’ve made tens of dollars through those fees; by now I’ve made a little more, though not so much that it’ll pay for both beer and books.

Publishing’s golden age has always just ended. In 1994, Larissa MacFarquhar could write in the introduction to Robert Gottlieb’s Paris Review interview that in the 1950s—when Gottlieb got started—”publishers were frequently willing and able to lose money publishing books they liked, and tended to foster a sense that theirs were houses with missions more lofty than profit.” Then Gottlieb is quoted directly:

It is not a happy business now [. . .] and once it was. It was smaller. The stakes were lower. It was a less sophisticated world.

Today publishers are noble keepers of a sacred flame; before December 2009 they were rapacious capitalists. Today writers can also run a million experiments in what people want to read. Had I been an editor when 50 Shades of Grey crossed my desk, I would’ve rejected it. Oops.

But the Internet is very good at getting to revealed preferences. Maybe Americans say they want to read high-quality books but many want to read about the stuff they’re not getting in real life: sex with attractive people; car chases; being important; being quasi-omniscient; and so on. Some people who provide those things are going to succeed.

More than anything else, the Internet demonstrates that a lot of people really like porn (in its visual forms and its written form). People want what they want, and while I (not surprisingly) think that a lot of people would be better off reading more, and reading more interesting stuff, on a fundamental level everyone lives their own life as they see fit. A lot of people would also be better off if they ran more, watched reality TV less, ate more broccoli, and did the other usual stuff. The world is full of ignored messages. In the end each individual suffers or doesn’t according to the way they live their own life.

I don’t love Amazon or any company, but Amazon, and the Internet more generally, have enabled me to do things that wouldn’t have been possible or pragmatic in 1995. Since Amazon is ascending, however, it’s the bad guy in many narratives. Big publishers are wobbling, so they’re the good guys. We have always been at war with East Asia and will always be at war with East Asia.

Packer is a good writer, skilled with details and particularities, but he can’t translate those skills into generalities. He fits stories into political / intellectual frameworks that don’t quite fit, as happened with his earlier Silicon Valley article (I responded: “George Packer’s Silicon Valley myopia”). Packer’s high quality makes him worth responding to. But Packer presumably ignores his critics on the uncouth Interwebs, since he occupies the high ground of the old-school New Yorker. Too bad. There are things to be learned from the Internet, even about the past.

Exploring the limits in art, writing, and science

In the poorly-titled but otherwise interesting essay “The Disquiet of Ziggy Zeitgeist: Unsettled by the sense that reality itself is dwindling, fading like sunstruck wallpaper,” Henry Allen says that “For the first time in my 72 years, I have no idea what’s going on,” because a lot of culture has splintered, for lack of a better term, and as a result “I don’t know what’s going on. I doubt that anyone does.”

That sense is a result of reaching boundaries or borders in many if not most artistic fields. In music, for example, John Cage famously “recorded” a track that is entirely silent. Composers have created songs or symphonies or whatever that seem indistinguishable from noise. Popular music’s last major style shift was the early 90s, with rap and grunge; since then, we’ve mostly heard dance-disco-hip-hop variations.

In the fine arts, the avant-garde is probably dead, as Camille Paglia has, perhaps not surprisingly, been arguing in various places for twenty years. What people call concept art or non-art or art from life appears indistinguishable from noise or pranks. Or, as Allen says, “Now I go to New York and look at a work of art in Chelsea and say: ‘Oh, that’s one of those.’ (Dripping, elephant dung, monochrome, squalor, scribbling.)”

Literature in some ways “got there” first, with Joyce (Finnegans Wake) and Beckett (whose novels are the whole of boredom) about which I wrote more in “Martin Amis, the essay, the novel, and how to have fun in fiction.” If you’re trying to write a novel that truly pushes the boundaries of the novel, you’re going to have a very hard time doing so while being comprehensible to readers.

Sexual mores have fallen too: this weekend I’ve been reading Katherine Frank’s book Plays Well in Groups: A Journey Through the World of Group Sex, in which she describes gang bangs involving hundreds of participants, along with BDSM and assorted other sex adventures. Most people in developed countries have nothing between them and that, provided they want it. As a side note, she describes swingers who were featured on a TV show called Swing; the swingers talked to the show’s crew, who said, in this rendition, “We don’t know how you’ve done it but most people would kill to have this life.” But you don’t have to kill for that life: you only have to love for it, and most people probably could have it, or a version of it, if they want it. No murder necessary!

Porn has also reached limits, or gotten asymptotically close to them. The market has devolved from the monolithic Playboy to innumerable small, online outlets, some commercial and some not, and porn faces the same issue that any digital information does: perpetual availability. Although I’m not an expert, porn videos or pictures from, say, 2005 are still being passed around and viewed in 2013 and may continue to be in 2023. There is already more out there than a single person can digest, and the amount is growing over time. Curating, searching, and sorting become the problem amid what is effectively infinite supply. If you want it, you can probably already find it, and if you don’t like what you find, you can probably make it for a couple hundred to a couple thousand dollars.

Video games are an intriguing exception to the trends described above. They’re a young medium, since they’ve only been popular in the last 30 to 40 years and have consequently seen a tremendous explosion in sophistication: compare Pong to a modern game versus a novel published in 1980 to a novel published in 2012. Video games also piggyback on growing computational capabilities. Video games, like the Internet, are still in relative infancy, and they appear to be very far from technical or comprehensibility limits.

I’m not saying that art or artists or culture is dead, but I am saying that the boundaries of comprehensibility have been reached in many fields. If I were more of a blowhard I would also pontificate about the role of the Internet in this—Allen picks 1993 by coincidence, perhaps, but 1993 was also just before the Internet reached the masses in the developed world. Within the next decade or two more than half of the people on the planet will probably get access, and that may further splinter culture. Already it’s possible for people with weird, niche interests (like Borgen) to explore those interests easily, in the absence of social feedback.

Some fields, like math, appear inexhaustible. Others, like delivering things people want (which goes by the otherwise dull name “business”) appear if not inexhaustible then nearly so, since material desires keep expanding with GDP. I also doubt that art per se will ever be exhausted; the limits of comprehensibility don’t mean people will stop making art, only that we have to find ways to make it meaningful without being able to push constantly against a conservative establishment, which has been the animating force since Romanticism and now makes little sense.

The critic’s temperament and the problem of indifference: Orwell, Teachout, and Scalzi

In “Confessions of a Book Reviewer,” George Orwell points to an idea that almost any critic, or any person with a critical / systematic temperament, will eventually encounter:

[. . . ] the prolonged, indiscriminate reviewing of books is a quite exceptionally thankless, irritating and exhausting job. It not only involves praising trash–though it does involve that, as I will show in a moment–but constantly INVENTING reactions towards books about which one has no spontaneous feelings whatever. The reviewer, jaded though he may be, is professionally interested in books, and out of the thousands that appear annually, there are probably fifty or a hundred that he would enjoy writing about.

He’s not the only one; in 2004 Terry Teachout wrote:

[. . . ] I reviewed classical music and jazz for the Kansas City Star. It was great fun, but it was also a burden, not because of the bad concerts but because of the merely adequate ones–of which there were far more than too many.

Teachout uses the term “adequate.” Orwell says reviewers are “INVENTING reactions towards books about which one has no spontaneous feelings whatever.” Together, they remind me of what I feel towards most books: neutrality or indifference, which is close to “no spontaneous feelings.” Most books, even the ones I don’t especially like, I don’t hate, either. Hatred implies enormous emotional investment of the sort that very few books are worth. Conventionally bad books are just dull.

Still, writing about really bad books can be kind of fun, at first, especially when the bad books are educational through demonstrating what not to do. But after a couple of delicious slams, anyone bright and self-aware has to ask: Why bother wasting time on overtly bad books, especially if one isn’t being paid?

That leaves the books one loves and the books that don’t inspire feelings. The books one loves are difficult to praise without overused superlatives. The toughest books, however, are Teachout’s “merely adequate ones,” because there’s really nothing much to say and less reason to say it.

Critics may still write about indifferent books for other reasons; John Scalzi describes some purposes criticism serves, and he includes consumer reporting, exegesis, instruction, and polemics among the critic’s main purposes.* Of those four, I try to shoot for numbers two and three, though I used to think number one exceedingly valuable. Now I’ve realized that number one is almost entirely useless for a variety of reasons, the most notable being that literary merit and popularity have little if any relationship, which means that critics asking systematic questions about what makes good stuff good and bad stuff bad are mostly wasting their time. Polemics can be fun, but I’d rather focus on learning and understanding rather than invective.


* Scalzi also says:

there are ways to be negative — even confrontational — while at the same time persuading others to consider one’s argument. It’s a nice skill if you have it, and people do. One of my favorite critiques of Old Man’s War came from Russell Letson in the pages of Locus, in which he described tossing the book away from him… and then grabbing it up to read again. His review was not a positive review, and it was a confrontational review (at least from my point of view as the author) — and it was also a good and interesting and well-tooled critical view of the work.

All of which is to note that the act of public criticism is also an act of persuasion. If a critic intends a piece to reach an audience, to be heard by an audience and then to have that audience give that critical opinion weight, then an awareness of the audience helps.

I think that one challenge for most modern writers, and virtually all self-published writers, will be finding people like Russell Letson, who are capable of producing “a good and interesting and well-tooled critical view.” Most Amazon.com reviews default to meaningless hate or praise, both of which can be discounted; getting someone who can “give that critical opinion weight” is the major challenge, since most people are lightweights. Even the heavyweights don’t waste their energy on weak opponents who aren’t even worth engaging.

Why I write fewer book reviews

When I started writing this blog I mainly wrote book reviews. Now, as a couple readers have pointed out, I don’t write nearly as many. Why?

1) I know a lot more now than I did then and have lived, read, and synthesized enough that I can combine lots of distinct things into unique stories that share non-obvious things about the world. When I started, I couldn’t do that. Now my skills have broadened substantially, and, as a result, I write on different topics.

2) For many writers, reviewing books for a couple years is extremely useful because it introduces a wide array of narratives, styles, and so forth, forcing you to develop, express, and justify your opinions if you’re going to write anything worthwhile. Few other environments force you to do this; in academia, the books you’re assigned are already supposed to be “great,” so you’re not asked to say whether they’re crap—and even though many of them are, you’re not supposed to say so. After going through dozens or hundreds of books and explaining why you think they’re good, bad, or in between, you should end up developing at least a moderately coherent philosophy of what you like, why you like it, and, ideally, how you should implement it. You shouldn’t let that philosophy become a set of blinders, but it does help to think systematically about tastes and preferences and so forth.

You might not be saying much about the books you’re reviewing, but you are saying a lot about what you’ve come to think about books.

3) No one cares about book reviews. If people in the aggregate did care about book reviews, virtually every newspaper in the country wouldn’t have shuttered what book review section it once had. What a limited number of people do want to know is what books they should read and, to a lesser extent, why. Having established, I’d like to imagine, some level of credibility by going through 2), above, I think I’m better able to do this now than I was when I started, and without necessarily dissecting every aspect of every book.

It’s also very hard and time consuming to write a great review, at least for me.

Lev Grossman also points out a supply / demand issue in an interview:

There was a time not long ago when opinions about books were a scarce commodity. Now we have an extreme surplus of opinions about books, and it’s very easy to obtain them. So if you’re in the business of supplying opinions about books, you need to get into a slightly different business. Being a critic becomes much more about supplying context for books, talking about new ways of reading, sharing ways in which it can be a rich experience.

He’s right, and his economic perspective is useful: when something is plentiful, easy to produce, and thus cheap, we should do something else. And I’m doing more of the “something else,” using as my model writers like Derek Sivers and Paul Graham.

To return to Grossman’s point, we might also treat what we’re doing differently. Clay Shirky says in Cognitive Surplus: Creativity and Generosity in a Connected Age:

Scarcity is easier to deal with than abundance, because when something becomes scarce, we simply think it more valuable than it was before, a conceptually easy change. Abundance is different: its advent means we can start treating previously valuable things as if they were cheap enough to waste, which is to say cheap enough to experiment with. Because abundance can remove the trade-offs we’re used to, it can be disorienting to people who’ve grown up with scarcity. When a resource is scarce, the people who manage it often regard it as valuable in itself, without stopping to consider how much of the value is tied to its scarcity.

Lots of people are writing lots of reviews, some of them good (I like to think some of mine are good) but most not. Most are just impressionistic or empty or garbage. By now, opinions are plentiful, which means we should probably shift towards greater understanding and knowledge production instead of raw opinion. That’s what I’m doing in point 1). I’m no longer convinced that book reviews are automatically to be regarded “as valuable in [themselves],” as they might’ve been when it was quite hard to get ahold of books and opinions about those books. Today, for any given book, you can type its name into Google and find dozens or hundreds of reviews. This might make pointing out lesser-known but good books useful—which I did with Never the Face: A Story of Desire, and which the New York Review of Books is doing on a mass scale with its publishing imprint. Granted, I’ve found few books in that series I’ve really liked aside from The Dud Avocado, but I pay attention to the books it publishes.

4) It’s useful to keep When To Ignore Criticism (and How to Get People to Take Your Critique Seriously) by John Scalzi in mind; he says critics tend to have four major functions: consumer reporting, exegesis, instruction, and polemic (details at his site). The first is useful but easily found across the web, and it’s also of less and less use to me because deciding what’s “worth it” is so personal, like style. My tastes these days are much more refined and specific than they were, say, 10 years ago (and I suspect they’ll be more refined still in 10 years). The second is basically what academic articles do, and I’d rather do that for money, however indirectly. The third is still of interest to me, and I do it sometimes, especially with bad reviews. The fourth is a toss-up.

When I started, I mostly wanted to do one and two. Now I’m not that convinced they’re important. In addition, books that I really love and really think are worth reading don’t come along all that frequently; maybe I should make a list of them at the top. Every week, there’s an issue of the New York Times Book Review with a book on the cover, but that doesn’t mean every week brings a fabulous book very much worth reading by a large number of people. Having been fooled by cover stories a couple of times (Angelology being the most salient example), I’m much warier of them now.

Unfortunately, academic writing is also usually less fun, less intelligent, windier, and duller than writing on the Internet. Anything it accomplishes rhetorically or intellectually is usually done through a film of muck thrown on by the culture of academic publishing, peer reviewers, and journal editors. There’s a very good reason no one outside of academia reads academic literary criticism, although I hadn’t appreciated why until I began to read it.

5) Professionalization. To spend the time and energy writing a great review for this blog, I necessarily have to give up time that I would otherwise spend writing stuff for grad school. There could conceivably be tangible financial rewards from publishing literary criticism, however abstruse or little read. There are no such rewards in blogging, at least given academia’s current structural equilibrium.

(If you’re going to argue that this equilibrium is bad and the game is dumb, that’s a fine thing to do, but it’s also the subject for another day.)

6) People, including me, care more about books than book reviews. I’m better off spending more time writing fiction and less time writing about fiction. So I do that, even if the labors are not yet evident. A book might, conceivably, be important and read for a long period of time. Book reviews, on the other hand, seldom are. So I want to work toward the more important activity; instead of telling you what I think is good, I’d rather just do it.

Here’s T.C. Boyle:

What I’d like to see more of are the sort of wide-ranging and penetrating overviews of a given writer’s work by writers and thinkers who are the equals of those they presume to analyze. This happens rarely. Why? Well, what’s in it for the critic? Is he/she going to be paid? By whom? Harper’s runs in-depth book essays, as does the New York Review of Books and other outlets. Fine and dandy. There would be more if there were more of an audience. But there isn’t.

For a long time, I did it free, though perhaps not at the level Boyle would desire; now I don’t, per the professionalization issue.

7) A great deal of art and art criticism does, in the end, reduce to taste, and the opinions and analyses of critics are basically votes that, over time, accumulate and lift some few works out of history’s ocean. But I’m not sure that book reviews are the optimal means of performing that work: better to do it by alluding to older work in newer work, or integrating ideas into more considered essays, or otherwise using artistic work in some larger synthesis.

8) In Jonathan Strange & Mr. Norrell, Norrell is having a debate with two toadies and says, “I really have no desire to write reviews of other people’s books. Modern publications upon magic are the most pernicious things in the world, full of misinformation and wrong opinions.” Lascelles, who has become a kind of self-appointed, high-status servant, says:

[I]t is precisely by passing judgements upon other people’s work and pointing out their errors that readers can be made to understand your opinions better. It is the easiest thing in the world to turn a review to one’s own ends. One only need mention the book once or twice and for the rest of the article one may develop one’s theme just as one chuses. It is, I assure you, what every body else does.

And because everybody else does it, we should do it too. Modern publications about literature probably deserve the judgment Norrell passes on 1807 publications upon magic, because it’s hard to tell what constitutes true information and right opinions in literature—which makes everyone else’s writing seem “full of misinformation and wrong opinions.” (Norrell, of course, thinks he can right this, and in the context of the novel he may be right.) Besides, even if we are confronted by facts we don’t agree with, we tend to ignore them:

Recently, a few political scientists have begun to discover a human tendency deeply discouraging to anyone with faith in the power of information. It’s this: Facts don’t necessarily have the power to change our minds. In fact, quite the opposite.

Opinions are probably much the same, which explains how we get to where we are. Opinions about books even more so, which is how Lev Grossman came to say what he said above.

Anyway, Norrell realizes that book reviewing is often a waste of time, and Lascelles likes book reviewing not because of its intrinsic merit but because he thinks of it as high status (which it might’ve been in 1807). In 2011 or 2012, reviewing books might still be a waste of time, and it is a much lower status activity, so that even the Lascelles of the world (whom I’ve met) are unlikely to be drawn to it.

As I said above, the best review of a book isn’t a review of it, but another book that speaks back to it, or incorporates its ideas, or disagrees with it, or uses it as a starting point. Which isn’t a book review at all, of course: it’s something more special, and rarer. So I’m now more interested in doing that kind of work (as Norrell is interested in doing magic rather than writing about other people’s opinions of doing magic) than in writing about whether a book is worth reading. I’ll still do that to some extent, but I’ve been drifting away for some time and am likely to drift further. If Lev Grossman is remembered beyond his lifetime, I doubt it will be for his criticism, however worthy it might be: he’ll be remembered for The Magicians and his other literary work. I’d like to follow his example.

EDIT: Here’s Henry Bech in The Complete Henry Bech:

That a negative review might be a fallible verdict, delivered in haste, against a deadline, for a few dollars, by a writer with problems and limitations of his own was a reasonable and weaseling supposition he could no longer, in the dignity of his years, entertain.

Yet this is the supposition artists need to entertain; critics’ opinions are as cacophonous and random as a jungle, listening to them is hard, and the writers who react most vituperatively to critics are probably doing so because they fear the critic or critics might be right.

Updike is also writing close to home here: the better known the writer, the more critics he’s naturally going to attract. So the volume of critical attacks might also be linked to success.

Bullshit politics in literary criticism: an example from Deceit, Desire, and the Novel

I’m reading Rene Girard’s great book Deceit, Desire, and the Novel (1961) and came to this:

Dostoyevsky [was] convinced [. . .] that Russian forms of experience were in advance of those in the West. Russia has passed, without any transitional period, from traditional and feudal structures to the most modern society. She has not known any bourgeois interregnum. Stendhal and Proust are the novelists of this interregnum. They occupy the upper regions of internal mediation, while Dostoyevsky occupies its lowest (44).

By 1961, it was pretty damn obvious that Stalin had murdered millions of his own citizens in the 1920s and 1930s. It was pretty damn obvious that Russia was a totalitarian country, which I don’t really buy as a form of “the most modern society.” The political reality is simpler: Russia hasn’t really passed “from traditional and feudal structures.” It’s still a dictatorship, only this time it’s softer: Vladimir Putin doesn’t rule with an iron fist and direct gulags, but by co-opting putatively democratic institutions and controlling TV stations. Except for a period in the 1990s and perhaps the early 2000s, before Putin had completely solidified control, Russia has been an autocracy or something close to it.

So a sentence like “Russia has passed, without any transitional period, from traditional and feudal structures to the most modern society” is about as wrong as one can get outside of the hard sciences, if a phrase like “most modern society” is to have any meaning at all. Given the choice between Russia and countries with “bourgeois interregnums” that manage not to murder their citizens, I’ll choose the latter any time. Most of the analysis in Deceit, Desire, and the Novel is so good that I pass over the occasional gaffe like the one above, but it’s symptomatic of where literary criticism goes wrong, which most often happens when it touches politics or economics in a naive or uninformed way.

If you’re interested in this sort of criticism, read Alan Sokal and Jean Bricmont’s Fashionable Nonsense: Postmodern Intellectuals’ Abuse of Science.

Edit: On the subject of Russia’s slide into autocracy, see also Russia’s Economy: Putin and the KGB State.

George Eliot’s Daniel Deronda and Graham Handley’s description of it

In his introduction to George Eliot’s Daniel Deronda, Graham Handley writes:

Yet if all the research and criticism of Daniel Deronda, including scholarly articles of the type which discover but do not evaluate, were put together, a consensus would doubtless reveal that it is generally thought of either as a remarkable failure, a flawed success, or even an aberration unredeemed by incisive insights or distinguished writing. The character of Gwendolen is always praised; those of Mirah, Mordecai, and Daniel are often denigrated.

It is my opinion, as someone who regularly miscegenates evaluation and discovery, that the critical consensus is correct. I particularly like the description of the novel as an “aberration unredeemed by incisive insights or distinguished writing.” I’m also still amused that Handley would announce this in the introduction, as if inviting us to agree with the consensus and not his defense.

Reading Handley’s defense, it’s hard not to like the critical consensus more:

It is my contention that Daniel Deronda needs no apology. [. . .] Its greatness consists in its artistic integrity, its moral and imaginative cohesion, its subtle and consistent presentation of a character with psychological integration as its particular strength, together with what Colvin called the ‘sense of universal interests and outside forces.’

Most of those words and phrases don’t mean anything on their own. What is “moral and imaginative cohesion?” Do you get it or them with glue and spackle? And how does the “subtle and consistent presentation of a character” work? Those sound like code words for “nothing happens,” other than that characters talk to each other about who’s going to boff who after they get married or, if we’re lucky, before.

The introduction goes on:

The form is fluid and vital, not static and diagrammatic, and the sophisticated and studied use of image and symbol is tremulous with life, with the feelings, responses, and pressures of the individual moral and spiritual experience of fictional character registering with the factual reader.

Spare me “sophisticated and studied use of image and symbol” when they aren’t deployed to tell much of a story. “Moral and spiritual experience” sounds remarkably tedious. Once again, with accolades like these, who needs haters?

I will say, however, that Daniel Deronda makes me feel incredibly virtuous for having read it, or at least parts of it. This is more or less true of every novel I’ve read whose title consists solely of a name.
