David Shields’ Reality Hunger and James Wood’s philosophy of fiction

In describing novels from the first half of the 19th Century, David Shields writes in Reality Hunger: A Manifesto that “All the technical elements of narrative—the systematic use of the past tense and the third person, the unconditional adoption of chronological development, linear plots, the regular trajectory of the passions, the impulse of each episode toward a conclusion, etc.—tended to impose the image of a stable, coherent, continuous, unequivocal, entirely decipherable universe.”

I’m not so sure; the more interesting novels didn’t necessarily have “the unconditional adoption of chronological development” or the other features Shields ascribes to them. Caleb Williams is the most obvious example I can immediately cite: the murderers aren’t really punished in it and madness is perpetual. Gothic fiction of the 19th Century had a highly subversive quality that didn’t feature “the regular trajectory of the passions.” To my mind, the novel has always had unsettling features and an unsettling effect on society, producing change even when that change isn’t immediately measurable or apparent, or when we can’t get away from the fundamental constraints of first- or third-person narration. Maybe I should develop this thought more: but Shields doesn’t in Reality Hunger, so maybe innuendo ought to be enough for me too.

Shields is very good at making provocative arguments and less good at making those arguments hold up under scrutiny. He says, “The creators of characters, in the traditional sense, no longer manage to offer us anything more than puppets in which they themselves have ceased to believe.” Really? I believe if the author is good enough. And I construct coherence where it sometimes appears to be lacking. Although I’m aware that I can’t shake hands with David Kepesh of The Professor of Desire, he and the characters around him feel like “more than puppets” in which Roth has ceased to believe.

Shields wants something made new. Don’t we all? Don’t we all want to throw off dead convention? Alas: few of us know how to do so successfully, and that word “successfully” is especially important. You could write a novel that systematically eschews whatever system you think the novel imposes (this is the basic idea behind the anti-novel), but most people probably won’t like it—a point that I’ll come back to. We won’t like it because it won’t seem real. Most of us have ideas about reality that are informed by some combination of lived experience and cultural conditioning. That culture shifts over time. Shields starts Reality Hunger with a premise that is probably less contentious than much of the rest of the manifesto: “Every artistic movement from the beginning of time is an attempt to figure out a way to smuggle more of what the artist thinks is reality into the work of art.” I can believe this, though I suspect that artists begin getting antsy when you try to pin them down on what reality is: I would call it this thing we all appear to live in but that no one can quite represent adequately.

That includes Shields. Reality Hunger doesn’t feel as new as it should; it feels more like a list of N things. It’s frustrating even when it makes one think. Shields says, “Culture and commercial languages invade us 24/7.” But “commercial languages” only invade us because we let them: TV seems like the main purveyor, and if we turn it off, we’ll probably cut most of the advertising from our lives. If “commercial languages” are invading my life to the extent I’d choose the word “invade,” I’m not aware of it, partially because I conspicuously avoid those languages. Shields says, “I try not to watch reality TV, but it happens anyway.” This is remarkable: I’ve never met anyone who’s tried not to watch reality TV and then been forced to, or had reality TV happen to them, like a car accident or freak weather.

Still, we need to think about how we experience the world and depict it, since that helps us make sense of the world. For me, the novel is the genre that does this best, especially when it bursts its perceived bounds in particularly productive ways. I can’t define those ways with any rigor, but the novel has far more going on than its worst and best critics imagine.

Both the worst and best critics tend to float around the concept of reality. To use Luc Sante’s description in “The Fiction of Memory,” a review of Reality Hunger:

The novel, for all the exertions of modernism, is by now as formalized and ritualized as a crop ceremony. It no longer reflects actual reality. The essay, on the other hand, is fluid. It is a container made of prose into which you can pour anything. The essay assumes the first person; the novel shies from it, insisting that personal experience be modestly draped.

I’m not sure what a “crop ceremony” is or how the novel is supposed to reflect “actual reality.” Did it ever? What is this thing called reality that the novel is attempting to mirror? Its authenticity or lack thereof has, as far as I know, always been in question. The search for realism is always a search and never a destination, even when we feel that some works are more realistic than others.

Yet Sante and Shields are right about the dangers of rigidity; as Andrew Potter writes in The Authenticity Hoax: How We Get Lost Finding Ourselves, “One effect of disenchantment is that pre-existing social relations come to be recognized not as being ordained by the structure of the cosmos, but as human constructs – the product of historical contingencies, evolved power relations, and raw injustices and discriminations.”

Despite this, however, we feel realism—if none of us did, we’d probably stop using the term. The concept might blur whenever we approach a precise definition, but that doesn’t mean something isn’t there.

Sante writes, quoting Shields, that “‘Anything processed by memory is fiction,’ as is any memory shaped into literature.” Maybe: but consider these three statements, if I were to make them to you (keep in mind the context of Reality Hunger, with comments like “Try to make it real—compared to what?”):

Aliens destroyed Seattle in 2004.

I attended Clark University.

Alice said she was sad.

One of them is, to most of us, undoubtedly fiction. One of them is true. The other I made up: no doubt there is an Alice somewhere who has said she is sad, but I don’t know her and made her up for the purposes of example. The second example might be “processed by memory,” but I don’t think that makes it fiction, even if I can’t give you a firm, rigorous, absolute definition of where the gap between fact and interpretation begins. Jean Bricmont and Alan Sokal give it a shot in Fashionable Nonsense: “For us, as for most people, a ‘fact’ is a situation in the external world that exists irrespective of the knowledge that we have (or don’t have) of it—in particular, irrespective of any consensus or interpretation.”

They go on to observe that scientists actually face some problems of definition that I see as similar to those of literature and realism:

Our answer [as to what makes science] is nuanced. First of all, there are some general (but basically negative) epistemological principles, which go back at least to the seventeenth century: to be skeptical of a priori arguments, revelation, sacred texts, and arguments from authority. Moreover, the experience accumulated during three centuries of scientific practice has given us a series of more-or-less general methodological principles—for example, to replicate experiments, to use controls, to test medicines in double-blind protocols—that can be justified by rational arguments. However, we do not claim that these principles can be codified in a definite way, nor that the list is exhaustive. In other words, there does not exist (at least at present) a complete codification of rationality, which is always an adaptation to a new situation.

They lay out some criteria (beware of “revelation, sacred texts, and arguments from authority”) and “methodological principles” (“replicate experiments”) and then say “we do not claim that these principles can be codified in a definite way.” Neither can the principles of realism. James Wood does as good a job of exploring them as anyone. But I would posit that, despite our inability to pin down realism, either as convention or not, most of us recognize it: when I tell people that I attended Clark University, none have told me that my experience is an artifact of memory, or made up, or that there is no such thing as reality and therefore I didn’t. Such realism might merely be convention or training—or it might be real.

In the first paragraph of his review of Chang-Rae Lee’s The Surrendered, James Wood lays out the parameters of the essential question of literary development or evolution:

Does literature progress, like medicine or engineering? Nabokov seems to have thought so, and pointed out that Tolstoy, unlike Homer, was able to describe childbirth in convincing detail. Yet you could argue the opposite view; after all, no novelist strikes the modern reader as more Homeric than Tolstoy. And Homer does mention Hector’s wife getting a hot bath ready for her husband after a long day of war, and even Achilles, as a baby, spitting up on Phoenix’s shirt. Perhaps it is as absurd to talk about progress in literature as it is to talk about progress in electricity—both are natural resources awaiting different forms of activation. The novel is peculiar in this respect, because while anyone painting today exactly like Courbet, or composing music exactly like Brahms, would be accounted a fraud or a forger, much contemporary fiction borrows the codes and conventions—the basic narrative grammar—of Flaubert or Balzac without essential alteration.

I don’t think literature progresses “like medicine or engineering.” Using medical or engineering knowledge as it stood in 1900 would be extremely unwise if you’re trying to understand the genetic basis of disease or build a computer chip. Papers tend to decay within five to ten years of publication in the sciences.

But I do think literature progresses in some other, less obvious way, as we develop wider ranges of techniques and social constraints allow for wider ranges of subject matter or direct depiction: hence Nabokov can point out that “Tolstoy, unlike Homer, was able to describe childbirth in convincing detail,” and I can point out that mainstream literature effectively couldn’t depict explicit sexuality until the 20th Century.

While that last statement can be qualified some, it is hard to miss the difference between a group of 19th Century writers like Thackeray, Dickens, Trollope, George Eliot, George Meredith, and Thomas Hardy (whom J. Hillis Miller discusses in The Form of Victorian Fiction) and a group of 20th Century writers like D.H. Lawrence, James Joyce, Norman Rush, and A.S. Byatt, who are free to explicitly describe sexual relationships to the extent they see fit and famously use words like “cunt” that simply couldn’t be effectively used in the 19th Century.

In some ways I see literature as closer to math: the quadratic equation doesn’t change with time, but I wouldn’t want to be stuck in a world with only the quadratic equation. Wood gets close to this when he says that “Perhaps it is as absurd to talk about progress in literature as it is to talk about progress in electricity—both are natural resources awaiting different forms of activation.” The word “perhaps” is essential in this sentence: it gives a sense of possibility and realization that we can’t effectively answer the question, however much we might like to. But both question and answer give a sense of some useful parameters for the discussion. Most likely, literature isn’t exactly like anything else, and its development (or not) is a matter as much of the person doing the perceiving and ordering as anything intrinsic to the medium.

I have one more possible quibble with Wood’s description when he says that much contemporary fiction borrows “the codes and conventions—the basic narrative grammar—of Flaubert or Balzac without essential alteration.” I wonder if it really hasn’t undergone “essential alteration,” and what would qualify as essential. Novelists like Elmore Leonard, George Higgins, or that Wood favorite Henry Green all feel quite different from Flaubert or Balzac because of how they use dialog to convey ideas. The characters in Tom Perrotta’s Election speak in a much more slangy, informal style than do any in Flaubert or Balzac, so far as I know. Bellow feels more erratic than the 19th Century writers and closer to the psyche, although that might be an artifact of how I’ve been trained by Bellow and writers after Bellow to perceive the novel and the idea of psychological realism. Taken together, however, the writers mentioned make me think that maybe “the basic narrative grammar” has changed for writers who want to adopt new styles. Yes, we’re still stuck with first- and third-person perspectives, but we get books that are heavier on dialog and lighter on formality than their predecessors.

Wood is a great chronicler of what it means to be real: his interrogation of this seemingly simple term runs through the essays collected in The Irresponsible Self: On Laughter and the Novel, The Broken Estate: Essays on Literature and Belief, and, most comprehensively, in the book How Fiction Works. Taken together, they ask how the “basic narrative grammar” of fiction works or has worked up to this point. In setting out some of the guidelines that allow literary fiction to work, Wood is asking novelists to find ways to break those guidelines in useful and interesting ways. In discussing Reality Hunger, Wood says, “[Shields’] complaints about the tediousness and terminality of current fictional convention are well-taken: it is always a good time to shred formulas.” I agree and doubt many would disagree, but the question is not merely one of “shred[ding] formulas,” but how and why those formulas should be shredded. One doesn’t shred the quadratic formula: it works. But one might build on it.

By the same token, we may have this “basic narrative grammar” not because novelists are conformist slackers who don’t care about finding a new way forward: we may have it because it’s the most satisfying or useful way of conveying a story. I don’t think this is true, but I concede it might be. Maybe most people won’t find major changes to the way we tell stories palatable. Despite modernism and postmodernism, fewer people appear to enjoy the narrative confusion and choppiness of Joyce than enjoy the streamlined feel of the latest thriller. That doesn’t mean the latter is better than the former—by my values, it’s not—but it does mean that the overall thrust of fiction might remain where it is.

Robert McKee, in his not-very-good-but-useful book Story: Substance, Structure, Style and the Principles of Screenwriting, gives three major kinds of plots, which blend into one another: “archplots,” which are causal in nature and finish their story lines; “miniplots,” which he says are open and “strive for simplicity and economy while retaining enough of the classical […] to satisfy the audience”; and “antiplots,” where absurdism and the like fall.

He says that as one moves “toward the far reaches of Miniplot, Antiplot, and Non-plot, the audience shrinks” (emphasis in original). From there:

The atrophy has nothing to do with quality or lack of it. All three corners of the story triangle gleam with masterworks that the world treasures, pieces of perfection for our imperfect world. Rather, the audience shrinks for this reason: Most human beings believe that life brings closed experiences of absolute, irreversible change; that their greatest sources of conflict are external to themselves; that they are the single and active protagonists of their own existence; that their existence operates through continuous time within a consistent, causally interconnected reality; and that inside this reality events happen for explainable and meaningful reasons.

The connection between this and Wood’s “basic narrative grammar” might appear tenuous, but McKee and Wood are both pointing towards the ways stories are constructed. Wood is more concerned with language; although plot and its expression (whether in language or in video) can’t be separated from one another, they can still be analyzed independently enough to make the distinction useful.

The conventions that underlie the “archplots,” however, can become tedious over time. This is what Wood is highlighting when he discusses Roland Barthes’ “reality effect,” which fiction can achieve: “All this silly machinery of plotting and pacing, this corsetry of chapters and paragraphs, this doxology of dialogue and characterization! Who does not want to explode it, do something truly new, and rouse the implication slumbering in the word ‘novel’?” Yet we need some kind of form to contain story; what is that form? Is there an ideal method of conveying story? If so, what if we’ve found it and are now mostly tinkering, rather than creating radical new forms? If we take out “this silly machinery of plotting and pacing” and dialog, we’re left with something closer to philosophy than to a novel.

Alternatively, maybe we need the filler and coordination that so many novels consist of if those novels are to feel true to life, which appears to be one definition of what people mean by “realistic.” This is where Wood parts with Barthes, or at least makes a distinct case:

Convention may be boring, but it is not untrue simply because it is conventional. People do lie on their beds and think with shame about all that has happened during the day (at least, I do), or order a beer and a sandwich and open their computers; they walk in and out of rooms, they talk to other people (and sometimes, indeed, feel themselves to be talking inside quotation marks); and their lives do possess more or less traditional elements of plotting and pacing, of suspense and revelation and epiphany. Probably there are more coincidences in real life than in fiction. To say “I love you” is to say something at millionth hand, but it is not, then, necessarily to lie.

“Convention may be boring, but it is not untrue simply because it is conventional,” and the parts we think of as conventional might be necessary to realism. In Umberto Eco’s Reflections on The Name of the Rose, he says that “The postmodern reply to the modern consists of recognizing that the past, since it cannot really be destroyed, because its destruction leads to silence, must be revisited: but with irony, not innocently.” That is often the job of novelists dealing with the historical weight of the past and with conventions that are “not untrue simply because [they are] conventional.” Eco and Wood both use the example of love to demonstrate similar points. Wood’s is above; Eco says:

I think of the postmodern attitude as that of a man who loves a very cultivated woman and knows he cannot say to her, ‘I love you madly,’ because he knows that she knows (and that she knows that he knows) that these words have already been written by Barbara Cartland. Still, there is a solution. He can say, ‘As Barbara Cartland would put it, I love you madly.’ At this point, having avoided false innocence, having said clearly that it is no longer possible to speak innocently, he will nevertheless have said what he wanted to say to the woman: that he loves her, but he loves her in an age of lost innocence. If the woman goes along with this, she will have received a declaration of love all the same. Neither of the two speakers will feel innocent, both will have accepted the challenge of the past, of the already said, which cannot be eliminated […]

I wonder if every age thinks of itself as “an age of lost innocence,” only to be later looked on as pure, naive, or unsophisticated. Regardless, for Eco postmodernism requires that we look to the past long enough to wink and then move on with the story we’re going to tell in the manner we’re going to tell it. Perhaps Chang-Rae Lee doesn’t do so in The Surrendered, which is the topic of Wood’s essay—but like so many essays and reviews, Wood’s starts with a long and very useful consideration before coming to the putative topic of its discussion. Wood speaks of reading “Chang-Rae Lee’s new novel, ‘The Surrendered’ (Riverhead; $26.95)—a book that is commendably ambitious, extremely well written, powerfully moving in places, and, alas, utterly conventional. Here the machinery of traditional, mainstream storytelling threshes efficiently.” I haven’t read The Surrendered and so can’t evaluate Wood’s assessment.

Has Wood merely overdosed on the kind of convention that Lee uses, as opposed to convention itself? If so, it’s not clear how that “machinery” could be fixed or improved on, and the image itself is telling because Wood begins his essay by asking whether literature is like technology. My taste in literature changes: as a teenager I loved Frank Herbert’s Dune and now find it almost unbearably tedious. Other revisited novels hold up poorly because I’ve overdosed on their conventions and start to crave something new—a lot of fantasy flattens over time like opened soda.

Still, I usually don’t know what “something new” entails until I read it. That’s the problem with saying that the old way is conventional or boring: that much is easier to observe than the fix. Wood knows it, and he’s unusually good at pointing to the problems of where we’ve been and pointing to places that we might go to fix it (see, for example, his recent essay on David Mitchell, who I now feel obliged to read). This, I suspect, is why he is so beloved by so many novelists, and why I spend so much time reading him, even when I don’t necessarily love what he loves. The Quickening Maze struck me as self-indulgent and lacking in urgency, despite the psychological insight Adam Foulds offers into a range of characters’ minds: a teenage girl, a madman, an unsuccessful inventor.

I wanted more plot. In How Fiction Works, Wood quotes Adam Smith, writing in the eighteenth century, on how writers use suspense to maintain reader interest, and then says that “[…] the novel [as an art form; one could also say the capital-N Novel] soon showed itself willing to surrender the essential juvenility of plot […]” Yet I want and crave this element that Wood dismisses—perhaps because of my (relatively) young age: Wood says that Chang-Rae Lee’s Native Speaker was “published when the author was just twenty-nine,” older than I am. I like suspense and the sense of something major at stake, and that could imply that I have a weakness for weak fiction. If so, I can no more help it than someone who wants chocolate over vanilla, or someone who wants chocolate despite having heard the virtues of cherries extolled.

When I hear about the versions of the real, reality, and realism that get extolled, I often begin to think about chocolate, vanilla, and cherries, and why some novelists write in such a way that I can almost taste the cocoa while others are merely cardboard colored brown. Wood is very good at explaining this, and his work taken together represents some of the best answers to the questions that we have.

Even the best answers lead us toward more questions that are likely to be answered best by artists in a work of art that makes us say, “I’ve never seen it that way before,” or, better still, “I’ve never seen it.” Suddenly we do see, and we run off to describe to our friends what we’ve seen, and they look at us and say, “I don’t get it,” and we say, “maybe you just had to see it for yourself.” Then we pass them the book or the photo or the movie and wait for them to say, “I’ve already seen this somewhere before,” while we argue that they haven’t, and neither have we. But we press on, reading, watching, thinking, hoping to come across the thing we haven’t seen before so we can share it again with our friends, who will say, like the critics do, “I’ve seen it before.”

So we have. And we’ll see it again. But I still like the sights—and the search.

The Shallows: What the Internet is Doing to Our Brains — Nicholas Carr

One irony of this post is that you’re reading a piece on the Internet about a book that is in part about how the Internet is usurping the place of books. In The Shallows, Carr argues that the Internet encourages short attention spans, skimming, shallow knowledge, and distraction, and that this is a bad thing.

He might be right, but his argument never establishes one essential component: the absolute link between the Internet and distraction. He cites suggestive research but never quite crosses the causal bridge from the premise that the Internet is inherently distracting (both because of links and because of the overwhelming amount of material out there) to the conclusion that we as a society and as a people are now endlessly distracted. Along the way, there are many soaring sentiments (“Our rich literary tradition is unthinkable without the intimate exchanges that take place between reader and writer within the crucible of a book”) and clever quotes (Nietzsche as quoted by Carr: “Our writing equipment takes part in the forming of our thoughts”), but that causal link is still weak.

I liked many of the points Carr made; that one about Nietzsche is something I’ve meditated over before, as shown here and here (I’ve now distracted you and you’re probably less likely to finish this post than you would be otherwise; if I offered you $20 for repeating the penultimate sentence in the comments section, I’d probably get no takers); I think our tools do cause us to think differently in some way, which might explain why I pay more attention to them than some bloggers do. And posts on tools and computer setups and so forth seem to generate a lot of hits; Tools of the Trade—What a Grant Writer Should Have is among the more popular Grant Writing Confidential posts.

I use Devonthink Pro as described by Steven Berlin Johnson, which supplements my memory and acts as research tool, commonplace book, and quote database, and probably weakens my memory while allowing me to write deeper blog posts and papers. Maybe I remember less in my mind and more in my computer, but it still takes my mind to give context to the material copied into the database.

In fact, Devonthink Pro helped me figure out a potential contradiction in Carr’s writing. On page 209, he says:

Even as our technologies become extensions of ourselves, we become extensions of our technologies […] every tool imposes limitations even as it opens possibilities. The more we use it, the more we mold ourselves to its form and function.

But on page 47 he says: “Sometimes our tools do what we tell them to. Other times, we adapt ourselves to our tools’ requirements.” So if “sometimes our tools do what we tell them to,” then is it true that “the more we use it, the more we mold ourselves to its form and function”? The two statements aren’t quite mutually exclusive, but they’re close. Maybe reading Heidegger’s Being and Time and Graham Harman’s Tool-Being will clear up or deepen whatever confusion exists, since Heidegger a) went deep but b), like many philosophers, is hard to read and is closer to a machine for generating multiple interpretations than an illuminator and simplifier of problems. This could apply to philosophy in general as seen from the outside.

This post mirrors some of Carr’s tendencies, like the detour in the preceding paragraph. I’ll get back to the main point for a moment: Carr’s examples don’t necessarily add up to proving his argument, and some of them feel awfully tenuous. Some are also inaccurate; on page 74 he mentions a study that used brain scans to “examine what happens inside people’s heads as they read fiction” and cites Nicole K. Speer’s journal article “Reading Stories Activates Neural Representations of Visual and Motor Experiences,” which doesn’t mention fiction and uses a memoir from 1951 as its sample text.

Oops.

That’s a relatively minor issue, however, and one that I only discovered because I found the study interesting enough to look up.

Along the way in The Shallows we get lots of digressions, and many of them are well-trod ones: the history of the printing press; the origins of commonplace books; the early artificial intelligence program ELIZA; Frederick Winslow Taylor and his interest in efficiency; the plasticity of the brain; technologies that have been used for various purposes, including as metaphors.

Those digressions almost add up to one of my common criticisms of nonfiction books, which is that they’d be better as long magazine articles. The Shallows started as one, and one I’ve mentioned before: “Is Google Making Us Stupid?” The answer: maybe. The answer now, two years and 200 pages later: maybe. Is the book a substantial improvement on the article? Maybe. You’ll probably get 80% of the book’s content from the article, which makes me think you’d be better off following the link to the article and printing it—the better not to be distracted by the rest of The Atlantic. This might tie into the irony that I mentioned in the first line of this post, which you’ve probably forgotten by now because you’re used to skimming works on the Internet, especially moderately long ones that make somewhat subtle arguments.

Offline, Carr says, you’re used to linear reading—from start to finish. Online, you’re used to… something else. But we’re not sure what, or how to label the reading that leads away from the ideal we’ve been living in: “Calm, focused, undistracted, the linear mind is being pushed aside by a new kind of mind that wants and needs to take in and dole out information in short, disjointed, often overlapping bursts—the faster, the better.”

Again, maybe, which is the definitive word for analyzing The Shallows: but we don’t actually have a name for this kind of mind, and it’s not apparent that the change is as major as Carr describes: haven’t we always made disparate connections among many things? Haven’t we always skimmed until we’ve found what we’re looking for, and then decided to dive in? His point is that we no longer do dive in, and he might be right—for some people; but for me, online surfing, skimming, and reading coexist with long-form book reading. Otherwise I wouldn’t have had the fortitude to get through The Shallows.

Still, I don’t like reading on my Kindle very much because I’ve discovered that I often tend to hop back and forth between pages. In addition, grad school requires citations that favor conventional books. And for all my carping about the lack of causal certainty regarding Carr’s argument, I do think he’s on to something because of my own experience. He says:

Over the last few years I’ve had an uncomfortable sense that someone, or something, has been tinkering with my brain, remapping the neural circuitry, reprogramming the memory. My mind isn’t going—so far as I can tell—but it’s changing. I’m not thinking the way I used to think. I feel it most strongly when I’m reading. I used to find it easy to immerse myself in a book or a lengthy article. My mind would get caught up in the twists of the narrative or the turns of the argument, and I’d spend hours strolling through long stretches of prose. That’s rarely the case anymore. Now my concentration starts to drift after a page or two. I get fidgety, lose the thread, begin looking for something else to do. I feel like I’m always dragging my wayward brain back to the text. The deep reading that used to come naturally has become a struggle.

I think I know what’s going on. For well over a decade now, I’ve been spending a lot of time online, searching and surfing and sometimes adding to the great databases of the Internet.

He says friends have reported similar experiences. I feel the same way as him and his friends: the best thing I’ve found for improving my productivity and making reading and writing easier is a program called Freedom, which prevents me from getting online unless I reboot my iMac. It throws enough of a barrier between me and the Internet that I can’t easily distract myself through e-mail or Hacker News (Freedom has also made writing this post slightly harder, because during the first draft, I haven’t been able to add links to various appropriate places, but I think it worth the trade-off, and I didn’t realize I was going to write this post when I turned it on). Paul Graham has enough money that he uses another computer for the same purpose, as he describes in the linked essay, which is titled, appropriately enough, “Disconnecting Distraction” (sample: “After years of carefully avoiding classic time sinks like TV, games, and Usenet, I still managed to fall prey to distraction, because I didn’t realize that it evolves.” Guess what distraction evolved into: the Internet).

Another grad student in English Lit expressed shock when I told him that I check my e-mail at most once a day and shoot for once every two days, primarily in an effort not to distract myself with electronic kibble or kipple. Carr himself had to do the same thing: he moved to Colorado and jettisoned much of his electronic life, and he “throttled back my e-mail application […] I reset it to check only once an hour, and when that still created too much of a distraction, I began keeping the program closed much of the day.” I work better that way. And I think I read better, or deeper, offline.

For me, reading a book is a very different experience from searching the web, in part because most of the websites I visit are exhaustible much faster than books. I have a great pile of them from the library waiting to be read, and an even greater number bought or gifted over the years. Books worth reading seem to go on forever. Websites don’t.

But if I don’t have that spark of discipline to stay off the Internet for a few hours at a time, I’m tempted to do the RSS round-robin and triple check the New York Times for hours, at which point I look up and say, “What did I do with my time?” If I read a book—like The Shallows, or Carlos Ruiz Zafon’s The Shadow of the Wind, which I’m most of the way through now—I look up in a couple of hours and know I’ve done something. This is particularly helpful for me because, as previously mentioned, I’m in grad school, which means I have to be a perpetual reader (if I didn’t want to be, I’d find another occupation).

To my mind, getting offline can become a comparative advantage because, like Carr, “I’ve had an uncomfortable sense that someone, or something, has been tinkering with my brain,” and that someone is me and that someone is the Internet. But I can’t claim this is true for all people in all places, even as I tell my students to try turning off their Internet access and cell phones when they write their papers. Most of them no doubt don’t. But the few who do learn how to turn off the electronic carnival are probably getting something very useful out of that advice. The ones who don’t probably would benefit from reading The Shallows because they’d at least become aware of the possibility that the Internet is rewiring our brains in ways that might not be beneficial to us, however tenuous the evidence (notice my hedging language: “at least,” “the possibility,” “might not”).

Alas: they’re probably the ones least likely to read it.

Melville and the theme of boredom

I’m writing a paper on Melville’s Pierre: or, The Ambiguities, which is slowly driving me crazy (I leave it to the reader to decide whether I refer to paper or book). While searching the library last week, I came across a book whose content has no doubt been contemplated by many a student over the years:

In case you can’t read the spine, it says Melville and the Theme of Boredom. If I hadn’t seen it in a research library, I’d assume the title to be a work of parody.

Rereading Nick Hornby’s High Fidelity

There are two really remarkable things about High Fidelity: how funny it is and how well constructed it is, especially given that the subject matter (romantic entanglements and existential dilemmas for the aging man and his relationships) could easily be a plotless mess.

A novel about extended adolescence (or adolescence in general) can become vague, wishy-washy, and meandering. I’m trolling for specific examples of constructedness, but most aren’t as good out of context as they are in it. Still, the novel moves: it starts with Rob’s “desert-island, all-time, top five most memorable split-ups, in chronological order,” proceeds through them, brings him back to the cause of his most recent breakup, propels him forward to his most recent hook-up, and then moves through scenes involving a funeral, a dinner party, a move-out, and a real party, each of which feels developed and connected to the others. There’s a strong sense of Rob moving, of him both acting and being acted upon, that’s so often absent in similar novels, like Bret Easton Ellis’ Less Than Zero, Richard Price’s Ladies’ Man, or Kate Christensen’s Trouble, all of which ramble and drift and make you long for the cohesiveness you don’t realize you’re missing until you see something like High Fidelity, or Elmore Leonard’s caper novels, or Robert Penn Warren’s All the King’s Men.

Not that jokes are everything, but High Fidelity is filled with them, and the tremendous humor gives poignance to moments of seriousness, especially when those moments are tinged with existential fear about the future and one’s social position:

You need as much ballast as possible to stop you from floating away; you need people around you, things going on, otherwise life is like some film where the money ran out, and there are no sets, or locations, or supporting actors, and it’s just one bloke on his own staring into the camera with nothing to do and nobody to speak to, and who’d believe in this character then?

(Not to worry: in the next paragraph, a woman asks “Have you got any soul?” and the narrator thinks, “That depends […] some days yes, some days no.”)

Rob both rationalizes and sees himself as others might:

Me, I’m unmarried—at the moment as unmarried as it’s possible to be—and I’m the owner of a failing record shop. It seems to me that if you place music (and books, probably, and films, and plays, and anything that makes you feel) at the center of your being, then you can’t afford to sort out your love life, start to think of it as the finished product. You’ve got to pick it up, keep it alive and in turmoil, you’ve got to pick at it and unravel it until it all comes apart and you’re compelled to start all over again […] Maybe Al Green is directly responsible for more than I ever realized.

He knows the argument is wrong and unlikely—art comes from the most unlikely places and conditions, much like the appreciation of art—yet he still half-believes it, just as we half-believe the joking things we say to ourselves to get through the day, or to convince ourselves of our value and self-worth. The alternative is often a depressing sense of how you look in others’ eyes, a kind of objectivity that might be a cure worse than the disease of being wrong. This doubleness of Rob’s view—he’s joking, but aware that he’s half-serious, which makes the joke funnier—is one of the novel’s great pleasures.

To return to the blockquote above: it should be obvious, based on the plenitude bordering on plethora of novels about marriage in all its configurations (see: the collected works of Updike and Roth), that one’s love life isn’t finished until one decides it is or one dies. Rob knows this: he warns against starting “to think of it as finished,” rather than knowing it is finished. And blaming your love life on listening to pop music is a cute pop psychology theory that’s hilariously wrong and yet plausible enough for us to appreciate it.

The novel’s humor and voice combine with its structure to give it meaning where so many not-dissimilar setups fail. The TV show Californication, though mildly entertaining, is basically about the difficulties of information hiding: Hank is a frequently blocked writer who derives pleasure from sleeping with various women, which he in turn has to conceal from various women because of the potential sexual and emotional side effects of revelation. But the show trades in a narrow range: who can Hank sleep with, and who matters enough to keep it from? If the show has a larger plot, it’s not evident: Hank’s relationship with his ex-wife, whose name I can’t remember because she isn’t that important to the show, oscillates in a narrow band between reconciliation and estrangement from which it cannot escape without eliminating the show’s potential for future seasons. Although the show isn’t pornography, its limits become steadily clearer over time.

One of the few disappointing things about High Fidelity isn’t the book itself but the other output of its author. Like Robert Penn Warren, Hornby seems to have only one really, really good book in him; I’ve at least started most of the rest of his work. Some books, like A Long Way Down, aren’t bad but aren’t compelling, and they don’t have the sense of drive and purpose High Fidelity has. They’re like the story about a dream your friend wants to relay in exhaustive detail. The events in those other books are exhausting even when the books are short, and they don’t have the pep and vigor of High Fidelity, which has almost too many short and wonderful asides to mention them all.

The end of High Fidelity trends toward sentimentality, but it’s saved by a continuing self-awareness that its concerns are silly. By making them serious while retaining its essential lightness, the novel works. And, the ending implies, life trends toward sentimentality: if you never indulge in any sort of authentic feeling, then you’re left alone, an agglomeration of preferences in music, books, or movies, dangling before a world that will, more likely than not, be mostly indifferent to your existence. But that’s an awfully heavy premise: I’d rather hear about Rob’s top five breakups and the linguistic implications of “I haven’t slept with him yet” as compared to “I haven’t seen Evil Dead 2 yet.”

Trolls, comments, and Slashdot: Thoughts on the response to Avatar

The vast majority of the comments attached to “Thoughts on James Cameron’s Avatar and Neal Stephenson’s ‘Turn On, Tune In, Veg Out’” are terrible. They tend toward mindless invective and avoid careful scrutiny of what I actually wrote; they’re quite different from the comments this blog normally gets, largely because I submitted the Avatar post to Slashdot, home of the trolls. One friend noted the vitriol and said in an e-mail, “Okay, the Slashdot link explains the overall tone of the comments your ‘Avatar’ post is attracting.”

Part of the reason the comments are so bad is their hit-and-run nature, especially on larger sites. If you have something substantial to say, and particularly if you regularly have something substantial to say, you tend to get a blog of your own. I wrote about this phenomenon in “Commenting on comments”:

In “Comment is King,” Virginia Heffernan writes in the New York Times, “What commenters don’t do is provide a sustained or inventive analysis of Applebaum’s work. In fact, critics hardly seem to connect one column to the next.” She notes that comments are often vitriolic and ignorant, which will hardly surprise those used to reading large, public forums.

Furthermore, it’s easier and demands less thought to post hit-and-run comments than it is to really engage an argument. I deleted the worst offenders and sent e-mails to their authors with a pointer to Paul Graham’s How To Disagree; none responded, except for one guy who didn’t understand the point I was trying to make even after three e-mails, when I gave up (“never argue with fools because from a distance people can’t tell who is who”). The hope is that by consciously cultivating better comments and by not responding to random insults, the whole discussion might improve.

(Paul Graham has given the subject a lot of thought too: he even wrote an essay about trolls. As he says, “The core users of News.YC are mostly refugees from other sites that were overrun by trolls.”)

Not every comment I got was terrible—this one, from a person named “Dutch Uncle,” was probably the best argued of the lot, and it mostly avoided ad hominem attacks. It was, however, very much the exception.

Most comments tended to deal in generalities and not to cite specific parts of my argument. In this respect, they have the same problems I see in freshman papers, which often make generalizations and abstractions without the necessary concrete base. This happens so often that I’ve begun keeping a list of all the things freshmen have told me are “human nature,” with a special eye toward placing contradictory elements next to each other, and in class I now ceaselessly emphasize specifics in arguments.

Since I’ve seen this disease before, I’ve already thought about it, and I think the generalization problem is linked to the problem of close reading, which is a really hard skill to develop and one I didn’t develop in earnest till I was around 22 or 23. Even then it was only with a tremendous amount of effort and practice on my part. Close reading demands that you consider every aspect of a writer’s argument, that you pay attention to their word choices and their sentences, and that you don’t attribute to them opinions they don’t necessarily hold. Francine Prose wrote a whole book on the subject called Reading Like a Writer, but the book is a paradox: in order to develop the close reading skills she demonstrates, you have to be able to closely read her book in the first place, which is hard without good teaching.

Mentioning Francine Prose brings up one other common point I saw in the comments: few pointed to sources or ideas outside themselves, and allusions were rare. In the best writing I see, such elements are common. That isn’t to say every time you post a comment, you should cite four peer-reviewed sources and a couple of blog posts, but ideas are often stronger when they show evidence of learning and synthesis from others. In my Avatar post, I brought together Greg Egan, a New Yorker article, Alain de Botton citing Wilhelm Worringer, Robert Putnam’s Bowling Alone, the Neal Stephenson essay, and Star Trek. Now, my argument about Avatar could still be totally wrong, as could an essay with a hundred citations, but at the very least other writers’ thoughts usually show that more thought has gone into an essay, or a comment. Almost every newspaper or magazine piece worth reading cites at least half a dozen and often many more sources: quotes, other articles, journals, books, and more. That’s part of what makes The Atlantic and The New Yorker so worth reading.

Citations are common because things that are really worth arguing about require incredible background knowledge before one can say anything intelligent. The big response I’ve had to many of the comments, especially the deleted ones, is a suggestion to read more: read How Fiction Works, The Art of Criticism, and Reading Like a Writer, then post your angry Internet screeds after you’ve thought more about what you’re arguing. These kinds of pleas probably fall on proverbially deaf ears, but at least now I have somewhere to point bad commenters in the future.

I think one reason I find Slashdot conversations much less interesting than I did as a teenager isn’t that the nature of the site has changed but that I’ve learned enough to know how hard it is to really know something. Now I’m often more engaged by pure information and less often by invective and pure opinion, especially when that opinion isn’t backed up by much. The information/opinion binary is of course false, especially because the kind of information one presents often leaves pointers to one’s opinion, but it’s nonetheless useful to consider when you’re posting on Internet forums—or writing anywhere.

Incidentally, one reason I like reading Hacker News so much is that the site consciously tries to cultivate smarter, deeper conversation, much as I wish to; it’s trying to meld technical and cultural forces into a system that rewards and encourages high-content comments of the sort I mostly didn’t get regarding Avatar. I submitted the Avatar post to Hacker News before Slashdot, and the first, relatively good comment came from a Hacker News reader.

The problem of trolls is also very old, and probably goes back to the Internet’s beginnings—hence the need for a word like “troll,” with a definition in the Jargon File. As a result, I’m probably not going to change much by writing this, and to judge from my e-mail correspondent, trying to do so via e-mails and blog posts is mostly hopeless. But a part of me is an optimist who thinks or hopes change is possible and that by having a meta conversation about the nature of trolling, one can avoid the behavior in general, at least on a small scale. At Slashdot or Reddit scales, however, the hope fades, and one simply experiences the tragedy of the commons.

EDIT: Robin Hanson has an interesting alternative, though not incompatible, theory in “Why Comments Snark”:

Comments disagree more than responding posts because post, but not comment, authors must attract readers. Post authors expect that reader experiences of a post will influence whether those readers come back for future posts. In contrast, comment authors less expect reader experience to influence future comment readership; folks read blog posts more because of the post author than who they expect to author comments there.

The death of literature part 11,274, from Saul Bellow

“From the first, too, I had been warned that the novel was at the point of death, that like the walled city or the crossbow, it was a thing of the past. And no one likes to be at odds with history. Oswald Spengler, one of the most widely read authors of the early ’30s, taught that our old tired civilization was very nearly finished. His advice to the young was to avoid literature and the arts and to embrace mechanization and become engineers.”

That’s from Saul Bellow’s “Hidden Within Technology’s Empire, a Republic of Letters” for the New York Times’ Writers on Writing collection. Fortunately he didn’t listen to the various Spenglers of his day. I often find it amusing to read the various predictions of literature’s demise, which have so frequently been trumpeted in the 20th Century and now the 21st; Orwell does a good job with the same theme in his collected essays.

Although being wrong in the past doesn’t necessarily equate to being wrong in the present, the poor track records of both religious apocalypse and the demise of reading tend to make me skeptical of new claims about either.

The Writer’s Notebook: Craft Essays from Tin House

I rather liked the eclectic material in Writers on Writing: Collected Essays from The New York Times and its sequel; many of the short essays didn’t impart much, but they fascinated because of the range of their concerns and how well written they were, whether about people who always ask authors where they get their ideas, or what kind of typewriter/computer/paper/pen they use, or the importance of avoiding cliché. The subjects stay with me even when I haven’t read the novels of the authors writing, and the collections stay with me because they’re correct often enough in their descriptions of problems, if not always in their conclusions, that they made me evaluate writing anew. Yes, some specimens had apparently been written either for the money or because the author had nothing else to say, but at eight hundred or so words each they were easy enough to skip. Word limits also have the benefit of forcing the author to be concise, logorrhea being an occupational hazard for many.

Given that, I went into The Writer’s Notebook with sympathy in mind. Its contents have the benefits and drawbacks of length: Matthea Harvey’s “Mercurial Worlds of the Mind” is clever, but a sharp editor might have cut the section on what 2-D versus 3-D means. Her opening metaphor is clever but overly broad: “Trying to write about imaginary worlds is like breaking a thermometer in a classroom, then trying to collect the little balls of mercury that go shooting off under the desks, down the hallways.” Maybe: but I don’t get the impression that’s how Tolkien felt as he invented Middle-earth, as the myths of Lord of the Rings feel built and layered, rather than chased down. In my own world-building efforts, I don’t at all feel like I’m chasing mercury.

Despite the first sentence, Harvey’s essay works. Someone must have told many of these writers that you have to start with a bang even if its decibel level doesn’t correspond to accuracy. For example, Tom Grimes’ “There Will Be No Stories in Heaven” is about how fiction uses time, but his lead says, “To me, we read and write stories for a simple reason: we all die.” Good thing his first two words qualify all of what follows! Despite the off note at the beginning, his essay works, and so does Harvey’s; she shows that what one must do to build fantastic worlds is not so different from what one must do to build a “realistic” one. You need rules, size, and so forth; each of those subjects could be an essay unto itself. When you’ve finished Harvey, Stanislaw Lem’s Microworlds is the next logical step.

Elsewhere, Margot Livesey’s “Shakespeare for Writers” might be shallow for those who’ve read John Updike on the Bard, but it still examines Shakespeare from the structural standpoint much criticism leaves out, asking, for example, why so much of Shakespeare makes implausible leaps of character and plot yet gets away with it. As she writes:

In A Midsummer Night’s Dream the drug-induced affections of the lovers seem, in depth and passion, very similar to their real feelings. Motivation is often left out and provided, or not, by the actors and, of course, by the readers and viewers.

Why? The audience doesn’t have to ask the question, but the writer must, and maybe the real lesson for the writer is that language excuses all else; Livesey quotes some of the many, many examples of where Shakespeare nails speeches through elaborate, figurative language. The idea of language excusing all else brings me back to Henry James, since I didn’t love Portrait of a Lady: its plot was empty even if its language wasn’t. Shakespeare’s plots usually charge like cavalry. But they don’t overturn feelings, and they don’t override each character’s interiority. Livesey’s essay explains how, and if I could summarize it, I would.

The Writer’s Notebook continues a conversation about aesthetic form, meaning, and creation that’s lasted for centuries if not longer; its essays are a small effort to map an infinite space and discuss the fundamental choices writers must make: where to revise; whether one should organize a story around a “clock” or time period; how to use language; historical influence; and more. Some might not be finding new space so much as reconfiguring what we already have. Anna Keesey’s “Making a Scene” uses the terms “outfolding” and “infolding” to describe how a writer can move a story forward primarily by dialog and action or primarily by interior thoughts, respectively, with Hemingway and Virginia Woolf as examples. The line isn’t perfectly clear, and the point about how things happen either within or outside a character has been made in various ways before, but I’d never seen it articulated so well.

Collectively, many essays from The Writer’s Notebook also keep an eye on the past, toward how history affects or should affect writers and how genre and literature aren’t as separate as they appear. None are so gauche as to come out and say either point, but both are there, lurking beneath the essays, because for a writer, who cares if one is writing capital-L Literature? You’re always in pursuit of whatever works, and if it works, maybe it is, or will become, Literature, which is fundamentally about stories, how we tell stories, and how we listen to them.

But isn’t this obvious?

“People who love to read are always looking for that next great find, as the marketing department at every major publishing house knows.”

This is from Seduced by the Blurb.
