Working out the plot with the Rejecter, Carlos Ruiz Zafón, and other friends

Over at the Rejecter, someone is asking whether an MFA program will teach her how to structure her novels. Actually, she’s asking about the professional and intellectual utility of MFA programs, but I want to focus on the plot issue, since that’s what the Rejecter doesn’t address. I had the same problems as the correspondent, but I don’t think I have them any more.

Specifically, the problem:

I have been writing novels since I was about seven. I literally think about it all the time. However, try as I might I have never been able to get beyond the 40,000 word mark before losing the plot and momentum of my story and deciding to start something else entirely. I’m a journalist on a big women’s glossy in the UK so it’s not getting the words down on paper that’s the problem, it’s rather getting my plot from A to B that stumps me.

That sounds really similar to me: the first two novels I actually completed are now, in retrospect, unpublishable, although I didn’t know that at the time and couldn’t have articulated why. Now I know: nothing happened. The novels had interesting premises but no action. There were a lot of bits of clever dialog and some good scenes, but nothing that held those scenes together. The novels lacked narrative tension.

The next two I wrote were and are publishable; they got a lot of agent activity and requests, but no agent took me on. Ditto for the latest, currently titled Asking Alice, which is still out. Look for my name in lights shortly.

One big thing changed between the first two unpublishable novels and the later three: I started writing outlines, which I’d previously considered unnecessary because I’m so smart that I can hold everything in my head (oops). Those outlines were and are pretty loose and fluid, but they’re outlines nonetheless, in which I asked myself essential questions about each chapter: what happens in it? Why? Why this chapter and not some other? What’s the central tension? What does each character worry about? These kinds of questions guided me toward writing better plots because I thought about how information was doled out and what kinds of things the characters were struggling to achieve. In addition, I thought about how drama works: is something important happening in this chapter? What is it?

If I can’t identify what’s important or why the characters should care, I’m probably doing something wrong.

This doesn’t mean each chapter has to end with someone getting shot, or the heroine declaring her love, or the revelation of a shocking fact, or an alien invasion.* But it does mean that I have to at least think about what the scene or chapter is conveying to the reader, what is happening to the characters, how it relates to the previous scene or the next scene, and, perhaps most importantly, what dilemmas it raises that have to be resolved in the future.

Every scene or almost every scene needs some kind of tension or uncertainty. Once again, this doesn’t necessarily mean a guy holding a gun: it could be highly cerebral. In Adam Foulds’ The Quickening Maze, the tension in some scenes concerns the interior dialog and sanity of John Clare: is he sane? Are we seeing the mind of someone else, or are we seeing his mind, which has assumed the shape of someone else? Those scenes can be quite tense but also quite subtle. Others can hinge on a piece of information, as when Randy Waterhouse realizes he’s actually building a datahaven in Cryptonomicon.

Over time, through reading and writing, you’ll learn where to end scenes and how the form of the novel works, and by “you” I mean “me.” You have to learn it even if you’re the kind of writer who wants to break that form, because breaking it successfully requires knowing it first. I remember being on the high school newspaper and going to journalism contests. A lot of traditional news articles end with a whimsical or funny quote that’s not essential but does a good job of encapsulating the story. I’d read enough articles to have picked that idea up, and at one of the competitions I remember taking notes as a source spoke, putting a star next to something he said, and thinking, “that’s my final quote.” I wrote the piece and later looked at what the judges had to say; I don’t think that was one of the times I won anything, but I do remember them commenting on the money quote at the end.

They did it because I’d successfully synthesized a principle no one had explicitly stated but that nonetheless made my article a little bit better.

Learning to write scenes is similar: you can’t enumerate all the principles involved, but over time you start to feel them. Once you become attuned to reading novels for what each scene does or what tensions exist in a scene, you’ll probably become better at plotting them for yourself—if you’re anything like me, at least. And you might start telling stories that build plots. I talked out a lot of Asking Alice, the novel making the rounds with agents right now, with a friend. It didn’t hurt and might’ve helped. Sometimes it’s also fun to make up a plot when you’re out. Michael Chabon portrays this in Wonder Boys, when the blocked English professor Grady Tripp and his gay editor, Crabtree, are in a bar:

‘Hey,’ said Crabtree, ‘look at that guy.’ […]
‘Who? Oh my.’ I smiled. ‘The one with the hair sculpture.’ […]
‘He’s a boxer,’ I said. ‘A flyweight.’
‘He’s a jockey,’ said Crabtree. ‘His name’s, um, Curtis. Hardapple.’
‘Not Curtis,’ I said.
‘Vernon, then. Vernon Hardapple. The scars are from a—from a horse’s hooves. He fell during a race and got trampled.’
‘He’s addicted to painkillers.’
‘He’s got a plate in his head.’

And they go on from there. They could be building a plot (telling the story of Hardapple’s rise and fall as a jockey) or they could be building the background. Either way, they’re doing something useful. Where do stories come from? Everywhere and nowhere. They’re not talking about plot, not just yet, but they begin moving in that direction.

The original querier to the Rejecter has identified a particular weakness, which is a good start. My proposed solution: read some novels she admires; pick them apart and write outlines that focus on why characters do what they do, what information they reveal when, and so on. Some writers who I think do this particularly well: Ruiz Zafón, as mentioned; Elmore Leonard, especially in Get Shorty and Out of Sight, which I still think are his best; Anita Shreve in Testimony; Graham Greene in The End of the Affair; Umberto Eco in The Name of the Rose. Mystery and detective novels are often very good at plot because all they have is plot, though having nothing but plot is not a path I’d recommend.

If anyone out there is sufficiently interested, drop me an e-mail and I’ll send you my quick-and-dirty outline of The Angel’s Game, although I wouldn’t recommend reading it until after you’ve read the novel. Ruiz Zafón is astonishingly good at making each scene count in both this novel and The Shadow of the Wind; one shocking thing about reading The Prince of Mist is how weak that novel is in comparison. Ruiz Zafón is clearly someone who’s learned a lot about writing over the course of his publishing career, and he’s an example that makes me more hesitant to condemn not-very-good first novels—even those that get published. People learn over time. I’ve read Saul Bellow’s Dangling Man and thought it was okay—but no Herzog.

That’s not a slam: very few artists of any kind in any medium do their best work on their first try. Like anyone else in any other activity, artists learn as they go along, and they have to assimilate a huge body of material.

Anyway, I’m not sure how many MFA programs teach plot or tell their students some ways to think about plot. As an undergrad, I took a lovely novel-writing course from a guy named Bill Tapply, who passed away last year. Although I got a lot out of his class, he seldom talked much about plot, which in retrospect I find curious because his Brady Coyne mysteries work very well in this respect. From chatting with others who’ve taken fiction writing classes, I gather that this is common: they talk about language and ideas and description and all kinds of things, but not plot. If I ever end up teaching in an MFA program, you can bet I’m going to talk about plot—not to the exclusion of everything else, certainly, but enough to give a sense of what my 19-year-old self needed to hear. And, from what the correspondent to the Rejecter says, what she needs to hear too.

This is important because I’ve read so many novels with dynamite first halves and dreary second halves, especially in literary fiction (one reason I like Carlos Ruiz Zafón and have been writing about him a lot lately: his novels hold together). Sometimes otherwise very good novels fall apart plot-wise. I started Sam Lipsyte’s The Ask a few days ago, based on an agent’s advice,** but gave it up because it feels too episodic and disconnected; the novel strays so far that it loses me as I find my mind wandering and myself thinking, “So what? What’s at stake here?” By halfway through, the answer frequently felt like “nothing.” Too bad: the first page of The Ask is terrific. A lot of the droll humor works. It just lacks…

something.

Too bad I can’t better define what that something is. But I can talk around it enough to know when it’s missing.***


* For the record, zero of my novels thus far have featured an alien invasion, although I’m not opposed to that sort of thing on principle and may eventually deploy it. One of my ambitions is to eventually write a novel that begins as a fairly straightforward love story about modern urban couples / triangles and angst that suddenly shifts, about halfway through, when aliens attack. I think this would be totally awesome.

** It was a rejection, but not a form rejection, which counts for a lot when they pile up and you’re looking for some pattern with no more success than people who see secret signals in the white noise of a random universe: “I hope you receive that as no more damning than had I written ‘I like hamburger dill pickles, but I love capers.’ ”

*** I’d like a book on plot that’s as good as How Fiction Works, which I could then add to my post on The very very beginning writer. Suggestions would be appreciated. The books I’ve found that deal with plot tend to be of the “heroine reveals her love” variety that I mocked above, instead of the “this is how literature might work” variety that James Wood and Francine Prose offer.

Someone has probably already written a lot of what I wrote above. I just don’t know who that person is or where their work is.

The dangers of over-reliance on evolutionary biology and evolutionary psychology, courtesy of Ernest Gellner and Henry Farrell

Primitive man has lived twice: once in and for himself, and the second time for us, in our reconstruction. Inconclusive evidence may oblige him to live such a double life forever. Ever since the principles of our own social order have become a matter of sustained debate, there has been a persistent tendency to invoke the First Man to settle our disputes for us. His vote in the next general election is eagerly solicited. It is not entirely clear why Early Man should possess such authority over our choices.

That’s from Ernest Gellner’s Plough, Sword, and Book: The Structure of Human History. Today, we wouldn’t call primitive man “primitive man,” because “primitive” is prejudicial, and “people” is usually used instead of “man” because it explicitly includes all humans. We would instead call “primitive man” the “pre-agrarian world” or “evolutionary times” and then continue from there. But the point Gellner is making about our habitual “reconstruction” of what that world looked like, in large part to serve the prejudices of the present, is well-taken and worth remembering in light of books like Sex at Dawn, The Mating Mind, or the entire oeuvre of evolutionary biology and psychology, which has undergone tremendous revision over the past three decades and will no doubt continue to undergo tremendous revision over the next three and beyond.

How we reconstruct that time and “invoke the First Man” should be remembered as a reconstruction and not as the last word; he shouldn’t necessarily “possess such authority over our choices” today, because what was good for people living before agriculture or before the Industrial Revolution may not be good for us now.

It helps to understand the kinds of things that influence us, but we need to be wary of cherry picking evidence to support whatever kinds of social views we already hold.

I’m reading Gellner thanks to Henry Farrell at Crooked Timber.

In Praise of William Deresiewicz

I’ve read three long, fascinating essays by English professor William Deresiewicz over the last two days: Solitude and Leadership: If you want others to follow, learn to be alone with your thoughts; Love on Campus: Why we should understand, and even encourage, a certain sort of erotic intensity between student and professor (and he’s not talking about the bed-shaking kind, unless one’s partner is in paper form); and The Disadvantages of an Elite Education: Our best universities have forgotten that the reason they exist is to make minds, not careers.

I don’t agree with everything he’s written in those pieces, but their scope and unexpectedness are refreshing: in all three cases, he takes potentially tired themes (people are distracted a lot today; a great deal of film and fiction depicts randy professors sleeping with students; and elite colleges are training too many hoop jumpers instead of thinkers) and goes with them to unexpected places: how Heart of Darkness depicts bureaucracy and finding yourself; the erotic intensity of ideas and how they can be mingled with erotic intensity of the more conventional variety; and the entitlement complex that paradoxically can scare people into hewing to the narrow path. Even my summaries of a small portion of where he goes in each essay are hopelessly inadequate, which is part of what makes those essays so good.

The three are not all that separate: they all deal with conformity, individuality, college life, and the place of the university in society. Read together, they have more cohesiveness than many entire books. Most importantly, however, they go places I haven’t even thought about going, which is their most useful and unusual feature of all.

Jeff Sypeck pointed me to one and Robert Nagle to another; I only know both through e-mail, which is a very small but real demonstration of the Internet’s true power to make connections. All three essays might play into my eventual dissertation; at the very least, they’ve changed the way I think about many of the issues discussed, which to me is more valuable still.

The Shallows: What the Internet is Doing to Our Brains — Nicholas Carr

One irony of this post is that you’re reading a piece on the Internet about a book that is in part about how the Internet is usurping the place of books. In The Shallows, Carr argues that the Internet encourages short attention spans, skimming, shallow knowledge, and distraction, and that this is a bad thing.

He might be right, but his argument misses one essential component: the causal link between the Internet and distraction. He cites suggestive research but never quite crosses the causal bridge from the premise that the Internet is inherently distracting (both because of links and because of the overwhelming amount of material out there) to the conclusion that we as a society and as a people are now endlessly distracted. Along the way, there are many soaring sentiments (“Our rich literary tradition is unthinkable without the intimate exchanges that take place between reader and writer within the crucible of a book”) and clever quotes (Nietzsche as quoted by Carr: “Our writing equipment takes part in the forming of our thoughts”), but that causal link is still weak.

I liked many of the points Carr made; that one about Nietzsche is something I’ve meditated over before, as shown here and here (I’ve now distracted you and you’re probably less likely to finish this post than you would be otherwise; if I offered you $20 for repeating the penultimate sentence in the comments section, I’d probably get no takers); I think our tools do cause us to think differently in some way, which might explain why I pay more attention to them than some bloggers do. And posts on tools and computer setups and so forth seem to generate a lot of hits; Tools of the Trade—What a Grant Writer Should Have is among the more popular Grant Writing Confidential posts.

I use Devonthink Pro as described by Steven Berlin Johnson, which supplements my memory and acts as a research tool, commonplace book, and quote database, and probably weakens my memory while allowing me to write deeper blog posts and papers. Maybe I remember less in my mind and more in my computer, but it still takes my mind to give context to the material copied into the database.

In fact, Devonthink Pro helped me figure out a potential contradiction in Carr’s writing. On page 209, he says:

Even as our technologies become extensions of ourselves, we become extensions of our technologies […] every tool imposes limitations even as it opens possibilities. The more we use it, the more we mold ourselves to its form and function.

But on page 47 he says: “Sometimes our tools do what we tell them to. Other times, we adapt ourselves to our tools’ requirements.” So if “sometimes our tools do what we tell them to,” then is it true that “The more we use it, the more we mold ourselves to its form and function?” The two statements aren’t quite mutually exclusive, but they’re close. Maybe reading Heidegger’s Being and Time and Graham Harman’s Tool-Being will clear up or deepen whatever confusion exists, since Heidegger a) went deep but b), like many philosophers, is hard to read and is closer to a machine for generating multiple interpretations than an illuminator and simplifier of problems. This could apply to philosophy in general as seen from the outside.

This post mirrors some of Carr’s tendencies, like the detour in the preceding paragraph. I’ll get back to the main point for a moment: Carr’s examples don’t necessarily add up to proving his argument, and some of them feel awfully tenuous. Some are also inaccurate; on page 74 he mentions a study that used brain scans to “examine what happens inside people’s heads as they read fiction” and cites Nicole K. Speer’s journal article “Reading Stories Activates Neural Representations of Visual and Motor Experiences,” which doesn’t mention fiction and uses a memoir from 1951 as its sample text.

Oops.

That’s a relatively minor issue, however, and one that I only discovered because I found the study interesting enough to look up.

Along the way in The Shallows we get lots of digressions, and many of them are well-trod ones: the history of the printing press; the origins of commonplace books; the early artificial intelligence program ELIZA; Frederick Winslow Taylor and his interest in efficiency; the plasticity of the brain; technologies that’ve been used for various purposes, including as metaphors.

Those digressions almost add up to one of my common criticisms of nonfiction books, which is that they’d be better as long magazine articles. The Shallows started as one, and one I’ve mentioned before: “Is Google Making Us Stupid?” The answer: maybe. The answer now, two years and 200 pages later: maybe. Is the book a substantial improvement on the article? Maybe. You’ll probably get 80% of the book’s content from the article, which makes me think you’d be better off following the link to the article and printing it—the better not to be distracted by the rest of The Atlantic. This might tie into the irony that I mentioned in the first line of this post, which you’ve probably forgotten by now because you’re used to skimming works on the Internet, especially moderately long ones that make somewhat subtle arguments.

Offline, Carr says, you’re used to linear reading—from start to finish. Online, you’re used to… something else. But we’re not sure what, or how to label the reading that leads away from the ideal we’ve been living in: “Calm, focused, undistracted, the linear mind is being pushed aside by a new kind of mind that wants and needs to take in and dole out information in short, disjointed, often overlapping bursts—the faster, the better.”

Again, maybe, which is the definitive word for analyzing The Shallows: but we don’t actually have a name for this kind of mind, and it’s not apparent that the change is as major as Carr describes: haven’t we always made disparate connections among many things? Haven’t we always skimmed until we’ve found what we’re looking for, and then decided to dive in? His point is that we no longer do dive in, and he might be right—for some people; but for me, online surfing, skimming, and reading coexist with long-form book reading. Otherwise I wouldn’t have had the fortitude to get through The Shallows.

Still, I don’t like reading on my Kindle very much because I’ve discovered that I often tend to hop back and forth between pages. In addition, grad school requires citations that favor conventional books. And for all my carping about the lack of causal certainty regarding Carr’s argument, I do think he’s on to something because of my own experience. He says:

Over the last few years I’ve had an uncomfortable sense that someone, or something, has been tinkering with my brain, remapping the neural circuitry, reprogramming the memory. My mind isn’t going—so far as I can tell—but it’s changing. I’m not thinking the way I used to think. I feel it most strongly when I’m reading. I used to find it easy to immerse myself in a book or a lengthy article. My mind would get caught up in the twists of the narrative or the turns of the argument, and I’d spend hours strolling through long stretches of prose. That’s rarely the case anymore. Now my concentration starts to drift after a page or two. I get fidgety, lose the thread, begin looking for something else to do. I feel like I’m always dragging my wayward brain back to the text. The deep reading that used to come naturally has become a struggle.

I think I know what’s going on. For well over a decade now, I’ve been spending a lot of time online, searching and surfing and sometimes adding to the great databases of the Internet.

He says friends have reported similar experiences. I feel the same way as him and his friends: the best thing I’ve found for improving my productivity and making reading and writing easier is a program called Freedom, which prevents me from getting online unless I reboot my iMac. It throws enough of a barrier between me and the Internet that I can’t easily distract myself with e-mail or Hacker News (Freedom has also made writing this post slightly harder, because during the first draft I haven’t been able to add links in various appropriate places, but I think it’s worth the trade-off, and I didn’t realize I was going to write this post when I turned it on). Paul Graham has enough money that he uses another computer for the same purpose, as he describes in the linked essay, which is titled, appropriately enough, “Disconnecting Distraction” (sample: “After years of carefully avoiding classic time sinks like TV, games, and Usenet, I still managed to fall prey to distraction, because I didn’t realize that it evolves.” Guess what distraction evolved into: the Internet).

Another grad student in English Lit expressed shock when I told him that I check my e-mail at most once a day, and sometimes only once every two days, primarily in an effort not to distract myself with electronic kibble or kipple. Carr himself had to do the same thing: he moved to Colorado, jettisoned much of his electronic life, and “throttled back my e-mail application […] I reset it to check only once an hour, and when that still created too much of a distraction, I began keeping the program closed much of the day.” I work better that way. And I think I read better, or deeper, offline.

For me, reading a book is a very different experience from searching the web, in part because most of the websites I visit are exhaustible much faster than books are. I have a great pile of books from the library waiting to be read, and an even greater number bought or gifted over the years. Books worth reading seem to go on forever. Websites don’t.

But if I don’t have that spark of discipline to stay off the Internet for a few hours at a time, I’m tempted to do the RSS round-robin and triple-check the New York Times for hours, at which point I look up and say, “What did I do with my time?” If I read a book—like The Shallows, or Carlos Ruiz Zafón’s The Shadow of the Wind, which I’m most of the way through now—I look up in a couple of hours and know I’ve done something. This is particularly helpful for me because, as previously mentioned, I’m in grad school, which means I have to be a perpetual reader (if I didn’t want to be, I’d find another occupation).

To my mind, getting offline can become a comparative advantage because, like Carr, “I’ve had an uncomfortable sense that someone, or something, has been tinkering with my brain,” and that someone is me and that someone is the Internet. But I can’t claim this is true for all people in all places, even as I tell my students to try turning off their Internet access and cell phones when they write their papers. Most of them no doubt don’t. But the few who do learn how to turn off the electronic carnival are probably getting something very useful out of that advice. The ones who don’t probably would benefit from reading The Shallows because they’d at least become aware of the possibility that the Internet is rewiring our brains in ways that might not be beneficial to us, however tenuous the evidence (notice my hedging language: “at least,” “the possibility,” “might not”).

Alas: they’re probably the ones least likely to read it.

Steve Jobs’ prescient comment

“The desktop computer industry is dead. Innovation has virtually ceased. Microsoft dominates with very little innovation. That’s over. Apple lost. The desktop market has entered the dark ages, and it’s going to be in the dark ages for the next 10 years, or certainly for the rest of this decade.”

(Emphasis added.)

—That’s from a 1996 interview with Jobs, and he was completely right: little of interest happened to the desktop interface virtually everyone uses until around 2003 or 2004, when OS X 10.3 was released. The first major useful change in desktops that I recall during the period was Spotlight in OS X 10.4, which was, not coincidentally, around the time I got a PowerBook.

Who is our authentic self, exactly?

We can tie ourselves in knots [over the cynical idea of society’s corruption and commerce’s alienation], but the fact is, the relationship between the stuff we buy and who we are, and the broader relationship among consumer culture, artistic vision, and the authentic self, is fraught with bad arguments and bad faith, and the usual themes and oppositions (between genuine needs and false wants, or between the shallowness of a branded identity and the depths of the true self) are too crude to be helpful.

That’s from Andrew Potter’s The Authenticity Hoax, which is so far a fascinating rebuttal to the idea that we’re all merely automatons, creations of the media, men in gray flannel suits, mindless conformists, better off going back to nature, incapable of meta thinking, mere cogs in the machine, alienated labor, brainwashed by Disney, or instinctive conservative reactionaries.

My authentic self appears to be the kind of person who doubts that my authentic self exists.

Charlie Stross on the real reason Steve Jobs hates Flash (and how lives change)

Charlie Stross has a typically fascinating post about the real reason Steve Jobs hates Flash. The title is deceptive: the post is really about the future of the computing industry, which is to say, the future of our day-to-day lives.

If you read tech blogs, you’ve read a million people in the echo chamber repeating the same things to one another over and over again. Some of that stuff is probably right, but even if Stross is wrong, he’s at least pulling his head more than six inches off the ground, looking around, and saying “what are we going to do when we hit those mountains up ahead?”

And I don’t even own an iPad, or have much desire to be in the cloud for the sake of being in the cloud. But the argument about the importance of always-on networking is a strong one, even if, to me, it also points to the greater importance of being able to disconnect distraction.

In the meantime, however, I’m going back to the story that I’m working on. Stories have the advantage that they’ll probably always be popular, even if the medium through which one experiences them changes. Consequently, I’m turning Mac Freedom on and Internet access off.

Non-Places: Introduction to an Anthropology of Supermodernity — Marc Augé

Marc Augé’s Non-Places: Introduction to an Anthropology of Supermodernity is fascinating because it describes a process and some places that almost all of us feel like we’ve been to. In my post about Lewis Hyde’s The Gift, I wrote about one such bureaucratized space in the form of airports:

As I write this, I sit in a Tucson airport bar. Airports have everything wrong with them: they are transitional, one-off spaces filled with strangers, the “restaurants” they offer consist of pre-made food with character slightly above a TV dinner, and for some reason we as a society have decided that constitutional rights and privacy don’t apply here. People I don’t know can stop me at will, and merely flying requires that I submit to security theater that is simultaneously ineffective and invasive. Everything is exorbitantly expensive but not of particularly high quality. Menus don’t have beer prices on them.

The airport, in short, is designed to extract money from a captive audience; this might be in part why I don’t care much for sports stadiums, Disneyland, and other areas where I feel vaguely captive.

And it’s miserable, at least to me, and Augé traces that feeling at least partially to a place’s relationship, or lack thereof, with history. His book is useful because it offers a theoretical framework for understanding why we think of some places the way we do, and frustrating because it’s written in French academic-ese. John Howe translates it but can’t change the fact that most of the book is actually concerned with how ethnologists view places. In other words, the major action described by the title isn’t reached until about two thirds of the way through the book. It takes until page 94, nearly at the end, to get a somewhat clear definition of what constitutes a “non-place”:

Clearly the word ‘non-place’ designates two complementary but distinct realities: spaces formed in relation to certain ends (transport, transit, commerce, leisure), and the relations that individuals have with these spaces. Although the two sets of relations overlap to a large extent, and in any case officially (individuals travel, make purchases, relax), they are still not confused with one another; for non-places mediate a whole mass of relations, with the self and with others, which are only indirectly connected with their purposes. As anthropological places create the organically social, so non-places create solitary contractuality.

Any time someone uses “clearly” or “obviously,” they should have their text examined more carefully, because anything that is genuinely clear or obvious doesn’t need the modifier. The text itself is unsure: what are “certain ends” as opposed to “non-certain ends?” I’m not sure: maybe he means where people live. What is the ‘organically social?’ Presumably something like neighborhoods, common cause, not “Bowling Alone” and the like. The gap between what is said and what is probably meant looms large with these phrases, even if the passage as a whole at least yields some kind of framework for discussing the problem.

I would say that non-places are basically commerce or exchange economies, while places are gift economies. In other words, in non-places one cannot have any real recourse to common humanity: you can’t ask to borrow something, to be done a favor, or to expect to know the myriad of strangers you cross. In a place, you can expect to have local knowledge, to not have to rely entirely on signs, to be able to decorate it as you will, and to have the opportunity for whimsy.

One thing I like about universities is that they do a decent job of being both gift and commerce economies, thanks in part to state subsidies: although my students have to pay the bursar’s office to take my class, once they are within it, we do not discuss or exchange money directly, and this mediating bureaucratic influence helps maintain something closer to a gift economy. Most professors I have met are more than willing to give their time to those who do not waste it and who wish to learn. By “do not waste it,” I mean those who are prepared, conscientious, and willing to read or experiment per the professor’s instructions, as opposed to the inevitable students who, at least in English, want the professor to read a half-baked paper the night before it is due in order to receive a higher grade. Professors are willing, in short, to make what Augé calls a “relational” space that is “concerned with identity,” or, as one of his long quotations has it:

If a place can be defined as relational, historical and concerned with identity, then a space which cannot be defined as relational, or historical, or concerned with identity will be a non-place. The hypothesis advanced here is that supermodernity produces non-places, meaning spaces which are not themselves anthropological places and which, unlike Baudelairean modernity, do not integrate the earlier places: instead these are listed, classified, promoted to the status of ‘places of memory’, and assigned to a circumscribed and specific position. A world where people are born in the clinic and die in hospital, where transit points and temporary abodes are proliferating under luxurious or inhuman conditions (hotel chains and squats […]) (78–9).

This is the sort of assertion that almost works: notice the major “if” at the start, and how the terms relational, historical, and concerned with identity are never quite defined. Although airports feel like they have none of those attributes today, they might a hundred years from now; maybe airports will one day be places in the sense that Belltown or the University District in Seattle are. It’s hard to say, even if the idea that “supermodernity produces non-places” feels correct to me, since those kinds of spaces (like airports, as stated above) produce the unhappy torpor of being totally unmoored and buffeted by bureaucratic forces that cannot be directly negotiated with.

The last comparison Augé uses is curious: hotel chains feel quite different from squatter camps, although I only have direct experience of the former. And being born in the clinic and dying in the hospital sounds like an improvement over being born in a hut and dying in a house, if the latter involves an earlier death. And what it means to be modern seems to be ceaselessly re-described: to be modern is to debate what it means to be modern, or to be acutely aware of history. This is another way of thinking about connections between people, among groups, and the like. Here’s one way Augé gets at that:

Collectivities (or those who direct them), like their individual members, need to think simultaneously about identity and relations; and to this end, they need to symbolize the components of shared identity (shared by the whole group), particular identity (of a given group or individual in relation to the others) and singular identity (what makes the individual or group of individuals different from any other). The handling of space is one of the means to this end, and it is hardly astonishing that the ethnologist should be tempted to follow in reverse the route from space to the social, as if the latter had produced the former once and for all (Augé 51).

Neither wholly produces the other; the two work systematically, space constraining daily contact and time constraining members in terms of particular history. Notice the idea of the “reverse […] route from space to the social,” although the social also affects space. In Jane Austen this happens less overtly, but the space of the manor or the inheritance affects everything the characters do: think of the effect of the entailment on the actions of the characters in Pride and Prejudice. Love does not conquer all in that novel, even if it affects relations with space and vice-versa. It is hard to imagine Charlotte Lucas loving the irritating Mr. Collins if not for his eventual, deferred wealth.

The book’s penultimate paragraph suddenly moves away from place and toward humanity:

One day, perhaps, there will be a sign of intelligent life on another world. Then, through an effect of solidarity whose mechanisms the ethnologist has studied on a small scale, the whole terrestrial space will become a single place. Being from earth will signify something. In the meantime, though, it is far from certain that threats to the environment are sufficient to produce the same effect. The community of human destinies is experienced in the anonymity of non-place, and in solitude (120).

The idea of distance and perspective is evoked from the first words: “one day” implies a day so distant that it cannot be envisaged, only held up as a trope. The sense of vastness continues with the “whole terrestrial space,” as opposed to the way we divide it up now, and with the possibility that such an orientation, however improbable, might bring. I hope we get there, unlikely though it may seem, and unlikely as it is that non-places will bring us closer to place.

Outliers and Blink — Malcolm Gladwell

The Gladwell coda and its problems can be seen in this passage from the introduction to Blink: The Power of Thinking Without Thinking: “The task of Blink is to convince you of a simple fact: decisions made very quickly can be every bit as good as decisions made cautiously and deliberately.” The key word is “can”: Gladwell is not actually making a very strong claim; he’s essentially arguing for maybe. In that respect he certainly succeeds, though if you’re not reading closely you might miss the caveat.

In finding rules for determining which, of all the situations in the world, respond to a “blink” decision and which will fail with that approach, Gladwell can’t do much more than find some examples, leaving a vast space unmapped. I don’t necessarily mean this as negative criticism: it is, rather, a description of the Gladwell technique, one that can very easily morph into a weakness if one is not aware of it going into his books. I treat his output as a single unit because far more unites the books, in terms of style and content, than divides them: they all collect anecdotes and research studies and combine them to form ideas that seem intuitive once you hear them and yet skew toward the quirky. His recent articles for the New Yorker use the same technique. He then divides these subjects into loosely linked chapters.

Gladwell gives examples of where what we claim to want, or think we want, doesn’t match what we actually do or actually seek out. As he says in Blink, “We have, as human beings, a storytelling problem. We’re a bit too quick to come up with explanations for things we don’t really have an explanation for.” He’s right, and he’s probably also a bit too quick to accept explanations that have been published in peer-reviewed journals, rather than examining them with the skepticism appropriate to any effort to prove cause and effect. To me, however, the storytelling claim borders on the obvious, though I like his succinct formulation as well as the examples, which seem to back up his idea; one could just as easily cite the Bible, or any number of mythological and religious explanations for the cosmos that developed before science got started in earnest a few centuries back. In Northrop Frye and the Phenomenology of Myth, Glen Robert Gill writes that

Frye’s encounter … with the work of Oswald Spengler, a philosopher who observed mythic patterns in history, was ‘the first of several epiphanic experiences which turned vague personal ambitions into one great vision…

One might say something similar of Gladwell, who observes patterns that are not quite mythic but take on an almost mythic scope of destiny in parts of his book, which balances on the idea that we’re shaped or even determined by culture and experience and yet still have to work incredibly hard to achieve mastery. He is never overcome by that tension, but it’s a persistent background hum: if it takes 10,000 hours of practice to achieve mastery, then what can we say of Bill Gates, Bill Joy, and Joe Flom, all of whom had the opportunity to work incredibly hard? And what do we say of people who expand the scope of their opportunity to make it greater than it was? To that Gladwell has few answers, and it seems one of the questions overlooked in his drive to create narrative coherence—which might be another word for “mythic pattern”—out of what appears to be chaos.

Gladwell also has a clever shtick: if you discount his specific examples, the general principle might still hold, and if you discount his general principle, the specific examples might still be of interest. For example, a section in Outliers: The Story of Success about why Asian countries tend to have students who score better on the math portions of international exams explains that seemingly innate ability as a cultural gift: Asian countries traditionally built and maintained rice paddies, which have to be worked virtually every day to yield rice, while Western farmers worked like dogs during planting and harvest seasons but otherwise lounged. The point you’re supposed to take is that Asians aren’t innately good at math, which I buy, but that they tend to work harder at it in many cases, which I also buy. The problem is that I’m not so convinced that rice paddy work is necessarily the catalyst: what if some other cultural or political factor is the actual cause? Gladwell doesn’t sufficiently rule out alternate explanations.

Even if one accepts the rice paddies explanation, Gladwell doesn’t go on to the other obvious inferences. Shouldn’t students in Asian countries excel not just at math, but at virtually every topic in school? They do, or they seem to. But then one should ask why, historically, most Asian countries with the exception of Japan haven’t industrialized at the rate of Western countries; if they’ve been exposed to Western technologies for centuries and are so industrious, why has the world taken the larger shape it has? Those questions lead one in the direction of Jared Diamond’s famous Guns, Germs, and Steel (answer: colonialism; oppression; luck) and Gregory Clark’s A Farewell to Alms (answer: evolutionary cultural (and perhaps biological) success), but Gladwell doesn’t go there: he stays with the “Asians are good at math” rice-paddies idea rather than exploring the limits and consequences of what he says.

In other words, the situation is more complex than it’s presented. Gladwell’s specific examples might not hold to explain the general principle. But that principle might still stand. And it’s got a great tagline in this case: “No one who can rise before dawn three hundred and sixty days a year fails to make his family rich.” That might be true, or mostly true, or true enough that believing it is much more likely to make your family rich than not believing it.

In Outliers, Gladwell puts a different spin on the bigger picture, writing that:

The people who stand before kings may look like they did it all by themselves. But in fact they are invariably the beneficiaries of hidden advantages and extraordinary opportunities and cultural legacies that allow them to learn and work hard and make sense of the world in ways that others cannot.

Let’s unpack that idea for a moment. If you stretch Gladwell’s comment in one direction, he’s completely right: people who are successful by conventional materialistic or intellectual measures benefit from being born into the industrialized world. If I’d been born into the dwindling stock of indigenous peoples, I’d be highly unlikely to be writing this at the moment. Furthermore, if I’d been born five hundred years ago, I’d almost certainly not be writing this because I’d probably be a peasant hoeing tubers or something to that effect. At the same time that Gladwell writes about how cultural advantages allow people to succeed, however, he doesn’t emphasize the people who don’t succeed despite all the cultural advantages in the world: people born rich and privileged who end up drug addicts or moochers or whatever. Why do some people show great resilience in terrible circumstances while others fail to thrive in opulence? If I had definitive answers to that question, I’d have solved many of the world’s problems, but I think this paragraph nonetheless demonstrates that “hidden advantages and extraordinary opportunities and cultural legacies” are not the whole story. Gladwell doesn’t say they are, but he implies it strongly enough that it’d be easy to come away with that impression. It matters where we grow up, as he argues, but what could matter more is how far we go with what we’re dealt.

Gladwell can also contradict himself. On page 42 of Outliers, he says “You can’t be poor [and have time for the 10,000 hours it takes to master complex skills], because if you have to hold down a part-time job on the side to help make ends meet, there won’t be time left in the day to practice enough.” On page 117, he tells the story of Joe Flom, a poor boy who grows up to be a name partner at one of the world’s most prestigious and wealthy firms. He says of Flom’s background that “After school, he pushed a hand truck in the garment district. He did two years of night school at City College in upper Manhattan—working during the days to make ends meet—signed up for the army, served his time, and applied to Harvard Law School.” So which is it: if you’re poor, you don’t have time to practice and you’re likely to remain poor, or it’s possible to work your way up? Neither and both, of course, because the world isn’t as definitive as either version would have you believe.

These problems do not make Gladwell worthless, and if you’re aware of them you can still learn to think better while not succumbing to potentially fatuous stories. I’ve cited his story about the conception and execution of the Herman Miller Aeron chair several times. But I suspect most of Gladwell’s millions of readers aren’t reading with the critical eye they need; they’re being taken in, repeating whatever he says, and thinking they’ve got gold. Not everyone is so taken—Megan McArdle, for one, notes problems with Gladwell’s stories—but I suspect many are.

I would put Gladwell in the same category as Geoffrey Miller and his books The Mating Mind and Spent, or as Freakonomics: read them, but with care, and without being ready to accept everything they claim. Of course, that basically describes what educator-types call “critical reading” anyway, but some books demand it more than others because of the extravagance of their claims against the paucity of their evidence.

One other thing I wonder about is the story of Gladwell’s own success: his books have been bestsellers for years, which suggests either that 1) bestsellerdom is largely random, which I suspect is the reason behind Harry Potter’s success, or 2) he taps into some non-obvious social need or desire. If the answer is number two, maybe people like his books because he’s good at connecting abstract data to stories; popular television shows are, well, popular, while math journals find a niche audience. People like stories, and when you combine ideas with stories, the ideas are often more memorable. I don’t think Gladwell’s books will endure, however, and he might be an example of the tendency I posited in Literary fiction and the current marketplace: nonfiction has a shorter shelf life than fiction because it’s easier for the state of the art to advance.

In the end, however, I’m a hypocrite too: the paragraph above indulges in the same Gladwell-like speculation that I’m criticizing. But I also take more care to make the uncertainties in the stories I tell clear, rather than covering them up. When you read Gladwell—and it appears that you or someone you know will—don’t believe it all, and look for the potential holes in the arguments. Still, you’ll find many rich anecdotes and strange new ways of looking at the world. With those rewards, the risk of Gladwell is relatively low, especially because reading him is so easy. For all his problems, Gladwell is very good at extending the range, if not the precision, of your intellectual vision.

The Gift — Lewis Hyde

Lewis Hyde’s The Gift is one of those frustrating books whose last chapter is vastly better than any other and whose main point is somehow true even though the support for that point is weak, nonexistent, or wrong. He argues, reasonably enough, that contemporary Western capitalist societies tend to undervalue creativity in the arts, particularly when said creativity doesn’t sell. But in trying to make his point, he too pretends that a firewall exists between the creative, “gift” economies and the exchange/contract economies. At the end he decides the two can be reconciled, but that occurs after a series of irritating pronouncements with unsubtle jabs at the exchange/contract economy. Nonetheless, The Gift made me think differently about the world by the time I finished it, which few books do. I’ll swing back to that at the end, because The Gift also deserves plenty of criticism.

Although The Gift is a book, a long magazine article might have been the more appropriate form for it. Do we really need more than 50 pages about American Indian gift-exchange cultures? The chapter on Ezra Pound seems particularly worthless, and the one on Whitman is interesting but overlong—a microcosm of The Gift as a whole. Some of its metaphors strain credulity and seem almost deliberately narrow, as when Hyde writes:

Gifts of peace have the same synthetic character. Gifts have always constituted peace overtures among tribal groups and they still signify the close of war in the modern world, as when the United States helped Japan to rebuild after the Second World War. A gift is often the first step toward normalized relations. (To take a negative example, the United States did not offer aid to Vietnam after the war. […])

The United States didn’t rebuild Japan in hopes of joining hands and singing about world harmony—the goal was to build Japan into a bulwark against Communism in Asia. The “gifts” were probably closer to bribes. At the same time, the United States didn’t win in Vietnam, which might explain why no foreign aid money went to the country; if the North had been overrun and destroyed, it might then have been rebuilt with American dollars. Likewise, the United States’ proxy war in Afghanistan resulted in little subsequent aid, as discussed at the end of George Crile’s Charlie Wilson’s War.

I’m not sure this shows anything other than that Hyde’s example fails here, but he fails in many other places too. In a footnote, he says:

There is no technology, no time-saving device that can alter the rhythms of creative labor. When the worth of labor is expressed in terms of exchange value, therefore, creativity is automatically devalued every time there is an advance in the technology of work.

Time-saving devices can free up more time for creative labor: there are more writers and artists today than there were in, say, 1800, in part because most people aren’t engaged in backbreaking farming or 15-hour days in factories. Education levels have risen enormously, thanks largely to time-saving devices that give us more time to study and more wealth to devote to schools, libraries, and the like. Furthermore, labor expressed in exchange value does not automatically devalue creativity—establishing things like copyright, which allowed writers and others to derive an independent income from their work, if anything increased the worth of creative labor. And creativity is not limited to what we think of as the traditional arts—computer programming, for example, is often enormously creative, and the most creative programmers tend to be better compensated than those who aren’t. If Hyde had said that “When the worth of labor is expressed in terms of exchange value, creativity can be devalued,” I would agree: the number of poets whose contribution can be measured monetarily is small. As Gabriel Zaid says in So Many Books: Reading and Publishing in an Age of Abundance, “[…] the conversation continues, unheeded by television, which will never report: ‘Yesterday, a student read Socrates’ Apology and felt free.’ ”

Such problems occur throughout the book, although they’re alleviated in the conclusion, where Hyde retreats from many of his more ridiculous assertions. He says:

[…] my own ideas underwent a bit of a re-formation. I began to understand that the permission to usure is also a permission to trade between two spheres [the commercial and gift economies]. The boundary can be permeable. Gift-increase (unreckoned, positive reciprocity) may be converted into market-increase (reckoned, negative reciprocity). And vice versa: the interest that a stranger pays on a loan may be brought into the center and converted into gifts. Put more generally, within certain limits what has been given us as a gift may be sold in the marketplace and what has been earned in the marketplace may be given as a gift.

Damn: if only that line of reasonable thinking had informed the entire book. It didn’t, which renders sections of The Gift reminiscent of freshman-year manifestos written by tipsy students who have just finished Marx. Despite those problems, some early sections fascinate, like the chapter on “The Gift Community,” where Hyde says:

It is a rare society that can be sustained by bonds of affection alone; most, and particularly mass societies, must have as well those unions which are sanctioned and enforced by law that is detached from feeling. But just as the Roman saw the familia divided into res and personae, the modern world has seen the extension of law further and further into what was earlier the exclusive realm of the heart.

The more one tries to regulate the affairs of the heart, the less those affairs seem like they are of the heart. Dan Ariely and Tim Harford make similar observations, backed up by experiments, in Predictably Irrational and The Logic of Life, respectively. And institutions are fond of exploiting the gift economy by masquerading their commercial exchanges as gifts. For example, Division I American college sports piggyback on the gift economy: although top football, basketball, and baseball players are essentially professionals, high-caliber universities pretend that tuition is a “gift” even as those same universities extract millions of dollars in television and merchandising revenue from the players. The idea that Division I players are amateurs has become increasingly absurd, much as the Olympics have been professionalized. In Beer and Circus: How Big-Time Sports Is Crippling Undergraduate Education, Murray Sperber even argues that the sports mentality harms students.

Yet I can’t help but imagine places where gift economies don’t apply at all, and they’re often not very pleasant. As I write this, I sit in a Tucson airport bar. Airports have everything wrong with them: they are transitional, one-off spaces filled with strangers; the “restaurants” they offer consist of pre-made food with character slightly above a TV dinner’s; and for some reason we as a society have decided that Constitutional rights and privacy don’t apply there. People I don’t know can stop me at will, and merely flying requires that I submit to security theater that is simultaneously ineffective and invasive. Everything is exorbitantly expensive but not of particularly high quality. Menus don’t have beer prices on them.

Airports, in short, are designed to extract money from a captive audience, which may be part of why I don’t care for sports stadiums, Disneyland, and other places where I feel vaguely captive. In Great American Cities (to use Jane Jacobs’s phrase), something is always happening, there is always another place down the street, and you can decide to be as invested or as anonymous in society as you like. In contrast, airports feel like a trap: you can’t choose to avoid them, at least not without enormous costs in time, money, and concentration. Maybe I wrote about college sports above because a few basketball games are playing around me, along with facile, noisy political news that’s more like a talk show than a newspaper. If there were a bar in the Tucson airport without this ceaseless parade of visual noise, I would go to it. I’m trapped in an extreme form of the market economy, where no reciprocity exists and the gift is hidden and completely subservient to commerce. I might not have the gift, but regardless of whether I do, I’m frustrated here, where the food is more fuel than pleasure, as if choosing between burritos and pasta were like choosing between octane grades. Good chefs are artists, and maybe none could work in the security of an airport. I only wish I had somewhere quiet and comfortable to sit. Neither kind of place exists in airports, unless you pay for it, and, again, I have no choice but to participate. At least in most of the market, you have a choice.

In short, there are few better places than an airport to instill sympathy for the arguments of a book like The Gift, which, for all its problems of expression, nonetheless drives at a serious problem in market economies, one that seems unlikely to go away. The problem is not as grave as Hyde makes it out to be—I too would like it if more people read Saul Bellow and fewer watched Flavor of Love, a show I’ve never seen but have heard alluded to at least three times in the last week—and the market has a habit of self-correction, but that doesn’t mean the problem doesn’t exist. And The Gift gives one a better way of analyzing the world and of believing in creative acts that don’t necessarily have immediate financial payoff.

Despite my antipathy toward The Gift, I occasionally find myself recommending it, albeit with caveats attached. It makes an argument that deserves to be heard more often in a non-sentimental, non-strident register: not all worthy forms of creativity are adequately remunerated, but they are valuable nonetheless. The Gift is not brilliant, as the jacket copy claims, but art deserves all the defense it can muster; over the long term, though, I suspect that art will be its own defense.