Jonah Lehrer’s Imagine is still worth reading

Jonah Lehrer, as is now well known, repeatedly misrepresented research and plagiarized other people’s writing in Imagine: How Creativity Works. But, as Roy Peter Clark points out, “Jonah Lehrer’s ‘Imagine’ is worth reading, despite the problems.” Clark goes on to say, “not all the sins [Lehrer commits . . .] are equally grievous,” but, despite that, “the reading of the book ‘Imagine’ helped me understand my world and my craft, and what else can you hope for from a non-fiction book.”

I’ve found the same thing after reading Imagine based on Clark’s endorsement. But reading it in light of Lehrer’s indiscretions reveals new potential layers of meaning, because a couple of passages have a very different resonance, like this one, about Shakespeare’s milieu:

His [Shakespeare’s] peers repeatedly accused him of plagiarism, and he was often guilty, at least by contemporary standards. What these allegations failed to take into account, however, was that Shakespeare was pioneering a new creative method in which every conceivable source informed his art. For Shakespeare, the act of creation was inseparable from the act of connection. {Lehrer “Imagine”@221}

Lehrer seems to be using the same method. But the age of the Internet makes tracking sources much, much easier than it used to be. And he goes on:

The point isn’t that Shakespeare stole. It’s that, for the first time in a long time, there was stuff worth stealing—and nobody stopped him. Shakespeare seemed to know this—he was intensely aware that his genius depended on the culture around him. {Lehrer “Imagine”@221}

In retrospect, this reads as a preemptive defense of Lehrer’s own method. But I don’t get why Lehrer made stuff up: most of what he invented is peripheral material that makes for good reading but isn’t essential. Given contemporary attitudes towards plagiarism—and the passages above show that he knows and understands those attitudes—why risk so much for so little gain? It’s like a millionaire stealing a pair of $20 jeans. Why tarnish success? I can imagine some possible answers to these questions, but none of them is very satisfying, and I ultimately want to ascribe Lehrer’s lies to simple human vanity.

Imagine is still pretty interesting. I doubt it’s a perfect book, and I wouldn’t cite Lehrer in my neuroscience PhD dissertation. But I am now conscious of the tension between free-form creative thought and focused attention to a particular, grinding problem (“We need structure or everything falls apart. But we also need spaces that surprise us. Because it is the exchanges we don’t expect, with the people we just met, that will change the way we think about everything”); I am conscious of the need for both longtime collaborators and for new faces; and I am conscious of how people with deep domain expertise may benefit from applying that expertise elsewhere. Some of Lehrer’s points, like his description of the virtues of cities or the eccentric greatness of Paul Erdos, are already familiar. But he helps me see them in new ways. A moment like this, for example, shows me something important about my own writing and creative work:

Friedrich Nietzsche, in The Birth of Tragedy, distinguished between two archetypes of creativity, both borrowed from Greek mythology. There was the Dionysian drive—Dionysus was the god of wine and intoxication—which led people to embrace their unconscious and create radically new forms of art. [. . .] The Apollonian artist, by contrast, attempted to resolve the messiness and impose a sober order onto the disorder of reality. Like Auden, creators in the spirit of Apollo distrust the rumors of the right hemisphere. Instead, they insist on paying careful attention, scrutinizing their thoughts until they make sense. Auden put it best: ‘All genuine poetry is in a sense the formation of private spheres out of public chaos.’ {Lehrer “Imagine”@64}

I am far more in the Apollonian mode than the Dionysian mode, but, perhaps for that reason, I’m fascinated by and perhaps even envious of Dionysian thinking, acting, and living. A novel like The Secret History thus becomes all the more important to me, because it has an Apollonian narrator, Richard, dealing with the aftermath of an attempt to reach Dionysian ecstasy. In the novel, not surprisingly, the outcomes are pretty bad, but the idea of deliberately trying to reach an ecstatic experience resonates with my temperament.

There are some moments that appear, on the surface, self-contradictory. Lehrer says, “The most creative ideas, it turns out, don’t occur when we’re alone. Rather, they emerge from our social circles, from collections of acquaintances who inspire novel thoughts. Sometimes the most important people in life are the people we barely know” {Lehrer “Imagine”@204}.

Earlier in Imagine, however, Lehrer discusses how many creative ideas arrive when people are taking morning showers—where most are presumably alone. So do creative ideas emerge from chatting with others, or when the mind is in a relaxed state that lets it make disparate connections among ideas? The answer appears to be “both,” but Lehrer doesn’t explicitly discuss the implied contradiction. I’m not saying he couldn’t reconcile the two, but I am saying that someone should’ve pointed these kinds of contradictions out.

Even if all of Imagine’s research and stories are somehow wrong—and I don’t think they are—the book still offers novel ways to think about creativity and how to structure one’s life or work more effectively and in ways that I hadn’t foreseen. I wish the publisher hadn’t withdrawn it altogether. Used copies on Amazon now start at $25. It may be that the existing copies thus continue to rise in value because of their scarcity; alternately, readers might turn to pirate editions on the Internet, which I can only assume are easy enough to find (my book came from the University of Arizona’s library).

Don’t Go to Law School (Unless) — Paul Campos

Paul Campos saves what might be the best paragraph of Don’t Go To Law School (Unless): A Law Professor’s Inside Guide to Maximizing Opportunity and Minimizing Risk for the very end of the book, so I’m going to invert his structure and start with it:

Have you ever said to yourself, “I don’t know what to do with my life – so I’m going to spend three years of it going deeply and irreversibly into debt, in a quite possibly futile attempt to enter a profession that I have no actual desire to join?” I bet you haven’t, because who would ever say something that idiotic? Every year, however, thousands of people are perfectly capable of doing something that idiotic. If they weren’t, half the law schools in the country would be out of business tomorrow.

We’ve looked into the mirror and seen the enemy, and the enemy is ourselves. Sure, someone else might hand us the weapons we use to mutilate ourselves—that is, student loans—but someone who hands you a loaded gun isn’t obligating you to shoot yourself in the foot. Perhaps they shouldn’t have handed you the gun, but they did, and you can’t wholly blame that person for your mistake. It sure is more fun, however, to blame someone else for your mistakes than it is to stand up and say, “I’m an idiot and I’ve made bad life choices.”

I, however, am an idiot and made a bad life choice—though I quit law school after one year. I went based largely on bad assumptions, fear, stupid desire, anachronistic beliefs about the legal market, and various other factors I’d rather not examine in detail. The problems with law school are slowly becoming better known: “for more than 30 years now the market for legal services has been contracting relative to the rest of the economy.” The basic problem is twofold: law schools have been raising tuition faster than the rate of inflation for decades, and the legal market is a well-defined and well-studied one in which about twice as many credentialed lawyers are being minted as there are jobs for them to enter.

You don’t have to be a mathematician to realize that some of those would-be lawyers are going to be left out. In the past, they would have been left with relatively little debt, which made arguments like “a law degree will open doors even if you don’t practice law” at least somewhat tenable. Now those kinds of arguments aren’t. There are lots of common, bad reasons people go to law school: “Like a lot of other people, I went to law school because I couldn’t think of anything better to do. At the time I applied I was three years removed from my undergraduate days as a somewhat aimless English major.” And, though this may sound odd, law school itself doesn’t prepare people for practicing law.

That wasn’t really a problem when tuition was cheap and proto-lawyers could work cheaply for a couple years to learn the trade. Now the stakes are high and law school’s inadequacies are a huge problem, because having more than $100,000 in law school debt that can’t be discharged through bankruptcy will hurt people for decades, especially if they can’t get the training necessary to actually practice law. As Campos says, “The two most important practical skills that any lawyer working in private practice must possess are the ability to acquire clients, and to get them to pay their bills, which happens to be two things that most legal academics have never done in their lives.”

There’s little pressure, at least right now, to change the system, and little pressure on legal academics to learn these kinds of skills and impart them to their students. The only way I can see to create that kind of pressure is by convincing enough people not to go to law school that the schools themselves start feeling market pressure to reform. Without that pressure, they can simply continue as they are.

Campos is a law professor who has spent the last year and a half writing about the problems with law school on the blog Inside the Law School Scam, which is like porn for academic eggheads: it’s got lots of well-researched money shots. But, also like porn, too much of it all at once is enervating, and by now the larger point—don’t go to law school—is or should be well known. For people considering law school, the only real question is a binary one: Should I go? The answer is almost certainly “no.” For most people, ITLSS only needs to be read once: the problems of law schools are most pressing for law school insiders, not for the rest of us. We need to know that “most people currently attending law school would be better off not doing so.”

And it’s intellectually honest to admit as much: “I’ve become increasingly aware that my ridiculously good job is being paid for by people who are increasingly unable to get the kinds of jobs they came to law school to get.” But relatively few insiders are willing to admit that the systems they participate in and propagate aren’t good for outsiders. That’s one reason Campos’s book is so admirable. It also uses stories but eschews relying exclusively on them, focusing instead on money.

The more I pay attention to the world, the more I see how much money and financial constraints underlie surface phenomena. In an ideal world, money is a strong proxy for value; companies like Google and Apple are worth a lot because they provide a lot of value to people. The education world, however, has broken that link, and the breakage is getting worse with time.

I wonder how long it’s going to take until some law school decides to utterly reverse course and simply say that it’s going to have ugly buildings, a small library, huge class sizes, and very low tuition—say, $10,000 a year. Or $9,500. The professor-to-student ratio would be something like 1:100, and there’d be a dean and virtually no other administrative support or special programs. But this model would focus on being sustainable and making sure that students don’t face penury at the end of law school.

Instead of working to compete with the current model that almost all law schools employ, this hypothetical school would do the opposite, and be proud of getting people a legal education for under $30,000 in total tuition, with a maximal focus on employability following graduation and a minimal focus on student loans (I’d also love to see open-source textbooks). It could advertise its alternate strategy, and maybe run a blog that explains the ways the conventional system is set up to screw students.

As far as I know, a couple of schools try the “admit everybody and charge them a lot” model, but few try the “admit many, but charge them a little” model. The notorious Thomas M. Cooley Law School does the former—it charges $54,000 per year right now, and no one who knows anything about law school will go there. There might be institutional or ABA-imposed barriers that I’m unaware of. Still, if that kind of model were successful, it could at least challenge the hegemony of the Harvard-Yale-Stanford model of law school, which is untenable and getting worse.

See also “The specious reasoning in Lawrence M. Mitchell’s ‘Law School Is Worth the Money’” and “Why You Should Not Go to Law School.” Do not listen to your parents, for whom law school might’ve made financial sense, or to your friends’ empty congratulations, because most of your friends don’t know any better. Law school enrollments have plummeted since their 2008 high, for good reason.

Here’s an interview with a Columbia law grad who quit law for a coding bootcamp. Skipping law school would’ve made more sense, but news about how bad the legal market is relative to the tech sector has not percolated through the entire country (yet).

Back to Blood — Tom Wolfe

The real problem with Back to Blood is that you’ve already read it, most notably in The Bonfire of the Vanities and A Man in Full—and if you haven’t read those, you should start with them. Back to Blood has the same assortment of obsessions and interests. There is the child with an unusual name and an elite pedigree: “Last week he totally forgot to call the dean, the one with the rehabilitated harelip, at their son Fiver’s boarding school, Hotchkiss [. . .]” But does anyone still care about elite boarding schools? Does anyone still care about the Miami Herald other than the people who work there? Fiver’s father is the paper’s editor, and he thinks it is “one of the half-dozen-or-so most important newspapers in the United States” at a time when the era of newspapers has passed.

The Miami nightclub is named “Balzac’s,” after another of Wolfe’s preoccupations. There is a prurient mention of girls who “were wearing denim shorts with the belt lines down perilously close to the mons veneris and the pants legs cut off up to. . . here . . .” Has anyone in the U.S. ever used the term mons veneris, outside of Tom Wolfe and medical schools? I think it appeared in I Am Charlotte Simmons a couple of times too, and there it was even more improbable. And the word loins! In this case, “juicy little loins and perfect little cupcake bottoms.” I’ve heard loins described as loins before, but only by Tom Wolfe and the writers of the Bible. Someone born more recently than 1931 would use “pussy” to be crude, “va jay jay” to be hipster, or “vagina” for clinical directness. But not loins. No one but Tom Wolfe would use loins, and use it again and again.

Sometimes writers can successfully work out variations on ideas that iterate subtly book by book—Elmore Leonard is a good example. Others just feel like they’re repeating themselves. When I Am Charlotte Simmons came out, I was in college and skipped class to read it, only to feel an increasing sense of disappointment at the wrongness of many scenes—like Charlotte feeling nervous about the cost of long-distance calls. That was an anachronism: most college students had free long distance by 2004, and I would’ve let anyone who asked use my phone to call home. Or, for another example of reportorial wrongness, Charlotte gets a salvaged, pieced-together computer, like a salvaged car; by 2004, however, older but working computers went for $25 on Craigslist, or were given away outright by schools. These two examples are salient, but there were others, just as I Am Charlotte Simmons repeated words, phrases, and ideas from Wolfe’s earlier books. It, and Back to Blood, repeatedly describe moments of cowardly prurience, with men like wolves and women who didn’t want it, or didn’t want to want it, and submitted only reluctantly—like female characters from the 19th century and not at all like many of the contemporary women I know.

The period details in Back to Blood are wrong. Today, anyone cool would be driving a Tesla Roadster or a Fisker Karma, not a Ferrari 403; Ferraris might’ve been cool twenty years ago, but technology and culture have moved on. Then there’s the simply and wildly improbable: a French professor named Lantier thinks that his daughter wasn’t ready for “snobbery” because “She was at the age, twenty-one, when a girl’s heart is filled to the brim with charity and love for the little people.” Someone exposed to live students every semester is unlikely to think of their hearts as “filled to the brim with charity and love” for much of anything, except perhaps alcohol, condoms, iPhones, verbing nouns, and obsessive Facebooking. Not that there’s anything wrong with those things, but familiarity is a great slayer of illusions like Lantier’s belief about the hearts of most 21-year-old girls.

Back to Blood isn’t a bad book, but it has the same strengths as the earlier novels, only diminished, and the same weaknesses, only exaggerated. We’re told, not shown, that “Mac was an exemplar of the genus WASP in a moral and cultural sense,” without knowing why, if at all, that matters. We’re told a lot of things, most of them not especially new to anyone familiar with the Wolfe oeuvre.

There are clever moments, as when Magdalena, in a fight with her Spanish-speaking mother (or, in Wolfe-land, Mother), resorts “to the E-bomb: English.” It’s a moment of geriatric cruelty, since “Her mother had no idea what colloquially meant. Magdalena didn’t, either, until not all that many nights ago when Norman used it and explained it to her. Her mother might know hang and possibly even slang, but the hang of slang no doubt baffled her, and the expression clueless was guaranteed to make her look the way she did right now, which is to say, clueless.” It’s clever, the kind of cleverness that makes the scene fresh and unusual. It’s also the kind of cleverness missing from the repeated references to the mons veneris, or to loins, or to high-end private schools.

Wolfe also gets and has gotten for decades the weirdness and power of modern media; its spotlight is restless yet powerful, and it plays a tremendous role in Bonfire. In Back to Blood, Nestor Camacho, a Miami cop, rescues a refugee from the mast of a ship and is recorded doing it; consequently, he becomes momentarily famous, such that: “Even now, at the midnight hour, the sun shone ’round about him.” The analogizing of fame to light seems obvious, even necessary, and although I don’t want to probe its deeper properties here I like how Wolfe avoids the spotlight metaphor, much as I didn’t a few sentences ago. Wolfe uses metaphor in an almost 19th Century fashion, usually effectively.

He gets the way civic booster types think of the arts not as a thing in and of themselves, but as a checkbox; an editor at the Miami Herald thinks that “Urban planners all over the country were abuzz with this fuzzy idea that every ‘world-class’ city—world class was another au courant term—must have a world class cultural destination. Cultural referred to the arts. . . in the form of a world-class art museum” {Wolfe “Blood”@111}. He’s right, of course, but right in a generic way, like people are right about love being like a rose. If you’ve read anything about urban planning, or cities (and I have), you won’t be surprised at the editor’s knowledge, which he probably picked up in the same places I did, and which says very little about him as a character, except that he, like so many Wolfe characters, is an information and status receptacle more than he is a person with his own needs and desires.

The complaint expressed throughout this post is similar to, but a bit different from, James Wood’s, which concerns how Wolfe’s characters tend to speak in similar or identical registers despite coming from wildly different backgrounds. That isn’t necessarily a weakness, but a novel that portrays such startlingly different people in a similar register must maintain the characters’ verisimilitude; that’s what The Bonfire of the Vanities does and what Back to Blood doesn’t, quite. The earlier novel also doesn’t feel reported even though it was; the latter does, in the same way I Am Charlotte Simmons misses the college milieu in a thousand subtle ways. If you swing and miss, it doesn’t matter whether you miss the ball by a millimeter or a meter: the scrim of realism is pierced and the novel doesn’t quite work.

Wood also says that “Wolfe isn’t interested in ordinary life. Ordinary life is complex, contradictory, prismatic. Wolfe’s characters are never contradictory, because they have only one big emotion, and it is lust—for sex, money, power, status.” But this isn’t quite true: Wolfe is interested in ordinary life when it’s touched by big events, or when its inhabitants have a powerful yearning for something other than ordinary life. That yearning, that drive, can be fascinating. Plus, there’s nothing wrong with writing about extraordinary life, which can be just as “complex, contradictory, prismatic.” Wood obviously isn’t making this argument, and I doubt he would make it in the caricatured form I’m giving it here, but it’s easy to draw this kind of false lesson from the Back to Blood review. Almost every Wood review is a momentary master class in the novel as a genre, which is why so many writers and would-be writers attend so carefully to them, and why it’s worth appending this brief commentary to a review that is in some ways more useful and interesting than the impressively hyped novel being discussed.

Back to Blood is drawing on capital built up from Wolfe’s earlier novels, and overall it leaves a sense of “Fool me once, shame on you; fool me twice, shame on me.” If another Wolfe novel appears, I don’t think I’m likely to be fooled again. There are better novels about the state of America—Gillian Flynn’s Gone Girl is one—even if they don’t announce themselves as tomes about the state of America. Given how the voices of Back to Blood don’t quite work and the book-report function doesn’t quite work, there are probably better uses of one’s reading time.

Bad academic writing: Rebecca Biron and the Mexican drug war in PMLA

In “It’s a Living: Hit Men in the Mexican Narco War,” Rebecca E. Biron writes:

Hit men in the twenty-first-century Mexican drug war engage in paid labor at the extreme end of capitalist exploitation. By “extreme end,” I mean the period of late hypercapitalism in which transnational profit seeking trumps national as well as international regulatory systems designed to serve broad social stability. I also mean the outer limits of how capitalist interests use (up) human beings [. . .]

But “the twenty-first-century Mexican drug war” isn’t a good example of capitalism at work. To the extent that capitalism is about selling people things they actually want, with a (relatively) limited amount of state control, drugs should be legal: there’s a willing buyer, a willing seller, and no third party who gets hurt. Yet the state—which is conventionally associated with communism and socialism—prohibits drug use, using the logic of “serv[ing] broad social stability” and similarly bogus euphemisms.

If anything, the hit men should be considered exploited by state policies around prohibition, rather than capitalism or capitalists.

Plus, if exploitation is inherent in capitalism, what kind of economic or political system doesn’t or hasn’t involved exploitation? And I’m not talking about a theoretical one: I’m talking about real examples in the real world. I don’t think any exist, at least in any meaningful sense. Although the U.S. and Western Europe certainly aren’t without warts and blemishes, both historical and contemporary, it’s notable that the Soviet Union exterminated millions of its own citizens in a calculated, industrialized fashion. The Soviet Union also engaged in foreign conquest and terror to a vastly greater extent than the U.S. did or, today, could even aspire to.

Complaints about Amazon’s rise ignore how long it has taken the company to rise

The latest raft of articles about Amazon and its power over the publishing industry appeared in the last couple of days (“Amazon, Destroyer of Worlds,” “What Amazon’s ebook strategy means,” “Booksellers Resisting Amazon’s Disruption”), and the first two note what is the most significant thing about Amazon, at least to my mind: how much better an experience Amazon is than the things it replaces (or complements, depending on your perspective).

Like any incumbent, publishers, as far as I can tell, want the status quo, but readers (and consumers of electronic gear) are happy to get something for less than they would’ve paid otherwise. Stross gets this—“Bookselling in 1994 was a notoriously backward-looking, inefficient, and old-fashioned area of the retail sector. There are structural reasons for this”—and so does Yglesias—“But for consumers, it’s great. An Amazon Prime membership is the most outrageously good deal in commerce today. But competitors should be afraid.” Stross is suspicious of Amazon, and so is the New York Times writer. Their suspicions are worth holding, but the basic issue remains: Amazon is successful because it’s good.

Its books are cheap and arrive fast. Its used section is really great, for both buying and selling. Prior to Amazon and its smaller analogues, used bookstores simply wouldn’t buy books with writing in them; Amazon’s used buyers don’t care, as long as the book is described honestly. I’m getting ready to move, which means I’m selling or giving away somewhere between a couple hundred and a thousand books. I sold about 15 through Amazon, netting about $100 that I wouldn’t have had otherwise. That efficiency is great, but it’s great in a way publishers don’t like, because publishers would rather have everyone buying new books.

Amazon looks particularly good to me because I’ve spent a lot of time trying to wrangle a literary agent and failing. Five or six years ago, that meant my work would’ve spent its life on my hard drive, and that’s about it. But now that I’m done with comprehensive exams, I have time to hire an editor and a book designer and see what happens through self-publishing. The likely answer is “nothing,” but the probability of nothing happening is 1.0 if I leave the novels and other work on my hard drive forever.

Most of this was predictable: in 1997, Philip Greenspun wrote “The book behind the book behind the book…“, in which he observed: “Looking at the way my book was marketed made me realize that amazon.com is going to rule the world.” I’m sure others predicted the same thing. The publishing industry’s collective response was to shrug. I guess no one read The Innovator’s Dilemma. If publishers once were innovators, they’re not anymore.

Stross is averse to profit to the point that I think he’s signaling mood or group affiliation to some extent, but his basic economic analysis is good. Stuff like this: “piracy is a much less immediate threat than a gigantic multinational [. . .] that has expressed its intention to ‘disrupt’ them, and whose chief executive said recently ‘even well-meaning gatekeepers slow innovation’ (where ‘innovation’ is code-speak for ‘opportunities for me to turn a profit’)” could be rephrased: Amazon selling for less means more consumer surplus, and Amazon’s whole modus operandi appears to be accepting very low, if any, profit margins; if it had margins as high as or higher than what publishers and retailers shoot for, it wouldn’t be such a threat.

Anyhow, I too don’t want an Amazon monopoly or monopsony, but I don’t see a good alternative. Barnes and Noble is, at best, second best: its online prices finally became competitive with Amazon’s a year or two ago, but it’s still chasing the leader instead of striving to be the leader.

If DRM on ebooks actually dies—as Stross thinks it will—that will make Barnes and Noble and other players more viable, in the same way that killing DRM on music made Amazon a viable purveyor of music (although a lot of people still use the iTunes Music Store).

We all have value systems, even if dollars aren’t their main currency

In Robert Skidelsky’s Econtalk interview, he mentions that we get restless if we have nothing to do, and there’s a certain amount of insatiability that appears built into the human condition. He’s referencing money, but it made me realize something: academics and intellectuals are restless and insatiable too, but they don’t use conventional currency: they use citation counts and perceived intellectual influence. They aren’t (mostly) acquisitively forward-looking, but they are interested in writing more and more, in order to have a greater and greater reputation.

Skidelsky’s most recent book is How Much is Enough?: Money and the Good Life, in which he evidently discusses the idea of material-good saturation—a topic that, I suspect, is going to become more and more interesting over the course of my life. Most of us, as he points out (echoing Keynes), reach a point of diminishing returns when it comes to goods and many other things: having a working car is very valuable to many of us, but having a $100,000 car is less so. Having a computer is very valuable, but having the latest model is less so. Yet we’re still working quite hard for goods that might not be valuable enough to justify the effort.

I leave it to the reader’s imagination to apply the previous paragraph to academics and intellectuals, for whom there never seems to be enough respect to go around.

Skidelsky’s point about work is especially interesting to me because I’m a person who has been working, so to speak, to make the kind of “work” that I do fun—at which point it’s not really onerous. I wonder if that kind of move is the future of work. We also get a certain amount of satisfaction from doing a thing well, and perhaps that will drive us, collectively, even in the face of not needing to do certain things to the extent that we need to do them now.

A Jane Austen Education: How Six Novels Taught Me About Love, Friendship, and the Things That Really Matter — William Deresiewicz

I really like and admire A Jane Austen Education, despite agreeing with the younger Deresiewicz, whom the older one mocks for believing sentiments like this one, about Jane Austen’s Emma: “The story seemed to consist of nothing more than a lot of chitchat among a bunch of commonplace characters in a country village. No grand events, no great issues, and, inexplicably for a writer of romance novels, not even any passion.” Deresiewicz is setting himself up to be knocked down, and yet when I read Emma I, too, was bored by the “chitchat” among the bumpkins.

But Deresiewicz goes on to explain why his younger self was totally wrong, and how he grew as a person through closely reading Jane Austen and applying her novels to his life experience. Though his explanation is persuasive, I still don’t buy it. To me, the characters in Emma are still “a pretty unpromising bunch of people to begin with, and then all they seemed to do was sit around and talk: about who was sick, who had had a card party the night before, who had said what to whom. Mr. Woodhouse’s idea of a big time was taking a stroll around the garden.” I usually call the ceaseless chatter without any action referent “empty status games,” because the games don’t refer to anything outside their immediate social situations (granted, it might also be that I don’t usually excel in them). These sorts of situations are akin to the ones Paul Graham describes in “Why Nerds Are Unpopular:”

I think the important thing about the real world is [that. . . ] it’s very large, and the things you do have real effects. That’s what school, prison, and ladies-who-lunch all lack. The inhabitants of all those worlds are trapped in little bubbles where nothing they do can have more than a local effect. Naturally these societies degenerate into savagery. They have no function for their form to follow.

Jane Austen’s societies obviously don’t degenerate into savagery—unless they’ve been transformed into Pride and Prejudice and Zombies (“Now with Ultraviolent Zombie Mayhem!”)—but their inhabitants do feel “trapped in little bubbles where nothing they do can have more than a local effect,” which makes them unsatisfying, at least to my temperament. Graham might also not be an ideal person to cite, given how much he admires Austen: “Everyone admires Jane Austen. Add my name to the list. To me she seems the best novelist of all time.” Still, strike me from the list: her style is amazing and her content vapid. Consider this description, also from Deresiewicz:

One whole chapter—Isabella had just brought her family home for Christmas—consisted entirely of aimless talk, as everyone caught up on one another’s news. For more than half a dozen pages, the plot simply came to a halt. But the truth was, for long stretches of the book there really wasn’t much plot to speak of.

Or this: “What could be duller, I thought, than a bunch of long, heavy novels, by women novelists, in stilted language, on trivial subjects?” There are much duller books—Beckett’s trilogy (Molloy, Malone Dies, The Unnamable) comes to mind, since those are novels written to make some philosophical statement about the meaninglessness of life or to give English professors a bone to gnaw on in scholarly papers—but the point stands. I’m not opposed to “women novelists,” and anyone who is on the grounds of perceived unimportance should try The Secret History and Gone Girl, but “long, heavy novels [. . .] on trivial subjects” are tedious regardless of their author’s gender.

Moreover, I’m not alone: “As it turned out, people had been reacting to Jane Austen exactly as I had for as long as they’d been reading her. The first reviews warned that readers might find her stories ‘trifling,’ with ‘no great variety,’ ‘extremely deficient’ in imagination and ‘entirely devoid of invention,’ with ‘so little narrative’ that it was hard to even describe what they were about.” At some level, as happens with much art, a preference for Austen may come down to temperament, and to what a person believes about what The Novel or a novel should do. I’ve never been able to get into novels that don’t have some kind of narrative drive or energy—both vague terms that I could spend the rest of this essay describing, or, rather, trying to describe—and, like Lev Grossman, I think “Plot makes perverts of us all:”

A good story is a dirty secret that we all share. It’s what makes guilty pleasures so pleasurable, but it’s also what makes them so guilty. A juicy tale reeks of crass commercialism and cheap thrills. We crave such entertainments, but we despise them.

For at least a century, if not longer, literary culture has been bifurcating between high-culture, non-plot types who inhabit universities and book reviews and institutions, and common readers, who like something to happen and maybe some T&A or depraved longings in their fiction, even if the language used for the T&A and depraved longings isn’t very interesting. Most of us are taught that long, tedious books written in stilted language are more valuable than books that move quickly and read easily.

To be sure, I don’t think the people who genuinely love Austen have been academically brainwashed—I think they do authentically love her writing—but the original reviewers and the younger Deresiewicz have a point too, one that is mostly drowned out in school-based settings.

At the time Deresiewicz had his Austen breakthrough, he was seeing a waitress, and they “had little in common and had never progressed beyond the sex. She was gorgeous, bisexual, impulsive, experienced, with a look that knew things and a laugh that didn’t give a damn.” Perhaps this is a function of me being in my 20s, but this arrangement doesn’t sound so bad, and, having dated the equivalent woman, I rather enjoyed those things at the time. Furthermore, I don’t think such relationships are wrong—though I would also say, obviously, that they’re not the only kind of relationships available, or the only kind a person should have over the course of their life. Sometimes people eat fast food; other times they dine in fine restaurants, or at the Cheesecake Factory, or cook for themselves, or cook with another person, or cook simple foods, or complex ones, or have potlucks. I leave it to you to map that metaphor onto sexuality and relationships, but the point about variety in relationships is useful. For Deresiewicz, “Austen taught me a new kind of moral seriousness—taught me what moral seriousness really means. It means taking responsibility for the little world, not the big one. It means taking responsibility for yourself.” But people who are always morally serious can also be dull, just as people who are never morally serious are often unintentionally cruel.

The trick is being able to distinguish the two, and to find a middle way, and to develop some self-awareness, which is hard for many if not most of us. Certainly it was hard for Deresiewicz’s younger self:

If you’re oblivious to other people, chances are pretty good that you’re going to hurt them. I knew now that if I was ever going to have any real friends—or I should say, any real friendships—I’d have to do something about it. I’d have to learn to stop being a defensive, reactive, self-enclosed jerk.

On the other hand, being oblivious to other people sometimes means being very tuned into technical or other problems that need solving—for the best example of this I’ve seen in literature, consider Lawrence Waterhouse in Cryptonomicon, who is shockingly oblivious yet essential to the Allied war effort, and who advances cryptography. It should also be noted that he’s not intentionally mean to others, and in the novel no one is emotionally hurt by him in an obvious fashion, but the depiction of his thought process as an engineer / mathematician seems pretty accurate. You get moments like this: “In particular, the final steps of the organist’s explanation were like a falcon’s dive through layer after layer of pretense and illusion, thrilling or sickening or confusing depending on what you were. The heavens were riven open. Lawrence glimpsed choirs of angels ranking off into geometrical infinity,” perhaps in exchange for attention to other people. To what extent are dispositions trade-offs? It’s a decent question, I think, but also one I can’t really answer.

Asking questions without settled answers is the kind of thing I’m encouraged to do; in one moment, Deresiewicz praises the kind of professor we all hope to have: “When my professor asked a question, it wasn’t because he wanted us to get or guess ‘the’ answer; it was because he hadn’t figured out an answer yet himself, and genuinely wanted to hear what we had to say.” This is what I try to do in the classroom, although I’m guessing this kind of strategy works better for humanities students than for, say, math students, where the answer or answers are well known, at least up to a fairly high level.

There are also intellectual surprises in A Jane Austen Education, and those surprises made me realize things I didn’t before:

Popular music is one giant shout of desire, one great rallying cry for freedom and pleasure. Pop psychology sends us the same signals, and so does advertising. ‘Trust your feelings,’ we are told. ‘Listen to your heart.’ ‘If it feels good, do it.’

And if everything is pointing you in one direction, it might be time to ask what lies in the other. Literature seems to ask this question. Pop music, as Deresiewicz points out, doesn’t. In Deresiewicz’s rendition, Austen herself was reacting against her time, which is to be commended:

Austen lived in the great age of trash fiction: the gothic novel, the sentimental novel, the bodice ripper—crumbling castles, creaking doors, and secret passageways; heavenly maidens and dark seducers, piercing shrieks and floods of tears, wild rides and breathless escapes; shipwrecks, deathbeds, abductions, avowals; poverty, misery, rape, and incest.

In other words, she lived in “the great age” of all the good stuff, though I would argue that the good stuff is still with us if we know where to look—I’m pretty sure Game of Thrones has every element in the Deresiewicz list.

Some weird stylistic quirks recur in the book, like the habit of “Austen was showing me” or “Austen was saying”-style constructions (“I could grow up and find happiness, Austen was letting me know, but only if I was willing to give up something very important” or “Austen taught me a new kind of moral seriousness—taught me what moral seriousness really means” or “Austen understood that kids are going to make mistakes, and she also understood that making mistakes is not the end of the world”). But the overall effectiveness is tremendous, and not only because I might be a major component of Deresiewicz’s target audience: self-absorbed people who secretly think they have the answers other people lack.

Alif the Unseen — G. Willow Wilson

Alif the Unseen almost works, but it persistently mischaracterizes technology in a distracting, false-sounding way that its eponymous hacker protagonist wouldn’t. On the first page of Alif’s narration, we find this about his phone: “Another hack had set this one up for him, bypassing the encryption installed by whatever telecom giant monopolized its patent.” But encryption algorithms are math, and math can’t be patented. Furthermore, a patent is by definition a limited monopoly right. As a result, the last part of the sentence seems incoherent. And what is being encrypted? The phone’s operating system? Its user data?

A few pages later, Alif is “watching as a readout began to scroll up the screen, tracking the IP address and usage statistics of whoever was attempting to break through his encryption software.” But reading “usage statistics” makes no sense here: Alif isn’t, say, providing blogging software or an e-commerce platform (a few pages later, he installs a keystroke logger and other software on the computer of his love interest, and says that he does so “to track her usage statistics.” This makes more sense). Someone wouldn’t “break through his encryption software;” he or she would attempt to penetrate Alif’s firewall. The same would-be intruder leaves after “executing Pony Express, a trojan Alif had hidden in what looked like an encryption glitch.” I don’t know what “an encryption glitch” means here, and I don’t think the author does either.

Alif notes that, in the Arab Spring, “the digital stratosphere became a war zone. The bloggers who used free software platforms were most vulnerable.” If anything, open-source software should be less vulnerable: well-known open-source systems benefit from many eyes on their source code, which makes hidden backdoors likely to be found. There’s an equally jarring moment when Alif says that he’s written a piece of software in “C++. But the type system is sort of—new. I’ve made a lot of modifications.” He is probably referring to whether the language is dynamically or statically typed—that is, whether a program’s variables and values are checked for type safety when the program runs or when it’s compiled. It isn’t clear why Alif would change C++’s type system. At another moment, Alif worries that a malevolent, governmental entity is watching him: “The Hand would see Alif using his e-mail and cloud computing accounts, but until he could crack his algorithm, Alif would appear to be working from Portugal, Hawaii, Tibet.” The phrase “crack his algorithm” is meaningless here. And “cloud computing” is the kind of term marketers use; programmers or hackers would probably say “servers.”
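To make the static-versus-dynamic distinction concrete, here is a minimal sketch of my own (in Python, since dynamic typing is easiest to demonstrate there; this is an illustration, not anything from the novel):

```python
# Python is dynamically typed: type errors surface only when the
# offending line actually executes, not when the program is loaded.

def add(a, b):
    return a + b

print(add(2, 3))  # fine: both arguments are ints, prints 5

try:
    add(2, "three")  # the mismatch is discovered only at run time
except TypeError as e:
    print("runtime type error:", e)

# In a statically typed language such as C++, a function declared as
# `int add(int a, int b)` called with add(2, "three") would be rejected
# by the compiler before the program could ever run.
```

That compile-time rejection is the whole point of a static type system, which is why “modifying” C++’s type system would mean changing the compiler itself, not writing a program in the language.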

These kinds of persistent, distracting errors detract from the story and the novel’s realism. It might seem strange to discuss realism in a book that features djinn, vampires, and other supernatural elements, but any writer still has a duty to get the language of the “real” or mundane world right. Wilson doesn’t, and that makes the whole novel feel fake when it shouldn’t. In The Name of the Rose, religious language and medieval thought infuse every line, even when contemporary philosophical ideas are being expressed through the language of the time. Eco knows the period in a way that Wilson doesn’t know the language of hackers, programmers, and computer science. I’m not an expert, but I’ve read enough in the field to understand what she misses.

Still, there’s a sense of hidden knowledge that runs throughout Alif the Unseen, and a melding of old ideas with new technology. That’s appealing, and so is the idea of an Arab Golden Compass. The book also has some religious elements that could come from The Name of the Rose. Much of the writing is skillful if not particularly memorable. Funny moments appear: Vikram the Vampire, on hearing one of Alif’s schemes, says, “I don’t want foreigners involved in my business. Jinn are one thing but I draw the line at Americans.” Such moments are just not common enough to merit reading this book over something better, like The Golden Compass or Gillian Flynn’s Gone Girl or Carlos Ruiz Zafón’s recent novels, all of which do language better.

Martin Amis, the essay, the novel, and how to have fun in fiction

There’s an unusually interesting interview with Martin Amis in New York Magazine, where he says:

I think what has happened in fiction is that fiction has responded to the fact that the rate of history has accelerated in this last generation, and will continue to accelerate, with more sort of light-speed kind of communications. Those huge, leisurely, digressive, essayistic, meditative novels of the postwar era—some of which were on the best-seller lists for months—don’t have an audience anymore. [. . .]

No one is writing that kind of novel now. Well [. . . ] David Foster Wallace—that posthumous one looks sort of Joycean and huge and very left-field. But most novelists I think are much more aware than they used to be of the need for forward motion, for propulsion in a novel. Novelists are people too, and they’re responding to this just as the reader is.

I think people aren’t reading the “essayistic, meditative novels” because “essayistic, meditative novels” reads like code-words for boring. In addition, we’re living in “The Age of the Essay.” We don’t need novelists to write essays disguised as novels when we can get the real thing in damn near infinite supply.

The discovery mechanisms for essays are getting steadily better. Think of Marginal Revolution, Paul Graham’s essays, Hacker News, The Feature, and others I’m not aware of. Every Saturday, Slate releases a collection of 5–10 essays in its Longform series; recent collections cover the Olympics, startups, madness in Mexico, and disease. The pieces selected tend to be deep, simultaneously intro- and extrospective, substantive, and engaging. They also feel like narrative, and nonfiction writers routinely deploy the narrative tricks and voice that fiction pioneered. The best essay writers have the writing skill of all but perhaps the very best novelists.

As a result, both professional (in the sense of getting paid) and non-professional (in the sense of being good but not earning money directly from the job) writers have an easy means of publishing what they produce. Aggregators help disseminate that writing. A lot of academics who are experts in a particular subject have fairly readable blogs (many have no blogs, or unreadable blogs, but we’ll focus on the readable ones), and the academics who once would have been consigned to journals now have an outlet—assuming they can write well (many can’t).

We don’t need to wait two to five years for a novelist to decide to write a Big Novel on a topic. We often have the raw materials at hand, and the raw material is shaped and written by someone with more respect for the reader and the reader’s time than many “essayistic” novelists. I’ve read many of those, chiefly because they’ve been assigned at various levels of my academic career. They’re not incredibly engaging.

This is not a swansong about how the novel is dead; you can find those all over the Internet, and, before the Internet, in innumerable essays and books (an awful lot of novels are read and sold, which at the very least gives the form the appearance of life). But it is a description of how the novel is, or should be, changing. Too many novels are self-involved and boring. Too many pay too little attention to narrative pacing—in other words, to their readers. Too many novels aren’t about stuff. Too many are obsessed with themselves.

Novels might have gotten away with these problems before the Internet. For the most part, they can’t any more, except perhaps among people who read or pretend to read novels in order to derive status from being seen as readers. But being holier-than-thou via literary achievement, if it ever worked all that well, seems pretty silly today. I suppose you could write novels about how hard it is to write novels in this condition—the Zuckerman books have this quality at times, but who is the modern Zuckerman?—but I don’t think anyone beyond other writers will be much interested.

If they’re not going to be essayistic and meditative, what are novels to be? “Fun” is an obvious answer. The “forward motion” and “propulsion” that Amis mentions are good places to start. That’s how novels differ, ideally, from nonfiction.

Novels also used to have a near-monopoly on erotic material and commentary. No more. If you want to read something weird, perverse, and compelling, Reddit does a fine job of providing it (threads like “What’s your secret that could literally ruin your life if it came out?” provide what novels used to).

Stylistically, there’s still the question of how weird and attenuated a writer can make individual sentences before the work as a whole becomes unreadable or boring or both. For at least a century and change, writers could go further and further in breaking grammar, syntax, and point of view rules while still being comprehensible. By the time you get to late Joyce or Samuel Beckett’s novels, however, you start to see the limits of incomprehensibility and rule breaking regarding sentence structure, grammar, or both.

Break enough rules and you have word salad instead of language.

Most of us don’t want to read word salad, though, so Finnegans Wake and Malone Dies remain the province of specialists writing papers to impress other specialists. We want “forward motion” and “propulsion.” A novel must delight in terms of both plot and language. Many, many novels don’t. Amis is aware of this—he says, “I’m not interested in making a diagnostic novel. I’m 100 percent committed in fiction to the pleasure principle—that’s what fiction is, and should be.” But I’m not sure his own fiction shows this (House of Meetings and Koba the Dread suggest otherwise). Nonetheless, I’m with him in principle, and, I hope, practice.

It’s here: Carlos Ruiz Zafón’s The Prisoner of Heaven

This came in the mail yesterday:

(It was actually released today, but books that are pre-ordered through Amazon have a nifty habit of showing up a day early.)

I finished it between some of the monumentally tedious readings for my PhD exams. Expect more later. The short version: the novel starts slower than The Shadow of the Wind and The Angel’s Game, and, despite the note that

The Prisoner of Heaven is part of a cycle of novels set in the literary universe of the Cemetery of Forgotten Books of which The Shadow of the Wind and The Angel’s Game are the two first instalments. Although each work within the cycle presents an independent, self-contained tale, they are all connected through characters and storylines, creating thematic and narrative links.

the new novel depends substantially on its predecessors, either of which can be read independently much more easily than The Prisoner of Heaven.

The paper quality is also much worse than the previous hardcovers.