The Best Software Writing — Joel Spolsky

Well-written, insightful books on subjects I know nothing about often impart some lasting and surprising ideas. The biggest problem is finding them, since you don’t know they’re well-written or insightful till it’s too late. Pleasant surprises have abounded recently, one being The Devil’s Candy: The Bonfire of the Vanities Goes to Hollywood. Another comes from Joel Spolsky, who writes a popular blog on software called Joel on Software and edited The Best Software Writing I. In an industry where books date so fast as to be almost pointless, like the hardware that runs software, one astonishing aspect is how The Best Software Writing, published in 2005 and composed of many essays written earlier, is still relevant and fascinating—and will probably be so for a long time yet.

Take Danah Boyd’s “Autistic Social Software,” which, like most of The Best Software Writing, explains how computers and people interact. It was published around 2004, which represented a societal turning point not widely recognized at the time, as virtually everyone my age hopped on what we now call “social networking sites.”* She observes that those sites weren’t very good because they weren’t focused on users, even drawing a not entirely apt analogy similar to the one I made in Science Fiction, literature, and the haters:

While many science fiction writers try to convey the nuances of human behavior, their emphasis is on the storyline, and they often convey the social issues around a technology as it affects that story. Building universal assumptions based on the limited scenarios set forth by sci-fi is problematic; doing so fails to capture the rich diversity of human behavior.

Her comments about science fiction are accurate regarding much, but not all of it, just like her comments about the focus of programmers on computers and their limitations, forcing us to adapt to them rather than vice-versa. The market has a knack for giving people what they want, however, and that focus is changing over time as iterative generations of software improve and people move to sites that work better. Boyd says, “[…] there is a value in understanding social life and figuring out how to interact with people on shared terms.” Right: and those who figure out what that means will be rewarded. I’m reminded of a programmer friend whose e-mail signature says “Computers aren’t the future; people are,” and I suspect he would approve of the lessons in this essay and larger book.

That’s a single example of how you take an offline phenomenon—how people congregate—and apply it to an online context. Other essays reverse that dynamic. Clay Shirky’s “A Group Is Its Own Worst Enemy” explains how online groups form and break apart in much the same fashion as offline groups. You could look at this in terms of clubs, families, countries or jobs, all of which have similar cohesive and destructive forces assailing them over different time periods. One thing the military has going for it is hundreds of years of experience in taking people and forcing them to work together toward a common goal. Many sports accomplish the same thing. But in both cases, the tasks—destroying things and killing people, moving a ball down a field—are narrow and well-defined compared to the wide-open field of artistic creation. Granted, both the military and sports have their wider, macro possibilities—what do we destroy and who do we kill and why? (this question is more often known as politics), or what rules should the game have and why?—but they’re not intrinsically undefined like software, or other forms of intellectual endeavor (Paul Graham wrote about this in Great Hackers). The incentives are easier to get right. In software, as in life, they’re not. Compensation becomes harder to get right when goals are less easily defined, which is a major subject in one essay and subsidiary in others. I wrote about it as applied to grant writing, using Spolsky as a launching pad, and if more people realized what he’s already discovered, we might not waste so much effort trying to reinvent the wheel or invent futile algorithms for what is inherently a tricky subject.

The Best Software Writing is, yes, about software, but it’s about more, including the future. Those interested in seeing it, and the inside of the most transformative industry of recent times, would do well to read it. It contains more thought than “Literacy Debate: Online, R U Really Reading?”, a New York Times article published yesterday (read it, or the rest of the paragraph won’t make much sense). Why hasn’t the reporter done enough background research? I wish I could say. The article contrasts with Shirky’s other piece, “User as Group,” which demonstrates much of what’s right about the new mediums without questioning the medium’s utility—something the New York Times article utterly misses. Furthermore, on the individual level, a person is going to suffer the pain of insufficient literacy or numeracy in the form of inferior jobs and a less intense life. Many seem happy to make such trade-offs, and we go on telling them to eat their Wheaties. If they don’t, they won’t be able to write at the level of skill and detail in The Best Software Writing, which would make the world a poorer place, but those involved don’t seem to care as a group. Oh well. Not reading Spolsky or Fred Brooks will harm the individual, but it will also cause splash damage to others who have to work with them. To the extent reading online ameliorates those problems, as Shirky implies, we’ve made improvements. He, Spolsky, and Brooks write about programming only to the extent you’re unwilling to see programming as a metaphor.

The major fear articles like “Literacy Debate: Online, R U Really Reading?” express, I suspect, is that many people are getting along without books and stories. On a societal basis, this probably isn’t a good thing, since democracies depend on educated citizens with historical knowledge—but on a personal level, if you’re a mid-level account manager at some large company, how much does your familiarity with Tolstoy and Norman Rush really help or hurt you? On the other hand, if you want to be at the top of virtually any field, you need to read and understand the world. In software, that means books like The Best Software Writing, which, though it consists almost entirely of pieces that originally appeared online, is a physical, dead-tree book that I liked reading on paper far more than I would’ve on the screen, where I already spend entirely too much of my face time. I want what I find convenient, as do most people, and many of the essays point toward defining what that means. It’s got more about how to fulfill human desires than most books, fiction or nonfiction. Volume II of The Best Software Writing might never appear. Given the strength of the first, I wish it would.


* I hope future readers find this strange phrase an anachronism showing how primitive we are, because it’s ugly and imprecise. If a phrase must be one, it at least shouldn’t be the other.

More on How Fiction Works and someone else’s review doesn’t

In The Australian, a nominal review of James Wood’s How Fiction Works is really a discussion of Wood’s work more generally. It also shows why I shirked writing a deep review of How Fiction Works, as I have more than a few quibbles:

If Wood doesn’t “get” the overall trick of an author’s writing he tends to dismiss it. This was most evident in his notorious Guardian review (reworked in The Irresponsible Self) of “hysterical realism”, a term Wood has coined to sum up the work of a whole slew of contemporary novelists that includes Don DeLillo, Zadie Smith, David Foster Wallace, Salman Rushdie and Thomas Pynchon.

Is this an issue of not “getting” the works, or of getting them too well and not liking or caring for what they represent? To me, DeLillo and Pynchon in particular have long been overrated. I remember trying to read them in late high school and early college and thinking, “Why are these awful writers so highly praised?” At the time I didn’t realize that they were a reaction against earlier literary trends and that they were trying to be stylistically unusual merely for the sake of being stylistically unusual, or for obscure philosophical points without writing actual philosophy. Paul Graham seems to have had a similar experience with actual philosophy. Wood gets this, and probably better than I do, and I’m not the only one who’s noticed the overpraised and under-talented; one thing I very much appreciate about A Reader’s Manifesto is its willingness to engage with writing, rather than the politics surrounding writing, or whatever propelled DeLillo to fame.

To return to the review:

While another critic might see the impulse towards jam-packed, baroquely hyperreal novels as a legitimate and thoughtful, albeit varyingly skilful, response to our postmodern world (a mimetic reflection of the different status of information in an age of instant and indiscriminate communication, say, or an attempt to “wake up” a form whose traditional gestures are now the cliched staples of Hollywood cinema) […]

The problem is that these techniques aren’t mimetic: in trying to mimic the supposed techniques that they implicitly criticize, they don’t reflect information, but chaos; they aren’t hyperreal, but fake. And I’m not convinced modern life is so different in terms of “the different status of information in an age of instant and indiscriminate communication.” Information isn’t indiscriminate: I still choose what to read and what to watch most of the time; if I’m exposed to ads, it’s because I choose to be. In some essays, Umberto Eco discusses how he sees ideas and battles from the Middle Ages underlying much of everyday life, and the more I read, the more I tend to trace the lineage of intellectual and personal ideas backwards through time. Although our technological and physical world has changed enormously in the last two hundred years, I’m not sure the purposes to which we put technology and power (conquest, sex, etc.) have changed much. That isn’t to say literary style hasn’t evolved, as it obviously has, and my preference tends toward novels written after 1900. Ideas have shifted and evolved too. Still, the fact that modern authors like the hyperrealists can use certain techniques doesn’t make those techniques an improvement. Furthermore, not all of Wood’s loves are mine—I just finished Henry James’ The Portrait of a Lady and wouldn’t have if I didn’t need to. But I have seldom read a stronger argument for the capital-N Novel than I have in How Fiction Works, and even when I don’t find Wood persuasive, the power of his argument and depth of his reading always compel me to think more clearly and deeply about my own positions and thoughts.

Entertainment and the novel

“Entertaining” is often thrown around and almost never defined, and its implicit definitions have assumed such a plethora of meanings that I’m not sure it still has any meaning, like a symbol so overloaded—the white whale, roses—that it collapses under its epistemological baggage. The issue arises because some correspondents and one commenter in Science Fiction, literature, and the haters wrote about it; the commenter says of science fiction readers:

They are looking for entertainment–space opera–and not a metaphysical journey. Just my 2 cents worth, adjusted for the cost of living since the expression first appeared.

Maybe: but what does “entertainment” mean in this context? Or in the context of any novel or work of art? Does it mean novelty? Continuity? Plot? Structure? Or some combination thereof? As one interrogates what entertaining means, one gets closer and closer to being a critic. Most usage of the word seems to imply that challenging or unusual novels aren’t entertaining, or at least aren’t as entertaining as those novels that seem to dominate bestseller lists like locusts dominating a field of grass. Umberto Eco gives his thoughts in Reflections on The Name of the Rose (which is apparently only available, and used at that, in the UK):

The reader should learn something either about the world or about language: this difference distinguishes various narrative poetics, but the point remains the same. The ideal reader of Finnegans Wake must, finally, enjoy himself as much as the reader of Erle Stanley Gardner. Exactly as much, but in a different way.

Now, the concept of amusement is historical. There are different means of amusing and of being amused for every season in the history of the novel. Unquestionably, the modern novel has sought to diminish amusement resulting from the plot in order to enhance other kinds of amusement. As a greater admirer of Aristotle’s Poetics, I have always thought that, no matter what, a novel must also—especially—amuse through its plot.

There is no question that if a novel is amusing, it wins the approval of the public. Now, for a certain period, it was thought that this approval was a bad sign: if a novel was popular, this was because it said nothing new and gave the public only what the public was already expecting.

I believe, however, that to say, “If a novel gives the reader what he was expecting, it becomes popular,” is different from saying, “If a novel is popular, this is because it gives the reader what he was expecting of it.”

The second statement is not always true.

Perhaps, but what of a novel with a strong plot expressed in unusual ways, like Ulysses? And even then, what is the difference between a “strong” and “weak” plot? The more I try to imagine how I would define them, the more they slip through my hands.

Elsewhere, Nigel Beale says a good book needs:

1) to find and revel in funny, beautiful, thought-provoking phrases, 2) dwell on profound paragraphs that contain useful truths about life and human nature, 3) lose myself in the lives of exceptional characters.

I’m not sure if that counts as entertaining or not, and, if so, why it does and others don’t. It also gels with Eco’s comment about modern literature decoupling entertainment from plot. Some novels I love don’t have much of Beale’s second criterion, or at least not explicitly—Elmore Leonard’s, for example. And is a character exceptional for what the person does (explorer, astronaut, spy?) or for how the person is described (as in Marilynne Robinson’s or Tom Perrotta’s novels)?

Entertainment also seems to drift with experience: what I found entertaining at 12—like Robert Heinlein—I can’t or can barely read now, and what I like now—such as To the Lighthouse—I wouldn’t have accepted then. For me, entertainment involves novelty in language and content, and the more I read, the harder that becomes to achieve; prolific readers (or, I suspect, watchers of movies) have to search harder and harder for the genuinely novel. Demands grow higher, perhaps helping to open the supposed rift between high and low, or elite and mass, culture. When entertainment is cited as a factor in pleasure or its absence, I think many of those who use the word are talking past one another. I won’t turn this into a philosophical discussion—too many of those turn into word battles, as Paul Graham says at that link: “Wittgenstein is popularly credited with the idea that most philosophical controversies are due to confusions over language.” Richard Rorty deals with the same issue less pejoratively in Contingency, Irony, and Solidarity. Those who would talk about entertainment should also be ready to talk about what they mean, but it appears too few are.

Dan Ariely in Seattle

In addition to being an excellent economist and writer, Dan Ariely has among the best syllable-to-letter ratios of any last name I’ve heard. I only learned how to pronounce AR-EE-el-EE on Feb. 27, when he visited Seattle to discuss Predictably Irrational. He warmed up the crowd with a visual illusion I fell for; this YouTube clip is a variation. Carefully count the number of one- versus two-handed passes in the video.

If you haven’t watched the clip, don’t read on. If you have, the question isn’t about passes: did you notice the guy with the cell phone walk up to the door behind the girls with the ball? Ariely’s video was more obvious: men in black and white shirts passed two basketballs and a guy in a gorilla suit walked through. Like most of the rest of the crowd, I didn’t notice the gorilla because I was busy counting passes (18 in all, though it depends on whether one counts a pass at the very end). To judge from the self-conscious laughter when Ariely pointed this out to us and the few hands that went up when he asked how many of us saw the gorilla, many others were in my situation. And with that, we were primed with a metaphor for the brain’s ability to create mental illusions.

Ariely gave many examples of such illusions and preferences. For example, opt-in versus opt-out retirement systems have widely varying degrees of participation, as do countries’ organ donation programs, depending on whether people are enrolled by default or must opt in. It turns out that we seem to have difficulty with multiple, complex choices and a tendency to fall back on defaults in the face of them. I’m reminded of Philip Zimbardo’s The Lucifer Effect: Understanding How Good People Turn Evil, which shows how otherwise normal people who receive arbitrary authority and limited oversight can commit evil acts. That tendency might be an aspect of a default option: obeying perceived authority.

Both Zimbardo and, implicitly, Ariely, argue that by becoming aware of such tendencies we can better correct or fight them. The tendency towards defaults, initial choices, and authority might also explain why change in societal attitudes often happens slowly: it takes generations for tides to shift and first decisions to be made anew. Paul Graham says, “I suspect there is some speed limit to the evolution of an economy. Economies are made out of people, and attitudes can only change a certain amount per generation.” Ariely’s research supports that conclusion, but I can also see how and why change might be accelerating: as people become more accustomed to change as the norm and as the first choice, it becomes more natural for the individuals who make up societies to reorient themselves faster to new choices. This could also help explain some of the findings in Gregory Clark’s A Farewell to Alms, which argues that the Industrial Revolution took off more because of attitudes and culture in England than other conditions. England’s culture during the Industrial Revolution had finally reached a place where change and innovation became the norm, and where society could support that change rather than relying on defaults like superstition or religion to explain worldly phenomena. It’s an intriguing hypothesis, though off the top of my head I can’t immediately think of a clever way to test whether change becoming a default norm might help change in the future, perhaps explaining why I’m not a behavioral economics professor.

Ariely also showed how we’re constantly using imperfect and imprecise knowledge to make decisions, allowing first decisions their power to frame how we think about something. In one experiment, Ariely read poetry to students and then asked how much two groups would either pay, or demand to be paid, to hear him recite poetry again shortly. The group asked how much they would pay offered money to hear Ariely read, and the group he offered to pay demanded money. It would appear that the way he framed the question caused them to offer or demand money—and to offer or demand more the longer the reading went on. I would also note that, although Ariely gave an excellent econ talk, I’m not sure I would go for his rendition of “Leaves of Grass.”

Now that I know, I wouldn’t pay to hear him read poetry regardless of whether he asked. But if he’s in town for economics, I’d see him, and so should you. You’ll laugh and learn, and the former might be the optimal way to induce the latter.

Conversations with Robertson Davies

I’m tempted to summarize Conversations with Robertson Davies, a collection of interviews with the great author, but I can’t, and even if I could I’d probably do better to give a few thoughts stemming from a comment Davies made about reading. As you can probably surmise, I like Davies’s work, so I find his comments without a fictional scrim interesting too. One exchange particularly resonates:

Robert Fulford: Books are things to be studied, judged rather than experienced. I think you once said that the heresy of the critic is that he is a judge rather than experiencer of literature.
Davies: Yes. […] As for my own books, I hope that the readers will have to use their heads and be collaborators, which is a thing I stressed in that earlier book. They should be collaborators in creating the work of art which is the book.

I tend toward judgment, and my chief criterion for greatness is met when a book causes me to spontaneously stop judging and start experiencing. To be fair, I can’t fully stop judging, but to the extent that my reading becomes more experience and less judgment I am inclined to like and love the book that induces this sensation. The best of Davies’s books—The Deptford Trilogy, The Cornish Trilogy, The Cunning Man—all accomplish this goal. Cryptonomicon and Straight Man and Lord of the Rings achieve the same effect. I wish I could fully explain how and why they do, but part of writing about books is writing about the inexplicable. Criticism is an effort to reveal more of the mystery that can’t ever be fully revealed.

To intersperse Elmore Leonard:

[Q:] There’s this presumption that a book is somehow a higher form of art, of a higher form of expression, than a movie. Do you agree?
[Leonard:] I don’t think the book is a higher form at all. Because most books are not very good. They’re a chore to read.

Occasionally a worthwhile book is also a chore, but only very seldom, and usually because I don’t understand it at first, as I didn’t understand Romeo and Juliet when reading it as a high school freshman. Recently I described The Bad Girl with language that brings to mind duty. I think Davies felt similarly to Leonard regarding bad books, or even books that aren’t essential (essential meaning different things to different people, of course, which might make the debate more a semantic one than one getting at underlying truth). Elsewhere in Conversations, Davies recommends reading fewer books but reading them with more depth and feeling.

I hope to read with more depth and feeling, and part of the reason I write is to find both. Paul Graham explains the process well:

Expressing ideas helps to form them. Indeed, helps is far too weak a word. Most of what ends up in my essays I only thought of when I sat down to write them. That’s why I write them.

Wow! I started the post writing about Robertson Davies, but along the way became more interested in the diversions than the original topic. And that is a good thing: one idea bumps into another, reminding me of something else, and off I go. I hope that is reading with feeling and intellect. The Elegant Variation, in discussing the maladies affecting book reporting, says “Too many reviews are dull, workmanlike book reports.” I agree, and think that many books are dull and workmanlike, so perhaps the reviews reflect them. That’s why I felt a sense of wonder at Davies’s books, as well as Conversations: they are not dull and workmanlike, and I hope my writing isn’t. After reading Mark Sarvas’s comments, I’ve tried harder not to write dull, workmanlike book reports. Is it working?

I hope so. Davies wrote many reviews of varying quality, but he was also a man who knew good work when he saw it. Conversations is filled with criticism (in the bad sense) of academic criticism (in the sense of commentary). I’ve heard James Wood (a TEV favorite) and others I know I’ve read but can’t think to cite at the moment say or write the same. So here’s to them, and to Davies, and to reading, and to experience.