Charlie Stross on the Real Reason Steve Jobs Hates Flash (and how lives change)

Charlie Stross has a typically fascinating post about the real reason Steve Jobs hates Flash. The title is deceptive: the post is really about the future of the computing industry, which is to say, the future of our day-to-day lives.

If you read tech blogs, you’ve read a million people in the echo chamber repeating the same things to one another over and over again. Some of that stuff is probably right, but even if Stross is wrong, he’s at least pulling his head more than six inches off the ground, looking around, and saying “what are we going to do when we hit those mountains up ahead?”

And I don’t even own an iPad, or have much desire to be in the cloud for the sake of being in the cloud. But the argument about the importance of always-on networking is a strong one, even if, to me, it also points to the greater importance of being able to disconnect from distraction.

In the meantime, however, I’m going back to the story that I’m working on. Stories have the advantage that they’ll probably always be popular, even if the medium through which one experiences them changes. Consequently, I’m turning Mac Freedom on and Internet access off.

What's Going on With Amazon and Macmillan?

The book blogosphere has been buzzing with the news that Amazon, a big website to which I link in most of my posts, isn’t selling any titles published by Macmillan, the smallest of the big publishers in the U.S. The dominant question in all this is “why?” There’s been lots of speculation, much of it not worth linking to, but Charlie Stross has written a handy outsider’s guide to the fight, which is actually about how the publishing industry will shake out as a book makes its way from an author to you, a reader.

The bad news is that Stross’ post is almost impossible to excerpt effectively, but I’ll try:

Publishing is made out of pipes. Traditionally the supply chain ran: author -> publisher -> wholesaler -> bookstore -> consumer.

Then the internet came along, a communications medium the main effect of which is to disintermediate indirect relationships, for example by collapsing supply chains with lots of middle-men.

From the point of view of the public, to whom they sell, Amazon is a bookstore.

From the point of view of the publishers, from whom they buy, Amazon is a wholesaler.

From the point of view of Jeff Bezos’ bank account, Amazon is the entire supply chain and should take that share of the cake that formerly went to both wholesalers and booksellers. They do this by buying wholesale and selling retail, taking up to a 70% discount from the publishers and selling for whatever they can get. Their stalking horse for this is the Kindle publishing platform; they’re trying to in-source the publisher by asserting contractual terms that mean the publisher isn’t merely selling them books wholesale, but is sublicencing the works to be republished via the Kindle publishing platform. Publishers sublicensing rights is SOP in the industry, but not normally handled this way — and it allows Amazon to grab another chunk of the supply chain if they get away with it, turning the traditional publishers into vestigial editing/marketing appendages.

The agency model Apple proposed — and that publishers like Macmillan enthusiastically endorse — collapses the supply chain in a different direction, so it looks like: author -> publisher -> fixed-price distributor -> reader. In this model Amazon is shoved back into the box labelled ‘fixed-price distributor’ and get to take the retail cut only. Meanwhile: fewer supply chain links mean lower overheads and, ultimately, cheaper books without cutting into the authors or publishers profits.

Read the rest on Stross’ blog.

This makes me feel slightly dirty for having bought a Kindle recently. On the other hand, this… thing… is between giant corporations, both of which are working to extract as much money from me as possible. If I had to root for either Macmillan or Amazon, I’d choose the former, since the prospect of Amazon as the middleman between virtually every reader and every author is unpalatable. But with the iPad en route, the Barnes and Noble Nook at least in existence, and other eReaders on the way, the prospect of Amazon’s dominance looks far less likely than it did. That’s probably why the company seems so desperate right now.

Harold Bloom on word processors (and, for good measure, editing)

Interviewer: Do you think that the word processor has had or is having any effect on the study of literature?

Bloom: There cannot be a human being who has fewer thoughts on the whole question of word processing than I do. I’ve never even seen a word processor. I am hopelessly archaic.

Interviewer: Perhaps you see an effect on students’ papers then?

Bloom: But for me the typewriter hasn’t even been invented yet, so how can I speak to this matter? I protest! A man who has never learned to type is not going to be able to add anything to this debate. As far as I’m concerned, computers have as much to do with literature as space travel, perhaps much less. I can only write with a ballpoint pen, with a Rolling Writer, they’re called, a black Rolling Writer on a lined yellow legal pad on a certain kind of clipboard. And then someone else types it.

Interviewer: And someone else edits?

Bloom: No one edits. I edit. I refuse to be edited.

This passage comes from The Paris Review Interviews Vol. II, which is much recommended, and should be considered in light of my recent post arguing that the computer, operating system, or word processor a writer or novelist uses doesn’t matter much, although I still like Macs. If Bloom, Freud, and Shakespeare could get by without debating the operating system or word processor being used, so too should you (this isn’t the same as saying you shouldn’t use a word processor, but rather that you should spend the minimum amount of time worrying about it, and the maximum amount of time worrying about your writing).

Apple’s Snow Leopard Day!

If you’re a Mac user, today is Snow Leopard Day—meaning that Mac OS 10.6 is out. It has few major “features” in the sense that earlier versions did but is supposed to be much refined from Leopard. My copy is due to arrive early next week.

You can also read David Pogue’s review, Joshua Topolsky’s review (which has numerous screen shots), Brian Lam’s review, and Walter Mossberg’s review if you want to know more.

Microsoft Word and the fate of the word processor

There’s a fascinating discussion at Slashdot regarding the life (and death?) of Microsoft Word, the much used and much despised word processor. Jeremy Reimer of Ars Technica posits that Word is going to lose out to wiki-style online editing tools. Maybe he’s right, but I’m skeptical because I suspect that most documents are only read by a single person, and when they’re edited by multiple people, they still tend to revolve around a single person. Writing tends to work best in serial, not parallel, mode; this might be the subject of a future Grant Writing Confidential post. (Edit: See One Person, One Proposal: Don’t Split Grant Writing Tasks.)

Some of the Slashdot comments show the worst of Slashdot’s solipsism and narrow-mindedness, however. This one in particular is galling because its author obviously doesn’t know what he’s talking about. For example, he says that “If, by ‘professional writer,’ you mean someone actually producing text, the main needs are a good text editor, which can be found many places.”

With all due respect, a good text editor, even one that’ll give diffs, is nowhere near as fast and as easy as Word’s track-changes system. As Philip Greenspun, well-known Microsoft shill, says regarding his book-writing project:

At least at Macmillan, everyone collaborates using Microsoft Word. I’d wanted to write my book in HTML using Emacs, the text editor I’ve been using since 1978. That way I wouldn’t have to do any extra work to produce the on-line edition and I wouldn’t be slowed down by leaving Emacs (the world’s most productive text editor, though a bit daunting for first-time users and useless for the kind of fancy formatting that one can do with Frame, Pagemaker, or Word). Macmillan said that the contract provision to use Word was non-negotiable and now I understand why.

Microsoft Word incorporates a fairly impressive revision control system. With revision control turned on, you can see what you originally wrote with a big line through it. If you put the mouse over the crossed-out text, Word tells you that “Angela Allen at Ziff Davis Press crossed this out on March 1, 1997 at 2:30 pm.” Similarly, new text shows up in a different color and Word remembers who added it. Finally, it is possible to define special styles for, say, Tech Reviewer Comments. These show up in a different color and won’t print in the final manuscript.

The original commenter says that free software can replace Word. I’d observe that a) everyone I have to collaborate with has Word and b) only one other person I know has OpenOffice.org, which also looks hideously ugly on OS X and, when I’ve tried to use it, crashes frequently. Most professional writers appear to use Word. That they don’t migrate en masse to text editors, which have been around since at least the 1970s, shows that there must be some advantage, even if it’s merely network effects, to using it.

Another commenter said that Word “has too large an installed base and there is too much inertia for people to change,” inspiring a third person to chime in, “You know, I’m sure they used to say the same thing about Wordperfect, remember them?”

And in those days, the total number of computers bought every year exceeded the entire previous install base, year after year. Since around the late ’90s, however, that hasn’t been true. Today, if you want to get people to switch operating systems/word processors/e-mail clients/whatever, you have to get people who already have computers to consciously change their behavior. This is really, really hard to do. That’s the difference between WordPerfect’s dominance in the ’80s and Word’s dominance today.

As Joel Spolsky says:

Microsoft grew up during the 1980s and 1990s, when the growth in personal computers was so dramatic that every year there were more new computers sold than the entire installed base. That meant that if you made a product that only worked on new computers, within a year or two it could take over the world even if nobody switched to your product. That was one of the reasons Word and Excel displaced WordPerfect and Lotus so thoroughly: Microsoft just waited for the next big wave of hardware upgrades and sold Windows, Word and Excel to corporations buying their next round of desktop computers (in some cases their first round).

According to the research firm Gartner, “For the year [2008], worldwide PC shipments totaled 302.2 million units…” But Forrester estimates that there are about a billion computers in use. Many of those new units are probably first-time buyers in developing countries, second computers, computers for children, used by the same person at home and at work, and so forth; nonetheless, even if every one of those new computers replaced a single old computer, it would still take more than three years for the market to churn. That’s a major difference, and the installed base issue is why Word (and Office) aren’t going anywhere fast.
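The churn arithmetic above can be checked directly; a minimal sketch using the Gartner and Forrester figures quoted (round numbers, not precise market data):

```python
# Rough market-churn estimate from the figures quoted above.
installed_base = 1_000_000_000   # Forrester: ~1 billion PCs in use
annual_shipments = 302_200_000   # Gartner: 2008 worldwide PC shipments

# Best case for a challenger: every new PC replaces an old one.
years_to_full_churn = installed_base / annual_shipments
print(f"{years_to_full_churn:.1f} years")  # a bit over 3 years
```

Even under that generous assumption, a product that rides only on new hardware needs more than three years to reach everyone, which is why installed-base inertia now dominates.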

I don’t love Word and used Lotus Word Pro for years after it had been effectively abandoned because its styles functionality was (and still is) vastly superior to Word’s. But the program isn’t available for OS X and has died in IBM’s bowels. In the current computing world, it’s hard to imagine Word being superseded on the desktop; at some point in the future, a browser-based word processor might overtake it, but that day is still further off than many Internet prognosticators believe.

Computer post follow-up: The relative reliability of laptops versus desktops

In a post on the merits of laptops versus desktops, I wrote that “the inflated notebook total [regarding units sold] is probably in part due to the disposable nature and limited longevity of notebooks.” Two e-mailers took issue with this assertion because I didn’t have any direct evidence backing it up aside from the obvious engineering constraints that impair notebooks.

Evidence isn’t easy to find. The best I’ve seen comes from a recent issue of Consumer Reports, which says:

Cons: Laptops cost more than comparably equipped desktops. Our reliability surveys show laptops are more repair-prone than desktops. Components are more expensive to repair.

Isn’t that obvious? The miniaturization necessary to cram components into a laptop case combined with inferior heat dissipation and the wear of constantly opening, closing, and moving a computer would reduce the reliability of comparable laptops versus desktops. People who need the mobility should obviously make that trade-off, but to me the benefits of a laptop are overrated, especially given the price premium most already command. This holds true across brands.

One issue bigger than price is hassle: the longer I can keep a computer without the hard drive, logic board, or other components breaking, the better off I am because I don’t have to undertake the tedious process of fixing the computer. By that standard, a long-lived desktop is a beautiful thing indeed—especially when one doesn’t need the portability.

A recent NPD survey on netbooks found that “60 percent of buyers said they never even took their netbooks out of the house.” If your laptop never travels, why bother having one?

Still, my desktop preference may eventually change. AnandTech’s review of the new MacBook Pro batteries highlights the astonishingly long charge they hold and waxes euphoric in a way most unlike the normally staid tech site:

Ever since I first looked at the power consumption specs of Nehalem I thought it didn’t make any sense to buy a new, expensive notebook before Arrandale’s launch in Q4 2009/Q1 2010. While performance will definitely increase considerably with Arrandale, Apple just threw a huge wrench in my recommendation. The new MacBook Pro is near perfect today. If you need a new laptop now, thanks to its incredible battery life, I have no qualms recommending the new MBP.

But the power/performance of desktops today still beats laptops for those who don’t need the mobility. A MacBook Pro, 24″ monitor, and Intel X-25 SSD run well north of $2,000, compared to $1,500 for an iMac, which, according to Consumer Reports, should have greater longevity.

AnandTech has one other piece that sways me towards desktops, as this 2007 article shows: “Without even running any objective tests, most people could pretty easily tell you that the latest and greatest desktop LCDs are far superior to any of the laptop LCDs currently available.”

Gizmodo has also run a polemic announcing the justified end of the desktop. I’m not buying it.

EDIT: It’s 2016 and I’m using a Retina iMac. Laptops have, however, made impressive strides in screen quality.

Computer post: desktop or laptop/notebook?

Ars Technica reports that Global notebook shipments [have] finally overtake[n] desktops, making the issue all the more salient (Slashdot’s coverage is here). Of course, many of those notebooks are probably netbooks that supplement rather than supplant desktops, and the inflated notebook total is probably in part due to the disposable nature and limited longevity of notebooks. Still, the legitimate question remains, and my short answer for most people in most circumstances is “desktop.”

My work demands sustained concentration (see, for example, “Disconnecting Distraction”) and being in one spot for a time helps that; I sold my PowerBook and used the proceeds for a 24″ aluminum iMac. It’s a vastly faster machine that’ll probably last longer than an equivalent laptop, and it cost less. Those who want mobility pay for it, and I suspect most people overestimate their mobility and underestimate the benefits of a desktop.

But the question is one that an individual is better suited to answer, as it depends on that person’s needs, and I can only enumerate the trade-offs inherent in the laptop/desktop decision. The question becomes almost philosophical concerning the nature of the person you are: more peripatetic or less? Working for longer at a computer or not as long? Used to a large screen or not (becoming accustomed to space and then having it removed is difficult)? Annoyed by cable creep or not? To be sure, some groups of people are well-suited to notebooks: people who move often, have to travel frequently, and students scurrying between dorm and home all probably fit that category. I suspect there are fewer of them than the laptop numbers indicate and that many people don’t consider the detriments, especially ergonomically. I’ve heard the complaints too many times: my wrists hurt, or my back hurts, or my eyes are tired, and they almost always come from laptop users. I recently gave a friend a Griffin iCurve for her laptop, which seemed to help. iCurves are no longer made, but the new version is called an Elevator.

An Elevator, external keyboard, monitor, and mouse improve the laptop experience, but they’re expensive. Comparing Mac equipment makes this delta particularly obvious—even if one buys third-party monitors—as various pricing specials and whatnot don’t obscure the underlying prices. One person in an Ars thread said, “I’ve found that if you don’t need mobility, paying for it is a bad idea.” Indeed: and the question becomes “need,” which I can’t answer. A Slashdot commenter said that “the lack of replaceable parts is one other reason why laptop sales are ‘higher’ than desktop sales.” Combined with a) the inherent jostling laptops experience and b) the compactness of the parts, raising the temperature inside the machine and increasing the likelihood that subtle manufacturing flaws will do things like pinch video cords or dislodge logic boards, this means laptops are likely to need to be replaced more often, in addition to their higher upfront costs.

I have an iMac, which has some of a laptop’s drawbacks, including no user-serviceable parts aside from RAM. But it’s also relatively easy to move and more likely to last than a ceaselessly mobile laptop. It remains in one place, making it easier to get in the zone, as described by Rands in Repose at the link. Books, mostly fiction but still a few technical ones too, surround my desk, and, like Malcolm Gladwell, I’m more likely to turn to them for quotes, inspiration, and sounding in many circumstances than to the much-scattered Internet:

[Gladwell ….] still prefers to do most of his research at the NYU library. Google is something of a personal hobbyhorse: “Google is the answer to the problem we didn’t have. It doesn’t tell you what’s interesting or what’s important. There’s still more in the library than there is on Google.”

He’s overstating his case but I take his point. Then again, the article also says that Gladwell likes to work in coffeeshops, which is anathema to me: I look every time someone walks by or the espresso machine goes off like a whistle, and at the end of three hours I’ve written as many sentences. There’s even a picture of him sitting at a laptop, perhaps contradicting some of my overall point.

Nonetheless, like most philosophy problems, this one has no perfect answer and is more an expression of underlying values than anything else. Granted, this decision has a greater economic aspect given the continued cost disparity between laptops and desktops, which seems unlikely to disappear in the immediate future. But I think that, if most people weigh what they value, the money and advantages of a desktop more often than not make it the better machine. If you’re writing, or coding, or editing movies, or doing any number of other things for a sustained period of time on a somewhat regular basis, a desktop or laptop + external peripherals seems an improvement over a laptop alone. If you’re chiefly using a computer to read e-mail, check Facebook, and the like, the computer choice probably doesn’t matter. Either way, I’d rather save the money, although many others obviously prefer the mobility. To me, and presumably to many others who like to write and to read, the “deep thought” stage is more important than shallower activities that demand less cognitive attention. That’s not to say you can’t get in the zone or produce useful work on a laptop—millions of people obviously do—but I still think a desktop is a more satisfying overall choice.

I can guarantee nothing, of course, and Lord of the Rings speaks to this issue, as it does to so many:

“… The choice is yours: to go or wait.” [Gildor said.]
“And it is also said,” answered Frodo, “Go not to the Elves for counsel, for they will say both no and yes.”
“Is it indeed?” laughed Gildor. “Elves seldom give unguarded advice, for advice is a dangerous gift, even from the wise to the wise, and all courses may run ill. But what would you? You have not told me all concerning yourself; how should I choose better than you? But if you demand advice, I will for friendship’s sake give it.”


EDIT: A recent NPD survey on netbooks found that “60 percent of buyers said they never even took their netbooks out of the house” (hat tip Salon.com). If your laptop never travels, why bother having one?

EDIT 2: I posted a follow-up regarding the relative reliability of desktops versus laptops. The former win according to the best data I’ve seen.

EDIT 3: Marco Arment has a post on why he’s now using a MacBook Pro instead of a Mac Pro. The reason: Solid State Drives (SSDs). The limiting factor on laptop performance for most people used to be the hard drive. With an SSD, it’s not. If you have enough money for a large-capacity SSD and are willing to put a conventional hard drive in the CD / DVD bay, you’re not giving up any substantial performance in day-to-day tasks. More than anything else, the growing power of SSDs makes me think the days of desktop computers are numbered.

Life: thoughts on computers and tools

“Walking into Nathan and Kristi’s empty house was a reminder of why stuff doesn’t really matter: We make the inanimate objects come to life, and not vice versa. Similarly, it reminded me that the fond feelings I have for this place are all wrapped up in the people. There was certainly no charm to those bare walls, studded with hooks where pictures once hung.”

—Alan Paul, “The Annual Expat Exodus Never Gets Any Easier

This is an appropriate quote given a friend’s recent e-mail asking if I’d become overly enamored of computers, given what she called an “almost pornographic” shot of my desk. It’s not dissimilar from Faramir’s comment in The Lord of the Rings, when he separates tools from their uses this way in The Two Towers: “[…] I do not love the bright sword for its sharpness, nor the arrow for its swiftness, nor the warrior for his glory. I love only that which they defend […]”

So too I feel about tools, be they computers or pens, or books themselves, which I see not as objects of reverence, but as bulbs that only shed light when read and shared. This could in part be a decadent opinion born of economic opportunity: five hundred years ago, or even fifty years ago, I might not have been so blithe, as books were far more expensive than they are today and have been declining in relative price for almost all of the 20th Century. Regardless of that, I’m lucky enough to live in a time when books are relatively inexpensive; though a book might have symbolic meaning, it is the thing or potential within, not the thing itself, that appeals, and it’s only to the extent that the exterior thing has the potential to manifest what’s within that I’m interested.

Predictably Irrational — Dan Ariely

One of the central tenets of economics is that we behave rationally, and yet much of what we see on a day-to-day basis defies rationality like some Modernists defy the conventions of plot. We become irrationally attached to concepts like “free,” even if something else is a better value, and our price preferences are relative: experiments in Dan Ariely’s Predictably Irrational show that we’re willing to forgo what seems to be a better deal just so we don’t have to risk even tiny amounts of money. These tendencies can be manipulated to some extent; Ariely says that the main lesson to be distilled is that “we are pawns in a game whose forces we largely fail to comprehend.” I disagree with the chess metaphor, as it seems to deny us the will and ability we have to learn about the game and not move forward just one square at a time, but the thought it expresses is accurate, and throughout the book I could think of parallel examples to the ones Ariely gives. We see the blindness in others more readily than in ourselves, and we become attached to prices, things, or ideas.

I remember turning 21 and being able to drink legally for the first time and being shocked at the price of going to bars; parties in college and high school usually charged three to five dollars for a cup and as much beer as you could drink. Girls got in free. If the door guy raised the price from three to five, I would try negotiating and sometimes leave. If I came with a group of attractive girls, which wasn’t often, I’d sometimes get in free. In contrast, at bars five dollars only gets you the first beer; to be fair, however, that beer is usually of higher quality than keg beer. Nonetheless, the price increase of an evening out caused much consternation at first, but now I’ve acclimated to the idea that, although Ariely says “[…] first decisions resonate over a long sequence of decisions,” I also use anchoring points in my price expectation continuum. Now paying $15 to $20 at a bar seems normal and $5 at a party would seem cheap. These “anchors” can change over time and with context. If I went to New York or L.A., where trendy bars allegedly now charge $15 a drink, I’d be astonished. When I was a freshman in college and a New York club accidentally gave me a band that allowed me to drink even though I was 18, I was shocked at having to pay $10 per drink and consequently didn’t drink much, even when a 23-year-old girl wanted to get me to buy shots. Buying her shots isn’t a good idea for reasons Richard Feynman goes into in Surely You’re Joking, Mr. Feynman! Nonetheless, I’m wandering far afield from the central point, which is that original decisions about price can resonate powerfully over time and can be hard to change.

Ariely uses Starbucks versus Dunkin’ Donuts as an example: Dunkin’ Donuts coffee was and probably still is much less expensive than Starbucks and, I would argue, not much worse if it is at all, but Starbucks still manages to charge millions of people three or more dollars for various drinks. They can do so in part because they’ve changed expectations through decor, drink names, and the like. “Starbucks did everything in its power […] to make the experience feel different—so different that we would not use the prices at Dunkin’ Donuts as an anchor, but instead would be open to the new anchor that Starbucks was preparing for us.” In other words, Starbucks created a new anchor. This raises fundamental questions about the nature of things like supply and demand—or, as Ariely says, “As our experiments demonstrate, what consumers are willing to pay can easily be manipulated, and this means that consumers don’t in fact have a good handle of their own preferences and the prices they are willing to pay for different goods and experiences.” I agree to some extent, as I didn’t like paying extra money to go to bars and avoided it to the extent I could when I first turned 21, but now all my friends go to bars and they’ve become the new norm. In the land of companies, Apple might be the best example of a company manipulating consumer expectations: only its operating system and industrial design separate it from other manufacturers, and yet it can get away with offering unusual machines and a limited, premium product lineup.

I wonder if Ariely has read Trading Up: The New American Luxury, which describes how some companies are trying to harness these price-point anchors—and redefine them. One point of Trading Up, however, is that new or luxury products must have at least some technical advantage over what they replace. Starbucks does: it offered espresso drinks when, to my knowledge, they were not readily available at most places. Not surprisingly, the book also covers Apple and BMW. Apple offers a real technical advantage to me in the form of OS X, but you can’t buy a regular desktop tower and separate monitor. Where Apple does compete it offers hardware at prices similar to competitors’, but you can’t get low-cost towers stripped of the computer equivalent of bells and whistles. In addition, this morning Apple released new versions of its MacBook and MacBook Pro laptops. The base-level MacBook is $1,100—or, thanks to Apple’s marketing, $1,099—but comes without a DVD burner, an extra gigabyte (GB) of RAM, or an extra 40 GB of hard drive space. Its processor is also slower. Given these drawbacks, it makes sense to buy the $1,300 version—but Apple’s website touts that the MacBook starts at $1,099. Yet buying the middle version is better, for resale value if nothing else. In doing so, the company might have differentiated itself enough to set new anchors for many consumers. And we either fall for it or make a rational choice, depending on one’s perspective.

Ariely doesn’t specifically cover Apple because he’s more interested in experiments where you have two things that are absolute equivalents, rather than OS X versus Windows. But I begin to see examples of some of his thinking in the world around me. There are limits to manipulation—I won’t pay $10 for coffee or $2,000 for any computer with the capabilities of a present-day MacBook. But I might pay marginally more for some products, like beer, depending on the setting and my age. In addition, product preferences change; in Ariely’s next chapter, “The cost of zero cost,” he describes how people will often take free even when it appears to be a better value to take money. He offered a $10 Amazon gift certificate for free or a $20 gift certificate for seven dollars. Buying the larger certificate nets more profit, but most people take the free one. To conventional economics this would seem irrational, but for some people an Amazon gift certificate might not be of as much use as cash; they might not read much, or want to buy DVDs, and the like. In essence, their demand for Amazon products sits lower on the demand curve. I would take the $20 certificate because I buy too much from Amazon already. In addition, he describes how Amazon’s free shipping policies can cause people to buy more than they would otherwise to reach the $25 free-shipping threshold, but I often add an extra book to reach it because I always have a backlog waiting. Not all those who act in response to Amazon’s offer act irrationally.
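The gift-certificate arithmetic Ariely describes is easy to tally; a small sketch, assuming (as standard economics would) that a certificate is worth its face value to the recipient:

```python
# Net value of each offer in Ariely's gift-certificate experiment,
# treating certificates at face value.
free_certificate = 10 - 0   # $10 certificate at no cost
paid_certificate = 20 - 7   # $20 certificate for $7

# The paid offer nets $3 more, yet most people take the free one --
# "irrational" only if the certificate really is worth face value to you.
print(free_certificate, paid_certificate)  # 10 13
```

If the certificate is worth less than face value to a particular buyer, as the paragraph above suggests, the "free" choice can be perfectly rational.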

Still, the issues of Amazon gift certificates and free shipping are mostly nitpicks. My bigger question concerns some of his methods for generating data—many of the stories and anecdotes come from experimenting on convenient undergraduates at good universities, who might not be representative of the general population. Though he follows up many with experiments elsewhere, I’m still leery of drawing overly broad conclusions based on limited samples. In addition, how reliably can we extrapolate data from a limited number of people in artificial settings and then apply it to the bigger world? Posing the question is much easier than answering it, and to Ariely’s credit he has given us a framework for exploring the issue, while I throw popcorn from the sidelines and offer stories about drinking. But the issues are real, and there’s a perpetual danger of finding a correlation that works only to discover that some other variable drives the correlations or causes experiments to turn out as they do. Will our tendency to cheat and steal more when dealing with abstractions of cash rather than cash itself, as Ariely describes in “The context of our character, Part II,” really scale up to the level of Enron-style fraud? He makes a convincing case, but not one beyond all reasonable doubt, even if I can certainly agree that he meets the lower legal standard of a preponderance of the evidence.

And even if some of his conclusions make you go, “Really?”, his book is still fun to read. The chapters I discussed in depth were just a small part of Predictably Irrational, and to give every chapter the same treatment would lead to a document almost as long as his book. But maybe I’m inclined to like his book more because Tim Harford recommended it (in addition, Ariely sent me an e-mail about my Harford post, and, as often happens with famous authors, I have a slight tendency toward being star-struck, though admitting as much perhaps alleviates some of its effects). In “The effect of expectations,” he describes experiments showing that “When we believe beforehand that something will be good, therefore, it generally will be good—and when we think it will be bad, it will be bad.” He finds the influences go deep, and that signaling that an experience will be good can often make it so. Compare this, however, to Chris Matthews’ advice that one is better served by setting expectations low and exceeding them than by setting them high and missing, even if the ultimate result is the same. Matthews was discussing politics, though, while Ariely is describing, well, something more domestic and more grand at the same time. I feel like there is a way to reconcile the two views even if I have not found it yet, and it might speak to the depth of both writers that I have not been able to (incidentally, you should read Matthews’ Hardball).

Harford’s signal that this book will be good has an impact on the pleasure I derive from reading it, and I can’t help comparing The Logic of Life and Predictably Irrational, given their similar subject matter and proximity in both publishing date and my reading. Arguably, Harford is the better writer, with more journalistic zing, but this tendency also gets him into trouble: he jumps from idea to idea without transitions too often, and his chapters seem more loosely linked than Ariely’s. To be sure, both books are similar in that their chapters are more or less independent, but Ariely’s passes what I now call “the blog test” in that its content doesn’t seem to have been replicated on blogs and its form is not necessarily better suited to that medium. The buffet approach of Predictably Irrational by its nature lacks total coherence, but it also allows one to skip chapters at will without losing much. It also makes generalizing about the entire book more difficult, which is why I focused on particular chapters. The largest difference between The Logic of Life and Predictably Irrational is that the former makes the case for logic and rationality in a larger, social, macro sense, while the latter makes the case for irrationality in a smaller, individual, micro sense. And yet I can’t help but wonder if the latter approach supports the former, much the same way that the self-interest of capitalism might end up altruistically benefiting society on a large scale, or the way we might not be able to predict how an individual will act but can sometimes guess how large bodies of individuals will turn out. Take two people with different SAT scores and you can’t know that one will do better than the other, but take 100,000 people with very different scores and you’ll know that most of the top group will outperform most of the bottom. So too, maybe, with Ariely’s Predictably Irrational on the small scale and Harford’s The Logic of Life on the larger.
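That individual-versus-aggregate intuition can be sketched with a quick simulation (the score gap and noise level here are hypothetical, chosen purely for illustration): comparing any two individuals is noisy, but comparing large groups is not.

```python
import random

random.seed(0)  # deterministic for the example

def outcome(score):
    # Hypothetical model: life outcome = score plus a lot of individual noise.
    return score + random.gauss(0, 200)

top = [outcome(1400) for _ in range(100_000)]
bottom = [outcome(1000) for _ in range(100_000)]

# Any one high-scorer can trail any one low-scorer...
single_upsets = sum(t < b for t, b in zip(top[:100], bottom[:100]))

# ...but in aggregate the ordering is essentially certain.
assert sum(top) / len(top) > sum(bottom) / len(bottom)
```

With these made-up numbers, a fair fraction of one-on-one comparisons go the "wrong" way even though the group averages never do, which is roughly the relationship I am suggesting between Ariely's micro irrationality and Harford's macro rationality.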
Both books also have a self-help aspect: if you can understand your own weaknesses and how others will behave, you’ll be more likely to correct those weaknesses in yourself and exploit them in others. Of course, if enough people read both books, their behavior could change en masse, with the books altering the very thing they seek to measure, but this seems unlikely. Ariely knows about the issues with weakness, too: “[…] these results suggest that although almost everyone has problems with procrastination, those who recognize and admit their weakness are in a better position to utilize available tools for precommitment and by doing so, help themselves overcome it.”

Perhaps that is also true of readers of what I call, tongue-in-cheek, econ-for-dummies books.

Many of Ariely’s chapters are structured like this post: they tell a story, conduct an experiment, and then draw more general conclusions. The story could be a personal one from Ariely or drawn from another source. In my case, I tell a story, link it to Ariely’s experiments, and then draw a more general conclusion about his book and methods. Mostly, I suspect his book shows that we don’t really know what we want, which probably shouldn’t be a surprise given all the lonely hearts columns, uncertainty, regret, and the like we collectively experience. As such, it helps us better evaluate what we want and why we act the way we do, and that the book is fun to read helps as well. And it has enough substance to fuel more than 2,000 words of commentary and analysis.

NOTE: Ariely will be in Seattle tomorrow night, and I’ll be at Town Hall to hear him. For more about Ariely and behavioral economics, read What Was I Thinking? The latest reasoning about our irrational ways, an excellent New Yorker article, or this much shorter post on Marginal Revolution. Finally, the Economist’s Free Exchange has a very negative review that I think is wrong, as my comments above should illustrate. Its biggest complaint seems to be that Ariely doesn’t define what he means by rational, but if the writer missed that, I’m not sure he understood the book. For a descriptive but positive view, see The New York Times’ story, which is in the science rather than the books section.

EDIT: Dan Ariely’s visit was excellent, and I wrote about it here.