Kindle land, with requisite ruminations on the iPad

EDIT: See this comment on my long-term analysis of this generation of Kindle.

James Fallows says that in order to avoid becoming a Kindle bore, you should “Just shut up when tempted to say or write anything about it. Otherwise you’ll be driving people crazy with your enthusing about how useful and convenient it is, and what its potential might be, and how many elegant decisions are evident in its conception and design.” I’m going to violate that right now by enumerating the things the Kindle does right and the one huge, giant thing it does wrong. If this makes me a bore, proceed to the next post.

Things done right: The screen is very, very nice, as is the tactile feel of the device itself. Although notes aren’t as satisfying to write as they are on paper, they work reasonably well and are easily aggregated. The “search” feature allows effectively infinite, immediate concordances. Shopping in the Kindle store is easy, although I think I’ve only bought two books from it because of the DRM.

The most useful thing about the Kindle for me isn’t actually reading books bought from Amazon—I’m reluctant to spend much money on them, knowing there’s a decent chance that in five years I’ll have a different device or won’t be able to transfer the books I buy now. Rather, Marco Arment’s Instapaper makes the Kindle insanely useful. If I find a longish article online, I hit the “Read Later” bookmarklet in Firefox. About once a week, I log into Instapaper and download all those articles onto my Kindle. Bingo: I don’t have to keep printing and losing papers, and I still get to read everything I want to read.

Things done wrong: The big-time, number one problem with the Kindle is its terrible software for organizing and managing documents. Actually, scratch that: it doesn’t really have software for managing documents.

The Kindle shows up as a generic USB device on OS X. Want to load it with .pdfs? Be prepared to drag them into a folder labeled “documents.” This process reminds me of .mp3 players… before the iPod. This doesn’t bode well for Amazon, especially now that the iPad is out.
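In fact, the entire “management” workflow Amazon provides can be reproduced in a few lines of script, which is exactly the problem. Here’s a minimal sketch, assuming the device mounts at /Volumes/Kindle on OS X (the mount point and the local source folder are my assumptions, not anything Amazon documents):

```python
#!/usr/bin/env python3
"""Sketch of the manual Kindle "sync": copy new .pdfs from a local
folder into the device's "documents" folder. Paths are assumptions."""

import shutil
from pathlib import Path

SOURCE = Path.home() / "Documents" / "to-kindle"  # hypothetical local folder
KINDLE = Path("/Volumes/Kindle/documents")        # assumed OS X mount point

def sync_pdfs(source: Path, kindle: Path) -> None:
    if not kindle.is_dir():
        raise SystemExit("Kindle not mounted; plug it in first.")
    for pdf in source.glob("*.pdf"):
        target = kindle / pdf.name
        if not target.exists():  # skip files already on the device
            shutil.copy2(pdf, target)
            print(f"copied {pdf.name}")

if __name__ == "__main__":
    sync_pdfs(SOURCE, KINDLE)
```

That a fifteen-line script matches the official experience is the point: there is no library, no metadata, no organization, just files in a folder.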

The closest thing to a third-party management app I’ve found so far is Calibre, which is clunky and doesn’t work that well, especially out-of-the-box. It won’t automatically sync to my Kindle at the moment for reasons not abundantly clear to me; it doesn’t have built-in optical character recognition (OCR) for .pdfs; it doesn’t automatically copy things bought off my Kindle to the computer. The list goes on. The difficulty of writing really good, really intuitive software like iTunes is really, really high.

I’m reminded of this post comparing Tumblr and Posterous, two “reblogging” tools. The basic point: design counts more than technology. At the moment, the Kindle’s technology is impressive. The physical hardware isn’t bad, although the screen should be bigger: there isn’t enough space before I have to scroll. But until iTunes for the Kindle comes along and whisks the searching and sorting problems away, the Kindle is effectively crippled by software.

I’m sure the omission of iTunes-for-the-Kindle is intentional on Amazon’s part: what they really want you to do is pay them money every time you buy a book or convert a .pdf. That’s okay but seems penny-wise and pound-foolish; think of Scott Adams’ complaint about bad user interfaces. At the end he asks, “What is your biggest interface peeve?” I now have one.

In other news, Apple released the iPad not long ago, which virtually every media outlet on the planet has covered. Megan McArdle says of it:

I’m still unsure how the iPad gets around the core problem: it doesn’t replace anything. Buying an iPhone let me take my phone, my camera, and my iPod out of the briefcase. Buying a Kindle let me remove a newspaper, several books, and some documents I have on PDF.

You can see similar comments here.

But if the iPad’s software is sufficiently better than the Kindle’s, users might end up chiefly with it. One should read this article from Paul Buchheit’s blog, in which he notes the three reasons why the original iPod succeeded where others didn’t. It was:

1) small enough to fit in your pocket, 2) had enough storage to hold many hours of music and 3) easy to sync with your Mac (most hardware companies can’t make software, so I bet the others got this wrong).

Emphasis added. The weird thing is that Amazon is getting this wrong right now. Syncing the Kindle to my computer is cumbersome; there isn’t a good program for organizing my books and .pdfs. Charlie Stross writes about why he, a self-described UNIX bigot, uses a couple of Macs, instead of cheaper Linux boxes:

The reason I choose to pay through the nose for my computers is very simple: unlike just about every other manufacturer in the business, Apple appreciate the importance of good industrial design.

(Note: he’s British, which explains the “Apple appreciate” rather than “Apple appreciates.” The Brits think of corporations as plural; we think of them as singular. What would Steven Pinker say?)

I would also add that Macs have fewer and different hassles than Linux boxes, which I say as someone who had periodic problems with audio drivers and other things in the ~2001 – 2003 range before I gave up. But the Kindle’s hassles are reminiscent of a product that should be better than it is. I’ve drifted somewhat from the main point regarding the Kindle, but the device is one of these “close, but still wrong” items that is somewhat frustrating, much like Linux, the last Volvo I drove, the Ikea desks I’ve seen, and chairs that unsuccessfully mimic the Aeron.

The Kindle is very, very good for English majors who get assigned a lot of pre-1923 fiction (which they can get free online) or for people who like reading from that era and do so voluminously. For the rest of us, it falls short, especially in the nonfiction department, where it’s hard to skip from section to section quickly.

Reading fiction on it is a substantially better experience because I seldom skip long sections in novels—it’s pretty hard to decide an entire chapter should be skipped, since that chapter will usually contribute something important to the story (and, if it doesn’t, the novel isn’t very good). In addition, novels are relatively unlikely to have research citations, which are sometimes important in evaluating nonfiction, especially if that nonfiction makes extensive or dubious claims. Right now, the small amount of nonfiction I’ve got doesn’t come with footnote hyperlinks. It shouldn’t be all that hard to create a style named, say, footnote with an automated number linking it to a later number so that one can jump freely back and forth between them. But that’s rare in the books I’ve read.
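To make the idea concrete: the jumping the author wants is just a pair of HTML anchors pointing at each other, and Kindle books are typically built from HTML before conversion. A minimal sketch of the paired-anchor scheme; the function names are mine, not part of any Kindle toolchain:

```python
"""Sketch of two-way footnote links in HTML: the in-text marker and
the note each carry an id and link to the other's id."""

def footnote_ref(n: int) -> str:
    # In-text marker: links forward to the note, and has its own id
    # so the note can link back.
    return f'<sup><a id="ref{n}" href="#fn{n}">{n}</a></sup>'

def footnote_text(n: int, text: str) -> str:
    # The note itself: links back to the in-text marker.
    return f'<p id="fn{n}"><a href="#ref{n}">{n}.</a> {text}</p>'

print("Extensive claims require citations." + footnote_ref(1))
print(footnote_text(1, "See the original study for details."))
```

Nothing about this is technically hard, which makes its rarity in ebooks all the more puzzling.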

Amazon has released a Kindle Software Development Kit (SDK), which might solve some of its current problems. But until it solves the “organizing home” problem that iTunes solves so well, it’s not going to be a tremendously useful device for me and many other heavy readers who need some way of getting articles to and from the device. That’s a huge missing feature that Instapaper (somewhat) solves, but not well enough. The Kindle is an “almost” device, like many of the “almost” mp3 players before the iPod. But I don’t think “almost” is enough.

Influential books (on me, that is)

Econ (and generally interesting) blogger Tyler Cowen lists the 10 books that have most influenced him and invites other bloggers to do the same. Here’s mine, in the order I thought of them rather than in order of importance:

1. Tolkien’s Lord of the Rings: I think many people find that their first “adult” book is a powerful influence, and I first read Lord of the Rings in late elementary or early middle school and reread it periodically: its commentary on power dynamics, the limits of knowledge, and the challenge of understanding still affects me. And it makes its way into a surprising amount of otherwise unrelated academic work. And the story. And, and, and…

2. Robert Jordan’s The Wheel of Time and the DragonLance series: Another early influence, this time mostly for the worse: the view of sexuality in both series is juvenile, the writing atrocious, and the mindless glorification of battle and power for its own sake is, from my current vantage, almost sickening. But they now show me what not to do as a writer and thinker and probably contributed to the lost, unhappy middle school years so many have.

3. Geoffrey Miller, The Mating Mind: Robin Hanson recommended this. Its ideas about the role of art and culture in sexuality—and about the second half of Darwin’s theory, sexual as opposed to natural selection—clarified a lot of my thinking. Even today, many people focus on “natural” selection but miss the importance of sexual selection. I still don’t think I’ve exhausted the book, although I have read others in the genre. Along with some of the books below, it pointed me toward a better understanding of how people signal and how people perceive, which I didn’t understand previously.

4. Dan Ariely, Predictably Irrational: I’d read many critiques of our rationality, but before Predictably Irrational I probably would’ve argued that we should look solely at behavior to discern individual wants and that individuals are independent in an almost Ayn Rand way. Although Predictably Irrational isn’t solely responsible for this shift and others, it probably catalyzed them.

5. Conrad’s Heart of Darkness: By most of the conventional tropes of creative writing classes, Heart of Darkness is terrible. But it reminded me of the power of the unknowable and of the limitations of what we know. The most famous scene is of course Kurtz’s “The horror! The horror!”, but what strikes me in rereading it is how little time Kurtz actually gets and how great Marlow’s anticipation of Kurtz is. The novel is more about how Marlow perceives (and thinks he will perceive) Kurtz than about Kurtz himself, which taught me how powerfully perception shapes reality.

6. Paul Graham’s essays: Although not technically a book, some of Graham’s essays have been collected into Hackers and Painters. I pay special attention to the essays about social structure and the role of the individual in social structures. Some of the ones about school, especially high school, I assign to students.

7. Umberto Eco’s The Name of the Rose: This is the kind of novel that I wish someone had demanded that I read earlier than I did. Claiming that something is the “greatest novel” strikes me as silly, but if I were forced to choose one, this would be in the running and seems like it contains the world as few novels do. Is this vague? That’s because trying to encompass it is beyond me.

8. Robert Penn Warren’s All the King’s Men: This book made me more cynical and hopeful about politics—at the same time. Its style isn’t baroque but tends toward long, beautiful sentences; Jack Burden’s understanding of what actually matters, which doesn’t really occur until the last chapter, is so authentic and wonderful that it seems truer to life than the darker ending of Gatsby. Still, its depiction of sexuality now feels very much of its time, rather than of all time.

9. James Wood, How Fiction Works: Wood asks a critic’s questions and gives a writer’s answers with such precision and beauty that this essentially defines the terms of the novel for me. The last two words of the preceding sentence are essential: the joy of the novel is the inability to define or encompass it.

10. Neil Strauss, The Game: I didn’t love The Game for its stories about pickups, but it has a central, important idea: most conversations in most situations are boring and predictable. Solution: shake things up. Predictability can be boring; in social situations around an attractive person, many people (not just men) get scared, and when they’re scared they become more conversationally conservative, and then fail through excess caution. Chances are, no one wants to tell you where they’re from; ask them for an opinion that elicits interpersonal beliefs instead. Most guys are also poorly educated and socialized around dating, women, and sex. The Game may not be a perfect book but it moves the conversation about dating and sexuality forward in a way that few other books have accomplished. Most of the negative discourse around The Game doesn’t address the gaps the book is responding to. If you know of a prosocial equivalent of The Game, I’d be happy to hear about it.

I’ll stress that these aren’t the books I’m most proud of: The Wheel of Time is terrible, some seem like lightweight popularization, others are not books I would necessarily recommend today, or to everyone—but they all did their work. If I could pick an 11th I’d choose Daniel Gilbert’s Stumbling on Happiness, which has several core insights that I try (and often fail) to apply: money above something like the median household income won’t make us happy; our sex and social lives matter more; and our ability to predict what will make us happy is weak. Robertson Davies continues to be a favorite author because he has perspective in a complicated way I can’t easily define: he combines much of the best of Victorian fiction with a modern sensibility and a style that’s his own and yet universal. In the His Dark Materials trilogy Philip Pullman shows how fantasy can be done not just right, but spectacularly well.

Another omission: I wish I could think of an individual book that convinced me dense cities are vital because of their networking effects, environmental improvements, and the possibility (seldom achieved) of affordability, and that the well-intentioned preservationist/anti-growth types are wrong. I’ve had several arguments with people who are a) pro-affordability, b) anti-sprawl and c) anti-height. You can’t consistently have all those things; a) is most often neglected. Newspaper articles in particular like to pretend these trade-offs don’t exist.

Many others have answered the call for books too, and I find their posts fascinating even though I don’t read most of the bloggers involved. But the books themselves (and the rationale for their influence) point to deeper ideas about how influence works and the serendipity of the right person finding the right book at the right time. Most of the answers are political science- and/or economics-oriented, but a fair amount of fiction crops up.

At some point I’ll also post a list of books that I wish someone had shoved into my hands when I was younger with a demand that I read said books.

EDIT: Julian Sanchez has an interesting meta post about influence, in which he posits that people mean influence in two major ways: on a formal/substantive axis (does it show me how to do something?) and on a theoretical/practical axis (does it show me what I should think/believe?). The distinction seems useful. Most of my list leans towards the theoretical/practical side. One thing I’ve noticed about these lists is that they very seldom include examples of what not to do—in other words, books that one reacts strongly against.

How to request review copies or products if you're a blogger

A number of people have written to ask how/why Kinesis, Metadot Corporation (which makes the Das Keyboard), and others send review keyboards or books. The short answer is that I asked, had a reasonable purpose in trying to review keyboards or books, and have a significant enough forum to make it worthwhile. To do the same, bloggers need a number of key qualities: credibility, good writing, some connection to the topic, and manners.

Credibility

Don’t write to manufacturers two weeks after starting your blog when they can still see the “hello world” post. Anyone can register joeblow.wordpress.com and write a couple of posts, then start clamoring for “free stuff.” If you’re going to request review items, make sure your blog has enough history to make it plausible that you a) are committed to writing and b) have enough readers. “Enough” is a bit slippery because a blog with the right 50 readers a day who come for a specialized subject might be more useful than a blog with 500 or even 5,000 readers—it’s probably easier to get 500 hits by posting pictures of scantily clad or unclad teenage girls than it is to get 50 writing about the art of the novel, but if you want to review fiction, the latter group is probably of greater interest to publishers.

Still, all things being equal, more popular blogs are often more popular because they’re better, which causes people to link to them, which causes more readers to find that blog, which causes more people to link, and so forth. You don’t need to be on the Technorati top 100 blogs, but make sure you’ve written enough for people to evaluate your writing skill and for some kind of audience to have found you. As a loose rule, I’d say that you should write at least one substantive post a week for about a year before you request review items.

Write a good review, not a positive review

In How to Get Free Books to Review on Your Blog, “Nick” says:

Note that I didn’t say [that you should write a] positive review. I said a good review. You should not feel inclined to write positive things about the book just because you received a free copy. If you write a fair, honest, and professional review, most publishers will respect your opinion.

He’s correct: you’ll lose credibility with readers if you’re nothing more than a shill, especially in an age when sophisticated readers have their bullshit detectors justifiably set on “maximum.” Bloggers are best when they’re honest, or as honest as they can be; that’s one reason why I include the disclaimer at the bottom of keyboard reviews if the keyboards come from the manufacturer rather than having been bought by me: at least readers know the provenance of the items I’m looking at.

I don’t usually do this with books because it’s less important: the cost of a book, usually between $10 and $20, is lower, and publishers don’t expect or want review copies back. But when I write reviews, I make sure they’re meaty enough to justify my effort in producing them and the reader’s effort in reading them by citing as many specific characteristics as possible to justify whatever opinion I’m expressing or conclusion I’m coming to.

Be (or become) a good writer

There’s nowhere to hide on the Internet and it’s easy to judge the quality of a blogger’s writing simply by reading their work. If the writer can’t explain what they like or dislike and why, they’re probably not a very good writer; many, many bloggers (and mainstream reviewers too) just write “this is awesome!” or “this sucks!” without much elaboration. That tendency towards shallowness is one reason I started writing in-depth keyboard reviews: they didn’t exist or, if they did, they weren’t readily available. Some novelists have said they write novels that they would like to read but that no one else has written, which is how I often feel about my reviews (and much of my other work).

If you don’t know what good writing looks like, or dispute the very idea that there can be good writing (as some of my students do), you’re probably not a good writer. If you want to become one, there are many, many resources out there to help you, mostly in book form. A few that I like and that have helped me include William Zinsser’s On Writing Well, Francine Prose’s Reading Like a Writer, James Wood’s How Fiction Works, Harold Bloom’s How to Read and Why, and the New York Times’ collections, Writers on Writing. In addition, one thing that separates good from bad writers is that good writers read a lot and write a lot.

One note: being a good writer doesn’t mean that your grammar has to be perfect or your blog typo-free, but your posts shouldn’t be riddled with typos and elementary grammar errors either. I’m sure many of my posts, especially the long ones, have typos, but they tend to be minor and easily overlooked; if readers send me notes or leave comments pointing out typos, I silently correct them.

Connection

If you’re writing a blog about, say, cats, and you request a hard drive review unit, you’re probably doing something wrong. If you write hard drive reviews and request a new kind of kitty litter, you’re also probably doing something wrong. Seek things that relate to your niche.

In my case, I started a blog about books and literature because I like to read and like to write; to me, most of the posts on this site are leisure, not work. The first time I got a free book (or “review copy” in industry jargon), a publicist contacted me regarding Lily Koppel’s The Red Leather Diary because I’d written a post about the New York Times article that led to the book. I was surprised: since when do publishers chase bloggers, rather than vice-versa?

I don’t know when the shift happened, but it did, which is why I now include my mailing address in the “About” section of The Story’s Story, and I take a look at everything that crosses my desk even if I don’t always write about it. Sometimes I request books that pique my interest.

All this is to show that a) I have a narrowly focused blog and b) the things I request—books—fall into that narrow focus.

The keyboards are tangential to books but still related, and I stumbled into reviewing them by accident: I read about the famous IBM Model M keyboard on Slashdot, the geek tech site, and started doing some research into it and other quality keyboards, like the Apple Extended II. Most of the reviews and comments were not very helpful, especially for Mac users, but they pointed to Unicomp, which manufactures the Customizer Keyboard, and to Matias, which produces the Tactile Pro. I tried both and wrote extensively about my experience with them.

I’m interested in keyboards because I spend a lot of time writing professionally, both as a grad student in English literature and as a grant writer with Seliger + Associates. Writers and programmers are probably more likely to be interested in keyboards than most people because keyboards are a fundamental part of their toolset, and when you use a tool a lot, you want it to be right.

To understand literature, I think it helps at least somewhat to have an understanding of literary production: the publishing environment, the historical circumstances in which a work was/is produced, and so forth. Such factors can’t supersede the work itself, but they nonetheless matter. They also matter for practicing writers, and if a good keyboard means that a writer can or wants to go for an extra half hour or hour a day, that’s a tremendous difference over the course of a year, a decade, or a lifetime. Writing about the tools writers use, therefore, seems sufficiently related to writing and books that I think keyboard reviews are worth posting.

Use your real name

Penelope Trunk’s Guide to Blogging is useful, and one of her posts is on the subject of why you should blog under your real name, and ignore the harassment.

I agree. Your real name lends credibility and makes you seem like (slightly) more than another random Internet squawker; public relations or press people are more likely to want to send something to a site run by Jake Seliger than they are to HoneyBunny or l33t48 or whatever. In looking through my RSS feed, I can see that most of the bloggers I read use their real names. Anonymity has its place in blogging, as it does in journalism, but if you’re going to review things you should have your name attached to that review. Some blogs demand anonymity, as Belle de Jour did until recently, but they should be the exception.

Manners

In the Internet age, we’re all supposedly turning into barbarians with the attention span of fruit flies. That’s the stereotype, anyway, and although it has some truth to it, I think it’s largely wrong, at least among the better bloggers. Still, one way to catch people’s attention is to do the opposite of what bloggers represent in the popular imagination. I’ve already covered the importance of attention spans in the section about “good” versus “positive” reviews, but I’ll deal with the “barbarians” idea here.

When you make contact with a publisher or company, figure out how they want to be contacted. There’s usually a public relations, media, or press contact. You should write to that person with a short note that says, briefly, what you want, why, and who you are. Covering those shouldn’t take more than two or three paragraphs. Don’t include your life’s story and don’t be vague: the contact person will decide whether to send a review model based more on your blog’s writing than on your e-mail, and they’ll be used to dealing with people who are professionals or at least act like them.

In my case, that means sending keyboard makers a note saying that I’d like to review their keyboard because I’ve reviewed a number of other keyboards, which causes people to write asking for comparisons, which causes me to seek review models. This bleeds into the “who am I” issue, where I state that I write The Story’s Story and contribute to Grant Writing Confidential, with links to both. From there, they can figure out as much or as little as they like.

If they send the keyboard, I say thanks, review it, and send it back, with another brief note that says “thanks, I appreciate you sending it.” I do that because it’s how I’d ideally like to be treated were our situations reversed, and also so that in the future, if I want to review a new model or whatever, they’ll be positively disposed towards me.

Don’t start a blog for free stuff

If I counted the number of hours I’d spent working on The Story’s Story versus the “pay” I’ve gotten in books or Amazon referral cash, I’m sure I’d be making well under a buck an hour. It’s probably closer to a cent an hour. If your purpose for starting a blog is to get free stuff, you’re doing something terribly wrong because you’re very unlikely to make real money as a blogger. Write because you want to, not because you expect direct monetary rewards. They definitely won’t come in the form of books or hardware; indeed, my bigger problem now is wading through and dealing with the books I don’t want, rather than cackling at the booty from the stuff I do want.

The Possessions Exercise (According to Geoffrey Miller)

I’m re-reading Geoffrey Miller’s books The Mating Mind and Spent: Sex, Evolution, and Consumer Behavior, partially for pleasure and partially because some of his ideas might make it into my dissertation. The latter book is worth reading if only for the exercises he lists at the end, including “The Possessions Exercise:”

List the ten most expensive things (products, services, or experiences) that you have ever paid for (including houses, cars, university degrees, marriage ceremonies, divorce settlements, and taxes). Then, list the ten items that you have ever bought that gave you the most happiness. Count how many items appear on both lists.

(This exercise ought to be conjoined with the reading of Paul Graham’s essay Stuff.)

For many people, I suspect that relatively few items appear on each list, although that might be projection on my own part.

I do a lot of work on my computer, so many of the “bought” items tend to be related to that: an iMac, an Aeron, a Kinesis Advantage. The “university degree” appears on both lists, although I suspect that I often appreciated the experience of being at a university for undergrad as much as, if not more than, the classes I was putatively there to take.
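Scored mechanically, the exercise is just a set intersection. A toy sketch, with entries drawn loosely from the items above (the rest are placeholders, not a real inventory):

```python
"""Count the overlap between the two ten-item lists in Miller's
Possessions Exercise. Entries are illustrative placeholders."""

most_expensive = {"house", "university degree", "car", "taxes",
                  "iMac", "Aeron chair"}
most_happiness = {"university degree", "iMac", "Aeron chair",
                  "Kinesis Advantage", "books", "travel"}

overlap = most_expensive & most_happiness  # set intersection
print(f"{len(overlap)} items on both lists: {sorted(overlap)}")
```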

The big takeaway from Miller’s exercise is obvious: what we really value often isn’t what we pay the most for, but few of us realize that. We overvalue stuff, to use Paul Graham’s phrase, and we undervalue each other, learning, making things, and interpersonal experience.

The Next 100 Years: A Forecast for the 21st Century — George Friedman

The Next 100 Years is fun because of its contrary, anti-conventional wisdom thinking about the shape of nations: instead of assuming the perpetual rise of China and India, the book sees internal weakness in both, as well as greater problems with a resurgent Russia and a nationalistic Turkey. Rather than focusing on current American battles with what Friedman calls “global jihadists,” which he argues are a passing trend in terms of their overall threat, it examines what a more assertive Russia might look like as it tries to expand its influence in Eastern Europe and the Caucasus. Although immigration from Mexico and Latin America is unpopular in the United States today, that immigration might become desired by the late 2020s as industrialized countries age. The United States is a “young” and “barbaric” country by the definitions Friedman gives. And the list goes on.

The problem with The Next 100 Years is that almost every page also contains a wildly implausible assertion or historical reading. To pick one example: after an extended discussion of Russia’s geopolitical interests leading toward 2020, Friedman says that openings in southern Russia combined with a continued American presence in Afghanistan means that “If there were an army interested in invading, the Russian Federation is virtually indefensible.” By conventional metrics, this is true, but it ignores the thousands of nuclear weapons Russia has. Such an analysis reads like someone planning military adventures in Europe in 1900: it so utterly miscalculates the kind of destruction its scenarios would produce that it doesn’t seem to understand the situation.

Elsewhere, in a specious discussion of the 50-year cycles of American history, Friedman talks about the cycle “From industrial cities to service suburbs,” along with the malaise of the 1970s. He doesn’t mention the Arab oil embargo, energy spikes, or our response to both—instead he focuses on tax policy. Friedman says that in the 1980s, “Reagan’s solution [to economic problems] was maintaining consumption while simultaneously increasing the amount of investment capital. He did so through ‘supply-side economics’: reducing taxes in order to stimulate investment.” But Friedman completely ignores the monetary policy side and Paul Volcker’s efforts to tame inflation (see here, here, and here for more on him). He also ignores the foreign currency issues regarding China, as described, for example, here.

On the war front, the introduction of The Next 100 Years says regarding World War II that “The United States simultaneously conquered and occupied Japan, almost as an afterthought to the European campaigns.” This a) ignores that Japan was the proximate cause of the United States’ entry into the war, b) ignores the enormous strain of fighting World War II in the Atlantic and Pacific, and c) ignores the hundreds of thousands of United States casualties in the Pacific. Calling it an “afterthought” seems wrong. In addition, Friedman writes that:

A country’s grand strategy is so deeply embedded in that nation’s DNA, and appears so natural and obvious, that politicians and generals are not always aware of it.

Funny: I’ve yet to see a country’s “DNA” expressed as a double-helix, and the idea of countries having completely describable characters seems overly limiting and simplistic.

Still, despite these kinds of problems, Friedman does an admirable if shaky job of refocusing on long-term trends; for example, he says that Vietnam and Iraq were and are, respectively, “merely isolated episodes in U.S. history, of little lasting importance—except to the Vietnamese and Iraqis.” In both cases (at least so far), it appears unlikely that the United States has been permanently hurt, and the great strengths the country possesses, like the universities and immigration that James Fallows writes about here, have not been affected in major ways.

Friedman ties together demographic trends, the status of women, the status of families, and international politics in novel, unusual ways, arguing, for example, that Osama bin Laden’s rants often include comments about family values and the status of women that indicate he, like Pat Robertson, is riled up about women being independent enough to choose partners, divorce, and so forth. Demographics power some of the major social and political tensions of our era, even when they’re masked by surface reasoning, much as the Thirty Years’ War was putatively about the souls of Catholics and Protestants while actually being about the distribution of power and resources in Europe.

I haven’t said much about Friedman’s views about China because those views are so easily arguable. He thinks that China is riven by tensions between wealthy coastal cities and the poor interior, which might eventually tear the country apart again, and that China is heading towards major problems with bad debt, economic structural incoherence, and banking problems. Maybe: but it’s also possible that China will knit itself closer together through telecommunications, roads, and railroads, and that its central leadership is aware of the problems Friedman enumerates.

By the same token, Russia could collapse again around 2020, but one could construct an equally attractive alternative scenario. In his defense, Friedman says that he thinks the broad outlines he gives will be followed even if the specifics are wrong, and in the epilogue he says:

It might seem far-fetched to speculate that a rising Mexico will ultimately challenge American power, but I suspect that the world we are living in today would have seemed far-fetched to someone living at the beginning of the twentieth century.

I’m sure the world of 2100 will seem “far-fetched” to someone from today, but the real question is, “far-fetched in what way?” The way Friedman describes, or some as-yet unforeseen way? I would bet on the latter, amusing though it is to anticipate the former. Too much is left out, including, notably, the threat of nuclear weapons and the possibility of global climate change. He says, however, that “My mission, as I see it, is to provide you with a sense of what the twenty-first century will look and feel like.” On this account he succeeds, provided that he changes the word “will” to “might.”

(500) Days of Summer with, as a bonus, Alain de Botton's On Love

(500) Days of Summer is about the mating habits of angsty hipsters. Said hipsters are endlessly concerned with the nature of love in a deep, romantic fashion when they should be thinking more about the mechanics of how and why someone is actually attracted to another person. To heal the anxiety that hipsters feel about attraction and love, I would prescribe Belle de Jour, Neil Strauss’ The Game, and Kundera’s The Unbearable Lightness of Being, which, taken together, remind one that the important thing about love is having enough game to get someone else to love you, not merely mooning over another person—which is more likely to drive them away than attract them.

In (500) Days of Summer, Tom does the mooning and Summer is indifferent and perhaps callous to his puppyish attention. Tom wants romance so badly that his 11-year-old sister says, “Easy Tom. Don’t be a pussy” at one point. We’re thinking the same thing, although perhaps not in those words, which are given to the sister chiefly, I assume, to get a laugh out of the incongruity of hearing her say them. In the next scene, Tom asks Summer, “What are we doing?” The better question, at least for the audience, is, “Should we care?” If Tom doesn’t get with Summer—who manifests no special or particular interests, talents, abilities, thoughts, capability, or expertise—there are another thousand girls right behind her, exactly like her, who are also part of the quirk genre, as described in the linked Atlantic article:

As an aesthetic principle, quirk is an embrace of the odd against the blandly mainstream. It features mannered ingenuousness, an embrace of small moments, narrative randomness, situationally amusing but not hilarious character juxtapositions (on HBO’s recent indie-cred comedy Flight of the Conchords, the titular folk-rock duo have one fan), and unexplainable but nonetheless charming character traits. Quirk takes not mattering very seriously.

Quirk is odd, but not too odd. That would take us all the way to weird, and there someone might get hurt.

Over time, quirk gets boring and reminds you why you like the real feeling of, say, King Lear, or the plot of The Usual Suspects. The mopey plight of undifferentiated office workers is less compelling, and, once sufficiently repeated, it feels like disposable culture: another story about two modern people with no serious threats to their existence save the self-imposed ones that arise chiefly from their minds.

Love stories about the relatively pampered can work: I watched (500) Days of Summer because a bunch of students mentioned it in relation to Alain de Botton’s On Love. But the novel is better: a philosophically minded and self-aware narrator is fascinating precisely because he is aware of the ridiculousness of his own predicament and the randomness of love. He has a therapist and a philosophy professor in his mind. The dichotomy between how he should feel (she’s just another girl) and how he does (transformed through love!) fuels much of the comedy, as does the narrator’s tendency toward self-sabotage thanks to Marxism as applied to love: he would never want to be a member of any club that would have him as a member. Tom would, apparently, sign up to be a member of any club that would have him as a member. His lack of interiority makes him boring. His lack of exteriority makes the movie boring.

Whoever wrote (500) Days of Summer must have read On Love (Tom is a wannabe architect and gives Alain de Botton’s The Architecture of Happiness as a gift) and wanted to do a film version, or at least steal from it. Stealing from On Love, by the way, is a brilliant idea: the novel still leaves much territory to be explored, and it’s probably impossible to draw a complete map to represent the problems that love provides. But the interior commentary that makes the novel special can’t be effectively represented on screen. So we’re stuck with two people whose averageness is painful and unleavened by any real sense of awareness of their own situation. One of my favorite passages from On Love goes:

But there wasn’t much adventure or struggle around to be had. The world that Chloe and I lived in had largely been stripped of possibilities for epic conflict. Our parents didn’t care, the jungle had been tamed, society hid its disapproval behind universal tolerance, restaurants stayed open late, credit cards were accepted almost everywhere, and sex was a duty, not a crime.

On Love acknowledges that the stuff that makes good fiction has largely been evacuated from modern love stories. Reading it, I laughed with recognition and at the narrator’s neuroticism about his own love stories. Moments like this abound in On Love and make it such a wonderful novel. Moments like this are absent from (500) Days of Summer, which makes it a tedious movie.

Problems in the Academy: Louis Menand’s The Marketplace of Ideas: Reform and Resistance in the American University

The problems in American universities are mostly structural and economic, and the biggest are occurring on the faculty side of the liberal arts and social sciences: since around 1975, too many professors (or at least people earning PhDs) vie for faculty slots relative to the number of undergraduates. Menand says (twice) that “Between 1945 and 1975, the number of American undergraduates increased by 500 percent, but the number of graduate students increased by nearly 900.” Undergraduates clear out of the system in four to six years; graduate students who get PhDs (presumably) stay or wish to stay for whole careers. Since 1975, college enrollments have grown much more modestly than they did from 1945 – 1975, and the department that’s grown most is business, since so many undergraduates now major in it. But grad programs haven’t scaled back, leaving humanities types to fight for scarce jobs and write polemics about how much it sucks to fight for scarce jobs.

Menand doesn’t identify the supply/demand problems as the major root cause of the other issues around political/social conformity, time to degree for academic grad students, and so forth, but it’s hard not to trace “the humanities revolution,” “interdisciplinarity and anxiety,” and why all professors think alike to supply and demand. Each of those topics is covered in a long chapter, and Menand’s first, on “The Problem of General Education,” seems least related to the others because it is mostly inside baseball: how we ended up requiring undergrads to take a certain number of courses in a certain number of fields, and what academia should be like. But the others make up for it.

The Marketplace of Ideas is worth reading for knowledge and style: the book has the feeling of a long New Yorker article—Menand is a staff writer there—and if he occasionally pays for it with the generalization that gets coldly stamped out of peer-reviewed writing, the trade-off is worthwhile. Menand is also unusually good at thinking institutionally, in terms of incentives, and about systems: those systems tend to evolve over time, but they also tend to harden in place unless some catastrophic failure eventually occurs. Such failures are often more evident in business than in public life, since businesses that fail catastrophically go bankrupt and are much more susceptible to competitors and regulators than governments are. The academic system is, as Menand points out, something out of the 19th Century in its modes of tenure, promotion, disciplinarity, and so forth. But it’s unlikely to go anywhere in an immediate and obvious way because public universities are supported by taxpayers and even private ones are most often nonprofit. Furthermore, whatever problems exist, universities do well enough, especially from the perspective of students, and having a glut of PhDs to choose from doesn’t harm universities themselves. Consequently, I don’t see as great an impetus for change as Menand implies, very loosely, that there is.

Take, for example, the PhD production problems from earlier in this post. The logical conclusion would be for fewer people to enter PhD programs, for universities to close some programs, for degrees to take less time (the natural sciences often end up requiring five years from entering to conferring degrees, while humanities programs creep above ten years), and so on. But there’s no real incentive for that on the part of an individual university: having graduate programs is impressive, grad students are cheap teachers, and people keep applying—even though they know the odds (this basically describes me).

Thus supply and demand stay out-of-whack. University departments can remain perhaps more insular than they should be. Publishing requirements increase as publishing becomes more difficult. But there’s little need to change so long as enough students enter PhD programs. Menand suggests shortening the time to graduate degrees, making them more immediately relevant, and closing some programs—none of which seem likely in the near future unless students stop enrolling. But they don’t because, once again like me, they see professors and think, “that looks like fun. I’ll take a flyer and see what happens.” Nonetheless, the professoriate is already changing in some ways: about half of students, as Menand observes and the Chronicle of Higher Education does too, are now taught by part-timers. With as many choices among instructors as universities have, that trend seems ripe for further acceleration.

Menand says that “For most of the book, I write as a historian.” He also says that he’s “not a prescriptivist” and implies pragmatism, rather than polemic. That’s wise: identifying the problems is probably easier than finding pragmatic solutions to them. He uses English as an example of what’s going on more broadly, and he is an English professor at Harvard. Part of the crisis is within English departments—what exactly does it mean to study “English?”—and part of it is external. The part outside English departments has to do with rationale and economics—as Menand says, “People feel, out of ignorance or not, that there is a good return on investment in physics departments. In the 1980s, people began wondering what the return on investment was in the humanities.” Note his “people feel” formulation, which is unsourced but occurs throughout; most of the time, speaking of a common culture feels right because Menand has his finger on the intellectual zeitgeist enough to pull off such comments, and elsewhere he has the numbers to back those comments up, especially regarding the flatlining and even decline in the absolute numbers and relative percentage of English majors on campus.

The other interesting thing is the word “crisis,” which I’ve used several times. The Oxford American Dictionary included with OS X says that crisis is “a time of intense difficulty, trouble, or danger.” The word “time” implies that crises should pass; but in English, the one or ones Menand identifies have lasted for more than a generation of academics. According to “The Opening of the Academic Mind” in Slate, “The state of higher education in America is one of those things, like the airline industry or publishing, that’s always in crisis.” In Rebecca Goldstein’s The Mind-Body Problem, the protagonist, Renee, thinks:

In the great boom of the late fifties and early sixties, graduate departments, particularly at state universities, had expanded and conferred degrees in great abundance. But then the funds, from both government and private foundations, had dried up, and departments shrunk, resulting in diminishing need. Suddenly there was a large superfluity of Ph.D.s, compounded by demographic changes […] The result has been a severe depression, in both the economic and psychological senses, in the academic community.

That was published in 1983. People are still publishing the same basic argument today, only now they often do it online. Perhaps the real lesson is that academics are great at learning many things, but supply/demand curves and opportunity costs are not among them, except for economists.

The problems are exacerbated in the humanities and social sciences because grad students in those fields don’t have industry to fall back on, but the natural sciences are not immune either. As Philip Greenspun points out in “Women in Science,” America seems more than willing to source its science graduate students from developing countries, which takes care of supply from that angle (if you read his essay, ignore the borderline or outright sexist commentary regarding women, even if his point is that women are too smart to go to grad school in the sciences; pay attention to the institutional and systematic focus, especially when he points out that “Adjusted for IQ, quantitative skills, and working hours, jobs in science are the lowest paid in the United States”).

Of course, even as I make myself aware of works like The Marketplace of Ideas, I continue working toward that PhD, convinced that I’ll be the one who beats the odds that are still better than Vegas, though not by a lot. But I’m also part of the imbalance: too many people seeking PhDs for too few jobs, particularly too few jobs of the sort we’re being trained to do. Yet academics still provide a vital function to society in the form of knowledge, and in particular knowledge that’s undergone peer review, however difficult or abstruse peer review may have become in the humanities (for more, see Careers—and careerism—in academia and criticism).

The question of what academia should be like is to some extent driven by what professors think it should be like, but it’s also driven by what students think it should be like. Students ultimately drive academia by choosing where to go to school. An increasing number of them are choosing community and online higher education. It’s not clear what this shift means either. Still, professors bear some blame as well: as the aforementioned Slate article suggests, “[…] Professors, the people most visibly responsible for the creation of new ideas, have, over the last century, become all too consummate professionals, initiates in a system committed to its own protection and perpetuation.” True. But given that they have tenure, control departments, and confer the PhDs necessary to become professors, it seems unlikely that major change will come from that quarter.

Europe, the United States, living standards, GDP, and the University of East Anglia (UEA)

I’ve only lived briefly in Europe—and even then it was England, where I found that many people considered the country not part of Europe—but like Megan McArdle and Matt Welch, I “found it noticeably poorer than the United States.”

The debate is mostly symbolic and a proxy for U.S. healthcare issues, about which I know too little to comment in public. Nonetheless, the living standards issue comes up because McArdle writes about “The Difference Between the US and Europe” in response to Paul Krugman’s comments on the same, where he says:

Actually, Europe’s economic success should be obvious even without statistics. For those Americans who have visited Paris: did it look poor and backward? What about Frankfurt or London? You should always bear in mind that when the question is which to believe — official economic statistics or your own lying eyes — the eyes have it.

McArdle and Welch think otherwise. My limited experience occurred at the University of East Anglia (UEA), which is in Norwich. The school was noticeably more run down than any university I’ve seen in the U.S. The dorm cots—they weren’t really beds—were tiny and hard; the desks made the ones at Clark University, where I was an undergrad, wonderful by comparison; and the campus had a general feeling of dilapidation that was enhanced by graffiti on walls.

That was just the physical plant. Classes were only taught for six hours a week. I have no idea what most students did the rest of the time. There were in effect no meal plans, so students were supposed to do their own cooking in dirty communal kitchens. To use the gym, one had to pay £6 to take a useless orientation class and then pay £1 or £2 to get in every time thereafter. It was so bad that a friend and I wrote a document called “About UEA” and e-mailed it to others at our home schools. Bathrooms were—charitably—vile.

But wait! Aren’t dorms terrible everywhere? Maybe so, but of the limited number of dorms I’ve spent time in, none have been nearly as bad as UEA’s, and that includes Clark, the University of Washington, Seattle University, Harvard, USC, and the University of Arizona. This isn’t a full sample, but the difference was obvious. So was the price of books, which helped explain why so many excellent used bookshops popped up but didn’t make the £10 trade paperbacks or the hardcovers verging on £20 any easier to take.

Perhaps because of exchange rate issues, the UK also felt very expensive. “Expensive” and “worse” is a bad and unusual combination.

The debate reminds me of the New York Times piece, “We’re Rich, You’re Not. End of Story,” which studies how rich Scandinavian countries feel relative to the U.S., Spain, and others:

After I moved here six years ago, I quickly noticed that Norwegians live more frugally than Americans do. They hang on to old appliances and furniture that we would throw out. And they drive around in wrecks. In 2003, when my partner and I took his teenage brother to New York – his first trip outside of Europe – he stared boggle-eyed at the cars in the Newark Airport parking lot, as mesmerized as Robin Williams in a New York grocery store in “Moscow on the Hudson.”

The plural of anecdote is not data, and I like what I’ve seen of European cities, especially because they feel more like cities and less like giant suburbs than places like Tucson, Arizona do. Europe is a lovely place in many respects and has decided, as a continent, on a different set of trade-offs than the United States. But the difference in living standards is noticeable, at least to me, and evidently to others, at a given income level; if you have enough money, almost anywhere can be nice.

EDIT: I uploaded “About UEA,” a document a friend and I wrote to warn our other friends about life at UEA. Commenters say the university has gotten better since, but I can’t tell if they’re astroturfers or the real thing.

EDIT 2: It appears that Britain has a well-known and measurable productivity gap, which is elaborated on and explained at the link. The post is interesting throughout and you should really read it, including this:

I’ll never forget the first time I visited the Netherlands in 1985. I was in Dordrecht and reading through the comments of a guest book for a modest hotel. The writer was British, and apparently was visiting the Continent for the first time. He/she expressed shock at seeing that virtually everywhere in the Netherlands was a nice place, compared to the home country, much of which was not so clean and not so nice. He/she lamented and apologized for this feature of Great Britain, and that is yet another way of expressing the productivity gap.

Trolls, comments, and Slashdot: Thoughts on the response to Avatar

The vast majority of the comments attached to “Thoughts on James Cameron’s Avatar and Neal Stephenson’s ‘Turn On, Tune In, Veg Out’” are terrible. They tend toward mindless invective and avoid careful scrutiny of what I actually wrote; they’re quite different from the comments this blog normally gets, which is largely because I submitted the Avatar post to Slashdot, home of the trolls. One friend noted the vitriol and in an e-mail said, “Okay, the Slashdot link explains the overall tone of the comments your “Avatar” post is attracting.”

Part of the reason the comments are so bad is the hit-and-run nature of commenting, especially on larger sites. If you have something substantial to say, and particularly if you regularly have something substantial to say, you tend to get a blog of your own. I wrote about this phenomenon in “Commenting on comments:”

In “Comment is King,” Virginia Heffernan writes in the New York Times, “What commenters don’t do is provide a sustained or inventive analysis of Applebaum’s work. In fact, critics hardly seem to connect one column to the next.” She notes that comments are often vitriolic and ignorant, which will hardly surprise those used to reading large, public forums.

Furthermore, it’s easier and demands less thought to post hit-and-run comments than it is to really engage an argument. I deleted the worst offenders and sent e-mails to their authors with a pointer to Paul Graham’s How To Disagree; none responded, except for one guy who didn’t understand the point I was trying to make even after three e-mails, when I gave up (“never argue with fools because from a distance people can’t tell who is who”). The hope is that by consciously cultivating better comments and by not responding to random insults, the whole discussion might improve.

(Paul Graham has given the subject a lot of thought too: he even wrote an essay about trolls. As he says, “The core users of News.YC are mostly refugees from other sites that were overrun by trolls.”)

Not every comment I got was terrible—this one, from a person named “Dutch Uncle,” was probably the best argued of the lot, and it mostly avoided ad hominem attacks. It, however, was very much the exception.

Most comments tended to deal in generalities and not to cite specific parts of my argument. In this respect, they have the same problems I see in freshman papers, which often want to make generalizations and abstractions without the necessary concrete base. This happens so often that I’ve actually begun keeping a list of all the things freshmen have told me are “human nature,” with a special eye toward placing contradictory elements next to each other, and in class I now ceaselessly emphasize specifics in arguments.

Since I’ve seen this disease before, I’ve already thought about it, and I think the generalization problem is linked to the problem of close reading, which is a really hard skill to develop and one I didn’t develop in earnest till I was around 22 or 23. Even then it was only with a tremendous amount of effort and practice on my part. Close reading demands that you consider every aspect of a writer’s argument, that you pay attention to their word choices and their sentences, and that you don’t attribute to them opinions they don’t necessarily hold. Francine Prose wrote a whole book on the subject called Reading Like a Writer, but the book is a paradox: in order to develop the close reading skills she demonstrates, you have to be able to closely read her book in the first place, which is hard without good teaching.

Mentioning Francine Prose brings up one other common point I saw in the comments: few pointed to sources or ideas outside the commenters themselves, and allusions were rare. In the best writing I see, such elements are common. That isn’t to say every time you post a comment you should cite four peer-reviewed sources and a couple of blog posts, but ideas are often stronger when they show evidence of learning and synthesis from others. In my Avatar post, I brought together Greg Egan, a New Yorker article, Alain de Botton citing Wilhelm Worringer, Robert Putnam’s Bowling Alone, the Neal Stephenson essay, and Star Trek. Now, my argument about Avatar could still be totally wrong, like an essay with a hundred citations, but at the very least, drawing on other writers’ thoughts usually shows that more thought has gone into an essay, or a comment. Almost every newspaper or magazine piece worth reading cites at least half a dozen and often many more sources: quotes, other articles, journals, books, and more. That’s part of what makes The Atlantic and The New Yorker so worth reading.

Citations are common because the things really worth arguing about require incredible background knowledge before one can say anything intelligent. The big response I’ve had to many of the comments, especially the deleted ones, is a suggestion to read more: read How Fiction Works, The Art of Criticism, and Reading Like a Writer, then post your angry Internet screeds after you’ve thought more about what you’re arguing. These kinds of pleas probably fall on proverbially deaf ears, but at least now I have somewhere to point bad commenters in the future.

I think one reason I find Slashdot conversations much less interesting than I did as a teenager isn’t that the nature of the site has changed, but that I’ve learned enough to realize how hard it is to really know something. Now I’m more often engaged by pure information and less often by invective and pure opinion, especially when that opinion isn’t backed up by much. The information/opinion binary is of course false, especially because the kind of information one presents often leaves pointers to one’s opinion, but it’s nonetheless useful to consider when you’re posting on Internet forums—or writing anywhere.

Incidentally, one reason I like reading Hacker News so much is that the site consciously tries to cultivate smarter, deeper conversation, much as I wish to; it’s trying to meld technical and cultural forces into a system that rewards and encourages high-content comments of the sort I mostly didn’t get regarding Avatar. I submitted the Avatar post to Hacker News before Slashdot, and the first relatively good comment came from a Hacker News reader.

The problem of trolls is also very old, and probably goes back to the Internet’s beginnings—hence the need for a word like “troll,” with a definition in the Jargon File. As a result, I’m probably not going to change much by writing this, and to judge from my e-mail correspondent, trying to do so via e-mails and blog posts is mostly hopeless. But part of me is an optimist who thinks, or hopes, that change is possible and that by having a meta-conversation about the nature of trolling, one can discourage the behavior, at least on a small scale. At Slashdot or Reddit scale, however, the hope fades, and one simply experiences the tragedy of the commons.

EDIT: Robin Hanson has an interesting alternative, though not incompatible, theory in Why Comments Snark:

Comments disagree more than responding posts because post, but not comment, authors must attract readers. Post authors expect that reader experiences of a post will influence whether those readers come back for future posts. In contrast, comment authors less expect reader experience to influence future comment readership; folks read blog posts more because of the post author than who they expect to author comments there.

Thoughts on James Cameron’s Avatar and Neal Stephenson’s “Turn On, Tune In, Veg Out”

Despite reading Greg Egan’s brilliant review of Avatar, I saw the movie. The strangest thing about Avatar is its anti-corporate, anti-technological argument. Let me elaborate: there are wonderful anti-corporate, anti-technological arguments to be made, but it seems contrived for them to come from a movie that is, for the time being, apparently the most expensive ever made; virtually all mainstream movies are now approved solely on the basis of their profit-generating potential. So a vaguely anti-corporate movie is being made by… a profit-driven corporation.

The movie is among the most technically sophisticated ever made: it uses a crazy 2D and 3D camera, harnesses the most advanced computer animation techniques imaginable, and has advanced the cinematic state of the art. But Avatar’s story is anti-technological: humans have destroyed their home world through environmental disaster and now use military might to annihilate the locals and steal their resources. Presumably, if Avatar’s creators genuinely believed that technology is bad, the movie itself would never have been made, leading to a paradox not dissimilar to those found in time travel movies.

Avatar also has a bunch of vaguely mythical elements, including some scenes that look like the world’s biggest yoga class. The Na’vi, an oppressed people modeled on American Indians, or at least on American Indians as portrayed in 20th-century American movies, fight an interstellar military using bows, arrows, horses, and flying lizards. They live in harmony with their world to an extent that most Westerners can probably barely conceive of, given that more people probably visit McDonald’s than national parks in a given year.

So why are we fascinated with the idea of returning to nature, as though we’re going to dance with wolves, when few of us actually do so? Alain de Botton’s The Architecture of Happiness may offer a clue: he cites Wilhelm Worringer’s essay “Abstraction and Empathy,” which posits that art emphasizes, in de Botton’s words, “[…] those values which the society in question was lacking, for it would love in art whatever it did not possess in sufficient supply within itself.” We live (presumably) happy lives coddled in buildings that have passed inspection, with takeout Chinese readily available, and therefore we fantasize about being mauled by wild beasts and being taken off the omnipresent grid, with its iPhones and wireless Internet access. We live in suburban anomie and therefore fantasize about group yoga. We make incredibly sophisticated movies about the pleasures of a world with no movies at all, where people still go through puberty rituals that don’t involve Bar Mitzvahs, and mate for life, like Mormons.

Neal Stephenson wrote a perceptive essay called “Turn On, Tune In, Veg Out,” which examines the underlying cultural values in the older and newer Star Wars films. I would’ve linked to it earlier, but frankly I can’t imagine anyone returning here afterwards. Therefore I’ll quote an important passage from Stephenson:

Anakin wins that race by repairing his crippled racer in an ecstasy of switch-flipping that looks about as intuitive as starting up a nuclear submarine. Clearly the boy is destined to be adopted into the Jedi order, where he will develop his geek talents – not by studying calculus but by meditating a lot and learning to trust his feelings. I lap this stuff up along with millions, maybe billions, of others. Why? Because every single one of us is as dependent on science and technology – and, by extension, on the geeks who make it work – as a patient in intensive care. Yet we much prefer to think otherwise.

Scientists and technologists have the same uneasy status in our society as the Jedi in the Galactic Republic. They are scorned by the cultural left and the cultural right, and young people avoid science and math classes in hordes. The tedious particulars of keeping ourselves alive, comfortable and free are being taken offline to countries where people are happy to sweat the details, as long as we have some foreign exchange left to send their way. Nothing is more seductive than to think that we, like the Jedi, could be masters of the most advanced technologies while living simple lives: to have a geek standard of living and spend our copious leisure time vegging out.

The tedious particulars of modern technological life are both embraced and avoided in Avatar too. The villain is not political chaos, organized oppression, ignorance, entropy, or weak and ineffective institutions, to name a few of the real but abstract contemporary bad guys; instead, it’s a mercenary army commander who might be at home at Xe Services / Blackwater USA. The military villainy and disdain for superior firepower in Avatar are especially odd, given that the United States has held the technological advantage in major wars for at least a century; the people watching Avatar are probably also the ones who support our troops. The studio that made Avatar probably cares more about quarterly statements than about the environment. The movie’s villains, however, apparently aren’t being restrained by an intergalactic EPA.

Avatar is really a Western about the perils of modernity, but it gets contemporary politics utterly wrong—or perhaps it would be more accurate to say that contemporary politics are utterly absent. There is no intergalactic criminal court or committee for the protection of indigenous peoples, which seems like a probable development for a race nursed on Star Trek and post-colonialism and advanced enough to travel the stars. In the contemporary United States, a bewildering array of regulations governs activities that might have an environmental impact on communities; the National Environmental Policy Act (NEPA), for example, requires federal agencies to monitor and report on their activities. Such regulations are growing, rather than shrinking; they’re a staple bogeyman of right-wing radio.

But in Avatar, decisions aren’t made at the future equivalent of the Copenhagen summit. Instead, they’re fought out in battles reminiscent of World War I, or the Civil War, leavened with some personal combat. The battles are jarring but anachronistic: maybe Iraq War II: The Sequel would’ve turned out better if George Bush and Saddam Hussein had dueled with swords, but that’s not how wars are fought anymore. And when one side has machine guns and the other side doesn’t, you get something as nasty as World War I, where all the élan, spirit, and meditation in the world didn’t stop millions of people from dying.

My implicit argument isn’t perfect: Avatar does criticize our reliance on oil through the parable of the cleverly named “unobtainium,” but the thrust of the movie is unambiguous. We want to fantasize that solutions are as simple as putting a hole in the right guy, which will make everything right again. That’s probably a comforting notion, and an easy one to fit into a two- to three-hour movie with a three-part arc, but it’s also a wrong one, one that ignores or abstracts away the world’s complexity. The people who tend to rule the world are the ones who pay attention to how the world really is, rather than how it was or how they would like it to be. The real question is whether we are still people who see how the world is.