How could Twitter not change how protests happen?: Egypt and the history of the novel

There’s been a lot of talk about the role Twitter, text messaging, and other communication mediums are playing in the unfolding drama in Egypt. Malcolm Gladwell basically says the role isn’t great: “People with a grievance will always find ways to communicate with each other. How they choose to do it is less interesting, in the end, than why they were driven to do it in the first place.”

But I am not convinced this is true: by lowering the friction of communication, thus making it real-time and instantaneous, Twitter and other technological tools are almost certainly changing what is said. Quantity has a quality all its own, and how we speak has a habit of changing what we say.

Gladwell’s post (and others like it) reminds me of the arguments within the field of English literature about the development of the novel as a genre (see, for example, this post on Steven Moore’s The Novel: An Alternative History). Basically, a lot of people want to argue about the development of the novel without taking into account the printing press.

To me this is silly because mass cheap printing was a precondition for the novel as we know it. Without it, we would have fictional prose narratives of some length, but we probably wouldn’t have them alluding to one another, we wouldn’t have large portions of the population reading them, and we wouldn’t have (relatively) large portions of the population with enough disposable income to afford them. If you look at surviving works that we would now classify as fiction that were written prior to ~1600, almost all of them are religious in nature because only the church had the resources to fund writing, maintain large collections of writing, and bother writing anything down at all.

After ~1600 (or ~1500, if you prefer, but that’s about it), you have a lot of things written that would previously not have been considered “worth” writing down because writing and copying manuscripts was so expensive and time-consuming. Technology did change what was said. How something was said changed what was said. Technology is doing the same thing now. I don’t know how the current drama will play out; if you looked at the printing press around the time it was first created, it was mostly used to print religious stuff (hence the “Gutenberg Bible”). Elizabeth Eisenstein’s The Printing Press as an Agent of Change describes some of this. By the nineteenth century, however, writers were grappling with the idea of a world without God, per J. Hillis Miller’s The Form of Victorian Fiction, or a world where “God is dead,” to use Nietzsche’s famous and misunderstood proclamation: he wasn’t saying that people would stop believing in God or that religion would stop being a force in society, but rather that religious studies were a dead end and people would cease to attribute everything in their lives to God or God’s will.

In 1500, the material published via printing press looked basically continuous with what had come before in 1400. By 1850, things looked pretty different, and the diversity of printed materials had fundamentally changed what people could say. The printing press allowed people with grievances, to use Gladwell’s formulation, to communicate with each other much more efficiently than they previously could, which led to a lot of political, social, scientific, and philosophical developments that most of us living today approve of. How many of us want to return to being illiterate serfs toiling in fields for distant masters?

Gladwell is right in one sense: the media is probably overstating the importance of Twitter and SMS. But both still play an important role in what’s going on. Somehow, people with grievances against monarchs and dictators weren’t all that successful on average in the years prior to ~1600. After that, they got more and more successful, to the point where a fair bit of the world’s population now lives without dictators. Part of the reason is that ideas about freedom and good governance could be disseminated cheaply, where before they couldn’t, and everyone spent most waking hours covered in shit, farming, and hoping not to starve to death in late winter / early spring.

Mark at the computing education blog says, “A particularly interesting anecdote for me is the below: That the Internet was turned off in Egypt, but the protests continued. So what role was Facebook and Twitter playing, really?” Depends on the timeframe. Various technological tools helped people initially organize and helped the conditions for organization come about. They will probably do so again in the future. In the long term, such tools will probably create the conditions for much larger projects that we only dimly perceive now. I would predict what those will be, but things have a habit of turning out much stranger than random prognosticators like me can predict.

Has science fiction "run out of steam?"

This post began life as a Slashdot comment in response to Has Sci-Fi Run Out of Steam?:

I doubt it, any more than science or technology has run out of steam due to a lack of imagination. Rather, I wonder if the science fiction publishing business has either run out of steam or become an active roadblock between writers and readers. It seems that most publishers are trying a play-it-safe approach that demands repetition over originality. This is based partially on what I see featured in bookstores and partially on my own experience, which I discuss extensively in Science fiction, literature, and the haters. It begins:

Why does so little science fiction rise to the standards of literary fiction?

This question arose from two overlapping events. The first came from reading Day of the Triffids (link goes to my post); although I don’t remember how I came to the book, someone must’ve recommended it on a blog or in a newspaper in terms compelling enough for me to buy it. Its weaknesses, as discussed in the post, raised questions about science fiction and its relation to the larger book world.

The second event arose from a science fiction novel I wrote called Pearle Transit that I’ve been submitting to agents. It’s based on Conrad’s Heart of Darkness—think, on a superficial level, “Heart of Darkness in space.” Two replies stand out: one came from an agent who said he found the idea intriguing but that science fiction novels must be at least 100,000 words long and have sequels already started. “Wow,” I thought. How many great literary novels have enough narrative force and character drive for sequels? The answer that came immediately to mind was “zero,” and after reflection and consultation with friends I still can’t find any. Most novels expend all their ideas at once, and to keep going would be like wearing a shirt that fades from too many washes. Even in science fiction, very few if any series maintain their momentum over time; think of how awful the Dune books rapidly became, or Arthur C. Clarke’s Rama series. A few novels can make it as multiple-part works, but most of those were conceived of and executed as a single work, like Dan Simmons’ Hyperion or Tolkien’s The Lord of the Rings (more on those later).

The minimum word count bothers me too. It’s not possible for Pearle Transit to be stretched beyond its present size without destroying what makes it coherent and, I hope, good. By its nature it is supposed to be taut, and much as a 120-pound person cannot be safely made into a 240-pound person, Pearle Transit can’t be engorged without making it like the bloated star that sets its opening scene. If the market reality is that such books can’t or won’t sell, I begin to tie the quality of the science fiction I’ve read together with the system that produces it.

If the publishing system itself is broken and nothing has yet grown up to take its place (I have no interest in trawling through thousands of terrible novels uploaded to websites in search of a single potential gem, for those of you Internet utopians out there), maybe the source of the genre’s troubles isn’t where PC Pro places it.

In addition, although science fiction publishing might appear sclerotic at times, science fiction movies and TV shows continue unabated, and many of them draw their material from books. One commenter realized this: “The huge change in SF since I first started reading it in the 70’s is that these days, movie/TV SF is a gigantic, popular commercial enterprise, utterly dwarfing written SF.”

Still, I’ve found fun and fascinating SF writers thanks to the Internet: Jack Vance started as a recommendation and an article in the NYT magazine; Charlie Stross writes a blog; and others have sent good advice on where to look. But I think a lot of SF has turned towards the cerebral, towards alternate / fake worlds, and towards dealing with massive institutions on earth. These are all broad claims—too broad for a blog post—that I might follow up in a future essay, but they’ve been churning in my mind enough for me to look for them in fiction—where they seem to be almost everywhere.

One other funny item: PC Pro uses the antiquated cliche “run out of steam,” which refers to steam engines that probably haven’t been widely used since the 19th century, to refer to a genre concerned with how the present represents the future. Maybe this indicates language itself can run far behind whatever the perceived times are.

Charlie Stross on the Real Reason Steve Jobs hates Flash (and how lives change)

Charlie Stross has a typically fascinating post about the real reason Steve Jobs hates Flash. The title is deceptive: the post is really about the future of the computing industry, which is to say, the future of our day-to-day lives.

If you read tech blogs, you’ve read a million people in the echo chamber repeating the same things to one another over and over again. Some of that stuff is probably right, but even if Stross is wrong, he’s at least pulling his head more than six inches off the ground, looking around, and saying “what are we going to do when we hit those mountains up ahead?”

And I don’t even own an iPad, or have much desire to be in the cloud for the sake of being in the cloud. But the argument about the importance of always-on networking is a strong one, even if, to me, it also points to the greater importance of being able to disconnect from distraction.

In the meantime, however, I’m going back to the story that I’m working on. Stories have the advantage that they’ll probably always be popular, even if the medium through which one experiences them changes. Consequently, I’m turning Mac Freedom on and Internet access off.

Buying a Kindle: Why Didn't I Think of This Last Semester?

Despite my extensive carping about the Digital Restrictions Management on the Amazon Kindle, I ordered one earlier today and now wish I’d been smart enough to do so last semester.

Why? I’m a graduate student in English Lit, and I looked at my reading requirements for this semester and found that the vast majority of the assigned books are out of copyright (meaning they were published before 1923), so I can download them free; most are also famous enough to be easily accessible online. In other words, buying all my books for the semester would cost about $200. A Kindle costs $259, plus another $30 for a case. Subtract the $200 in book purchases the free downloads replace, and the Kindle itself effectively costs $59. If I’d realized this last semester, it already would’ve paid for itself. In addition, I won’t have to lug around nearly as many printed .pdfs as I do now.

Given that English curricula appear to focus on pre-1923 texts, I’d be surprised if more English majors and grad students don’t take this path. At the moment, it’s possible to read class books either on a computer screen or to print them out, but neither solution works all that well. I suspect this one will, though, as always, we shall see.

Thoughts on James Cameron's Avatar and Neal Stephenson's "Turn On, Tune In, Veg Out"

Despite reading Greg Egan’s brilliant review of Avatar, I saw the movie. The strangest thing about Avatar is its anti-corporate, anti-technological argument. Let me elaborate: there are wonderful anti-corporate, anti-technological arguments to be made, but it seems contrived for them to be made in a movie that is, for the time being, apparently the most expensive ever made; virtually all mainstream movies are now greenlit solely for their profit-generating potential. So a vaguely anti-corporate movie is being made by… a profit-driven corporation.

The movie is among the most technically sophisticated ever made: it uses a crazy 2D and 3D camera, harnesses the most advanced computer animation techniques imaginable, and has advanced the cinematic state-of-the-art. But Avatar’s story is anti-technological: humans destroyed their home world through environmental disaster and use military might to annihilate the locals and steal their resources. Presumably, if Avatar’s creators genuinely believed that technology is bad, the movie itself would never have been made, leading to a paradox not dissimilar to those found in time-travel movies.

Avatar also has a bunch of vaguely mythical elements, including some scenes that look like the world’s biggest yoga class. The Na’vi, an oppressed people modeled on American Indians, or at least American Indians as portrayed in 20th-century American movies, fight against an interstellar military using bows, arrows, horses, and flying lizards. They live in harmony with the world to an extent that most Westerners can probably barely conceive of, given that more people probably visit McDonald’s than national parks in a given year.

So why are we fascinated with the idea of returning to nature, as though we’re going to dance with wolves, when few of us actually do so? Alain de Botton’s The Architecture of Happiness may offer a clue: he cites Wilhelm Worringer’s essay, “Abstraction and Empathy,” which posits that art emphasizes, in de Botton’s words, “[…] those values which the society in question was lacking, for it would love in art whatever it did not possess in sufficient supply within itself.” We live (presumably) happy lives coddled in buildings that have passed inspection, with takeout Chinese readily available, and therefore we fantasize about being mauled by wild beasts and being taken off the omnipresent grid, with its iPhones and wireless Internet access. We live in suburban anomie and therefore fantasize about group yoga. We make incredibly sophisticated movies about the pleasures of a world with no movies at all, where people still go through puberty rituals that don’t involve Bar Mitzvahs, and mate for life, like Mormons.

Neal Stephenson wrote a perceptive essay called “Turn On, Tune In, Veg Out,” which examines the underlying cultural values in the older and newer Star Wars films. I would’ve linked to it earlier but frankly can’t imagine anyone returning here afterwards. Therefore I’ll quote an important piece of Stephenson:

Anakin wins that race by repairing his crippled racer in an ecstasy of switch-flipping that looks about as intuitive as starting up a nuclear submarine. Clearly the boy is destined to be adopted into the Jedi order, where he will develop his geek talents – not by studying calculus but by meditating a lot and learning to trust his feelings. I lap this stuff up along with millions, maybe billions, of others. Why? Because every single one of us is as dependent on science and technology – and, by extension, on the geeks who make it work – as a patient in intensive care. Yet we much prefer to think otherwise.

Scientists and technologists have the same uneasy status in our society as the Jedi in the Galactic Republic. They are scorned by the cultural left and the cultural right, and young people avoid science and math classes in hordes. The tedious particulars of keeping ourselves alive, comfortable and free are being taken offline to countries where people are happy to sweat the details, as long as we have some foreign exchange left to send their way. Nothing is more seductive than to think that we, like the Jedi, could be masters of the most advanced technologies while living simple lives: to have a geek standard of living and spend our copious leisure time vegging out.

The tedious particulars of modern technological life are both embraced and avoided in Avatar too. The villain is not political chaos, organized oppression, ignorance, entropy, or weak and ineffective institutions, to name a few of the real but abstract contemporary bad guys, but an army / mercenary commander who might be at home at Xe Services / Blackwater USA. The military villainy and the disdain for superior firepower in Avatar are especially odd, given that the United States has held the technological advantage in major wars for at least a century; the people watching Avatar are probably also the ones who support our troops. The studio that made Avatar probably cares more about quarterly statements than about the environment. The movie’s villains, however, apparently aren’t restrained by an intergalactic EPA.

Avatar is really a Western about the perils of modernity, but it gets contemporary politics utterly wrong—or perhaps it would be more accurate to say that contemporary politics are utterly absent. There is no intergalactic criminal court or committee for the protection of indigenous peoples, which seems like a probable development for a race nursed on Star Trek and post-colonialism and advanced enough to travel the stars. In the contemporary United States, a bewildering array of regulations governs activities that might have an environmental impact on communities; the National Environmental Policy Act (NEPA), for example, requires federal agencies to monitor and report on their activities. Such regulations are growing, rather than shrinking. They’re a staple bogeyman of right-wing radio.

But in Avatar, decisions aren’t made at the future equivalent of the Copenhagen summit. Instead, they’re fought out in battles reminiscent of World War I, or the Civil War, leavened with some personal combat. The battles are jarring and anachronistic: maybe Iraq War II: The Sequel would’ve turned out better if George Bush and Saddam Hussein had dueled with swords, but that’s not how wars are fought any more. And when one side has machine guns and the other side doesn’t, you get something as nasty as World War I, where all the élan, spirit, and meditation in the world didn’t stop millions of people from dying.

My implicit argument isn’t perfect: Avatar does criticize our reliance on oil through the parable of the cleverly named “unobtainium,” but the thrust of the movie is unambiguous. We want to fantasize that solutions are as simple as putting a hole in the right guy, which will make things right again. That’s probably a comforting notion, and an easy one to fit into a two- to three-hour movie with a three-part arc, but it’s also a wrong one, and one that ignores or abstracts the world’s complexity. The people who tend to rule the world are the ones who pay attention to how the world really is, rather than how it was, or how they would like it to be. The real question is whether we are still people who see how the world is.

Nuts: The Barnes and Noble Nook isn't very good

The Barnes and Noble Nook isn’t ready for prime time, according to David Pogue of the New York Times. Walter Mossberg of the WSJ agrees. Too bad: I was thinking about buying one, mostly for the .pdf capabilities, but I think I’ll wait—maybe for Kindle 3. I don’t think the Kindle’s current and potential dominance of the eBook market is good for books or consumers, and part of the reason that the Nook attracted me is precisely because it represents a real competitor to the Kindle. But these reviews indicate that the Nook was either rushed to market or poorly tested.

The .pdf issue is important to me because I’m a grad student in English and have to read a steadily larger number of articles and book chapters. Most get printed, but I no longer have the physical capacity to store, organize, and carry all of them, which makes something like the Kindle or Nook appealing, despite my reservations concerning the Digital Restrictions Management (DRM). By the way, you might want to check out the comments section on my post “New Kindle, same problems,” as Jason Fisher and Maggie Brookes have been talking books, ebooks, and culture in that space.

Mellel 2.7 released

I should’ve noted this earlier but forgot: Redlers released Mellel 2.7 in mid-September. The biggest new feature from my perspective is Snow Leopard compatibility.

Quick background: Mellel is a word processor for OS X, and Redlers has listed the top ten reasons to switch (presumably from Word) at the link. Many academics use Mellel for its stability, language support, and excellent formatting, and I sometimes use it for very long documents because it doesn’t crash easily. The major problem: no track changes/editing support. But recent forum activity indicates that track changes might be in development; you can see my own comments in the thread.

Product review: Matias Tactile Pro 2

I recently tried a product as disappointing as The Children of Húrin: the Matias Tactile Pro 2 keyboard, which combines a fat price ($150) with poor build quality (loose keys, a malformed edge, and a continuing shadow-key problem). Together, these flaws make a keyboard worse than the one it supersedes—in the words of one reviewer, “[…] It’s 4 steps backwards, one step sideways, and 0 steps forward.”

I type a lot, as implied here, and so spend a greater-than-average amount of time thinking about my keyboard. When I heard about the Tactile Pro 2, I sent an e-mail to Derek Trideja, who gave me the title “Alert keyboard fetishist.” An exaggeration, but not far from the truth, and I’ve yet to find that perfect keyboard. Frequent readers will remember when I posted a picture of my writing space—since changed—and the Matias Tactile Pro Keyboard version 1 that peeks out. It’s as close as I’ve come to the perfect keyboard, and if not for the shadow-key problem it would be. Seventy-nine dollars seemed like a lot for a keyboard until I began using it regularly and found it much better than the mushy keyboards that most computers come with, or the new and hideous keyboard that came with my iMac.

Programmers sometimes raved about old-school IBM Model M keyboards, but the regular ones were discontinued in 1996 and don’t have an easy place for command, option, and control keys, making them poorly suited for OS X. The Tactile Pro 1 filled that gap because it had a Mac layout and the comfort I want. Shadow keys, however, develop when the writer hits a number of keys in succession—apparently the keyboard has multiple keys on the same electrical path in some instances, which can cause characters to appear even when the user doesn’t press them. Problems occur when you type anything ending in “ion”, like “division,” which appears as “divisioqn” if you strike the keys in rapid succession. Not fun, but still better than the mushy keyboards.
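The “multiple keys on the same path” behavior is the classic symptom of key-matrix ghosting: keyboards wire their switches into a row/column grid, and without a diode behind every key, current can flow backwards through pressed switches, so pressing three keys that form three corners of a rectangle in the grid makes the fourth corner read as pressed too. Matias hasn’t published the Tactile Pro’s wiring, so the following is a purely illustrative sketch of the general phenomenon, not the actual circuit:

```python
# Illustrative model of key-matrix ghosting (NOT the Tactile Pro's
# actual wiring). Keys live at (row, column) positions in a scan
# matrix; in a diode-less matrix, pressing three corners of a
# rectangle makes the fourth corner scan as pressed.

def scan_with_ghosts(pressed):
    """Return the set of (row, col) keys a diode-less matrix would
    report as down, given the keys actually pressed."""
    reported = set(pressed)
    changed = True
    while changed:  # iterate until no new phantoms appear
        changed = False
        rows = {r for r, _ in reported}
        cols = {c for _, c in reported}
        for r in rows:
            for c in cols:
                if (r, c) in reported:
                    continue
                # (r, c) ghosts when the other three corners of a
                # rectangle are already down: (r, c2), (r2, c), (r2, c2).
                if any((r, c2) in reported
                       and (r2, c) in reported
                       and (r2, c2) in reported
                       for r2 in rows for c2 in cols):
                    reported.add((r, c))
                    changed = True
    return reported

# Two keys on a diagonal are safe:
print(sorted(scan_with_ghosts({(0, 0), (1, 1)})))
# -> [(0, 0), (1, 1)]

# Three corners of a rectangle produce a phantom fourth key:
print(sorted(scan_with_ghosts({(0, 0), (0, 1), (1, 0)})))
# -> [(0, 0), (0, 1), (1, 0), (1, 1)]
```

Real keyboards avoid this either by putting a diode in series with every switch (true n-key rollover) or by refusing to report ambiguous combinations; a matrix that does neither, and that happens to place common letter clusters on intersecting paths, could plausibly produce exactly the “divisioqn” misfire described above.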

Version 2 still has those problems, although they’re not as pronounced. In an e-mail to me, someone from Matias said that the shadow-key problem had been reduced in version 2. The person was right, but it hasn’t been reduced enough. In addition, the USB situation irritated me—the old version has one cable and two USB ports, one on each side of the keyboard. The new one has a single USB port on the side of the keyboard and a cable that ends in two USB plugs, as depicted here:

Matias Tactile Pro USB Plugs

(Notice the background: an Oxford edition of Conrad’s Heart of Darkness and Other Tales.)

This causes me to run out of USB ports on the back of my computer and to have to continually unplug things if I want to download pictures or transfer files to a USB drive. Their marketing materials don’t mention that they’ve lost one USB port on the keyboard. In addition, the one I received has keys much looser than my previous Tactile Pro—it feels flimsier and doesn’t have the same satisfying action with each keystroke. The front edge was also malformed, as this picture shows, though not perfectly:

The deformed edge of the keyboard

I was tempted to return mine and ask for a replacement unit, but after reading this thread on Ars Technica and the previously mentioned Bronzefinger review I decided not to bother. I’d rather just have the money back, and one thing Matias does offer is a 30-day money-back guarantee. I’m sure that the writer of Bronzefinger and I are not the only ones to have made use of this policy. The keyboard feels more like something hacked together by electrical engineering students over a weekend, or a science-fair project, than a polished commercial product.

What went wrong? I have no idea. I’ve heard engineering friends say that late projects seldom bode well for the finished product, which is more likely to turn out poorly because the delay reflects underlying problems; I’ve read similar things on Slashdot, for whatever their opinion is worth. The Matias Tactile Pro 2 was supposed to ship in March, but the initial batch didn’t arrive until, as far as I could tell, June, and the one I bought came from the second run that shipped in September. If Matias hasn’t worked the kinks out yet, I’m not sure they will in this iteration. In the meantime, those interested in a better keyboard might want to try to snag a used Tactile Pro 1 or a reborn Model M. The Tactile Pro 2 does have a few stronger points, like an optimizer feature that allows one to change the keyboard layout, but its benefits are minor compared to the keyboard’s drawbacks.


In other technology news, Apple just announced the latest versioqn—excuse me, version—of OS X, Leopard. I’ve also started using iWork, and especially Pages, for some of my writing. Pages simply looks nicer than Word, even if Pages is still missing many features.


EDIT: I posted a review of the Customizer, which is the new version of the Model M mentioned above.