Computers and network effects: Why your computer is “slow”

Stanislav’s “Going Nowhere Really Fast, or How Computers Only Come in Two Speeds” is half-right. Here’s the part that’s right:

[…] it remains obvious that computers come in just two speeds: slow and fast. A slow computer is one which cannot keep up with the operator’s actions in real time, and forces the hapless human to wait. A fast computer is one which can, and does not.

Today’s personal computers (with a few possible exceptions) are only available in the “slow” speed grade.

So far so good: I wish I didn’t have to wait as long as I do for Word to open or load documents, or for OS X to become responsive after a reboot. But then there’s the reason offered for why computers feel subjectively slower in many respects than they once did:

The GUI of my 4MHz Symbolics 3620 lisp machine is more responsive on average than that of my 3GHz office PC. The former boots (into a graphical everything-visible-and-modifiable programming environment, the most expressive ever created) faster than the latter boots into its syrupy imponade hell.

This implies that the world is filled with “bloat.” But such an argument reminds me of Joel Spolsky’s “Bloatware and the 80/20 Myth.” He says:

A lot of software developers are seduced by the old “80/20” rule. It seems to make a lot of sense: 80% of the people use 20% of the features. So you convince yourself that you only need to implement 20% of the features, and you can still sell 80% as many copies.

Unfortunately, it’s never the same 20%. Everybody uses a different set of features.

Exactly. And he goes on to quote Jamie Zawinski saying, “Convenient though it would be if it were true, Mozilla [Netscape 1.0] is not big because it’s full of useless crap. Mozilla is big because your needs are big. Your needs are big because the Internet is big. There are lots of small, lean web browsers out there that, incidentally, do almost nothing useful.”

That’s correct; Stanislav’s 4MHz Symbolics 3620 Lisp machine was, and no doubt still is, a nice computer. But ultra-responsive modern computers don’t exist, not because people like bloat, but because people in the aggregate choose trade-offs that favor a very wide diversity of uses. Not enough people want to make the trade-offs that instant responsiveness implies for there to be a market for such a computer.
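To make Spolsky’s point concrete, here is a minimal back-of-the-envelope simulation. It’s my own illustration, not anything from Spolsky or Stanislav, and the feature and user counts are invented: if each user needs a different random 20% of the features, a product that ships any fixed 20% of them fully satisfies almost no one.

```python
# Rough sketch of the "it's never the same 20%" problem (numbers are made up).
import random

N_FEATURES = 100          # features the "full" product offers
FEATURES_PER_USER = 20    # each user needs their own 20% of them
N_USERS = 10_000

# The vendor ships one fixed 20% of the features.
shipped = set(random.sample(range(N_FEATURES), N_FEATURES // 5))

fully_served = 0
for _ in range(N_USERS):
    needs = set(random.sample(range(N_FEATURES), FEATURES_PER_USER))
    if needs <= shipped:  # every feature this user needs happens to be shipped
        fully_served += 1

print(f"Users whose needs are fully covered: {fully_served / N_USERS:.4%}")
# With these numbers the fraction is effectively zero: trimming "unused"
# features always trims somebody's essential feature.
```

The exact numbers don’t matter; the point is that any fixed subset of features leaves nearly every user missing something they consider essential, which is why the lean, fast alternatives stay niche.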

Nothing is stopping someone from making a stripped-down version of, say, Linux that will boot “into a graphical everything-visible-and-modifiable programming environment, the most expressive ever created” faster than a modern PC boots into its “syrupy imponade hell.” But most people evidently prefer the features that modern OSes and programs offer. Or, rather, they prefer that modern OSes support THEIR pet feature and make everything as easy to accomplish as possible at the expense of speed. If you take out their favorite feature… well, then you can keep your superfast response time and they’ll stick with Windows.

To his credit, Stanislav responded to a version of what I wrote above, noting some of the possible technical deficiencies of Linux:

If you think that a static-language-kernel abomination like Linux (or any other UNIX clone) could be turned into a civilized programming environment, you are gravely mistaken.

That may be true: my programming skill and knowledge end around simple scripting and CS 102. But whatever the weaknesses of Linux, OS X, and Windows, taken together they represent uncounted hours of programming and debugging effort. For those of you who haven’t tried it, I can only say that programming is an enormous challenge. Trying to replicate all that modern OSes offer would be hard, and probably effectively impossible. If Stanislav wants to do it, though, I’d be his first cheerleader. But the history of computing is rife with massive rewrites of existing software and paradigms that failed; GNU/Hurd is a classic example. It has been in development since 1990. Did it fail for technical or social reasons? I have no idea, but the history of new operating systems, however technically advanced, is not a happy one.

Stanislav goes on to say:

And if only the bloat and waste consisted of actual features that someone truly wants to use.

The problem, as Joel Spolsky points out, is that one man’s feature is another man’s bloat, and vice versa. That’s why the computing experience looks the way it does today: people hate bloat, unless it’s their bloat, in which case they’ll tolerate it.

He links to a cool post on regulated utilities as seen in New York (go read it). But I don’t think the power grid metaphor is a good one, because transmission lines do one thing: move electricity. Computers can be programmed to do effectively anything, and because users’ needs vary so much, so does the software they run. You don’t have to build everything from APIs to photo-manipulation utilities to web browsers on top of power lines.

Note the last line of “Symbolics, Inc.: A failure of heterogeneous engineering,” which is linked from Stanislav’s “About” page:

Symbolics is a classic example of a company failing at heterogeneous engineering. Focusing exclusively on the technical aspects of engineering led to great technical innovation. However, Symbolics did not successfully engineer its environment, custormers [sic], competitors and the market. This made the company unable to achieve long term success.

That kind of exclusively technical focus sounds, to me, like exactly the thinking that leads one to lament how “slow” modern computers are. They are, from one perspective. From another, they enable things that the Lisp machine didn’t have (like, say, YouTube).

However, I’m a random armchair quarterback, and code talks while BS walks. If you think you can produce an OS that people want to use, write it. But when it doesn’t support X, where “X” is whatever they want, don’t be surprised when those people don’t use it. Metcalfe’s Law (the value of a network grows roughly with the square of its users) is strong in computing, and a massive amount of computing history is devoted to the rewrite syndrome; for another example, see Dreaming in Code, a book that describes how an ostensibly simple task became an engineering monster.

Columbia or prison: similarities and differences?

Terry Gross’ interview with Scott Spencer (of A Man in the Woods) notes that the author has “taught fiction writing at Columbia University, and in prison” (1:10; I think she says “in prison,” although it might be “at prisons”). The tone implies that this sort of trajectory is completely normal, like a sandwich and soup. To me, it invites questions:

  • Can I be the only one who finds the juxtaposition of those two fine American institutions curious or notable?
  • How many writers or professors have taught at an Ivy League school and a penal facility?
  • Is teaching at the one pretty much like teaching at the other?
  • If you’ve currently got a gig at a prison, how do you make the transition to Columbia? I assume relatively few people want to make the opposite leap.