Computers and network effects: Why your computer is “slow”

“Going Nowhere Really Fast, or How Computers Only Come in Two Speeds” is half-right. Here’s the part that’s right:

[…] it remains obvious that computers come in just two speeds: slow and fast. A slow computer is one which cannot keep up with the operator’s actions in real time, and forces the hapless human to wait. A fast computer is one which can, and does not.

Today’s personal computers (with a few possible exceptions) are only available in the “slow” speed grade.

So far so good: I wish I didn’t have to wait as long as I do for Word to load or open documents, or for OS X to become responsive after a reboot. But then there’s the reason offered for why computers feel subjectively slower in many respects than they used to:

The GUI of my 4MHz Symbolics 3620 lisp machine is more responsive on average than that of my 3GHz office PC. The former boots (into a graphical everything-visible-and-modifiable programming environment, the most expressive ever created) faster than the latter boots into its syrupy imponade hell.

This implies that the world is filled with “bloat.” But such an argument reminds me of Joel Spolsky’s Bloatware and the 80/20 myth. He says:

A lot of software developers are seduced by the old “80/20” rule. It seems to make a lot of sense: 80% of the people use 20% of the features. So you convince yourself that you only need to implement 20% of the features, and you can still sell 80% as many copies.

Unfortunately, it’s never the same 20%. Everybody uses a different set of features.

Exactly. And he goes on to quote Jamie Zawinski saying, “Convenient though it would be if it were true, Mozilla [Netscape 1.0] is not big because it’s full of useless crap. Mozilla is big because your needs are big. Your needs are big because the Internet is big. There are lots of small, lean web browsers out there that, incidentally, do almost nothing useful.”

That’s correct; Stanislav’s 4MHz Symbolics 3620 Lisp machine was, and no doubt still is, a nice computer. But modern, ultra-responsive computers don’t exist, and not because people like bloat: they don’t exist because people in the aggregate choose trade-offs that favor a very wide diversity of uses. Not enough people are willing to make the trade-offs that instant responsiveness implies for there to be a market for such a computer.
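
Spolsky’s arithmetic is easy to check with a toy simulation. The sketch below is my own illustration, not anything from Spolsky’s or Stanislav’s posts; the feature and user counts are arbitrary assumptions. It asks: if every user relies on a random 20% of the features, how many users remain fully served when you ship only the 20 most popular features?

```python
# A minimal sketch of the 80/20 overlap problem; all numbers are made up.
import random

random.seed(0)
NUM_FEATURES = 100
NUM_USERS = 10_000
FEATURES_PER_USER = 20  # each user touches "only 20%" of the features

features = list(range(NUM_FEATURES))
users = [set(random.sample(features, FEATURES_PER_USER)) for _ in range(NUM_USERS)]

# Ship only the 20 most popular features, then count users whose entire
# personal feature set survived the cut.
popularity = {f: sum(f in u for u in users) for f in features}
shipped = set(sorted(features, key=popularity.get, reverse=True)[:FEATURES_PER_USER])
fully_served = sum(u <= shipped for u in users)

print(f"Users fully served by the 'top 20%' build: {fully_served}/{NUM_USERS}")
# With uniformly random feature use, essentially no one's full 20% survives:
# it's never the same 20%.
```

Real feature use is far from uniform, of course; the sketch just makes the overlap problem concrete.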

Nothing is stopping someone from making a stripped-down version of, say, Linux that will boot “into a graphical everything-visible-and-modifiable programming environment, the most expressive ever created” faster than a stock PC boots into its “syrupy imponade hell.” But most people evidently prefer the features that modern OSes and programs offer. Or, rather, they prefer that modern OSes support THEIR pet feature and make everything as easy to accomplish as possible, at the expense of speed. If you take out their favorite feature… well, then you can keep your superfast response time and they’ll stick with Windows.

To his credit, Stanislav responded to a version of what I wrote above, noting some of the possible technical deficiencies of Linux:

If you think that a static-language-kernel abomination like Linux (or any other UNIX clone) could be turned into a civilized programming environment, you are gravely mistaken.

That may be true: my programming skill and knowledge end around simple scripting and CS 102. But whatever the weaknesses of Linux, OS X, and Windows, taken together they represent uncounted hours of programming and debugging effort. For those of you who haven’t tried it, I can only say that programming is an enormous challenge. To try to replicate all that modern OSes offer would be hard—and probably effectively impossible. If Stanislav wants to do it, though, I’d be his first cheerleader. But the history of computing is rife with massive rewrites of existing software and paradigms that fail; GNU/Hurd is the classic example. It has been in development since 1990. Did it fail for technical or social reasons? I have no idea, but the history of new operating systems, however technically advanced, is not a happy one.

Stanislav goes on to say:

And if only the bloat and waste consisted of actual features that someone truly wants to use.

The problem, as Joel Spolsky points out, is that one man’s feature is another man’s bloat, and vice versa. That’s why the computing experience looks the way it does today: people hate bloat, unless it’s their bloat, in which case they’ll tolerate it.

He links to a cool post on regulated utilities as seen in New York (go read it). But I don’t think the power grid metaphor is a good one because transmission lines do one thing: move electricity. Computers can be programmed to do effectively anything, and, because users’ needs vary so much, so does the software. You don’t have to build everything from APIs to photo manipulation utilities to web browsers on top of power lines.

Note the final lines of Symbolics, Inc.: A failure of heterogeneous engineering, which is linked from Stanislav’s “About” page:

Symbolics is a classic example of a company failing at heterogeneous engineering. Focusing exclusively on the technical aspects of engineering led to great technical innovation. However, Symbolics did not successfully engineer its environment, custormers [sic], competitors and the market. This made the company unable to achieve long term success.

That exclusive focus on the technical sounds, to me, like exactly the kind of thinking that leads one to lament how “slow” modern computers are. They are—from one perspective. From another, they enable things that the Lisp machine didn’t have (like, say, YouTube).

However, I’m a random armchair quarterback, and code talks while BS walks. If you think you can produce an OS that people want to use, write it. But when it doesn’t support X, where “X” is whatever they want, don’t be surprised when those people don’t use it. Metcalfe’s Law is strong in computing, and there is a massive amount of computing history devoted to the rewrite syndrome; for another example, see Dreaming in Code, a book that describes how an ostensibly simple task became an engineering monster.
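
As a back-of-the-envelope illustration of why Metcalfe’s Law matters here (the user counts below are invented for the example, not drawn from anything above), the “value” of a network is often modeled as the number of possible pairwise connections, which grows roughly with the square of the user count:

```python
# Rough illustration of Metcalfe's Law: value modeled as the number of
# possible pairwise connections among n users, n * (n - 1) / 2.
# The user counts are hypothetical, chosen only to show the scale difference.
def network_value(n: int) -> int:
    """Possible pairwise connections among n users."""
    return n * (n - 1) // 2

incumbent = network_value(1_000_000)  # an established platform
newcomer = network_value(1_000)       # a technically superior upstart

print(f"Incumbent: {incumbent:,} possible connections")  # 499,999,500,000
print(f"Newcomer:  {newcomer:,} possible connections")   # 499,500
print(f"Ratio: {incumbent / newcomer:,.0f}x")            # 1,001,000x
```

A thousand-fold edge in users becomes roughly a million-fold edge in potential connections, which is one way to see why a better but incompatible system has such a hard time displacing an incumbent.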

Columbia or prison: similarities and differences?

Terry Gross’ interview with Scott Spencer (of A Man in the Woods) notes that the author has “taught fiction writing at Columbia University, and in prison” (1:10; I think she says “in prison,” although it might be “at prisons”). Her tone makes this sort of trajectory sound completely normal, like a sandwich and soup. To me, it invites questions:

  • Can I be the only one who finds the juxtaposition of those two fine American institutions curious or notable?
  • How many writers or professors have taught at an Ivy League school and a penal facility?
  • Is teaching at the one pretty much like teaching at the other?
  • If you’ve currently got a gig at a prison, how do you make the transition to Columbia? I assume relatively few people want to make the opposite leap.

Jane Austen, Emma, and what characters do

I’m rereading Jane Austen’s Emma and realized that when its characters debate the validity, respectability, or wisdom of one another’s minor actions—which is essentially all that happens—they are really judging themselves and their own choices. For example, there’s a moment when Emma is considering Knightley’s observations about Elton’s real motives:

He had frightened her a little about Mr. Elton; but when she considered that Mr. Knightley could not have observed him as she had done, neither with the interest, nor (she must be allowed to tell herself, in spite of Mr. Knightley’s pretensions) with the skill of such an observer on such a question as herself, that he had spoken it hastily and in anger, she was able to believe, that he had rather said what he wished resentfully to be true, than what he knew any thing about.

When Emma says that Knightley “could not have observed him as she had done,” she’s really saying that she’s a more able observer than Knightley and that she doesn’t merely base things on what she “wished resentfully to be true.” This is proved wrong, of course, like many of her comments and ideas, and it shows that while she thinks she values seeing things clearly, given her “skill” as “such an observer,” she actually sees no more clearly than anyone else. The reader figures out that Emma is self-deceptive, while within the novel she is proclaiming that her own choice of Elton as a sexual partner for Harriet is an appropriate one.

Emma also tends not to have much meta-cognition—instead, we, the readers, act as her meta-evaluator. For example, she moves briefly in this direction after Elton foolishly declares his love, but she pulls back before it can come to fruition:

She had had many a hint from Mr. Knightley and some from her own heart, as to her deficiency—but none were equal to counteract the persuasion of its being very disagreeable,—a waste of time—tiresome women—and all the horror of being in danger of falling in with the second-rate and third-rate of Highbury, who were calling on them for ever, and therefore she seldom went near [the Bates, who she considers inferiors].

Whatever hints Knightley drops, Emma ignores through most of the novel—likewise the ones “from her own heart.” Her own choices must be right because they come from her, even when those choices spring from unarticulated values that don’t hold up to Knightley’s clarifying vision. Emma never interrogates what “the second-rate and third-rate” mean: that’s one of the frustrating parts of this novel and so many others. The characters lack the ability to explicitly question their own values, even as they express what values they hold by denigrating the values of other characters. This is part of the joke and the irony of the novel, of course, but I tend to prefer characters with somewhat greater self-awareness.

But the pleasure of Emma is realizing that its characters lack much of the self-awareness we think they should have. They debate values when they should be debating their debate on values. That, instead, is left to the critics.

The last word on this version of the Amazon Kindle

After months with the Kindle and one long review, I’ve stopped using it for most reading. Still, a commenter on Hacker News asked, “why would you want to take a library with you?” As someone with ~1,000 books, I thought I’d answer, since I can think of some very good reasons based on all those books:

1) Moving is a pain in the ass, to put it lightly. No, excuse me: to put it heavily. Very, very heavily. Especially cross country, though I’ve acquired a lot of books since starting grad school.

2) Shelving is expensive.

3) At scale the right book can become harder to find. The other day I spent 15 minutes looking for Tom Perrotta’s Election because it wasn’t quite where I thought it was.

4) Most of us don’t have infinite room and therefore eventually run out of space.

5) Searching within the text is still pretty nice.

That being said, why do I still prefer paper books?

1) The note-taking function on the Kindle sucks, and I compulsively fill margins. Highlighting is tedious and hard to find.

2) Page turning is still too slow. Way too slow.

3) I actually flip back and forth between pages quite a bit.

4) I’m not convinced that DRM isn’t going to bite me 1 to 20 years from now.

5) Anachronistic attachment to paper.

6) I’m a grad student, and the citation / edition situation hasn’t been sorted out for the Kindle. This is very important when writing academic papers.

Of these, another person pointed out that 1, 2, 3, and 6 are basically technological problems that might be solved. The E Ink display used in the Kindle apparently keeps them from being solved in the immediate future.

Note that I’m not making some kind of moral argument about whether electronic reading is good, bad, or indifferent. To me, it just is. I foresee eventually moving chiefly to electronic reading, and I think most people eventually will, but I’m not sure when or how that shift will happen.

I’m also not interested in the iPad because I spend enough time staring at LCDs as it is.

There is, however, one huge, tremendous, gigantic redemptive feature of the Kindle: Instapaper. This is basically a bookmarklet + backend that quickly and easily lets me tag long, interesting articles. I’d otherwise have to read those articles on screen (and I spend enough time staring at screens) or print them (and I waste enough paper as it is). Once a week, I download about a dozen long articles to the Kindle using Instapaper, which automatically formats them. Brilliant.

The Novel: An Alternative History — Steven Moore

Novels really start when an important technology (the printing press) allows novelists to respond to one another.

Steven Moore’s The Novel: An Alternative History: Beginnings to 1600 is a very alternative history, one that points even more than most histories of the novel to the question of what defines the genre. But it answers that question less satisfyingly: a novel is any prose work of some length that is what we would now call fiction. The trouble is that the distinction between fiction and nonfiction wasn’t particularly well established until the late eighteenth century, as discussed in some of those conventional histories, like The Rise of the Novel: Studies in Defoe, Richardson and Fielding and Institutions of the English Novel: From Defoe to Scott.

Without that epistemological distinction, critics lack the intellectual scaffolding necessary to really talk about fiction: you have a muddle of material that no one has quite figured out how to deal with. In The Disappearance of God, J. Hillis Miller puts it differently: “The change from traditional literature to a modern genre like the novel can be defined as a moving of once objective worlds of myth and romance into the subjective consciousness of man,” but he’s getting at a similar idea: the “objective worlds of myth” turn out not to be as “objective” as they appear, and the “subjective consciousness of man” reevaluates those worlds of myth. We get at distinctions between what’s true and what’s false through our ability to recognize our own subjective position, which the novel helps us do.

Moore discusses these issues, of course: he notes the standard history I’m espousing and his reasons for doubting it:

And today our best novelists follow in this great tradition [from Defoe, Swift, and Richardson to the 19th Century realists through Joyce and Faulkner to the present]: that is, realistic narratives driven by strong plot and peopled by well-rounded characters struggling with serious ethical issues, conveyed in language anybody can understand.

Wrong. The novel has been around since at least the 4th century BCE […] and flourished in the Mediterranean area until the coming of the Christian Dark Ages.

That’s on page three. I’ve responded to the philosophical and intellectual aspects of what I find problematic, but there’s another issue: Moore’s argument ignores the technological history that enabled the novel to emerge. I’ll return to my first paragraph.

Without the printing press, it’s wrong-headed to speak of novels. They couldn’t be read, distributed, and disseminated widely enough to enable the “speaking to each other” that I see as central to fiction. There wasn’t a “creativity revolution” along the lines of the runaway Industrial Revolution of the eighteenth century (see, for example, Joel Mokyr’s The Enlightened Economy, which I discuss at the link). Books didn’t react enough to other books; that reaction is part of what the novel got going, and it was enabled by the Industrial Revolution and the press. The two are fundamentally linked.

Some works that we would now classify as fiction definitely were written or compiled, as Moore rightly points out, but they didn’t gain the epistemological distinctions that we grant novels until much later, and novels evolved with a mass reading public that could only occur when novels were mass-produced—produced in numbers that allowed them to be read and responded to by other writers. Claiming that early quasi-fiction forms are novels is like saying that a play and a TV show are the same thing because both rely on visual representations of actors who are pretending to be someone else. In some respects, that’s true, but it still misses how form changes function. It misses the insights of Marshall McLuhan.

He almost gets to this issue:

Sorting through the various ancient writings that have come down to us on cuneiform tablets, papyri, scrolls, and ostraca (potsherds or limestone flakes), it is not difficult to find prototypes for literary fiction and what would eventually be called the novel. What’s difficult is sorting prose from poetry, and fiction from mythology and theology.

But the problem of sorting deserves more attention. Until it can be discussed with greater depth, Moore’s account misses essential features of the genre. Accounts of the novel need to take two major issues into account: a technological one and an intellectual one. The technological one, as mentioned, is the invention and improvement of the printing press, without which the sheer labor necessary to produce copies of novels would have prevented many writers from working at all; you can read more about this in Elizabeth L. Eisenstein’s The Printing Press as an Agent of Change. The second is the growth of subjectivity, and the acknowledgment of subjectivity, in fiction, as discussed above. Without both the technological and the intellectual facets, I don’t think you really have novels, at least as they’re conceived of in contemporary times.

The other thing I’d like to note is that Moore is writing more of a taxonomy than a history: his book has brief sections on more than 200 works with relatively little analysis of each. This lessens its depth and makes it more tedious as we move from culture to culture without much discussion of what links novel to novel. But that’s part of the problem: proto-novels weren’t linked, because their authors didn’t know of one another or of what made fiction fiction and nonfiction nonfiction. Moore’s material leaves him with this basic shape for The Novel: An Alternative History; in short, form undercuts argument. Too bad, because it’s an argument worth paying attention to, if for no other reason than its novelty.

Signaling, status, blogging, academia, and ideas

Jeff Ely’s Cheap Talk has one of those mandatory “Why I Blog” posts, but it’s unusually good and also increasingly describes my own feeling toward the genre. Jeff says:

There is a painful non-convexity in academic research. Only really good ideas are worth pursuing but it takes a lot of investment to find out whether any given idea is going to be really good. Usually you spend a lot of time doing some preliminary thinking just to prove to yourself that this idea is not good enough to turn into a full-fledged paper.

He’s right, but it’s hard to say which of the 100 preliminary ideas one might have over a couple of months “are worth pursuing.” Usually the answer is, “not very many.” So writing blog posts becomes a way of exploring those ideas without committing to attempting to write a full paper.

But to me, the other important part is that blogs often fill in my preliminary thinking, especially in subjects outside my field. I’m starting my third year of grad school in English lit at the University of Arizona and may write my dissertation about signaling and status in novels. My interest in the issue arose partially because of Robin Hanson’s relentless focus on signaling in Overcoming Bias, which got me thinking about how this subject works now.

The “big paper” I’m working on deals with academic novels like Richard Russo’s Straight Man and Francine Prose’s Blue Angel (which I’ve written about in a preliminary fashion—for Straight Man, a very preliminary fashion). Status issues are omnipresent in academia, as every academic knows, and as a result one can trace a line from my reading of Overcoming Bias, to my attention to status generally, to my attention to the theoretical and practical aspects of status in these books (there’s some other stuff going on here too, like an interest in evolutionary biology that predates reading Overcoming Bias, but I’ll leave that out for now).

Others have contributed too: I think I learned about Codes of the Underworld from an econ blog. It offers an obvious way to help interpret novels by Elmore Leonard, Raymond Chandler, and other crime and caper writers, whose characters need to convincingly signal to others that they’re available for crime while also avoiding being caught by the police, and so forth.

In the meantime, from what I can discern from following some journals on the novel and American lit, virtually no English professors I’ve found are using these kinds of methods. They’re mostly wrapped up in the standard forms of English criticism, literary theory, and debate. Those forms are very good, of course, but I’d like to go in other directions as well, and one way I’ve learned about alternative directions is through reading blogs. To my knowledge no one else has developed a complete theory of how signaling and status work in fiction, even though you could call novels long prose works in which characters signal their status to other characters, themselves, and the reader.

So I’m working on that. I’ve got some leads, like William Flesch’s Comeuppance: Costly Signaling, Altruistic Punishment, and Other Biological Components of Fiction and Jonathan Gottschall’s Literature, Science, and a New Humanities, but the field looks mostly open at the moment. Part of the reason I’ve been able to conceptualize the field is that I’ve started many threads through this blog and frequently read the blogs of others. If Steven Berlin Johnson is right about where good ideas come from, then I’ve been doing the right kinds of things without consciously realizing it until now. And I’ve only realized it thanks to Jeff Ely’s Cheap Talk: it took a blog to crystallize this nascent idea about why blogging is valuable, how different fields contribute to my own major interests, and how ideas form.
