Week 27 Links: McPhee, Walkability, Flip shutdown, and Ricky Gervais

* Deep Walkability.

* John McPhee on Writing, Teaching, and Programming.

* A sad day: Cisco is shutting down its Flip video camera unit. Amateur, uh, home video makers everywhere mourn. I’ve had a Flip MinoHD for a couple of years, and it’s a delightful little camera. I’m annoyed because the decision to shutter Flip appears to be a purely corporate one.

David Pogue says Flip had an amazing new product coming out.

You can also see a New York Times article. Note that it doesn’t mention whether a lot of Flip cameras are still selling; I assume they are. See too Ars Technica’s coverage.

* How Black People Use Twitter: The latest research on race and microblogging.

* Less music, more books might boost mental health in teens.

* An (Atheist) Easter Message from Ricky Gervais.

Hulu Owners: Should We Shoot Ourselves in the Foot?

I don’t own a stereotypical TV and almost never watch video that originally appeared on conventional TV stations. I’ve also never had a subscription to cable TV. That being said, I will occasionally use Hulu to watch Glee, which is a lot of fun and not stupid and tedious—unlike most TV shows. I’m apparently not the only person who noticed this; the L.A. Times published “Hulu is popular, but that wasn’t the goal: Its owners — the parents of ABC, Fox and NBC — fear the TV website may hurt their bottom lines.”

Now the website faces changes that could curtail its trove of offerings or require users to pay for episodes they currently watch for free. Once hailed as the networks’ solution in taming the Internet, Hulu’s stunning success is now undermining the very system it was designed to protect, forcing the site’s owners to reconsider what Hulu should be.

The big problem, however, is that Hulu doesn’t just compete against network TV and cable. It also competes against BitTorrent sites. Now, because I enormously respect copyright law, I would never, ever, use such sites because they’re really convenient. Never. Just as, at 16, I didn’t use Napster like all my friends did to download music.

In “The Other Road Ahead,” Paul Graham says, “Near my house there is a car with a bumper sticker that reads ‘death before inconvenience.’ Most people, most of the time, will take whatever choice requires least work.” In this respect, I am most people, and people who want to watch TV are probably thinking the same thing. If Fox, ABC, and NBC don’t want to become tomorrow’s newspapers, they might want to contemplate what death before inconvenience means.

The Case Against Adolescence: Rediscovering the Adult in Every Teen — Robert Epstein

The Case Against Adolescence should be a better book than it is, much like Sex at Dawn. The central argument is that we create the contemporary adolescent experience (angst, nihilism, penchants for bad TV shows, temper storms) through social and legal restrictions on teenagers that deprive them of any real ability to be or act like adults. I’m inclined to agree with it, but the book would’ve been greatly helped by peer review.

Epstein is not the first person to notice. In “Why Nerds are Unpopular,” Paul Graham says that “I think the important thing about the real world is not that it’s populated by adults, but that it’s very large, and the things you do have real effects.” This means that teenagers have no real challenges—high school is so fake a challenge that a lot of people find that it poisons education for them—and that they become “neurotic lapdogs”:

As far as I can tell, the concept of the hormone-crazed teenager is coeval with suburbia. I don’t think this is a coincidence. I think teenagers are driven crazy by the life they’re made to lead. Teenage apprentices in the Renaissance were working dogs. Teenagers now are neurotic lapdogs. Their craziness is the craziness of the idle everywhere.

“Coeval” is correct, but it would be more accurate to say that being a teenager was enabled by growing economic wealth more than anything else. Once young people didn’t have to start working immediately, they didn’t. This started happening on a somewhat wide scale in the late nineteenth and early twentieth centuries. It accelerated after World War II. By now, laws practically prevent people from becoming adults. The question of why and how this happened, however, remains open.

The Case Against Adolescence offers a dedication: “To Jordan and Jenelle, may you grow up in a world that judges you based on your abilities, not your age.” Unfortunately, we’re not likely to get such a world in the near future because bureaucratic requirements demand hard age cutoffs instead of real judgments. Should you drive when you’re “ready” to drive? How will the DMV decide? It can’t, so laws make 16 the magic age. Based on what I’ve seen at the University of Arizona, most students are “ready” to drink in the sense that they make the choice to do so of their own free will—despite the nominal legal drinking age of 21. But “ready to” can’t be readily gauged by a cop looking at a driver’s license, so we have to choose arbitrary cutoffs.

Many students appear to feel done with high school by the time they’re 16—but high school continues to 18, so, for the vast majority, they stay—not “based on [their] abilities,” but on their age. Without those bureaucratic requirements, judgment based on abilities might be more possible. In some realms, it is: this might be why the image of the teenage hacker has become part of pop culture. In computer programming, one can judge immediately whether the code works and does what its author says it should. There isn’t really such a thing as code that is “avant garde” or otherwise susceptible to influence and taste. In addition, computers are readily available, and posting work online lets one adopt personas that may be “older” than the driver’s license age. As such, working online may alleviate problems with age, sex, race, and other such issues. Online, no one automatically knows you’re a teenager. Offline, it’s obvious.

One reason why contemporary teenagers act the way they do might simply be the “role models” they have—who tend to be each other. As Epstein says, “Because teens in preindustrial countries spend most of their time with adults—both family members and co-workers—adults become their role models, not peers. What’s more, their primary task is not to break free of adults but rather to become productive members of their families and their communities as soon as they are able.” But the term “preindustrial countries” sounds wrong: countries didn’t really coalesce into more than city-states until after the industrial revolution. A lot of contemporary political problems arise from imposing European “countries” on territories with diverse tribal or clan identities. Furthermore, I’m not sure that hunter-gatherers and agrarian societies can be lumped together like this. And I don’t think most agrarians would think of others as “co-workers,” an idea that comes from modern offices.

That’s one example of the book’s sloppiness. The other is simpler: our economy increasingly rewards advanced education, which means that the economic productivity of people without it is going down. So we might have a very good reason for forcing teenagers to attend school for long periods of time, namely that most won’t be able to accomplish much without it. The key word is “most”: there are obvious exceptions, and the kinds of people likely to be reading this blog are more likely to be the exceptions. Epstein observes that for most of human existence, people we now call teenagers were more like adults. He’s right. But there’s a problem with his argument.

Early on, he says, “For the first time in human history, we have artificially extended childhood well past puberty. Simply stated, we are not letting our young people grow up.” The reasons for this are complex, and Epstein suggests an evolutionary narrative for greater capability earlier in history than we might now assume: “our young ancestors must have been capable of providing for their offspring. . . and in most other respects functioning fully as adults” because, if they couldn’t, “their young could not have survived.” This is true, but most of human history also hasn’t occurred in industrial and post-industrial times. We’re living in a weird era by almost any standard, so the reason teenagers are treated like teenagers might be an economic argument.

They can’t produce much until they have a lot of education, and productive adults don’t usually have time to train them. As Graham says of schools, “In fact their primary purpose is to keep kids locked up in one place for a big chunk of the day so adults can get things done. And I have no problem with this: in a specialized industrial society, it would be a disaster to have kids running around loose.” Teens might have use for adults, but not a lot of adults have much use for teenagers. In my parents’ business, Seliger + Associates, employing me was probably a net drag until I was 17 or 18, and even then I was only productive because I’d been working for them for so long. Epstein underestimates what a “specialized industrial society” looks like. The larger point that young people are probably going to be more capable if we let them be is true. But the flip side of positive capability is the negative possibility of failure.

Epstein does anticipate part of Graham’s argument:

[. . .] in most industrialized countries today teens are almost completely isolated from adults; they’re immersed in ‘teen culture,’ required or urged to attend school until their late teens or early twenties, largely prohibited from or discouraged from working, and largely restricted, when they do work, to demeaning, poorly paid jobs.

But he doesn’t elaborate on why this might be. Delaying adulthood can have a lot of reasons, and he sometimes confuses correlation with causation: just because men and women marry later than they used to, as Epstein argues on page 30, doesn’t mean that they’re delaying adulthood: it means they might want fun, they might not need marriage for economic purposes, and they don’t need marriage for sex. Disconnecting sex from marriage probably explains as much of this as anything else does.

He does notice institutionalized hypocrisy, which is useful. For example, “Whether we like the idea or not, young people who commit serious crimes are indeed emulating adults—adult behavior, adult emotions, adult ideas. They see adults on the streets, on TV, in movies, and in newspapers and magazines doing heinous things every day. What’s more, when a young person commits a crime, he or she is demonstrating control over his or her own life.” A sixteen-year-old who commits murder can be tried as an adult; a sixteen-year-old who has sex still has to be protected like a child, even if it’s the same sixteen-year-old. A twenty-year-old can send a naked picture of herself to her boyfriend, but a seventeen-year-old emulating the twenty-year-old’s behavior can’t.

I think a lot of this has to do with parents’ desires: they don’t want kids having sex because the economic consequences of pregnancy are severe and because parents are often left to clean up the financial and emotional messes in a way they don’t have to with, say, 21-year-olds. Part of this is because of social expectations, but part may still be because of economics, which is the great missing piece of The Case Against Adolescence. Robin Hanson notes the labor component of the child / adolescent argument. I think he’s missing one major component of his argument: parents on average probably don’t want their offspring to leave school because they associate school with higher eventual earnings and economic success that will translate to social / reproductive success. So I don’t think it’s just other laborers who don’t want kids in the workforce—it’s also probably parents as a whole.

You can find more about judicial and sexual hypocrisy in Judith Levine’s book, Harmful to Minors: The Perils of Protecting Children from Sex, which should probably be better known than it is. As she says:

This book, at bottom, is about fear. America’s fears about child sexuality are both peculiarly contemporary […] and forged deep in history. Harmful to Minors recounts how that fear got its claws into America in the late twentieth century and how, abetted by a sentimental, sometimes cynical, politics of child protectionism, it now dominates the way we think and act about children’s sexuality.

We’re probably afraid of sexuality because we’re afraid of the costs of pregnancy and because of the United States’ religious heritage. Those “fears about child sexuality” are unlikely to go away in part because there is some level of rationality in them: we’re unhappy when people reproduce and can’t afford their offspring. So we call people who mostly aren’t economically viable “children,” even when they’re physiologically and psychologically not. It’s dumb, but it’s what we do.

Furthermore, we don’t really know why adolescence, if it didn’t really exist until the twentieth century, didn’t. Epstein cites a 2003 New Yorker article by Joan Acocella called “Little People: When did we start treating children like children?”, which notes, “If, as is said, adolescence wasn’t discovered until the twentieth century, that may be because earlier teen-agers didn’t have time for one, or, if they did, it wasn’t witnessed by their parents.” Notice the tentativeness of this sentence: “as is said,” “that may be,” “or.” We don’t know. We might never entirely know. Contemporary adolescence might, like being overweight and having a 60″ TV, be a condition of modernity, and earlier peoples might have developed it too if they’d been rich enough.

This is the part of the post where I’m supposed to posit some solutions. Problem is, I don’t have any, or at least any that are practical. Eliminating middle school and having “high school” run from seventh through tenth grade, followed by something more like community college or a real university, would be a good place to start, along with letting people enter contracts at sixteen instead of eighteen. The probability of this happening is so low that I feel dumb for even mentioning it. Not all problems have solutions, but being aware of the problem might be a very small part of the start.

More on fiction versus nonfiction

Most of the books I’ve been wanting to write about and not getting around to are nonfiction, and I’m not sure why this is. It might be because both good and bad nonfiction are easier to write about than good fiction. Good fiction demands attention and time, which are in chronically short supply for me and virtually everyone else. So I foolishly put off writing about good fiction and instead spread time among lesser though still interesting vessels (this post comes as a followup to Nonfiction, fiction, and the perceived quality race, which got started from the question, “The quality of fiction seems to be decreasing relative to the quality of non-fiction, or am I just biased against active fiction writers vs. dead ones?”).

I spend a lot of my time thinking about good fiction in the context of making my own novel writing better, instead of writing about what makes good fiction good on this forum. So even though I think a lot about good novels, I write about them in a different context. For instance, the last novel I finished stole from Alain de Botton’s On Love and Rebecca Goldstein’s The Mind-Body Problem; I’ve written about both books here, but not nearly in proportion to how much I’ve been thinking about them. Alas: the novel I wrote got the most encouraging rejections, many along the lines of “I like it but can’t sell it.” If it had sold and eventually been published, I think it would be much easier for me to write about novels I care deeply about.

Even so, there are a bunch of novels—a couple by Michel Houellebecq, Elmore Leonard’s latest, Brady Udall’s The Lonely Polygamist, more about Robertson Davies—I mean to write about, but they’re outnumbered by nonfiction. This might seem strange, coming from a person in English graduate school, where we study nonfiction all the time, and when we study fiction, it’s often more like studying nonfiction than we care to admit.

I also simply don’t read as much fiction as I used to; I wonder if fiction is most useful to the young (who are trying to figure out who they are and how the social world works) and the old (who are trying to figure out what this crazy thing they just did actually means). A lot of people in the middle don’t appear to derive as much immediate benefit from reading fiction, although I have no data on this idea.

Finally, I can often read nonfiction much faster than fiction. This isn’t a change, but it is true: nonfiction often telegraphs where it’s going, which makes skipping large sections easier. Being able to read faster also indicates that too many books are too long, as Cowen has argued in various places, but it nonetheless means I very seldom have to invest as much in deep, close reading. I wish more nonfiction books rose to the level of deep, close reading, but few do, relative to good fiction.

George Eliot’s Daniel Deronda and Graham Handley’s description of it

In his introduction to George Eliot’s Daniel Deronda, Graham Handley writes:

Yet if all the research and criticism of Daniel Deronda, including scholarly articles of the type which discover but do not evaluate, were put together, a consensus would doubtless reveal that it is generally thought of either as a remarkable failure, a flawed success, or even an aberration unredeemed by incisive insights or distinguished writing. The character of Gwendolen is always praised; those of Mirah, Mordecai, and Daniel are often denigrated.

It is my opinion, as someone who regularly miscegenates evaluation and discovery, that the critical consensus is correct. I particularly like the description of the novel as an “aberration unredeemed by incisive insights or distinguished writing.” I’m also still amused that Handley would announce this in the introduction, as if inviting us to agree with the consensus and not his defense.

Reading Handley’s defense, it’s hard not to like the critical consensus more:

It is my contention that Daniel Deronda needs no apology. [. . .] Its greatness consists in its artistic integrity, its moral and imaginative cohesion, its subtle and consistent presentation of a character with psychological integration as its particular strength, together with what Colvin called the ‘sense of universal interests and outside forces.’

Most of those words and phrases don’t mean anything on their own. What is “moral and imaginative cohesion?” Do you get it or them with glue and spackle? And how does the “subtle and consistent presentation of a character” work? Those sound like code words for “nothing happens,” other than that characters talk to each other about who’s going to boff who after they get married or, if we’re lucky, before.

The introduction goes on:

The form is fluid and vital, not static and diagrammatic, and the sophisticated and studied use of image and symbol is tremulous with life, with the feelings, responses, and pressures of the individual moral and spiritual experience of fictional character registering with the factual reader.

Spare me “sophisticated and studied use of image and symbol” when they aren’t deployed to tell much of a story. “Moral and spiritual experience” sounds remarkably tedious. Once again, with accolades like these, who needs haters?

I will say, however, that Daniel Deronda makes me feel incredibly virtuous for having read it, or at least parts of it. This is more or less true of every novel I’ve read whose title consists solely of a name.


Big Sex Little Death — Susie Bright's Memoir

Big Sex Little Death is weirdly boring. I say “weirdly” because you’d expect a book about sexual awakening, development, politics, and exploration to be more exciting; this Slate article on Bright and being wrong convinced me to buy the book. Skip it: read the reviews instead.

Big Sex Little Death has some clever lines and individual sections, but as a whole the memoir feels prosaic. There’s an obligatory section on birth, parents, sides of the family, unlikely anecdotes; we find that “My mom didn’t drink” and that “My grandpa was a butcher and ran a chicken ranch,” which is eminently respectable in a memoir and somewhat tedious too. There’s a conventionally slightly broken childhood—isn’t it a requirement that people writing memoirs focus on childhood?—that leads to an adulthood that should hold the reason we’re reading the memoir. It does explain that, sort of, and tells a story about economic sexual censorship that I didn’t realize existed as late as the 1980s. Then again, looking at Amazon and Apple’s policies towards sexually frank books, maybe I shouldn’t be surprised. I also hadn’t realized that Bright’s women-run erotica magazine, On Our Backs, even existed.

In disentangling herself from the financial pit that On Our Backs turns out to be, Bright finds the only thing rarer than a hooker with a heart of gold: a lawyer with a heart of gold, whom she quotes as saying, “Ms. Bright, I’m going to take care of this for you,” without making her pay. Reviews of memoirs often want to engage the question of how much is “true,” and I can believe the whole thing except perhaps for the exchange on page 310. Gun threats, underage and unwise sex, cruelty: all believable. Kind lawyers: less so.

There’s not a lot about the intellectual development that led Bright to work on On Our Backs, or that led her not just to get a lot of action but to write about getting a lot of action. Maybe it’s impossible, or nearly impossible, to describe what leads to intellectual engagement: “I read a lot, liked it, thought about it, and transformed thought in my mind” isn’t very satisfying. And it’s not easy to make actions symbolize intellectual development. If someone knows a good example of such changes shown effectively in literature, I’d love to hear about it (one exception: The Adventures of Augie March. Bildungsromans might be as close as we get).

Once Bright gets past the parent bits, she describes how, as a teenager, she starts having sex with socialists, many of them older than her; she says that “lucky for me, some of them were really, really good in bed—and since everyone was down with women’s liberation and nonmonogamy, that made things extra good for me.” This continues:

I was in no one’s debt; I was no one’s property. What little I thought about school anymore involved feeling bad about how scared everyone was: scared of having sex, scared of leaving their gilded cage, scared of dreaming about anything that hadn’t been premeditated by their parents.

And they still are. It’s one of the moments in the book that translates across generations and feels right, since so many parents still treat their children as property. Elsewhere, Bright has finely observed moments, though they sometimes go slightly awry:

People always imagine there is something happening in Los Angeles because of the celebrities. They think that because they see a movie star buy a bag of marshmallows, it must be an event. They think wiping their ass with the same toilet paper that a movie star’s maid wiped her ass with is an accomplishment. This is a company town, and Hollywood is just as crushing as a Carnegie Steel mill. The vast majority of Angelenos have so much nothing in their lives that ‘celebrity nothing’ makes them feel like they have something.

This is almost true: people do imagine something is happening in L.A. But the next sentence is choppy, with all the “t” sounds and the repeated use of the word “they:” “They think that because they see. . .” Still, Bright understands the vacuousness of celebrity worship, but she’s wrong when she says L.A. is “a company town.” There are at least five major movie studios, compared to a single Carnegie Steel mill, and L.A.’s economy is much vaster and more diverse than Pittsburgh’s ever was. That’s part of the reason it was able to thrive; as Edward Glaeser describes in Triumph of the City, cities with diverse economic bases tend to thrive. L.A. is one, even if the movie studios—notice the plural—are very visible.

During that time in L.A., Bright says, “I could not take one more minute of trying to convince the people of Los Angeles that a workers’ revolution and a complete overhaul of society were a tiny bit more exciting than getting a bit role in a Burger King commercial.” I’d like to know what exactly a “workers’ revolution and a complete overhaul of society” means. Revolutions don’t have a great track record, since they tend to include a lot of mindless bloodshed and power struggles. The “workers’ revolution” in Russia that led to the Soviet Union might be among the bloodiest events in human history, according to Timothy Snyder’s Bloodlands. These demands aren’t a coherent political platform; they’re teenage angst writ large and the result of a mind that would be much assisted by taking some economics classes. I’ll take the Burger King commercial and maybe a faster CPU next year, thanks.

So her politics-politics might not be great, but her sexual politics and stories sometimes make more sense. Bright says that women making porn was shocking in the 1970s and 1980s. Apparently, however, women making porn for women is still news, and people are still going, “This is still news?!” I don’t see an end to this cycle. In the personal realm, Bright’s memoir could be titled “Getting Some Ass From Unusual Places,” since relatively few people have gotten it from such diverse places: a union organizing camp; college dropouts after she dropped out of high school; lesbians; men; and probably people in between. An appropriate subtitle might be “And Then Thinking About It Afterwards.” Like Karen Owen, Bright has taken quite a survey of her escapades; unlike Owen, Bright isn’t alienated from herself or her desires. She’s also more explicitly political, which can be both annoyingly polemical and deeper. Most people don’t think of their lives in overtly political terms, even when it might help them to, and that makes someone who does unusual.

That might be the biggest difference between Bright and many other writers about and havers of sex: she doesn’t regret what she’s done, has actualized her experience, and has never particularly bought into the sex-is-bad paradigm that, although weaker than it once was, still dominates culture for many people. We like everything leading up to sex—sexiness, attractiveness, revealing clothes, preening, buying expensive objects—but we still judge the people who move from signaling to action, despite or because of our own desires for action.

Bright imagines that, after the sexual revolution,

Women wouldn’t be catty. No one would bother to be jealous. Who would have the time? Sex would be friendly and kind and fun. You’d get to see what everyone was like in bed. You’d learn things in bed, and that would be the whole point. Romances would seem like candy cigarettes. You could have all the sex and friendship you wanted for free. Exclusivity would be for bores and babies.

I’m all for it. Alas, the pragmatist or realist in me sees this as so unlikely that I want to label it idealist in the worst sense of the word. Bright knows as much, however, and the eyes of experience looking backward demonstrate that she knows precisely how unlikely this is.

I wish there was more connective tissue between Bright’s experiences and better writing when she describes them. She knows the problems memoirs tend to have:

At the onset of my memoir, I thought I would bring myself up to date on the autobiography racket. I researched the current bestsellers among women authors who had contemplated their life’s journey. The results were so dispiriting: diet books. The weighty befores and afters. You look up men’s memoirs and find some guy climbing a mountain with his bare teeth—the parallel view for women are the mountains of cookies they rejected or succumbed to.

I think she gives men’s memoirs too much credit, since so many of them are equally inane and poorly written. And there are probably reasonably interesting memoirs written by women out there, but I think the bigger problem is “reasonably interesting memoirs” in general. Alas: I’m not sure Big Sex Little Death is one.


You can read more about Bright in this interview. Consider it and the other links in lieu of the book itself.

Tea Review – Fujian Jasmine Pearl

[I’ve started drinking tea after reading A Hacker’s Guide to Tea, and I’m going to start posting reviews here because, well, maybe the world wants to know what I think.]

Fujian Jasmine Pearl tastes, for lack of a better term, “chemical.” I made it for the first time and found its smell to be more like tea drenched in an artificial flavor devised in a PepsiCo™ lab by food scientists than actual tea. The taste was an improvement over the smell, but not so much that I’d actually like to make it again. I love the smell of tea as much as the next guy, but I want it to be fundamentally like tea, not the inside of a warehouse or a perfume.

What gives? I’m guessing that I got a bad batch or that its extreme price makes people like the tea. It’s like a monetary placebo effect. Dan Ariely describes how this works in chapters 9 and 10 of Predictably Irrational, which describe “The effect of expectations: Why the mind gets what it expects” and “The power of price: Why a 50-cent aspirin can do what a penny aspirin can’t.” Both chapters describe how we manage to think our way into liking things. If you want someone to love your product, charge more for it and convince them to buy it. Cue people citing Apple as an example. But you have to offer something aspirational about it; in the case of Adagio, they put Fujian Jasmine Pearl in their “masters” collection. There’s a little story about the tea. There are dozens of glowing reviews. None of them take the Coke-Pepsi challenge on it.

Why publishers are scared of ebooks — the standard reasons and Amanda Hocking as symbol

Amanda Hocking, the now-famous indie writer, has an interesting post where she says, “Here’s another thing I don’t understand: The way people keep throwing my name around and saying publishers are ‘terrified’ of me and that I really showed them.” They aren’t terrified of her, specifically, as an individual (which she notes), but they are scared of her as a symbol and of what she represents: a world where you don’t need publishers as much. She just happens to be an early example of how to make it financially via ebooks. At the moment, publishers have one big advantage that no writer, no matter how skilled, can replicate: distribution. Take that advantage away, and a lot of publishers’ raison d’être goes away.

Later, she says: “And just so we’re clear – ebooks make up at best 20% of the market.” But that’s up from virtually nothing in 2006. In 2001, discs sold on shiny platters made up the vast majority of the music business. In 2011, the “music business” as it existed from the days of the first records until about ten years ago is gone. You still need a big record label if you want to be Lady Gaga, but almost no one else does. Music industry profits have never recovered. This is great for people who want to listen to music but not so good for people who want to make money from music, especially if they can’t actually make music themselves. Media executives, including publishers, know this, which is why they’re watching what happens in book-land so carefully.

“Nobody knows what makes one book a bestseller. Publishers and agents like to pretend they do, but if they did, they would only publish best sellers, and they don’t.” That’s the scariest thing of all: no one knows. This has long been a truism in lots of forms of art. William Goldman’s Adventures in the Screen Trade came out in 1983, and he said almost the same thing about movies: “Nobody knows anything. Not one person in the entire motion picture field knows for a certainty what’s going to work. Every time out it’s a guess—and, if you’re lucky, an educated one.” Or take Scott Adams, if you prefer someone with even less movie experience than Goldman or me:

Evaluating whether an idea is good enough for a movie is a bit like an automobile expert saying a certain brand of car doesn’t taste good. It’s absurd. You can only hold the opinion that a particular movie concept is a good or bad idea if you don’t understand what a movie is or what an idea is.

Movies have a slight advantage in that making them technically polished (which requires foley artists, on-set locations, lots of actors, careful attention to lighting, and plenty else) is still expensive. A lot of people also still go to movie theaters, so that advantage hasn’t completely disappeared. With books, all you really have is the book.

There are probably lots of undiscovered bestsellers out there, which writers, once they get tired of submitting to agents and all the rest, can now relatively cheaply and easily put online, letting the market sort them out. Again: if enough people succeed at this, publishers go away.

Big publishers might be dying in the way Paul Graham describes Microsoft being dead: Microsoft will continue making lots of money for the foreseeable future, but it’s no longer leading anything in tech. (Enough people misinterpreted him that he wrote a Cliffs Notes version too.) Publishers aren’t dying in the sense that whoever owns Alfred A. Knopf will be gone tomorrow, or the day after. But if their relevance starts to slip, they could fail with surprising speed. Look at what happened to Blockbuster: Netflix undermined it, and within a decade of Netflix’s arrival all the Blockbusters near me have “going out of business” signs on them.

Back to Hocking: “Traditional publishing and indie publishing aren’t all that different, and I don’t think people realize that.” They might not be as different as some make them out to be, but from the perspective of shareholders they’re very, very different, in that shareholders can make money off publishers in one model and they probably can’t in the same way in the other. From the perspective of the writer, she’s certainly right, as she goes on to say: writers still have to put in an enormous amount of time and effort. As I’m only too aware.

I’m not the only one saying this. Here’s what Kevin Kelly says: “I don’t think publishers are ready for how low book prices will go. It seems insane, dangerous, life threatening, but inevitable.” It’s scary because $0.99 isn’t going to support cushy Manhattan offices, long lunches, interns, marketing departments, and everything else modern publishers do. It’s not going to support 5–10% growth every year, which most investors assume before they part with their money. As mentioned elsewhere, publishers can see what the trend lines look like, and they’ve all read The Innovator’s Dilemma, like everyone else who does anything business-related. The upshot of the book is that incumbents often recognize disruptive technologies and products and then fail to respond to them effectively anyway. Think of Microsoft and the Internet, or record labels and the Internet, or newspapers and the Internet. Yeah, I keep using “the Internet” as an example, but you can see this in other areas, like American car companies when the Japanese first entered the U.S. market. Microsoft is probably the best example, since the famous “Cornell is WIRED!” e-mail alerted them to the threat, and they responded with Internet Explorer.

Today, 17 years after that e-mail was sent, I’m typing this on an iMac, Google and Facebook are arguably the dominant Internet players, and Microsoft failed utterly to foresee the importance of search, like a lot of other people. Publishers know that they can’t really compete with $0.99–$2.99 ebooks, and that, in most genres, readers just aren’t that picky. Publishers know the sound of a market shifting underneath them because some of them have been to Harvard Business School, or have hired people who have, to tell them about the history of companies failing to adapt to new models and environments. That’s scary.

I pay some attention to this stuff because I’m about to take the latest plunge in the crocodile pit that is agent land. If I fail, sometime in the next two years or so I’ll probably say, “Screw it, I’m self-publishing.” Chances are, I’ll be the person who wastes a lot of money and time doing so, but that’s also true of traditional publishing. There’s still that small chance I’ll succeed. Although I’m hardly the best judge of these things, I think I would want to read my own novels, and at some point I’ll have nothing to lose by self-publishing, if the choice is between that and letting my work sit on my hard drive. There might be other people who want to read my work too. Publishers don’t know. I don’t know. But Amazon, Barnes and Noble, and Apple will make it easier for me to find out than Alfred A. Knopf ever did.

Beating the crowds to Max Jamison

Wilfrid Sheed’s Max Jamison is as hilarious as Terry Teachout says it is in “Neither Does He Spin.” The penultimate sentence of Teachout’s column says, “Though it’s out of print (surprise, surprise), you can easily procure a used copy.” Except it’s not easy anymore:

So, naturally, I did the only thing I could think of and listed my paperback copy for $299, since I bought it a year and change ago. Half the price of the $599 copy! I doubt it will sell, but although I like the novel, I don’t like it $300 worth.