A world without work might be totally awesome, and we have models for it, but getting there might be hard

Derek Thompson’s “A World Without Work: For centuries, experts have predicted that machines would make workers obsolete. That moment may finally be arriving. Could that be a good thing?” is fascinating and you should read it. But I’d like to discuss one small part, when Thompson writes: “When I asked Hunnicutt what sort of modern community most resembles his ideal of a post-work society, he admitted, ‘I’m not sure that such a place exists.'”

I can imagine such a place: a university. At one time, most professors made enough to meet their basic material needs without earning extravagant amounts (there were and are some superstar exceptions). Today, a fair number of professors still make enough to meet their basic material needs, though proportionally fewer than, say, 30 years ago. Still, universities have always depended on peer effects for reputation; they’ve tended to convince smart people to do a lot of meaningful activities that are disconnected from immediate and often long-term remuneration. Many professors appear to lead self-directed lives that they themselves structure. The average person with free time doesn’t explore build-it-yourself DNA or write about the beauty of Proust or do many of the other things professors do—the average person watches TV—but perhaps norms will change over time.

I don’t want to overstate the similarity between a potential low-work future and contemporary tenured professors—many professors find grading to be mind numbing, and not everyone handles self-direction and motivation well—but they are similar enough to be notable. In a world of basic incomes and (relative) economic plenty, we may get more people writing blogs, making art, and playing sports or other games. People may spend more time in gyms and less time in chairs.

The open-source software community as it currently exists tends to intersect with large companies, but there are fringes of it with a strongly non-commercial or academic ethos. Richard Stallman has worked at MIT for decades and has written enormous amounts of important open-source code; the primary PGP maintainer made almost no money until recently, though he could almost certainly make tons of cash working for a big tech company. Many people who make money in tech are closer to artists than is commonly supposed. Reading Hacker News and the better precincts of Reddit will introduce you to other open-source zealots, some of whom mostly blow hot air but others of whom act and think like artists rather than businessmen.

Many programmers say publicly that they consider programming to be so much fun that they’re amazed at the tremendous sums they can earn doing it. A small but literate part of the sex worker community says something similar: like most people they enjoy sex, and like most people they enjoy money, and combining the two is great for them. They may not enjoy every act with every client but the more attractive and attentive clients are pretty good. One could imagine an activity that is currently (sometimes) paid and sometimes free being used to occupy more time. I’ve met many people who dance and make their money putting on and teaching dances. If they had a guaranteed annual income they’d probably dance all the time and be very pleased doing that.

Already many professions have turned into hobbies, as I wrote in 2013; most actors and musicians are essentially hobbyists as well, at least in the revenue sense. Photographers are in a similar situation, as are many fiction writers. Poets haven’t been commercial for decades, to the extent they ever were (they weren’t when the Metaphysicals were writing, but that didn’t stop Herbert or Donne). Today many of my favorite activities aren’t remunerative, and while I won’t list them here many are probably similar to yours, and chances are good that some of yours aren’t remunerative either. Maybe our favorite activities are only as pleasurable as they are by contrast with less desirable activities. Maybe they aren’t. Consider for a moment your own peak, most pleasurable and intense experiences. Did they happen at work? If you worked less, would you have more of them?

In short, though, models for non-commercial but meaningful lives do (somewhat) exist. Again, they may not suit everyone, but one can see a potential future already here but unevenly distributed.

A lot of white-collar office work has a large make-work component, and there’s certainly plenty of literature on how boring it can be. If people really, truly worked while in the office, they could probably do much of their “work” in a tiny fraction of the allotted time. Much of that time is spent signaling conformity, diligence, and so forth, and, as Tim Ferriss points out in The 4-Hour Workweek, people who work smarter can probably work less. To use myself as an example: I think of myself as productive, but even I read Hacker News and Reddit more often than I should.

Some people already do what appear to me to be work-like jobs. People who don’t like writing would consider this blog to be “work,” while I consider it (mostly) play, albeit of an intensely intellectual sort. It already looks to me like many moderators on Reddit and similar sites have left the world of “hobby” and entered the world of “work.” The border is porous and always has been, but I see many people moving from the one to the other. (As Thompson observes, prior to the late 19th or early 20th Century the idea of unemployment was itself nonsensical because pretty much anyone could find something productive to do.) Wikipedia is another site that blurs the border in that respect, and I can’t figure out why many disinterested people would edit the site (my edits have always been self-motivated, though I prefer not to state more here).

One can imagine a low-work future being very good, but getting from the present to that future is going to be rocky at best, and I can’t foresee it happening for decades. There are too many old people and children to care for, too many goods that need to be delivered, too much physical infrastructure that needs fixing, and in general too much boring work that no one will do without being paid. Our whole society will have to be re-structured and that is not likely to be easy; in reality, too, there has never been a sustained period of quiet “normalcy” in American history. Upheaval is normal, and the U.S. has an advantage in that rewriting cultural DNA is part of our DNA. That being said, it’s useful to wonder what might be, and one can see the shape of things to come if we see radically falling prices for many material goods.

There’s one other fascinating quote that doesn’t fit into my essay but I want to emphasize anyway:

Decades from now, perhaps the 20th century will strike future historians as an aberration, with its religious devotion to overwork in a time of prosperity, its attenuations of family in service to job opportunity, its conflation of income with self-worth. The post-work society I’ve described holds a warped mirror up to today’s economy, but in many ways it reflects the forgotten norms of the mid-19th century—the artisan middle class, the primacy of local communities, and the unfamiliarity with widespread joblessness.

TheAtlantic.com is increasingly copying others instead of writing their own work

Something is rotten at The Atlantic: Jordan Weissmann “wrote” a piece called “Disability Insurance: America’s $124 Billion Secret Welfare Program,” which is just a restatement of an NPR Planet Money report and some of David Autor’s work (which I’m familiar with through his EconTalk interview and some of his subsequent papers; he’s also mentioned by NPR). This comes not long after Nate Thayer called out The Atlantic for trying to get writers to work for free. It seems like TheAtlantic.com is increasingly doing things like this: using thinly-veiled re-writes of others’ reporting to drive traffic to itself. Weissmann’s piece adds little if anything to the NPR report, and The Atlantic could simply have linked to it.

The magazine is still very good, and original, but The Atlantic’s web content has been getting worse in a very noticeable way, with thinly-veiled re-writes of other people’s work. If you want to write about other people’s work, just link to it directly.

I’ve been noticing this phenomenon more and more, but this is the first time I’ve posted about it. I hope it doesn’t become a series.

(And I’m letting the Scientology ad thing slide, because I think it was an honest mistake.)

Journalism, physics and other glamor professions as hobbies

The short version of this Atlantic post by Alexis C. Madrigal is “Don’t be a journalist,” and, by the way, “TheAtlantic.com thinks it can get writers to work for free” (I’m not quoting directly because the article isn’t worth quoting). Apparently The Atlantic is getting writers to work for free because many writers are capable of producing decent-quality work and the number of paying outlets is shrinking. Anyone reading this and contemplating journalism as a profession should know that they need to seek another way of making money.

The basic problems journalism faces, however, are obvious and have been for a long time. In 2001, I was the co-editor-in-chief of my high school newspaper and thought about going into journalism. But it was clear even then that the Internet was going to destroy a lot of careers in journalism. It has. The only thing I still find puzzling is that some people want to major in journalism in college, or attempt to be “freelance writers.”

Friends who know about my background ask why I don’t do freelance writing. When I tell them that there’s less money in it than getting a job at Wal-Mart they look at me like I’m a little crazy—they don’t really believe that’s true, even when I ask them how many newspapers they subscribe to (median and mode answer: zero). Many, however, spend hours reading stuff for free online.

In important ways I’m part of the problem, because on this blog I’m doing something that used to be paid most of the time: reviewing books. Granted, I write erratically and idiosyncratically, usually eschewing the standard practices of book reviews (dull, two-paragraph plot summaries are stupid in my view, for instance), but I nonetheless do it and often do it better than actual newspapers or magazines, which I can say with confidence because I’ve read so many dry little book reports in major or once-major newspapers. Not every review I write is a critical gem, but I like doing it and thus do it. Many of my posts also start life as e-mails to friends (as this one did). I also commit far more typos than a decently edited newspaper or magazine, though I do correct them when readers point them out.

The trajectory of journalism is indicative of other trends in American society and indeed the industrialized world. For example, a friend debating whether he should consider physics grad school wrote this to me recently: “I think physics is something that is fun to study for fun, but to try to become a professional physicist is almost like too much of a good thing.” He’s right. Doing physics for fun, rather than trying to get a tenure-track job, makes more sense from a lifestyle standpoint.

A growing number of what used to be occupations seem to be moving in this direction. Artists got here first, but others are making their way here. I’m actually going to write a separate post about how journalism increasingly looks like this too. The obvious question is how far this trend will go—what happens when many jobs that used to be paid become unpaid?

Tyler Cowen thinks we might be headed towards a guaranteed annual income, an idea that was last popular in the ’60s and ’70s. When I asked Cowen his opinions about guaranteed annual incomes, he wrote back to say that he’d address the issue in a forthcoming book. The book hasn’t arrived yet, but I look forward to reading it. As a side note, apparently Britain has, or had, a concept called the “Dole,” which many people went on, especially poor artists. Geoff Dyer wrote about this some in Otherwise Known as the Human Condition. The Dole subsidized a lot of people who didn’t do much, but it also subsidized a lot of artists, which is pretty sweet; one can see student loans and grad school serving analogous roles in the U.S. today.

Even in programming, which is now the canonical “Thar be jobs!” (pirate voice intentional) profession, some parts of programming—like languages and language development—basically aren’t remunerative. Too many people will do the work for free because it’s fun, like amateur porn. In the ’80s there were many language and library vendors, but nearly all have died, and libraries have become either open source or rolled into a few large companies like Apple and Microsoft. Some aspects of language development are cross-subsidized in various ways, like professors doing research, or companies paying for specific components or maintenance, but it’s one field that has, in some ways, become like photography, or writing, or physics, even though programming jobs as a whole are still pretty good.

I’m not convinced that the artist lifestyle of living cheap and being poor in pursuit of some larger goal or glamor profession is good or bad, but I do think it is spreading (that we have a lot of good cheap stuff out there, especially in the form of consumer electronics, may help: it’s possible to buy or acquire a nearly free, five-year-old computer that works perfectly well as a writing box).* Of course, many starving artists adopt that life as a pose—they think it’s cool to say they’re working on a novel or photography project or “a series of shorts” or whatever, but don’t actually do anything, while many people with jobs put out astonishing work. Or at least work, which is usually a precursor to astonishing work.

For some people, the growing ability to disseminate ideas and art forms even without being paid is a real win. In the old days, if you wanted to write something and get it out there, you needed an editor or editors to agree with you. Now we have a direct way of resolving questions about what people actually want to read. Of course, the downside is that whole payment thing, but that’s the general downside of the new world in which we live, and, frankly, it’s one I don’t have a society-wide solution for.

In writing, my best guess is that more people are going to book-ify blogs, and try to sell the book for $1 – $5, under the (probably correct) assumption that very few people want to go back and read a blog’s entire archives, but an ebook could collect and organize the material of those archives. If I read a powerful post by someone who seemed interesting, I’d buy a $4 ebook that covers their greatest hits or introduced me to their broader thinking.

This is tied into other issues around what people spend their time doing. My friend also wrote that he read “a couple of articles on Keynes’ predictions of utopia and declining work hours,” but he noted that work still takes up a huge amount of most people’s lives. He’s right, but most reports show that median hours worked in the U.S. have declined, and male labor force participation has declined precipitously. Labor force participation in general is surprisingly low. Ross Douthat has been discussing this issue in The New York Times (a paid gig, I might add), and, like most reasonable people, he has a nuanced take on what’s happening. See also this Wikipedia link on working time for some arguments that working time has declined overall.

Working time, however, probably hasn’t decreased for everyone. My guess is that working time has increased for some smallish number of people at the top of their professions (think lawyers, doctors, programmers, writers, business founders), with people at the bottom often relying more on government or gray-market income sources. Douthat starts his essay by saying that we might expect working hours among the rich to decline first, so they can pursue more leisure, but he points out that the rich are working more than ever.

Though I am tempted to put “working” in scare quotes: it seems like many of the rich are doing things they would enjoy on some level anyway; certainly a lot of programmers say they would keep programming even if they were millionaires, and many of them become millionaires and keep programming. The same is true of writers (though fewer become millionaires). Is writing a leisure or work activity for me? Both, depending. If I self-publish Asking Anna tomorrow and make a zillion dollars, the day after I’ll still be writing something. I would like to get paid, but some of the work I do for fun isn’t contingent on my getting paid.

Turning blogs into books and self-publishing probably won’t replace the salaries that news organizations used to pay, but it’s one means for writers or would-be writers to get some traction.

Incidentally, the hobby-ification of many professions makes me feel pretty good about working as a grant writing consultant. No one thinks at 14, “I want to be a grant writer like Isaac and Jake Seliger!”, while lots of people want to be like famous actors, musicians, or journalists. There is no glamor, and grant writing is an example of the classic aphorism “Where there’s shit, there’s gold” at work.

Grant writing is also challenging. Very few people have the weird intersection of skills necessary to be good at it, and it’s a decade-long process to build those skills—especially for people who aren’t good writers already. The field is perpetually mutating, with new RFPs appearing and old ones disappearing, so we’re not competing with proposals written two years ago (whereas many novelists, for example, are in effect still competing with their peers from the ’20s or ’60s or ’90s).

To return to journalism as a specific example, I can think of one situation in which I’d want The Atlantic or another big publisher to publish my work: if I was worried about being sued. Journalism is replete with stories about heroic reporters being threatened by entrenched interests; Watergate and the Pentagon Papers are the best-known examples, but even small-town papers turn up corruption in city hall and so forth. As centralized organizations decline, individuals are to some extent picking up the slack, but individuals are also more susceptible to legal and other threats. If you discovered something nasty about a major corporation and knew they’d tie up your life in legal bullshit for the next ten years, would you publish, or would you listen to your wife telling you to think of the kids, or your parents telling you to think about your career and future? Most of us are not martyrs. But it’s much harder for Mega Corp or Mega Individual to threaten The Atlantic and similar outlets.

The power and wealth of a big media company has its uses.

But such a use is definitely a niche case. I could imagine some of the bigger foundations, like ProPublica, offering a legal umbrella to bloggers and other muckrakers to mitigate such risks.

I have intentionally elided the question of what people are going to do if their industries turn into hobbies, for a couple of reasons: as I said above, I don’t have a good solution, and the parts of the economy I’m discussing here are pretty small, and small problems don’t necessarily need “solutions,” per se. People who want to turn their hours into a lot of income should try to find ways and skills to do that, and people who want to turn their hours into fun products like writing or movies should try to find ways to do that too. Crying over industry loss or change isn’t going to turn back the clock, and just because someone could once make a career as a journalist doesn’t mean they can today.

* To some extent I’ve subsidized other people’s computers, because Macs hold their value surprisingly well and can be sold for a quarter to half of their original purchase price three to five years after they’ve been bought. Every computer replaced by my family or our business has been sold on Craigslist. It’s also possible, with a little knowledge and some online guides, to add RAM and an SSD to most computers made in the last couple of years, which will make them feel much more responsive.

Cars and generational shift

In The Atlantic, Jordan Weissmann asks, “Why Don’t Young Americans Buy Cars?” He’s responding to a New York Times article about how people my age don’t want or like cars. The NYT portrays the issue as one of marketing (“Mr. Martin is the executive vice president of MTV Scratch, a unit of the giant media company Viacom that consults with brands about connecting with consumers.” Ugh.) But I don’t think marketing is really the issue: the real problem is that we’ve reached the point where cars suck as a mode of transportation for the marginal person.

Until the 1990s, car culture made sense, to some degree: space was available, exurbs weren’t so damn far from cities, and traffic in many cities wasn’t as bad as it is today. By now we’ve seen the end-game of car culture, and its logical terminus is Southern California, where traffic is a perpetual nightmare. Going virtually anywhere can take 45 minutes or more, everyone has to have a car because everyone else has a car, and cars are pretty much the only transportation game in town. Urban height limits and other zoning rules prevent the really dense construction that might encourage buses or rail. In Southern California you’re pretty much stuck with lousy car commutes and eternal, aggravating traffic—unless you move somewhere you don’t have to put up with them. Given that setup, it shouldn’t surprise us that a lot of people want to get away from cars (I’ve seen some of this dynamic in my own family—more on that later).

The hatred of traffic and car commuting isn’t unique to me. In The New Yorker, Nick Paumgarten’s “There and Back Again: The Soul of the Commuter” reports all manner of ills that result from commuting (and, perhaps, from time spent alone in cars more generally):

Commuting makes people unhappy, or so many studies have shown. Recently, the Nobel laureate Daniel Kahneman and the economist Alan Krueger asked nine hundred working women in Texas to rate their daily activities, according to how much they enjoyed them. Commuting came in last. (Sex came in first.) The source of the unhappiness is not so much the commute itself as what it deprives you of. When you are commuting by car, you are not hanging out with the kids, sleeping with your spouse (or anyone else), playing soccer, watching soccer, coaching soccer, arguing about politics, praying in a church, or drinking in a bar. In short, you are not spending time with other people. The two hours or more of leisure time granted by the introduction, in the early twentieth century, of the eight-hour workday are now passed in solitude. You have cup holders for company.

“I was shocked to find how robust a predictor of social isolation commuting is,” Robert Putnam, a Harvard political scientist, told me. (Putnam wrote the best-seller “Bowling Alone,” about the disintegration of American civic life.) “There’s a simple rule of thumb: Every ten minutes of commuting results in ten per cent fewer social connections. Commuting is connected to social isolation, which causes unhappiness.”

I doubt most people my age are consciously thinking about how commuting makes people unhappy, or how miserable and unpredictable traffic is. But they probably have noticed that commuting sucks—which is part of the reason rents are so high in places where you can live without a car (New York, Boston, Seattle, Portland). Those are places a lot of people my age want to live—in part because you don’t have to drive everywhere. Services like Zipcar do a good job filling in the gap between bus/rail and cars, and much less expensively than single-car ownership. In my own family, it’s mostly my Dad who is obsessed with cars and driving; he’s a baby boomer, so to him, cars represent freedom, the open road, and possibility. To me, they represent smog, traffic, and tedium. To me, there are just too damn many of them in too small a space, and that problem is only going to get worse, not better, over time.

(For more on cities, density, and ideas, see Triumph of the City, The Gated City, and Where Good Ideas Come From.)

Caitlin Flanagan and narrative fallacies in Girl Land

In “The King of Human Error,” Michael Lewis describes Daniel Kahneman’s brilliant work, which I’ve learned about slowly over the last few years as I see him cited more and more; only recently have I come to understand just how pervasive and deserved his influence has been. Kahneman’s latest book, Thinking, Fast and Slow, is the kind of brilliant summa that makes even writing a review difficult, because it’s so good and contains so much material in one place. In his essay, Lewis says that “The human mind is so wedded to stereotypes and so distracted by vivid descriptions that it will seize upon them, even when they defy logic, rather than upon truly relevant facts. Kahneman and Tversky called this logical error the ‘conjunction fallacy.'”

Caitlin Flanagan’s Girl Land is superficially interesting but can be accurately summarized as simply the conjunction fallacy in book form.

We therefore need to be doubly dubious of narrative and narrative fallacies; when we hear things embedded in stories, we ought to think about how those things might not be true, how we’re affected by anecdotes, and how our reasoning holds up under statistical and other kinds of analysis. I like stories, and almost all of us like stories, but too many of us appear unwilling to acknowledge that the stories we tell may be inaccurate or misleading. Consider Tyler Cowen’s TED talk on this subject too.

In the Lewis article, Kahneman also says: “People say your childhood has a big influence on who you become [. . .] I’m not at all sure that’s true.” I’m not sure either. Flanagan and Freud think so; Bryan Caplan is more skeptical. I am leaning steadily more towards the Caplan / Kahneman uncertain worldview. I wish Flanagan would move in that direction too. She starts Girl Land by saying, “Every woman I’ve known describes her adolescence as the most psychologically intense period of her life.” Which is pretty damn depressing: most people spend their adolescence under their parents’ yoke, stuck in frequently pointless high school classes, and finishing it without accomplishing anything of note. That this state could be “the most psychologically intense” of not just a single person’s life, but of every woman’s life, is to demean the accomplishments and real achievements of adult women. It might be that having a schlong disqualifies me from entering this discussion, but see too the links at the end of this post—which go to female critics equally unimpressed with Girl Land.

I’m not even convinced Flanagan has a strong grasp of what women are really like—maybe “girl land” looks different on the inside, because from the outside, as a teenager, I saw very little of the subtlety and sensitivity and weakness Flanagan suggests girls have. Perhaps it’s there, but if so, it’s well-hidden; to me a lot of the book reads like female solipsism and navel-gazing, disconnected from how women and teenage girls actually behave. Flanagan decries “the sexually explicit music, the endless hard-core and even fetish pornography available twenty-four hours a day on the Internet [. . .]” while ignoring that most girls and women appear to like sexually explicit music; if they didn’t, they’d listen to something else and shun guys who like such music. But they don’t.

Since Flanagan’s chief method of research is anecdote, let me do the same: I’ve known plenty of women who like fetish pornography. She also says puzzling stuff like, “For generations, a girl alone in her room was understood to be doing important work.” What? Understood by whom? And what constitutes “important work” here? In Flanagan’s view, it isn’t developing a detailed knowledge of microbiology in the hopes of furthering human understanding; it’s writing a diary.

There are other howlers: Flanagan says that “they [girls] are forced—perhaps more now than at any other time—to experience sexuality on boys’ terms.” This ignores the power of the female “no”—in our society women are the ones who decide to say yes or no to sex. She misses how many girls and women are drawn to bad-boy alpha males; any time they want “to experience sexuality on [girls’] terms,” whatever that might mean, they’re welcome to. Flanagan doesn’t have a sense of agency or how individuals create society. She says that “the mass media in which so many girls are immersed today does not mean them well; it is driven by a set of priorities largely created by men and largely devoted to the exploitation of girls and young women.” But this only works if girls choose to participate in the forms of mass media Flanagan is describing. That they do, especially in an age of infinite cultural possibilities, indicates that girls like whatever this “mass media” is that “does not mean them well.”

I’m not the only one to have noticed this stuff. See also “What Caitlin Flanagan’s new book Girl Land gets wrong about girls.” And “Facts and the real world hardly exist in Caitlin Flanagan’s ‘Girl Land,’ where gauzy, phony nostalgia reigns:” “Flanagan works as a critic, was once a teacher and counselor at an elite private school, and is the mother of two boys, but somehow nothing has matched the intensity of that girlhood; it forms the only authentically compelling material here.” Which is pretty damn depressing, to have the most intense moments of one’s life happen at, say, 15.

Check under the bed for zombies, superheroes, and Mr. Collins

Joe Fassler’s “How Zombies and Superheroes Conquered Highbrow Fiction” is almost believable, but I don’t buy the premise of his essay: “Realistic stories once dominated American literature, but now writers are embracing the fantastical. What happened?”

Realistic stories might’ve once dominated perceived highbrow fiction, but the fantastical has always been present in a lot of literature, even capital-L Literature. Notice this from the article: “Led by their patron saint, Raymond Carver, American minimalists like Grace Paley, Amy Hempel, Richard Ford, Anne Beattie, and Tobias Wolff used finely-tuned vernacular to explore the everyday problems of everyday people.” It completely ignores, say, Neal Stephenson, William Gibson, and Ursula K. Le Guin, all of whom did significant work during that time that’s also widely respected. If I had to bet, I’d put money on Gibson being more literarily important in a hundred years than everyone else on Fassler’s list. And, at least outside of MFA programs, more important today.

Hell, even John Updike, who’s sometimes associated with banal domestic problems, wrote The Witches of Eastwick in 1984 and The Widows of Eastwick in 2008. So I think this story says more about people who perceive themselves to be highbrow ignoring everything else that goes on around them, until the “everything else” becomes the mainstream even within highbrow Literary Discourse. The rest of us know that there hasn’t been a period—and I’m speaking from Beowulf to the present—without its share of monsters, demons, and supernatural powers, even if critics sometimes like to pretend there has been.

Philip Zimbardo and the ever-changing dynamics of sexual politics

A friend sent me a link to Philip Zimbardo’s talk, “The demise of guys?“, which recapitulates and shortens Hanna Rosin’s long Atlantic article, “The End of Men.” Based on the video and on reading lots of material on similar subjects recently (like Baumeister’s Is There Anything Good About Men?, although I do not find all of it compelling), I replied to my (female) friend:

1) There is still a very strong preference for males in much of the developing world, including India and China.

2) Barring unpredictable improvements in reproductive technology that bring us closer to Brave New World, I do not see substantial numbers of women wanting to live without men. There are some, have always been some, and will always be some, but they’re in the minority and probably will be for a long time.

3) I wouldn’t be surprised if what’s actually happening is that we’re seeing an increasing bifurcation in male behavior, as we’re seeing in many aspects of society, where the winners win more and the losers lose more than they once did. I suspect you can see more guys getting a larger number of women—a la Strauss in The Game, guys in frats, and guys who want to play the field in major cities—but also more guys who substitute video games and porn for real women, or who are incarcerated, or otherwise unable to enter / compete in mating markets. This makes women unhappy because they have to compete for a smaller number of “eligible” guys, the word “eligible” being one women love to use without wanting to define it. Women on average aren’t punishing men as much as one might expect for playing the field—see, e.g., this Slate article. Notice how Baumeister is cited there too.

4) Guys are more likely to drop out of high school, but they’re also more likely to be in the top 1% of the income distribution. They’re overrepresented in software, engineering, novel writing, and lots of other high-octane fields. They’re also overrepresented in prisons, special ed classes, and so forth. If you concentrate on the far reaches of either end of the bell curve, you’ll find guys disproportionately represented. Feminists like to focus on the right side; Zimbardo is focusing on the left. Both might be right, and we’re just seeing or noticing more extreme variation than we used to.

5) I’m not convinced the conclusions drawn by Zimbardo follow from the research, although it’s hard to tell without citations.

6) If guys are playing 10,000 hours of video games before age 21, no wonder they’re not great at attracting women and women are on average less attracted to them. This may reinforce the dynamic in number 3, in which those guys who are “eligible” can more easily find available women.

7) Most women under the age of 30 will not answer phone calls any more and will only communicate with men via text. If I were on the market, I would find this profoundly annoying, but it’s true. Many women, at least in college, make themselves chiefly available for sex after drinking heavily at parties; this contributes to perceived problems noted by Zimbardo, instead of alleviating them. If women will mostly sleep with guys after drinking and at parties, that’s what guys will do, and guys who follow alternate strategies will not succeed as well. Despite this behavior, many women also say they want more than just a “hookup,” but their stated and revealed preferences diverge (in many instances, but not all). In other words, I’m not sure males are uniquely more anti-social, at least from my perspective. When stated and revealed preferences diverge, I tend to accept evidence of revealed preferences.

EDIT: At the gym, I was telling a friend about this post, and our conversation reminded me of a student who was a sorority girl. The student and I were talking and she mentioned how her sorority was holding an early morning event with a frat, but a lot of the girls didn’t want to go if there wasn’t going to be alcohol because they didn’t know how to talk to boys without it. Point is, atrophied social skills are not limited to one sex.

8) For more on number 7, see Bogle, Hooking Up: Sex, Dating, and Relationships on Campus; I read the interviews and thought, “A lot of these people, especially the women, must experience extreme cognitive dissonance.” But people on average do not appear to care much about consistency and hypocrisy, at least in themselves.

9) In “Marry Him!“, Lori Gottlieb argues that women are too picky about long-term partners and can drive themselves out of the reproductive market altogether by waiting too long. This conflicts somewhat with Zimbardo’s claims; maybe we’re all too picky and not picky enough at the same time? She’s also mostly addressing women in their 30s and 40s, while Zimbardo appears to be dealing with people in their teens and 20s.

10) If Zimbardo wrote an entire book on the subject, I would read it, although very skeptically.

The Shallows: What the Internet is Doing to Our Brains — Nicholas Carr

One irony of this post is that you’re reading a piece on the Internet about a book that is in part about how the Internet is usurping the place of books. In The Shallows, Carr argues that the Internet encourages short attention spans, skimming, shallow knowledge, and distraction, and that this is a bad thing.

He might be right, but his argument misses one essential component: a firm causal link between the Internet and distraction. He cites suggestive research but never quite crosses the causal bridge from the premise that the Internet is inherently distracting—both because of links and because of the overwhelming amount of material out there—to the conclusion that we as a society and as a people are now endlessly distracted. Along the way, there are many soaring sentiments (“Our rich literary tradition is unthinkable without the intimate exchanges that take place between reader and writer within the crucible of a book”) and clever quotes (Nietzsche as quoted by Carr: “Our writing equipment takes part in the forming of our thoughts”), but that causal link is still weak.

I liked many of the points Carr made; that one about Nietzsche is something I’ve meditated over before, as shown here and here (I’ve now distracted you and you’re probably less likely to finish this post than you would be otherwise; if I offered you $20 for repeating the penultimate sentence in the comments section, I’d probably get no takers); I think our tools do cause us to think differently in some way, which might explain why I pay more attention to them than some bloggers do. And posts on tools and computer setups and so forth seem to generate a lot of hits; Tools of the Trade—What a Grant Writer Should Have is among the more popular Grant Writing Confidential posts.

I use Devonthink Pro as described by Steven Berlin Johnson, which supplements my memory and acts as a research tool, commonplace book, and quote database, and probably weakens my memory while allowing me to write deeper blog posts and papers. Maybe I remember less in my mind and more in my computer, but it still takes my mind to give context to the material copied into the database.

In fact, Devonthink Pro helped me figure out a potential contradiction in Carr’s writing. On page 209, he says:

Even as our technologies become extensions of ourselves, we become extensions of our technologies […] every tool imposes limitations even as it opens possibilities. The more we use it, the more we mold ourselves to its form and function.

But on page 47 he says: “Sometimes our tools do what we tell them to. Other times, we adapt ourselves to our tools’ requirements.” So if “sometimes our tools do what we tell them to,” is it also true that “the more we use it, the more we mold ourselves to its form and function”? The two statements aren’t quite mutually exclusive, but they’re close. Maybe reading Heidegger’s Being and Time and Graham Harman’s Tool-Being will clear up or deepen whatever confusion exists, since Heidegger a) went deep but b), like many philosophers, is hard to read and is closer to a machine for generating multiple interpretations than an illuminator and simplifier of problems. That could apply to philosophy in general as seen from the outside.

This post mirrors some of Carr’s tendencies, like the detour in the preceding paragraph. I’ll get back to the main point for a moment: Carr’s examples don’t necessarily add up to proving his argument, and some of them feel awfully tenuous. Some are also inaccurate; on page 74 he mentions a study that used brain scans to “examine what happens inside people’s heads as they read fiction” and cites Nicole K. Speer’s journal article “Reading Stories Activates Neural Representations of Visual and Motor Experiences,” which doesn’t mention fiction and uses a memoir from 1951 as its sample text.

That’s a relatively minor issue, however, and one that I only discovered because I found the study interesting enough to look up.

Along the way in The Shallows we get lots of digressions, and many of them are well-trod ones: the history of the printing press; the origins of commonplace books; the early artificial intelligence program ELIZA; Frederick Winslow Taylor and his interest in efficiency; the plasticity of the brain; technologies that’ve been used for various purposes, including as metaphors.

Those digressions almost add up to one of my common criticisms of nonfiction books, which is that they’d be better as long magazine articles. The Shallows started as one, and one I’ve mentioned before: “Is Google Making Us Stupid?” The answer: maybe. The answer now, two years and 200 pages later: maybe. Is the book a substantial improvement on the article? Maybe. You’ll probably get 80% of the book’s content from the article, which makes me think you’d be better off following the link to the article and printing it—the better not to be distracted by the rest of The Atlantic. This might tie into the irony that I mentioned in the first line of this post, which you’ve probably forgotten by now because you’re used to skimming works on the Internet, especially moderately long ones that make somewhat subtle arguments.

Offline, Carr says, you’re used to linear reading—from start to finish. Online, you’re used to… something else. But we’re not sure what, or how to label the reading that leads away from the ideal we’ve been living in: “Calm, focused, undistracted, the linear mind is being pushed aside by a new kind of mind that wants and needs to take in and dole out information in short, disjointed, often overlapping bursts—the faster, the better.”

Again, maybe, which is the definitive word for analyzing The Shallows: but we don’t actually have a name for this kind of mind, and it’s not apparent that the change is as major as Carr describes: haven’t we always made disparate connections among many things? Haven’t we always skimmed until we’ve found what we’re looking for, and then decided to dive in? His point is that we no longer do dive in, and he might be right—for some people; but for me, online surfing, skimming, and reading coexist with long-form book reading. Otherwise I wouldn’t have had the fortitude to get through The Shallows.

Still, I don’t like reading on my Kindle very much because I’ve discovered that I tend to hop back and forth between pages. In addition, grad school requires citations that favor conventional books. And for all my carping about the lack of causal certainty regarding Carr’s argument, I do think he’s on to something because of my own experience. He says:

Over the last few years I’ve had an uncomfortable sense that someone, or something, has been tinkering with my brain, remapping the neural circuitry, reprogramming the memory. My mind isn’t going—so far as I can tell—but it’s changing. I’m not thinking the way I used to think. I feel it most strongly when I’m reading. I used to find it easy to immerse myself in a book or a lengthy article. My mind would get caught up in the twists of the narrative or the turns of the argument, and I’d spend hours strolling through long stretches of prose. That’s rarely the case anymore. Now my concentration starts to drift after a page or two. I get fidgety, lose the thread, begin looking for something else to do. I feel like I’m always dragging my wayward brain back to the text. The deep reading that used to come naturally has become a struggle.

I think I know what’s going on. For well over a decade now, I’ve been spending a lot of time online, searching and surfing and sometimes adding to the great databases of the Internet.

He says friends have reported similar experiences. I feel the same way as him and his friends: the best thing I’ve found for improving my productivity and making reading and writing easier is a program called Freedom, which prevents me from getting online unless I reboot my iMac. It throws enough of a barrier between me and the Internet that I can’t easily distract myself through e-mail or Hacker News (Freedom has also made writing this post slightly harder, because during the first draft I couldn’t add links to various appropriate places, but I think it worth the trade-off, and I didn’t realize I was going to write this post when I turned it on). Paul Graham has enough money that he uses another computer for the same purpose, as he describes in the linked essay, which is titled, appropriately enough, “Disconnecting Distraction” (sample: “After years of carefully avoiding classic time sinks like TV, games, and Usenet, I still managed to fall prey to distraction, because I didn’t realize that it evolves.” Guess what distraction evolved into: the Internet).

Another grad student in English Lit expressed shock when I told him that I check my e-mail at most once a day, and sometimes only once every two days, primarily in an effort not to distract myself with electronic kibble or kipple. Carr himself had to do the same thing: he moved to Colorado and jettisoned much of his electronic life, and he “throttled back my e-mail application […] I reset it to check only once an hour, and when that still created too much of a distraction, I began keeping the program closed much of the day.” I work better that way. And I think I read better, or deeper, offline.

For me, reading a book is a very different experience from searching the web, in part because most of the websites I visit are exhaustible much faster than books. I have a great pile of them from the library waiting to be read, and an even greater number bought or gifted over the years. Books worth reading seem to go on forever. Websites don’t.

But if I don’t have that spark of discipline to stay off the Internet for a few hours at a time, I’m tempted to do the RSS round-robin and triple check the New York Times for hours, at which point I look up and say, “What did I do with my time?” If I read a book—like The Shallows, or Carlos Ruiz Zafon’s The Shadow of the Wind, which I’m most of the way through now—I look up in a couple of hours and know I’ve done something. This is particularly helpful for me because, as previously mentioned, I’m in grad school, which means I have to be a perpetual reader (if I didn’t want to be, I’d find another occupation).

To my mind, getting offline can become a comparative advantage because, like Carr, “I’ve had an uncomfortable sense that someone, or something, has been tinkering with my brain,” and that someone is me and that someone is the Internet. But I can’t claim this is true for all people in all places, even as I tell my students to try turning off their Internet access and cell phones when they write their papers. Most of them no doubt don’t. But the few who do learn how to turn off the electronic carnival are probably getting something very useful out of that advice. The ones who don’t probably would benefit from reading The Shallows because they’d at least become aware of the possibility that the Internet is rewiring our brains in ways that might not be beneficial to us, however tenuous the evidence (notice my hedging language: “at least,” “the possibility,” “might not”).

Alas: they’re probably the ones least likely to read it.

The Atlantic, Fiction 2010, and How to Write in 700 Easy Lessons

The Atlantic‘s fiction issue showed up this weekend and has, as usual, some fascinating material—most notably “How to Write in 700 Easy Lessons: The case against writing manuals,” which argues that books treating writing as though it were an exercise in carpentry aren’t a good way to actually learn how to write. As its author says:

The trouble of course is that a good book is not something you can put together like a model airplane. It does not lend itself to that kind of instruction. Every day books are published that contain no real artfulness in the lines, books made up of clichés and limp prose, stupid stories offering nothing but high concept and plot—or supra-literary books that shut out even a serious reader in the name of assertions about the right of an author to be dull for a good cause. (No matter how serious a book is, if it is not entertaining, it is a failure.)

The real solution for writers? Reading:

My advice? Put the manuals and the how-to books away. Read the writers themselves, whose work and example are all you really need if you want to write. And wanting to write is so much more than a pose.

Note that he isn’t objecting to books that deal with the craft or aesthetics of writing (“we have several very fine volumes in that vein (Charles Baxter’s Burning Down the House and John Gardner’s The Art of Fiction come to mind)”), but rather to books that act as though you’re merely laying down two-by-fours (think of the old plot wheels that allegedly helped writers with prompts like “heroine declares her love”).

The books I offered in The very very beginning writer are geared toward the craft/aesthetic approach, not the model airplane approach, although I admit that I’ve read some of the ones using the model airplane approach and promptly gone back to studying characterization with Robertson Davies, plot with Elmore Leonard, and depth with Francine Prose. D.G. Myers said, “I do not believe that anyone can learn to write fiction from a guidebook […]”, and he’s right. But I think that many if not most artists benefit from reflecting on their craft, especially when they’re learning it, and there’s a difference between guidebooks that merely give a formula or recipe and books that help shape fundamental skills.

Some of the fiction in the issue is excellent too: The Landscape of Pleasure is fascinating for its half-knowledgeable narrator in the late adolescent mold, and T.C. Boyle’s The Silence almost ends with “And what was its message? It had no message, he saw that now,” a statement that feels deserved in the context.

On crime fiction

Perhaps C.E.O. libraries contain more crime fiction than they used to; today James Fallows writes what many readers have probably thought:

Like most people who enjoy spy novels and crime fiction, I feel vaguely guilty about this interest. I realize that crime fiction is classy now, and has taken over part of the describing-modern-life job that high-toned novelists abdicated when they moved into the universities. My friend Patrick Anderson*, who has reviewed mysteries for years at the Washington Post, recently published a very good book to this effect: The Triumph of the Thriller. Still, you feel a little cheesy when you see a stack of lurid mystery covers sitting next to the bed.

So I’ve figured out a way to tell the books I can feel good about reading from the ones I should wean myself from. The test is: can I remember something from the book a month later — or, better, six months or a year on. This is the test I apply to “real” fiction too: surprisingly often, a great book is great because it presents a character, a mood, a facet of society, a predicament that you hadn’t thought of before reading the book but that stays with you afterwards.

I’ve never loved crime fiction but respect the best of it. The idea of genre fiction has always seemed suspect to me, as my fundamental test of a novel regardless of the section of the bookstore in which it sits is, “Does it move me?” The definition of “move” has many entries, but if it achieves this fundamental task I don’t care what’s on its cover.

Fallows is depressingly accurate with his barb about “high-toned novelists abdicated when they moved into the universities,” although I’m well aware of exceptions to this comment, which echoes some of the issues raised by A Reader’s Manifesto. He goes on to list a number of his favorites, none of which I’ve read except for A Simple Plan, an excellent novel I highly recommend. It spawned the eponymous movie, which is also excellent and forgotten.
