Why corporations?

Arnold Kling asks: “Why Large Corporations?” I left a comment citing Peter Thiel’s answer:

Companies exist because they optimally address internal and external coordination costs. In general, as an entity grows, so do its internal coordination costs. But its external coordination costs fall. Totalitarian government is entity writ large; external coordination is easy, since those costs are zero. But internal coordination, as Hayek and the Austrians showed, is hard and costly; central planning doesn’t work.

The flipside is that internal coordination costs for independent contractors are zero, but external coordination costs (uniquely contracting with absolutely everybody one deals with) are very high, possibly paralyzingly so. Optimality—firm size—is a matter of finding the right combination.

This applies to corporations more generally, but large corporations presumably persist because they continue to solve this class of problem. Corporations also solve or ameliorate succession and other problems; one way of re-stating Thiel’s point is that corporations help align the interests of a lot of people in approximately the same direction. This mechanism obviously isn’t perfect, but it’s better than alternatives.
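To make that tradeoff concrete, here is a toy sketch of my own (Thiel and Coase supply the idea, not the math, and every number below is invented for illustration): internal coordination costs that grow faster than headcount, external costs that shrink as more work moves in-house, and a firm size that minimizes the sum.

```python
# Toy model of the coordination-cost tradeoff. All numbers are invented;
# only the shape of the curves matters.

def internal_cost(n):
    # Internal coordination costs grow faster than linearly with headcount.
    return 0.5 * n ** 1.5

def external_cost(n):
    # External coordination costs fall as more of the work is done in-house.
    return 1000.0 / (n + 1)

def total_cost(n):
    return internal_cost(n) + external_cost(n)

# Search a range of firm sizes for the cheapest one.
best = min(range(1, 201), key=total_cost)
print(best, round(total_cost(best), 1))  # 17 people, ~90.6 total cost with these made-up numbers
```

A one-person shop sits at the external-cost extreme, a centrally planned economy at the internal-cost extreme; the minimum lands somewhere in between, which is Thiel's point.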

Skepticism of corporations is useful, but only when skeptics understand the problems corporations solve. I took a grad seminar on the Modernism / Postmodernism divide and was assigned the movie The Corporation, which is heavy on innuendo and rhetorical sleight-of-hand and light on intellectual acuity. When the seminar discussed the movie, my classmates were happy to assume that corporations are evil—but they couldn’t identify why they exist, let alone offer coherent alternatives that don’t have obvious drawbacks. I’m not in love with the corporate legal form as some kind of ideal, but without a plausible alternative, feeling-based criticism isn’t terribly helpful. It’s like people who criticize coal power plants. . . and nuclear. . . and other viable, large-scale options.

In the seminar’s discussion, other students and the professor conflated publicly traded corporations with privately held ones and LLCs with C Corps, etc. (Incidentally, if you want to listen to something hilarious yet depressing, get a bunch of English grad students and professors together and tell them to talk about business.) They also thought that all corporations exist solely to make money. That’s not true: corporations do what their shareholders tell them to do. As far as I know, courts have decided that publicly traded companies need to maximize shareholder value, but single-owner corporations can do whatever the single owner or small group of owners wants them to.

Thiel says this about the advantages of starting a new corporation to accomplish some task:

The easiest answer to “why startups?” is negative: because you can’t develop new technology in existing entities. There’s something wrong with big companies, governments, and non-profits. Perhaps they can’t recognize financial needs; the federal government, hamstrung by its own bureaucracy, obviously overcompensates some while grossly undercompensating others in its employ. Or maybe these entities can’t handle personal needs; you can’t always get recognition, respect, or fame from a huge bureaucracy. Anyone on a mission tends to want to go from 0 to 1. You can only do that if you’re surrounded by others who want to go from 0 to 1. That happens in startups, not huge companies or government.

Usually, developing “new technology” dovetails with making money, but it doesn’t necessarily have to: you could in principle start a nonprofit technology company to conduct research or develop a product (in some businesses, competition between for-profits and nonprofits is common: think of healthcare, or gyms). That no one, or almost no one, goes this route means it could be an under-explored avenue for creative and technological success. Or it could be a dead end that no one goes down because doing so would be stupid.

Why little black books instead of phones and computers

“Despite being a denizen of the digital world, or maybe because he knew too well its isolating potential, Jobs was a strong believer in face-to-face meetings.” That’s from Walter Isaacson’s biography of Steve Jobs. It’s a strange way to begin a post about notebooks, but Jobs’ views on the power of a potentially anachronistic practice apply to other seemingly anachronistic practices. I’m a believer in notebooks, though I’m hardly a Luddite and use a computer too much.

The notebook has an immediate tactile advantage over phones: it isn’t connected to the Internet. It’s intimate in a way computers aren’t. A notebook has never interrupted me with a screen that says, “Wuz up?” Notebooks are easy to use without thinking. I know where I have everything I’ve written on-the-go over the last eight years: in the same stack. It’s easy to draw on paper. I don’t have to manage files and have yet to delete something important. The only way to “accidentally delete” something is to leave the notebook submerged in water.

A notebook is the written equivalent of a face-to-face meeting. It has no distractions, no pop-up icons, and no software upgrades. For a notebook, fewer features are better and fewer options are more. If you take a notebook out of your pocket to record an idea, you won’t see nude photos of your significant other. You’re going to see the page where you left off. Maybe you’ll see another idea that reminds you of the one you’re working on, and you’ll combine the two in a novel way. If you want to flip back to an earlier page, it’s easy.

The lack of editability is a feature, not a bug, and the notebook is an enigma of stopped time. Similar writing on a computer can function this way but doesn’t for me: the text is too open and too malleable. Which is wonderful in its own way, and that way opens many new possibilities. But those possibilities are different from the notebook’s. It’s become a cliché to argue that the technologies we use affect the thoughts we have and the way we express those thoughts, but despite being a cliché, the basic power of that observation remains. I have complete confidence that, unless I misplace them, I’ll still be able to read my notebooks in 20 years, regardless of changes in technology.

In Distrust That Particular Flavor, William Gibson says, “Once perfected, communication technologies rarely die out entirely; rather, they shrink to fit particular niches in the global info-structure.” The notebook’s niche is perfect. I don’t think it’s a coincidence that Moleskine racks have proliferated in stores at the same time everyone has acquired cell phones, laptops, and now tablets.

In The Shallows, Nicholas Carr says: “The intellectual ethic is the message that a medium or other tool transmits into the minds and culture of its users.” Cell phones subtly change our relationship with time. Notebooks subtly change our relationship with words and drawings. I’m not entirely sure how, and if I were struggling for tenure in industrial design or psychology I might start examining the relationship. For now, it’s enough to feel the relationship. Farhad Manjoo even cites someone who studies these things:

“The research shows that the type of content you produce is different whether you handwrite or type,” says Ken Hinckley, an interface expert at Microsoft Research who’s long studied pen-based electronic devices. “Typing tends to be for complete sentences and thoughts—you go deeper into each line of thought. Handwriting is for short phrases, for jotting ideas. It’s a different mode of thought for most people.” This makes intuitive sense: It’s why people like to brainstorm using whiteboards rather than Word documents.

I like to write in notebooks despite carrying around a smartphone. Some of this might be indicative of the technology I grew up with—would someone familiar with smartphone touchscreens from age seven have sufficiently dexterous fingers to be faster than they would be with paper?—but I think the obvious answer to “handwriting or computer?” is “both, depending.” As I write this sentence, I have a printout of a novel called ASKING ANNA in front of me, covered with blue pen, because editing on the printed page feels different to me than editing on the screen. I write long-form on computers, though. The plural of anecdote is not data. Still, I have to notice that using different mediums appears to improve the final work product (insert joke about low quality here).

There’s also a shallow yet compelling reason to like notebooks: a disproportionate number of writers, artists, scientists, and thinkers like using them too, and I suspect that even contemporary writers, artists, scientists, and thinkers realize that silence and disconnection, like quiet and solitude, are sometimes useful.

In “With the decline of the wristwatch, will time become just another app?”, Matthew Battles says:

Westerners have long been keenly interested in horology, as David Landes, an economic historian, points out in Revolution in Time, his landmark study of the development of timekeeping technology. It wasn’t the advent of clocks that forced us to fret over the hours; our obsession with time was fully in force when monks first began to say their matins, keeping track of the hours out of strict religious obligation. By the 18th century, secular time had acquired the pressure of routine that would rule its modern mode. Tristram Shandy’s father, waiting interminably for the birth of his son, bemoans the “computations of time” that segment life into “minutes, hours, weeks, and months” and despairs “of clocks (I wish there were not a clock in the kingdom).” Shandy’s father fretted that, by their constant tolling of the hours, clocks would overshadow the personal, innate sense of time—ever flexible, ever dependent upon mood and sociability.

The revolution in electronic technology is wonderful in many ways, but its downsides—distraction, most obviously—are present too. The notebook combats them. Notebooks are an organizing or disorganizing principle: organizing because one keeps one’s thoughts in one place, but disorganizing because one cannot rearrange, tag, and structure thoughts in a notebook as one can on a screen (DevonThink Pro is impossible to replicate in the physical world, and Scrivener can be approximated only with a great deal of friction).

Once you try a notebook, you may realize that you’re a notebook person. You might realize it without trying. If you’re obsessed with this sort of thing, see Michael Loper / Rands’ Sweet Decay, which is better at validating why a notebook is important than at evaluating the notebooks at hand. It was also written in 2008, before Rhodia updated its Webbie.

Like Rands, I’ve never had a sewn binding catastrophically fail. As a result, notebooks without sewn bindings are invisible to me. I find it telling that so many people are willing to write at length about their notebooks and use a nominally obsolete technology.

Once you decide that you like notebooks, you have to decide which one you want. I used to like Moleskines, until one broke and I began reading other stories online about their highly variable quality.

So I’ve begun ranging further afield.

I’ve tested about a dozen notebooks. Most haven’t been worth writing about. But by now I’ve found the best reasonably available notebooks, and I can say this: you probably don’t actually want a Guildhall Pocket Notebook, which is number two. You want a Rhodia Webnotebook.

Like many notebooks, the Guildhall starts off with promise: the pages do lie flat more easily than alternatives. Lines are closely spaced, maximizing writable area, which is important in an expensive notebook that shouldn’t be replaced frequently.

I like the Guildhall, but it’s too flimsy and has a binding that appears unlikely to withstand daily carry. Mine is already bending, and I haven’t even hauled it around that much. The Rhodia is somewhat stiffer. Its pages don’t lie flat quite as easily, and its lines should go to the end of each page. But its superior paper quality and durability make it better than the alternatives.

The Rhodia is not perfect. The A7 version, which I like better than the 3.5 x 5.5 American version, is only available in Europe and Australia, which entails high shipping costs. The Webbie’s lines should stretch to the bottom of the page and be spaced slightly closer together. The name is stupid; perhaps it sounds better in French. The notebook’s cover extends slightly over its paper instead of aligning perfectly. Steve Jobs would demand perfect alignment. To return to Isaacson’s biography:

The connection between the design of a product, its essence, and its manufacturing was illustrated for Jobs and Ive when they were traveling in France and went into a kitchen supply store. Ive picked up a knife he admired, but then put it down in disappointment. Jobs did the same. ‘We both noticed the tiny bit of glue between the handle and the blade,’ Ive recalled. They talked about how the knife’s good design had been ruined by the way it was being manufactured. ‘We don’t like to think of our knives as being glued together,’ Ive said. ‘Steve and I care about things like that, which ruin the purity and detract from the essence of something like a utensil, and we think alike about how products should be made to look pure and seamless.’

I wish the Rhodia were that good. But the Rhodia’s virtues are more important than its flaws: the paper quality is the highest I’ve seen, and none of the Rhodias I’ve bought have broken. If anyone knows of a notebook that combines the Rhodia’s durability with the qualities it lacks, by all means send me an e-mail.


More on the subject: The Pocket Notebooks of 20 Famous Men.

EDIT: See also Keith Devlin’s The Death of Mathematics, which is about the allure of math by hand, rather than by computer; though I don’t endorse what he says, in part because it reminds me so much of Socrates decrying the advent of written over oral culture, I find it stimulating.

College, William Deresiewicz’s Tsunami, and better ways of thinking about university costs

I’m an on-the-record fan of William Deresiewicz, which made reading “Tsunami: How the market is destroying higher education” distressing. It blames problems in contemporary higher education on capitalism and markets, but I think it ignores a couple of things, the most important of which is the role of colleges themselves in raising prices, increasing the number of administrators, and reducing teaching loads for tenured faculty.

Beyond that, Deresiewicz discusses Naomi Klein’s The Shock Doctrine, which is a dubious place to start; see, for example, “Shock Jock” for one critique. In it, Tyler Cowen notes that “Most of the book is a button-pressing, emotionally laden, whirlwind tour of global events over the last 30 years” and that “The book offers not so much an argument but rather a Dadaesque juxtaposition of themes and supposedly parallel developments in the global market.” Klein’s book reminds me of the bad academic writing that assumes the dubious evils of capitalism without quite spelling out what those dubious evils are or what plausible alternatives exist.

Returning to Deresiewicz: “College is now judged in terms of ‘return on investment,’ the delivery of immediately negotiable skills.” But this might simply be due to rising costs: when college was (relatively) inexpensive, it was easy to pay less attention to ROI issues; when it’s almost impossible to afford without loans for middle-class families, it becomes much harder. ROI on degrees that, in contemporary terms, cost $20,000 can be safely ignored. ROI on degrees that cost $150,000 can’t be.
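A quick loan-payment calculation shows why. The numbers below are my assumptions for illustration, not figures from Deresiewicz or anyone else I cite: a 6.8% interest rate and a standard ten-year repayment schedule.

```python
# Back-of-the-envelope student-loan math. The 6.8% rate and 10-year term are
# assumptions for illustration, not figures from the essay or its sources.

def monthly_payment(principal, annual_rate=0.068, years=10):
    r = annual_rate / 12                          # monthly interest rate
    n = years * 12                                # number of monthly payments
    return principal * r / (1 - (1 + r) ** -n)    # standard amortization formula

for cost in (20_000, 150_000):
    print(f"${cost:,} in loans -> about ${monthly_payment(cost):,.0f}/month for ten years")
# $20,000 in loans -> about $230/month for ten years
# $150,000 in loans -> about $1,726/month for ten years
```

At $230 a month, ROI is a background question; at $1,726 a month, it is the question.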

Second, even at public (and private non-profit) schools, some people are getting rich: the college presidents and other managers (including coaches) whose salaries range well into the six figures and higher.

Presidents and other bureaucrats make popular punching bags—hell, I took a couple whacks in my first paragraph—and perhaps they are “overpaid” (though one should ask why Boards of Trustees are willing to pay them what they do), but such highly-paid administrators still aren’t very expensive relative to most colleges’ overall budgets. I would like to see universities exercise greater discipline in this area, but I doubt they will until they’re forced to by markets. At the moment, schools are underwritten by federally-backed, non-dischargeable loans taken out by students, and until that system sees real reform, the market pressure isn’t coming.

The only good answer about the rise in college costs that I’ve seen comes from Robert Archibald and David Feldman’s Why Does College Cost So Much? Their short answer: “Baumol’s Cost Disease.” Unfortunately, it’s more fun pointing fingers at evil administrators, evil markets, evil capitalism, and ignorant students who want to know how much they’re going to make after they graduate.

At the very least, Why Does College Cost So Much? is a better place to start than The Shock Doctrine.

These questions are getting more and more play in the larger culture. “Is College a Lousy Investment?” appears in The Daily Beast. “A Generation Hobbled by the Soaring Cost of College” appears in The New York Times. A surprisingly large number of people with degrees are working in jobs that don’t require them: in coffee shops, as bartenders, as flight attendants, and so on. That’s a lot of money for a degree that turns out to be primarily about personal development and partying. So what should students, at the individual level, do?

To figure out whether college is a good idea, you have to start with what you’re trying to accomplish: getting a credential or gaining knowledge. If the primary purpose is the latter, and you have a strong sense of what you want to do and how you want to do it, college isn’t automatically the best option. It probably is if you’re 18, because, although you don’t realize this now, you don’t know anything. It might not be when you’re, say, 23, however.

Part of the problem with discussing “college” is that you’re discussing a huge number of varied institutions that do all sorts of things for all sorts of people. For people getting $200,000 English degrees from non-elite universities, college makes less sense (mine cost about half that much, and in retrospect I might’ve been better off with a state school for half again as much, but it seemed like a good idea at the time and seems to have worked out for me, as an individual). For people getting technical degrees from state schools, college does a huge amount for lifetime earnings. Talking about these two very different experiences of “college” is like talking about eating at McDonald’s and eating at New York’s best restaurant: they’re both about selling food, but the differences dwarf the similarities. College is so many different things that generalizing is tough or simply dumb.

In response to paragraphs like mine, above, we’re getting essays like Keith Burgess-Jackson’s “You Are Not My Customer.” Burgess-Jackson is correct to say that not everything can be valued in terms of dollars—that’s a point that Lewis Hyde makes in The Gift and others have made in terms of market vs. non-market economies. The question is whether we should view university education through a market lens.

When tuition was affordable in both absolute and relative terms, it made sense to look at universities through a “gift”-style lens, as Burgess-Jackson wants us to. Now that tuition is extremely high, however, we basically don’t have the luxury of making this choice: we can’t be paying $50,000 – $250,000 for an undergrad degree and have the attitude of “Thank you sir, may I have another.” It’s one or the other, not both, and universities are the ones setting prices.

Comments like this: “Good teachers know that most learning, certainly all durable learning, is self-effected” are true. But if Burgess-Jackson thinks that his students aren’t customers, wait until the administration finds that no one wants to take his classes. Unless he’s a publishing superstar, I suspect he’ll find out otherwise. I’d like universities to be less market-oriented and more gift-oriented, but an era of $20,000+ comprehensive costs for eight to nine months of instruction just doesn’t make that orientation plausible.

Complaints about Amazon’s rise ignore how long it has taken the company to rise

The latest raft of articles about Amazon and its power over the publishing industry appeared in the last couple of days (“Amazon, Destroyer of Worlds,” “What Amazon’s ebook strategy means,” “Booksellers Resisting Amazon’s Disruption”), and the first two note what is the most significant thing about Amazon, at least to my mind: how much better an experience Amazon is than the things it replaces (or complements, depending on your perspective).

Like any incumbents, publishers, as far as I can tell, want the status quo, but readers (and consumers of electronic gear) are happy to get something for less than they would’ve otherwise. Stross gets this—”Bookselling in 1994 was a notoriously backward-looking, inefficient, and old-fashioned area of the retail sector. There are structural reasons for this” and so does Yglesias—”But for consumers, it’s great. An Amazon Prime membership is the most outrageously good deal in commerce today. But competitors should be afraid.” Stross is suspicious of Amazon, and so is the New York Times writer. Their suspicions are worth holding, but the basic issue remains: Amazon is successful because it’s good.

Amazon’s books are cheap and arrive fast. The used section is really great, for both buying and selling. Prior to Amazon and its smaller analogues, used bookstores simply wouldn’t buy books with writing in them. Amazon used buyers, however, don’t care, as long as the book is described honestly. I’m getting ready to move, which means that I’m selling or giving away somewhere between a couple hundred and a thousand books. I sold about 15 through Amazon, resulting in about $100 that I wouldn’t have had otherwise. That efficiency is great, but it’s great in a way that publishers don’t like, because publishers would rather have everyone buying new books.

Amazon looks particularly good to me because I’ve spent a lot of time trying to wrangle a literary agent and failing. Five or six years ago, that meant my work would’ve spent its life on my hard drive, and that’s about it. But now that I’m done with comprehensive exams, I have time to hire an editor and a book designer and see what happens through self-publishing. The likely answer is “nothing,” but the probability of nothing happening is 1.0 if I leave the novels and other work on my hard drive forever.

Most of this was predictable: in 1997, Philip Greenspun wrote “The book behind the book behind the book…”, in which he observed: “Looking at the way my book was marketed made me realize that amazon.com is going to rule the world.” I’m sure others predicted the same thing. The publishing industry’s collective response was to shrug. I guess no one read The Innovator’s Dilemma. If publishers once were innovators, they’re not anymore.

Stross is averse to profit to the point that I think he’s signaling mood / group affiliation to some extent, but his basic economic analysis is good. Stuff like this: “piracy is a much less immediate threat than a gigantic multinational [. . .] that has expressed its intention to “disrupt” them, and whose chief executive said recently “even well-meaning gatekeepers slow innovation” (where ‘innovation’ is code-speak for ‘opportunities for me to turn a profit’)” could be rephrased; Amazon selling for less means more consumer surplus, and Amazon’s whole modus operandi appears to be running on very low, if any, profit margins. If it had margins as high as or higher than what publishers and retailers shoot for, it wouldn’t be such a threat.

Anyhow, I too don’t want an Amazon monopoly or monopsony, but I don’t see a good alternative to Amazon. Barnes and Noble is, at best, second-best; their online prices finally became competitive with Amazon’s a year or two ago, but they’re still chasing the leader instead of striving to be the leader.

If DRM on ebooks actually dies—as Stross thinks it will—that will make Barnes and Noble and other players more viable, in the same way that killing DRM on music made Amazon a viable purveyor of music (although a lot of people still use the iTunes Music Store).

Thoughts on possible and perceived income inequality

Someone in my family sent me “Standard of Living Is in the Shadows as Election Issue,” which is about how we allegedly need to break “out of a decade of income stagnation that has afflicted the middle class and the poor and exacerbated inequality.” But measuring standard of living solely through income has a couple of major problems. One is that a lot of people are getting life improvements through non-income-based measures (surfing the Internet is an obvious example). It also appears that the average basket of goods being consumed is changing. Anyone who has to or chooses to consume health care or education is really hurting. Anyone who isn’t is arguably benefiting from the major drop in prices for virtually all manufactured goods.

I’m not convinced that income inequality has changed as much as the media believes it has. Robert J. Gordon wrote “Has the Rise in American Inequality Been Exaggerated?,” which argues that the indices used to measure inequality are flawed, that a lot of income is now needlessly spent on housing (primarily because so many cities restrict housing supply through various means, including arbitrary parking requirements and height limits), and that behavioral choices and changes may have changed perceived inequality. I don’t want to argue the merits of Gordon’s paper. His explanations are at least plausible, and the more one tries to measure these kinds of changes, the harder it is to know whether what one is measuring is real or an artifact of statistics and measurement bias. Standard of living arguments face the same issues.

I mentioned the kinds of goods we consume in the first paragraph. We have large incentive problems built into healthcare, education, and government, all of which are growing faster than inflation and have been for decades. Tyler Cowen’s The Great Stagnation: How America Ate All The Low-Hanging Fruit of Modern History, Got Sick, and Will (Eventually) Feel Better discusses these issues. Cowen also says:

More and more, ‘production’—that word my fellow economists have been using for generations—has become interior to the human mind rather than set on a factory floor. Maybe a tweet doesn’t look like much, but its value lies in the mental dimension. We use Twitter, Facebook, MySpace, and other Web services to construct a complex meld of stories, images, and feelings in our minds. No single bit from the Web seems so weighty on its own, but the resulting blend is rich in joy, emotion, and suspense.

This might be overly utopian: consider the arguments of Sherry Turkle’s Alone Together or Nicholas Carr’s The Shallows, neither of which may be fully persuasive but which still give me pause about the Internet as a “resulting blend. . . rich in joy, emotion, and suspense.”

At least “Standard of Living Is in the Shadows” understands this: “The causes of income stagnation are varied and lack the political simplicity of calls to bring down the deficit or avert another Wall Street meltdown.” The Wall Street meltdown is also a symptom, not a cause, of underlying problems. This is also probably true:

Maybe the biggest reason for optimism is that there is still a strong argument that both globalization and automation help the economy in the long run. This argument remains popular with economists: Trade allows countries to specialize in what they do best, while technology creates opportunities to extend and improve life that never before existed.

Previous periods of rapid economic change also created problems that seemed to be permanent but were not. Neither the cotton gin nor the steam engine nor the automobile created mass unemployment.

I don’t pretend to have answers to these questions, but both major political parties want to sell easy and probably wrong answers. A critical mass of voters hasn’t revolted, or won’t revolt. I don’t see the end game. But we may also get self-driving cars, 3-D printing, and human genetic modification in the next decade. All three are big, transformative technologies that may alter the fabric of human life in major and unforeseeable ways. Remember that a huge number of technologies diffused through society incredibly quickly during the Depression (radio being the best known). In my own case, for example, Amazon, Barnes & Noble, and Apple’s digital reading devices have made self-publishing pragmatic in a way that it wasn’t prior to about 2010 or so, and that’s a pretty big win for me, given my experience with literary agents.

There does, however, seem to be a pervasive societal sense over the last four years that something has gone wrong.

In an e-mail, one friend said this: “These days, I feel like much of society is living in some sort of shared delusion, where people want what they want but are blithely unaware of the effects of their desires” in the context of a link to Branford Marsalis’ take on students today. Marsalis says that he’s learned that “students today are completely full of shit. [. . .] Much like the generation before them, the only thing they’re really interested in is you telling them how right they are and how good they are.” I said to my friend:

I suspect people have always been “living in some sort of shared delusion, where people want what they want but are blithely unaware of the effects of their desires,” but wealth has enabled us to indulge these desires and shared delusions in new ways. And “shared delusion” as a small and relatively unimportant percentage of GDP / government spending is a cheap, affordable thrill. But shared delusion in an environment where economic growth is weak—I tend to buy the Tyler Cowen argument espoused in The Great Stagnation, along with Peter Thiel’s addendums, though I’m more than willing to consider alternate points of view—is much harder. A lot of people are clawing for a bigger slice of a limited pie, which is a more substantial problem than a lot of people clawing for a sliver of a growing pie. Most people don’t even understand the problems, or try to genuinely understand; it’s easier to fit small pieces of complex problems and phenomena into an existing social / political worldview than it is to try getting a handle on the problem domain and the forces in play (most of the political posts I’ve seen on Facebook look more like simple, Haidt-style posturing and mood affiliation than anything else). The delusion isn’t new, but the larger climate / environment has changed. The scale of the delusion has changed too, and scale has qualities of its own.

But I still wonder about something real: when someone makes it really rich (Astors, Vanderbilts, or, today, Gates, Ellison), there’s a tendency for the wealth and the kinds of behaviors that led to the major wealth in the first place to be diluted over time and across generations (think of Paris Hilton as a salient media example). I wonder if that also happens to some extent at the level of countries, but over centuries instead of decades. Most of the time I tend to guess not—the wealthiest countries in 1800 are still mostly the wealthiest countries today, with a couple of notable exceptions (Argentina has gone down, South Korea up)—but it’s still something I ponder. Changing wealth distributions play into this too, although I’m not really sure how.

The preceding paragraphs might be overly pessimistic. Let’s take the long view: things are actually pretty good. The Soviets aren’t threatening us with total annihilation (and vice-versa: the news that Kennedy seriously considered a first strike in the 60s is really scary), we’re not in the Great Depression, there’s still lots of cool stuff happening, books are cheaper than ever, and virtually everyone has a magic box that lets them communicate with almost anyone, anywhere, any time. The minutiae and stupidity of politics are being enabled in new ways, but I think the basic content isn’t so different from the past. By virtually every metric people are better off today than they were 30 or 40 years ago (psychologically speaking, I’m not so sure, but we’ll leave that to the side). Anyone who has had medical treatment that wouldn’t have been possible 40 years ago is aware of this.

As I said above, we may also get self-driving cars, 3-D printing, and human genetic modification in the next decade. These technologies might be overhyped or not pan out. But I still think:

Pretty neat!

People who are well-equipped to take advantage of modern nutrition and communication are in an especially good position. People who fall into the defaults—lots of simple sugars and fast foods, four or five hours of TV of dubious value every day—might not be. Simply being a consumer might be getting harder. So might following default paths. Certainly I derive a huge amount of benefit from being part of modern communication networks, but the kind of person who doesn’t care that much about writing or artistic production or whatever might not care or benefit.

In The Name of the Rose, Adso thinks: “As I lay on my pallet, I concluded that my father should not have sent me out into the world, which was more complicated than I had thought. I was learning too many things” (179). But we can’t avoid getting sent out into the world. All we can do is hope we have or can develop the strength and fortitude necessary to make a go of it. Maybe the very wealthy, who have inherited wealth, can avoid much of the world, but that will only last for a generation or two, and then it’s back against the hard rock face of reality, whether we’re ready for it or not.

School, incidentally, does a poor job of presenting the rock face, which is another issue for another time, but I think it’s possible to present that rock face without being a jerk about it. I try to do so.

I also try to remember that life is hard. Even when it’s beautiful.

I think some women on OkCupid employ the same strategy:

“Mantises are large carnivorous insects. [. . .] they will attack almost anything that moves. When they mate, the male cautiously creeps up on the female, mounts her, and copulates. If the female gets the chance, she will eat him, beginning by biting his head off, either as the male is approaching, or immediately after he mounts, or after they separate. It might seem most sensible for her to wait until copulation is over before she starts to eat him. But the loss of the head does not seem to throw the rest of the male’s body off its sexual stride. Indeed, since the insect head is the seat of some inhibitory nerve centres, it is possible that the female improves the male’s sexual performance by eating his head. If so, this is an added benefit.”

—Richard Dawkins, The Selfish Gene, which is best read in its latest edition.

Why you should become a nurse or physician assistant instead of a doctor: the underrated perils of medical school

Many if not most people who go to medical school are making a huge mistake—one they won’t realize they’ve made until it’s too late to undo.

So many medical students, residents, and doctors say they wish they could go back in time and tell themselves to do something—anything—else. Their stories are so similar that they’ve inspired me to explain, in detail, the underappreciated yet essential problems with medical school and residency. Potential doctors also don’t realize that becoming a nurse or physician assistant (PA) provides many of the job-security advantages of medical school without binding those who start to at least a decade, and probably a lifetime, of finance-induced servitude.

The big reasons to be a doctor are a) lifetime earning potential, b) the limited number of doctors who are credentialed annually, which implies that doctors can restrict supply and thus will always have jobs available, c) higher perceived social status, and d) a desire to “help people” (there will be much more on the dubious value of that last one below).

These reasons come with numerous problems: a) it takes a long time for doctors to make that money, b) it’s almost impossible to gauge whether you’ll actually like a profession or the process of joining that profession until you’re already done, c) most people underestimate opportunity costs, and d) you have to be able to help yourself before you can help other people (and the culture of medicine and medical education is toxic).

Straight talk about doctors and money.

You’re reading this because you tell your friends and maybe yourself that you “want to help people,” but let’s start with the cash. Although many doctors will eventually make a lot of money, they take a long time to get there. Nurses can start making real salaries of around $50,000 when they’re 22. Doctors can’t start making real money until they’re at least 29, and often not until they’re much older.

Keep that in mind when you read the following numbers.

Student Doctor reports that family docs make about $130 – $200K on average, which sounds high compared to what I’ve heard on the street (Student Doctor’s numbers also don’t discuss hours worked). The Bureau of Labor Statistics—a more reliable source—reports that primary care physicians make an average of $186,044 per year. Notice, however, that’s an average, and it also doesn’t take into account overhead. Notice too that the table showing that BLS data indicates more than 40% of doctors are in primary care specialties. Family and general practice doctors make a career median annual wage of $163,510.

Nurses, by contrast, make about $70K a year. They also have a lot of market power—especially skilled nurses who might otherwise be doctors. Christine Mackey-Ross describes these economic dynamics in “The New Face of Health Care: Why Nurses Are in Such High Demand.” Nurses are gaining market power because medical costs are rising and residency programs have a stranglehold on the doctor supply. More providers must come from somewhere. As we know from econ 101, when you limit supply in the face of rising demand, prices rise.

The limit on the number of doctors is pretty sweet if you’re already a doctor, because it means you have very little competition and, if you choose a sufficiently demanding specialty, you can make a lot of money. But it’s bad for the healthcare system as a whole because too many patients chase too few doctors. Consequently, the system is lurching in the direction of finding ways to provide healthcare at lower costs. Like, say, through nurses and PAs.

Those nurses and PAs are going to end up competing with primary care docs. Look at one example, from the New York Times’s “U.S. Moves to Cut Back Regulations on Hospitals”:

Under the proposals, issued with a view to “impending physician shortages,” it would be easier for hospitals to use “advanced practice nurse practitioners and physician assistants in lieu of higher-paid physicians.” This change alone “could provide immediate savings to hospitals,” the administration said.

Primary care docs are increasingly going to see pressure on their wages from nurse practitioners for as long as health care costs outstrip inflation. Consider “Yes, the P.A. Will See You Now:”

Ever since he was a hospital volunteer in high school, Adam Kelly was interested in a medical career. What he wasn’t interested in was the lifestyle attached to the M.D. degree. “I wanted to treat patients, but I wanted free time for myself, too,” he said. “I didn’t want to be 30 or 35 before I got on my feet — and then still have a lot of loans to pay back.”

To recap: nurses start making money when they’re 22, not 29, and they are eating into the market for primary care docs. Quality of care is a concern, but the evidence thus far shows no difference between nurse practitioners who act as primary-care providers and MDs who do.

Calls to lower doctor pay, like the one found in Matt Yglesias’s “We pay our doctors way too much,” are likely to grow louder. Note that I’m not taking a moral or economic stance on whether physician pay should be higher or lower: I’m arguing that the pressure on doctors’ pay is likely to increase because of fundamental forces in healthcare.

To belabor the point about money, The Atlantic recently published “The average female primary-care physician would have been financially better off becoming a physician assistant.” Notice: “Interestingly, while the PA field started out all male, the majority of graduates today are female. The PA training program is generally 2 years, shorter than that for doctors. Unsurprisingly, subsequent hourly earnings of PAs are lower than subsequent hourly earnings of doctors.”

Although the following sentence doesn’t use the word “opportunity costs,” it should: “Even though both male and female doctors both earn higher wages than their PA counterparts, most female doctors don’t work enough hours at those wages to financially justify the costs of becoming a doctor.” I’m not arguing that women shouldn’t become doctors. But I am arguing that women and men both underestimate the opportunity costs of med school. If they understood those costs, fewer would go.

Plus, if you get a nursing degree, you can still go to medical school (as long as you have the pre-requisite courses; hell, you can major in English and go to med school as long as you take the biology, math, physics, and chemistry courses that med schools require). Apparently some medical schools will sniff at nurses who want to become doctors because of the nursing shortage and, I suspect, because med schools want to maintain a clear class / status hierarchy with doctors at the top. Med schools are run by doctors invested in the doctor mystique. But the reality is simpler: medical schools want people with good MCAT scores and GPAs. Got a 4.0 and whatever a high MCAT score is? A med school will defect and take you.

One medical resident friend read a draft of this essay and simply said that she “didn’t realize that I was looking for nursing.” Or being a PA. She hated her third year of medical school, as most med students do, and got shafted in her residency—which she effectively can’t leave. Adam Kelly is right: more people should realize what “the lifestyle attached to an M.D. degree” means.

They should also understand “The Bullying Culture of Medical School” and residency, which is pervasive and pernicious—and it contributes to the relationship failures that notoriously plague the medical world. Yet med schools and residencies can get away with this because they have students and residents by the loans.

Why would my friend have realized that she wanted to be a nurse? Our culture doesn’t glorify nursing the way it does doctoring (except, maybe, on Halloween and in adult cinema). High academic achievers think being a doctor is the optimal road to success in the medical world. They see eye-popping surgeon salary numbers and rhetoric about helping people without realizing that nurses help people too, or that their desire to help people is likely to be pounded out of them by a cold, uncaring system that uses the rhetoric of helping to sucker undergrads into mortgaging their souls to student loans. Through the magic of student loans, schools are steadily siphoning off more of doctors’ lifetime earnings. Given constraints and barriers to entry into medicine, I suspect med schools and residencies will be able to continue doing so for the foreseeable future. The logical response for individuals is to exit the market, because they have so little control over it.

Sure, $160K/year probably sounds like a lot to a random 21-year-old college student, because it is, but after taking into account the investment value of money, student loans for undergrad, student loans for med school, how much nurses make, and residents’ salaries, most doctors’ earnings probably fail to outstrip nurses’ earnings until well after the age of 40. Dollars per hour worked probably don’t outstrip nurses’ earnings until even later.
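Here is a rough sketch of that race, using my own round numbers rather than anything from the sources above, and ignoring taxes, undergrad debt, practice overhead, and hours worked—all of which push the crossover later:

```python
# Toy net-worth race between a nurse and a doctor. All figures are my own
# round assumptions: nursing pays $50K from age 22 and $70K from 26 on; the
# doctor borrows $45K/year for four years of med school, earns $50K/year for
# three years of residency, then $160K/year as an attending. A single 6% rate
# stands in for both loan interest and forgone investment returns.

RATE = 0.06

def nurse_income(age):
    return 50_000 if age < 26 else 70_000

def doctor_income(age):
    if age < 26:
        return -45_000    # med school: borrowing, not earning
    if age < 29:
        return 50_000     # residency
    return 160_000        # attending

def crossover_age(start=22, stop=60):
    nurse = doctor = 0.0
    for age in range(start, stop + 1):
        nurse = nurse * (1 + RATE) + nurse_income(age)
        doctor = doctor * (1 + RATE) + doctor_income(age)
        if doctor >= nurse:
            return age
    return None

print(crossover_age())  # mid-30s with these assumptions; later once taxes and hours count
```

Fiddle with the numbers and the crossover moves, but with any plausible inputs it doesn’t land in the 20s.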

To some extent, you’re trading happiness, security, dignity, and your sex life in your 20s, and possibly early 30s, for a financial opportunity that might not pay off until your 50s.

Social status is nice, but not nearly as nice when you’re exhausted at 3 a.m. as a third-year, or exhausted at 3 a.m. as a first-year resident, or exhausted at 3 a.m. as a third-year resident and you’re 30 and you just want a quasi-normal life, damnit, and maybe some time to be an artist. Or when you’re exhausted at 3 a.m. as an attending on-call physician because the senior doctors at the HMO know how to stiff the newbies by forcing them to “pay their dues.”

This is where prospective medical students protest, “I’m not going to be a family medicine doc.” Which is okay: maybe you won’t be. Have fun in five or seven years of residency instead of three. But don’t confuse the salaries of superstar specialties like neurosurgery and cardiology with the average experience; more likely than not you’re average. There’s this social ideal of doctors being rich. Not all are, even with barriers to entry in place.

The underrated miseries of residency

As one resident friend said, “You can see why doctors turn into the kind of people they do.” He meant that the system itself lets patients abuse doctors, lets doctors abuse residents, and encourages people generally to treat each other not like people but like cogs. At least nurses who discover they hate nursing can quit, since they will have a portable undergrad degree and won’t have obscene graduate school student loans. They can probably go back to school and get a second degree in twelve to twenty-four months. (Someone with a standard bachelor’s degree can probably enter nursing in the same time period.)

In normal jobs, a worker who learns about a better opportunity in another company or industry can pursue it. Students sufficiently dissatisfied with their university can transfer.[1] Many academic grad schools make quitting easy. Residencies don’t. The residency market is tightly controlled by residency programs that want to restrict residents’ autonomy—and thus their wages and bargaining power. Once you’re in a residency, it’s very hard to leave, and you can only do so at particular times, in the gap between residency years.

This is a recipe for exploitation; many of the labor battles during the first half of the twentieth century were fought to prevent employers from wielding this kind of power. For medical residents, however, employers have absolute power enshrined in law—though employers cloak their power in the specious word “education.”

Once a residency program has you, they can do almost anything they want to you, and you have little leverage. You don’t want to be in situations where you have no leverage, yet that’s precisely what happens the moment you enter the “match.”

Let’s explain the match, since almost no potential med students understand it. The match occurs in the second half of the fourth year of medical school. Students apply to residencies in the first half of their fourth year, interview at potential hospitals, and then list the residencies they’re interested in. Residency program directors then rank the students, and the National Residency Match Program “matches” students to programs using a hazily described algorithm.
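The algorithm itself is less mysterious than it’s usually presented: as I understand it, the match is built on applicant-proposing deferred acceptance (the Gale–Shapley stable-matching idea), as redesigned by Roth and Peranson in the late 1990s. Here is a bare-bones sketch of that core idea, with hypothetical names and none of the real system’s extensions for couples and the like:

```python
# Bare-bones sketch of applicant-proposing deferred acceptance (Gale-Shapley),
# the family of algorithms the residency match is built on. This toy version
# ignores couples and other real-world extensions; all names are hypothetical.

def deferred_acceptance(applicant_prefs, program_prefs, capacity):
    # rank[p][a]: how highly program p ranks applicant a (lower is better)
    rank = {p: {a: i for i, a in enumerate(prefs)} for p, prefs in program_prefs.items()}
    next_choice = {a: 0 for a in applicant_prefs}   # index of next program to apply to
    tentative = {p: [] for p in program_prefs}      # applicants each program tentatively holds
    free = list(applicant_prefs)                    # applicants without a tentative spot

    while free:
        a = free.pop()
        prefs = applicant_prefs[a]
        if next_choice[a] >= len(prefs):
            continue                                # list exhausted: applicant goes unmatched
        p = prefs[next_choice[a]]
        next_choice[a] += 1
        if a not in rank[p]:                        # program didn't rank this applicant
            free.append(a)
            continue
        tentative[p].append(a)
        if len(tentative[p]) > capacity[p]:         # over capacity: bump the lowest-ranked
            tentative[p].sort(key=lambda x: rank[p][x])
            free.append(tentative[p].pop())
    return tentative

# Tiny, entirely hypothetical example:
applicants = {"alice": ["mercy", "general"], "bob": ["mercy", "general"], "cara": ["mercy"]}
programs = {"mercy": ["cara", "alice", "bob"], "general": ["bob", "alice"]}
print(deferred_acceptance(applicants, programs, {"mercy": 1, "general": 1}))
# {'mercy': ['cara'], 'general': ['bob']} -- alice goes unmatched in this toy case
```

The point of the sketch is that the mechanism is straightforward; the objectionable part is the rule set wrapped around it—binding results, no side negotiation—which is what the rest of this section is about.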

Students are then obligated to attend that residency program. They can’t privately negotiate with other programs, as students can for, say, undergrad admissions, or med school admissions—or almost any other normal employment situation. Let me repeat and bold: Residents can’t negotiate. They can’t say, “How about another five grand?” or “Can I modify my contract to give me fewer days?” If a resident refuses to accept her “match,” then she’s blackballed from re-entering for the next three years.

Residency programs have formed a cartel designed to control cost and reduce employee autonomy, and hence salaries. I only went to law school for a year, by accident, but even I know enough law and history to recognize a very clear situation of the sort that anti-trust laws are supposed to address in order to protect workers. When my friend entered the match process like a mouse into a snake’s mouth, I became curious, because the system’s cruelty, exploitation, and unfairness to residents is an obvious example of employers banding together to harm employees. Lawyers often get a bad rap—sometimes for good reasons—but the match looked ripe for lawyers to me.

It turns out that I’m not a legal genius and that real lawyers have noticed this obvious anti-trust violation; an anti-trust lawsuit was filed in the early 2000s. Read about it in the NYTimes, including a grimly hilarious line about how “The defendants say the Match is intended to help students and performs a valuable service.” Ha! A valuable service to employers, since employees effectively can’t quit or negotiate with individual employers. Curtailing employee power by distorting markets is a valuable service. The article also notes regulatory capture:

Meanwhile, the medical establishment, growing increasingly concerned about the legal fees and the potential liability for hundreds of millions of dollars in damages, turned to Congress for help. They hired lobbyists to request legislation that would exempt the residency program from the accusations. A rider, sponsored by Senators Edward M. Kennedy, Democrat of Massachusetts, and Judd Gregg, Republican of New Hampshire, was attached to a pension act, which President Bush signed into law in April.

In other words, employers bought Congress and President Bush in order to screw residents.[2] If you attend med school, you’re agreeing to be screwed for three to eight years after you’ve incurred hundreds of thousands of dollars of debt, and you have few if any legal rights to attack the exploitative system you’ve entered.

(One question I have for knowledgeable readers: do you know of any comprehensive discussion of residents and unions? Residents can apparently unionize—which, if I were a medical resident, would be my first order of business—but the only extended treatment of the issue I’ve found so far is here, which deals with a single institution. Given how poorly residents are treated, I’m surprised there haven’t been more unionization efforts, especially in union-friendly, resident-heavy states like California and New York. One reason might be simple: people fear being blackballed at their ultimate jobs, and a lot of residents seem to have Stockholm Syndrome.)

Self-interested residency program directors will no doubt argue that residency is set up the way it is because the residency experience is educational. Doctors will argue the same, because they too have a stake in the process. Residency directors and other administrators make money off residents who work longer hours and don’t have alternatives. We shouldn’t be surprised that they seek other legal means of restricting competition—so much of the fight around medicine isn’t about patient care; it’s about regulatory environments and legislative initiatives. For one recent but very small example of the problems, see “When the Nurse Wants to Be Called ‘Doctor’,” concerning nursing doctorates.

I don’t buy their arguments, and for more than ad hominem reasons. The education at many residency programs is tenuous at best. One friend, for example, is in a program that requires residents to attend “conference,” where residents are supposed to learn. But “conference” usually degenerates into someone nattering and most of the residents reading or checking their phones. Conference is mandatory, regardless of its utility. Residents aren’t 10-year-olds, yet they’re treated as such.

These problems are well-known (“What other profession routinely kicks out a third of its seasoned work force and replaces it with brand new interns every year?”). But there’s no political impetus to act: doctors like limiting their competition, and people are still fighting to get into medical school.

Soldiers usually make four-year commitments to the military. Even ROTC only demands a four- to five-year commitment after college graduation—at which point officers can choose to quit and do something else. Medicine is, in effect, at least a ten-year commitment: four of medical school, at least three of residency, and at least another three to pay off med school loans. At which point a smiling twenty-two-year-old graduate will be a glum thirty-two-year-old doctor who doesn’t entirely get how she got to be a doctor anyway, and might tell her earlier self the things that earlier self didn’t know.

Contrast this experience with nursing, which requires only a four-year degree, or PAs, who have two to three years of additional school. As John Goodman points out in “Why Not A Nurse?“, nursing is much less heavily or uniformly regulated than doctoring. Nurses can move to Oregon:

Take JoEllen Wynne. When she lived in Oregon, she had her own practice. As a nurse practitioner, she could draw blood, prescribe medication (including narcotics) and even admit patients to the hospital. She operated like a primary care physician and without any supervision from a doctor. But, JoEllen moved to Texas to be closer to family in 2006. She says, “I would have loved to open a practice here, but due to the restrictions, it is difficult to even volunteer.” She now works as an advocate at the American Academy of Nurse Practitioners.

and, based on the article, avoid Texas. Over time, we’ll see more articles like “Why Nurses Need More Authority: Allowing nurses to act as primary-care providers will increase coverage and lower health-care costs. So why is there so much opposition from physicians?” Doctors will oppose this, because it’s in their economic self-interest to avoid more competition.

The next problem with becoming a doctor involves what economists call “information asymmetry.” Most undergraduates making life choices don’t realize the economic problems I’ve described above, let alone some of the other problems I’m going to describe here. When I lay out the facts about becoming a doctor to my freshman writing students, many of those who want to be doctors look at me suspiciously, like I’m offering them a miracle weight-loss drug or have grown horns and a tail.

“No,” I can see them thinking, “this can’t be true because it contradicts so much of what I’ve been implicitly told by society.” They don’t want to believe. Which is great—right up to the point they have to live their lives and see how those lives are being shaped by forces that no one told them about. Just like no one told them about opportunity costs or what residencies are really like.

Medical students and doctors have complained to me about how no one told them how bad it is. No one really told them, that is. I’m not sure how much of this I should believe, but, at the very least, if you’re reading this essay you’ve been told. I suspect a lot of now-doctors were told or had an inkling of what it’s really like, but they failed to imagine the nasty reality of 24- or 30-hour call.

They, like most people, ignore information that conflicts with their current belief system about the glamor of medicine to avoid cognitive dissonance (as we all do: this is part of what Jonathan Haidt points out in The Righteous Mind, as does Daniel Kahneman in Thinking, Fast and Slow). Many now-doctors, even if they were aware, probably ignored that awareness and now complain—in other words, even if they had better information, they’d have ignored it and continued on their current path. They pay attention to status and money instead of happiness.

For example, Penelope Trunk cites Daniel Gilbert’s Stumbling on Happiness and says:

Unfortunately, people are not good at picking a job that will make them happy. Gilbert found that people are ill equipped to imagine what their life would be like in a given job, and the advice they get from other people is bad, (typified by some version of “You should do what I did.”)

Let’s examine some other vital takeaways from Stumbling on Happiness: [3]

* Making more than about $40,000/year does little to improve happiness (the threshold should probably be higher in, say, NYC, but the main point stands: people think money and happiness show a linear correlation when they really don’t).

* Most people value friends, family, and social connections more than additional money, at least once their income reaches about $40K/year. If you’re trading time with friends and family for money, or, worse, for commuting, you’re making a tremendous, doctor-like mistake.

* Your sex life probably matters more than your job, and many people mis-optimize in this area. I’ve heard many residents and med students say they’re too busy to develop relationships or have sex with their significant others, if they manage to retain one or more, and this probably makes them really miserable.

* Making your work meaningful is important.

Attend med school without reading Gilbert at your own peril. No one in high school or college warns you of the dangers of seeking jobs that harm your sex life, because high schools are too busy trying to convince you not to have one. So I’m going to issue the warning: if you take a job that makes you too tired to have sex or too tired to engage in contemporary mate-seeking behaviors, you’re probably making a mistake.

The sex-life issue might be overblown, because people who really want to have one find a way to have one; some med students and residents are just offering the kinds of generic romantic complaints that everyone stupidly offers, and which mean nothing more than discussion about the weather. You can tell what a person really wants by observing what they do, rather than what they say.

But med students and residents have shown enough agony over trade-offs and time costs to make me believe that med school does cast a genuine pall over romantic lives. There is a correlation-is-not-causation problem—maybe med school attracts the romantically inept—but I’m willing to assume for now that it doesn’t.

The title of Trunk’s post is “How much money do you need to be happy? Hint: Your sex life matters more.” If you’re in an industry that consistently makes you too tired for sex, you’re doing things wrong and need to re-prioritize. Nurses can work three twelves a week, or thirty-six total hours, and be okay. But, as described above, being a doctor doesn’t let employees re-prioritize.

Proto-doctors screw up their 20s and 30s, sexually speaking, because they’ve committed to a job that’s so cruel to its occupants that, if doctors were equally cruel to patients, those doctors would be sued for malpractice. And the student loans mean that med students effectively can’t quit. They’ve traded sex for money and gotten a raw deal. They’ll be surrounded by people who are miserable and uptight—and who have also mis-prioritized.

You probably also don’t realize how ill-equipped you are to imagine what your life would be like as a doctor, because a lot of doctors sugarcoat their jobs, or because you don’t know any actual doctors. So you extrapolate from people who say, “That’s great” when you say you want to be a doctor. If you say you’re going to stay upwind and see what happens, they don’t say, “That’s great,” because they simply think you’re another flaky college student. But saying “I want to go to med school” or “I want to go to law school” isn’t as level-headed as it sounds (though I took the latter route; fortunately, I had the foresight to quit). Those routes, if they once led to relative success and happiness, don’t any more, at least for most people, who can’t imagine what life is like on the other end of the process. With law, at least the process is three years, not seven or more.

No one tells you this because there’s still a social and cultural meme about how smart doctors are. Some are. Lots more are very good memorizers and otherwise a bit dull. And you know what? That’s okay. Average doctors seeing average patients for average complaints are fixing routine problems. They’re directing traffic when it comes to problems they can’t solve. Medicine doesn’t select for being well-rounded, innovative, or interesting; if anything, it selects against those traits through its relentless focus on test scores, which don’t appear to correlate strongly with being interesting or intellectual.

You aren’t necessarily going to associate with the great minds of your generation by going to medical school. Doctors may not even really be associating with great minds. They might just be associating with excellent memorizers. I didn’t realize this until I met lots of doctors, had repeated stabs at real conversations with them, and eventually realized that many aren’t intellectually curious or imaginative. There are, of course, plenty of smart, intellectually curious doctors, but given the meme about the intelligence of doctors, there are fewer than imagined and plenty who see themselves as skilled technicians and little more.

A lot of doctors are the smartest stupid people you’ve met. Smart, because they’ve survived the academic grind. Stupid, because they signed up for med school, which is effectively signing away extraordinarily valuable options. Life isn’t a videogame. There is no reset button, no do-over. Once your 20s are gone, they’re gone forever.

Maybe your 20s are supposed to be confusing. Although I’m still in that decade, I’m inclined to believe this idea. Medical school offers a trade-off: your professional life isn’t confusing and you have a clear path to a job and paycheck. If you take that path, your main job is to jump through hoops. But the path and the hoops offer clarity of professional purpose at great cost in terms of hours worked, debt assumed, and, perhaps worst of all, flexibility. Many doctors would be better off with the standard confusion, but those doctors take the clear, well-lit path out of fear—which is the same thing that drives so many bright but unfocused liberal arts grads into law schools.

I’ve already mentioned prestige and money as two big reasons people go to med school. Here’s another: fear of the unknown. Bright students start med school because it’s a clearly defined, well-lit path. Such paths are becoming increasingly crowded. Uncertainty is scary. You can fight the crowd, or you can find another way. Most people are scared of the other way. They shouldn’t be, and they wouldn’t be if they knew what graduate school paths are like.

For yet another perspective on the issue of not going to med school, see Ali Binazir’s “Why you should not go to medical school — a gleefully biased rant,” which has more than 200 comments as of this writing. Binazir correctly says there’s only one thing that should drive you to med school: “You have only ever envisioned yourself as a doctor and can only derive professional fulfillment in life by taking care of sick people.”

If you can only derive professional fulfillment in life by taking care of sick people, however, you should remember that you can do so by being a nurse or a physician assistant. And notice the words Binazir chooses: he doesn’t say, “help people”—he says “taking care of sick people.” The path from this feeling to actually taking care of sick people is a long, miserable one. And you should work hard at envisioning yourself as something else before you sign up for med school.

You can help people in all kinds of ways; the most obvious ones are by having specialized, very unusual skills that lots of people value. Alternatively, think of a scientist like Norman Borlaug (I only know about him through Tyler Cowen’s book The Great Stagnation; in it, Cowen also observes that “When it comes to motivating human beings, status often matters at least as much as money.” I suspect that a lot of people going to medical school are really doing it for the status).

Borlaug saved millions of lives by developing hardier seeds and through other work as an agronomist. I don’t want to say something overwrought and possibly wrong like, “Borlaug has done more to help people than the vast majority of doctors,” since that raises all kinds of questions about what “more” and “help” and “vast majority” mean, but it’s fair to use him as an example of how to help people outside of being a doctor. Programmers, too, write software that can be instantly disseminated to billions of people, and yet those who want to “help” seldom think of programming as a helping profession, even though it is.

For a lot of the people who say they want to be a doctor so they can help people, greater intellectual honesty would lead them to acknowledge mixed motives in which helping people is only one and perhaps not the most powerful. On the other hand, if you really want to spend your professional life taking care of sick people, Binazir is right. But I’m not sure you can really know that before making the decision to go to medical school, and, worse, even if all you want to do is take care of sick people, you’re going to find a system stacked against you in that respect.

You’re not taking the best care of people at 3 a.m. on a 12- to 24-hour shift in which your supervisors have been screaming at you and your program has been jerking your schedule around like a marionette all month, leaving your sleep schedule out of whack. Yeah, someone has to do it, but it doesn’t have to be you, and if fewer people were struggling to become doctors, the system itself would have to change to entice more people into medical school.

One other, minor point: you should get an MD and maybe a PhD if you really, really want to do medical research. But that’s a really hard thing for an 18 – 22 year old to know, and most doctors aren’t researchers. Nonetheless, nurses (usually) aren’t involved in the same kind of research as research MDs. I don’t think this point changes the main thrust of my argument. Superstar researchers are tremendously valuable. If you think you’ve got the tenacity and curiosity and skills to be a superstar researcher, this essay doesn’t apply to you.

Very few people will tell you this, or tell you even if you ask; Paul Graham writes about a doctor friend in his essay “How to Do What You Love”:

A friend of mine who is a quite successful doctor complains constantly about her job. When people applying to medical school ask her for advice, she wants to shake them and yell “Don’t do it!” (But she never does.) How did she get into this fix? In high school she already wanted to be a doctor. And she is so ambitious and determined that she overcame every obstacle along the way—including, unfortunately, not liking it.

Now she has a life chosen for her by a high-school kid.

When you’re young, you’re given the impression that you’ll get enough information to make each choice before you need to make it. But this is certainly not so with work. When you’re deciding what to do, you have to operate on ridiculously incomplete information. Even in college you get little idea what various types of work are like. At best you may have a couple internships, but not all jobs offer internships, and those that do don’t teach you much more about the work than being a batboy teaches you about playing baseball.

Having a life chosen for you by a 19-year-old college student or 23-year-old wondering what to do is only marginally better.

I’m not the first person to notice that people don’t always understand what they’ll be like when they’re older; in “Aged Wisdom,” Robin Hanson says:

You might look inside yourself and think you know yourself, but over many decades you can change in ways you won’t see ahead of time. Don’t assume you know who you will become. This applies all the more to folks around you. You may know who they are now, but not who they will become.

This doesn’t surprise me anymore. Now I acknowledge that I’m very unlikely to be able to gauge what I’ll want in the future.

Contemplate too the psychological makeup of many med students. They’re good rule-followers and test-takers; they tend to be very good on tracks but perhaps not so good outside of tracks. Prestige is very important, as is listening to one’s elders (who may or may not understand the ways the world is changing in fundamental ways). They may find the real world large and scary, while the academic world is small, highly directed, and sufficiently confined to prevent intellectual or monetary agoraphobia.

These issues are addressed well in two books: Excellent Sheep by William Deresiewicz and Zero to One by Peter Thiel and Blake Masters. I won’t endorse everything in either book, but pay special attention to their discussions of the psychology of elite students and especially the weaknesses that tend to appear in that psychology.

It is not easy for anyone to accept criticism, but that may be particularly true of potential med students, who have been endlessly told how “smart” they are, or supposedly are. Being smart in the sense of passing classes and acing tests may not necessarily lead you towards the right life, and, moreover, graduate schools and consulting have evolved to prey on your need for accomplishment, positive feedback, and clear metrics. You are the food they need to swallow and digest. Think long and hard about that.

If you don’t want to read Excellent Sheep and Zero to One, or think you’re “too busy,” I’m going to marvel: you’re willing to give hundreds of thousands of dollars and years of your life to a field that you’re not willing to spend $30 and half a day understanding better? That’s a dangerous yet astonishingly common level of willful ignorance.

Another friend asked what I wanted to accomplish with this essay. The small answer: help people understand things they didn’t understand before. The larger answer—something like “change medical education”—isn’t very plausible because the forces encouraging people to be doctors are so much larger than me. The power of delusion and prestige is so vast that I doubt I can make a difference through writing alone. Almost no writer can: the best one can hope for is changes at the margin over time.

Some med school stakeholders are starting to recognize the issues discussed in this essay: for example, The New York Times has reported that New York University’s med school may be able to shorten its duration from four years to three, and “Administrators at N.Y.U. say they can make the change without compromising quality, by eliminating redundancies in their science curriculum, getting students into clinical training more quickly and adding some extra class time in the summer.” This may be a short-lived effort. But it may also be an indicator that word about the perils of med school is spreading.

I don’t expect this essay to have much impact. It would require people to a) find it, which most probably won’t do, b) read it, which most probably won’t do, c) understand it, which most of those who read it won’t or can’t do, and d) implement it. Most people don’t seem to give their own futures much real consideration. I know a staggering number of people who go to law or med or b-school because it “seems like a good idea.” Never mind the problem with following obvious paths, or the question of opportunity costs, or the difficulty in knowing what life is like on the other side.

People just don’t think that far ahead. I’m already imagining people on the Internet who are thinking about going to med school and who see the length of this essay and decide it’s not worth it—as if they’d rather spend a decade of their lives gathering the knowledge they could read in an hour. They just don’t understand the low quality of life medicine entails for many if not most doctors.

Despite the above, I will make one positive point about med school: if you go, if you jump through all the hoops, if you make it to the other side, you will have a remunerative job for life, as long as you don’t do anything grossly awful. Job demand and pay are important. Law school doesn’t offer either anymore. Many forms of academic grad schools are cruel pyramid schemes propagated by professors and universities. But medicine does in fact have a robust job market on the far end. That is a real consideration. You’re still probably better off being a nurse or PA—nurses are so in-demand that nursing schools can’t grow fast enough, at least as of 2015—but I don’t want to pretend that the job security of being a doctor doesn’t exist.

I’m not telling you what to do. I rarely tell anyone what to do. I’m describing trade-offs and asking if you understand them. It appears that few people do. Have you read this essay carefully? If not, read it again. Then at least you won’t be one of the many doctors who hate what they do, warn others about how doctors are sick of their profession, and wish they’d been wise enough to take a different course.

If you enjoyed this essay, you should also read my novel, Asking Anna. It’s a lot of fun for not a lot of money!


[0] Here’s another anti-doctor article: “Why I Gave Up Practicing Medicine.” Scott Alexander’s “Medicine As Not Seen On TV” is also good. The anti-med-school lit is available to those who seek it. Most potential med students don’t seem to. Read the literature and understand the perils. If, after learning about them, you still want to go, great.

Here too is intelligent commenter ktswan, who qualifies the rest of the article. She went from nursing to med school and writes, “I am much happier in medicine than lots of my colleagues, I think in many ways because I knew exactly what I was getting into, what I was sacrificing, and what I wanted to gain from it.”

[1] One could argue that many of the problems in American K – 12 education stem from a captive audience whose presence or absence in a school is based on geography and geographical accidents rather than the school’s merit.

[2] You can read more about the match lawsuit here. Europe doesn’t have a match-like system; there, the equivalent of medical residency is much more like a job.

[3] Stumbling on Happiness did more to change my life and perspective than almost any other book. I’ve read thousands of books. Maybe tens of thousands. Yet this one influences my day-to-day decisions and practices by clarifying how a lot of what people say they value they don’t, and how a lot of us make poor life choices based on perceived status that end up screwing us. Which is another way of saying we end up screwing ourselves. Which is what a lot of medical students, doctors, and residents have done. No one holds the proverbial gun to your head and orders you into med school (unless you have exceptionally fanatical parents). When you’re doing life, at least in industrialized Western countries, you mostly have yourself to blame for your poor choices, made without enough knowledge to know the right choices.

Thanks to Derek Huang, Catherine Fiona MacPherson, and Bess Stillman for reading this essay.

Why “Man’s Search for Meaning” and Viktor Frankl

I recommend Viktor Frankl’s Man’s Search for Meaning to a fair number of people in a wide array of contexts, and one of my students asked why I included him in a short list of books at the back of the syllabus. Though I’ve mentioned him on the blog a number of times (see here for one example), I hadn’t really considered why I admire his book and so wanted to take a shot at doing so.

As Frankl says, we’re suffering from a bizarre dearth of meaning in our everyday lives. One can see this in the emptiness that a lot of people report feeling and, more seriously, in suicide rates. In material terms, people in Western societies have never been as well off as we are today—and most of Asia and Latin America, along with much of Africa, are catching up with surprising speed. Yet in “spiritual” terms (I hate that much-abused word but can think of no better one—metaphysical, perhaps?) many of us aren’t doing so well, which is odd, given the cornucopia of goods and opportunities around us. I think Frankl tries to teach us how to better actualize our lives—we truly don’t live by bread alone—and I think he has a keen sense of the malaise many of us feel. I’ve struggled with these issues too and think Frankl’s treatment of them is a good one.

One can see another version or statement of this general problem in Louis CK’s much-linked bit “Everything is amazing right now and nobody is happy.” It has 7 million views, and while YouTube views are hardly a good metric for importance or content, I think CK’s bit has gone viral because he’s touching a profound problem that many people feel, even if they don’t articulate it, or usually won’t articulate to themselves or others.

Many people also seem to feel isolated (see Putnam’s possibly flawed Bowling Alone for one account). Yet because they feel isolated, they have no one to talk to about feeling isolated! The paradox worsens isolation, and there isn’t an obvious outlet for these kinds of feelings or problems. Plus, technology seems to enable crappier and more tenuous relationships, when many of us really want the opposite. That’s partly a problem of the person using the technology—we can talk to anyone, anywhere, despite many of us having nothing to say—but technology also pushes us to use it in particular ways, which is one of my points about how Facebook is bad for relationships.

And people are mostly on their own in dealing with this. Schools, as they’re widely conceived of right now, are largely seen as job-training centers, rather than as places to figure out how you should live your life. So they’re not very helpful. Religion or religious feeling is one answer for some people, but religious thinking or feeling isn’t very satisfying for me and a growing number of people.

I don’t know what is helpful—problems are often easier to see than solutions—but Frankl offers a framework for thinking about leading a meaningful existence through attempting to do the best with what you’ve got and choosing an aim for your life, however small or absurd (Hence: “Nietzsche’s words, ‘He who has a why to live for can bear with almost any how,’ could be the guiding motto for all psychotherapeutic and psychohygienic efforts regarding prisoners. Whenever there was an opportunity for it, one had to give them a why—an aim—for their lives, in order to strengthen them to bear the terrible how of their existence”).

Frankl and Louis CK are hardly the only people to notice this—All Things Shining: Reading the Western Classics to Find Meaning in a Secular Age is a contemporary example of a book tackling similar basic concepts from a different angle. Stumbling on Happiness and The Happiness Hypothesis are others. The fact that this problem persists across decades and arguably becomes more urgent means that I don’t think these books will be the last. As Frankl says in a preface:

I do not at all see in the bestseller status of my book so much an achievement and accomplishment on my part as an expression of the misery of our time: if hundreds of thousands of people reach out for a book whose very title promises to deal with the question of a meaning to life, it must be a question that burns under the fingernails.

The beta orbiter problem: Observations from the field

A newly-graduated friend sent me this, as part of an e-mail about the difficulty of making friends after moving to a big city:

I’m now trying to “friend-date” whenever I meet a female (or male) who seems platonically cool. I have an easier time with males, but I don’t think the single ones ever intend to be friends with me from the beginning… mostly beta orbiters I guess.

Think about it this way, from the guy’s perspective: you’re exceedingly hungry. As hungry as you’ve ever been. And you can smell a delicious curry. You can see it. You’re very hungry. You’d love to eat the curry. But you can’t eat the curry.

The metaphor isn’t perfect—women have agency and curries don’t, among other things*—but it should impart the basic urgency single guys feel and the reason why single men who don’t want to be “just friends” also don’t want to hang out with you as a friend. How badly would you want to go to a restaurant when you’re desperately hungry but can’t eat at the restaurant?

My friend also said:

I found this article recently that was telling guys why they should be friends with the women who reject them for dating, but want to be friends. 1) The guy will become more confident around the type of women he’s interested in. 2) She will introduce him to her hot friends.

While “She will introduce him to her hot friends” is true in theory, it isn’t true, or very often true, in practice (based on my experience, anyway). More often, when a girl I’m interested in declines my affections, at best she sets me up with friends who are substantially less attractive than she is, and frequently says they’re “cute” and promises that I’ll “like them for their personality.” Unfortunate euphemisms lead to hurt feelings all around. Actually, my feelings don’t get hurt, but the feelings of other women sometimes do.

The hot girl’s friends also often know the hot girl turned the guy down, and that sends a powerful negative signal. If the guy isn’t good enough for the hot girl, why should he be good enough for her friends? Again, I won’t say that no straight guy has ever gotten the female-friend hookup, but I suspect that the female-friend hookup is more mythologized than actualized.

I’m familiar with the beta orbiter mindset because I spent a lot of high school being one—but that’s because I was an idiot who didn’t know any better. I finally stepped back from that behavior, wondered why the hell I was doing it, and stopped. Non-adaptive behaviors should be altered. Most self-respecting guys who are dumb enough to go through a beta orbiter phase leave that phase by the time they graduate from college, if not earlier. Not all do, however, and you’ll occasionally run into 35-year-old men with the emotional temperament of 15-year-old boys in the thrall of their first serious, unrequited infatuation.

I’ve also had girls be the female equivalent of beta orbiters. I say “girls” here because, like men, adult single women usually grow out of this behavior: if they’re attracted to a guy, they either make their move and see where it goes or they find a guy who is interested in them, instead of pointlessly pining after the unavailable. Straight American women seem to be more susceptible to acquiring beta orbiters than straight American men, and women also seem to be, on average, more deluded about their “real” relationships with their supposed male “friends.”

One thought experiment might clarify your “friendships:” imagine that you’re lying in bed, wearing lingerie or nothing, and your male friend comes in. Does he leave or partake? If he leaves, you’re real friends. If he partakes, he’s probably not.

The attention of beta orbiters is kind of flattering to women, but it’s also mostly pointless; if you’re in the game, so to speak, you want to focus on the game, not the crowd. This is true of both sexes, whether gay or straight, but it seems like a lot of people have trouble admitting it.

(As a side note, literature is full of idiots pursuing pointless love for no particular reason: think of Gatsby and Daisy, or Robert Cohn and Brett Ashley in The Sun Also Rises, or any number of 19th century novels, or The Sorrows of Young Werther, or Romeo and Juliet (which has the advantage of Mercutio, until he dies; it’s his death that’s tragic, because he’s hilarious—”I will bite thee by the ear for that jest” and “for the bawdy hand of the / dial is now upon the prick of noon”). In each case, the obvious thing for the pursuer to do is get over whoever he or she is obsessed with and find someone better / more available, which are much the same thing. That’s a problem with Gatsby and Sun in particular: both novels are constructed around idiotic, self-defeating sexual behavior that contemporary teenagers often see glorified in pop culture and eventually must learn to overcome. The writing and style in both novels are spectacular, but their plots leave much to be desired, since the obvious thing for Jay Gatsby and Robert Cohn to do is get over Daisy and Brett Ashley. If they did, however, one would no longer have a novel. But we shouldn’t admire guys who do things that are clearly dumb and sub-optimal.)

It may also be hard for attractive women to become genuine friends with a guy who already has a girlfriend, because most girlfriends won’t want a rival—especially an attractive rival sniffing around their campfire, so to speak. The reasons should be obvious. The major exception occurs when the girlfriend herself is bi, or at least interested in some girl-on-girl experience(s), but third-wheel situations among relative strangers seldom seem to last long.

This kind of misunderstanding seems to be incredibly, stupidly common; I occasionally read the reddit.com/r/relationships section, which is filled with people like “jaqueinabox” who say:

I have a friend who I’ve known for about four or five years. A couple years ago, when my boyfriend at the time and I were on a break, I invited him to a social [. . .] I dropped him off at his car, but we ended up making out for a few minutes before I told him I had to stop (I never really do stuff like that and I was incredibly uncomfortable with it.) [. . .] When my boyfriend and I broke up for good, my friend started insisting we hang out more. Go to movies, go out for dinner, go to his place and watch movies, sending me texts with “xoxo” and “;)” in them, and it feels a lot like dating. [. . .] I still see movies and hang out with him because it seems rude to say no.

She’s wrong: it’s actually rude, both to herself and, to a lesser extent, the guy, to keep going out with him. In the thread, I wrote that “Directness is beautiful, both for you and these ‘guy-friends,’ who are not actually friends.” The other day a Reddit commenter wrote, accurately:

Most male friends become friends with attractive women with hopes of getting with them. Usually, when it doesn’t happen or she gets into/is in a relationship, they step back. When she gets out of a relationship, they usually try again.

The best movie scene dealing with this dynamic is the famous bit from When Harry Met Sally.

The contemporary term of art for these guys is, of course, “beta orbiter.”

I do want to clarify one point: It is possible for men and women to be authentic friends (I have a bunch of authentic female friends). It’s just much more unusual than many young straight women want to think it is. Many young straight women want to lie to themselves, or simply like deluding themselves, about male “friends.”

Authentic cross-gender friendships are great and they are no less worth cultivating than any other friendship. But don’t lie to yourself and don’t go into authentic friendships with the purpose of trying to covertly shoot for more.


* As far as I know.

We all have value systems, even if dollars aren’t their main currency

In Robert Skidelsky’s Econtalk interview, he mentions that we get restless if we have nothing to do, and there’s a certain amount of insatiability that appears built into the human condition. He’s referencing money, but it made me realize something: academics and intellectuals are restless and insatiable too, but they don’t use conventional currency: they use citation counts and perceived intellectual influence. They aren’t (mostly) acquisitively forward-looking, but they are interested in writing more and more, in order to have a greater and greater reputation.

Skidelsky’s most recent book is How Much is Enough?: Money and the Good Life, and in it he evidently discusses the idea of material-good saturation, which is, I suspect, a topic that’s going to become more and more interesting over the course of my life. Most of us, as he points out (and as he notes Keynes pointed out), reach a point of diminishing returns when it comes to goods and many other things: having a working car is very valuable to many of us, but having a $100,000 car is less so. Having a computer is very valuable, but having the latest model is less so. But we’re still working quite hard for goods that might not be valuable enough.

I leave it to the reader’s imagination to consider how the previous paragraph might apply to academics or intellectuals, for whom it seems there is never enough respect to go around.

Skidelsky’s point about work is especially interesting to me because I’ve been working, so to speak, to make the kind of “work” I do fun—at which point it’s not really onerous. I wonder if that kind of move is the future of work. We also get a certain amount of satisfaction from doing a thing well, and perhaps that satisfaction will keep driving us, collectively, even when we no longer need to do certain things to the extent we do now.