Personal epistemology, free speech, and tech companies

The NYT describes “The Problem of Free Speech in an Age of Disinformation,” and in response Hacker News commenter throwaway13337 says, in part, “It’s not unchecked free speech. Instead, it’s unchecked curation by media and social media companies with the goal of engagement.” There’s some truth to the idea that social media companies have evolved to seek engagement, rather than truth, but I think the social media companies are reflecting a deeper human tendency. I wrote back to throwaway13337: “Try teaching non-elite undergrads, and particularly assignments that require some sense of epistemology, and you’ll discover that the vast majority of people have pretty poor personal epistemic hygiene—it’s not much required in most people, most of the time, in most jobs.”

From what I can tell, we evolved to form tribes, not to be “right”: Jonathan Haidt’s The Righteous Mind: Why Good People Are Divided by Politics and Religion deals with this topic well and at length, and I’ve not seen any substantial rebuttals of it. We don’t naturally take to tracking the question, “How do I know what I know?” Instead, we naturally seem to want to find “facts” or ideas that support our preexisting views. In the HN comment thread, someone asked for specific examples of poor undergrad epistemic hygiene, and while I’d prefer not to get super specific for reasons of privacy, I’ve had many conversations that take the following form: “How do you know article x is accurate?” “Google told me.” “How does Google work?” “I don’t know.” “What does it take to make a claim on the Internet?” “Um. A phone, I guess?” A lot of people—maybe most—will uncritically take as fact whatever happens to be served up by Google (it’s always Google and never Duck Duck Go or Bing), and most undergrads whose work I’ve read will, again uncritically, accept clickbait sites and similar as accurate. Part of the reason is that undergrads’ lives are minimally affected by being wrong or incomplete about some claim made in a short assignment imposed by some annoying professor toff standing between them and their degree.

The gap between elite information discourse and everyday information discourse, even among college students, who may be more sophisticated than their peer equivalents, is vast—so vast that I don’t think most journalists (who mostly talk to other journalists and to experts) and other people who work with information, data, and ideas truly understand it. We’re all living in bubbles. I don’t think I did, either, before I saw the epistemic hygiene most undergrads practice, or don’t practice. This is not a “kids these days” rant, either: many of them have never really been taught to ask themselves, “How do I know what I know?” Many have never really learned anything about the scientific method. It’s not taught much in most non-elite schools, so where are they going to get epistemic hygiene from?

The United States alone has 320 million people in it. Table DP02 from the Census Bureau’s data.census.gov estimates that 20.3% of the population age 25 and older has a bachelor’s degree and 12.8% have a graduate or professional degree. Before someone objects, let me admit that a college degree is far from a perfect proxy for epistemic hygiene or general knowledge, and some high school dropouts perform much better at cognition, metacognition, statistical reasoning, and so forth than do some people with graduate degrees. With that said, though, a college degree is probably a decent approximation for baseline abstract reasoning skills and epistemic hygiene. Most people, though, don’t connect with or think in terms of aggregated data or abstract reasoning—one study, for example, finds that “Personal experiences bridge moral and political divides better than facts.” We’re tribe builders, not fact finders.
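To make the scale of those percentages concrete, here’s a quick back-of-the-envelope calculation. It’s only a sketch: the roughly 220 million figure for US adults age 25 and older is my own assumption for illustration, and only the two percentages come from the table cited above.

```python
# Rough arithmetic on the DP02 figures quoted above. The adult-population
# estimate is an assumption for illustration, not a number from the table.
ADULTS_25_PLUS = 220_000_000   # assumed, approximate US population age 25+
BACHELORS_PCT = 20.3           # bachelor's degree (from the text)
GRAD_PCT = 12.8                # graduate or professional degree (from the text)

degree_share = BACHELORS_PCT + GRAD_PCT      # share with a four-year degree or more
no_degree_share = 100 - degree_share         # share without one

print(f"Bachelor's or higher: {degree_share:.1f}% "
      f"(~{ADULTS_25_PLUS * degree_share / 100 / 1e6:.0f} million people)")
print(f"No four-year degree:  {no_degree_share:.1f}% "
      f"(~{ADULTS_25_PLUS * no_degree_share / 100 / 1e6:.0f} million people)")
```

However you adjust the assumptions, if the two categories are exclusive, as they appear to be, roughly two-thirds of adults 25 and older have no four-year degree.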

Almost anyone who wants a megaphone in the form of one of the many social media platforms available now has one. The number of people motivated by questions like “What is really true, and how do I discern what is really true? How do I get countervailing data and information into my view or worldview?” is not zero, obviously, but it’s not a huge part of the population. And many very “smart” people in an IQ sense use their intelligence to build better rationalizations rather than to seek truth (and I may be among the rationalizers: I’m not trying to exclude myself from that category).

Until relatively recently, almost everyone with a media megaphone had some kind of training or interest in epistemology, even if they didn’t call it “epistemology.” Editors would ask, “How do you know that?” or “Who told you that?” or that sort of thing. Professors have systems that are supposed to encourage greater-than-average epistemic hygiene (these systems were not and are not perfect, and nothing I have written so far implies that they were or are).

Most people don’t care about the question, “How do you know what you know?” and are fairly surprised if it’s asked, implicitly or explicitly. Some people are intrigued by it, but most aren’t, and view questions about sources and knowledge as a hindrance. This is less likely to be true of people who aspire to be researchers or work in other knowledge-related professions, but that describes only a small percentage of undergraduates, particularly at non-elite schools. And the “elite schools” thing drives a lot of the media discourse around education. One of the things I like about Professor X’s book In the Basement of the Ivory Tower is how it functions as a corrective to that discourse.

For most people, floating a factually incorrect conspiracy theory online isn’t going to negatively affect their lives. If a nurse gives a patient the wrong medication, that person is not going to be a nurse for long. If the nurse states or repeats a factually incorrect political or social idea online, particularly but not exclusively under a pseudonym, that nurse’s life likely won’t be affected. There’s no truth feedback loop. The same is true for someone working in, say, construction, or engineering, or many other fields. The person is free to state things that are factually incorrect, or incomplete, or misleading, and doing so isn’t going to have many negative consequences. Maybe it will have some positive consequences: one way to show that you’re really on team x is to state or repeat falsehoods that show you’re on team x, rather than on team “What is really true?”

I don’t want to get into daily political discourse, since that tends to raise defenses and elicit anger, but the last eight months have demonstrated many people’s problems with epistemology, and in a way that can have immediate, negative personal consequences—but not for everyone.

Pew Research data indicate that a quarter of US adults didn’t read a book in 2018; this is consistent with other data indicating that about half of US adults read zero or one books per year. Again, yes, there are surely many individuals who read other materials and have excellent epistemic hygiene, but this is a reasonable mass proxy, given the demands that reading makes on us.

Many people driving the (relatively) elite discourse don’t realize how many people are not only not like them, but wildly not like them, along numerous metrics. It may also be that we don’t know how to deal with gossip at scale. Interpersonal gossip is all about personal stories, while many problems at scale are best understood through data—but the number of people deeply interested in data and data’s veracity is small. And elite discourse has some of its own possible epistemic falsehoods, or at least uncertainties, embedded within it: some of the populist rhetoric against elites is rooted in truth.

A surprisingly large number of freshmen don’t know the difference between fiction and nonfiction, or that novels are fiction. Not a majority, but I was surprised when I first encountered confusion around these points; I’m not any longer. Enough freshmen confuse fiction and nonfiction, or genres of nonfiction, for the confusion to be a noticeable pattern (modern distinctions between fiction and nonfiction only really arose, I think, during the Enlightenment and the rise of the novel in the 18th century, although off the top of my head I don’t have a good citation for this historical point, apart perhaps from Ian Watt’s work on the novel). Maybe online systems like Twitter or Facebook allow average users to revert to an earlier mode of discourse in which the border between fiction and nonfiction is more porous, and the online systems have strong fictional components that some users don’t care to segregate.

We are all caught in our bubbles, and the universe of people is almost unimaginably larger than the number of people in our bubble. If you got this far, you’re probably in a nerd bubble: usually, anything involving the word “epistemology” sends people to sleep or, alternatively, scurrying for something like “You won’t believe what this celebrity wore/said/did” instead. Almost no one wants to consider epistemology; to do so as a hobby is rare. One person’s disinformation is another person’s teambuilding. If you think the preceding sentence is in favor of disinformation, by the way, it’s not.

Digital Minimalism — Cal Newport

All of Cal Newport’s books could be titled “How to Be an Effective Person.” Or, maybe, “How to Be an Effective Person in This Technological Epoch.” Digital Minimalism is, like Deep Work: Rules for Focused Success in a Distracted World, about why you should quit or drastically limit the digital distractions that have proliferated in much of modern life. To me, it seemed obviously necessary to do so a long time ago, so there’s a large component of preaching to the choir in my reading and now recommending this book. I’m barely on Facebook or most other social networks, which seem anathema to doing anything substantive or important.

A story. A friend sent me an email about Newport’s article “Is email making professors stupid?” I told him that, even in grad school, I’d figured out the problems with email and checked it, typically, once per day—sometimes every other day. The other grad students were in awe of that (low?) rate. I was like, “How do you get any writing done otherwise?” I leave it as an exercise to the reader to square this circle. You may notice that some of my novels are out there and their novels are not.

In my experience, too, most profs actually like the distraction: it has the work-like feeling without the hard part. In reality, it is not at all hard to open your email every other day and spend 90%+ of your time focused on your work. If you don’t do this, then, as Newport says, “The urge to check Twitter or refresh Reddit becomes a nervous twitch that shatters uninterrupted time into shards too small to support the presence necessary for an intentional life.” And yet many of us, as measured by data, do just that. I buy many of Newport’s arguments while also being skeptical that we’ll see large-scale change. Yet we should seek individual change; many of the online systems are psychologically bad for us:

The techno-philosopher Jaron Lanier convincingly argues that the primacy of anger and outrage online is, in some sense, an unavoidable feature of the medium: In an open marketplace for attention, darker emotions attract more eyeballs than positive and constructive thoughts. For heavy Internet users, repeated interaction with this darkness can become a source of draining negativity—a steep price that many don’t even realize they’re paying to support their compulsive connectivity.

Is “the primacy of anger and outrage” really “an unavoidable feature”? I like to think not; I like to think that I try to avoid anger and outrage, making those tertiary features at best, and that I focus instead on ideas and thinking.

Still, compulsive connectivity online may also be costing us offline, real-world connection. That’s a point in Johann Hari’s book Lost Connections: Uncovering the Real Causes of Depression, which you should also read.

The book describes how modern social media systems and apps exploit our desire for random or intermittent positive reinforcement. Because we don’t know what we’re going to get anytime we boot up Twitter or similar, we want to visit those sites more often. We lose perspective on what’s more important—finishing a vital long-term project or checking for whatever the news of the day might be, however trivial, or seeing random thoughts from our friends. Newport doesn’t argue that we shouldn’t have friends or that social networking systems don’t have some value—he just points out that we can derive a huge amount of the value from a tiny amount of time (“minimalists don’t mind missing out on small things; what worries them more is diminishing the large things they already know for sure make life good”). But our “drive for social approval” often encourages us to stay superficially connected instead of deeply connected.

In the book, we also get visits to the Amish, suggestions we take a 30-day break from digital bullshit, and case studies from Newport’s readers. I don’t think “Solitude and Leadership” is cited, but it might as well have been.

Another version of this book might be titled “Opportunity costs matter.” If there’s anything missing, it’s a deeper exploration of why, if many digital social media tools are bad for us, we persist in using them—and what our use may say about us. Perhaps revealed preferences show that most of us don’t give a damn about the intentional life. Probably we never have. Maybe we never will. Arguably, history is a long drive towards greater connectivity, and, if this trend is centuries, maybe millennia, old, we can expect it to continue. Many older religious figures worried deeply that technologies would take people away from their religious communities and from God, and those figures were actually right. Few of us, however, want to go back.

For a book about craft and living an intentional life, the paper quality of this book is oddly bad.

No one takes the next step

Yesterday’s New York Times has an article, “Thanks for the painful reminder,” that starts, “Six months ago, our teenage son was killed in a car accident. I took a month off from work because I couldn’t get out of bed.” Almost everyone knows someone who was killed, almost killed, or seriously mangled in a car crash, yet almost no one is thinking or talking about how to reduce reliance on cars. In 2016, 34,439 died in car crashes. Few if any of those parents and spouses start organizations dedicated to reducing car usage. Why not? School shootings keep inspiring survivors and their families to start organizations around guns, but the same doesn’t seem to happen with cars.

The author of the article doesn’t take the next step, either. It’s an omission that almost no one talks about, either. We’ve had the technologies to improve this situation for more than a century.

Postmodernisms: What does *that* mean?

In response to “What’s so dangerous about Jordan Peterson?”, there have been a bunch of discussions about what “postmodernism” means (“He believes that the insistence on the use of gender-neutral pronouns is rooted in postmodernism, which he sees as thinly disguised Marxism.”) By now, postmodernism has become so vague and broad that it means almost anything—which is of course another way of saying “nothing”—so the plural is there in the title for a reason. In my view most people claiming the mantle of big broad labels like “Marxist,” “Christian,” “Socialist,” “Democrat,” etc. are trying to signal something about themselves and their identity much more than they’re trying to understand the nuances of what those positions might mean or what ideas and policies really underlie the labels, so for the most part when I see someone talking or writing about postmodernism, I say, “Oh, that’s nice,” then move on to something more interesting and immediate.

But if one is going to attempt to describe postmodernism, and how it relates to Marxism, I’d start by observing that old-school Marxists don’t believe much of the linguistic stuff that postmodernists sometimes say they believe—about how everything reduces to “language” or “discourse”—but I think that the number of people who are “Marxists” in the sense that Marx or Lenin would recognize is tiny, even in academia.

I think what’s actually happening is this: people have an underlying set of models or moral codes and then grab some labels to fit on top of those codes. So the labels fit, or try to fit, the underlying morality and beliefs. People in contemporary academia might be particularly drawn to a version of strident moralism in the form of “postmodernism” or “Marxism” because they don’t have much else—no religion, not much influence, no money, so what’s left? A moral superiority that gets wrapped up in words like “postmodernism.” So postmodernism isn’t so much a thing as a mode or a kind of moral signal, and that in turn is tied into the self-conception of people in academia.

You may be wondering why academia is being dragged into this. Stories about what “postmodernism” means are bound up in academia, where ideas about postmodernism still simmer. In humanities grad school, most grad students make no money, as previously mentioned, and don’t expect to get academic jobs when they’re done. Among those who do graduate, most won’t get jobs. Those who do probably won’t get tenure. And even those who get tenure will often get it for writing a book that will sell two hundred copies to university libraries and then disappear without a trace. So… why are they doing what they do?

At the same time, humanities grad students and profs don’t even have God to console them, as many religious figures do. So some of the crazier stuff emanating from humanities grad students might be a misplaced need for God or purpose. I’ve never seen the situation discussed in those terms, but as I look at the behavior I saw in grad school and the stories emerging from humanities departments, I think that a central absence better explains many problems than most “logical” explanations. And then “postmodernism” is the label that gets applied to this suite of what amount to beliefs. And that, in turn, is what Jordan Peterson is talking about. If you are (wisely) not following trends in the academic humanities, Peterson’s tweet on the subject probably makes no sense.

Most of us need something to believe in—and the need to believe may be more potent in smarter or more intellectual people. In the absence of God, we very rarely get “nothing.” Instead, we get something else, but we should take care in what that “something” is. The sense of the sacred is still powerful within humanities departments, but what that sacred is has shifted, to their detriment and to the detriment of society as a whole.

(I wrote here about the term “deconstructionism,” which has a set of problems similar to “postmodernism,” so much of what I write there also applies here.)

Evaluating things along power lines, as many postmodernists and Marxists seek to do, isn’t always a bad idea, of course, but there are many other dimensions along which one can evaluate art, social situations, politics, etc. So the relentless focus on “power” becomes tedious and reductive after a while: one always knows what the speaker is likely to say, unless, of course, the speaker is the one in the position of power and the thing being criticized obviously isn’t (it seems obvious, for example, that many tenured professors are in positions of relatively high power, especially compared to grad students; that’s part of what makes the Lindsay Shepherd story compelling).

This brand of postmodernism tends to infantilize groups or individuals (they’re all victims!) or lead to races to the bottom and the development of victimhood culture. But these pathologies are rarely acknowledged by postmodernism’s defenders.

Has postmodernism led to absurdities like the one at Evergreen State, which led to huge enrollment drops? Maybe. I’ve seen the argument and, on even days, buy it.

I read a good Tweet summarizing the basic problem:

When postmodern types say that truth-claims are rhetoric and that attempts to provide evidence are but moves in a power-game—believe them! They are trying to tell you that this is how they operate in discussions. They are confessing that they cannot imagine doing otherwise.

If everything is just “rhetoric” or “power” or “language,” there is no real way to judge anything. Along a related axis, see “Dear Humanities Profs: We Are the Problem.” Essays like it seem to appear about once a year or so. That they seem to change so little is discouraging.

So what does postmodernism mean? Pretty much whatever you want it to mean, whether you love it for whatever reason or hate it for whatever reason. Which is part of the reason you’ll very rarely see it used on this site: it’s too unspecific to be useful, so I shade towards words with greater utility that haven’t been killed, or at least made soporific, through over-use. There’s a reason why most smart people eschew talking about postmodernism or deconstructionism or similar terms: they’re at a not-very-useful level of abstraction, unless one is primarily trying to signal tribal affiliation, and signaling tribal affiliation isn’t a very interesting level of discussion.

If you’ve read to the bottom of this, congratulations! I can’t imagine many people are terribly interested in this subject; it seems that most people read a bit about it, realize that many academics in the humanities are crazy, and go do something more useful. It’s hard to explain this stuff in plain language because it often doesn’t mean much of anything, and explaining why that’s so takes a lot.

“Bean freaks: On the hunt for an elusive legume”

“Bean freaks: On the hunt for an elusive legume” is among the more charming and hilarious stories I’ve read recently and it’s highly recommended. There are many interesting moments in it, but this tangent caught my attention:

In his late teens, Sando lost weight and found his crowd, learned to improvise on the piano, and discovered, to his great surprise, that he’d become rather good-looking. “What we call a twink now,” he says. Although he never found a true, long-term partner, he married a friend of a friend in his late thirties and had two boys with her, now nineteen and sixteen. “I’d had every lesbian on the planet ask me for sperm,” he says. “But there was a side of me that said, ‘I can’t do this as a passive bystander.’ ” They raised the boys in adjacent houses for a few years, then divorced. “There’s a sitcom waiting to happen,” he says. But he tells the story flatly, without grievance or irony, as if giving a deposition. “The truth is that your sexual identity is just about the least interesting thing about you,” he says. “Do you play an instrument? That would be interesting.”

I think he’s right about the sitcom, and, while I said something like this in a previous post, I’ll say here that I think we’re going to see a lot more gay, bisexual, non-monogamous, etc. characters in movies, TV, and novels not because of a desire to represent those people (though that desire may exist), but because of all the new and interesting plotlines and situations those orientations / interests / proclivities open up. Many writers are, at base, pragmatists. They (or we) will use whatever material is available and, ideally, hasn’t been done before. As far as I know, a gay man marrying a lesbian, having two kids with her, and raising them side-by-side hasn’t been done, and it offers lots of material.

Speaking of laughter, this last sentence got me:

Still, admitting that you’re obsessed with beans is a little like saying you collect decorative plates. It marks your taste as untrustworthy. I’ve seen the reaction often enough in my family: the eye roll and stifled cough, the muttered aside as I show yet another guest the wonders of my well-lit and cleverly organized bean closet. As my daughter Evangeline put it one night, a bit melodramatically, when I served beans for the third time in a week, “Lord, why couldn’t it have been bacon or chocolate?”

If the bean club were still open, I’d subscribe. (This will make sense in the context of the article.)

Does politics have to be everywhere, all the time? On Jordan B. Peterson

“The Intellectual We Deserve: Jordan Peterson’s popularity is the sign of a deeply impoverished political and intellectual landscape” has been making the rounds for good reason: it’s an intellectually engaged, non-stupid takedown of Peterson. But while you should read it, you should also read it skeptically (or at least contextually). Take this:

A more important reason why Peterson is “misinterpreted” is that he is so consistently vague and vacillating that it’s impossible to tell what he is “actually saying.” People can have such angry arguments about Peterson, seeing him as everything from a fascist apologist to an Enlightenment liberal, because his vacuous words are a kind of Rorschach test onto which countless interpretations can be projected.

I hate to engage in “whataboutism,” but if you’re going to boot intellectuals who write nonsense, at least half of humanities professors are out—and maybe more. People can have long (and literally endless) arguments about what “literary theory” is “actually saying” because most of its content is itself vacuous enough to be “a kind of Rorschach test.” Peterson is responding in part to that kind of intellectual environment. An uncharitable reading may find that he produces vacuous nonsense in part because that sells.

A more charitable reading, however, may find that in human affairs, apparent opposites may be true, depending on context. There are sometimes obvious points from everyday life: it’s good to be kind, unless kindness becomes a weakness. Or is it good to be hard, not kind, because the world is a tough place? Many aphorisms contradict other aphorisms because human life is messy and often paradoxical. So people giving “life advice,” or whatever one may call it, tend to suffer the same problems.

You may notice that religious texts are wildly popular but not internally consistent. There seems to be something in the human psyche that responds to attractive stories more than consistency and verifiability.

More:

[Peterson] is popular partly because academia and the left have failed spectacularly at helping make the world intelligible to ordinary people, and giving them a clear and compelling political vision.

Makes sense to me. When much of academia has abandoned any effort to find meaning in the larger world or impart somewhat serious ideas about what it means to be and to exist in society, apart from particular political theories, we shouldn’t be surprised when someone eventually comes along and attracts followers from those adrift.

In other words, Robinson has a compelling theory about what makes Peterson popular, but he doesn’t have a compelling theory about how the humanities in academia might rejoin planet earth (though he notes, correctly, that “the left and academia actually bear a decent share of blame [. . .] academics have been cloistered and unhelpful, and the left has failed to offer people a coherent political alternative”).

Too many academics on the left also see their mission as advocacy first and learning or impartial judgment second. That creates a lot of unhappiness and alienation in classrooms and universities. We see problems with victimology that have only recently started being addressed. Peterson tells people not to be victims; identifying as a victim is often bad even for people who are genuine victims. There’s much more to be said about these issues, but it’ll have to be saved for some other essay—or browse around Heterodox Academy.

More:

Sociologist C. Wright Mills, in critically examining “grand theorists” in his field who used verbosity to cover for a lack of profundity, pointed out that people respond positively to this kind of writing because they see it as “a wondrous maze, fascinating precisely because of its often splendid lack of intelligibility.” But, Mills said, such writers are “so rigidly confined to such high levels of abstraction that the ‘typologies’ they make up—and the work they do to make them up—seem more often an arid game of Concepts than an effort to define systematically—which is to say, in a clear and orderly way, the problems at hand, and to guide our efforts to solve them.”

Try reading Jung. He’s “a wondrous maze” and often unintelligible—and certainly not falsifiable. Yet people like and respond to him, and he’s inspired many artists, in part because he’s saying things that may be true—or may be true in some circumstances. Again, literary theorists do something similar. Michel Foucault is particularly guilty of nonsense (why people love his History of Sexuality, which contains little history and virtually no citations, is beyond me). In grad school a professor assigned Luce Irigaray’s book Sexes and Genealogies, a book that makes both Foucault and Peterson seem lucid and specific by comparison.

Until Robinson’s essay I’d not heard of C. Wright Mills, but I wish I’d heard of him back in grad school; in that atmosphere, where many dumb ideas feel so important because the stakes are so low, he would’ve been revelatory. He may help explain what’s wrong in many corners of what’s supposed to be the world of ideas.

Oddly, the Twitter account Real Peer Review has done much of the work aggregating the worst offenders in published humanities nonsense (a long time ago I started collecting examples of nonsense in peer review but gave up because there was so much of it and pointing out nonsense seemed to have no effect on the larger world).

the Peterson way is not just futile because it’s pointless, it’s futile because ultimately, you can’t escape politics. Our lives are conditioned by economic and political systems, like it or not [. . .]

It’s true, I suppose, in some sense, that you can’t escape politics, but must all of life be about politics, everywhere, all the time? I hope not. One hears that “the personal is the political,” which is both irritating and wrong. Sometimes the personal is just personal. Or political dimensions may be present but very small and unimportant, like relativity acting on objects moving at classical speeds. The politicizing of everyday life may be part of what drives searching people towards Peterson.

Sometimes people want to live outside the often-dreary shadow of politics, but some aspects of social media make that harder. I’ve observed to friends that the more I see of someone on Facebook, the less I tend to like them (maybe the same is true of others who know me via Facebook). Maybe social media also means that the things that could be easily ignored in a face-to-face context, or just not known, get highlighted in an unfortunate and extremely visible way. Social media seems to heighten our mimetic instincts in not-good ways.

We seem to want to sort ourselves into political teams more readily than we used to, and we seem more likely to cut off relationships over slights or beliefs that wouldn’t have been visible to us previously. In some sense we can’t escape politics, but many if not most of us feel that politics is not our most defining characteristic.

I’m happy to read Peterson as a symptom and a response, but the important question then becomes, “To what? Of what?” There are a lot of possible answers, some of which Robinson engages—which is great! But most of Peterson’s critics don’t seem to want to engage the question, let alone the answer.

The rest of us are back to the war of art, which first of all has to be good, rather than agreeing with whatever today’s social pieties may be.

What would a better doctor education system look like?

A reader of “Why you should become a nurse or physicians assistant instead of a doctor: the underrated perils of medical school,” asks, though not quite in this way, what a better doctor education system would look like. It’s surprising that it’s taken so long and so many readers for someone to ask, but before I answer, let me say that, while the question is important, I don’t expect to see improvement. That’s because current, credentialed doctors are highly invested in the system and want to keep barriers to entry high—which in turn helps keep salaries up. In addition, there are still many people trying to enter med school, so the supply of prospective applicants props the system up. Meanwhile, people who notice high wages in medicine but who also notice how crazy the med school system is can turn to PA or NP school as reasonable alternatives. With so little pressure on the system and so many stakeholders invested, why change?

That being said, the question is intellectually interesting if useless in practice, so let’s list some possibilities:

1. Roll med school into undergrad. Do two years of gen eds, then start med school. Even assuming med school needs to be four years (it probably doesn’t), that would slice two years of high-cost education off the total bill.

2. Allow med students, or for that matter anyone, to “challenge the test.” If you learn anatomy on your own and with YouTube, take the test, and then you don’t have to take three to six (expensive) weeks of mind-numbing lecture courses. Telling students, “You can spend $4,000 on courses or learn it yourself and then take a $150 test” will likely have… unusual outcomes, compared to what professors claim students need.

3. Align curriculums with what doctors actually do. Biochem is a great subject that few specialties actually use. Require those specialties to know biochem. Don’t mandate biochem for family docs, ER, etc.

4. Allow competition among residencies—that is, allow residents to switch on, say, a month-by-month basis, like a real job market.

There are probably others, but these are some of the lowest-hanging fruit. We’re also not likely to see many of these changes for the reason mentioned above—lots of people have a financial stake in the status quo—but also because so much of school is about signaling, not learning. The system works sub-optimally, but it also works “well enough.” Since the present system is good enough and the current medical cartel likes things as they are, it’s up to uncredentialed outsiders like me to observe possible changes that’ll never be implemented by insiders.

I wrote “Why you should become a nurse or physicians assistant instead of a doctor: the underrated perils of medical school” five years ago, and in that time we’ve seen zero changes at the macro level. Some individuals have likely avoided screwing up their lives via med school, and some of them have left comments or sent me emails saying as much, and that’s great. But it’s not been sufficient to generate systemic change.

“Pop culture today is obsessed with the battle between good and evil. Traditional folktales never were. What changed?”

“The good guy/bad guy myth: Pop culture today is obsessed with the battle between good and evil. Traditional folktales never were. What changed?” is one of the most interesting essays on narrative and fiction I’ve ever read, and while I, like most of you, am familiar with the prevalence of good guys and bad guys in fiction, I wasn’t cognizant of the way pure good and pure evil as fundamental characterizations only really proliferated around 1700.

In other words, I didn’t notice the narrative water in which I swim. Yet now I can’t stop thinking about a lot of narrative in the terms described.

A while ago, I read most of Neil Gaiman’s Norse Mythology and found it boring, perhaps in part because the characters didn’t seem to stand for anything beyond themselves, and they didn’t seem to want anything greater than themselves in any given moment. Yet for most of human civilization, that kind of story may have been more common than many modern stories.

Still, I wonder if we should be even more skeptical of good-versus-evil stories than I would have thought prior to reading this essay.

 

Lost technologies, Seveneves, and The Secret of Our Success

Spoilers ahead, but if you haven’t read Seveneves by now they probably don’t matter.

Seveneves is an unusual and great novel, and it’s great as long as you attribute some of its less plausible elements to an author building a world. One such element is the way humanity comes together and keeps the social, political, and economic systems functional enough to launch large numbers of spacecraft in the face of imminent collective death. If we collectively had two years to live, I suspect total breakdown would follow, leaving us with no Cloud Ark (and no story—thus we go along with the premise).

But that’s not the main thing I want to write about. Instead, consider the loss of knowledge that inherently comes with population decline. In Seveneves humanity declines to seven women living in space on a massive iron remnant of the moon. They slowly repopulate, with their descendants living in space for five thousand years. But a population of seven would probably not be able to retain and transmit the specialized knowledge necessary for survival on most parts of Earth, let alone space.

That isn’t a speculative claim. We have pretty good evidence for the way small populations lose knowledge. Something drew me to re-reading Joseph Henrich’s excellent book The Secret of Our Success, and maybe the sections about technological loss are part of it. He writes about many examples of European explorers getting lost and dying in relatively fecund environments because they don’t have the local knowledge and customs necessary to survive. He writes about indigenous groups too, including the Polar Inuit, who “live in an isolated region of northwestern Greenland [. . . .] They are the northernmost human population that has ever existed” (211). But

Sometime in the 1820s an epidemic hit this population and selectively killed off many of its oldest and most knowledgeable members. With the sudden disappearance of the know-how carried by these individuals, the group collectively lost its ability to make some of its most crucial and complex tools, including leisters, bows and arrows, the heat-trapping long entryways for snow houses, and most important, kayaks.

As a result, “The population declined until 1862, when another group of Inuit from around Baffin Island ran across them while traveling along the Greenland coast. The subsequent cultural reconnection led the Polar Inuit to rapidly reacquire what they had lost.” Which is essential:

Though crucial to survival in the Arctic, the lost technologies were not things that the Polar Inuit could easily recreate. Even having seen these technologies in operation as children, and with their population crashing, neither the older generation nor an entirely new generation responded to Mother Necessity by devising kayaks, leisters, compound bows, or long tunnel entrances.

Innovation is hard and relatively rare. We’re all part of a network that transmits knowledge horizontally, from peer to peer, and vertically, from older person to younger person. Today, people in first-world countries are used to innovation because we’re part of a vast network of billions of people who are constantly learning from each other and transmitting the innovations that do arise. We’re used to seemingly automatic innovation, because so many people are working on so many problems. Unless we’re employed as researchers, we’re often not cognizant of how much effort goes into both discovery and then transmission.

Without that dense network of people, though, much of what we know would be lost. Maybe the best-known example of technology loss is the fall of the Roman Empire; another is the way ancient Egyptians lost the know-how necessary to build pyramids and other epic engineering works.

In a Seveneves scenario, it’s highly unlikely that the novel’s protagonists would be able to sustain and transmit the knowledge necessary to live somewhere on earth, let alone somewhere as hostile as space. Quick: how helpful would you be in designing and manufacturing microchips, solar panels, nuclear reactors, plant biology, or oxygen systems? Yeah, me too. Those complex technologies have research, design, and manufacture facets that are embodied in the heads of thousands if not millions of individuals. The level of specialization our society has achieved is incredible, but we rarely think about how incredible it really is.

This is not so much a criticism of the novel—I consider the fact that they do survive part of granting the author his due—but it is a contextualization of the novel’s ideas. The evidence that knowledge is fragile is more pervasive and available than I’d thought when I was younger. We like stories of individual agency, but in actuality we’re better conceived of as parts in a massive system. We can see our susceptibility to conspiracy theories as beliefs in the excessive power of the individual. In an essay from Distrust That Particular Flavor, William Gibson writes: “Conspiracy theories and the occult comfort us because they present models of the world that more easily make sense than the world itself, and, regardless of how dark or threatening, are inherently less frightening.” The world itself is big, densely interconnected, and our ability to change it is real but often smaller than we imagine.

Henrich writes:

Once individuals evolve to learn from one another with sufficient accuracy (fidelity), social groups of individuals develop what might be called collective brains. The power of these collective brains to develop increasingly effective tools and technologies, as well as other forms of nonmaterial culture (e.g., know-how), depends in part on the size of the group of individuals engaged and on their social connectedness. (212)

The Secret of Our Success also cites laboratory recreations of similar principles; those experiments are too long to describe here, but they are clever. If there are good critiques of the chapter and idea, I haven’t found them (and if you know any, let’s use our collective brain by posting links in the comments). Henrich emphasizes:

If a population suddenly shrinks or gets socially disconnected, it can actually lose adaptive cultural information, resulting in a loss of technical skills and the disappearance of complex technologies. [. . . ] A population’s size and social interconnectedness sets a maximum on the size of a group’s collective brain. (218-9)

That size cap means that small populations in space, even if they are composed of highly skilled and competent individuals, are unlikely to survive over generations. They are unlikely to survive even if they have the rest of humanity’s explicit knowledge recorded on disk. There is too much tacit knowledge for explicit knowledge in and of itself to be useful, as anyone who has ever tried to learn from a book and then from a good teacher knows. Someday we may be able to survive indefinitely in space, but today we’re far from that stage.
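Henrich’s point about population size and collective brains can be made concrete with a toy simulation. This is only a sketch of the general mechanism he describes, in which learners imperfectly copy the most skilled member of the group; the Gumbel-noise form and all parameter values below are my own illustrative assumptions, not the model from the book.

```python
# Toy model of cultural transmission: each generation, every learner tries to
# copy the group's best practitioner, loses a bit of skill on average, and
# occasionally gets lucky and overshoots. Larger populations draw more lucky
# learners per generation, so they can maintain or improve the skill; tiny
# populations tend to lose it. Parameters are illustrative assumptions.
import numpy as np

def best_skill_over_time(pop_size, generations=200, loss=1.0, luck=0.3, seed=0):
    rng = np.random.default_rng(seed)
    best = 10.0                      # arbitrary starting skill level
    for _ in range(generations):
        # Each learner copies the current best, minus an average loss,
        # plus heavy-tailed "lucky draw" noise.
        copies = best - loss + rng.gumbel(0.0, luck, size=pop_size)
        best = copies.max()          # the next generation learns from its best copier
    return best

for n in (7, 50, 1000):
    print(f"population {n:>4}: best skill after 200 generations ~ {best_skill_over_time(n):.1f}")
```

With these made-up numbers, the population of seven steadily loses the skill while the larger groups hold or improve it, which is the qualitative pattern behind the Polar Inuit story above.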

Almost all post-apocalyptic novels face the small-population dilemma to some extent (I’d argue that Seveneves can be seen as a post-apocalyptic novel with a novel apocalypse). Think of the role played by the nuclear reactor in Stephen King’s The Stand: the characters in the immediate aftermath must decide if they’re going to live in the dark and regress to hunter-gatherer times, at best, or if they’re going to save and use the reactor to live in the light (the metaphoric implications are not hard to perceive here). In one of the earliest post-apocalyptic novels, Earth Abides, two generations after the disaster, descendants of technologically sophisticated people are reduced to using melted-down coins as tips for spears and arrows. In Threads, the movie (and my nominee for scariest movie ever made), the descendants of survivors of nuclear war lose most of their vocabulary and are reduced to what is by modern standards an impoverished language, a sort of inadvertent 1984 newspeak.* Let’s hope we don’t find out what actually happens after nuclear war.

In short, kill enough neurons in the collective brain and the brain itself stops working. Which has happened before. And it could happen again.


* Check out the cars in Britain in Threads: that reminds us of the possibilities of technological progress and advancement.

Why read bestsellers

Someone wrote to ask why I bother writing about John Grisham’s weaknesses as a writer, and implied in the question is a second one: why read bestsellers at all? The first is a fair question, and so is the implication behind it: Grisham’s readers don’t read me and don’t care what I think; they don’t care that he’s a bad writer; and people who read me probably aren’t going to read him. Still, I read him because I was curious, and I wrote about him to report what I found.

The answer to the second one is easy: Some are great! Not all, probably not even most, but enough to try. Lonesome Dove, the best novel I’ve read recently, was a bestseller. Its sequel, Streets of Laredo, is not quite as good but I’m glad to have read it. Elmore Leonard was often a bestseller and he is excellent. Others seemed like they’d be bad (Gillian Flynn, Tucker Max) but turned into favorites.

One could construct a 2×2 matrix of good famous books; bad famous books; good obscure books; and bad obscure books. That last one is a large group too; credibility amid a handful of literary critics (who may be scratching each other’s backs anyway) does not necessarily equate to quality, and I’ve been fooled by good reviews of mostly unknown books many times. Literary posturing does not equate to actual quality.

Different people also have different views around literary quality, and those views depend in part on experience and reading habits. Someone who reads zero or one books a year is likely to have very different impressions than someone who reads ten or someone who reads fifty or a hundred. Someone who is reading like a writer will probably have a different experience than someone who reads exclusively in a single, particular genre.

And Grisham? That article (which I wish I could find) made him, and especially Camino Island, sound appealing, and the book does occasionally work. But its addiction to cliché and the sort of overwriting common in student work makes it unreadable in my view. Someone who reads one or two books a year, though, and for whom Grisham is one of those books, will probably like him just fine, because they don’t have the built-up stock of reading that lets them distinguish what’s really good from what isn’t.
