Digital Minimalism — Cal Newport

All of Cal Newport’s books could be titled “How to Be an Effective Person,” or maybe “How to Be an Effective Person in This Technological Epoch.” Digital Minimalism is, like Deep Work: Rules for Focused Success in a Distracted World, about why you should quit or drastically limit the digital distractions that have proliferated in much of modern life. To me, doing so seemed obviously necessary a long time ago, so there’s a large component of preaching to the choir in my reading and now recommending this book. I’m barely on Facebook or most other social networks, which seem anathema to doing anything substantive or important.

A story. A friend sent me an email about Newport’s article “Is email making professors stupid?” I told him that, even in grad school, I’d figured out the problems with email and checked it, typically, once per day—sometimes every other day. The other grad students were in awe of that (low?) rate. I was like, “How do you get any writing done otherwise?” I leave it as an exercise to the reader to square this circle. You may notice that some of my novels are out there and their novels are not.

In my experience, too, most profs actually like the distraction, the work-like feeling without having to do the hard part. In reality, it is not at all hard to open your email every other day and spend 90%+ of your time focused on your work. If you don’t do this, then, as Newport says, “The urge to check Twitter or refresh Reddit becomes a nervous twitch that shatters uninterrupted time into shards too small to support the presence necessary for an intentional life.” And yet many of us, as measured by data, do just that. I buy many of Newport’s arguments while also being skeptical that we’ll see large-scale change. Yet we should seek individual change; many of the online systems are psychologically bad for us:

The techno-philosopher Jaron Lanier convincingly argues that the primacy of anger and outrage online is, in some sense, an unavoidable feature of the medium: In an open marketplace for attention, darker emotions attract more eyeballs than positive and constructive thoughts. For heavy Internet users, repeated interaction with this darkness can become a source of draining negativity—a steep price that many don’t even realize they’re paying to support their compulsive connectivity.

Is “the primacy of anger and outrage” really “an unavoidable feature”? I like to think not; I like to think that I avoid anger and outrage, making them tertiary features at best, and instead focus on ideas and thinking.

Still, compulsive connectivity online may also be costing us offline, real-world connection. That’s a point in Johann Hari’s book Lost Connections: Uncovering the Real Causes of Depression, which you should also read.

The book describes how modern social media systems and apps exploit our desire for random or intermittent positive reinforcement. Because we don’t know what we’re going to get any time we boot up Twitter or similar, we want to visit those sites more often. We lose perspective on what’s more important—finishing a vital long-term project or checking for whatever the news of the day might be, however trivial. Or seeing random thoughts from our friends. Newport doesn’t argue that we shouldn’t have friends or that social networking systems don’t have some value—he just points out that we can derive a huge amount of the value from a tiny amount of time (“minimalists don’t mind missing out on small things; what worries them more is diminishing the large things they already know for sure make life good”). But our “drive for social approval” often encourages us to stay superficially connected, instead of deeply connected.

In the book, we also get visits to the Amish, suggestions we take a 30-day break from digital bullshit, and case studies from Newport’s readers. I don’t think “Solitude and Leadership” is cited, but it might as well have been.

Another version of this book might be titled “Opportunity Costs Matter.” If there’s anything missing, it’s a deeper exploration of why, if many digital social media tools are bad for us, we persist in using them—and what our use may say about us. Perhaps revealed preferences show that most of us don’t give a damn about the intentional life. Probably we never have. Maybe we never will. Arguably, history is a long drive towards greater connectivity, and, if this trend is centuries, maybe millennia, old, we can expect it to continue. Many older religious figures worried deeply that technologies would take people away from their religious communities and from God, and those figures were actually right. Few of us, however, want to go back.

For a book about craft and living an intentional life, the paper quality of this book is oddly bad.

No one takes the next step

Yesterday’s New York Times has an article, “Thanks for the painful reminder,” that starts, “Six months ago, our teenage son was killed in a car accident. I took a month off from work because I couldn’t get out of bed.” Almost everyone knows someone who was killed, almost killed, or seriously mangled in a car crash, yet no one is thinking or talking about how to reduce reliance on cars. In 2016 there were 34,439 fatal car crashes in the U.S. Few if any of those parents and spouses start organizations dedicated to reducing car usage. Why not? School shootings keep inspiring survivors and their families to start organizations around guns, but the same doesn’t seem to happen with cars.

The author of the article doesn’t take the next step, either. It’s an omission almost no one remarks on. We’ve had the technologies to improve this situation for more than a century.

Postmodernisms: What does *that* mean?

In response to What’s so dangerous about Jordan Peterson?, there have been a bunch of discussions about what “postmodernism” means (“He believes that the insistence on the use of gender-neutral pronouns is rooted in postmodernism, which he sees as thinly disguised Marxism.”). By now, postmodernism has become so vague and broad that it means almost anything—which is of course another way of saying “nothing”—so the plural is there in the title for a reason. In my view, most people claiming the mantle of big broad labels like “Marxist,” “Christian,” “Socialist,” “Democrat,” etc. are trying to signal something about themselves and their identity much more than they’re trying to understand the nuances of what those positions might mean or what ideas and policies really underlie the labels. So for the most part, when I see someone talking or writing about postmodernism, I say, “Oh, that’s nice,” then move on to something more interesting and immediate.

But if one is going to attempt to describe postmodernism, and how it relates to Marxism, I’d start by observing that old-school Marxists don’t believe much of the linguistic stuff that postmodernists sometimes say they believe—about how everything reduces to “language” or “discourse”—but I think that the number of people who are “Marxists” in the sense that Marx or Lenin would recognize is tiny, even in academia.

I think what’s actually happening is this: people have an underlying set of models or moral codes and then grab some labels to fit on top of those codes. So the labels fit, or try to fit, the underlying morality and beliefs. People in contemporary academia might be particularly drawn to a version of strident moralism in the form of “postmodernism” or “Marxism” because they don’t have much else—no religion, not much influence, no money, so what’s left? A moral superiority that gets wrapped up in words like “postmodernism.” So postmodernism isn’t so much a thing as a mode or a kind of moral signal, and that in turn is tied into the self-conception of people in academia.

You may be wondering why academia is being dragged into this. Stories about what “postmodernism” means are bound up in academia, where ideas about postmodernism still simmer. In humanities grad school, most grad students make no money, as previously mentioned, and don’t expect to get academic jobs when they’re done. Among those who do graduate, most won’t get jobs. Those who do, probably won’t get tenure. And even those who get tenure will often get it for writing a book that will sell two hundred copies to university libraries and then disappear without a trace. So… why are they doing what they do?

At the same time, humanities grad students and profs don’t even have God to console them, as many religious figures do. So some of the crazier stuff emanating from humanities grad students might be a misplaced need for God or purpose. I’ve never seen the situation discussed in those terms, but as I look at the behavior I saw in grad school and the stories emerging from humanities departments, I think that a central absence better explains many problems than most “logical” explanations. And then “postmodernism” is the label that gets applied to this suite of what amount to beliefs. And that, in turn, is what Jordan Peterson is talking about. If you are (wisely) not following trends in the academic humanities, Peterson’s tweet on the subject probably makes no sense.

Most of us need something to believe in—and the need to believe may be more potent in smarter or more intellectual people. In the absence of God, we very rarely get “nothing.” Instead, we get something else, but we should take care in what that “something” is. The sense of the sacred is still powerful within humanities departments, but what that sacred is has shifted, to their detriment and to the detriment of society as a whole.

(I wrote here about the term “deconstructionism,” which has a set of problems similar to “postmodernism,” so much of what I write there also applies here.)

Evaluating things along power lines, as many postmodernists and Marxists seek to do, isn’t always a bad idea, of course, but there are many other dimensions along which one can evaluate art, social situations, politics, etc. So the relentless focus on “power” becomes tedious and reductive after a while: one always knows what the speaker is likely to say, unless, of course, the speaker is the one in the position of power and the analysis can be turned back on the analyst (it seems obvious that many tenured professors are in positions of relatively high power, especially compared to grad students; that’s part of what makes the Lindsay Shepherd story compelling).

This brand of postmodernism tends to infantilize groups or individuals (they’re all victims!) or to lead to races to the bottom and the development of victimhood culture. But these pathologies are rarely acknowledged by its defenders.

Has postmodernism led to absurdities like the one at Evergreen State, which led to huge enrollment drops? Maybe. I’ve seen the argument and, on even days, buy it.

I read a good Tweet summarizing the basic problem:

When postmodern types say that truth-claims are rhetoric and that attempts to provide evidence are but moves in a power-game—believe them! They are trying to tell you that this is how they operate in discussions. They are confessing that they cannot imagine doing otherwise.

If everything is just “rhetoric” or “power” or “language,” there is no real way to judge anything. Along a related axis, see “Dear Humanities Profs: We Are the Problem.” Essays like it seem to appear about once a year or so. That they seem to change so little is discouraging.

So what does postmodernism mean? Pretty much whatever you want it to mean, whether you love it for whatever reason or hate it for whatever reason. Which is part of the reason you’ll very rarely see it used on this site: it’s too unspecific to be useful, so I shade towards words with greater utility that haven’t been killed, or at least rendered comatose, through over-use. There’s a reason why most smart people eschew talking about postmodernism or deconstructionism or similar terms: they operate at a not-very-useful level of abstraction, unless one is primarily trying to signal tribal affiliation, and signaling tribal affiliation isn’t a very interesting mode of discussion.

If you’ve read to the bottom of this, congratulations! I can’t imagine many people are terribly interested in this subject; it seems that most people read a bit about it, realize that many academics in the humanities are crazy, and go do something more useful. It’s hard to explain this stuff in plain language because it often doesn’t mean much of anything, and explaining why that’s so takes a lot.

“Bean freaks: On the hunt for an elusive legume”

“Bean freaks: On the hunt for an elusive legume” is among the more charming and hilarious stories I’ve read recently, and it’s highly recommended. There are many interesting moments in it, but this tangent caught my attention:

In his late teens, Sando lost weight and found his crowd, learned to improvise on the piano, and discovered, to his great surprise, that he’d become rather good-looking. “What we call a twink now,” he says. Although he never found a true, long-term partner, he married a friend of a friend in his late thirties and had two boys with her, now nineteen and sixteen. “I’d had every lesbian on the planet ask me for sperm,” he says. “But there was a side of me that said, ‘I can’t do this as a passive bystander.’ ” They raised the boys in adjacent houses for a few years, then divorced. “There’s a sitcom waiting to happen,” he says. But he tells the story flatly, without grievance or irony, as if giving a deposition. “The truth is that your sexual identity is just about the least interesting thing about you,” he says. “Do you play an instrument? That would be interesting.”

I think he’s right about the sitcom, and, while I said something like this in a previous post, I’ll say here that I think we’re going to see a lot more gay, bisexual, non-monogamous, etc. characters in movies, TV, and novels, not so much because of a desire to represent those people, though that desire may exist, but because of all the new and interesting plotlines and situations those orientations / interests / proclivities open up. Many writers are, at base, pragmatists. They (or we) will use whatever material is available and, ideally, hasn’t been done before. As far as I know, a gay man marrying a lesbian, having two kids with her, and then raising them side-by-side hasn’t been done and offers lots of material.

Speaking of laughter, this last sentence got me:

Still, admitting that you’re obsessed with beans is a little like saying you collect decorative plates. It marks your taste as untrustworthy. I’ve seen the reaction often enough in my family: the eye roll and stifled cough, the muttered aside as I show yet another guest the wonders of my well-lit and cleverly organized bean closet. As my daughter Evangeline put it one night, a bit melodramatically, when I served beans for the third time in a week, “Lord, why couldn’t it have been bacon or chocolate?”

If the bean club were still open, I’d subscribe. (This will make sense in the context of the article.)

Does politics have to be everywhere, all the time? On Jordan B. Peterson

“The Intellectual We Deserve: Jordan Peterson’s popularity is the sign of a deeply impoverished political and intellectual landscape” has been making the rounds for good reason: it’s an intellectually engaged, non-stupid takedown of Peterson. But while you should read it, you should also read it skeptically (or at least contextually). Take this:

A more important reason why Peterson is “misinterpreted” is that he is so consistently vague and vacillating that it’s impossible to tell what he is “actually saying.” People can have such angry arguments about Peterson, seeing him as everything from a fascist apologist to an Enlightenment liberal, because his vacuous words are a kind of Rorschach test onto which countless interpretations can be projected.

I hate to engage in “whataboutism,” but if you’re going to boot intellectuals who write nonsense, at least half of humanities professors are out—and maybe more. People can have long (and literally endless) arguments about what “literary theory” is “actually saying” because most of its content is itself vacuous enough to be “a kind of Rorschach test.” Peterson is responding in part to that kind of intellectual environment. An uncharitable reading may find that he produces vacuous nonsense in part because that sells.

A more charitable reading, however, may find that in human affairs, apparent opposites may be true, depending on context. There are sometimes obvious points from everyday life: it’s good to be kind, unless kindness becomes a weakness. Or is it good to be hard, not kind, because the world is a tough place? Many aphorisms contradict other aphorisms because human life is messy and often paradoxical. So people giving “life advice,” or whatever one may call it, tend to suffer the same problems.

You may notice that religious texts are wildly popular but not internally consistent. There seems to be something in the human psyche that responds to attractive stories more than consistency and verifiability.

More:

[Peterson] is popular partly because academia and the left have failed spectacularly at helping make the world intelligible to ordinary people, and giving them a clear and compelling political vision.

Makes sense to me. When much of academia has abandoned any effort to find meaning in the larger world or to impart somewhat serious ideas about what it means to be and to exist in society, apart from particular political theories, we shouldn’t be surprised when someone eventually comes along and attracts followers from among those adrift.

In other words, Robinson has a compelling theory about what makes Peterson popular, but he doesn’t have a compelling theory about how the humanities in academia might rejoin planet earth (though he notes, correctly, that “the left and academia actually bear a decent share of blame [. . .] academics have been cloistered and unhelpful, and the left has failed to offer people a coherent political alternative”).

Too many academics on the left also see their mission as advocacy first and learning or impartial judgment second. That creates a lot of unhappiness and alienation in classrooms and universities. We see problems with victimology that have only recently started being addressed. Peterson tells people not to be victims; identifying as a victim is often bad even for people who are genuine victims. There’s much more to be said about these issues, but that will have to be saved for some other essay—or browse around Heterodox Academy.

More:

Sociologist C. Wright Mills, in critically examining “grand theorists” in his field who used verbosity to cover for a lack of profundity, pointed out that people respond positively to this kind of writing because they see it as “a wondrous maze, fascinating precisely because of its often splendid lack of intelligibility.” But, Mills said, such writers are “so rigidly confined to such high levels of abstraction that the ‘typologies’ they make up—and the work they do to make them up—seem more often an arid game of Concepts than an effort to define systematically—which is to say, in a clear and orderly way, the problems at hand, and to guide our efforts to solve them.”

Try reading Jung. He’s “a wondrous maze” and often unintelligible—and certainly not falsifiable. Yet people like and respond to him, and he’s inspired many artists, in part because he’s saying things that may be true—or may be true in some circumstances. Again, literary theorists do something similar. Michel Foucault is particularly guilty of nonsense (why people love his History of Sexuality, which contains little history and virtually no citations, is beyond me). In grad school a professor assigned Luce Irigaray’s Sexes and Genealogies, which makes both Foucault and Peterson seem lucid and specific by comparison.

Until Robinson’s essay I’d not heard of C. Wright Mills, but I wish I’d heard of him back in grad school; in that atmosphere, where many dumb ideas feel so important because the stakes are so low, he would’ve been revelatory. He may help explain what’s wrong in many corners of what’s supposed to be the world of ideas.

Oddly, the Twitter account Real Peer Review has done much of the work aggregating the worst offenders in published humanities nonsense (a long time ago I started collecting examples of nonsense in peer review but gave up because there was so much of it and pointing out nonsense seemed to have no effect on the larger world).

the Peterson way is not just futile because it’s pointless, it’s futile because ultimately, you can’t escape politics. Our lives are conditioned by economic and political systems, like it or not [. . .]

It’s true, I suppose, in some sense, that you can’t escape politics, but must all of life be about politics, everywhere, all the time? I hope not. One hears that “the personal is the political,” which is both irritating and wrong. Sometimes the personal is just personal. Or political dimensions may be present but very small and unimportant, like relativity acting on objects moving at classical speeds. The politicizing of everyday life may be part of what drives searching people towards Peterson.

Sometimes people want to live outside the often-dreary shadow of politics, but some aspects of social media make that harder. I’ve observed to friends that the more I see of someone on Facebook, the less I tend to like them (maybe the same is true of others who know me via Facebook). Maybe social media also means that the things that could be easily ignored in a face-to-face context, or just not known, get highlighted in an unfortunate and extremely visible way. Social media seems to heighten our mimetic instincts in not-good ways.

We seem to want to sort ourselves into political teams more readily than we used to, and we seem more likely to cut off relationships over slights or beliefs that wouldn’t have been visible to us previously. In some sense we can’t escape politics, but many if not most of us feel that politics is not our most defining characteristic.

I’m happy to read Peterson as a symptom and a response, but the important question then becomes, “To what? Of what?” There are a lot of possible answers, some of which Robinson engages—which is great! But most of Peterson’s critics don’t seem to want to engage the question, let alone the answer.

The rest of us are back to the war of art, which first of all has to be good, rather than agreeing with whatever today’s social pieties may be.

What would a better doctor education system look like?

A reader of “Why you should become a nurse or physicians assistant instead of a doctor: the underrated perils of medical school,” asks, though not quite in this way, what a better doctor education system would look like. It’s surprising that it’s taken so long and so many readers for someone to ask, but before I answer, let me say that, while the question is important, I don’t expect to see improvement. That’s because current, credentialed doctors are highly invested in the system and want to keep barriers to entry high—which in turn helps keep salaries up. In addition, there are still many people trying to enter med school, so the supply of prospective applicants props the system up. Meanwhile, people who notice high wages in medicine but who also notice how crazy the med school system is can turn to PA or NP school as reasonable alternatives. With so little pressure on the system and so many stakeholders invested, why change?

That being said, the question is intellectually interesting if useless in practice, so let’s list some possibilities:

1. Roll med school into undergrad. Do two years of gen eds, then start med school. Even assuming med school needs to be four years (it probably doesn’t), that would slice two years of high-cost education off the total bill.

2. Allow med students, or for that matter anyone, to “challenge the test.” If you learn anatomy on your own and with YouTube, take the test, and then you don’t have to take three to six (expensive) weeks of mind-numbing lecture courses. Telling students, “You can spend $4,000 on courses or learn it yourself and then take a $150 test” will likely have… unusual outcomes, compared to what professors claim students need.

3. Align curricula with what doctors actually do. Biochem is a great subject that few specialties actually use; require it of those specialties, and don’t mandate it for family docs, ER, etc.

4. Allow competition among residencies—that is, allow residents to switch on, say, a month-by-month basis, like a real job market.

There are probably others, but these are some of the lowest-hanging fruit. We’re also not likely to see many of these changes for the reason mentioned above—lots of people have a financial stake in the status quo—but also because so much of school is about signaling, not learning. The system works sub-optimally, but it also works “well enough.” Since the present system is good enough and the current medical cartel likes things as they are, it’s up to uncredentialed outsiders like me to observe possible changes that’ll never be implemented by insiders.

I wrote “Why you should become a nurse or physicians assistant instead of a doctor: the underrated perils of medical school” five years ago, and in that time we’ve seen zero changes at the macro level. Some individuals have likely avoided screwing up their lives via med school, and some of them have left comments or sent me emails saying as much, and that’s great. But it hasn’t been sufficient to generate systemic change.

“Pop culture today is obsessed with the battle between good and evil. Traditional folktales never were. What changed?”

“The good guy/bad guy myth: Pop culture today is obsessed with the battle between good and evil. Traditional folktales never were. What changed?” is one of the most interesting essays on narrative and fiction I’ve ever read, and while I, like most of you, am familiar with the prevalence of good guys and bad guys in fiction, I wasn’t cognizant of the way pure good and pure evil as fundamental characterizations only really proliferated around 1700.

In other words, I didn’t notice the narrative water in which I swim. Yet now I can’t stop thinking about a lot of narrative in the terms described.

A while ago, I read most of Neil Gaiman’s Norse Mythology and found it boring, perhaps in part because the characters didn’t seem to stand for anything beyond themselves, and they didn’t seem to want anything greater than themselves in any given moment. Yet for most of human civilization, that kind of story may have been more common than many modern stories.

Still, I wonder if we should be even more skeptical of good-versus-evil stories than I would have thought before reading this essay.
