“Bean freaks: On the hunt for an elusive legume”

“Bean freaks: On the hunt for an elusive legume” is among the more charming and hilarious stories I’ve read recently and it’s highly recommended. There are many interesting moments in it, but this tangent caught my attention:

In his late teens, Sando lost weight and found his crowd, learned to improvise on the piano, and discovered, to his great surprise, that he’d become rather good-looking. “What we call a twink now,” he says. Although he never found a true, long-term partner, he married a friend of a friend in his late thirties and had two boys with her, now nineteen and sixteen. “I’d had every lesbian on the planet ask me for sperm,” he says. “But there was a side of me that said, ‘I can’t do this as a passive bystander.’ ” They raised the boys in adjacent houses for a few years, then divorced. “There’s a sitcom waiting to happen,” he says. But he tells the story flatly, without grievance or irony, as if giving a deposition. “The truth is that your sexual identity is just about the least interesting thing about you,” he says. “Do you play an instrument? That would be interesting.”

I think he’s right about the sitcom, and, while I said something like this in a previous post, I’ll say here that I think we’re going to see a lot more gay, bisexual, non-monogamous, etc. characters in movies, TV, and novels not because of a desire to represent those people, or whatever, though that desire may exist, but because of all the new and interesting plotlines and situations those orientations / interests / proclivities open up. Many writers are, at base, pragmatists. They (or we) will use whatever material is available and, ideally, hasn’t been done before. As far as I know, a gay man marrying a lesbian and having two kids together, then raising them side-by-side, hasn’t been done, and it offers lots of material.

Speaking of laughter, this last sentence got me:

Still, admitting that you’re obsessed with beans is a little like saying you collect decorative plates. It marks your taste as untrustworthy. I’ve seen the reaction often enough in my family: the eye roll and stifled cough, the muttered aside as I show yet another guest the wonders of my well-lit and cleverly organized bean closet. As my daughter Evangeline put it one night, a bit melodramatically, when I served beans for the third time in a week, “Lord, why couldn’t it have been bacon or chocolate?”

If the bean club were still open, I’d subscribe. (This will make sense in the context of the article.)

Does politics have to be everywhere, all the time? On Jordan B. Peterson

“The Intellectual We Deserve: Jordan Peterson’s popularity is the sign of a deeply impoverished political and intellectual landscape” has been making the rounds for good reason: it’s an intellectually engaged, non-stupid takedown of Peterson. But while you should read it, you should also read it skeptically (or at least contextually). Take this:

A more important reason why Peterson is “misinterpreted” is that he is so consistently vague and vacillating that it’s impossible to tell what he is “actually saying.” People can have such angry arguments about Peterson, seeing him as everything from a fascist apologist to an Enlightenment liberal, because his vacuous words are a kind of Rorschach test onto which countless interpretations can be projected.

I hate to engage in “whataboutism,” but if you’re going to boot intellectuals who write nonsense, at least half of humanities professors are out—and maybe more. People can have long (and literally endless) arguments about what “literary theory” is “actually saying” because most of its content is itself vacuous enough to be “a kind of Rorschach test.” Peterson is responding in part to that kind of intellectual environment. An uncharitable reading may find that he produces vacuous nonsense in part because that sells.

A more charitable reading, however, may find that in human affairs, apparent opposites may be true, depending on context. There are sometimes obvious points from everyday life: it’s good to be kind, unless kindness becomes a weakness. Or is it good to be hard, not kind, because the world is a tough place? Many aphorisms contradict other aphorisms because human life is messy and often paradoxical. So people giving “life advice,” or whatever one may call it, tend to suffer the same problems.

You may notice that religious texts are wildly popular but not internally consistent. There seems to be something in the human psyche that responds to attractive stories more than consistency and verifiability.

More:

[Peterson] is popular partly because academia and the left have failed spectacularly at helping make the world intelligible to ordinary people, and giving them a clear and compelling political vision.

Makes sense to me. When much of academia has abdicated any effort to find meaning in the larger world or impart somewhat serious ideas about what it means to be and to exist in society, apart from particular political theories, we shouldn’t be surprised when someone eventually comes along and attracts followers from those adrift.

In other words, Robinson has a compelling theory about what makes Peterson popular, but he doesn’t have a compelling theory about how the humanities in academia might rejoin planet earth (though he notes, correctly, that “the left and academia actually bear a decent share of blame [. . .] academics have been cloistered and unhelpful, and the left has failed to offer people a coherent political alternative”).

Too many academics on the left also see their mission as advocacy first and learning or impartial judgment second. That creates a lot of unhappiness and alienation in classrooms and universities. We see problems with victimology that have only recently started being addressed. Peterson tells people not to be victims; identifying as a victim is often bad even for people who are genuine victims. There’s much more to be said about these issues, but they’ll have to be saved for some other essay—or browse around Heterodox Academy.

More:

Sociologist C. Wright Mills, in critically examining “grand theorists” in his field who used verbosity to cover for a lack of profundity, pointed out that people respond positively to this kind of writing because they see it as “a wondrous maze, fascinating precisely because of its often splendid lack of intelligibility.” But, Mills said, such writers are “so rigidly confined to such high levels of abstraction that the ‘typologies’ they make up—and the work they do to make them up—seem more often an arid game of Concepts than an effort to define systematically—which is to say, in a clear and orderly way, the problems at hand, and to guide our efforts to solve them.”

Try reading Jung. He’s “a wondrous maze” and often unintelligible—and certainly not falsifiable. Yet people like and respond to him, and he’s inspired many artists, in part because he’s saying things that may be true—or may be true in some circumstances. Again, literary theorists do something similar. Michel Foucault is particularly guilty of nonsense (why people love his History of Sexuality, which contains little history and virtually no citations, is beyond me). In grad school a professor assigned Luce Irigaray’s Sexes and Genealogies, a book that makes both Foucault and Peterson seem lucid and specific by comparison.

Until Robinson’s essay I’d not heard of C. Wright Mills, but I wish I’d heard of him back in grad school; in that atmosphere, where many dumb ideas feel so important because the stakes are so low, he would’ve been revelatory. He may help explain what’s wrong in many corners of what’s supposed to be the world of ideas.

Oddly, the Twitter account Real Peer Review has done much of the work aggregating the worst offenders in published humanities nonsense (a long time ago I started collecting examples of nonsense in peer review but gave up because there was so much of it and pointing out nonsense seemed to have no effect on the larger world).

the Peterson way is not just futile because it’s pointless, it’s futile because ultimately, you can’t escape politics. Our lives are conditioned by economic and political systems, like it or not [. . .]

It’s true, I suppose, in some sense, that you can’t escape politics, but must all of life be about politics, everywhere, all the time? I hope not. One hears that “the personal is the political,” which is both irritating and wrong. Sometimes the personal is just personal. Or political dimensions may be present but very small and unimportant, like relativity acting on objects moving at classical speeds. The politicizing of everyday life may be part of what drives searching people towards Peterson.
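To put a rough number on the relativity analogy, here is a quick back-of-the-envelope sketch in Python (the speed chosen is just illustrative):

```python
import math

C = 299_792_458.0  # speed of light, m/s
v = 250.0          # roughly an airliner's cruising speed, m/s

# Lorentz factor gamma = 1 / sqrt(1 - v^2/c^2); gamma - 1 measures
# how large the relativistic effect is at this speed.
gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
print(f"gamma - 1 = {gamma - 1:.2e}")  # ~3.5e-13
```

The effect is real but a dozen decimal places away from mattering, which is the sense in which a political dimension can be present in everyday life without being important.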

Sometimes people want to live outside the often-dreary shadow of politics, but some aspects of social media make that harder. I’ve observed to friends that the more I see of someone on Facebook, the less I tend to like them (maybe the same is true of others who know me via Facebook). Maybe social media also means that the things that could be easily ignored in a face-to-face context, or just not known, get highlighted in an unfortunate and extremely visible way. Social media seems to heighten our mimetic instincts in not-good ways.

We seem to want to sort ourselves into political teams more readily than we used to, and we seem more likely to cut off relationships due to slights or beliefs that wouldn’t have been visible to us previously. In some sense we can’t escape politics, but many if not most of us feel that politics is not our defining characteristic.

I’m happy to read Peterson as a symptom and a response, but the important question then becomes, “To what? Of what?” There are a lot of possible answers, some of which Robinson engages—which is great! But most of Peterson’s critics don’t seem to want to engage the question, let alone the answer.

The rest of us are back to the war of art, which has first of all to be good, rather than agreeing with whatever today’s social pieties may be.

What would a better doctor education system look like?

A reader of “Why you should become a nurse or physicians assistant instead of a doctor: the underrated perils of medical school,” asks, though not quite in this way, what a better doctor education system would look like. It’s surprising that it’s taken so long and so many readers for someone to ask, but before I answer, let me say that, while the question is important, I don’t expect to see improvement. That’s because current, credentialed doctors are highly invested in the system and want to keep barriers to entry high—which in turn helps keep salaries up. In addition, there are still many people trying to enter med school, so the supply of prospective applicants props the system up. Meanwhile, people who notice high wages in medicine but who also notice how crazy the med school system is can turn to PA or NP school as reasonable alternatives. With so little pressure on the system and so many stakeholders invested, why change?

That being said, the question is intellectually interesting if useless in practice, so let’s list some possibilities:

1. Roll med school into undergrad. Do two years of gen eds, then start med school. Even assuming med school needs to be four years (it probably doesn’t), that would slice two years of high-cost education off the total bill.

2. Allow med students, or for that matter anyone, to “challenge the test.” If you learn anatomy on your own and with YouTube, you can take the test and skip three to six (expensive) weeks of mind-numbing lecture courses. Telling students, “You can spend $4,000 on courses or learn it yourself and then take a $150 test” will likely have… unusual outcomes, compared to what professors claim students need.

3. Align curriculums with what doctors actually do. Biochem is a great subject that few specialties actually use; require it for the specialties that do. Don’t mandate biochem for family docs, ER, etc.

4. Allow competition among residencies—that is, allow residents to switch on, say, a month-by-month basis, like a real job market.

There are probably others, but these are some of the lowest-hanging fruit. We’re also not likely to see many of these changes for the reason mentioned above—lots of people have a financial stake in the status quo—but also because so much of school is about signaling, not learning. The system works sub-optimally, but it also works “well enough.” Since the present system is good enough and the current medical cartel likes things as they are, it’s up to uncredentialed outsiders like me to observe possible changes that’ll never be implemented by insiders.

I wrote “Why you should become a nurse or physicians assistant instead of a doctor: the underrated perils of medical school” five years ago, and in that time we’ve seen zero changes at the macro level. Some individuals have likely been saved from screwing up their lives via med school, and some of them have left comments or sent me emails saying as much, and that’s great. But it hasn’t been sufficient to generate systemic change.

“Pop culture today is obsessed with the battle between good and evil. Traditional folktales never were. What changed?”

“The good guy/bad guy myth: Pop culture today is obsessed with the battle between good and evil. Traditional folktales never were. What changed?” is one of the most interesting essays on narrative and fiction I’ve ever read, and while I, like most of you, am familiar with the prevalence of good guys and bad guys in fiction, I wasn’t cognizant of the way pure good and pure evil as fundamental characterizations only really proliferated around 1700.

In other words, I didn’t notice the narrative water in which I swim. Yet now I can’t stop thinking about a lot of narrative in the terms described.

A while ago, I read most of Neil Gaiman’s Norse Mythology and found it boring, perhaps in part because the characters didn’t seem to stand for anything beyond themselves, and they didn’t seem to want anything greater than themselves in any given moment. Yet for most of human civilization, that kind of story may have been more common than many modern stories.

Still, I wonder if we should be even more skeptical of good-versus-evil stories than I would’ve thought prior to reading this essay.

 

Lost technologies, Seveneves, and The Secret of Our Success

Spoilers ahead, but if you haven’t read Seveneves by now they probably don’t matter.

Seveneves is an unusual and great novel, and it’s great as long as you attribute some of its less plausible elements to an author building a world. One less-plausible element is the way humanity comes together and keeps the social, political, and economic systems functional enough to launch large numbers of spacecraft in the face of imminent collective death. If we collectively had two years to live, I suspect total breakdown would follow, leaving us with no Cloud Ark (and no story—thus we go along with the premise).

But that’s not the main thing I want to write about. Instead, consider the loss of knowledge that inherently comes with population decline. In Seveneves humanity declines to seven women living in space on a massive iron remnant of the moon. They slowly repopulate, with their descendants living in space for five thousand years. But a population of seven would probably not be able to retain and transmit the specialized knowledge necessary for survival on most parts of Earth, let alone space.

That isn’t a speculative claim. We have pretty good evidence for the way small populations lose knowledge. Something drew me to re-reading Joseph Henrich’s excellent book The Secret of Our Success, and maybe the sections about technological loss are part of it. He writes about many examples of European explorers getting lost and dying in relatively fecund environments because they don’t have the local knowledge and customs necessary to survive. He writes about indigenous groups too, including the Polar Inuit, who “live in an isolated region of northwestern Greenland [. . . .] They are the northernmost human population that has ever existed” (211). But

Sometime in the 1820s an epidemic hit this population and selectively killed off many of its oldest and most knowledgeable members. With the sudden disappearance of the know-how carried by these individuals, the group collectively lost its ability to make some of its most crucial and complex tools, including leisters, bows and arrows, the heat-trapping long entryways for snow houses, and most important, kayaks.

As a result, “The population declined until 1862, when another group of Inuit from around Baffin Island ran across them while traveling along the Greenland coast. The subsequent cultural reconnection led the Polar Inuit to rapidly reacquire what they had lost.” Which is essential:

Though crucial to survival in the Arctic, the lost technologies were not things that the Polar Inuit could easily recreate. Even having seen these technologies in operation as children, and with their population crashing, neither the older generation nor an entirely new generation responded to Mother Necessity by devising kayaks, leisters, compound bows, or long tunnel entrances.

Innovation is hard and relatively rare. We’re all part of a network that transmits knowledge horizontally, from peer to peer, and vertically, from older person to younger person. Today, people in first-world countries are used to innovation because we’re part of a vast network of billions of people who are constantly learning from each other and transmitting the innovations that do arise. We’re used to seemingly automatic innovation, because so many people are working on so many problems. Unless we’re employed as researchers, we’re often not cognizant of how much effort goes into both discovery and transmission.

Without that dense network of people, though, much of what we know would be lost. Maybe the best-known example of technology loss came with the fall of the Roman Empire; another is the way the ancient Egyptians lost the know-how necessary to build pyramids and other epic engineering works.

In a Seveneves scenario, it’s highly unlikely that the novel’s protagonists would be able to sustain and transmit the knowledge necessary to live somewhere on earth, let alone somewhere as hostile as space. Quick: how helpful would you be in designing and manufacturing microchips, solar panels, nuclear reactors, or oxygen systems, or in engineering plants? Yeah, me too. Those complex technologies have research, design, and manufacturing facets that are embodied in the heads of thousands if not millions of individuals. The level of specialization our society has achieved is incredible, but we rarely think about how incredible it really is.

This is not so much a criticism of the novel—I consider the fact that they do survive part of granting the author his due—but it is a contextualization of the novel’s ideas. The evidence that knowledge is fragile is more pervasive and available than I’d thought when I was younger. We like stories of individual agency, but in actuality we’re better conceived of as parts in a massive system. We can see our susceptibility to conspiracy theories as beliefs in the excessive power of the individual. In an essay from Distrust That Particular Flavor, William Gibson writes: “Conspiracy theories and the occult comfort us because they present models of the world that more easily make sense than the world itself, and, regardless of how dark or threatening, are inherently less frightening.” The world itself is big, densely interconnected, and our ability to change it is real but often smaller than we imagine.

Henrich writes:

Once individuals evolve to learn from one another with sufficient accuracy (fidelity), social groups of individuals develop what might be called collective brains. The power of these collective brains to develop increasingly effective tools and technologies, as well as other forms of nonmaterial culture (e.g., know-how), depends in part on the size of the group of individuals engaged and on their social connectedness. (212)

The Secret of Our Success also cites laboratory recreations of similar principles; those experiments are too long to describe here, but they are clever. If there are good critiques of the chapter and idea, I haven’t found them (and if you know any, let’s use our collective brain by posting links in the comments). Henrich emphasizes:

If a population suddenly shrinks or gets socially disconnected, it can actually lose adaptive cultural information, resulting in a loss of technical skills and the disappearance of complex technologies. [. . . ] A population’s size and social interconnectedness sets a maximum on the size of a group’s collective brain. (218-9)

That size cap means that small populations in space, even if they are composed of highly skilled and competent individuals, are unlikely to survive over generations. They are unlikely to survive even if they have the rest of humanity’s explicit knowledge recorded on disk. There is too much tacit knowledge for explicit knowledge in and of itself to be useful, as anyone who has ever tried to learn from a book and then from a good teacher knows. Someday we may be able to survive indefinitely in space, but today we’re far from that stage.
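Henrich and his colleagues have formalized this idea in simple models of cultural transmission. The sketch below is a toy version of my own devising, not his actual model: each generation, everyone tries to copy the group’s most skilled member, usually losing a little in the copying and occasionally getting lucky, and the group keeps the best result. The parameters are arbitrary; the threshold behavior is the point.

```python
import random

def best_skill_after(pop_size, generations=100, start=10.0):
    """Toy cultural-transmission model: each generation, pop_size
    learners copy the most skilled member. Copies usually come out a
    bit worse (mean loss 2.0) but sometimes better (spread 2.0), and
    the group keeps its best copy. More learners, more lucky draws."""
    best = start
    for _ in range(generations):
        copies = [best + random.gauss(-2.0, 2.0) for _ in range(pop_size)]
        best = max(0.0, max(copies))  # skill floor at zero
    return best

random.seed(0)
for n in (3, 30, 300):
    print(f"population {n:>3}: best skill after 100 generations = "
          f"{best_skill_after(n):.1f}")
```

Run it and the group of three typically ends near zero, the Polar Inuit story in miniature, while the larger groups end far above where they started: skill accumulates only when there are enough learners for lucky copies to outpace the losses.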

Almost all post-apocalyptic novels face the small-population dilemma to some extent (I’d argue that Seveneves can be seen as a post-apocalyptic novel with a novel apocalypse). Think of the role played by the nuclear reactor in Stephen King’s The Stand: the characters in the immediate aftermath must decide if they’re going to live in the dark and regress to hunter-gatherer times, at best, or if they’re going to save and use the reactor to live in the light (the metaphoric implications are not hard to perceive here). In one of the earliest post-apocalyptic novels, Earth Abides, two generations after the disaster, descendants of technologically sophisticated people are reduced to using melted-down coins as tips for spears and arrows. In Threads, the movie (and my nominee for scariest movie ever made), the descendants of survivors of nuclear war lose most of their vocabulary and are reduced to an impoverished language that is, by modern standards, a sort of inadvertent 1984 newspeak.* Let’s hope we don’t find out what actually happens after nuclear war.

In short, kill enough neurons in the collective brain and the brain itself stops working. Which has happened before. And it could happen again.


* Check out the cars in Britain in Threads: that reminds us of the possibilities of technological progress and advancement.

Why read bestsellers

Someone wrote to ask why I bother writing about John Grisham’s weaknesses as a writer; implied in the question is a second one: why read bestsellers at all? The first is a fair question, and so is its implication: Grisham’s readers don’t read me and don’t care what I think; they don’t care that he’s a bad writer; and people who read me probably aren’t going to read him. Still, I read him because I was curious, and I wrote about him to report what I found.

The answer to the second one is easy: Some are great! Not all, probably not even most, but enough to try. Lonesome Dove, the best novel I’ve read recently, was a bestseller. Its sequel, Streets of Laredo, is not quite as good but I’m glad to have read it. Elmore Leonard was often a bestseller and he is excellent. Others seemed like they’d be bad (Gillian Flynn, Tucker Max) but turned into favorites.

One could construct a 2×2 matrix of good famous books; bad famous books; good obscure books; and bad obscure books. That last one is a large group too; credibility among a handful of literary critics (who may be scratching each other’s backs anyway) does not necessarily equate to quality, and I’ve been fooled by good reviews of mostly unknown books many times. Literary posturing is not the same as actual quality.

Different people also have different views around literary quality, and those views depend in part on experience and reading habits. Someone who reads zero or one books a year is likely to have very different impressions than someone who reads ten or someone who reads fifty or a hundred. Someone who is reading like a writer will probably have a different experience than someone who reads exclusively in a single, particular genre.

And Grisham? That article (which I wish I could find) made him and especially Camino Island sound appealing, and the book does occasionally work. But its addiction to cliché and the sort of overwriting common in student writing make it unreadable in my view. Someone who reads one or two books a year, though, and for whom Grisham is one of those books, will probably like him just fine, because they don’t have the built-up stock of reading that lets them distinguish what’s really good from what isn’t.

Work and video games

I was reading “Escape to Another World” (highly recommended) and this part made me realize something:

How could society ever value time spent at games as it does time spent on “real” pursuits, on holidays with families or working in the back garden, to say nothing of time on the job? Yet it is possible that just as past generations did not simply normalise the ideal of time off but imbued it with virtue – barbecuing in the garden on weekends or piling the family into the car for a holiday – future generations might make hours spent each day on games something of an institution.

I think part of the challenge is that, historically, many of us pursue hobbies and other activities that are also related to craftsmanship. The world is full of people who, in their spare time, rebuild bikes or cars, or sew quilts, or bind books, or write open-source software, or pursue other kinds of hobbies that have virtues beyond the pleasure of the hobby itself (I am thinking of a book like Shop Class as Soulcraft, though if I recall correctly the idea of craftsmanship as a virtue of its own goes back to Plato). A friend of mine, for example, started up pottery classes; while she enjoys the process, she also gets bowls and mugs out of it. Video games seem to have few or none of those secondary effects.

To be sure, a lot of playing video games has likely replaced watching TV, and watching TV has none of those salutary effects either. Still, one has to wonder if video games are also crowding out more active pursuits that build other kinds of skills (as well as useful objects).

I say this as someone who wasted a fantastic amount of time on video games from ages 12 to 15 or so. Those are years I should’ve been building real skills and abilities (or even having real fun), and instead I spent a lot of them slaying imaginary monsters as a way of avoiding the real world. I can’t imagine being an adult and spending all that time on video games. We can never get back the time we waste, and wasted time compounds—as does invested time.

In my own life, the hobby time I’ve spent reading feeds directly into my professional life. The hobby time I spent working on newspapers in high school and college does too. Many people won’t have so direct a connection—but many do, and will.

To be sure, lots of people play recreational video games that don’t interfere with the rest of their lives. Playing video games as a way of consciously wasting time is fine, but when wasting time becomes a primary activity instead of a secondary or tertiary one, it becomes a problem. It’s possible to waste a single day mucking around or playing a game or whatever—I have, and chances are very high that so have you—but the pervasiveness of wasted time seems new, as Avent writes.

It’s probably better to be the person writing the games than playing the games (and writing them can at times take on some game-like qualities). When you’re otherwise stuck, build skills. No one wants skills in video game playing, but lots of people want other skills that aren’t being built by battling digital orcs. The realest worry may be that many people who start the video game spiral won’t be able to get out.
