The death of literary culture

At The Complete Review, Michael Orthofer writes of John Updike that “Dead authors do tend to fade fast these days — sometimes to be resurrected after a decent interval has passed, sometimes not –, which would seem to me to explain a lot. As to ‘the American literary mainstream’, I have far too little familiarity with it; indeed, I’d be hard pressed to guess what/who qualifies as that.” He’s responding to a critical essay that says: “Much of American literature is now written in the spurious confessional style of an Alcoholics Anonymous meeting. Readers value authenticity over coherence; they don’t value conventional beauty at all.” I’m never really sure what “authenticity” and its cousin “relatability” mean; regarding the former, I find The Authenticity Hoax: How We Get Lost Finding Ourselves persuasive.

Literary culture itself is mostly dead, and I lived through its final throes—perhaps like someone who, living through the 1950s, watched religious Christianity fade as the dominant culture: it was essentially gone by the 1970s, though many claimed its legacy for years after the real thing had passed. What killed literary culture? The Internet is the most obvious, salient answer, and in particular the dominance of social media, which is in effect its own genre—and, frequently, its own genre of fiction. Almost everyone will admit that their social media profiles mostly showcase a version of their best or ideal selves, and, thinking of just about everyone I know well, or even slightly well, the gap between who they really are and what they are really doing, and what appears on their social media, is so wide as to qualify as fiction. Determining the “real” self is probably impossible, but spotting the fake selves is easier, and the fake is everywhere.

Everyone knows this, but admitting it is rarer. Think of all the social media photos of a person ostensibly alone—admiring the beach, reading, sunbathing, whatever—but the photographer is somewhere. A simple example, maybe, but also one without the political baggage of many other possible examples.

Much of what passes for social media discourse makes little or no sense, until one considers that most assertions are assertions of identity, not statements of fact, and many social media users are constructing a quasi-fictional universe not unlike the ones novels used to create. “QAnon” might be one easy modern example, albeit one that will probably go stale soon, if it’s not already stale; others will take its place. Many of these fictions are the work of group authors. Numerous assertions around gender and identity might be a left-wing-valenced version of the phenomenon, for readers who want balance, however spurious balance might be. Today, we’ve in some ways moved back to a world like that of the early novel and the early novelists, when “fact” and “fiction” were much more disputed, interwoven territories, and many novels claimed to be “true stories” on their cover pages. The average person today has poor epistemic hygiene for most topics not directly tied to income and employment, but a very keen sense of tribe, belonging, and identity—so views that may be epistemically dubious nonetheless succeed if they promote belonging (consider also The Elephant in the Brain by Robin Hanson and Kevin Simler for a more thorough elaboration on these ideas). Before social media, did most people really belong, or did they silently suffer through the feeling of not belonging? Or was something else at play? I don’t know.

In institutional terms, the academic and journalistic establishment that once formed the skeleton of literary culture has collapsed, and today journalists and academics have become modern clerics. The number of jobs in journalism has approximately halved since the year 2000; academic jobs in the humanities cratered in 2009, from an already low starting point, and never recovered; even high-school humanities teaching has a much more ideological, rather than humanistic, cast than it did ten years ago. What’s taken the establishment’s place, if anything? Instagram, Snapchat, TikTok, and, above all, Twitter. Twitter, in particular, seems to promote negative feedback and fear loops, in ways that media and other institutions haven’t yet figured out how to resist. The jobs that supported the thinkers, critics, and starting-out novelists aren’t there, and whatever might have replaced them, like Twitter, isn’t equivalent. The Internet doesn’t just push the price of most “content” (songs, books, and so forth) towards zero—it also changes what people do, including the people who used to make up what I’m calling literary culture or book culture.

Today, most of the power and vibrancy in book culture has shifted towards nonfiction—either narrative nonfiction, like Michael Lewis’s, or data-driven nonfiction, with too many examples to cite. Nonfiction still sells (sales aren’t a perfect representation of artistic merit or cultural vibrancy, but they’re not nothing, either). Dead authors fade fast today not solely or primarily because of their work, but because literary culture itself is going away fast, if it’s not already gone. When John Updike was in his prime, millions of people read him (or they at least bought Couples and could spit out some light book chat about it on command). The number of writers working today who the educated public, broadly conceived, might know about is small: maybe Elena Ferrante, Michel Houellebecq, Sally Rooney, and perhaps a few others (none of those three are American, I note). I can’t even think of a current figure like Elmore Leonard: someone writing linguistically interesting, highly plotted material. Bulk genre writers are still out there, but none I’m aware of with any literary ambition.

I caught the tail end of a humane and human-focused literary culture that’s largely been succeeded by a politically and morally focused culture that I hesitate to call literary, even though it’s taken over what remains of those literary-type institutions. It’s not surprising to me that this change has also coincided with a lessening of interest in those institutions: very few people want to be clerics and scolds—many fewer than wonder about the human condition. Shifting from the one to the other seems like a net loss to me, but also a net loss that I’m mostly unable to arrest or alter. If I had to pick a date range for this death, it’d probably be 2009–2015: the Great Recession eliminated many of the institutional jobs and professions that once existed, along with any plausible path into them for all but the luckiest, and by 2015 social media and scold culture had taken over. Culture is hard to define but easy to feel as you exist within and around it. By 2010, Facebook had become truly mainstream, and everyone’s uncle and grandma weren’t just on the Internet for email and search engines, but for other people and their opinions.

Maybe mainstream literary culture has been replaced by some number of smaller micro-cultures, but those microcultures don’t add up to what used to be a macroculture.

Reading back over this I realize it has the tone and quality of a complaint, but it’s meant mostly as a description: I’m trying to look at what’s happening, not whine about it. One could argue this change is for the better. Whining about aggregate behavior and choices has rarely, if ever, changed them. I don’t think literary culture will ever return, any more than Latin, epic poetry, classical music, opera, or any number of other once-vital cultural products and systems will. In some ways, we’re moving backwards, towards a cultural fictional universe with less clearly demarcated lines between “fact” and “fiction” (I remember being surprised, when I started teaching, by undergrads who didn’t know that novels and short stories are fiction, or who called nonfiction works “novels”). Every day, each of us is helping whatever comes next, become. The intertwined forces of technology and culture move primarily in a single direction. The desire for story will remain, but the manifestation of that desire won’t.

Where are the woke on Disney and China?

I have sat through numerous talks and seen numerous social media messages about the evils of imperialism, and in particular western imperialism—so where’s the mass outrage over China today, and the efforts by Disney and Hollywood to court China? China is a literal, real-world imperialist power, today; China has crushed Hong Kong’s independence, imprisoned perhaps a million of its own people based on their ethnicity and religion, and invaded and occupied Tibet—and Taiwan may be next. But I never read “imperialist” or “racist” critiques from the usual suspects. Why not?

Search for “imperialism” on Twitter, for example, and you’ll find numerous people denouncing what they take to be “imperialism” or various kinds of imperialisms, but few dealing with China. This bit about Bob Iger’s complicity with Chinese government repression got me thinking about why some targets draw much “woke” ire while others don’t. My working hypothesis is that China seems too far away from the United States and too different to understand—even though companies and individuals are regularly attacked for their associations with other Americans, they rarely seem to be for their associations with China. The NBA, to take another example, fervently favors police reform in the United States but is largely silent on China (to be sure, I don’t agree with all the posturing at the link, but pay attention to the underlying point). Perhaps the situation between the woke and China is analogous to the way comparisons to your wife’s sister’s husband’s income can create a lot of jealousy while comparisons to the truly wealthy don’t.

In addition, could it be that Disney’s specialty in childlike, Manichaean stories of simple good versus evil appeals to the same people, or kinds of people, most likely to be attracted to the quasi-religious “woke” mindset? To my knowledge, I’ve not seen these questions asked, and Disney products, like Star Wars movies and TV shows, seem to remain broadly popular, including on the far left. It’s also worth emphasizing that some have spoken about Disney’s actions; the Twitter thread about Iger links to “Why Disney’s new ‘Mulan’ is a scandal.” But the issue seems to elicit relatively little ire and attention compared to many others, and few sustained movements or organizations are devoted to it.

What views make someone a pariah, and why? What associations make someone a pariah, and why? What views and associations elicit intense anger, and why? I don’t have full answers to any of these questions but think them worth asking. No one seems to be calling for boycotts of Disney, even though Disney is toadying to an actual imperialist state.

Dissent, insiders, and outsiders: Institutions in the age of Twitter

How does an organization deal with differing viewpoints among its constituents, and how do constituents dissent?

Someone in Google’s AI division was recently fired, or the person’s resignation accepted, depending on one’s perspective, for reasons related to a violation of process and organizational norms, or something else, again depending on one’s perspective. The specifics of that incident can be disputed, but the more interesting level of abstraction might ask how organizations process conflict and what underlying conflict model participants have. I recently re-read Noah Smith’s essay “Leaders Who Act Like Outsiders Invite Trouble;” he’s dealing with the leadup to World War II but also says: “This extraordinary trend of rank-and-file members challenging the leaders of their organizations goes beyond simple populism. There may be no word for this trend in the English language. But there is one in Japanese: gekokujo.” And later, “The real danger of gekokujo, however, comes from the establishment’s response to the threat. Eventually, party bosses, executives and other powerful figures may get tired of being pushed around.”

If you’ve been reading the news, you’ll have seen gekokujo, as institutions are being pushed by the Twitter mob, and by the Twitter mob mentality, even when the mobbing person is formally within the institution. I think we’re learning, or going to have to re-learn, things like “Why did companies traditionally encourage people to leave politics and religion at the door?” and “What’s the acceptable level of discourse within the institution, before you’re not a part of it any more?”

Colleges and universities in particular seem susceptible to these problems, and some are cultivating environments and cultures that may not be good for working in large groups. One recent example of these challenges occurred at Haverford College, but here too the news offers many other examples, and the Haverford story seems particularly dreadful.

The basic idea that organizations have to decide who’s inside and who’s outside is old: Albert Hirschman’s Exit, Voice, and Loyalty: Responses to Decline in Firms, Organizations, and States is one great discussion. Organizations also used to unfairly exclude large swaths of the population based on demographic factors, and that’s (obviously) bad. Today, though, many organizations have in effect, if not intent, decided that it’s okay for some of their members to attack the good faith of other members and the coherence of the organization itself. There are probably limits to how much of this an organization can absorb and still remain functional, let alone maximally functional.

The other big change involves the ability to coordinate relatively large numbers of people: digital tools have made this easier, in a relatively short time—thus the “Twitter mob” terminology that came to mind a few paragraphs ago; I kept the term because it seems like a reasonable placeholder for that class of behavior. Digital tools make it easy for a small percentage of people to add up to a large absolute number of people. For example, if 100,000 people are interested in or somehow connected to an organization, and one percent of them want to fundamentally disrupt the organization, change its direction, or arrange an attack, that’s 1,000 people—which feels like a lot. It’s far above the Dunbar number and too many for one or two public-facing people to deal with. In addition, in some ways journalists and academics have become modern-day clerics, and they’re often eager to highlight and disseminate news of disputes of this sort.
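To make the scale issue concrete, here is a minimal back-of-the-envelope sketch in Python; the constituency sizes and the one-percent dissent share are illustrative assumptions of mine, not data about any particular organization:

```python
# Rough sketch: even a small dissenting share of a large constituency
# dwarfs the handful of people who can respond to it.

DUNBAR_NUMBER = 150  # rough cognitive limit on stable relationships

def dissenters(total_constituents: int, dissent_share: float) -> int:
    """Absolute size of a small-but-motivated faction."""
    return round(total_constituents * dissent_share)

for total in (10_000, 100_000, 1_000_000):
    n = dissenters(total, 0.01)  # assume 1% want to disrupt or attack
    status = "above" if n > DUNBAR_NUMBER else "below"
    print(f"{total:>9,} constituents -> {n:>6,} dissenters ({status} the Dunbar number)")
```

At 100,000 constituents, the one-percent faction is 1,000 people, roughly seven times the Dunbar number, which is part of why one or two public-facing people can’t simply absorb or talk down the wave.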

Over time, I expect organizations are going to need to develop new cultural norms if they’re going to maintain their integrity in the face of coordinated groups that represent relatively small percentages of people but large absolute numbers of people. The larger the organization, the more susceptible it may be to these kinds of attacks. I’d expect more organizations to, for example, explicitly say that attacking other members of the organization in bad faith will result in expulsion, as seems to have happened in the Google example.

Evergreen State College, which hosted an early example of this kind of attack (on a biology professor named Bret Weinstein), has seen its enrollment drop by about a third.

Martin Gurri’s book The Revolt of the Public and the Crisis of Authority in the New Millennium examines the contours of the new information world, and the relative slowness of institutions to adapt to it. Even companies like Google, Twitter, and Facebook, which have enabled sentiment amplification, were founded before their own user bases became so massive.

Within organizations, an excess of conformity is a problem—innovation doesn’t occur from simply following orders—but so is an excess of chaos. Modern intellectual organizations, like tech companies or universities, probably need more “chaos” (in the sense of information transfer) than, say, old-school manufacturing companies, which primarily needed compliance. “Old-school” is a key phrase, because from what I understand, modern manufacturing companies are all tech companies too, and they need the people closest to the process to be able to speak up if something is amiss or needs to be changed. Modern information companies need workers to speak up and suggest new ideas, new ways of doing things, and so on. That’s arguably part of the job of every person in the organization.

Discussion at work of controversial identity issues can probably function if all parties assume good faith from the other parties (Google is said to have had a freewheeling culture in this regard from around the time of its founding up till relatively recently). Such discussions probably won’t function without fundamental good faith. Good faith is hard to describe: most of us know it when we see it, defining every element of it would probably be impossible, and cultivating it as a general principle is desirable. Trying to maintain such an environment is tough: I know that intimately because I’ve tried to maintain it in classrooms, and those experiences led me to write “The race to the bottom of victimhood and ‘social justice’ culture.” It’s hard to teach, or run an information organization, without a culture that lets people think out loud, in good faith, with relatively little fear of arbitrary reprisal. Universities, in particular, are supposed to be oriented around generating and discussing new ideas. Organizations also need some amount of hierarchy: without it, decisions can’t or don’t get made, and organizational processes themselves don’t function. Excessive attacks lead to the “gekokujo” problem Smith describes. Over time, organizations are likely going to have to develop antibodies to the novel dynamics of the digital world.

A lot of potential learning opportunities aren’t happening, because we’re instead dividing people into inquisitors and heretics, when very few should be the former, and very few are truly the latter. One aspect of “professionalism” might be “assuming good faith on the part of other parties, until proven otherwise.”

On the other hand, maybe these cultural skirmishes don’t matter much, like brawlers in a tavern across the street from the research lab. Google’s AlphaFold has made a huge leap in protein folding efforts (Google reorganized itself, so technically both Google and DeepMind, which built AlphaFold, are part of the “Alphabet” parent company). Waymo, another Alphabet endeavor, may be leading the way towards driverless cars, and it claims to be expanding its driverless car service. Compared to big technical achievements, media fights are minor. Fifty years from now, driverless cars and customizable biology will be taken for granted, and people will struggle to understand what was culturally at stake, in much the way most people today don’t get what the Know-Nothing Party or the Hundred Years’ War were really about, even as we take electricity and the printing press for granted.

EDIT: Coinbase has publicly taken a “leave politics and religion at the door” stand. They’re an innovator, or maybe a back-to-the-future company, in these terms.

 

Personal epistemology, free speech, and tech companies

The NYT describes “The Problem of Free Speech in an Age of Disinformation,” and in response Hacker News commenter throwaway13337 says, in part, “It’s not unchecked free speech. Instead, it’s unchecked curation by media and social media companies with the goal of engagement.” There’s some truth to the idea that social media companies have evolved to seek engagement, rather than truth, but I think the social media companies are reflecting a deeper human tendency. I wrote back to throwaway13337: “Try teaching non-elite undergrads, and particularly assignments that require some sense of epistemology, and you’ll discover that the vast majority of people have pretty poor personal epistemic hygiene—it’s not much required in most people, most of the time, in most jobs.”

From what I can tell, we evolved to form tribes, not to be “right”: Jonathan Haidt’s The Righteous Mind: Why Good People Are Divided by Politics and Religion deals with this topic well and at length, and I’ve not seen any substantial rebuttals of it. We don’t naturally take to tracking the question, “How do I know what I know?” Instead, we naturally seem to want to find “facts” or ideas that support our preexisting views. In the HN comment thread, someone asked for specific examples of poor undergrad epistemic hygiene, and while I’d prefer not to get super specific for reasons of privacy, I’ve had many conversations that take the following form: “How do you know article x is accurate?” “Google told me.” “How does Google work?” “I don’t know.” “What does it take to make a claim on the Internet?” “Um. A phone, I guess?” A lot of people—maybe most—will uncritically take as fact whatever happens to be served up by Google (it’s always Google and never DuckDuckGo or Bing), and most undergrads whose work I’ve read will, again uncritically, accept clickbait sites and similar as accurate. Part of the reason is that undergrads’ lives are minimally affected by being wrong or incomplete about some claim made in a short assignment imposed by some annoying toff of a professor standing between them and their degree.

The gap between elite information discourse and everyday information discourse, even among college students, who may be more sophisticated than their peers who don’t attend college, is vast—so vast that I don’t think most journalists (who mostly talk to other journalists and to experts) and other people who work with information, data, and ideas truly understand it. We’re all living in bubbles. I don’t think I did, either, before I saw the epistemic hygiene most undergrads practice, or don’t practice. This is not a “kids these days” rant, either: many of them have never really been taught to ask themselves, “How do I know what I know?” Many have never really learned anything about the scientific method. It’s not happening much in most non-elite schools, so where are they going to get epistemic hygiene from?

The United States alone has 320 million people in it. Table DP02 from the Census Bureau’s data.census.gov estimates that 20.3% of the population age 25 and older have a bachelor’s degree, and 12.8% have a graduate or professional degree. Before someone objects, let me admit that a college degree is far from a perfect proxy for epistemic hygiene or general knowledge, and some high school dropouts perform much better at cognition, metacognition, statistical reasoning, and so forth, than do some people with graduate degrees. With that said, a college degree is probably a decent approximation for baseline abstract reasoning skills and epistemic hygiene. Most people, though, don’t connect with or think in terms of aggregated data or abstract reasoning—one study, for example, finds that “Personal experiences bridge moral and political divides better than facts.” We’re tribe builders, not fact finders.

Almost anyone who wants a megaphone in the form of one of the many social media platforms available now has one. The number of people motivated by questions like “What is really true, and how do I discern what is really true? How do I enable myself to get countervailing data and information into my view, or worldview, or worldviews?” is not zero, again obviously, but it’s not a huge part of the population. And many very “smart” people in an IQ sense use their intelligence to build better rationalizations, rather than to seek truth (and I may be among the rationalizers: I’m not trying to exclude myself from that category).

Until relatively recently, almost everyone with a media megaphone had some kind of training or interest in epistemology, even if they didn’t call it “epistemology.” Editors would ask, “How do you know that?” or “Who told you that?” or that sort of thing. Professors have systems that are supposed to encourage greater-than-average epistemic hygiene (these systems were not and are not perfect, and nothing I have written so far implies that they were or are).

Most people don’t care about the question, “How do you know what you know?” and are fairly surprised if it’s asked, implicitly or explicitly. Some people are intrigued by it, but most aren’t, and view questions about sources and knowledge as a hindrance. This is less likely to be true of people who aspire to be researchers or work in other knowledge-related professions, but that describes only a small percentage of undergraduates, particularly at non-elite schools. And the “elite schools” thing drives a lot of the media discourse around education. One of the things I like about Professor X’s book In the Basement of the Ivory Tower is how it functions as a corrective to that discourse.

For most people, floating a factually incorrect conspiracy theory online isn’t going to negatively affect their lives. If a nurse gives a patient the wrong medication, that person is not going to be a nurse for long. If the nurse states or repeats a factually incorrect political or social idea online, particularly but not exclusively under a pseudonym, that nurse’s life likely won’t be affected. There’s no truth feedback loop. The same is true for someone working in, say, construction, or engineering, or many other fields. The person is free to state things that are factually incorrect, or incomplete, or misleading, and doing so isn’t going to have many negative consequences. Maybe it will have some positive consequences: one way to show that you’re really on team x is to state or repeat falsehoods that show you’re on team x, rather than on team “What is really true?”

I don’t want to get into daily political discourse, since that tends to raise defenses and elicit anger, but the last eight months have demonstrated many people’s problems with epistemology, and in a way that can have immediate, negative personal consequences—but not for everyone.

Pew Research data indicate that a quarter of US adults didn’t read a book in 2018; this is consistent with other data indicating that about half of US adults read zero or one books per year. Again, yes, there are surely many individuals who read other materials and have excellent epistemic hygiene, but this is a reasonable mass proxy, given the demands that reading makes on us.

Many people driving the (relatively) elite discourse don’t realize how many people are not only not like them, but wildly not like them, along numerous metrics. It may also be that we don’t know how to deal with gossip at scale. Interpersonal gossip is all about personal stories, while many problems at scale are best understood through data—but the number of people deeply interested in data and data’s veracity is small. And elite discourse has some of its own possible epistemic falsehoods, or at least uncertainties, embedded within it: some of the populist rhetoric against elites is rooted in truth.

A surprisingly large number of freshmen don’t know the difference between fiction and nonfiction, or that novels are fiction. Not a majority, and I don’t think most freshmen confuse fiction and nonfiction, or genres of nonfiction, but enough do for the confusion to be a noticeable pattern; I was surprised when I first encountered it, and I’m not any longer (modern distinctions between fiction and nonfiction only really arose, I think, during the Enlightenment and the rise of the novel in the 18th century, although off the top of my head I don’t have a good citation for this historical point, apart perhaps from Ian Watt’s work on the novel). Maybe online systems like Twitter or Facebook allow average users to revert to an earlier mode of discourse in which the border between fiction and nonfiction is more porous, and the online systems have strong fictional components that some users don’t care to segregate.

We are all caught in our bubble, and the universe of people is almost unimaginably larger than the number of people in our bubble. If you got this far, you’re probably in a nerd bubble: usually, anything involving the word “epistemology” sends people to sleep or, alternately, scurrying for something like “You won’t believe what this celebrity wore/said/did” instead. Almost no one wants to consider epistemology; to do so as a hobby is rare. One person’s disinformation is another person’s teambuilding. If you think the preceding sentence is in favor of disinformation, by the way, it’s not.

Bringing Up Bébé – Pamela Druckerman

This is really a book about how to do things, and about how the way we do things says something about who we are. Fiction is often about culture, and so is Bringing Up Bébé. Cross-cultural comparisons are (still) underrated and we should do more of them; you can think of Michel Houellebecq’s work as being about the dark side of France and Druckerman’s as being about the light side of France (noting that she’s a transplanted American). Bringing Up Bébé is a parenting book, yes, but also a living book—that is, a book about how to live. I bought it, let it sit around for a while, and only started it when I couldn’t find anything else to read, only to be delighted and surprised. Let me quote from a section of the book; each paragraph below comes from a separate section, but put them together and one can see the differences between American-style families and French-style families:

French experts and parents believe that hearing “no” rescues children from the tyranny of their own desires.

As with teaching kids to sleep, French experts view learning to cope with “no” as a crucial step in a child’s evolution. It forces them to understand that there are other people in the world, with needs as powerful as their own.

French parents don’t worry that they’re going to damage their kids by frustrating them. To the contrary, they think their kids will be damaged if they can’t cope with frustration.

Walter Mischel says that capitulating to kids starts a dangerous cycle: “If kids have the experience that when they’re told to wait, that if they scream, Mommy will come and the wait will be over, they will very quickly learn not to wait. Non-waiting and screaming and carrying on and whining are being rewarded.”

“You must teach your child frustration” is a French parenting maxim.

As with sleep, we tend to view whether kids are good at waiting as a matter of temperament. In our view, parents either luck out and get a child who waits well or they don’t.

Since the ’60s, American parents seem to have become less inclined to say no and to let kids live with some frustration, and yet we need some frustration and difficulty in order to become whole people. I’m sure many teachers and professors are reading the quotes above and connecting them to their own classroom experiences. The tie into Jean Twenge’s book iGen and Jonathan Haidt and Greg Lukianoff’s The Coddling of the American Mind is almost too obvious to state; Haidt and Twenge’s books concern what smartphones are doing to the state of education, educational discourse, and educational institutions, and, while they cover smartphones and social media, those two technologies aren’t occurring in isolation. Excessive permissiveness appears to create neuroticism, unhappiness, and fragility, and excessive permissiveness seems to start for American parents somewhere between a few weeks and a few months after birth—and it never ends. But most of us don’t recognize it in the absence of an outside observer, the same way we often don’t recognize our own psychological challenges without one.

In Druckerman’s rendition, French parents are good at establishing boundaries, saying “no,” and, with babies, implementing “the pause”—that is, not rushing to the baby’s aid every time the baby makes some small noise or sound. She writes about how many children end up “stout,” to use the French euphemism for “fat,” because they don’t have established mealtimes but instead snack continuously, in part because parents won’t say “No, you need to wait” to their kids.

Failing to create reasonable boundaries from an early age leads to the failure to develop emotional resilience. “Reasonable” is an important word: it is possible to be too strict or to let kids struggle too much, just as it’s possible to do the opposite, and the right mix will likely depend on the kid and the situation.

French parenting culture spills into schools:

When Benoît took a temporary posting at Princeton, he was surprised when students accused him of being a harsh grader. “I learned that you had to say some positive things about even the worst essays,” he recalls. In one incident, he had to justify giving a student a D. Conversely, I hear that an American who taught at a French high school got complaints from parents when she gave grades of 18/20 and 20/20. The parents assumed that the class was too easy and that the grades were “fake.”

The whining I got from students also makes sense: in many U.S. schools, there’s not as strong a culture of excellence as there is a culture of “gold stars for everyone.” I understand the latter desire, having felt it myself in many circumstances, but it’s also telling how important a culture of excellence is once the school train tracks end and the less-mapped wilderness of the “real world” (a phrase that is misused at times) begins.

I routinely get feedback that class is too hard, likely because most classes and professors have no incentive to fight grade inflation, and the easiest way to get along is for students to pretend to learn and for us to pretend to teach. Real life, however, is rarely an “everybody gets an A” experience, and almost no one treats it that way: most people who eat bad food at a bad restaurant complain about it; most people whose doctor misses a diagnosis complain about the miss (and want excellence, not just kindness); most people prefer the best consumer tech products, like MacBook Airs or Dell XPS laptops, not the “good try” ones. Excellence itself is a key aspect of the real world but is often underemphasized in the current American education system (again, it is possible to over-emphasize it as well).

In my own work as a grant writing consultant, “good job” never occurs if the job is not good, and “you suck” sometimes occurs even if the job is good. Clients demand superior products and most people can’t write effectively, so they can’t do what I do. I’m keen to impart non-commodity skills that will help differentiate students from the untrained and poorly educated masses, but this demands a level of effort and precision beyond what most American schools seem to expect.

Having read Bringing Up Bébé, I’m surprised it’s not become a common read among professors and high school teachers—I think because it’s pitched as more of a parenting book and a popular “two different cultures” book. But it’s much subtler and more sociological than I would have thought, so perhaps I bought into its marketing too. There is also much in this book about how to teach and how to think about teaching. The French are arguably too strict and too mean to students. Americans are probably not strict enough, not demanding enough, and don’t set adequate standards. The optimal place is likely somewhere between the extremes.

Druckerman is also funny: “I realize how much I’ve changed when, on the metro one morning, I instinctively back away from the man sitting next to the only empty seat, because I have the impression that he’s deranged. On reflection, I realize my only evidence for this is that he’s wearing shorts.” Could shorts not be an indication of derangement? And Druckerman cops to her own neuroticisms, which a whole industry of parenting guides exists to profit from:

What makes “Is It Safe?” so compulsive is that it creates new anxieties (Is it safe to make photocopies? Is it safe to swallow semen?) but then refuses to allay them with a simple “yes” or “no.” Instead, expert respondents disagree with one another and equivocate.

Bébé is a useful contrast to the France depicted in Houellebecq novels. Same country, very different vantages. In Druckerman’s France, the early childhood education system works fairly well, not having to have a car is pleasant, food isn’t a battle, and pleasant eroticism seems to fuel most adults’ lives—including parents’. “Pleasant” is used twice deliberately. In Houellebecq’s France, empty nihilism reigns, most people are isolated by their attachment to machines, and most actions are or feel futile.

So who’s right? Maybe both writers. But Druckerman may also point to some reasons why France, despite pursuing many bad economic policies at the country level, is still impressively functional and in many ways a good place to live. The country’s education system is functioning well and so are its transit systems—for example, Paris’s Metro is being massively expanded, at a time when the U.S. is choking on traffic and struggling with absurdly high subway costs that prevent us from building out alternatives. New York’s last main trunk subway line was completed before World War II. Small and useful extensions have been completed since, but there is no substitute for opening a dozen or more new stations and 10+ miles at a time. Improved subway access reduces the need for high-cost cars and enables people to live better lives—something France is doing but the U.S. seems unable to achieve. AAA estimates the average total cost of owning an American car to be $9,282 per year. If French people can cut that to, say, $3,000 per year (taxes included) for subways, the French may be able to do a lot more with less.
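As a rough illustration of the arithmetic, here is a minimal sketch in Python; the AAA figure is the one quoted above, while the $3,000 transit figure is the assumption I floated rather than a measured cost:

```python
# Back-of-the-envelope comparison of car ownership vs. transit-centric living.
# These are illustrative numbers, not a cost study.

AAA_ANNUAL_CAR_COST = 9_282   # AAA estimate quoted above, USD per year
ASSUMED_TRANSIT_COST = 3_000  # assumed annual transit spending, taxes included

savings_per_car = AAA_ANNUAL_CAR_COST - ASSUMED_TRANSIT_COST
print(f"Roughly ${savings_per_car:,} per year freed up for each car a household can drop.")
```

That works out to roughly $6,000 a year per car dropped, which is one way a country with mediocre macro policy can still deliver a high quality of daily life.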

France’s bad macro policies and overly rigid labor market may be offset by good childcare and transit policies; Bébé could help explain why that is. Druckerman says, “Catering to picky kids is a lot of work” (“cater” appears four times in Bébé). If the French don’t do that work, Americans may be spending a lot of hours on it, rather than on leisure, that the French aren’t spending—thereby raising the relative quality of French life. Mismeasurement is everywhere, and, while I don’t want to praise France too much on the basis of a single work, I can see aspects of French culture that make sense and aspects of American culture that, framed correctly, don’t.

Have journalists and academics become modern-day clerics?

This guy was wrongly and somewhat insanely accused of sexual impropriety by two neo-puritans; stories about individual injustice can be interesting, but this one seems like an embodiment of a larger trend, and, although the story is long and some of the author’s assumptions are dubious, I think there’s a different, conceivably better, takeaway than the one implied: don’t go into academia (at least the humanities) or journalism. Both fields are fiercely, insanely competitive for very small amounts of money; because the money is so bad, many people get or stay in them for non-monetary ideological reasons, almost the way priests, pastors, or other religious figures used to choose low incomes and high purpose (or “purpose” if we’re feeling cynical). Not only that, but clerics often know the answer to the question before the question has even been asked, and they don’t need free inquiry because the answers are already available—attributes that are very bad, yet seem to be increasingly common, in journalism and academia.

Obviously journalism and academia have never been great fields for getting rich, but the business model for both has fallen apart in the last 20 years. The people willing to tolerate the low pay and awful conditions must have other motives for going into them (a few are independently wealthy). I’m not arguing that other motives have never existed, but today you’d have to be absurdly committed to those other motives. That there are new secular religions is not an observation original to me, but once I heard that idea a lot of other strange-seeming things about modern culture clicked into place. Low-pay, low-status, and low-prestige occupations must do something for the people who go into them.

Once an individual enters this highly mimetic and extremely ideological space, he becomes a good target for destruction—and a good scapegoat for anyone who isn’t getting the money or recognition they think they deserve, or for anyone who is simply angry or feels ill-used. The people who are robust or anti-fragile stay out of this space.

Meanwhile, less ideological and much wealthier professions may not have been, or be, immune from the cultural psychosis in a few media and academic fields, but they’re much less susceptible to mimetic contagions and ripping-downs. The people in them have greater incomes and resources. They have a greater sense of doing something in the world that is not primarily intellectual, and thus probably not primarily mimetic and ideological.

There’s a personal dimension to these observations, because I was attracted to both journalism and academia, but the former has shed at least half its jobs over the last two decades and the latter became untenable post-2008. I’ve had enough interaction with both fields to get their cultural tenor, and smart people largely choose more lucrative and less crazy industries. Like many people attracted to journalism, I read books like All the President’s Men in high school and wanted to model myself on Woodward and Bernstein. But almost no reporters today are like Woodward and Bernstein. They’re more likely to be writing Buzzfeed clickbait, and nothing generates more clicks than outrage. Smart people interested in journalism can do a minimal amount of research and realize that the field is oversubscribed and should be avoided.

When I hear students say they’re majoring in journalism, I look at them cockeyed, regardless of gender; there’s fierce competition coupled with few rewards. The journalism industry has evolved to take advantage of youthful idealism, much like fashion, publishing, film, and a few other industries. Perhaps that is why these industries inspire so many insider satires: the gap between idealistic expectation and cynical reality is very wide.

Even if thousands of people read this and follow its advice, thousands more will keep attempting to claw their way into journalism or academia. It is an unwise move. We have people like David Graeber buying into the innuendo and career-attack culture. Smart people look at this and do something else, something where a random smear is less likely to cost an entire career.

We’re in the midst of a new-puritan revival and yet large parts of the media ecosystem are ignoring this idea, often because they’re part of it.

It is grimly funny to have read the first story linked next to a piece that quotes Solzhenitsyn: “To do evil a human being must first of all believe that what he’s doing is good, or else that it’s a well-considered act in conformity with natural law. . . . it is in the nature of a human being to seek a justification for his actions.” Ideology is back, and destruction is easier than construction. Our cultural immune system doesn’t seem to have figured this out yet. Short-form social media like Facebook and Twitter arguably encourage black-and-white thinking, because there’s not enough space to develop nuance. There is enough space, however, to say that the bad guy is right over there, and we should go attack that bad guy for whatever thought crimes or wrongthink they may have committed.

Ideally, academics and journalists come to a given situation or set of facts and don’t know the answer in advance. In an ideal world, they try to figure out what’s true and why. “Ideal” is repeated twice because, historically, departures from the ideal are common, but ideological neutrality and an investigatory posture are preferable to knowing the answer in advance and judging people based on demographic characteristics and prearranged prejudices, yet those bad habits seem to have seeped into academic and journalistic cultures.

Combine this with present-day youth culture that equates feelings with facts and felt harm with real harm, and you get a pretty toxic stew—”toxic” being a favorite word of the new clerics. See further, America’s New Sex Bureaucracy. If you feel it’s wrong, it must be wrong, and probably illegal; if you feel it’s right, it must be right, and therefore desirable. This kind of thinking has generated some backlash, but not enough to save some of the demographic undesirables who wander into the kill zone of journalism or academia. Meanwhile, loneliness seems to be more acute than ever, and we’re stuck wondering why.

Is literature dead?

Is Literature Dead? The question can be seen as “more of the same,” and I’ll answer no: plenty of people, myself included, still find most video-based material boring. It’s not sufficiently information-dense and represents human interiority and thought poorly. A reasonable number of people in their teens or 20s feel the same way, despite growing up in iGen. Fewer, maybe, than in previous generations, but still some and still enough to matter.

Literature has probably always been a minority pursuit, and it has been for as long as I’ve been alive and cognizant. It’ll continue being a minority pursuit—but I don’t think it will go away, in part for aesthetic reasons and in part for practical ones. Reading fiction is still a powerful tool for understanding other people, their drives, their uncertainties, their strengths—all vital components of organizations and organizational structures. TV and movies can replace some fraction of that but not all of it, and it’s notable how often video mediums launch from literary ones, like a parasite consuming its host.

That said, the marginal value of literature may have shrunk because there’s a lot of good written material in non-literature form—more articles, more essays, more easily available and read. All that nonfiction means that literature, while still valuable, has more competition. I’ve also wondered if the returns to reading fiction diminish at some point: after the thousandth novel, does each additional one stop being as meaningful? Do you see “enough” of the human drama? If you’ve seen 92%, does getting to 92.5% mean anything? I phrase this as a question, not an answer, deliberately.

The biggest problem in my view is that a lot of literature is just not that good. Competition for time and attention is greater than it was even 20 or 30 years ago. Literature needs to recognize that and strive to be better: better written, better plotted, better thought-out. Too often it does not achieve those things. The fault is not all with Instagram-addled people. I still find readers in the most unlikely of places. They—we—will likely keep showing up there.

The college bribery scandal vs. Lambda School

Many of you have seen the news, but, while the bribery scandal is sucking up all the attention in the media, Lambda School is offering a $2,000/month living stipend to some students and Western Governors University is continuing to quietly grow. The Lambda School story is a useful juxtaposition with the college-bribery scandal. Tyler Cowen has a good piece on the bribery scandal (although to me the scandal looks pretty much like business-as-usual among colleges, which are wrapped up in mimetic rivalry, rather than a scandal as such, unless the definition of a scandal is “when someone accidentally tells the truth”):

Many wealthy Americans perceive higher education to be an ethics-free, law-free zone where the only restraint on your behavior is whatever you can get away with.

This may be an overly cynical take, but to what extent do universities act like ethics-free, law-free zones? They accept students (and their student loan payments) who are unlikely to graduate; they have no skin in the game regarding student loans; insiders understand the “paying for the party” phenomenon, while outsiders don’t; too frequently, universities don’t seem to defend free speech or inquiry. In short, many universities are exploiting information asymmetries between themselves and their students and those students’ parents—especially the weakest and worst-informed students. Discrimination against Asians in admissions is common at some schools and is another open secret, albeit less secret than it once was. When you realize what colleges are doing to students and their families, why is it a surprise when students and their families reciprocate?

To be sure, this is not true of all universities, not all the time, not all parts of all universities, so maybe I am just too close to the sausage factory. But I see a whole lot of bad behavior, even when most of the individual actors are well-meaning. Colleges have evolved in a curious set of directions, and no one attempting to design a system from scratch would choose what we have now. That is not a reason to imagine some kind of perfect world, but it is worth asking how we might evolve out of the current system, despite the many barriers to doing so. We’re also not seeing employers search for alternate credentialing sources, at least from what I can ascertain.

See also “I Was a College Admissions Officer. This Is What I Saw.” In a social media age, why are we not seeing more of these pieces? (EDIT: Maybe we are? This is another one, scalding and also congruent with my experiences.) Overall, I think colleges are really, really good at marketing, and arguably marketing is their core competency. A really good marketer, however, can convince you that marketing is not their core competency.

The elite case against big product “x” (today it’s Facebook)

For most of my life I’ve been reading well-structured, well-supported, well-written, and well-cited pieces arguing for why and how people should not do extremely popular thing x, where x can change based on the person making the argument. Often the argument is quite good but doesn’t create mass behavior change on the ground. I often agree with the argument, but whether I agree with it or not is less relevant than whether the majority of the population changes its behavior in measurable ways (for truly popular products and services, they don’t). Today, the x is Facebook.

Based on past examples of “the elite case against ‘x,'” I predict that today’s NYT and BBC articles do very little to change real-world, measurable behavior around Facebook and social media. To the extent people move away from Facebook, it will be toward some other Facebook property like Instagram or toward some other system that still has broadly similar properties, like Discord, Snapchat, etc. Today’s case against Facebook, or social media more generally, reminds me of the elite case against:

* TV. TV rots your brain and is worse than reading books. It destroys high culture and is merely a vehicle for advertising. Sophisticated pleasures are better than reality TV and the other “trash” on TV. Yet TV remains popular. Even in 2017, “Watching TV was the leisure activity that occupied the most time (2.8 hours per day).” And 2.8 hours per day is lower than the “four hours per day” figure I’ve seen quoted elsewhere. Today, though, most people, even cultural elites, don’t even bother arguing against TV.

* Fast food, especially McDonald’s, Taco Bell, etc. It’s filled with sugar and, rather than being called “food,” it should probably be called “an edible food-like substance.” There is also an elite case against factory farming and animal torture, which pretty much all fast food suppliers rely on. Yet McDonald’s, Taco Bell, and similar companies remain massive. Michael Pollan has done good work articulating the elite case against fast food.

* Oil companies. Oil use has pushed us past 400 ppm of CO2 in the atmosphere. We’re on the way to cooking ourselves. Yet the market has mostly ignored hybrid vehicles, and almost no one walks or bikes to work. Again, I would argue that more people should do these things, but what I think people should do, and what people do, are quite different. We like to attack oil companies instead of the consumer behavior that supports oil companies.

Oddly, I see the elite case against car companies and airplane companies much less frequently than I do against oil companies.

* Tobacco. It gives you lung cancer and smoking cigarettes isn’t even that good. While it appears that smoking rates have been declining for decades, 15.5% of adults still smoke. Taxation may be doing more to drive people away from tobacco than recitations of all the ways tobacco is bad.

* Video games. They’re a way to evade the real world and perform activities that feel fitness-enhancing but are actually just mental masturbation, without the physical limits imposed by the actual kind. They simulate the social world in a way that makes us more isolated and frustrated than ever before.

What other examples am I missing?

Today, we have the elite case against social media. It may be accurate. It’s generated good books, like Cal Newport’s Deep Work and Nicholas Carr’s The Shallows. Social media has generated lots of op-eds and parenting guides. Some individuals have announced publicly that they’re deleting their Facebook or Instagram page, yet Facebook is a public company and keeps reporting massive levels of use and engagement.

It turns out that what people want to do is quite different from what The New York Times thinks people should do.
