Most people don’t read carefully or for comprehension

Dan Luu has a great Twitter thread about “how few bits of information it’s possible to reliably convey to a large number of people. When I was at MS, I remember initially being surprised at how unnuanced their communication was, but it really makes sense in hindsight” and he also says that he’s “noticed this problem with my blog as well. E.g., I have some posts saying BigCo $ is better than startup $ for p50 and maybe even p90 outcomes and that you should work at startups for reasons other than pay. People often read those posts as ‘you shouldn’t work at startups’.” In other words, many people are poor readers, although “hurried” or “inattentive” might be kinder word choices. His experiences, though, are congruent with mine: I’ve taught English to non-elite college students, off and on, since 2008; when I first started, I’d run classes by saying things like, “What do you all think of the reading? Any comments or questions?” I’d get some meandering responses, and maybe generate a discussion, but I often felt like the students were doing random free association, and it took me an embarrassingly long time to figure out why.

After a semester or two I began changing what I was doing. An essay like Neal Stephenson’s “Turn On, Tune In, Veg Out” is a good demonstration of why, and it’s on my mind because I taught it to students recently (you should probably read “Turn On, Tune In, Veg Out” first, because, if you don’t, the next three paragraphs won’t make a lot of sense—and you’re the kind of person who does the reading, right?). Instead of opening by asking “What do you think?”, I began class by asking, “What is the main point of ‘Turn On, Tune In, Veg Out?’” Inevitably, not all students would have done the reading, but, among those who had, almost none ever gave good answers. Many get stuck on the distinction between “geeking out” and “vegging out,” even though that’s a subsidiary point. Some students haven’t seen Star Wars, or dislike it, and talk about their dislike, even though that’s not germane to understanding the essay.

Stephenson says at least three times that Star Wars functions as a metaphor: once in the third paragraph, once in the second-to-last paragraph—although that technically compares the Jedi to scientists, rather than Star Wars as a whole to society—and again at the end (“If the ‘Star Wars’ movies are remembered a century from now, it’ll be because they are such exact parables for this state of affairs”). Most students don’t know what a “parable” is, which also means I wind up asking what they should do if they come across a word they don’t know. It’s also not as if the essay is long or uses numerous complex words: it’s only about 1,300 words and it’s about pop culture, not some abstract topic.

The first few times I taught “Turn On, Tune In, Veg Out” this way, I wondered if I was getting unrepresentative samples, but I’ve done it many times since and have consistently gotten the same results. I think most high-school students, to the extent they’re being taught to read effectively at all, are being taught to skim a work for keywords and then vomit up an emotional reaction (I assign free-form, pass-fail student journals, and most take this form). Very few students seem to be taught close reading, although when I was still in grad school, I had a cluster of students who all had had the same junior- or senior-year high school teacher, and that teacher had drilled all of them in close reading and essay writing—and they were all proficient. She seemed to be the exception, not the rule, and I meant to send her a letter thanking her but never did. Teaching “Turn On, Tune In, Veg Out” usually takes somewhere between 45 minutes and an hour, in order to go through it and look at how the essay is constructed, how the sentence “What gives?” functions as a turning point in it, and other related topics. I tell students at the end of the process that we’ve not talked about whether they like “Turn On, Tune In, Veg Out” or not; the goal is to understand it first, and evaluate it later. Understanding before judgment: Internet culture encourages precisely the opposite values, as I’m sure we’ve all seen in social media like Twitter itself.

At the end of class, I ask again, “What is the main point?” and get much better answers. I’ll sometimes do the same thing with other argumentative essays, and often the initial answers aren’t great. I posit that most students aren’t being taught close reading in high school, and part of that theory comes from me asking them, individually, what their high school English classes were like. Many report “we watched a lot of movies” or “nothing.” Sure, a few students will have taken “nothing” from excellent classes and instructors, but the answers are too uncomfortably common, especially from diligent-seeming students, for me to not see the pattern. In high school, few students seem to have looked closely at the language of a given work and how language choices are used to construct a story or argument. To my mind, and in my experience, doing that is a prerequisite for being a proficient writer, including on topics related to “social justice.”

It’s not just “Turn On, Tune In, Veg Out”; when I assign Orwell’s “Politics and the English Language,” I’ll get strange responses from students about how it’s so totally true that these days the English language is being used poorly. After enough of those kinds of responses, I began to open class by asking students to take 20 or 30 seconds to write down when “Politics” was written. In case you think this is a trick question, “1946” is displayed in huge font at the start of the version I’m using, and it’s repeated again at the very end. In the text itself, Orwell cites a Communist pamphlet, and he mentions the “Soviet press,” and such choices should be clues that it’s not contemporary. Nonetheless, if a third of a given class gets in the right ballpark—pretty much anything between “1930s” and “1950s” is adequate for these purposes—that’s good, which implies two-thirds of a given class hasn’t done the reading or hasn’t retained what I’d call an elemental idea from the reading. Students routinely guess “2010s” or “2000s.”

Right after college I taught the LSAT for two years, and the LSAT is largely a test of reading comprehension. I worked for an independent guy named Steven Klein, who’d started his company in the late ’80s or early ’90s, before Kaplan and Princeton Review became test-prep behemoths. He and his business partner, Sandy, would marvel at the students who had 3.8, 3.9, sometimes 4.0 GPAs in fields like sociology, communication, English, or “Law, Society, and Justice” but who couldn’t seem to understand even simple prose passages. The students would get frustrated too: they were college grads or near college grads, who were used to being told they were great. The LSAT experience made me a sympathetic reader of the books Paying for the Party and Beer and Circus: How Big-Time College Sports Has Crippled Undergraduate Education, both of which describe how most colleges and universities have evolved vast party tracks that require minimal skill development and mental acuity, but reliably deliver high grades. I think of those books when I read about the massive and growing amount of outstanding student loan debt, now over $1 trillion. Many college and university students would be better served by apprenticeships and vocational education, but as a society we’ve spent 40 years, if not more, disparaging such paths and exalting “college.” Articles like “41% of Recent Grads Work in Jobs Not Requiring a Degree” are common. We have many bartenders and airline stewards and stewardesses and baristas who’ve obtained expensive degrees: I’m not opposed to any of those professions and respect all of them, but a four-year degree is a very expensive way of winding up in them.

The LSAT is a standardized test, and many schools still like standardized tests because those tests aren’t changed by how rich or connected or otherwise privileged a person is. Some Ivy-League and effectively-Ivy-League schools are doing away with the SAT, in the name of “diversity,” but that usually means they’re trying to give themselves even more discretion in “holistic” admissions, which tends to mean rich kids, with a smattering of diversity admits for political cover. “Race Quotas and Class Privilege at Harvard: Meme Wars: Who gets in, and why?” is one take on this topic, although numerous others can be found. The students who had gotten weak degrees and high GPAs were flummoxed by the LSAT; when they asked what they could do to improve their reading skills, Steven and Sandy often told them, “Read more, and read more sophisticated works. The Atlantic, The New Yorker [this was a while ago], better books, and do it daily.” I’d sometimes see their faces fall at the notion of having to read more: they were hoping to learn “One Weird Trick For Improving Your Reading Skills. You Won’t Guess What It Is!” When I’ve taught undergrads, they often want to know if there’s a way to get extra credit, and I tell them to do the reading thoroughly and write great essays, because I will grade based on improvement. This seems particularly important because many haven’t been taught close reading or sentence construction. I also see the disappointment in their faces and body language, because they think I’m going to tell them the secret, and instead I tell them there is no real secret, just execution and practice. A lot of school consists of jumping through somewhat ridiculous, but well-defined, hoops, and then being rewarded for it at the end, but real learning is much stranger and more tenuous than that. Sarah Constantin argues that “Humans Who Are Not Concentrating Are Not General Intelligences,” which is consistent with my experiences.

Many, if not most, English and writing professors also seem strangely uninterested in teaching writing or close reading. I get peculiar looks when I talk about the importance of either with other people teaching writing or English; one woman at a school I taught at in New York told me that social justice is the only appropriate theme for freshman writing courses. I know what she meant, and grunted noncommittally; I didn’t really reply to her at the time, although I was thinking: “Isn’t developing high levels of skill and proficiency the ultimate form of social justice?”

This is a long-winded way of saying that poor reading comprehension may be closer to the norm than the exception, and that may also be why, as Dan observed, very few bits of information trickle down from the C suites in big companies to the line workers (“I’ve seen quite a few people in upper management attempt to convey a mixed/nuanced message since my time at MS and I have yet to observe a case of this working in a major org at a large company (I have seen this work at a startup, but that’s a very different environment)”). I’d imagine the opposite is also true: if you’re a line worker, or lower-level management, it’s probably difficult or impossible to tell the C suite people about something you think is important. Startups can disrupt big companies when a few people at the startup realize something important is happening, but the decision makers at the BigCo don’t.

I’ve also learned, regarding teaching, a message similar to what the MS VPs had learned: not much goes through, and repetition is key. One time, my sister watched me teach and said after, “You repeat yourself a lot.” I told her she was right, but that I’d learned to do so. Teachers and professors repeating themselves endlessly made me crazy when I was in school, but now I understand why they do it. I’ll routinely say “Do [stuff] for Thursday. Any questions?” and have someone immediately say: “What should we do for Thursday?” There’s a funny scene in the movie Zoolander in which the David Duchovny character explains to the Ben Stiller character how male models are being used to conduct political assassinations. He goes through his explanation, and then Zoolander goes: “But why male models?” The David Duchovny character replies: “Are you stupid? I just explained exactly that to you.” Derek Zoolander is a deliberately stupid character, but I think inattention is probably the most relevant explanation in the real world. Big tech companies like Microsoft probably have very few stupid people in them. Most students aren’t stupid, but I think many haven’t been effectively challenged or trained. It’s also harder for the instructor to teach close reading than it is to have meandering discussions about how a given work, which has probably been at best skimmed, makes students feel. I’ve written on “What incentivizes professors to grade honestly? Nothing.” There’s a phrase that floats around higher education about a rueful compact between students and teachers: “They pretend to learn, and we pretend to teach.” Students, I’m sure you’ll be shocked to know, really like to get good grades. I of course grade with scrupulous honesty and integrity 100% of the time, just like everyone else, but I have heard rumors that there’s temptation to give students what they want and collect positive evaluations, which are often used for hiring and tenure purposes.

Politicians appear to have learned the same thing about repetition and the limits of the channel: the more successful ones appear to develop a simple message, and often a simple phrase (“Hope and change,” to name a recent one: you can probably think of others) and repeat it endlessly, leaving the implementation details to staff, assuming the politician in question is elected.

When Paul Graham confronts readers mis-reading his work, he’ll often ask, “Can you point to a specific sentence where I state what you say I state?” It appears almost none do. Even otherwise sophisticated people will attribute views to him that he doesn’t hold and hasn’t stated, based on the mood his essay creates. In Jan. 2016, for example, he wrote “Economic Inequality: The Short Version” because he saw “some very adventurous interpretations” of the original. In April 2007, he wrote “Microsoft is Dead: The Cliffs Notes” because many interpreted his metaphor as being literal. I often teach a few of his essays, most notably “What You’ll Wish You’d Known,” and some students will report that he’s “arrogant” or “pretentious.” Maybe he is: I’ll ask a version of the question Graham does: “Can you cite a sentence that you find arrogant or pretentious?” Usually the answer is “no.” I tell students they could write an essay arguing that he is, using specific textual evidence, but that never happens.

I’ve told bits and pieces of this essay to friends in conversation, and they sometimes urge me to try and make a difference by making an effort to improve college teaching. I appreciate their encouragement, but I don’t run any writing or English departments and have a full-time job that occupies most of my time and attention. I like teaching, but teaching represents well under 10% of my total income, tenure-track jobs in humanities fields haven’t really existed since 2009, and adjunct gigs offer marginal pay. To really encourage better classroom teaching, schools would need to pay more and set up systems for improving it. The goal of the system is to propagate and perpetuate the system, not to disturb it in ways that would require more money or commitment. Pretending excellence is much easier than excellence. I’m okay with doing a bit of teaching on the side, because it’s fun and different from the kind of computer work I usually do, but I’m under no illusions that I’m capable of changing the system in any large-scale way. The writing I’ve done over the years about colleges and college teaching appears to have had an impact on the larger system that’s indistinguishable from zero.

Dissent, insiders, and outsiders: Institutions in the age of Twitter

How does an organization deal with differing viewpoints among its constituents, and how do constituents dissent?

Someone in Google’s AI division was recently fired, or the person’s resignation accepted, depending on one’s perspective, for reasons related to a violation of process and organizational norms, or something else, again depending on one’s perspective. The specifics of that incident can be disputed, but the more interesting level of abstraction might ask how organizations process conflict and what underlying conflict model participants have. I recently re-read Noah Smith’s essay “Leaders Who Act Like Outsiders Invite Trouble;” he’s dealing with the leadup to World War II but also says: “This extraordinary trend of rank-and-file members challenging the leaders of their organizations goes beyond simple populism. There may be no word for this trend in the English language. But there is one in Japanese: gekokujo.” And later, “The real danger of gekokujo, however, comes from the establishment’s response to the threat. Eventually, party bosses, executives and other powerful figures may get tired of being pushed around.”

If you’ve been reading the news, you’ll have seen gekokujo, as institutions are being pushed by the Twitter mob, and by the Twitter mob mentality, even when the mobbing person is formally within the institution. I think we’re learning, or going to have to re-learn, things like “Why did companies traditionally encourage people to leave politics and religion at the door?” and “What’s the acceptable level of discourse within the institution, before you’re not a part of it any more?”

Colleges and universities in particular seem to be susceptible to these problems, and some are inculcating environments and cultures that may not be good for working in large groups. One recent example of these challenges occurred at Haverford College, but here too the news has many other examples, and the Haverford story seems particularly dreadful.

The basic idea that organizations have to decide who’s inside and who’s outside is old: Albert Hirschman’s Exit, Voice, and Loyalty: Responses to Decline in Firms, Organizations, and States is one great discussion. Organizations also used to unfairly exclude large swaths of the population based on demographic factors, and that’s (obviously) bad. Today, though, many organizations have in effect, if not intent, decided that it’s okay for some of their members to attack the good faith of other members of the organization, and to attack the coherence of the organization itself. There are probably limits to how much of this can happen while the organization remains functional, let alone maximally functional.

The other big change involves the ability to coordinate relatively large numbers of people: digital tools have made this easier, in a relatively short time—thus the “Twitter mob” terminology that came to mind a few paragraphs ago; I kept the term, because it seems like a reasonable placeholder for that class of behavior. Digital tools make it easier for a small percentage of people to add up to a large absolute number. For example, if 100,000 people are interested in or somehow connected to an organization, and one percent of them want to fundamentally disrupt the organization, change its direction, or arrange an attack, that’s 1,000 people—which feels like a lot. It’s far above the Dunbar number and too many for one or two public-facing people to deal with. In addition, in some ways journalists and academics have become modern-day clerics, and they’re often eager to highlight and disseminate news of disputes of this sort.

Over time, I expect organizations are going to need to develop new cultural norms if they’re going to maintain their integrity in the face of coordinated groups that represent relatively small percentages of people but large absolute numbers of people. The larger the organization, the more susceptible it may be to these kinds of attacks. I’d expect more organizations to, for example, explicitly say that attacking other members of the organization in bad faith will result in expulsion, as seems to have happened in the Google example.

Evergreen State College, which hosted an early example of this kind of attack (on a biology professor named Bret Weinstein), has seen its enrollment drop by about a third.

Martin Gurri’s book The Revolt of The Public and the Crisis of Authority in the New Millennium examines the contours of the new information world, and the relative slowness of institutions to adapt to it. Even companies like Google, Twitter, and Facebook, which have enabled sentiment amplification, were founded before their own user bases became so massive.

Within organizations, an excess of conformity is a problem—innovation doesn’t occur from simply following orders—but so is an excess of chaos. Modern intellectual organizations, like tech companies or universities, probably need more “chaos” (in the sense of information transfer) than, say, old-school manufacturing companies, which primarily needed compliance. “Old-school” is a key phrase, because from what I understand, modern manufacturing companies are all tech companies too, and they need the people closest to the process to be able to speak up if something is amiss or needs to be changed. Modern information companies need workers to speak up and suggest new ideas, new ways of doing things, and so on. That’s arguably part of the job of every person in the organization.

Discussion at work of controversial identity issues can probably function if all parties assume good faith from the other parties (Google is said to have had a freewheeling culture in this regard from around the time of its founding up till relatively recently). Such discussions probably won’t function without fundamental good faith, and good faith is hard to describe, but most of us know it when we see it, and defining every element of it would probably be impossible, while cultivating it as a general principle is desirable. Trying to maintain such an environment is tough: I know that intimately because I’ve tried to maintain it in classrooms, and those experiences led me to write “The race to the bottom of victimhood and ‘social justice’ culture.” It’s hard to teach, or run an information organization, without a culture that lets people think out loud, in good faith, with relatively little fear of arbitrary reprisal. Universities, in particular, are supposed to be oriented around new ideas and discussing ideas. Organizations also need some amount of hierarchy: without it, decisions can’t or don’t get made, and the organizational processes themselves don’t function. Excessive attacks lead to the “gekokujo” problem Smith describes. Over time organizations are likely going to have to develop antibodies to the novel dynamics of the digital world.

A lot of potential learning opportunities aren’t happening, because we’re instead dividing people into inquisitors and heretics, when very few should be the former, and very few are truly the latter. One aspect of “professionalism” might be “assuming good faith on the part of other parties, until proven otherwise.”

On the other hand, maybe these cultural skirmishes don’t matter much, like brawlers in a tavern across the street from the research lab. Google’s AlphaFold has made a huge leap in protein folding efforts (Google reorganized itself, so technically Google and DeepMind, which built AlphaFold, are both part of the “Alphabet” parent company). Waymo, another Alphabet endeavor, may be leading the way towards driverless cars, and it claims to be expanding its driverless car service. Compared to big technical achievements, media fights are minor. Fifty years from now, driverless cars will be taken for granted, along with customizable biology, and people will struggle to understand what was at stake culturally, in much the way most people today don’t get what the Know-Nothing Party or the Hundred Years’ War were really about, even as they take electricity and the printing press for granted.

EDIT: Coinbase has publicly taken a “leave politics and religion at the door” stand. They’re an innovator, or maybe a back-to-the-future company, in these terms.

 

“Why technology will never fix education”

“Why technology will never fix education” is a 2015 article that’s also absurdly relevant in the COVID era of distance education, and this paragraph in particular resonates with my teaching experience:

The real obstacle in education remains student motivation. Especially in an age of informational abundance, getting access to knowledge isn’t the bottleneck, mustering the will to master it is. And there, for good or ill, the main carrot of a college education is the certified degree and transcript, and the main stick is social pressure. Most students are seeking credentials that graduate schools and employers will take seriously and an environment in which they’re prodded to do the work. But neither of these things is cheaply available online.

For the last few years, I’ve often asked students to look at their phones’ “Screen Time” (iOS) or “Digital Wellbeing” (Android) apps. These apps measure how much time a person spends using their phone each day, and most students report 3 – 7 hours per day on their phones. The top apps are usually Instagram, SnapChat, and Facebook. Students often laugh bashfully at the sheer number of hours they spend on their phones, and some later confess they’re abashed. I ask the same thing when students tell me how “busy” they are during office hours (no one ever says they’re not busy). So far, both the data and anecdotes I’ve seen or heard support the “ban connected devices in class” position I’ve held for a while. The greatest discipline needed today seems to be the discipline not to stare relentlessly at the phone.

But what happens when class comes from a connected, distraction-laden device?

In my experience so far, the online education experience hasn’t been great, although it went better than I feared, and I think that, as norms shift, we’ll see online education become more effective. But the big hurdle remains motivation, not information. And I too find teaching via Zoom (or similar, presumably) unsatisfying, because it seems that concentration and motivation are harder to sustain there. Perhaps online education is just increasing the distance between highly structured, self-motivated people and everyone else.

 

A simple solution to peer review problems

Rodney Brooks, the famous computer scientist and co-founder of iRobot (the company behind the Roomba), writes about the problems of peer review in academia. He notes that peer review has some important virtues even as the way it’s currently practiced generates many problems and pathologies too. Brooks says, “I don’t have a solution, but I hope my observations here might be interesting to some.” I have a partial solution: researchers “publish” papers to arXiv or similar, then “submit” them to the journal, which conducts peer review. The “journal” is a list of links to papers that it has accepted or verified.

That way, the paper is available to those who find it useful. If a researcher really thinks the peer reviewers are wrong, the researcher can state why, and why they’re leaving it up, despite the critiques. Peer-review reports can be kept anonymous but can also be appended to the paper, so that readers can decide for themselves whether the peer reviewers’ comments are useful or accurate—in my limited, but real, experience in English lit, they’ve been neither, and that experience seems to have been echoed by many others. If a writer wishes to be anonymous, the writer can leave the work as “anonymous” until after it’s been submitted for peer review, which would allow for double-blind peer review, and that double-blindness would help remove some of the insider-ism biases around people knowing each other.
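To make the mechanics concrete, here is a minimal sketch of the overlay-journal model described above. Everything in it is hypothetical and invented for illustration: the Paper, Review, and OverlayJournal names, their fields, the example URL, and the acceptance rule are assumptions, not any real journal’s or arXiv’s API. The point is how little machinery the idea requires: the paper lives at a public preprint link, reports (anonymous or not) are appended to it, and the “journal” is just a curated list of accepted links.

```python
from dataclasses import dataclass, field

@dataclass
class Review:
    reviewer: str            # may simply be "anonymous"
    report: str              # full text of the peer-review report, appended to the paper
    recommends_accept: bool

@dataclass
class Paper:
    title: str
    preprint_url: str        # the paper already lives publicly, e.g. at a preprint server
    authors: list[str] = field(default_factory=list)   # can stay ["anonymous"] until review ends
    reviews: list[Review] = field(default_factory=list)
    author_response: str = ""  # the author can rebut the reviews and leave the paper up anyway

@dataclass
class OverlayJournal:
    """The 'journal' is nothing more than a list of links it has reviewed and accepted."""
    name: str
    accepted: list[Paper] = field(default_factory=list)

    def submit(self, paper: Paper, reviews: list[Review]) -> bool:
        """Append the reports to the paper; accept it if the reviewers recommend acceptance."""
        paper.reviews.extend(reviews)
        if reviews and all(r.recommends_accept for r in reviews):
            self.accepted.append(paper)  # "publication" is just adding the link to the list
            return True
        return False

# Hypothetical usage: the preprint is public either way; acceptance only adds a link.
journal = OverlayJournal(name="Hypothetical Overlay Journal")
paper = Paper(title="Example", preprint_url="https://example.org/preprint/123", authors=["anonymous"])
journal.submit(paper, [Review(reviewer="anonymous", report="Looks sound.", recommends_accept=True)])
```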

Server costs for things like simple websites are almost indistinguishable from zero today, and those costs can easily be borne by the universities themselves, which will find them far lower than subscription costs.

What stands in the way? Current practice and setup. Plus, Elsevier and one or two other multi-billion-dollar publishing conglomerates that control the top journals in most fields. These giants want to maintain library fees that amount to thousands of dollars per journal, even if the journal editors are paid minimally, as are peer reviewers and so on. Only the companies make money. Academics live and die based on prestige, so few will deviate from the existing model. Publishing in top journals is essential for hiring, tenure, and promotion (the tenure model also generates a bunch of pathologies in academia, but we’ll ignore those for now).

There are pushes to change the model—the entire University of California system, for example, announced in 2019 that it would “terminate subscriptions with world’s largest scientific publisher in push for open access to publicly funded research.” In my view, all public funding bodies should stipulate that no research funded with public money can be published in closed-access journals, and foundations should do the same. There is no reason for modern research to be hidden behind paywalls.

It would also help if individual schools and departments quit making hiring, tenure and promotion decisions almost entirely based on “peer-reviewed” work. Those on hiring, tenure, and promotion committees should be able to read the work and judge the merit for themselves, regardless of the venue in which it appears.

Coronavirus and the need for urgent research have also pushed biomed and medicine towards the “publish first” model. Peer review seems to be happening after the paper is published on medRxiv or bioRxiv. One hopes these are permanent changes. The problems with the journal model are well known but too little is being done. Or, rather, too little was being done: the urgency of the situation may lead to reform in most fields.

Open journals would be a boon for access and for intellectual diversity. When I was in grad school for English (don’t do that, I want to reiterate), the peer reviewer reports I got on most of my papers were so bad that they made me realize I was wasting my life trying to break into the field; there is a difference between “negative but fair” and “these people are not worth trying to impress,” and in English lit the latter predominated. In addition, journals took a year, and sometimes years, to publish the papers they accepted, raising the obvious question: if something is so unimportant that it’s acceptable to take years to publish it, why bother? “The Research Bust” explores the relevant implications. No one else in the field seemed to care about its torpid pace or what that implies. Many academics in the humanities have been wringing their hands about the state of the field for years, without engaging in real efforts to fix it, even as professor jobs disappear and undergrads choose other majors. In my view, intellectual honesty and diversity are both important, and yet the current academic system doesn’t properly incentivize or reward either, though it could.

In the humanities, at least, being wrong and “peer reviewed” doesn’t carry the costs that being wrong and “peer reviewed” can carry in the sciences.

For another take on peer review’s problems, see Andrew Gelman.

Have journalists and academics become modern-day clerics?

This guy was wrongly and somewhat insanely accused of sexual impropriety by two neo-puritans; stories about individual injustice can be interesting, but this one seems like an embodiment of a larger trend, and, although the story is long and some of the author’s assumptions are dubious, I think there’s a different, conceivably better, takeaway than the one implied: don’t go into academia (at least the humanities) or journalism. Both fields are fiercely, insanely combative for very small amounts of money; because the money is so bad, many people get or stay in them for non-monetary ideological reasons, almost the way priests, pastors, or other religious figures used to choose low incomes and high purpose (or “purpose” if we’re feeling cynical). Not only that, but clerics often know the answer to the question before the question has even been asked, and they don’t need free inquiry because the answers are already available—attributes that are very bad, yet seem to be increasingly common, in journalism and academia.

Obviously journalism and academia have never been great fields for getting rich, but the business model for both has fallen apart in the last 20 years. The people willing to tolerate the low pay and awful conditions must have other motives (a few are independently wealthy) to go into them. I’m not arguing that other motives have never existed, but today you’d have to be absurdly committed to those other motives. That there are new secular religions is not an observation original to me, but once I heard that idea a lot of other strange-seeming things about modern culture clicked into place. Low pay, low status, and low prestige occupations must do something for the people who go into them.

Once an individual enters the highly mimetic and extremely ideological space, he becomes a good target for destruction—and makes a good scapegoat for anyone who is not getting the money or recognition they think they deserve. Or for anyone who is simply angry or feels ill-used. The people who are robust or anti-fragile stay out of this space.

Meanwhile, less ideological and much wealthier professions may not have been, or be, immune from the cultural psychosis in a few media and academic fields, but they’re much less susceptible to mimetic contagions and ripping-downs. The people in them have greater incomes and resources. They have a greater sense of doing something in the world that is not primarily intellectual, and thus probably not primarily mimetic and ideological.

There’s a personal dimension to these observations, because I was attracted to both journalism and academia, but the former has shed at least half its jobs over the last two decades and the latter became untenable post-2008. I’ve had enough interaction with both fields to get their cultural tenor, and smart people largely choose more lucrative and less crazy industries. Like many people attracted to journalism, I read books like All the President’s Men in high school and wanted to emulate Woodward and Bernstein. But almost no reporters today are like Woodward and Bernstein. They’re more likely to be writing Buzzfeed clickbait, and nothing generates more clicks than outrage. Smart people interested in journalism can do a minimal amount of research and realize that the field is oversubscribed and should be avoided.

When I hear students say they’re majoring in journalism, I look at them cockeyed, regardless of gender; there’s fierce competition coupled with few rewards. The journalism industry has evolved to take advantage of youthful idealism, much like fashion, publishing, film, and a few other industries. Perhaps that is why these industries inspire so many insider satires: the gap between idealistic expectation and cynical reality is very wide.

Even if thousands of people read this and follow its advice, thousands more will keep attempting to claw their way into journalism or academia. It is an unwise move. We have people like David Graeber buying into the innuendo and career attack culture. Smart people look at this and do something else, something where a random smear is less likely to cost an entire career.

We’re in the midst of a new-puritan revival and yet large parts of the media ecosystem are ignoring this idea, often because they’re part of it.

It is grimly funny to have read the first story linked next to a piece that quotes Solzhenitsyn: “To do evil a human being must first of all believe that what he’s doing is good, or else that it’s a well-considered act in conformity with natural law. . . . it is in the nature of a human being to seek a justification for his actions.” Ideology is back, and destruction is easier than construction. Our cultural immune system doesn’t seem to have figured this out yet. Short-form social media like Facebook and Twitter arguably encourage black and white thinking, because there’s not enough space to develop nuance. There is enough space, however, to say that the bad guy is right over there, and we should go attack that bad guy for whatever thought crimes or wrongthink they may have committed.

Ideally, academics and journalists come to a given situation or set of facts and don’t know the answer in advance. In an ideal world, they try to figure out what’s true and why. “Ideal” is repeated twice because, historically, departures from the ideal are common, but having ideological neutrality and an investigatory posture is preferable to knowing the answer in advance and judging people based on demographic characteristics and prearranged prejudices, yet those latter traits seem to have seeped into the academic and journalistic cultures.

Combine this with present-day youth culture that equates feelings with facts and felt harm with real harm, and you get a pretty toxic stew—”toxic” being a favorite word of the new clerics. See further, America’s New Sex Bureaucracy. If you feel it’s wrong, it must be wrong, and probably illegal; if you feel it’s right, it must be right, and therefore desirable. This kind of thinking has generated some backlash, but not enough to save some of the demographic undesirables who wander into the kill zone of journalism or academia. Meanwhile, loneliness seems to be more acute than ever, and we’re stuck wondering why.

The Seventh Function of Language — Laurent Binet

The Seventh Function of Language is wildly funny, at least for the specialist group of humanities academics and those steeped in humanities academic nonsense of the last 30 – 40 years. For everyone else, it may be like reading a prolonged in-joke. Virtually every field has its jokes that require particular background to get (I’ve heard many doctors tell stories whose punchline is something like, “And then the PCDH level hit 50, followed by an ADL of 200!” Laughter all around, except for me). In the novel, Roland Barthes doesn’t simply die in an ordinary traffic accident in 1980; instead, he is murdered. But by whom, and why?

A hardboiled French detective (or “Superintendent,” which is France’s equivalent) must team up with a humanities lecturer to find out, because in the world of The Seventh Function it’s apparent that a link exists between Barthes’s work and his murder. They don’t exactly have a Holmes and Watson relationship, as neither Bayard (the superintendent) nor Herzog (the lecturer) makes brilliant leaps of deduction; rather, both complement each other, each alternating between bumbling and brilliance. Readers of The Name of the Rose will recognize both the detective/side-kick motif as well as the way a murder is linked to the intellectual work being done by the deceased. In most crime fiction—as, apparently, in most crime—the motives are small and often paltry, if not outright pathetic: theft, revenge, jealousy, sex. “Money and/or sex” pretty much summarizes why people kill (and perhaps why many people live). That sets up the novel’s conceit: here, someone is killed for an idea.

The novel’s central, unstated joke is that, in the real world, no one would bother killing over literary theory because literary theory is so wildly unimportant (“Bayard gets the gist: Roland Barthes’s language is gibberish. But in that case why waste your time reading him?”). At Barthes’s funeral, Bayard thinks:

To get anywhere in this investigation, he knows that he has to understand what he’s searching for. What did Barthes possess of such value that someone not only stole it from him but they wanted to kill him for it too?

The real world answer is “nothing.” He, like other French intellectuals, has nothing worth killing over. And if you have nothing conceivably worth killing over, are your ideas of any value? The answer could plausibly be “yes,” but in the case of Barthes and others it is still “no.” And the money question structures a lot of relations: Bayard thinks of Foucault, “Does this guy earn more than he does?”

Semiotics permeates:

Man is an interpreting machine and, with a little imagination, he sees signs everywhere: in the color of his wife’s coat, in the stripe on the door of his car, in the eating habits of the people next door, in France’s monthly unemployment figures, in the banana-like taste of Beaujolais nouveau (for it always tastes either like banana or, less often, raspberry. Why? No one knows, but there must be an explanation, and it is semiological.)…

There are also various amusing authorial intrusions and one could say the usual things about them. The downside of The Seventh Function is that its underlying thrust is similar to the numerous other academic novels out there; if you’ve read a couple, you’ve read them all. The upsides are considerable, however, among them the comedy of allusion and the gap between immediate, venal human behavior and the olympian ideas enclosed in books produced by often-silly humans. If the idea stated in the book and the author’s behavior don’t match, what lesson should we take from that mismatch?

The college bribery scandal vs. Lambda School

Many of you have seen the news, but, while the bribery scandal is sucking up all the attention in the media, Lambda School is offering a $2,000/month living stipend to some students and Western Governors University is continuing to quietly grow. The Lambda School story is a useful juxtaposition with the college-bribery scandal. Tyler Cowen has a good piece on the bribery scandal (although to me the scandal looks pretty much like business-as-usual among colleges, which are wrapped up in mimetic rivalry, rather than a scandal as such, unless the definition of a scandal is “when someone accidentally tells the truth”):

Many wealthy Americans perceive higher education to be an ethics-free, law-free zone where the only restraint on your behavior is whatever you can get away with.

This may be an overly cynical take, but to what extent do universities act like ethics-free, law-free zones? They accept students (and their student loan payments) who are unlikely to graduate; they have no skin in the game regarding student loans; insiders understand the “paying for the party” phenomenon, while outsiders don’t; too frequently, universities don’t seem to defend free speech or inquiry. In short, many universities are exploiting information asymmetries between them and their students and those students’ parents—especially the weakest and worst-informed students. Discrimination against Asians in admissions is common at some schools and is another open secret, albeit less secret than it once was. When you realize what colleges are doing to students and their families, why is it a surprise when students and their families reciprocate?

To be sure, this is not true of all universities, not all the time, not all parts of all universities, so maybe I am just too close to the sausage factory. But I see a whole lot of bad behavior, even when most of the individual actors are well-meaning. Colleges have evolved in a curious set of directions, and no one attempting to design a system from scratch would choose what we have now. That is not a reason to imagine some kind of perfect world, but it is worth asking how we might evolve out of the current system, despite the many barriers to doing so. We’re also not seeing employers search for alternate credentialing sources, at least from what I can ascertain.

See also “I Was a College Admissions Officer. This Is What I Saw.” In a social media age, why are we not seeing more of these pieces? (EDIT: Maybe we are? This is another one, scalding and also congruent with my experiences.) Overall, I think colleges are really, really good at marketing, and arguably marketing is their core competency. A really good marketer, however, can convince you that marketing is not their core competency.

The Coddling of the American Mind — Jonathan Haidt and Greg Lukianoff

Apart from its intellectual content and institutional structure descriptions, The Coddling of the American Mind makes being a contemporary college student in some schools sound like a terrible experience:

Life in a call-out culture requires constant vigilance, fear, and self-censorship. Many in the audience may feel sympathy for the person being shamed but are afraid to speak up, yielding the false impression that the audience is unanimous in its condemnation.

Who would want to live this way? It sounds exhausting and tedious. If we’ve built exhausting and tedious ways to live into the college experience, perhaps we ought to stop doing that. I also find it strange that, in virtually every generation, free speech and free thought have to be re-litigated. The rationale behind opposing free speech and thought changes, but the opposition remains.

Coddling is congruent with this conversation between Claire Lehmann and Tyler Cowen, where Lehmann describes Australian universities:

COWEN: With respect to political correctness, how is it that Australian universities are different?

LEHMANN: I think the fact that they’re public makes a big difference because students are not paying vast sums to go to university in the first place, so students have less power.

If you’re a student, and you make a complaint against a professor in an Australian university, the university’s just going to shrug its shoulders, and you’ll be sort of walked out of the room. Students have much less power to make complaints and have their grievances heard. That’s one factor.

Another factor is, we don’t have this hothouse environment where students go and live on campus and have their social life collapsed into their university life.

Most students in Australia live at home with their parents or move into a share house and then travel to university, but they don’t live on campus. So there isn’t this compression where your entire life is the campus environment. That’s another factor.

Overall, I suspect the American university environment as a total institution where students live, study, and play might be a better one in some essential ways: it may foster more entrepreneurship, due to students being physically proximate to one another. American universities have a much greater history of alumni involvement (and donations), donations likely being tied into the sense of affinity with the university generated by living on campus.

But Haidt and Lukianoff are pointing to some of the potential costs: when everything happens on campus, no one gets a break from “call-out culture” or accusations of being “offensive.” I think I would laugh at this sort of thing if I were an undergrad today, or choose bigger schools (the authors use an example from Smith College) that are more normal and less homogenous and neurotic. Bigger schools have more diverse student bodies and fewer students with the time and energy to relentlessly surveil one another. The authors describe how “Reports from around the country are remarkably similar; students at many colleges today are walking on eggshells, afraid of saying the wrong thing, liking the wrong post, or coming to the defense of someone who they know to be innocent, out of fear they themselves will be called out by a mob on social media.”

Professors, especially in humanities departments, seem to be helping to create this atmosphere by embracing “microaggressions,” “intersectionality,” and similar doctrines of fragility. Perhaps professors ought to stop doing that, too. I wonder too if or when students will stop wanting to attend schools like Smith, where the “us vs. them” worldview prevails.

School itself may be becoming more boring: “Many professors say they now teach and speak more cautiously, because one slip or simple misunderstanding could lead to vilification and even threats from any number of sources.” And, in an age of ubiquitous cameras, it’s easy to take something out of context. Matthew Reed, who has long maintained a blog called “Dean Dad,” has written about how he would adopt certain political perspectives in class (Marxist, fascist, authoritarian, libertarian, etc.) in an attempt to get students to understand what some of those ideologies entail and what their advocates might say. So he’d say things he doesn’t believe in order to get students to think. But that strategy is prone to the camera-and-splice practice. It’s a tension I feel, too: in class I often raise ideas or reading to encourage thinking or offer pushback against apparent groupthink. Universities are supposed to exist to help students (and people more generally) think independently; while courtesy is important, at what point does “caution” become tedium, or censorship?

Schools encourage fragility in other ways:

“Always trust your feelings,” said Misoponos, and that dictum may sound wise and familiar. You’ve heard versions of it from a variety of sappy novels and pop psychology gurus. But the second Great Untruth—the Untruth of Emotional Reasoning—is a direct contradiction of much ancient wisdom. [. . .] Sages in many societies have converged on the insight that feelings are always compelling, but not always reliable.

More important than ancient sages, modern psychologists and behavioral economists have found and argued the same. Feelings of fear, uncertainty, and doubt are strangely encouraged: “Administrators often acted in ways that gave the impression that students were in constant danger and in need of protection from a variety of risks and discomforts.” How odd: 18- and 19-year-olds in the military face risks and discomforts like, you know, being shot. Maybe the issue is that our society has too little risk, or risk that is invisible (this is your occasional reminder that about 30,000 people die in car crashes every year, and hundreds of thousands more are mangled, yet we do little to alleviate the car-centric world).

Umberto Eco says, “Art is an escape from personal emotion, as both Joyce and Eliot had taught me.” Yet we often treat personal emotion as the final arbiter and decider of things. “Personal emotion” is very close to the word “feelings.” We should be wary of trusting those feelings; art enables us to escape from our own feelings into someone else’s conception of the world, if we allow it to. The study of art in many universities seemingly discourages this. Perhaps we ought to read more Eco.

I wonder if Coddling is going to end up being one of those important books no one reads.

It is also interesting to read Coddling in close proximity to Michael Pollan’s How to Change Your Mind: What the New Science of Psychedelics Teaches Us About Consciousness, Dying, Addiction, Depression, and Transcendence. Perhaps we need less iPhone and more magic mushrooms. I’d actually like to hear a conversation among Pollan, Haidt, and Lukianoff. The other day I was telling a friend about How to Change Your Mind, and he said that not only had he tried psychedelics in high school, but his experience cured or alleviated his stutter and helped him find his way in the world. The plural of anecdote is not data, but it’s hard to imagine safety culture approving of psychedelic experiences (despite their safety, which Pollan describes in detail).

In The Lord of the Rings, when Aragorn and his companions believe that Gandalf has perished in Moria, Gimli says that “Gandalf chose to come himself, and he was the first to be lost… his foresight failed him.” Aragorn replies, “The counsel of Gandalf was not founded on foreknowledge of safety, for himself or for others.” And neither is life: it is not founded on foreknowledge of safety. Adventure is necessary to become a whole person. Yet childhood and even universities are today increasingly obsessed with safety, to the detriment of the development of children and students. In my experience, military veterans returning to college are among the most interesting and diligent students. We seem to have forgotten Gandalf’s lessons. One advantage of reading old books may be encountering some of the forgotten cultural assumptions beneath them; in The Lord of the Rings risk is necessary for reward, and the quality of a life is not dependent on the elimination of challenge.

Here’s a good critical review.

“Oh, the Humanities!”

It’s pretty rare for a blog post, even one like “Mea culpa: there *is* a crisis in the humanities,” to inspire a New York Times op-ed, but here we have “Oh, the Humanities! New data on college majors confirms an old trend. Technocracy is crushing the life out of humanism.” It’s an excellent essay. Having spent a long time working in the humanities (a weird phrase, if you think about it) and having written extensively about the problems with the humanities as currently practiced in academia, I naturally have some thoughts.

Douthat notes the decline in humanities majors and says, “this acceleration is no doubt partially driven by economic concerns.” That’s true. Then we get this interesting move:

In an Apollonian culture, eager for “Useful Knowledge” and technical mastery and increasingly indifferent to memory and allergic to tradition, the poet and the novelist and the theologian struggle to find an official justification for their arts. And both the turn toward radical politics and the turn toward high theory are attempts by humanists in the academy to supply that justification — to rebrand the humanities as the seat of social justice and a font of political reform, or to assume a pseudoscientific mantle that lets academics claim to be interrogating literature with the rigor and precision of a lab tech doing dissection.

There is likely some truth here too. In this reading, the humanities have turned from traditional religious feeling and redirected the religious impulse in a political direction.

Douthat has some ideas about how to improve:

First, a return of serious academic interest in the possible (I would say likely) truth of religious claims. Second, a regained sense of history as a repository of wisdom and example rather than just a litany of crimes and wrongthink. Finally, a cultural recoil from the tyranny of the digital and the virtual and the Very Online, today’s version of the technocratic, technological, potentially totalitarian Machine that Jacobs’s Christian humanists opposed.

I think number two is particularly useful, number three is reasonable, and number one is fine but somewhat unlikely and not terribly congruent with my own inclinations. But I also think that the biggest problem with the humanities as currently practiced is the turn from disinterested inquiry about what is true, what is valuable, what is beautiful, what is worth remembering, what should be made, etc., toward politics, activism, and taking sides in current political debates—especially when those debates are highly interested in stratifying groups of people based on demographic characteristics, then assigning values to those groups.

That said, I’m not the first person to say as much and have zero impact. Major structural forces stand in the way of reform. The current grad-school-to-tenure structure kills most serious, divergent thinking and encourages a group-think monoculture. Higher-ed growth peaked around 1975; not surprisingly, the current “culture wars” or “theory wars” or whatever you want to call them got going in earnest in the 1980s, when there was little job growth among humanities academics. And they’ve been going, in various ways, ever since.

Before the 1980s, most people who got PhDs in the humanities eventually got jobs of some kind or other. This meant heterodox thinkers could show up, snag a foothold somewhere, and change the culture of the academic humanities. People like Camille Paglia or Harold Bloom or even Paul de Man (not my favorite writer) all have this quality. But since the 1980s, the number of jobs has shrunk, the length of grad school has lengthened, and heterodox thinkers have (mostly) been pushed out. Interesting writers like Jonathan Gottschall work as adjuncts, if they work at all.

Today, the jobs situation is arguably worse than ever: I can’t find the report offhand, but the Modern Language Association tracks tenure-track job postings, and those declined from about a thousand a year before 2008 to about 300 – 400 per year now.

Current humanities profs hire new humanities profs who already agree with them, politically speaking. Current tenured profs tenure new profs who already agree. This dynamic wasn’t nearly as strong when pretty much everyone got a job, even those who advocated for weird new ideas that eventually became the norm. That process is dead. Eliminating tenure might help the situation some, but any desire to eliminate tenure as a practice will be deeply opposed by the powerful who benefit from it.

So I’m not incredibly optimistic about a return to reason among humanities academics. Barring that return, a lot of smart students are going to look at humanities classes and the people teaching them, then decide to major in economics instead (I thought about majoring in econ).

I remember taking a literary theory class as an undergrad and wondering how otherwise seemingly-smart people could take some of that terrible writing and thinking seriously. Still, I was interested in reading and fiction, so I ignored the worst parts of what I read (Foucault, Judith Butler, those kinds of people) and kept on going, even into grad school. I liked to read and still do. I’d started writing novels (bad ones, at the time). I didn’t realize the extent to which novels like Richard Russo’s Straight Man and Francine Prose’s Blue Angel are awfully close to nonfiction.

By now, the smartest people avoid most humanities subjects as undergrads and then as grad students, or potential grad students. Not all of the smartest people, but most of them. That avoidance leaves behind people who don’t know any better or who are willing to repeat the endless, tedious postmodernist mantras like initiates into a cult (and there is the connection to Douthat, who’d like us to acknowledge the religious impulse more than most of us now do). Some of them are “excellent sheep,” a phrase from William Deresiewicz that he applies to students at elite schools but that might also be applied to many humanities grad students.

MFA programs, last time I checked, are still doing pretty well, and that’s probably because they’re somewhat tethered to the real world and the desire to write things other humans might want to read. That desire seems to have disappeared in most of humanistic academia. Leaving the obvious question: “Why bother?” And that is the question I can no longer answer.

Postmodernisms: What does *that* mean?

In response to What’s so dangerous about Jordan Peterson?, there have been a bunch of discussions about what “postmodernism” means (“He believes that the insistence on the use of gender-neutral pronouns is rooted in postmodernism, which he sees as thinly disguised Marxism.”). By now, postmodernism has become so vague and broad that it means almost anything (which is of course another way of saying “nothing”), so the plural is there in the title for a reason. In my view, most people claiming the mantle of big, broad labels like “Marxist,” “Christian,” “Socialist,” or “Democrat” are trying to signal something about themselves and their identity much more than they’re trying to understand the nuances of those positions or the ideas and policies that really underlie the labels. So for the most part, when I see someone talking or writing about postmodernism, I say, “Oh, that’s nice,” then move on to something more interesting and immediate.

But if one is going to attempt to describe postmodernism and how it relates to Marxism, I’d start by observing that old-school Marxists don’t believe much of the linguistic stuff that postmodernists sometimes say they believe, about how everything reduces to “language” or “discourse.” In any case, the number of people who are “Marxists” in a sense Marx or Lenin would recognize is tiny, even in academia.

I think what’s actually happening is this: people have an underlying set of models or moral codes and then grab some labels to fit on top of those codes. So the labels fit, or try to fit, the underlying morality and beliefs. People in contemporary academia might be particularly drawn to a version of strident moralism in the form of “postmodernism” or “Marxism” because they don’t have much else—no religion, not much influence, no money, so what’s left? A moral superiority that gets wrapped up in words like “postmodernism.” So postmodernism isn’t so much a thing as a mode or a kind of moral signal, and that in turn is tied into the self-conception of people in academia.

You may be wondering why academia is being dragged into this: stories about what “postmodernism” means are bound up with academia, where ideas about postmodernism still simmer. In humanities grad school, most grad students make no money, as previously mentioned, and don’t expect to get academic jobs when they’re done. Among those who do graduate, most won’t get jobs. Those who do probably won’t get tenure. And even those who get tenure will often get it for writing a book that sells two hundred copies to university libraries and then disappears without a trace. So… why are they doing what they do?

At the same time, humanities grad students and profs don’t even have God to console them, as many religious figures do. So some of the crazier stuff emanating from humanities grad students might be a misplaced need for God or purpose. I’ve never seen the situation discussed in those terms, but as I look at the behavior I saw in grad school and the stories emerging from humanities departments, I think that central absence explains many of the problems better than most “logical” explanations do. “Postmodernism” is then the label that gets applied to this suite of what amount to beliefs, and that, in turn, is what Jordan Peterson is talking about. If you are (wisely) not following trends in the academic humanities, Peterson’s tweet on the subject probably makes no sense.

Most of us need something to believe in, and the need to believe may be more potent in smarter or more intellectual people. In the absence of God, we very rarely get “nothing.” Instead, we get something else, and we should take care about what that “something” is. The sense of the sacred is still powerful within humanities departments, but what counts as sacred has shifted, to their detriment and to the detriment of society as a whole.

(I wrote here about the term “deconstructionism,” which has a set of problems similar to “postmodernism,” so much of what I wrote there also applies here.)

Evaluating things along power lines, as many postmodernists and Marxists seek to do, isn’t always a bad idea, of course, but there are many other dimensions along which one can evaluate art, social situations, politics, and so on. The relentless focus on “power” becomes tedious and reductive after a while: one always knows what the speaker is likely to say, unless, of course, the speaker turns out to be the powerful party and the target of criticism the weaker one (e.g., it seems obvious that many tenured professors are in positions of relatively high power, especially compared to grad students; that’s part of what makes the Lindsay Shepherd story compelling).

This brand of postmodernism tends to infantilize groups or individuals (they’re all victims!) or to encourage races to the bottom and the development of victimhood culture, but these pathologies are rarely acknowledged by its defenders.

Has postmodernism led to absurdities like the one at Evergreen State, which caused huge enrollment drops? Maybe. I’ve seen the argument and, on even days, I buy it.

I read a good Tweet summarizing the basic problem:

When postmodern types say that truth-claims are rhetoric and that attempts to provide evidence are but moves in a power-game—believe them! They are trying to tell you that this is how they operate in discussions. They are confessing that they cannot imagine doing otherwise.

If everything is just “rhetoric” or “power” or “language,” there is no real way to judge anything. Along a related axis, see “Dear Humanities Profs: We Are the Problem.” Essays like it seem to appear about once a year or so. That they seem to change so little is discouraging.

So what does postmodernism mean? Pretty much whatever you want it to mean, whether you love it or hate it, for whatever reason. Which is part of the reason you’ll very rarely see the word used on this site: it’s too unspecific to be useful, so I shade toward words with greater utility that haven’t been killed, or at least deadened, through over-use. There’s a reason most smart people eschew talking about postmodernism or deconstructionism or similar terms: they sit at a not-very-useful level of abstraction, unless one is primarily trying to signal tribal affiliation, and signaling tribal affiliation doesn’t make for very interesting discussion.

If you’ve read to the bottom of this, congratulations! I can’t imagine many people are terribly interested in this subject; it seems that most people read a bit about it, realize that many academics in the humanities are crazy, and go do something more useful. It’s hard to explain this stuff in plain language because it often doesn’t mean much of anything, and explaining why that’s so takes a lot of space.
