Why don’t schools teach debugging, or, more fundamentally, fundamentals?

If you find this piece worthwhile, consider the Go Fund Me that’s funding ongoing cancer care.

A story from Dan Luu, from back when he “TA’ed EE 202, a second year class on signals and systems at Purdue:”

When I suggested to the professor that he spend half an hour reviewing algebra for those students who never had the material covered cogently in high school, I was told in no uncertain terms that it would be a waste of time because some people just can’t hack it in engineering. I was told that I wouldn’t be so naive once the semester was done, because some people just can’t hack it in engineering.

This matches my experiences: when I was a first-year grad student in English,[1] my advisor was complaining about his students not knowing how to use commas, and I made a suggestion very similar to Luu’s: “Why not teach commas?” His reasoning was slightly different from “some people just can’t hack it in engineering,” in that he thought students should’ve learned comma usage in high school. I argued that, while he might be right in theory, if the students don’t know how to use commas, he ought to teach them how. He looked at me like I was a little dim and said “no.” 

I thought and still think he’s wrong.

If a person doesn’t know the fundamentals of a given field, and particularly if a larger group doesn’t, teach those fundamentals.[2] I’ve taught commas and semicolons to students almost every semester I’ve taught in college, and it’s neither time-consuming nor hard. A lot of the students appreciate it and say no one has ever stopped to teach them before.

Usually I ask, when the first or second draft of their paper is due for peer editing, that students write down four major comma rules and a sample sentence showcasing each. I’m looking for something like: connecting two independent clauses (aka complete sentences) with a coordinating conjunction (like “and” or “or”), offsetting a dependent word, clause, or phrase (“When John picked up the knife, …”), setting off a parenthetical (sometimes called “appositives” for reasons not obvious to me but probably having something to do with Latin), and lists. Students often know about lists (“John went to the store and bought mango, avocado, and shrimp”), but the other three elude them.

I don’t obsess over the way the rules are phrased; if the student has gotten the gist of the idea, that’s sufficient. They write for a few minutes, then I walk around, look at their answers, and offer a bit of individual feedback. Ideally, I have some chocolate and give the winner or sometimes winners a treat. Afterward, we go over the rules as a class. I repeat this three times, once for each major paper. Students sometimes come up with funny example sentences. The goal is to rapidly learn and recall the material, then move on. There aren’t formal grades or punishments, but most students try, in part because they know I’m coming around to read their answers.

We do semicolons, too—they’re used to conjoin related independent clauses without a coordinating conjunction, or to separate complex lists. I’ll use an example sentence of unrelated independent clauses like “I went to the grocery store; there is no god.”

I tell students that, once they know comma rules, they can break them, as I did in the previous paragraph. I don’t get into smaller, less important comma rules, which are covered by whatever book I assign students, like Write Right!.

Humanities classes almost never teach editing, either, which I find bizarre. I suspect that editing is to writing as debugging is to programming (or hardware design): essential. I usually teach editing at the sentence level, by collecting example sentences from student journals, then putting them on the board and asking students: “what would you do with this sentence, and why?” I walk around to read answers and offer brief feedback or tips. These are, to my mind, fundamental skills. Sentences I’ve used in the past include this one, regarding a chapter from Alain de Botton’s novel On Love: “Revealed in ‘Marxism,’ those who are satisfying a desire are not experiencing love rather they are using the concept to give themselves a purpose.” Or: “Contrast is something that most people find most intriguing.” These sentences are representative of the ones first- and second-year undergrads tend to produce at first.

I showed Bess an early version of this essay, and it turns out she had experiences similar to Luu’s, but at Arizona State University (ASU):

My O-chem professor was teaching us all something new, but he told me to quit when I didn’t understand it immediately and was struggling. He had daily office hours, and I was determined to figure out the material, so I kept showing up. He wanted to appear helpful, but then acted resentful when I asked questions, “wasting his time” with topics from which he’d already moved on, and which I “should already understand.”

He suggested I drop the class, because “O-Chem is just too much for some people.” When I got the second-highest grade in the class two semesters in a row, he refused to write me a letter of recommendation because it had been so hard for me to initially grasp the material, despite the fact that I now thought fluently in it. My need for extra assistance to grasp the basics somehow overshadowed the fact that I became adept, and eventually offered tutoring for the course (where I hope I was kinder and more helpful to students than he was).

Regarding Bess’s organic chemistry story, I’m reminded of a section from David Epstein’s book Range: How Generalists Triumph in a Specialized World. In his chapter “Learning, Fast and Slow” Epstein writes that “for learning that is both durable (it sticks) and flexible (it can be applied broadly), fast and easy is precisely the problem” (85). Instead, it’s important to encounter “desirable difficulties,” or “obstacles that make learning more challenging, slower, and more frustrating in the short term, but better in the long term.” According to Epstein, students like Bess are often the ones who master the material and go on to be able to apply it. How many students has that professor foolishly discouraged? Has he ever read Range? Maybe he should.

Bess went on:  

Dan’s story also reminds me of an attending doctor in my emergency medicine program; she judged residents on what they already knew and thought negatively of ones who, like me, asked a bunch of questions. But how else are you supposed to learn? This woman (I’m tempted to use a less-nice word) considered a good resident one who’d either already been taught the information during medical school, or, more likely, pretended to know it.

She saw the desire to learn and be taught—the point of a medical residency—as an inconvenience (hers) and a weakness (ours). Residency should be about gaining a firm foundation in an environment ostensibly devoted to education, but it turns out it’s really about cheap labor, posturing, and whatever education you can pick up off the floor. When I see hospitals claiming that residency is about education, not work, I laugh. Everyone knows that argument is bullshit.

We can and should do a better job of teaching fundamentals, though I don’t see a lot of incentive to do so in formal settings. In most K–12 public schools, after one to three years most teachers can’t effectively be fired, due to union rules, so the incentive to be good, let alone great, is weak. In universities, a lot of professors are, as I noted earlier, hired for research, not teaching. It’s possible that, as charter schools spread, we’ll see more experimentation and improvement at the K–12 level. At the college and graduate school level, I’d love to see more efforts at instructional and institutional experimentation and diversity, but apart from the University of Austin, Minerva, the Thiel Fellowship, and a few other efforts, the teaching business is business-as-usual.

Moreover, there’s an important quirk of the college system: Congress and the Department of Education have outsourced the credentialing of colleges and universities to regional accreditation bodies. Harvard, for example, is accredited by the New England Association of Schools and Colleges (NEASC). But guess who makes up the regional accreditation bodies? Existing colleges and universities. How excited are existing colleges and universities to allow new competitors? Exactly. The term for this is “cartel.” This point is near top-of-mind because Marc Andreessen and Ben Horowitz emphasized it on their recent podcast regarding the crises of higher education; if you want a lot more, that episode is good.

Unfortunately, my notions of what’s important in teaching don’t matter much any more because it’s unlikely I’ll ever teach again, given that I no longer have a tongue and am consequently difficult to understand. I really liked (and still like!) teaching, but doing it as an adjunct making $3–$4k/class has been unwise for many years and is even more unwise given how short time is for me right now. Plus, the likelihood of me living out the year is not high.

In terms of trying to facilitate change and better practices, I also don’t know where, if anywhere, people who teach writing congregate online. Maybe they don’t congregate anywhere, which makes it hard to engage large numbers of instructors.

Tyler Cowen has a theory, expounded in various podcasts I’ve heard him on, that better teachers are really here to inspire students—which is true regarding both formal and informal education. Part of inspiration is, in my view, being able to rapidly traverse the knowledge space and figure out whatever the learner needs.

Until we perfect neural chips that can download the entirety of human knowledge to the fetal cortex while still in utero, no one springs from the womb knowing everything. In some areas you’ll always be a beginner. Competence, let alone mastery, starts with desire and basics.

If you’ve gotten this far, consider the Go Fund Me that’s funding ongoing care.


[1] Going to grad school in general is a bad idea; going in any humanities discipline is a horribly bad idea, but I did it, and am now a cautionary tale for having done it.

[2] Schools like Purdue also overwhelmingly select faculty on the basis of research and grantsmanship, not teaching, so it’s possible that the instructors don’t care at all. Not every researcher is a Feynman, to put it lightly.

Dissent, insiders, and outsiders: Institutions in the age of Twitter

How does an organization deal with differing viewpoints among its constituents, and how do constituents dissent?

Someone in Google’s AI division was recently fired, or the person’s resignation accepted, depending on one’s perspective, for reasons related to a violation of process and organizational norms, or something else, again depending on one’s perspective. The specifics of that incident can be disputed, but the more interesting level of abstraction might ask how organizations process conflict and what underlying conflict model participants have. I recently re-read Noah Smith’s essay “Leaders Who Act Like Outsiders Invite Trouble”; he’s dealing with the leadup to World War II but also says: “This extraordinary trend of rank-and-file members challenging the leaders of their organizations goes beyond simple populism. There may be no word for this trend in the English language. But there is one in Japanese: gekokujo.” And later, “The real danger of gekokujo, however, comes from the establishment’s response to the threat. Eventually, party bosses, executives and other powerful figures may get tired of being pushed around.”

If you’ve been reading the news, you’ll have seen gekokujo, as institutions are being pushed by the Twitter mob, and by the Twitter mob mentality, even when the mobbing person is formally within the institution. I think we’re learning, or going to have to re-learn, things like “Why did companies traditionally encourage people to leave politics and religion at the door?” and “What’s the acceptable level of discourse within the institution, before you’re not a part of it any more?”

Colleges and universities in particular seem to be susceptible to these problems, and some are inculcating environments and cultures that may not be good for working in large groups. One recent example of these challenges occurred at Haverford College, but here too the news has many other examples, and the Haverford story seems particularly dreadful.

The basic idea that organizations have to decide who’s inside and who’s outside is old: Albert Hirschman’s Exit, Voice, and Loyalty: Responses to Decline in Firms, Organizations, and States is one great discussion. Organizations also used to unfairly exclude large swaths of the population based on demographic factors, and that’s (obviously) bad. Today, though, many organizations have in effect, if not intent, decided that it’s okay for some of their members to attack the good faith of other members, and to attack the coherence of the organization itself. There are probably limits to how much of this an organization can absorb while remaining functional, let alone maximally functional.

The other big change involves the ability to coordinate relatively large numbers of people: digital tools have made this easier, in a relatively short time—thus the “Twitter mob” terminology that came to mind a few paragraphs ago; I kept the term because it seems like a reasonable placeholder for that class of behavior. Digital tools let a small percentage of total people add up to a large absolute number of people. For example, if 100,000 people are interested in or somehow connected to an organization, and one percent of them want to fundamentally disrupt the organization, change its direction, or arrange an attack, that’s 1,000 people—which feels like a lot. It’s far above the Dunbar number and too many for one or two public-facing people to deal with. In addition, in some ways journalists and academics have become modern-day clerics, and they’re often eager to highlight and disseminate news of disputes of this sort.

Over time, I expect organizations are going to need to develop new cultural norms if they’re going to maintain their integrity in the face of coordinated groups that represent relatively small percentages of people but large absolute numbers of people. The larger the organization, the more susceptible it may be to these kinds of attacks. I’d expect more organizations to, for example, explicitly say that attacking other members of the organization in bad faith will result in expulsion, as seems to have happened in the Google example.

Evergreen State College, which hosted an early example of this kind of attack (on a biology professor named Bret Weinstein), has seen its enrollment drop by about a third.

Martin Gurri’s book The Revolt of The Public and the Crisis of Authority in the New Millennium examines the contours of the new information world, and the relative slowness of institutions to adapt to it. Even companies like Google, Twitter, and Facebook, which have enabled sentiment amplification, were founded before their own user bases became so massive.

Within organizations, an excess of conformity is a problem—innovation doesn’t occur from simply following orders—but so is an excess of chaos. Modern intellectual organizations, like tech companies or universities, probably need more “chaos” (in the sense of information transfer) than, say, old-school manufacturing companies, which primarily needed compliance. “Old-school” is a key phrase, because from what I understand, modern manufacturing companies are all tech companies too, and they need the people closest to the process to be able to speak up if something is amiss or needs to be changed. Modern information companies need workers to speak up and suggest new ideas, new ways of doing things, and so on. That’s arguably part of the job of every person in the organization.

Discussion at work of controversial identity issues can probably function if all parties assume good faith from the other parties (Google is said to have had a freewheeling culture in this regard from around the time of its founding up till relatively recently). Such discussions probably won’t function without fundamental good faith, and good faith is hard to describe, but most of us know it when we see it, and defining every element of it would probably be impossible, while cultivating it as a general principle is desirable. Trying to maintain such an environment is tough: I know that intimately because I’ve tried to maintain it in classrooms, and those experiences led me to write “The race to the bottom of victimhood and ‘social justice’ culture.” It’s hard to teach, or run an information organization, without a culture that lets people think out loud, in good faith, with relatively little fear of arbitrary reprisal. Universities, in particular, are supposed to be oriented around new ideas and discussing ideas. Organizations also need some amount of hierarchy: without it, decisions can’t or don’t get made, and the organizational processes themselves don’t function. Excessive attacks lead to the “gekokujo” problem Smith describes. Over time organizations are likely going to have to develop antibodies to the novel dynamics of the digital world.

A lot of potential learning opportunities aren’t happening, because we’re instead dividing people into inquisitors and heretics, when very few should be the former, and very few are truly the latter. One aspect of “Professionalism” might be “assuming good faith on the part of other parties, until proven otherwise.”

On the other hand, maybe these cultural skirmishes don’t matter much, like brawlers in a tavern across the street from the research lab. Google’s AlphaFold has made a huge leap in protein-folding efforts (Google reorganized itself, so technically Google and DeepMind, which built AlphaFold, are both part of the “Alphabet” parent company). Waymo, another Google endeavor, may be leading the way towards driverless cars, and it claims to be expanding its driverless car service. Compared to big technical achievements, media fights are minor. Fifty years from now, driverless cars will be taken for granted, along with customizable biology, and people will struggle to understand what was at stake culturally, in much the way most people today don’t get what the Know-Nothing Party or the Hundred Years’ War was really about, even as they take electricity and the printing press for granted.

EDIT: Coinbase has publicly taken a “leave politics and religion at the door” stand. They’re an innovator, or maybe a back-to-the-future company, in these terms.

 

Personal epistemology, free speech, and tech companies

The NYT describes “The Problem of Free Speech in an Age of Disinformation,” and in response Hacker News commenter throwaway13337 says, in part, “It’s not unchecked free speech. Instead, it’s unchecked curation by media and social media companies with the goal of engagement.” There’s some truth to the idea that social media companies have evolved to seek engagement, rather than truth, but I think the social media companies are reflecting a deeper human tendency. I wrote back to throwaway13337: “Try teaching non-elite undergrads, and particularly assignments that require some sense of epistemology, and you’ll discover that the vast majority of people have pretty poor personal epistemic hygiene—it’s not much required in most people, most of the time, in most jobs.”

From what I can tell, we evolved to form tribes, not to be “right:” Jonathan Haidt’s The Righteous Mind: Why Good People Are Divided by Politics and Religion deals with this topic well and at length, and I’ve not seen any substantial rebuttals of it. We don’t naturally take to tracking the question, “How do I know what I know?” Instead, we naturally seem to want to find “facts” or ideas that support our preexisting views. In the HN comment thread, someone asked for specific examples of poor undergrad epistemic hygiene, and while I’d prefer not to get super specific for reasons of privacy, I’ve had many conversations that take the following form: “How do you know article x is accurate?” “Google told me.” “How does Google work?” “I don’t know.” “What does it take to make a claim on the Internet?” “Um. A phone, I guess?” A lot of people—maybe most—will uncritically take as fact whatever happens to be served up by Google (it’s always Google and never Duck Duck Go or Bing), and most undergrads whose work I’ve read will, again uncritically, accept clickbait sites and similar as accurate. Part of the reason is that undergrads’ lives are minimally affected by being wrong or incomplete about some claim in a short assignment imposed by some annoying toff of a professor standing between them and their degree.

The gap between elite information discourse and everyday information discourse, even among college students, who may be more sophisticated than their peer equivalents, is vast—so vast that I don’t think most journalists (who mostly talk to other journalists and to experts), or other people who work with information, data, and ideas, truly understand it. We’re all living in bubbles. I don’t think I did, either, before I saw the epistemic hygiene most undergrads practice, or don’t practice. This is not a “kids these days” rant, either: many of them have never really been taught to ask themselves, “How do I know what I know?” Many have never really learned anything about the scientific method. It’s not happening much in most non-elite schools, so where are they going to get epistemic hygiene from?

The United States alone has 320 million people in it. Table DP02 in the Census at data.census.gov estimates that 20.3% of the population age 25 and older has a college bachelor’s degree, and 12.8% have a graduate or professional degree. Before someone objects, let me admit that a college degree is far from a perfect proxy for epistemic hygiene or general knowledge, and some high school dropouts perform much better at cognition, metacognition, statistical reasoning, and so forth, than do some people with graduate degrees. With that said, though, a college degree is probably a decent approximation for baseline abstract reasoning skills and epistemic hygiene. Most people, though, don’t connect with or think in terms of aggregated data or abstract reasoning—one study, for example, finds that “Personal experiences bridge moral and political divides better than facts.” We’re tribe builders, not fact finders.

Almost anyone who wants a megaphone in the form of one of the many social media platforms available now has one. The number of people motivated by questions like “What is really true, and how do I discern what is really true? How do I enable myself to get countervailing data and information into my view, or worldview, or worldviews?” is not zero, again obviously, but it’s not a huge part of the population. And many very “smart” people in an IQ sense use their intelligence to build better rationalizations, rather than to seek truth (and I may be among the rationalizers: I’m not trying to exclude myself from that category).

Until relatively recently, almost everyone with a media megaphone had some kind of training or interest in epistemology, even if they didn’t call it “epistemology.” Editors would ask, “How do you know that?” or “Who told you that?” or that sort of thing. Professors have systems that are supposed to encourage greater-than-average epistemic hygiene (these systems were not and are not perfect, and nothing I have written so far implies that they were or are).

Most people don’t care about the question, “How do you know what you know?” and are fairly surprised if it’s asked, implicitly or explicitly. Some people are intrigued by it, but most aren’t, and view questions about sources and knowledge as a hindrance. This is less likely to be true of people who aspire to be researchers or work in other knowledge-related professions, but that describes only a small percentage of undergraduates, particularly at non-elite schools. And the “elite schools” thing drives a lot of the media discourse around education. One of the things I like about Professor X’s book In the Basement of the Ivory Tower is how it functions as a corrective to that discourse.

For most people, floating a factually incorrect conspiracy theory online isn’t going to negatively affect their lives. If someone is a nurse and gives a patient the wrong medication, that person is not going to be a nurse for long. If the nurse states or repeats a factually incorrect political or social idea online, particularly but not exclusively under a pseudonym, that nurse’s life likely won’t be affected. There’s no truth feedback loop. The same is true for someone working in, say, construction, or engineering, or many other fields. The person is free to state things that are factually incorrect, or incomplete, or misleading, and doing so isn’t going to have many negative consequences. Maybe it will have some positive consequences: one way to show that you’re really on team x is to state or repeat falsehoods that show you’re on team x, rather than on team “What is really true?”

I don’t want to get into daily political discourse, since that tends to raise defenses and elicit anger, but the last eight months have demonstrated many people’s problems with epistemology, and in a way that can have immediate, negative personal consequences—but not for everyone.

Pew Research data indicate that a quarter of US adults didn’t read a book in 2018; this is consistent with other data indicating that about half of US adults read zero or one books per year. Again, yes, there are surely many individuals who read other materials and have excellent epistemic hygiene, but this is a reasonable mass proxy, given the demands that reading makes on us.

Many people driving the (relatively) elite discourse don’t realize how many people are not only not like them, but wildly not like them, along numerous metrics. It may also be that we don’t know how to deal with gossip at scale. Interpersonal gossip is all about personal stories, while many problems at scale are best understood through data—but the number of people deeply interested in data and data’s veracity is small. And elite discourse has some of its own possible epistemic falsehoods, or at least uncertainties, embedded within it: some of the populist rhetoric against elites is rooted in truth.

A surprisingly large number of freshmen don’t know the difference between fiction and nonfiction, or that novels are fiction. Not a majority, but I was surprised when I first encountered confusion around these points; I’m not any longer. I don’t think the majority of freshmen confuse fiction and nonfiction, or genres of nonfiction, but enough do for the confusion to be a noticeable pattern (modern distinctions between fiction and nonfiction only really arose, I think, during the Enlightenment and the rise of the novel in the 18th Century, although off the top of my head I don’t have a good citation for this historical point, apart perhaps from Ian Watt’s work on the novel). Maybe online systems like Twitter or Facebook allow average users to revert to an earlier mode of discourse in which the border between fiction and nonfiction is more porous, and the online systems have strong fictional components that some users don’t care to segregate.

We are all caught in our bubble, and the universe of people is almost unimaginably larger than the number of people in our bubble. If you got this far, you’re probably in a nerd bubble: usually, anything involving the word “epistemology” sends people to sleep or, alternatively, scurrying for something like “You won’t believe what this celebrity wore/said/did” instead. Almost no one wants to consider epistemology; to do so as a hobby is rare. One person’s disinformation is another person’s teambuilding. If you think the preceding sentence is in favor of disinformation, by the way, it’s not.

A simple solution to peer review problems

Famous computer scientist and iRobot co-founder Rodney Brooks writes about the problems of peer review in academia. He notes that peer review has some important virtues even as the way it’s currently practiced generates many problems and pathologies. Brooks says, “I don’t have a solution, but I hope my observations here might be interesting to some.” I have a partial solution: researchers “publish” papers to arXiv or similar, then “submit” them to the journal, which conducts peer review. The “journal” is a list of links to papers that it has accepted or verified.

That way, the paper is available to those who find it useful. If a researcher really thinks the peer reviewers are wrong, the researcher can state why, and why they’re leaving it up, despite the critiques. Peer-review reports can be kept anonymous but can also be appended to the paper, so that readers can decide for themselves whether the peer reviewers’ comments are useful or accurate—in my limited, but real, experience in English lit, they’ve been neither, and that experience seems to have been echoed by many others. If a writer wishes to be anonymous, the writer can leave the work as “anonymous” until after it’s been submitted for peer review, which would allow for double-blind peer review, and that double-blindness would help remove some of the insider-ism biases around people knowing each other.

Server costs for things like simple websites are almost indistinguishable from zero today, and those costs can easily be borne by the universities themselves, which will find them far lower than subscription costs.
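To make concrete just how little infrastructure this model needs, here’s a minimal sketch in Python (with hypothetical placeholder papers, URLs, and filenames, not anyone’s actual system) of an overlay journal that is nothing more than a curated list of links to papers hosted elsewhere, with the peer-review reports linked alongside each entry, rendered as a single static page:

```python
# A minimal sketch of an "overlay journal": the journal itself is nothing but a
# curated list of links to papers that already live on arXiv (or similar),
# plus optional links to the peer-review reports. All entries below are
# hypothetical placeholders, not real papers.
from dataclasses import dataclass

@dataclass
class AcceptedPaper:
    title: str
    authors: str
    arxiv_url: str         # where the paper is actually hosted
    review_notes_url: str  # anonymized peer-review reports, if appended

ACCEPTED = [
    AcceptedPaper(
        title="An Example Paper on Topic X",
        authors="A. Author and B. Author",
        arxiv_url="https://arxiv.org/abs/0000.00000",  # placeholder ID
        review_notes_url="reviews/example-paper.html",
    ),
]

def render_journal(papers: list[AcceptedPaper]) -> str:
    """Render the whole 'journal' as one static HTML page: a list of links."""
    items = "\n".join(
        f'<li><a href="{p.arxiv_url}">{p.title}</a>, {p.authors} '
        f'(<a href="{p.review_notes_url}">peer-review reports</a>)</li>'
        for p in papers
    )
    return f"<html><body><h1>Overlay Journal</h1><ul>\n{items}\n</ul></body></html>"

if __name__ == "__main__":
    # Hosting this single file is essentially the entire "server cost" of the journal.
    with open("index.html", "w", encoding="utf-8") as f:
        f.write(render_journal(ACCEPTED))
```

A single static file like this costs essentially nothing to host, which is the point: the paper itself stays on arXiv, and the journal is just curation plus the appended review reports.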

What stands in the way? Current practice and setup. Plus, Elsevier and one or two other multi-billion-dollar publishing conglomerates that control the top journals in most fields. These giants want to maintain library fees that amount to thousands of dollars per journal, even if the journal editors are paid minimally, as are peer reviewers and so on. Only the companies make money. Academics live and die based on prestige, so few will deviate from the existing model. Publishing in top journals is essential for hiring, tenure, and promotion (the tenure model also generates a bunch of pathologies in academia, but we’ll ignore those for now).

There are pushes to change the model—the entire University of California system, for example, announced in 2019 that it would “terminate subscriptions with world’s largest scientific publisher in push for open access to publicly funded research.” In my view, all public funding bodies should stipulate that no research funded with public money can be published in closed-access journals, and foundations should do the same. There is no reason for modern research to be hidden behind paywalls.

It would also help if individual schools and departments quit making hiring, tenure, and promotion decisions almost entirely based on “peer-reviewed” work. Those on hiring, tenure, and promotion committees should be able to read the work and judge the merit for themselves, regardless of the venue in which it appears.

Coronavirus and the need for urgent research have also pushed biomed and medicine towards the “publish first” model. Peer review seems to be happening after the paper is published in medRxiv or bioRxiv. One hopes these are permanent changes. The problems with the journal model are well known, but too little is being done. Or, rather, too little was being done: the urgency of the situation may lead to reform in most fields.

Open journals would be a boon for access and for intellectual diversity. When I was in grad school for English (don’t do that, I want to reiterate), the peer reviewer reports I got on most of my papers were so bad that they made me realize I was wasting my life trying to break into the field; there is a difference between “negative but fair” and “these people are not worth trying to impress,” and in English lit the latter predominated. In addition, journals took a year, and sometimes years, to publish the papers they accepted, raising the obvious question: if something is so unimportant that it’s acceptable to take years to publish it, why bother? “The Research Bust” explores the relevant implications. No one else in the field seemed to care about its torpid pace or what that implies. Many academics in the humanities have been wringing their hands about the state of the field for years, without engaging in real efforts to fix it, even as professor jobs disappear and undergrads choose other majors. In my view, intellectual honesty and diversity are both important, and yet the current academic system doesn’t properly incentivize or reward either, though it could.

In the humanities, at least, being wrong and “peer reviewed” doesn’t carry some of the costs that being wrong and “peer reviewed” can in the sciences.

For another take on peer review’s problems, see Andrew Gelman.

Have journalists and academics become modern-day clerics?

This guy was wrongly and somewhat insanely accused of sexual impropriety by two neo-puritans; stories about individual injustice can be interesting, but this one seems like an embodiment of a larger trend, and, although the story is long and some of the author’s assumptions are dubious, I think there’s a different, conceivably better, takeaway than the one implied: don’t go into academia (at least the humanities) or journalism. Both fields are fiercely, insanely combative for very small amounts of money; because the money is so bad, many people get or stay in them for non-monetary ideological reasons, almost the way priests, pastors, or other religious figures used to choose low incomes and high purpose (or “purpose” if we’re feeling cynical). Not only that, but clerics often know the answer to the question before the question has even been asked, and they don’t need free inquiry because the answers are already available—attributes that are very bad, yet seem to be increasingly common, in journalism and academia.

Obviously journalism and academia have never been great fields for getting rich, but the business model for both has fallen apart in the last 20 years. The people willing to tolerate the low pay and awful conditions must have other motives (a few are independently wealthy) to go into them. I’m not arguing that other motives have never existed, but today you’d have to be absurdly committed to those other motives. That there are new secular religions is not an observation original to me, but once I heard that idea a lot of other strange-seeming things about modern culture clicked into place. Low pay, low status, and low prestige occupations must do something for the people who go into them.

Once an individual enters the highly mimetic and extremely ideological space, he becomes a good target for destruction—and makes a good scapegoat for anyone who is not getting the money or recognition they think they deserve. Or for anyone who is simply angry or feels ill-used. The people who are robust or anti-fragile stay out of this space.

Meanwhile, less ideological and much wealthier professions may not have been, or be, immune from the cultural psychosis in a few media and academic fields, but they’re much less susceptible to mimetic contagions and ripping-downs. The people in them have greater incomes and resources. They have a greater sense of doing something in the world that is not primarily intellectual, and thus probably not primarily mimetic and ideological.

There’s a personal dimension to these observations, because I was attracted to both journalism and academia, but the former has shed at least half its jobs over the last two decades and the latter became untenable post-2008. I’ve had enough interaction with both fields to get the cultural tenor of them, and smart people largely choose more lucrative and less crazy industries. Like many people attracted to journalism, I read books like All the President’s Men in high school and wanted to emulate Woodward and Bernstein. But almost no reporters today are like Woodward and Bernstein. They’re more likely to be writing Buzzfeed clickbait, and nothing generates more clicks than outrage. Smart people interested in journalism can do a minimal amount of research and realize that the field is oversubscribed and should be avoided.

When I hear students say they’re majoring in journalism, I look at them cockeyed, regardless of gender; there’s fierce competition coupled with few rewards. The journalism industry has evolved to take advantage of youthful idealism, much like fashion, publishing, film, and a few other industries. Perhaps that is why these industries inspire so many of their writers to produce insider satires: the gap between idealistic expectation and cynical reality is very wide.

Even if thousands of people read this and follow its advice, thousands more people will keep attempting to claw their way into journalism or academia. It is an unwise move. We have people like David Graeber buying into the innuendo and career-attack culture. Smart people look at this and do something else, something where a random smear is less likely to cost an entire career.

We’re in the midst of a new-puritan revival and yet large parts of the media ecosystem are ignoring this idea, often because they’re part of it.

It is grimly funny to have read the first story linked next to a piece that quotes Solzhenitsyn: “To do evil a human being must first of all believe that what he’s doing is good, or else that it’s a well-considered act in conformity with natural law. . . . it is in the nature of a human being to seek a justification for his actions.” Ideology is back, and destruction is easier than construction. Our cultural immune system seems to have failed to figure this out, yet. Short-form social media like Facebook and Twitter arguably encourage black-and-white thinking, because there’s not enough space to develop nuance. There is enough space, however, to say that the bad guy is right over there, and we should go attack that bad guy for whatever thought crimes or wrongthink they may have committed.

Ideally, academics and journalists come to a given situation or set of facts and don’t know the answer in advance. In an ideal world, they try to figure out what’s true and why. “Ideal” is repeated twice because, historically, departures from the ideal are common, but having ideological neutrality and an investigatory posture is preferable to knowing the answer in advance and judging people based on demographic characteristics and prearranged prejudices, yet those latter traits seem to have seeped into the academic and journalistic cultures.

Combine this with present-day youth culture that equates feelings with facts and felt harm with real harm, and you get a pretty toxic stew—”toxic” being a favorite word of the new clerics. See further, America’s New Sex Bureaucracy. If you feel it’s wrong, it must be wrong, and probably illegal; if you feel it’s right, it must be right, and therefore desirable. This kind of thinking has generated some backlash, but not enough to save some of the demographic undesirables who wander into the kill zone of journalism or academia. Meanwhile, loneliness seems to be more acute than ever, and we’re stuck wondering why.

The college bribery scandal vs. Lambda School

Many of you have seen the news, but, while the bribery scandal is sucking up all the attention in the media, Lambda School is offering a $2,000/month living stipend to some students and Western Governors University is continuing to quietly grow. The Lambda School story is a useful juxtaposition with the college-bribery scandal. Tyler Cowen has a good piece on the bribery scandal (although to me the scandal looks pretty much like business-as-usual among colleges, which are wrapped up in mimetic rivalry, rather than a scandal as such, unless the definition of a scandal is “when someone accidentally tells the truth”):

Many wealthy Americans perceive higher education to be an ethics-free, law-free zone where the only restraint on your behavior is whatever you can get away with.

This may be an overly cynical take, but to what extent do universities act like ethics-free, law-free zones? They accept students (and their student loan payments) who are unlikely to graduate; they have no skin in the game regarding student loans; insiders understand the “paying for the party” phenomenon, while outsiders don’t; too frequently, universities don’t seem to defend free speech or inquiry. In short, many universities are exploiting information asymmetries between themselves and their students and those students’ parents—especially the weakest and worst-informed students. Discrimination against Asians in admissions is common at some schools and is another open secret, albeit less secret than it once was. When you realize what colleges are doing to students and their families, why is it a surprise when students and their families reciprocate?

To be sure, this is not true of all universities, not all the time, not all parts of all universities, so maybe I am just too close to the sausage factory. But I see a whole lot of bad behavior, even when most of the individual actors are well-meaning. Colleges have evolved in a curious set of directions, and no one attempting to design a system from scratch would choose what we have now. That is not a reason to imagine some kind of perfect world, but it is worth asking how we might evolve out of the current system, despite the many barriers to doing so. We’re also not seeing employers search for alternative credentialing sources, at least from what I can ascertain.

See also “I Was a College Admissions Officer. This Is What I Saw.” In a social media age, why are we not seeing more of these pieces? (EDIT: Maybe we are? This is another one, scalding and also congruent with my experiences.) Overall, I think colleges are really, really good at marketing, and arguably marketing is their core competency. A really good marketer, however, can convince you that marketing is not their core competency.

“Oh, the Humanities!”

It’s pretty rare for a blog post, even one like “Mea culpa: there *is* a crisis in the humanities,” to inspire a New York Times op-ed, but here we have “Oh, the Humanities! New data on college majors confirms an old trend. Technocracy is crushing the life out of humanism.” It’s an excellent essay. Having spent a long time working in the humanities (a weird phrase, if you think about it) and having written extensively about the problems with the humanities as currently practiced in academia, I naturally have some thoughts.

Ross Douthat, the op-ed’s author, notes the decline in humanities majors and says, “this acceleration is no doubt partially driven by economic concerns.” That’s true. Then we get this interesting move:

In an Apollonian culture, eager for “Useful Knowledge” and technical mastery and increasingly indifferent to memory and allergic to tradition, the poet and the novelist and the theologian struggle to find an official justification for their arts. And both the turn toward radical politics and the turn toward high theory are attempts by humanists in the academy to supply that justification — to rebrand the humanities as the seat of social justice and a font of political reform, or to assume a pseudoscientific mantle that lets academics claim to be interrogating literature with the rigor and precision of a lab tech doing dissection.

There is likely some truth here too. In this reading, the humanities have turned from traditional religious feeling and redirected the religious impulse in a political direction.

Douthat has some ideas about how to improve:

First, a return of serious academic interest in the possible (I would say likely) truth of religious claims. Second, a regained sense of history as a repository of wisdom and example rather than just a litany of crimes and wrongthink. Finally, a cultural recoil from the tyranny of the digital and the virtual and the Very Online, today’s version of the technocratic, technological, potentially totalitarian Machine that Jacobs’s Christian humanists opposed.

I think number two is particularly useful, number three is reasonable, and number one is fine but somewhat unlikely and not terribly congruent with my own inclinations. But I also think that the biggest problem with the humanities as currently practiced is the turn away from disinterested inquiry about what is true, what is valuable, what is beautiful, what is worth remembering, what should be made, etc., and toward politics, activism, and taking sides in current political debates—especially when those debates are highly interested in stratifying groups of people based on demographic characteristics, then assigning values to those groups.

That said, I’m not the first person to say as much and have zero impact. Major structural forces stand in the way of reform. The current grad-school-to-tenure structure kills most serious, divergent thinking and encourages a group-think monoculture. Higher-ed growth peaked around 1975; not surprisingly, the current “culture wars” or “theory wars” or whatever you want to call them got going in earnest in the 1980s, when there was little job growth among humanities academics. And they’ve been going, in various ways, ever since.

Before the 1980s, most people who got PhDs in the humanities eventually got jobs of some kind or other. This meant heterodox thinkers could show up, snag a foothold somewhere, and change the culture of the academic humanities. People like Camille Paglia or Harold Bloom or even Paul de Man (not my favorite writer) all have this quality. But since the 1980s, the number of jobs has shrunk, grad school has lengthened, and heterodox thinkers have (mostly) been pushed out. Interesting writers like Jonathan Gottschall work as adjuncts, if they work at all.

Today, the jobs situation is arguably worse than ever: I can’t find the report offhand, but the Modern Language Association tracks published tenure-track job listings, and those declined from about a thousand a year before 2008 to about 300–400 per year now.

Current humanities profs hire new humanities profs who already agree with them, politically speaking. Current tenured profs tenure new profs who already agree. This dynamic wasn’t nearly as strong when pretty much everyone got a job, even those who advocated for weird new ideas that eventually became the norm. That process is dead. Eliminating tenure might help the situation some, but any desire to eliminate tenure as a practice will be deeply opposed by the powerful who benefit from it.

So I’m not incredibly optimistic about a return to reason among humanities academics. Barring that return to reason, a lot of smart students are going to look at humanities classes and the people teaching them, then decide to go major in economics (I thought about majoring in econ).

I remember taking a literary theory class when I was an undergrad and wondering how otherwise seemingly-smart people could take some of that terrible writing and thinking seriously. Still, I was interested in reading and fiction, so I ignored the worst parts of what I read (Foucault, Judith Butler—those kinds of people) and kept on going, even into grad school. I liked to read and still do. I’d started writing (bad, at the time) novels. I didn’t realize the extent to which novels like Richard Russo’s Straight Man and Francine Prose’s Blue Angel are awfully close to nonfiction.

By now, the smartest people avoid most humanities subjects as undergrads and then grad students, or potential grad students. Not all of the smartest people, but most of them. And that anti-clumping tendency leaves behind people who don’t know any better or who are willing to repeat the endless and tedious postmodernist mantras like initiates into the cult (and there is the connection to Douthat, who’d like us to acknowledge the religious impulse more than most of us now do). Some of them are excellent sheep: a phrase from William Deresiewicz that he applies to students at elite schools but that might also be applied to many humanities grad students.

MFA programs, last time I checked, are still doing pretty well, and that’s probably because they’re somewhat tethered to the real world and the desire to write things other humans might want to read. That desire seems to have disappeared in most of humanistic academia. Leaving the obvious question: “Why bother?” And that is the question I can no longer answer.

Postmodernisms: What does *that* mean?

In response to What’s so dangerous about Jordan Peterson?, there have been a bunch of discussions about what “postmodernism” means (“He believes that the insistence on the use of gender-neutral pronouns is rooted in postmodernism, which he sees as thinly disguised Marxism.”) By now, postmodernism has become so vague and broad that it means almost anything—which is of course another way of saying “nothing”—so the plural is there in the title for a reason. In my view most people claiming the mantle of big broad labels like “Marxist,” “Christian,” “Socialist,” “Democrat,” etc. are trying to signal something about themselves and their identity much more than they’re trying to understand the nuances of what those positions might mean or what ideas / policies really underlie the labels, so for the most part when I see someone talking or writing about postmodernism, I say, “Oh, that’s nice,” then move on to talking about something more interesting and immediate.

But if one is going to attempt to describe postmodernism, and how it relates to Marxism, I’d start by observing that old-school Marxists don’t believe much of the linguistic stuff that postmodernists sometimes say they believe—about how everything reduces to “language” or “discourse”—but I think that the number of people who are “Marxists” in the sense that Marx or Lenin would recognize is tiny, even in academia.

I think what’s actually happening is this: people have an underlying set of models or moral codes and then grab some labels to fit on top of those codes. So the labels fit, or try to fit, the underlying morality and beliefs. People in contemporary academia might be particularly drawn to a version of strident moralism in the form of “postmodernism” or “Marxism” because they don’t have much else—no religion, not much influence, no money, so what’s left? A moral superiority that gets wrapped up in words like “postmodernism.” So postmodernism isn’t so much a thing as a mode or a kind of moral signal, and that in turn is tied into the self-conception of people in academia.

You may be wondering why academia is being dragged into this. Stories about what “postmodernism” means are bound up in academia, where ideas about postmodernism still simmer. In humanities grad school, most grad students make no money, as previously mentioned, and don’t expect to get academic jobs when they’re done. Among those who do graduate, most won’t get jobs. Those who do, probably won’t get tenure. And even those who get tenure will often get it for writing a book that will sell two hundred copies to university libraries and then disappear without a trace. So… why are they doing what they do?

At the same time, humanities grad students and profs don’t even have God to console them, as many religious figures do. So some of the crazier stuff emanating from humanities grad students might be a misplaced need for God or purpose. I’ve never seen the situation discussed in those terms, but as I look at the behavior I saw in grad school and the stories emerging from humanities departments, I think that a central absence better explains many problems than most “logical” explanations. And then “postmodernism” is the label that gets applied to this suite of what amount to beliefs. And that, in turn, is what Jordan Peterson is talking about. If you are (wisely) not following trends in the academic humanities, Peterson’s tweet on the subject probably makes no sense.

Most of us need something to believe in—and the need to believe may be more potent in smarter or more intellectual people. In the absence of God, we very rarely get “nothing.” Instead, we get something else, and we should take care about what that “something” is. The sense of the sacred is still powerful within humanities departments, but what that sacred is has shifted, to their detriment and to the detriment of society as a whole.

(I wrote here about the term “deconstructionism,” which has a set of problems similar to “postmodernism,” so much of what I write there also applies here.)

Evaluating things along power lines, as many postmodernists and Marxists seek to do, isn’t always a bad idea, of course, but there are many other dimensions along which one can evaluate art, social situations, politics, etc. So the relentless focus on “power” becomes tedious and reductive after a while: one always knows what the speaker is likely to say, unless of course the speaker is the one holding power and the target of the critique obviously holds less (it seems obvious, for example, that many tenured professors are in positions of relatively high power, especially compared to grad students; that’s part of what makes the Lindsay Shepherd story compelling).

This brand of postmodernism tends to infantilize groups or individuals (they’re all victims!) or lead to races to the bottom and the development of victimhood culture. But these pathologies are rarely acknowledged by its defenders.

Has postmodernism led to absurdities like the one at Evergreen State, which led to huge enrollment drops? Maybe. I’ve seen the argument and, on even days, buy it.

I read a good Tweet summarizing the basic problem:

When postmodern types say that truth-claims are rhetoric and that attempts to provide evidence are but moves in a power-game—believe them! They are trying to tell you that this is how they operate in discussions. They are confessing that they cannot imagine doing otherwise.

If everything is just “rhetoric” or “power” or “language,” there is no real way to judge anything. Along a related axis, see “Dear Humanities Profs: We Are the Problem.” Essays like it seem to appear about once a year or so. That they seem to change so little is discouraging.

So what does postmodernism mean? Pretty much whatever you want it to mean, whether you love it for whatever reason or hate it for whatever reason. Which is part of the reason you’ll very rarely see it used on this site: it’s too unspecific to be useful, so I shade towards words with greater utility that haven’t been killed, or at least made comatose, through over-use. There’s a reason why most smart people eschew talking about postmodernism or deconstructionism or similar terms: they’re at a not-very-useful level of abstraction, unless one is primarily trying to signal tribal affiliation, and signaling tribal affiliation isn’t a very interesting basis for discussion.

If you’ve read to the bottom of this, congratulations! I can’t imagine many people are terribly interested in this subject; it seems that most people read a bit about it, realize that many academics in the humanities are crazy, and go do something more useful. It’s hard to explain this stuff in plain language because it often doesn’t mean much of anything, and explaining why that’s so takes a lot.

What happened to the academic novel?

In “The Joke’s Over: How academic satire died,” Andrew Kay asks: What happened to the academic novel? He proffers some excellent theories, including: “the precipitate decline of English departments, their tumble from being the academy’s House Lannister 25 years ago — a dignified dynasty — to its House Greyjoy, a frozen island outpost. [. . .] academic satires almost invariably took place in English departments.” That seems plausible, and it’s also of obvious importance that writers tend to inhabit English departments, not biology departments; novels are more likely to come from novelists and people who study novels than from people who study DNA.

But Kay goes on to note that tenure-track jobs disappeared, which made making fun of academics less funny because their situation became serious. I don’t think that’s it, though: tenure-track jobs began their enormous decline around 1975, yet academic satires kept appearing regularly after that.

But:

When English declined, though, academic satire dwindled with it. Much of the clout that English departments had once enjoyed migrated to disciplines like engineering, computer science, and (that holiest of holies!) neuroscience. (Did we actually have a March for Science last April, or was that satire?) Poetry got bartered for TED talks, Wordsworth and Auden for that new high priest of cultural wisdom, the cocksure white guy in bad jeans and a headset holding forth on “innovation” and “biotech.”

And I think this makes sense: much of what English departments began producing in the 1980s and 1990s is nonsense that almost no one takes seriously, not even the people who produce it, and it’s hard to satirize total nonsense:

Most satire relies on hyperbole: The satirist holds a ludicrously distorted mirror up to reality, exaggerating the flaws of individuals and systems and so (ideally) shocking them into reform. But what happens when reality outpaces satire, or at least grows so outlandish that a would-be jester has to sprint just to keep up?

What English departments are doing is mostly unimportant, so larger cultural attention focuses on TED talks or edge.org or any number of other venues and disciplines. Debating economics is more interesting than debating deconstructionism (or whatever) because the outcome of the debate matters. In grad school I heard entirely too many people announce that there is no such thing as reality, then go off to lunch (which seemed a lot like reality to me, but I was a bit of a grad-school misfit).

A couple of years ago I wrote “What happened with Deconstruction? And why is there so much bad writing in academia?”, which attempts to explain some of the ways that academia came to be infested by nonsense. Smart people today might gaze at what’s going on in English (and many other humanities) departments, laugh, and move on to more important issues—to the extent they bother gazing over at all. If the Lilliputians want to chase each other around with rhetorical sticks, let them; the rest of us have things to do.

Decades of academic satire have produced few if any changes. The problems Blue Angel and Straight Man identified remain and are if anything worse. No one in English departments has anything to lose, intellectually speaking; the sense of perspective departed a long time ago. At some point, would-be reformers wander off and deal with more interesting topics. English department members, meanwhile, can’t figure out why they can’t get more undergrads to major in English or more tenure-track hires. One could start by looking in the mirror, but it’s easier and more fun to blame outsiders than it is to look within.

Back when I was writing a dissertation on academic novels, a question kept creeping up on me, like a serial killer in a horror novel: “Who cares?” I couldn’t find a good answer to that question—at least, not one that most people in the academic humanities seemed to accept. It seems that I’m not alone. Over time, people vote with their feet, or, in this case, attention. If no one wants to pay attention to English departments, maybe that should tell us something.

Nah. What am I saying? It’s them, not us.

“University presidents: We’ve been blindsided.” Er, no.

“University presidents: We’ve been blindsided” is an amazing article—if the narrative it presents is true. It’s amazing because people have been complaining about political correctness and nothing-means-anything postmodernism since at least the early ’90s, yet the problems with reality and identity politics seem to have intensified in the Internet age. University presidents haven’t been blindsided, and some of the problems in universities aren’t directly their fault—but perhaps their biggest failure, with some notable exceptions (like the University of Chicago), is not standing up for free speech.

I don’t see how anyone could have failed to see this coming; the right’s attack on academia has its roots in the kind of scorn and disdain I write about in “The right really was coming after college next.” As I say there, I’ve been hearing enormous, overly broad slams against the right for as long as I’ve been involved in higher education. That sort of thing has gone basically unchecked for I-don’t-know-how-long. It would be surprising not to get a backlash eventually, and institutions that don’t police themselves eventually get policed, or at least attacked, from the outside.

(Since such observations tend to generate calls of “partisanship,” I’ll again note that I’m not on the right and am worried about intellectual honesty.)

There is this:

“It’s not enough anymore to just say, ‘trust us,'” Yale President Peter Salovey said. “There is an attempt to build a narrative of colleges and universities as out of touch and not politically diverse, and I think … we have a responsibility to counter that — both in actions and in how we present ourselves.”

That’s because universities are not politically diverse. At all. Heterodox Academy has been writing about this since it was founded. Political monocultures may in turn encourage restrictions on freedom of speech, especially against the other guy, who isn’t even around to make a case. For example, some of you may have been following the Wilfrid Laurier University brouhaha (if not, “Why Wilfrid Laurier University’s president apologized to Lindsay Shepherd” is an okay place to start, though the school is in Canada, not the United States). Shepherd’s department wrote a reply, “An open letter from members of the Communication Studies Department, Wilfrid Laurier University,” that says, “Public debates about freedom of expression, while valuable, can have a silencing effect on the free speech of other members of the public.” In other words, academics who are supposed to support free speech and disinterested inquiry don’t. And they get to decide what counts as free speech.

If academics don’t support free speech, they’re just another interest group, subject to the same social and political forces that all interest groups are subject to. I don’t think the department that somehow thought this letter a good idea realizes as much.

“Trust us” no longer seems to be good enough. In the U.S., the last decade of anti-free-speech and left-wing activism on campus has brought us a Congress that is in some ways more retrograde than any since… I’m not sure when. Maybe the ’90s. Maybe earlier. Yet the response on campus has been to shrug and worry about pronouns.

Rather than “touting their positive impacts on their communities to local civic groups, lawmakers and alumni,” universities need to re-commit to free speech, open and disinterested inquiry, and not prima facie opposing an entire, large political group. Sure, “Some presidents said they blame themselves for failing to communicate the good they do for society — educating young people, finding cures for diseases and often acting as major job creators.” But, again, universities exist to learn what’s true, as best one can, and then explain why it’s true.

Then there’s this:

But there was also an element of defensiveness. Many argue the backlash they’ve faced is part of a larger societal rethinking of major institutions, and that they’re victims of a political cynicism that isn’t necessarily related to their actions. University of Washington President Ana Mari Cauce, for one, compared public attitudes toward universities with distrust of Congress, the legal system, the voting system and the presidency.

While universities do a lot right, they (or some of their members) are also engaging in a dangerous epistemic nihilism that’s contrary to their missions. And people are catching on to that. Every time one sees a fracas like the one at Evergreen State College, universities as a whole lose a little of their prestige. And the response of many administrators hasn’t been good.

Meanwhile, the incredible Title IX stories don’t help (or see Laura Kipnis’s story). One can argue that these are isolated cases. But are they? With each story, and the inept institutional response to it, universities look worse and so do their presidents. University presidents aren’t reaffirming the principles of free speech and disinterested research, and they’re letting bureaucrats create preposterous tribunals. Then they’re saying they’ve been blindsided! A better question might be, “How could you not see a reckoning coming?”