Dissent, insiders, and outsiders: Institutions in the age of Twitter

How does an organization deal with differing viewpoints among its constituents, and how do constituents dissent?

Someone in Google’s AI division was recently fired, or had their resignation accepted, depending on one’s perspective, for reasons related to a violation of process and organizational norms, or something else, again depending on one’s perspective. The specifics of that incident can be disputed, but the more interesting level of abstraction might ask how organizations process conflict and what underlying conflict model participants have. I recently re-read Noah Smith’s essay “Leaders Who Act Like Outsiders Invite Trouble”; he’s dealing with the leadup to World War II but also says: “This extraordinary trend of rank-and-file members challenging the leaders of their organizations goes beyond simple populism. There may be no word for this trend in the English language. But there is one in Japanese: gekokujo.” And later, “The real danger of gekokujo, however, comes from the establishment’s response to the threat. Eventually, party bosses, executives and other powerful figures may get tired of being pushed around.”

If you’ve been reading the news, you’ll have seen gekokujo, as institutions are being pushed by the Twitter mob, and by the Twitter mob mentality, even when the mobbing person is formally within the institution. I think we’re learning, or going to have to re-learn, things like “Why did companies traditionally encourage people to leave politics and religion at the door?” and “What’s the acceptable level of discourse within the institution, before you’re not a part of it any more?”

Colleges and universities in particular seem susceptible to these problems, and some are cultivating environments and cultures that may not be good for working in large groups. One recent example of these challenges occurred at Haverford College, but here too the news has many other examples, and the Haverford story seems particularly dreadful.

The basic idea that organizations have to decide who’s inside and who’s outside is old: Albert Hirschman’s Exit, Voice, and Loyalty: Responses to Decline in Firms, Organizations, and States is one great discussion. Organizations also used to unfairly exclude large swaths of the population based on demographic factors, and that’s (obviously) bad. Today, though, many organizations have in effect, if not intent, decided that it’s okay for some of their members to attack the good faith of other members of the organization, and to attack the coherence of the organization itself. There are probably limits to how much of this an organization can absorb and still remain functional, let alone maximally functional.

The other big change involves the ability to coordinate relatively large numbers of people: digital tools have made this easier, in a relatively short time—thus the “Twitter mob” terminology that came to mind a few paragraphs ago; I kept the term because it seems like a reasonable placeholder for that class of behavior. Digital tools make it easy for a small percentage of people to add up to a large absolute number of people. For example, if 100,000 people are interested in or somehow connected to an organization, and one percent of them want to fundamentally disrupt the organization, change its direction, or arrange an attack, that’s 1,000 people—which feels like a lot. It’s far above the Dunbar number and too many for one or two public-facing people to deal with. In addition, in some ways journalists and academics have become modern-day clerics, and they’re often eager to highlight and disseminate news of disputes of this sort.
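To make the arithmetic concrete, here is a minimal sketch in Python; the numbers are illustrative assumptions, not measurements, and the point is only how quickly a small share outruns what a handful of public-facing people can handle:

```python
# Toy arithmetic: a small share of a large audience is still a crowd.
# All numbers below are illustrative assumptions, not measurements.

DUNBAR_NUMBER = 150  # rough limit on stable relationships one person can track

def disruptive_headcount(audience_size: int, disruptive_share: float) -> int:
    """Absolute number of people in a small disruptive fraction of an audience."""
    return int(audience_size * disruptive_share)

for audience in (10_000, 100_000, 1_000_000):
    mob = disruptive_headcount(audience, 0.01)  # one percent
    print(f"audience {audience:>9,}: 1% = {mob:>6,} people, "
          f"~{mob / DUNBAR_NUMBER:.0f}x the Dunbar number")
```

At 100,000 people, one percent is already roughly seven times the Dunbar number; at a million, the “small percentage” is a small city.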

Over time, I expect organizations are going to need to develop new cultural norms if they’re going to maintain their integrity in the face of coordinated groups that represent relatively small percentages of people but large absolute numbers of people. The larger the organization, the more susceptible it may be to these kinds of attacks. I’d expect more organizations to, for example, explicitly say that attacking other members of the organization in bad faith will result in expulsion, as seems to have happened in the Google example.

Evergreen State College, which hosted an early example of this kind of attack (on a biology professor named Bret Weinstein), has seen its enrollment drop by about a third.

Martin Gurri’s book The Revolt of the Public and the Crisis of Authority in the New Millennium examines the contours of the new information world, and the relative slowness of institutions to adapt to it. Even companies like Google, Twitter, and Facebook, which have enabled sentiment amplification, were founded before their own user bases became so massive.

Within organizations, an excess of conformity is a problem—innovation doesn’t occur from simply following orders—but so is an excess of chaos. Modern intellectual organizations, like tech companies or universities, probably need more “chaos” (in the sense of information transfer) than, say, old-school manufacturing companies, which primarily needed compliance. “Old-school” is a key phrase, because from what I understand, modern manufacturing companies are all tech companies too, and they need the people closest to the process to be able to speak up if something is amiss or needs to be changed. Modern information companies need workers to speak up and suggest new ideas, new ways of doing things, and so on. That’s arguably part of the job of every person in the organization.

Discussion at work of controversial identity issues can probably function if all parties assume good faith from the other parties (Google is said to have had a freewheeling culture in this regard from around the time of its founding until relatively recently). Such discussions probably won’t function without fundamental good faith. Good faith is hard to describe (most of us know it when we see it), and defining every element of it would probably be impossible, though cultivating it as a general principle is desirable. Trying to maintain such an environment is tough: I know that intimately because I’ve tried to maintain it in classrooms, and those experiences led me to write “The race to the bottom of victimhood and ‘social justice’ culture.” It’s hard to teach, or to run an information organization, without a culture that lets people think out loud, in good faith, with relatively little fear of arbitrary reprisal. Universities, in particular, are supposed to be oriented around generating and discussing new ideas. Organizations also need some amount of hierarchy: without it, decisions can’t or don’t get made, and the organizational processes themselves don’t function. Excessive attacks lead to the “gekokujo” problem Smith describes. Over time, organizations are likely going to have to develop antibodies to the novel dynamics of the digital world.

A lot of potential learning opportunities aren’t happening, because we’re instead dividing people into inquisitors and heretics, when very few should be the former, and very few are truly the latter. One aspect of “Professionalism” might be “assuming good faith on the part of other parties, until proven otherwise.”

On the other hand, maybe these cultural skirmishes don’t matter much, like brawlers in a tavern across the street from the research lab. Google’s AlphaFold has made a huge leap in protein-folding efforts (Google reorganized itself, so technically Google and DeepMind, which built AlphaFold, are both part of the “Alphabet” parent company). Waymo, another Google endeavor, may be leading the way towards driverless cars, and it claims to be expanding its driverless car service. Compared to big technical achievements, media fights are minor. Fifty years from now, driverless cars and customizable biology will be taken for granted, and people will struggle to understand what was at stake culturally, in much the way most people don’t get what the Know-Nothing Party or the Hundred Years’ War were really about, even as they take electricity and the printing press for granted.

EDIT: Coinbase has publicly taken a “leave politics and religion at the door” stand. They’re an innovator, or maybe a back-to-the-future company, in these terms.

 

“Why technology will never fix education”

“Why technology will never fix education” is a 2015 article that’s also absurdly relevant in the COVID era of distance education, and this paragraph in particular resonates with my teaching experience:

The real obstacle in education remains student motivation. Especially in an age of informational abundance, getting access to knowledge isn’t the bottleneck, mustering the will to master it is. And there, for good or ill, the main carrot of a college education is the certified degree and transcript, and the main stick is social pressure. Most students are seeking credentials that graduate schools and employers will take seriously and an environment in which they’re prodded to do the work. But neither of these things is cheaply available online.

For the last few years, I’ve often asked students to look at their phones’ “Screen Time” (iOS) or “Digital Wellbeing” (Android) apps. These apps measure how much time a person spends using their phone each day, and most students report 3 – 7 hours per day on their phones. The top apps are usually Instagram, Snapchat, and Facebook. Students often laugh at the sheer number of hours they spend on their phones, and some later confess they’re abashed. I make the same request when students tell me during office hours how “busy” they are (no one ever says they’re not busy). So far, both the data and anecdotes I’ve seen or heard support the “ban connected devices in class” position I’ve held for a while. The greatest discipline needed today seems to be the discipline not to stare relentlessly at the phone.

But what happens when class comes from a connected, distraction-laden device?

In my experience so far, the online education experience hasn’t been great, although it went better than I feared, and I think that, as norms shift, we’ll see online education become more effective. But the big hurdle remains motivation, not information. I too find teaching via Zoom (or, presumably, similar platforms) unsatisfying, because concentration and motivation seem harder to sustain there. Perhaps online education is just widening the gap between highly structured, self-motivated people and everyone else.

 

A simple solution to peer review problems

Famous computer scientist Rodney Brooks, co-founder of iRobot (maker of the Roomba), writes about the problems of peer review in academia. He notes that peer review has some important virtues even as the way it’s currently practiced generates many problems and pathologies too. Brooks says, “I don’t have a solution, but I hope my observations here might be interesting to some.” I have a partial solution: researchers “publish” papers to arXiv or similar, then “submit” them to the journal, which conducts peer review. The “journal” is a list of links to papers that it has accepted or verified.

That way, the paper is available to those who find it useful. If a researcher really thinks the peer reviewers are wrong, they can state why, and why they’re leaving it up, despite the critiques. Peer-review reports can be kept anonymous but can also be appended to the paper, so that readers can decide for themselves whether the peer reviewers’ comments are useful or accurate. If a writer wishes to be anonymous, the writer can leave the work as “anonymous” until after it’s been submitted for peer review, which would allow for double-blind peer review to occur.
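To make the proposed workflow concrete, here is a minimal sketch in Python; every name in it is a hypothetical illustration, and it assumes the “journal” really is nothing more than a curated list of links plus appended review reports:

```python
# Minimal sketch of the "publish first, review second" overlay-journal idea
# described above. All names and structures are hypothetical illustrations.
from dataclasses import dataclass, field

@dataclass
class Paper:
    title: str
    preprint_url: str          # e.g., an arXiv link: public from day one
    author: str = "anonymous"  # can stay anonymous until review completes

@dataclass
class Review:
    verdict: str               # "accept" or "reject"
    report: str                # appended to the paper; readers judge it themselves

@dataclass
class OverlayJournal:
    name: str
    listings: list[tuple[Paper, list[Review]]] = field(default_factory=list)

    def submit(self, paper: Paper, reviews: list[Review]) -> bool:
        """Acceptance just adds a link; the preprint stays public either way."""
        accepted = all(r.verdict == "accept" for r in reviews)
        if accepted:
            self.listings.append((paper, reviews))
        return accepted

# Usage: the paper is available regardless; the journal only curates links.
journal = OverlayJournal("Hypothetical Journal of Examples")
paper = Paper("On Overlay Journals", "https://arxiv.org/abs/0000.00000")
journal.submit(paper, [Review("accept", "Sound method; minor typos.")])
```

The design point is that rejection withholds only the listing, never the paper itself, which is what lets an author leave a critiqued paper up with the reports attached.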

Server costs for things like simple websites are almost indistinguishable from zero today, and those costs can easily be borne by the universities themselves, which will find them far lower than subscription costs.

What stands in the way? Elsevier and one or two other multi-billion-dollar publishing conglomerates that control the top journals in most fields. These giants want to maintain library fees that amount to thousands of dollars per journal, even though journal editors are paid minimally, as are peer reviewers. Only the companies make money. Academics live and die based on prestige, so few will deviate from the existing model. Publishing in top journals is essential for hiring, tenure, and promotion (the tenure model also generates a bunch of pathologies in academia, but we’ll ignore those for now).

There are pushes to change the model—the entire University of California system, for example, announced in 2019 that it would “terminate subscriptions with world’s largest scientific publisher in push for open access to publicly funded research.” In my view, all public funding bodies should stipulate that no research funded with public money can be published in closed-access journals, and foundations should do the same. There is no reason for modern research to be hidden behind paywalls.

Coronavirus and the need for urgent research have also pushed biomedicine and medicine towards the “publish first” model. Peer review seems to be happening after papers are published in medRxiv or bioRxiv. One hopes these are permanent changes. The problems with the journal model are well known, but too little is being done. Or, rather, too little was being done: the urgency of the situation may lead to reform in most fields.

Open journals would be a boon for access and for intellectual diversity. When I was in grad school for English (don’t do that, by the way, I want to reiterate), the peer-reviewer reports I got on most of my papers were so bad that they made me realize I was wasting my life trying to break into the field; there is a difference between “negative but fair” and “these people are not worth trying to impress,” and in English lit the latter predominated. In addition, journals took a year, and sometimes years, to publish the papers they accepted, raising the obvious question: if something is so unimportant that it’s acceptable to take years to publish it, why bother? “The Research Bust” explores the relevant implications. No one else in the field seemed to care about the field’s torpid pace or what that pace implies. Many academics in the humanities have been wringing their hands about the state of the field for years, without engaging in real efforts to fix it, even as professor jobs disappear and undergrads choose other majors. In my view, intellectual honesty and diversity are both important, and yet the current academic system doesn’t properly incentivize or reward either, though it could.

For another take on peer review’s problems, see Andrew Gelman.

Have journalists and academics become modern-day clerics?

This guy was wrongly and somewhat insanely accused of sexual impropriety by two neo-puritans; stories about individual injustice can be interesting, but this one seems like an embodiment of a larger trend, and, although the story is long and some of the author’s assumptions are dubious, I think there’s a different, conceivably better, takeaway than the one implied: don’t go into academia (at least the humanities) or journalism. Both fields are fiercely, insanely competitive for very small amounts of money; because the money is so bad, many people get or stay in them for non-monetary ideological reasons, almost the way priests, pastors, or other religious figures used to choose low incomes and high purpose (or “purpose” if we’re feeling cynical). Not only that, but clerics often know the answer to the question before the question has even been asked, and they don’t need free inquiry because the answers are already available—attributes that are very bad, yet seem to be increasingly common, in journalism and academia.

Obviously journalism and academia have never been great fields for getting rich, but the business model for both has fallen apart in the last 20 years. The people willing to tolerate the low pay and awful conditions must have other motives (a few are independently wealthy) for going into them. I’m not arguing that other motives have never existed, but today you’d have to be absurdly committed to those other motives. That there are new secular religions is not an observation original to me, but once I heard that idea, a lot of other strange-seeming things about modern culture clicked into place. Low-pay, low-status, low-prestige occupations must do something for the people who go into them.

Once an individual enters the highly mimetic and extremely ideological space, he becomes a good target for destruction—and makes a good scapegoat for anyone who is not getting the money or recognition they think they deserve. Or for anyone who is simply angry or feels ill-used. The people who are robust or anti-fragile stay out of this space.

Meanwhile, less ideological and much wealthier professions may not be immune to the cultural psychosis found in a few media and academic fields, but they’re much less susceptible to mimetic contagions and ripping-downs. The people in them have greater incomes and resources. They have a greater sense of doing something in the world that is not primarily intellectual, and thus probably not primarily mimetic and ideological.

There’s a personal dimension to these observations, because I was attracted to both journalism and academia, but the former has shed at least half its jobs over the last two decades, and the latter became untenable post-2008. I’ve had enough interaction with both fields to get their cultural tenor, and smart people largely choose more lucrative and less crazy industries. Like many people attracted to journalism, I read books like All the President’s Men in high school and wanted to model myself on Woodward and Bernstein. But almost no reporters today are like Woodward and Bernstein. They’re more likely to be writing BuzzFeed clickbait, and nothing generates more clicks than outrage. Smart people interested in journalism can do a minimal amount of research and realize that the field is oversubscribed and should be avoided.

When I hear students say they’re majoring in journalism, I look at them cockeyed, regardless of gender; there’s fierce competition coupled with few rewards. The journalism industry has evolved to take advantage of youthful idealism, much like fashion, publishing, film, and a few other industries. Perhaps that is why these industries inspire so many insider satires: the gap between idealistic expectation and cynical reality is very wide.

Even if thousands of people read this and follow its advice, thousands more will keep attempting to claw their way into journalism or academia. It is an unwise move. We have people like David Graeber buying into the innuendo and career-attack culture. Smart people look at this and do something else, something where a random smear is less likely to cost an entire career.

We’re in the midst of a new-puritan revival and yet large parts of the media ecosystem are ignoring this idea, often because they’re part of it.

It is grimly funny to have read the first story linked next to a piece that quotes Solzhenitsyn: “To do evil a human being must first of all believe that what he’s doing is good, or else that it’s a well-considered act in conformity with natural law. . . . it is in the nature of a human being to seek a justification for his actions.” Ideology is back, and destruction is easier than construction. Our cultural immune system seems not to have figured this out yet. Short-form social media like Facebook and Twitter arguably encourage black-and-white thinking, because there’s not enough space to develop nuance. There is enough space, however, to say that the bad guy is right over there, and we should go attack that bad guy for whatever thought crimes or wrongthink they may have committed.

Ideally, academics and journalists come to a given situation or set of facts and don’t know the answer in advance. In an ideal world, they try to figure out what’s true and why. “Ideal” appears twice because, historically, departures from the ideal are common, but ideological neutrality and an investigatory posture are preferable to knowing the answer in advance and judging people based on demographic characteristics and prearranged prejudices, yet those traits seem to have seeped into academic and journalistic cultures.

Combine this with present-day youth culture that equates feelings with facts and felt harm with real harm, and you get a pretty toxic stew—”toxic” being a favorite word of the new clerics. See further, America’s New Sex Bureaucracy. If you feel it’s wrong, it must be wrong, and probably illegal; if you feel it’s right, it must be right, and therefore desirable. This kind of thinking has generated some backlash, but not enough to save some of the demographic undesirables who wander into the kill zone of journalism or academia. Meanwhile, loneliness seems to be more acute than ever, and we’re stuck wondering why.

The Seventh Function of Language — Laurent Binet

The Seventh Function of Language is wildly funny, at least for the specialist group of humanities academics and those steeped in humanities academic nonsense of the last 30 – 40 years. For everyone else, it may be like reading a prolonged in-joke. Virtually every field has its jokes that require particular background to get (I’ve heard many doctors tell stories whose punchline is something like, “And then the PCDH level hit 50, followed by an ADL of 200!” Laughter all around, except for me). In the novel, Roland Barthes doesn’t die from a typical car crash in 1980; instead, he is murdered. But by whom, and why?

A hardboiled French detective (or “Superintendent,” which is France’s equivalent) must team up with a humanities lecturer to find out, because in the world of The Seventh Function it’s apparent that a link exists between Barthes’s work and his murder. They don’t exactly have a Holmes and Watson relationship, as neither Bayard (the superintendent) nor Herzog (the lecturer) makes brilliant leaps of deduction; rather, the two complement each other, each alternating between bumbling and brilliance. Readers of The Name of the Rose will recognize both the detective/sidekick motif and the way a murder is linked to the intellectual work of the deceased. In most crime fiction—as, apparently, in most crime—the motives are small and often paltry, if not outright pathetic: theft, revenge, jealousy, sex. “Money and/or sex” pretty much summarizes why people kill (and perhaps why many people live). That sets up the novel’s conceit: here, someone is killed for an idea.

The novel’s central, unstated joke is that, in the real world, no one would bother killing over literary theory because literary theory is so wildly unimportant (“Bayard gets the gist: Roland Barthes’s language is gibberish. But in that case why waste your time reading him?”). At Barthes’s funeral, Bayard thinks:

To get anywhere in this investigation, he knows that he has to understand what he’s searching for. What did Barthes possess of such value that someone not only stole it from him but they wanted to kill him for it too?

The real world answer is “nothing.” He, like other French intellectuals, has nothing worth killing over. And if you have nothing conceivably worth killing over, are your ideas of any value? The answer could plausibly be “yes,” but in the case of Barthes and others it is still “no.” And the money question structures a lot of relations: Bayard thinks of Foucault, “Does this guy earn more than he does?”

Semiotics permeates:

Man is an interpreting machine and, with a little imagination, he sees signs everywhere: in the color of his wife’s coat, in the stripe on the door of his car, in the eating habits of the people next door, in France’s monthly unemployment figures, in the banana-like taste of Beaujolais nouveau (for it always tastes either like banana or, less often, raspberry. Why? No one knows, but there must be an explanation, and it is semiological.)…

There are also various amusing authorial intrusions, and one could say the usual things about them. The downside of The Seventh Function is that its underlying thrust is similar to the numerous other academic novels out there; if you’ve read a couple, you’ve read them all. The upsides are considerable, however, among them the comedy of allusion and the gap between immediate, venal human behavior and the Olympian ideas enclosed in books produced by often-silly humans. If the idea stated in the book and the author’s behavior don’t match, what lesson should we take from that mismatch?

The college bribery scandal vs. Lambda School

Many of you have seen the news, but, while the bribery scandal is sucking up all the attention in the media, Lambda School is offering a $2,000/month living stipend to some students and Western Governors University is continuing to quietly grow. The Lambda School story is a useful juxtaposition with the college-bribery scandal. Tyler Cowen has a good piece on the bribery scandal (although to me the scandal looks pretty much like business-as-usual among colleges, which are wrapped up in mimetic rivalry, rather than a scandal as such, unless the definition of a scandal is “when someone accidentally tells the truth”):

Many wealthy Americans perceive higher education to be an ethics-free, law-free zone where the only restraint on your behavior is whatever you can get away with.

This may be an overly cynical take, but to what extent do universities act like ethics-free, law-free zones? They accept students (and their student-loan payments) who are unlikely to graduate; they have no skin in the game regarding student loans; insiders understand the “paying for the party” phenomenon, while outsiders don’t; too frequently, universities don’t seem to defend free speech or inquiry. In short, many universities are exploiting information asymmetries between themselves, their students, and those students’ parents—especially the weakest and worst-informed students. Discrimination against Asians in admissions is common at some schools and is another open secret, albeit less secret than it once was. When you realize what colleges are doing to students and their families, why is it a surprise when students and their families reciprocate?

To be sure, this is not true of all universities, not all the time, not all parts of all universities, so maybe I am just too close to the sausage factory. But I see a whole lot of bad behavior, even when most of the individual actors are well-meaning. Colleges have evolved in a curious set of directions, and no one attempting to design a system from scratch would choose what we have now. That is not a reason to imagine some kind of perfect world, but it is worth asking how we might evolve out of the current system, despite the many barriers to doing so. We’re also not seeing employers search for alternate credentialing sources, at least from what I can ascertain.

See also “I Was a College Admissions Officer. This Is What I Saw.” In a social media age, why are we not seeing more of these pieces? (EDIT: Maybe we are? This is another one, scalding and also congruent with my experiences.) Overall, I think colleges are really, really good at marketing, and arguably marketing is their core competency. A really good marketer, however, can convince you that marketing is not their core competency.

The Coddling of the American Mind — Jonathan Haidt and Greg Lukianoff

Apart from its intellectual content and institutional structure descriptions, The Coddling of the American Mind makes being a contemporary college student in some schools sound like a terrible experience:

Life in a call-out culture requires constant vigilance, fear, and self-censorship. Many in the audience may feel sympathy for the person being shamed but are afraid to speak up, yielding the false impression that the audience is unanimous in its condemnation.

Who would want to live this way? It sounds exhausting and tedious. If we’ve built exhausting and tedious ways to live into the college experience, perhaps we ought to stop doing that. I also find it strange that, in virtually every generation, free speech and free thought have to be re-litigated. The rationale behind opposing free speech and thought changes, but the opposition remains.

Coddling is congruent with this conversation between Claire Lehmann and Tyler Cowen, where Lehmann describes Australian universities:

COWEN: With respect to political correctness, how is it that Australian universities are different?

LEHMANN: I think the fact that they’re public makes a big difference because students are not paying vast sums to go to university in the first place, so students have less power.

If you’re a student, and you make a complaint against a professor in an Australian university, the university’s just going to shrug its shoulders, and you’ll be sort of walked out of the room. Students have much less power to make complaints and have their grievances heard. That’s one factor.

Another factor is, we don’t have this hothouse environment where students go and live on campus and have their social life collapsed into their university life.

Most students in Australia live at home with their parents or move into a share house and then travel to university, but they don’t live on campus. So there isn’t this compression where your entire life is the campus environment. That’s another factor.

Overall, I suspect the American university environment as a total institution, where students live, study, and play, might be better in some essential ways: it may foster more entrepreneurship, due to students being physically proximate to one another. American universities also have a much greater history of alumni involvement (and donations), donations likely being tied to the sense of affinity with the university generated by living on campus.

But Haidt and Lukianoff are pointing to some of the potential costs: when everything happens on campus, no one gets a break from “call-out culture” or accusations of being “offensive.” I think I would laugh at this sort of thing if I were an undergrad today, or choose bigger schools (the authors use an example from Smith College) that are more normal and less homogenous and neurotic. Bigger schools have more diverse student bodies and fewer students with the time and energy to relentlessly surveil one another. The authors describe how “Reports from around the country are remarkably similar; students at many colleges today are walking on eggshells, afraid of saying the wrong thing, liking the wrong post, or coming to the defense of someone who they know to be innocent, out of fear they themselves will be called out by a mob on social media.”

Professors, especially in humanities departments, seem to be helping to create this atmosphere by embracing “microaggressions,” “intersectionality,” and similar doctrines of fragility. Perhaps professors ought to stop doing that, too. I wonder, too, if or when students will stop wanting to attend schools like Smith, where the “us vs. them” worldview prevails.

School itself may be becoming more boring: “Many professors say they now teach and speak more cautiously, because one slip or simple misunderstanding could lead to vilification and even threats from any number of sources.” And, in an age of ubiquitous cameras, it’s easy to take something out of context. Matthew Reed, who has long maintained a blog called “Dean Dad,” has written about how he would adopt certain political perspectives in class (Marxist, fascist, authoritarian, libertarian, etc.) in an attempt to get students to understand what some of those ideologies entail and what their advocates might say. So he’d say things he doesn’t believe in order to get students to think. But that strategy is vulnerable to camera-and-splice tactics. It’s a tension I feel, too: in class I often raise ideas or readings to encourage thinking or to push back against apparent groupthink. Universities are supposed to exist to help students (and people more generally) think independently; while courtesy is important, at what point does “caution” become tedium, or censorship?

Schools encourage fragility in other ways:

“Always trust your feelings,” said Misoponos, and that dictum may sound wise and familiar. You’ve heard versions of it from a variety of sappy novels and pop psychology gurus. But the second Great Untruth—the Untruth of Emotional Reasoning—is a direct contradiction of much ancient wisdom. [. . .] Sages in many societies have converged on the insight that feelings are always compelling, but not always reliable.

More important: modern psychologists and behavioral economists have found and argued the same. Feelings of fear, uncertainty, and doubt are strangely encouraged: “Administrators often acted in ways that gave the impression that students were in constant danger and in need of protection from a variety of risks and discomforts.” How odd: 18- and 19-year-olds in the military face risks and discomforts like, you know, being shot. Maybe the issue is that our society has too little risk, or risk that is invisible (this is your occasional reminder that about 30,000 people die in car crashes every year, and hundreds of thousands more are mangled, yet we do little to alleviate the car-centric world).

Umberto Eco says, “Art is an escape from personal emotion, as both Joyce and Eliot had taught me.” Yet we often treat personal emotion as the final arbiter and decider of things. “Personal emotion” is very close to the word “feelings.” We should be wary of trusting those feelings; art enables us to escape from our own feelings into someone else’s conception of the world, if we allow it to. The study of art in many universities seemingly discourages this. Perhaps we ought to read more Eco.

I wonder if Coddling is going to end up being one of those important books no one reads.

It is also interesting to read Coddling in close proximity to Michael Pollan’s How to Change Your Mind: What the New Science of Psychedelics Teaches Us About Consciousness, Dying, Addiction, Depression, and Transcendence. Perhaps we need less iPhone and more magic mushrooms. I’d actually like to hear a conversation among Pollan, Haidt, and Lukianoff. The other day I was telling a friend about How to Change Your Mind, and he said that not only had he tried psychedelics in high school, but his experience cured or alleviated his stutter and helped him find his way in the world. The plural of anecdote is not data, but it’s hard to imagine safety culture approving of psychedelic experiences (despite their safety, which Pollan describes in detail).

In The Lord of the Rings, when Aragorn and his companions believe that Gandalf has perished in Moria, Gimli says that “Gandalf chose to come himself, and he was the first to be lost… his foresight failed him.” Aragorn replies, “The counsel of Gandalf was not founded on foreknowledge of safety, for himself or for others.” And neither is life: it is not founded on foreknowledge of safety. Adventure is necessary to become a whole person. Yet childhood and even universities are today increasingly obsessed with safety, to the detriment of the development of children and students. In my experience, military veterans returning to college are among the most interesting and diligent students. We seem to have forgotten Gandalf’s lessons. One advantage of reading old books may be the forgotten cultural assumptions beneath them; in The Lord of the Rings risk is necessary for reward, and the quality of a life does not depend on the elimination of challenge.

Here’s a good critical review.

“Oh, the Humanities!”

It’s pretty rare for a blog post, even one like “Mea culpa: there *is* a crisis in the humanities,” to inspire a New York Times op-ed, but here we have “Oh, the Humanities! New data on college majors confirms an old trend. Technocracy is crushing the life out of humanism.” It’s an excellent essay. Having spent a long time working in the humanities (a weird phrase, if you think about it) and having written extensively about the problems with the humanities as currently practiced in academia, I naturally have some thoughts.

Douthat notes the decline in humanities majors and says, “this acceleration is no doubt partially driven by economic concerns.” That’s true. Then we get this interesting move:

In an Apollonian culture, eager for “Useful Knowledge” and technical mastery and increasingly indifferent to memory and allergic to tradition, the poet and the novelist and the theologian struggle to find an official justification for their arts. And both the turn toward radical politics and the turn toward high theory are attempts by humanists in the academy to supply that justification — to rebrand the humanities as the seat of social justice and a font of political reform, or to assume a pseudoscientific mantle that lets academics claim to be interrogating literature with the rigor and precision of a lab tech doing dissection.

There is likely some truth here too. In this reading, the humanities have turned from traditional religious feeling and redirected the religious impulse in a political direction.

Douthat has some ideas about how to improve:

First, a return of serious academic interest in the possible (I would say likely) truth of religious claims. Second, a regained sense of history as a repository of wisdom and example rather than just a litany of crimes and wrongthink. Finally, a cultural recoil from the tyranny of the digital and the virtual and the Very Online, today’s version of the technocratic, technological, potentially totalitarian Machine that Jacobs’s Christian humanists opposed.

I think number two is particularly useful, number three is reasonable, and number one is fine but somewhat unlikely and not terribly congruent with my own inclinations. But I also think that the biggest problem with the humanities as currently practiced is the turn away from disinterested inquiry about what is true, what is valuable, what is beautiful, what is worth remembering, what should be made, etc., and toward politics, activism, and taking sides in current political debates—especially when those debates are highly interested in stratifying groups of people based on demographic characteristics, then assigning values to those groups.

That said, I’m not the first person to say as much, and saying it has had zero impact. Major structural forces stand in the way of reform. The current grad-school-to-tenure structure kills most serious, divergent thinking and encourages a groupthink monoculture. Higher-ed growth peaked around 1975; not surprisingly, the current “culture wars” or “theory wars” or whatever you want to call them got going in earnest in the 1980s, when there was little job growth among humanities academics. And they’ve been going, in various ways, ever since.

Before the 1980s, most people who got PhDs in the humanities eventually got jobs of some kind or other. This meant heterodox thinkers could show up, snag a foothold somewhere, and change the culture of the academic humanities. People like Camille Paglia or Harold Bloom or even Paul de Man (not my favorite writer) all have this quality. But since the 1980s, the number of jobs has shrunk, the length of grad school has lengthened, and heterodox thinkers have (mostly) been pushed out. Interesting writers like Jonathan Gottschall work as adjuncts, if they work at all.

Today, the jobs situation is arguably worse than ever: I can’t find the report off-hand, but the Modern Language Association tracks published, tenure-track jobs, and those declined from about a thousand a year before 2008 to about 300 – 400 per year now.

Current humanities profs hire new humanities profs who already agree with them, politically speaking. Current tenured profs tenure new profs who already agree. This dynamic wasn’t nearly as strong when pretty much everyone got a job, even those who advocated for weird new ideas that eventually became the norm. That process is dead. Eliminating tenure might help the situation some, but any desire to eliminate tenure as a practice will be deeply opposed by the powerful who benefit from it.

So I’m not incredibly optimistic about a return to reason among humanities academics. Barring that return to reason, a lot of smart students are going to look at humanities classes and the people teaching them, then decide to go major in economics (I thought about majoring in econ).

I remember taking a literary theory class when I was an undergrad and wondering how otherwise seemingly-smart people could take some of that terrible writing and thinking seriously. Still, I was interested in reading and fiction, so I ignored the worst parts of what I read (Foucault, Judith Butler—those kinds of people) and kept on going, even into grad school. I liked to read and still do. I’d started writing novels (bad ones, at the time). I didn’t realize the extent to which novels like Richard Russo’s Straight Man and Francine Prose’s Blue Angel are awfully close to nonfiction.

By now, the smartest people avoid most humanities subjects as undergrads and then as grad students, or potential grad students. Not all of the smartest people, but most of them. And that anti-clumping tendency leaves behind people who don’t know any better or who are willing to repeat the endless and tedious postmodernist mantras like initiates into a cult (and there is the connection to Douthat, who’d like us to acknowledge the religious impulse more than most of us now do). Some of them are excellent sheep: a phrase from William Deresiewicz that he applies to students at elite schools but that might also be applied to many humanities grad students.

MFA programs, last time I checked, are still doing pretty well, and that’s probably because they’re somewhat tethered to the real world and the desire to write things other humans might want to read. That desire seems to have disappeared in most of humanistic academia. Leaving the obvious question: “Why bother?” And that is the question I can no longer answer.

Postmodernisms: What does *that* mean?

In response to What’s so dangerous about Jordan Peterson?, there have been a bunch of discussions about what “postmodernism” means (“He believes that the insistence on the use of gender-neutral pronouns is rooted in postmodernism, which he sees as thinly disguised Marxism.”) By now, postmodernism has become so vague and broad that it means almost anything—which is of course another way of saying “nothing”—so the plural is there in the title for a reason. In my view, most people claiming the mantle of big broad labels like “Marxist,” “Christian,” “Socialist,” “Democrat,” etc. are trying to signal something about themselves and their identity much more than they’re trying to understand the nuances of what those positions might mean or what ideas and policies really underlie the labels, so for the most part when I see someone talking or writing about postmodernism, I say, “Oh, that’s nice,” then move on to something more interesting and immediate.

But if one is going to attempt to describe postmodernism, and how it relates to Marxism, I’d start by observing that old-school Marxists don’t believe much of the linguistic stuff that postmodernists sometimes say they believe—about how everything reduces to “language” or “discourse”—but I think that the number of people who are “Marxists” in the sense that Marx or Lenin would recognize is tiny, even in academia.

I think what’s actually happening is this: people have an underlying set of models or moral codes and then grab some labels to fit on top of those codes. So the labels fit, or try to fit, the underlying morality and beliefs. People in contemporary academia might be particularly drawn to a version of strident moralism in the form of “postmodernism” or “Marxism” because they don’t have much else—no religion, not much influence, no money, so what’s left? A moral superiority that gets wrapped up in words like “postmodernism.” So postmodernism isn’t so much a thing as a mode or a kind of moral signal, and that in turn is tied into the self-conception of people in academia.

You may be wondering why academia is being dragged into this. Stories about what “postmodernism” means are bound up in academia, where ideas about postmodernism still simmer. In humanities grad school, most grad students make no money, as previously mentioned, and don’t expect to get academic jobs when they’re done. Among those who do graduate, most won’t get jobs. Those who do probably won’t get tenure. And even those who get tenure will often get it for writing a book that will sell two hundred copies to university libraries and then disappear without a trace. So… why are they doing what they do?

At the same time, humanities grad students and profs don’t even have God to console them, as many religious figures do. So some of the crazier stuff emanating from humanities grad students might be a misplaced need for God or purpose. I’ve never seen the situation discussed in those terms, but as I look at the behavior I saw in grad school and the stories emerging from humanities departments, I think that a central absence better explains many problems than most “logical” explanations. And then “postmodernism” is the label that gets applied to this suite of what amount to beliefs. And that, in turn, is what Jordan Peterson is talking about. If you are (wisely) not following trends in the academic humanities, Peterson’s tweet on the subject probably makes no sense.

Most of us need something to believe in—and the need to believe may be more potent in smarter or more intellectual people. In the absence of God, we very rarely get “nothing.” Instead, we get something else, but we should take care about what that “something” is. The sense of the sacred is still powerful within humanities departments, but what counts as sacred has shifted, to their detriment and to the detriment of society as a whole.

(I wrote here about the term “deconstructionism,” which has a set of problems similar to “postmodernism,” so much of what I write there also applies here.)

Evaluating things along power lines, as many postmodernists and Marxists seek to do, isn’t always a bad idea, of course, but there are many other dimensions along which one can evaluate art, social situations, politics, etc. So the relentless focus on “power” becomes tedious and reductive after a while: one always knows what the speaker is likely to say, unless, of course, the speaker is the one who holds the power and the target obviously doesn’t (e.g., it seems obvious that many tenured professors are in positions of relatively high power, especially compared to grad students; that’s part of what makes the Lindsay Shepherd story compelling).

This brand of postmodernism tends to infantilize groups or individuals (they’re all victims!) or to lead to races to the bottom and the development of victimhood culture. But these pathologies are rarely acknowledged by its defenders.

Has postmodernism led to absurdities like the one at Evergreen State, which was followed by huge enrollment drops? Maybe. I’ve seen the argument and, on even days, buy it.

I read a good tweet summarizing the basic problem:

When postmodern types say that truth-claims are rhetoric and that attempts to provide evidence are but moves in a power-game—believe them! They are trying to tell you that this is how they operate in discussions. They are confessing that they cannot imagine doing otherwise.

If everything is just “rhetoric” or “power” or “language,” there is no real way to judge anything. Along a related axis, see “Dear Humanities Profs: We Are the Problem.” Essays like it seem to appear about once a year or so. That they seem to change so little is discouraging.

So what does postmodernism mean? Pretty much whatever you want it to mean, whether you love it for whatever reason or hate it for whatever reason. Which is part of the reason you’ll very rarely see it used on this site: it’s too unspecific to be useful, so I shade towards words with greater utility that haven’t been killed, or at least made somatic, through over-use. There’s a reason most smart people eschew talking about postmodernism or deconstructionism or similar terms: they sit at a not-very-useful level of abstraction, unless one is primarily trying to signal tribal affiliation, and signaling tribal affiliation isn’t a very interesting mode of discussion.

If you’ve read to the bottom of this, congratulations! I can’t imagine many people are terribly interested in this subject; it seems that most people read a bit about it, realize that many academics in the humanities are crazy, and go do something more useful. It’s hard to explain this stuff in plain language because it often doesn’t mean much of anything, and explaining why that’s so takes a lot.

What happened to the academic novel?

In “The Joke’s Over: How academic satire died,” Andrew Kay asks: What happened to the academic novel? He proffers some excellent theories, including: “the precipitate decline of English departments, their tumble from being the academy’s House Lannister 25 years ago — a dignified dynasty — to its House Greyjoy, a frozen island outpost. [. . .] academic satires almost invariably took place in English departments.” That seems plausible, and it’s also of obvious importance that writers tend to inhabit English departments, not biology departments; novels are more likely to come from novelists and people who study novels than from people who study DNA.

But Kay goes on to note that tenure-track jobs disappeared, which made making fun of academics less funny because their situation became serious. I don’t think that’s it, though: tenure-track jobs began declining enormously around 1975, yet academic satires kept appearing regularly after that.

But:

When English declined, though, academic satire dwindled with it. Much of the clout that English departments had once enjoyed migrated to disciplines like engineering, computer science, and (that holiest of holies!) neuroscience. (Did we actually have a March for Science last April, or was that satire?) Poetry got bartered for TED talks, Wordsworth and Auden for that new high priest of cultural wisdom, the cocksure white guy in bad jeans and a headset holding forth on “innovation” and “biotech.”

And I think this makes sense: much of what English departments began producing in the 1980s and 1990s is nonsense that almost no one takes seriously, even the people who produce it, and it’s hard to satirize total nonsense:

Most satire relies on hyperbole: The satirist holds a ludicrously distorted mirror up to reality, exaggerating the flaws of individuals and systems and so (ideally) shocking them into reform. But what happens when reality outpaces satire, or at least grows so outlandish that a would-be jester has to sprint just to keep up?

What English departments are doing is mostly unimportant, so larger cultural attention focuses on TED talks or edge.org or any number of other venues and disciplines. Debating economics is more interesting than debating deconstructionism (or whatever) because the outcome of the debate matters. In grad school I heard entirely too many people announce that there is no such thing as reality, then go off to lunch (which seemed a lot like reality to me, but I was a bit of a grad-school misfit).

A couple years ago I wrote “What happened with Deconstruction? And why is there so much bad writing in academia?“, which attempts to explain some of the ways that academia came to be infested by nonsense. Smart people today might gaze at what’s going on in English (and many other humanities) departments, laugh, and move on to more important issues—to the extent they bother gazing over at all. If the Lilliputians want to chase each other around with rhetorical sticks, let them; the rest of us have things to do.

Decades of academic satire have produced few if any changes. The problems Blue Angel and Straight Man identified remain and are, if anything, worse. No one in English departments has anything to lose, intellectually speaking; the sense of perspective departed a long time ago. At some point, would-be reformers wander off and deal with more interesting topics. English department members, meanwhile, can’t figure out why they can’t get more undergrads to major in English or more tenure-track hires. One could start by looking in the mirror, but it’s easier and more fun to blame outsiders than it is to look within.

Back when I was writing a dissertation on academic novels, a question kept creeping up on me, like a serial killer in a horror novel: “Who cares?” I couldn’t find a good answer to that question—at least, not one that most people in the academic humanities seemed to accept. It seems that I’m not alone. Over time, people vote with their feet, or, in this case, attention. If no one wants to pay attention to English departments, maybe that should tell us something.

Nah. What am I saying? It’s them, not us.
