“Why technology will never fix education”

“Why technology will never fix education” is a 2015 article that remains absurdly relevant in the COVID era of distance education, and this paragraph in particular resonates with my teaching experience:

The real obstacle in education remains student motivation. Especially in an age of informational abundance, getting access to knowledge isn’t the bottleneck, mustering the will to master it is. And there, for good or ill, the main carrot of a college education is the certified degree and transcript, and the main stick is social pressure. Most students are seeking credentials that graduate schools and employers will take seriously and an environment in which they’re prodded to do the work. But neither of these things is cheaply available online.

For the last few years, I’ve often asked students to look at their phones’ “Screen Time” (iOS) or “Digital Wellbeing” (Android) apps. These apps measure how much time a person spends using their phone each day, and most students report 3–7 hours per day on their phones. The top apps are usually Instagram, Snapchat, and Facebook. Students often laugh bashfully at the sheer number of hours they spend on their phones, and some later confess to being embarrassed. I ask the same thing when students tell me how “busy” they are during office hours (no one ever says they’re not busy). So far, both the data and the anecdotes I’ve seen or heard support the “ban connected devices in class” position I’ve held for a while. The greatest discipline needed today seems to be the discipline not to stare relentlessly at the phone.

But what happens when class comes from a connected, distraction-laden device?

In my experience so far, online education hasn’t been great, although it went better than I feared, and I think that, as norms shift, we’ll see online education become more effective. But the big hurdle remains motivation, not information. I too find teaching via Zoom (or similar platforms) unsatisfying, because concentration and motivation seem harder to sustain there. Perhaps online education is just widening the gap between highly structured, self-motivated people and everyone else.


A simple solution to peer review problems

Famous computer scientist and iRobot (maker of the Roomba) co-founder Rodney Brooks writes about the problems of peer review in academia. He notes that peer review has some important virtues, even as the way it’s currently practiced generates many problems and pathologies. Brooks says, “I don’t have a solution, but I hope my observations here might be interesting to some.” I have a partial solution: researchers “publish” papers to arXiv or similar, then “submit” them to the journal, which conducts peer review. The “journal” is then just a list of links to the papers it has accepted or verified.

That way, the paper is available to those who find it useful. If a researcher really thinks the peer reviewers are wrong, they can state why, and why they’re leaving the paper up despite the critiques. Peer-review reports can be kept anonymous but can also be appended to the paper, so that readers can decide for themselves whether the reviewers’ comments are useful or accurate. An author who wishes to remain anonymous can post the work as “anonymous” until it’s been through peer review, which would allow for double-blind review.
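The mechanics here are almost trivially simple. As a minimal sketch of the idea (the class and field names below are my own illustration, not any existing system’s), the whole “journal” could be little more than a registry of links with reviews attached:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Review:
    """A peer-review report that travels with the paper."""
    report: str
    reviewer: Optional[str] = None  # None = the reviewer stays anonymous

@dataclass
class Paper:
    """The paper itself lives on arXiv (or similar); the journal only links to it."""
    title: str
    preprint_url: str
    author: str = "anonymous"  # stays "anonymous" until review ends, enabling double-blind review
    reviews: list[Review] = field(default_factory=list)
    author_response: Optional[str] = None  # why the author stands by the paper despite critiques

@dataclass
class OverlayJournal:
    """The 'journal' is nothing more than a curated list of links to papers it has accepted."""
    name: str
    accepted: list[Paper] = field(default_factory=list)

    def accept(self, paper: Paper) -> None:
        self.accepted.append(paper)

# Hypothetical usage:
journal = OverlayJournal(name="An Overlay Journal")
journal.accept(Paper(
    title="An Example Paper",
    preprint_url="https://arxiv.org/abs/0000.00000",  # placeholder link
    reviews=[Review(report="Methods are sound; the conclusions overreach slightly.")],
))
```

A real system would need persistence, versioning, and an editorial process on top, but the core artifact is just a list of links.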

Server costs for things like simple websites are almost indistinguishable from zero today, and those costs can easily be borne by the universities themselves, which will find them far lower than subscription costs.

What stands in the way? Elsevier and one or two other multi-billion-dollar publishing conglomerates that control the top journals in most fields. These giants want to maintain library fees that amount to thousands of dollars per journal, even though journal editors are paid minimally, as are peer reviewers. Only the companies make money. Academics live and die based on prestige, so few will deviate from the existing model: publishing in top journals is essential for hiring, tenure, and promotion (the tenure model also generates a bunch of pathologies in academia, but we’ll ignore those for now).

There are pushes to change the model—the entire University of California system, for example, announced in 2019 that it would “terminate subscriptions with world’s largest scientific publisher in push for open access to publicly funded research.” In my view, all public funding bodies should stipulate that no research funded with public money can be published in closed-access journals, and foundations should do the same. There is no reason for modern research to be hidden behind paywalls.

The coronavirus and the need for urgent research have also pushed biomedicine and medicine toward the “publish first” model: peer review now seems to happen after a paper appears on medRxiv or bioRxiv. One hopes these changes are permanent. The problems with the journal model are well known, but too little is being done. Or, rather, too little was being done: the urgency of the situation may lead to reform in most fields.

Open journals would be a boon for access and for intellectual diversity. When I was in grad school for English (don’t do that, by the way, I want to reiterate), the peer-reviewer reports I got on most of my papers were so bad that they made me realize I was wasting my life trying to break into the field; there is a difference between “negative but fair” and “these people are not worth trying to impress,” and in English lit the latter predominated. In addition, journals took a year, and sometimes years, to publish the papers they accepted, raising the obvious question: if something is so unimportant that it’s acceptable to take years to publish it, why bother? “The Research Bust” explores the relevant implications. No one else in the field seemed to care about its torpid pace or what that pace implies. Many academics in the humanities have been wringing their hands about the state of the field for years without engaging in real efforts to fix it, even as professor jobs disappear and undergrads choose other majors. In my view, intellectual honesty and diversity are both important, and yet the current academic system doesn’t properly incentivize or reward either, though it could.

For another take on peer review’s problems, see Andrew Gelman.

Have journalists and academics become modern-day clerics?

This guy was wrongly and somewhat insanely accused of sexual impropriety by two neo-puritans; stories about individual injustice can be interesting, but this one seems like an embodiment of a larger trend, and, although the story is long and some of the author’s assumptions are dubious, I think there’s a different, conceivably better, takeaway than the one implied: don’t go into academia (at least the humanities) or journalism. Both fields are fiercely, insanely competitive, and for very small amounts of money; because the money is so bad, many people get or stay in them for non-monetary ideological reasons, almost the way priests, pastors, or other religious figures used to choose low incomes and high purpose (or “purpose” if we’re feeling cynical). Not only that, but clerics often know the answer to the question before the question has even been asked, and they don’t need free inquiry because the answers are already available—attributes that are very bad, yet seem to be increasingly common, in journalism and academia.

Obviously journalism and academia have never been great fields for getting rich, but the business model for both has fallen apart in the last 20 years. The people willing to tolerate the low pay and awful conditions must have other motives for going into them (a few are independently wealthy). I’m not arguing that other motives have never existed, but today you’d have to be absurdly committed to those other motives. That there are new secular religions is not an observation original to me, but once I heard that idea, a lot of other strange-seeming things about modern culture clicked into place. Low-pay, low-status, low-prestige occupations must do something for the people who go into them.

Once an individual enters the highly mimetic and extremely ideological space, he becomes a good target for destruction—and makes a good scapegoat for anyone who is not getting the money or recognition they think they deserve. Or for anyone who is simply angry or feels ill-used. The people who are robust or anti-fragile stay out of this space.

Meanwhile, less ideological and much wealthier professions may not be immune to the cultural psychosis of a few media and academic fields, but they’re much less susceptible to mimetic contagion and public tear-downs. The people in them have greater incomes and resources. They have a greater sense of doing something in the world that is not primarily intellectual, and thus probably not primarily mimetic and ideological.

There’s a personal dimension to these observations, because I was attracted to both journalism and academia, but the former has shed at least half its jobs over the last two decades and the latter became untenable post-2008. I’ve had enough interaction with both fields to get their cultural tenor, and smart people largely choose more lucrative and less crazy industries. Like many people attracted to journalism, I read books like All the President’s Men in high school and wanted to model myself on Woodward and Bernstein. But almost no reporters today are like Woodward and Bernstein. They’re more likely to be writing Buzzfeed clickbait, and nothing generates more clicks than outrage. Smart people interested in journalism can do a minimal amount of research and realize that the field is oversubscribed and should be avoided.

When I hear students say they’re majoring in journalism, I look at them cockeyed, regardless of gender; the competition is fierce and the rewards are few. The journalism industry has evolved to take advantage of youthful idealism, much like fashion, publishing, film, and a few other industries. Perhaps that is why these industries inspire so many insider satires: the gap between idealistic expectation and cynical reality is very wide.

Even if thousands of people read this and follow its advice, thousands more will keep attempting to claw their way into journalism or academia. It is an unwise move. We have people like David Graeber buying into the culture of innuendo and career attacks. Smart people look at this and do something else, something where a random smear is less likely to cost an entire career.

We’re in the midst of a new-puritan revival, and yet large parts of the media ecosystem are ignoring this idea, often because they’re part of it.

It is grimly funny to have read the first story linked next to a piece that quotes Solzhenitsyn: “To do evil a human being must first of all believe that what he’s doing is good, or else that it’s a well-considered act in conformity with natural law. . . . it is in the nature of a human being to seek a justification for his actions.” Ideology is back, and destruction is easier than construction. Our cultural immune system seems not to have figured this out yet. Short-form social media like Facebook and Twitter arguably encourage black-and-white thinking, because there’s not enough space to develop nuance. There is enough space, however, to say that the bad guy is right over there, and that we should go attack that bad guy for whatever thought crimes or wrongthink they may have committed.

Ideally, academics and journalists come to a given situation or set of facts not knowing the answer in advance. In an ideal world, they try to figure out what’s true and why. “Ideal” appears twice because, historically, departures from the ideal are common, but ideological neutrality and an investigatory posture are preferable to knowing the answer in advance and judging people based on demographic characteristics and prearranged prejudices; yet those latter traits seem to have seeped into academic and journalistic cultures.

Combine this with present-day youth culture that equates feelings with facts and felt harm with real harm, and you get a pretty toxic stew—”toxic” being a favorite word of the new clerics. See further, America’s New Sex Bureaucracy. If you feel it’s wrong, it must be wrong, and probably illegal; if you feel it’s right, it must be right, and therefore desirable. This kind of thinking has generated some backlash, but not enough to save some of the demographic undesirables who wander into the kill zone of journalism or academia. Meanwhile, loneliness seems to be more acute than ever, and we’re stuck wondering why.

The Seventh Function of Language — Laurent Binet

The Seventh Function of Language is wildly funny, at least for the specialist group of humanities academics and those steeped in the humanities-academic nonsense of the last 30–40 years. For everyone else, it may be like reading a prolonged in-joke. Virtually every field has jokes that require particular background to get (I’ve heard many doctors tell stories whose punchline is something like, “And then the PCDH level hit 50, followed by an ADL of 200!” Laughter all around, except for me). In the novel, Roland Barthes doesn’t die from a typical car crash in 1980; instead, he is murdered. But by whom, and why?

A hardboiled French detective (or “Superintendent,” France’s equivalent) must team up with a humanities lecturer to find out, because in the world of The Seventh Function it’s apparent that a link exists between Barthes’s work and his murder. They don’t exactly have a Holmes-and-Watson relationship, as neither Bayard (the superintendent) nor Herzog (the lecturer) makes brilliant leaps of deduction; rather, they complement each other, each alternating between bumbling and brilliance. Readers of The Name of the Rose will recognize both the detective/sidekick motif and the way a murder is linked to the intellectual work of the deceased. In most crime fiction—as, apparently, in most crime—the motives are small and often paltry, if not outright pathetic: theft, revenge, jealousy, sex. “Money and/or sex” pretty much summarizes why people kill (and perhaps why many people live). That sets up the novel’s conceit: someone killed not for money or sex but for an idea.

The novel’s central, unstated joke is that, in the real world, no one would bother killing over literary theory because literary theory is so wildly unimportant (“Bayard gets the gist: Roland Barthes’s language is gibberish. But in that case why waste your time reading him?”). At Barthes’s funeral, Bayard thinks:

To get anywhere in this investigation, he knows that he has to understand what he’s searching for. What did Barthes possess of such value that someone not only stole it from him but they wanted to kill him for it too?

The real-world answer is “nothing.” He, like other French intellectuals, has nothing worth killing over. And if you have nothing conceivably worth killing over, are your ideas of any value? The answer could plausibly be “yes,” but in the case of Barthes and others it is still “no.” And the money question structures a lot of relations: Bayard thinks of Foucault, “Does this guy earn more than he does?”

Semiotics permeates:

Man is an interpreting machine and, with a little imagination, he sees signs everywhere: in the color of his wife’s coat, in the stripe on the door of his car, in the eating habits of the people next door, in France’s monthly unemployment figures, in the banana-like taste of Beaujolais nouveau (for it always tastes either like banana or, less often, raspberry. Why? No one knows, but there must be an explanation, and it is semiological.)…

There are also various amusing authorial intrusions, and one could say the usual things about them. The downside of The Seventh Function is that its underlying thrust is similar to that of the numerous other academic novels out there; if you’ve read a couple, you’ve read them all. The upsides are considerable, however, among them the comedy of allusion and the gap between immediate, venal human behavior and the Olympian ideas enclosed in books produced by often-silly humans. If the idea stated in the book and the author’s behavior don’t match, what lesson should we take from that mismatch?

The college bribery scandal vs. Lambda School

Many of you have seen the news, but, while the bribery scandal is sucking up all the attention in the media, Lambda School is offering a $2,000/month living stipend to some students and Western Governors University is continuing to quietly grow. The Lambda School story is a useful juxtaposition with the college-bribery scandal. Tyler Cowen has a good piece on the bribery scandal (although to me the scandal looks pretty much like business-as-usual among colleges, which are wrapped up in mimetic rivalry, rather than a scandal as such, unless the definition of a scandal is “when someone accidentally tells the truth”):

Many wealthy Americans perceive higher education to be an ethics-free, law-free zone where the only restraint on your behavior is whatever you can get away with.

This may be an overly cynical take, but to what extent do universities act like ethics-free, law-free zones? They accept students (and their student-loan payments) who are unlikely to graduate; they have no skin in the game regarding student loans; insiders understand the “paying for the party” phenomenon, while outsiders don’t; and too frequently, universities don’t seem to defend free speech or inquiry. In short, many universities are exploiting information asymmetries between themselves and their students and those students’ parents—especially the weakest and worst-informed students. Discrimination against Asians in admissions is common at some schools and is another open secret, albeit less secret than it once was. When you realize what colleges are doing to students and their families, why is it a surprise when students and their families reciprocate?

To be sure, this is not true of all universities, not all the time, not all parts of all universities, so maybe I am just too close to the sausage factory. But I see a whole lot of bad behavior, even when most of the individual actors are well-meaning. Colleges have evolved in a curious set of directions, and no one attempting to design a system from scratch would choose what we have now. That is not a reason to imagine some kind of perfect world, but it is worth asking how we might evolve out of the current system, despite the many barriers to doing so. We’re also not seeing employers search for alternate credentialing sources, at least from what I can ascertain.

See also “I Was a College Admissions Officer. This Is What I Saw.” In a social media age, why are we not seeing more of these pieces? (EDIT: Maybe we are? This is another one, scalding and also congruent with my experiences.) Overall, I think colleges are really, really good at marketing, and arguably marketing is their core competency. A really good marketer, however, can convince you that marketing is not their core competency.

The Coddling of the American Mind — Jonathan Haidt and Greg Lukianoff

Apart from its intellectual content and its descriptions of institutional structures, The Coddling of the American Mind makes being a contemporary college student at some schools sound like a terrible experience:

Life in a call-out culture requires constant vigilance, fear, and self-censorship. Many in the audience may feel sympathy for the person being shamed but are afraid to speak up, yielding the false impression that the audience is unanimous in its condemnation.

Who would want to live this way? It sounds exhausting and tedious. If we’ve built exhausting and tedious ways to live into the college experience, perhaps we ought to stop doing that. I also find it strange that, in virtually every generation, free speech and free thought have to be re-litigated. The rationale behind opposing free speech and thought changes, but the opposition remains.

Coddling is congruent with this conversation between Claire Lehmann and Tyler Cowen, where Lehmann describes Australian universities:

COWEN: With respect to political correctness, how is it that Australian universities are different?

LEHMANN: I think the fact that they’re public makes a big difference because students are not paying vast sums to go to university in the first place, so students have less power.

If you’re a student, and you make a complaint against a professor in an Australian university, the university’s just going to shrug its shoulders, and you’ll be sort of walked out of the room. Students have much less power to make complaints and have their grievances heard. That’s one factor.

Another factor is, we don’t have this hothouse environment where students go and live on campus and have their social life collapsed into their university life.

Most students in Australia live at home with their parents or move into a share house and then travel to university, but they don’t live on campus. So there isn’t this compression where your entire life is the campus environment. That’s another factor.

Overall, I suspect the American university environment, a total institution where students live, study, and play, might be a better one in some essential ways: it may foster more entrepreneurship, because students are physically proximate to one another. American universities also have a much greater history of alumni involvement (and donations), donations likely being tied to the sense of affinity with the university that living on campus generates.

But Haidt and Lukianoff are pointing to some of the potential costs: when everything happens on campus, no one gets a break from “call-out culture” or accusations of being “offensive.” I think I would laugh at this sort of thing if I were an undergrad today, or choose a bigger school (the authors use an example from Smith College) that is more normal and less homogeneous and neurotic. Bigger schools have more diverse student bodies and fewer students with the time and energy to relentlessly surveil one another. The authors describe how “Reports from around the country are remarkably similar; students at many colleges today are walking on eggshells, afraid of saying the wrong thing, liking the wrong post, or coming to the defense of someone who they know to be innocent, out of fear they themselves will be called out by a mob on social media.”

Professors, especially in humanities departments, seem to be helping to create this atmosphere by embracing “microaggressions,” “intersectionality,” and similar doctrines of fragility. Perhaps professors ought to stop doing that, too. I wonder, too, if or when students will stop wanting to attend schools like Smith, where the “us vs. them” worldview prevails.

School itself may be becoming more boring: “Many professors say they now teach and speak more cautiously, because one slip or simple misunderstanding could lead to vilification and even threats from any number of sources.” And, in an age of ubiquitous cameras, it’s easy to take something out of context. Matthew Reed, who has long maintained a blog called “Dean Dad,” has written about how he would adopt certain political perspectives in class (Marxist, fascist, authoritarian, libertarian, etc.) to get students to understand what some of those ideologies entail and what their advocates might say. That is, he’d say things he doesn’t believe in order to get students to think. But that strategy is vulnerable to camera-and-splice tactics. It’s a tension I feel, too: in class I often raise ideas or readings to encourage thinking or to push back against apparent groupthink. Universities are supposed to exist to help students (and people more generally) think independently; while courtesy is important, at what point does “caution” become tedium, or censorship?

Schools encourage fragility in other ways:

“Always trust your feelings,” said Misoponos, and that dictum may sound wise and familiar. You’ve heard versions of it from a variety of sappy novels and pop-psychology gurus. But the second Great Untruth—the Untruth of Emotional Reasoning—is a direct contradiction of much ancient wisdom. [. . .] Sages in many societies have converged on the insight that feelings are always compelling, but not always reliable.

More important than ancient sages: modern psychologists and behavioral economists have found and argued the same thing. Feelings of fear, uncertainty, and doubt are strangely encouraged: “Administrators often acted in ways that gave the impression that students were in constant danger and in need of protection from a variety of risks and discomforts.” How odd: 18- and 19-year-olds in the military face risks and discomforts like, you know, being shot. Maybe the issue is that our society has too little risk, or risk that is invisible (this is your occasional reminder that about 30,000 people die in car crashes every year, and hundreds of thousands more are mangled, yet we do little to alleviate the car-centric world).

Umberto Eco says, “Art is an escape from personal emotion, as both Joyce and Eliot had taught me.” Yet we often treat personal emotion as the final arbiter and decider of things. “Personal emotion” is very close to the word “feelings.” We should be wary of trusting those feelings; art enables us to escape from our own feelings into someone else’s conception of the world, if we allow it to. The study of art in many universities seemingly discourages this. Perhaps we ought to read more Eco.

I wonder if Coddling is going to end up being one of those important books no one reads.

It is also interesting to read Coddling in close proximity to Michael Pollan’s How to Change Your Mind: What the New Science of Psychedelics Teaches Us About Consciousness, Dying, Addiction, Depression, and Transcendence. Perhaps we need less iPhone and more magic mushrooms. I’d actually like to hear a conversation among Pollan, Haidt, and Lukianoff. The other day I was telling a friend about How to Change Your Mind, and he said that not only had he tried psychedelics in high school, but his experience cured or alleviated his stutter and helped him find his way in the world. The plural of anecdote is not data, but it’s hard to imagine safety culture approving of psychedelic experiences (despite their safety, which Pollan describes in detail).

In The Lord of the Rings, when Aragorn and his companions believe that Gandalf has perished in Moria, Gimli says that “Gandalf chose to come himself, and he was the first to be lost… his foresight failed him.” Aragorn replies, “The counsel of Gandalf was not founded on foreknowledge of safety, for himself or for others.” And neither is life: it is not founded on foreknowledge of safety. Adventure is necessary to become a whole person. Yet childhood and even universities are today increasingly obsessed with safety, to the detriment of the development of children and students. In my experience, military veterans returning to college are among the most interesting and diligent students. We seem to have forgotten Gandalf’s lessons. One advantage of reading old books may be encountering the forgotten cultural assumptions beneath them; in The Lord of the Rings, risk is necessary for reward, and the quality of a life does not depend on the elimination of challenge.

Here’s a good critical review.

“Oh, the Humanities!”

It’s pretty rare for a blog post, even one like “Mea culpa: there *is* a crisis in the humanities,” to inspire a New York Times op-ed, but here we have “Oh, the Humanities! New data on college majors confirms an old trend. Technocracy is crushing the life out of humanism.” It’s an excellent essay. Having spent a long time working in the humanities (a weird phrase, if you think about it) and having written extensively about the problems with the humanities as currently practiced in academia, I naturally have some thoughts.

Douthat notes the decline in humanities majors and says, “this acceleration is no doubt partially driven by economic concerns.” That’s true. Then we get this interesting move:

In an Apollonian culture, eager for “Useful Knowledge” and technical mastery and increasingly indifferent to memory and allergic to tradition, the poet and the novelist and the theologian struggle to find an official justification for their arts. And both the turn toward radical politics and the turn toward high theory are attempts by humanists in the academy to supply that justification — to rebrand the humanities as the seat of social justice and a font of political reform, or to assume a pseudoscientific mantle that lets academics claim to be interrogating literature with the rigor and precision of a lab tech doing dissection.

There is likely some truth here too. In this reading, the humanities have turned from traditional religious feeling and redirected the religious impulse in a political direction.

Douthat has some ideas about how to improve:

First, a return of serious academic interest in the possible (I would say likely) truth of religious claims. Second, a regained sense of history as a repository of wisdom and example rather than just a litany of crimes and wrongthink. Finally, a cultural recoil from the tyranny of the digital and the virtual and the Very Online, today’s version of the technocratic, technological, potentially totalitarian Machine that Jacobs’s Christian humanists opposed.

I think number two is particularly useful, number three is reasonable, and number one is fine but somewhat unlikely and not terribly congruent with my own inclinations. But I also think that the biggest problem with the humanities as currently practiced is the turn away from disinterested inquiry about what is true, what is valuable, what is beautiful, what is worth remembering, what should be made, etc., and toward politics, activism, and taking sides in current political debates—especially when those debates are preoccupied with stratifying groups of people by demographic characteristics, then assigning values to those groups.

That said, I’m not the first person to say as much and to have zero impact. Major structural forces stand in the way of reform. The current grad-school-to-tenure structure kills most serious, divergent thinking and encourages a groupthink monoculture. Higher-ed growth peaked around 1975; not surprisingly, the current “culture wars” or “theory wars” or whatever you want to call them got going in earnest in the 1980s, when there was little job growth among humanities academics. And they’ve been going, in various ways, ever since.

Before the 1980s, most people who got PhDs in the humanities eventually got jobs of some kind or other. This meant heterodox thinkers could show up, snag a foothold somewhere, and change the culture of the academic humanities. People like Camille Paglia or Harold Bloom or even Paul de Man (not my favorite writer) all have this quality. But since the 1980s, the number of jobs has shrunk, grad school has lengthened, and heterodox thinkers have (mostly) been pushed out. Interesting writers like Jonathan Gottschall work as adjuncts, if they work at all.

Today, the jobs situation is arguably worse than ever: I can’t find the report offhand, but the Modern Language Association tracks posted tenure-track jobs, and those declined from about a thousand a year before 2008 to about 300–400 per year now.

Current humanities profs hire new humanities profs who already agree with them, politically speaking. Current tenured profs grant tenure to new profs who already agree. This dynamic wasn’t nearly as strong when pretty much everyone got a job, even those who advocated for weird new ideas that eventually became the norm. That process is dead. Eliminating tenure might help the situation some, but any effort to eliminate tenure as a practice will be deeply opposed by the powerful people who benefit from it.

So I’m not incredibly optimistic about a return to reason among humanities academics. Barring that return to reason, a lot of smart students are going to look at humanities classes and the people teaching them, then decide to go major in economics (I thought about majoring in econ).

I remember taking a literary theory class as an undergrad and wondering how otherwise seemingly smart people could take some of that terrible writing and thinking seriously. Still, I was interested in reading and fiction, so I ignored the worst parts of what I read (Foucault, Judith Butler—those kinds of people) and kept on going, even into grad school. I liked to read and still do. I’d started writing novels (bad ones, at the time). I didn’t realize the extent to which novels like Richard Russo’s Straight Man and Francine Prose’s Blue Angel are awfully close to nonfiction.

By now, the smartest people avoid most humanities subjects as undergrads and then as grad students, or potential grad students. Not all of the smartest people, but most of them. And that avoidance leaves behind people who don’t know any better or who are willing to repeat the endless and tedious postmodernist mantras like initiates into a cult (and there is the connection to Douthat, who’d like us to acknowledge the religious impulse more than most of us now do). Some of them are excellent sheep: a phrase from William Deresiewicz that he applies to students at elite schools but that might also be applied to many humanities grad students.

MFA programs, last time I checked, are still doing pretty well, and that’s probably because they’re somewhat tethered to the real world and the desire to write things other humans might want to read. That desire seems to have disappeared in most of humanistic academia. Leaving the obvious question: “Why bother?” And that is the question I can no longer answer.

Postmodernisms: What does *that* mean?

In response to “What’s so dangerous about Jordan Peterson?”, there have been a bunch of discussions about what “postmodernism” means (“He believes that the insistence on the use of gender-neutral pronouns is rooted in postmodernism, which he sees as thinly disguised Marxism.”). By now, postmodernism has become so vague and broad that it means almost anything—which is of course another way of saying “nothing”—so the plural is in the title for a reason. In my view, most people claiming the mantle of big broad labels like “Marxist,” “Christian,” “Socialist,” or “Democrat” are trying to signal something about themselves and their identity much more than they’re trying to understand the nuances of what those positions might mean or what ideas and policies really underlie the labels. So for the most part, when I see someone talking or writing about postmodernism, I say, “Oh, that’s nice,” then move on to something more interesting and immediate.

But if one is going to attempt to describe postmodernism, and how it relates to Marxism, I’d start by observing that old-school Marxists don’t believe much of the linguistic stuff that postmodernists sometimes say they believe—about how everything reduces to “language” or “discourse”—but I think that the number of people who are “Marxists” in the sense that Marx or Lenin would recognize is tiny, even in academia.

I think what’s actually happening is this: people have an underlying set of models or moral codes and then grab some labels to fit on top of those codes. So the labels fit, or try to fit, the underlying morality and beliefs. People in contemporary academia might be particularly drawn to a version of strident moralism in the form of “postmodernism” or “Marxism” because they don’t have much else—no religion, not much influence, no money, so what’s left? A moral superiority that gets wrapped up in words like “postmodernism.” So postmodernism isn’t so much a thing as a mode or a kind of moral signal, and that in turn is tied into the self-conception of people in academia.

You may be wondering why academia is being dragged into this. Stories about what “postmodernism” means are bound up in academia, where ideas about postmodernism still simmer. In humanities grad school, most grad students make no money, as previously mentioned, and don’t expect to get academic jobs when they’re done. Among those who do graduate, most won’t get jobs. Those who do, probably won’t get tenure. And even those who get tenure will often get it for writing a book that will sell two hundred copies to university libraries and then disappear without a trace. So… why are they doing what they do?

At the same time, humanities grad students and profs don’t even have God to console them, as many religious figures do. So some of the crazier stuff emanating from humanities grad students might be a misplaced need for God or purpose. I’ve never seen the situation discussed in those terms, but as I look at the behavior I saw in grad school and the stories emerging from humanities departments, I think that a central absence better explains many problems than most “logical” explanations. And then “postmodernism” is the label that gets applied to this suite of what amount to beliefs. And that, in turn, is what Jordan Peterson is talking about. If you are (wisely) not following trends in the academic humanities, Peterson’s tweet on the subject probably makes no sense.

Most of us need something to believe in—and the need to believe may be more potent in smarter or more intellectual people. In the absence of God, we very rarely get “nothing.” Instead, we get something else, and we should take care about what that “something” is. The sense of the sacred is still powerful within humanities departments, but what counts as sacred has shifted, to their detriment and to the detriment of society as a whole.

(I wrote here about the term “deconstructionism,” which has a set of problems similar to “postmodernism,” so much of what I write there also applies here.)

Evaluating things along power lines, as many postmodernists and Marxists seek to do, isn’t always a bad idea, of course, but there are many other dimensions along which one can evaluate art, social situations, politics, etc. So the relentless focus on “power” becomes tedious and reductive after a while: one always knows what the speaker is likely to say, unless of course the speaker is the one in a position of power and thus becomes the obvious target of the same critique (e.g., it seems obvious that many tenured professors are in positions of relatively high power, especially compared to grad students; that’s part of what makes the Lindsay Shepherd story compelling).

This brand of postmodernism tends to infantilize groups and individuals (they’re all victims!) or to lead to races to the bottom and the development of victimhood culture. But these pathologies are rarely acknowledged by postmodernism’s defenders.

Has postmodernism led to absurdities like the one at Evergreen State, which led to huge enrollment drops? Maybe. I’ve seen the argument and, on even days, buy it.

I read a good tweet summarizing the basic problem:

When postmodern types say that truth-claims are rhetoric and that attempts to provide evidence are but moves in a power-game—believe them! They are trying to tell you that this is how they operate in discussions. They are confessing that they cannot imagine doing otherwise.

If everything is just “rhetoric” or “power” or “language,” there is no real way to judge anything. Along a related axis, see “Dear Humanities Profs: We Are the Problem.” Essays like it seem to appear about once a year or so. That they seem to change so little is discouraging.

So what does postmodernism mean? Pretty much whatever you want it to mean, whether you love it for whatever reason or hate it for whatever reason. Which is part of the reason you’ll very rarely see the word used on this site: it’s too unspecific to be useful, so I shade towards words with greater utility that haven’t been killed, or at least rendered comatose, through over-use. There’s a reason why most smart people eschew talking about postmodernism or deconstructionism or similar terms: they operate at a not-very-useful level of abstraction, unless one is primarily trying to signal tribal affiliation, and signaling tribal affiliation isn’t a very interesting mode of discussion.

If you’ve read to the bottom of this, congratulations! I can’t imagine many people are terribly interested in this subject; it seems that most people read a bit about it, realize that many academics in the humanities are crazy, and go do something more useful. It’s hard to explain this stuff in plain language because it often doesn’t mean much of anything, and explaining why that’s so takes a lot of words.

What happened to the academic novel?

In “The Joke’s Over: How academic satire died,” Andrew Kay asks: what happened to the academic novel? He proffers some excellent theories, including: “the precipitate decline of English departments, their tumble from being the academy’s House Lannister 25 years ago — a dignified dynasty — to its House Greyjoy, a frozen island outpost. [. . .] academic satires almost invariably took place in English departments.” That seems plausible, and it’s also of obvious importance that writers tend to inhabit English departments, not biology departments; novels are more likely to come from novelists and people who study novels than from people who study DNA.

But Kay goes on to note that tenure-track jobs disappeared, which made making fun of academics less funny because their situation became serious. I don’t think that’s it, though: tenure-track jobs began declining enormously around 1975, yet academic satires kept appearing regularly after that.

But:

When English declined, though, academic satire dwindled with it. Much of the clout that English departments had once enjoyed migrated to disciplines like engineering, computer science, and (that holiest of holies!) neuroscience. (Did we actually have a March for Science last April, or was that satire?) Poetry got bartered for TED talks, Wordsworth and Auden for that new high priest of cultural wisdom, the cocksure white guy in bad jeans and a headset holding forth on “innovation” and “biotech.”

And I think this makes sense: much of what English departments began producing in the 1980s and 1990s is nonsense that almost no one takes seriously—even the people who produce it—and it’s hard to satirize total nonsense:

Most satire relies on hyperbole: The satirist holds a ludicrously distorted mirror up to reality, exaggerating the flaws of individuals and systems and so (ideally) shocking them into reform. But what happens when reality outpaces satire, or at least grows so outlandish that a would-be jester has to sprint just to keep up?

What English departments are doing is mostly unimportant, so larger cultural attention focuses on TED talks or edge.org or any number of other venues and disciplines. Debating economics is more interesting than debating deconstructionism (or whatever) because the outcome of the debate matters. In grad school I heard entirely too many people announce that there is no such thing as reality, then go off to lunch (which seemed a lot like reality to me, but I was a bit of a grad-school misfit).

A couple of years ago I wrote “What happened with Deconstruction? And why is there so much bad writing in academia?”, which attempts to explain some of the ways that academia came to be infested by nonsense. Smart people today might gaze at what’s going on in English (and many other humanities) departments, laugh, and move on to more important issues—to the extent they bother gazing over at all. If the Lilliputians want to chase each other around with rhetorical sticks, let them; the rest of us have things to do.

Decades of academic satire have produced few if any changes. The problems Blue Angel and Straight Man identified remain and are if anything worse. No one in English departments has anything to lose, intellectually speaking; the sense of perspective departed a long time ago. At some point, would-be reformers wander off and deal with more interesting topics. English-department members, meanwhile, can’t figure out why they can’t get more undergrads to major in English or more tenure-track hires. They could start by looking in the mirror, but it’s easier and more fun to blame outsiders than it is to look within.

Back when I was writing a dissertation on academic novels, a question kept creeping up on me, like a serial killer in a horror novel: “Who cares?” I couldn’t find a good answer to that question—at least, not one that most people in the academic humanities seemed to accept. It seems that I’m not alone. Over time, people vote with their feet, or, in this case, attention. If no one wants to pay attention to English departments, maybe that should tell us something.

Nah. What am I saying? It’s them, not us.

The Case Against Education — Bryan Caplan

The Case Against Education is a brilliant book that you should read, though you’ll probably reject its conclusions without really considering them. That’s because, as Caplan argues, most of us are prone to “social desirability bias”: we want to say things that are popular and make people feel good, whether or not they’re true. Some true things may be socially desirable—but many false things may be too; the phrase “don’t shoot the messenger” exists for a reason, as does the myth of Cassandra. We like to create scapegoats, and messengers are handy scapegoats. Simultaneously, we don’t like to take responsibility for our own ideas, and we like to collectively punish iconoclasts (at first, at least: later they may become idols, but first they must be castigated).

Caplan is an iconoclast but a data-driven one, and that’s part of what makes him unusual and special. And, to be sure, I myself am prone to the biases Caplan notes. Yet, as I read The Case Against Education, I couldn’t find many holes to poke in the argument. The book blends data with observation and anecdote well, and it also fits disturbingly well with my own teaching experience. For example, Caplan notes that students find school boring and stultifying: “Despite teachers’ best efforts, most youths find high culture boring—and few change their minds in adulthood.” While “school is boring” seems obvious to most people, it’s also worth asking why. Many of the reasons Caplan gives are fine, but I’ll add that “interesting” is often also “controversial,” and many controversial, interesting instructors will take heat, as I argue in “Ninety-five percent of people are fine — but it’s that last five percent”:

Almost no teacher gets in trouble for being boring, but a teacher can get in trouble for being many values of “interesting.” Even I’ve had that problem, and I’m not sure I’m that interesting an instructor, and I teach college students.

It’s easy for outsiders to say that teachers should stand up to the vocal, unhappy minority. But it’s less easy to do that when a teacher relies on their job for rent and health insurance. It’s also less easy when the teacher worries about what administrators and principals will do and what could happen if the media gets involved or if the teacher gets demonized.

Despite the fact that no one actively wants school to be boring, the collection of forces operating on the school experience pushes it towards boredom. Many people, for example, are very interested in sex and drugs, but those topics also excite many students and parents, such that it’s difficult to say much that’s true about them in school.

As Caplan says, however, boredom is almost a feature, not a bug. Boring classes allow students to signal traits that employers value, like conscientiousness, intelligence, and conformity. Even if reading Ethan Frome is boring, being willing to tolerate Ethan Frome is important to people who would not themselves read Ethan Frome.

Caplan argues that most education is actually about signaling, not skill development. It’s notable how little we as a society have improved education in the last two decades, even though the Internet has opened up many new learning and signaling opportunities. Caplan has a theory about why: using weird counter-signaling methods itself signals non-conformity and general weirdness (“‘alternative’ signals of conformity signal nonconformity”). So we’re stuck in a negative equilibrium.

He might be right. That said, I wonder if we’re just seeing a lag: twenty years is a long time by some standards, but in the history of education it’s a relatively short time. The problems with contemporary education also seem to argue that many employers would be well served to ignore the signals sent by degrees and search for alternate signals instead. Google claims to be doing this, but I don’t know of any researchers who’ve audited or studied Google’s internal data (if you do, please leave a pointer in the comments).

The people who most need to read this book are probably educators and high school students. The former probably won’t read it because it punctures some of the powerful myths and beliefs that keep them motivated. The latter probably won’t read it because high school students read very few books, and the ones most likely to read The Case Against Education are probably also likely to gain the most from higher education. So it’s another of these books that’s caught in a readerly catch-22.

Here is a Claudia Goldin and Lawrence Katz paper, “The Race between Education and Technology: The Evolution of U.S. Educational Wage Differentials, 1890 to 2005”; as one person said on Twitter, “I agree with @bryan_caplan that the wage premium from education mainly comes from signaling, rather than learning vocational skills. But – I also believe widespread, generalist, higher ed can be a very good thing (as explained in [‘The Race Between…’]).”

I also wonder about this: “employers throughout the economy defer to teachers’ opinions when they decide whom to interview, whom to hire, and how much to pay them.” Do they? Do most employers require transcripts and then actively use those transcripts? It seems that many do look for degrees but don’t look for grades.

One question, too, is why more people don’t go into various forms of consulting; smaller firms are less likely to be interested in credentials than larger ones. I do grant writing for nonprofits, public agencies, and some research-based businesses. Zero clients have asked about educational credentials (well, a few public agencies have superficial processes that ask about them, but the decision-makers don’t seem to care). Clients are much more interested in our experience and the skills demonstrated by our website and client list than they are in credentials. And when we’ve hired various people, like website programmers or graphic designers, we’ve never asked about education either, because we don’t care—we care if they can get the job done. In restaurants, I’ve never stopped a server or hostess to ask if the chef went to cooking school. So smaller firms may offer some respite from degree madness; if there is a market opportunity for avoiding expensive college and the credentials race (for individuals), it might be there.

Yet at the same time, I feel (perhaps wrongly) that school did help me become a better writer. “Feel” is a dangerous word—it’s hard to dispute feelings but easy to dispute data—yet I don’t know how else to describe it. When I read other people’s writing, especially other people’s proposals, I often think, “This helps explain why I have the job I do.” It’s possible to get through college and learn very little about writing. Occasionally managers will learn that I teach writing and say, “Why can’t college graduates write effectively?” An excellent question and one that requires 10,000 words of answer or no answer at all. But the alternative—not taking any writing classes—often seems worse.

Caplan also conducts many fascinating thought experiments of sorts, although “contextualizes common practices and ideas” may be more accurate:

The human capital model doesn’t just imply all cheaters are wasting their time. It also implies all educators who try to prevent cheating are wasting their time. All exams might as well be take-home. No one needs to proctor tests or call time. No one needs to punish plagiarism—or Google random sentences to detect it. Learners get job skills and financial rewards. Fakers get poetic justice.

Signaling, in contrast, explains why cheating pays—and why schools are wise to combat it. In the signaling model, employers reward workers for the skills they think those workers possess. Cheating tricks employers into thinking you’re a better worker than you really are. The trick pays because unless everyone cheats all the time, students with better records are, on average, better workers.

Makes sense to me. I sometimes tell students that, if they manage to get through college without learning how to read and write effectively, no one comes back to ask me why. No college offers partial refunds to the unemployable who nonetheless graduate. The signal is the signal.

Many of you will also not like The Case Against Education because it is thorough. Caplan goes through his arguments, then many rebuttals, then rebuttals to the rebuttals. If you want a book that only goes one or two layers deep, this is the wrong one for you, and you should stick to the Internet.

Many books also fail to convincingly answer the question, “What should we do about the problem identified?” Caplan doesn’t. He argues that public spending on education (or “education,” since so much of what seems like education should be called signaling) should be eliminated altogether, while simultaneously acknowledging that this is only slightly more likely than someone jumping to the moon.

Caplan fulfills many of the conditions of myth, but probably not enough people will read this book to truly hate him. Which is a pity: as I said in the first line, the book is brilliant. But people in thrall to social desirability bias will reject it, if they consider it at all. And the education machine will press on, a monstrous juice press squeezing every orange that enters its maw. Once I was the orange; now I am the press.

One other answer to “What does education do?” may be “keep options open” and “provide a base from which to build later.” Without some writing and numeracy skills, it’ll be hard to enter many careers; while school may do a lousy job of building those skills (as Caplan demonstrates), if the alternative to school is nothing (i.e., Netflix, hanging out, and partying), school may be the better option.

As for optionality, I think of my friends, many of them artistically inclined, who got to their mid-to-late 20s and around that time got tired of working marginal jobs, struggling to pay rent, working in coffee shops, and crashing on friends’ couches. Things that seem glamorous at age 20 often seem depressing five or ten years later. Many of them have gone back to school of various kinds to get programming or healthcare jobs. In the former case, math is important; in the latter, biology and some other science knowledge is. Those who blew off math or bio in high school or college struggle more in those occupations. So maybe education is about keeping at least some options open—or more options than would be open to someone who quits school or begins vocational ed in 8th grade.

Finally, education might be an elite phenomenon. We educate everyone, or, more realistically, attempt to educate everyone, in order to get a relatively small number of elite people into position to drive the entire culture forward. The people at the pinnacle of the scientific, technical, artistic, and social elites got there in part because they had access to education that was good enough to get them into the elite spheres where it’s possible to make a real difference.

I’m not sure I’m in those elite spheres, but I may be close, and at age 15 I probably didn’t look like such a good bet. Yet my education continued, and here I am, engaging in the kinds of conversations that could move the culture forward. If I’d been tracked differently at age 15, that might not have happened. Yes, the process is horrendously wasteful, but it’s useful to give many people a shot, even if most people go nowhere.

To be sure, I buy Caplan’s argument, but I’ve not seen this angle pursued by others, and it at least seems plausible. I also don’t know how one would measure the “education as elite phenomenon” argument, which is another weakness of my own point.

Still, I’ve become more of an elitist because of my involvement in the educational system, which has shown me that most students are in fact bored and don’t give a damn. When I started grad school, I thought I could help students become more engaged by changing the nature of the short journal assignments: instead of just writing for me, students would start blogs that their classmates would read and comment on. Education would become more peer-driven and collaborative. The material would seem relevant. Right?

After a semester or two of reactions that ranged from indifference at best to massive hostility at worst, I stopped and went back to the usual form of short written responses, printed and handed in. That was easier on me and on the students, and it still at least exposed students to the idea of writing regularly. A few may have continued the practice. Most probably didn’t (and don’t). I learned a lot, maybe more than the students did, and I also learned that I’m a weirdo for my (extreme) interests in writing and language—but my own time in the education system and my own friend set had to some extent hidden that from me. Now, however, it’s so apparent that I wonder what 24-year-old me was thinking.

Caplan helps explain what I was thinking; many people who go into various kinds of teaching are probably optimists who themselves like school. They’re selected for being, in many cases, passionate weirdos. Personally, I like passionate weirdos and misfits and the people who don’t fit well into the school system (I’ve been all three). But I seem to be unusual in that respect too, though I wasn’t so weird that I couldn’t fit into the convention-making machine. A good thing, too—as Caplan notes, it’s individually rational to pursue educational credentials, even if the mass pursuit of those credentials may not be so good for society as a whole. Correlation is not causation, as you no doubt learned from your statistics classes and still understand today.

Here is a good critical review, not wholly convincing in my view, but worth thinking about.
