“University presidents: We’ve been blindsided.” Er, no.

“University presidents: We’ve been blindsided” is an amazing article—if the narrative it presents is true. It’s amazing because people have been complaining about political correctness and nothing-means-anything postmodernism since at least the early ’90s, yet the problems with reality and identity politics seem to have intensified in the Internet age. University presidents haven’t been blindsided, and some of the problems in universities aren’t directly their fault—but perhaps their biggest failure, with some notable exceptions (like the University of Chicago), is not standing up for free speech.

I don’t see how anyone could have failed to see this coming; the right’s attack on academia has its roots in the kind of scorn and disdain I write about in “The right really was coming after college next.” As I say there, I’ve been hearing enormous, overly broad slams against the right for as long as I’ve been involved in higher education. That sort of thing has gone basically unchecked for I don’t know how long. It would be surprising if no backlash eventually came, and institutions that don’t police themselves eventually get policed, or at least attacked, from the outside.

(Since such observations tend to generate calls of “partisanship,” I’ll again note that I’m not on the right and am worried about intellectual honesty.)

There is this:

“It’s not enough anymore to just say, ‘trust us,'” Yale President Peter Salovey said. “There is an attempt to build a narrative of colleges and universities as out of touch and not politically diverse, and I think … we have a responsibility to counter that — both in actions and in how we present ourselves.”

That’s because universities are not politically diverse. At all. Heterodox Academy has been writing about this since it was founded. Political monocultures may in turn encourage restrictions on freedom of speech, especially against the other guy, who isn’t even around to make a case. For example, some of you may have been following the Wilfrid Laurier University brouhaha (if not, “Why Wilfrid Laurier University’s president apologized to Lindsay Shepherd” is an okay place to start, though the school is in Canada, not the United States). Shepherd’s department wrote a reply, “An open letter from members of the Communication Studies Department, Wilfrid Laurier University” that says, “Public debates about freedom of expression, while valuable, can have a silencing effect on the free speech of other members of the public.” In other words, academics who are supposed to support free speech and disinterested inquiry don’t. And they get to decide what counts as free speech.

If academics don’t support free speech, they’re just another interest group, subject to the same social and political forces that all interest groups are subject to. I don’t think the department that somehow thought this letter a good idea realizes as much.

“Trust us” doesn’t seem to be good enough anymore. In the U.S., the last decade of anti-free-speech and left-wing activism on campus has brought us a Congress that is in some ways more retrograde than any since… I’m not sure when. Maybe the ’90s. Maybe earlier. Yet the response on campus has been to shrug and worry about pronouns.

Rather than “touting their positive impacts on their communities to local civic groups, lawmakers and alumni,” universities need to re-commit to free speech, open and disinterested inquiry, and not prima facie opposing an entire, large political group. Sure, “Some presidents said they blame themselves for failing to communicate the good they do for society — educating young people, finding cures for diseases and often acting as major job creators.” But, again, universities exist to learn what’s true, as best one can, and then explain why it’s true.

Then there’s this:

But there was also an element of defensiveness. Many argue the backlash they’ve faced is part of a larger societal rethinking of major institutions, and that they’re victims of a political cynicism that isn’t necessarily related to their actions. University of Washington President Ana Mari Cauce, for one, compared public attitudes toward universities with distrust of Congress, the legal system, the voting system and the presidency.

While universities do a lot right, they (or some of their members) are also engaging in dangerous epistemic nihilism that’s contrary to their missions. And people are catching on to that. Every time one sees a fracas like the one at Evergreen State College, universities as a whole lose a little of their prestige. And the response of many administrators hasn’t been good.

Meanwhile, the incredible Title IX stories don’t help (or see Laura Kipnis’s story). One can argue that these are isolated cases. But are they? With each story, and the inept institutional response to it, universities look worse and so do their presidents. University presidents aren’t reaffirming the principles of free speech and disinterested research, and they’re letting bureaucrats create preposterous and absurd tribunals. Then they’re saying they’ve been blindsided! A better question might be, “How could you not see a reckoning coming?”

“The right really was coming after college next”

Excuse the awkward headline and focus on the content in “The right really was coming after college next.” Relatively few people point out that college has been coming after the right for a very long time; sometimes college correctly comes after the right (e.g. Iraq War II), but the coming after is usually indiscriminate. I’ve spent my entire adult life hearing professors say that Republicans are stupid or people who vote for Romney or whoever are stupid. Perhaps we ought not to be surprised when the right eventually hits back?

A few have noticed that “Elite colleges are making it easy for conservatives to dislike them.” A few have also noticed that we ought to be working towards greater civility and respect, especially regarding ideological disagreement; that’s one purpose of Jonathan Haidt’s Heterodox Academy. Still, on the ground and on a day-to-day level, the academic vituperation towards the right in the humanities and most social sciences (excluding economics) has been so obvious and so clear that I’m surprised it’s taken this long for a backlash.

Because I’m already imagining the assumptions in the comments and on Twitter, let me note that I’m not arguing this from the right—I find that I’m on the side of neither the right nor the left, in part because neither the right nor the left is on my side—but I am arguing this as someone who cares about freedom of speech and freedom of thought, which have never been free and have often been unpopular. It’s important to work towards understanding before judgment or condemnation, even though that principle too has likely never been popular or widely adopted.

It seems to me that homogeneous, lockstep thought is dangerous wherever it occurs, and increasingly it appears to be occurring in large parts of colleges. One hopes that the colleges notice this and try to self-correct. Self-correction will likely be more pleasant than whatever political solution might be devised in statehouses.


On Las Vegas, briefly

In 2012 James Fallows wrote, “The Certainty of More Shootings.” As of October 2, this mass-shooting database lists 273 mass shootings in 2017. The policy response to mass shootings has been indistinguishable from zero. After the Sandy Hook shooting, pundits observed that if we’re willing to tolerate the massacre of small children, we’re basically willing to tolerate anything. They seem to have been right. Now at least 50 are dead in Las Vegas.

It’s easy to blame “politicians” but politicians respond to voters. I fear that “The Certainty of More Shootings” is going to remain distressingly relevant for years, maybe decades, to come. I bet Fallows wishes that it could be relegated to a historical curiosity.

Even The Onion has a perennial for gun massacres: “‘No Way To Prevent This,’ Says Only Nation Where This Regularly Happens.”

If I were a camera company I’d be nervous

I’d be nervous because phone makers and especially Apple are iterating so fast on hardware and software that nearly everyone is going to end up using phone cameras, with the exception of some dedicated pros and the most obsessive amateurs. Right now the media is saturated with articles like, “How Apple Built An iPhone Camera That Makes Everyone A Professional Photographer.” Many of those articles overstate the case—but not by much.

To be sure, phone camera sensors remain small, but Apple and Google are making up for size via software; in cameras, as in so many domains, software is eating the world. And the response so far from camera makers has been anemic.

If I were a camera maker, I’d be laser-focused on making Android the default camera OS and exposing APIs to software developers. Yet none seem to care.* It’s like none have learned Nokia’s lesson; Nokia was a famously huge cell phone maker that got killed by the transition to smartphones and never recovered. I wrote this about cameras in 2014 and it’s still true today. In the last three years camera makers have done almost nothing to improve their basic position, especially regarding software.

“Not learning Nokia’s lesson” is a very dangerous place to be. And I like the Panasonic G85 I have! It’s a nice camera. But it’s very large. I don’t always have it with me. Looking at phones like the iPhone X I find myself thinking, “Maybe my next camera won’t be a camera.”

Within a year or two most phone cameras are likely to have two lenses and image sensors, along with clever software to weave them together effectively. Already Apple is ahead of the camera makers in other ways; some of those remain beneath the notice of many reviewers. Apple, for example, is offering more advanced codecs, which probably doesn’t mean much to most users, but implementing H.265 video means that Apple can in effect halve the size of most videos. In a storage- and bandwidth-constrained environment, that’s a huge win (just try to shoot 4K video and see what I mean). Camera makers should be at the forefront of such transitions, but they’re not. Samsung’s cameras were out front here (they used H.265 in 2015), but no one else followed.
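To make the storage arithmetic concrete, here’s a back-of-the-envelope sketch in Python; the bitrates are my illustrative assumptions for 4K footage at roughly comparable quality, not Apple’s published figures:

```python
# Back-of-the-envelope storage math for H.264 vs. H.265 (HEVC).
# The bitrates below are illustrative assumptions for 4K/30fps footage
# at roughly comparable quality, not official figures from Apple.

BITRATES_MBPS = {"H.264": 50.0, "H.265": 25.0}  # assumed megabits/second

def gb_per_minute(mbps):
    """Convert a bitrate in megabits/second to gigabytes/minute."""
    return mbps * 60 / 8 / 1000  # x60 seconds, /8 bits per byte, /1000 MB per GB

for codec, mbps in BITRATES_MBPS.items():
    print(f"{codec}: {gb_per_minute(mbps):.2f} GB per minute of 4K video")
# H.264: 0.38 GB/minute; H.265: 0.19 GB/minute -- the "halving" in practice.
```

Halve the bitrate and you halve the gigabytes: under these assumptions, a 64 GB phone goes from roughly 170 minutes of 4K to roughly 340.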

Camera makers are going to be business-school case studies one day, if they aren’t already. They have one job—making the best cameras possible—and already Apple is doing things in a $1,000 smartphone (next year it will likely be $800) that camera makers aren’t doing in $2,000+ cameras.

That’s incredibly bad for camera makers but great for photographers. I may never buy another standalone camera because if phones do pictures and videos better, why bother?


* With the exception of Samsung, which had a brief foray into the camera world but then quit—probably due to a declining market and low margins. And Thom Hogan has been beating the Android drum for years, for good reason, and it appears that no decision makers are listening.

‘Maybe in 50 years there won’t be novels’

“Claire Messud: ‘Maybe in 50 years there won’t be novels’: As her fifth novel is published, the American writer warns that shrinking attention spans could prove the death of long fiction” makes an interesting point that is definitely plausible and may also be correct. Still, while average attention spans may be shrinking, elite attention spans may be as long as they ever were—they have to be to do good work. The people who make Twitter, Facebook, and Snapchat need intense concentration to do the work they do (everyone ought to at least attempt a programming class, if for no other reason than to understand the kind of mental effort it entails). If we’re going to keep the lights on, the Internet working at all, and the world running, we need to be able to concentrate long enough to really understand a topic deeply.

As fewer people can do this, the value of doing it rises. My own work as a grant writer depends on concentration; part of the reason we have a business is because most people can’t concentrate long enough to learn to write well and then apply that learning to grant applications. As I wrote in 2012, “Grant writing is long-form, not fragmentary.” Cal Newport makes a similar point, although not about grant writing, in Deep Work.

The contemporary tension between an attention-addled majority and a deep-working minority fuels Neal Stephenson’s novel Anathem. It’s not the most readable of novels because the made-up vocabulary of the future is so grating. The idea is a reasonable one (our present vocabulary is different from the past’s vocabulary, so won’t the same be true of the future?), but the novel also shows the technical problems that attempting to implement that idea entails. I wonder if Messud has read Anathem.

Anyway, to return to Messud, I suspect this is true: “That we can’t fathom other people, or ourselves, is the engine of fiction,” and as long as it remains true there will be an appetite for novels among at least some people.

By the way, I’ve started a couple of Messud’s books and never cottoned to them. Maybe the flaw is mine.

Statistical analyses of literature: let’s see what happens

I got some pushback to the link on what heretical things statistics can tell us about fiction, and I’ve read pushback like it before: the objections tend to say that great literature can’t be reduced to statistics; big data will never replicate the reading experience; a novel is more than the sum of the words chosen. That sort of thing. All of which is likely true, but the more interesting question is, “What kinds of things is nobody doing in the study of fiction?” (Or of words, or sentences, or writers’ oeuvres.) Lots and lots of people, including me, closely study individual works and connect them to a smallish body of other works and ideas.

Over centuries, if not longer, thousands, if not millions, of people have engaged in this practice. Not very many people have attempted to systematically examine thousands if not millions of works simultaneously. So that may tell us something the usual methods haven’t. It’s worth exploring that domain. And exploring that domain doesn’t close off the more usual paths of close reading.
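To make “systematically examine thousands of works” concrete, here’s a minimal sketch of one corpus-scale statistic, vocabulary richness per novel, in Python. The directory of plain-text files is an assumption for illustration, not anyone’s actual dataset:

```python
# A minimal sketch of a corpus-scale statistic: vocabulary richness
# (type-token ratio) for every novel in a directory of plain-text files.
# The "novels/" layout is an assumption for illustration.
import re
from pathlib import Path

def type_token_ratio(text):
    """Distinct words divided by total words: a crude richness measure."""
    words = re.findall(r"[a-z']+", text.lower())
    return len(set(words)) / len(words) if words else 0.0

scores = {path.name: type_token_ratio(path.read_text(errors="ignore"))
          for path in Path("novels").glob("*.txt")}

# Rank the whole corpus at once, the kind of comparison close reading
# of individual works can't make at scale.
for name, ttr in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{ttr:.3f}  {name}")
```

Nothing here replaces close reading; it just answers a question (how does vocabulary richness vary across a whole shelf?) that close reading can’t pose at scale.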

In other words, don’t think that an argument along the lines of “x is interesting” means “we should always and only do x.”

At the moment, we also appear to be at the very start of the field. Maybe it’ll become extremely important and maybe it won’t. The potential is there. People have (arguably) been doing some form of close reading and analysis, even if the practice didn’t use those specific words, for millennia. Certainly for centuries. So I’d be pretty surprised if statistical analyses produced whatever good material they’re going to produce within just a decade or two.

Part of what art and analysis should do is be novel. Another part is “be interesting.” We’re looking for the intersection of those two zones.

Lost technologies, Seveneves, and The Secret of Our Success

Spoilers ahead, but if you haven’t read Seveneves by now they probably don’t matter.

Seveneves is an unusual and great novel, and it’s great as long as you attribute some of its less plausible elements to an author building a world. One such element is the way humanity comes together and keeps the social, political, and economic systems functional enough to launch large numbers of spacecraft in the face of imminent collective death. If we collectively had two years to live, I suspect total breakdown would follow, leaving us with no Cloud Ark (and no story—thus we go along with the premise).

But that’s not the main thing I want to write about. Instead, consider the loss of knowledge that inherently comes with population decline. In Seveneves humanity declines to seven women living in space on a massive iron remnant of the moon. They slowly repopulate, with their descendants living in space for five thousand years. But a population of seven would probably not be able to retain and transmit the specialized knowledge necessary for survival on most parts of Earth, let alone space.

That isn’t a speculative claim. We have pretty good evidence for the way small populations lose knowledge. Something drew me to re-reading Joseph Henrich’s excellent book The Secret of Our Success, and maybe the sections about technological loss are part of it. He writes about many examples of European explorers getting lost and dying in relatively fecund environments because they don’t have the local knowledge and customs necessary to survive. He writes about indigenous groups too, including the Polar Inuit, who “live in an isolated region of northwestern Greenland [. . . .] They are the northernmost human population that has ever existed” (211). But

Sometime in the 1820s an epidemic hit this population and selectively killed off many of its oldest and most knowledgeable members. With the sudden disappearance of the know-how carried by these individuals, the group collectively lost its ability to make some of its most crucial and complex tools, including leisters, bows and arrows, the heat-trapping long entry ways for snow houses, and most important, kayaks.

As a result, “The population declined until 1862, when another group of Inuit from around Baffin Island ran across them while traveling along the Greenland coast. The subsequent cultural reconnection led the Polar Inuit to rapidly reacquire what they had lost.” Which is essential:

Though crucial to survival in the Arctic, the lost technologies were not things that the Polar Inuit could easily recreate. Even having seen these technologies in operation as children, and with their population crashing, neither the older generation nor an entirely new generation responded to Mother Necessity by devising kayaks, leisters, compound bows, or long tunnel entrances.

Innovation is hard and relatively rare. We’re all part of a network that transmits knowledge horizontally, from peer to peer, and vertically, from older person to younger person. Today, people in first-world countries are used to innovation because we’re part of a vast network of billions of people who are constantly learning from each other and transmitting the innovations that do arise. We’re used to seemingly automatic innovation, because so many people are working on so many problems. Unless we’re employed as researchers, we’re often not cognizant of how much effort goes into both discovery and then transmission.

Without that dense network of people, though, much of what we know would be lost. Maybe the best-known example of technology loss is the fall of the Roman Empire; an even older one is the way ancient Egyptians lost the know-how necessary to build pyramids and other epic engineering works.

In a Seveneves scenario, it’s highly unlikely that the novel’s protagonists would be able to sustain and transmit the knowledge necessary to live somewhere on Earth, let alone somewhere as hostile as space. Quick: how helpful would you be in designing and manufacturing microchips, solar panels, nuclear reactors, or oxygen systems, or in engineering plant biology? Yeah, me too. Those complex technologies have research, design, and manufacturing facets that are embodied in the heads of thousands if not millions of individuals. The level of specialization our society has achieved is incredible, but we rarely think about how incredible it really is.

This is not so much a criticism of the novel—I consider the fact that they do survive part of granting the author his due—but it is a contextualization of the novel’s ideas. The evidence that knowledge is fragile is more pervasive and available than I’d thought when I was younger. We like stories of individual agency, but in actuality we’re better conceived of as parts in a massive system. We can see our susceptibility to conspiracy theories as beliefs in the excessive power of the individual. In an essay from Distrust That Particular Flavor, William Gibson writes: “Conspiracy theories and the occult comfort us because they present models of the world that more easily make sense than the world itself, and, regardless of how dark or threatening, are inherently less frightening.” The world itself is big, densely interconnected, and our ability to change it is real but often smaller than we imagine.

Henrich writes:

Once individuals evolve to learn from one another with sufficient accuracy (fidelity), social groups of individuals develop what might be called collective brains. The power of these collective brains to develop increasingly effective tools and technologies, as well as other forms of nonmaterial culture (e.g., know-how), depends in part on the size of the group of individuals engaged and on their social connectedness. (212)

The Secret of Our Success also cites laboratory recreations of similar principles; those experiments are too long to describe here, but they are clever. If there are good critiques of the chapter and idea, I haven’t found them (and if you know any, let’s use our collective brain by posting links in the comments). Henrich emphasizes:

If a population suddenly shrinks or gets socially disconnected, it can actually lose adaptive cultural information, resulting in a loss of technical skills and the disappearance of complex technologies. [. . . ] A population’s size and social interconnectedness sets a maximum on the size of a group’s collective brain. (218-9)

That size cap means that small populations in space, even if they are composed of highly skilled and competent individuals, are unlikely to survive over generations. They are unlikely to survive even if they have the rest of humanity’s explicit knowledge recorded on disk. There is too much tacit knowledge for explicit knowledge in and of itself to be sufficient, as anyone who has ever tried to learn from a book and then from a good teacher knows. Someday we may be able to survive indefinitely in space, but today we’re far from that stage.
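To see the size cap in action, here’s a toy simulation in the spirit of Henrich’s collective-brain argument (my own simplification for illustration, not the book’s actual model): everyone tries to copy the group’s best practitioner, copying is lossy on average, and only the occasional copy surpasses the model being imitated. Bigger groups draw more lucky copies per generation.

```python
# A toy "collective brain" simulation (my simplification, not Henrich's
# actual model): each generation, every member tries to copy the most
# skilled individual. Copies lose 1.5 skill points on average (spread 1.0),
# so only the upper tail of copies improves on the imitated model.
import random

def next_generation(pop):
    best = max(pop)  # everyone imitates the current best practitioner
    return [best + random.gauss(-1.5, 1.0) for _ in pop]

def final_skill(pop_size, generations=200):
    pop = [0.0] * pop_size
    for _ in range(generations):
        pop = next_generation(pop)
    return max(pop)

random.seed(0)
for n in (5, 50, 500):
    print(f"population {n:>3}: skill after 200 generations = {final_skill(n):+.0f}")
```

With the loss and spread assumed above, small groups tend to drift toward losing the skill entirely while large ones ratchet upward; the qualitative pattern, not the particular numbers, is the point.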

Almost all post-apocalyptic novels face the small-population dilemma to some extent (I’d argue that Seveneves can be seen as a post-apocalyptic novel with a novel apocalypse). Think of the role played by the nuclear reactor in Stephen King’s The Stand: the characters in the immediate aftermath must decide if they’re going to live in the dark and regress to hunter-gatherer times, at best, or if they’re going to save and use the reactor to live in the light (the metaphoric implications are not hard to perceive here). In one of the earliest post-apocalyptic novels, Earth Abides, two generations after the disaster, descendants of technologically sophisticated people are reduced to using melted-down coins as tips for spears and arrows. In Threads, the movie (and my nominee for scariest movie ever made), the descendants of survivors of nuclear war lose most of their vocabulary and are reduced to what is by modern standards an impoverished language that is a sort of inadvertent 1984 newspeak.* Let’s hope we don’t find out what actually happens after nuclear war.

In short, kill enough neurons in the collective brain and the brain itself stops working. Which has happened before. And it could happen again.


* Check out the cars in Britain in Threads: they’re a reminder of how much technological progress and advancement we’ve had since.
