Skin in the Game – Nassim Taleb

Skin in the Game is congruent with Tom Ricks’ book The Generals. Almost all generals and high-ranking officers in the U.S. military are now exempt from real risk, as Ricks argues—they are exempt even from the risk of being fired or reassigned for simple incompetence (or being ill-suited to a role). Almost all enlisted men and junior officers, however, are heavily exposed to real risk, like being killed. That risk asymmetry should give pause to anyone contemplating joining. The risk profile for generals prior to the Korean War, while not as great as the risk profile for regular soldiers, was more reasonable than it is today. Military contractors are arguably the greatest beneficiaries of the military today. If more people knew (and acted like they knew) this, we might see changes.

In Skin in the Game Taleb has many, many unusual examples, many of them good; he reads more like an old-fashioned philosopher (that is: one who wants to be read, heard, and understood, as opposed to one who wants tenure), and I mean that as a compliment. One of his rules is, “No person in a transaction should have certainty about the outcomes while the other one has uncertainty.” I wonder how this rule could be applied to colleges, especially under a student-loan system, in which the college is certain to be paid by the student, the student’s family, or the student’s bank (which is really to say, the bank’s student), while the student may see a variable return on investment—especially if the student is ill-equipped in the first place. Colleges may be selling credentials more than skills. But almost no one thinks about those things in advance.

Skin in the Game will, like Antifragile, frustrate you if you demand that every single sentence be true and useful. Some of Taleb’s micro-examples are bad, like his thing against GMOs:

In my war with the Monsanto machine, the advocates of genetically modified organisms (transgenics) kept countering me with benefit analyses (which were often bogus and doctored up), not tail risk analyses for repeated exposures

This view is incoherent because virtually every food eaten today has been “genetically modified,” inefficiently, through selective breeding. If you wish to learn just how hard that process is, see The Wizard and the Prophet by Charles Mann. Transgenics speed the process. See this sad tale, and the links, for one researcher in the field who is giving up due to widespread opposition. He points out that, over and over again, transgenics have been shown to be safe.

Taleb is right that there are tail risks to transgenics… but that’s also theoretically true of traditional cross-breeding, and it’s also true of not engaging in transgenics. The alternative to high-efficiency transgenics is environmental degradation and, in many places, starvation. That’s pretty bad, and there’s a serious, usually unstated, environmental trade-off between signaling environmental caring and opposing transgenics (nuclear energy is the same).

Despite incorrect micro-examples, Skin in the Game is great and you should read it. It is less uneven than Antifragile. It’s also an excellent book to re-read (don’t expect to get everything the first time through) because Taleb gives so many examples and is overflowing with ideas.

Like: “If your private life conflicts with your intellectual opinion, it cancels your intellectual ideas, not your private life.” Something easily and frequently forgotten, or never considered in the first place. Look at what people do, not what they say. One of the many charming parts of Alain de Botton’s The Consolations of Philosophy is the apparently wide gap between what many philosophers wrote and how they appeared to live. Maybe the truest philosophers don’t write but do.

Or consider:

the highest form of virtue is unpopular. This does not mean that virtue is inherently unpopular, or correlates with unpopularity, only that unpopular acts signal some risk taking and genuine behavior.

A very Peter Thiel point: he asks what popular view is wrong and what unpopular views a given person holds.

Or consider:

The only definition of rationality that I’ve found that is practically, empirically, and mathematically rigorous is the following: what is rational is that which allows for survival.

This may be true, but most of us in the West now survive, unless we do something truly stupid, dangerous, or brave. So our wealth and comfort may enable us to be irrational, because we’re much less likely to pay the ultimate penalty than we once were. Darwin Awards aside, we mostly make it. We can worry more about terrorism than the much more immediate and likely specter of death in the form of the car, which kills far more people every year in the United States than terrorism.

To his credit, though, Taleb does write:

The Chernoff bound can be explained as follows. The probability that the number of people who drown in their bathtubs in the United States doubles next year [. . .] is one per several trillions lifetimes of the universe. This cannot be said about the doubling of the number of people killed by terrorism over the same period.

He’s right that the number who could be killed by terrorism is massive, especially given the risk of nuclear and biological weapons. But the disproportionate focus on terrorism takes too much attention from risks that seem mundane, like getting into cars. Almost everyone can expect to be in a car crash at some point. Perhaps we should be thinking more seriously about that. Too bad almost no one is.
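Taleb’s bathtub example can be made concrete with a quick sketch. The Poisson model and the mean of roughly 350 annual drownings below are my illustrative assumptions, not Taleb’s numbers, but the Chernoff bound itself shows why a “doubling” of independent, thin-tailed events is so wildly improbable:

```python
import math

# Illustrative assumption (not Taleb's figure): model annual U.S. bathtub
# drownings as Poisson-distributed with mean lam of about 350.
lam = 350.0
a = 2 * lam  # the "doubling next year" event

# Chernoff bound for a Poisson upper tail, valid for a > lam:
#   P(X >= a) <= exp(-lam) * (e * lam / a) ** a
# Computed in log space to avoid underflow:
log_bound = -lam + a * (1 + math.log(lam / a))

# log10 of the bound is about -58.7, i.e. the probability is below 1e-58.
print(f"log10 P(doubling) <= {log_bound / math.log(10):.1f}")
```

No comparable bound exists for terrorism deaths, which is exactly Taleb’s point: the bathtub count is a sum of many independent small risks, while a single attack can move the terrorism total by orders of magnitude.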

What it’s like going to a Conversation with Tyler

I forgot to post this expeditiously, but I was at this conversation between Tyler Cowen and Matt Levine. You may listen to Conversations with Tyler, as I do, and wonder what they’re like live. I suspect they’re all different from each other, because some occur on university campuses, some are one-on-one, and this happened at what I think is Bloomberg News’s headquarters.

Some last-minute work things meant that I only arrived just before the start, but I did get a chance to chat before the chat, so to speak. Of the half dozen or so people I talked to, all were there for Matt, not Tyler, so they knew finance, and I often got that sense of turning off when they found out I wasn’t in the guild: there is no good business or networking opportunity here, so it’s time to move on—this isn’t a bad thing and in some circumstances I’ve done the same thing (“this isn’t what I’m here for, so let me keep looking”). But I was in “intellectual curiosity” mode more than “job networking” mode and in that mode I’m broader minded.

Almost everyone looked the same, but in a way hard to describe. If you’ve ever been to gatherings of consultants or finance people in New York (or similar places?), you’ll know what I mean: either light pastel colors or very dark clothes, lots of tucked-in dress shirts, a sense of restraint. It’s a little different, though, than similar gatherings in L.A., where people are not just more tan but more… glossy? Wearing short-sleeved shirts? The audience was almost all guys.

Bloomberg central feels like a combination of the Death Star and sleek Silicon Valley moguldom. I don’t think I’ve ever been in a better-designed space, but enough good design starts to feel oppressive. Security guys were all over the place, though not, I think, to protect the speakers. I think they’re just… there. Very visibly there.

During the talk itself, every time the word “risk” was mentioned, I thought of Nassim Taleb. How much do we really understand about derivatives and swaps? In my case very little, but even in the case of experts and “experts” I wonder if the answer is very much. I could be totally wrong, of course.

After the talk I had to leave quickly, alas, but maybe the crowd was less interest-seeky after a couple drinks.

If you get a chance to go, you totally should. Live events are good for many reasons, one being that they act as a reminder about why educational utopians who think online education is likely to upend live education may be wrong. The real world is very high resolution. I discovered that, when I listen to podcasts, I’m almost always moving, most often but not exclusively when I’m making dinner. It’s hard for me to sit still and purely listen for an entire hour, even to very engaging conversation. This is similar to my thinking about most modern classical music venues: they’re usually too hushed, too cerebral, too little audience movement, and too little beer.

Live audience events also do complex status transfers that I don’t entirely get, but they’re easy to feel in the moment. That’s obvious on some level but I very rarely see it stated as such.

Does politics have to be everywhere, all the time? On Jordan B. Peterson

“The Intellectual We Deserve: Jordan Peterson’s popularity is the sign of a deeply impoverished political and intellectual landscape” has been making the rounds for good reason: it’s an intellectually engaged, non-stupid takedown of Peterson. But while you should read it, you should also read it skeptically (or at least contextually). Take this:

A more important reason why Peterson is “misinterpreted” is that he is so consistently vague and vacillating that it’s impossible to tell what he is “actually saying.” People can have such angry arguments about Peterson, seeing him as everything from a fascist apologist to an Enlightenment liberal, because his vacuous words are a kind of Rorschach test onto which countless interpretations can be projected.

I hate to engage in “whataboutism,” but if you’re going to boot intellectuals who write nonsense, at least half of humanities professors are out—and maybe more. People can have long (and literally endless) arguments about what “literary theory” is “actually saying” because most of its content is itself vacuous enough to be “a kind of Rorschach test.” Peterson is responding in part to that kind of intellectual environment. An uncharitable reading may find that he produces vacuous nonsense in part because that sells.

A more charitable reading, however, may find that in human affairs, apparent opposites may be true, depending on context. There are sometimes obvious points from everyday life: it’s good to be kind, unless kindness becomes a weakness. Or is it good to be hard, not kind, because the world is a tough place? Many aphorisms contradict other aphorisms because human life is messy and often paradoxical. So people giving “life advice,” or whatever one may call it, tend to suffer the same problems.

You may notice that religious texts are wildly popular but not internally consistent. There seems to be something in the human psyche that responds to attractive stories more than consistency and verifiability.

More:

[Peterson] is popular partly because academia and the left have failed spectacularly at helping make the world intelligible to ordinary people, and giving them a clear and compelling political vision.

Makes sense to me. When much of academia has abandoned any effort to find meaning in the larger world or impart somewhat serious ideas about what it means to be and to exist in society, apart from particular political theories, we shouldn’t be surprised when someone eventually comes along and attracts followers from those adrift.

In other words, Robinson has a compelling theory about what makes Peterson popular, but he doesn’t have a compelling theory about how the humanities in academia might rejoin planet earth (though he notes, correctly, that “the left and academia actually bear a decent share of blame [. . .] academics have been cloistered and unhelpful, and the left has failed to offer people a coherent political alternative”).

Too many academics on the left also see their mission as advocacy first and learning or impartial judgment second. That creates a lot of unhappiness and alienation in classrooms and universities. We see problems with victimology that have only recently started being addressed. Peterson tells people not to be victims; identifying as a victim is often bad even for people who are genuine victims. There’s much more to be said about these issues, but they’ll have to be saved for some other essay—or browse around Heterodox Academy.

More:

Sociologist C. Wright Mills, in critically examining “grand theorists” in his field who used verbosity to cover for a lack of profundity, pointed out that people respond positively to this kind of writing because they see it as “a wondrous maze, fascinating precisely because of its often splendid lack of intelligibility.” But, Mills said, such writers are “so rigidly confined to such high levels of abstraction that the ‘typologies’ they make up—and the work they do to make them up—seem more often an arid game of Concepts than an effort to define systematically—which is to say, in a clear and orderly way, the problems at hand, and to guide our efforts to solve them.”

Try reading Jung. He’s “a wondrous maze” and often unintelligible—and certainly not falsifiable. Yet people like and respond to him, and he’s inspired many artists, in part because he’s saying things that may be true—or may be true in some circumstances. Again, literary theorists do something similar. Michel Foucault is particularly guilty of nonsense (why people love his History of Sexuality, which contains little history and virtually no citations, is beyond me). In grad school a professor assigned Luce Irigaray’s book Sexes and Genealogies, a book that makes both Foucault and Peterson seem lucid and specific by comparison.

Until Robinson’s essay I’d not heard of C. Wright Mills, but I wish I’d heard of him back in grad school; in that atmosphere, where many dumb ideas feel so important because the stakes are so low, he would’ve been revelatory. He may help explain what’s wrong in many corners of what’s supposed to be the world of ideas.

Oddly, the Twitter account Real Peer Review has done much of the work aggregating the worst offenders in published humanities nonsense (a long time ago I started collecting examples of nonsense in peer review but gave up because there was so much of it and pointing out nonsense seemed to have no effect on the larger world).

the Peterson way is not just futile because it’s pointless, it’s futile because ultimately, you can’t escape politics. Our lives are conditioned by economic and political systems, like it or not [. . .]

It’s true, I suppose, in some sense, that you can’t escape politics, but must all of life be about politics, everywhere, all the time? I hope not. One hears that “the personal is the political,” which is both irritating and wrong. Sometimes the personal is just personal. Or political dimensions may be present but very small and unimportant, like relativity acting on objects moving at classical speeds. The politicizing of everyday life may be part of what drives searching people towards Peterson.

Sometimes people want to live outside the often-dreary shadow of politics, but some aspects of social media make that harder. I’ve observed to friends that the more I see of someone on Facebook, the less I tend to like them (maybe the same is true of others who know me via Facebook). Maybe social media also means that the things that could be easily ignored in a face-to-face context, or just not known, get highlighted in an unfortunate and extremely visible way. Social media seems to heighten our mimetic instincts in not-good ways.

We seem to want to sort ourselves into political teams more readily than we used to, and we seem more likely to cut off relationships due to slights or beliefs that wouldn’t have been visible to us previously. In some sense we can’t escape politics, but many if not most of us feel that politics is not our most defining characteristic.

I’m happy to read Peterson as a symptom and a response, but the important question then becomes, “To what? Of what?” There are a lot of possible answers, some of which Robinson engages—which is great! But most of Peterson’s critics don’t seem to want to engage the question, let alone the answer.

The rest of us are back to the war of art. Which has to first of all be good, rather than agreeing with whatever today’s social pieties may be.

What would a better doctor education system look like?

A reader of “Why you should become a nurse or physicians assistant instead of a doctor: the underrated perils of medical school,” asks, though not quite in this way, what a better doctor education system would look like. It’s surprising that it’s taken so long and so many readers for someone to ask, but before I answer, let me say that, while the question is important, I don’t expect to see improvement. That’s because current, credentialed doctors are highly invested in the system and want to keep barriers to entry high—which in turn helps keep salaries up. In addition, there are still many people trying to enter med school, so the supply of prospective applicants props the system up. Meanwhile, people who notice high wages in medicine but who also notice how crazy the med school system is can turn to PA or NP school as reasonable alternatives. With so little pressure on the system and so many stakeholders invested, why change?

That being said, the question is intellectually interesting if useless in practice, so let’s list some possibilities:

1. Roll med school into undergrad. Do two years of gen eds, then start med school. Even assuming med school needs to be four years (it probably doesn’t), that would slice two years of high-cost education off the total bill.

2. Allow med students, or for that matter anyone, to “challenge the test.” If you learn anatomy on your own and with YouTube, take the test, and then you don’t have to take three to six (expensive) weeks of mind-numbing lecture courses. Telling students, “You can spend $4,000 on courses or learn it yourself and then take a $150 test” will likely have… unusual outcomes, compared to what professors claim students need.

3. Align curriculums with what doctors actually do. Biochem is a great subject that few specialties actually use; require it for the specialties that do, and don’t mandate it for family docs, ER docs, etc.

4. Allow competition among residencies—that is, allow residents to switch on, say, a month-by-month basis, like a real job market.

There are probably others, but these are some of the lowest-hanging fruit. We’re also not likely to see many of these changes for the reason mentioned above—lots of people have a financial stake in the status quo—but also because so much of school is about signaling, not learning. The system works sub-optimally, but it also works “well enough.” Since the present system is good enough and the current medical cartel likes things as they are, it’s up to uncredentialed outsiders like me to observe possible changes that’ll never be implemented by insiders.

I wrote “Why you should become a nurse or physicians assistant instead of a doctor: the underrated perils of medical school” five years ago and in that time we’ve seen zero changes at the macro level. Some individuals have likely avoided screwing up their lives via med school, and some of them have left comments or sent me emails saying as much, and that’s great. But it’s not been sufficient to generate systemic change.

“University presidents: We’ve been blindsided.” Er, no.

“University presidents: We’ve been blindsided” is an amazing article—if the narrative it presents is true. It’s amazing because people have been complaining about political correctness and nothing-means-anything postmodernism since at least the early ’90s, yet the problems with reality and identity politics seem to have intensified in the Internet age. University presidents haven’t been blindsided, and some of the problems in universities aren’t directly their fault—but perhaps their biggest failure, with some notable exceptions (like the University of Chicago), is not standing up for free speech.

I don’t see how it was possible not to see this coming; the right’s attack on academia has its roots in the kind of scorn and disdain I write about in “The right really was coming after college next.” As I say there, I’ve been hearing enormous, overly broad slams against the right for as long as I’ve been involved in higher education. That sort of thing has gone basically unchecked for I-don’t-know-how long. It would be surprising not to expect a backlash eventually, and institutions that don’t police themselves eventually get policed, or at least attacked, from the outside.

(Since such observations tend to generate calls of “partisanship,” I’ll again note that I’m not on the right and am worried about intellectual honesty.)

There is this:

“It’s not enough anymore to just say, ‘trust us,'” Yale President Peter Salovey said. “There is an attempt to build a narrative of colleges and universities as out of touch and not politically diverse, and I think … we have a responsibility to counter that — both in actions and in how we present ourselves.”

That’s because universities are not politically diverse. At all. Heterodox Academy has been writing about this since it was founded. Political monocultures may in turn encourage freedom-of-speech restrictions, especially against the other guy, who isn’t even around to make a case. For example, some of you may have been following the Wilfrid Laurier University brouhaha (if not, “Why Wilfrid Laurier University’s president apologized to Lindsay Shepherd” is an okay place to start, though the school is in Canada, not the United States). Shepherd’s department wrote a reply, “An open letter from members of the Communication Studies Department, Wilfrid Laurier University,” that says, “Public debates about freedom of expression, while valuable, can have a silencing effect on the free speech of other members of the public.” In other words, academics who are supposed to support free speech and disinterested inquiry don’t. And they get to decide what counts as free speech.

If academics don’t support free speech, they’re just another interest group, subject to the same social and political forces that all interest groups are subject to. I don’t think the department that somehow thought this letter to be a good idea realizes as much.

The idea that “trust us” is good enough doesn’t seem to be good enough anymore. In the U.S., the last decade of anti-free-speech and left-wing activism on campus has brought us a Congress that is in some ways more retrograde than any since… I’m not sure when. Maybe the ’90s. Maybe earlier. Yet the response on campus has been to shrug and worry about pronouns.

Rather than “touting their positive impacts on their communities to local civic groups, lawmakers and alumni,” universities need to re-commit to free speech, open and disinterested inquiry, and not prima facie opposing an entire, large political group. Sure, “Some presidents said they blame themselves for failing to communicate the good they do for society — educating young people, finding cures for diseases and often acting as major job creators.” But, again, universities exist to learn what’s true, as best one can, and then explain why it’s true.

Then there’s this:

But there was also an element of defensiveness. Many argue the backlash they’ve faced is part of a larger societal rethinking of major institutions, and that they’re victims of a political cynicism that isn’t necessarily related to their actions. University of Washington President Ana Mari Cauce, for one, compared public attitudes toward universities with distrust of Congress, the legal system, the voting system and the presidency.

While universities do a lot right, they (or some of their members) are also engaging in dangerous epistemic nihilism that’s contrary to their missions. And people are catching on to that. Every time one sees a fracas like the one at Evergreen State College, universities as a whole lose a little of their prestige. And the response of many administrators hasn’t been good.

Meanwhile, the incredible Title IX stories don’t help (or see Laura Kipnis’s story). One can argue that these are isolated cases. But are they? With each story, and the inept institutional response to it, universities look worse and so do their presidents. University presidents aren’t reaffirming the principles of free speech and disinterested research, and they’re letting bureaucrats create preposterous and absurd tribunals. Then they’re saying they’ve been blindsided! A better question might be, “How can you not see a reckoning in advance?”

“The right really was coming after college next”

Excuse the awkward headline and focus on the content in “The right really was coming after college next.” Relatively few people point out that college has been coming after the right for a very long time; sometimes college correctly comes after the right (e.g. Iraq War II), but the coming after is usually indiscriminate. I’ve spent my entire adult life hearing professors say that Republicans are stupid or people who vote for Romney or whoever are stupid. Perhaps we ought not to be surprised when the right eventually hits back?

A few have noticed that “Elite colleges are making it easy for conservatives to dislike them.” A few have also noticed that we ought to be working towards greater civility and respect, especially regarding ideological disagreement; that’s one purpose of Jonathan Haidt’s Heterodox Academy. Still, on the ground and on a day-to-day level, the academic vituperation towards the right in the humanities and most social sciences (excluding economics) has been so obvious and so clear that I’m surprised it’s taken this long for a backlash.

Because I’m already imagining the assumptions in the comments and on Twitter, let me note that I’m not arguing this from the right—I find that I’m on the side of neither the right nor the left, in part because neither the right nor the left is on my side—but I am arguing this as someone who cares about freedom of speech and freedom of thought, which have never been free and have often been unpopular. It’s important to work towards understanding before judgment or condemnation, even though that principle too has likely never been popular or widely adopted.

It seems to me that homogeneous, lockstep thought is dangerous wherever it occurs, and increasingly it appears to be occurring in large parts of colleges. One hopes that the colleges notice this and try to self-correct. Self-correction will likely be more pleasant than whatever political solution might be devised in statehouses.


On Las Vegas, briefly

In 2012 James Fallows wrote, “The Certainty of More Shootings.” As of October 2, this mass-shooting database lists 273 mass shootings in 2017. The policy response to mass shootings has been indistinguishable from zero. After the Sandy Hook shooting, pundits observed that if we’re willing to tolerate the massacre of small children, we’re basically willing to tolerate anything. They seem to have been right. Now at least 50 are dead in Las Vegas.

It’s easy to blame “politicians” but politicians respond to voters. I fear that “The Certainty of More Shootings” is going to remain distressingly relevant for years, maybe decades, to come. I bet Fallows wishes that it could be relegated to a historical curiosity.

Even The Onion has a perennial for gun massacres: “‘No Way To Prevent This,’ Says Only Nation Where This Regularly Happens.”

If I were a camera company I’d be nervous

I’d be nervous because phone makers and especially Apple are iterating so fast on hardware and software that nearly everyone is going to end up using phone cameras, with the exception of some dedicated pros and the most obsessive amateurs. Right now the media is saturated with articles like, “How Apple Built An iPhone Camera That Makes Everyone A Professional Photographer.” Many of those articles overstate the case—but not by much.

To be sure, phone camera sensors remain small, but Apple and Google are making up for size via software; in cameras, as in so many domains, software is eating the world. And the response so far from camera makers has been anemic.

If I were a camera maker, I’d be laser-focused on making Android the default camera OS and exposing APIs to software developers. Yet none seem to care.* It’s like none have learned Nokia’s lesson; Nokia was a famously huge cell phone maker that got killed by the transition to smartphones and never recovered. I wrote this about cameras in 2014 and it’s still true today. In the last three years camera makers have done almost nothing to improve their basic position, especially regarding software.

“Not learning Nokia’s lesson” is a very dangerous place to be. And I like the Panasonic G85 I have! It’s a nice camera. But it’s very large. I don’t always have it with me. Looking at phones like the iPhone X I find myself thinking, “Maybe my next camera won’t be a camera.”

Within a year or two most phone cameras are likely to have two lenses and image sensors, along with clever software to weave them together effectively. Already Apple is ahead of the camera makers in other ways; some of those remain beneath the notice of many reviewers. Apple, for example, is offering more advanced codecs, which probably doesn’t mean much to most users, but implementing H.265 video means that Apple can in effect halve the size of most videos. In a storage- and bandwidth-constrained environment, that’s a huge win (just try to shoot 4K video and see what I mean). Camera makers should be at the forefront of such transitions, but they’re not. Again, Samsung’s cameras were out front (they used H.265 in 2015), but no one else followed.
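The storage claim is easy to sanity-check with back-of-envelope arithmetic. The bitrates below are my illustrative assumptions, not Apple’s published specs; the point is just that halving the bitrate (the common rule of thumb for H.265 versus H.264 at comparable quality) halves the file size:

```python
# Back-of-envelope, with assumed (not official) bitrates:
h264_mbps = 50                # a typical-ish 4K H.264 bitrate, in megabits/s
h265_mbps = h264_mbps / 2     # rule of thumb: H.265 needs roughly half

minutes = 10

def gigabytes(mbps: float) -> float:
    """File size of a clip at the given bitrate: megabits/s -> gigabytes."""
    return mbps * minutes * 60 / 8 / 1000

print(f"H.264: {gigabytes(h264_mbps):.2f} GB, H.265: {gigabytes(h265_mbps):.2f} GB")
```

Under these assumptions a ten-minute 4K clip drops from about 3.75 GB to about 1.88 GB, which is why the codec choice matters so much on a storage-constrained phone.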

Camera makers are going to be business-school case studies one day, if they aren’t already. They have one job—making the best cameras possible—and already Apple is doing things in a $1,000 smartphone (next year it will likely be $800) that camera makers aren’t doing in $2,000+ cameras.

That’s incredibly bad for camera makers but great for photographers. I may never buy another standalone camera because if phones do pictures and videos better, why bother?


* With the exception of Samsung, which had a brief foray into the camera world but then quit—probably due to a declining market and low margins. And Thom Hogan has been beating the Android drum for years, for good reason, and it appears that no decision makers are listening.

‘Maybe in 50 years there won’t be novels’

“Claire Messud: ‘Maybe in 50 years there won’t be novels’: As her fifth novel is published, the American writer warns that shrinking attention spans could prove the death of long fiction” makes an interesting point that is definitely plausible and may also be correct. Still, while average attention spans may be shrinking, elite attention spans may be as long as they ever were—they have to be to do good work. The people who make Twitter, Facebook, and SnapChat need intense concentration to do the work they do (everyone ought to at least attempt a programming class, if for no other reason than to understand the kind of mental effort it entails). If we’re going to keep the lights on, the Internet working at all, and the world running, we need to be able to concentrate long enough to really understand a topic deeply.

As fewer people can do this, the value of doing it rises. My own work as a grant writer depends on concentration; part of the reason we have a business is because most people can’t concentrate long enough to learn to write well and then apply that learning to grant applications. As I wrote in 2012, “Grant writing is long-form, not fragmentary.” Cal Newport makes a similar point, although not about grant writing, in Deep Work.

The contemporary tension between an attention-addled majority and a deep-working minority fuels Neal Stephenson’s novel Anathem. It’s not the most readable of novels because the made-up vocabulary of the future is so grating. The idea is a reasonable one (our present vocabulary is different from the past’s vocabulary, so won’t the same be true of the future’s?), but the novel also shows the technical problems that attempting to implement that idea entails. I wonder if Messud has read Anathem.

Anyway, to return to Messud, I suspect this is true: “That we can’t fathom other people, or ourselves, is the engine of fiction” and as long as it remains true there will be an appetite for novels among at least some people.

By the way, I’ve started a couple of Messud’s books and never cottoned to them. Maybe the flaw is mine.

Statistical analyses of literature: let’s see what happens

I got some pushback to the link on what heretical things statistics can tell us about fiction, and I’ve read pushback like it before: the objections tend to say that great literature can’t be reduced to statistics; big data will never replicate the reading experience; a novel is more than the sum of the words chosen. That sort of thing. All of which is likely true, but the more interesting question is, “What kinds of things is nobody doing in the study of fiction?” (Or of words, or sentences, or writers’ oeuvres.) Lots and lots of people, including me, closely study individual works and connect them to a smallish body of other works and ideas.

Over centuries, if not longer, thousands, if not millions, of people have engaged this practice. Not very many people have attempted to systematically examine thousands if not millions of works simultaneously. So that may tell us something the usual methods haven’t. It’s worth exploring that domain. And just because that domain is being explored, the more usual paths via close reading aren’t closed off.
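For a sense of how crude but scalable these methods can be, here’s a minimal sketch in Python. The snippets and the type-token ratio are illustrative stand-ins of my own, not the method of any particular study; the point is that a measure like this can be computed over millions of texts as easily as over two:

```python
from collections import Counter
import re

def word_freqs(text: str) -> Counter:
    """Lowercase word counts -- about the crudest unit of 'distant reading'."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def type_token_ratio(freqs: Counter) -> float:
    """Distinct words divided by total words: a rough vocabulary-richness measure."""
    return len(freqs) / sum(freqs.values())

# Hypothetical snippets standing in for whole corpora.
a = "the sea was grey and the sea was cold and the boat was small"
b = "a crimson dusk settled; gulls wheeled above the silent harbour"

fa, fb = word_freqs(a), word_freqs(b)
print(f"TTR a = {type_token_ratio(fa):.2f}, TTR b = {type_token_ratio(fb):.2f}")
```

A single number like this says nothing about what either passage means, which is the critics’ point; but computed across thousands of novels and decades of publication, even crude measures can surface patterns no close reader could see.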

In other words, don’t think that an argument along the lines of “x is interesting” means “we should always and only do x.”

At the moment, we also appear to be at the very start of the field. Maybe it’ll become extremely important and maybe it won’t. The potential is there. People have (arguably) been doing some form of close reading and analysis, even if the practice didn’t use those specific words, for millennia. Certainly for centuries. So I’d be pretty surprised to see statistical analyses produce whatever good material they’re likely to produce in just a decade or two.

Part of what art and analysis should do is be novel. Another part is “be interesting.” We’re looking for the intersection of those two zones.
