Lost technologies, Seveneves, and The Secret of Our Success

Spoilers ahead, but if you haven’t read Seveneves by now they probably don’t matter.

Seveneves is an unusual and great novel, and it stays great as long as you attribute some of its less plausible elements to an author building a world. One implausible element is the way humanity comes together and keeps the social, political, and economic systems functional enough to launch large numbers of spacecraft in the face of imminent collective death. If we collectively had two years to live, I suspect total breakdown would follow, leaving us with no Cloud Ark (and no story—thus we go along with the premise).

But that’s not the main thing I want to write about. Instead, consider the loss of knowledge that inherently comes with population decline. In Seveneves humanity declines to seven women living in space on a massive iron remnant of the moon. They slowly repopulate, with their descendants living in space for five thousand years. But a population of seven would probably not be able to retain and transmit the specialized knowledge necessary for survival on most parts of Earth, let alone space.

That isn’t a speculative claim. We have pretty good evidence for the way small populations lose knowledge. Something drew me to re-reading Joseph Henrich’s excellent book The Secret of Our Success, and maybe the sections about technological loss are part of it. He writes about many examples of European explorers getting lost and dying in relatively fecund environments because they lack the local knowledge and customs necessary to survive. He writes about indigenous groups too, including the Polar Inuit, who “live in an isolated region of northwestern Greenland [. . . .] They are the northernmost human population that has ever existed” (211). But

Sometime in the 1820s an epidemic hit this population and selectively killed off many of its oldest and most knowledgeable members. With the sudden disappearance of the know-how carried by these individuals, the group collectively lost its ability to make some of its most crucial and complex tools, including leisters, bows and arrows, the heat-trapping long entry ways for snow houses, and most important, kayaks.

As a result, “The population declined until 1862, when another group of Inuit from around Baffin Island ran across them while traveling along the Greenland coast. The subsequent cultural reconnection led the Polar Inuit to rapidly reacquire what they had lost.” That reconnection was essential:

Though crucial to survival in the Arctic, the lost technologies were not things that the Polar Inuit could easily recreate. Even having seen these technologies in operation as children, and with their population crashing, neither the older generation nor an entirely new generation responded to Mother Necessity by devising kayaks, leisters, compound bows, or long tunnel entrances.

Innovation is hard and relatively rare. We’re all part of a network that transmits knowledge horizontally, from peer to peer, and vertically, from older person to younger person. Today, people in first-world countries are used to innovation because we’re part of a vast network of billions of people who are constantly learning from each other and transmitting the innovations that do arise. We’re used to seemingly automatic innovation because so many people are working on so many problems. Unless we’re employed as researchers, we’re often not cognizant of how much effort goes into both discovery and transmission.

Without that dense network of people, though, much of what we know would be lost. Maybe the best-known example of technological loss is the fall of the Roman Empire; the ancient Egyptians, too, lost the know-how necessary to build pyramids and other epic engineering works.

In a Seveneves scenario, it’s highly unlikely that the novel’s protagonists would be able to sustain and transmit the knowledge necessary to live somewhere on Earth, let alone somewhere as hostile as space. Quick: how helpful would you be in designing and manufacturing microchips, solar panels, nuclear reactors, or oxygen systems, or in engineering plant biology? Yeah, me neither. Those complex technologies have research, design, and manufacturing facets that are embodied in the heads of thousands if not millions of individuals. The level of specialization our society has achieved is incredible, but we rarely think about how incredible it really is.

This is not so much a criticism of the novel—I consider the fact that they do survive part of granting the author his due—but it is a contextualization of the novel’s ideas. The evidence that knowledge is fragile is more pervasive and available than I’d thought when I was younger. We like stories of individual agency, but in actuality we’re better conceived of as parts in a massive system. Our susceptibility to conspiracy theories can be seen as a belief in the excessive power of the individual. In an essay from Distrust That Particular Flavor, William Gibson writes: “Conspiracy theories and the occult comfort us because they present models of the world that more easily make sense than the world itself, and, regardless of how dark or threatening, are inherently less frightening.” The world itself is big and densely interconnected, and our ability to change it is real but often smaller than we imagine.

Henrich writes:

Once individuals evolve to learn from one another with sufficient accuracy (fidelity), social groups of individuals develop what might be called collective brains. The power of these collective brains to develop increasingly effective tools and technologies, as well as other forms of nonmaterial culture (e.g., know-how), depends in part on the size of the group of individuals engaged and on their social connectedness. (212)

The Secret of Our Success also cites laboratory recreations of similar principles; those experiments are too long to describe here, but they are clever. If there are good critiques of the chapter and idea, I haven’t found them (and if you know any, let’s use our collective brain by posting links in the comments). Henrich emphasizes:

If a population suddenly shrinks or gets socially disconnected, it can actually lose adaptive cultural information, resulting in a loss of technical skills and the disappearance of complex technologies. [. . . ] A population’s size and social interconnectedness sets a maximum on the size of a group’s collective brain. (218-9)

That size cap means that small populations in space, even if they are composed of highly skilled and competent individuals, are unlikely to survive over generations. They are unlikely to survive even if they have the rest of humanity’s explicit knowledge recorded on disk. There is too much tacit knowledge for explicit knowledge in and of itself to be useful, as anyone who has ever tried to learn from a book and then from a good teacher knows. Someday we may be able to survive indefinitely in space, but today we’re far from that stage.

Almost all post-apocalyptic novels face the small-population dilemma to some extent (I’d argue that Seveneves can be seen as a post-apocalyptic novel with a novel apocalypse). Think of the role played by the nuclear reactor in Stephen King’s The Stand: the characters in the immediate aftermath must decide whether they’re going to live in the dark and regress to hunter-gatherer times, at best, or save and use the reactor to live in the light (the metaphoric implications are not hard to perceive). In one of the earliest post-apocalyptic novels, Earth Abides, two generations after the disaster, the descendants of technologically sophisticated people are reduced to using melted-down coins as tips for spears and arrows. In Threads, the movie (and my nominee for scariest movie ever made), the descendants of survivors of nuclear war lose most of their vocabulary and are reduced to what is by modern standards an impoverished language, a sort of inadvertent 1984 newspeak.* Let’s hope we don’t find out what actually happens after nuclear war.

In short, kill enough neurons in the collective brain and the brain itself stops working. Which has happened before. And it could happen again.


* Check out the cars in Britain in Threads: they’re a reminder of how much technological progress has happened since.

Why read bestsellers

Someone wrote to ask why I bother writing about John Grisham’s weaknesses as a writer; implied in that question is a second one: why read bestsellers at all? The first is a fair question, and so is its implication: Grisham’s readers don’t read me and don’t care what I think; they don’t care that he’s a bad writer; and people who read me probably aren’t going to read him. Still, I read him because I was curious, and I wrote about him to report what I found.

The answer to the second one is easy: some are great! Not all, probably not even most, but enough to make trying worthwhile. Lonesome Dove, the best novel I’ve read recently, was a bestseller. Its sequel, Streets of Laredo, is not quite as good, but I’m glad to have read it. Elmore Leonard was often a bestseller and he is excellent. Others seemed like they’d be bad (Gillian Flynn, Tucker Max) but became favorites.

One could construct a 2×2 matrix of good famous books, bad famous books, good obscure books, and bad obscure books. That last group is a large one too; credibility among a handful of literary critics (who may be scratching each other’s backs anyway) does not necessarily equate to quality, and I’ve been fooled by good reviews of mostly unknown books many times. Literary posturing is not the same thing as literary achievement.

Different people also have different views of literary quality, and those views depend in part on experience and reading habits. Someone who reads zero or one books a year is likely to have very different impressions than someone who reads ten, or fifty, or a hundred. Someone who is reading like a writer will probably have a different experience than someone who reads exclusively in a single genre.

And Grisham? That article (which I wish I could find) made him and especially Camino Island sound appealing, and the book does occasionally work. But its addiction to cliché and the sort of overwriting common in student work make it unreadable in my view. Someone who reads one or two books a year, and for whom Grisham is one of those books, will probably like him just fine, though, because they don’t have the built-up stock of reading that lets them distinguish what’s really good from what isn’t.

Work and video games

I was reading “Escape to Another World” (highly recommended) and this part made me realize something:

How could society ever value time spent at games as it does time spent on “real” pursuits, on holidays with families or working in the back garden, to say nothing of time on the job? Yet it is possible that just as past generations did not simply normalise the ideal of time off but imbued it with virtue – barbecuing in the garden on weekends or piling the family into the car for a holiday – future generations might make hours spent each day on games something of an institution.

I think part of the challenge is that, historically, many of us have pursued hobbies and other activities that are also related to craftsmanship. The world is full of people who, in their spare time, rebuild bikes or cars, or sew quilts, or bind books, or write open-source software, or pursue other hobbies that have virtues beyond the pleasure of the hobby itself (I am thinking of a book like Shop Class as Soulcraft, though if I recall correctly the idea of craftsmanship as a virtue of its own goes back to Plato). A friend of mine, for example, started taking pottery classes; while she enjoys the process, she also gets bowls and mugs out of it. Video games seem to have few or none of those secondary effects.

To be sure, a lot of video game playing has likely replaced TV watching, and watching TV has none of those salutary effects either. Still, one has to wonder whether video games are also usurping more active pursuits that build other kinds of skills (as well as useful objects).

I say this as someone who wasted a fantastic amount of time on video games from ages 12 to 15 or so. Those are years I should’ve spent building real skills and abilities (or even having real fun), and instead I spent a lot of them slaying imaginary monsters as a way of avoiding the real world. I can’t imagine being an adult and spending all that time on video games. We can never get back the time we waste, and wasted time compounds—as does invested time.

In my own life, the hobby time I’ve spent reading feeds directly into my professional life. The hobby time I spent working on newspapers in high school and college does too. Many people won’t have so direct a connection—but many do, and will.

To be sure, lots of people play recreational video games that don’t interfere with the rest of their lives. Playing video games as a way of consciously wasting time is fine, but it becomes a problem when wasting time turns into a primary activity instead of a secondary or tertiary one. It’s possible to waste a single day mucking around or playing a game or whatever—I have, and chances are very high that so have you—but the pervasiveness of those wasted days seems new, as Avent writes.

It’s probably better to be the person writing the games than playing the games (and writing them can at times take on some game-like qualities). When you’re otherwise stuck, build skills. No one wants skills in video game playing, but lots of people want other skills that aren’t being built by battling digital orcs. The realest worry may be that many people who start the video game spiral won’t be able to get out.

Trump fears and the nuclear apocalypse

In the best-case Trump scenario, he bumbles around for four years, embarrassing himself and the country but making few substantive political changes; in the worst-case scenario, he starts or provokes a nuclear war. Nuclear war is very bad and could conceivably extinguish the human race, or at least wipe out the United States along with other countries. I still view nuclear war as unlikely, but it’s far more likely than I would’ve judged it three weeks ago—and when I’ve mentioned increasing fear of nuclear war I’ve gotten a weirdly large amount of pushback.

Most of that pushback seems like wishful thinking. To understand the danger, Fred Kaplan’s The Wizards of Armageddon is a good book about nuclear policy and history, but Command and Control: Nuclear Weapons, the Damascus Accident, and the Illusion of Safety by Eric Schlosser is probably better as a first introduction to the subject. Command and Control details how appallingly short the line is between the president and launching, or attempting to launch, nuclear weapons.

To understand why Trump is scary, it is necessary to understand two things: first, that in theory the president is supposed to be able to order a nuclear launch anywhere, at any time, and have missiles in the air within 30 minutes; and second, that seemingly minor quarrels among countries have sometimes led to historically catastrophic outcomes.

Let us deal with the first point: while the president is supposed to be able to order an unprovoked nuclear attack at any time, there is at least some precedent for a gray area around nuclear weapons:

[I]n 1974, in the last days of the Watergate scandal, Mr. Nixon was drinking heavily and his aides saw what they feared was a growing emotional instability. His new secretary of defense, James R. Schlesinger, himself a hawkish Cold Warrior, instructed the military to divert any emergency orders — especially one involving nuclear weapons — to him or the secretary of state, Henry A. Kissinger.

It was a completely extralegal order, perhaps mutinous. But no one questioned it.

“Although Schlesinger’s order raised questions about who was actually in command,” Eric Schlosser writes in “Command and Control,” a 2013 book, “it seemed like a good idea at the time.”

This is at least a little heartening, as it implies that the generals charged with executing nuclear launch commands may simply refuse an unprovoked order. The human nuclear bureaucracy and apparatus is itself, one hopes, neither suicidal nor homicidal. Still, that is a slender hope, as Alex Wellerstein describes in “The President and the bomb.”

To be sure, it’s also possible that Obama, Biden, and for that matter someone like Paul Ryan are having quiet conversations with the Secret Service and the military about what to do with a rogue nuclear launch order. Those quiet conversations might be unconstitutional, but if the choice is between constitutionality and the death of everyone and everything, one should hope that the few people charged with mechanically carrying out orders will second-guess those orders.

Beyond that, the history of World War I should scare us. World War I was a catastrophe that killed tens of millions of people, and it was a war no one wanted. I doubt most people have the faintest idea how World War I got started, and if you want to annoy your friends, try asking them. Hell, I’m not even sure I could give a good answer. Still, consider some background reading:

* This is Tobias Stone’s “History Tells Us What Will Happen Next With Brexit And Trump.”

* Here is one description of “How Trump Could Realistically Start a Nuclear War.”

* Here is “The real danger,” also about the possibility of direct, great power wars.

* At the same time, see “Commander-In-Chief Donald Trump Will Have Terrifying Powers. Thanks, Obama.” It can be fun to have secret, unchecked powers when your guy is in office, but it is incredibly dangerous when the other guy has them.

Almost everyone has forgotten about World War I, but in the short prelude to it people acted as if life would go on as normal. Check out the sleepwalking into war described in the Hardcore History podcast, around 1:38. In the horrible late July and early August of 1914, people went on holiday and shopkeepers assured their customers that nothing untoward would happen (one sees similar noises in the normalization of Trump). World trade had been expanding for decades; everyone “knew” that war would be suicidal; it seemed implausible that the assassination of an archduke would lead to conflagration.

A similar set of circumstances could happen today. The flashpoint could be the South China Sea, a disputed area. It could be the Baltic states. It could be Syria. It could be almost anywhere the U.S. could pointlessly clash with China or Russia. Trump is obsessed with revenge, and it is easy to imagine a skirmish or dispute between U.S. forces and Chinese or Russian forces escalating rapidly in tit-for-tat fashion.

Like this scenario: a Chinese ship fires on a U.S. ship in the South China Sea. The U.S. ship flees with a few casualties and Trump orders an attack on a Chinese ship in retaliation. The ship sinks; China cannot be seen to accept the disrespect, and in turn sinks a sub and imposes trade sanctions. The U.S. rallies to the flag and does the same. Eventually China uses a supercavitating missile to take out a U.S. carrier.

One could spin out an infinite number of similar scenarios, which may develop very quickly, over the course of days or weeks. Tit-for-tat may be an attractive strategy for small bands of humans or proto-humans in hunter-gatherer or agricultural societies fighting each other. It could end the world in the nuclear age.

I’m not too worried about Trump and domestic policy. He is likely to do some bad and foolish things, but they are unlikely to be existential threats. I am worried about Trump and the end of the world. We haven’t even discussed the possibility of a flu pandemic or some other kind of pandemic. The Ebola crisis came much closer to a worldwide catastrophe than is commonly assumed now. At the start of a flu pandemic the United States may have to lead the world in a decisive, intelligent way that seems unlikely to happen under Trump.

Maybe nothing catastrophically bad will happen. I hope so and think that will be true. But to pretend he is a “normal” politician (or to vote for him) is to be willfully blind to history and to the man himself. In darker moments I wonder: maybe we don’t deserve democracy or freedom. Those who will not even vote for it—and half the potential electorate didn’t vote—don’t deserve it. Maybe institutions will resist Trump for the next four years, or resist his most militaristic and dangerous impulses. Maybe they won’t.

Again, I think the likely scenario is that Trump bumbles for four years and gets voted out of office. But nuclear war is too far outside most people’s Overton window, so they won’t even consider it, much as the total destruction World War I would bring was inconceivable to any of the belligerents—had they realized it they would not have marched off to war, and many of the soldiers themselves would have resisted conscription; instead they marched to their own deaths.

If you are not scared you’re not paying attention.

We are one black swan event from disaster. The last worldwide, negative black swan event was arguably World War II. Perhaps the 71 years separating us from then is long enough to have forgotten how bad bad really is.

I don’t expect this post to change any minds. All of the information in it was available three weeks ago, and that didn’t change shit. We’re surrounded by what political scientists politely call “low-information voters.” This is a post based on logic and knowledge, and logic and knowledge played little role in the election. Maybe, outside of elite spheres, they play little role at all in human life. I only hope that the apocalyptic scenario doesn’t come to pass. If it does, “I told you so” will be no comfort, as it wasn’t in the aftermath of World War I. In that war the prophets and historians were ignored, as they were in the 2016 election. Let us pray that some of the prophets and historians are wrong.

“You can teach a lot of skills, but you can’t teach obsession”

There are many interesting moments in Ezra Klein’s conversation with Tyler Cowen but one in particular stands out, when Klein says that “You can teach a lot of skills, but you can’t teach obsession. There’s a real difference between somebody who is obsessed with the work they’re doing and someone who is simply skilled at the work they’re doing.” He’s right. You can’t teach people to be obsessed and over the medium to long term you can’t even pay them to be obsessed. Look for the people who are obsessed, even if it’s hard.

The larger context is:

Look for people who are desperate to be doing the thing they’re doing. I have often found really great people by finding people who either seemed or were literally doing what they need to be doing for free because nobody was yet paying them for it.

. . . You can teach a lot of skills, but you can’t teach obsession. There’s a real difference between somebody who is obsessed with the work they’re doing and someone who is simply skilled at the work they’re doing. I will take the obsession and teach the skills over getting the skills and having to teach the obsession.

Thinking about this now, it’s odd to me that more people, especially those in hiring positions, don’t select more carefully for obsession. That’s especially true in academia, but it’s also true elsewhere. Now that I think about it explicitly, I also realize that my essay “How to get your Professors’ Attention, Along With Coaching and Mentoring” is in part about how to, if not fake obsession, then at least demonstrate that the person seeking help or advice rises above indifference.

Is most narrative art just a series of status games?

In The Righteous Mind Jonathan Haidt writes:

If you think that moral reasoning is something we do to figure out the truth, you’ll be constantly frustrated by how foolish, biased, and illogical people become when they disagree with you. But if you think about moral reasoning as a skill we humans evolved to further our own social agendas—to justify our own actions and to defend teams we belong to—then things will make a lot more sense. Keep your eye on the intuitions, and don’t take people’s moral arguments at face value. They’re mostly post hoc constructions made up on the fly, crafted to advance one or more strategic objectives.

And those post hoc constructions are often “crafted” subconsciously, without the speaker or listener even being aware of what they’re doing. It occurs to me in light of this that most narrative art, and the moral reasoning implied in it, is just a set of moral status games: someone, usually the narrator, is trying to raise their own status and perhaps that of their group too. Seen this way, a lot of novels, TV shows, and movies get stripped of their explicit content and become vehicles for intuitive status games. Police shows are perhaps the worst offenders but are by no means the only ones. Most romance novels are about raising the heroine’s status through the acquisition of a high-status man.

One could apply similar logic to other genres. While realizing this may make most narrative art more boring, it may also open the possibility of writing narrative art that is explicitly not about status games, or that tries to avoid them to the extent possible. Science fiction may be the genre least prone to relentless status gaming, though “least prone” may also be faint praise.

Idea Makers: Personal Perspectives on the Lives & Ideas of Some Notable People — Stephen Wolfram

Idea Makers is charming but not for everyone. Its introduction is accurate:

in my own life I’ve seen all sorts of ideas and other things develop over the course of years—which has given me some intuition about how such things work. And one of the important lessons is that however brilliant one may be, every idea is the result of some progression or path—often hard won. If there seems to be a jump in the story—a missing link—then that’s just because one hasn’t figured it out.

The book is also pleasant because Wolfram does not adhere to the false art-science dichotomy. He’s “spent most of my life working hard to build the future with science and technology.” At the same time, “two of my other great interests are history and people.” Idea Makers covers all four and to some extent asks where good ideas come from. Wolfram has met numerous interesting, unusual, and special people, and his stories are close to the ideal ones you’d hear in a bar after two drinks.

Some sections introduce ideas that are counterintuitive or that I wasn’t aware of, like “mathematicians—despite their reputation for abstract generality—like most scientists, tend to concentrate on questions that their methods succeed with.” From this one might think the best way forward is to develop new methods, or to apply old methods to radically different fields. The quality of someone’s work may also not be apparent immediately, a better-known idea that still finds a place here: “At the time… Turing’s work did not make much of a splash, probably largely because the emphasis of Cambridge mathematics was elsewhere.”

Other thinkers were different: John von Neumann, for example, “was not particularly one to buck the system: he liked the social milieu of science and always seemed to take both intellectual and other authority seriously.”

Or:

Despite his successes, [George] Boole seems to have always thought of himself as a self-taught schoolteacher, rather than a member of the academic elite. And perhaps that helped in his ability to take intellectual risks. Whether it was playing fast and loose with differential operators in calculus, or finding ways to bend the laws of algebra so they could apply to logic, Boole seems to have always taken the attitude of just moving forward and seeing where he could go, trusting his own sense of what was correct and true.

Measuring the extent to which a person admires or respects received authorities and hierarchies against the extent to which a person disregards them could be an interesting project.

Each section of Idea Makers covers someone in science, math, or technology. The book is not especially amenable to quotation, but each section feels like the right length and has the right focus.

Some facts are simply tragic. Ada Lovelace died from what was likely cervical cancer; today the HPV vaccine largely protects its recipients from that disease. Most deaths are tragic on a local level; Ada Lovelace’s death is tragic on a global level, given how much she contributed and how much more she might have contributed.

Refreshingly, the quality of the physical book—its paper and binding—is unusually high, maybe because it’s put out by Wolfram Press: the company cares about longevity and quality in a way that most commercial publishers would do well to emulate. Stephen Wolfram himself often consults centuries-old pages, and in one illustration we see him using an iPhone to photograph an artifact. It is not a stretch to imagine him imagining someone, centuries from now, photographing (or using some more advanced technology to record) the work he publishes today.

People can believe in madness for surprisingly long periods of time:

I’m re-reading Zero to One, and one of its early points has surprising salience to politics right now. Collective madness is one of the book’s themes; Thiel notes that “Dot-com mania was intense but short—18 months of insanity from September 1998 to March 2000.” During that time, Thiel says he knew a “40-something grad student” who “was running six different companies in 1999.” Yet:

Usually, it’s considered weird to be a 40-year-old graduate student. Usually, it’s considered insane to start a half-dozen companies at once. But in the late ’90s, people could believe that was a winning combination.

That chapter, “Party Like It’s 1999,” starts with a quote from Nietzsche: “Madness is rare in individuals—but in groups, parties, nations, and ages it is the rule.” Is it so rare in individuals? I see people doing insane-seeming things all the time. Going to grad school in the humanities is one (I did that, by the way, although I at least had a well-developed backup plan). Continuing to date transparently bad people is another. Imagining the world to be a fundamentally stable place is a third, though one that has less immediate interpersonal relevance.

Still, the problem of collective insanity is a real one with lots of historical precedent. Hugo Chavez was originally elected fairly in Venezuela. Putin was originally elected fairly in Russia. Erdoğan was originally elected Prime Minister of Turkey fairly. In all three cases, the people spoke… wrongly. Horribly wrongly, and in ways that were at least somewhat clear at the time. Much as I hate to violate Godwin’s Law, the National Socialists were originally elected, or at least gained legitimate parliamentary seats. Mythologically, vampires must be invited into the home. The greatest danger is not the thing that obviously should be resisted. The greatest danger is the thing blithely accepted into the inner circle.

The U.S. has historically eschewed demagogues. Charles Lindbergh never became president. Neither did Huey Long. The closest we’ve gotten in recent memory is Richard Nixon. The U.S. has historically eschewed outright incompetents too. But madness in groups, parties, and nations can be weirdly persistent, especially because, as Thiel argues implicitly throughout Zero to One, it’s very hard to really think for yourself. I’m not sure I do it well. There is a kind of Dunning-Kruger effect for thinking for yourself.

That’s the context for why thinking people are scared about Trump as president. He’s manifestly unfit and unqualified, and yet it’s not uncommon for people to elect demagogic incompetents. Andrew Sullivan thinks we’ve never been as good a breeding ground for tyranny as we are now. That’s overstating the case—the 1930s were far more dangerous—but the argument is a reasonable one, and that is itself scary. We may be collectively partying like it’s 1999, and not in a good way.

I don’t write this from a partisan perspective or out of partisan animus. This blog rarely deals with direct political issues (though it often touches meta-politics). I’m politically disaffected; neither major party represents me or has the right ideas to move the country forward. Yet the recurrence of collective madness in history scares me. It should scare you too. The next American presidential election should, one hopes, deal such a terrific blow to the forces of madness that have taken over one party in particular that the party is forced to reconstitute itself over the next four years.

“From Pickup Artist to Pariah” buries the lead

In “From Pickup Artist to Pariah: Jared Rutledge fancied himself a big man of the ‘manosphere.’ But when his online musings about 46 women were exposed, his whole town turned against him,” oddly, the most interesting and perhaps important parts of the article are buried or de-emphasized:

In 2012, he slept with three women; in 2013, 17; in 2014, 22. In manosphere terms, he was spinning plates — keeping multiple casual relationships going at once.

In other words… it worked, at least according to this writer. And:

I met four women at a downtown bar. All were on Jared’s List of Lays. Over cocktails and ramen, the women told me about Jared’s sexual habits, his occasional flakiness, his black-and-white worldview. [. . .] They seemed most troubled by just how fine he had been to date. “I really liked him,” said W. “And that’s what makes me feel so gullible.”

In other words… it worked, at least according to the women interviewed as framed by this writer.

How might a Straussian read “From Pickup Artist to Pariah?” Parts of the article, and not those already quoted, could be inserted directly into Onion stories.

The first sentence of Public Enemies: Dueling Writers Take On Each Other and the World is “Dear Bernard-Henri Lévy, We have, as they say, nothing in common—except for one essential trait: we are both rather contemptible individuals.” Is being contemptible sometimes a sign of status? As BHL implies, the greatest hatred is often reserved for that which might be true.*

In other news, the Wall Street Journal reports today that “Global Temperatures Set Record for Second Straight Year: 2015 was the warmest year world-wide since reliable global record-keeping began in 1880.”

In Julie Klausner’s book, I Don’t Care About Your Band: What I Learned from Indie Rockers, Trust Funders, Pornographers, Felons, Faux Sensitive Hipsters, and Other Guys I’ve Dated, she writes at the very end, “Around this time of graduation or evolution or whatever you call becoming thirty, I started fending off the guys I didn’t like before I slept with them. It was the first change I noticed in my behavior that really marked my twenties being over.” Maybe Rutledge’s mistake is one of tone: comedians are sometimes forgiven and sometimes thrown into the fire. No one is ever forgiven seriousness.


* Houellebecq also writes, “there is in those I admire a tendency toward irresponsibility that I find only too easy to understand.” He is not the first person to admire irresponsibility. In Surely You’re Joking, Mr. Feynman!, Richard Feynman says:

Von Neumann gave me an interesting idea: that you don’t have to be responsible for the world that you’re in. So I have developed a very powerful sense of social irresponsibility as a result of Von Neumann’s advice. It’s made me a very happy man ever since. But it was Von Neumann who put the seed in that grew into my active irresponsibility.

The more you do it the better you get: Why Americans might not work less

In “Why Do Americans Work So Much?”, Rebecca Rosen poses some answers to the question in the title, most notably, “American inequality means that the gains of increasing productivity are not widely shared. In other words, most Americans are too poor to work less.” I’m not convinced this is true; one problem we have involves the difficulty or illegality of building and selling relatively inexpensive housing in high-demand areas (see here and here for two discussions, and please don’t leave a comment unless you’ve read both links thoroughly). Some of what looks like financial “inequality” is actually people paying a shit ton of money for housing in New York, Seattle, L.A., and similar places, rather than living in cheaper places like Houston or Phoenix. Homeowners who vote in those areas vote to keep housing prices high by strangling supply.

Plus, I’d add that, per “The inequality that matters II: Why does dating in Seattle get left out?”, financial inequality isn’t the only kind, though for some reason it’s gotten an overwhelming amount of play in the press over the last ten years. I’ve seen people speculate that financial inequality is fun to attack because money can easily be taken from someone at the point of a gun and given to someone else, while other forms of inequality like beauty or a playful disposition can’t be taken so easily.

Still, there’s one other important factor that may be unexplored: demanding and remunerative cognitive jobs may not be easy to partition. That is, one person doing a cognitively demanding job 40 hours per week is far more efficient than two people each doing the same job for 20 hours a week. And that same person may be even more efficient working 50 or 60 hours a week.

Let me explain. With some classic manufacturing tasks—let’s imagine a very simple one, like turning a hex key—you can do x turns per hour times y hours. With many high-value jobs, and even ambiguously defined median-value jobs, that isn’t true. In my not-tremendous-but-not-zero experience in coding, having one person stuff as much of the codebase—that is, the problem space—into their head as possible makes the work better. That person learns a lot about edge cases and keeps larger parts of the codebase in mind. The cost of attempting to explain the codebase to another person is much higher than the cost of keeping it all in one head.

Among professors, the ones who’ve read the most and written the most are usually exponentially better than those who have read 75% and written 75% as much. They’re 5x as valuable, not 33% more valuable.

One sees similar patterns recur across cognitively demanding fields. Once a person has put in the 10,000 hours necessary to master a field, each additional hour is highly valuable, in part because the problem domain is already well understood. That’s part of the reason law firms charge so much for top lawyers. Those top lawyers have skills that can only be developed through extensive, extreme practice.

I see this effect in grant writing: we don’t split proposal tasks because doing so vastly increases the communication overhead. I’m much more efficient in writing an entire proposal than two or three people could be each writing parts. We’ve rescued numerous doomed proposals from organizations that attempted this approach and failed.

Many of you have probably heard about unfinished and perhaps unfinishable projects (often initiated by government). Here’s a list of famous failed software projects. Some of those projects simply become so massive that the latency and limited bandwidth of communication between workers overwhelm the actual work. The project becomes all management and no substance. As I mentioned in the previous paragraph, we’ve seen many grant proposals fail because of too many writers and no real captain. At least with proposals, the final work product is sufficiently simple that a single person can write an entire narrative. In software, thousands of people or more may contribute to a project (depending on where you draw the line, hundreds of thousands may contribute: does anyone who has worked on the compiler, the version control system, or the integrated development environment (IDE) count?).

Put these trends together and you get people working more because the costs of splitting up tasks are so high. If you put five junior lawyers on a project, they may come up with a worse answer, or set of answers, than a single senior lawyer who has the problem space in his head. The same thing could conceivably be true in software as well. The costs of interconnection are real. This will increase inequality, because top people are so valuable, while simultaneously meaning that a person can’t earn x% of the income through x% of the work. A person must do 100% or not compete at all.
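To make the partitioning intuition concrete, here is a minimal toy sketch (my own, not anything from Rosen, Hanson, or Graham) of what happens when a fixed 40-hour chunk of cognitive work is split among n people and every pair of people pays a small weekly coordination cost. The two-hours-per-pair figure is invented purely for illustration; only the shape of the curve matters.

# Toy model: splitting a fixed 40-hour cognitive task among n workers,
# where every pair of workers pays a coordination cost. Overhead grows
# with the number of communication channels, n * (n - 1) / 2, which is
# the Brooks's-Law-style pattern. The pair_overhead figure is an
# assumption for illustration, not data.

def effective_hours(n_workers, total_hours=40.0, pair_overhead=2.0):
    channels = n_workers * (n_workers - 1) / 2  # pairwise communication channels
    return max(total_hours - pair_overhead * channels, 0.0)

for n in (1, 2, 3, 5, 8):
    print(n, "workers:", effective_hours(n), "productive hours")

Under those made-up numbers, one person gets 40 productive hours, two people get 38 between them, five get 20, and eight get nothing done at all: coordination swallows the work, which is roughly the pattern the failed-software-project list above suggests.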

This is also consistent with changes in financial remuneration, which the original author considers. It’s also consistent with Paul Graham’s observations in “The Refragmentation.”

Finally, there may also be signaling issues. Here is one Robin Hanson post on related concerns. At some point, Hanson described working for Lockheed before he did his Ph.D., and if I recall correctly he tried to work fewer hours for commensurately lower pay, and that did not go over well. Maybe Lockheed was cognizant of the task-splitting costs I note above, or maybe they were more concerned with what Hanson was communicating about his devotion to the job, or what example he’d set to the others.

So earning may not be scalable. It may be binary. We may not be “working” less because we’re poor. We may be working the way we do because the nature of many tasks and occupations is binary: you win big by working big hours, or you don’t work much at all.

EDIT: See also “You Don’t Need More Free Time,” which argues that we may not need more free time so much as the right free time—when our friends are free. I also wonder whether too much “free” time is enervating in its own way.