Thoughts on “The Anthropology of Childhood” by David Lancy

As noted previously, The Anthropology of Childhood is excellent, and now I can say that it is excellent throughout. There are too many points to summarize the book effectively or even to hit many of its main points. One could productively read it with Bryan Caplan’s Selfish Reasons to Have More Kids, since both books argue, sometimes implicitly, that upper-middle class Western child-raising practices have become crazed, neurotic, and conceivably even counter-productive (and almost certainly counter-productive in life-satisfaction terms). Consider this example, from Anthropology:

An interesting contrast can be made with WEIRD [Western, Educated, Industrial, Rich, and Democratic] society, where girls are not usually assigned sibcare [sibling care] duties and where young mothers labor alone without the guidance of their old female relatives. “The relative isolation of the nuclear family . . . means that each woman rears her newborn infant from scratch” and young, urban mothers are unprepared for squalling, active, and very unhappy babies (Hubert 1974: 46–47). The foibles of clueless parents have proven to be quite entertaining, as evidenced by “reality” TV shows such as Nanny 911, which aired in the USA between 2004 and 2007, and Supernanny (2004–2011), in which a competent nanny brings order and harmony to dysfunctional families.

Yet almost no one considers this point, or many similar points.

Wealth may enable a wide range of non-adaptive behaviors and beliefs that can be sustained primarily because we’re rich enough to sustain them. Bedrock beliefs held by many Westerners about the nature of humans and families are actually culturally selected, and some of those beliefs surprised me. Nerds, however, may be unpopular because nerds often attempt to interject facts into belief- and feeling-based conversations; I suspect many citations to The Anthropology of Childhood, and especially the sections on infanticide, will not go down well.

Still, self-deception also helps explain why so many people adopt seemingly non-functional behaviors; Charles Murray describes many of those behaviors in Coming Apart: The State of White America, 1960–2010, a book that, like many true books that puncture popular beliefs, is deeply unpopular in many quarters. Most people can, if they choose to, easily see whether their partners exhibit traits related to fidelity, tenacity, conscientiousness, grit, and so forth—but many if not most of us choose to ignore these obvious signals. Our values are observed everywhere.

There seems to be a growing bifurcation in American society between crazed, neurotic, and anxious upper-middle class two-parent households in which little Madison needs ice-skating lessons, soccer practice, oboe lessons, and round-the-clock enrichment activities, or else she’ll never be a “success” and will become a drug-addicted prostitute without even a public-school degree, and single-parent low-income households in which any babysitter is a good babysitter and survival is everything. The former need to chill out and the latter… I actually don’t see a good public policy for the latter, though both the political left and right have many strongly held opinions, none of which have done much to countervail the larger trends Murray describes. Another writer, Michel Houellebecq, describes them as well, though much more obliquely.

The Anthropology of Childhood is, as Michael Erard suggests in the New York Times, going to become my go-to baby gift for those who have recently spawned, though careful readers may find sections disconcerting:

Another common tactic used by new mothers is to exaggerate the resemblance between the newborn and their husband [. . .] In spite of the confidence with which humans claim “he looks just like his father,” experimental studies show that babies cannot be reliably paired to their parents on the basis of appearance (Pagel 2012: 315). Studies in our monogamous, adultery-condemning society have shown that 10 percent of men designated as the biological father of a particular child are not (Buss 1994: 66–67), so the baby’s anonymous appearance confers a survival advantage.

If 10 percent of men designated as the biological father of a particular child are not, one has to ask what kinds of fictions prevent a society in which DNA testing is cheap and easy from making such testing standard practice. The answers may get very ugly very fast.

I want badly for Lancy to write an advice column; something like Cheryl Strayed’s Tiny Beautiful Things: Advice on Love and Life from Dear Sugar is beautifully written and yet so utterly conditioned by contemporary American beliefs, and so utterly unfamiliar with cross-cultural comparisons or evolutionary biology. Read it anyway—that beauty! that feeling!—but read it with Lancy. Compare and contrast. Imagine what Lancy might say about the myriad of problems medicated, neurotic Americans experience, or think we experience. Most contemporary advice columnists are as much repositories of conventional thinking as religious figures were a century or two ago. Lancy is different. Lancy knows things. But the things he knows we instinctively want to reject—which is why reading him is so valuable.

The stupidity of what I’m doing and the meaning of real work: Reading for PhD comprehensive exams

Last weekend, I wrote a flurry of posts after months of relative silence because I needed to do real work.

This might sound strange: I am doing a lot of things, especially reading, but all of it is make-believe, pretend work. That’s because the primary thing I’m doing is studying for PhD comprehensive exams in English lit. The exam set is structured in four parts: three four-hour written segments and a single oral exam, on topics related to stuff that’s not very important to me and probably not very important to most people. The exams also aren’t very relevant to being an English professor, because the key skill that English professors possess and practice is writing long-form essays/articles that are published in peer-reviewed journals. The tests I’m taking don’t, as far as I can tell, map very effectively to that skill.

As a consequence, the tests, although very time consuming, aren’t very good proxies for what the job market actually wants me to do.*

Consequently, PhD exams—at least in English—aren’t real work. They’re pretend work—another hoop to be jumped through on the way to getting a union card. Paul Graham makes a useful distinction in “Good and Bad Procrastination,” when he says that “Good procrastination is avoiding errands to do real work.” That’s what I’ve done through most of grad school, and that’s part of the reason why I have a fairly large body of work on this blog, which you can obviously read, and a fairly large body of fiction, which you can’t (at the moment, but that’s going to change in the coming months). To Graham, the kind of small stuff that represents bad procrastination is “Roughly, work that has zero chance of being mentioned in your obituary.” Passing exams has zero chance of being mentioned in my obituary. Writing books or articles does.** PhD exams feel like bad procrastination because they’re not really examining anything useful.

They’re also hard, but hard in the wrong way, like picking patterns out of noise. Being hard in the right way means the soreness you get after working out, or the click when a challenging math problem suddenly resolves. The quasi-work I’m doing is intellectually unsatisfying—the mental equivalent of eating ice cream and candy all day, every day. Sure, they’re technically food, but you’re going to develop some serious problems if you persist in the ice cream and candy diet. The same is true of grad school, which might be why so many people emerge from it with a lugubrious, unpalatable writing style. Grad school doesn’t select or train for style; it selects and trains for a kind of strange anti-style, in which saying less in more words is rewarded. It’s the kind of style I’m consciously trying to un-cultivate, however hard the process might be, and this blog is one outlet for keeping the real writer alive in the face of excessive doses of tedious but canonized work and literary theory. Exams, if anything, reinforce this bogus hardness. If I’m ever in a position of power in an English department with a grad program, I’m going to try to offer an alternative to conventional exams, and say that four to six publishable, high-quality papers can or should take their place. That, at least, mirrors the skills valued by the job market.

The bogosity of exams relates to a separate problem in English academia, which I started noticing when I was an undergrad and have really noticed lately: the English curriculum is focused on the wrong thing. The problem can be stated concisely: Should English departments teach content (like, say, Medieval poetry, or Modernist writers), or skills (like writing coherently and close reading)? Louis Menand describes the issue in The Marketplace of Ideas:

[C]ompare the English departments at two otherwise quite similar schools, Amherst and Wellesley. English majors at Wellesley are required to take ten English department courses [. . .] All English majors must take a core course called ‘Critical Interpretations’; one course on Shakespeare; and at least two courses on literature written before 1900 [. . .] The course listing reflects attention to every traditional historical period in English and American literature. Down the turnpike at Amherst, on the other hand, majors have only to take ten courses ‘offered or approved by the department’—in other words, apparently, they may be courses in any department. Majors have no core requirement and no period requirements. (Menand 89-90)

Most departments right now appear to answer “content.” Mine does. But I increasingly think that’s the wrong answer. I’m not convinced that it’s insanely important for undergrads to know Chaucer, or to have read Sister Carrie and Maggie: Girl of the Streets, or to have read any particular body of work. I do think it’s insanely important for them to have very strong close reading skills and exceptional writing skills. Unfortunately, I appear to be in the minority of professional Englishers in this respect. And I’m in grad school, where the answer still mostly appears to be “content,” and relatively few people appear to be focusing on skills; those are mostly left to individuals to develop on their own. I don’t think I’ve heard anyone discuss what makes good writing at conferences, in seminars, or in peer-reviewed papers (MFA programs appear to be very interested in this subject, however, which might explain some of their rise since 1945).

As Menand points out, no one is sure what an “‘English’ department or degree is supposed to be.” That’s part of the field’s problem. I think it’s also part of the reason many students are drawn to creative writing classes: in those, at least the better ones, writing gets taught; the reading is more contemporary; and I think many people are doing things that matter. When I read the Romantic Poets, I mostly want to do anything but read the Romantic Poets. Again, I have nothing against the Romantic Poets or against other people reading the Romantic Poets—I just don’t want to do it. Yet English undergrad and grad school forces the reading of them. Maybe it should. But if so, it should temper the reading of them with a stronger focus on writing, and what makes good writing.

Then again, if English departments really wanted to do more to reward the producing of real content, they’d probably structure the publishing of peer-reviewed articles better. Contrary to what some readers have said in e-mails to me, or inferred from what I’ve written, I’m actually not at all opposed to peer review or peer-reviewed publications. But the important thing these days isn’t a medium for publishing—pretty much anyone with an Internet connection can get that for free—but the imprimatur of peer review, which says, “This guy [or gal] knows what he’s talking about.” A more intellectually honest way to go about peer review would be to have every academic keep a blog or website. When he or she has an article ready to go, he or she should post it, send a link to an editor, and ask the editor to kick it out to a peer reviewer. The reviewer’s comments, whether anonymous or not, should be appended to the article. If it’s accepted, it gets a link and perhaps the full text copied and put on the “journal’s” main page. If it isn’t, readers can judge its merits or lack thereof for themselves.
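The workflow described above can be sketched in a few lines of code. This is a hypothetical illustration of the proposal, not an existing system; the names (`Article`, `Review`, `Journal`, `submit`) are all invented for the sketch:

```python
from dataclasses import dataclass, field

@dataclass
class Review:
    reviewer: str      # may be "anonymous"
    comments: str      # appended to the article either way
    accept: bool

@dataclass
class Article:
    author: str
    url: str           # the post on the author's own blog or site
    reviews: list = field(default_factory=list)

class Journal:
    """A 'journal' reduced to its essential role: an index of vetted links."""
    def __init__(self):
        self.accepted = []   # links featured on the journal's main page

    def submit(self, article: Article, review: Review) -> bool:
        # Comments are attached whether or not the piece is accepted,
        # so readers can judge rejected work for themselves.
        article.reviews.append(review)
        if review.accept:
            self.accepted.append(article.url)
        return review.accept

# Usage: the author posts first, then asks for the imprimatur.
journal = Journal()
paper = Article(author="A. Scholar", url="https://example.edu/~scholar/essay")
journal.submit(paper, Review("anonymous", "Sound argument; minor edits.", True))
```

The point the sketch makes is that the scarce good here is the stamp of review, not the distribution channel: the article lives at the author’s URL regardless of the verdict.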

The sciences arguably already have this, because important papers appear on arXiv.org before they’re officially “published.” But papers in the sciences appear to be less status-based and more content-based than papers in the humanities.

I think this change will happen in the humanities, very slowly, over time; it won’t be fast because there’s no reason for it to be fast, and the profession’s gatekeepers are entrenched and have zero incentive to change. If anything, they have a strong incentive to maintain the system, because doing that raises their own status and increases their own power within the profession. So I don’t foresee this happening, even if it would be an important thing. But then again, academics are almost always behind the important thing: the important thing is happening in some marginal, liminal space, and academics inhabit a much more central area, where it’s easy to ignore stuff at the margins. I don’t see that changing either, especially in a world where many people compete for few academic slots. In that world, pointless hoop-jumping is going to remain.


* There’s a vast literature in industrial organization on the subject of hiring practices, and most of that literature finds that the most effective way to hire workers is to give them an IQ test and a work-skills or work-practice test. The former is effectively illegal in the U.S., so the best bet is to give workers a test of the thing they’ll actually be called on to do.

** I also consciously ask myself this question set:

In his famous essay You and Your Research (which I recommend to anyone ambitious, no matter what they’re working on), Richard Hamming suggests that you ask yourself three questions:

1. What are the most important problems in your field?

2. Are you working on one of them?

3. Why not?

I have an answer to number three, but it doesn’t seem like a very good one.

Student choice, employment skills, and grade inflation

Edward Tenner’s Atlantic post asks, “Should We Blame the Colleges for High Unemployment?” and mostly doesn’t answer the question, instead focusing on employer hiring behavior. But I’m interested in the title question and would note that the original story says, “Fundamentally, students aren’t learning [in college] what they need to compete for the jobs that do exist.”

That may be true. But colleges and universities, whatever their rhetoric, aren’t bastions of pure idealistic knowledge; they’re also businesses, and they respond to customer demand. In other words, student demand. Students choose their own major, and it isn’t exactly news that engineers, computer scientists, mathematicians, and the like tend to make much more money than other majors, or that people in those disciplines are much more likely to find jobs. Students, however, by and large don’t choose them: they choose business, communications (“comm” for the university set), and sociology—all majors that, in most forms in most places, aren’t terribly demanding. I’ve yet to hear an electrical engineering major say that comm was just too hard, so she switched to engineering instead. As Richard Arum and Josipa Roksa show in Academically Adrift: Limited Learning on College Campuses, those majors aren’t, on average, very hard either, and they don’t impart much improvement in verbal or math skills. So what gives?

The easiest answer seems like the most right one: students aren’t going to universities primarily to get job skills. They’re going for other reasons: signaling; credentialing; a four-year party; to have fun; choose your reason here. And universities, eager for tuition dollars, will cater to those students—and to students who demand intellectual rigor. The former get business degrees and comm, while the latter get the harder parts of the humanities (like philosophy), the social sciences (like econ), or the hard sciences. It’s much easier to bash universities, with the implication of elaborately educated dons letting their product be watered down or failing, than it is to realize that universities are reacting to incentives, just as it’s much easier to bash weak politicians than it is to acknowledge that politicians give voters what they want—and voters want higher services and lower taxes, without wanting to pay for them. Then people paying attention to universities or politics notice, write articles and posts pointing out the contradiction, but fail to ask why the contradiction exists.

You may also notice that most people don’t appear to choose schools based on academics. They choose schools based on proximity, or because their sports teams are popular. Indeed, another Atlantic blogger points out that “Teenagers [. . .] are apt to assemble lists of favored colleges through highly non-scientific methods involving innuendo, the results of televised football games, and what their friend’s older brother’s girlfriend said that one time at the mall.” Murray Sperber especially emphasizes sports in his book Beer and Circus: How Big-Time Sports Is Crippling Undergraduate Education.

By the way, this does bother me at least somewhat, and I’d like to imagine that universities are going to nobly hold the line against grade and credential inflation, against the desires of the people attending them. But I can also recognize the gap between my ideal world and the real world. I’m especially cognizant of the issue because student demand for English literature courses has been falling for decades, as Louis Menand says in The Marketplace of Ideas:

In 1970–71, English departments awarded 64,342 bachelor’s degrees; that represented 7.6 percent of all bachelor’s degrees, including those awarded in non-liberal arts fields, such as business. The only liberal arts category that awarded more degrees than English was history and social science, a category that combines several disciplines. Thirty years later, in 2000–01, the number of bachelor’s degrees awarded in all fields was 50 percent higher than in 1970–71, but the number of degrees in English was down both in absolute numbers—from 64,342 to 51,419—and as a percentage of all bachelor’s degrees, from 7.6 percent to around 4 percent.

Damn. Students, for whatever reason, don’t want English degrees as much as they once did. As a person engaged in English Literature grad school, this might make me unhappy, and I might argue for the importance of English lit. Still, I can’t deny that more people apparently want business degrees than English degrees, even if Academically Adrift demonstrates that humanities degrees actually impart critical thinking and other kinds of skills. I could blame “colleges” for this, as Tenner does; or I could acknowledge that colleges are reflecting demand, and the real issue isn’t with colleges—it’s with the students themselves.
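Menand’s figures are internally consistent, as a quick back-of-the-envelope check shows (treating his rounded numbers as exact):

```python
# Figures from the Menand passage quoted above.
english_1970 = 64_342
share_1970 = 0.076                       # English share of all BAs, 1970-71

total_1970 = english_1970 / share_1970   # roughly 847,000 bachelor's degrees
total_2000 = total_1970 * 1.5            # "50 percent higher" thirty years later

english_2000 = 51_419
share_2000 = english_2000 / total_2000   # roughly 0.040, i.e. "around 4 percent"
```

The implied totals—roughly 847,000 degrees awarded in 1970–71 and about 1.27 million in 2000–01—match the quoted percentages: English fell by about a fifth in absolute terms while the overall pool grew by half.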

Non-Places: Introduction to an Anthropology of Supermodernity — Marc Augé

Marc Augé’s Non-Places: Introduction to an Anthropology of Supermodernity is fascinating because it describes a process and some places that almost all of us feel like we’ve been to. In my post about Lewis Hyde’s The Gift, I wrote about one such bureaucratized space in the form of airports:

As I write this, I sit in a Tucson airport bar. Airports have everything wrong with them: they are transitional, one-off spaces filled with strangers, the “restaurants” they offer consist of pre-made food with character slightly above a TV dinner, and for some reason we as a society have decided that Constitutional rights and privacy don’t apply here. People I don’t know can stop me at will, and merely flying requires that I submit to security theater that is simultaneously ineffective and invasive. Everything is exorbitantly expensive but not of particularly high quality. Menus don’t have beer prices on them.

The airport, in short, is designed to extract money from a captive audience; this might be in part why I don’t care much for sports stadiums, Disneyland, and other areas where I feel vaguely captive.

And it’s miserable, at least to me, and Augé traces that feeling at least partially to a place’s relationship, or lack thereof, with history. His book is useful because it offers a theoretical framework for understanding why we think of some places the way we do, and frustrating because it’s written in French academic-ese. John Howe translates it but can’t change the fact that most of the book is actually concerned with how ethnologists view places. In other words, the major action described by the title isn’t reached until about two thirds of the way through the book. And it takes until page 94, nearly the end, to get a somewhat clear definition of what constitutes a “non-place”:

Clearly the word ‘non-place’ designates two complementary but distinct realities: spaces formed in relation to certain ends (transport, transit, commerce, leisure), and the relations that individuals have with these spaces. Although the two sets of relations overlap to a large extent, and in any case officially (individuals travel, make purchases, relax), they are still not confused with one another; for non-places mediate a whole mass of relations, with the self and with others, which are only indirectly connected with their purposes. As anthropological places create the organically social, so non-places create solitary contractuality.

Any time someone uses “clearly” or “obviously,” they should have their text examined more carefully, because anything that is genuinely clear or obvious doesn’t need the modifier. The text itself is unsure: what are “certain ends” as opposed to “non-certain ends?” I’m not sure: maybe he means where people live. What is the ‘organically social?’ Presumably something like neighborhoods, common cause, not “Bowling Alone” and the like. The gap between what is said and what is probably meant looms large with these phrases, even if the passage as a whole at least yields some kind of framework for discussing the problem.

I would say that non-places are basically commerce or exchange economies, while places are gift economies. In other words, in non-places one cannot have any real recourse to common humanity: you can’t ask to borrow something, to be done a favor, or to expect to know the myriad of strangers you cross. In a place, you can expect to have local knowledge, to not have to rely entirely on signs, to be able to decorate it as you will, and to have the opportunity for whimsy.

One thing I like about universities is that they do a decent job of being both gift and commerce economies, thanks in part to state subsidies: although my students have to pay the bursar’s office to take my class, once they are within it, we do not discuss or exchange money directly, and this mediating bureaucratic influence helps maintain something closer to a gift economy. Most professors I have met are more than willing to give their time to those who do not waste it and who wish to learn. By “do not waste it,” I mean those who are prepared, conscientious, and willing to read or experiment per the professor’s instructions, as opposed to the inevitable students who, at least in English, want the professor to read a half-baked paper the night before it is due in order to receive a higher grade. Professors are willing, in short, to make what Augé calls a “relational” space that is “concerned with identity,” or, as his long quotes have it:

If a place can be defined as relational, historical and concerned with identity, then a space which cannot be defined as relational, or historical, or concerned with identity will be a non-place. The hypothesis advanced here is that supermodernity produces non-places, meaning spaces which are not themselves anthropological places and which, unlike Baudelairean modernity, do not integrate the earlier places: instead these are listed, classified, promoted to the status of ‘places of memory’, and assigned to a circumscribed and specific position. A world where people are born in the clinic and die in hospital, where transit points and temporary abodes are proliferating under luxurious or inhuman conditions (hotel chains and squats […]) (78 – 9).

This is the sort of assertion that almost works: notice the major “if” at the start, and how the passage never quite defines the terms relational, historical, and concerned with identity: although airports feel like they have none of those attributes today, they might a hundred years from now. Maybe airports will one day be places in the sense that Belltown or the University District in Seattle are. It’s hard to say, even if I feel like the idea that “supermodernity produces non-places” is correct, since those kinds of spaces (like airports, as stated above) produce the unhappy torpor of being totally unmoored and buffeted by bureaucratic forces that cannot be directly negotiated with.

The last comparison Augé uses is curious: hotel chains feel quite different from squatter camps, although I only have direct experience of the former. And being born in the clinic and dying in the hospital sounds like an improvement over being born in a hut and dying in a house, if the latter involves an earlier death. And what it means to be modern is something that seems like it’s being ceaselessly re-described—to be modern is to debate what it means to be modern, or to be acutely aware of history. This is another way of thinking about connections between people, among groups, and the like. Here’s one way Augé gets at that:

Collectivities (or those who direct them), like their individual members, need to think simultaneously about identity and relations; and to this end, they need to symbolize the components of shared identity (shared by the whole group), particular identity (of a given group or individual in relation to the others) and singular identity (what makes the individual or group of individuals different from any other). The handling of space is one of the means to this end, and it is hardly astonishing that the ethnologist should be tempted to follow in reverse the route from space to the social, as if the latter had produced the former once and for all (Augé 51).

Neither wholly produces the other, but they both work systematically, space constraining daily contact and time constraining members in terms of particular history. Notice the idea of the “reverse […] route from space to the social,” although the social also affects space. In Jane Austen this happens less, but the space of the manor or inheritance affects everything the characters do: think of the effect of the entailment on the actions of the characters in Pride and Prejudice. Love does not conquer all in that novel, even if it affects relations with space and vice-versa. It is hard to imagine Charlotte Lucas loving the irritating Mr. Collins if not for his eventual, deferred wealth.

The book’s penultimate paragraph suddenly moves away from place and toward humanity:

One day, perhaps, there will be a sign of intelligent life on another world. Then, through an effect of solidarity whose mechanisms the ethnologist has studied on a small scale, the whole terrestrial space will become a single place. Being from earth will signify something. In the meantime, though, it is far from certain that threats to the environment are sufficient to produce the same effect. The community of human destinies is experienced in the anonymity of non-place, and in solitude (120).

The idea of distance and perspective is evoked from the first words: “one day” implies a day so distant that it cannot be envisaged, only held up as a trope. And the sense of vastness continues, with the “whole terrestrial space,” as opposed to the way we divide up space now, and the possibilities that such an orientation, however improbable it is to come to pass, might bring. I hope we get there, unlikely though it may seem, and unlikely as it is that non-places will bring us closer to place.
