Hulu Owners: Should We Shoot Ourselves in the Foot?

I don’t own a stereotypical TV and almost never watch video that originally appeared on conventional TV stations. I’ve also never had a subscription to cable TV. That being said, I will occasionally use Hulu to watch Glee, which is a lot of fun and not stupid and tedious—unlike most TV shows. I’m apparently not the only person who noticed this; the L.A. Times published “Hulu is popular, but that wasn’t the goal: Its owners — the parents of ABC, Fox and NBC — fear the TV website may hurt their bottom lines.”

Now the website faces changes that could curtail its trove of offerings or require users to pay for episodes they currently watch for free. Once hailed as the networks’ solution in taming the Internet, Hulu’s stunning success is now undermining the very system it was designed to protect, forcing the site’s owners to reconsider what Hulu should be.

The big problem, however, is that Hulu doesn’t just compete against network TV and cable. It also competes against BitTorrent sites. Now, because I enormously respect copyright law, I would never, ever, use such sites, even though they’re really convenient. Never. Just as, at 16, I didn’t use Napster the way all my friends did to download music.

In “The Other Road Ahead,” Paul Graham says, “Near my house there is a car with a bumper sticker that reads ‘death before inconvenience.’ Most people, most of the time, will take whatever choice requires least work.” In this respect, I am most people, and people who want to watch TV are probably thinking the same thing. If Fox, ABC, and NBC don’t want to become tomorrow’s newspapers, they might want to contemplate what “death before inconvenience” means.

The Case Against Adolescence: Rediscovering the Adult in Every Teen — Robert Epstein

The Case Against Adolescence should be a better book than it is, much like Sex at Dawn. The central argument is that we create the contemporary adolescent experience (angst, nihilism, penchants for bad TV shows, temper storms) through social and legal restrictions on teenagers that deprive them of any real ability to be or act like adults. I’m inclined to agree, but the book would’ve been greatly helped by peer review.

Epstein is not the first person to notice this. In “Why Nerds are Unpopular,” Paul Graham says that “I think the important thing about the real world is not that it’s populated by adults, but that it’s very large, and the things you do have real effects.” This means that teenagers have no real challenges—high school is so fake a challenge that a lot of people find it poisons education for them—and that they become “neurotic lapdogs”:

As far as I can tell, the concept of the hormone-crazed teenager is coeval with suburbia. I don’t think this is a coincidence. I think teenagers are driven crazy by the life they’re made to lead. Teenage apprentices in the Renaissance were working dogs. Teenagers now are neurotic lapdogs. Their craziness is the craziness of the idle everywhere.

“Coeval” is correct, but it would be more accurate to say that being a teenager was enabled by growing economic wealth more than anything else. Once young people didn’t have to start working immediately, they didn’t. This started happening on a somewhat wide scale in the late nineteenth and early twentieth centuries. It accelerated after World War II. By now, laws practically prevent people from becoming adults. The question of why and how this happened, however, remains open.

The Case Against Adolescence offers a dedication: “To Jordan and Jenelle, may you grow up in a world that judges you based on your abilities, not your age.” Unfortunately, we’re not likely to get such a world in the near future because bureaucratic requirements demand hard age cutoffs instead of real judgments. Should you drive when you’re “ready” to drive? How will the DMV decide? It can’t, so laws make 16 the magic age. Based on what I’ve seen at the University of Arizona, most students are “ready” to drink in the sense that they make the choice to do so of their own free will—despite the nominal legal drinking age of 21. But “ready to” can’t be readily gauged by a cop looking at a driver’s license, so we have to choose arbitrary cutoffs.

Many students appear to feel done with high school by the time they’re 16—but high school continues until 18, so the vast majority stay—not “based on [their] abilities,” but on their age. Without those bureaucratic requirements, judgment based on abilities might be more possible. In some realms, it is: this might be why the image of the teenage hacker has become part of pop culture. In computer programming, one can judge immediately whether the code works and does what its author says it should. There isn’t really such a thing as code that is “avant-garde” or otherwise susceptible to influence and taste. In addition, computers are readily available, and posting work online lets one adopt personas that may be “older” than the driver’s-license age. As such, working online may alleviate problems with age, sex, race, and other such issues. Online, no one automatically knows you’re a teenager. Offline, it’s obvious.

One reason why contemporary teenagers act the way they do might simply be the “role models” they have—who tend to be each other. As Epstein says, “Because teens in preindustrial countries spend most of their time with adults—both family members and co-workers—adults become their role models, not peers. What’s more, their primary task is not to break free of adults but rather to become productive members of their families and their communities as soon as they are able.” But the term “preindustrial countries” sounds wrong: countries didn’t really coalesce into more than city-states until after the industrial revolution. A lot of contemporary political problems arise from imposing European “countries” on territories with diverse tribal or clan identities. Furthermore, I’m not sure that hunter-gatherers and agrarian societies can be lumped together like this. And I don’t think most agrarians would think of others as “co-workers,” an idea that comes from modern offices.

That’s one example of the book’s sloppiness. The other is simpler: our economy increasingly rewards advanced education, which means that the economic productivity of people without it is going down. So we might have a very good reason for forcing teenagers to attend school for long periods of time, namely that most won’t be able to accomplish much without it. The key word is “most”: there are obvious exceptions, and the kinds of people likely to be reading this blog are more likely to be the exceptions. Epstein observes that for most of human existence, people we now call teenagers were more like adults. He’s right. But there’s a problem with his argument.

Early on, he says, “For the first time in human history, we have artificially extended childhood well past puberty. Simply stated, we are not letting our young people grow up.” The reasons for this are complex, and Epstein suggests an evolutionary narrative for greater capability earlier in history than we might now assume: “our young ancestors must have been capable of providing for their offspring. . . and in most other respects functioning fully as adults” because, if they couldn’t, “their young could not have survived.” This is true, but most of human history also hasn’t occurred in industrial and post-industrial times. We’re living in a weird era by almost any standard, so the reason teenagers are treated like teenagers might come down to economics.

They can’t produce much until they have a lot of education, and productive adults don’t usually have time to train them. As Graham says of schools, “In fact their primary purpose is to keep kids locked up in one place for a big chunk of the day so adults can get things done. And I have no problem with this: in a specialized industrial society, it would be a disaster to have kids running around loose.” Teens might have use for adults, but not a lot of adults have much use for teenagers. In my parents’ business, Seliger + Associates, employing me was probably a net drag until I was 17 or 18, and even then I was only productive because I’d been working for them for so long. Epstein underestimates what a “specialized industrial society” looks like. The larger point, that young people are probably going to be more capable if we let them be, is true. But the flip side of positive capability is the negative possibility of failure.

Epstein does anticipate part of Graham’s argument:

[. . .] in most industrialized countries today teens are almost completely isolated from adults; they’re immersed in ‘teen culture,’ required or urged to attend school until their late teens or early twenties, largely prohibited from or discouraged from working, and largely restricted, when they do work, to demeaning, poorly paid jobs.

But he doesn’t elaborate on why this might be. Delayed adulthood can have many causes, and he sometimes confuses correlation with causation: just because men and women marry later than they used to, as Epstein argues on page 30, doesn’t mean that they’re delaying adulthood. It may mean they want fun, that they don’t need marriage for economic purposes, and that they don’t need marriage for sex. Disconnecting sex from marriage probably explains as much of this as anything else does.

He does notice institutionalized hypocrisy, which is useful. For example, “Whether we like the idea or not, young people who commit serious crimes are indeed emulating adults—adult behavior, adult emotions, adult ideas. They see adults on the streets, on TV, in movies, and in newspapers and magazines doing heinous things every day. What’s more, when a young person commits a crime, he or she is demonstrating control over his or her own life.” A sixteen-year-old who commits murder can be tried as an adult; a sixteen-year-old who has sex still has to be protected like a child, even if it’s the same sixteen-year-old. A twenty-year-old can send a naked picture of herself to her boyfriend, but a seventeen-year-old emulating the twenty-year-old’s behavior can’t legally do the same.

I think a lot of this has to do with parental desires: parents don’t want kids having sex because the economic consequences of pregnancy are severe and because parents are often left to clean up the financial and emotional messes in a way they don’t have to with, say, 21-year-olds. Part of this is because of social expectations, but part may still be because of economics, which is the great missing piece of The Case Against Adolescence. Robin Hanson notes the labor component of the child / adolescent argument. I think his argument is missing one major component: parents on average probably don’t want their offspring to leave school because they associate school with higher eventual earnings and economic success that will translate to social / reproductive success. So I don’t think it’s just other laborers who don’t want kids in the workforce—it’s also probably parents as a whole.

You can find more about judicial and sexual hypocrisy in Judith Levine’s book, Harmful to Minors: The Perils of Protecting Children from Sex, which should probably be better known than it is. As she says:

This book, at bottom, is about fear. America’s fears about child sexuality are both peculiarly contemporary […] and forged deep in history. Harmful to Minors recounts how that fear got its claws into America in the late twentieth century and how, abetted by a sentimental, sometimes cynical, politics of child protectionism, it now dominates the way we think and act about children’s sexuality.

We’re probably afraid of sexuality because we’re afraid of the costs of pregnancy and because of the United States’ religious heritage. Those “fears about child sexuality” are unlikely to go away in part because there is some level of rationality in them: we’re unhappy when people reproduce and can’t afford their offspring. So we call people who mostly aren’t economically viable “children,” even when they’re physiologically and psychologically not. It’s dumb, but it’s what we do.

Furthermore, if adolescence didn’t really exist until the twentieth century, we don’t really know why it didn’t. Epstein cites a 2003 New Yorker article by Joan Acocella called “Little People: When did we start treating children like children?”, which notes, “If, as is said, adolescence wasn’t discovered until the twentieth century, that may be because earlier teen-agers didn’t have time for one, or, if they did, it wasn’t witnessed by their parents.” Notice the tentativeness of this sentence: “as is said,” “that may be,” “or.” We don’t know. We might never entirely know. Contemporary adolescence might, like being overweight and having a 60″ TV, be a condition of modernity, and earlier peoples might have developed it too if they’d been rich enough.

This is the part of the post where I’m supposed to posit some solutions. Problem is, I don’t have any, or at least none that are practical. Eliminating middle school and having “high school” run from seventh to tenth grade, followed by something more like community college or a real university, would be a good place to start, along with letting people enter contracts at sixteen instead of eighteen. The probability of this happening is so low that I feel dumb for even mentioning it. Not all problems have solutions, but being aware of the problem might be a very small part of the start.

More on fiction versus nonfiction

Most of the books I’ve been wanting to write about and not getting around to are nonfiction, and I’m not sure why this is. It might be because both good and bad nonfiction are easier to write about than good fiction. Good fiction demands attention and time, which are in chronically short supply for me and virtually everyone else. So I foolishly put off writing about good fiction and instead spread my time among lesser though still interesting vessels. (This post is a follow-up to Nonfiction, fiction, and the perceived quality race, which got started with the question, “The quality of fiction seems to be decreasing relative to the quality of non-fiction, or am I just biased against active fiction writers vs. dead ones?”)

I spend a lot of my time thinking about good fiction in the context of making my own novel writing better, instead of writing about what makes good fiction good on this forum. So even though I think a lot about good novels, I write about them in a different context. For instance, the last novel I finished stole from Alain de Botton’s On Love and Rebecca Goldstein’s The Mind-Body Problem; I’ve written about both books here, but not nearly in proportion to how much I’ve been thinking about them. Alas: the novel I wrote got the most encouraging rejections, many along the lines of “I like it but can’t sell it.” If it had sold and eventually been published, I think it would be much easier for me to write about novels I care deeply about.

Even so, there are a bunch of novels—a couple by Michel Houellebecq, Elmore Leonard’s latest, Brady Udall’s The Lonely Polygamist, more by Robertson Davies—I mean to write about, but they’re outnumbered by nonfiction. This might seem strange, coming from a person in English graduate school, where we study nonfiction all the time, and where, when we study fiction, it’s often more like studying nonfiction than we care to admit.

I also simply don’t read as much fiction as I used to; I wonder if fiction is most useful to the young (who are trying to figure out who they are and how the social world works) and the old (who are trying to figure out what this crazy thing they just did actually means). A lot of people in the middle don’t appear to derive as much immediate benefit from reading fiction, although I have no data on this idea.

Finally, I can often read nonfiction much faster than fiction. This isn’t a change, but it is true: nonfiction often telegraphs where it’s going, which makes skipping large sections easier. Being able to read faster also indicates that too many books are too long, as Cowen has argued in various places, but it nonetheless means I very seldom have to invest as much in deep, close reading. I wish more nonfiction books rose to the level of deep, close reading, but few do, relative to good fiction.

George Eliot’s Daniel Deronda and Graham Handley’s description of it

In his introduction to George Eliot’s Daniel Deronda, Graham Handley writes:

Yet if all the research and criticism of Daniel Deronda, including scholarly articles of the type which discover but do not evaluate, were put together, a consensus would doubtless reveal that it is generally thought of either as a remarkable failure, a flawed success, or even an aberration unredeemed by incisive insights or distinguished writing. The character of Gwendolen is always praised; those of Mirah, Mordecai, and Daniel are often denigrated.

It is my opinion, as someone who regularly miscegenates evaluation and discovery, that the critical consensus is correct. I particularly like the description of the novel as an “aberration unredeemed by incisive insights or distinguished writing.” I’m also still amused that Handley would announce this in the introduction, as if inviting us to agree with the consensus and not his defense.

Reading Handley’s defense, it’s hard not to like the critical consensus more:

It is my contention that Daniel Deronda needs no apology. [. . .] Its greatness consists in its artistic integrity, its moral and imaginative cohesion, its subtle and consistent presentation of a character with psychological integration as its particular strength, together with what Colvin called the ‘sense of universal interests and outside forces.’

Most of those words and phrases don’t mean anything on their own. What is “moral and imaginative cohesion”? Do you get it or them with glue and spackle? And how does the “subtle and consistent presentation of a character” work? Those sound like code words for “nothing happens,” other than that characters talk to each other about who’s going to boff whom after they get married or, if we’re lucky, before.

The introduction goes on:

The form is fluid and vital, not static and diagrammatic, and the sophisticated and studied use of image and symbol is tremulous with life, with the feelings, responses, and pressures of the individual moral and spiritual experience of fictional character registering with the factual reader.

Spare me “sophisticated and studied use of image and symbol” when they aren’t deployed to tell much of a story. “Moral and spiritual experience” sounds remarkably tedious. Once again, with accolades like these, who needs haters?

I will say, however, that Daniel Deronda makes me feel incredibly virtuous for having read it, or at least parts of it. This is more or less true of every novel I’ve read whose title consists solely of a name.

Grade Inflation? What Grade Inflation?

A friend sent me “Should I feel guilty for failing my students? As an adjunct English professor, I know I shouldn’t inflate grades — but I feel like I’m ruining people’s lives,” an excerpt from “In the Basement of the Ivory Tower,” which began life as a frighteningly accurate Atlantic article.

I agree with a lot of the “Should I feel guilty for failing my students” excerpt, but I don’t think this is correct: “First of all, twenty-first-century American culture makes it more difficult to fail people.” The biggest reason it’s hard for professors to fail students, as economists like to remind us, involves incentives.

I’m a grad student in English lit, and when I go on the job market in the near future, I’m highly unlikely to be judged at all on my grade distribution; as far as I know, the University of Arizona doesn’t even send that information out. I may or may not be seriously judged on my teaching evaluations, depending on the kind of university I try to go to. I probably won’t be, or won’t be very much, but the easiest way to improve evals is to give higher grades (see “Judgment Day” for one popular explanation). Perhaps not surprisingly, students give better evals to profs who give higher grades. So professors, in the absence of any institutional or professional incentives not to give higher grades, do—at least on average, even if any single prof denies doing so (I have yet to hear anyone in a public forum announce, “I inflate grades.” I do not inflate grades).

To recap: we might be looked at poorly for having bad teaching evals, which are linked to student grades, and there’s no countervailing pressure to keep grades down. The big thing I will be judged on is academic publishing: the more of it I do, the better off I am professionally. When you give students bad grades, not only are they likely to take it out on evals, but they’re more likely to complain to your teaching advisor, show up in office hours to fight about grades, be unhappy in class, and generally take more of your time, which you can’t spend writing the academic articles that will get you a job and tenure.

Combined, these two forces encourage you to give higher grades and maximize academic publishing. The effect is probably strongest in softer subjects, like the humanities, business, comm, and the like (students want to argue papers all day long) and weakest in math and the sciences (if you didn’t get the right answer, your instructor can demonstrate why you’re objectively wrong). Fields like nursing probably don’t see a huge amount of grade inflation because students who don’t understand the material will kill someone, which is a big problem for lots of people. Same in engineering—if your bridge collapses, you can’t complain that there is no such thing as a “good” bridge, or that bridge design is so “subjective.”

All this stuff might contribute to how little students are actually learning, as discussed extensively in Academically Adrift: Limited Learning on College Campuses. The book shows that, by most measures, most college students don’t acquire much real knowledge over the course of their four or more years in school. Part of Academically Adrift details the evidence used to reach this conclusion; another big part describes how this might have happened and still be happening; and the last (weakest) part discusses solutions.

How could one solve this incentive problem? Probably by plotting eval scores against grades. If you’re giving an average GPA of 3.0 and getting a 4.0 on your eval, and Suzie down the hall is giving an average GPA of 2.9 and getting a 4.3 on her evals, then Suzie is probably doing better. I don’t know why colleges aren’t moving toward systems like this, aside from inertia and the complete lack of incentive to do so. Which, I guess, means that I do know why. This wouldn’t be a perfect solution, but it would at least be a step in the right direction. A few schools are apparently doing something about the issue.
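Here’s a minimal sketch of what that adjustment might look like, assuming a simple least-squares fit of eval scores against average GPA is a reasonable way to “plot” the two. The instructor names and numbers are made up for illustration, and the residual-ranking approach is just one plausible method, not anything a school actually uses:

```python
# Hypothetical sketch: rank instructors by the part of their eval score
# that their grading leniency does NOT explain. All names and numbers
# are invented; OLS-on-GPA is one plausible adjustment, not a standard.

def ols_fit(xs, ys):
    """Fit y = a + b*x by ordinary least squares; return (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# (instructor, average GPA given, average eval score received)
instructors = [
    ("You",   3.0, 4.0),
    ("Suzie", 2.9, 4.3),
    ("Pat",   3.6, 4.4),
    ("Lee",   3.4, 4.1),
]

a, b = ols_fit([g for _, g, _ in instructors],
               [e for _, _, e in instructors])

# A positive residual means the instructor earns better evals than
# their grade distribution alone would predict.
ranked = sorted(instructors,
                key=lambda t: t[2] - (a + b * t[1]), reverse=True)
for name, gpa, ev in ranked:
    pred = a + b * gpa
    print(f"{name}: eval {ev:.1f}, predicted {pred:.2f}, "
          f"residual {ev - pred:+.2f}")
```

On these invented numbers, Suzie comes out ahead of the instructor giving a 3.0 average GPA for a 4.0 eval, matching the intuition above; a real system would need far more observations per instructor and controls for course difficulty.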

Professors don’t want to champion better evals, however, because doing so distracts them from the research for which they’re rewarded. Administrators don’t want to because they want tuition and grant money and don’t want to rock the boat. High school seniors have not shown a great swell of interest in attending schools with rigorous professor evaluations; they have shown a great swell of interest in beer and circus, however, so that’s what they mostly get. Grad students want to claw their way up the academic ladder and/or finish their damn dissertations. Parents want their offspring to pass. Employers are too diffuse and don’t get much of a say. So where does the coalition for improvement come from? Some individuals, but we’ll see if they get very far.
