There’s a fascinating moment in The Righteous Mind where Jonathan Haidt makes a point similar to one I wrote about earlier:
If you think that moral reasoning is something we do to figure out the truth, you’ll be constantly frustrated by how foolish, biased, and illogical people become when they disagree with you. But if you think about moral reasoning as a skill we humans evolved to further our social agendas—to justify our own actions and to defend the teams we belong to—then things will make a lot more sense. Keep your eye on the intuitions, and don’t take people’s moral arguments at face value. They’re mostly post hoc constructions made up on the fly, crafted to advance one or more strategic objectives.
Compare this to my December 2010 post “What people want and what they are: religious edition:”
. . . as Julian Sanchez puts it, “a lot of our current politics has less to do with actual policy disagreements than with resolving status anxieties.” I think his overall post is right, but I suspect that people pick their preferred policies (beyond patriotism, which is his example) to signal what they’re really like or want people to believe they’re really like.
Take my favorite example, gun control: the pro-gun types want others to think of them as capable, fierce, tough, and independent. And who isn’t in favor of those things? The anti-gun types want others to think of them as community-oriented, valuing health and welfare, and caring. And who isn’t in favor of those things?
You could extend this to other fields too (tax cuts, health care, whatever the issue du jour is), and they don’t always map to a neat left/right axis. Anyone can have an opinion that signals values on complex political topics in a way they can’t about, say, theoretical physics, mostly because complex political topics often don’t have correct answers. So they can be easily used to signal values that are often divorced from whatever real conditions on the ground look like. Almost no one uses their opinions on vector calculus to signify what they most believe.
Haidt doesn’t use the word “signal,” but his idea of using moral claims to “justify our own actions and to defend the teams we belong to” is pretty close. This also describes why, over the past ten years, I’ve become a person much less invested in political, moral, or (many kinds of) intellectual arguments: most of those arguments aren’t really about their content, but about something else, beneath the surface, that doesn’t always bob up into view. Here’s Paul Graham on that idea in “What You Can’t Say:”
Most struggles, whatever they’re really about, will be cast as struggles between competing ideas. The English Reformation was at bottom a struggle for wealth and power, but it ended up being cast as a struggle to preserve the souls of Englishmen from the corrupting influence of Rome. It’s easier to get people to fight for an idea. And whichever side wins, their ideas will also be considered to have triumphed, as if God wanted to signal his agreement by selecting that side as the victor.
Most people seem to equate “winning” an argument in a lawyerly fashion with being intellectually right. This might be why lawyers have some of the reputation they do: they get paid primarily to construct arguments that may be specious, but that have to be convincing.
I also like to think that realizing how moral arguments really work makes me a better teacher: rather than fighting with students who bring up moral arguments, I try to ask them where their arguments come from and how they come to believe what they believe. In other words, I try to work at a higher level of abstraction—which is what Haidt is doing in The Righteous Mind.
One other point about Haidt: if you’re frustrated by “how foolish, biased, and illogical people become when they disagree with you,” imagine how you must act to them.