The study of morality by science – primarily by “cognitive scientists” and neurologists – has yielded a paradoxical insight. It seems very likely that we don’t reason our way to moral judgments. Instead we follow a “moral sense” that quickly determines the right and wrong of a situation, and attaches powerful emotions to each.
The consequences of this insight are many and fascinating. Tightly reasoned argumentation about a moral problem, for example, has absolutely no effect on changing minds: think abortion, pro or con. And because the moral sense is, in Steven Pinker’s phrase, an evolutionary gadget, primed for the environment of our Stone Age ancestors, it is prone, in the world of Googling and shopping malls, to deep distortions and illusions. Not surprisingly, moral illusions exaggerate the righteousness of kin over strangers and of our race over those who don’t look like us.
The best cognitive scientist I know working on the subject is Jonathan Haidt, who lays out the evidence on the weakness of reason pretty comprehensively in his article, “The Emotional Dog and Its Rational Tail.” Among the neurologists, the most persuasive is Antonio Damasio, whose Descartes’ Error is a must-read. Pinker himself dedicates a sizeable chunk of The Blank Slate to the evolutionary antecedents of human behavior, including the universal need to pass judgment: he calls us “the sanctimonious animal.”
Why should anyone care? I think the reasons are obvious. From Socrates to John Rawls, the dominant current in Western moral philosophy has validated moral judgment by reason alone. Reason has been portrayed as a common language available to moral disputants, all of whom can discover, as with a mathematical proposition, when they are correct and when they are in error.
This view still has strong defenders; my favorite is John Searle. But the question is one of fact: we arrive at moral judgments either through logic and deduction, or by an immediate perception of right and wrong. If science, as the evidence accumulates, comes down on the side of the moral sense, it will turn out that many of our moral and political assumptions have been based on a mistake.
I’ll have more to say on this some other time. Here I want to draw attention to this article, posted on the Princeton alumni online magazine, summarizing that university’s experimental research into morality. The article meanders rather aimlessly, and the method in some of the experiments sounds questionable – 41 “subjects,” all probably college students, can’t demonstrate much of universal value about the human race. But I found this speculation by one of the researchers, Joshua Greene, worth citing:
Our current moral thinking may reflect the conditions that prevailed during most of our evolutionary history rather than the conditions we actually face today, says Greene, a postdoc with the Department of Psychology and the Center for the Study of Brain, Mind, and Behavior who received his Princeton doctorate in philosophy. We are often told, when confronted with a major decision, to “go with our gut.” Those intuitive judgments feel so right and come so easily. But where do they come from? (Hint: It’s not your gut.) Greene posits two scenarios, both adapted from Princeton philosopher and bioethics professor Peter Singer. First, you are driving along a sparsely traveled country road in your brand-new BMW convertible, which came equipped with expensive leather seats. You see a man covered in blood by the side of the road; you stop and he tells you that he had an accident while hiking and needs a lift to the hospital. He may lose his leg if you don’t take him; there is no one else around to help. But the blood will ruin your leather seats. Is it OK to leave him by the side of the road because you don’t want to spend the money to reupholster the seats? Obviously, the “right” thing to do is pretty clear: Take him to the hospital. Most would find a decision to leave him repugnant.
Yet Greene poses another scenario. You get a letter in the mail from a reputable international aid organization, asking for a donation of a few hundred dollars. The money, the letter says, will bring medical help to poor people in a country thousands of miles away. Is it OK to toss the letter and forget about it because you want to save money? According to Singer, these scenarios are morally equivalent; in both, people can easily be helped without undue sacrifice on your part. The only difference is proximity. Most people, says Greene, are inclined to think there must be “some good reason” why it’s not OK to abandon the hiker but is perfectly acceptable to throw away the appeal from the aid group. That decision would probably “feel right” to a lot of people, he says. Yet Greene writes, in an essay published in the October 2003 issue of Nature Reviews Neuroscience, that “maybe this pair of moral intuitions has nothing to do with ‘some good reason’ and everything to do with the way our brains happen to be built.” Because our ancestors didn’t have the capability to help anyone far away – or probably even realize that they existed – they didn’t face dilemmas like the aid-group plea. Our brains, Greene suggests, may be wired to respond to proximal moral dilemmas, not those that originate miles away.
Greene proposes four levels of moral decision-making. One is the basic instinct to protect and promote one’s own interests – someone swipes your food, you react angrily. Then there’s the human/empathetic response – seeing the hiker and helping him. One step higher is moral intuition based on cultural norms. If someone had grown up in a family, for example, that gave away 50 percent of its money to charity, that person might find responding to the mailed appeal a moral no-brainer, so to speak. Finally, there is a decision made by an individual who, “through his own philosophizing, has come to the conclusion that is largely independent of the conclusions come to by the local culture.”
The Singer scenario is rationalist to the core. Is helping a wounded man morally equivalent to mailing a check? That equivalence assumes that morality is something more than the feeling of rightness conveyed by the moral sense – in this case, the feeling stirred by the ability to help someone. But how can that be demonstrated? The matter of proximity and distance also poses interesting problems. I know I’m helping the wounded man; how do I know where my check is going? I also know the effect of my actions on the wounded man, but even if the check reaches the needy, how can I parse the moral consequences?
For once, these are not rhetorical questions. Comments are welcomed from one and all.