Sam Harris gave a TED talk, in which he claims that science can tell us what to value, or how to be moral. Unfortunately, I completely disagree with his major point. (Via Jerry Coyne and 3 Quarks Daily.)
He starts by admitting that most people are skeptical that science can lead us to certain values; science can tell us what is, but not what ought to be. There is an old saying, going back to David Hume, that you can’t derive ought from is. And Hume was right! You can’t derive ought from is. Yet people insist on trying.
Harris uses an ancient strategy to slip morality into what starts out as description. He says:
"Values are a certain kind of fact. They are facts about the well-being of conscious creatures… If we’re more concerned about our fellow primates than we are about insects, as indeed we are, it’s because we think they are exposed to a greater range of potential happiness and suffering. The crucial thing to notice here is that this is a factual claim."
Let’s grant the factual nature of the claim that primates are exposed to a greater range of happiness and suffering than insects or rocks. So what? That doesn’t mean we should care about their suffering or happiness; it doesn’t imply anything at all about morality, how we ought to feel, or how to draw the line between right and wrong.
Morality and science operate in very different ways. In science, our judgments are ultimately grounded in data; when it comes to values we have no such recourse. If I believe in the Big Bang model and you believe in the Steady State cosmology, I can point to the successful predictions of the cosmic background radiation, light element nucleosynthesis, evolution of large-scale structure, and so on. Eventually you would either agree or be relegated to crackpot status. But what if I believe that the highest moral good is to be found in the autonomy of the individual, while you believe that the highest good is to maximize the utility of some societal group? What are the data we can point to in order to adjudicate this disagreement? We might use empirical means to measure whether one preference or the other leads to systems that give people more successful lives on some particular scale — but that’s presuming the answer, not deriving it. Who decides what is a successful life? It’s ultimately a personal choice, not an objective truth to be found simply by looking closely at the world. How are we to balance individual rights against the collective good? You can do all the experiments you like and never find an answer to that question.
Harris is doing exactly what Hume warned against, in a move that is at least as old as Plato: he’s noticing that most people are, as a matter of empirical fact, more concerned about the fate of primates than the fate of insects, and taking that as evidence that we ought to be more concerned about them; that it is morally correct to have those feelings. But that’s a non sequitur. After all, not everyone is all that concerned about the happiness and suffering of primates, or even of other human beings; some people take pleasure in torturing them. And even if they didn’t, again, so what? We are simply stating facts about how human beings feel, from which we have no warrant whatsoever to conclude things about how they should feel.
Attempts to derive ought from is are like attempts to reach an odd number by adding together even numbers. If someone claims that they’ve done it, you don’t have to check their math; you know that they’ve made a mistake. Or, to choose a different mathematical analogy, any particular judgment about right and wrong is like Euclid’s parallel postulate in geometry; there is not a unique choice that is compatible with the other axioms, and different choices could in principle give different interesting moral philosophies.
A big part of the temptation to insist that moral judgments are objectively true is that we would like to have justification for arguing against what we see as moral outrages when they occur. But there’s no reason why we can’t be judgmental and firm in our personal convictions, even if we are honest that those convictions don’t have the same status as objective laws of nature. In the real world, when we disagree with someone else’s moral judgments, we try to persuade them to see things our way; if that fails, we may (as a society) resort to more dramatic measures like throwing them in jail. But our ability to persuade others that they are being immoral is completely unaffected — and indeed, may even be hindered — by pretending that our version of morality is objectively true. In the end, we will always be appealing to their own moral senses, which may or may not coincide with ours.
The unfortunate part of this is that Harris says a lot of true and interesting things, and threatens to undermine the power of his argument by insisting on the objectivity of moral judgments. There are not objective moral truths (where “objective” means “existing independently of human invention”), but there are real human beings with complex sets of preferences. What we call “morality” is an outgrowth of the interplay of those preferences with the world around us, and in particular with other human beings. The project of moral philosophy is to make sense of our preferences, to try to make them logically consistent, to reconcile them with the preferences of others and the realities of our environments, and to discover how to fulfill them most efficiently. Science can be extremely helpful, even crucial, in that task. We live in a universe governed by natural laws, and it makes all the sense in the world to think that a clear understanding of those laws will be useful in helping us live our lives — for example, when it comes to abortion or gay marriage. When Harris talks about how people can reach different states of happiness, or how societies can become more successful, the relevance of science to these goals is absolutely real and worth stressing.
Which is why it’s a shame to get the whole thing off on the wrong foot by insisting that values are simply a particular version of empirical facts. When people share values, facts can be very helpful to them in advancing their goals. But when they don’t share values, there’s no way to show that one of the parties is “objectively wrong.” And when you start thinking that there is, a whole set of dangerous mistakes begins to threaten. It’s okay to admit that values can’t be derived from facts — science is great, but it’s not the only thing in the world.
Last month, I had the privilege of speaking at the 2010 TED conference for exactly 18 minutes. The short format of these talks is a brilliant innovation and surely the reason for their potent half-life on the Internet. However, 18 minutes is not a lot of time in which to present a detailed argument. My intent was to begin a conversation about how we can understand morality in universal, scientific terms. Many people who loved my talk misunderstood what I was saying, and loved it for the wrong reasons; and many of my critics were right to think that I had said something extremely controversial. I was not suggesting that science can give us an evolutionary or neurobiological account of what people do in the name of “morality.” Nor was I merely saying that science can help us get what we want out of life. Both of these would have been quite banal claims to make (unless one happens to doubt the truth of evolution or the mind’s dependency on the brain). Rather I was suggesting that science can, in principle, help us understand what we should do and should want—and, perforce, what other people should do and want in order to live the best lives possible. My claim is that there are right and wrong answers to moral questions, just as there are right and wrong answers to questions of physics, and such answers may one day fall within reach of the maturing sciences of mind. As the response to my TED talk indicates, it is taboo for a scientist to think such things, much less say them in public.
Most educated, secular people (and this includes most scientists, academics, and journalists) seem to believe that there is no such thing as moral truth—only moral preference, moral opinion, and emotional reactions that we mistake for genuine knowledge of right and wrong, or good and evil. While I make the case for a universal conception of morality in much greater depth in my forthcoming book, The Moral Landscape: How Science Can Determine Human Values, I’d like to address the most common criticisms I’ve received thus far in response to my remarks at TED.
Some of my critics got off the train before it even left the station, by defining “science” in exceedingly narrow terms. Many think that science is synonymous with mathematical modeling, or with immediate access to experimental data. However, this is to mistake science for a few of its tools. Science simply represents our best effort to understand what is going on in this universe, and the boundary between it and the rest of rational thought cannot always be drawn. There are many tools one must get in hand to think scientifically—ideas about cause and effect, respect for evidence and logical coherence, a dash of curiosity and intellectual honesty, the inclination to make falsifiable predictions, etc.—and many come long before one starts worrying about mathematical models or specific data.
There is also much confusion about what it means to speak with scientific “objectivity.” As the philosopher John Searle once pointed out, there are two very different senses of the terms “objective” and “subjective.” The first relates to how we know (i.e. epistemology), the second to what there is to know (i.e. ontology). When we say that we are reasoning or speaking “objectively,” we mean that we are free of obvious bias, open to counter-arguments, cognizant of the relevant facts, etc. There is no impediment to our doing this with regard to subjective (i.e. first-person) facts. It is, for instance, true to say that I am experiencing tinnitus (ringing in my ears) at this moment. This is a subjective fact about me. I am not lying about it. I have been to an otologist and had the associated hearing loss in the upper frequencies in my right ear confirmed. There is simply no question that I can speak about my tinnitus in the spirit of scientific objectivity. And, no doubt, this experience must have some objective (third-person) correlates, like damage to my cochlea. Many people seem to think that because moral facts relate entirely to our experience (and are, therefore, ontologically “subjective”), all talk of morality must be “subjective” in the epistemological sense (i.e. biased, merely personal, etc.). This is simply untrue.
Many of my critics also fail to distinguish between there being no answers in practice and no answers in principle to certain questions about the nature of reality. Only the latter questions are “unscientific,” and there are countless facts to be known in principle that we will never know in practice. Exactly how many birds are in flight over the surface of the earth at this instant? What is their combined weight in grams? We cannot possibly answer such questions, but they have simple, numerical answers. Does our inability to gather the relevant data oblige us to respect all opinions equally? For instance, how seriously should we take the claim that there are exactly 23,000 birds in flight at this moment, and, as they are all hummingbirds weighing exactly 2 grams, their total weight is 46,000 grams? It should be obvious that this is a ridiculous assertion. We can, therefore, decisively reject answers to questions that we cannot possibly answer in practice. This is a perfectly reasonable, scientific, and often necessary thing to do. And yet, many scientists will say that moral truths do not exist, simply because certain facts about human experience cannot be readily known, or may never be known. As I hope to show, this blind spot has created tremendous confusion about the relationship between human knowledge and human values.
When I speak of there being right and wrong answers to questions of morality, I am saying that there are facts about human and animal wellbeing that we can, in principle, know—simply because wellbeing (and states of consciousness altogether) must lawfully relate to states of the brain and to states of the world.
And here is where the real controversy begins: for many people strongly objected to my claim that values (and hence morality) relate to facts about the wellbeing of conscious creatures. My critics seem to think that consciousness and its states hold no special place where values are concerned, or that any state of consciousness stands the same chance of being valued as any other. While maximizing the wellbeing of conscious creatures may be what I value, other people are perfectly free to define their values differently, and there will be no rational or scientific basis to argue with them. Thus, by starting my talk with the assertion that values depend upon actual or potential changes in consciousness, and that some changes are better than others, I merely assumed what I set out to prove. This is what philosophers call “begging the question.” I am, therefore, an idiot. And given that my notion of objective values must be a mere product of my own personal and cultural biases, and these led me to disparage traditional religious values from the stage at TED, I am also a bigot. While these charges are often leveled separately, they are actually connected.
I’ve now had these basic objections hurled at me a thousand different ways—from YouTube comments that end by calling me “a Mossad agent” to scarcely more serious efforts by scientists like Sean Carroll that attempt to debunk my reasoning as circular or otherwise based on unwarranted assumptions. Many of my critics piously cite Hume’s is/ought distinction as though it were well known to be the last word on the subject of morality until the end of time. Indeed, Carroll appears to think that Hume’s lazy analysis of facts and values is so compelling that he elevates it to the status of mathematical truth:
"Attempts to derive ought from is [values from facts] are like attempts to reach an odd number by adding together even numbers. If someone claims that they’ve done it, you don’t have to check their math; you know that they’ve made a mistake."
This is an amazingly wrongheaded response coming from a very smart scientist. I wonder how Carroll would react if I breezily dismissed his physics with a reference to something Robert Oppenheimer once wrote, on the assumption that it was now an immovable object around which all future human thought must flow. Happily, that’s not how physics works. But neither is it how philosophy works. Frankly, it’s not how anything that works, works.
Carroll appears to be confused about the foundations of human knowledge. For instance, he clearly misunderstands the relationship between scientific truth and scientific consensus. He imagines that scientific consensus signifies the existence of scientific truth (while scientific controversy just means that there is more work to be done). And yet, he takes moral controversy to mean that there is no such thing as moral truth (while moral consensus just means that people are deeply conditioned for certain preferences). This is a double standard that I pointed out in my talk, and it clearly rigs the game against moral truth. The deeper issue, however, is that truth has nothing, in principle, to do with consensus: It is, after all, quite possible for everyone to be wrong, or for one lone person to be right. Consensus is surely a guide to discovering what is going on in the world, but that is all that it is. Its presence or absence in no way constrains what may or may not be true.
Strangely, Carroll also imagines that there is greater consensus about scientific truth than about moral truth. Taking humanity as a whole, I am quite certain that he is mistaken about this. There is no question that there is a greater consensus that cruelty is generally wrong (a common moral intuition) than that the passage of time varies with velocity (special relativity) or that humans and lobsters share an ancestor (evolution). Needless to say, I’m not inclined to make too much of this consensus, but it is worth noting that scientists like Carroll imagine far more moral diversity than actually exists. While certain people believe some very weird things about morality, principles like the Golden Rule are very well subscribed. If we wanted to ground the epistemology of science on democratic principles, as Carroll suggests we might, the science of morality would have an impressive head start over the science of physics. [1]
The real problem, however, is that critics like Carroll think that there is no deep intellectual or moral issue here to worry about. Carroll encourages us to just admit that a universal conception of human values is a pipe dream. Thereafter, those of us who want to make life on earth better, or at least not worse, can happily collaborate, knowing all the while that we are seeking to further our merely provincial, culturally constructed notions of moral goodness. Once we have our values in hand, and cease to worry about their relationship to the Truth, science can help us get what we want out of life.
There are many things wrong with this approach. The deepest problem is that it strikes me as patently mistaken about the nature of reality and about what we can reasonably mean by words like “good,” “bad,” “right,” and “wrong.” In fact, I believe that we can know, through reason alone, that consciousness is the only intelligible domain of value. What’s the alternative? Imagine some genius comes forward and says, “I have found a source of value/morality that has absolutely nothing to do with the (actual or potential) experience of conscious beings.” Take a moment to think about what this claim actually means. Here’s the problem: whatever this person has found cannot, by definition, be of interest to anyone (in this life or in any other). Put this thing in a box, and what you have in that box is—again, by definition—the least interesting thing in the universe.
So how much time should we spend worrying about such a transcendent source of value? I think the time I will spend typing this sentence is already far too much. All other notions of value will bear some relationship to the actual or potential experience of conscious beings. So my claim that consciousness is the basis of values does not appear to me to be an arbitrary starting point.
Now that we have consciousness on the table, my further claim is that wellbeing is what we can intelligibly value—and “morality” (whatever people’s associations with this term happen to be) really relates to the intentions and behaviors that affect the wellbeing of conscious creatures. And, as I pointed out at TED, all the people who claim to have alternative sources of morality (like the Word of God) are, in every case that I am aware of, only concerned about wellbeing anyway: They just happen to believe that the universe functions in such a way as to place the really important changes in conscious experience after death (i.e. in heaven or hell). And those philosophical efforts that seek to put morality in terms of duty, fairness, justice, or some other principle that is not explicitly tied to the wellbeing of conscious creatures—are, nevertheless, parasitic on some notion of wellbeing in the end (I argue this point at greater length in my book. And yes, I’ve read Rawls, Nozick, and Parfit). The doubts that immediately erupt on this point seem to invariably depend on extremely unimaginative ideas about what the term “wellbeing” could mean, altogether, or on mistaken beliefs about what science is.
Those who assumed that any emphasis on human “wellbeing” would lead us to enslave half of humanity, or harvest the organs of the bottom ten percent, or nuke the developing world, or nurture our children on a continuous drip of heroin are, it seems to me, not really thinking about these issues seriously. It seems rather obvious that fairness, justice, compassion, and a general awareness of terrestrial reality have rather a lot to do with our creating a thriving global civilization—and, therefore, with the greater wellbeing of humanity. And, as I emphasized in my talk, there may be many different ways for individuals and communities to thrive—many peaks on the moral landscape—so if there is real diversity in how people can be deeply fulfilled in life, this diversity can be accounted for and honored in the context of science. As I said in my talk, the concept of “wellbeing,” like the concept of “health,” is truly open for revision and discovery. Just how happy is it possible for us to be, personally and collectively? What are the conditions—ranging from changes in the genome to changes in economic systems—that will produce such happiness? We simply do not know.
But the deeper objection raised by scientists like Carroll is that the link I have drawn between values and wellbeing seems arbitrary, or otherwise in need of justification. What if certain people insist that their “values” or “morality” have nothing to do with wellbeing? What if a man like Jeffrey Dahmer says, “The only peaks on the moral landscape that interest me are ones where I get to murder young men and have sex with their corpses.” This possibility—the prospect of radically different moral preferences—seems to be at the heart of many people’s concerns. In response to one of his readers, Carroll writes:
"[W]e have to distinguish between choosing a goal and choosing the best way to get there. But when we do science we all basically agree on what the goals are — we want to find a concise, powerful explanation of the empirical facts we observe. Sure, someone can choose to disagree with those goals — but then they’re not doing science, they’re doing philosophy of science. Which is interesting in its own right, but not the same thing.
When it comes to morality, there is nowhere near the unanimity of goals that there is in science. That’s not a minor quibble, that’s the crucial difference! If we all agreed on the goals, we would indeed expend our intellectual effort on the well-grounded program of figuring out how best to achieve those goals. That would be great, but it’s not the world in which we live."
Again, we encounter this confusion about the significance of consensus. But we should also remember that there are trained “scientists” who are Biblical Creationists, and their scientific thinking is purposed not toward a dispassionate study of the universe, but toward interpreting the data of science to fit the Biblical account of creation. Such people claim to be doing “science,” of course—but real scientists are free, and indeed obligated, to point out that they are misusing the term. Similarly, there are people who claim to be highly concerned about “morality” and “human values,” but when we see that they are more concerned about condom use than they are about child rape (e.g. the Catholic Church), we should feel free to say that they are misusing the term “morality,” or that their values are distorted. As I asked at TED, how have we convinced ourselves that on the subject of morality, all views must count equally?
Everyone has an intuitive “physics,” but much of our intuitive physics is wrong (with respect to the goal of describing the behavior of matter), and only physicists have a deep understanding of the laws that govern the behavior of matter in our universe. Everyone also has an intuitive “morality,” but much intuitive morality is wrong (with respect to the goal of maximizing personal and collective wellbeing) and only genuine moral experts would have a deep understanding of the causes and conditions of human and animal wellbeing. Yes, we must have a goal to define what counts as “right” or “wrong” in a given domain, but this requirement holds equally in both domains.
So what about people who think that morality has nothing to do with anyone’s wellbeing? I am saying that we need not worry about them—just as we don’t worry about the people who think that their “physics” is synonymous with astrology, or sympathetic magic, or Vedanta. We are free to define “physics” any way we want. Some definitions will be useless, or worse. We are free to define “morality” any way we want. Some definitions will be useless, or worse—and many are so bad that we can know, far in advance of any breakthrough in the sciences of mind, that they have no place in a serious conversation about human values.
One of my critics put the concern this way: “Why should human wellbeing matter to us?” Well, why should logical coherence matter to us? Why should historical veracity matter to us? Why should experimental evidence matter to us? These are profound and profoundly stupid questions. No framework of knowledge can withstand such skepticism, for none is perfectly self-justifying. Without being able to stand entirely outside of a framework, one is always open to the charge that the framework rests on nothing, that its axioms are wrong, or that there are foundational questions it cannot answer. So what? Science and rationality generally are based on intuitions and concepts that cannot be reduced or justified. Just try defining “causation” in non-circular terms. If you manage it, I really want to hear from you. Or try to justify transitivity in logic: if A = B and B = C, then A = C. A skeptic could say that this is nothing more than an assumption that we’ve built into the definition of “equality.” Others will be free to define “equality” differently. Yes, they will. And we will be free to call them “imbeciles.” Seen in this light, moral relativism should be no more tempting than physical, biological, mathematical, or logical relativism. There are better and worse ways to define our terms; there are more and less coherent ways to think about reality; and there are—is there any doubt about this?—many ways to seek fulfillment in this life and not find it.
On a related point, the philosopher Russell Blackford wrote, “I’ve never yet seen an argument that shows that psychopaths are necessarily mistaken about some fact about the world. Moreover, I don’t see how the argument could run…” Well, here it is in brief: We already know that psychopaths have brain damage that prevents them from having certain deeply satisfying experiences (like empathy) which seem good for people both personally and collectively (in that they tend to increase wellbeing on both counts). Psychopaths, therefore, don’t know what they are missing (but we do). The position of a psychopath also cannot be generalized; it is not, therefore, an alternative view of how human beings should live (this is one point Kant got right: even a psychopath couldn’t want to live in a world filled with psychopaths). We should also realize that the psychopath we are envisioning is a straw man: Watch interviews with real psychopaths, and you will find that they do not tend to claim to be in possession of an alternative morality or to be living deeply fulfilling lives. These people are generally ruled by compulsions that they don’t understand and cannot resist. It is absolutely clear that, whatever they might believe about what they are doing, psychopaths are seeking some form of wellbeing (excitement, ecstasy, feelings of power, etc.), but because of their neurological deficits, they are doing a very bad job of it. We can say that a psychopath like Ted Bundy takes satisfaction in the wrong things, because living a life purposed toward raping and killing women does not allow for deeper and more generalizable forms of human flourishing. Compare Bundy’s deficits to those of a delusional physicist who finds meaningful patterns and mathematical significance in the wrong places (John Nash might have been a good example, while suffering the positive symptoms of his schizophrenia).
His “Eureka!” detectors are poorly coupled to reality; he sees meaningful patterns where most people would not—and these patterns will be a very poor guide to the proper goals of physics (i.e. understanding the physical world). Is there any doubt that Ted Bundy’s “Yes! I love this!” detectors were poorly coupled to the possibilities of finding deep fulfillment in this life, or that his overriding obsession with raping and killing young women was a poor guide to the proper goals of morality (i.e. living a fulfilling life with others)?
And while people like Bundy may want some very weird things out of life, no one wants utter, interminable misery. And if someone claims to want this, we are free to treat them like someone who claims to believe that 2 + 2 = 5 or that all events are self-caused. On the subject of morality, as on every other subject, some people are not worth listening to.
The moment we admit that consciousness is the context in which any discussion of values makes sense, we must admit that there are facts to be known about how the experience of conscious creatures can change—and these facts can be studied, in principle, with the tools of science. Do pigs suffer more than cows do when being led to slaughter? Would humanity suffer more or less, on balance, if the U.S. unilaterally gave up all its nuclear weapons? Questions like these are very difficult to answer. But this does not mean that they don’t have answers. Carroll writes:
"But what if I believe that the highest moral good is to be found in the autonomy of the individual, while you believe that the highest good is to maximize the utility of some societal group? What are the data we can point to in order to adjudicate this disagreement? We might use empirical means to measure whether one preference or the other leads to systems that give people more successful lives on some particular scale — but that’s presuming the answer, not deriving it. Who decides what is a successful life? It’s ultimately a personal choice, not an objective truth to be found simply by looking closely at the world. How are we to balance individual rights against the collective good? You can do all the experiments you like and never find an answer to that question."
Again, we see the confusion between no answers in practice and no answers in principle. The fact that it could be difficult or impossible to know exactly how to maximize human wellbeing does not mean that there are no right or wrong ways to do this—nor does it mean that we cannot exclude certain answers as obviously bad. The fact that it might be difficult to decide exactly how to balance individual rights against collective good, or that there might be a thousand equivalent ways of doing this, does not mean that we must hesitate to condemn the morality of the Taliban, or the Nazis, or the Ku Klux Klan—not just personally, but from the point of view of science. As I said at TED, the moment we admit that there is anything to know about human wellbeing, we must admit that certain individuals or cultures might not know it.
It is also worth noticing that Carroll has set the epistemological bar higher for morality than he has for any other branch of science. He asks, “Who decides what is a successful life?” Well, who decides what is a coherent argument? Who decides what constitutes empirical evidence? Who decides when our memories can be trusted? The answer is, “we do.” And if you are not satisfied with this answer, you have just wiped out all of science, mathematics, history, journalism, and every other human effort to make sense of reality.
And the philosophical skepticism that brought us the division between facts and values can be used in many other ways that smart people like Carroll would never countenance. In fact, I could use another of Hume’s arguments, the case against induction, to torpedo Carroll’s entire field, or science generally. The scientific assumption that the future will lawfully relate to the past is just that—an assumption. Other people are free to assume that it won’t. In fact, I’m free to assume that the apparent laws of nature will expire on the first Tuesday of the year 3459. Is this assumption just as good as any other? If so, we can say goodbye to physics.
There are also very practical, moral concerns that follow from the glib idea that anyone is free to value anything—the most consequential being that it is precisely what allows highly educated, secular, and otherwise well-intentioned people to pause thoughtfully, and often interminably, before condemning practices like compulsory veiling, genital excision, bride-burning, forced marriage, and the other cheerful products of alternative “morality” found elsewhere in the world. Fanciers of Hume’s is/ought distinction never seem to realize what the stakes are, and they do not see what an abject failure of compassion their intellectual “tolerance” of moral difference amounts to. While much of this debate must be had in academic terms, this is not merely an academic debate. There are women and girls getting their faces burned off with acid at this moment for daring to learn to read, or for not consenting to marry men they have never met, or even for the crime of getting raped. Look into their eyes, and tell me that what has been done to them is the product of an alternative moral code every bit as authentic and philosophically justifiable as your own. And if you actually believe this, I would like to publish your views on my website.
The amazing thing is that some people won’t even blink before plunging into this intellectual and moral crevasse—and most of these enlightened souls are highly educated. I once spoke at an academic conference on themes similar to those I discussed at TED—my basic claim being that once we have a more complete understanding of human wellbeing, ranging from its underlying neurophysiology to the political systems and economic policies that best safeguard it, we will be able to make strong claims about which cultural practices are good for humanity and which aren’t. I then made what I thought would be a quite incontestable assertion: we already have good reason to believe that certain cultures are less suited to maximizing wellbeing than others. I cited the ruthless misogyny and religious bamboozlement of the Taliban as an example of a worldview that seems less than perfectly conducive to human flourishing.
As it turns out, to denigrate the Taliban at a scientific meeting is to court controversy (after all, “Who decides what is a successful life?”). At the conclusion of my talk, I fell into debate with another invited speaker, who seemed, at first glance, to be very well positioned to reason effectively about the implications of science for our understanding of morality. She holds a degree in genetics from Dartmouth, a master’s in biology from Harvard, and a law degree, another master’s, and a Ph.D. in the philosophy of biology from Duke. This scholar is now a recognized authority on the intersection of criminal law, genetics, neuroscience, and philosophy. Here is a snippet of our conversation, more or less verbatim:
"She: What makes you think that science will ever be able to say that forcing women to wear burqas is wrong?
Me: Because I think that right and wrong are a matter of increasing or decreasing wellbeing—and it is obvious that forcing half the population to live in cloth bags, and beating or killing them if they refuse, is not a good strategy for maximizing human wellbeing.
She: But that’s only your opinion.
Me: Okay… Let’s make it even simpler. What if we found a culture that ritually blinded every third child by literally plucking out his or her eyes at birth, would you then agree that we had found a culture that was needlessly diminishing human wellbeing?
She: It would depend on why they were doing it.
Me (slowly returning my eyebrows from the back of my head): Let’s say they were doing it on the basis of religious superstition. In their scripture, God says, “Every third must walk in darkness.”
She: Then you could never say that they were wrong."
Such opinions are not uncommon in the Ivory Tower. I was talking to a woman (it’s hard not to feel that her gender makes her views all the more disconcerting) who had just delivered an entirely lucid lecture on the moral implications of neuroscience for the law. She was concerned that our intelligence services might one day use neuroimaging technology for the purposes of lie detection, which she considered a likely violation of cognitive liberty. She was especially exercised over rumors that our government might have exposed captured terrorists to aerosols containing the hormone oxytocin in an effort to make them more cooperative. Though she did not say it, I suspect that she would even have opposed subjecting these prisoners to the smell of freshly baked bread, which has been shown to have a similar effect. While listening to her talk, as yet unaware of her liberal views on compulsory veiling and ritual enucleation, I thought her slightly over-cautious, but a basically sane and eloquent authority on the premature use of neuroscience in our courts. I confess that once we did speak, and I peered into the terrible gulf that separated us on these issues, I found that I could not utter another word to her. In fact, our conversation ended with my blindly enacting two neurological clichés: my jaw quite literally dropped open, and I spun on my heels before walking away.
Moral relativism is clearly an attempt to pay intellectual reparations for the crimes of western colonialism, ethnocentrism, and racism. This is, I think, the only charitable thing to be said about it. Needless to say, it was not my purpose at TED to defend the idiosyncrasies of the West as any more enlightened, in principle, than those of any other culture. Rather, I was arguing that the most basic facts about human flourishing must transcend culture, just as most other facts do. And if there are facts which are truly a matter of cultural construction—if, for instance, learning a specific language or tattooing your face fundamentally alters the possibilities of human experience—well, then these facts also arise from (neurophysiological) processes that transcend culture.
I must say, the vehemence and condescension with which the is/ought objection has been thrown in my face astounds me. And it confirms my sense that this bit of bad philosophy has done tremendous harm to the thinking of smart (and not so smart) people. The categorical distinction between facts and values helped open a sinkhole beneath liberalism long ago—leading to moral relativism and to masochistic depths of political correctness. Think of the champions of “tolerance” who reflexively blamed Salman Rushdie for his fatwa, or Ayaan Hirsi Ali for her ongoing security concerns, or the Danish cartoonists for their “controversy,” and you will understand what happens when educated liberals think there is no universal foundation for human values. Among conservatives in the West, the same skepticism about the power of reason leads, more often than not, directly to the feet of Jesus Christ, Savior of the Universe. Indeed, the most common defense one now hears for religious faith is not that there is compelling evidence for God’s existence, but that a belief in Him is the only basis for a universal conception of human values. And it is decidedly unhelpful that the moral relativism of liberals so often seems to prove the conservative case.
Of course, there is more to be said on the relationship between facts and values—more details to consider and objections to counter—and I will do my best to tackle these issues in my forthcoming book. As always, if you feel that you have found flaws in my argument, I sincerely encourage you to point them out to me, and to everyone else, in the comment thread following this article.
1) Perhaps Carroll will want to say that scientists agree about science more than ordinary people agree about morality (I’m not even sure this is true). But this is an empty claim, for at least two reasons. First, it is circular: anyone who insufficiently agrees with the principles of science as Carroll knows them won’t count as a scientist in his book (so the definition of “scientist” is question-begging). Second, scientists are an elite group, by definition. “Moral experts” would also constitute an elite group, and the existence of such experts is completely in line with my argument.
The discussion is interesting. Sam Harris recently and infamously proposed that, contra Hume, you can derive an 'ought' from an 'is', and that science can therefore provide reasonable guidance towards a moral life. Sean Carroll disagrees at length.
I'm afraid that so far I'm in the Carroll camp. I think Harris is following a provocative and potentially useful track, but I'm not convinced. I think he's right in some of the examples he gives: science can trivially tell you that psychopaths and violent criminals and the pathologies produced by failed states in political and economic collapse are not good models on which to base a successful human society (although I also think that the desire for a successful society is not a scientific premise…it's a kind of Darwinian criterion, because unsuccessful societies don't survive). However, I don't think Harris's criterion — that we can use science to justify maximizing the well-being of individuals — is valid. We can't. We can certainly use science to say how we can maximize well-being, once we define well-being…although even that might be a bit more slippery than he portrays it. Harris is smuggling in an unscientific prior in his category of well-being.
One good example Harris uses is the oppression of women and raging misogyny of the Taliban. Can we use science to determine whether that is a good strategy for human success? I think we can, but not in the way Harris is trying to do so: we could ask empirically, after the fact, whether the Taliban was successful in expanding, maintaining its population, and responding to its environment in a productive way. We cannot, though, say a priori that it is wrong on the grounds that abusing and denigrating half the population is unconscionable and vile, because that is not a scientific foundation for the conclusion. It's an emotional one; it's also a rational one, given the premise that we should treat all people equitably…but that premise can't claim scientific justification. That's what Harris has to show!
That is different from saying it is an unjustified premise, though — I agree with Harris entirely that the oppression of women is an evil, a wrong, a violation of a social contract that all members of a society should share. I just don't see a scientific reason for that — I see reasons of biological predisposition (we are empathic, social animals), of culture (this is a conclusion of Enlightenment history), and personal values, but not science. Science is an amoral judge: science could find that a slave culture of ant-like servility was a species optimum, or that a strong behavioral sexual dimorphism, where men and women had radically different statuses in society, was an excellent working solution. We bring in emotional and personal beliefs when we say that we'd rather not live in those kinds of cultures, and want to work towards building a just society.
And that's OK. I think that deciding that my sisters and female friends and women all around the world ought to have just as good a chance to thrive as I do is justified given a desire to improve the well-being and happiness of all people. I am not endorsing moral relativism at all — we should work towards liberating everyone, and the Taliban are contemptible scum — I'm just not going to pretend that that goal is built on an entirely objective, scientific framework.
Carroll brings up another set of problems. Harris is building his arguments around the notion that we ought to maximize well-being; Carroll points out that "well-being" is an awfully fuzzy concept that means different things to different people, and that it isn't clear that maximizing "well-being" is necessarily the goal of morality. Harris does have an answer to those arguments, sort of.
Quote:
Those who assumed that any emphasis on human "wellbeing" would lead us to enslave half of humanity, or harvest the organs of the bottom ten percent, or nuke the developing world, or nurture our children on a continuous drip of heroin are, it seems to me, not really thinking about these issues seriously. It seems rather obvious that fairness, justice, compassion, and a general awareness of terrestrial reality have rather a lot to do with our creating a thriving global civilization--and, therefore, with the greater wellbeing of humanity. And, as I emphasized in my talk, there may be many different ways for individuals and communities to thrive--many peaks on the moral landscape--so if there is real diversity in how people can be deeply fulfilled in life, this diversity can be accounted for and honored in the context of science. As I said in my talk, the concept of "wellbeing," like the concept of "health," is truly open for revision and discovery. Just how happy is it possible for us to be, personally and collectively? What are the conditions--ranging from changes in the genome to changes in economic systems--that will produce such happiness? We simply do not know.
The phrase beginning "It seems rather obvious…" is an unfortunate give-away. Don't tell me it's obvious, tell me how you can derive your conclusion from the simple facts of the world. He also slips in a new goal: "creating a thriving global civilization." I like that goal; I think that is an entirely reasonable objective for a member of a species to strive for, to see that their species achieves a stable, long-term strategy for survival. However, the idea that it should be achieved by promoting fairness, justice, compassion, etc., is not a scientific requirement. As Harris notes, there could be many different peaks in the moral landscape — what are the objective reasons for picking those properties as the best elements of a strategy? He doesn't say.
I'm fine with setting up a set of desirable social goals — fairness, justice, compassion, and equality are just a start — and declaring that these will be the hallmark of our ideal society, and then using reason and science to work towards those objectives. I just don't see a scientific reason for the premises, wonderful as they are and as strongly as they speak to me. I also don't feel a need to label a desire as "scientific".
Re: The Moral Equivalent of the Parallel Postulate and Sam Harris' Response
I think Rifkin (I don't totally agree with what is an oversimplification, but it is a new way of framing a discussion that has to happen) manages to underscore what Harris is saying. Critical thinking is yet another step in our evolution. All our social constructs have been tools to help us survive, and some have been better than others. We went from a socially useful mythology, with many gods forming the guidelines for the structure of society, down a path I suspect was less helpful with monotheism, resulting in absolute power corrupting absolutely, and as we raise the bar of understanding globally we need to move to a new model. Science is a tool to help us socially, just like mythology was!
<snip> Source: Huffington Post | Author: Jeremy Rifkin | Date: January 11, 2010
'The Empathic Civilization': Rethinking Human Nature in the Biosphere Era
Two spectacular failures, separated by only 18 months, marked the end of the modern era. In July 2008, the price of oil on world markets peaked at $147/barrel, inflation soared, the price of everything from food to gasoline skyrocketed, and the global economic engine shut off. Growing demand in the developed nations, as well as in China, India, and other emerging economies, for diminishing fossil fuels precipitated the crisis. Purchasing power plummeted and the global economy collapsed. That was the earthquake that tore asunder the industrial age built on and propelled by fossil fuels. The failure of the financial markets two months later was merely the aftershock. The fossil fuel energies that make up the industrial way of life are sunsetting and the industrial infrastructure is now on life support.<snip>
Over the past couple of months, I seem to have conducted a public experiment in the manufacture of philosophical and scientific ideas. In February, I spoke at the 2010 TED conference, where I briefly argued that morality should be considered an undeveloped branch of science. Normally, when one speaks at a conference the resulting feedback amounts to a few conversations in the lobby during a coffee break. I had these conversations at TED, of course, and they were useful. As luck would have it, however, my talk was broadcast on the internet just as I was finishing a book on the relationship between science and human values, and this produced a blizzard of criticism at a moment when criticism could actually do me some good. I made a few efforts to direct and focus this feedback, and the result has been that for the last few weeks I have had literally thousands of people commenting upon my work, more or less in real time. I can't say that the experience has been entirely pleasant, but there is no question that it has been useful.
If nothing else, the response to my TED talk proves that many smart people believe that something in the last few centuries of intellectual progress prevents us from making cross-cultural moral judgments -- or moral judgments at all. Thousands of highly educated men and women have now written to inform me that morality is a myth, that statements about human values are without truth conditions and, therefore, nonsensical, and that concepts like "well-being" and "misery" are so poorly defined, or so susceptible to personal whim and cultural influence, that it is impossible to know anything about them. Many people also claim that a scientific foundation for morality would serve no purpose, because we can combat human evil while knowing that our notions of "good" and "evil" are unwarranted. It is always amusing when these same people then hesitate to condemn specific instances of patently abominable behavior. I don't think one has fully enjoyed the life of the mind until one has seen a celebrated scholar defend the "contextual" legitimacy of the burqa, or a practice like female genital excision, a mere thirty seconds after announcing that his moral relativism does nothing to diminish his commitment to making the world a better place. Given my experience as a critic of religion, I must say that it has been disconcerting to see the caricature of the over-educated, atheistic moral nihilist regularly appearing in my inbox and on the blogs. I sincerely hope that people like Rick Warren have not been paying attention.
First, a disclaimer and non-apology: Many of my critics fault me for not engaging more directly with the academic literature on moral philosophy. There are two reasons why I haven't done this: First, while I have read a fair amount of this literature, I did not arrive at my position on the relationship between human values and the rest of human knowledge by reading the work of moral philosophers; I came to it by considering the logical implications of our making continued progress in the sciences of mind. Second, I am convinced that every appearance of terms like "metaethics," "deontology," "noncognitivism," "anti-realism," "emotivism," and the like, directly increases the amount of boredom in the universe. My goal, both in speaking at conferences like TED and in writing my book, is to start a conversation that a wider audience can engage with and find helpful. Few things would make this goal harder to achieve than for me to speak and write like an academic philosopher. Of course, some discussion of philosophy is unavoidable, but my approach is to generally make an end run around many of the views and conceptual distinctions that make academic discussions of human values so inaccessible. While this is guaranteed to annoy a few people, the prominent philosophers I've consulted seem to understand and support what I am doing.
Many people believe that the problem with talking about moral truth, or with asserting that there is a necessary connection between morality and well-being, is that concepts like "morality" and "well-being" must be defined with reference to specific goals and other criteria -- and nothing prevents people from disagreeing about these definitions. I might claim that morality is really about maximizing well-being and that well-being entails a wide range of cognitive/emotional virtues and wholesome pleasures, but someone else will be free to say that morality depends upon worshipping the gods of the Aztecs and that well-being entails always having a terrified person locked in one's basement, waiting to be sacrificed.
Of course, goals and conceptual definitions matter. But this holds for all phenomena and for every method we use to study them. My father, for instance, has been dead for 25 years. What do I mean by "dead"? Do I mean "dead" with reference to specific goals? Well, if you must, yes -- goals like respiration, energy metabolism, responsiveness to stimuli, etc. The definition of "life" remains, to this day, difficult to pin down. Does this mean we can't study life scientifically? No. The science of biology thrives despite such ambiguities. The concept of "health" is looser still: it, too, must be defined with reference to specific goals -- not suffering chronic pain, not always vomiting, etc. -- and these goals are continually changing. Our notion of "health" may one day be defined by goals that we cannot currently entertain with a straight face (like the goal of spontaneously regenerating a lost limb). Does this mean we can't study health scientifically?
I wonder if there is anyone on earth who would be tempted to attack the philosophical underpinnings of medicine with questions like: "What about all the people who don't share your goal of avoiding disease and early death? Who is to say that living a long life free of pain and debilitating illness is 'healthy'? What makes you think that you could convince a person suffering from fatal gangrene that he is not as healthy as you are?" And yet, these are precisely the kinds of objections I face when I speak about morality in terms of human and animal well-being. Is it possible to voice such doubts in human speech? Yes. But that doesn't mean we should take them seriously.
The physicist Sean Carroll has written another essay in response to my TED talk, further arguing that one cannot derive "ought" from "is" and that a science of morality is impossible. Carroll's essay is worth reading on its own, but in the hopes of making the difference between our views as clear as possible, I have excerpted his main points in their entirety, and followed them with my comments.
Carroll begins:
"I want to start with a hopefully non-controversial statement about what science is. Namely: science deals with empirical reality -- with what happens in the world. (I.e. what "is.") Two scientific theories may disagree in some way -- "the observable universe began in a hot, dense state about 14 billion years ago" vs. "the universe has always existed at more or less the present temperature and density." Whenever that happens, we can always imagine some sort of experiment or observation that would let us decide which one is right. The observation might be difficult or even impossible to carry out, but we can always imagine what it would entail. (Statements about the contents of the Great Library of Alexandria are perfectly empirical, even if we can't actually go back in time to look at them.) If you have a dispute that cannot in principle be decided by recourse to observable facts about the world, your dispute is not one of science."
I agree with Carroll's definition of "science" here -- though some of his subsequent thinking seems to depend on a more restrictive definition. I especially like his point about the Library of Alexandria. Clearly, any claims we make about the contents of this library will be right or wrong, and the truth does not depend on our being able to verify such claims. We can also dismiss an infinite number of claims as obviously wrong without getting access to the relevant data. We know, for instance, that this library did not contain a copy of The Catcher in the Rye. When I speak about there being facts about human and animal well-being, this includes facts that are quantifiable and conventionally "scientific" (e.g., facts about human neurophysiology) as well as facts that we will never have access to (e.g., how happy would I have been if I had decided not to spend the evening responding to Carroll's essay?).
[Carroll]
"With that in mind, let's think about morality. What would it mean to have a science of morality? I think it would look have to look something like this:
Human beings seek to maximize something we choose to call "well-being" (although it might be called "utility" or "happiness" or "flourishing" or something else). The amount of well-being in a single person is a function of what is happening in that person's brain, or at least in their body as a whole. That function can in principle be empirically measured. The total amount of well-being is a function of what happens in all of the human brains in the world, which again can in principle be measured. The job of morality is to specify what that function is, measure it, and derive conditions in the world under which it is maximized."
Good enough. I would simply broaden the picture to include animals and any other conscious systems that can experience gradations of happiness and suffering -- and weight them to the degree that they can experience such states. Do monkeys suffer more than mice from medical experiments? (The answer is almost surely "yes.") If so, all other things being equal, it is worse to run experiments on monkeys than on mice.
Skipping ahead a little, Carroll makes the following claims:
"I want to argue that this program is simply not possible. I'm not saying it would be difficult -- I'm saying it's impossible in principle. Morality is not part of science, however much we would like it to be. There are a large number of arguments one could advance for in support of this claim, but I'll stick to three.
1. There's no single definition of well-being.
People disagree about what really constitutes "well-being" (or whatever it is you think they should be maximizing). This is so perfectly obvious, it's hard to know what to defend. Anyone who wants to argue that we can ground morality on a scientific basis has to jump through some hoops.
First, there are people who aren't that interested in universal well-being at all. There are serial killers, and sociopaths, and racial supremacists. We don't need to go to extremes, but the extremes certainly exist. The natural response is to simply separate out such people; "we need not worry about them," in Harris's formulation. Surely all right-thinking people agree on the primacy of well-being. But how do we draw the line between right-thinkers and the rest? Where precisely do we draw the line, in terms of measurable quantities? And why there? On which side of the line do we place people who believe that it's right to torture prisoners for the greater good, or who cherish the rituals of fraternity hazing? Most particularly, what experiment can we imagine doing that tells us where to draw the line?"
This is where Carroll and I begin to diverge. He also seems to be conflating two separate issues: (1) He is asking how we can determine who is worth listening to. This is a reasonable question, but there is no way Carroll could answer it "precisely" and "in terms of measurable quantities" for his own field, much less for a nascent science of morality. How flaky can a Nobel laureate in physics become before he is no longer worth listening to -- indeed, how many crazy things could he say about matter and space-time before he would no longer even count as a "physicist"? Hard question. But I doubt Carroll means to suggest that we must answer such questions experimentally. I assume that he can make a reasonably principled decision about whom to put on a panel at the next conference on Dark Matter without finding a neuroscientist from the year 2075 to scan every candidate's brain and assess it for neurophysiological competence in the relevant physics. (2) Carroll also seems worried about how we can assess people's claims regarding their inner lives, given that questions about morality and well-being necessarily refer to the character of subjective experience. He even asserts that there is no possible experiment that could allow us to define well-being or to resolve differences of opinion about it. Would he say this for other mental phenomena as well? What about depression? Is it impossible to define or study this state of mind empirically? I'm not sure how deep Carroll's skepticism runs, but much of psychology now appears to hang in the balance. Of course, Carroll might want to say that the problem of access to the data of first-person experience is what makes psychology often seem to teeter at the margin of science. He might have a point -- but, if so, it would be a methodological point, not a point about the limits of scientific truth.
Remember, the science of determining exactly which books were in the Library of Alexandria is stillborn and going absolutely nowhere, methodologically speaking. But this doesn't mean we can't be absolutely right or absolutely wrong about the relevant facts.
As for there being many people who "aren't interested in universal well-being," I would say that more or less everyone, myself included, is insufficiently interested in it. But we are seeking well-being in some form nonetheless, whatever we choose to call it and however narrowly we draw the circle of our moral concern. Clearly many of us (most? all?) are not doing as good a job of this as we might. In fact, if science did nothing more than help people align their own selfish priorities -- so that those who really wanted to lose weight, or spend more time with their kids, or learn another language, etc., could get what they most desired -- it would surely increase the well-being of humanity. And this is to say nothing of what would happen if science could reveal depths of well-being that most of us are unaware of, thereby changing our priorities.
Carroll continues:
"More importantly, it's equally obvious that even right-thinking people don't really agree about well-being, or how to maximize it. Here, the response is apparently that most people are simply confused (which is on the face of it perfectly plausible). Deep down they all want the same thing, but they misunderstand how to get there; hippies who believe in giving peace a chance and stern parents who believe in corporal punishment for their kids all want to maximize human flourishing, they simply haven't been given the proper scientific resources for attaining that goal.
While I'm happy to admit that people are morally confused, I see no evidence whatsoever that they all ultimately want the same thing. The position doesn't even seem coherent. Is it a priori necessary that people ultimately have the same idea about human well-being, or is it a contingent truth about actual human beings? Can we not even imagine people with fundamentally incompatible views of the good? (I think I can.) And if we can, what is the reason for the cosmic accident that we all happen to agree? And if that happy cosmic accident exists, it's still merely an empirical fact; by itself, the existence of universal agreement on what is good doesn't necessarily imply that it is good. We could all be mistaken, after all.
In the real world, right-thinking people have a lot of overlap in how they think of well-being. But the overlap isn't exact, nor is the lack of agreement wholly a matter of misunderstanding. When two people have different views about what constitutes real well-being, there is no experiment we can imagine doing that would prove one of them to be wrong. It doesn't mean that moral conversation is impossible, just that it's not science."
Imagine that we had a machine that could produce any possible brain state (this would be the ultimate virtual reality device, more or less like the Matrix). This machine would allow every human being to sample all available mental states (some would not be available without changing a person's brain, however). I think we can ignore most of the philosophical and scientific wrinkles here and simply stipulate that it is possible, or even likely, that given an infinite amount of time and perfect recall, we would agree about a range of brain states that qualify as good (as in, "Wow, that was so great, I can't imagine anything better") and bad (as in, "I'd rather die than experience that again"). There might be controversy over specific states -- after all, some people do like Marmite -- but being members of the same species with very similar brains, we are likely to converge to a remarkable degree. I might find that brain state X242358B is my absolute favorite, and Carroll might prefer X979793L, but the fear that we will radically diverge in our judgments about what constitutes well-being seems pretty far-fetched. The possibility that my hell will be someone else's heaven, and vice versa, seems hardly worth considering. And yet, whatever divergence did occur must also depend on facts about the brains in question.
Even if there were ten thousand different ways for groups of human beings to maximally thrive (all trade-offs and personal idiosyncrasies considered), there will be many ways for them not to thrive -- and the difference between luxuriating on a peak of the moral landscape and languishing in a valley of internecine horror will translate into facts that can be scientifically understood.
"2. It's not self-evident that maximizing well-being, however defined, is the proper goal of morality.
Maximizing a hypothetical well-being function is an effective way of thinking about many possible approaches to morality. But not every possible approach. In particular, it's a manifestly consequentialist idea -- what matters is the outcome, in terms of particular mental states of conscious beings. There are certainly non-consequentialist ways of approaching morality; in deontological theories, the moral good inheres in actions themselves, not in their ultimate consequences. Now, you may think that you have good arguments in favor of consequentialism. But are those truly empirical arguments? You're going to get bored of me asking this, but: what is the experiment I could do that would distinguish which was true, consequentialism or deontological ethics?"
It is true that many people believe that "there are non-consequentialist ways of approaching morality," but I think that they are wrong. In my experience, when you scratch the surface on any deontologist, you find a consequentialist just waiting to get out. For instance, I think that Kant's Categorical Imperative only qualifies as a rational standard of morality given the assumption that it will be generally beneficial (as J.S. Mill pointed out at the beginning of Utilitarianism). Ditto for religious morality. This is a logical point before it is an empirical one, but yes, I do think we might be able to design experiments to show that people are concerned about consequences, even when they say they aren't. While my view of the moral landscape can be classed as "consequentialist," this term comes with a fair amount of philosophical baggage, and there are many traditional quibbles with consequentialism that do not apply to my account of morality.
Carroll again:
"The emphasis on the mental states of conscious beings, while seemingly natural, opens up many cans of worms that moral philosophers have tussled with for centuries. Imagine that we are able to quantify precisely some particular mental state that corresponds to a high level of well-being; the exact configuration of neuronal activity in which someone is healthy, in love, and enjoying a hot-fudge sundae. Clearly achieving such a state is a moral good. Now imagine that we achieve it by drugging a person so that they are unconscious, and then manipulating their central nervous system at a neuron-by-neuron level, until they share exactly the mental state of the conscious person in those conditions. Is that an equal moral good to the conditions in which they actually are healthy and in love etc.? If we make everyone happy by means of drugs or hypnosis or direct electronic stimulation of their pleasure centers, have we achieved moral perfection? If not, then clearly our definition of "well-being" is not simply a function of conscious mental states. And if not, what is it?"
Clearly, we want our conscious states to track the reality of our lives. We want to be happy, but we want to be happy for the right reasons. And if we occasionally want to uncouple our mental state from our actual situation in the world (e.g. by taking powerful drugs, drinking great quantities of alcohol, etc.) we don't want this to render us permanently delusional, however pleasant such delusion might be. There are some obvious reasons for this: We need our conscious states to be well synched to their material context, otherwise we forget to eat, ramble incoherently, and step in front of speeding cars. And most of what we value in our lives, like our connection to other people, is predicated on our being in touch with external reality and with the probable consequences of our behavior. Yes, I might be able to take a drug that would make me feel good while watching my young daughter drown in the bathtub -- but I am perfectly capable of judging that I do not want to take such a drug out of concern for my (and her) well-being. Such a judgment still takes place in my conscious mind, with reference to other conscious mental states (both real and imagined). For instance, my judgment that it would be wrong to take such a drug has a lot to do with the horror I would expect to feel upon discovering that I had happily let my daughter drown. Of course, I am also thinking about the potential happiness that my daughter's death would diminish -- her own, obviously, but also that of everyone who is now, and would have been, close to her. There is nothing mysterious about this: Morality still relates to consciousness and to its changes, both actual and potential. What else could it relate to?
"3. There's no simple way to aggregate well-being over different individuals. The big problems of morality, to state the obvious, come about because the interests of different individuals come into conflict. Even if we somehow agreed perfectly on what constituted the well-being of a single individual -- or, more properly, even if we somehow "objectively measured" well-being, whatever that is supposed to mean -- it would generically be the case that no achievable configuration of the world provided perfect happiness for everyone. People will typically have to sacrifice for the good of others; by paying taxes, if nothing else.
"So how are we to decide how to balance one person's well-being against another's? To do this scientifically, we need to be able to make sense of statements like "this person's well-being is precisely 0.762 times the well-being of that person." What is that supposed to mean? Do we measure well-being on a linear scale, or is it logarithmic? Do we simply add up the well-beings of every individual person, or do we take the average? And would that be the arithmetic mean, or the geometric mean? Do more individuals with equal well-being each mean greater well-being overall? Who counts as an individual? Do embryos? What about dolphins? Artificially intelligent robots?"
These are all good questions: Some admit of straightforward answers; others plunge us into moral paradox; none, however, proves that there are no right or wrong answers to questions of human and animal well-being. I discuss these issues at some length in my forthcoming book. For those who want to confront how difficult it can be to think about aggregating human well-being, I recommend Derek Parfit's masterpiece, Reasons and Persons. I do not claim to have solved all the puzzles raised by Parfit -- but I don't think we have to.
Practically speaking, I think we have some very useful intuitions on this front. We care more about creatures that can experience a greater range of suffering and happiness -- and we are right to, because suffering and happiness (defined in the widest possible sense) are all that can be cared about. Are all animal lives equivalent? No. Are all human lives equivalent? No. I have no problem admitting that certain people's lives are more valuable than mine -- I need only imagine a person whose death would create much greater suffering and foreclose much greater happiness. However, it also seems quite rational for us to collectively act as though all human lives were equally valuable. Hence, most of our laws and social institutions generally ignore differences between people. I suspect that this is a very good thing. Of course, I could be wrong about this -- and that is precisely the point. If we didn't behave this way, our world would be different, and these differences would either affect the totality of human well-being, or they wouldn't. Once again, there are answers to such questions, whether or not we can ever answer them in practice.
I believe that covers the heart of Carroll's argument. Skipping ahead to his final point:
"And finally: pointing out that people disagree about morality is not analogous to the fact that some people are radical epistemic skeptics who don't agree with ordinary science. That's mixing levels of description. It is true that the tools of science cannot be used to change the mind of a committed solipsist who believes they are a brain in a vat, manipulated by an evil demon; yet, those of us who accept the presuppositions of empirical science are able to make progress. But here we are concerned only with people who have agreed to buy into all the epistemic assumptions of reality-based science -- they still disagree about morality. That's the problem. If the project of deriving ought from is were realistic, disagreements about morality would be precisely analogous to disagreements about the state of the universe fourteen billion years ago. There would be things we could imagine observing about the universe that would enable us to decide which position was right. But as far as morality is concerned, there aren't."
The biologist P.Z. Myers has thrown his lot in with Carroll on a similar point:
"I don't think Harris's criterion -- that we can use science to justify maximizing the well-being of individuals -- is valid. We can't... Harris is smuggling in an unscientific prior in his category of well-being."
It seems to me that these two quotations converge on the core issue. Of course, it is easy enough for Carroll to assert that moral skepticism isn't analogous to scientific skepticism, but I think he is simply wrong about this. To use Myers's formulation, we must smuggle in an "unscientific prior" to justify any branch of science. If this isn't a problem for physics, why should it be a problem for a science of morality? Can we prove, without recourse to any prior assumptions, that our definition of "physics" is the right one? No, because our standards of proof will be built into any definition we provide. We might observe that standard physics is better at predicting the behavior of matter than Voodoo "physics" is, but what could we say to a "physicist" whose only goal is to appease the spiritual hunger of his dead ancestors? Here, we seem to reach an impasse. And yet, no one thinks that the failure of standard physics to silence all possible dissent has any significance whatsoever; why should we demand more of a science of morality?
So, while it is possible to say that one can't move from "is" to "ought," we should be honest about how we get to "is" in the first place. Scientific "is" statements rest on implicit "oughts" all the way down. When I say, "Water is two parts hydrogen and one part oxygen," I have uttered a quintessential statement of scientific fact. But what if someone doubts this statement? I can appeal to data from chemistry, describing the outcome of simple experiments. But in so doing, I implicitly appeal to the values of empiricism and logic. What if my interlocutor doesn't share these values? What can I say then? What evidence could prove that we should value evidence? What logic could demonstrate the importance of logic? As it turns out, these are the wrong questions. The right question is, why should we care what such a person thinks in the first place?
So it is with the linkage between morality and well-being: To say that morality is arbitrary (or culturally constructed, or merely personal), because we must first assume that the well-being of conscious creatures is good, is exactly like saying that science is arbitrary (or culturally constructed, or merely personal), because we must first assume that a rational understanding of the universe is good. We need not enter either of these philosophical cul-de-sacs.
Carroll and Myers both seem to believe that nothing much turns on whether we find a universal foundation for morality. I disagree. Granted, the practical effects cannot be our reason for linking morality and science -- we have to form our beliefs about reality based on what we think is actually true. But the consequences of moral relativism have been disastrous. And science's failure to address the most important questions in human life has made it seem like little more than an incubator for technology. It has also given faith-based religion -- that great engine of ignorance and bigotry -- a nearly uncontested claim to being the only source of moral wisdom. This has been bad for everyone. What is more, it has been unnecessary -- because we can speak about the well-being of conscious creatures rationally, and in the context of science. I think it is time we tried.