Morality (Sam Harris is Wrong, part 7 and conclusion)

Morality is perhaps the most philosophically difficult part of Sam Harris’s worldview. He argues against moral relativism, in favor of objective morality. This objective morality is based on maximizing “wellbeing for all conscious creatures.”

I doubt this article will change people’s minds. That is much easier to do in conversation, where you can react to where people are and address their criticisms as they come up. I’m writing this post mostly as a record of my views, though I might edit it in the future to clear up confusion.

Sam Harris builds his moral argument on top of a starting axiom: that the “worst possible misery for everyone is bad.” In a debate at Notre Dame, he said that, if you don’t think that the worst possible misery for everyone is “bad,” then “you don’t know what you’re talking about.”

But that’s not an argument; it’s a character attack. In order to tackle a philosophical topic as difficult as morality, we need to consider something that sounds ridiculous: whether or not the worst possible misery for everyone is “bad.” This will turn into a semantic game about the definition of “bad.” And if you don’t like semantic games, then you’re bad at philosophy.

[Tweet screenshot: sharris1.png]

Sam sometimes concedes that he does need starting axioms, and I’ll allow him that rebuttal, because we need starting axioms in every other domain of science and discourse as well.

But I have to take issue with that tweet. Physicists do justify that events have causes, mathematicians do derive 2+9=11 from more basic concepts, and so on. But Sam would just respond by questioning the basis of those more basic concepts, so I have to go deeper.

Here’s what my axioms are based on: consistently observable patterns.

How do we know that events have causes? Because, if I see a cause, I can consistently predict an effect (or, when told the effect, I can consistently infer the cause). If we could never do this, we would not know that events have causes.

How do we know that some memories are veridical? Because I consistently make predictions based on my memories that are then confirmed by observation. I remember where I keep my pencils, which allows me to consistently predict that every time I look there, I will find a pencil (ok, almost every time).

How do we know 2 + 9 = 11? Because we consistently observe that when we place 2 objects next to 9 objects, the result is 11 objects. In every observation ever made, there has never been an exception to this rule, so we assume it is a rule. If we observed an exception, math might still work internally, because it’s derived axiomatically, but it would no longer represent reality, and we would need to create a different math that does.
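To illustrate the “internal” half of that claim, here is a minimal sketch in Lean (my illustration, not Sam’s): the equation follows purely from the axiomatic definitions of the numerals and of addition, with no appeal to observed objects.

    -- A purely axiomatic check: 2 + 9 unfolds to 11 by the
    -- definition of natural-number addition; no observation of
    -- physical objects is involved.
    example : 2 + 9 = 11 := rfl

If observation stopped matching this, the proof would still go through; only the mapping from numbers to objects would break.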

It is true that, at some point in the process, I need to insert axioms, which I can use to prove that certain things are true by definition. But I will only consider an axiom if it has held true for all correctly conducted observations.

How do you discuss morality in those terms? What constitutes “bad”? No amount of consistent observation can settle the question. Every time you arrive at an axiom, someone can say they disagree with your definition. By contrast, no one is going to disagree with computational logic and set theory, the axiomatic cores of other disciplines.

Someone: But how do you know your sense data is reliable?

This seems to be a point of confusion for some people. I don’t claim that it’s reliable; my sense data just is what it is. I will use an analogy: sense data is like drawing a sequence of cards from a deck. If I draw A B A B A, then I can predict that the next card will most likely be B, because that fits the pattern. But the next card might not be B. It could be C. In that case, I would have to figure out whether the C is anomalous and can be ignored, or whether it is part of a bigger pattern. It is objectively true that the next card is most likely B, but it could still turn out not to be B.
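To make the analogy concrete, here is a minimal sketch in Python (my illustration; the card sequence is the hypothetical one above): it tallies which card has followed each card so far and predicts the most frequent successor, while remaining open to being wrong on the next draw.

    from collections import Counter, defaultdict

    def predict_next(draws):
        # Tally which card has followed each card so far.
        successors = defaultdict(Counter)
        for prev, nxt in zip(draws, draws[1:]):
            successors[prev][nxt] += 1
        # Predict the most frequent successor of the last card drawn.
        last = draws[-1]
        if not successors[last]:
            return None  # no pattern observed yet
        return successors[last].most_common(1)[0][0]

    print(predict_next(list("ABABA")))  # -> 'B': most likely, but not guaranteed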

Now back to Sam Harris. He would be right about objective morality if he could get everyone on board with the exact definition that he uses: morality = increasing the total wellbeing of all conscious creatures. From there, he can make moral claims:

Assuming morality means such and such, X

Assuming morality means such and such, Y

Assuming morality means such and such, Z

Alas, not everyone is on board.

Comparing disagreements about morality to disagreements about health (as Sam is apt to do) does not solve his problem. Someone who does not care about the conventional conception of “health” can choose to ignore it and not be healthy. But Sam’s morality is far broader: it concerns, basically, everything you should ever want to do. What can someone do if they disagree with it?

At this point, Sam might argue that everyone is already accidentally using his notion of morality, whether they like it or not. All moral systems, he argues, are ultimately defended on the basis of wellbeing.

He argues this, but it isn’t true. There are many moral systems that are not grounded in wellbeing at all. The most common alternative is to base morality on coercion versus consent. Under such a system, coercion is to be avoided and consent is to be honored, regardless of whether doing so causes humans to suffer or flourish. I use something similar in my political manifesto. Harris would argue that even this is grounded in wellbeing, but I can construct many situations in which the libertarian values consent over wellbeing, and that difference leads to different political views.

There is more than one alternative. Hedonism is a moral system in which you are concerned with your own wellbeing, because that is what you can feel, and not with anyone else’s, because you cannot experience the wellbeing of anyone else. The closest thing to that with a loyal following is Randian Objectivism, which defends selfishness. Finally, social contract theory is the version of morality that is actually practiced, where morality is descriptive rather than prescriptive: a description of the societal politics that decide what is right and wrong.

Sam says that it doesn’t matter whether people say they believe in his morality, because in practice, the choices people actually make are still governed by his version. When you touch a hot stove, or get tortured, you will express the manifest reality that suffering (as a component of “wellbeing”) does indeed matter, because you will feel the pain and react. Those, he argues, are the sensations readily evident to you, and that is how people actually behave.

Of course, it is a biological reality that people respond to pain and pleasure. But that need not dictate morality. We are not slaves to our biology. We can decide what we want based on logical considerations.

The only way that Sam’s morality really works is by smuggling other types of morality into it. If I always prefer being free and suffering to being coerced and feeling pleasure, he has to count my preference for freedom as a component of “wellbeing.”

If that’s the case, his morality is relegated to something meta-level, like the golden rule. Something like: “people have the moral preferences they have, and we should help people achieve their preferences.” Or should we? Maybe it’s not part of my morality. Now what?

Conclusion

This is the end of the series.

It may seem, at this point, like I hate Sam Harris. This is not the case. This series is, more or less, an excuse to explore my own philosophical beliefs on a variety of issues. Harris is a good vessel for that because he has touched on so many pop-philosophy issues.

One big takeaway is that, on a great many controversial philosophical issues, the truth lies somewhere at the intersection of the opposing views. If one side were unilaterally correct, the issue would not be controversial. The truth would be obvious.

There are cases where one side is obviously correct, and the evil side masquerades as the “nuanced take.” But, curiously, such issues are always “political,” not “philosophical.” With philosophical issues, by contrast, nuance is generally correct. A few examples:

  • Do we have free will, or is determinism true? Answer: both.
  • Is religion right or wrong? Answer: it depends what you mean. Religion is an adaptation.
  • Is AI taking over the world a sure thing or a far-off joke? Answer: neither.
  • Is identity politics good or bad? Answer: the question makes no sense in game theory.

And finally: is morality objective?

Well, here’s my take. The first principles are subjective. The use of those first principles isn’t (at least it shouldn’t be). Once you have first principles, they can be applied evenly. It’s the difference between writing a computer program (choosing first principles) and letting it run (applying them). We want to be consistent about what we call moral and immoral. Otherwise, that determination is at the mercy of whoever has the most persuasive voice.
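As a toy illustration of that analogy (mine, not Sam’s; the function and field names are hypothetical), here are two “moral programs” in Python. Writing them, i.e., picking the axioms, is the subjective step; running them is mechanical, and each returns the same verdict for the same case every time.

    # Two toy "moral programs." Choosing between them is the
    # subjective step; applying either one is mechanical and consistent.

    def wellbeing_morality(act):
        # Axiom: an act is moral iff it increases total wellbeing.
        return act["wellbeing_change"] > 0

    def consent_morality(act):
        # Axiom: an act is moral iff everyone affected consents.
        return act["all_parties_consent"]

    act = {"wellbeing_change": 5, "all_parties_consent": False}
    print(wellbeing_morality(act))  # True: the axioms disagree...
    print(consent_morality(act))    # False: ...but each applies consistently.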

In other words: there are many different versions of morality. Which one you pick is subjective. But any useful version of morality, once picked, ought to be applied objectively.

We just have to admit that our axioms, the definitions of words like “morality,” “good,” and “bad,” are ultimately subjective, even if their use is objective.