Find the Absurdities in Your Own Arguments

Part 1: Modus-Tollens

Moral arguments are hard because you are often arguing with someone who simply has different fundamental presuppositions than you do. It’s hard to apply formal logic to morality because when you reach those base axioms, you’re at an impasse. However, one thing we can do is show that the opponent’s moral system, if true, would lead to absurdity. Which is to say: if they applied their moral system consistently, it would force them to believe something absurd.

Of course, “absurd” is subjective, but if you can get your opponents themselves to denounce the implications of their own beliefs, then you have found a contradiction in their beliefs, which renders them automatically wrong. Showing that an argument “leads to something wrong” is called Modus-Tollens, an argument with this structure:

If A were true, then B would also have to be true.
But B is false.
Therefore, A is false.

This is also called “denying the consequent.” Even more formally:

Premise 1: A ⇒ B
Premise 2: ¬B
Conclusion: ¬A
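Since this form is purely mechanical, its validity can be checked by brute force over truth values. A minimal sketch in Python (my own illustration, not part of the original logic texts):

```python
# Modus tollens: from (A ⇒ B) and (¬B), conclude (¬A).
# Enumerate every truth assignment and confirm the conclusion
# holds in each case where both premises hold.

def implies(a: bool, b: bool) -> bool:
    # Material implication: A ⇒ B is false only when A is true and B is false.
    return (not a) or b

valid = all(
    (not A)                        # conclusion: ¬A
    for A in (True, False)
    for B in (True, False)
    if implies(A, B) and not B     # premises: A ⇒ B and ¬B
)
print(valid)  # prints True: the inference holds in every case
```

The only assignment satisfying both premises is A = false, B = false, and there the conclusion ¬A indeed holds, which is exactly why the form is valid.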

It should be noted, however, that if your opponent does not agree that B is false (in this case, “bad”), then they are off the hook. I want to give you an example of that, so let me start with an argument that I came up with.

Part 2: An Argument

The following is an argument that hunting of certain species is not unethical. I actually do believe this.

The main argument that hunting is immoral comes from veganism. That is what I’ll be addressing here.

When I look up veganism, the main definition I get is along the lines of: “within all practicality, the philosophy, lifestyle, and diet which reduces or eliminates the suffering and killing of animals.”

This definition is a conjunction of two elements. The philosophy (1) prohibits the suffering of animals, and (2) prohibits the killing of animals. If I address one of these tenets with respect to hunting, I will consider it a partial rebuttal of veganism; if I address both, I will consider it a full rebuttal of veganism.

First, to address suffering. I generally agree with veganism that the suffering of animals is bad. However, I don’t believe hunting increases the suffering of animals. Most hunters use guns, and death by gunshot is usually considered one of the most painless ways to die. I doubt it would be more painful than the deaths these animals would experience otherwise. They would otherwise typically die of natural causes: hunger, predators, wounds, etc., all generally more painful than a gunshot.

Second, to address death. I do not condone the hunting of endangered species; I only defend the hunting of species whose populations hunting would not meaningfully impact. Furthermore, I do not condone the hunting of species that have achieved advanced levels of cooperation and communication; I defend only the hunting of animals like deer.

Under that constraint, I do not agree with vegans that the death of animals is inherently bad. Animals like deer are basically fungible: one deer is not meaningfully different from any other deer. If one deer dies, and another deer is born, which then grows to the age of the killed deer, and the deer population stays the same during the whole period, what has meaningfully changed?

Humans, by contrast, are almost endlessly differentiable. Among the many things that make humans unique, they have an ability to communicate that is simply in a different category from that of all other animals. Humans can tell stories, share their memories, ask each other questions, pass down knowledge and wisdom, etc. When you kill a human, then, you are destroying something non-replicable: you are destroying their mind with all of its memories and personality dispositions. Other animals have memories and personality dispositions too (albeit not as rich as humans’, in the case of the relevant hunted animals), but their memories don’t matter as much, because they can never communicate that information to others the way humans can.

Vegans who like philosophy have practiced responding to claims like the one I just made. They will bring up counter-examples of mentally disabled people who cannot talk, and who, in extreme cases, cannot communicate much more than animals can. The counter-example that lands best in my mind is that of newborn babies. No matter what trait you point to and claim makes humans unique, vegans can always say, “newborn babies don’t have that trait, but you wouldn’t support killing newborns!”

You can quickly start arguing about the “potential” of newborns. In this way the philosophy of veganism gets strangely wrapped up with the philosophy of abortion. You should either condemn killing-without-suffering in all cases, or not.

Fortunately, I have a case for protecting babies that is unrelated to the above. Newborns and disabled people, regardless of their personal ability to speak, are still part of society; they are spoken about, they are treated as human beings, and general rules that apply to all humans apply to them. I am even willing to extend this argument to non-human domestic pets. They are integrated into society, and offhandedly killing them would disrupt society. I acknowledge that this argument is different from and weaker than my previous arguments. For one, this argument is culturally contingent.

I can make a similar argument in utilitarian terms. Babies usually have people who greatly care about them, like their mothers. To kill the baby would be an aggression against the mother, causing her great harm. “But what if there was a case of an orphaned baby, with absolutely no one who cared about them?” Yes, you can undeniably spell out edge cases. However, for society to function, we need certain safeguards and bright lines. We need to say, “These are the strict rules. Rule 1: you can’t kill babies. I don’t want your excuses!”

Our moral systems cannot functionally depend on granular, philosophically abstracted guides that force you to ask, “did anyone really care about this baby before he was killed?” That is madness. For the same reason, we have rules against drunk driving, even though it does not always result in accidents, and rules against incest, even though it could conceivably be done safely and non-abusively: we don’t want to take chances. For my current argument, therefore, I am reluctantly a “rule utilitarian”: rather than stopping at your basic principles, you formulate a set of rules that, if enforced, would work best for whatever moral axioms you are trying to optimize.

As you can see, I sort of “debate bro logic treed” this out. Maybe you were convinced by that argument, or maybe you weren’t. I merely give this argument as an example.

I made my core argument, predicted counter-arguments, and responded to those counter-arguments. But even that did not go far enough. You may have noticed a sort of loose end near the end of the argument. My argument seems to become weaker in the last third, leaving open many possible objections. Often, the best objections take the form of thought experiments. Therefore, it behooves me to find the best thought experiment that goes against me – the best Modus-Tollens attempt, if you will.

With that, you have to accept whatever a valid thought experiment would force you to accept in order to keep your argument consistent. There is enormous power in this: in owning the corollaries, in being able to “bite the bullet” like the guy in the “YES” meme. Your ethical framework probably has some drawbacks, and if you are intellectually honest, you should be able to recognize them. At least that is better than being ignorant of the valid criticisms of your own position.

Even if you think you’ve found the one true flawless moral system, it is only flawless to you. You should acknowledge what aspects of it other people might object to, regardless of whether you share their inclinations. It is merely an act of noting the bounds of your argument: what it gets you, and what it doesn’t. Arguments that get you everything usually in fact get you nothing.

Therefore I will provide you with a counter-argument and thought experiment that is a sort of Modus-Tollens of my argument.

Part 3: A Counter-Argument and Thought Experiment

John is the only surviving human after Earth was obliterated. John is on a space ship that is looking for a habitable planet on which to repopulate humanity.

John’s space ship is equipped with a very capable human-baby-incubator. The machine can incubate a newly-“born” baby in less than five minutes. The machine has enough material on the ship to create babies more or less indefinitely. Furthermore, John cannot de-activate or destroy the device. The ship also has a system on board to raise babies, plus security and healthcare systems to prevent and reduce human suffering, respectively.

However, the incubation machine has a peculiar feature. This incubator only “wants” there to be at most one newborn baby on the space ship at any given time. When it becomes active, the incubator will create a baby, then it will begin scanning the ship. Whenever it detects that there is no longer a baby on the ship, it will instantly begin gestating a new one.

Unfortunately, John is a psychopath, incapable of feeling empathy, regret, or shame. Suddenly, the incubator activates for the first time and gestates a baby. John promptly kills the baby by stabbing it in the back of the neck. Because the baby does not technically suffer pain in this ordeal, the ship’s security system does not come online. The gestation machine begins work on another baby.

The question is: would it be ethically okay for John to continue killing babies in such a manner, disposing of their bodies by throwing them into space, until he is satisfied? At worst, John will eventually die of old age, and the ship will work to repopulate humanity whether John wants it to or not. Is John doing anything clearly ethically wrong?

Maybe this description was a bit superfluous, with a lot of unnecessary detail, but I wanted to iron out the edge cases in order to isolate the variable we care about. Also, if I do say so myself, it’s a more compelling story this way.

Most people would answer that there is something clearly wrong with what John is doing. Although I definitely wouldn’t condone it, the moral positions I’ve taken so far force me to say that no, it’s not fundamentally immoral. I would be appalled and disgusted by it; however, I myself am not present in the thought experiment; I am not there on the space ship to be appalled at him. There is no society to condemn him, the babies are basically interchangeable in this scenario, and he isn’t causing them to suffer. However much I am disgusted by this scenario, I do not base my moral system on what disgusts me; otherwise I would have to oppose many harmless things.

By preemptively owning that result, I am (I hope) making it harder for people to oppose me on objective grounds, because I am “owning” potential criticisms. I recommend doing this for all of your philosophical positions.