Tournament: Loyola | Round: 4 | Opponent: Marlborough JS | Judge: David Dosch
FW
Extinction comes first!
Pummer 15 [Theron, Junior Research Fellow in Philosophy at St. Anne's College, University of Oxford. "Moral Agreement on Saving the World." Practical Ethics, University of Oxford. May 18, 2015] AT
There appears to be a lot of disagreement in moral philosophy. Whether or not these many apparent disagreements are deep and irresolvable, I believe there is at least one thing it is reasonable to agree on right now, whatever general moral view we adopt: that it is very important to reduce the risk that all intelligent beings on this planet are eliminated by an enormous catastrophe, such as a nuclear war. How we might in fact try to reduce such existential risks is discussed elsewhere. My claim here is only that we – whether we're consequentialists, deontologists, or virtue ethicists – should all agree that we should try to save the world. According to consequentialism, we should maximize the good, where this is taken to be the goodness, from an impartial perspective, of outcomes. Clearly one thing that makes an outcome good is that the people in it are doing well. There is little disagreement here. If the happiness or well-being of possible future people is just as important as that of people who already exist, and if they would have good lives, it is not hard to see how reducing existential risk is easily the most important thing in the whole world. This is for the familiar reason that there are so many people who could exist in the future – there are trillions upon trillions… upon trillions. There are so many possible future people that reducing existential risk is arguably the most important thing in the world, even if the well-being of these possible people were given only 0.001 as much weight as that of existing people. Even on a wholly person-affecting view – according to which there's nothing (apart from effects on existing people) to be said in favor of creating happy people – the case for reducing existential risk is very strong. As noted in this seminal paper, this case is strengthened by the fact that there's a good chance that many existing people will, with the aid of life-extension technology, live very long and very high quality lives.
You might think what I have just argued applies to consequentialists only. There is a tendency to assume that, if an argument appeals to consequentialist considerations (the goodness of outcomes), it is irrelevant to non-consequentialists. But that is a huge mistake. Non-consequentialism is the view that there's more that determines rightness than the goodness of consequences or outcomes; it is not the view that the latter don't matter. Even John Rawls wrote, "All ethical doctrines worth our attention take consequences into account in judging rightness. One which did not would simply be irrational, crazy." Minimally plausible versions of deontology and virtue ethics must be concerned in part with promoting the good, from an impartial point of view. They'd thus imply very strong reasons to reduce existential risk, at least when this doesn't significantly involve doing harm to others or damaging one's character. What's even more surprising, perhaps, is that even if our own good (or that of those near and dear to us) has much greater weight than goodness from the impartial "point of view of the universe," indeed even if the latter is entirely morally irrelevant, we may nonetheless have very strong reasons to reduce existential risk. Even egoism, the view that each agent should maximize her own good, might imply strong reasons to reduce existential risk. It will depend, among other things, on what one's own good consists in. If well-being consisted in pleasure only, it is somewhat harder to argue that egoism would imply strong reasons to reduce existential risk – perhaps we could argue that one would maximize her expected hedonic well-being by funding life extension technology or by having herself cryogenically frozen at the time of her bodily death as well as giving money to reduce existential risk (so that there is a world for her to live in!). I am not sure, however, how strong the reasons to do this would be. 
But views which imply that, if I don't care about other people, I have no or very little reason to help them are not even minimally plausible views (in addition to hedonistic egoism, I here have in mind views that imply that one has no reason to perform an act unless one actually desires to do that act). To be minimally plausible, egoism will need to be paired with a more sophisticated account of well-being. To see this, it is enough to consider, as Plato did, the possibility of a ring of invisibility – suppose that, while wearing it, Ayn could derive some pleasure by helping the poor, but instead could derive just a bit more by severely harming them. Hedonistic egoism would absurdly imply she should do the latter. To avoid this implication, egoists would need to build something like the meaningfulness of a life into well-being, in some robust way, where this would to a significant extent be a function of other-regarding concerns (see chapter 12 of this classic intro to ethics). But once these elements are included, we can (roughly, as above) argue that this sort of egoism will imply strong reasons to reduce existential risk. Add to all of this Samuel Scheffler's recent intriguing arguments (quick podcast version available here) that most of what makes our lives go well would be undermined if there were no future generations of intelligent persons. On his view, my life would contain vastly less well-being if (say) a year after my death the world came to an end. So obviously if Scheffler were right I'd have very strong reason to reduce existential risk. We should also take into account moral uncertainty. What is it reasonable for one to do, when one is uncertain not (only) about the empirical facts, but also about the moral facts? 
I've just argued that there's agreement among minimally plausible ethical views that we have strong reason to reduce existential risk – not only consequentialists, but also deontologists, virtue ethicists, and sophisticated egoists should agree. But even those (hedonistic egoists) who disagree should have a significant level of confidence that they are mistaken, and that one of the above views is correct. Even if they were 90% sure that their view is the correct one (and 10% sure that one of these other ones is correct), they would have pretty strong reason, from the standpoint of moral uncertainty, to reduce existential risk. Perhaps most disturbingly still, even if we are only 1% sure that the well-being of possible future people matters, it is at least arguable that, from the standpoint of moral uncertainty, reducing existential risk is the most important thing in the world. Again, this is largely for the reason that there are so many people who could exist in the future – there are trillions upon trillions… upon trillions. (For more on this and other related issues, see this excellent dissertation). Of course, it is uncertain whether these untold trillions would, in general, have good lives. It's possible they'll be miserable. It is enough for my claim that there is moral agreement in the relevant sense if, at least given certain empirical claims about what future lives would most likely be like, all minimally plausible moral views would converge on the conclusion that we should try to save the world. While there are some non-crazy views that place significantly greater moral weight on avoiding suffering than on promoting happiness, for reasons others have offered (and for independent reasons I won't get into here unless requested to), they nonetheless seem to be fairly implausible views. And even if things did not go well for our ancestors, I am optimistic that they will overall go fantastically well for our descendants, if we allow them to.
I suspect that most of us alive today – at least those of us not suffering from extreme illness or poverty – have lives that are well worth living, and that things will continue to improve. Derek Parfit, whose work has emphasized future generations as well as agreement in ethics, described our situation clearly and accurately: "We live during the hinge of history. Given the scientific and technological discoveries of the last two centuries, the world has never changed as fast. We shall soon have even greater powers to transform, not only our surroundings, but ourselves and our successors. If we act wisely in the next few centuries, humanity will survive its most dangerous and decisive period. Our descendants could, if necessary, go elsewhere, spreading through this galaxy…. Our descendants might, I believe, make the further future very good. But that good future may also depend in part on us. If our selfish recklessness ends human history, we would be acting very wrongly." (From chapter 36 of On What Matters)