Philosophy, et cetera

Providing the questions for all of life's answers.

Updated: 2018-04-26T17:31:58.724+01:00


Three kinds of offsetting


Distinguish the following kinds of "offsetting" behaviour:

* Preventative offsetting -- when potential harms depend on just the global amount of something (say, greenhouse gas emissions), it seems that one can prevent the potential harm done by one's contributions by "offsetting", or paying to reduce others' contributions, so that the net effect of one's behaviour leaves the global magnitudes unchanged.
* Cause-specific (or harm-type) offsetting -- when you cause a harm of a certain type, but then seek to 'offset' the badness of this by preventing a like harm from occurring elsewhere. E.g. donating to a relevant environmental charity after polluting your local river.
* Cause-neutral (or net utility) offsetting -- when you cause a harm of a certain magnitude, and then seek to 'offset' the badness of this by preventing a similar amount of harm elsewhere. E.g. donating to a global poverty charity after polluting your local river.

Preventative offsetting seems the most easily justified. While he doesn't use this exact terminology, this is the basic idea that Will MacAskill appeals to in Doing Good Better to distinguish carbon offsetting (which he thinks can justify our carbon emissions) from things like murder offsetting (which surely can't justify murder).

But preventative offsetting can only get us so far. For one thing, it's not entirely obvious that even carbon offsets are entirely preventative: it seems an empirical possibility that there could be some localized effects of greenhouse gas emissions, such that planting trees in Peru doesn't literally undo the effects of flying around Europe. Moreover, our modern lives entail all sorts of localized environmental harms (e.g. air pollution from driving cars, creating demand for electricity that's partly from non-renewable sources, habitat destruction to make room for our housing and food needs, adding to oceanic plastic buildup, etc.) that aren't subject to preventative offsetting.

So if we want to be able to "make up for" these harms we (and our children) cause, we will need to appeal to some form of non-preventative offsetting (whether cause-specific or cause-neutral).

Perhaps the most promising way to do this (without absurdly justifying murder offsets or the like) would be to develop Scott Alexander's idea that "you can offset axiology, but not morality." Scott happened to appeal to a specifically rule-utilitarian conception of morality, but we can broaden the idea beyond this. The basic idea (as I would prefer to develop it) is just that you can offset diffuse harms but not specific wrongs or rights-violations.

On any plausible conception of rights (whether foundationally consequentialist, contractualist, or taking the rights themselves as foundational), we have a right that our neighbours not murder us, but no right that they refrain from exhaling carbon dioxide into the atmosphere. So there we have our easy cases: carbon offsetting is legitimate, whereas murder offsetting is not. This analysis also plausibly identifies the difficult cases: reasonable theories disagree about whether factory farming involves rights violations or just harms to the animals' welfare, and so the legitimacy of meat offsetting (e.g. paying others to become vegetarian in your place) is a similarly open question.

My hope is that this analysis could convince even non-consequentialists that we can legitimately offset the environmental harms entailed by procreation (and indeed by our own continued existence), thereby undermining the environmental anti-natalist arguments of Travis Rieder, Sarah Conly, and others. Our everyday environmental externalities seem like precisely the kind of diffuse, untargeted harms that can legitimately be offset, after all.

I'm further inclined to think that there's no clear moral reason to prefer cause-specific moral offsetting: if we can more efficiently make the world a better place through other means, we should feel free to do so.
And this makes it much more fe[...]

Moving to Miami!


I'm very happy to say that Helen & I will be joining the philosophy department at the University of Miami next year!

We bid farewell to many fantastic colleagues, and will certainly miss daffodil season in York...

... but are thrilled to be joining the outstanding philosophical community at UM!

On Parfit on Knowing What Matters


If I had to pick a "favourite philosopher", it would be Derek Parfit. His book Reasons and Persons is, in my view, the best there is -- containing striking insights and arguments on every page, and laying the groundwork for basically all subsequent work on the deepest puzzles surrounding consequentialism, personal identity, and population ethics. So it was a great honour to have him respond to my paper 'Knowing What Matters' in his third volume of On What Matters. I wish he were still around to continue the conversation, as I would have liked to prompt him to engage more closely with various claims (that he was instead initially inclined to reject by just re-asserting his antecedent view). Sadly, that's no longer possible. But I guess I can at least continue my side of the conversation, and perhaps other readers will suggest further comments and responses that could be made on Parfit's behalf.

'Knowing What Matters' argues that Parfit concedes too much to the moral skeptic, and explores how the robust realist might defensibly take a less conciliatory line on moral epistemology. In particular:

1. I argue: Given that the moral facts are causally inefficacious, Street-style skeptical worries -- that the causal origins of our beliefs are unrelated to their truth-makers -- do not depend specifically on the idea that our moral beliefs have evolutionary causes. So I think it was a mistake for Parfit to be so invested in trying to refute that particular causal story. It renders his view hostage to empirical fortune, and misses the larger philosophical issue.

Parfit responds (OWM v.3, p.286):

[W]e cannot defensibly assume that all possible causes of our normative beliefs would have been unrelated to their truth. We do not yet know enough about these other possible causes to be justified in making any such assumption.

What conceivable natural causes would qualify as sufficiently "related" to the (non-natural) moral facts, if evolutionary causes do not? Parfit complains that the same normative beliefs would have evolved "whether or not they were true." (287) But Parfit accepts the standard robust-realist view that normative facts are not causally efficacious. So whatever the natural causes of our normative beliefs, imagining (per impossibile) switching the truth values of the believed proposition can't affect the causal fabric of the natural world, and hence can't affect whether we come to have the belief in question.

So, if robust realists are to make sense of the possibility of moral knowledge, we must reject Parfit's truth-switching test. (Indeed, the need to reject such strict demands for 'sensitivity' to the truth is a familiar lesson from considering radical skepticism.) But on a looser understanding of the needed "relation" to truth -- mere reliability, say -- it's unclear why we must think that evolutionary causes (for social creatures in our ecological niche) are "unrelated" to the moral truth.

2. We do better, I argue, to regard the causal origins of a (normative) belief as lacking intrinsic epistemic significance. The important question is instead just whether the proposition in question is itself either intrinsically credible or otherwise justified. Parfit rejects this (p.287):

Suppose we discover that we have some belief because we were hypnotized to have this belief, by some hypnotist who chose at random what to cause us to believe. One example might be the belief that incest between siblings is morally wrong. If the hypnotist's flipped coin had landed the other way up, he would have caused us to believe that such incest is not wrong. If we discovered that this was how our belief was caused, we could not justifiably assume that this belief was true.

I agree that we cannot just assume that such a belief is true (but this was just as true before we learned of its causal origins -- the hypnotist makes no difference). We need to expose it to critical reflectio[...]

Cognitivism and Moral / Philosophical Peer Intransigence


Richard Rowland's forthcoming Analysis paper on 'The Intelligibility of Moral Intransigence' presents a curious argument against moral cognitivism.  It goes roughly as follows:

P1. Beliefs track perceived evidence.
P2. Perceived peer disagreement is perceived evidence.
Hence C1. Peer intransigent judgments are not beliefs.
P3. Moral peer intransigence is intelligible: moral judgments can be peer intransigent.
Hence C2: Moral judgments are not beliefs.

The argument seems to prove too much, insofar as one could just as well replace 'moral' with 'philosophical' in P3, but non-cognitivism about all philosophy seems pretty absurd.

So, where does it go wrong?  I suggest P2.  As per my previous post, peer disagreement is not necessarily epistemically relevant.  A peer's disagreement is only evidence against your current view if it is evidence that you've made a mistake by your own lights.  But in many (philosophical, but especially moral) cases it is instead merely evidence that your peer's bedrock assumptions differ from yours.  In such cases, intransigence is not only intelligible, I argue, but also perfectly reasonable.  The mere fact that some people have different priors or bedrock assumptions from yours is not in itself any kind of evidence against your priors or bedrock assumptions.  So P2 is false.  Perceived peer disagreement is only perceived evidence when further conditions are met (as they are in the standard cases -- arithmetic disputes when calculating the dinner bill, etc. -- that Rowland uses to motivate the premise).

[Terminological variant: If one defines an 'epistemic peer' narrowly, as someone you regard as generally just as likely as yourself to reach the truth on questions in this domain, then we should instead reject the assumption that peer intransigence is intelligible.  In the proposed cases, the agent should reject the claim that the disputant is their epistemic peer in this strict sense.  Even if they grant that they are equally procedurally rational, that is insufficient.  After all, it is incoherent to think that different priors or fundamental epistemic methods are just as likely to be correct as your own -- see the last section of Elga's 'How to Disagree about How to Disagree'.]

Philosophical Expertise, Deference, and Intransigence


Here's a familiar puzzle: David Lewis was a better philosopher than me, and certainly knew more and had thought more carefully about issues surrounding the metaphysics of modality.  He concluded that modal realism was true: that every concrete way that a world could be is a way that some concrete universe truly is (and that these concrete universes serve to ground modal truths -- truths about what is or is not possible).  But most of us don't feel the slightest inclination to defer to his judgement on this topic.  (I might defer to physicists on the 'Many Worlds' Interpretation of quantum mechanics, but that's a different matter.)  Are we being irrational?

A familiar response: philosophical 'experts' themselves disagree.  Kripke, for example, may be weighed against Lewis on this topic.  But then it might seem to follow that I should suspend judgment entirely: if even the top experts on the topic cannot agree, what hope do I have of coming to a justified conclusion here?  And it would seem epistemically shady to cherry-pick the experts who agree with you, claim that you're responsibly deferring to them, and just ignore all the ones that don't.

I think a better response is available.

The puzzle presupposes that we ought to defer to experts.  But that only makes sense if we've reason to expect that expertise in a domain sufficiently increases epistemic reliability, i.e. the likelihood of true beliefs.  That's certainly the case for many domains -- it's why we should defer to scientific experts, for example.  But it arguably isn't so for philosophy in general.

Philosophical expertise seems compatible with being completely off the rails when it comes to the substantive content of one's philosophical views.  And this is to be expected once we appreciate that (i) there are many possible internally coherent worldviews, (ii) philosophical argumentation proceeds through a mixture of ironing out incoherence and making us aware of possibilities we had previously neglected, and so (iii) even the greatest expertise in these skills will only help you to reach the truth if you start off in roughly the right place.  Increasing the coherence of someone who is totally wrong (i.e. closer to one of the many internally coherent worldviews that is objectively incorrect) won't necessarily bring them any closer to the truth.

To put a more subjective spin on it: one's only hope of reaching the truth via rational means is to trust that your starting points are in the right vicinity, such that an ideally coherent version of your worldview would be getting things right.  So we've only got reason to defer to others if their verdicts are indicative of what our idealized selves would conclude.  Often, we can reasonably judge that other philosophers have views so alien to our own that it's unlikely that procedurally ideal reflection (increasing internal coherence) would lead us to share those views.  In such cases, we've no reason to defer to those philosophers, however 'expert' they may be.

(Terminological variant: If you want to build into the definition of a subject-area 'expert' that deference to expert judgement is mandatory, then you should restrict attributions of expertise to those whose starting points are sufficiently similar to your own.)

tl;dr: We should only be epistemically moved by peer disagreement (and related phenomena) when we take the other person's views to be evidence of what we ourselves would conclude upon ideal reflection.  Philosophical intransigence is thus often justified, insofar as we can justifiably believe that an improved version of our view could be developed that is at least as internally coherent as the opposing views.
This remains true even if we judge that the defenders of the opposing views are (in purely procedural terms) smarter / better philosophers than we are ourselves. [...]

2017 in review


(Past annual reviews: 2016, 2015, 2014, 2013, 2012, 2011, 2010, 2009, 2008, 2007, 2006, 2005, and 2004.)

Off the blog... Mostly I've been occupied this year by the arrival of this little guy!

Professionally, I was delighted to finally find a good home for my 'Willpower Satisficing' paper (in Noûs!).  'Why Care About Non-Natural Reasons?' was accepted by APQ.  And a couple of previously-accepted papers -- 'Knowing What Matters' and 'Rethinking the Asymmetry' -- appeared in print, while 'Fittingness Objections to Consequentialism' was officially approved for an OUP edited volume.  Busy times!

On the blog...

Applied Ethics

* A series of posts took a critical look at a healthcare fiasco unfolding in the UK which our family experienced first-hand: UK shuts down Independent Midwives; Medical Indemnity: Protection or Compensation?; and Assessing the NMC's Defense of its Independent Midwifery Ban.
* Universalizing Tactical Voting rebuts the moral objection to tactical voting.
* Anomaly vs Huemer on Immigration -- explaining why the default presumption should be to favour freer immigration.

Moral Theory

* Aggregating the Right Moments addresses one intuitive reason for thinking that it'd be better to give one person half a million minutes (i.e. one year) more life than to give a million people one minute more each.
* Nanoseconds that Matter explains why even arbitrarily small durations of time should not be assumed to lack value entirely.
* Harms, Benefits, and Framing Effects defends the existence of 'framing effects' against the objections of a recently published paper.
* Iterating Badness in the Paradox of Deontology explores an objection to Setiya's new paper, 'Must Consequentialists Kill?'
* Drawing the Consequentialism/Deontology Distinction does just what it says on the tin.

Other

* Our Zombie Bodies, and Physicalist Epiphenomenalism discusses the idea that our mental properties should not be attributed to our physical bodies in addition to our person, and so our bodies are, in a sense, philosophical zombies.
* Intelligible Non-Natural Concerns explores exceptions to the rule that we shouldn't care about morality 'de dicto'.

Happy New Year! [...]

Giving Game 2017 results


This past week I ran a 'Giving Game' for my Effective Altruism class, letting each student decide (after class discussion) how to allocate £100 of my charitable budget for the year.  There was just one restriction: if they wanted to pick something other than one of the four EA Funds options (which have expert managers directing funds in the fields of "global health & development", "animal welfare", "long-term future", and "EA community"), they had to convince at least one other classmate to join them.  In the first seminar group, half the class ended up choosing alternative options; in the second, all stuck with the EA Funds.  The end result was a bit more varied (and less conservative) than the first time I tried this, so that was interesting to see.  (I think it helped both to allow individual discretion rather than requiring group consensus decisions, and also to have the new "EA Funds" available to enable responsibly contributing to a cause area without having to identify or select particular outstanding organizations within the area.  You can now just make the value judgment, and defer to trusted experts on the empirical details.)

Here's the final breakdown for both seminar groups combined:

* Global Health & Development [EA fund]: £900
* Animal Welfare [EA fund]: £200
* Long-term Future [EA fund]: £500
* EA Community [EA fund]: £300
* Cool Earth: £400
* Good Food Institute: £200
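For what it's worth, the listed amounts can be summed as a quick sanity check (the inference that this implies 25 participants at £100 each is mine, not a figure stated above):

```python
# Combined Giving Game allocations (GBP), as listed in the breakdown above.
allocations = {
    "Global Health & Development (EA fund)": 900,
    "Animal Welfare (EA fund)": 200,
    "Long-term Future (EA fund)": 500,
    "EA Community (EA fund)": 300,
    "Cool Earth": 400,
    "Good Food Institute": 200,
}

total = sum(allocations.values())
print(total)        # 2500
print(total // 100) # 25 -- implied number of students, if each directed £100
```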

Where would (or better: do) you choose to give, in hopes of achieving the most good?

I'd definitely stick to the EA funds myself, trusting the managers to know the best giving opportunities within their area better than I do.  I've traditionally favoured the 'meta' approach of funding EA movement-building to catalyze additional donations, but the Long-term Future (yes, including AI risk) is hugely important and unduly neglected by people in general.

Drawing the Consequentialism / Deontology Distinction


I previously mentioned that Setiya's 'Must Consequentialists Kill?' defines consequentialism vs deontology in a way that I think we should resist.  (This is part of what allows Setiya to reach his surprising-sounding conclusion that "consequentialists" aren't committed to killing one to prevent more killings.)  Setiya defines "consequentialism" as the conjunction of two theses:

ACTION-PREFERENCE NEXUS: Among the actions available to you, you should perform one of those whose consequences you should prefer to all the rest.

AGENT-NEUTRALITY: Which consequences you should prefer is fixed by descriptions of consequences that make no indexical reference to you.

I think this is both too strong and too weak.  It is too strong because consequentialism doesn't require agent-neutrality.  Egoism is clearly consequentialist in nature, as are other forms of agent-relative welfarism (e.g. views that are utilitarian at base, but then weaken the strict equality of interests to allow agents to weight the interests of their nearest and dearest more heavily than those of strangers).  As Setiya acknowledges, this is a "terminological" point.  Nonetheless, some ways of using terms do a better job than others of carving nature at the joints, and if we're looking for a fundamental structural divide with which to categorize all normative ethical theories, we make a real mistake if we don't recognize that egoism, utilitarianism, and agent-relative welfarism all belong on the same side.

On the other hand, Setiya's definition is too weak because the Action-Preference Nexus is (as he himself notes) "not a claim about the order of explanation but about the congruence of reasons".  And such a congruence seems difficult to deny -- even many self-identified deontologists [though not all!] will surely say that we should prefer not to act wrongly, for example.  But so long as the explanation of the congruence takes some reasons for action to be prior to reasons for desire (i.e. takes the right to be prior to the good), then the view in question would seem best categorized as "deontological" in nature, not "consequentialist".  This seems so even if the view in question is also agent-neutral, as is the view Setiya defends in his paper.  You can (albeit with some difficulty) hold that everyone ought to prefer that agents not engage in utilitarian sacrifice, or kill one to prevent more killings.  But insofar as these are moral side-constraints on action that you are elevating to the status of universal desirability, the resulting view seems all the more "deontological" in nature.

One way to see this is to invoke my 'naturalization' test for axiological vs deontic reasons.  Suppose it were not an agent, but rather a bolt of lightning, that killed the one and thereby saved the five.  How would the resulting state of affairs compare to the alternative where the five were killed?  Presumably it's less bad: fewer people died, and nobody was treated as a means, had their rights violated, or was otherwise "wronged" in any way.

Since it makes such a difference to our (commonsense) moral assessment whether the one is killed by an agent or by natural causes, it seems that the injection of agency into the picture is responsible for flipping our verdicts about desirability in the original "killing one to save five" case -- suggesting that we only prefer that the agent not kill the one because of an antecedent judgment that the act would be morally wrong (rather than judging it wrong because it brings about an antecedently undesirable outcome, as a consequentialist version of the view would have it).

[By contrast, consider how a properly consequentialist version of Setiya's view would go.  For "killing one to save five" to be antecedently undesirable (i.e. undesirable independently of a[...]

Iterating Badness in the Paradox of Deontology


In 'Must Consequentialists Kill?' (forthcoming in J Phil), Setiya convincingly argues against the "orthodox" view that commonsense verdicts about the ethics of killing entail agent-relativity.  Instead, he observes: "In general, when you should not cause harm to one in a way that will benefit others, you should not want others to do so either." (p.8 of the pre-print version)  For example, it's not just the agent who should prefer to avoid killing one to prevent five killings; we should generally prefer that others likewise avoid killing one to prevent five other killings.  The preference here mandated by commonsense morality is thus agent-neutral in nature: it makes no essential reference to your role in the situation.

This seems right (I mean, correct as a claim about "commonsense morality", not actually right...), and it avoids one horn of the paradox of deontology, namely worries about how the reasons to act morally, if merely agent-relative, could have the authority to trump more personal / self-interested reasons for the agent.  But the core puzzle remains: if killing is so bad, why should we not be concerned to minimize its occurrence?  Setiya's answer seems to be, in effect, that killing per se is not so bad (being roughly as bad as accidental death).  What's very bad is instead a more specific kind of killing, such as killing as a means to preventing greater harms (or whatever the right generalization of deontological intuitions over cases turns out to be).

And the badness must iterate.  Consider: according to Setiya, you should prefer that agents not kill one even as a means to preventing five utilitarians from each doing the very bad thing of killing one to prevent five other killings.  So killing as a means to preventing a greater number of very bad killings must itself be worse than five such very bad killings (which are themselves worse than 25 ordinary killings).  That is, it must be very very bad.  And so on.  In general, the badness of killing someone (in an intuitively disapproved-of way) must be a function of how much instrumental good is achieved by the killing, in order to ensure that the killing remains bad/undesirable on net.

Such a view strikes me as not very substantively plausible.  It is not worse to kill someone for the sake of helping others than to kill them (let alone five...) for purely selfish ends.  So I think we should reject the "commonsense" view about the ethics of killing (given that it is seen to be absurd upon reflection), and embrace a more traditional form of (welfarist) consequentialism which treats the "commonsense" intuition as something like a useful rule of thumb rather than a fundamental moral principle.

Setiya disputes this by suggesting that we focus on the moral stakes as the situation unfolds in time:

At the beginning of Five Killings, five people are going to be killed. In One Killing to Prevent Five, five people are going to be killed unless they are saved by the pushing of the button, which kills an innocent stranger by dropping him off a bridge into the path of the speeding trolley. The situation in which someone is going to be killed unless they are saved in this way is as bad as the situation in which they are going to be killed. Ethically speaking, the damage has been done. [...] It makes things worse, not better, that the button is pushed, so that the innocent stranger dies. That is why One Killing to Prevent Five is worse than Five Killings: it starts out worse and then declines. If we think of the temporal unfolding of events in One Killing to Prevent Five, we can make sense of why we should prefer Five Killings.

Of course, this is not an argument intended to persuade the traditional consequentialist, but rather an "int[...]
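The iteration described above can be made vivid with a toy calculation: each level of "preventive" killing must be worse than the five killings it prevents, so its badness (measured in units of ordinary killings) must grow at least geometrically with the level of iteration. A minimal sketch, assuming this simple recurrence is a fair reconstruction of the argument (the function name and the exact figures are my illustration, not Setiya's formalism):

```python
def killing_badness_lower_bound(level):
    """Lower bound on the badness of a level-`level` killing,
    in units of ordinary (level-0) killings.

    A level-k killing -- killing one as a means to preventing five
    level-(k-1) killings -- must be worse than the five killings it
    prevents, so its badness must exceed 5 times the level-(k-1) bound.
    """
    if level == 0:
        return 1  # an ordinary killing
    return 5 * killing_badness_lower_bound(level - 1)

for level in range(4):
    print(level, killing_badness_lower_bound(level))
# level 1 must exceed 5 ordinary killings, level 2 must exceed 25, and so on
```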

Intelligible Non-Natural Concerns


I've previously argued that -- even by non-naturalist lights -- what matters are various natural properties (e.g. causing pleasure or pain), and the role of the non-natural normative properties is instead to "mark" the significance of these natural properties.

But it's worth flagging that there are exceptions. While I take it that typically what matters are natural features of the world, this is not a universal restriction on what matters. After all, normative properties plausibly have the further normative property of being worthy of philosophical scrutiny. So I do not deny that there may be special cases when it is perfectly reasonable to take an interest in morality de dicto. (Responding to moral uncertainty may be another such case.) My claim was the more modest one that non-naturalism does not commit us to having non-natural properties take center stage in our moral lives.

The special cases where normative properties themselves are of legitimate interest are precisely cases in which it no longer seems perverse or unintelligible to take a special interest in a non-natural property. There's clearly nothing unintelligible about taking a philosophical interest in non-natural properties, after all. (They raise all sorts of interesting questions!) The case of moral uncertainty may be less obvious, so let me discuss that a bit further.

Suppose you aren't sure whether it's wrong to painlessly kill happy chickens, as you are unsure whether the cognitive capacities that they possess (in particular, their limited degree of psychological connectedness from one moment to the next) are sufficient to ground a normative interest in continued (happy) survival. The question about which you are unsure is not an empirical one -- we may suppose that you are fully aware of all the relevant empirical details concerning chicken psychology. You know what relations of psychological continuity and connectedness do and do not hold between the various chicken timeslices. You just aren't sure which relations are the ones that matter. Since you generally desire to avoid causing harm, you specifically desire not to kill the chicken if the relation binding together its temporal parts is one that matters (in the sense of giving it a normative interest in survival).

This seems a distinctively abstract sort of property that you are (quite reasonably) concerned with in this case. But then it doesn't seem any great problem if it turns out that the property in question is a non-natural one. Of course, it is not just any old non-natural property. And it is not an entirely free-floating concern, disconnected from all your other concerns. On the contrary, this property marks being normatively alike to other things that you rightly care about, and that uncontroversially matter. So perhaps this connection to other, more concrete, concerns can help to render this abstract concern more intelligible.

If we take for granted that adult humans have what matters in survival and that embryos do not, we may intelligibly wonder whether the most coherent systematization of our pattern of concern would mandate concern for chicken survival or not. This is not exactly the same as the question whether chicken survival matters, but from a first-personal perspective (assuming that one is not in general morally misguided, etc.) they are at least closely related, as an answer to the one question may reasonably be taken to subjectively settle the other as well.

So I take this to be an intelligible concern, despite being relatively abstract in nature. It doesn't seem to make any difference to its intelligibility if it turns out to be a non-natural property that we are here concerned with. What would be objectionable is if non-naturalists were committed to replacing our ordinary concrete concerns (such as those involving the he[...]

Harms, Benefits, and Framing Effects


Kahneman and Tversky famously found that most people would prefer saving 200 of 600 people for certain over a 1/3 chance of saving all 600, and yet would prefer a 1/3 chance that none of the 600 die over a guaranteed 400 deaths.  This seems incoherent, since our preferences over a pair of options are reversed merely by describing the very same case using different words.
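The incoherence charge rests on the two framings describing the very same pair of options; a quick expected-value check makes this explicit (just the arithmetic implicit in the case, using the standard 600-person numbers):

```python
# Asian Disease problem: 600 lives at stake under each framing.

# "Gain" frame: save 200 for sure, vs. a 1/3 chance of saving all 600.
sure_saved = 200
expected_gamble_saved = 600 / 3  # 1/3 * 600 = 200.0 saved in expectation

# "Loss" frame: 400 die for sure, vs. a 2/3 chance that all 600 die.
sure_deaths = 400
expected_gamble_deaths = 2 * 600 / 3  # 2/3 * 600 = 400.0 deaths in expectation

# The sure thing and the gamble are identical in expectation, and each
# loss-frame option is a redescription of the corresponding gain-frame
# option: deaths = 600 - saved.
print(sure_saved, expected_gamble_saved)    # 200 200.0
print(sure_deaths, expected_gamble_deaths)  # 400 400.0
print(600 - sure_saved == sure_deaths)      # True
```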

In 'The Asian Disease Problem and the Ethical Implications Of Prospect Theory' (forthcoming in Noûs) Dreisbach and Guevara argue that the folk responses are compatible with a coherent non-consequentialist view.  Their basic idea (if I understand them correctly) is that the "400 will die" case is suggestive of a different causal mechanism: perhaps the 400 die from our intervention, so the choice is between guaranteed or gambled harms, whereas the "saving" choice is between guaranteed or gambled benefits.  They then suggest that non-consequentialist principles might reasonably mandate a special aversion to causing guaranteed harm (and so think it better to risk harming either all or none, despite no difference in expected value between the sure thing and the gamble).  In the first case, by contrast, they suggest that non-consequentialists might think it easier to justify saving some lives as a "sure thing" rather than taking a gamble that would most likely save nobody at all.

Such non-consequentialist principles sound pretty odd to me, but let's grant them for sake of argument.  I think that D&G's argument fails for the simpler reason that we can easily clarify the thought experiment so that there is no change in causal mechanisms between the two cases.  Rather than merely specifying that "400 will die", we may clarify: "400 will die of the disease."  I trust that this makes no intuitive difference to the case: So long as the scenario is framed in terms of how many people die of the disease rather than how many are saved from it, we prefer to gamble with lives in a way that we otherwise would not.  The inconsistency cannot be explained away by positing that our intuitions are tracking the deontological property of whether we cause a harm, for it evidently remains even when there is no such causal discrepancy between the cases.

So it seems that our prima facie intuitions here are simply incoherent -- and responsive to arbitrary framing effects -- after all.  Or am I missing something?

Anomaly v Huemer on Immigration


People often assume that to allow immigration is an act of charity: a country generously sharing its land and institutions with outsiders who have no real claim to be there.  Michael Huemer's work forcefully upends this assumption, showing that immigration restrictions are in fact a form of harmful coercion (like blocking a starving man from accessing a public market where he could trade for food). This reconceptualisation shifts the argumentative "burden", insofar as we generally accept that it is much more difficult to justify coercively harming someone (a seeming rights-violation) than to merely refrain from assisting them.

Jonny Anomaly, in a recent blog post on the issue, seems to miss this key feature of Huemer's argument, instead characterizing Huemer's argument in terms of "mutually beneficial gains", and responding that "although a small number of voluntary transactions may benefit all parties, this does not entail that a large series of transactions will benefit everyone."  But Huemer relies on no such entailment.  Indeed, he explicitly argues that incidental disadvantages don't justify harmful coercion: You may not block Starving Marvin's access to the market just because you're worried that he will beat a local to the last loaf of cheap bread.

It would of course be more relevant if immigration would somehow cause the total breakdown of society -- non-absolutists allow that rights may be overridden to avoid disaster.  But there is insufficient reason to accept such overblown fears at this stage. (Anomaly speculates that "At the extreme, unlimited migration might destroy the very institutions to which Starving Marvin wishes to immigrate."  We are given no reason to consider this credible.)

The reasonable response to such concerns is not to use them to drum up opposition to immigration (or the open borders ideal) in our current -- excessively nativist -- milieu, but just to keep an eye out for new evidence as we move to reduce the barriers to immigration.  As Huemer himself writes:
I grant that it may be wise to move only gradually towards open borders. The United States might, for example, increase immigration by one million persons per year, and continue increasing the immigration rate until either everyone who wishes to immigrate is accommodated, or we start to observe serious harmful consequences. My hope and belief would be that the former would happen first. But in case the latter occurred, we could freeze or lower immigration levels at that time.

Is there any reasonable basis for rejecting this "gradualist" proposal?

Nanoseconds that Matter


Take an arbitrarily short duration -- I'll speak of 'nanoseconds' for familiarity and convenience, but you could use an even smaller measure of time.  Could removing a mere (arbitrary) nanosecond from your life plausibly make your life any worse on the whole?  You might think not, on the basis that "surely nothing of any significance could occur during such a short time."  On the other hand, if you remove all the nanoseconds then we have no life left at all, which is certainly a significant difference.  Is it coherent to think that many individually worthless moments might collectively have value?

I have my doubts, and have previously suggested that such putatively vague goods (as a "sufficient duration to matter") are better understood as graded and/or involving threshold effects.  A friend suggested minuscule scales of time as a challenge to this view, but I think my approach still makes good sense of this case.  Here's how...

Some goods are extended in duration and plausibly independent / proportioned in value such that 1/n of the duration has 1/n of the value of the whole.  Hedonic pleasure is perhaps the paradigm example of this.  A tiny (but non-zero) period of pleasure can thus be expected to have a correspondingly tiny (but non-zero) value.

Is that enough to refute the claim that no nanoseconds matter?  Well, one might push back against my counterexample by insisting that a mere nanosecond (or picosecond, or whatever) can make no difference to the actual amount of pleasure felt.  This claim might be defended in either of two ways:

(1) appeal to some kind of 'discernibility' criterion, such that if you cannot discriminate between an experience lasting an extra nanosecond and one without, then there is no difference in the felt, subjective experience.  But this fails, because we know that indiscriminability is insufficient for phenomenal identity (since the former relation is intransitive and identity isn't).

(2) the critic might claim that during a sufficiently short period of time no physical processing in the brain sufficient to give rise to (continued) experience will have occurred.  But in that case, we have simply transformed this into a chunking case: The minimum phenomenal duration must, we're assuming, extend for longer (let's say it is 100 nanoseconds), in which case there must be some extension of brain processing in physical time sufficient to give rise to 100 nanoseconds more phenomenal experience.  Assuming that this happens, on average, once every 100 nanoseconds of physical time, it would seem that an arbitrary 1 ns extension of physical processing gives you a 1/100 chance of getting a bonus 100 ns of experience.  Even though 99% of nanoseconds have now been rendered worthless, the remaining 1% have their value boosted one hundredfold, returning the expected value of the nanosecond to our (slight but non-zero) starting point.

So: some nanoseconds (or picoseconds, etc.) matter.  Further, a similar story may be told about various non-hedonic goods.  These will likewise tend to be graded, building up in value over time, perhaps with elements of 'chunking' or threshold effects here or there.  Consider having an idea, or writing a book.  It might at first seem inconceivable that a mere nanosecond could ever make a difference to such projects.  But there will be some crucial period during which the idea becomes more fully-formed and clear in your mind, and its value may increase with its clarity and completeness, and some nanoseconds simply must make a difference to these dimensions (since they involve precise physical mechanisms such as firing neurons).  Similarly when writing a bo[...]
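The expected-value arithmetic in the chunking reply can be made explicit. A minimal sketch, using the illustrative figure of a 100 ns minimum phenomenal duration from the argument above:

```python
# Suppose the minimum phenomenal "chunk" is 100 ns of experience, arising
# on average once per 100 ns of physical processing (illustrative figures).
chunk_ns = 100        # phenomenal payoff when a chunk completes
p_trigger = 1 / 100   # chance an arbitrary 1 ns extension completes a chunk

# Expected phenomenal gain from extending physical processing by 1 ns:
expected_gain = p_trigger * chunk_ns
print(expected_gain)  # 1.0 -- same as the "smooth" 1 ns of experience
```

Concentrating all the value into 1% of the nanoseconds leaves the expected value of an arbitrary nanosecond exactly where it started.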

Aggregating the Right Moments


Should we prefer to give one person half a million minutes (i.e. one year) more life, or to give a million people one minute more each?  If iterated a million times over (once for each person in the million), the latter repeated choice is clearly better for all (by half a million minutes).  Moreover, as I suggested in comments to that post, if we assume that the million choices are independent of each other in value -- that is, the value of making one such choice does not depend on how the other choices are made -- then it quickly follows that it's better to give the million tiny benefits rather than the one big benefit, even in a one-off choice situation.

However, it's worth flagging that on one very natural (but philosophically distorting) way of imagining the situation, the independence assumption will not hold.

It's natural to imagine bestowing the extra gift of life to someone on their deathbed.  But then giving them one minute of time is not actually worth 1/500k of 500k extra minutes, since a minute spent on your deathbed is likely to be pretty worthless, i.e. disproportionately bad compared to the average minute in your life.  With an extra year, by contrast, you may actually achieve something worthwhile.

Of course, if we're interested in whether small benefits to many can, in aggregate, outweigh large benefits to one, it's important that the putative "small benefit" actually be beneficial.  So, rather than imagining someone starting afresh after an extended hospital stay has already disrupted their life, it might be more philosophically fruitful to instead imagine the initial health disruption being delayed for the stipulated duration of time (a minute or a year, as the case may be).

This makes an important difference, because an extra minute in the midst of your life might be just what you needed to finish some worthwhile project (or at least extend a period of carefree enjoyment) that is genuinely valuable.

While it may be relatively unlikely for any given person that the extra random minute would make much difference, given a large enough number of beneficiaries it becomes quite likely indeed that the extra minute made a substantial difference for someone (and a less substantial, but still genuinely desirable, difference for many others).  This is in stark contrast to imagining all million people each getting an unrepresentative, comparatively unhappy extra minute to spend lying in their deathbed!

In sum: It can be hard to believe that giving one minute more to a million different people could really do more good than giving a full year of life to one person.  But this intuition may at least be partly due to imagining a distorted version of the case where the extra minute being given is a disproportionately worthless one.  If we instead imagine giving randomly representative minutes to a million individuals, and bear in mind all the moments in life when an extra minute could have real value, then I find the anti-aggregative intuition disappears entirely.  I now find it entirely credible that the extra minute each, to a large enough population, is better overall than an extra year to just one person would be. [...]
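The totals at stake can be tallied in a minimal sketch, using the round figures stipulated in the case (one year ≈ half a million minutes):

```python
minutes_per_year = 500_000   # the post's round figure for one year of life
people = 1_000_000

big_benefit = minutes_per_year   # one person gains a year
small_benefits = people * 1      # a million people gain one minute each

# The many tiny benefits sum to twice the single large one:
print(small_benefits - big_benefit)  # 500000
```

Given the independence assumption, the aggregate case for the million tiny benefits is a straightforward factor-of-two advantage; the philosophical work lies in ensuring each minute is genuinely representative rather than a deathbed minute.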

Universalizing Tactical Voting


I regularly come across two objections to tactical voting, i.e. voting for Lesser Evil rather than Good in hopes of defeating the Greater Evil candidate.  One objection is just the standard worry that individual votes lack instrumental value, debunked here.  More interestingly, some worry that tactical voting is positively problematic, morally speaking, on grounds of its putative non-universalizability.

On one version of the worry, tactical voting involves (something approaching) a contradiction in the will, insofar as even if those who most prefer Good constituted a majority, they could get stuck in the inferior equilibrium point of all (unnecessarily, and contrary to their collective preference) supporting Lesser Evil.  On another version of the worry, tactical voting involves (something like) a contradiction in conception, insofar as it involves responding to how others plan to vote, which might seem to depend upon those others voting non-tactically, i.e. not waiting to first learn how you plan to vote.

To avoid either worry, it suffices for tacticians to follow Regan's co-operative utilitarianism (which is, in general, the correct theoretical solution to any sort of coordination problem).  To wit:

(1) Identify those who are willing and able to cooperate with you (by following this very decision procedure) in pursuit of the best collectively attainable outcome.

(2) Determine the best available total plan of action for the group of cooperators, given how non-cooperators will actually behave, then play your individual role in said plan.

In the first problem case: If a majority of the electorate is willing and able to cooperate with you, then correctly following the above procedure entails (i) that they will recognize themselves as forming a majority, and (ii) will follow the best plan available on that basis, viz. electing Good.

Your group of cooperators will only vote for Lesser Evil on the condition that they are not in a position to collectively elect Good (who is, by stipulation, better).  So: no problem.

The second case is a bit trickier.  I take it that we are to imagine different sub-groups, each of which attempts to cooperate in voting tactically to best achieve their (differing) goals.  Perhaps 40% unconditionally support Greater Evil, and 30% each support Good and Lesser Evil, whilst still preferring the other over Greater Evil.  How should the G and LE groups vote (given their values)?

Since they have different goals, they do not count as fully mutual cooperators.  All we can do is consider each group separately.  In each case, what the members of a group objectively ought to do will depend upon what the other group does.  But that's not a problem, since there is after all a fact of the matter as to what the various individuals will end up doing, and so the above tactical decision procedure will (separately) determine what they all should have done (given their values).  It can be summarized as saying that each group should vote for whichever candidate the other group votes for, whereas determining which candidate that is is a matter outside of moral theory.  (It may instead be determined by whichever group is better able to position their candidate as the 'default' alternative to the Greater Evil.)

While it would of course be unfortunate if it turned out that the Lesser Evil folks were able to outmaneuver the Good folks here, that would just be a sad fact about the world and not any sort of indictment of tactical voting per se, or so it seems to me.

Is there some residual problem that I'm missing here?

(The practical problem -- how to act given in[...]
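Step (2) of the cooperative procedure can be sketched as a toy model. Everything here is hypothetical illustration: the function name, the lower-is-better `rank` encoding of preferences, and the 40/30/30 split are just the example discussed above.

```python
def cooperative_vote(cooperators, others_votes, candidates, rank):
    """Given how non-cooperators will actually vote, find the single
    candidate the cooperating bloc should all back to secure its best
    attainable outcome (lower rank = more preferred)."""
    best_plan = None
    for c in candidates:
        tally = dict(others_votes)
        tally[c] = tally.get(c, 0) + cooperators  # the bloc votes as one
        winner = max(tally, key=lambda k: (tally[k], -rank[k]))
        if best_plan is None or rank[winner] < rank[best_plan[1]]:
            best_plan = (c, winner)
    return best_plan[0]

# Preferences: Good best, Lesser Evil (LE) second, Greater Evil (GE) worst.
rank = {"Good": 0, "LE": 1, "GE": 2}

# Second problem case: 40% back GE, and the 30% LE bloc votes LE regardless,
# so the 30% Good-preferring cooperators do best to back LE tactically.
print(cooperative_vote(30, {"GE": 40, "LE": 30}, ["Good", "LE"], rank))  # LE

# First problem case: a 60% Good-preferring majority cooperates, so the
# procedure has them elect Good outright rather than settling for LE.
print(cooperative_vote(60, {"GE": 40}, ["Good", "LE"], rank))  # Good
```

The second call illustrates why the "contradiction in the will" worry dissolves: cooperators who recognize themselves as a majority never get stuck supporting Lesser Evil.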

Assessing the NMC's Defense of its Independent Midwifery Ban


After receiving much criticism for its effective ban on independent midwifery, the NMC released a document [pdf] that seeks to explain and justify their position (see especially the fourth and final page).

Their central conclusion is that they are simply following orders, and it isn't their responsibility to do anything to mitigate the harms they're thereby causing:

So what we are seeing now is an entirely foreseeable consequence of Government policy (of successive governments since 2009) and the EU Directive which introduced this mandatory requirement in the interests of public protection. As Finlay Scott advised in 2010 and remains the case today, if the UK or any of the devolved administrations consider it is necessary to enable the continued availability of the services provided by entirely independent midwives, acting outside the alternative governance structures which are available to them, then they must seek to facilitate a solution, as the solution does not lie with the NMC.

They further stress:

The NMC exists to protect the public using the statutory powers it has been given. This includes ensuring that all its registered nurses and midwives have appropriate indemnity insurance. The NMC does not have the power to remove, waive or disregard this statutory duty, nor could it be in the public interest for it to do so.

I think there are a couple of points worth noting here:

Firstly: This is non-responsive to the specific concerns that have been raised about the NMC's hasty and overzealous exercise of its statutory powers.

* The NMC did not have a statutory duty specifically to disrupt existing care arrangements (it could instead have allowed a reasonable adjustment period between reaching its final verdict and implementing the ban, preventing IMs from acquiring new clients rather than disrupting the maternity care of their existing clients).

* The NMC did not have a statutory duty to ban IMs from attending births even in a non-medical capacity.

* The NMC did not have a statutory duty to form its judgment (that IMUK's indemnity insurance was "inappropriate") in an arbitrary and ignorant fashion, without properly assessing the catastrophic risk model provided by IMUK's actuaries, which explains why they believe their product to be entirely fit for purpose. [Listen to 23 - 24 mins through this radio interview, where IMUK chair Jacqui Tomkins explains that after long refusing to even meet to discuss their model, the NMC finally conceded that they hadn't even attempted to assess IMUK's catastrophic risk model because it was "very complicated and would have taken a lot of time"!!]

* The NMC did not have a statutory duty to refuse to communicate with IMs about what level of indemnity cover it would consider "sufficient". (In case their stubborn and uncooperative arrogance does not come through sufficiently strongly in the above quotes, have a listen to NMC Chief Executive Jackie Smith at 11:30 mins through this radio interview, responding to the host's query about what would constitute sufficient cover with, "That is for them to satisfy me.")

In each of these cases, far from simply faithfully executing their statutory duties, it seems clear that the NMC has behaved in a reckless, callous, and arbitrary fashion, contrary to the interests of the affected parties (including affected members of "the public" who suffered disruption to their maternity care as a result).

Secondly: While perhaps less important in practice, I think it worth flagging just how lacking in imagination one would have to be to think it impossible in principle that disregarding a regulation "could [...] be in the public interest".  After all, I t[...]

Our Zombie Bodies, and Physicalist Epiphenomenalism


Eric Olson has a fascinating paper, 'The Zombies Among Us' (forthcoming in Noûs), where he points out that standard constitutional theories of persons imply that our bodies are phenomenal zombies -- physically identical to us but lacking conscious experiences (or indeed any mental properties).

Constitutionalists hold that we are constituted by (but not identical to) the physical matter that makes up our bodies.  If we imagine a brain-transplant case, for example, it seems that we go where our brains go, and so we can come apart from our bodies (and likewise from the biological organism that our bodies also constitute).  But then we must deny that our materially coincident bodies have mental properties (such as the belief that we go with our brains and not with our bodies), on pain of a fairly radical skepticism (if bodies have all the same beliefs and thoughts as us, what confidence could we have that we are not ourselves such thinking bodies, falsely believing that we go with our brains?).

So, in the ordinary run of things, our bodies are physically identical to us but lack mental properties: they are philosophical zombies.

Not in any sense that refutes physicalism, of course; our brains may still metaphysically suffice for (giving rise to) consciousness, even if we do not then attribute the consciousness to every object that has our brains as parts. (Which makes independent sense, after all: we can speak of a gerrymandered object consisting of my brain + the Eiffel Tower, but we should not hold that this object is itself a thinking thing.)

An interesting consequence of this, which Olson flags, is that physicalists should not identify mental states (or properties) with physical states (or properties).  Any token physical state that I am in is shared by my body, after all.  So if mental states were identical to token physical states, we would be committed to holding that my body shares my mental states, contra the view expressed above.

Instead, I think, physicalists should agree with dualists that the brain gives rise to (rather than just is) the mind.  Their view can remain distinctively physicalistic insofar as they insist that there is nothing metaphysically heavyweight about this "generation" -- the mind comes along "for free", in something like the way that computer hardware gives rise to a running software programme.  Your web browser is not identical to any of the circuitry upon which it runs, but nor does it create any deep metaphysical mysteries. (Of course, I think consciousness is different, but I'm playing Devil's advocate for the physicalist here.)

An interesting upshot of this (it seems to me) is that physicalists should be epiphenomenalists too.  These "virtual" or abstractly generated mental properties do not push atoms around, and so lack fundamental causal powers (though we can always invoke them when speaking loosely, as in correlative explanations).  This isn't a problem, since there's nothing wrong with epiphenomenalism, but insofar as many physicalists believe otherwise, it does put them in a funny position.

[See also: the 'Virtual Mind' diagnosis of why Searle's Chinese Room argument is misleading, for related discussion of how mental properties should be attributed to a different 'level' of entity from that which does the underlying information processing.] [...]

Medical Indemnity: Protection or Compensation?


One of the (many) puzzling elements of the NMC anti-midwifery fiasco is the NMC's insistence that, by shutting down midwives whom they judge to have "insufficient" indemnity cover, they are thereby "mak[ing] sure that all women and their babies are provided with a sufficient level of protection should anything go wrong," and that they "had to act quickly in the interests of public safety."

This rhetoric strikes me as deeply misleading.  Indemnity cover is not a public safety issue.  Not only does it do nothing to prevent bad medical outcomes from occurring in the first place, it cannot even ensure in general that financial support is available when needed for increased caring costs associated with (e.g.) disability.  All that indemnity cover does for patients is ensure that greater compensation can be paid in a malpractice lawsuit -- a very rare and specific set of circumstances.

Indemnity cannot be relied upon to "protect" families "should anything go wrong" because it does not cover anything going wrong, but only things going wrong due to malpractice on the part of the medical practitioner.  If a baby suffers brain damage due to unavoidable complications, for example, indemnity will not help. For that, we need disability support as part of the general social safety net.

As it stands, indemnity cover (even at a level deemed "insufficient" by the NMC) increases the cost of obtaining an independent midwife by around £500 or so.  This can be expected to price out some women who would have preferred the superior care available outside of the overburdened NHS (even at the cost of reduced compensation in the unlikely event of a malpractice lawsuit).  As such, a legal requirement to have indemnity cover can actually be expected to lead to worse health outcomes in aggregate, as the added cost deprives some women of the opportunity for independent care, and adds to the demands being placed upon an already over-burdened NHS.

(And that is quite apart from the more general concerns about paternalism and patient autonomy noted in my previous post.  Even for those of us who can afford to pay extra for indemnity cover, it isn't at all clear what business the government has forcing us to do this.)

Given that it is really a matter of enabling compensation rather than ensuring protection, medical indemnity cover for private practitioners is not a public safety issue, or even a matter of legitimate public interest at all.  It should be up to individual practitioners and their clients to decide for themselves whether this is something that they feel that they want or need, and if so, to what degree of coverage.

It is certainly not so pressing and urgent a need that it can justify depriving a woman of her chosen midwife, with all the associated harms noted previously. [...]

UK Shuts Down Independent Midwives


A new low for harmful over-regulation: The UK has just regulated independent midwives out of business (at least for the time being).  The Nursing and Midwifery Council decided that they did not consider the indemnity cover of Independent Midwives UK (which has worked fine since indemnity cover was legally mandated in 2014) to be "adequate" after all.  So, as of 11 January last week, independent midwives have been legally barred from attending the births of their clients, severely disrupting the birth plans of these expectant parents (threatening their right to a home birth, disrupting their continuity of care, and generally undermining patient autonomy and the values that led these expectant parents to invest in an independent midwife in the first place).

The NMC's behaviour here is appalling in so many respects.  The immediate implementation of the decision makes it especially damaging. Expectant parents have formed birth plans which depend upon the independent midwives with whom they have built up relationships of trust.  To disrupt these plans without extremely good reason is deeply intrusive and unethical. As Birthrights explains in their critical open letter, NMC's actions "appear designed to cause maximum disruption and damage to independent midwives and the women they care for."  They continue:

The NMC has a key role to play in protecting public safety, yet this decision directly jeopardises the health and safety of the women it is supposed to safeguard. Beyond the very real physical health implications of this decision, it is causing emotional trauma to women and their families at an intensely vulnerable time. To date, it appears that the NMC has shown no concern for the physical and mental wellbeing of pregnant women who have booked with independent midwives.

At the very least, the NMC should, as Birthrights rightly insists, guarantee "that all women who are currently booked with independent midwives using the IMUK insurance scheme will be able to continue to access their services" and "that the midwives caring for them will not face disciplinary action for fulfilling their midwifery role".

Aside from the horrifically rushed implementation, the decision itself just doesn't seem remotely reasonable.  NMC Chief Executive and Registrar Jackie Smith has responded with the claim that "The NMC absolutely supports a woman’s right to choose how she gives birth and who she has to support her through that birth. But we also have a responsibility to make sure that all women and their babies are provided with a sufficient level of protection should anything go wrong."

In other words: nice as a woman's right to choose might be, what's really important is that she can sue for many bucketloads of money (not just a few bucketloads) if anything goes wrong.

Seriously?  That's your top priority?  This reveals deeply messed-up values.

(It's arguably wrong for the law to require indemnity insurance of independent midwives at all: The costs are of course passed on to clients, and it's not obvious what legitimate interest the state has in forcing expectant parents to pay for such cover.  I understand that in previous decades clients of independent midwives could just sign a waiver indicating that they wished to have midwifery care without such indemnity cover in place.  I would prefer to still have that option.  So I think the law is wrong.  But it's especially absurd for the NMC to not just enforce the minimal requirements of the law, but to zealou[...]

2016 in review


(Past annual reviews: 2015, 2014, 2013, 2012, 2011, 2010, 2009, 2008, 2007, 2006, 2005, and 2004.)

Applied Ethics

* The Instrumental value of one vote -- can be much higher than many philosophers seem to assume.
* Pets and Slavery -- explains why domesticated animals are not inherently wronged by their guardians, or morally akin to "slaves".
* Philanthropic focus vs abandonment -- diagnoses some bad reasoning from the CEO of Oxfam, who mistakenly thinks there are reasons of fairness to help people inefficiently.
* Effective Altruism, Radical Politics, and Radical Philanthropy -- Is EA insufficiently 'radical'?  Or excessively so?
* How bad? -- a rough first step towards moral prioritization.
* Opposite Day: "Charity begins at home" edition -- Don't let my evil twin Ricardo convince you!

Normative Ethics

* Illustrating the Paradox of Deontology -- may you murder once to save five loved ones from being murdered? (If not, why not?)
* Is Consequentialism More Demanding? -- not if you take the interests of the poor into account...
* Irrational Increments for the Self-Torturer -- argues, contra Tenenbaum and Raffman, that some individual increments of the self-torturing device are not worth taking.
* Related: Self-Torturers Without Diminishing Marginal Value -- provides a slightly neater version of the case to consider.
* Do we have Vague Projects? -- putative candidates may be best explained in a way that suggests not, actually.
* Attitudinal Pleasure and Normative Stance-Independence -- is the value of pleasure "subjective" in the relevant sense?  I express some doubts, contra Sobel.
* Possibly Wrong Moral Theories -- and why we should think they're actually wrong.

Metaethics and Consciousness

* The basic reason to reject naturalism: Substantive Boundary Disputes -- argues that naturalism (about normativity and consciousness alike) cannot account for the substantive nature of questions about the domain's extent.
* The 2-D Argument Against Metaethical Naturalism -- building upon the Open Question Argument.
* Carroll on Zombies -- a physicist talks about something other than phenomenal consciousness.
* Final Value and Fitting Attitudes -- explains how to analyse the former in terms of the latter, whilst avoiding the objections raised in a recently published paper.

Teaching

* 7 Things Everyone Should Know About Philosophy
* Student Spotlight: Intrinsically Irrational Instrumental Desires -- an under-explored area of logical space.
* Teaching Effective Altruism -- discusses the syllabus used for my EA class (see also discussion of the Giving Game I ran in class).
* Expected Value without Expecting Value -- I was surprised to find that most students do not accept expected value reasoning: they would prefer to save 1000 lives for sure, than to have a 10% chance of saving a million lives. [...]

Is Consequentialism More Demanding?


People sometimes complain that impartial consequentialism is "too demanding", insofar as it requires us (comparatively) wealthy and fortunate people to do a lot to help the less fortunate.  And it's true that those are non-trivial costs.  But it's hard to take seriously the suggestion that these costs are morally more significant than the costs endured by the less fortunate by our doing less (or nothing).  So-called "moderate" views of beneficence are in fact extremely costly for the worst-off -- much worse than consequentialism is for the wealthy.  So it's an odd objection.

David Sobel's (2007) 'The Impotence of the Demandingness Objection' nicely develops this line of criticism (p.3):

Consider the case of Joe and Sally. Joe has two healthy kidneys and can live a decent but reduced life with only one. Sally needs one of Joe's kidneys to live. Even though the transfer would result in a situation that is better overall, the Demandingness Objection's thought is that it is asking so much of Joe to give up a kidney that he is morally permitted to not give. The size of the cost to Joe makes the purported moral demand that Joe give the kidney unreasonable, or at least not genuinely morally obligatory on Joe. Consequentialism, our intuitions tell us, is too demanding on Joe when it requires that he sacrifice a kidney to Sally.

But consider things from Sally's point of view. Suppose she were to complain about the size of the cost that a non-Consequentialist moral theory permits to befall her. Suppose she were to say that such a moral theory, in permitting others to allow her to die when they could aid her, is excessively demanding on her. Clearly Sally has not yet fully understood how philosophers typically intend the Demandingness Objection. What has she failed to get about the Objection? Why is Consequentialism too demanding on the person who would suffer significant costs if he was to aid others as Consequentialism requires, but non-Consequentialist morality is not similarly too demanding on Sally, the person who would suffer more significant costs if she were not aided as the alternative to Consequentialism permits? What must the Objection's understanding of the demands of a moral theory be such that that would make sense? There is an obvious answer that has appealed even to prominent critics of the Objection — that the costs of what a moral theory requires are more demanding than the costs of what a moral theory permits to befall the unaided, size of cost held constant. The moral significance of the distinction between costs a moral theory requires and costs it permits must already be in place before the Objection gets a grip. But this is for the decisive break with Consequentialism to have already happened before we feel the pull of the Demandingness intuitions.

We might similarly ask why deontological views are not considered excessively demanding when they prohibit Sally from saving her own life by stealing one of Joe's spare kidneys.  In terms of raw (theory-neutral) cost to the agent, this is surely very demanding!  Granted, as people standardly evaluate "demandingness", they might presuppose that moral prohibitions of this sort are not as relevantly "demanding" as positive obligations.  But this is, in effect, to already be assessing questions of demandingness through deontologically-tinted glasses.

It seems, then, that there are no neutral grounds for considering impartial consequentialism to be "more demanding" than rival mor[...]

Illustrating the Paradox of Deontology


One who accepts a "consequentialism of rights" might hold that deliberately killing an innocent person (let's call this "murder", for short) is so morally bad that it isn't justified even to save five lives.  But deontologists go further, suggesting that one should not murder even to prevent five other murders.  This seems puzzling: if murder is so morally horrendous, why should we not be concerned to minimize its occurrence?  This is Scheffler's paradox of deontology in a nutshell.

A deontologist might respond by suggesting that our moral aims are not so impersonal: we have a special responsibility for our own (present) actions, and so must regard our not (now) ourselves causing harm / violating rights as a distinctive moral goal.  Scheffler pushes back against this idea on pp. 415-6 of his 'Agent-Centred Restrictions, Rationality, and the Virtues':

[O]n standard deontological views, morality evaluates actions from a vantage point which is concerned with more than just the interests of the individual agent. In other words, an action will be right or wrong, on such a view, relative to a standard of assessment that takes into account a number of factors quite independent of the interests of the agent. And defenders of such views are unlikely to claim that the relevant standard of assessment includes agent-centred restrictions, but that it is a matter of indifference, from the vantage point represented by that standard, whether or not those restrictions are violated. For if it is not the case that it is preferable, from that vantage point, that no violations should occur than that any should, it is hard to see how individual agents could possibly be thought to have reason to observe the restrictions when doing so did not happen to coincide with their own interests or the interests of those they cared about.

In other words, deontological views need the idea that violations of the restrictions are morally objectionable or undesirable if the claim that people ought not to commit such violations when doing so would be in their own interests is to be plausible. Yet if such views do regard violations as morally objectionable or undesirable, in the sense that it is morally preferable that none should occur than that any should, it does then seem paradoxical that they tell us there are times when we must act in such a way that a larger rather than a smaller number of violations actually takes place.

It's a fairly dense passage, so when teaching this topic last term I came up with a thought experiment to help illustrate it.

Suppose that five innocent people whom you love are going to be murdered, unless you yourself murder a (distinct) innocent person.  Is it wrong for you to murder an innocent person in order to save your five loved ones?

Standard deontological theories will insist that murder, even in this case, is wrong.  But this may seem a difficult verdict to uphold, given that murdering the one seems preferable from both your personal standpoint and the impersonal standpoint.

Impersonally: five murders are worse than one.  Personally: there is a special moral cost to you in committing a murder, sure, but it is not so great a cost (we may suppose) as losing your five loved ones.  So, we may wonder, from what perspective does the deontological verdict have any normative force or appeal?

To get the verdict that murdering the one is wrong, the deontologist must hold that you are morally special (to override the [...]

Possibly Wrong Moral Theories


In 'The Normative Irrelevance of the Actual', I explained why it doesn't matter whether a putative counterexample to a moral theory is actual or hypothetical in nature, on the grounds that first-order moral theories can be understood as (implying) a whole raft of conditionals from possible non-moral circumstances to moral verdicts.  But there's another, perhaps more intuitive, way to make the case, based on the idea that some counterfactually superior moral theory should be superior, simpliciter.

Consider Slote's sentimentalism.  According to Slote (2007, 31), wrong acts are those that "reflect or exhibit or express an absence (or lack) of fully developed empathic concern for (or caring about) others." The relevant kind of empathic concern is not some kind of a priori theoretical posit, such as universal love, but rather is tied to our actual natural dispositions to favour those near and dear. (This is crucial to secure his desired anti-utilitarian verdicts.)  But this raises the obvious worry: what if our "natural" empathic dispositions turn out to have racist or otherwise clearly immoral built-in tendencies?

Slote responds: “The ethics of empathy may here be hostage to future biological and psychological research, but I don’t think that takes away from its promise as a way of understanding and justifying (a certain view of) morality.” (p.36)

But, I suggest, if we know that there is a possible situation in which sentimentalism is not the correct moral theory, then we can ask ourselves what the correct moral theory in that situation would be. And once equipped with that correct possible moral theory—one that provides an independent justification for rejecting racist or otherwise immoral sentiments even when sentimentalism cannot—then we may wonder what we need sentimentalism for. What is stopping that counterfactually correct moral theory from also being the actually correct moral theory?

Perhaps there are some moral theories that give plausible verdicts only in a certain counterfactual world, and are no longer plausible when we apply them to our world (or others).  So, fine, discard those clearly inadequate theories.  Still, given the entirety of logical space to choose from, we should be able to find a theory that yields the desired results in our world as well as in the counterfactual world where it is superior to sentimentalism (or whatever merely contingently plausible theory we are considering).

So, if a moral theory is merely contingently plausible, we can find a better option out there.  Being possibly wrong, in this sense, suffices to establish that the moral theory is actually wrong.

Attitudinal Pleasure and Normative Stance-Independence


David Sobel has an interesting post up at the revamped PEA Soup blog on 'Normative Stance Independence and Pleasure'.  He suggests that if pleasure is best understood in attitudinal terms (as per Parfit's hedonic likings) then this undermines Normative Stance Independence, the view that "normative facts are not made true by anyone’s conative or cognitive stance" or "by virtue of their ratification from within any given actual or hypothetical perspective."

But does it?  The distinction between stance-dependence and -independence is a slippery beast.  Even if pleasure could be said to involve "taking a stance" towards a base sensation by liking it, it's not so clear that the stance is what does the heavy lifting in explaining why pleasure is good.  More plausibly, I think, pleasure is good just because of how it feels, objectively speaking.  And this normative explanation remains untouched, it seems to me, even if the phenomenology of pleasure turns out to be inextricably tied up with the attitude of liking.  It could still be the objective phenomenology, rather than the "stance" per se, that matters.

(In support of this point, I take it that if knowledge, for example, has intrinsic value then this is uncontroversially objective or 'stance-independent' in nature, regardless of the fact that knowledge is (or involves) a cognitive state, and so might be considered part of the agent's "stance" in some sense.  So, why not the same for pleasure?)

Pets and Slavery


In 'The Case Against Pets', Rutgers law professors Francione and Charlton argue that "domestication and pet ownership [...] violate the fundamental rights of animals."  This is, I think, a deeply absurd position.

A large part of their essay is just concerned with arguing against treating pets as property.  I think it's pretty clear that the ordinary social meaning of having a pet already rules this out.  One may carve up one's property for fun; if someone were to carve up their pet, we would (rightly) want them to be locked up for animal cruelty.  If the legal system failed to do this, they would certainly be shunned by the rest of society, who would be deeply horrified by their actions.

It's an interesting question whether non-rational beings can have a right to life in addition to a right against cruel treatment.  If so, the implications would be quite radical, even aside from the complete abolition of the meat industry.  Society would presumably be obliged to support animal shelters to an extent that removes the current need to kill many perfectly healthy animals due to overcrowding.  I think that's a plausible enough position, though there are counterarguments to consider.

Where the authors go off the rails is when they suggest that "domestication itself raises serious moral issues irrespective of how the non-humans involved are treated" -- such that pet ownership would still be wrong even if animal rights against cruel treatment and convenience-killing were secured.  Why do they think this?  What further rights are being violated, merely by caring for your pet?  Here is what F&C write:

Domesticated animals are completely dependent on humans, who control every aspect of their lives. Unlike human children, who will one day become autonomous, non-humans never will. That is the entire point of domestication – we want domesticated animals to depend on us. [...] We might make them happy in one sense, but the relationship can never be ‘natural’ or ‘normal’. They do not belong in our world, irrespective of how well we treat them. This is more or less true of all domesticated non-humans. They are perpetually dependent on us. We control their lives forever. They truly are ‘animal slaves’. Some of us might be benevolent masters, but we really can’t be anything more than that.

"Slavery is bad, X is like slavery, therefore X is bad" is superficial reasoning.  Much depends on whether X shares the relevant features or preconditions that explain why slavery is so bad.

I take the basic problem with (human) slavery to be that it is so drastically contrary to the interests of the enslaved.  Not only were slaves historically mistreated in all sorts of ways, but even an imaginary "happy slave" seems in a tragic position insofar as their capacity for rational autonomy -- and hence for a fully flourishing human life -- is being stunted rather than nourished.  Rationally autonomous beings have an interest in developing and preserving their autonomy, and when this interest is violated their life is (in this respect) worse as a result.

This crucial feature is obviously lacking in non-rational animals.  So long as we do not mistreat them (whether by outright cruelty or mere neglect, e.g. failure to provide a sufficiently stimulating environment) domestic animals' chances at a fully flouri[...]