Philosophy, et cetera

Providing the questions for all of life's answers.

Updated: 2018-02-24T04:05:56.452+00:00


Philosophical Expertise, Deference, and Intransigence


Here's a familiar puzzle: David Lewis was a better philosopher than me, and certainly knew more and had thought more carefully about issues surrounding the metaphysics of modality.  He concluded that modal realism was true: that every concrete way that a world could be is a way that some concrete universe truly is (and that these concrete universes serve to ground modal truths -- truths about what is or is not possible).  But most of us don't feel the slightest inclination to defer to his judgement on this topic.  (I might defer to physicists on the 'Many Worlds' interpretation of quantum mechanics, but that's a different matter.)  Are we being irrational?

A familiar response: philosophical 'experts' themselves disagree.  Kripke, for example, may be weighed against Lewis on this topic.  But then it might seem to follow that I should suspend judgment entirely: if even the top experts on the topic cannot agree, what hope do I have of coming to a justified conclusion here?  And it would seem epistemically shady to cherry-pick the experts who agree with you, claim that you're responsibly deferring to them, and just ignore all the ones that don't.

I think a better response is available.

The puzzle presupposes that we ought to defer to experts.  But that only makes sense if we've reason to expect that expertise in a domain sufficiently increases epistemic reliability, i.e. the likelihood of true beliefs.  That's certainly the case for many domains -- it's why we should defer to scientific experts, for example.  But it arguably isn't so for philosophy in general.

Philosophical expertise seems compatible with being completely off the rails when it comes to the substantive content of one's philosophical views.  
And this is to be expected once we appreciate that (i) there are many possible internally coherent worldviews, (ii) philosophical argumentation proceeds through a mixture of ironing out incoherence and making us aware of possibilities we had previously neglected, and so (iii) even the greatest expertise in these skills will only help you to reach the truth if you start off in roughly the right place.  Increasing the coherence of someone who is totally wrong (i.e. closer to one of the many internally coherent worldviews that is objectively incorrect) won't necessarily bring them any closer to the truth.

To put a more subjective spin on it: One's only hope of reaching the truth via rational means is to trust that one's starting points are in the right vicinity, such that an ideally coherent version of one's worldview would be getting things right.  So we've only got reason to defer to others if their verdicts are indicative of what our idealized selves would conclude.  Often, we can reasonably judge that other philosophers have views so alien to our own that it's unlikely that procedurally ideal reflection (increasing internal coherence) would lead us to share those views.  In such cases, we've no reason to defer to those philosophers, however 'expert' they may be.

(Terminological variant: If you want to build into the definition of a subject-area 'expert' that deference to expert judgement is mandatory, then you should restrict attributions of expertise to those whose starting points are sufficiently similar to your own.)

tl;dr: We should only be epistemically moved by peer disagreement (and related phenomena) when we take the other person's views to be evidence of what we ourselves would conclude upon ideal reflection.  Philosophical intransigence is thus often justified, insofar as we can justifiably believe that an improved version of our view could be developed that is at least as internally coherent as the opposing views. 
This remains true even if we judge that the defenders of the opposing views are (in purely procedural terms) smarter / better philosophers than we are ourselves. [...]

2017 in review


(Past annual reviews: 2016, 2015, 2014, 2013, 2012, 2011, 2010, 2009, 2008, 2007, 2006, 2005, and 2004.)

Off the blog...

Mostly I've been occupied this year by the arrival of this little guy!

Professionally, I was delighted to finally find a good home for my 'Willpower Satisficing' paper (in Noûs!).  'Why Care About Non-Natural Reasons?' was accepted by APQ.  And a couple of previously-accepted papers -- 'Knowing What Matters' and 'Rethinking the Asymmetry' -- appeared in print, while 'Fittingness Objections to Consequentialism' was officially approved for an OUP-edited volume.  Busy times!

On the blog...

Applied Ethics

* A series of posts took a critical look at a healthcare fiasco unfolding in the UK which our family experienced first-hand: UK shuts down Independent Midwives, Medical Indemnity: Protection or Compensation?, and Assessing the NMC's Defense of its Independent Midwifery Ban.
* Universalizing Tactical Voting rebuts the moral objection to tactical voting.
* Anomaly vs Huemer on Immigration -- explaining why the default presumption should be to favour freer immigration.

Moral Theory

* Aggregating the Right Moments addresses one intuitive reason for thinking that it'd be better to give one person half a million minutes (i.e. one year) more life than to give a million people one minute more each.
* Nanoseconds that Matter explains why even arbitrarily small durations of time should not be assumed to lack value entirely.
* Harms, Benefits, and Framing Effects defends the existence of 'framing effects' against the objections of a recently published paper.
* Iterating Badness in the Paradox of Deontology explores an objection to Setiya's new paper, 'Must Consequentialists Kill?'
* Drawing the Consequentialism/Deontology Distinction does just what it says on the tin.

Other

* Our Zombie Bodies, and Physicalist Epiphenomenalism discusses the idea that our mental properties should not be attributed to our physical bodies in addition to our person, and so our bodies are, in a sense, philosophical zombies.
* Intelligible Non-Natural Concerns explores exceptions to the rule that we shouldn't care about morality 'de dicto'.

Happy New Year! [...]

Giving Game 2017 results


This past week I ran a 'Giving Game' for my Effective Altruism class, letting each student decide (after class discussion) how to allocate £100 of my charitable budget for the year.  There was just one restriction: if they wanted to pick something other than one of the four EA Funds options (which have expert managers directing funds in the fields of "global health & development", "animal welfare", "long-term future", and "EA community"), they had to convince at least one other classmate to join them.  In the first seminar group, half the class ended up choosing alternative options; in the second, all stuck with the EA funds.  The end result was a bit more varied (and less conservative) than the first time I tried this, so that was interesting to see.  (I think it helped both to allow individual discretion rather than requiring group consensus decisions, and also to have the new "EA funds" available to enable responsibly contributing to a cause area without having to identify or select particular outstanding organizations within the area.  You can now just make the value judgment, and defer to trusted experts on the empirical details.)

Here's the final breakdown for both seminar groups combined:

* Global Health & Development [EA fund]: £900
* Animal Welfare [EA fund]: £200
* Long-term Future [EA fund]: £500
* EA Community [EA fund]: £300
* Cool Earth: £400
* Good Food Institute: £200

Where would (or better: do) you choose to give, in hopes of achieving the most good?

I'd definitely stick to the EA funds myself, trusting the managers to know the best giving opportunities within their area better than I do.  I've traditionally favoured the 'meta' approach of funding EA movement-building to catalyze additional donations, but the Long-term Future (yes, including AI risk) is hugely important and unduly neglected by people in general.

Drawing the Consequentialism / Deontology Distinction


I previously mentioned that Setiya's 'Must Consequentialists Kill?' defines consequentialism vs deontology in a way that I think we should resist.  (This is part of what allows Setiya to reach his surprising-sounding conclusion that "consequentialists" aren't committed to killing one to prevent more killings.)  Setiya defines "consequentialism" as the conjunction of two theses:

ACTION-PREFERENCE NEXUS: Among the actions available to you, you should perform one of those whose consequences you should prefer to all the rest.

AGENT-NEUTRALITY: Which consequences you should prefer is fixed by descriptions of consequences that make no indexical reference to you.

I think this is both too strong and too weak.  It is too strong because consequentialism doesn't require agent-neutrality.  Egoism is clearly consequentialist in nature, as are other forms of agent-relative welfarism (e.g. views that are utilitarian at base, but then weaken the strict equality of interests to instead allow agents to weight the interests of their nearest and dearest more heavily than those of strangers).  As Setiya acknowledges, this is a "terminological" point.  Nonetheless, some ways of using terms do a better job than others of carving nature at the joints, and if we're looking for a fundamental structural divide with which to categorize all normative ethical theories, we make a real mistake if we don't recognize that egoism, utilitarianism, and agent-relative welfarism all belong on the same side.

On the other hand, Setiya's definition is too weak because the Action-Preference Nexus is (as he himself notes) "not a claim about the order of explanation but about the congruence of reasons".  And such a congruence seems difficult to deny -- even many self-identified deontologists [though not all!] will surely say that we should prefer not to act wrongly, for example.  But so long as the explanation of the congruence takes some reasons for action to be prior to reasons for desire (i.e. 
takes the right to be prior to the good), then the view in question would seem best categorized as "deontological" in nature, not "consequentialist".  This seems so even if the view in question is also agent-neutral, as is the view Setiya defends in his paper.  You can (albeit with some difficulty) hold that everyone ought to prefer that agents not engage in utilitarian sacrifice, or kill one to prevent more killings.  But insofar as these are moral side-constraints on action that you are elevating to the status of universal desirability, the resulting view seems all the more "deontological" in nature.

One way to see this is to invoke my 'naturalization' test for axiological vs deontic reasons.  Suppose it were not an agent, but rather a bolt of lightning, that killed the one and thereby saved the five.  How would the resulting state of affairs compare to the alternative where the five were killed?  Presumably it's less bad: fewer people died, and nobody was treated as a means, had their rights violated, or was otherwise "wronged" in any way.

Since it makes such a difference to our (commonsense) moral assessment whether the one is killed by an agent or by natural causes, it seems that the injection of agency into the picture is responsible for flipping our verdicts about desirability in the original "killing one to save five" case -- suggesting that we only prefer that the agent not kill the one because of an antecedent judgment that the act would be morally wrong (rather than judging it wrong because it brings about an antecedently undesirable outcome, as a consequentialist version of the view would have it).

[By contrast, consider how a properly consequentialist version of Setiya's view would go.  For "killing one to save five" to be antecedently undesirable (i.e. undesirable independently of any deontic reasons that stem from the action's putative wrongness), it would seem that the undesirability must[...]

Iterating Badness in the Paradox of Deontology


In 'Must Consequentialists Kill?' (forthcoming in J Phil), Setiya convincingly argues against the "orthodox" view that commonsense verdicts about the ethics of killing entail agent-relativity.  Instead, he observes: "In general, when you should not cause harm to one in a way that will benefit others, you should not want others to do so either." (p. 8 of the pre-print version)  For example, it's not just the agent who should prefer not to kill one to prevent five killings; we should generally prefer that others likewise avoid killing one to prevent five other killings.  The preference here mandated by commonsense morality is thus agent-neutral in nature: it makes no essential reference to your role in the situation.

This seems right (I mean, correct as a claim about "commonsense morality", not actually right...), and it avoids one horn of the paradox of deontology, namely worries about how the reasons to act morally, if merely agent-relative, could have the authority to trump more personal / self-interested reasons for the agent.  But the core puzzle remains: if killing is so bad, why should we not be concerned to minimize its occurrence?  Setiya's answer seems to be, in effect, that killing per se is not so bad (being roughly as bad as accidental death).  What's very bad is instead a more specific kind of killing, such as killing as a means to preventing greater harms (or whatever the right generalization of deontological intuitions over cases turns out to be).

And the badness must iterate.  Consider: According to Setiya, you should prefer that agents not kill one even as a means to preventing five utilitarians from each doing the very bad thing of killing one to prevent five other killings.  So killing as a means to preventing a greater number of very bad killings must itself be worse than five such very bad killings (which are themselves worse than 25 ordinary killings).  That is, it must be very very bad.  And so on.  
In general, the badness of killing someone (in an intuitively disapproved-of way) must be a function of how much instrumental good is achieved by the killing, in order to ensure that the killing remains bad/undesirable on net.

Such a view strikes me as not very substantively plausible.  It is not worse to kill someone for the sake of helping others than to kill them (let alone five...) for purely selfish ends.**  So I think we should reject the "commonsense" view about the ethics of killing (given that it is seen to be absurd upon reflection), and embrace a more traditional form of (welfarist) consequentialism which treats the "commonsense" intuition as something like a useful rule of thumb instead of a fundamental moral principle.

Setiya disputes this by suggesting that we focus on the moral stakes as the situation unfolds in time:

At the beginning of Five Killings, five people are going to be killed. In One Killing to Prevent Five, five people are going to be killed unless they are saved by the pushing of the button, which kills an innocent stranger by dropping him off a bridge into the path of the speeding trolley. The situation in which someone is going to be killed unless they are saved in this way is as bad as the situation in which they are going to be killed. Ethically speaking, the damage has been done. [...] It makes things worse, not better, that the button is pushed, so that the innocent stranger dies. That is why One Killing to Prevent Five is worse than Five Killings: it starts out worse and then declines. If we think of the temporal unfolding of events in One Killing to Prevent Five, we can make sense of why we should prefer Five Killings.

Of course, this is not an argument intended to persuade the traditional consequentialist, but rather an "internal" defense of deontological* thinking, explaining why it makes sense from their starting point to reach t[...]
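The iterated-badness structure can be made vivid with a toy lower-bound recursion (this is purely illustrative, with assumed unit numbers; it is not Setiya's own formalism):

```python
# Sketch of the iterated-badness constraint discussed above.
# Level 0 is an ordinary killing (badness 1 unit, by assumption).
# A level-(n+1) killing is killing one as a means to preventing five
# level-n killings; for the mandated preference to come out right, its
# badness must exceed that of the five killings it would prevent.

def min_badness(level, ordinary=1.0):
    """Lower bound on the badness of a level-`level` killing, in units
    of ordinary killings."""
    b = ordinary
    for _ in range(level):
        b = 5 * b  # must outweigh the five level-n killings prevented
    return b

# A level-2 killing (stopping five utilitarians, each of whom would kill
# one to prevent five killings) must be worse than 25 ordinary killings:
print(min_badness(2))  # -> 25.0
```

The bound grows without limit as the levels iterate, which is just the "And so on" of the post: the disvalue of the killing must scale with the instrumental good it achieves.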

Intelligible Non-Natural Concerns


I've previously argued that -- even by non-naturalist lights -- what matters are various natural properties (e.g. causing pleasure or pain), and the role of the non-natural normative properties is instead to "mark" the significance of these natural properties.

But it's worth flagging that there are exceptions. While I take it that typically what matters are natural features of the world, this is not a universal restriction on what matters. After all, normative properties plausibly have the further normative property of being worthy of philosophical scrutiny. So I do not deny that there may be special cases when it is perfectly reasonable to take an interest in morality de dicto. (Responding to moral uncertainty may be another such case.) My claim was the more modest one that non-naturalism does not commit us to having non-natural properties take center stage in our moral lives.

The special cases where normative properties themselves are of legitimate interest are precisely cases in which it no longer seems perverse or unintelligible to take a special interest in a non-natural property. There's clearly nothing unintelligible about taking a philosophical interest in non-natural properties, after all. (They raise all sorts of interesting questions!) The case of moral uncertainty may be less obvious, so let me discuss that a bit further.

Suppose you aren't sure whether it's wrong to painlessly kill happy chickens, as you are unsure whether the cognitive capacities that they possess (in particular, their limited degree of psychological connectedness from one moment to the next) are sufficient to ground a normative interest in continued (happy) survival. The question about which you are unsure is not an empirical one -- we may suppose that you are fully aware of all the relevant empirical details concerning chicken psychology. You know what relations of psychological continuity and connectedness do and do not hold between the various chicken timeslices. 
You just aren't sure which relations are the ones that matter. Since you generally desire to avoid causing harm, you specifically desire not to kill the chicken if the relation binding together its temporal parts is one that matters (in the sense of giving it a normative interest in survival).

This seems a distinctively abstract sort of property that you are (quite reasonably) concerned with in this case. But then it doesn't seem any great problem if it turns out that the property in question is a non-natural one. Of course, it is not just any old non-natural property. And it is not an entirely free-floating concern, disconnected from all your other concerns. On the contrary, this property marks being normatively alike to other things that you rightly care about, and that uncontroversially matter. So perhaps this connection to other, more concrete, concerns can help to render this abstract concern more intelligible.

If we take for granted that adult humans have what matters in survival and that embryos do not, we may intelligibly wonder whether the most coherent systematization of our pattern of concern would mandate concern for chicken survival or not. This is not exactly the same as the question whether chicken survival matters, but from a first personal perspective (assuming that one is not in general morally misguided, etc.) they are at least closely related, as an answer to the one question may reasonably be taken to subjectively settle the other as well.

So I take this to be an intelligible concern, despite being relatively abstract in nature. It doesn't seem to make any difference to its intelligibility if it turns out to be a non-natural property that we are here concerned with. What would be objectionable is if non-naturalists were committed to replacing our ordinary concrete concerns (such as those involving the hedonic states of pleasure and pain) with excessively abstract non-natural ones, but we have already put that w[...]

Harms, Benefits, and Framing Effects


Kahneman and Tversky famously found that most people would prefer to save 200 of 600 people for certain over a 1/3 chance of saving all 600 (with a 2/3 chance of saving none), and yet would prefer a 1/3 chance of none of the 600 dying (with a 2/3 chance of all 600 dying) over a guaranteed 400 deaths out of the 600.  This seems incoherent, since it seems our preferences over a pair of options are reversed merely by describing the very same case using different words.
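The incoherence is easy to see by computing expected survivors under each framing; a minimal sketch, using the standard Tversky & Kahneman numbers as given above:

```python
from fractions import Fraction  # exact arithmetic, so 1/3 * 600 is exactly 200

def expected_survivors(outcomes):
    """outcomes: list of (probability, survivors) pairs."""
    return sum(p * s for p, s in outcomes)

third, two_thirds = Fraction(1, 3), Fraction(2, 3)

sure_save   = expected_survivors([(1, 200)])                       # "200 will be saved"
gamble_save = expected_survivors([(third, 600), (two_thirds, 0)])  # 1/3 chance all saved
sure_die    = expected_survivors([(1, 600 - 400)])                 # "400 will die"
gamble_die  = expected_survivors([(third, 600), (two_thirds, 0)])  # 1/3 chance none die

# All four options leave exactly 200 expected survivors; only the
# wording differs, yet preferences flip between the two framings.
print(sure_save, gamble_save, sure_die, gamble_die)
```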

In 'The Asian Disease Problem and the Ethical Implications Of Prospect Theory' (forthcoming in Noûs) Dreisbach and Guevara argue that the folk responses are compatible with a coherent non-consequentialist view.  Their basic idea (if I understand them correctly) is that the "400 will die" case is suggestive of a different causal mechanism: perhaps the 400 die from our intervention, so the choice is between guaranteed or gambled harms, whereas the "saving" choice is between guaranteed or gambled benefits.  They then suggest that non-consequentialist principles might reasonably mandate a special aversion to causing guaranteed harm (and so think it better to risk harming either all or none, despite no difference in expected value between the sure thing and the gamble).  In the first case, by contrast, they suggest that non-consequentialists might think it easier to justify saving some lives as a "sure thing" rather than taking a gamble that would most likely save nobody at all.

Such non-consequentialist principles sound pretty odd to me, but let's grant them for the sake of argument.  I think that D&G's argument fails for the simpler reason that we can easily clarify the thought experiment so that there is no change in causal mechanisms between the two cases.  Rather than merely specifying that "400 will die", we may clarify: "400 will die of the disease."  I trust that this makes no intuitive difference to the case: So long as the scenario is framed in terms of how many people die of the disease rather than how many are saved from it, we prefer to gamble with lives in a way that we otherwise would not.  The inconsistency cannot be explained away by positing that our intuitions are tracking the deontological property of whether we cause a harm, for it evidently remains even when there is no such causal discrepancy between the cases.

So it seems that our prima facie intuitions here are simply incoherent -- and responsive to arbitrary framing effects -- after all.  Or am I missing something?

Anomaly v Huemer on Immigration


People often assume that to allow immigration is an act of charity: a country generously sharing its land and institutions with outsiders who have no real claim to be there.  Michael Huemer's work forcefully upends this assumption, showing that immigration restrictions are in fact a form of harmful coercion (like blocking a starving man from accessing a public market where he could trade for food). This reconceptualisation shifts the argumentative "burden", insofar as we generally accept that it is much more difficult to justify coercively harming someone (a seeming rights-violation) than to merely refrain from assisting them.

Jonny Anomaly, in a recent blog post on the issue, seems to miss this key feature of Huemer's argument, instead characterizing Huemer's argument in terms of "mutually beneficial gains", and responding that "although a small number of voluntary transactions may benefit all parties, this does not entail that a large series of transactions will benefit everyone."  But Huemer relies on no such entailment.  Indeed, he explicitly argues that incidental disadvantages don't justify harmful coercion: You may not block Starving Marvin's access to the market just because you're worried that he will beat a local to the last loaf of cheap bread.

It would of course be more relevant if immigration would somehow cause the total breakdown of society -- non-absolutists allow that rights may be overridden to avoid disaster.  But there is insufficient reason to accept such overblown fears at this stage. (Anomaly speculates that "At the extreme, unlimited migration might destroy the very institutions to which Starving Marvin wishes to immigrate."  We are given no reason to consider this credible.)

The reasonable response to such concerns is not to use them to drum up opposition to immigration (or the open borders ideal) in our current -- excessively nativist -- milieu, but just to keep an eye out for new evidence as we move to reduce the barriers to immigration.  As Huemer himself writes:
I grant that it may be wise to move only gradually towards open borders. The United States might, for example, increase immigration by one million persons per year, and continue increasing the immigration rate until either everyone who wishes to immigrate is accommodated, or we start to observe serious harmful consequences. My hope and belief would be that the former would happen first. But in case the latter occurred, we could freeze or lower immigration levels at that time.

Is there any reasonable basis for rejecting this "gradualist" proposal?

Nanoseconds that Matter


Take an arbitrarily short duration -- I'll speak of 'nanoseconds' for familiarity and convenience, but you could use an even smaller measure of time.  Could removing a mere (arbitrary) nanosecond from your life plausibly make your life any worse on the whole?  You might think not, on the basis that "surely nothing of any significance could occur during such a short time."  On the other hand, if you remove all the nanoseconds then we have no life left at all, which is certainly a significant difference.  Is it coherent to think that many individually worthless moments might collectively have value?

I have my doubts, and have previously suggested that such putatively vague goods (such as a "sufficient duration to matter") are better understood as graded and/or involving threshold effects.  A friend suggested minuscule scales of time as a challenge to this view, but I think my approach still makes good sense of this case.  Here's how...

Some goods are extended in duration and plausibly independent / proportioned in value such that 1/n of the duration has 1/n of the value of the whole.  Hedonic pleasure is perhaps the paradigm example of this.  A tiny (but non-zero) period of pleasure can thus be expected to have a correspondingly tiny (but non-zero) value.

Is that enough to refute the claim that no nanoseconds matter?  Well, one might push back against my counterexample by insisting that a mere nanosecond (or picosecond, or whatever) can make no difference to the actual amount of pleasure felt.  This claim might be defended in either of two ways:

(1) appeal to some kind of 'discernibility' criterion, such that if you cannot discriminate between an experience lasting an extra nanosecond and one without, then there is no difference in the felt, subjective experience. 
But this fails, because we know that indiscriminability is insufficient for phenomenal identity (since the former relation is intransitive and identity isn't).

(2) the critic might claim that during a sufficiently short period of time no physical processing in the brain sufficient to give rise to (continued) experience will have occurred.  But in that case, we have simply transformed this into a chunking case: The minimum phenomenal duration must, we're assuming, extend for longer (let's say it is 100 nanoseconds), in which case there must be some extension of brain processing in physical time sufficient to give rise to 100 nanoseconds more phenomenal experience.  Assuming that this happens, on average, once every 100 nanoseconds of physical time, it would seem that an arbitrary 1 ns extension of physical processing gives you a 1/100 chance of getting a bonus 100 ns of experience.  Even though 99% of nanoseconds have now been rendered worthless, the remaining 1% have their value boosted one hundredfold, returning the expected value of the nanosecond to our (slight but non-zero) starting point.

So: some nanoseconds (or picoseconds, etc.) matter.  Further, a similar story may be told about various non-hedonic goods.  These will likewise tend to be graded, building up in value over time, perhaps with elements of 'chunking' or threshold effects here or there.  Consider having an idea, or writing a book.  It might at first seem inconceivable that a mere nanosecond could ever make a difference to such projects.  But there will be some crucial period during which the idea becomes more fully-formed and clear in your mind, and its value may increase with its clarity and completeness, and some nanoseconds simply must make a difference to these dimensions (since they involve precise physical mechanisms such as firing neurons).  Similarly when writing a book, there will be some nanosecond that makes the difference between your successfully pressing a key on your [...]
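The expected-value arithmetic behind the chunking reply can be sketched directly (the 100 ns minimum phenomenal duration is the post's illustrative assumption, not an empirical claim):

```python
from fractions import Fraction  # exact arithmetic for the 1/100 probability

# The 'chunking' story: suppose a minimum phenomenal duration of 100 ns,
# triggered on average once per 100 ns of physical processing time.

def expected_bonus_ns(chunk_ns=100):
    """Expected extra phenomenal experience (in ns) from one extra
    nanosecond of physical processing."""
    p_trigger = Fraction(1, chunk_ns)  # chance the extra ns completes a chunk
    return p_trigger * chunk_ns        # bonus of a whole chunk when it does

# A 1/100 chance of a 100 ns bonus: the expected value of each extra
# nanosecond is unchanged from the smooth, non-chunked picture.
print(expected_bonus_ns())  # -> 1
```

However large the chunk, the rarer trigger and the bigger bonus cancel exactly, which is why chunking cannot rescue the claim that no nanosecond matters.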

Aggregating the Right Moments


Should we prefer to give one person half a million minutes (i.e. one year) more life, or to give a million people one minute more each?  If iterated a million times over (once for each person in the million), the latter repeated choice is clearly better for all (by half a million minutes).  Moreover, as I suggested in comments to that post, if we assume that the million choices are independent of each other in value -- that is, the value of making one such choice does not depend on how the other choices are made -- then it quickly follows that it's better to give the million tiny benefits rather than the one big benefit, even in a one-off choice situation.

However, it's worth flagging that on one very natural (but philosophically distorting) way of imagining the situation, the independence assumption will not hold.

It's natural to imagine bestowing the extra gift of life to someone on their deathbed.  But then giving them one minute of time is not actually worth 1/500k of 500k extra minutes, since a minute spent on your deathbed is likely to be pretty worthless, i.e. disproportionately bad compared to the average minute in your life.  With an extra year, by contrast, you may actually achieve something worthwhile.

Of course, if we're interested in whether small benefits to many can, in aggregate, outweigh large benefits to one, it's important that the putative "small benefit" actually be beneficial.  So, rather than imagining someone starting afresh after an extended hospital stay has already disrupted their life, it might be more philosophically fruitful to instead imagine the initial health disruption being delayed for the stipulated duration of time (a minute or a year, as the case may be).  
This makes an important difference, because an extra minute in the midst of your life might be just what you needed to finish some worthwhile project (or at least extend a period of carefree enjoyment) that is genuinely valuable.

While it may be relatively unlikely for any given person that the extra random minute would make much difference, given a large enough number of beneficiaries it becomes quite likely indeed that the extra minute made a substantial difference for someone (and a less substantial, but still genuinely desirable, difference for many others).  This is in stark contrast to imagining all million people each getting an unrepresentative, comparatively unhappy extra minute to spend lying on their deathbed!

In sum: It can be hard to believe that giving one minute more to a million different people could really do more good than giving a full year of life to one person.  But this intuition may at least be partly due to imagining a distorted version of the case where the extra minute being given is a disproportionately worthless one.  If we instead imagine giving randomly representative minutes to a million individuals, and bear in mind all the moments in life when an extra minute could have real value, then I find the anti-aggregative intuition disappears entirely.  I now find it entirely credible that the extra minute each, to a large enough population, is better overall than an extra year to just one person would be. [...]
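A quick check of the aggregate arithmetic behind the post's framing (treating "half a million minutes" as the round approximation to a year that the post uses):

```python
# "Half a million minutes" really is roughly one year:
minutes_per_year = 365 * 24 * 60
print(minutes_per_year)  # -> 525600

# In aggregate, a million one-minute gifts total twice the minutes of
# one person's extra half-million-minute year:
total_small_benefits = 1_000_000 * 1   # one extra minute each, a million people
one_big_benefit      = 500_000         # half a million minutes, one person
print(total_small_benefits - one_big_benefit)  # -> 500000 minutes more
```

Of course, as the post argues, the aggregate count only settles the question if each minute given is genuinely (and representatively) valuable; that is exactly what the deathbed framing obscures.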

Universalizing Tactical Voting


I regularly come across two objections to tactical voting, i.e. voting for Lesser Evil rather than Good in hopes of defeating the Greater Evil candidate.  One objection is just the standard worry that individual votes lack instrumental value, debunked here.  More interestingly, some worry that tactical voting is positively problematic, morally speaking, on grounds of its putative non-universalizability.

On one version of the worry, tactical voting involves (something approaching) a contradiction in the will, insofar as even if those who most prefer Good constituted a majority, they could get stuck in the inferior equilibrium point of all (unnecessarily, and contrary to their collective preference) supporting Lesser Evil.  On another version of the worry, tactical voting involves (something like) a contradiction in conception, insofar as it involves responding to how others plan to vote, which might seem to depend upon those others voting non-tactically, i.e. not waiting to first learn how you plan to vote.

To avoid either worry, it suffices for tacticians to follow Regan's co-operative utilitarianism (which is, in general, the correct theoretical solution to any sort of coordination problem).  To wit:

(1) Identify those who are willing and able to cooperate with you (by following this very decision procedure) in pursuit of the best collectively attainable outcome.

(2) Determine the best available total plan of action for the group of cooperators, given how non-cooperators will actually behave, then play your individual role in said plan.

In the first problem case: If a majority of the electorate is willing and able to cooperate with you, then correctly following the above procedure entails (i) that they will recognize themselves as forming a majority, and (ii) will follow the best plan available on that basis, viz. electing Good.
Your group of cooperators will only vote for Lesser Evil on the condition that they are not in a position to collectively elect Good (who is, by stipulation, better).  So: no problem.

The second case is a bit trickier.  I take it that we are to imagine different sub-groups, each of which attempts to cooperate in voting tactically to best achieve their (differing) goals.  Perhaps 40% unconditionally support Greater Evil, and 30% each support Good and Lesser Evil, whilst still preferring the other over Greater Evil.  How should the G and LE groups vote (given their values)?

Since they have different goals, they do not count as fully mutual cooperators.  All we can do is consider each group separately.  In each case, what the members of a group objectively ought to do will depend upon what the other group does.  But that's not a problem, since there is after all a fact of the matter as to what the various individuals will end up doing, and so the above tactical decision procedure will (separately) determine what they all should have done (given their values).  It can be summarized as saying that each group should vote for whichever candidate the other group votes for; determining which candidate that is, however, is a matter outside of moral theory.  (It may instead be determined by whichever group is better able to position their candidate as the 'default' alternative to the Greater Evil.)

While it would of course be unfortunate if it turned out that the Lesser Evil folks were able to outmaneuver the Good folks here, that would just be a sad fact about the world and not any sort of indictment of tactical voting per se, or so it seems to me.

Is there some residual problem that I'm missing here?

(The practical problem -- how to act given incomplete information, etc. -- is a different one from the theoretical problem I focus on here.  It's obv[...]
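The two-step procedure can be sketched as a toy plurality-vote simulation. This is my illustration (candidate labels, group sizes, and helper names are all made up to match the 40/30/30 case above), not Regan's own formalism:

```python
from collections import Counter

def plurality_winner(ballots):
    """Return the candidate with the most votes (ties broken arbitrarily)."""
    return Counter(ballots).most_common(1)[0][0]

def cooperative_vote(group_size, ranking, others_ballots):
    """Step (2) for one bloc of cooperators: given how non-cooperators
    actually vote, back the most-preferred candidate the bloc can elect."""
    for candidate in ranking:
        if plurality_winner(others_ballots + [candidate] * group_size) == candidate:
            return [candidate] * group_size
    return [ranking[0]] * group_size  # nothing electable; vote sincerely

# 40 voters unconditionally back Greater Evil ("GE"); the Lesser Evil
# ("LE") bloc of 30 has positioned itself as the default alternative,
# so the Good ("G") bloc of 30 takes those ballots as fixed.
ge_ballots = ["GE"] * 40
le_ballots = ["LE"] * 30
g_ballots = cooperative_vote(30, ["G", "LE", "GE"], ge_ballots + le_ballots)

print(plurality_winner(ge_ballots + le_ballots + g_ballots))  # LE
```

The G bloc cannot elect Good (30 < 40), so the procedure has it vote tactically for LE, defeating Greater Evil 60-40. With a cooperating majority (say, 60 G-supporters), the same procedure elects Good outright, which is the answer to the first worry.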

Assessing the NMC's Defense of its Independent Midwifery Ban


After receiving much criticism for its effective ban on independent midwifery, the NMC released a document [pdf] that seeks to explain and justify their position (see especially the fourth and final page).

Their central conclusion is that they are simply following orders, and it isn't their responsibility to do anything to mitigate the harms they're thereby causing:

So what we are seeing now is an entirely foreseeable consequence of Government policy (of successive governments since 2009) and the EU Directive which introduced this mandatory requirement in the interests of public protection. As Finlay Scott advised in 2010 and remains the case today, if the UK or any of the devolved administrations consider it is necessary to enable the continued availability of the services provided by entirely independent midwives, acting outside the alternative governance structures which are available to them, then they must seek to facilitate a solution, as the solution does not lie with the NMC.

They further stress:

The NMC exists to protect the public using the statutory powers it has been given. This includes ensuring that all its registered nurses and midwives have appropriate indemnity insurance.
The NMC does not have the power to remove, waive or disregard this statutory duty, nor could it be in the public interest for it to do so.

I think there are a couple of points worth noting here.

Firstly: This is non-responsive to the specific concerns that have been raised about the NMC's hasty and overzealous exercise of its statutory powers.

* The NMC did not have a statutory duty specifically to disrupt existing care arrangements (it could instead have allowed a reasonable adjustment period between reaching its final verdict and implementing the ban, preventing IMs from acquiring new clients rather than disrupting the maternity care of their existing clients).

* The NMC did not have a statutory duty to ban IMs from attending births even in a non-medical capacity.

* The NMC did not have a statutory duty to form its judgment (that IMUK's indemnity insurance was "inappropriate") in an arbitrary and ignorant fashion, without properly assessing the catastrophic risk model provided by IMUK's actuaries, which explains why they believe their product to be entirely fit for purpose. [Listen from 23 - 24 mins into this radio interview, where IMUK chair Jacqui Tomkins explains that after long refusing to even meet to discuss their model, the NMC finally conceded that they hadn't even attempted to assess IMUK's catastrophic risk model because it was "very complicated and would have taken a lot of time"!!]

* The NMC did not have a statutory duty to refuse to communicate with IMs about what level of indemnity cover it would consider "sufficient".
(In case their stubborn and uncooperative arrogance does not come through sufficiently strongly in the above quotes, have a listen to NMC Chief Executive Jackie Smith at 11:30 mins through this radio interview, responding to the host's query about what would constitute sufficient cover with, "That is for them to satisfy me.")

In each of these cases, far from simply faithfully executing their statutory duties, it seems clear that the NMC has behaved in a reckless, callous, and arbitrary fashion, contrary to the interests of the affected parties (including affected members of "the public" who suffered disruption to their maternity care as a result).

Secondly: While perhaps less important in practice, I think it worth flagging just how lacking in imagination one would have to be to think it impossible in principle that disregarding a regulation "could [...] be in the public interest".  After all, I think my previous posts make a reasonably strong case that requiring indemnity insurance for IMs is bad policy[...]

Our Zombie Bodies, and Physicalist Epiphenomenalism


Eric Olson has a fascinating paper, 'The Zombies Among Us' (forthcoming in Nous), where he points out that standard constitutional theories of persons imply that our bodies are phenomenal zombies -- physically identical to us but lacking conscious experiences (or indeed any mental properties).

Constitutionalists hold that we are constituted by (but not identical to) the physical matter that makes up our bodies.  If we imagine a brain-transplant case, for example, it seems that we go where our brains go, and so we can come apart from our bodies (and likewise from the biological organism that our bodies also constitute).  But then we must deny that our materially coincident bodies have mental properties (such as the belief that we go with our brains and not with our bodies), on pain of a fairly radical skepticism (if bodies have all the same beliefs and thoughts as us, what confidence could we have that we are not ourselves such thinking bodies, falsely believing that we go with our brains?).

So, in the ordinary run of things, our bodies are physically identical to us but lack mental properties: they are philosophical zombies.

Not in any sense that refutes physicalism, of course; our brains may still metaphysically suffice for (giving rise to) consciousness, even if we do not then attribute the consciousness to every object that has our brains as parts.  (Which makes independent sense, after all: we can speak of a gerrymandered object consisting of my brain + the Eiffel Tower, but we should not hold that this object is itself a thinking thing.)

An interesting consequence of this, which Olson flags, is that physicalists should not identify mental states (or properties) with physical states (or properties).  Any token physical state that I am in is shared by my body, after all.
So if mental states were identical to token physical states, we would be committed to holding that my body shares my mental states, contra the view expressed above.

Instead, I think, physicalists should agree with dualists that the brain gives rise to (rather than just is) the mind.  Their view can remain distinctively physicalistic insofar as they insist that there is nothing metaphysically heavyweight about this "generation" -- the mind comes along "for free", in something like the way that computer hardware gives rise to a running software programme.  Your web browser is not identical to any of the circuitry upon which it runs, but nor does it create any deep metaphysical mysteries.  (Of course, I think consciousness is different, but I'm playing Devil's advocate for the physicalist here.)

An interesting upshot of this (it seems to me) is that physicalists should be epiphenomenalists too.  These "virtual" or abstractly generated mental properties do not push atoms around, and so lack fundamental causal powers (though we can always invoke them when speaking loosely, as in correlative explanations).  This isn't a problem, since there's nothing wrong with epiphenomenalism, but insofar as many physicalists believe otherwise, it does put them in a funny position.

[See also: the 'Virtual Mind' diagnosis of why Searle's Chinese Room argument is misleading, for related discussion of how mental properties should be attributed to a different 'level' of entity from that which does the underlying information processing.] [...]

Medical Indemnity: Protection or Compensation?


One of the (many) puzzling elements of the NMC anti-midwifery fiasco is the NMC's insistence that, by shutting down midwives whom they judge to have "insufficient" indemnity cover, they are thereby "mak[ing] sure that all women and their babies are provided with a sufficient level of protection should anything go wrong," and that they "had to act quickly in the interests of public safety."

This rhetoric strikes me as deeply misleading.  Indemnity cover is not a public safety issue.  Not only does it do nothing to prevent bad medical outcomes from occurring in the first place, it cannot even ensure in general that financial support is available when needed for increased caring costs associated with (e.g.) disability.  All that indemnity cover does for patients is ensure that greater compensation can be paid in a malpractice lawsuit -- a very rare and specific set of circumstances.

Indemnity cannot be relied upon to "protect" families "should anything go wrong" because it does not cover anything going wrong, but only things going wrong due to malpractice on the part of the medical practitioner.  If a baby suffers brain damage due to unavoidable complications, for example, indemnity will not help.  For that, we need disability support as part of the general social safety net.

As it stands, indemnity cover (even at a level deemed "insufficient" by the NMC) increases the cost of obtaining an independent midwife by around £500 or so.  This can be expected to price out some women who would have preferred the superior care available outside of the overburdened NHS (even at the cost of reduced compensation in the unlikely event of a malpractice lawsuit).
As such, a legal requirement to have indemnity cover can actually be expected to lead to worse health outcomes in aggregate, as the added cost deprives some women of the opportunity for independent care, and adds to the demands being placed upon an already over-burdened NHS.

(And that is quite apart from the more general concerns about paternalism and patient autonomy noted in my previous post.  Even for those of us who can afford to pay extra for indemnity cover, it isn't at all clear what business the government has forcing us to do this.)

Given that it is really a matter of enabling compensation rather than ensuring protection, medical indemnity cover for private practitioners is not a public safety issue, or even a matter of legitimate public interest at all.  It should be up to individual practitioners and their clients to decide for themselves whether this is something that they feel that they want or need, and if so, to what degree of coverage.

It is certainly not so pressing and urgent a need that it can justify depriving a woman of her chosen midwife, with all the associated harms noted previously. [...]

UK Shuts Down Independent Midwives


A new low for harmful over-regulation: The UK has just regulated independent midwives out of business (at least for the time being).  The Nursing and Midwifery Council decided that they did not consider the indemnity cover of Independent Midwives UK (which has worked fine since indemnity cover was legally mandated in 2014) to be "adequate" after all.  So, as of 11 January last week, independent midwives have been legally barred from attending the births of their clients, severely disrupting the birth plans of these expectant parents (threatening their right to a home birth, disrupting their continuity of care, and generally undermining patient autonomy and the values that led these expectant parents to invest in an independent midwife in the first place).

The NMC's behaviour here is appalling in so many respects.  The immediate implementation of the decision makes it especially damaging.  Expectant parents have formed birth plans which depend upon the independent midwives with whom they have built up relationships of trust.  To disrupt these plans without extremely good reason is deeply intrusive and unethical.  As Birthrights explains in their critical open letter, NMC's actions "appear designed to cause maximum disruption and damage to independent midwives and the women they care for."  They continue:

The NMC has a key role to play in protecting public safety, yet this decision directly jeopardises the health and safety of the women it is supposed to safeguard. Beyond the very real physical health implications of this decision, it is causing emotional trauma to women and their families at an intensely vulnerable time.
To date, it appears that the NMC has shown no concern for the physical and mental wellbeing of pregnant women who have booked with independent midwives.

At the very least, the NMC should, as Birthrights rightly insists, guarantee "that all women who are currently booked with independent midwives using the IMUK insurance scheme will be able to continue to access their services" and "that the midwives caring for them will not face disciplinary action for fulfilling their midwifery role".

Aside from the horrifically rushed implementation, the decision itself just doesn't seem remotely reasonable.  NMC Chief Executive and Registrar Jackie Smith has responded with the claim that "The NMC absolutely supports a woman’s right to choose how she gives birth and who she has to support her through that birth. But we also have a responsibility to make sure that all women and their babies are provided with a sufficient level of protection should anything go wrong."

In other words: nice as a woman's right to choose might be, what's really important is that she can sue for many bucketloads of money (not just a few bucketloads) if anything goes wrong.

Seriously?  That's your top priority?  This reveals deeply messed-up values.

(It's arguably wrong for the law to require indemnity insurance of independent midwives at all: The costs are of course passed on to clients, and it's not obvious what legitimate interest the state has in forcing expectant parents to pay for such cover.  I understand that in previous decades clients of independent midwives could just sign a waiver indicating that they wished to have midwifery care without such indemnity cover in place.  I would prefer to still have that option.  So I think the law is wrong.  But it's especially absurd for the NMC to not just enforce the minimal requirements of the law, but to zealously root out any midwifery care that might occur with large amounts of indemnity cover that just aren't large[...]

2016 in review


(Past annual reviews: 2015, 2014, 2013, 2012, 2011, 2010, 2009, 2008, 2007, 2006, 2005, and 2004.)

Applied Ethics

* The Instrumental value of one vote -- can be much higher than many philosophers seem to assume.
* Pets and Slavery -- explains why domesticated animals are not inherently wronged by their guardians, or morally akin to "slaves".
* Philanthropic focus vs abandonment -- diagnoses some bad reasoning from the CEO of Oxfam, who mistakenly thinks there are reasons of fairness to help people inefficiently.
* Effective Altruism, Radical Politics, and Radical Philanthropy -- Is EA insufficiently 'radical'?  Or excessively so?
* How bad? -- a rough first step towards moral prioritization.
* Opposite Day: "Charity begins at home" edition -- Don't let my evil twin Ricardo convince you!

Normative Ethics

* Illustrating the Paradox of Deontology -- may you murder once to save five loved ones from being murdered? (If not, why not?)
* Is Consequentialism More Demanding? -- not if you take the interests of the poor into account...
* Irrational Increments for the Self-Torturer -- argues, contra Tenenbaum and Raffman, that some individual increments of the self-torturing device are not worth taking.
* Related: Self-Torturers Without Diminishing Marginal Value -- provides a slightly neater version of the case to consider.
* Do we have Vague Projects? -- putative candidates may be best explained in a way that suggests not, actually.
* Attitudinal Pleasure and Normative Stance-Independence -- is the value of pleasure "subjective" in the relevant sense?
I express some doubts, contra Sobel.
* Possibly Wrong Moral Theories -- and why we should think they're actually wrong.

Metaethics and Consciousness

* The basic reason to reject naturalism: Substantive Boundary Disputes -- argues that naturalism (about normativity and consciousness alike) cannot account for the substantive nature of questions about the domain's extent.
* The 2-D Argument Against Metaethical Naturalism -- building upon the Open Question Argument.
* Carroll on Zombies -- a physicist talks about something other than phenomenal consciousness.
* Final Value and Fitting Attitudes -- explains how to analyse the former in terms of the latter, whilst avoiding the objections raised in a recently published paper.

Teaching

* 7 Things Everyone Should Know About Philosophy
* Student Spotlight: Intrinsically Irrational Instrumental Desires -- an under-explored area of logical space.
* Teaching Effective Altruism -- discusses the syllabus used for my EA class (see also discussion of the Giving Game I ran in class).
* Expected Value without Expecting Value -- I was surprised to find that most students do not accept expected value reasoning: they would prefer to save 1000 lives for sure, than to have a 10% chance of saving a million lives. [...]
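For what it's worth, the expected-value comparison the students rejected is simple arithmetic (my illustration of the numbers in that last item):

```python
# Option A: save 1,000 lives for certain.
# Option B: a 10% chance of saving 1,000,000 lives.
ev_sure = 1.0 * 1_000
ev_gamble = 0.10 * 1_000_000

# The gamble is 100x better in expectation, yet most students prefer A.
assert ev_gamble == 100 * ev_sure
print(ev_sure, ev_gamble)  # 1000.0 100000.0
```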

Is Consequentialism More Demanding?


People sometimes complain that impartial consequentialism is "too demanding", insofar as it requires us (comparatively) wealthy and fortunate people to do a lot to help the less fortunate.  And it's true that those are non-trivial costs.  But it's hard to take seriously the suggestion that these costs are morally more significant than the costs endured by the less fortunate by our doing less (or nothing).  So-called "moderate" views of beneficence are in fact extremely costly for the worst-off -- much worse than consequentialism is for the wealthy.  So it's an odd objection.

David Sobel's (2007) 'The Impotence of the Demandingness Objection' nicely develops this line of criticism (p.3):

Consider the case of Joe and Sally. Joe has two healthy kidneys and can live a decent but reduced life with only one. Sally needs one of Joe’s kidneys to live. Even though the transfer would result in a situation that is better overall, the Demandingness Objection’s thought is that it is asking so much of Joe to give up a kidney that he is morally permitted to not give. The size of the cost to Joe makes the purported moral demand that Joe give the kidney unreasonable, or at least not genuinely morally obligatory on Joe. Consequentialism, our intuitions tell us, is too demanding on Joe when it requires that he sacrifice a kidney to Sally.

But consider things from Sally’s point of view. Suppose she were to complain about the size of the cost that a non-Consequentialist moral theory permits to befall her. Suppose she were to say that such a moral theory, in permitting others to allow her to die when they could aid her, is excessively demanding on her. Clearly Sally has not yet fully understood how philosophers typically intend the Demandingness Objection. What has she failed to get about the Objection?
Why is Consequentialism too demanding on the person who would suffer significant costs if he was to aid others as Consequentialism requires, but non-Consequentialist morality is not similarly too demanding on Sally, the person who would suffer more significant costs if she were not aided as the alternative to Consequentialism permits? What must the Objection’s understanding of the demands of a moral theory be such that that would make sense? There is an obvious answer that has appealed even to prominent critics of the Objection — that the costs of what a moral theory requires are more demanding than the costs of what a moral theory permits to befall the unaided, size of cost held constant. The moral significance of the distinction between costs a moral theory requires and costs it permits must already be in place before the Objection gets a grip. But this is for the decisive break with Consequentialism to have already happened before we feel the pull of the Demandingness intuitions.

We might similarly ask why deontological views are not considered excessively demanding when they prohibit Sally from saving her own life by stealing one of Joe's spare kidneys.  In terms of raw (theory-neutral) cost to the agent, this is surely very demanding!  Granted, as people standardly evaluate "demandingness", they might presuppose that moral prohibitions of this sort are not as relevantly "demanding" as positive obligations.  But this is, in effect, to already be assessing questions of demandingness through deontologically-tinted glasses.

It seems, then, that there are no neutral grounds for considering impartial consequentialism to be "more demanding" than rival moral theories, at least in the sense of imposing excessively great costs on agents.  One can only get this [...]

Illustrating the Paradox of Deontology


One who accepts a "consequentialism of rights" might hold that deliberately killing an innocent person (let's call this "murder", for short) is so morally bad that it isn't justified even to save five lives.  But deontologists go further, suggesting that one should not murder even to prevent five other murders.  This seems puzzling: if murder is so morally horrendous, why should we not be concerned to minimize its occurrence?  This is Scheffler's paradox of deontology in a nutshell.

A deontologist might respond by suggesting that our moral aims are not so impersonal: we have a special responsibility for our own (present) actions, and so must regard our not (now) ourselves causing harm / violating rights as a distinctive moral goal.  Scheffler pushes back against this idea on pp. 415-6 of his 'Agent-Centred Restrictions, Rationality, and the Virtues':

[O]n standard deontological views, morality evaluates actions from a vantage point which is concerned with more than just the interests of the individual agent. In other words, an action will be right or wrong, on such a view, relative to a standard of assessment that takes into account a number of factors quite independent of the interests of the agent. And defenders of such views are unlikely to claim that the relevant standard of assessment includes agent-centred restrictions, but that it is a matter of indifference, from the vantage point represented by that standard, whether or not those restrictions are violated. For if it is not the case that it is preferable, from that vantage point, that no violations should occur than that any should, it is hard to see how individual agents could possibly be thought to have reason to observe the restrictions when doing so did not happen to coincide with their own interests or the interests of those they cared about.
In other words, deontological views need the idea that violations of the restrictions are morally objectionable or undesirable if the claim that people ought not to commit such violations when doing so would be in their own interests is to be plausible. Yet if such views do regard violations as morally objectionable or undesirable, in the sense that it is morally preferable that none should occur than that any should, it does then seem paradoxical that they tell us there are times when we must act in such a way that a larger rather than a smaller number of violations actually takes place.

It's a fairly dense passage, so when teaching this topic last term I came up with a thought experiment to help illustrate.

Suppose that five innocent people whom you love are going to be murdered, unless you yourself murder a (distinct) innocent person.  Is it wrong for you to murder an innocent person in order to save your five loved ones?

Standard deontological theories will insist that murder, even in this case, is wrong.  But this may seem a difficult verdict to uphold, given that murdering the one seems preferable from both your personal standpoint and the impersonal standpoint.

Impersonally: five murders are worse than one.  Personally: there is a special moral cost to you in committing a murder, sure, but it is not so great a cost (we may suppose) as losing your five loved ones.  So, we may wonder, from what perspective does the deontological verdict have any normative force or appeal?

To get the verdict that murdering the one is wrong, the deontologist must hold that you are morally special (to override the impersonal verdict and get that your murdering one is morally worse than allowing five other murders to [...]

Possibly Wrong Moral Theories


In 'The Normative Irrelevance of the Actual', I explained why it doesn't matter whether a putative counterexample to a moral theory is actual or hypothetical in nature, on the grounds that first-order moral theories can be understood as (implying) a whole raft of conditionals from possible non-moral circumstances to moral verdicts.  But there's another, perhaps more intuitive, way to make the case, based on the idea that some counterfactually superior moral theory should be superior, simpliciter.

Consider Slote's sentimentalism.  According to Slote (2007, 31), wrong acts are those that "reflect or exhibit or express an absence (or lack) of fully developed empathic concern for (or caring about) others." The relevant kind of empathic concern is not some kind of a priori theoretical posit, such as universal love, but rather is tied to our actual natural dispositions to favour those near and dear. (This is crucial to secure his desired anti-utilitarian verdicts.)  But this raises the obvious worry: what if our "natural" empathic dispositions turn out to have racist or otherwise clearly immoral built-in tendencies?

Slote responds: “The ethics of empathy may here be hostage to future biological and psychological research, but I don’t think that takes away from its promise as a way of understanding and justifying (a certain view of) morality.” (p.36)

But, I suggest, if we know that there is a possible situation in which sentimentalism is not the correct moral theory, then we can ask ourselves what the correct moral theory in that situation would be. And once equipped with that correct possible moral theory—one that provides an independent justification for rejecting racist or otherwise immoral sentiments even when sentimentalism cannot—then we may wonder what we need sentimentalism for. What is stopping that counterfactually correct moral theory from also being the actually correct moral theory?

Perhaps there are some moral theories that give plausible verdicts only in a certain counterfactual world, and are no longer plausible when we apply them to our world (or others).  So, fine, discard those clearly inadequate theories.  Still, given the entirety of logical space to choose from, we should be able to find a theory that yields the desired results in our world as well as in the counterfactual world where it is superior to sentimentalism (or whatever merely contingently plausible theory we are considering).

So, if a moral theory is merely contingently plausible, we can find a better option out there.  Being possibly wrong, in this sense, suffices to establish that the moral theory is actually wrong.

Attitudinal Pleasure and Normative Stance-Independence


David Sobel has an interesting post up at the revamped PEA Soup blog on 'Normative Stance Independence and Pleasure'.  He suggests that if pleasure is best understood in attitudinal terms (as per Parfit's hedonic likings) then this undermines Normative Stance Independence, the view that "normative facts are not made true by anyone’s conative or cognitive stance" or "by virtue of their ratification from within any given actual or hypothetical perspective."

But does it?  The distinction between stance-dependence and -independence is a slippery beast.  Even if pleasure could be said to involve "taking a stance" towards a base sensation by liking it, it's not so clear that the stance is what does the heavy lifting in explaining why pleasure is good.  More plausibly, I think, pleasure is good just because of how it feels, objectively speaking.  Again, this normative explanation remains untouched, it seems to me, no matter if the phenomenology of pleasure turns out to be inextricably tied up with the attitude of liking.  It could still be the objective phenomenology, rather than the "stance" per se, that matters.

(In support of this point, I take it that if knowledge, for example, has intrinsic value then this is uncontroversially objective or 'stance-independent' in nature, regardless of the fact that knowledge is (or involves) a cognitive state, and so might be considered part of the agent's "stance" in some sense.  So, why not the same for pleasure?)

Pets and Slavery


In 'The Case Against Pets', Rutgers law professors Francione and Charlton argue that "domestication and pet ownership [...] violate the fundamental rights of animals."  This is, I think, a deeply absurd position.

A large part of their essay is just concerned with arguing against treating pets as property.  I think it's pretty clear that the ordinary social meaning of having a pet already rules this out.  One may carve up one's property for fun; if someone were to carve up their pet, we would (rightly) want them to be locked up for animal cruelty.  If the legal system failed to do this, they would certainly be shunned by the rest of society, who would be deeply horrified by their actions.

It's an interesting question whether non-rational beings can have a right to life in addition to a right against cruel treatment.  If so, the implications would be quite radical, even aside from the complete abolition of the meat industry.  Society would presumably be obliged to support animal shelters to an extent that removes the current need to kill many perfectly healthy animals due to overcrowding.  I think that's a plausible enough position, though there are counterarguments to consider.

Where the authors go off the rails is when they suggest that "domestication itself raises serious moral issues irrespective of how the non-humans involved are treated" -- such that pet ownership would still be wrong even if animal rights against cruel treatment and convenience-killing were secured.  Why do they think this?  What further rights are being violated, merely by caring for your pet?  Here is what F&C write:

Domesticated animals are completely dependent on humans, who control every aspect of their lives. Unlike human children, who will one day become autonomous, non-humans never will. That is the entire point of domestication – we want domesticated animals to depend on us. [...] We might make them happy in one sense, but the relationship can never be ‘natural’ or ‘normal’.
They do not belong in our world, irrespective of how well we treat them. This is more or less true of all domesticated non-humans. They are perpetually dependent on us. We control their lives forever. They truly are ‘animal slaves’. Some of us might be benevolent masters, but we really can’t be anything more than that."Slavery is bad, X is like slavery, therefore X is bad" is superficial reasoning.  Much depends on whether X shares the relevant features or preconditions that explain why slavery is so bad.I take the basic problem with (human) slavery to be that it is so drastically contrary to the interests of the enslaved.  Not only were slaves historically mistreated in all sorts of ways, but even an imaginary "happy slave" seems in a tragic position insofar as their capacity for rational autonomy -- and hence for a fully flourishing human life -- is being stunted rather than nourished.  Rationally autonomous beings have an interest in developing and preserving their autonomy, and when this interest is violated their life is (in this respect) worse as a result.This crucial feature is obviously lacking in non-rational animals.  So long as we do not mistreat them (whether by outright cruelty or mere neglect, e.g. failure to provide a sufficiently stimulating environment) domestic animals' chances at a fully flourishing life are not impaired by the mere fact of our control over them.  They have no interest in be[...]

Do we have Vague Projects?


Tenenbaum and Raffman (2012) claim that "most of our projects and ends are vague." (p.99)  But I'm not convinced that any plausibly are.  I've already discussed the self-torturer case, and how our interest in avoiding pain is not vague but merely graded.  I think similar things can be said of other putative "vague" projects.

T&R's central example of a vague project is writing a book:

    Suppose you are writing a book. The success of your project is vague along many dimensions. What counts as a sufficiently good book is vague, what counts as an acceptable length of time to complete it is vague, and so on.

But it strikes me as strange for one's goal to be to reach some vague level of sufficiency.  When I imagine writing a book, my preferences here are graded: each incremental improvement in quality is pro tanto desirable; each reduction in time spent is also pro tanto desirable.  These two goals seem like they should be able to be traded off against each other -- perhaps precisely, or (if they are not perfectly commensurable goods) then perhaps not, but this sort of rough incomparability between two goods is (I take it) not the same as either good itself being vague.

I could imagine a cynical person who really doesn't care to improve the quality of their book above a sufficient level.  Perhaps they just want it to be of sufficient quality to earn a promotion, or some other positive social appraisal.  But these desired consequences are even more clearly not vague.

Similar things can be said of the standard example of baldness.  I trust that nobody (sane) actually has a fundamental desire not to fall under the extension of the English-language predicate 'bald'.  What they more plausibly have is a graded desire that roughly maps onto what is socially recognized as baldness.  For example, perhaps they desire not to have their appearance negatively appraised on the basis of hair loss.  (Or perhaps even just not to have other people think of them as bald.)  But of course there's nothing vague about that: people either appraise you negatively or they do not.  Such appraisals are graded, however: the first noticeable signs of a receding hairline may be expected to elicit a less severe appraisal than a large bald patch.  (Or so we might imagine the vain man to assume.)

Or consider a case from Elson's (2015) reply:

    You may wish for a restful night’s sleep, but to stay up as late as possible as is consistent with that. Since restful is vague, one minute of sleep apparently couldn’t make the difference between a restful and a nonrestful night, and you ought to stay up for another minute. But foreseeably, if you keep thinking that way, you will stay up all night. (p.474)

As with the book case, this strikes me as simply involving a trade-off between two graded (non-vague) ends.  To speak of a "wish for a restful night's sleep" is surely just a rough shorthand for what is really a graded desire, for a night's sleep that is more restful rather than less so.  Perhaps there are some threshold effects in there, insofar as some lost minutes may have more noticeable effects than others on your state of mind the next day (and you can't know in advance exactly which minutes these are).  But it's clearly just false to assume that a minute's less sleep will always make no difference to what it is that you really want here (regardless of whether the term 'restful' still applies to your [...]
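The graded reading of the sleep case can be made concrete with a toy model.  Everything numerical below is my own illustrative assumption (none of it is from Tenenbaum and Raffman or Elson): the point is only that if both restfulness and the enjoyment of staying up are graded goods with diminishing returns, the trade-off has an interior optimum, rather than the minute-by-minute slide to staying up all night that the vague-threshold reading invites.

```python
# Toy model: choosing how many minutes to stay up (0..480), trading off
# graded restfulness against graded enjoyment of staying up.  The utility
# functions are invented for illustration; only their shape matters.

NIGHT = 480  # minutes available (an assumption)

def graded_restfulness(minutes_slept):
    # Diminishing returns: each extra minute of sleep helps, but less and less.
    return 100 * (1 - 0.995 ** minutes_slept)

def enjoyment_of_staying_up(minutes_up):
    # Staying up is also pro tanto good, again with diminishing returns.
    return 40 * (1 - 0.99 ** minutes_up)

def total_value(minutes_up):
    return graded_restfulness(NIGHT - minutes_up) + enjoyment_of_staying_up(minutes_up)

# With graded ends there is a best stopping point strictly between
# "go straight to bed" and "stay up all night".
best = max(range(NIGHT + 1), key=total_value)
print(best)
```

No sorites reasoning gets a grip here: each extra minute awake has a perfectly determinate (if small) cost in restfulness, and past the optimum that cost exceeds the benefit.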

Self-Torturers without Diminishing Marginal Value


My last post mentioned in passing that the puzzle of the self-torturer may be complicated by the fact that money has diminishing marginal value.  This can mean that a few increments (of pain for $$) may be worth taking even if a larger number of such increments, on average, are not.  So to make the underlying issues clearer, let us consider a case that does not involve money.

Suppose ST is equipped with a self-torturing device that functions as follows.  Once per day, he may (permanently) increase the dial by one notch, which will have two effects: (i) marginally increasing the level of chronic pain he feels for the rest of his life, and (ii) giving an immediate (but temporary) boost of euphoric pleasure.  Before it is permanently attached, ST is allowed to play around with the dials to become informed about what it is like at various levels.  He realizes that after 1000 increments, the burst of pleasure is fully cancelled out by the heightened level of chronic pain he would then be feeling.  So he definitely wants to stop before then.  (We may assume that he will live on for several years after this point.)  Is it rational for ST to turn the dial at all?

Surely not.  Each increment imposes +x lifetime pain in return for a temporary boost of y pleasure.  We may treat these as being of constant value (bracketing any slight differences in, e.g., the duration of ST's subsequent "lifetime" between the first day and the thousandth -- we could make it so that the pain only starts on the 1000th day if necessary).  And we know that it would be terrible for ST to endure 1000 increments.  That is, the disvalue of +1000x lifetime pain vastly outweighs the value of 1000 short bursts of y pleasure.  Since the intrinsic values here are (more or less) constant, it follows that the intrinsic disvalue of +x lifetime pain vastly outweighs the intrinsic value of a short burst of y pleasure.

So -- assuming that there are no extrinsic values in play (e.g. we're not to imagine that ST has never experienced euphoria, such that a single burst would add a distinctive new quality to his life, or anything like that) -- it follows that each individual increment of the self-torture device is not worth it.  It would be irrational for ST to turn it at all.  So there is clearly no great "puzzle" or "paradox" here.

Compare this result to the original puzzle involving money.  Since money has diminishing marginal value, it might be that (n times) $y is worth (n times) x pain (for some n < 1000) even if $1000y is not worth 1000x pain.  That contributes to the intuitive force of the "puzzle", insofar as at least the early increments seem like they might be worth taking.  But it should be clear that merely adding a resource with diminishing marginal value can't really create a paradox here where there wasn't one previously.  There will still be some threshold point n where it is irrational (of net intrinsic disvalue) for ST to turn the dial a single notch more.

So there is no great "puzzle" to the self-torturer. [...]
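The constant-value bookkeeping can be spelled out in a few lines.  The particular magnitudes of x and y below are invented for illustration; the only structural assumptions carried over from the case are (i) constant per-increment values and (ii) 1000 increments being net-negative overall.

```python
# Toy bookkeeping for the modified (money-free) self-torturer.

PLEASURE_PER_TURN = 1.0   # intrinsic value of one euphoric burst (y) -- invented
PAIN_PER_TURN = 1.5       # intrinsic disvalue of one notch of lifetime pain (x) -- invented

def net_value(turns):
    # With constant per-increment values, total value is linear in the
    # number of turns taken.
    return turns * (PLEASURE_PER_TURN - PAIN_PER_TURN)

# Stipulated in the case: 1000 increments are clearly net-negative.
assert net_value(1000) < 0

# Linearity then settles the sign of a single turn: net_value(1000) is
# just 1000 * net_value(1), so one turn must be net-negative too.
assert net_value(1000) == 1000 * net_value(1)
assert net_value(1) < 0
```

The arithmetic is trivial, which is the point: without diminishing marginal value anywhere in the case, the per-increment and total verdicts cannot come apart.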

Irrational Increments for the Self-Torturer


Recall that the Self-Torturer (ST) gets $10 000 for each turn of a dial that permanently increases the pain he feels for the rest of his life by a negligible amount.  Each individual increment seems worth making, the thought goes, but 1000 increments would leave ST in intense agony, which no amount of money can compensate for.

It seems intuitively clear to me that ST would soon reach a point at which additional increments -- even considered in isolation -- are not worth it.

For example, if one hundred equal increments yielding 100x pain for $100y are collectively not worth it, then the badness of 100x pain outweighs the value of $100y.  On average, then, the harm of x outweighs the benefit of y, over the 100 increments.  If the increments are all of equal net value, then each increment is of negative net value, and hence irrational to choose.  If instead we allow that the first few increments were worth the money (due perhaps to the diminishing marginal utility of money, or else the increasing marginal disutility of pain) then we can at least know that the one hundredth increment is not worth taking.  After 99 increments, the value of $y is outweighed by the badness of x pain.  It would be incoherent to deny this while holding that the value of $100y is outweighed by the badness of 100x pain.  So there's no real puzzle here.

But, strangely, Tenenbaum and Raffman, in 'Vague Projects and the Puzzle of the Self-Torturer', do deny this.  They instead affirm (p.98):

    Nonsegmentation: When faced with a certain series of choices, the rational self-torturer must choose to stop turning the dial before the last setting; whereas in any isolated choice, she must (or at least may) choose to turn the dial.

Why do they think this?  They seem to assume that our interest in avoiding pain is vague and coarse-grained.  They refer to "our commitment to a project of leading a (relatively) pain free life" (p.107).  Since a negligible increase in pain cannot make a difference to whether or not we live a relatively pain-free life, no individual increment violates this interest of ours,* whereas each increment does serve our interest in greater wealth.  (If you're already in pain, I imagine that T&R might appeal to some other coarse-grained anti-pain interest, just with higher thresholds, such as that of leading a life that is not excessively inundated with pain.)

There are (at least) two obvious problems here.  One, noted by Luke Elson in his reply, is that the asterisked claim above is false.  In a sorites series like this, it is not universally true that each increment determinately makes no difference to whether the vague predicate applies.  Rather, on standard views of vagueness (e.g. supervaluationism), there will be a range in which it is indeterminate which particular increment violates the threshold, but determinate that some such increment does.  It is thus (determinately) false to claim that no increment violates our interest in leading a relatively pain-free life.

Secondly, and more fundamentally, it just seems absurd to me to think that our interests in avoiding pain are coarse-grained in this way.  What's so special about the borderline between a life that is "relatively" pain free and one that is not?  Suppose it takes 20 increments of pain to reach the borderline cases, which in turn extend for 5 increments before we[...]
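The averaging argument above can also be checked numerically on a version of the case with diminishing marginal utility of money.  The log utility function, payout, and pain cost below are all invented for illustration; the structural point is that once the marginal money-utility of a turn falls below the constant pain cost, it stays below, so there is a first increment that is irrational even considered in isolation, and so is every later one.

```python
import math

# Toy model of the original (monetary) self-torturer.  All numbers invented.

PAY_PER_TURN = 10_000   # dollars per notch (as in the case)
PAIN_COST = 2.0         # constant disutility of one pain increment (x) -- invented

def money_utility(dollars):
    # Diminishing marginal utility of money (log form is an assumption).
    return 25 * math.log(1 + dollars)

def marginal_value(n):
    # Net value of taking increment n+1, having already taken n.
    gain = money_utility((n + 1) * PAY_PER_TURN) - money_utility(n * PAY_PER_TURN)
    return gain - PAIN_COST

# Early increments are worth it, but there is a threshold past which every
# further increment is net-negative, considered entirely on its own.
threshold = next(n for n in range(1000) if marginal_value(n) < 0)
print(threshold)
```

Because the money-utility gain per turn only ever shrinks while the pain cost is constant, `marginal_value` crosses zero exactly once, which is just the "threshold point n" claimed in the previous post.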

The Instrumental Value of One Vote


Over in this Leiter thread, some philosophers seem to be dismissing the instrumental value of voting (for Clinton over Trump) for misguided reasons:

(1) That a marginal vote is "astronomically unlikely to change the outcome."

This is not true,* at least for those who are able to vote in a swing state.  According to Gelman, Silver and Edlin (p.325), the chance of a marginal vote altering the election outcome is as high as 1 in 10 million, depending on the state.  Given that the outcome will in turn affect hundreds of millions (or even billions) of people, voting for Clinton in a swing state arguably has significant expected value.

(2) That the system is not sensitive to a single vote, and anything close to even will be decided by the courts or the like.

The claim that insensitivity undermines marginal impact is generally fallacious.  Given that a large collection of votes together makes a difference, it is logically impossible for each individual addition to the collection to make no difference.  While it may be true that an objectively tied vote and an objective 1-vote victory would not be distinguished by the system, there must be some smallest and largest numbers of votes that would in fact trigger a recount or a court case (or whatever), in which case one of those numbers [specifically, whichever one is the difference between a straight victory and a court-delivered loss] provides the new threshold that matters for a marginal vote to make a decisive difference.  (See also the final page of this paper by Gelman et al.)

* = I've previously been led astray by Jason Brennan's model from p.19 of The Ethics of Voting, which really does yield astronomically small chances -- on the order of 10^-2650.  I thank Toby Ord and Carl Shulman for their corrections in this public Facebook thread.

In short, Brennan's mistake (and that of the past researchers he draws on) is to model voters as having a fixed non-50/50 propensity to favour a particular candidate over the other.  Even if the fixed propensity is just 50.5%, repeating the odds over 100+ million voters makes the result an astronomically certain victory for the favoured candidate (with a vanishingly small standard deviation from the expected result of their securing 50.5% of the total votes).  This is obviously not an accurate reflection of either our epistemic position prior to an election, or of any kind of objective probability distribution over the possible outcomes.  It's a bad model.  A better model would either model different voters as having different propensities [as per section 5 of this Gelman et al paper] or at least take on board our credences over a range of possible propensities (including 50/50) rather than stipulating that a particular non-50/50 propensity holds.

As Gelman once wrote in a comment on Brennan's blog:

    [T]he claim of "10 to the −2,650th power" is indeed innumerate. This can be seen in many ways. For example, several presidential elections have been decided by less than a million votes, so a number of the order 1 in a million can't be too far off, for a voter in a swing state in a close national election. For another data point, in a sample of 20,000 congressional elections, a few were within 10 votes of being tied and about 500 were within 1000 votes of being tied. This suggests an average probability of a tie vote of about 1/80,000 in any randomly selected congr[...]
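The difference between the two models is easy to reproduce numerically.  The sketch below is my own illustration (the electorate size and the uniform prior over propensities are assumptions, chosen for simplicity rather than taken from Brennan or Gelman et al.): it computes the chance of an exact tie first under a fixed 50.5% propensity, and then by averaging the same binomial calculation over uncertainty about the propensity.

```python
import math

# Chance that one vote decides an N-voter, two-candidate election, on
# two models.  N is illustrative; it is even so that an exact tie is possible.

N = 100_000_000

def log_prob_tie(p):
    # log P(exact tie) under Binomial(N, p):
    #   log C(N, N/2) + (N/2) * log(p * (1 - p))
    log_binom = math.lgamma(N + 1) - 2 * math.lgamma(N // 2 + 1)
    return log_binom + (N // 2) * (math.log(p) + math.log(1 - p))

# Brennan-style model: every voter has a fixed 50.5% propensity.
log10_fixed = log_prob_tie(0.505) / math.log(10)

# Alternative: treat the propensity itself as uncertain, integrating
# P(tie | p) over a uniform prior on [0.49, 0.51] (an assumed prior).
lo, hi, steps = 0.49, 0.51, 20_000
dp = (hi - lo) / steps
density = 1 / (hi - lo)
prob_mixed = sum(
    math.exp(log_prob_tie(lo + (i + 0.5) * dp)) * density * dp
    for i in range(steps)
)

print(round(log10_fixed))   # astronomically small under the fixed model
print(prob_mixed)           # orders of magnitude larger under uncertainty
```

The fixed-propensity model drives the tie probability thousands of orders of magnitude below anything physically meaningful, while admitting ordinary uncertainty about the vote split brings it back to the "one in a few million" range that Gelman's empirical comparisons suggest.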