2017-01-18T10:12:03.872+00:00

A new low for harmful government over-regulation: the UK has just regulated independent midwives out of business (at least for the time being). The Nursing and Midwifery Council decided that they did not consider the indemnity cover of Independent Midwives UK (which has worked fine since indemnity cover was legally mandated in 2014) to be "adequate" after all. So, as of 11 January last week, independent midwives have been legally barred from attending the births of their clients, severely disrupting the birth plans of these expectant parents (threatening their right to a home birth, disrupting their continuity of care, and generally undermining patient autonomy and the values that led these expectant parents to invest in an independent midwife in the first place).

The NMC's behaviour here is appalling in so many respects. The immediate implementation of the decision makes it especially damaging. Expectant parents have formed birth plans which depend upon the independent midwives with whom they have built up relationships of trust. To disrupt these plans without extremely good reason is deeply intrusive and unethical. As Birthrights explains in their critical open letter, the NMC's actions "appear designed to cause maximum disruption and damage to independent midwives and the women they care for." They continue:

    The NMC has a key role to play in protecting public safety, yet this decision directly jeopardises the health and safety of the women it is supposed to safeguard. Beyond the very real physical health implications of this decision, it is causing emotional trauma to women and their families at an intensely vulnerable time. To date, it appears that the NMC has shown no concern for the physical and mental wellbeing of pregnant women who have booked with independent midwives.

At the very least, the NMC should, as Birthrights rightly insists, guarantee "that all women who are currently booked with independent midwives using the IMUK insurance scheme will be able to continue to access their services" and "that the midwives caring for them will not face disciplinary action for fulfilling their midwifery role".

Aside from the horrifically rushed implementation, the decision itself just doesn't seem remotely reasonable. NMC Chief Executive and Registrar Jackie Smith has responded with the claim that "The NMC absolutely supports a woman’s right to choose how she gives birth and who she has to support her through that birth. But we also have a responsibility to make sure that all women and their babies are provided with a sufficient level of protection should anything go wrong."

In other words: nice as a woman's right to choose might be, what's really important is that she can sue for many bucketloads of money (not just a few bucketloads) if anything goes wrong.

Seriously? That's your top priority? This reveals deeply messed-up values.

(It's arguably wrong for the law to require indemnity insurance of independent midwives at all: the costs are of course passed on to clients, and it's not obvious what legitimate interest the state has in forcing expectant parents to pay for such cover. I understand that in previous decades clients of independent midwives could just sign a waiver indicating that they wished to have midwifery care without such indemnity cover in place. I would prefer to still have that option. So I think the law is wrong. But it's especially absurd for the NMC to not just enforce the minimal requirements of the law, but to zealously root out any midwifery care that might occur with large amounts of indemnity cover that just aren't large enough for their liking. It's obscenely paternalistic, and deeply disrespectful of patient autonomy.)

Finally, there are procedural issues surrounding how unfairly the NMC has treated independent midwives throughout this whole process. The NMC has consistently refused to offer any guidance to IMUK regarding what level of cover they would approve of. I[...]
2017-01-07T12:09:34.835+00:00

(Past annual reviews: 2015, 2014, 2013, 2012, 2011, 2010, 2009, 2008, 2007, 2006, 2005, and 2004.)

Applied Ethics

* The Instrumental value of one vote -- can be much higher than many philosophers seem to assume.
* Pets and Slavery -- explains why domesticated animals are not inherently wronged by their guardians, or morally akin to "slaves".
* Philanthropic focus vs abandonment -- diagnoses some bad reasoning from the CEO of Oxfam, who mistakenly thinks there are reasons of fairness to help people inefficiently.
* Effective Altruism, Radical Politics, and Radical Philanthropy -- Is EA insufficiently 'radical'? Or excessively so?
* How bad? -- a rough first step towards moral prioritization.
* Opposite Day: "Charity begins at home" edition -- Don't let my evil twin Ricardo convince you!

Normative Ethics

* Illustrating the Paradox of Deontology -- may you murder once to save five loved ones from being murdered? (If not, why not?)
* Is Consequentialism More Demanding? -- not if you take the interests of the poor into account...
* Irrational Increments for the Self-Torturer -- argues, contra Tenenbaum and Raffman, that some individual increments of the self-torturing device are not worth taking.
* Related: Self-Torturers Without Diminishing Marginal Value -- provides a slightly neater version of the case to consider.
* Do we have Vague Projects? -- putative candidates may be best explained in a way that suggests not, actually.
* Attitudinal Pleasure and Normative Stance-Independence -- is the value of pleasure "subjective" in the relevant sense? I express some doubts, contra Sobel.
* Possibly Wrong Moral Theories -- and why we should think they're actually wrong.

Metaethics and Consciousness

* The basic reason to reject naturalism: Substantive Boundary Disputes -- argues that naturalism (about normativity and consciousness alike) cannot account for the substantive nature of questions about the domain's extent.
* The 2-D Argument Against Metaethical Naturalism -- building upon the Open Question Argument.
* Carroll on Zombies -- a physicist talks about something other than phenomenal consciousness.
* Final Value and Fitting Attitudes -- explains how to analyse the former in terms of the latter, whilst avoiding the objections raised in a recently published paper.

Teaching

* 7 Things Everyone Should Know About Philosophy
* Student Spotlight: Intrinsically Irrational Instrumental Desires -- an under-explored area of logical space.
* Teaching Effective Altruism -- discusses the syllabus used for my EA class (see also discussion of the Giving Game I ran in class).
* Expected Value without Expecting Value -- I was surprised to find that most students do not accept expected value reasoning: they would prefer to save 1000 lives for sure, than to have a 10% chance of saving a million lives. [...]
2016-12-22T16:10:07.449+00:00

People sometimes complain that impartial consequentialism is "too demanding", insofar as it requires us (comparatively) wealthy and fortunate people to do a lot to help the less fortunate. And it's true that those are non-trivial costs. But it's hard to take seriously the suggestion that these costs are morally more significant than the costs endured by the less fortunate by our doing less (or nothing). So-called "moderate" views of beneficence are in fact extremely costly for the worst-off -- much worse than consequentialism is for the wealthy. So it's an odd objection.

David Sobel's (2007) 'The Impotence of the Demandingness Objection' nicely develops this line of criticism (p.3):

    Consider the case of Joe and Sally. Joe has two healthy kidneys and can live a decent but reduced life with only one. Sally needs one of Joe’s kidneys to live. Even though the transfer would result in a situation that is better overall, the Demandingness Objection’s thought is that it is asking so much of Joe to give up a kidney that he is morally permitted to not give. The size of the cost to Joe makes the purported moral demand that Joe give the kidney unreasonable, or at least not genuinely morally obligatory on Joe. Consequentialism, our intuitions tell us, is too demanding on Joe when it requires that he sacrifice a kidney to Sally.

    But consider things from Sally’s point of view. Suppose she were to complain about the size of the cost that a non-Consequentialist moral theory permits to befall her. Suppose she were to say that such a moral theory, in permitting others to allow her to die when they could aid her, is excessively demanding on her. Clearly Sally has not yet fully understood how philosophers typically intend the Demandingness Objection. What has she failed to get about the Objection? Why is Consequentialism too demanding on the person who would suffer significant costs if he was to aid others as Consequentialism requires, but non-Consequentialist morality is not similarly too demanding on Sally, the person who would suffer more significant costs if she were not aided as the alternative to Consequentialism permits? What must the Objection’s understanding of the demands of a moral theory be such that that would make sense? There is an obvious answer that has appealed even to prominent critics of the Objection — that the costs of what a moral theory requires are more demanding than the costs of what a moral theory permits to befall the unaided, size of cost held constant. The moral significance of the distinction between costs a moral theory requires and costs it permits must already be in place before the Objection gets a grip. But this is for the decisive break with Consequentialism to have already happened before we feel the pull of the Demandingness intuitions.

We might similarly ask why deontological views are not considered excessively demanding when they prohibit Sally from saving her own life by stealing one of Joe's spare kidneys. In terms of raw (theory-neutral) cost to the agent, this is surely very demanding! Granted, as people standardly evaluate "demandingness", they might presuppose that moral prohibitions of this sort are not as relevantly "demanding" as positive obligations. But this is, in effect, to already be assessing questions of demandingness through deontologically-tinted glasses.

It seems, then, that there are no neutral grounds for considering impartial consequentialism to be "more demanding" than rival moral theories, at least in the sense of imposing excessively great costs on agents. One can only get this verdict by stacking the deck against consequentialism by implicitly defining "demandingness" in such a way as to only take a certain subclass of costs fully into account.

(Alternatively, we could take all this to motivate reconceptualizing demandingness as not really about costs at all, but rather something more like willpower. And then you just make these moves and -- voila![...]
2016-12-18T15:18:16.170+00:00

One who accepts a "consequentialism of rights" might hold that deliberately killing an innocent person (let's call this "murder", for short) is so morally bad that it isn't justified even to save five lives. But deontologists go further, suggesting that one should not murder even to prevent five other murders. This seems puzzling: if murder is so morally horrendous, why should we not be concerned to minimize its occurrence? This is Scheffler's paradox of deontology in a nutshell.

A deontologist might respond by suggesting that our moral aims are not so impersonal: we have a special responsibility for our own (present) actions, and so must regard our not (now) ourselves causing harm / violating rights as a distinctive moral goal. Scheffler pushes back against this idea on pp. 415-6 of his 'Agent-Centred Restrictions, Rationality, and the Virtues':

    [O]n standard deontological views, morality evaluates actions from a vantage point which is concerned with more than just the interests of the individual agent. In other words, an action will be right or wrong, on such a view, relative to a standard of assessment that takes into account a number of factors quite independent of the interests of the agent. And defenders of such views are unlikely to claim that the relevant standard of assessment includes agent-centred restrictions, but that it is a matter of indifference, from the vantage point represented by that standard, whether or not those restrictions are violated. For if it is not the case that it is preferable, from that vantage point, that no violations should occur than that any should, it is hard to see how individual agents could possibly be thought to have reason to observe the restrictions when doing so did not happen to coincide with their own interests or the interests of those they cared about. In other words, deontological views need the idea that violations of the restrictions are morally objectionable or undesirable if the claim that people ought not to commit such violations when doing so would be in their own interests is to be plausible. Yet if such views do regard violations as morally objectionable or undesirable, in the sense that it is morally preferable that none should occur than that any should, it does then seem paradoxical that they tell us there are times when we must act in such a way that a larger rather than a smaller number of violations actually takes place.

It's a fairly dense passage, so when teaching this topic last term I came up with a thought experiment to help illustrate it:

Suppose that five innocent people whom you love are going to be murdered, unless you yourself murder a (distinct) innocent person. Is it wrong for you to murder an innocent person in order to save your five loved ones?

Standard deontological theories will insist that murder, even in this case, is wrong. But this may seem a difficult verdict to uphold, given that murdering the one seems preferable from both your personal standpoint and the impersonal standpoint. Impersonally: five murders are worse than one. Personally: there is a special moral cost to you in committing a murder, sure, but it is not so great a cost (we may suppose) as losing your five loved ones. So, we may wonder, from what perspective does the deontological verdict have any normative force or appeal?

To get the verdict that murdering the one is wrong, the deontologist must hold that you are morally special (to override the impersonal verdict and get that your murdering one is morally worse than allowing five other murders to occur), but you're not so special that your interest in saving your loved ones overrides your putative moral obligations. It's an awkward combination of claims to assert simultaneously.

How do you think the deontologist might best respond to this challenge? [...]
2016-09-21T15:57:45.342+01:00

In 'The Normative Irrelevance of the Actual', I explained why it doesn't matter whether a putative counterexample to a moral theory is actual or hypothetical in nature, on the grounds that first-order moral theories can be understood as (implying) a whole raft of conditionals from possible non-moral circumstances to moral verdicts. But there's another, perhaps more intuitive, way to make the case, based on the idea that a counterfactually superior moral theory should be superior, simpliciter.
2016-09-20T17:26:04.127+01:00

David Sobel has an interesting post up at the revamped PEA Soup blog on 'Normative Stance Independence and Pleasure'. He suggests that if pleasure is best understood in attitudinal terms (as per Parfit's hedonic likings) then this undermines Normative Stance Independence, the view that "normative facts are not made true by anyone’s conative or cognitive stance" or "by virtue of their ratification from within any given actual or hypothetical perspective."
2016-09-12T22:27:10.148+01:00

In 'The Case Against Pets', Rutgers law professors Francione and Charlton argue that "domestication and pet ownership [...] violate the fundamental rights of animals." This is, I think, a deeply absurd position.

A large part of their essay is just concerned with arguing against treating pets as property. I think it's pretty clear that the ordinary social meaning of having a pet already rules this out. One may carve up one's property for fun; if someone were to carve up their pet, we would (rightly) want them to be locked up for animal cruelty. If the legal system failed to do this, they would certainly be shunned by the rest of society, who would be deeply horrified by their actions.

It's an interesting question whether non-rational beings can have a right to life in addition to a right against cruel treatment. If so, the implications would be quite radical, even aside from the complete abolition of the meat industry. Society would presumably be obliged to support animal shelters to an extent that removes the current need to kill many perfectly healthy animals due to overcrowding. I think that's a plausible enough position, though there are counterarguments to consider.

Where the authors go off the rails is when they suggest that "domestication itself raises serious moral issues irrespective of how the non-humans involved are treated" -- such that pet ownership would still be wrong even if animal rights against cruel treatment and convenience-killing were secured. Why do they think this? What further rights are being violated, merely by caring for your pet? Here is what F&C write:

    Domesticated animals are completely dependent on humans, who control every aspect of their lives. Unlike human children, who will one day become autonomous, non-humans never will. That is the entire point of domestication -- we want domesticated animals to depend on us. [...] We might make them happy in one sense, but the relationship can never be ‘natural’ or ‘normal’. They do not belong in our world, irrespective of how well we treat them. This is more or less true of all domesticated non-humans. They are perpetually dependent on us. We control their lives forever. They truly are ‘animal slaves’. Some of us might be benevolent masters, but we really can’t be anything more than that.

"Slavery is bad, X is like slavery, therefore X is bad" is superficial reasoning. Much depends on whether X shares the relevant features or preconditions that explain why slavery is so bad.

I take the basic problem with (human) slavery to be that it is so drastically contrary to the interests of the enslaved. Not only were slaves historically mistreated in all sorts of ways, but even an imaginary "happy slave" seems in a tragic position insofar as their capacity for rational autonomy -- and hence for a fully flourishing human life -- is being stunted rather than nourished. Rationally autonomous beings have an interest in developing and preserving their autonomy, and when this interest is violated their life is (in this respect) worse as a result.

This crucial feature is obviously lacking in non-rational animals. So long as we do not mistreat them (whether by outright cruelty or mere neglect, e.g. failure to provide a sufficiently stimulating environment), domestic animals' chances at a fully flourishing life are not impaired by the mere fact of our control over them. They have no interest in being free of our control, because they have no capacity for rational autonomy that would be served by such "freedom". Life in the wild is often nasty, brutish and short. We can provide much better lives for our companion animals.

Perhaps the simplest way to refute F&C's argument is to note that moral rights must track interests. It makes no sense to posit a rig[...]
2016-08-11T13:02:10.372+01:00

Tenenbaum and Raffman (2012) claim that "most of our projects and ends are vague" (p.99). But I'm not convinced that any plausibly are. I've already discussed the self-torturer case, and how our interest in avoiding pain is not vague but merely graded. I think similar things can be said of other putative "vague" projects.

T&R's central example of a vague project is writing a book:

    Suppose you are writing a book. The success of your project is vague along many dimensions. What counts as a sufficiently good book is vague, what counts as an acceptable length of time to complete it is vague, and so on.

But it strikes me as strange for one's goal to be to reach some vague level of sufficiency. When I imagine writing a book, my preferences here are graded: each incremental improvement in quality is pro tanto desirable; each reduction in time spent is also pro tanto desirable. These two goals seem like they should be able to be traded off against each other -- perhaps precisely, or (if they are not perfectly commensurable goods) then perhaps not, but this sort of rough incomparability between two goods is (I take it) not the same as either good itself being vague.

I could imagine a cynical person who really doesn't care to improve the quality of their book above a sufficient level. Perhaps they just want it to be of sufficient quality to earn a promotion, or some other positive social appraisal. But these desired consequences are even more clearly not vague.

Similar things can be said of the standard example of baldness. I trust that nobody (sane) actually has a fundamental desire not to fall under the extension of the English-language predicate 'bald'. What they more plausibly have is a graded desire that roughly maps onto what is socially recognized as baldness. For example, perhaps they desire not to have their appearance negatively appraised on the basis of hair loss. (Or perhaps even just not to have other people think of them as bald.) But of course there's nothing vague about that: people either appraise you negatively or they do not. Such appraisals are graded, however: the first noticeable signs of a receding hairline may be expected to elicit a less severe appraisal than a large bald patch. (Or so we might imagine the vain man to assume.)

Or consider a case from Elson's (2015) reply:

    You may wish for a restful night’s sleep, but to stay up as late as possible as is consistent with that. Since restful is vague, one minute of sleep apparently couldn’t make the difference between a restful and a nonrestful night, and you ought to stay up for another minute. But foreseeably, if you keep thinking that way, you will stay up all night. (p.474)

As with the book case, this strikes me as simply involving a trade-off between two graded (non-vague) ends. To speak of a "wish for a restful night's sleep" is surely just a rough shorthand for what is really a graded desire, for a night's sleep that is more restful rather than less so. Perhaps there are some threshold effects in there, insofar as some lost minutes may have more noticeable effects than others on your state of mind the next day (and you can't know in advance exactly which minutes these are). But it's clearly just false to assume that a minute's less sleep will always make no difference to what it is that you really want here (regardless of whether the term 'restful' still applies to your night's sleep -- there's clearly more to your interest in a restful night's sleep than just the binary question of whether it was restful or not).

Elson later cites Tuck's example of "a shepherd who wishes to build a cairn of stones [...] to guide him in the hills" (478). And again, while it may be vague whether a certain collection of stones is enough to qualify as a 'cairn[...]
2016-08-08T16:45:09.677+01:00

My last post mentioned in passing that the puzzle of the self-torturer may be complicated by the fact that money has diminishing marginal value. This can mean that a few increments (of pain for $$) may be worth taking even if a larger number of such increments, on average, are not. So to make the underlying issues clearer, let us consider a case that does not involve money.
2016-08-08T16:46:11.997+01:00

Recall that the Self-Torturer (ST) gets $10,000 for each turn of a dial that permanently increases the pain he feels for the rest of his life by a negligible amount. Each individual increment seems worth making, the thought goes, but 1000 increments would leave ST in intense agony, which no amount of money can compensate for. It seems intuitively clear to me that ST would soon reach a point at which additional increments -- even considered in isolation -- are not worth it.

For example, if one hundred equal increments yielding 100x pain for $100y are collectively not worth it, then the badness of 100x pain outweighs the value of $100y. On average, then, the harm of x outweighs the benefit of y, over the 100 increments. If the increments are all of equal net value, then each increment is of negative net value, and hence irrational to choose. If instead we allow that the first few increments were worth the money (due perhaps to the diminishing marginal utility of money, or else the increasing marginal disutility of pain) then we can at least know that the one hundredth increment is not worth taking. After 99 increments, the value of $y is outweighed by the badness of x pain. It would be incoherent to deny this while holding that the value of $100y is outweighed by the badness of 100x pain. So there's no real puzzle here.

But, strangely, Tenenbaum and Raffman, in 'Vague Projects and the Puzzle of the Self-Torturer', do deny this. They instead affirm (p.98):

    Nonsegmentation: When faced with a certain series of choices, the rational self-torturer must choose to stop turning the dial before the last setting; whereas in any isolated choice, she must (or at least may) choose to turn the dial.

Why do they think this? They seem to assume that our interest in avoiding pain is vague and coarse-grained. They refer to "our commitment to a project of leading a (relatively) pain free life" (p.107). Since a negligible increase in pain cannot make a difference to whether or not we live a relatively pain-free life, no individual increment violates this interest of ours,* whereas each increment does serve our interest in greater wealth. (If you're already in pain, I imagine that T&R might appeal to some other coarse-grained anti-pain interest, just with higher thresholds, such as that of leading a life that is not excessively inundated with pain.)

There are (at least) two obvious problems here. One, noted by Luke Elson in his reply, is that the asterisked claim above is false. In a sorites series like this, it is not universally true that each increment determinately makes no difference to whether the vague predicate applies. Rather, on standard views of vagueness (e.g. supervaluationism), there will be a range in which it is indeterminate which particular increment violates the threshold, but determinate that some such increment does. It is thus (determinately) false to claim that no increment violates our interest in leading a relatively pain-free life.

Secondly, and more fundamentally, it just seems absurd to me to think that our interests in avoiding pain are coarse-grained in this way. What's so special about the borderline between a life that is "relatively" pain free and one that is not? Suppose it takes 20 increments of pain to reach the borderline cases, which in turn extend for 5 increments before we reach the realm in which you determinately no longer qualify as leading a "relatively pain-free life". Are we supposed to imagine that a rational person could prefer to increase from 0 to 19 points of pain than to increase from 19 to 25? Are the latter 6 points intrinsically more significant because they happen to span the boundaries of the English language predicate [...]
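The averaging argument earlier in the post can be made concrete with a toy model. (The numbers here are purely my own stipulations for illustration: a square-root utility curve for money, and a constant 8 units of pain-disutility per dial-turn. Nothing in the post fixes these values.)

```python
import math

# Toy model (illustrative stipulations, not from T&R's paper):
# money has diminishing marginal utility u(m) = sqrt(m), so each
# successive $10,000 payment is worth a bit less than the last;
# each dial-turn adds a constant 8 units of pain-disutility.
PAYMENT = 10_000
PAIN_PER_TURN = 8.0

def marginal_benefit(k):
    """Extra money-utility gained from the k-th $10,000 payment."""
    return math.sqrt(PAYMENT * k) - math.sqrt(PAYMENT * (k - 1))

# Find the first increment that, even considered in isolation,
# is not worth taking: marginal benefit < marginal pain cost.
first_bad_turn = next(k for k in range(1, 1001)
                      if marginal_benefit(k) < PAIN_PER_TURN)
print(first_bad_turn)
```

On any such graded picture the early turns are worth taking and some determinate later turn is not, which is all the segmented verdict requires; no appeal to a coarse-grained "relatively pain-free life" threshold is needed.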
2016-07-22T20:13:03.487+01:00

Over in this Leiter thread, some philosophers seem to be dismissing the instrumental value of voting (for Clinton over Trump) for misguided reasons:

(1) That a marginal vote is "astronomically unlikely to change the outcome."

This is not true,* at least for those who are able to vote in a swing state. According to Gelman, Silver and Edlin (p.325), the chance of a marginal vote altering the election outcome is as high as 1 in 10 million, depending on the state. Given that the outcome will in turn affect hundreds of millions (or even billions) of people, voting for Clinton in a swing state arguably has significant expected value.

(2) That the system is not sensitive to a single vote, and anything close to even will be decided by the courts or the like.

The claim that insensitivity undermines marginal impact is generally fallacious. Given that a large collection of votes together makes a difference, it is logically impossible for each individual addition to the collection to make no difference. While it may be true that an objectively tied vote and an objective 1-vote victory would not be distinguished by the system, there must be some smallest and largest numbers of votes that would in fact trigger a recount or a court case (or whatever), in which case one of those numbers [specifically, whichever one is the difference between a straight victory and a court-delivered loss] provides the new threshold that matters for a marginal vote to make a decisive difference. (See also the final page of this paper by Gelman et al.)

* = I've previously been led astray by Jason Brennan's model from p.19 of The Ethics of Voting, which really does yield astronomically small chances -- on the order of 10^-2650. I thank Toby Ord and Carl Shulman for their corrections in this public Facebook thread.

In short, Brennan's mistake (and that of the past researchers he draws on) is to model voters as having a fixed non-50/50 propensity to favour a particular candidate over the other. Even if the fixed propensity is just 50.5%, repeating the odds over 100+ million voters makes the result an astronomically certain victory for the favoured candidate (with a vanishingly small standard deviation from the expected result of their securing 50.5% of the total votes). This is obviously not an accurate reflection of either our epistemic position prior to an election, or of any kind of objective probability distribution over the possible outcomes. It's a bad model. A better model would either model different voters as having different propensities [as per section 5 of this Gelman et al paper] or at least take on board our credences over a range of possible propensities (including 50/50) rather than stipulating that a particular non-50/50 propensity holds.

As Gelman once wrote in a comment on Brennan's blog:

    [T]he claim of "10 to the −2,650th power" is indeed innumerate. This can be seen in many ways. For example, several presidential elections have been decided by less than a million votes, so a number of the order 1 in a million can't be too far off, for a voter in a swing state in a close national election. For another data point, in a sample of 20,000 congressional elections, a few were within 10 votes of being tied and about 500 were within 1000 votes of being tied. This suggests an average probability of a tie vote of about 1/80,000 in any randomly selected congressional election. It's hard to see how the probability of a tie could be of order 10^-5 for congressional elections and then become 10^-2650 for presidential elections.

Finally, even if you accept the fixed-propensity model (despite its being demonstrably wrong), my old post on the Best Case for Voting makes the case for co-operating as part of a group that (collective[...]
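The contrast between the two models is easy to check numerically. The sketch below is my own illustration, not Brennan's or Gelman's exact calculation: the 100-million-voter electorate, the 50.5% fixed propensity, and the ±1% uncertainty window are all stipulated for the example. It computes the chance of an exact tie under a fixed propensity, and then under uniform uncertainty about the propensity.

```python
import math

def log10_tie_prob(n_voters, p):
    """log10 of P(exact tie) when each of n_voters independently
    favours candidate A with fixed probability p (n_voters even)."""
    k = n_voters // 2
    log_binom = math.lgamma(n_voters + 1) - 2 * math.lgamma(k + 1)
    log_pmf = log_binom + k * (math.log(p) + math.log(1 - p))
    return log_pmf / math.log(10)

n = 100_000_000  # stipulated electorate size

# Fixed propensity of 50.5%: a tie is astronomically unlikely,
# in the spirit of Brennan's 10^-2650 figure.
fixed = log10_tie_prob(n, 0.505)
print(f"fixed p = 0.505: tie probability ~ 10^{fixed:.0f}")

# Instead spread our credence uniformly over propensities in
# [0.49, 0.51]. Since P(tie | p) integrates to 1/(n+1) over the
# whole unit interval and is negligible outside this window:
density = 1 / 0.02          # height of the uniform prior
mixed = density / (n + 1)   # roughly 1 in a few million
print(f"uncertain p: tie probability ~ 1 in {round(1 / mixed):,}")
```

The fixed-propensity model drives the tie probability thousands of orders of magnitude below anything plausible, while even modest uncertainty about the propensity restores it to the "1 in a few million" range that the historical record (and Gelman's comment above) suggests.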
2016-06-25T21:01:22.017+01:00A few years back I noted that 2-D semantics provides a straightforward refutation of synthetic metaethical naturalism (SEN): SEN implies that moral terms differ in their primary and secondary intensions, this is clearly false (moral terms are "semantically neutral", or exhibit 2-D symmetry, in that their application to a world does not vary depending on whether we consider it as actual or as counterfactual), and so SEN must be false.As I've been developing this argument in my paper 'Moral Symmetry and Two Dimensional Semantics', it occurs to me that 2-D semantics enables an even broader argument against metaethical naturalism.To address the Open Question Argument, naturalists are committed to a divergence between the intension of 'good' (and other moral terms) and its meaning or cognitive significance. Such divergences are commonplace so far as the secondary intension is concerned: 'water' and 'H2O' have different meanings, despite picking out just the same stuff across all possible worlds (considered counterfactually). The problem for the naturalist is that such divergences do not seem justified when it comes to primary intensions. What a term picks out across worlds considered as actual seems very closely connected to the cognitive significance or sense of a term.(For example, if we use 'the watery stuff' as shorthand for whatever fills the functional role of water, and which thus picks out H2O in our world but XYZ in Twin Earth, then note that 'water' and 'the watery stuff' have the same primary intensions and cognitive significance, despite differing in their secondary intensions. 
"Water is the watery stuff" is cognitively trivial, whereas "Water is H2O" -- relating terms with distinct primary intensions -- is informative.)

The difficulty now for the naturalist is that there is no way to flesh out the primary intension of moral terms using purely naturalistic items in a way that does justice to the cognitive significance of moral terms. If you have 'good' super-rigidly (in both primary and secondary intensions) pick out (say) happiness, then you seem committed to holding that 'good' means the same (or at least has much the same cognitive significance) as 'happiness', when intuitively they don't even seem to be in the same ballpark.

Note that non-naturalists face no such problem. They can hold that 'good' super-rigidly picks out the sui generis property of goodness, which in turn supervenes on the natural things that are good -- a class of items that can only be identified through substantive moral insight, and not mere conceptual competence. Different moral communities may thus share this concept, picking out the same property of moral goodness, even as they disagree in their (implicit) theorizing about which natural items possess the moral property.

Any objections? [...]
2016-06-20T18:13:59.436+01:00Zombies are back in the news! Via the DN Heap of Links, I see physicist Sean Carroll defending what appears to be a kind of analytical functionalism:

What do we mean when we say “I am experiencing the redness of red?” We mean something like this: There is a part of the universe I choose to call “me,” a collection of atoms interacting and evolving in certain ways. I attribute to “myself” a number of properties, some straightforwardly physical, and others inward and mental. There are certain processes that can transpire within the neurons and synapses of my brain, such that when they occur I say, “I am experiencing redness.” This is a useful thing to say, since it correlates in predictable ways with other features of the universe. For example, a person who knows I am having that experience might reliably infer the existence of red-wavelength photons entering my eyes, and perhaps some object emitting or reflecting them. They could also ask me further questions such as “What shade of red are you seeing?” and expect a certain spectrum of sensible answers.

There may also be correlations with other inner mental states, such as “seeing red always makes me feel melancholy.” Because of the coherence and reliability of these correlations, I judge the concept of “seeing red” to be one that plays a useful role in my way of talking about the universe as described on human scales. Therefore the “experience of redness” is a real thing.

This is manifestly not what many of us mean by our qualia-talk. Just speaking for myself: I am not trying to describe my behavioural dispositions or internal states that "correlate [...] with other features of the universe" in "useful" ways. I have other concepts to do that work, concepts that feature in the behavioural sciences (e.g. psychology). Those concepts transparently apply just as well to my imagined zombie twin as to myself. We could ask the zombie 'further questions such as "What shade of red are you seeing?"
and expect a certain spectrum of sensible answers.' But this behaviouristic concept is not such a philosophically interesting one as our first-personal concept of what it is like to see red -- a phenomenal concept that is not properly applied to my zombie twin.

So I worry that Carroll is simply changing the subject. Sure, behavioural dispositions and internal cognitive states (of the sort that are transparently shared by zombies) are "real things". Who would ever deny it? But redefining our mentalistic vocabulary to talk about these (Dennettian patterns in) physical phenomena is no more philosophically productive than "proving" theism by redefining 'God' to mean love.

This diagnosis of the debate leads to a rather different dialogue than that which Carroll imagines:

P: What I’m suggesting is that the statement “I have a feeling ...” is part of an emergent way of talking about those signals appearing in your brain. There is one way of talking that speaks in a vocabulary of neurons and synapses and so forth, and another way that speaks of people and their experiences. And there is a map between these ways: When the neurons do a certain thing, the person feels a certain way. And that’s all there is.

M: Except that it’s manifestly not all there is! Because if it were, I wouldn’t have any conscious experiences at all. Atoms don’t have experiences. You can give a functional explanation of what’s going on, which will correctly account for how I actually behave, but such an explanation will always leave out the subjective aspect.

P: Why? I’m not “leaving out” the subjective aspect, I’m suggesting that all of this talk of our inner experiences is a u[...]
2016-06-10T21:35:32.817+01:00Compare five seriously bad things:

(1) Unjust discrimination along the lines of racism, sexism, etc., in Western countries.
(2) War and terrorism.
(3) Global poverty.
(4) Animal suffering (from factory farming).
(5) Global catastrophic (i.e. civilization-ending) risks.

Just how bad is each of these, in the world as we find it today? If you could prevent just one of them, which would it be? (What would your rank ordering be if you weren't sure how many philanthropic wishes the genie was going to give you?)

It can be hard to know where to start with this question of moral prioritization. But one way to get a grip on the question is to consider how great a cost it would take (in the constant metric of, say, human lives lost) to balance out the gains of protecting against these various evils.

To avoid deontological qualms, do not ask yourself how many people you'd be willing to kill to achieve world peace (or whatever). Maybe you would not be willing to kill anyone. Instead ask: if a natural disaster struck a foreign country, at the same time as one of the above evils was magically banished (for a century, say), how bad could the loss of lives from the disaster be while still making the day seem to you one that was, all things considered, more good for the world than bad?

My above presentation lists the evils in descending order of (how it seems to me) roughly how much public attention each cause receives, e.g. on social media. (Strikingly, this seems more or less precisely the opposite of what I think the correct ranking would be in terms of actual importance.)

To consider the magnitude of each problem in turn:

(1) Unjust discrimination: A few individuals suffer greatly (or are even killed) due to this, and far greater numbers suffer more moderate harms and indignities.
It's extraordinarily difficult to attempt any sort of estimate of the total magnitude of harms here, but perhaps something in the ballpark of 500 - 1500 lives per year would be a reasonable estimate? If so, it would take around 100k deaths to balance out the gain of a century without such discrimination. (Note: this would plausibly be at least an order of magnitude greater if we were to include non-Western countries, due to how extremely badly women are treated in many parts of the world.)

(2) War, in recent decades, seems to kill around 50k - 100k people per year, and has been in general decline since the end of WWII. It also causes significant non-fatal casualties and other (e.g. economic) forms of disruption. (Terrorism, by contrast, is fairly trivial, except insofar as it affects political behaviour, but I'll bracket such indirect effects here.) So I'm going to guess that a century without war or terrorism would be worth around ten million deaths.

(3) Global poverty is obviously hugely harmful. Roughly 10% of the world's population lives in extreme poverty (less than $1.90 per day), with another 2 billion or so living on less than $3.10 per day. I assume this makes a huge difference to one's quality of life (which I take to not just reduce to momentary felt happiness, but also self-actualization through meaningful projects, etc.). It also causes many millions of preventable deaths each year, not to mention other forms of ill health (e.g. intestinal parasites, blindness due to malnutrition, etc.).

If global poverty were entirely abolished -- with everyone in the world attaining, say, the (purchasing power adjusted) level of the current U.S. poverty line -- this would vastly improve many billions of lives. It would plausibly take more than a billion deaths (or perhaps more, e.g. 50 million deaths p[...]
2016-04-20T17:43:18.604+01:00It can sometimes be difficult to discern precisely what's in dispute between Effective Altruists and their (leftist) critics. This is perhaps in part due to EA's being such a big tent that objecting to one proposal or proponent is not necessarily an objection to EA itself. To clarify the latter, I see Effective Altruism as a matter of two core commitments:

(1) The "Altruism" bit: A commitment to making the world a better place -- including a willingness to expend some non-trivial proportion of one's own resources to this end.

(2) The "Effective" bit: A commitment to using these resources as effectively and efficiently as possible (based on the best available evidence, analysis, etc.).

And I guess there's a further 'movement-building' element that is perhaps common to all movements:

(*) The belief that others should do likewise.

Now, these core commitments seem pretty innocuous to me, so I'm always a bit baffled when I see people objecting to EA as such. (Why would anyone be against making the world a better place?)

Of course, there's plenty of room for internal disagreement about what is actually most effective. EAs vary in whether they prioritize (e.g.) traditional global poverty charities, (farm) animal welfare, meta-charities, global catastrophic risk mitigation, or public policy / lobbying. The first of these cause areas, with its robust supporting evidence, is emphasized in materials directed at popular audiences, for obvious pragmatic reasons (e.g. the prominence of aid skeptics, and the popular tendency to dismiss anything that sounds too "weird").
This has unfortunately led some academic critics to assume that EA is all about short-term thinking, which really couldn't be more wrong.

One of the internet's most, um, rhetorically strenuous critics of Effective Altruism is Brian Leiter, who in his most substantial post on the topic to date writes:

What if instead of picking worthy charities in accordance with Singer’s bourgeois moral philosophy, those with resources committed all of it to supporting radical political and economic reforms in powerful capitalist democracies like the U.S.; perhaps even committing their time and resources to helping other well-intentioned individuals with resources organize themselves collectively to do the same? Is it implausible that if all those in the thrall of Peter Singer gave all their money, and time, and effort, to challenging, through political activism or otherwise, the idea that human well-being should be hostage to acts of charity, then the well-being of human beings would be more likely to be maximized even from a utilitarian point of view? Do Singerites deny that systemic changes to the global capitalist system, including massive forced redistribution of resources from the idle rich to those in need, would not dwarf all the modest improvements in human well-being achieved by the kind of charitable acts Singer’s bourgeois moral philosophy commends? The question is not even seriously considered in the bourgeois moral philosophy of Singer. Although purporting to be concerned with consequences, like most utilitarians they set the evidential bar so high, and the temporal horizon so short, that the actual consequences of particular courses of action, including the valorization of charity over systemic change, are never really considered.

I don't see any objection to the core commitments of EA here, just a dispute about means.
Leiter may insist that even if his proposals don't violate the letter of EA, they nonetheless remain contrary to the ethos of the movement, and would (in practice) be dismissed out of hand in a way that reveals the movement's fundamental sho[...]
2016-04-11T15:50:58.773+01:00An interesting new paper forthcoming in Phil Studies, 'The pen, the dress, and the coat: a confusion in goodness' by Miles Tucker, argues against the (now widely accepted) Conditionalist thesis that intrinsic value and final value are separable.
2016-04-10T18:57:20.149+01:00A few people have asked for my EA syllabus from last term, so I thought I'd share it here with some general reflections.

It was a fun class to teach, but I'd do things a bit differently the next time around. A big one is just the nature of the teaching: This one was organized as a very "student-led" module, all seminar discussions and no lectures. While the students really enjoyed the discussions, they seemed a bit complacent in places (esp. regarding their dismissals of expected value / global catastrophic risks and of the significance of non-human animal interests), where in a lecture I might have been better able to develop these challenges in greater depth.

Anyway, here is the syllabus for the 9-week class, using MacAskill's Doing Good Better as the main textbook, with some supplementary readings...

1. Introducing 'Effective Altruism'
* MacAskill, introduction + chapters 1 & 2
Key Questions: Are QALYs a useful measure? Is it better to make hard trade-offs in cause selection (engaging in philanthropic 'triage') to maximize the good done, or to select causes on some other basis such as flipping coins or emotional resonance?

2. Global Poverty
* MacAskill, chapter 3
* Singer, ‘Famine, Affluence, and Morality’
Key Questions: How much good does (the best) aid do? Are we morally required to donate more (and if so, how much more)?
[In future I think I'd move MacAskill's chapter 3 (and the associated key question) into the first week (it's very easy reading), and add some responses to Singer (perhaps Miller's 'Beneficence, Duty and Distance') in here instead, for greater philosophical depth.]

3. Difference-Making and Expected Value
* MacAskill, chapters 4 - 6
Key Questions: Should we be guided by average or marginal utility? Is 'expected value' reasoning the right way to take low-probability outcomes into consideration? Is it important to be the direct cause of a benefit, or just to (even indirectly) maximize the total amount of good done?

4.
Evaluating Charities / Catastrophic Risk
* MacAskill, chapter 7
* Bostrom, 'Astronomical Waste'
* Karnofsky, 'Why We Can’t Take Expected Value Estimates Literally (Even When They’re Unbiased)'
Key Questions: Is there anything to be said for Charity Navigator-style "financial metrics" (overhead, CEO pay, etc.) as opposed to GiveWell-style impact analysis? To what degree should a lack of "robustness" lead us to discount a cost-effectiveness estimate? Should we accept the expected-value argument for prioritizing global catastrophic risk mitigation?
[Many of the students wouldn't take Bostrom's paper seriously. In future I might try replacing it with the first chapter of Nick Beckstead's dissertation, 'On the Overwhelming Importance of Shaping the Far Future'.]

5. Ethical Consumerism and Animal Welfare
* MacAskill, chapter 8
* Norcross, 'Puppies, Pigs, and People'
Key Questions: Assuming that sweatshop jobs are a step up from the alternatives, how should we weigh the benefit they provide vs worries about complicity in exploitation? Is there anything wrong with carbon offsetting? Are other forms of moral offsetting (e.g. meat offsetting, murder offsetting) relevantly similar? How much weight should we give to the interests of non-human animals?

6. Career Choice / Immigration
* MacAskill, chapter 9
* Clemens, 'Economics and Emigration: Trillion-Dollar Bills on the Sidewalk?'
* [Background reading:] Fine, 'The Ethics of Immigration'
Key Questions: How should you choose what career to pursue? Is "earning to give" better than working in the social sector? Do high-paying corporate jobs tend t[...]
2016-03-06T14:49:10.591+00:00This seems a lamentably common way of thinking:

[Chief Executive of Oxfam GB] Goldring says it would be wrong to apply the EA philosophy to all of Oxfam's programmes because it could mean excluding people who most need the charity's help. For a certain cost, the charity might enable only a few children to go to school in a country such as South Sudan, where the barriers to school attendance are high, he says; but that does not mean it should work only in countries where the cost of schooling is cheaper, such as Bangladesh, because that would abandon the South Sudanese children.

Fuzzy group-level thinking allows one to neglect real tradeoffs, and to pretend that one is somehow helping everyone by helping each group a little bit. But this is obviously not true. If there are more Bangladeshi children in need of education than your current budget can provide for, then by spending the rest of your budget on educating a few kids in South Sudan, you are abandoning a greater number of Bangladeshi children.

If we don't have the resources to help everyone, then (inevitably) some people will not be helped. To put it more emotively, you could say that we are "abandoning" them. That's a good reason to try to increase our philanthropic budgets. It is not any sort of reason at all to spend one's budget inefficiently, leading to the philanthropic "abandonment" of even more children.

Goldring's reasoning is 'collectivist' in the bad sense: treating groups rather than individual persons as the basic unit of moral consideration. That you have already helped some Bangladeshi children is cold comfort to the other Bangladeshi children you have spurned for the sake of instead helping a smaller number of South Sudanese. They would seem to have a legitimate complaint against you: "Why do you discount my interests just because I share a nation with other individuals that you have helped?
You have not helped me, and I need help just as much as those you chose to prioritize in South Sudan. Since you chose to help a smaller number of South Sudanese children when you could have instead helped a greater number of children in my community, you are effectively counting my interests for less. That is disrespectful and morally wrong."

By contrast, if you focus your philanthropic resources on providing as much good as possible, no-one has any legitimate complaint. You may imagine a South Sudanese child asking, "Why did you not help me? Just because it's more expensive to provide schooling in my country does not make my educational needs any less morally important than anyone else's!" To which the obvious answer is, "Indeed, I give equal weight to the interests of all, including yourself; but that is precisely why I must prioritize the educating of a larger group of individuals over a smaller group, if my resources only allow for one or the other. If we could fund your education without thereby taking away the funding for multiple other people's education then of course we would! But it would not be fair on those others to deprive several of them of education in order to educate just you. I count your education as being equally as important as the education of any other one individual. If I can then educate a second individual as well for the same amount of resources, then that is what treating each person's interests equally requires me to choose."

We may vividly demonstrate the irrationality of the collectivist's thinking by mentally subdividing the group of Bangladeshi children into two groups. Call 'B' the [...]
2016-03-04T13:40:15.738+00:00I've been trying to work out what I think the most basic reason to reject naturalism (about mind and morality) is. Sometimes it's suggested that normativity is just "too different" from matter to be reducible to it. (Enoch and Parfit both say things along these lines.) But that seems a fairly weak reason: plants and stars seem very different from atoms, after all, but that doesn't stop them from being wholly reducible to atoms. Granted, mind and morality are even more different, being non-concrete and all, but still. I think the non-naturalist can do better.

So, a better intuitive basis for rejecting naturalism, it seems to me, is that it can't accommodate the datum that debates about the distribution of mental or moral properties are substantive. Imagine two people disagreeing about precisely which collection of atoms constitutes the Sun. (There's overwhelming overlap between the two proposals; they just differ slightly in where they draw the boundaries -- whether or not to include a particular borderline atom, say.) It's clear that this is a merely terminological disagreement: they diverge in whether they take the word 'Sun' to pick out the minimal collection S, or the one-atom-larger collection S*, but it's not as though there's some further issue at stake here about which they remain ignorant -- which collection of atoms has some putative special further property of really being the Sun. There is no such further property.

Disagreements about the distribution of minds and moral properties are not like this. But if naturalists were right, they should be. As I wrote in an old post on 'non-physical questions':

The question whether my cyborg twin is conscious or not is surely a substantive question: I'm picking out a distinctive mental property, and asking whether he has it. Now, the problem for physicalists is that they can't really make sense of this.
They can ask the semantic question whether the word 'consciousness' picks out functional property P1 or biological property P2. But given that we already know all the physical properties of my cyborg twin (say he has P1 but not P2), there's no substantive matter of fact left open for us to wonder about if physicalism is true. It becomes mere semantics.

In other words, in order for it to be a substantive fact that consciousness tracks (say) functional rather than biological properties, there needs to be a further property (distinct from the functional and biological properties) for the two views to both be arguing about. Otherwise they're just arguing about a word. But it's obvious that disputes about the boundaries of consciousness are not just disputes about the word (in stark contrast to disputes about the boundaries of the Sun).

Similarly in metaethics:

The whole problem for the naturalist is that they have no basis for claiming that any particular one of the competing, internally coherent moral theories is the one true moral theory. After all, given the natural (non-moral) parity between us and our Moral Twin Earth counterparts, what in the two worlds can the naturalist appeal to as the basis for a moral or rational asymmetry between us?

If moral boundaries are not just to be determined by arbitrary semantics, it must be that there's a metaphysical difference between the truly good properties and the pretenders, breaking the symmetry. There must be a further property of goodness for the rival views to be arguing about.

I think this is the core intuition that's really behind the Open Question Argument, Parf[...]
2016-02-29T21:35:45.057+00:00I had always assumed that only ultimate ends, or telic / final / non-instrumental desires, could be intrinsically irrational. (Think Future Tuesday Indifference.) Instrumental desires, by contrast, may happen to be irrational if based on a false and irrational means-end belief, but then the problem is extrinsic to the desire itself -- the problem instead lies with the false belief, and one could presumably imagine circumstances in which the means-end belief would be true, thus making the instrumental desire in question a perfectly reasonable way of achieving one's goals.
2016-02-26T18:43:49.981+00:00Inspired by the ignorance of Bill Nye the science guy...

Some things I wish everyone knew about philosophy:

(1) Descartes' "I think, therefore I am" does not imply that your existence depends upon your thinking. It is merely intended to show that a thinker cannot coherently doubt their own existence.

(2) People are often quite bad at reasoning. Logic, a component of philosophy, can help with this. (Supplementing this with an understanding of statistical and probabilistic reasoning is still important, though!)

(3) When philosophers raise outlandish-seeming questions ("What is your basis for expecting the sun to rise tomorrow? Or for expecting the future to resemble the past?") it is generally not because we think them unanswerable, or that we think the outlandish possibilities being hinted at are credible, but rather that consideration of the question can give rise to important insights, e.g. into the nature of our everyday knowledge. So to mock philosophers for their thought experiments is as silly as mocking Einstein for (supposedly) thinking that you could ride on a ray of light. It merely reveals that you have missed the point.

(4) Many important intellectual questions (including, e.g., fundamental moral and epistemological issues) do not concern empirical happenstance, and so cannot be answered by the methods of science. Different methods are needed if we are to make any progress in thinking about them. This sort of thinking is what philosophy is all about. If you dismiss it, you are effectively giving up on rational thought about non-empirical matters.

(5) Philosophy is inescapable, in the sense that wholesale dismissals of it tend to be self-defeating. If you dismiss it as worthless, you’re making a claim in ethics or value theory, which are sub-fields of philosophy. If you think it’s an unreliable source of knowledge, that’s epistemology.
Either way, you must engage in philosophical reasoning and argument in order to (non-dogmatically) assess the value of philosophy.

(6) While philosophy is difficult, and often controversial, it does not follow that it is "all just a matter of opinion". Some opinions are more reasonable, or better grounded, than others. Even if it turns out that there are multiple internally-coherent ways to think about the world, given the evidence available to us, our initial thoughts on a topic tend to be so riddled with implicit inconsistencies that philosophical thinking can allow us, individually, to make a great deal of progress in improving the coherence of our world views.

(7) Philosophy, as a collective enterprise and academic discipline, makes progress by identifying and resolving common inconsistencies, clarifying what the implications of various positions really are, or which claims do (or don't) rationally support each other. [...]
2016-02-26T13:58:43.617+00:00In my Effective Altruism class this past week I ran a "giving game", getting the students, in small groups, to discuss & decide where to donate £100 of my money. It was quite interesting.

One potential downside of requiring the decisions to be made by consensus in small groups (of three or four students each) was that this ended up creating a bit of a bias towards conservative / "safe" choices from GiveWell's top charities, rather than more speculative (but potentially high-upside) options about which there were disagreements within the group. For example, one group had members initially supporting animal welfare, climate change mitigation, and criminal justice reform, but since they couldn't resolve these disagreements in the hour allotted for discussion and debate, they ended up agreeing to fund a deworming charity instead. Another student favoured existential risk reduction, but again could not reach consensus on this within their group.

If I do this again in future years, I might try to think of an alternative way of implementing the giving game to allow the students a bit more free rein. E.g., one option would be to give each student £50 (or whatever) that they can allocate individually, perhaps with the additional requirement that they must find / convince at least one other student in the class to share their choice of charity (to encourage argument and discussion). Discussion could then proceed in small groups of rotating membership (rather than having fixed groups as we did this year).
Something I'll think about, anyway.

As for the verdicts, following my students' directions, I have just donated:
* £200 to the Against Malaria Foundation,
* £200 to GiveDirectly,
* £100 to each of SCI and Deworm the World,
* £100 to Project Healthy Children,
* £100 to Cool Earth,
* £100 to Animal Equality, and
* £100 to Basic Needs (an international mental health charity).

Most of these donations got a further 25% boost from UK Gift Aid; for UK taxpayers, donating via the GWWC Trust is very helpful in this respect!

What charities do you consider most effective? Comments / suggestions welcome! (I'm quite partial to meta-charities, myself...) [...]
2016-02-26T13:25:16.887+00:00It's been almost a decade since my evil twin Ricardo last posted on this blog. I invite him back today to share a horribly misguided speech that he recently gave as part of a debate in St Andrews on the topic 'Charity begins at home'. (They needed someone to defend that awful claim, and since I wasn't entirely comfortable doing it myself, I sent along my evil twin to do the job. Here's what he came up with...)

Charity begins at home… but that’s not to say it ends there!

So let me begin by clarifying what is and is not at stake in this debate. It’s common ground that we should do more to help others (and the global poor in particular). Our core thesis is just that we should not focus exclusively on the global poor, neglecting significant needs closer to home.

To establish this thesis, consider the following scenario: Upon learning that the Against Malaria Foundation can save a life for around £2000, you go to the bank and withdraw £4000, intending to save two lives with it. But on the way to the post office, you come across a young child drowning in a shallow pond. You are the only person around, and her only chance at survival is if you jump in immediately to rescue her -- ruining the money in your pocket. What should you do? Obviously, you should save the child. This remains true even though the alternative -- were you willing to watch her drown in order to keep your money safe -- would have led to your later saving two lives instead. The rule of rescue bars us from making such tradeoffs. When we see great needs, right before our eyes, we morally must act. We can’t just sit back and coldly calculate for the greater good. If you share this moral judgment, then you, too, agree that charity begins at home. We cannot neglect the needs around us for the sake of some distant greater good.

What is the explanation for this? One way to get at this is to ask where one goes wrong in acting otherwise.
Suppose your neighbour would watch the child drown, for the greater good. What would you think of this, and why? Well, most naturally, I think you would worry that your neighbour is a disturbingly callous person. What sort of person can just sit by and watch as a young child drowns right before his eyes? He would seem a kind of moral monster. If his reason is that he wants to save two lives instead, then that adds a complication. He isn’t obviously ill-meaning; he wants to do what’s best. But it’s still monstrous in the sense of being inhuman -- perhaps robotic, in this case, is the better description.

We think that part of what it takes to be a morally decent person is to be sensitive to the needs and interests of those around you. To be willing to watch a child drown displays a special kind of moral insensitivity, even if it’s done for putatively moral reasons. Such an agent, we feel, fails to be sensitive to human suffering in the right way -- in the emotionally engaged kind of way that we think a properly sympathetic, decent human being ought to be. Of course there remains a sense in which your robotic neighbour wants to minimize suffering, and that’s certainly a good aim as far as it goes. But one can’t help but feel that your robotic neighbour here is being more moved by math than by sympathy; they have an over-intellectualized concern for humanity in the abstract, but seem troublingly l[...]
2016-02-01T17:01:29.416+00:00

I'm currently teaching a class on "Effective Altruism" (vaguely related to this old idea, but based around MacAskill's new book). One of the most interesting and surprising (to me) results so far is that most students really don't accept the idea of expected value. The vast majority of students would prefer to save 1000 lives for sure than to have a 10% chance of saving a million lives. This, even though the latter choice has 100 times the expected value.

One common sentiment seems to be that a 90% chance of doing no good at all is just too overwhelming, no matter how high the potential upside (in the remaining 10% chance), when the alternative is a sure thing to save some lives. It may seem to neglect the "immense value of human life" to let the thousand die in order to choose an option that will in all likelihood save no-one at all. (Some explicitly assimilate the low chance of success to a zero chance: "It's practically as though there's no chance at all, and you're just letting people die for no reason!")

Another thought in the background here seems to be that there's something especially bad about doing no good. The perceived gap in value between 0 and 1000 lives saved is not seen as nine hundred and ninety-nine times smaller than the gap in value between 1000 and one million lives saved, as it presumably should be if we value all lives equally. (Indeed, for some the former gap may be perceived as being of greater moral significance.)

Interestingly, people's intuitions tend to shift when the case is redescribed in a way that emphasizes the opportunity cost of the first option: (i) letting exactly 999,000 of the 1,000,000 people die, or (ii) taking a 10% chance to save all one million. Many switch to preferring the second option when thus described. (Much puzzlement ensues when I point out that this is the same case they previously considered, just described in different words!
In seminar groups where time permitted, this led to some interesting discussion of which, if either, description should be considered "more accurate", or which of their conflicting intuitions they should place more trust in.) Makes me think that Kahneman and Tversky should be added to our standard ethics curriculum!

One way to make the case for expected value is to imagine the long-run effects of iterating such choices, e.g. every day. Those who repeatedly choose option 1 will save 10,000 people every ten days, whereas the option-2 folk can expect to save 1 million every ten days on average (though of course the chances don't guarantee this). Most agree that the second option is better in the iterated choice situation.

There are a couple of ways to argue from this intermediate claim to the conclusion that expected value should guide our one-off decisions. One is to suggest that each of the individual choices is equally choice-worthy, and that -- from an impartial moral perspective -- the intrinsic choice-worthiness of an option should not depend on external factors like whether one gets to make similar choices again in future. In that case, we could reach the conclusion that each individual option-2 choice is 1/10th as choice-worthy as the collection of ten such choices, which is much more choice-worthy than an option-1 choice.

The second route would be to suggest that even if one doesn't get to make this particular choi[...]
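The arithmetic behind the one-off and iterated comparisons can be checked in a few lines of Python. (This is my own illustrative sketch, not part of the original post; the `simulate` function and its seed are hypothetical conveniences for making the long-run point concrete.)

```python
import random

# One-off comparison: 1000 lives for sure vs. a 10% chance of 1,000,000 lives.
ev_sure = 1000
ev_risky = 0.10 * 1_000_000  # expected value: 100,000 lives
# The risky option has 100 times the expected value of the sure option.

def simulate(days, seed=1):
    """Repeat each choice once per day for `days` days; tally lives saved."""
    rng = random.Random(seed)
    sure_total = 1000 * days
    # Each day the risky option saves 1,000,000 with probability 0.10, else 0.
    risky_total = sum(1_000_000 for _ in range(days) if rng.random() < 0.10)
    return sure_total, risky_total

sure_total, risky_total = simulate(10_000)
# Over a long run the risky policy reliably saves far more lives in total,
# even though on any single day it most likely saves no-one at all.
```

Note that the simulation only vindicates the risky option as a repeated policy; the philosophical work in the post is arguing from that iterated verdict back to the one-off case.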
2015-12-29T17:19:10.548+00:00

(Past annual reviews: 2014, 2013, 2012, 2011, 2010, 2009, 2008, 2007, 2006, 2005, and 2004.)

Metaethics & Epistemology

* Information and Parfit's Fact-Stating Argument - clarifying what I take to be one of the strongest (yet generally neglected) arguments against metaethical naturalism.
* Cancelling Schroeder's 'Implicature' Response to Parfit's Triviality Objection - it's really not a very compelling response.
* Are "Internal Reasons" Normative? I worry that Williams can't coherently think so.
* Normative vs Metaethical Wrong-making - explains what I think is wrong with a recently-published "moral argument against moral realism".
* Self-Undermining Skepticisms - argues that Street-style attempts to "debunk" normative realism will end up being self-defeating.
* Three options in the epistemology of philosophy - compares skepticism, epistemic conservatism, and objective warrant views. (My sympathies lie with the last.)
* Judgmentalism vs Non-commitalism - would an ideally rational agent ever suspend belief?

Moral Theory

* Deliberative Openness and the Actualism-Possibilism Dispute - arguing that agents should treat as deliberatively "open" (or within their control) just that which counterfactually depends upon the outcome of their present deliberations. If your best present efforts can do nothing to influence your future decisions, then this "future self" should, for deliberative purposes, be treated as a distinct agent -- not someone who you can rely upon to carry out your present plans and intentions.
So, plan accordingly.
* Three ways of rejecting moral intuitions -- some more legitimate than others.
* Thoughts on 'Non-Consequentialism Demystified' -- is it a problem for a theory if it holds that reasonable choices and reasonable preferences diverge?
* Puzzles re: Kant on the Good Will
* Moral Theories and Fittingness Implications -- why the former have the latter.
* Demandingness and Opt-in vs Opt-out sacrifices
* Wronging for Utilitarians -- not such a puzzle as some seem to think.
* Criterial vs Ground-level Moral Explanations -- a distinction that can help explain why various (e.g. motivational) objections to utilitarianism fail. See also Rossian Utilitarianism.
* 'Objective Menu' theories of wellbeing -- the old 'list' metaphor is misleading.
* Questioning Moral Equality -- is there any interesting sense in which all people are truly "moral equals"?
* Valuing Unnecessary Causal Contributions -- why you shouldn't.
* The Best Case for Voting -- invoking cooperative, rather than purely individualistic, utilitarianism.

Applied Ethics

* My paper 'Against "Saving Lives": Equal Concern and Differential Impact' was published in Bioethics.
* Is it OK to have kids? -- my article in Aeon Magazine.
* Procreative Externalities -- are additional people good or bad for those already here? (And more.)
* A Distant Realm: Rethinking the Procreative Asymmetry
* Good Lives and Un/conditional Value -- if good lives are not intrinsically good, it's hard to avoid the bleak conclusion that it'd be better had there never been life at all.
* GOP Closes Doors to Newborns -- satire, but distressing how little needs tweaking when you start from real quotes from politicians talking about Syrian refugees.
* Waiving Rights and "Second-class Citizens" -- there's something odd about objecting [...]