
Philosophy, et cetera



Providing the questions for all of life's answers.



Updated: 2016-12-05T11:34:11.688+00:00

 



Possibly Wrong Moral Theories

2016-09-21T15:57:45.342+01:00

In 'The Normative Irrelevance of the Actual', I explained why it doesn't matter whether a putative counterexample to a moral theory is actual or hypothetical in nature, on the grounds that first-order moral theories can be understood as (implying) a whole raft of conditionals from possible non-moral circumstances to moral verdicts.  But there's another, perhaps more intuitive, way to make the case, based on the idea that some counterfactually superior moral theory should be superior, simpliciter.



Consider Slote's sentimentalism.  According to Slote (2007, 31), wrong acts are those that "reflect or exhibit or express an absence (or lack) of fully developed empathic concern for (or caring about) others." The relevant kind of empathic concern is not some kind of a priori theoretical posit, such as universal love, but rather is tied to our actual natural dispositions to favour those near and dear. (This is crucial to secure his desired anti-utilitarian verdicts.)  But this raises the obvious worry: what if our "natural" empathic dispositions turn out to have racist or otherwise clearly immoral built-in tendencies?

Slote responds: “The ethics of empathy may here be hostage to future biological and psychological research, but I don’t think that takes away from its promise as a way of understanding and justifying (a certain view of) morality.” (p.36)

But, I suggest, if we know that there is a possible situation in which sentimentalism is not the correct moral theory, then we can ask ourselves what the correct moral theory in that situation would be. And once equipped with that correct possible moral theory—one that provides an independent justification for rejecting racist or otherwise immoral sentiments even when sentimentalism cannot—then we may wonder what we need sentimentalism for. What is stopping that counterfactually correct moral theory from also being the actually correct moral theory?

Perhaps there are some moral theories that give plausible verdicts only in a certain counterfactual world, and are no longer plausible when we apply them to our world (or others).  So, fine, discard those clearly inadequate theories.  Still, given the entirety of logical space to choose from, we should be able to find a theory that yields the desired results in our world as well as in the counterfactual world where it is superior to sentimentalism (or whatever merely contingently plausible theory we are considering).

So, if a moral theory is merely contingently plausible, we can find a better option out there.  Being possibly wrong, in this sense, suffices to establish that the moral theory is actually wrong.



Attitudinal Pleasure and Normative Stance-Independence

2016-09-20T17:26:04.127+01:00

David Sobel has an interesting post up at the revamped PEA Soup blog on 'Normative Stance Independence and Pleasure'.  He suggests that if pleasure is best understood in attitudinal terms (as per Parfit's hedonic likings) then this undermines Normative Stance Independence, the view that "normative facts are not made true by anyone’s conative or cognitive stance" or "by virtue of their ratification from within any given actual or hypothetical perspective."

But does it?  The distinction between stance-dependence and -independence is a slippery beast.  Even if pleasure could be said to involve "taking a stance" towards a base sensation by liking it, it's not so clear that the stance is what does the heavy lifting in explaining why pleasure is good.  More plausibly, I think, pleasure is good just because of how it feels, objectively speaking.  And this normative explanation remains untouched, it seems to me, even if the phenomenology of pleasure turns out to be inextricably tied up with the attitude of liking.  It could still be the objective phenomenology, rather than the "stance" per se, that matters.

(In support of this point, I take it that if knowledge, for example, has intrinsic value then this is uncontroversially objective or 'stance-independent' in nature, regardless of the fact that knowledge is (or involves) a cognitive state, and so might be considered part of the agent's "stance" in some sense.  So, why not the same for pleasure?)



Pets and Slavery

2016-09-12T22:27:10.148+01:00

In 'The Case Against Pets', Rutgers law professors Francione and Charlton argue that "domestication and pet ownership [...] violate the fundamental rights of animals."  This is, I think, a deeply absurd position.

A large part of their essay is just concerned with arguing against treating pets as property.  I think it's pretty clear that the ordinary social meaning of having a pet already rules this out.  One may carve up one's property for fun; if someone were to carve up their pet, we would (rightly) want them to be locked up for animal cruelty.  If the legal system failed to do this, they would certainly be shunned by the rest of society, who would be deeply horrified by their actions.

It's an interesting question whether non-rational beings can have a right to life in addition to a right against cruel treatment.  If so, the implications would be quite radical, even aside from the complete abolition of the meat industry.  Society would presumably be obliged to support animal shelters to an extent that removes the current need to kill many perfectly healthy animals due to overcrowding.  I think that's a plausible enough position, though there are counterarguments to consider.

Where the authors go off the rails is when they suggest that "domestication itself raises serious moral issues irrespective of how the non-humans involved are treated" -- such that pet ownership would still be wrong even if animal rights against cruel treatment and convenience-killing were secured.  Why do they think this?  What further rights are being violated, merely by caring for your pet?  Here is what F&C write:

Domesticated animals are completely dependent on humans, who control every aspect of their lives. Unlike human children, who will one day become autonomous, non-humans never will. That is the entire point of domestication – we want domesticated animals to depend on us. [...] We might make them happy in one sense, but the relationship can never be ‘natural’ or ‘normal’. They do not belong in our world, irrespective of how well we treat them. This is more or less true of all domesticated non-humans. They are perpetually dependent on us. We control their lives forever. They truly are ‘animal slaves’. Some of us might be benevolent masters, but we really can’t be anything more than that.

"Slavery is bad, X is like slavery, therefore X is bad" is superficial reasoning.  Much depends on whether X shares the relevant features or preconditions that explain why slavery is so bad.

I take the basic problem with (human) slavery to be that it is so drastically contrary to the interests of the enslaved.  Not only were slaves historically mistreated in all sorts of ways, but even an imaginary "happy slave" seems in a tragic position insofar as their capacity for rational autonomy -- and hence for a fully flourishing human life -- is being stunted rather than nourished.  Rationally autonomous beings have an interest in developing and preserving their autonomy, and when this interest is violated their life is (in this respect) worse as a result.

This crucial feature is obviously lacking in non-rational animals.  So long as we do not mistreat them (whether by outright cruelty or mere neglect, e.g. failure to provide a sufficiently stimulating environment), domestic animals' chances at a fully flourishing life are not impaired by the mere fact of our control over them.  They have no interest in being free of our control, because they have no capacity for rational autonomy that would be served by such "freedom".  Life in the wild is often nasty, brutish and short.  We can provide much better lives for our companion animals.

Perhaps the simplest way to refute F&C's argument is to note that moral rights must track interests.  It makes no sense to posit a right that serves no possible interest.  F&C acknowledge that domestic animals have no interest that wo[...]



Do we have Vague Projects?

2016-08-11T13:02:10.372+01:00

Tenenbaum and Raffman (2012) claim that "most of our projects and ends are vague." (p.99)  But I'm not convinced that any plausibly are.  I've already discussed the self-torturer case, and how our interest in avoiding pain is not vague but merely graded.  I think similar things can be said of other putative "vague" projects.

T&R's central example of a vague project is writing a book:

Suppose you are writing a book. The success of your project is vague along many dimensions. What counts as a sufficiently good book is vague, what counts as an acceptable length of time to complete it is vague, and so on.

But it strikes me as strange for one's goal to be to reach some vague level of sufficiency.  When I imagine writing a book, my preferences here are graded: each incremental improvement in quality is pro tanto desirable; each reduction in time spent is also pro tanto desirable.  These two goals seem like they should be able to be traded off against each other -- perhaps precisely, or (if they are not perfectly commensurable goods) then perhaps not, but this sort of rough incomparability between two goods is (I take it) not the same as either good itself being vague.

I could imagine a cynical person who really doesn't care to improve the quality of their book above a sufficient level.  Perhaps they just want it to be of sufficient quality to earn a promotion, or some other positive social appraisal.  But these desired consequences are even more clearly not vague.

Similar things can be said of the standard example of baldness.  I trust that nobody (sane) actually has a fundamental desire not to fall under the extension of the English-language predicate 'bald'.  What they more plausibly have is a graded desire that roughly maps onto what is socially recognized as baldness.  For example, perhaps they desire not to have their appearance negatively appraised on the basis of hair loss.  (Or perhaps even just not to have other people think of them as bald.)  But of course there's nothing vague about that: people either appraise you negatively or they do not.  Such appraisals are graded, however: the first noticeable signs of a receding hairline may be expected to elicit a less severe appraisal than a large bald patch. (Or so we might imagine the vain man to assume.)

Or consider a case from Elson's (2015) reply:

You may wish for a restful night’s sleep, but to stay up as late as possible as is consistent with that. Since restful is vague, one minute of sleep apparently couldn’t make the difference between a restful and a nonrestful night, and you ought to stay up for another minute. But foreseeably, if you keep thinking that way, you will stay up all night. (p.474)

As with the book case, this strikes me as simply involving a trade-off between two graded (non-vague) ends.  To speak of a "wish for a restful night's sleep" is surely just a rough shorthand for what is really a graded desire, for a night's sleep that is more restful rather than less so.  Perhaps there are some threshold effects in there, insofar as some lost minutes may have more noticeable effects than others on your state of mind the next day (and you can't know in advance exactly which minutes these are).  But it's clearly just false to assume that a minute's less sleep will always make no difference to what it is that you really want here (regardless of whether the term 'restful' still applies to your night's sleep -- there's clearly more to your interest in a restful night's sleep than just the binary question of whether it was restful or not).

Elson later cites Tuck's example of "a shepherd who wishes to build a cairn of stones [...] to guide him in the hills" (478).  And again, while it may be vague whether a certain collection of stones is enough to qualify as a 'cairn' or a 'heap', it's hard to make sense of anyone actually caring about this as such.  Insofar as the cair[...]



Self-Torturers without Diminishing Marginal Value

2016-08-08T16:45:09.677+01:00

My last post mentioned in passing that the puzzle of the self-torturer may be complicated by the fact that money has diminishing marginal value.  This can mean that a few increments (of pain for $$) may be worth taking even if a larger number of such increments, on average, are not.  So to make the underlying issues clearer, let us consider a case that does not involve money.



Suppose ST is equipped with a self-torturing device that functions as follows.  Once per day, he may (permanently) increase the dial by one notch, which will have two effects: (i) marginally increasing the level of chronic pain he feels for the rest of his life, and (ii) giving an immediate (but temporary) boost of euphoric pleasure.  Before it is permanently attached, ST is allowed to play around with the dial to become informed about what it is like at various levels.  He realizes that after 1000 increments, the burst of pleasure is fully cancelled out by the heightened level of chronic pain he would then be feeling.  So he definitely wants to stop before then. (We may assume that he will live on for several years after this point.)  Is it rational for ST to turn the dial at all?

Surely not.  Each increment imposes +x lifetime pain in return for a temporary boost of y pleasure. We may treat these as being of constant value (bracketing any slight differences in, e.g., the duration of ST's subsequent "lifetime" between the first day and the thousandth -- we could make it so that the pain only starts on the 1000th day if necessary).  And we know that it would be terrible for ST to endure 1000 increments.  That is, the disvalue of +1000x lifetime pain vastly outweighs the value of 1000 short bursts of y pleasure.  Since the intrinsic values here are (more or less) constant, it follows that the intrinsic disvalue of +x lifetime pain vastly outweighs the intrinsic value of a short burst of y pleasure.

So -- assuming that there are no extrinsic values in play (e.g. we're not to imagine that ST has never experienced euphoria, such that a single burst would add a distinctive new quality to his life, or anything like that) -- it follows that each individual increment of the self-torture device is not worth it.  It would be irrational for ST to turn it at all.  So there is clearly no great "puzzle" or "paradox" here.

Compare this result to the original puzzle involving money.  Since money has diminishing marginal value, it might be that (n times) $y is worth (n times) x pain (for some n < 1000) even if $1000y is not worth 1000x pain.  That contributes to the intuitive force of the "puzzle", insofar as at least early increments seem like they might be worth taking.  But it should be clear that merely adding a resource with diminishing marginal value can't really create a paradox here where there wasn't one previously.  There will still be some threshold point n where it is irrational (of net intrinsic disvalue) for ST to turn the dial a single notch more.
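To make this threshold claim concrete, here is a minimal numerical sketch of the money version (all values made up: a constant disvalue per pain increment, and a logarithmic -- hence diminishing marginal -- value for money; nothing here is meant to capture the "real" exchange rate between pain and cash). It simply searches for the first dial-turn whose net value goes negative:

```python
import math

PAIN_PER_TURN = 1.0   # assumed constant disvalue of each pain increment
PAYMENT = 10_000      # dollars per turn, as in the original puzzle

def money_value(wealth):
    # Assumed diminishing marginal value of money (logarithmic).
    return 100 * math.log(1 + wealth / PAYMENT)

def net_value_of_turn(n):
    # Net value of turning the dial from setting n to setting n+1.
    gain = money_value((n + 1) * PAYMENT) - money_value(n * PAYMENT)
    return gain - PAIN_PER_TURN

# Locate the threshold point: the first single notch not worth turning.
n = 0
while net_value_of_turn(n) > 0:
    n += 1
print(f"Increments 1 to {n} are worth taking; increment {n + 1} is not.")

# Sanity check: on these numbers, taking all 1000 increments is a net loss.
total = money_value(1000 * PAYMENT) - 1000 * PAIN_PER_TURN
print(f"Net value of taking all 1000 increments: {total:.0f}")
```

On these invented numbers the threshold falls at the hundredth turn; the point is only that, whatever the numbers, some such threshold must exist so long as the 1000-increment package is a net loss.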

So there is no great "puzzle" to the self-torturer.



Irrational Increments for the Self-Torturer

2016-08-08T16:46:11.997+01:00

Recall that the Self-Torturer (ST) gets $10,000 for each turn of a dial that permanently increases the pain he feels for the rest of his life by a negligible amount.  Each individual increment seems worth making, the thought goes, but 1000 increments would leave ST in intense agony, which no amount of money can compensate for.

It seems intuitively clear to me that ST would soon reach a point at which additional increments -- even considered in isolation -- are not worth it.

For example, if one hundred equal increments yielding 100x pain for $100y are collectively not worth it, then the badness of 100x pain outweighs the value of $100y.  On average, then, the harm of x outweighs the benefit of y, over the 100 increments. If the increments are all of equal net value, then each increment is of negative net value, and hence irrational to choose.  If instead we allow that the first few increments were worth the money (due perhaps to the diminishing marginal utility of money, or else the increasing marginal disutility of pain) then we can at least know that the one hundredth increment is not worth taking. After 99 increments, the value of $y is outweighed by the badness of x pain.  It would be incoherent to deny this while holding that the value of $100y is outweighed by the badness of 100x pain. So there's no real puzzle here.

But, strangely, Tenenbaum and Raffman, in 'Vague Projects and the Puzzle of the Self-Torturer', do deny this.  They instead affirm (p.98):

Nonsegmentation: When faced with a certain series of choices, the rational self-torturer must choose to stop turning the dial before the last setting; whereas in any isolated choice, she must (or at least may) choose to turn the dial.

Why do they think this?  They seem to assume that our interest in avoiding pain is vague and coarse-grained.  They refer to "our commitment to a project of leading a (relatively) pain free life" (p.107).  Since a negligible increase in pain cannot make a difference to whether or not we live a relatively pain-free life, no individual increment violates this interest of ours,* whereas each increment does serve our interest in greater wealth.  (If you're already in pain, I imagine that T&R might appeal to some other coarse-grained anti-pain interest, just with higher thresholds, such as that of leading a life that is not excessively inundated with pain.)

There are (at least) two obvious problems here.  One, noted by Luke Elson in his reply, is that the asterisked claim above is false.  In a sorites series like this, it is not universally true that each increment determinately makes no difference to whether the vague predicate applies.  Rather, on standard views of vagueness (e.g. supervaluationism), there will be a range in which it is indeterminate which particular increment violates the threshold, but determinate that some such increment does.  It is thus (determinately) false to claim that no increment violates our interest in leading a relatively pain-free life.

Secondly, and more fundamentally, it just seems absurd to me to think that our interests in avoiding pain are coarse-grained in this way.  What's so special about the borderline between a life that is "relatively" pain free and one that is not?  Suppose it takes 20 increments of pain to reach the borderline cases, which in turn extend for 5 increments before we reach the realm in which you determinately no longer qualify as leading a "relatively pain-free life".  Are we supposed to imagine that a rational person could prefer to increase from 0 to 19 points of pain than to increase from 19 to 25?  Are the latter 6 points intrinsically more significant because they happen to span the boundaries of the English language predicate "is a relatively pain-free life"?  C'mon.  Pain is pain.  It doesn't matter what we call it. [...]



The Instrumental Value of One Vote

2016-07-22T20:13:03.487+01:00

Over in this Leiter thread, some philosophers seem to be dismissing the instrumental value of voting (for Clinton over Trump) for misguided reasons:

(1) That a marginal vote is "astronomically unlikely to change the outcome."

This is not true,* at least for those who are able to vote in a swing state. According to Gelman, Silver and Edlin (p.325), the chance of a marginal vote altering the election outcome is as high as 1 in 10 million, depending on the state.  Given that the outcome will in turn affect hundreds of millions (or even billions) of people, voting for Clinton in a swing state arguably has significant expected value.

(2) That the system is not sensitive to a single vote, and anything close to even will be decided by the courts or the like.

The claim that insensitivity undermines marginal impact is generally fallacious.  Given that a large collection of votes together makes a difference, it is logically impossible for each individual addition to the collection to make no difference.  While it may be true that an objectively tied vote and an objective 1-vote victory would not be distinguished by the system, there must be some smallest and largest numbers of votes that would in fact trigger a recount or a court case (or whatever), in which case one of those numbers [specifically, whichever one is the difference between a straight victory and a court-delivered loss] provides the new threshold that matters for a marginal vote to make a decisive difference.  (See also the final page of this paper by Gelman et al.)

* = I've previously been led astray by Jason Brennan's model from p.19 of The Ethics of Voting, which really does yield astronomically small chances -- on the order of 10^-2650.  I thank Toby Ord and Carl Shulman for their corrections in this public Facebook thread.

In short, Brennan's mistake (and that of the past researchers he draws on) is to model voters as having a fixed non-50/50 propensity to favour a particular candidate over the other.  Even if the fixed propensity is just 50.5%, repeating the odds over 100+ million voters makes the result an astronomically certain victory for the favoured candidate (with a vanishingly small standard deviation from the expected result of their securing 50.5% of the total votes).  This is obviously not an accurate reflection of either our epistemic position prior to an election, or of any kind of objective probability distribution over the possible outcomes.  It's a bad model.  A better model would either model different voters as having different propensities [as per section 5 of this Gelman et al paper] or at least take on board our credences over a range of possible propensities (including 50/50) rather than stipulating that a particular non-50/50 propensity holds.

As Gelman once wrote in a comment on Brennan's blog:

[T]he claim of "10 to the −2,650th power" is indeed innumerate. This can be seen in many ways. For example, several presidential elections have been decided by less than a million votes, so a number of the order 1 in a million can't be too far off, for a voter in a swing state in a close national election. For another data point, in a sample of 20,000 congressional elections, a few were within 10 votes of being tied and about 500 were within 1000 votes of being tied. This suggests an average probability of a tie vote of about 1/80,000 in any randomly selected congressional election. It's hard to see how the probability of a tie could be of order 10^-5 for congressional elections and then become 10^-2650 for presidential elections.

Finally, even if you accept the fixed-propensity model (despite its being demonstrably wrong), my old post on the Best Case for Voting makes the case for co-operating as part of a group that (collectively) achieves its maximum marginal impact by having each (or most) of its members vote. [...]
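For the numerically inclined, here is a rough sketch of the contrast (this is not Gelman's actual model: the electorate size and the credence range over the propensity are arbitrary choices, purely for illustration). The fixed-propensity model drives the probability of an exactly tied vote to absurdly tiny values, whereas averaging over our uncertainty about the propensity lands in the 'one in a few million' ballpark:

```python
import math

N = 100_000_000  # assumed electorate size (even, so an exact tie is possible)

def log10_tie_prob(p):
    # log10 of P(exactly N/2 of N i.i.d. voters favour candidate A),
    # computed via log-gamma to avoid overflow in the binomial coefficient.
    log_mass = (math.lgamma(N + 1) - 2 * math.lgamma(N // 2 + 1)
                + (N // 2) * (math.log(p) + math.log(1 - p)))
    return log_mass / math.log(10)

# Brennan-style fixed propensity: every voter favours A with p = 0.505.
print(f"fixed p = 0.505: tie probability ~ 10^{log10_tie_prob(0.505):.0f}")

# Gelman-style alternative: spread our credence over a range of possible
# propensities (here uniform on [0.49, 0.51]) and average the tie probability.
lo, hi, steps = 0.49, 0.51, 4001
avg = sum(10 ** log10_tie_prob(lo + (hi - lo) * i / (steps - 1))
          for i in range(steps)) / steps
print(f"uncertain p in [0.49, 0.51]: tie probability ~ 1 in {1 / avg:,.0f}")
```

The punchline is that the astronomical figure is an artifact of stipulating a fixed propensity, not a fact about large electorates.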



The 2-D Argument Against Metaethical Naturalism

2016-06-25T21:01:22.017+01:00

A few years back I noted that 2-D semantics provides a straightforward refutation of synthetic metaethical naturalism (SEN): SEN implies that moral terms differ in their primary and secondary intensions; this is clearly false (moral terms are "semantically neutral", or exhibit 2-D symmetry, in that their application to a world does not vary depending on whether we consider it as actual or as counterfactual); and so SEN must be false.

As I've been developing this argument in my paper 'Moral Symmetry and Two Dimensional Semantics', it occurs to me that 2-D semantics enables an even broader argument against metaethical naturalism.

To address the Open Question Argument, naturalists are committed to a divergence between the intension of 'good' (and other moral terms) and its meaning or cognitive significance.  Such divergences are commonplace so far as the secondary intension is concerned: 'water' and 'H2O' have different meanings, despite picking out just the same stuff across all possible worlds (considered counterfactually).  The problem for the naturalist is that such divergences do not seem justified when it comes to primary intensions.  What a term picks out across worlds considered as actual seems very closely connected to the cognitive significance or sense of a term.

(For example, if we use 'the watery stuff' as shorthand for whatever fills the functional role of water, and which thus picks out H2O in our world but XYZ in Twin Earth, then note that 'water' and 'the watery stuff' have the same primary intensions and cognitive significance, despite differing in their secondary intensions.  "Water is the watery stuff" is cognitively trivial, whereas "Water is H2O" -- relating terms with distinct primary intensions -- is informative.)

The difficulty now for the naturalist is that there is no way to flesh out the primary intension of moral terms using purely naturalistic items in a way that does justice to the cognitive significance of moral terms.  If you have 'good' super-rigidly (in both primary and secondary intensions) pick out (say) happiness, then you seem committed to holding that 'good' means the same (or at least has much the same cognitive significance) as 'happiness', when intuitively they don't even seem to be in the same ballpark.

Note that non-naturalists face no such problem.  They can hold that 'good' super-rigidly picks out the sui generis property of goodness, which in turn supervenes on the natural things that are good -- a class of items that can only be identified through substantive moral insight, and not mere conceptual competence.  Different moral communities may thus share this concept, picking out the same property of moral goodness, even as they disagree in their (implicit) theorizing about which natural items possess the moral property.

Any objections? [...]



Carroll on Zombies

2016-06-20T18:13:59.436+01:00

Zombies are back in the news!  Via the DN Heap of Links, I see physicist Sean Carroll defending what appears to be a kind of analytical functionalism:

What do we mean when we say “I am experiencing the redness of red?” We mean something like this: There is a part of the universe I choose to call “me,” a collection of atoms interacting and evolving in certain ways. I attribute to “myself” a number of properties, some straightforwardly physical, and others inward and mental. There are certain processes that can transpire within the neurons and synapses of my brain, such that when they occur I say, “I am experiencing redness.” This is a useful thing to say, since it correlates in predictable ways with other features of the universe. For example, a person who knows I am having that experience might reliably infer the existence of red‐wavelength photons entering my eyes, and perhaps some object emitting or reflecting them. They could also ask me further questions such as “What shade of red are you seeing?” and expect a certain spectrum of sensible answers.

There may also be correlations with other inner mental states, such as “seeing red always makes me feel melancholy.” Because of the coherence and reliability of these correlations, I judge the concept of “seeing red” to be one that plays a useful role in my way of talking about the universe as described on human scales. Therefore the “experience of redness” is a real thing.

This is manifestly not what many of us mean by our qualia-talk.  Just speaking for myself: I am not trying to describe my behavioural dispositions or internal states that "correlate [...] with other features of the universe" in "useful" ways.  I have other concepts to do that work, concepts that feature in the behavioural sciences (e.g. psychology).  Those concepts transparently apply just as well to my imagined zombie twin as to myself.  We could ask the zombie 'further questions such as "What shade of red are you seeing?" and expect a certain spectrum of sensible answers.'  But this behaviouristic concept is not such a philosophically interesting one as our first-personal concept of what it is like to see red -- a phenomenal concept that is not properly applied to my zombie twin.

So I worry that Carroll is simply changing the subject.  Sure, behavioural dispositions and internal cognitive states (of the sort that are transparently shared by zombies) are "real things".  Who would ever deny it?  But redefining our mentalistic vocabulary to talk about these (Dennettian patterns in) physical phenomena is no more philosophically productive than "proving" theism by redefining 'God' to mean love.

This diagnosis of the debate leads to a rather different dialogue than that which Carroll imagines:

P: What I’m suggesting is that the statement “I have a feeling ...” is part of an emergent way of talking about those signals appearing in your brain. There is one way of talking that speaks in a vocabulary of neurons and synapses and so forth, and another way that speaks of people and their experiences. And there is a map between these ways: When the neurons do a certain thing, the person feels a certain way. And that’s all there is.

M: Except that it’s manifestly not all there is! Because if it were, I wouldn’t have any conscious experiences at all. Atoms don’t have experiences. You can give a functional explanation of what’s going on, which will correctly account for how I actually behave, but such an explanation will always leave out the subjective aspect.

P: Why? I’m not “leaving out” the subjective aspect, I’m suggesting that all of this talk of our inner experiences is a useful way of bundling up the collective behavior of a complex collection of atoms. Individual atoms don’t h[...]



How bad?

2016-06-10T21:35:32.817+01:00

Compare five seriously bad things:

(1) Unjust discrimination along the lines of racism, sexism, etc., in Western countries.
(2) War and terrorism
(3) Global poverty
(4) Animal suffering (from factory farming)
(5) Global catastrophic (i.e. civilization-ending) risks

Just how bad is each of these, in the world as we find it today?  If you could prevent just one of them, which would it be?  (What would your rank ordering be if you weren't sure how many philanthropic wishes the genie was going to give you?)

It can be hard to know where to start with this question of moral prioritization.  But one way to get a grip on the question is to consider how great a cost it would take (in the constant metric of, say, human lives lost) to balance out the gains of protecting against these various evils.

To avoid deontological qualms, do not ask yourself how many people you'd be willing to kill to achieve world peace (or whatever).  Maybe you would not be willing to kill anyone.  Instead ask: if a natural disaster struck a foreign country, at the same time as one of the above evils was magically banished (for a century, say), how bad could the loss of lives from the disaster be while still making the day seem to you one that was, all things considered, more good for the world than bad?

My above presentation lists the evils in descending order of (how it seems to me) roughly how much public attention each cause receives, e.g. on social media.  (Strikingly, this seems more or less precisely the opposite of what I think the correct ranking would be in terms of actual importance.)

To consider the magnitude of each problem in turn:

(1) Unjust discrimination:  A few individuals suffer greatly (or are even killed) due to this, and far greater numbers suffer more moderate harms and indignities. It's extraordinarily difficult to attempt any sort of estimate of the total magnitude of harms here, but perhaps something in the ballpark of 500 - 1500 lives per year would be a reasonable estimate?  If so, it would take around 100k deaths to balance out the gain of a century without such discrimination.  (Note: this would plausibly be at least an order of magnitude greater if we were to include non-Western countries, due to how extremely badly women are treated in many parts of the world.)

(2) War, in recent decades, seems to kill around 50k - 100k people per year, and has been in general decline since the end of WWII.  It also causes significant non-fatal casualties and other (e.g. economic) forms of disruption. (Terrorism, by contrast, is fairly trivial, except insofar as it affects political behaviour, but I'll bracket such indirect effects here.)  So I'm going to guess that a century without war or terrorism would be worth around ten million deaths.

(3) Global poverty is obviously hugely harmful.  Roughly 10% of the world's population lives in extreme poverty (less than $1.90 per day), with another 2 billion or so living on less than $3.10 per day.  I assume this makes a huge difference to one's quality of life (which I take to not just reduce to momentary felt happiness, but also self-actualization through meaningful projects, etc.).  It also causes many millions of preventable deaths each year, not to mention other forms of ill health (e.g. intestinal parasites, blindness due to malnutrition, etc.).

If global poverty were entirely abolished -- with everyone in the world attaining, say, the (purchasing power adjusted) level of the current U.S. poverty line -- this would vastly improve many billions of lives.  It would plausibly take more than a billion deaths (or perhaps more, e.g. 50 million deaths per year over the course of the century) to outweigh these gains.

(4) "Almost 60 billion animals are bred and k[...]
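For what it's worth, the balancing test itself is just multiplication: the century-scale "disaster size" is the annual harm estimate times 100. A trivial sketch, using only the post's own rough per-year guesses (midpoints of its stated ranges, not real statistics):

```python
# The post's own rough annual estimates (deaths or life-equivalents per year).
annual_harm = {
    "(1) unjust discrimination (Western)": 1_000,   # midpoint of 500 - 1500
    "(2) war and terrorism": 75_000,                # midpoint of 50k - 100k
}
YEARS = 100  # the evil is banished "for a century, say"

for evil, per_year in annual_harm.items():
    print(f"{evil}: a century without it balances ~{per_year * YEARS:,} disaster deaths")
```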



Effective Altruism, Radical Politics and Radical Philanthropy

2016-04-20T17:43:18.604+01:00

It can sometimes be difficult to discern precisely what's in dispute between Effective Altruists and their (leftist) critics. This is perhaps in part due to EA's being such a big tent that objecting to one proposal or proponent is not necessarily an objection to EA itself.  To clarify the latter, I see Effective Altruism as a matter of two core commitments:

(1) The "Altruism" bit: A commitment to making the world a better place -- including a willingness to expend some non-trivial proportion of one's own resources to this end.

(2) The "Effective" bit: A commitment to using these resources as effectively and efficiently as possible (based on the best available evidence, analysis, etc.).

And I guess there's a further 'movement-building' element that is perhaps common to all movements:

(*) The belief that others should do likewise.

Now, these core commitments seem pretty innocuous to me, so I'm always a bit baffled when I see people objecting to EA as such.  (Why would anyone be against making the world a better place?)

Of course, there's plenty of room for internal disagreement about what is actually most effective.  EAs vary in whether they prioritize (e.g.) traditional global poverty charities, (farm) animal welfare, meta-charities, global catastrophic risk mitigation, or public policy / lobbying.  The former cause area, with its robust supporting evidence, is emphasized in materials directed at popular audiences, for obvious pragmatic reasons (e.g. the prominence of aid skeptics, and the popular tendency to dismiss anything that sounds too "weird").  This has unfortunately led some academic critics to assume that EA is all about short-term thinking, which really couldn't be more wrong.

One of the internet's most, um, rhetorically strenuous critics of Effective Altruism is Brian Leiter, who in his most substantial post on the topic to date writes:

What if instead of picking worthy charities in accordance with Singer’s bourgeois moral philosophy, those with resources committed all of it to supporting radical political and economic reforms in powerful capitalist democracies like the U.S.; perhaps even committing their time and resources to helping other well-intentioned individuals with resources organize themselves collectively to do the same? Is it implausible that if all those in the thrall of Peter Singer gave all their money, and time, and effort, to challenging, through political activism or otherwise, the idea that human well-being should be hostage to acts of charity, then the well-being of human beings would be more likely to be maximized even from a utilitarian point of view? Do Singerites deny that systemic changes to the global capitalist system, including massive forced redistribution of resources from the idle rich to those in need, would not dwarf all the modest improvements in human well-being achieved by the kind of charitable acts Singer’s bourgeois moral philosophy commends? The question is not even seriously considered in the bourgeois moral philosophy of Singer. Although purporting to be concerned with consequences, like most utilitarians they set the evidential bar so high, and the temporal horizon so short, that the actual consequences of particular courses of action, including the valorization of charity over systemic change, are never really considered.

I don't see any objection to the core commitments of EA here, just a dispute about means.  Leiter may insist that even if his proposals don't violate the letter of EA, they nonetheless remain contrary to the ethos of the movement, and would (in practice) be dismissed out of hand in a way that reveals the movement's fundamental shortcomings (such as the alleged focus on "modest improvements" over an unjustifiably short time horizon). [...]



Final Value and Fitting Attitudes

2016-04-11T15:50:58.773+01:00

An interesting new paper forthcoming in Phil Studies, 'The pen, the dress, and the coat: a confusion in goodness' by Miles Tucker, argues against the (now widely accepted) Conditionalist thesis that intrinsic value and final value are separable.

Consider, e.g., the pen Abraham Lincoln used to sign the Emancipation Proclamation.  Intuitively, it would seem to have final (non-instrumental) value in virtue of its extrinsic properties (i.e., its historical significance / relation to emancipation).  But, interestingly, Tucker argues that standard accounts of final value cannot accommodate this verdict.


According to Fitting Attitudes: "A thing has final value only if it is fitting to care about it for its own sake." (p.6) But, Tucker argues, it's fitting to care about Lincoln's pen only because it's fitting to care about something else, namely emancipation.  So, he suggests, it's not really fitting to care about Lincoln's pen "for its own sake", so it lacks final value on this account.

I think what this really shows is that the fitting attitudes account of final value has been misformulated. What matters is not the explanation why it's fitting to care about the object, but rather the kind of care that is thereby warranted.  The distinction between final and instrumental value mirrors that between final (non-instrumental) and instrumental desires.  So we should say that an object has final value when it is fitting to desire (or otherwise regard) it non-instrumentally.

This plainly solves the problem.  So long as it's fitting to have some non-instrumental pro-attitude towards Lincoln's pen (e.g. to non-instrumentally desire its continued existence), then it has final value on this account.  It doesn't matter what the explanation of this fittingness fact is.  That will merely serve to explain why it has the final value that it does.  Such an explanation might very well appeal to the values of other things to which the pen is related, such as the history of emancipation.  The important thing is just that, as it turns out, the pen really does warrant our non-instrumental regard.



Teaching Effective Altruism

2016-04-10T18:57:20.149+01:00

A few people have asked for my EA syllabus from last term, so I thought I'd share it here with some general reflections.

It was a fun class to teach, but I'd do things a bit differently the next time around.  A big one is just the nature of the teaching: This one was organized as a very "student-led" module, all seminar discussions and no lectures.  While the students really enjoyed the discussions, they seemed a bit complacent in places (esp. regarding their dismissals of expected value / global catastrophic risks and of the significance of non-human animal interests), where in a lecture I might have been better able to develop these challenges in greater depth.

Anyway, here is the syllabus for the 9-week class, using MacAskill's Doing Good Better as the main textbook, with some supplementary readings...

1: Introducing 'Effective Altruism'
* MacAskill, introduction + chapters 1 & 2
Key Questions: Are QALYs a useful measure?  Is it better to make hard trade-offs in cause selection (engaging in philanthropic 'triage') to maximize the good done, or to select causes on some other basis such as flipping coins or emotional resonance?

2: Global Poverty
* MacAskill, chapter 3
* Singer, 'Famine, Affluence, and Morality'
Key Questions: How much good does (the best) aid do? Are we morally required to donate more (and if so, how much more)?
[In future I think I'd move MacAskill's chapter 3 (and the associated key question) into the first week (it's very easy reading), and add some responses to Singer (perhaps Miller's 'Beneficence, Duty and Distance') in here instead, for greater philosophical depth.]

3: Difference-Making and Expected Value
* MacAskill, chapters 4 - 6
Key Questions: Should we be guided by average or marginal utility?  Is 'expected value' reasoning the right way to take low-probability outcomes into consideration?  Is it important to be the direct cause of a benefit, or just to (even indirectly) maximize the total amount of good done?

4: Evaluating Charities / Catastrophic Risk
* MacAskill, chapter 7
* Bostrom, 'Astronomical Waste'
* Karnofsky, 'Why We Can’t Take Expected Value Estimates Literally (Even When They’re Unbiased)'
Key Questions: Is there anything to be said for Charity Navigator-style "financial metrics" (overhead, CEO pay, etc.) as opposed to GiveWell-style impact analysis?  To what degree should a lack of "robustness" lead us to discount a cost-effectiveness estimate?  Should we accept the expected-value argument for prioritizing global catastrophic risk mitigation?
[Many of the students wouldn't take Bostrom's paper seriously. In future I might try replacing it with the first chapter of Nick Beckstead's dissertation, 'On the Overwhelming Importance of Shaping the Far Future'.]

5: Ethical Consumerism and Animal Welfare
* MacAskill, chapter 8
* Norcross, 'Puppies, Pigs, and People'
Key Questions: Assuming that sweatshop jobs are a step up from the alternatives, how should we weigh the benefit they provide vs worries about complicity in exploitation?  Is there anything wrong with carbon offsetting? Are other forms of moral offsetting (e.g. meat offsetting, murder offsetting) relevantly similar?  How much weight should we give to the interests of non-human animals?

6: Career Choice / Immigration
* MacAskill, chapter 9
* Clemens, 'Economics and Emigration: Trillion-Dollar Bills on the Sidewalk?'
* [Background reading:] Fine, 'The Ethics of Immigration'
Key Questions: How should you choose what career to pursue? Is "earning to give" better than working in the social sector?  Do high-paying corporate jobs tend to be intrinsically immoral?  Should people have a right to accept a job (with an employer who wants them[...]



Philanthropic Focus vs Abandonment

2016-03-06T14:49:10.591+00:00

This seems a lamentably common way of thinking:

[Chief Executive of Oxfam GB] Goldring says it would be wrong to apply the EA philosophy to all of Oxfam's programmes because it could mean excluding people who most need the charity's help. For a certain cost, the charity might enable only a few children to go to school in a country such as South Sudan, where the barriers to school attendance are high, he says; but that does not mean it should work only in countries where the cost of schooling is cheaper, such as Bangladesh, because that would abandon the South Sudanese children.

Fuzzy group-level thinking allows one to neglect real tradeoffs, and pretend that one is somehow helping everyone if you help each group a little bit.  But this is obviously not true.  If there are more Bangladeshi children in need of education than your current budget can provide for, then by spending the rest of your budget on educating a few kids in South Sudan, you are abandoning a greater number of Bangladeshi children.

If we don't have the resources to help everyone, then (inevitably) some people will not be helped.  To put it more emotively, you could say that we are "abandoning" them. That's a good reason to try to increase our philanthropic budgets.  It is not any sort of reason at all to spend one's budget inefficiently, leading to the philanthropic "abandonment" of even more children.

Goldring's reasoning is 'collectivist' in the bad sense: treating groups rather than individual persons as the basic unit of moral consideration.  That you have already helped some Bangladeshi children is cold comfort to the other Bangladeshi children you have spurned for the sake of instead helping a smaller number of South Sudanese.  They would seem to have a legitimate complaint against you: "Why do you discount my interests just because I share a nation with other individuals that you have helped? You have not helped me, and I need help just as much as those you chose to prioritize in South Sudan.  Since you chose to help a smaller number of South Sudanese children when you could have instead helped a greater number of children in my community, you are effectively counting my interests for less.  That is disrespectful and morally wrong."

By contrast, if you focus your philanthropic resources on providing as much good as possible, no-one has any legitimate complaint.  You may imagine a South Sudanese child asking, "Why did you not help me? Just because it's more expensive to provide schooling in my country, does not make my educational needs any less morally important than anyone else's!"  To which the obvious answer is, "Indeed, I give equal weight to the interests of all, including yourself; but that is precisely why I must prioritize the educating of a larger group of individuals over a smaller group, if my resources only allow for one or the other. If we could fund your education without thereby taking away the funding for multiple other people's education then of course we would!  But it would not be fair on those others to deprive several of them of education in order to educate just you.  I count your education as being equally as important as the education of any other one individual.  If I can then educate a second individual as well for the same amount of resources, then that is what treating each person's interests equally requires me to choose."

We may vividly demonstrate the irrationality of the collectivist's thinking by mentally subdividing the group of Bangladeshi children into two groups.  Call 'B' the group that is helped by the current budget, and 'C' the group of additional children who will be aided if and [...]
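The arithmetic behind this complaint can be made vivid with a toy example (all numbers invented: a fixed budget, with schooling assumed ten times more expensive in South Sudan than in Bangladesh):

```python
BUDGET = 100_000          # hypothetical education budget (in £)
COST_BANGLADESH = 100     # assumed cost to school one child
COST_SOUTH_SUDAN = 1_000  # assumed (much higher) cost per child

# Focused: spend everything where schooling is cheapest.
focused = BUDGET // COST_BANGLADESH
# "Inclusive" split: half the budget to each country.
split = (BUDGET // 2) // COST_BANGLADESH + (BUDGET // 2) // COST_SOUTH_SUDAN

print(f"focused: {focused} children schooled")  # 1000
print(f"split:   {split} children schooled")    # 550
# On these numbers the split "abandons" 450 children (group C) who would
# otherwise have been schooled.
```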



The Basic Reason to Reject Naturalism: Substantive Boundary Disputes

2016-03-04T13:40:15.738+00:00

I've been trying to work out what I think the most basic reason to reject naturalism (about mind and morality) is.  Sometimes it's suggested that normativity is just "too different" from matter to be reducible to it.  (Enoch and Parfit both say things along these lines.)  But that seems a fairly weak reason: plants and stars seem very different from atoms, after all, but that doesn't stop them from being wholly reducible to atoms.  Granted, mind and morality are even more different, being non-concrete and all, but still.  I think the non-naturalist can do better.

So, a better intuitive basis for rejecting naturalism, it seems to me, is that it can't accommodate the datum that debates about the distribution of mental or moral properties are substantive.  Imagine two people disagreeing about precisely which collection of atoms constitutes the Sun. (There's overwhelming overlap between the two proposals; they just differ slightly in where they draw the boundaries -- whether or not to include a particular borderline atom, say.)  It's clear that this is a merely terminological disagreement: they diverge in whether they take the word 'Sun' to pick out the minimal collection S, or the one-atom-larger collection S*, but it's not as though there's some further issue at stake here about which they remain ignorant -- which collection of atoms has some putative special further property of really being the Sun.  There is no such further property.

Disagreements about the distribution of minds and moral properties are not like this. But if naturalists were right, they should be.  As I wrote in an old post on 'non-physical questions':

The question whether my cyborg twin is conscious or not is surely a substantive question: I'm picking out a distinctive mental property, and asking whether he has it. Now, the problem for physicalists is that they can't really make sense of this. They can ask the semantic question whether the word 'consciousness' picks out functional property P1 or biological property P2. But given that we already know all the physical properties of my cyborg twin (say he has P1 but not P2), there's no substantive matter of fact left open for us to wonder about if physicalism is true. It becomes mere semantics.

In other words, in order for it to be a substantive fact that consciousness tracks (say) functional rather than biological properties, there needs to be a further property (distinct from the functional and biological properties) for the two views to both be arguing about. Otherwise they're just arguing about a word.  But it's obvious that disputes about the boundaries of consciousness are not just disputes about the word (in stark contrast to disputes about the boundaries of the Sun).

Similarly in metaethics:

The whole problem for the naturalist is that they have no basis for claiming that any particular one of the competing, internally coherent moral theories is the one true moral theory. After all, given the natural (non-moral) parity between us and our Moral Twin Earth counterparts, what in the two worlds can the naturalist appeal to as the basis for a moral or rational asymmetry between us?

If moral boundaries are not just to be determined by arbitrary semantics, it must be that there's a metaphysical difference between the truly good properties and the pretenders, breaking the symmetry.  There must be a further property of goodness for the rival views to be arguing about.

I think this is the core intuition that's really behind the Open Question Argument, Parfit's triviality and fact-stating objections, as well as the knowledge argument in philosophy of mind. [...]



Student Spotlight: Intrinsically Irrational Instrumental Desires

2016-02-29T21:35:45.057+00:00

I had always assumed that only ultimate ends, or telic / final / non-instrumental desires, could be intrinsically irrational.  (Think Future Tuesday Indifference.)  Instrumental desires, by contrast, may happen to be irrational if based on a false and irrational means-end belief, but then the problem is extrinsic to the desire itself -- the problem instead lies with the false belief, and one could presumably imagine circumstances in which the means-end belief would be true, thus making the instrumental desire in question a perfectly reasonable way of achieving one's goals.

Or so I assumed. (And I think it's a fairly common assumption.)

University of York undergraduate philosophy student Lorin Thompson (mentioned here with permission) drew my attention to an interesting class of counterexamples.  We can obtain intrinsically irrational instrumental desires if we consider instrumental desires that are essentially self-defeating.  His example is the "desire to think of a number, in order to not think of a number (simultaneously)."  The implicit means-end belief -- that one can succeed in not thinking of a number by means of thinking of a number -- is logically incoherent, and the resulting instrumental desire is thus intrinsically (rather than merely extrinsically) irrational.

It's a cool case!  At the very least, I'll need to re-write my essay question for future years to ask something like whether there are "unworthy" ultimate ends rather than just "intrinsically irrational desires", as it now turns out that even Humean subjectivists should make room for the latter.

Does anyone know whether such cases have been discussed before, or could it potentially be a new contribution to the literature if Lorin were to write up his paper for an academic journal?



7 Things Everyone Should Know about Philosophy

2016-02-26T18:43:49.981+00:00

Inspired by the ignorance of Bill Nye the science guy... Some things I wish everyone knew about philosophy:

(1) Descartes' "I think, therefore I am" does not imply that your existence depends upon your thinking.  It is merely intended to show that a thinker cannot coherently doubt their own existence.

(2) People are often quite bad at reasoning.  Logic, a component of philosophy, can help with this. (Supplementing this with an understanding of statistical and probabilistic reasoning is still important, though!)

(3) When philosophers raise outlandish-seeming questions ("What is your basis for expecting the sun to rise tomorrow?  Or for expecting the future to resemble the past?") it is generally not because we think them unanswerable, or that we think the outlandish possibilities being hinted at are credible, but rather that consideration of the question can give rise to important insights, e.g. into the nature of our everyday knowledge. So to mock philosophers for their thought experiments is as silly as mocking Einstein for (supposedly) thinking that you could ride on a ray of light.  It merely reveals that you have missed the point.

(4) Many important intellectual questions (including, e.g., fundamental moral and epistemological issues) do not concern empirical happenstance, and so cannot be answered by the methods of science. Different methods are needed if we are to make any progress in thinking about them.  This sort of thinking is what philosophy is all about.  If you dismiss it, you are effectively giving up on rational thought about non-empirical matters.

(5) Philosophy is inescapable, in the sense that wholesale dismissals of it tend to be self-defeating. If you dismiss it as worthless, you’re making a claim in ethics or value theory, which are sub-fields of philosophy. If you think it’s an unreliable source of knowledge, that’s epistemology.  Either way, you must engage in philosophical reasoning and argument in order to (non-dogmatically) assess the value of philosophy.

(6) While philosophy is difficult, and often controversial, it does not follow that it is "all just a matter of opinion".  Some opinions are more reasonable, or better grounded, than others.  Even if it turns out that there are multiple internally-coherent ways to think about the world, given the evidence available to us, our initial thoughts on a topic tend to be so riddled by implicit inconsistencies that philosophical thinking can allow us, individually, to make a great deal of progress in improving the coherence of our world views.

(7) Philosophy, as a collective enterprise and academic discipline, makes progress by identifying and resolving common inconsistencies, clarifying what the implications of various positions really are, or which claims do (or don't) rationally support each other. [...]



My Giving Game results

2016-02-26T13:58:43.617+00:00

In my Effective Altruism class this past week I've run a "giving game", getting the students, in small groups, to discuss & decide where to donate £100 of my money.  It was quite interesting.

One potential downside of requiring the decisions to be made by consensus in small groups (of three or four students each) was that this ended up creating a bit of a bias towards conservative / "safe" choices from GiveWell's top charities, rather than more speculative (but potentially high upside) options about which there were disagreements within the group.  For example, one group had members initially supporting animal welfare, climate change mitigation, and criminal justice reform, but since they couldn't resolve these disagreements in the hour allotted for discussion and debate, they ended up agreeing to fund a deworming charity instead.  Another student favoured existential risk reduction, but again could not reach consensus on this within their group.

If I do this again in future years, I might try to think of an alternative way of implementing the giving game that allows the students a bit more free rein. E.g., one option would be to give each student £50 (or whatever) that they can allocate individually, or perhaps with the additional requirement that they must find / convince at least one other student in the class to share their choice of charity (to encourage argument and discussion).  Discussion could then proceed in small groups of rotating membership (rather than having fixed groups, as we did this year).  Something I'll think about, anyway.

As for the verdicts, following my students' directions, I have just donated:
* £200 to the Against Malaria Foundation,
* £200 to GiveDirectly,
* £100 to each of SCI and Deworm the World,
* £100 to Project Healthy Children,
* £100 to Cool Earth,
* £100 to Animal Equality, and
* £100 to Basic Needs (an international mental health charity).

Most of these donations got a further 25% boost from UK Gift Aid; for UK taxpayers, donating via the GWWC Trust is very helpful in this respect!
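(For the curious, here's how the Gift Aid arithmetic works -- a minimal sketch of my own, not official guidance, and the actual eligibility rules are more complicated:)

# Gift Aid lets the charity reclaim basic-rate tax on a donation,
# which is where the 25% boost mentioned above comes from.
GIFT_AID_RATE = 0.25

def with_gift_aid(donation_gbp):
    """Amount the charity receives once the Gift Aid claim is included."""
    return donation_gbp * (1 + GIFT_AID_RATE)

print(with_gift_aid(100))  # 125.0 -- a £100 donation is worth £125 to the charity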

What charities do you consider most effective?  Comments / suggestions welcome!  (I'm quite partial to meta-charities, myself...)



Opposite Day: "Charity begins at home" edition

2016-02-26T13:25:16.887+00:00

It's been almost a decade since my evil twin Ricardo last posted on this blog. I invite him back today to share a horribly misguided speech that he recently gave as part of a debate in St Andrews on the topic 'Charity begins at home'. (They needed someone to defend that awful claim, and I wasn't entirely comfortable doing it myself, so I sent along my evil twin to do the job. Here's what he came up with...)

Charity begins at home… but that's not to say it ends there!

So let me begin by clarifying what is and is not at stake in this debate.  It's common ground that we should do more to help others (and the global poor in particular). Our core thesis is just that we should not focus exclusively on the global poor, neglecting significant needs closer to home.

To establish this thesis, consider the following scenario: Upon learning that the Against Malaria Foundation can save a life for around £2000, you go to the bank and withdraw £4000, intending to save two lives with it.  But on the way to the post office, you come across a young child drowning in a shallow pond.  You are the only person around, and her only chance of survival is if you jump in immediately to rescue her -- ruining the money in your pocket.  What should you do?

Obviously, you should save the child.  This remains true even though the alternative -- were you willing to watch her drown in order to keep your money safe -- would have led to your later saving two lives instead.  The rule of rescue bars us from making such tradeoffs.  When we see great needs, right before our eyes, we morally must act.  We can't just sit back and coldly calculate for the greater good.  If you share this moral judgment, then you, too, agree that charity begins at home.  We cannot neglect the needs around us for the sake of some distant greater good.

What explains this?  One way to get at it is to ask where one goes wrong in acting otherwise.  Suppose your neighbour would watch the child drown, for the greater good.  What would you think of this, and why?  Well, most naturally, I think you would worry that your neighbour is a disturbingly callous person.  What sort of person can just sit by and watch as a young child drowns right before his eyes?  He would seem a kind of moral monster.  If his reason is that he wants to save two lives instead, then that adds a complication.  He isn't obviously ill-meaning; he wants to do what's best.  But it's still monstrous in the sense of being inhuman -- perhaps robotic, in this case, is the better description.

We think that part of what it takes to be a morally decent person is to be sensitive to the needs and interests of those around you.  To be willing to watch a child drown displays a special kind of moral insensitivity, even if it's done for putatively moral reasons.  Such an agent, we feel, fails to be sensitive to human suffering in the right way -- in the emotionally engaged kind of way that we think a properly sympathetic, decent human being ought to be.  Of course there remains a sense in which your robotic neighbour wants to minimize suffering, and that's certainly a good aim as far as it goes.  But one can't help but feel that your robotic neighbour here is moved more by math than by sympathy; he has an over-intellectualized concern for humanity in the abstract, but seems troublingly lacking in empathy for the concrete, individual child in front of him.

So -- need it be said -- this concretel[...]



Expected Value without Expecting Value

2016-02-01T17:01:29.416+00:00

I'm currently teaching a class on "Effective Altruism" (vaguely related to this old idea, but based around MacAskill's new book).  One of the most interesting and surprising (to me) results so far is that most students really don't accept the idea of expected value.  The vast majority of students would prefer to save 1000 lives for sure than to have a 10% chance of saving a million lives.  This, even though the latter choice has 100 times the expected value.

One common sentiment seems to be that a 90% chance of doing no good at all is just too overwhelming, no matter how high the potential upside (in the remaining 10% chance), when the alternative is a sure thing to save some lives.  It may seem to neglect the "immense value of human life" to let the thousand die in order to choose an option that will in all likelihood save no-one at all.  (Some explicitly assimilate the low chance of success to a zero chance: "It's practically as though there's no chance at all, and you're just letting people die for no reason!")

Another thought in the background here seems to be that there's something especially bad about doing no good.  The perceived gap in value between 0 and 1000 lives saved is not seen as nine hundred and ninety-nine times smaller than the gap in value between 1000 and one million lives saved, as it presumably should be if we value all lives equally.  (Indeed, for some the former gap may be perceived as being of greater moral significance.)

Interestingly, people's intuitions tend to shift when the case is redescribed in a way that emphasizes the opportunity cost of the first option: (i) letting exactly 999,000 of 1,000,000 people die, or (ii) taking a 10% chance to save all one million.  Many switch to preferring the second option when thus described.  (Much puzzlement ensues when I point out that this is the same case they previously considered, just described in different words! In seminar groups where time permitted, this led to some interesting discussion of which, if either, description should be considered "more accurate", or which of their conflicting intuitions they should place more trust in.)  Makes me think that Kahneman and Tversky should be added to our standard ethics curriculum!

One way to make the case for expected value is to imagine the long-run effects of iterating such choices, e.g. every day.  Those who repeatedly choose option 1 will save 10k people every ten days, whereas the option-2 folk can expect to save 1 million every ten days on average (though of course the chances don't guarantee this).  Most agree that the second option is better in the iterated choice situation.  (A quick numerical sketch of both comparisons appears below.)

There are a couple of ways to argue from this intermediate claim to the conclusion that expected value should guide our one-off decisions.  One is to suggest that each of the individual choices is equally choice-worthy, and that -- from an impartial moral perspective -- the intrinsic choice-worthiness of an option should not depend on external factors like whether one gets to make similar choices again in future.  In that case, we could reach the conclusion that each individual option-2 choice is 1/10th as choice-worthy as the collection of ten such choices, which is much more choice-worthy than an option-1 choice.

The second route would be to suggest that even if one doesn't get to make this particular choice repeatedly, we may, in our lives, expect fairly often to have to make some or other choices under cond[...]
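(As promised above, here's a quick numerical sketch of the two comparisons -- my own Python illustration, not anything from the post or the class:)

import random

# Expected value of each one-off option:
ev_sure = 1.0 * 1_000          # option 1: save 1,000 lives for certain
ev_gamble = 0.1 * 1_000_000    # option 2: 10% chance of saving a million
print(ev_sure, ev_gamble)      # 1000.0 vs 100000.0 -- a 100x difference

# The iterated version: take the option-2 gamble once a day for ten days.
random.seed(0)  # fixed seed, purely so the illustration is reproducible
saved = sum(1_000_000 if random.random() < 0.1 else 0 for _ in range(10))
print(saved)  # 1,000,000 in expectation, though any given day may save no-one

Nothing here goes beyond the arithmetic already in the post; the simulation just makes vivid how a chooser focused on single runs can see mostly zeroes even when the expected value overwhelmingly favours the gamble.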



2015 in review

2015-12-29T17:19:10.548+00:00

(Past annual reviews: 2014, 2013, 2012, 2011, 2010, 2009, 2008, 2007, 2006, 2005, and 2004.)

Metaethics & Epistemology

* Information and Parfit's Fact-Stating Argument -- clarifying what I take to be one of the strongest (yet generally neglected) arguments against metaethical naturalism.
* Cancelling Schroeder's 'Implicature' Response to Parfit's Triviality Objection -- it's really not a very compelling response.
* Are "Internal Reasons" Normative?  I worry that Williams can't coherently think so.
* Normative vs Metaethical Wrong-making -- explains what I think is wrong with a recently-published "moral argument against moral realism".
* Self-Undermining Skepticisms -- argues that Street-style attempts to "debunk" normative realism will end up being self-defeating.
* Three options in the epistemology of philosophy -- compares skepticism, epistemic conservatism, and objective warrant views. (My sympathies lie with the latter.)
* Judgmentalism vs Non-commitalism -- would an ideally rational agent ever suspend belief?

Moral Theory

* Deliberative Openness and the Actualism-Possibilism Dispute -- arguing that agents should treat as deliberatively "open" (or within their control) just that which counterfactually depends upon the outcome of their present deliberations.  If your best present efforts can do nothing to influence your future decisions, then this "future self" should, for deliberative purposes, be treated as a distinct agent -- not someone who you can rely upon to carry out your present plans and intentions.  So, plan accordingly.
* Three ways of rejecting moral intuitions -- some more legitimate than others.
* Thoughts on 'Non-Consequentialism Demystified' -- is it a problem for a theory if it holds that reasonable choices and reasonable preferences diverge?
* Puzzles re: Kant on the Good Will
* Moral Theories and Fittingness Implications -- why the former have the latter.
* Demandingness and Opt-in vs Opt-out sacrifices
* Wronging for Utilitarians -- not such a puzzle as some seem to think.
* Criterial vs Ground-level Moral Explanations -- a distinction that can help explain why various (e.g. motivational) objections to utilitarianism fail.  See also Rossian Utilitarianism.
* 'Objective Menu' theories of wellbeing -- the old 'list' metaphor is misleading.
* Questioning Moral Equality -- is there any interesting sense in which all people are truly "moral equals"?
* Valuing Unnecessary Causal Contributions -- why you shouldn't.
* The Best Case for Voting -- invoking cooperative, rather than purely individualistic, utilitarianism.

Applied Ethics

* My paper 'Against "Saving Lives": Equal Concern and Differential Impact' was published in Bioethics.
* Is it OK to have kids? -- my article in Aeon Magazine.
* Procreative Externalities -- are additional people good or bad for those already here? (And more.)
* A Distant Realm: Rethinking the Procreative Asymmetry
* Good Lives and Un/conditional Value -- if good lives are not intrinsically good, it's hard to avoid the bleak conclusion that it'd be better had there never been life at all.
* GOP Closes Doors to Newborns -- satire, but distressing how little needs tweaking when you start from real quotes from politicians talking about Syrian refugees.
* Waiving Rights and "Second-class Citizens" -- there's something odd about objecting to extended guest-worker programs for the sake of the guest-workers who want to stay here and work.
* Bas[...]



Aeon article on Procreative Ethics

2015-12-24T12:27:26.526+00:00

It's been a year in the making, but my article on procreative ethics is finally up at Aeon Magazine!  Big thanks to Helen Yetter-Chappell, Eden Lin, my parents, and my brother Luke, for helpful feedback on making it accessible to a general audience. (Bonus points to my parents for bringing me and my four brothers into existence -- all net positives for the world, in my opinion!)



Normative vs Metaethical (constitutive) Wrong-making

2015-12-22T17:32:52.032+00:00

Melis Erdur, in 'A Moral Argument Against Moral Realism', asks "whether it makes moral sense to take the dictates of some independent reality to be the ultimate reason why genocide is wrong." (p.7)  She continues:

[S]urely, the existence of an independently issued verdict – if there were such a verdict – that genocide is wrong would not be the main or ultimate reason why it is wrong. Genocide is wrong mainly and ultimately because of the pain and suffering and loss that it involves – regardless of whether or not the badness of such suffering and loss is confirmed by an independent reality.

It's a mistake to think that moral realism implies that possession of the mind-independent property of moral wrongness is the "ultimate reason why" an act is wrong, in the ordinary (normative) sense of "reason why".  It's a common mistake, though.  Matt Bedke writes something similar (though I gather from correspondence that he doesn't really intend it to be read this way) in 'A Menagerie of Duties?': "Is it because they are causing [...] pain that the action normatively matters in the way it does, or because there is some non-natural property or relation at play? Surely the former." (p.197)

As I respond in my non-naturalism paper (the remainder of this blog post is an extended quote from pp.12-14):

A question like "Why is it wrong to cause gratuitous pain?" can be read in two ways. The most natural reading situates it as a question in first-order normative ethics. This is to ask: What are the wrong-making features of such actions? Which of the (natural) properties in this situation are the normatively significant ones -- the ones that do the justifying (or that explain why a certain action is unjustified)? Here the non-naturalist can happily agree with Bedke that it's the causing of pain that's of central normative significance here, and that explains why "the action normatively matters in the way it does."

Non-naturalism is not a first-order normative theory, after all. It instead addresses the (more obscure) metaethical question: What does the wrongness of the action (or the badness of the pain) consist in? Simple answer: The wrongness of the act consists in the act's possessing the property of being wrong! Not a particularly informative answer, perhaps, but it's a central thesis of non-naturalism that normative properties are sui generis. This view eschews the kinds of ambitious metaethical explanations offered by constructivists and others. There is, on this view, no deeper explanation of what wrongness is to be offered. The purely normative properties are bedrock, and the basic normative truths are brutely true. One may or may not like this aspect of the view, but the crucial point for now is just to note that it's compatible with any first-order normative explanations (of which acts "normatively matter" and why). The constitutive sense in which possessing the purely normative property of wrongness is what "makes" wrong acts wrong (in the sense that this is what it is for an act to be wrong) is distinct from, and not in competition with, the normative sense in which certain natural properties are "wrong-making" features. [...]



Lichtenberg on Effective Altruism

2016-02-26T13:25:16.843+00:00

Judith Lichtenberg has a pretty misguided -- and often misleading -- hit piece on the Effective Altruism movement, which concludes:

The effective altruists have shown that, without undue burdens, many of us can and should do a lot more than we do now. But in their zeal to maximize effectiveness, they distort human psychology, undervalue the contributions made by ordinary people, and neglect the kind of structural and political change that is ultimately necessary to redress the suffering and radical inequality we see around us.

Her criticisms are not well-supported.

(1) Some EAs, like Peter Singer, are sympathetic to impartial moral theories like utilitarianism, but (i) not all are, and (ii) it seems an is/ought confusion to accuse a normative theory of "distort[ing] human psychology".

Lichtenberg paints EAs as cold and emotionless: "To do the most good we must ignore our natural sentiments and calculate, or else let others (like the analysts at GiveWell) do the calculating for us." This is a ridiculous caricature (a real distortion of others' psychologies!).  The nugget of truth here is that the Effective Altruism movement indeed urges us to ensure that our philanthropic efforts are evidence-based, and as effective as possible.  But this is a far cry from "dismiss[ing] the role" of emotions altogether, as JL accuses us of doing.  For many EAs, an emotional concern for the wellbeing of others is precisely what drives them!  (Nor, for that matter, is "calculating" a particularly accurate characterization of the kind of analysis that GiveWell does.)

JL seems not to like the suggestion that we should (ever?) override our initial inclinations to help groups we are familiar with, in favour of strangers whom we are able to help even more.  She apparently finds ideals of universal love and beneficence to be "chilling".  But it is hardly a "distort[ion of] human psychology" to suggest that such efforts are often worth making -- unless we are to think that every effort to compensate for our ordinary biases, in order to better promote our values, is an objectionable form of psychological "distortion"?

(2) Do EAs "undervalue the contributions made by ordinary people"?  On the contrary, organizations like Giving What We Can stress how much "ordinary" westerners (who are, by global standards, pretty much all extremely wealthy) can easily contribute, potentially saving lives every year without much cost to our own material standard of living.

So I think JL is simply incorrect when she claims that "focusing so heavily on what elites can do denigrates the contributions of ordinary people, who cannot make huge differences understood in the quantitative and aggregative terms the effective altruism movement prizes."  I think that saving a life is a pretty "huge" deal; it's something that any ordinary person can do (with a modest but well-targeted donation), and it is highly prized within the EA movement.  It's obviously true that those with vastly greater resources are able to achieve even more.  But it's no part of EA that this "denigrates" the more modest efforts of the rest of us; that invidious idea is wholly JL's own, and I think we should reject it.

JL adds:

[T]here are ways of making a difference that can only be achieved by getting your hands dirty—in soup kitchens, clinics, prisons, schools, and neighborhoods, not to mention through political acti[...]



GOP Closes Doors to Newborns

2015-11-17T16:55:07.859+00:00

In a nearby possible world, the GOP politicians decrying refugees extended their arguments just a little further.  It went a little something like this...

"Who in their right mind would want to let in tens of thousands of newborn children, when we cannot determine, when the administration cannot determine, who is and isn’t going to grow up to be a terrorist?” Cruz asked.

“Here’s the problem." Rubio expanded. "You allow 10,000 people to be born. And 9,999 of them are innocent people who grow up to be decent, law-abiding citizens. And one of them becomes a terrorist,” he said. “What if we get one of them wrong? Just one of them wrong.”

"My primary responsibility is to keep the people of Texas safe," explained the Governor of Texas, in a video clip. "That means: no more people," he quietly added. "They're just too risky."

"Abortion is murder," one pro-life campaigner clarified. "But if we ship babies off to the Middle-East, whatever happens to them is out of our hands." She paused. "I think it's the right thing to do, to protect our values, and protect our children." A longer pause. "The ones that are already here, I mean."