Freddie deBoer Is Wrong About Effective Altruism
It's not what he says about effective altruism or utilitarianism, but what he leaves unsaid.
This post is a response to a pair of recent articles by the writer Freddie deBoer:
“Effective Altruism Has a Novelty Problem”
“My Brief Brief Against Utilitarianism”
Freddie deBoer has a problem with effective altruism. It’s a school of thought everyone in educated circles is suddenly talking about in response to the surprisingly large press splash of Oxford philosopher William MacAskill’s new book What We Owe the Future. MacAskill and the philosopher Peter Singer are generally seen as the fathers of the effective altruism movement (or EA for short). In a pair of recent articles, deBoer went after effective altruism along with its parent philosophy utilitarianism, arguing that the former is an obnoxious and immature movement, and the latter is a dangerous and almost sociopathically deranged moral code. I love Freddie deBoer. He’s one of my favorite writers, in fact. These recent broadsides, however, miss the mark by quite a bit. While he goes to town on the softest targets and the lowest-hanging fruit, it’s what he leaves unsaid that exposes the hollowness of his critique.
Briefly, effective altruism is the notion that we should organize our philanthropic efforts in order to maximize the amount of good we do. According to effective altruism, it is preferable to donate money toward, for example, preventing or reversing certain types of blindness in impoverished countries rather than supporting, say, a local museum, youth program, or advocacy group. Since both the general quality of life and cost of living are so much higher in the developed world, EA reasons, it is more impactful to prioritize one’s charitable giving to causes in the developing world via effective organizations, where comparatively small sums of money can go further and “do more good.” With the money that might give a hungry man in Milwaukee a meal, a family in rural Africa might be fed for a week.
To be sure, there are many valid criticisms one might make of effective altruism. For example, certain strains of EA thinking lead people to disproportionately care about large-scale but speculative future problems that might happen, like an out-of-control artificial intelligence, as opposed to concrete problems of the here and now. Another issue is that EA divorces all sentimentality from the act of charity, which allows for more rational decisions, but in so doing also diminishes the positive feelings people derive from their generosity, which may, for some, translate to less charitable giving. EA also incentivizes people to maximize their incomes in order to have more money to donate — “earning to give” — but depending on the nature of what one does for a living, that may offset the positive impact (e.g. an executive at a company that engages in predatory lending).
deBoer’s criticism engages with no such substance. He regards the core mission of EA, to “do good better,” as so obvious, banal, and self-explanatory that it shouldn’t need saying. Except it isn’t obvious. It’s radically counterintuitive that someone 5,000 miles away whom you don’t know and will never meet could benefit more from your $50 check than the local children’s hospital. On that score, he’s simply mistaken, but his larger beef seems to be that people online are going around using EA as an identity of sorts, and acting like smarmy, holier-than-thou asses. Within 10 minutes of googling effective altruism, deBoer reports, he was confronted with a 12-year-old think piece profiling a philosophy professor’s opinion that killing all carnivorous species would be good, and an EA forum where people discuss the notion that insects have sentience. Oh, and some guy over at Vox went all Tom Cruise-on-Oprah’s-couch about how awesome he thinks effective altruism is. Damning evidence indeed.
There’s nothing wrong with criticizing or even ridiculing people saying silly things online. But are we going to pretend that the dumbest handful of things the first dozen pages of search results turn up is representative of the larger movement or set of ideas in question? It’s a tried-and-true tactic of the critic to seek out the softest conceivable target, but it’s also the least interesting. For every EA debate bro online, there are thousands who’ve read Peter Singer or Will MacAskill, or come across them on podcasts or YouTube, and been inspired to put GiveDirectly, Population Services International, or the Against Malaria Foundation on automatic monthly donations and then moved on. Effective altruism, the online community, is not the same thing as effective altruism, the pattern of real-life behaviors. So someone said we should kill all the tigers. Alright, that’s dumb — and happily, almost no one agrees. Now that you’ve shot that fish in a barrel, can we expect a real critique of why prioritizing life-saving developing-world charities over less impactful domestic ones is misguided? I suppose not.
Freddie deBoer is a self-identified communist. I bring this up because Marxists in general, I’ve noticed, take a rather dim view of philanthropy, as they tend to be loath to admit that rich people or (gasp) corporations could ever do any good in the world. Improving the lot of the downtrodden is their home turf (they must be a road team), and they don’t much appreciate competition on that front. Even if they concede that philanthropy can do some good, well, it’s just an insulting pittance, dwarfed by the colossal evils of capitalism. This isn’t to say that Marxists never give to charity, but they remain broadly dismissive of it. Granted, I can’t know whether this plays a role in deBoer’s case, and one never wants to mind-read others, but when the criticisms are this vacuous from an otherwise reasonable person, I’m left wondering.
deBoer did get some pushback from his audience, which led him to write a follow-up piece taking aim at the larger philosophical tradition from which effective altruism derives: utilitarianism. Utilitarianism is the doctrine in moral philosophy that we should aim to do the most possible good for the largest possible number of people. “Good” has been defined differently over the years, and seems nowadays to most often refer to something along the lines of “maximizing wellbeing.” Utilitarianism is an offshoot of the broader school of thought known as consequentialism, which posits that the rightness or wrongness of actions should be determined not by the action itself, but by its consequences, as contrasted with deontology, which holds more or less that the opposite is true.
deBoer says he rejects utilitarianism because, taken to its logical extremes, it leads one into circumstances nearly all people regard as morally wrong. His prime example is of women in persistent vegetative states in long-term care facilities. There have been real-world instances of such women being raped, and, deBoer argues, utilitarianism not only lacks the means to condemn such behavior, it actively encourages it. After all, the woman is not conscious, so she can’t generate “utility”; the would-be rapist is conscious, and therefore can. Even a security guard observing this crime in progress, if he is an “honest” utilitarian who takes his principles to their uttermost logical conclusions, should be happy to see such utility being created and would have no grounds to intervene, we are told.
There are two avenues by which one might respond to this. The first is to dispute the claim that utilitarian logic necessarily leads to the outcomes he suggests in these examples. While I think one can quite easily marshal utilitarian arguments to find ways out of these bad outcomes (e.g. knowledge of such abuse and its toleration would harm the entire community), that’s actually beside the point. The second avenue is the one that interests me: not deBoer’s choice of edge case, but the conclusion he draws from it. Indeed, it’s less a case of drawing conclusions than starting with one and working backward. Whether you grant him these examples or swap in stronger hypotheticals, there is a point at which utilitarian reasoning quite obviously breaks down. Of course there is. What goes astoundingly unsaid is that this is true of every proposed moral system in the history of human cognition!
There is genuine, excuse the term, utility in exploring thought experiments that stress test a moral philosophy and expose its limitations, but it’s not the checkmate deBoer seems to think it is. Pushed to its limits, no moral system comes out smelling like daisies. Whether you look at consequentialism as a whole, or deontology, or virtue ethics, intuitive ethics, or the versions put forth by any religion, they all crack when you take them to sufficient extremes, or apply them to certain edge cases. We should desire coherence and consistency in our beliefs. We should want to get our ideas as watertight and versatile as possible. This is why criticism is so invaluable. But no system has ever been foolproof, and none ever will be.
Moral systems are all frameworks, not programming to be followed slavishly to the letter of the code. Humans aren’t computers. Now, there are certainly massive implications for artificial intelligence of both the soft and hard varieties in terms of how we program them. Which philosophies inform that programming, including when and how they break down, matter enormously. That fascinating and disquieting avenue was left unexplored.
Instead, deBoer operates from the seeming assumption that this is how humans behave, when it clearly isn’t. No sane person follows any moral philosophy to its absolute conclusions in all cases, no matter how radical. Rather, moral philosophies are taken as a guide, a set of general rules, and a direction of ethical orientation. In reality, nearly all people, whether they are philosophically versed or not, live by a moral code that blends many of them together. Nobody lives by a single system, nor should they, because no system is flexible and all-encompassing enough to work in every situation. Life is too complex. The world has too many moving parts. A system’s worth isn’t measured against the impossible standard of perfection, but against the other systems on offer. You want to make the case that utilitarianism is a raft of dangerous horseshit, unfit even as a component of one’s ethics? Don’t show me that it’s flawed, show me that it’s wrong more often than its alternatives.
If you want to say that some effective altruism enthusiasts are obnoxious, or that utilitarianism is, like all moral systems, an incomplete framework with limited applications, then say that. But you can’t dismiss an approach because a few professors did their morbid philosophy seminar schtick where the normies can see. Likewise, you can’t reject an idea because it fails to hold up under every conceivable scenario without any acknowledgment that they all do, and without offering an alternative. Not in any kind of philosophically defensible way, at any rate.
There is a special kind of deception (and self-deception) that only the very intelligent and educated are capable of. It allows them to begin a conversation about people donating money to dirt-poor villagers in malaria-ridden countries to provide them with bed nets, and then to lead you down a winding path of rhetorical smoke and mirrors that purports to connect the dots to the view that raping women in vegetative states is positively moral. When you step back and look at it in totality, there’s a sort of reverse alchemy to it. It’s an artform. A grotesquely dishonest one, but an artform nonetheless. Simpler folks are content to just say “I don’t like them.”
See also: “Global Basic Income: Ending World Poverty Now”
Coincidentally, I also like DeBoer, although I am fiercely anti-communist (having been born and raised in a communist country does that to you). I think you make some good arguments, and I would love to read his answer to your rebuttal.
Good rebuttal. My criticism of effective altruism and utilitarianism is that they arrogantly assume that humans know what action will lead to the greatest good/happiness/least suffering. In reality, good intentions lead to bad outcomes all the time. It's absurd to think that humans can account for all potential variables, including ones they aren't even aware of until they come to fruition.
The example of giving money to a charity in rural Africa instead of a charity in a Western nation makes sense. I think there are uses for effective altruism in one's personal life which are totally fine. But we should be wary of collective, corporate actions with such aims because we don't actually know all the outcomes of our actions.