Coincidentally, I also like DeBoer, although I am fiercely anti-communist (having been born and raised in a communist country does that to you). I think you make some good arguments, and I would love to read his answer to your rebuttal.
Good rebuttal. My criticism of effective altruism and utilitarianism is that it arrogantly assumes that humans know what action will lead to the greatest good/happiness/lack of suffering. In reality, good intentions lead to bad outcomes all the time. It's absurd to think that humans can compile all potential variables, including ones they aren't even aware of until they come to fruition.
The example of giving money to a charity in rural Africa instead of a charity in a Western nation makes sense. I think there are uses for effective altruism in one's personal life which are totally fine. But we should be wary of collective, corporate actions with such aims because we don't actually know all the outcomes of our actions.
Fair points. The inherent incompleteness of human knowledge is a problem for any moral system. The reason I don't consider myself a utilitarian (or an anything, really) is that they all have these defects. But those defects don't mean that there's no value to such thinking. It just means it can't be your sole moral operating system.
You should probably look further into communist critiques of capitalism to have a better understanding of why communists are sceptical about charity.
People should read deBoer's piece on EA. He is absolutely not purporting to disprove the idea that "prioritizing life-saving developing world charities is better than less impactful domestic ones." He is complaining about the EA-as-identity crowd's obsession with being 'the cleverest boy in the room'. This 'novelty problem' is a problem because the most novel philanthropic project may not in fact be the most effective; the most effective may be some mundane project like purchasing mosquito netting. It's not helpful to confuse this argument with his separate critique of utilitarianism.
It's a mistake to assume that the greatest good can be served by alleviating human suffering. Suffering is the fundamental condition of life. So far, all life requires our planet and a relatively complex balance of species that live on it. Also, so far, all life has only served one purpose: to produce more life. Our particular challenge is to succeed as the catalyst that extends life beyond the planet, that it may exist further into the future than our particular place of origin.
I'd be curious to know your thoughts on Buddhism, given its focus on liberation from suffering.
If you look at suffering as feedback and our actions in the world as modulated by that feedback, Buddhism and just about any religious practice is a method for attenuating the emotional noise generated by our experience. Like any control system, there are optimal feedback settings and they will vary by individual and experience over time.
Even if suffering is a "fundamental condition of life," we can still partially alleviate suffering. If my toddler's hand is on a hot stove, I can alleviate suffering by moving it away. This is all that the utilitarian need assume.
There is more to utilitarianism than the elimination of preventable suffering. We also want to promote positive mental states to the extent this is within our power.
The descriptive fact that all biological life is directed towards reproduction does not entail that all life *should* be devoted to reproduction. This is the naturalistic fallacy: https://ethics.org.au/ethics-explainer-naturalistic-fallacy/
As you said, suffering as a fundamental condition is descriptive, not prescriptive. It is by suffering that we are compelled to act in the world. We have differential material circumstances due mainly to geographic variations as "initial conditions" for human history, i.e., luck. Effective altruism seems to function mainly as a mechanism for material redistribution. While that might arguably be a moral good, it does have costs. I personally don't favor it because it engages high-cost cognitive resources (rationality) at the expense of interpersonal empathy (for which we are optimized by evolution). It also further perpetuates disconnectedness, which is epidemic.
As many thinkers have pointed out, 'interpersonal empathy' is a very bad guide to moral action. It is what prompts people to care deeply about personal-scale emergencies like a little girl trapped in a well, while remaining indifferent to large-scale emergencies like mass famines, climate change, etc. It is probably true that these glitches of moral perspective are evolved, but to celebrate them for that reason is again to fall prey to the naturalistic fallacy.
I think the difference in perspectives here may be that you are reading my remarks as claims that naturalistic behavior is prescriptive or moral. I am not. Each person must follow their own heart in these matters, keeping an open mind to the possibility of finding better ways and discovering their own mistakes. My point is mainly that morality and charity have costs and effective altruism is a complex endeavor which has many opportunities for missing such costs.
Take, for instance, the many clothing donation programs that are contributing to ecological problems in Africa and suppressing the growth of native textile manufacturing industries there. Textiles being one of the historical stepping-stone industries that can raise a society out of poverty, there is a non-zero chance that this type of charity is net-harmful. (See, e.g., https://mashable.com/article/how-to-ethically-donate-clothes.)
Creating and operating policies at national and international scales involves tackling information and complexity challenges that are rarely acknowledged. And, as I mentioned in an earlier comment, it disengages the interpersonal relationship cognitive machinery and mainly engages the logical problem-solving brain. While empathy and social interaction may not be the best guides to moral action, there is a personal and societal cost when we disengage from interpersonal social behaviors en masse. I suspect that the cost may be very high.
It is true that effective altruism involves complex utility calculations, and if those calculations are botched, it is possible to do more harm than good. But a central component of the effective altruist project is to demystify these questions by having experts curate an ongoing list of the world's most effective charities (https://www.thelifeyoucansave.org). Another tenet of effective altruism is that your philanthropy should be automated, i.e., by setting up your bank account to automatically donate 10% of your income to a portfolio of highly effective charities. These are direct solutions to the problems you mention.
Who said anything about UNRWA?