In Part One of my blog post on Singer’s “The Most Good You Can Do”, I outlined the main components of “effective altruism” and discussed my criticism that it does not sufficiently address wider political and structural questions that surround development interventions.
In this second and concluding part, I will discuss two more concerns I have with the approach: a narrow definition of “good”, and the limited situations that one can apply the effective altruism approach.
Singer’s driving imperative is ensuring that donations of money have the best impact – that they do the most good. Throughout his book he demonstrates that it is possible to rank competing goods; that donating to projects that save lives or cure blindness does more good than funding a new wing of a museum or providing less-essential support to more people.
At the core of this is the idea that rational thought, rather than emotional appeal, should be the foundation of any philanthropic decision. This informs the way in which Singer defines “good” interventions: they are those that deliver well on their (quantifiable) aims, and that are the most cost-effective at doing so.
I don’t take any issue with the idea that a good intervention is one that sets out clear goals and then delivers on them. My concern is, however, that a solely quantitative focus can lead to an oversimplified and narrowly defined idea of “good”.
For example, one could judge that Intervention A is better than Intervention B because it vaccinates the same number of children for half the price. Immediately, this sounds like Intervention A is delivering more good, on the above definition. But there are wider questions that these criteria do not answer: are the interventions linked to wider education? Is the government’s healthcare infrastructure adequate for follow-ups? What is the effect of an international body delivering this rather than a local or regional one?
A valid response to this worry is that effective altruism doesn’t take into account just cost. If an intervention is cost-effective but has serious problems in other areas (such as taking a siloed approach to the issue), it will not achieve results as good as those of more holistic projects, and so a discerning philanthropist will take a wider set of metrics into account.
However, as we have seen with the tyranny of the “how much are your overheads?” question in the charity sector, it is equally likely that a simple metric of “cost-effectiveness” will take over in the minds of many philanthropists, which may crowd out many innovative, less quantifiable interventions. This is not a necessary outcome of using cost as a core metric, but it is in my opinion a significant practical implication.
Furthermore, effective altruism’s rational good becomes much harder to define once you move away from objective measures such as price-per-treatment. It all becomes a lot murkier, and other values begin to play a role. Is a more expensive state-based approach preferable to a less costly programme from a multinational pharmaceutical company? It all depends on your view of the wider problems – and with that, the debate about what is “good” becomes a lot more complex, and links effective altruism back to my initial concerns about its apolitical approach to charity.
The above critique depends greatly on what kind of intervention is taking place. When talking about short-term delivery of emergency supplies, cost-effectiveness may correlate strongly with the number of people helped and therefore the number of lives saved. Whilst the wider questions on the structure and politics of interventions are valid, in life-or-death situations many of the above points probably matter far less than the ability to save the most people. For longer-term emergencies, however, the murkiness and wider picture become much more apparent. It might be the case, therefore, that the validity of effective altruism’s approach varies with the scope of the problems it’s trying to address.
The scope of effective altruism’s effectiveness
When facing life-or-death situations, where the main question is how we can mobilise effectively to save as many lives as possible, Singer’s effective altruism has a lot of merit. But as soon as you take a wider view, and touch upon interventions whose impacts are more ambiguously measured and distributed, the above problem of defining the good is felt much more acutely.
Using education as an example can highlight such limitations. Access to education can promote a wide range of goods, from better employment prospects (which link to nutrition and health), to more informed family planning (reducing maternal mortality, for example) and more equality between genders. Many of these things have a direct impact on the more life-or-death issues such as infant mortality and nutrition. However, measuring these impacts is extremely difficult, and even with more direct outcomes deciding on a metric is surprisingly tricky. The Millennium Development Goals measured their education targets via school attendance, but this doesn’t really relate to the standard of education that children obtain. How do you measure motivation, inspiration and creative thinking – crucial to educating the generation who will create their own solutions to the world’s problems? Additionally, a focus on school-children ignores the need for vocational and adult education that could be transformative for so many societies.
So, given the above complexities, how is the most effective intervention measured? Cost-per-pupil suddenly seems woefully inadequate. Test scores rely on a uniformity that doesn’t exist within or between countries, and depend as much on healthcare and electricity provision as on direct educational interventions.
I propose that a purely quantitative approach to this kind of intervention is significantly limited. I would also argue that education interventions are certainly not seen as frivolous in the way that Singer views liberal arts funding, so effective altruism needs to address these complexities if it is to be applied to these vital projects.
Linked to this is Singer’s promotion of randomised trials. Again, trying to get more objective information on how projects are succeeding is definitely a good thing, but testing “solutions” (as Singer’s favoured GiveWell does) as opposed to individual projects assumes that projects can be packaged and dropped anywhere without regard to local politics, social context or history. Furthermore, if meaningful impacts are to be measured from less quantifiable interventions (such as those I outline above), there are big difficulties with comparing the results of randomised trials as an approach.
None of my criticisms mean that effective altruism is incapable of being applied to more complex and political problems. I am a huge advocate of some of the approach’s core ideas: that people should be giving more to address core development problems, and that charitable giving needs to be far more intelligent.
What I have hoped to do with these two blog posts is highlight what I see to be three big holes within the approach that limit its effectiveness and validity. I’m sure that many effective altruists are aware of these limits and are addressing them, but it is crucial that the approach evolves a more nuanced worldview that accounts for politics, hard-to-quantify goods and a distinction between interventions of different scopes.