Mieux Donner

It is always possible to prioritise

25 February 2025, reading time: 17 min.


Introduction

We often hear that it is impossible to measure the impact of certain actions and that there is no way of comparing their effectiveness. This is a tempting idea, as it avoids the need to make difficult trade-offs between different interventions. However, the reality is quite different: there are many ways of prioritising, even in complex contexts.

If we really want to help, we have to ask ourselves how best to do so. Ignoring variations in impact is tantamount to assuming that all interventions are the same, whereas there are considerable differences between them. Even if evaluations are imperfect and sometimes uncertain, they are infinitely more useful than no comparison at all.

In this article, we’ll look at why prioritisation is essential if we are to help more. We will then explore the tools available, both quantitative and non-quantitative, to guide our decisions.

1. Why and how to prioritise

1.1 Prioritising to help more: a necessity

When it comes to helping, the question should not only be whether it is right to do so, but also how to help as much as possible with the resources available. Not all actions are created equal: some interventions are transformative and easy to implement, while others require far greater resources for less impact. When we have the opportunity to act more effectively, we should think of the additional people we could help.

However, our intuitions are often wrong. We tend to favour actions that are visible, immediate or emotionally striking, without really assessing their effectiveness. This leads us to invest time and money in solutions that appear to help, but which in reality may have far less impact than other alternatives.

Not measuring impact is tantamount to assuming that all interventions have the same effect. Yet studies show that, within the same cause, some interventions can be 100 times more effective than others. If we ignore these differences, we risk wasting precious resources and helping less than we could.

Even if we can’t always measure with absolute precision, we know that some approaches are far more effective than others. Prioritising does not mean reducing all decisions to cold numbers, but rather asking the right questions:

  • Where can our action have the most impact?
  • Which interventions are already funded, and which ones lack resources?
  • What is the real cost of achieving the desired impact?

Our aim is not to eliminate causes or impose a single way of acting, but to provide those who give and those who take decisions with concrete benchmarks to help them make more informed choices. In the following sections, we will look at how to measure impact using the quantitative tools available, and how to supplement them with qualitative approaches when quantification reaches its limits.

1.2 Quantitative tools for comparing the impact of interventions

Cost-effectiveness analysis

Cost-effectiveness analysis is a powerful tool for identifying the most effective interventions. It consists of measuring how much an action costs for a certain benefit, for example:

  • How much does it cost to gain a year of life through vitamin A supplementation?
  • How much does an extra year of schooling cost through the distribution of textbooks?
  • How much does it cost to avoid a tonne of CO₂ through different environmental policies?

By using these analyses to compare different projects, we can allocate resources to the interventions that produce the greatest effect per euro invested.

Some interventions lend themselves easily to this approach, such as the distribution of mosquito nets or vaccination programmes, which can be evaluated in terms of disability-adjusted life years (DALYs) averted per euro spent.
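
To make this concrete, here is a minimal sketch, in Python, of what such a comparison might look like. The interventions and figures are invented placeholders for illustration, not real estimates.

```python
# Hypothetical cost-effectiveness comparison: for each intervention, divide
# total cost by estimated DALYs averted, then rank by cost per DALY.
# All numbers below are invented placeholders.

interventions = {
    "mosquito nets": {"cost_eur": 100_000, "dalys_averted": 2_500},
    "vaccination programme": {"cost_eur": 100_000, "dalys_averted": 1_800},
    "textbook distribution": {"cost_eur": 100_000, "dalys_averted": 300},
}

ranked = sorted(interventions.items(),
                key=lambda kv: kv[1]["cost_eur"] / kv[1]["dalys_averted"])

for name, data in ranked:
    cost_per_daly = data["cost_eur"] / data["dalys_averted"]
    print(f"{name}: {cost_per_daly:.0f} € per DALY averted")
```

Even rough numbers like these make the ordering of options explicit, and therefore open to scrutiny and revision.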

Scientific rigour is at the heart of this approach, thanks in particular to:

  • Randomised controlled trials (RCTs): used to isolate the real effect of an intervention by comparing a group that receives it with a control group. Esther Duflo, co-laureate of the Nobel Prize in Economics, has applied this method to subjects far beyond health interventions.
  • Meta-analyses and systematic reviews: combine several studies to obtain a more robust view of the results.

Although cost-effectiveness analysis does not capture everything, it does provide valuable information for prioritising actions and avoiding funding ineffective interventions.

We can quantify more often than we think

Many interventions seem difficult, if not impossible, to measure. Yet even in complex areas, methods exist for obtaining useful estimates. In development economics, for example, very different interventions can be compared using models based on empirical data.

Let’s take the example of advocacy for tobacco taxation. It would be impossible to organise a large-scale randomised trial to test its effectiveness. However, by combining:

  • Historical data on the impact of taxes on consumption,
  • Economic models simulating the effect of new taxes,
  • Epidemiological studies on tobacco-related diseases,
  • Estimates of the campaign’s chance of success, based on past advocacy efforts,

we can estimate the expected number of life years gained and relate it to the advocacy budget, in order to compare the campaign’s effectiveness with other public health strategies.
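
As a rough illustration of this kind of model, the sketch below multiplies a hypothetical chance of success by the life years gained if the tax passes. Every figure is a made-up placeholder, not an actual estimate.

```python
# Hedged expected-value sketch for a tobacco-tax advocacy campaign.
# All inputs are hypothetical placeholders.

p_success = 0.10                 # chance the advocacy changes policy
life_years_if_success = 500_000  # life years gained if the tax passes (from models)
budget_eur = 2_000_000           # total advocacy budget

expected_life_years = p_success * life_years_if_success
cost_per_life_year = budget_eur / expected_life_years

print(f"Expected life years gained: {expected_life_years:,.0f}")
print(f"Expected cost per life year: {cost_per_life_year:.0f} €")
```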

The key is not to confuse imprecision with the impossibility of comparison. Even when the figures are not exact, they remain essential guides for directing resources where they have the greatest impact.

1.3 Why we shouldn't give up on prioritisation just because it's difficult

Not seeking to assess the impact of an action implicitly amounts to assuming that all interventions are equal. Yet we know that some have a drastically greater impact than others. Ignoring these differences means running the risk of inefficiently spending resources that could save more lives or improve well-being even more. The aim is not to obtain perfect figures, but to use the best available data to guide our decisions in an informed way.

Not trying to prioritise means making blind decisions and risking underfunding high-impact interventions. Even when we don’t have precise figures, we can use the best methods available to ensure that our resources are not allocated haphazardly.

1.4 Considering interventions that are more difficult to evaluate

Areas where impact measurement is more uncertain, such as systemic reforms or public policies, should not be ignored on the grounds that they are difficult to evaluate. It is true that some interventions, such as the distribution of mosquito nets or vitamin A supplementation, are easier to quantify than others, such as the fight against corruption or the defence of human rights. That doesn’t mean we shouldn’t consider them: we’re trying to help as best we can, and we shouldn’t limit ourselves to what’s easy to measure.

When it is difficult to quantify an impact with acceptable precision, this does not mean that it is impossible to prioritise between interventions. In many cases, it is possible to establish orders of magnitude, compare different approaches and adjust our methods in line with new evidence. Even when direct quantification is complicated, other tools exist to guide our choices.

2. Non-quantitative tools for prioritisation

Here is an overview of the key methods that can help guide our decisions, even when quantitative approaches are not enough to make choices.

2.1 Choosing the right tools

When it comes to prioritising complex decisions, it’s essential to recognise that each assessment tool has its own strengths, weaknesses and areas of application. Some tools are quick but less accurate, others are more comprehensive but take longer to implement. Each tool has its own trade-offs to take into account.

Some criteria to consider when evaluating a tool:

  • Speed: Some tools allow you to obtain an initial estimate in a few minutes (e.g. prioritisation heuristics), while others require days or weeks of analysis (e.g. detailed cost-effectiveness analyses).
  • Applicability: Some tools are suitable for a wide range of situations (e.g. weighted factor models), while others are specific to a given problem (e.g. statistical analyses of a specific medical intervention).
  • Accuracy and reliability: One tool may provide a rapid but approximate estimate, while another may be more accurate but sensitive to bias or underlying assumptions.

By combining several tools with complementary characteristics, we can limit assessment errors and obtain a more balanced view. For example, a cost-effectiveness analysis can be enhanced by expert feedback and weighted factor models.

2.2 The progressive iteration approach

Progressive iteration can be applied using a number of different methods. It involves starting with a superficial analysis of a broad set of options before focusing resources on the best alternatives. Rather than trying to analyse a problem in depth straight away, the idea is to gradually refine the assessment in several stages.

Example of application:

  1. Rapid filtering: Identification of dozens of intervention ideas and elimination of those that seem unpromising according to basic criteria (e.g. feasibility, potential impact).
  2. Intermediate analysis: More in-depth research on a selection of options, collecting additional data and seeking expert advice.
  3. In-depth evaluation: For the few remaining options, advanced tools such as detailed cost-effectiveness analyses or pilot testing are applied.

This approach allows resources to be concentrated on the most promising interventions, while reducing the risk of ruling out a good option too early for lack of information.
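
Here is a toy sketch of this funnel. The scoring functions are stubbed with random numbers standing in for real analysis; in practice each stage would use progressively costlier methods.

```python
import random

random.seed(0)  # for reproducibility

ideas = [f"idea {i}" for i in range(1, 31)]

# Stage 1 - rapid filtering: a cheap heuristic score for every idea.
quick_scores = {idea: random.random() for idea in ideas}
shortlist = sorted(ideas, key=quick_scores.get, reverse=True)[:8]

# Stage 2 - intermediate analysis: a costlier, more careful score, shortlist only.
deep_scores = {idea: random.random() for idea in shortlist}
finalists = sorted(shortlist, key=deep_scores.get, reverse=True)[:3]

# Stage 3 - in-depth evaluation (e.g. a detailed cost-effectiveness analysis
# or pilot test) would now be reserved for these three finalists.
print(finalists)
```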

2.3 Rational decision-making tools

When choices are complex and involve several factors, we can use systematic methods to structure our reasoning:

  • Bayesian methods: help us update our beliefs as new data arrives, by comparing how plausible an observation is under each competing hypothesis (see the sketch after this list).
  • Probabilistic approaches: used to evaluate actions with uncertain outcomes (e.g. scientific research, advocacy).
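
As a minimal sketch of a Bayesian update, with invented probabilities: suppose a pilot study comes back positive, and we ask how much that should shift our belief that the intervention works.

```python
# Bayes' rule: P(works | positive) = P(positive | works) * P(works) / P(positive)
# All probabilities below are invented for illustration.

prior_works = 0.30      # prior belief that the intervention is effective
p_pos_if_works = 0.80   # chance of a positive pilot result if it really works
p_pos_if_not = 0.20     # chance of a positive pilot result if it doesn't

p_pos = p_pos_if_works * prior_works + p_pos_if_not * (1 - prior_works)
posterior_works = p_pos_if_works * prior_works / p_pos

print(f"Belief after a positive pilot: {posterior_works:.2f}")  # ~0.63
```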

2.4 Counterfactual reasoning: what would happen without our action

One of the most fundamental tools in effective decision-making is counterfactual analysis. This involves not simply assessing the apparent impact of an intervention, but asking the following question:

What would have happened if this intervention had not taken place?

For example, if an organisation funds bursaries for bright students in a developing country, we need to ask what impact a bursary has on education and whether these students could have obtained one by other means (a government programme, other NGOs, private institutions). If so, the real impact of the programme is smaller than it first appears.
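
A minimal sketch of this discounting, with invented numbers: the apparent impact is reduced by the probability that the students would have been funded anyway.

```python
# Counterfactual adjustment for the bursary example (hypothetical figures).

apparent_impact = 3.0    # e.g. extra years of schooling per bursary
p_funded_anyway = 0.60   # chance another funder would have stepped in

counterfactual_impact = apparent_impact * (1 - p_funded_anyway)
print(f"Counterfactual impact: {counterfactual_impact:.1f} extra years of schooling")
```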

Counterfactual reasoning thus makes it possible to avoid overestimating the effects of an action and to identify interventions that make a real difference.

2.5 Criteria: Magnitude, Potential for Improvement and Neglected Character

The Magnitude, Potential for Improvement and Neglected Character framework is a useful model for prioritising causes and interventions, based on three key criteria:

  1. Magnitude: does the problem the intervention seeks to address affect a large number of people? How severely are they affected?
  2. Potential for improvement: how much of the problem can realistically be solved?
  3. Neglected character: how many resources are already dedicated to this problem?

This model makes it possible to concentrate efforts on problems where an intervention can have the greatest impact. For example, a problem that is very serious but already attracts substantial attention and funding (e.g. cancer) might be less of a priority than an equally serious but neglected one (e.g. vitamin A deficiency in certain developing countries).
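
One common way to combine the three criteria, an assumption of this sketch rather than a rule of the framework itself, is to score each on a simple scale and multiply. The scores below are invented for illustration.

```python
# Toy prioritisation scores on a 1-10 scale; a higher product suggests
# a higher priority. All scores are invented.

problems = {
    "cancer (widely funded)": {"magnitude": 9, "improvement": 4, "neglected": 2},
    "vitamin A deficiency": {"magnitude": 7, "improvement": 8, "neglected": 8},
}

for name, s in problems.items():
    priority = s["magnitude"] * s["improvement"] * s["neglected"]
    print(f"{name}: priority score {priority}")
```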

2.6 Weighted Factor Models

Weighted factor models (WFMs) are tools for integrating several criteria into a decision. Rather than evaluating an intervention according to a single indicator (e.g. cost per life saved), this model makes it possible to combine several dimensions to obtain a more comprehensive assessment.

Operating principle:

  1. Define a list of criteria (e.g. cost, effectiveness, feasibility, social acceptability).
  2. Assign a weight to each criterion according to its relative importance.
  3. Score each option according to these criteria.
  4. Calculate an overall score by summing each score multiplied by its weight.

Example of a weighted factor model (Charity Entrepreneurship, 2019)
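
Here is a minimal sketch of the same idea in code. The criteria, weights and scores are illustrative inventions, not values taken from the Charity Entrepreneurship model.

```python
# Weighted factor model: overall score = sum of (weight * criterion score).
# Weights sum to 1; all values below are invented for illustration.

weights = {"cost": 0.3, "effectiveness": 0.4, "feasibility": 0.2, "acceptability": 0.1}

options = {
    "intervention A": {"cost": 7, "effectiveness": 9, "feasibility": 6, "acceptability": 8},
    "intervention B": {"cost": 9, "effectiveness": 5, "feasibility": 8, "acceptability": 7},
}

for name, scores in options.items():
    total = sum(weights[c] * scores[c] for c in weights)
    print(f"{name}: weighted score {total:.1f}")
```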

2.7 Independent expertise and specialist consensus

When data is limited, it is often useful to draw on the advice of specialists with in-depth knowledge of a field. However, to avoid individual and subjective bias, several methodologies can be used to structure this consultation of experts:

  • The Delphi approach: an iterative process in which several experts give their estimates independently, before comparing their points of view and gradually refining their judgements. This limits groupthink and leads to a more robust consensus.
  • Subjective Bayesian analysis: a method in which confidence levels are assigned to different hypotheses on the basis of available knowledge, updating these beliefs as new information emerges.
  • Qualitative meta-analyses: combining several studies and expert testimonies to identify common trends and avoid relying on a single source.

In cases where decisions are complex and lack precise data, these approaches help to reduce uncertainty and make informed decisions based on the best available knowledge.
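
As a simple sketch of the Delphi mechanics, with invented estimates: experts answer independently, see the group aggregate, then revise in a second round.

```python
# Two Delphi rounds with a median aggregate; all estimates are hypothetical.
from statistics import median

round_1 = {"expert A": 120, "expert B": 300, "expert C": 180}  # e.g. € per outcome
print(f"Round 1 median: {median(round_1.values())} €")

# After seeing the group median and each other's reasoning, experts revise:
round_2 = {"expert A": 160, "expert B": 220, "expert C": 190}
print(f"Round 2 median: {median(round_2.values())} €")
```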

2.8 The study of counter-examples and past failures

A common mistake in impact assessment is to focus solely on successes. Yet analysing what didn’t work helps to avoid repeating costly mistakes.

For example:

  • The Scared Straight programme, which aimed to reduce delinquency by exposing young people to the prison environment, not only proved ineffective but increased crime.
  • PlayPumps, water pumps powered by children at play, were widely funded but proved inefficient and impractical for local communities.

By studying these failures, we can identify biases and design errors that could have been avoided, and improve future interventions.

2.9 Making the most of information: investing in knowledge

In the face of uncertainty, one of the best strategies may be to invest in gathering information before taking large-scale action. Rather than massively funding an intervention with uncertain effects, it may be more effective to start with targeted experiments:

  • Fund a pilot study to test a new approach.
  • Launch a small-scale project and measure its effects before extending it.
  • Explore alternative evaluation methods, such as interviews with beneficiaries or comparisons with similar cases.

This approach helps to limit risk while maximising learning, so that future decisions can be made in a more informed way.
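
To see why paying for information can beat acting immediately, here is a deliberately simplified sketch that assumes the pilot fully reveals whether the intervention works. All figures are invented.

```python
# Value-of-information sketch: act now vs run a pilot first (hypothetical figures).

p_works = 0.5                 # current belief that the intervention works
benefit_if_works = 1_000_000  # benefit (in €-equivalents) of scaling when it works
scale_cost = 400_000
pilot_cost = 50_000

# Option 1: fund at scale immediately, whatever the truth turns out to be.
ev_act_now = p_works * benefit_if_works - scale_cost

# Option 2: run the pilot (assumed perfectly informative), scale only on success.
ev_pilot_first = p_works * (benefit_if_works - scale_cost) - pilot_cost

print(f"EV, act now:     {ev_act_now:,.0f} €")      # 100,000 €
print(f"EV, pilot first: {ev_pilot_first:,.0f} €")  # 250,000 €
```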

3. Prioritisation does not mean quantifying everything

Many of the debates surrounding impact assessment stem from a misunderstanding: just because we want to prioritise more effectively does not mean we want to quantify everything. These tools show that it is possible to prioritise even in the absence of precise figures. Contrary to the image of a cold, purely numerical utilitarianism, prioritisation can and must incorporate qualitative and pragmatic methods.

  • Numbers should not replace reasoning, but illuminate it.
  • When solid data is available, it should be used.
  • When data is limited, there are qualitative tools to structure the decision.

The important thing is not to have absolute certainty, but to adopt a posture of humility and continuous improvement: testing, learning, adjusting, and always seeking to do better for those we want to help. The aim is not to have artificial precision, but to avoid total arbitrariness and the illusion that all actions are equal.

Compare to help

It’s easy to criticise quantification by reducing it to an obsession with numbers. But in reality, the best prioritisation processes combine quantitative and qualitative approaches to make more informed decisions.

Even in complex areas, there are tools available to assess and compare the impact of interventions. Rather than giving up in the face of uncertainty, we need to use the best available methods to guide our decisions. Measuring impact should not be a bureaucratic exercise: it is a powerful tool for doing better and helping more.

When it comes to helping people as effectively as possible, a frequent criticism targets the quantitative approach: some see the use of numerical indicators as a form of reductionism that neglects human realities. Yet comparing impact is essential if we are to meet the needs of those who suffer most.

Rather than falling into a naive utilitarianism that relies solely on raw figures, we have at our disposal qualitative and conceptual approaches that allow us to integrate the complexity of social and humanitarian interventions. This article explores these methods and shows how they complement quantitative tools to provide the best possible help.
