Tools and Methods for Evaluating the Efficiency of Development Interventions

Executive Summary

This report investigates tools and methods for assessing aid efficiency. It explains concepts of efficiency and efficiency analysis and presents a catalogue of 15 methods that can be used to assess the efficiency of aid interventions. Each method is described and characterised. Several general observations and recommendations conclude the report. 

Motivation

The overall study was motivated by the apparent gap in evaluation studies between expected and delivered results in terms of efficiency analysis of aid interventions. 

  • On the one hand, efficiency assessment is a principal aid evaluation criterion, required by bilateral and multilateral policies;
  • On the other hand, several reports and observations reveal that donors and development agencies conduct efficiency analyses with insufficient frequency and quality.

This report’s catalogue of methods for conducting efficiency analyses provides assistance in closing this gap. 

Definitions of Efficiency

The OECD Development Assistance Committee defines efficiency in terms of transformation of inputs into results. Similarly, welfare economists sometimes define efficiency based on the transformation of costs into benefits as measured, for example, by benefit-cost ratios. In both cases, transformation efficiency is measured by how economically inputs or costs are transformed into results or benefits. 

In welfare economics, costs and benefits are understood in general terms and include social and private, direct and indirect, and tangible and intangible contributions. 

If all inputs and results are taken into account, transformation efficiency is sometimes referred to as allocation efficiency. If only results at the output level are taken into account, the related transformation efficiency measure is called production efficiency. 

Transformation efficiency is often measured by ratios. In the case of allocation efficiency, benefit-cost ratios, cost-effectiveness ratios, cost-utility ratios and internal and economic rates of return are used. In the case of production efficiency, unit costs or other partial efficiency indicators are used. 
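
As an illustration of these ratio measures, the following sketch computes a benefit-cost ratio, a cost-effectiveness ratio, and a unit cost for a single hypothetical intervention. All figures are invented for demonstration purposes only.

```python
# Illustrative efficiency ratios for a hypothetical intervention.
# All figures are invented for demonstration.

costs = 100_000.0      # total costs in monetary units
benefits = 160_000.0   # total monetised benefits
effect_units = 4_000   # e.g. additional school years achieved
outputs = 500          # e.g. classrooms built

# Allocation efficiency: how economically costs are transformed into benefits.
benefit_cost_ratio = benefits / costs        # 1.6

# Cost-effectiveness: cost per unit of effect achieved.
cost_effectiveness = costs / effect_units    # 25.0 per effect unit

# Production efficiency: cost per unit of output.
unit_cost = costs / outputs                  # 200.0 per output

print(benefit_cost_ratio, cost_effectiveness, unit_cost)
```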

Apart from accounts of transformation efficiency, welfare economists also describe efficiency based on optimisation principles. For example, Pareto improvements increase the individual welfare of some people without making others worse off. Kaldor-Hicks improvements extend this principle and allow for special types of trade-offs, i.e. the compensation of welfare losses of one person by welfare gains of another person. 

The optimisation rule most frequently applied in welfare economics is based on the concept of net benefits, i.e. the difference between benefits and costs. Net benefits describe the overall impact an intervention has on welfare, and optimisation efficiency would then be measured by net benefits. 
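
Once individual welfare changes are known, these optimisation principles can be checked mechanically. The sketch below, with invented welfare figures, tests whether a change is a Pareto improvement and whether it is a Kaldor-Hicks improvement (positive net benefits):

```python
# Hypothetical welfare changes (one entry per person) caused by an intervention.
welfare_changes = [5.0, 2.0, -1.0, 3.0]

# Pareto improvement: some people gain and no one is made worse off.
is_pareto = all(change >= 0 for change in welfare_changes)  # False: one person loses

# Kaldor-Hicks improvement: the winners' gains could compensate the losers,
# i.e. net benefits (the sum of all welfare changes) are positive.
net_benefits = sum(welfare_changes)  # 9.0
is_kaldor_hicks = net_benefits > 0   # True

print(is_pareto, is_kaldor_hicks, net_benefits)
```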

In decision analysis – a different field of academic and applied research – the concept of net benefits is generalised to the concept of utility that measures a decision-maker’s relative preferences with regard to different options and their consequences. 

In both cases, efficiency is measured by the degree to which optimisation rules are fulfilled. We refer to efficiency defined in this way as optimisation efficiency. 

The distinction between optimisation and transformation efficiency appears superfluous at first: aren’t these simply two different measures for the same thing? Sometimes, they are, but often, they are not. For example, choosing the most efficient development interventions based on transformation efficiency information may lead to different results than selecting interventions based on their optimisation efficiency. 
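
A minimal numerical sketch of this divergence, with invented figures: intervention A has the better benefit-cost ratio (transformation efficiency), while B yields the larger net benefits (optimisation efficiency), so the two efficiency concepts rank the interventions differently.

```python
# Two hypothetical interventions; all figures invented for illustration.
interventions = {
    "A": {"costs": 10.0, "benefits": 30.0},    # BCR 3.0, net benefits 20.0
    "B": {"costs": 100.0, "benefits": 200.0},  # BCR 2.0, net benefits 100.0
}

def bcr(i):
    """Transformation efficiency: benefit-cost ratio."""
    return i["benefits"] / i["costs"]

def net_benefits(i):
    """Optimisation efficiency: benefits minus costs."""
    return i["benefits"] - i["costs"]

best_by_bcr = max(interventions, key=lambda k: bcr(interventions[k]))
best_by_net = max(interventions, key=lambda k: net_benefits(interventions[k]))

print(best_by_bcr, best_by_net)  # A wins on the ratio, B on net benefits
```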

In this report, we consider efficiency assessment methods based on transformation efficiency as well as on optimisation efficiency.

We also include methods that provide only partial accounts of efficiency, in the sense that they cover only a subset of inputs and results or costs and benefits. While many methods require quantitative data, we also consider approaches that deal with efficiency in entirely qualitative ways. 

Quality of Efficiency Analysis

The measurement of efficiency may require special skills and experience. Carol Weiss, a well-known evaluation expert, describes efficiency analysis as a specialized craft that few evaluators have mastered. All methods presented in this report face principal and practical challenges and limitations that must be considered when applying them. 

Principal challenges and limitations depend on the approaches used. For example, for methods based on welfare changes to society, results depend on the specific social welfare function chosen or on the model used for estimating and aggregating individual welfare. 

Practical challenges and limitations arise from the approximations made by evaluators within specific methods, as well as from the quality and availability of data.

In addition, basic requirements for sound evaluation must be met. These include, for example, the measurement of net effects against a reference scenario. Otherwise, effects caused by an intervention are conflated with other changes that the intervention did not cause. 
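
The role of the reference scenario can be shown with invented figures: the gross change since the baseline mixes the intervention's effect with other changes, while the net effect is measured against an estimated counterfactual.

```python
# Invented figures, e.g. average household income in a target region.
baseline = 1_000.0        # outcome before the intervention
observed = 1_250.0        # outcome measured with the intervention
counterfactual = 1_100.0  # estimated reference scenario: outcome without it

# Gross change mixes the intervention's effect with other changes
# (economic growth, other programmes, weather, ...).
gross_change = observed - baseline        # 250.0

# Net effect is the part attributable to the intervention itself.
net_effect = observed - counterfactual    # 150.0

print(gross_change, net_effect)
```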

Efficiency as Rationale for Decision-Making

Efficiency is a powerful concept. At least in theory, welfare can be maximised based on efficiency information alone and efficiency would therefore represent the most important criterion in appraisals and evaluations. 

In practice, however, challenges such as the limited scope of efficiency analysis, simplifications and approximations reduce its potential.

Therefore, efficiency analysis usually does not dictate decisions but can provide crucial information for decision-makers. Even without accurate efficiency-related information, the concept of efficiency remains important for informing a welfare-maximising approach to aid. 

Analysis Perspectives

As with other evaluation methods, the results of efficiency analyses depend on the analysis perspective. Important analysis perspectives are those of the entire aid recipient society or of a part of that society, for example the beneficiaries of an aid intervention. 

However, other, more restricted analysis perspectives can be useful as well:

  • Decision-making analysis is usually based on the perspective of a single person, the decision-maker. He or she takes into account other analysis results and the opinions and preferences of others, but nevertheless evaluates options from his or her individual perspective.
  • Analysis perspectives of private sector entities, e.g. companies, are useful for determining the viability of a business element embedded in an aid intervention.

Probably the most comprehensive analysis perspective covers not only the aid recipient society but also elements of the society providing aid, for example by explicitly considering the donor agency as part of the analysis. In this way, the operational efficiency of that agency is included in the assessment of the aid interventions’ efficiency.

Unless the analysis perspective is described, explicitly or implicitly, the results of efficiency analyses are not only difficult to interpret but also cannot be compared with each other, which leads to confusion. 

Criteria for Characterizing Efficiency Analysis Methods

In order to describe and assess efficiency analysis methods, we consider their analytic power as well as the analysis requirements in terms of data, time and skills. 

In our definition, analytic power is essentially determined by the analysis level of a method. We differentiate between three levels of analysis:

  • Level 2 analysis, the most potent, is capable of assessing the efficiency of an aid intervention so that it can be compared with alternatives or benchmarks.
  • Level 1 analysis is capable of identifying the potential for efficiency improvements within aid interventions. Level 1 and 2 analyses have complementary functions: while level 2 analyses support the selection of interventions, level 1 analyses primarily help to improve interventions operationally.
  • Finally, level 0 analysis is entirely descriptive and can usually not produce well-founded recommendations. For this level of analysis, recognising the limitations of the findings is critical to avoid proceeding with actions under a misconception of the evidence basis.

The assessment of analytic power is complemented by the degree to which methods are well-defined, the degree to which different evaluators can be expected to produce the same analysis results (if all other things remain equal), and the way stakeholders are involved in the analysis.

In addition, methods are characterised by their data, time and skill requirements:

  • Data requirements are assessed both by the type and the origin of data.
  • Basic data types are qualitative information and quantitative data. The latter is further subdivided into financial and non-financial (numerical) data. Some methods express costs and benefits originally measured in any of these three data types in monetary units. We use the additional qualifier monetarisable data to refer to this case.
  • With data origin, we describe what level of an intervention’s results chain the data stems from, i.e. input, output or outcome and impact level data.
  • Time requirements for conducting efficiency analyses are measured in terms of working times for both the evaluator and stakeholders.
  • Finally, skill requirements indicate whether the skills needed for the analysis exceed what we consider basic evaluation skills.
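
As an illustration, these characterisation criteria could be captured in a simple data structure. The field names and the example profile below are our own invention for demonstration, not taken from the report's catalogue:

```python
from dataclasses import dataclass, field

@dataclass
class MethodProfile:
    """Sketch of a characterisation record for an efficiency analysis method."""
    name: str
    analysis_level: int               # 0 (descriptive) to 2 (comparative)
    data_types: list = field(default_factory=list)   # "financial", "non-financial", "qualitative"
    data_origin: list = field(default_factory=list)  # "input", "output", "outcome/impact"
    evaluator_days: float = 0.0       # rough working-time requirement
    needs_advanced_skills: bool = False  # beyond basic evaluation skills

# Hypothetical profile; the values are illustrative assumptions only.
cba = MethodProfile(
    name="Cost-Benefit Analysis",
    analysis_level=2,
    data_types=["financial", "monetarisable"],
    data_origin=["input", "outcome/impact"],
    evaluator_days=20.0,
    needs_advanced_skills=True,
)
print(cba.name, cba.analysis_level)
```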

Overview of Methods

Overall, 15 distinct analysis methods have been identified and are described in this report. The following table provides an overview of these methods, ordered according to the analysis level and the degree to which methods were known by the experts interviewed for this study. 

Conclusions

Based on the findings of this report, we draw four general conclusions. The first two illustrate how efficiency analyses can be applied more widely: 

  • First, the application potential of efficiency analysis methodology is not exhausted, both in terms of frequency and quality.

Regarding the frequency of application, we have identified several methods that are little known and sometimes not well-documented. They are applicable in circumstances where more established methods are not suitable. In other circumstances, these methods can complement better-known methods. Examples are Cost-Utility Analyses, Methods for Multiple-Attribute Decision-Making, and more pragmatic methods such as comparative ratings by stakeholders and the Follow the Money approach.

Regarding quality, various reports indicate that efficiency analysis is often applied with insufficient rigour.

To successfully conduct an efficiency analysis, advanced analytic skills may be required, both for the evaluator conducting the analysis and for those commissioning an evaluation or an appraisal. Without appropriate skills, impractical or inappropriate methodology for efficiency analysis may be selected and guidance and quality control may be weak.

In addition, for some methods, the evaluation design needs to be changed from vertical assessments that evaluate several criteria for a single intervention to horizontal assessments that focus on the efficiency criterion across several comparable interventions.

  • Second, since some methods described in this report are far from being fully developed, considerable potential exists in their further development and, possibly, also in developing entirely new approaches.

In our understanding, even if the frequency and quality of efficiency analysis are increased in the suggested ways, expectations reflected in evaluation guidelines and national and multilateral policy documents will not be entirely satisfied. We therefore also recommend clarifying and specifying expectations in two ways:

  • Third, expectations regarding efficiency analysis need to be adapted to what present and near-future methodology can realistically accomplish. This does not necessarily imply lowering expectations but rather clearly specifying the purpose for conducting efficiency analyses. The analysis levels introduced in this report allow for such a specification.

For projects and simple programmes, we estimate that some level 2 and level 1 analyses should always be possible. This implies that the efficiency of several alternatives can be compared to each other and that efficiency improvement potential within specific alternatives can be identified.

For more aggregated aid modalities such as complex programmes or budget support, we consider that efficiency assessment is usually limited to level 1 analysis. This implies that for these types of aid, the expectation of selecting the most efficient option by means of efficiency analysis alone, as for example in aid modality comparisons, needs to be reconsidered. For these types of aid, efficiency analysis is realistically restricted to identifying operational improvement potentials.
  • Fourth, efficiency analysis should not be conducted whenever it is analytically possible. Instead, we recommend choosing carefully when to apply it.

Efficiency analysis itself also produces costs and benefits. Costs are usually resource and time investments for conducting the analysis. Benefits are, ultimately, increased development impact that can be reached in many ways, for example by providing assistance for the selection of more efficient interventions, by directly improving the efficiency of ongoing or planned interventions, by fostering learning through publication and dissemination of appraisal, evaluation or research reports, or by developing required skills through capacity development measures.

Depending on circumstances, the benefits of efficiency analysis may not justify its costs. Examples are expert judgements with low credibility, level 2 analyses without influence on the selection of interventions or efficiency analyses of interventions that are already known to have either very high or very low efficiency.

In such cases, the best use of resources may be to conduct a more efficient type of efficiency analysis or no efficiency analysis at all. 

Download the full report (131 pages).