Issue Brief

January 31, 2014

MCC Independent Evaluations

As defined in the Millennium Challenge Corporation’s Policy for Monitoring and Evaluation, evaluation is the objective, systematic assessment of a program’s design, implementation and results. There are three categories of evaluations at MCC: independent evaluations (impact or performance evaluations), self-evaluations and special studies. Through its investments in these evaluation activities, MCC seeks to achieve its goals of transparency, accountability and improvement through learning by addressing three key questions:

  • Was MCC’s investment implemented according to plan?
  • What changes in outcomes, particularly in income, for program participants are attributable to MCC’s investment?
  • What lessons can be generalized from past experience to improve the cost-effectiveness of future investments?

Independent evaluations are built on monitoring systems established by each Millennium Challenge Account (MCA), the local organization responsible for implementing a compact. These systems track and report on key input, output and some outcome indicators during compact implementation. In addition, independent evaluations are expected to build on and validate MCA and MCC self-evaluations, which document the original program logic, design, and implementation by answering questions such as:

  • Was the program implemented according to plan?
  • Did the program reach intended beneficiaries?
  • Were there any unanticipated externalities (positive or negative)?

Building on the foundation of strong monitoring systems and self-evaluations, MCC uses independent evaluations to rigorously fulfill its commitment to accountability and learning. Although it is common in the development community to focus on inputs (such as funds dedicated to farmer training), outputs (such as the number of farmers trained) and, increasingly, on some intermediate outcomes (such as the adoption rate of improved cultivation techniques), MCC takes this one step further by using many of its evaluations to determine whether a link can be made between these outcomes and an ultimate impact on household incomes. Independent evaluations test the assumptions underlying the program logic and are the primary mechanism for measuring whether that link occurred.

MCC’s independent evaluations are conducted by professional researchers selected through a competitive process. MCC’s use of independent, reputable professionals is intended to produce unbiased assessments of the activities being studied.

Because of its commitment to learning and transparency, MCC publishes the findings from every independent evaluation, along with each evaluation’s methodology and, whenever possible,¹ the primary data collected, so that the broader development community can learn from its experience.

Program logic

Program logic describes how an investment is expected to reduce poverty through economic growth. It lays out the chain of events by which a given program is expected to lead to changes in intermediate outcomes and ultimately changes in household income. For example, in a farmer training program, trained farmers may learn why improved soil management practices increase crop yield, adopt those practices, increase crop yields, raise their farm income, and ultimately raise their household income. In a rural roads program, individuals using the road may reduce their travel times, reduce travel costs to markets and other social services, increase the profitability of their farms, and ultimately raise their household income. The program logic provides the foundation for program design, economic analysis, evaluation questions, and key outcomes.

Impact and Performance Evaluations

MCC invests in two different types of independent evaluations: impact and performance. Impact evaluations are more rigorous; they are designed to distinguish impacts caused specifically by an MCC investment from those resulting from common external factors that affected both program participants and non-participants. In agriculture, these common factors could include increased market prices for agricultural goods, national policy changes or favorable weather conditions. Impact evaluations compare what happened with the MCC investment to what would have happened without it, through use of a counterfactual.

Performance evaluations are also valuable tools for estimating the extent to which MCC investments have contributed to changes in trends for outcomes, including household income. Performance evaluations are less rigorous and cannot attribute causal impact to MCC investments because they do not use a statistically valid counterfactual. However, they are useful to compare changes in the situation before and after MCC’s investment and provide details on how an investment might have contributed to changes in outcomes and—importantly—why or why not.
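
To make the distinction concrete, the sketch below (a hypothetical Python simulation with invented numbers, not data from any MCC evaluation) models a program operating while incomes rise for participants and non-participants alike, for instance because of higher crop prices. The before/after comparison that a performance evaluation relies on bundles the common trend with the program effect, while comparing participants against similar non-participants isolates the program’s contribution.

    # Hypothetical illustration (invented numbers, not MCC data): why a
    # before/after comparison cannot attribute change to a program.
    import random
    from statistics import mean

    random.seed(0)

    TRUE_PROGRAM_EFFECT = 100  # income gain caused by the program
    COMMON_TREND = 250         # gain everyone receives (e.g., higher crop prices)

    # Baseline incomes drawn from the same distribution for both groups.
    participants = [random.gauss(1000, 50) for _ in range(500)]
    others = [random.gauss(1000, 50) for _ in range(500)]

    # Follow-up incomes: both groups ride the common trend, but only
    # participants receive the program effect.
    participants_after = [y + COMMON_TREND + TRUE_PROGRAM_EFFECT for y in participants]
    others_after = [y + COMMON_TREND for y in others]

    # A before/after comparison bundles trend and effect (about 350) ...
    print(mean(participants_after) - mean(participants))

    # ... while comparing against non-participants nets out the trend (about 100).
    print(mean(participants_after) - mean(others_after))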

There are several critical factors that MCC considers when deciding to invest in an impact or a performance evaluation:

  • Feasibility: Whether a strong impact evaluation can be designed and implemented depends on how well the evaluators are able to estimate a counterfactual and how feasible it is to maintain that counterfactual throughout the evaluation period.
  • Learning potential: For programs where the assumptions underlying the program logic are based on limited evidence, there is a strong case for an impact evaluation. A rigorous impact evaluation tests assumptions about a project’s effectiveness and contributes substantially to MCC’s future decision-making, as well as to the global evidence base.
  • Strong stakeholder commitment: Identifying a control group and ensuring adherence to an impact evaluation design requires significant commitment and collaboration by sector staff, program implementers and evaluators within MCC and among partner countries.
  • Proper coordination: Evaluations require close coordination with program implementation. Program designers, implementers and evaluators must work together to understand and define the program logic, estimate how long expected impacts will take to accrue and identify what is most important to learn about how the program works. This is particularly true for impact evaluations, which require the various stakeholders to agree on a counterfactual and remain committed to it.

Incorporating evaluations into program operations is not easy—particularly for impact evaluations—but this is a challenge that MCC embraces to ensure accountability for results and to improve learning about what works and what doesn’t. This commitment to evaluation helps distinguish MCC in the international development community.

The counterfactual

An impact evaluation is defined by the ability to estimate the counterfactual—what would have happened to the same group of program participants if they had not received MCC’s assistance. The most rigorous method for estimating the counterfactual is the randomized control trial. In many programs, financial or logistical constraints make it impossible to provide all eligible individuals or groups with an intervention at once. Random selection (such as through a lottery) is a fair and transparent way to select which eligible individuals or groups should receive the intervention first.

Because a randomized control trial randomly selects the individuals who will receive program benefits, evaluators can compare outcomes for the treatment and control groups to measure the program’s impact. This use of a statistically identical control group creates the greatest opportunity for learning what works and for measuring program impacts.
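
The core logic of random assignment can be sketched in a few lines. The example below is a hypothetical illustration, not an MCC evaluation design: a lottery splits eligible households into statistically identical groups, so a simple difference in mean outcomes estimates the program’s impact.

    # Hypothetical sketch of the core logic of a randomized control trial.
    import random
    from statistics import mean

    random.seed(1)

    TRUE_EFFECT = 120  # invented income impact of the intervention

    # Underlying income potential of the eligible households.
    eligible = [random.gauss(1500, 200) for _ in range(2000)]

    # Lottery: randomly assign half of the households to receive the
    # intervention first; the rest form the control group.
    random.shuffle(eligible)
    treatment, control = eligible[:1000], eligible[1000:]

    # Observed follow-up incomes; only the treatment group gets the effect.
    treated_outcomes = [y + TRUE_EFFECT for y in treatment]
    control_outcomes = control

    # Because assignment was random, the groups are statistically identical
    # and the difference in means estimates the true effect (about 120).
    print(mean(treated_outcomes) - mean(control_outcomes))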

When a randomized control trial is not feasible, MCC may use other methods to construct a credible comparison group, such as propensity score matching or regression discontinuity designs.
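
As a rough illustration of one such method, the sketch below applies propensity score matching to invented data, using scikit-learn’s logistic regression for the score (a tool choice made for this example; MCC’s evaluators select their own methods and software). Because participation here depends on observed household characteristics, each participant is matched to the non-participant with the closest estimated probability of participating.

    # Hypothetical sketch of propensity score matching on invented data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(2)
    n = 1000

    # Observed household characteristics (e.g., farm size, baseline income).
    X = rng.normal(size=(n, 2))

    # Participation is not random: households with larger values of the
    # first characteristic are more likely to join the program.
    joins = (rng.random(n) < 1 / (1 + np.exp(-X[:, 0]))).astype(int)

    # Incomes depend on characteristics plus a true program effect of 100.
    income = 1000 + 50 * X[:, 0] + 30 * X[:, 1] + 100 * joins + rng.normal(0, 20, n)

    # Step 1: estimate each household's probability of participating
    # (the propensity score) from its observed characteristics.
    score = LogisticRegression().fit(X, joins).predict_proba(X)[:, 1]

    # Step 2: match each participant to the non-participant with the
    # closest propensity score, then compare their incomes.
    t = np.where(joins == 1)[0]
    c = np.where(joins == 0)[0]
    matched = c[np.abs(score[c][None, :] - score[t][:, None]).argmin(axis=1)]

    # The matched comparison recovers an estimate near the true effect of 100.
    print((income[t] - income[matched]).mean())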

Contribution of MCC’s evaluations to development

MCC is a leader in the foreign aid community in its commitment to accountability, transparency and learning through evaluations. The benefits of these investments include:

  • Testing traditional assumptions about what works. All MCC programs are selected, designed and implemented using certain assumptions about how the inputs and expected outputs will lead to poverty reduction through economic growth and who is expected to benefit. Evaluations test whether those assumptions hold in practice, and can also be structured to evaluate the effectiveness of different program design elements or to compare one approach to another.
  • Understanding who benefits from investments and why. Evaluations can also be structured to understand how different social groups are able to benefit from a project. For this reason, MCC requests that independent evaluators disaggregate results by key characteristics according to the program logic, such as gender, age, and poverty level. For example, understanding the relationship between intra-household assets and poverty is important because interventions that focus on cash income alone may not generate improvements in certain measures of well-being such as nutrition, food security or health, which in turn may have significant long-term income effects. Gender differences in access and control over assets play a large role in determining whether and how short-term income gains translate into improvements in well-being and for whom.
  • Improving evidence-based decision-making. The results of evaluations strengthen and improve future program design and decision-making. MCC and MCAs may also be able to make necessary course corrections during implementation based on learning from evaluations.
  • Contributing to global best practices. MCC evaluations are expected to contribute to global understanding of what works in the development field. MCC makes the results of its evaluations publicly available for use by other donors, partner countries, researchers, and non-governmental organizations.
Footnotes
  1. Subject to ensuring the protection of our respondents’ privacy.