
Evaluation

Reducing poverty through sustained economic growth is MCC's singular goal, and independent evaluations are MCC's chosen means of measuring that impact. Evaluations are integral to MCC's commitment to accountability, learning, transparency, and evidence-based decision making. Independent evaluations, conducted by third-party experts, help answer three fundamental questions:

  • Was MCC’s investment implemented according to plan? This is key to transparency.
  • Did the investment produce the intended results? Did it achieve its stated objective in pursuit of MCC’s mission to reduce poverty through economic growth? This is key to accountability.
  • Why did the investment achieve, or fail to achieve, certain results? This is key to learning.
MCC’s commitment to independently evaluating every project and publishing those results distinguishes it in the international development community.

List of Evaluations

This listing includes planned, ongoing, and completed independent evaluations. MCC aims to update the data quarterly.

Data is as of November 6, 2023.


The Evidence Platform contains the official published reports of all active and completed independent evaluations, along with the data collection instruments and the publicly accessible metadata and microdata gathered to support these evaluations. 

An article in the Journal of Development Effectiveness highlights MCC’s unique approach to rigorously measuring its impact.

Evaluating an Investment’s Performance

Impact Evaluations

Impact evaluations are designed to measure statistically significant changes in outcomes that can be attributed to the MCC investment. This approach requires distinguishing changes that resulted from MCC's investment from changes driven by external factors, such as increased market prices for agricultural goods, national policy changes, or favorable weather conditions. Externally driven changes would have occurred without MCC's investment and should not be attributed to MCC's impact. Impact evaluations compare what happened with the MCC investment to what would have happened without it through the explicit definition and measurement of a counterfactual.
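
In the potential-outcomes notation commonly used in the evaluation literature (an illustration of the logic, not notation drawn from MCC's own materials), the impact is the difference between the outcome observed with the investment and the counterfactual outcome without it:

```latex
% Y(1) = outcome with the MCC investment; Y(0) = outcome without it.
% Y(0) is the counterfactual: it is never directly observed for
% participants and must be estimated, typically from a comparison group.
\mathrm{Impact} = \mathbb{E}[Y(1)] - \mathbb{E}[Y(0)]
```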

What is a Counterfactual?

An impact evaluation is distinguished by its ability to estimate the counterfactual; that is, what would have happened to the same group of program participants had they not received MCC's assistance. In many programs, financial or logistical constraints prevent providing all eligible individuals or groups with an intervention. Therefore, random selection (such as through a lottery) is often a fair and transparent way to decide which eligible individuals or groups receive the intervention first.

Because randomized controlled trials randomly determine which individuals will and will not be exposed to program benefits, evaluators can compare the two groups to measure the program's impacts, as the sketch below illustrates. This use of a statistically comparable control group creates the greatest opportunity for learning what works and for measuring program impacts. MCC may also employ other methods of constructing credible comparison groups in cases where randomized controlled trials are not feasible.
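
A minimal sketch of how such a comparison works in practice. The sample size, income figures, true effect, and simple difference-in-means estimator are all illustrative assumptions, not MCC's actual analysis pipeline:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Hypothetical pool of eligible households; in a real evaluation the
# outcomes would come from survey microdata, not simulation.
n = 1000
eligible = np.arange(n)

# "Lottery": randomly assign half the eligible pool to receive the
# intervention first (treatment); the rest form the control group.
treatment = rng.choice(eligible, size=n // 2, replace=False)
is_treated = np.isin(eligible, treatment)

# Simulated post-program incomes: a common baseline plus a true effect
# of +200 for the treated group (the quantity we try to recover).
income = rng.normal(loc=1000, scale=300, size=n)
income[is_treated] += 200

# Because assignment was random, the control group's mean outcome is a
# credible estimate of the treated group's counterfactual, so the
# difference in means estimates the program's impact.
impact = income[is_treated].mean() - income[~is_treated].mean()
se = np.sqrt(income[is_treated].var(ddof=1) / is_treated.sum()
             + income[~is_treated].var(ddof=1) / (~is_treated).sum())

print(f"Estimated impact: {impact:.1f} (true effect: 200)")
print(f"95% CI: [{impact - 1.96 * se:.1f}, {impact + 1.96 * se:.1f}]")
```

Randomization ensures the two groups are statistically comparable in expectation, which is what lets the difference in means be read as the program's impact rather than as a pre-existing difference between the groups.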

Performance Evaluations

Performance evaluations estimate the contribution of MCC investments to changes in outcome trends when formal measurement of a counterfactual is not feasible. Because they lack a counterfactual, performance evaluations cannot attribute outcome changes to specific causes. However, they often provide crucial insight into strengths and weaknesses in program implementation through critical empirical and analytic assessment of the measurable components of a program's intermediate and ultimate outcomes. They can often identify clear opportunities to improve program implementation and investment decisions, even when they cannot explicitly estimate how much an investment contributed to changes in participant incomes.
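
A minimal sketch of the kind of trend comparison a performance evaluation might rely on. The years, income figures, and linear-trend projection here are all illustrative assumptions:

```python
import numpy as np

# Hypothetical annual mean incomes: three pre-program years and two
# post-program years (illustrative numbers, not MCC data).
years_pre = np.array([2018, 2019, 2020])
income_pre = np.array([1000.0, 1040.0, 1085.0])
years_post = np.array([2022, 2023])
income_post = np.array([1250.0, 1310.0])

# Fit the pre-program trend and project it into the post-program years.
slope, intercept = np.polyfit(years_pre, income_pre, deg=1)
projected = slope * years_post + intercept

# The gap between observed and trend-projected outcomes is suggestive
# of a contribution, but without a counterfactual it cannot rule out
# external explanations (prices, weather, policy changes).
for yr, obs, proj in zip(years_post, income_post, projected):
    print(f"{yr}: observed {obs:.0f} vs. trend projection {proj:.0f} "
          f"(gap {obs - proj:+.0f})")
```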

How We Choose

MCC considers several critical factors when deciding whether to invest in an impact or a performance evaluation:

  • Learning potential: A strong case for an impact evaluation exists for programs where the assumptions underlying the project logic are based on limited evidence. A rigorous impact evaluation tests assumptions about a project’s effectiveness and contributes substantially to MCC’s future decision-making and the global evidence base.
  • Feasibility: The feasibility of designing and implementing a strong impact evaluation depends on how credibly the evaluators can estimate a counterfactual, and on whether that counterfactual can be maintained through the duration of the evaluation period.
  • Strong stakeholder commitment: Identifying a control group and ensuring adherence to an impact evaluation design may require significant commitment and collaboration by sector staff, program implementers, and evaluators, both within MCC and in partner countries.
  • Appropriate timing: The evaluation timeline must be informed by the project logic, particularly its assumptions about how long expected impacts will take to materialize. If data are collected at the wrong time, an evaluation may misrepresent impacts on outcomes of interest or miss important lessons.
  • Proper coordination: Evaluations require close coordination between the evaluator and the program implementer. Program designers, implementers and evaluators must work together to understand and define the program logic, estimate how long it will take expected impacts to accrue, and identify what is most important to learn about how the program works. This is particularly true for impact evaluations, which require coordination and commitment among various stakeholders to estimate a counterfactual.

Both impact and performance evaluations can be informative in measuring trends in outcomes, but impact evaluations have the additional benefit of being able to attribute the results they measure to MCC’s investment. Balancing tradeoffs when deciding how best to evaluate a program is not easy, but it is a challenge that MCC embraces to ensure accountability for results and to improve learning about what works.