Corporate Policy
Policy for Monitoring and Evaluation
March 15, 2017
Purpose
The Millennium Challenge Corporation’s (MCC) Monitoring and Evaluation (M&E) Policy is designed to help MCC and its partners estimate, track and evaluate the impacts of its programs using technically rigorous, systematic, and transparent methods. It is predicated on the principles of accountability, transparency, and learning.
- Accountability refers to the obligation to report on and accept responsibility for all funded activities and attributable outcomes.
- Transparency refers to MCC’s obligation to disclose these findings in a public and transparent manner and share the information (microdata and reports) generated in the implementation and evaluation of its compacts and threshold programs.
- Learning refers to MCC’s commitment to improving the understanding of the causal relationships and effects of its interventions, particularly in terms of poverty reduction and growth, and to facilitating the integration of monitoring and evaluation findings in the design, implementation, analysis, and measurement of current and future interventions.
This policy sets forth requirements for the monitoring and evaluation of MCC Compacts and Threshold Programs, including the purposes of M&E, the types of evaluation that are required and recommended, and approaches for gathering, disseminating, and using M&E data. This policy is intended primarily to guide internal staff decisions to utilize M&E effectively throughout the entire program life cycle in order to improve outcomes.
Scope
From and after the effective date of this policy, this policy will govern the monitoring and evaluation of all compacts and threshold programs. Unless otherwise noted, all requirements apply to both compacts and threshold programs.
Initial M&E Plans, and M&E Plan revisions, that were approved by MCC prior to the effective date of this policy do not have to comply with this policy until their next revision.
Authorities
MCC’s operations are governed by U.S. law and MCC’s policies and guidelines. A list of relevant policies and guidelines can be found in Annex III.
Acts
- Section 609(b)(1)(C) of the Millennium Challenge Act of 2003, as amended
Related MCC Policies and Guidelines
- Threshold Program Development Guidance
- Compact Development Guidance
- Guidelines for Economic and Beneficiary Analysis
- Guidance on Common Indicators
- Gender Policy
- Guidance for Creating a Completeness Index (forthcoming)
- Program Procurement Guidelines
- Operational Requirements for Social Inclusion and Gender Integration
- Scope of Work for A Review of Data Quality for Performance Indicators
- Guidance on Quarterly MCA Disbursement Request and Reporting Package
- Quarterly Results Report Guidance Document
- Indicator Tracking Table (ITT) Guidance
- Policy on the Approval of Modifications to MCC Compact Programs
- Program Closure Guidelines
- Guidance for Post Compact M&E Plans
- Closeout ITT Guidance (forthcoming)
- Project Evaluability Assessment (PEA) Tool
- Evaluation Management and Review Process
- Evaluation Risk Assessment Checklist
- Evaluation Microdata and De-Identification Guidelines
Key Definitions
Accountable Entity — The entity designated by the government of the country receiving assistance from MCC to oversee and manage implementation of the Compact or Threshold Program on behalf of the government. The Accountable Entity is often referred to as the MCA.
Activity — Actions taken or work performed through which inputs, such as funds, technical assistance and other types of resources are mobilized to produce specific outputs. Typically, multiple Activities make up one Project and work together to meet the Project’s Objective.
Actual — A data point that shows what has been completed, as opposed to a number that is a target or a prediction.
Attribution — The ability to show that a change in a particular outcome was caused by an intervention or set of interventions.
Baseline — The situation prior to a development intervention, against which progress can be assessed or comparisons made.
Benchmark — Specific, pre-determined targets or objectives that measure progress over the life of the program.
Beneficiary — An individual who experiences better standards of living as a result of the project, primarily through higher real incomes.
Beneficiary Analysis — An analysis used to estimate the impact of projects on the poor. It also has broader applicability for determining the impact on populations of particular interest, such as women, the aged, children, and regional or ethnic sub-populations.
Change in Cost — refers to: (i) any increase in the costs estimated for a particular Project or Activity, as set forth in the current detailed financial plan for the Compact Program or (ii) any Reallocation (as defined in MCC’s Policy on the Approval of Modifications to MCC Compact Programs).
Change in Scope — refers to any change to the scope or substance of a Compact Program, including, without limitation, the modification or elimination of any Project, Activity, or sub-Activity, or the creation of a new Project, Activity, or sub-activity, in each case under a Compact Program.
Closeout — refers to anything deemed final as of the end of the Compact (i.e. the Closeout ITT refers to the final ITT that includes all data as of the Compact End Date).
Common Indicator — Indicators that MCC uses to aggregate results across countries within certain sectors and report internally and externally to key stakeholders.
Compact — The agreement known as Millennium Challenge Compact, entered into between the United States of America, acting through the Millennium Challenge Corporation, and the government of the country receiving assistance pursuant to which MCC provides such assistance to the country.
Compact Completion Report (CCR) — An assessment prepared by the Accountable Entity that describes the history and evolution of the Compact, results achieved and lessons learned.
Compact End Date (CED) — The last day of a Compact’s term, which is exactly five years from the EIF date.
Compact M&E Summary — Refers to Annex III of a Compact, which includes an overview of the Monitoring & Evaluation Plan.
Completeness Index — A measure that MCC uses to assess the extent to which proposed activities have been defined.
Counterfactual — The scenario which hypothetically would have occurred for individuals or groups had there been no program.
Country Team — A multidisciplinary team of MCC staff that manages the development and implementation of each Compact program in coordination with their MCA counterparts.
Closure Date — With respect to a Compact, the last day of the Compact’s Closure Period, which is exactly 120 days after the CED.
Closure Period — The 120 day period after the CED, during which the MCA closes out all remaining contracts and transfers any remaining assets.
Cumulative — An indicator classification. These indicators report a running total, so that each reported actual includes the previously reported actual and adds any progress made since the last reporting period.
Data Quality Reviews — A mechanism to review and analyze the quality and utility of performance information. It covers a) quality of data, b) data collection instruments, c) survey sampling methodology, d) data collection procedures, e) data entry, storage and retrieval processes, f) data manipulation and analyses and g) data dissemination.
Date Indicator — An indicator that records the occurrence of a one-time event.
Disclosure Review Board — A committee established to protect the rights and privacy of individual respondents to MCC-funded surveys.
Economic Rate of Return (ERR) — An analysis that measures the expected increases in real household incomes, value-added of individual firms, and financial/resource benefits to public entities and compares them to the economic costs borne by MCC and other actors (including partner governments, other donor agencies, local organizations, and individual participants). The economic rate of return is expressed in percentage terms, and represents the interest rate at which the discounted benefits equal the discounted costs.
Eligibility Indicators — Policy indicators developed by third-party institutions used in MCC’s annual country selection process for threshold and compact programs.
Entry into Force (EIF) — The point in time when a Compact or Threshold Program Agreement comes into full legal force and effect and its term begins.
Evaluability — The ability of an intervention to demonstrate in measurable terms the results it intends to deliver.
Evaluability Assessment — An assessment conducted to determine the ability of an intervention to demonstrate in measurable terms the results it intends to deliver.
Evaluation — The systematic and objective assessment of the design, implementation, and results of an Activity, Project or Program.
Evaluation Catalog — An electronic catalog posted to MCC’s public website that contains metadata and microdata from its rigorous independent evaluations.
Evaluation Management Committee (EMC) — Committee established early in Compact development consisting of one Chair and four to six members for the purpose of making critical decisions on independent evaluations throughout the life of the Compact. The Committee members consist of the M&E Director, M&E Lead, sector lead(s) as appropriate, Economic Analysis Lead, and Evaluation/Technical Support as appropriate.
Evaluation Risk Assessment — Assesses the evaluation activity/deliverable under review, and the current risks and cost-benefit of proceeding with the evaluation.
Final Evaluation — Evaluation conducted at the end of the period of implementation of the intervention or at a date sufficiently after the intervention to be able to measure results.
Goal — The ultimate purpose of a development intervention. For Compacts, the goal is always poverty reduction through economic growth.
Goal Indicator — Indicators that measure the economic growth and poverty reduction changes that occur during or after implementation of the Program.
Impact — The expected result of a Compact on beneficiaries. The impact for MCC Compacts is poverty reduction through economic growth, measured in terms of increase in local incomes (often measured by household consumption and expenditures).
Impact Evaluation — A study that measures the changes in income and/or other aspects of well-being that are attributable to a defined intervention. Impact evaluations require a credible and rigorously defined counterfactual, which estimates what would have happened to the beneficiaries absent the project.
Indicator — Quantitative or qualitative variable that provides a simple and reliable means to measure achievement of a development intervention.
Indicator Analysis — Additional information on the policies and actions that may have affected a country’s standing on the eligibility indicators used in the annual MCC country selection process.
Indicator Inputs — An indicator classification. These indicators are the components of a composite indicator, such as a percentage or ratio. In most cases, they will be the numerator and denominator used to calculate the indicator.
Indicator Tracking Table (ITT) — A report that tracks progress on the indicators included in a country’s M&E Plan. It is part of the Quarterly Disbursement Request and Reporting Package (QDRP).
Input — The financial, human, and material resources used for a development intervention.
Investment Memo — An internal memorandum that presents project recommendations for senior management approval based on the results of project appraisal studies.
Investment Management Committee — Leadership board that makes key decisions on project selection and design.
Key Performance Indicator — An indicator selected from the M&E Plan that is reported quarterly in the Quarterly Results Report and publicly in the Table of Key Performance Indicators.
Level — An indicator classification. These indicators track trends over time, and may fluctuate up and down between quarters.
M&E Plan — Tool for outlining a country’s approach to monitoring, evaluating, and assessing progress towards Compact objectives.
Management Information System (MIS) — A system designed to collect, process, store, and disseminate data to assist in the management of programs.
Milestone — The expected result for a particular indicator to be met by a certain point in time.
Mid-Course Evaluation — A study performed during the period of implementation of the intervention.
Modification (of a Compact) — Refers to any Change in Cost or any Change in Scope.
Monitoring — A continuous function that uses the systematic collection of data on specified indicators to gauge progress toward final program objectives and achievement of intermediate results along the way.
Objective — The result that a Project intends to achieve.
Outcome — The likely or achieved intermediate effects of an intervention’s outputs.
Outcome Indicator — An indicator that measures the intermediate effects of an Activity or set of Activities and is directly related to the Output Indicators.
Output — The direct result of a Project Activity. The goods or services produced by the implementation of an Activity.
Output Indicator — An indicator that directly measures Project Activities. It describes and quantifies the goods and services produced directly by the implementation of an Activity.
Participant — An individual who takes part in an MCC-funded Project.
Performance Evaluation — is defined in Section 4.6.3.
Post Compact M&E Plan — Describes post-compact monitoring and evaluation activities, identifies the individuals and organizations that would undertake these activities, provides a budget framework for future monitoring and evaluation which draws upon both MCC and country resources, and documents the role the partner country will play in results dissemination.
Process Indicator — An indicator that measures progress toward the completion of a Project Activity, a step toward the achievement of Project Outputs and a way to ensure the work plan is proceeding on time.
Program — A group of Projects implemented together to achieve a goal.
Program Closure Plan — The plan developed by an Accountable Entity describing the closure strategy for each Project and Activity of a Compact, the winding-up or continuing of the Accountable Entity, financial plan for the closure period, post-compact M&E plan, and other important aspects as appropriate in order to close-out the Compact.
Program Logic — An explanatory model that demonstrates how a Program’s Activities lead to the expected outcomes, objectives, and goal of a Compact, presented graphically.
Project — A group of Activities implemented together to achieve an objective.
Qualitative methodology — The system of design, monitoring, and evaluation by which non-numeric data is scientifically collected and made generalizable. Qualitative methods typically include, but are not limited to, focus groups, semi-structured interviews, participant observation, and other ethnographic forms.
Result — The output, outcome or impact of a development intervention.
Summary of Findings — An MCC-authored document created with each independent evaluation report, which describes the context, program logic, monitoring results, evaluation results, and lessons learned from the evaluated project/activity/sub-activity.
Table of Key Performance Indicators — A public document that reports on a sub-set of the indicators reported on in the Indicator Tracking Table. The indicators are selected yearly by the country teams to best reflect the current state of the Compact.
Target — The expected result for a particular indicator to be met by the end of the compact.
Threshold Program — A program authorized by Section 616 of the Millennium Challenge Act of 2003, as amended, pursuant to which MCC provides assistance to a qualifying country for the purpose of assisting such country to become eligible for a Compact.
Threshold Program Agreement — The agreement signed by the threshold country and the United States that specifies the terms and conditions for the implementation of a threshold program.
Introduction
MCC bases policy and investment decisions on available empirical evidence, development theory and international best practices. MCC also uses the opportunities afforded by program implementation to generate new knowledge for the wider development community. Moreover, MCC commits to measuring and documenting program achievements and shortcomings so that the development community, including MCC’s stakeholders, gain an understanding of the return on investment in development activities.
Determining whether aid programs are effective requires measuring the results of those programs. Some key questions include:
- Do the expected results of the program justify the allocation of resources towards that program?
- Has program implementation met predetermined benchmarks for progress?
- Has the program achieved its goals?
- What can we learn from the experience to inform future programs and international best practices?
MCC’s focus on results is motivated by these basic questions of aid effectiveness. MCC’s authorizing legislation specifically requires MCC to develop compacts with partner countries that include specific objectives and benchmarks for measuring progress in achieving those objectives and report annually on progress toward those objectives. 1
Answering these questions requires both monitoring and evaluation:
- Monitoring is the continuous, systematic collection of data on specified indicators to measure progress toward program objectives and the achievement of intermediate results along the way.
- Evaluation is the objective, systematic assessment of a program’s design, implementation and results.
Monitoring provides timely, high-quality data to determine whether the program is proceeding as planned, on schedule, and in line with the program logic, and to ascertain whether the identified assumptions are holding, whether risks are materializing at the expected level, and whether mitigation measures are having the desired effect. MCC and its partners are expected to use this data throughout the life of the program to assist in making decisions about the program’s implementation.
Although effective monitoring is necessary for program management, it is not sufficient for assessing the expected results of an intervention. MCC therefore also uses evaluations to understand the effectiveness of its programs. MCC is committed to making its evaluations as rigorous as warranted and feasible in order to understand the causal effects of its programs on expected outcomes and to assess the cost effectiveness of its interventions, and to inform decisions about current and future program design and implementation.
M&E and Accountability
MCC holds itself accountable as a publicly financed entity by assessing the performance of its interventions against targets established on the basis of available evidence; engaging independent evaluators to assess the relevance, effectiveness and efficiency of its interventions; publicly disclosing evaluation results and disseminating them to a broad range of stakeholders, including the U.S. public; and using key findings and lessons to inform resource allocation and other agency decisions. For evaluation to serve the aim of accountability, MCC’s programs include meaningful output and outcome metrics derived from the analysis and program logic of the interventions.
MCC requires that evaluation findings be presented to stakeholders and publicly disseminated in a timely manner, in order to ensure a feedback loop that may inform future decision making. Guidance for the stakeholder engagement and public dissemination processes is contained in the Evaluation Management and Review Process guidance.
M&E and Transparency
MCC is committed to transparency and making information available to the public. This is done through routinely updating MCC’s website with the most recent information, and requiring MCAs to do the same on their respective websites.
Upon approval by MCC, the current version of Monitoring and Evaluation Plans (M&E Plan) is posted to MCC’s website, and likewise to the relevant MCA’s. In addition, MCC regularly publishes monitoring and evaluation results information on its website, as do the MCAs. Monitoring data is updated quarterly to show both the progress of program-specific projects and activities and the aggregated progress across each of the sectors that have common indicators. 2 Evaluation data and accompanying reports from MCC’s independent evaluations are posted to the Evaluation Catalog along with Summaries of Findings on MCC’s website as they become available. Details of materials available on MCC’s website are described in Section 6.6.
M&E and Learning
MCC emphasizes the importance of learning through its commitment to independent evaluations. Evaluation of projects that are well designed and executed can systematically generate knowledge about the magnitude and determinants of project performance, permitting MCC staff, host governments, and implementing partners to refine designs and introduce improvements into future efforts. Learning requires careful selection of: (i) evaluation questions regarding fundamental assumptions underlying project designs; (ii) methods of analysis that identify the internal and external validity of the findings; and (iii) mechanisms to share findings widely and to facilitate integration of the evaluation conclusions, lessons and recommendations into decision-making.
In order to provide evidence to inform decision making, MCC requires that every completed evaluation report include a summary of findings. The summary of findings summarizes the key components of the program, the program logic and accompanying assumptions, monitoring indicators and results, and evaluation questions and findings, as well as key lessons learned by MCC staff and implementing partners from program implementation and results. Each summary of findings is posted along with the final evaluation report on the Evaluation Catalog of MCC’s website.
MCC also conducts periodic reviews of MCC’s evaluation portfolio in a specific sector with the aim of assessing the consistency and applicability of findings across its interventions, while also identifying alternatives for approaching project design, implementation and evaluation in the future. MCC disseminates the results of these reviews through, for example, its Principles into Practice 3 series which can be found on MCC’s website.
MCC is also committed to providing professional development opportunities for all relevant staff, including continued training of key staff in evaluation management and methods through MCC-wide courses and/or external opportunities, to promote internal learning. MCC actively encourages staff to participate in relevant monitoring and evaluation discussions for knowledge exchange.
Roles And Responsibilities
POC | Responsibility |
---|---|
MCC M&E Lead | |
MCA M&E Team | |
Evaluator | |
EMC | |
MCC Economist | |
RCM and Sector Leads | Provide input in order to assist MCA with developing M&E Plan |
Policy
General M&E Standards
Several principles and processes are fundamental to providing useful monitoring and evaluation information: 4
- Monitoring and evaluation evidence and processes should be of the highest practical quality. They should be as rigorous as practical and affordable. Evidence and practices should be impartial. The expertise and independence of evaluators and monitoring managers should result in credible evidence. Evaluation methods should be selected to best match the evaluation questions to be answered. Indicators should be limited in number to the most crucial indicators. Both successes and failures must be reported.
- Country ownership, a fundamental MCC principle, and donor coordination are core principles of the Paris Declaration on Aid Effectiveness. Partnership with countries and their citizens is important to institution building and aid coordination. When MCC and countries partner with other donors, M&E can be streamlined and less costly.
- Evidence must be used to inform program decisions, including, but not limited to:
- managing and adjusting programs,
- allocating budgets and
- designing new programs, policies and activities.
- Monitoring and evaluation should be integrated into the entire life cycle of a program from concept through implementation and beyond.
- During program development, the following should all be identified:
- the problems to be addressed and the root causes of those problems
- the logic linking the proposed interventions with targeted outcomes
- assumptions and risks underlying the program logic
- the beneficiaries
- the indicators, baselines, milestones, targets, and benchmarks to measure progress over the life of the program.
- Although best practice requires that monitoring and evaluation be planned and designed at the beginning of program development processes, adjustments to the program during implementation will almost always require modifications to these plans.
- Monitoring and evaluation activities often continue after programs end to track sustainability and longer term outcomes. Provisions for this should be made as early in the planning and implementation stages as possible.
- Quality monitoring and evaluation requires resources. Multi-year budgets should take account of all potential costs and contingency costs (e.g., increased costs when security situations arise).
Evaluability
MCC analyzes program readiness using an evaluability assessment of the proposed interventions. The objective of an evaluability assessment is to use specific, transparent standards and best practices for assessing the following five dimensions of a project:
- problem diagnostic
- project objectives and program logic
- risks and assumptions
- project participants/beneficiaries
- accountability and learning metrics.
Assessing evaluability and identifying key evaluation questions from the outset of a project or activity should improve the quality of project design and guide data collection during and after implementation. These assessments provide a foundation for the independent evaluations that identify the effects of MCC’s investment on outcomes for households and firms in the partner country, especially on the stated objective of the compact and/or projects. The detailed process is contained in the Project Evaluability Assessment (PEA) Tool.
In addition, the preliminary indicators submitted in the investment memo for compacts are accompanied by a completeness index, which measures the extent to which proposed indicators have operational definitions, baselines, milestones and targets, complementing the evaluability assessment as a metric of a compact’s readiness. Guidelines for estimating the Completeness Index are contained in the Guidance for Creating a Completeness Index.
Standards for M&E: Gender
Since gender inequality can be a constraint to economic growth and poverty reduction, and because gender issues can be a determining factor in the effectiveness of an intervention, relevant gender considerations should be incorporated into the M&E Plan and M&E activities in accordance with MCC’s Gender Policy. The M&E Plan must specify which indicators will be disaggregated by sex. Specifically, indicators that quantify participants and beneficiaries (e.g., number of farmers trained, number of farmers adopting new technology) should be sex-disaggregated to provide information about the number of men and women being served by an activity. MCAs should report sex-disaggregated information to MCC every quarter when data are available.
Although the M&E Policy does not require that targets be established for the number of men and women served by an activity, targets are often an important design and monitoring tool to link performance to poverty reduction. Particularly in the context of gender differences and inequalities, MCC should establish targets when pre-compact gender analysis, cost-benefit analysis, or program design work leads to the formulation of specific hypotheses on gender impacts or explicitly links performance to gender-specific outcomes, such as equitably distributed benefits. As with other M&E objectives, reasonable and cost-effective efforts should be made to incorporate these gender dimensions into the activity’s evaluation when warranted by project design.
Analysis may also demonstrate potential adverse impacts on female beneficiaries, which also should be addressed in the evaluation. When linked to program design and logic, evaluations should examine intra-household dynamics of male and female beneficiaries, the cost-effectiveness of delivering gender-differentiated interventions, differential impacts on men and women, and how gender integration enhances income growth. M&E Plans will document how gender is being addressed in evaluations as relevant by country, and M&E staff will work with MCC GSI staff to incorporate gender in evaluations and surveys as appropriate.
Overview of M&E Processes for Compacts and Threshold Programs
Monitoring and evaluation begin at the earliest stages of program development and continue throughout implementation and, in most cases, after the program has concluded. The key documents for which M&E is responsible or provides input during each stage of the compact lifecycle are listed below:

Developing Monitoring and Evaluation Plans
After a compact or threshold program agreement is signed, the MCA/partner country entity and MCC must finalize an M&E Plan that provides a detailed framework for monitoring and evaluating the program.
- For compacts, MCC/MCA must develop a Compact M&E Summary (currently Annex III of the compact) prior to developing the full M&E plan. 5
- For threshold programs, a Threshold M&E summary should be included in the Threshold Agreement.
Purpose of the Monitoring and Evaluation Plan
The M&E Plan is a tool to manage the process of monitoring, evaluating and reporting progress toward the achievement of program results.
- The monitoring component of the M&E Plan identifies indicators, establishes performance milestones and targets, and details the data collection and reporting plan to track progress against targets on a regular basis.
- The evaluation component identifies and describes the types of evaluations that will be conducted, the key evaluation questions and methodologies, and the data collection strategies that will be employed.
The M&E Plan is used in conjunction with other tools such as work plans, procurement plans, and financial plans. The M&E plan also serves as a communications tool, so that MCA staff and other stakeholders clearly understand the objectives and targets that the MCA is responsible for achieving.
Required elements of the M&E Plan
The M&E Plan must contain all of the elements listed in Annex II of this policy. The “M&E Plan”, “Indicator Documentation Table,” “Table of Indicator Baselines and Targets,” and “Modifications to the M&E Plan” must use MCC’s standard templates. In general:
- The M&E Plan must include context for the program, including:
- Clearly defined program logic
- Literature review of evidence on proposed intervention(s)
- Initial cost-benefit analysis, which analyzes the economic rationale for MCC investments, and beneficiary analysis, which analyzes the distribution of program benefits, to the extent applicable
- The M&E Plan must include all indicators that must be reported to MCC on a regular basis, including those indicators that reflect the key parameters and targets underpinning the ERR.
- The M&E Plan must include all relevant Common Indicators. (See Section 7.2.7.3.)
- If MCC or its partners identify gaps in data availability and data quality during program development that limit the ability to measure results, best efforts to fill these gaps should be covered in the planning of monitoring and evaluation activities and their associated costs for the program implementation period and beyond. These efforts should be documented in the M&E Plan.
- The M&E Plan must include a description of complementary data to be collected by the partner country for evaluation of programs, but not reported to MCC on a regular basis, including qualitative studies.
- The M&E Plan must include an evaluation plan at the compact or threshold program level. However, the evaluation plan can be developed in stages for each project (or other level as appropriate) as the projects are designed and implemented. The content of the evaluation section of the M&E Plan will vary depending on the status of the more comprehensive M&E Plan.
- The evaluation section of the plan must include:
- The proposed methodology (impact or performance) for evaluating each project/activity/sub-activity, as appropriate. If the program will be evaluated using only self-evaluation or will not be evaluated, a justification for this decision must be included.
- The estimated MCC and MCA/partner country budgets for each evaluation activity, with a corresponding procurement plan for contracting an independent evaluator, as well as proposed timelines for data collection and analysis to assist MCC and MCA’s management. 6
- The M&E Plan must include the full budget for compact/threshold-related M&E activities (including post compact M&E) and identify which of those activities will be funded by the compact/threshold and which will be funded directly by MCC, both during and post completion of the compact or threshold.
- M&E budgets shall be developed based on the M&E activities deemed necessary to support the program; however, they generally account for 2% to 3% of the program value. M&E funds may not be reallocated without prior approval from the MCC M&E Lead.
Responsibility for Developing the M&E Plan
Primary responsibility for developing the M&E Plan lies with the MCA M&E Director with support and input from MCC’s M&E Lead and Economist.
MCC and MCA Project/Activity Leads are expected to guide the selection of the indicators at the process and output levels that are particularly useful for management and oversight of activities and projects.
MCA leadership, the MCC Resident Country Mission (RCM), and others within MCC, such as Environmental and Social Performance (ESP) and Gender and Social Inclusion (GSI) Leads, as well as external stakeholders, if applicable, must assist with the development of the M&E Plan.
The MCC M&E Lead is responsible for developing the Evaluation Plan, which is a section of the overall M&E Plan. MCC’s EMC is responsible for reviewing all evaluation plans.
Timing of the Initial M&E Plan
Specific timing for the finalization of the initial M&E Plan, which is usually within 90 days of EIF, is established in an agreement entered into with the partner country that is supplemental to the compact or threshold program agreement. Usually the MCA Board and MCA M&E personnel need to be in place and project work plans need to be agreed upon before the initial M&E Plan can be finalized.
MCC Peer Review and Approval of the Initial M&E Plan
As requested by MCC M&E management, the M&E Plan will undergo a peer review within MCC before the beginning of the formal approval process. The initial M&E Plan must be approved by the MCA Board of Directors (or appropriate partner country entity in the case of threshold programs) prior to its formal submission to MCC. MCA/partner country must then send the M&E Plan to MCC for formal approval. The M&E Plan sent for MCC approval must be in English. The approved M&E Plan expands upon and provides more detail on the Compact M&E Summary set forth in Annex III of the Compact.
Timing and Frequency of Reviews and Revisions
M&E Plans may be reviewed and revised at any time to adjust for changes in the program’s design and to incorporate lessons learned for improved performance monitoring and measurement. However, any such revision of the M&E Plan by an MCA must be approved by MCC in writing and must be otherwise consistent with the requirements of the compact or threshold program agreement, and any relevant supplemental agreements.
The M&E Plan may be modified or amended without amending the compact or threshold program agreement. However, M&E Plans must be kept up to date and must be amended after a modification to the compact has been approved by MCC. In some cases, MCC may condition disbursement of compact funding on M&E Plans being kept up to date.
Many countries choose to review their M&E Plans annually during the annual work planning process. M&E Plan revisions must be formally approved by MCC at least one month before a QDRP submission date to allow for the corresponding changes to the MIS ITT. Annex VI describes details for the modification process.
With notice to the MCA, MCC may make non-substantive changes to the M&E Plan as necessary. Non-substantive changes do not affect the data or how it is interpreted. Some examples of non-substantive changes could include revising indicator units to correspond to MCC’s approved list of units of measurement or standardizing indicator names.
Criteria for Selecting Indicators 7
Indicators are used to measure progress toward the expected results throughout the implementation period. Different types of indicators are needed at different points in time to trace the progress of the intervention against the program logic. Indicators in the compact and M&E Plan should strive to meet the following criteria:
- Direct: An indicator should measure as closely as possible the result it is intended to measure.
- Unambiguous: The definition of the indicators should be operationally precise and there should be no ambiguity about what is being measured or how to interpret the results.
- Adequate: Taken as a group, indicators should sufficiently measure the result in question. Developers of the M&E Plan should strive for the minimum number of indicators sufficient to measure the result.
- Practical: Data for an indicator should be realistically obtainable in a timely way and at a reasonable cost.
- Useful: Indicators selected for inclusion in the M&E Plan must be useful for MCC management and oversight of the program. Where appropriate, MCC Common Indicators must be included in the M&E Plan to allow MCC to aggregate results across countries.
Data Quality Standards
The data used to measure those indicators should meet the following standards: 8
- Validity: Data are valid to the extent that they clearly, directly and adequately represent the result to be measured. Measurement errors, unrepresentative sampling and simple transcription errors may adversely affect data validity. Data should be periodically tested to ensure that no error creates significant bias.
- Reliability: Data should reflect stable and consistent data collection processes and analysis methods over time. Project managers and M&E staff should be confident that progress toward performance targets reflects real changes rather than variations in data collection methods.
- Timeliness: Data should be available with enough frequency and should be sufficiently current to inform management decision-making. Effective management decisions depend upon regular collection of up-to-date performance information.
- Precision: Data should be sufficiently accurate to present a fair picture of performance and enable project managers to make confident decisions. The expected change being measured should be greater than the margin of error (an illustrative calculation follows this list). Measurement error results primarily from weaknesses in the design of a data collection instrument, inadequate controls for bias in responses or reporting, or inadequately trained or supervised enumerators.
- Consistency: Data should be consistent with the documented definition of the indicators, and the methodology of data collection for common indicators should match the Guidance on Common Indicators to ensure consistency across compacts.
- Objectivity: Data that are collected, analyzed, and reported should have mechanisms in place to reduce the possibility that data are subject to erroneous or intentional alteration. The data collector should follow agreed-upon data collection and quality control procedures to ensure consistency, reliability, objectivity, and accuracy of data.
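As a purely illustrative sketch of the Precision standard above, and not a formula prescribed by this policy, the margin of error for an indicator estimated as a proportion from a simple random sample can be approximated as follows (real survey designs may require design-effect and finite-population adjustments):

```latex
% Illustrative approximation (assumption, not MCC guidance):
% p = estimated proportion, n = sample size, 1.96 = z-value for a 95% confidence level.
MOE_{95\%} \approx 1.96 \sqrt{\frac{p\,(1-p)}{n}}
```

Under this approximation, an expected change smaller than the margin of error cannot be confidently distinguished from sampling variation.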
Monitoring vs. Evaluation Indicators
The set of indicators includes both monitoring and evaluation indicators.
- Monitoring: These indicators will focus on providing timely data during compact implementation that can inform programmatic decisions. Data should be reported on at least an annual basis and should generally be administrative in nature such that it reflects the full scope of project implementation. Date indicators that will only be reported once are appropriate for this category as long as they are expected to be achieved within the compact implementation period. The data that is reported by these indicators should be easily interpreted and should not be based on sampling. The majority of these indicators will be at the process and output level, though any outcomes that meet the criteria should be included. These indicators will be documented in Annex I and II of the M&E Plan and reported in the Indicator Tracking Table.
- Evaluation: These indicators will focus on assessing the achievement of high-level results. Any indicators for which progress will not be measurable during the compact implementation period and/or that require sample-based surveys for measurement will fall under the Evaluation category. These indicators will be assessed and reported by the independent evaluations, due to the complexity of collecting and interpreting the data. They should be noted in the Evaluation sections of the M&E Plan in the form of an evaluation question. The expectation is that these indicators will be at the outcome level only.
Special Case: Common Indicators
Common indicators are used by MCC to measure progress across compacts within certain sectors. MCC has identified common indicators for sectors in which MCC is investing significant resources. They allow MCC to aggregate results across countries and report externally. Aggregate results are published on the MCC website regularly. The common indicators are specified in MCC’s Guidance on Common Indicators, which is updated at MCC’s discretion. Each MCA must include the common indicators in their M&E Plan when the indicators are relevant to that country’s compact activities. Disaggregated data on common indicators should be reported to MCC as specified in the guidance. Common indicators may be specified at all indicator levels (process, output, outcome and goal).
Standards for Including Indicators in the M&E Plan
- The M&E Plan indicators must be kept to the minimum necessary, such that the M&E Plan conforms to best practice: performance should be based on a manageable number of output and outcome indicators that align with the program logic and are drawn from the development priorities and goals of the specific country, where feasible.
- MCAs are welcome to monitor additional indicators at the activity level for their own management and communication purposes, but these should not be included in their M&E Plans or reported to MCC unless specifically requested by MCC. MCAs should be cautious about supplementing the M&E Plan with too many other indicators, which might overburden the MCA’s M&E staff, might not be realistic in light of M&E resources, or might not be used by the relevant MCC or MCA staff.
Establishing Baselines, Milestones, and Targets
- Every indicator selected must have a baseline. An indicator’s baseline should be established prior to the start of the corresponding activity or project. Baselines demonstrate that the problem can be specified in measurable terms, and are thus a pre-requisite for adequate intervention design. Indicators in the M&E Plan must include milestones and targets whenever possible.
- For indicators derived from the economic analysis, targets will be set based on the ERR model. In cases where project design or ERR analysis directly and explicitly link performance to gender-specific outcomes, targets for the sex-disaggregated indicators will likewise be established. The MCA may set quarterly targets for internal management purposes, but MCC only utilizes annual targets with reporting of progress in the form of an ITT, which is described in Section 7.3.
Reporting Performance Against the M&E Plan – the ITT
MCAs must report to MCC on indicators in the M&E Plan on a quarterly basis using the ITT. ITTs are typically included as part of the quarterly disbursement request package (QDRP); however, in the case that an MCA submits a six-month disbursement request, the ITT must still be submitted quarterly. Additional guidance on reporting is contained in MCC’s Guidance on Quarterly MCA Disbursement Request and Reporting Package. No changes to indicators, baselines, milestones, or targets may be made in the ITT until the changes have been approved in the M&E Plan. MCAs must also provide documentation, such as the source reports, for the figures reported in the ITT.
Indicators that are identified in the M&E Plan as being disaggregated by sex, age, or another disaggregation type should be reported in a disaggregated way in the ITT. Both the disaggregation category and type should be specified. For example, a disaggregation category may be “Age,” but the age bands used may vary by project so the types must also be specified (e.g., children under 5 and/or women of child-bearing age for a health project, primary vs. secondary age children for a project targeting school enrollment). Justification should be provided for any disaggregations that are proposed.
Closeout ITT
- The Closeout ITT is the final ITT submitted by the MCA for a compact and is considered the ultimate source for progress made during the life of the compact. Due within 76 days of CED, it includes final data for all quarters of the compact. In order to ensure accuracy of closeout data, MCC and MCA will perform a formal review process of the Closeout ITT at the end of the Compact, which includes reviewing data reported in all quarters of the compact. For more details on this process please see the Closeout Indicator Tracking Table Guidance.
- If data quality issues for an indicator included in the Closeout ITT arise post compact, the data will undergo a review process and may ultimately be replaced or removed from the Closeout ITT. For more information on the process for replacing or revising ITT data post compact, please refer to the Closeout Indicator Tracking Table Guidance.
Post Compact M&E
- The Post Compact M&E Plan is an extension of the existing Compact M&E Plan and will begin to be developed during the drafting of the MCA Program Closure Plan (PCP). According to MCC’s Program Closure Guidelines, the PCP must include the following:
- Any data collection activities, such as surveys, impact evaluations, and special studies, that are at risk of not being completed by CED;
- A description of all M&E activities that are planned to be conducted between CED and end of the closure period, as agreed by MCC;
- A budget estimate for the completion of any M&E activities to be funded by MCC or another non-MCA entity within the partner government; 9
- Designated representatives that will serve as the primary points of contact for any M&E-related obligations of the Government that extend beyond the closure date and their responsibilities; and
- The process and timeline for developing the Post Compact M&E Plan.
- MCC and MCA, along with the designated representative for Post Compact M&E if appropriate, will develop, in conjunction with the PCP and within 90 days after the CED, a Post Compact M&E Plan designed to track the sustainability of benefits created under the compact. This plan should describe ongoing and future monitoring and evaluation activities, identify the individuals and organizations that would undertake these activities, provide a budget framework for future monitoring and evaluation which draws upon both MCC and country resources, and document the role the partner country will play in results dissemination. See the Guidance for Post Compact M&E Plans for more detail.
- MCC requires submission of an Annual Summary Report, including a post compact ITT as appropriate, as a mechanism by which former compact governments can inform MCC of ongoing progress of compact projects and activities, and sectoral or institutional reforms as a product of evaluation lessons learned. Typically, MCC requires submission of the Annual Summary Report for five years post-compact, but the time period can be changed, with approval by MCC M&E, to align with specific requirements for individual compacts.
Evaluations
While good program monitoring is essential for program management, it is not sufficient for assessing expected program results. Therefore, MCC and MCAs will use different types of evaluations as complementary tools to better understand the effectiveness of their programs. MCC and MCAs are committed to making the evaluations as rigorous as warranted in order to understand the causal impacts of a program on its expected outcomes and to assess the cost effectiveness of the program. This evaluation component contains three types of evaluation activities: (i) independent evaluations (impact and/or performance evaluations); (ii) self-evaluations; and (iii) special studies, each of which is further described below.
Independent Evaluations
- Every project in a compact or threshold program must undergo a comprehensive, independent evaluation (impact and/or performance). 10 MCC’s independent evaluations are conducted by professional researchers selected through a competitive process. MCC’s use of independent, reputable professionals is intended to produce unbiased assessments of the activities being studied.
- Every standalone investment within a project must be evaluated if possible and if cost-effective. If a standalone investment will not be evaluated, the MCC M&E lead must provide a justification for this decision. To the extent that investments contribute to a common set of outcomes, they should be evaluated together.
- The M&E Plan should include a section describing each evaluation, including its purpose, methodology, and timeline, the process for data collection and analysis, and the approval process. All independent evaluations must be designed and implemented by independent, third-party evaluators, which are hired by MCC. If the MCA wishes to engage an evaluator, the engagement will be subject to the prior written approval of MCC. Contract terms must ensure the independence of the evaluator; the publication of the design reports, data collection instruments, and baseline, interim, and final evaluation reports; and the availability of the underlying data.
- For each evaluation, the independent evaluator(s) shall have the appropriate methodological and subject matter expertise to conduct the evaluation, as they are responsible for the overall design, implementation, and dissemination of the evaluation. MCC is responsible for oversight of the independent evaluator and quality control of evaluation activities. The MCA is responsible for building local ownership and commitment to the evaluation, oversight of the data collection firm, and quality control of evaluation activities. Specific responsibilities of each party are described in the Evaluation Management and Review Process guidance.
- To ensure that proposed evaluation activities are feasible and that final evaluation products are technically and factually accurate, the MCA and relevant stakeholders are expected to review and provide feedback to the independent evaluators on the evaluation design report(s), evaluation materials (including questionnaires), the baseline report (if applicable), and any interim/final reports for each independent evaluation.
Final Evaluation Objectives
Final evaluations support three objectives derived from MCC’s core principles: accountability, transparency and learning. Accountability refers to MCC and MCAs’ obligations to report on their activities and attributable outcomes and accept responsibility for them. Transparency refers to disclosing the findings in a public and transparent manner. Learning refers to improving the understanding of the causal relationships between interventions and changes in intermediate outcomes, poverty, and incomes.
Evaluation Approaches
- MCC advances the objectives of accountability and learning by selecting from a range of independent evaluation approaches. MCC currently distinguishes between two types of evaluations, impact and performance evaluations, as defined below. For accountability reasons, each project should have, at the minimum, an independent performance evaluation.
- Impact Evaluation – A study that measures the changes in income and/or other aspects of well-being that are attributable to a defined intervention. Impact evaluations require a credible and rigorously defined counterfactual, which estimates what would have happened to the beneficiaries absent the project (a minimal illustrative expression follows this list). Estimated impacts, when contrasted with total related costs, provide an assessment of the intervention’s cost-effectiveness. 11
- Performance Evaluation – A study that seeks to answer descriptive questions, such as: what the objectives of a particular project or program were; what the project or program has achieved; how it has been implemented; how it is perceived and valued; whether expected results are occurring and are sustainable; and other questions that are pertinent to program design, management, and operational decision making. MCC’s performance evaluations also address questions of program impact and cost-effectiveness. 12
- MCC balances expected accountability and learning benefits with projected evaluation costs to determine which type of evaluation approach is appropriate to implement. Impact evaluations are performed when their costs are warranted by the expected accountability and learning.
- For all pilot programs, impact evaluations are required before those programs are replicated, unless an impact evaluation is inappropriate or impracticable and a written justification is provided explaining the decision. In those cases, a performance evaluation is required.
- Specific guidelines and standards for the selection, preparation, review, and dissemination of performance and impact evaluations are issued by MCC in the Evaluation Management and Review Process guidance.
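As a minimal sketch of the counterfactual logic described above, and not a prescribed estimator, an impact evaluation’s estimate can be expressed as the difference between the observed outcome for beneficiaries and the estimated counterfactual outcome for the same population absent the project:

```latex
% Illustrative expression only; the counterfactual term must be estimated
% using a credible identification strategy chosen by the independent evaluator.
\widehat{\mathrm{Impact}} \;=\; \overline{Y}_{\mathrm{beneficiaries}} \;-\; \overline{Y}_{\mathrm{counterfactual}}
```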
Key Outcomes
MCC evaluations identify the effects of MCC’s investment on outcomes for households and firms in the partner country, especially on the stated objective of the compact and projects. Of particular importance are effects on household-level and intra-household material well-being, measured in terms of consumption or income, and firms’ net income. MCC evaluations may also include other outcome measures of well-being, such as physical and human capital assets.
Use of qualitative data collection and research methods
- Qualitative methods should be used where practical and applicable to strengthen evaluations. For example, qualitative methods can be used to improve survey design and implementation by helping to identify what concepts to include in quantitative surveys, create lists of potential responses for questions, and identify populations and questions to capture heterogeneous impacts. Qualitative methods can also provide a deeper understanding of data from quantitative surveys, including understanding surprising results or clarifying how respondents interpreted subjective concepts.
- Qualitative methods may also be used as independent evaluation tools to understand hard-to-quantify outcomes such as “quality” or “trust”; or to test the validity of program logics and their underlying assumptions, including but not limited to explaining why causal mechanisms did or did not materialize and why beneficiaries did or did not change their behavior. Wherever used, M&E staff should ensure that qualitative methods are appropriate and add value to evaluation or monitoring design, and align with established and accepted standards of methodological rigor.
Cost Effectiveness
MCC requires that an independent entity review the MCC cost-benefit analysis after the end of the intervention (project, activity, etc. for which the ERR was calculated). In addition, key parameters relevant to the cost-benefit analysis should be measured by the independent entity, unless it is not cost effective to do so.
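Consistent with the definition of the Economic Rate of Return in the Key Definitions section, the ERR reviewed here can be restated as the discount rate at which discounted benefits equal discounted costs. The expression below is a restatement of that definition, not a prescribed model specification:

```latex
% B_t and C_t are the economic benefits and costs in year t over an analysis
% horizon of T years; the ERR is the rate r that solves:
\sum_{t=0}^{T} \frac{B_t}{(1+r)^{t}} \;=\; \sum_{t=0}^{T} \frac{C_t}{(1+r)^{t}}
```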
Disaggregation
MCC evaluations should strive to disaggregate results by sex, age, and income whenever possible and cost-effective. In particular, MCC evaluations should strive to disaggregate results by individuals’ baseline level of poverty relative to the World Bank’s current poverty line measure.
Reports
- The evaluator, under guidance of the MCC M&E Lead, will lead development of the evaluation design report(s), baseline report(s), and interim and/or final evaluation report(s) in consultation with MCC EMC members, MCA M&E and Sector Leads, implementing entities, and other relevant stakeholders. While the evaluator must consult with the above stakeholders, the evaluator is ultimately responsible for producing an independent, unbiased evaluation of the intervention.
- Prior to developing the evaluation design report(s), the evaluator will review and assess the Evaluation Plan for completeness and consistency. The evaluator will document any gaps or reasons for deviation from the Evaluation Plan in the evaluation design report(s). This ensures the final evaluation design produces an independent, unbiased evaluation of the MCA intervention(s). The content of the evaluation design report will vary depending on the evaluator and program; however, it should follow the MCC Evaluation Design Report Template, which requires a description of methods and data sources.
- The evaluator will present the evaluation design report to the EMC to assess for (i) technical rigor, (ii) agency and policy relevance, (iii) operational risks and risk mitigation strategies, including necessary coordination between the Department of Compact Operations (DCO) and the Department of Policy and Evaluation (DPE), and (iv) local stakeholder commitment. The evaluator will document all responses to MCC comments (Annex 1 of the report). If deemed appropriate by MCC M&E management, evaluation design reports may be subject to peer review.
- Within 12 months of the completion of baseline data collection, 13 the evaluator will produce a baseline report. The content of the baseline report will vary depending on the evaluator and evaluation design report.
- Within 12 months after follow-up data collection, the evaluator will then produce a final 14 evaluation report for the project. The content of the evaluation report will vary depending on the evaluator and program; however, the evaluator should follow MCC’s Final Evaluation Report Template. The evaluator is encouraged to consult with the MCC EMC, MCA M&E and Sector Leads, and any necessary local stakeholders on the development of the final evaluation report. As with all previous key evaluation deliverables, the evaluator is responsible for sharing the evaluation (interim and/or final) report(s) with the MCA and any local stakeholders for review and comment prior to formal submission to MCC.
- At each point in the process, the MCC M&E Lead will use the Evaluation Risk Assessment to determine if any revisions are required to the Evaluation Plan.
Review
Evaluation reports are subject to internal MCC review before being considered final. However, given MCC’s commitment to having independent assessments of program impact, its internal review is limited to ensuring factual accuracy and a technically valid methodology and research protocol. MCC may choose to subject an independent evaluation report to an external peer review. Peer reviews will be conducted by independent, internationally credible institutions or individuals, with terms of payment that protect the independence of the review.
Self Evaluation
Upon completion of each compact program, the MCA will prepare a Compact Completion Report (CCR), which provides the MCA’s perspective on compact performance and lessons learned during implementation. The CCR should be prepared collaboratively by all relevant MCA staff, with the MCA M&E unit providing necessary input and helping to document results captured through monitoring and evaluation efforts. MCC M&E should provide input as requested.
Special Studies
Either MCC or the Government may request special studies or ad hoc evaluations of projects, activities, or the program as a whole prior to the expiration of the compact.
Public Dissemination of M&E Data
Public Dissemination of Monitoring Results
- Monitoring data are updated on the MCC website on a quarterly basis, at both the compact and sector levels.
- Compact level results are published in a table of Key Performance Indicators, which is updated quarterly for each open compact to reflect progress towards achieving relevant compact-specific process, output, and outcome indicator targets. The key performance indicators are a subset of the full Indicator Tracking Table. Details of how and when key performance indicators are selected are available in the Quarterly Results Report Guidance Document.
- Sector level results will be made available through Results by Sector reports, which are updated quarterly and aggregate across compacts all data available for the common indicators in each of the common sectors. The sectors and common indicators that are routinely reported on are outlined in the Guidance on Common Indicators.
Public Dissemination of Evaluation Results
- All independent evaluation reports are publicly available and posted to the Evaluation Catalog on the MCC website to ensure transparency and accountability. In addition, evaluation reports are accompanied by a summary of findings, which summarizes the key components of the evaluated program, the program logic and accompanying assumptions, monitoring indicators and results, evaluation questions and findings, and key lessons learned by MCC resulting from program implementation and evaluation findings.
- Each evaluation has its own Evaluation Catalog entry, which includes a description of methods, key findings, and lessons learned. MCC expects to make each interim and final evaluation report publicly available as soon as practical after receiving the draft report. When applicable, MCC will also publicly post statements regarding any significant unresolved differences of opinion between the evaluator and stakeholders.
- The Evaluation Catalog also contains microdata generated in the design, implementation, and evaluation of the compact and threshold programs. All public data sets are approved by MCC’s Disclosure Review Board (DRB), which was established to protect the rights and privacy of individual respondents to MCC-funded surveys. MCC requires public use data files to be free of personal or geographic identifiers that would permit unassisted identification of individual respondents or their household members, and to exclude variables that introduce reasonable risks of deductive disclosure of the identity of individual subjects.
- For public release of data, MCC has three primary objectives:
- Maximizing replicability. To enable any stakeholder, researcher, or agency to understand the source data and analysis behind MCC evaluations and investments.
- Maximizing usability. MCC recognizes the value of data generated through its projects and investments; public access to MCC-financed data can stimulate a wide range of policy-relevant research, maximizing the benefits of MCC’s investments in large-scale data collection efforts in developing countries.
- Ensuring confidentiality of respondents. The previous two objectives must be balanced with obligations to protect the confidentiality of survey respondents who are crucial to the production of microdata. There are two main forms of risk to the respondents: (i) risk of loss of confidentiality of participation in the survey, and (ii) risk of loss of confidentiality of personally identifiable information and other sensitive data. The informed consent should set forth the level of confidentiality promised to respondents and should generally not preclude the dissemination of anonymized data as approved by the MCC DRB.
- For more information, see MCC’s Evaluation Microdata and De-Identification Guidelines.
- The following materials are released publicly in the Evaluation Catalog:
- Metadata
- Questionnaires
- Public Use Data
- Evaluation Design Reports
- Baseline Reports
- Interim Reports
- Final Evaluation Report Packages
- Final Evaluation Report
- Summary of Findings
- MCC Response
- Partner Government Response (if any)
- Peer Reviews (if relevant)
- Other study documents as appropriate.
General Provisions
Data Quality and Data Quality Review
M&E data are the key source of information on progress towards the achievement of program results and support decision making by program managers. Ensuring that the underlying data are of good quality is essential to maintaining a high level of confidence in the decisions made using those data. In addition to the formal Data Quality Reviews described below, MCC M&E staff conduct site visits to all compacts and threshold programs to provide technical guidance and support to partner country counterparts in developing and implementing M&E Plans, and to review M&E data to ensure quality and reliability.
Purpose of a Data Quality Review
- A Data Quality Review (DQR) is a mechanism to review and analyze the quality and utility of performance information. This should be done in addition to the standard data quality requirements an MCA must follow, as referenced in Section 7.2.7. DQRs cover a) quality of data, b) data collection instruments, c) survey sampling methodology, d) data collection procedures, e) data entry, storage and retrieval processes, f) data manipulation and analyses and g) data dissemination. DQRs also will identify key issues or problems and mitigation measures to correct them.
- At least one DQR is required for each compact and, at a minimum, must include all common indicators from the M&E Plan.
Conducting a Data Quality Review
- MCC requires that DQRs be conducted by an independent entity, such as a local or international specialized firm or research organization, or an individual consultant, depending on the size of the program or project in review. MCAs are responsible for selecting, awarding and administering DQR contracts in accordance with MCC’s Program Procurement Guidelines.
- A DQR should review data against the standards laid out in Section 7.2.7 of this policy. An M&E Plan will specify which data from the plan will be included in the review and when. Depending on the data, the review could take place ex-ante, simultaneously, or after the data have been reported.
- The anticipated frequency and timing of Data Quality Reviews must be set forth in the M&E Plan; however, MCC may request a DQR at any time. DQRs should be timed to occur before or early enough in the program term that meaningful remedial measures (if any) may be taken depending on the results of the review. If survey data quality has been reviewed by an independent evaluator to MCC’s satisfaction, then an additional MCA-contracted DQR for the relevant compact or threshold program is not required.
- The methodology for the review should include a mix of document and record reviews, site visits, key informant interviews, and, where appropriate, focus groups or data analysis.
Documentation and Follow-up
- Each review will be thoroughly documented in a report that will describe any weaknesses found in the a) data collection instruments, b) data sampling and/or collection methods, c) handling and processing of data by responsible entities, or d) reporting procedures. The report should also make recommendations for remedying those weaknesses where possible. Where a remedy is not possible or cost-effective, the report should identify replacement indicators or data sources that would be more accurate and efficient.
- The MCA’s comments on the DQR, including which recommendations will be implemented, and the MCA action plan will be attached to the final DQR report and made publicly available on the MCA’s website. MCA comments must be submitted in English and reviewed and supplemented as necessary by MCC. MCAs are responsible for ensuring that MCC-approved DQR recommendations are followed up on and implemented.
Evaluation risk reviews
MCC M&E management holds semi-annual evaluation risk reviews for each compact and threshold program. The purpose of these reviews is to discuss the progress of and identify risks to evaluations. The MCC M&E lead should complete the Evaluation Risk Assessment checklist prior to the meeting to facilitate and inform the discussion. The reviews should include MCC M&E management, the relevant MCC evaluation support personnel, and the MCC M&E country team. Relevant personnel from RCM, DCO and EA should also be invited to participate.
Tracking of evaluation recommendations and findings
As a learning institution, MCC Sector and M&E staff track the application of findings and recommendations from evaluations in the preparation and implementation of interventions in similar and related fields. MCC staff should incorporate learning from previous project design and implementation when developing new projects. Each Investment Memo will include a section on how lessons have been reflected in the design and implementation of new interventions. The Investment Management Committee will verify that this section is complete and relevant to the new interventions proposed.
Training
MCC invests in the training of agency staff in monitoring and evaluation methods through internal training activities or, when appropriate, through external opportunities. MCC develops training curricula and evaluation tools that have wide application across MCC’s portfolio, as well as curricula and tools designed specifically for practice groups, country areas, and other knowledge networks. MCC training activities also include discussions of the sectoral and operational lessons generated through the evaluation of MCC projects, as well as lessons generated through the evaluations of other development agencies. In carrying out these activities, MCC emphasizes collaboration between technical and operational units across MCC.
Coordination with other entities
- When possible and practical, MCC and MCAs will coordinate data collection and evaluation activities with other agencies or entities with complementary goals to reduce duplication of efforts and increase operational efficiency. Additionally, as described in the Evaluation Management and Review Process guidance, stakeholder engagement and collaboration are integrated throughout the evaluation management and review process. Standards and mechanisms for ensuring data quality must comply with MCC standards, which will be determined by MCC M&E staff using the guidance contained in Section 7.2.7 and the Evaluation Management and Review Process guidance.
- To the extent consistent with these guidelines and any applicable additional guidance issued by MCC, MCC-financed programs will be developed and implemented in a manner consistent with the monitoring and evaluation standards set forth in this policy and as determined by MCC M&E staff. MCC seeks to ensure, through its due diligence and implementation oversight efforts, that the compact and threshold program activities it finances are implemented in accordance with the requirements of this M&E Policy. MCC will only support compact and threshold program activities that are expected to meet the requirements of this M&E Policy within a prescribed timeframe.
Effectiveness
This policy becomes effective on the date it is approved and supersedes all previous versions. Compacts and threshold programs signed before the effective date are not subject to the M&E Plan and ITT requirements in this policy.
Amendments To This Policy
This policy may be amended by MCC from time to time. Such amendments will apply to MCAs and threshold programs upon prior notice.
Conclusion
MCC’s ability to fulfill commitments to transparency, accountability, and learning depends on embedding evaluation practices throughout the organization. No single policy can anticipate and provide detailed guidance for the diverse set of MCC Programs and contexts, and MCC relies on several other guidance documents to ensure that rigorous practices are applied as a standard across the agency. However, this policy seeks to establish the roles and responsibilities, and the key expectations regarding the design, conduct, dissemination, and use of monitoring and evaluation.
Annex I: Contents of the M&E Plan
The M&E Plan must contain the following elements (order below is required):
- Preamble
- List of Acronyms
- Compact and Objective Overview
  - Introduction
  - Program Logic
  - Projected Economic Benefits
  - Program Beneficiaries
- Monitoring Component
  - Summary of Monitoring Strategy
  - Data Quality Reviews
  - Standard Reporting Requirements
  - Reporting to MCC: Quarterly Disbursement Request Package
- Evaluation Component
  - Summary of Evaluation Strategy
  - Specific Evaluation Plans
- Implementation and Management of M&E
  - Responsibilities
  - MCA Management Information System for Monitoring and Evaluation
  - Review and Revision of the M&E Plan
- M&E Budget
- Other
- Annex I: Indicator Documentation Table
- Annex II: Table of Indicator Baselines and Targets
- Annex III: Modifications to the M&E Plan
- Additional Annexes
Annex II: The Compact M&E Summary
All compacts include a description of the Monitoring & Evaluation Plan (referred to herein as the Compact M&E Summary 15 ), which represents the negotiated legal agreement between the country government and MCC on broad M&E issues. Specifically, the Compact M&E Summary must include:
- A summary of the program logic, including the goal, and expected outcomes;
- The number of expected beneficiaries by project, to the extent applicable, defined in accordance with MCC’s Guidelines for Economic and Beneficiary Analysis;
- A select number of indicators, drawn from the variables in the economic analysis and the broader program logic, at the goal and outcome levels with their definitions, baselines, milestones, and/or (final) targets 16 ;
- Output indicators when possible with their definitions, baselines, milestones and targets;
- The results of the completeness index, conducted in accordance with the Guidance for Creating a Completeness Index;
- The results of the evaluability assessment, conducted in accordance with the Project Evaluability Assessment (PEA) Tool;
- General requirements for data collection, reporting, and data quality reviews;
- The specific requirements for evaluation of every project and a brief description of the proposed methods that will be used;
- A brief description of other components of the M&E Plan (such as M&E costs and assumptions and risks);
- Requirements for the implementation of the M&E Plan, including information management and MCA responsibilities; and
- A timeline for any results that are expected after Year 5. The Compact M&E Summary must express the intent of both parties to continue the monitoring and evaluation of compact results beyond Year 5, including the development of a post compact M&E Plan and identification of a post compact M&E counterpart. The source of funds for post compact M&E work must also be identified.
The Compact M&E Summary indicators are typically not changed in developing the full M&E Plan. However, if it is necessary to make changes, those modifications must follow the policy for revising M&E Plans found in Annex IV.
Annex III: Technical Information on Indicators
Types of Indicators
At MCC, indicators are separated into the following types or levels:
- Process Indicators: These indicators measure progress toward the completion of programs (projects/activities/sub-activities). They are a precondition for the achievement of output indicators and a means to ascertain that the work plan is proceeding on time.
- Output Indicators: These indicators directly measure the outputs of programs (projects/activities/sub-activities). They describe and quantify the goods and services produced directly by the implementation of a program.
- Outcome Indicators: These indicators measure the intermediate effects of an activity or set of activities and are directly related through the program logic to the output indicators.
- Goal Indicators: These indicators measure the economic growth and poverty reduction that occur during or after implementation of the program. For compacts, goal indicators will typically be a direct measure of local income when possible. If it is not possible to measure income directly, the goal indicators must be directly linked to project outcomes.
- Lower-level indicators (process and output) come from project and activity work plans. These indicators are useful for project and activity level management and help to track implementation progress. The process indicators included in the M&E Plan should be limited in number.
- Higher-level indicators (outcome and goal) are typically but not exclusively drawn from the benefit streams in the ERR analysis and help to demonstrate program results over time.
All indicators should have a specified unit of measurement, which must align with MCC’s approved list of units of measurement included in the Indicator Tracking Table Guidance. Units may be added to this list at the request of an MCA if necessary, but they will be subject to MCC approval.
Indicator Classifications
Indicators must be classified as one of the three following types of indicators:
- Cumulative: These indicators report a running total, so that each reported actual data point includes the previously reported actual and adds any progress made since the last reporting period. Example: If there are 1,000 farmers trained through Quarter 9 and 200 are trained in Quarter 10, the reported value for Quarter 10 is 1,200.
- Level: These indicators track trends over time, and may fluctuate up or down depending on performance. Example: Percentage of households with electricity in Zone 1. Each year, the value could go up or down depending on the number of households in Zone 1 and the number of households with electricity connections. Therefore, the reported values may be 50% for Year 1 and then 48% for Year 2 and then 52% for Year 3.
- Date: These indicators use calendar dates instead of numbers as targets and reported actuals. The unit for date indicators will always be “Date.”
Indicator Inputs
Some indicators are composites of multiple variables, such as percentages for which the indicator value is calculated using at least two pieces of information. Each MCC M&E Lead will decide with their MCA counterparts which indicator inputs should be reported in the ITT. At the very least, the numerators and denominators of all ratio and percentage indicators must be reported as inputs.
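To make these calculations concrete, the following is a minimal illustrative sketch in Python; the policy does not prescribe any particular tool or system, and the function names, variable names, and example figures below are hypothetical, apart from the farmer-training example taken from the indicator classifications above.

```python
# Illustrative sketch only; names and structure are hypothetical, not an MCC system.

def percentage_indicator(numerator: float, denominator: float) -> float:
    """Derive a percentage indicator value from its two reported inputs."""
    if denominator == 0:
        raise ValueError("Denominator input must be non-zero.")
    return 100.0 * numerator / denominator

def cumulative_actual(previous_total: int, progress_this_period: int) -> int:
    """Cumulative indicators add the current period's progress to the prior running total."""
    return previous_total + progress_this_period

# Example from the policy text: 1,000 farmers trained through Quarter 9 and
# 200 trained in Quarter 10 -> reported Quarter 10 value is 1,200.
print(cumulative_actual(1000, 200))      # 1200

# Hypothetical percentage indicator: 480 households with electricity out of
# 1,000 households in Zone 1 -> reported value is 48.0 (a "level" indicator).
print(percentage_indicator(480, 1000))   # 48.0
```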
Annex IV: Technical Information: Modifying M&E Plans
Modifying Indicators
Indicators in the M&E Plan can be modified as follows:
- A new indicator may be added;
- An existing indicator may be removed; or
- A descriptive quality of an existing indicator may be changed such as the definition, source, frequency, etc.
An indicator may be added only for the following reasons: 17
- Change to the program, project or activity scope that results in a new indicator being relevant;
- Recalculation of the ERR such that a new indicator is now relevant (e.g., a new benefit stream has been added);
- Existing indicators do not sufficiently meet the “adequacy” criteria for indicators (i.e., taken together, the existing indicators are insufficient to adequately measure progress towards results);
- New issues emerge, suggesting the importance of a new indicator;
- MCC requires that a new common indicator be used for measurement across all projects of a certain type; or
- The unit of measurement of an indicator is changed.
All new or changed indicators should comply with the criteria for selecting indicators found in Section 7.2.7.
An indicator may be removed only for the following reasons:
- Changes to the program, project or activity scope that render an indicator irrelevant;
- Recalculation of the ERR such that an indicator is no longer relevant (e.g., no longer in the benefit stream or assumptions);
- The cost (in terms of time and/or money) of collecting the data for an indicator outweighs its usefulness;
- An indicator’s quality is determined to be poorer than initially thought when the indicator was selected for inclusion in the plan;
- An indicator has been added that is deemed to be a superior way of measuring the same variable; or
- The unit of measurement of an indicator is changed.
Modifying Baselines
Baselines may only be modified under the following circumstances:
- New, credible information emerges on existing variables or new variables, such as new survey data that is determined by MCC to be untainted by any activities;
- Changes to the program, project or activity scope; or
- Corrections to erroneous data.
Modifying Milestones and Targets
Milestones and targets for process indicators may be modified when the implementation plan is updated; however, modifications to these indicators should be kept to a minimum. Any modifications must be accompanied by a written justification.
Milestones and targets for goal, outcome and output indicators will be modified only as follows:
- For intermediary milestones, modifications are permitted as long as those modifications do not change the target.
- For targets, modifications are permitted as follows:
- For indicators that are not linked to the ERR, their targets may only be modified under the following circumstances:
- Changes in baseline;
- Changes to the program, project or activity scope as defined in the Policy on the Approval of Modifications to MCC Compact Programs;
- Occurrence of exogenous factors; 18 or
- Corrections to erroneous data.
- Special information for indicators linked to the ERR model: MCC Economic Analysis will analyze modified targets to assess whether they maintain the integrity of the original ERR and whether the change will trigger the Policy on the Approval of Modifications to MCC Compact Programs.
Modifying Beneficiary Numbers
Beneficiary numbers in the M&E Plan must be updated to reflect the most recent estimates from MCC Economic Analysis. Beneficiary numbers in the M&E Plan may only be modified under the following circumstances:
- Changes in baseline;
- Changes to the program, project or activity scope; 19
- Occurrence of exogenous factors; or
- Corrections to erroneous data.
The MCC Economist must also appropriately revise the beneficiary analysis when beneficiary numbers change.
Other Modifications
- Elements of the M&E Plan other than indicators, baselines, milestones, targets, and beneficiary numbers will be updated over time as needed. These types of modifications include, but are not limited to, changes to responsibilities for data collection or modifications to the evaluation plan. All such modifications must be approved by MCC.
- With notice to the MCA, MCC may make non-substantive changes to the M&E Plan as necessary. Non-substantive changes do not affect the data or how it is interpreted. Some examples of non-substantive changes could include revising indicator units to correspond to MCC’s approved list of units of measurement or standardizing indicator names.
Documenting Modifications
- Justification for deleting an indicator, modifying an indicator baseline, milestone, or target, modifying beneficiary information, or making major adjustments to the evaluation plan must be adequately documented in English by the MCA as an annex to the revised M&E Plan. MCAs must use the standard modification template provided by MCC for documenting these modifications.
Approval and Peer Review of M&E Plan Modifications
- M&E Plan modifications for compacts fall into two categories: “major” and “minor.” Major modifications are limited to changes to the program logic, baselines, milestones, targets, and definitions, as well as adding new indicators and retiring existing indicators. All other modifications are considered minor.
- For minor modifications, M&E staff may approve changes without the clearance of additional country team members; however, the country team must be notified of the changes being made and may opt to provide input. Changes to the evaluation plan are considered minor as long as the EMC has already cleared the change (EMC notes can be attached to the M&E Plan for documentation).
- For major modifications, the process is as follows:
- Consultation among the MCA, MCC, and reporting entities to discuss and agree on changes. All technical issues should be resolved at this stage.
- MCC M&E peer review of revised M&E Plan, if requested by MCC M&E Management or MCC M&E Lead.
- Informal review of revised/peer reviewed M&E Plan by MCA and MCC country teams.
- MCA Board of Directors approval of M&E Plan.
- MCC formal review and approval of M&E Plan, in accordance with the approvals matrix below.
| | RCD | Sector | Econ | M&E |
| --- | --- | --- | --- | --- |
| Initial M&E Plan (compact and post compact) | C | C | C | A |
| Major modifications (revisions to program logic, definitions that change the meaning of the indicator, baselines/targets and adding/retiring indicators only) | C | C | C | A |
| Minor modifications (all other revisions) | I | I | I | A |
* I = Informational only, C = Clear, and A = Approval
Acronym List
Acronym | Meaning |
---|---|
CCR | Compact Completion Report |
CED | Compact End Date |
DCO | Department of Compact Operations |
DPE | Department of Policy and Evaluation |
DQR | Data Quality Review |
DRB | Disclosure Review Board |
EIF | Entry into Force |
EMC | Evaluation Management Committee |
ERR | Economic Rate of Return |
ESP | Environmental and Social Performance |
GSI | Gender and Social Inclusion |
ITT | Indicator Tracking Table |
M&E | Monitoring & Evaluation |
MCA | Millennium Challenge Account |
MCC | Millennium Challenge Corporation |
MCC MIS | MCC Management Information System |
PCP | Program Closure Plan |
QDRP | Quarterly Disbursement Request Package |
RCD | Resident Country Director |
RCM | Resident Country Mission |