Evaluation Management Guidance

Footnotes
  • 1. Hereafter, the term “evaluation” can be assumed to mean “independent evaluation,” i.e., an evaluation conducted by a third party that is independent of MCC.
  • 2. Hereafter, the term “evaluator” can be assumed to mean “independent evaluator,” i.e., the third party that MCC has commissioned to conduct an independent evaluation.
  • 3. MCC considers all approaches that rely on a comparison between conditions before and after program implementation to fall under this category, recognizing that Pre-Post methods have many variations, ranging from comparing two rounds of data to conducting time series analysis.
  • 4. MCC considers this category to include retrospective evaluations that draw conclusions about results solely from post-program data. It generally includes qualitative assessments, such as case studies, but may incorporate quantitative data.
  • 5. This methodology has been applied in past MCC evaluations but is expected to be used sparingly going forward. The value of collecting data on a comparison group that cannot be considered a valid counterfactual must be justified.
  • 6. MCC considers this category to reflect results that are modeled, based on existing literature or sector-specific models, rather than directly measured. The Highway Development Model, widely used in roads projects, is an example.
  • 7. This practice has been in place since January 2020.
  • 8. Evaluation plans are documented in the Investment Memo, Compact or Threshold Agreement, and M&E Plan.
  • 9. The terms “final” and “endline” evaluation are synonymous.
  • 10. Contract closeout requires confirming the receipt of all evaluation deliverables and appropriately storing/documenting them in M&E filing systems, the pipeline, and the Evaluation Catalog; paying the final invoice; drafting the final CPARS form; de-obligating any remaining funds; and preparing a final contract modification.
  • 11. This practice has been in place since January 2020.
  • 12. Given that participation on the EMC requires staff time and is crucial to the evaluation quality assurance process, MCC should include EMC participation in Committee members’ performance plans.
  • 13. Examples include Agriculture and Irrigation Activities that include a Land component, and FIT-led Activities that focus on a specific sector (such as Energy, WASH, or Transportation).
  • 14. As of January 2020.
  • 15. Note that this sectoral grouping is intended to capture programs that target FIT outcomes. In many cases, MCC programs have a FIT component or are led by FIT staff but focus on a broader sector, such as Energy or Education. In these cases, the program evaluations fall under the broader sector, not FIT, for purposes of Evaluation Lead oversight.
  • 16. Note that these sectoral groupings differ in some cases from the sectors noted on MCC’s website, e.g. Water includes both WASH and irrigation. M&E’s groupings aim to map more closely to similar theories of change.
  • 17. The Evaluation Matrix is updated periodically to reflect staffing changes.
  • 18. Some evaluations may not have baselines, and some may include one or more midline/interim studies. Any midline/interim reports should follow the same review and clearance procedures as endline/final reports.
  • 19. A TEP is a group that reviews proposals from bidders and rates each one against the requirements in the SOW in order to identify a preferred bidder. It is helpful for a TEP to represent a range of perspectives. At a minimum, the TEP should include the PM, COR, Sector Lead, and Economist.
  • 20. Generally, the latter situation occurs when the program is not considered to be evaluable and no further evaluation work is commissioned.
  • 21. Some evaluations do not collect baseline data. For these evaluations, the evaluator should ensure that the Evaluation Design Report validates the evaluation design using available trend data and other sources to justify the decision to use endline data only.
  • 22. The original EMG stipulated that Steps 1 and 2 be sequenced to ensure that in-country stakeholders had the opportunity to discuss issues with the evaluator before MCC reviewed the evaluation report. The guidance was revised in 2019 to reflect the fact that Step 1 rarely raised significant issues warranting a revision to the report, so the versions of the report going through Steps 1 and 2 were the same. The lack of need for sequenced reviews likely reflects other aspects of the EMG working to align evaluation work with program implementation and to improve quality and accuracy generally.
  • 23. The MCC Response is an official MCC Management statement that confirms MCC acceptance of the Evaluation Report and documents any outstanding differences of opinion between MCC and the evaluator relating to (i) factual and/or (ii) technical issues.
  • 24. Average review lengths based on evaluation pipeline data for completed interim and final reports as of November 2019.
  • 25. The Official Country Response is an official host-country statement that either confirms the host country’s acceptance of the evaluation findings or documents any outstanding differences of opinion between the host country and the evaluator relating to (i) factual and/or (ii) technical issues.
  • 26. The Evaluation Briefs have replaced the Summary of Findings, which had been developed to disseminate findings succinctly, among other aims (including providing details on program logic, program financials, and monitoring indicators, and putting the evaluation in the context of the specific piece of a compact/project being evaluated). The decision to switch to Evaluation Briefs was made in 2018 and was informed by the launch of new MCC products (such as the Star Report) that can serve some of the purposes originally served by the Summary of Findings (e.g. program description and indicator performance).
  • 27. These lessons were previously recorded within the Summary of Findings. The Evaluation Brief’s four-page structure does not accommodate the full set of lessons (which often run over a page), so MCC Learning is now its own document within the interim/final report package. The full lessons (or a subset) should be summarized in the Evaluation Brief.