Guidance: MCC Guidelines for Transparent, Reproducible, and Ethical Data and Documentation (TREDD)
March 2020

Background

Accountability, Transparency, and Learning at MCC

MCC is committed to an evidence-based approach for promoting poverty reduction through economic growth. Its results framework seeks to measure and report on the outputs and outcomes of MCC investments. In particular, MCC’s Monitoring and Evaluation (M&E) Policy (https://www.mcc.gov/resources/doc/policy-for-monitoring-and-evaluation) is built on the principles of accountability, transparency, and learning:

  • Accountability refers to MCC’s commitment to report on and accept responsibility for the results of MCC-funded activities.
  • Transparency refers to MCC’s commitment to disclose M&E findings in a complete and public manner.
  • Learning refers to MCC’s commitment to improving the understanding of the causal relationships and effects of its interventions, particularly in terms of poverty reduction and economic growth, and to facilitating the integration of M&E findings in the design, implementation, analysis, and measurement of current and future interventions.

In 2013, MCC launched the Evaluation Catalog (https://data.mcc.gov/evaluations/index.php/catalog/) to operationalize these principles by creating a platform to transparently share:

  • Documentation of the independent evaluation portfolio, including the Evaluation Design Reports, Baseline Reports, Interim/Final Results Reports, and Evaluation Briefs, as well as other corresponding documentation (questionnaires, informed consents, data de-identification worksheets, and Transparency Statements). This documentation supports use of the evaluation findings and data by others who may wish to reproduce or extend the analysis of the original evaluation for additional learning.
  • Data underlying the independent evaluation Interim/Final Results Reports for (i) computational reproducibility and (ii) broader knowledge generation beyond the original evaluation analysis.

While MCC is committed to open data and transparency, MCC has long recognized the need to balance transparency with proper, ethical management of data, in particular data that includes personally identifiable information (PII) and/or sensitive data, in order to minimize the risks of improper data management. The potential risks of improper data management in data activities include:

  • Direct harm to data providers from loss of confidentiality. If intruders or other unauthorized individuals obtain PII or sensitive information that is linkable to the data provider, there is a risk that this disclosure could be used to harm and/or exploit the data provider. For example, if a survey covers financial inclusion services and survey participants are identified as loan recipients, with the loan amounts linked to their PII, a loss of confidentiality could result in these individuals, or their households, family members, and friends, becoming targets for financial extortion.
  • Reputational harm to data handlers. Survey firms, independent evaluators, research assistants, and principal investigators all risk loss of reputation if they do not adhere to best practices in ethical data and documentation sharing.
  • Reputational harm to MCC and its country partners. MCC and the institutions of its country partners could suffer loss of reputation if data and documentation sharing is considered unethical by other governing bodies, taxpayers, and other relevant stakeholders.

For MCC data activities,[[To date, this has mostly related to independent evaluation-related data, but may also include economic analysis surveys, due diligence studies, and other studies informing operations.]] these commitments to transparency, reproducibility, and ethical data management create the need for careful consideration of data management and sharing practices. For this purpose, MCC established the MCC Data Management Guidelines in 2012 to inform proper management of data activities. These TREDD guidelines, effective as of February 21, 2020, supersede and replace all previous versions of the MCC Data Management Guidelines.[[These guidelines may be revised and updated from time to time, and such revisions will be promptly posted on the MCC website. If the guidelines are updated during the course of an evaluation or contract term, staff and contractors should apply the most recent, approved version to their work to the extent possible.]]

Alignment with USG Federal Data Strategy

The mission of the US Government Federal Data Strategy[[Information available at https://strategy.data.gov/.]] is to fully leverage the value of federal data for mission, service, and the public good by guiding the Federal Government in practicing ethical governance, conscious design, and a learning culture. MCC’s TREDD approach reflects the principles of this strategy, which include:

Ethical Governance

  1. Uphold Ethics: Monitor and assess the implications of federal data practices for the public. Design checks and balances to protect and serve the public good.
  2. Exercise Responsibility: Practice effective data stewardship and governance. Employ sound data security practices, protect individual privacy, maintain promised confidentiality, and ensure appropriate access and use.
  3. Promote Transparency: Articulate the purposes and uses of federal data to engender public trust. Comprehensively document processes and products to inform data providers and users.

Conscious Design

  1. Ensure Relevance: Protect the quality and integrity of the data. Validate that data are appropriate, accurate, objective, accessible, useful, understandable, and timely.
  2. Harness Existing Data: Identify data needs to inform priority research and policy questions; reuse data if possible and acquire additional data if needed.
  3. Anticipate Future Uses: Create data thoughtfully, considering fitness for use by others; plan for reuse and build in interoperability from the start.
  4. Demonstrate Responsiveness: Improve data collection, analysis, and dissemination with ongoing input from users and stakeholders. The feedback process is cyclical; establish a baseline, gain support, collaborate, and refine continuously.

Learning Culture

  1. Invest in Learning: Promote a culture of continuous and collaborative learning with and about data through ongoing investment in data infrastructure and human resources.
  2. Develop Data Leaders: Cultivate data leadership at all levels of the federal workforce by investing in training and development about the value of data for mission, service, and the public good.
  3. Practice Accountability: Assign responsibility, audit data practices, document and learn from results, and make needed changes.

Alignment with Scientific Community

MCC’s independent evaluations are designed and implemented using research methods from across the social sciences, particularly economics, political science, and other behavioral sciences. MCC’s TREDD practices also align with calls for more transparency in the social sciences to mitigate potential threats to the credibility and integrity of research findings (Miguel et al. 2014). Table 2 provides an overview of the main known threats to the credibility and integrity of research and the TREDD practices MCC uses to mitigate those threats.

An additional threat to the credibility and integrity of MCC-funded independent evaluations is influence, whether actual or perceived, by MCC over its contractors to focus only on positive results of MCC’s investments. MCC’s TREDD practices discussed in Table 2 are therefore intended not only to mitigate p-hacking and publication bias driven by researcher and journal practices, but also to protect contractor independence and thereby maintain the credibility and integrity of the evaluation design, implementation, and analysis.

Table 2: MCC’s practices to mitigate threats to credibility and independence of independent evaluations[[Descriptions and key references adapted from Hoces de la Guardia and Sturdy (2018).]]
P-hacking

  • Description: When analysts, intentionally or not, select a subset of the possible analyses in a study based on whether those analyses generate statistically significant results. The main consequence of p-hacking is that it increases the chances of false positives and can produce biased results within a single study and across a body of literature. The problem can be understood as a version of multiple hypothesis testing where the analyst does not know, or does not report, the true number of underlying hypotheses (an illustrative simulation follows this table).
  • Key references: Theoretically outlined in economics by Leamer (1983). Ioannidis (2007) calibrates a model with different levels of p-hacking-type manipulations by researchers (among other components) to argue that most published research is probably false. Brodeur et al. (2016) find evidence of p-hacking in economics using 50,000 tests published in the AER, JPE, and QJE.
  • MCC practice: MCC requires contractors to prepare an Evaluation Design Report (EDR). All evaluation questions and corresponding outcomes listed in the EDR must be reported in the Interim/Final Results Report regardless of positive/negative results or statistical significance. Any changes to the evaluation design must be documented and justified in an annex to the original EDR or a new version of the EDR. All reports must follow MCC’s reporting requirements. Additionally, all comments made by MCC and other stakeholders on Interim/Final Results Reports, and the contractor’s responses, are published alongside the Interim/Final Results Report to mitigate any influence over the contractors to focus on statistically significant, positive findings.

Publication bias

  • Description: Empirical research suffers from publication bias when results in published studies are systematically unrepresentative of conducted studies. The most common manifestation of such bias occurs when studies with statistically significant results have a higher likelihood of being published than studies with null results (a worked example follows this table).
  • Key references: In an analysis of studies in economics, political science, sociology, and psychology that were awarded highly competitive resources by the National Science Foundation, Franco et al. (2014) found that 22% of studies with null results were published, while 61% of those with strong results were published.
  • MCC practice: MCC requires all independent evaluations to be reported in the MCC Evaluation Catalog as soon as an Evaluation Design Report is cleared. This allows the total number of independent evaluations funded by MCC to be publicly known, even if an evaluation is cancelled. Additionally, all Interim/Final Results Reports are published on the MCC Evaluation Catalog regardless of the reported results and regardless of acceptance into a journal. Summaries of all interim/final evaluations (Evaluation Briefs) are also posted to MCC’s main website.

Lack of computational reproducibility

  • Description: Computational reproducibility is the practice of running the same code over the same data and obtaining the same results as those presented in the original reported analysis (a sketch of such a check follows this table).
  • Key references: Gertler et al. (2018) attempted to re-run the analysis code from a sample of 203 empirical papers from leading journals in economics and were able to obtain the same results for 14% of the papers.
  • MCC practice: MCC requires contractors to submit the analysis code and underlying data. The code and data are published on the MCC Evaluation Catalog. If the public or restricted-use data cannot reproduce the analysis (due, for example, to data permutations made to protect confidentiality), the contractor must explain why in the Transparency Statement.
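To illustrate the p-hacking mechanism described above, the following sketch (illustrative only, not part of MCC’s requirements) simulates evaluations in which the treatment has no true effect on any outcome. If only outcomes crossing the conventional p < 0.05 threshold are reported, a large share of these null evaluations appear to have a “significant” finding. The number of evaluations, outcomes, sample sizes, and threshold are arbitrary assumptions.

```python
# Illustrative simulation (not part of the TREDD requirements): when the
# treatment has no true effect, testing many outcomes and keeping only the
# "significant" ones yields spurious findings far more often than the
# nominal 5% error rate for a single pre-specified outcome.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)

N_EVALUATIONS = 1_000  # hypothetical null evaluations (arbitrary)
N_OUTCOMES = 20        # candidate outcomes per evaluation (arbitrary)
N_PER_ARM = 200        # observations per treatment arm (arbitrary)
ALPHA = 0.05           # conventional significance threshold

evaluations_with_false_positive = 0
for _ in range(N_EVALUATIONS):
    # Treatment and control drawn from the same distribution: no real effect.
    treatment = rng.normal(size=(N_OUTCOMES, N_PER_ARM))
    control = rng.normal(size=(N_OUTCOMES, N_PER_ARM))
    result = stats.ttest_ind(treatment, control, axis=1)
    # "P-hacked" reporting: keep only outcomes with p < ALPHA.
    if (result.pvalue < ALPHA).any():
        evaluations_with_false_positive += 1

share = evaluations_with_false_positive / N_EVALUATIONS
print(f"Null evaluations with at least one 'significant' outcome: {share:.0%}")
# Roughly 1 - 0.95**20, about 64%, which is why the EDR must pre-specify
# all evaluation questions and why all of them must be reported.
```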
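The publication bias figures from Franco et al. (2014) cited above can be made concrete with a small worked example. Only the 22% and 61% publication rates come from the cited study; the assumption that conducted studies split evenly between null and strong results is for illustration only.

```python
# Illustrative arithmetic only. The 22% and 61% publication rates are from
# Franco et al. (2014) as cited above; the even split of conducted studies
# between null and strong results is an assumption for illustration.
conducted_null = 100    # hypothetical studies with null results
conducted_strong = 100  # hypothetical studies with strong results

published_null = conducted_null * 0.22
published_strong = conducted_strong * 0.61

share_published = published_strong / (published_null + published_strong)
print("Strong results among conducted studies: 50%")
print(f"Strong results among published studies: {share_published:.0%}")
# About 73%: the published literature looks systematically more "positive"
# than the underlying body of conducted research.
```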
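As a companion to the computational reproducibility row, the sketch below shows one generic way a reviewer could check whether re-running submitted analysis code regenerates the published estimates. The script path, file names, column names, and tolerance are hypothetical; MCC’s actual review procedures and reporting requirements are defined by these guidelines and the Transparency Statement, not by this example.

```python
# Minimal, hypothetical reproducibility check: re-run a submitted analysis
# script and compare the regenerated estimates with the published ones.
# Script path, file names, column names, and tolerance are assumptions.
import subprocess
import pandas as pd

PUBLISHED = "published_results.csv"  # estimates as reported (hypothetical file)
REGENERATED = "output/results.csv"   # file the analysis script writes (hypothetical)
TOLERANCE = 1e-6                     # allowable numerical difference (assumption)

# Step 1: run the submitted analysis code against the archived data.
subprocess.run(["python", "analysis/main.py"], check=True)

# Step 2: compare regenerated estimates with the published ones, row by row.
published = pd.read_csv(PUBLISHED).set_index("estimate_name")
regenerated = pd.read_csv(REGENERATED).set_index("estimate_name")
diff = (published["value"] - regenerated["value"]).abs()
mismatches = diff[diff > TOLERANCE]

if mismatches.empty:
    print("All published estimates were reproduced within tolerance.")
else:
    # Differences may be legitimate (for example, confidentiality edits to
    # public-use data) but should be explained in the Transparency Statement.
    print("Estimates that did not reproduce:")
    print(mismatches)
```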