Poverty Reduction Blog Tag: Data
Posted on October 10, 2014 by Tom Kelly, Acting Vice President for Policy and Evaluation
At this time each year, with the announcement of the results of the Aid Transparency Index (ATI), major aid donors all over the world await their relative rankings from Publish What You Fund’s careful exercise to evaluate the quality of information published to the International Aid Transparency Initiative (IATI). In this regard, MCC is no different. Having ranked #1 in the world in the 2013 Index, we were in the enviable position this year of having no place to go but down!
Because MCC values transparency, we spent the last year making careful improvements to our IATI data. We also worked closely with the State Department’s Foreign Assistance Dashboard to develop a USG XML format based on the IATI standard, allowing MCC’s higher-quality data to be published to the IATI Registry. We worked alongside other donors to figure out how IATI data can be linked with country budget information. We published our own IATI implementation schedule to inform our data users about our data definitions and future publication plans. And we put a great deal of time and thought into how best to represent our results within the IATI data standard. Because of all these efforts – and in spite of fierce competition – MCC scored above 85% and remained among the top three donors worldwide. It hasn’t been easy, and we are proud of the result.
Yet as the ATI enters its fourth year in 2014, we are surveying the broader landscape and are concerned about the performance of the donor community as a whole. We are concerned that so many donors will fall far short of their Busan commitments, and that data quality will therefore not improve to the point where country partners find it useful. Among the 68 donors ranked, only 15 score in the “good” or “very good” categories. The average score in the Index this year is only 39%. Clearly, major progress will have to be made by the end of 2015 to deliver on the promises of Busan.
In this context, people ask us all the time: what’s the way forward? Setting aside some peculiarities that make this easier for MCC (as a young, small agency with transparency in our DNA), here are a few of the things we have found most useful:
- Demonstrate political will and leadership on transparency at all levels – so staff are incentivized to work hard at solving the multitude of problems that will inevitably come up;
- Give a strong mandate to a small team that includes policy, data analyst, technical and finance staff – so that together they can resolve most of the issues and tee up the important points effectively for senior staff decisions;
- Don’t try to build a single system to meet IATI reporting requirements – instead develop a strategy for continual progress. Think through how you can pull data for each of the required fields from existing systems, and use your tech people to link them up into your XML output. Start with the fields where you already collect information and work steadily on improving quality of this data. Then make plans to do what’s required to collect and report on additional information over time;
- Keep talking to stakeholders and data users to better understand – and to stimulate – demand; and finally…
- Open up your data to your own staff – leverage IATI efforts to build more robust internal systems to share and use data. As staff see the benefits to their own work, support for the work of data teams will grow, and internal demand will make the system sustainable.
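The advice above about pulling required fields from existing systems and linking them into XML output can be sketched in a few lines of Python. This is a minimal, hypothetical example: the agency identifier, organisation type code, and project records are invented, and only a couple of fields are shown, though the element names follow the public IATI activity standard:

```python
import xml.etree.ElementTree as ET

# Hypothetical records pulled from an existing internal system.
projects = [
    {"id": "XM-EXAMPLE-001", "title": "Rural Roads Activity"},
    {"id": "XM-EXAMPLE-002", "title": "Power Sector Reform"},
]

def to_iati_xml(records, org_ref="XM-EXAMPLE", org_name="Example Agency"):
    """Map fields we already collect into IATI-style activity XML."""
    root = ET.Element("iati-activities", version="2.01")
    for rec in records:
        act = ET.SubElement(root, "iati-activity")
        ET.SubElement(act, "iati-identifier").text = rec["id"]
        org = ET.SubElement(act, "reporting-org", ref=org_ref, type="10")
        ET.SubElement(org, "narrative").text = org_name
        title = ET.SubElement(act, "title")
        ET.SubElement(title, "narrative").text = rec["title"]
    return ET.tostring(root, encoding="unicode")

xml_out = to_iati_xml(projects)
```

Starting from fields like these that already exist, additional IATI elements (budgets, transactions, results) can be layered in over time as upstream systems mature.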
MCC will soon be launching a Principles into Practice paper detailing these and other lessons from our work on transparency and accountability. We hope many of you will join us in upcoming conversations so that we can learn together how to move this field of practice forward. We believe that it is possible for donors worldwide to jump forward in 2015, and we look forward to doing our part to help to drive the broader agenda for transparency.
Posted on April 28, 2014 by Nathaniel Heller, Executive Director, Global Integrity, and Alicia Phillips Mandaville, Managing Director, Development Policy
Last week, roughly 40 colleagues gathered at the OpenGov Hub in Washington, D.C., to brainstorm and debate around possibilities for a Governance Data Alliance, an idea focused on improving coordination in the production of governance data while simultaneously establishing and strengthening feedback loops between producers and actual users of those data.
The gathering was co-organized by Global Integrity and the Millennium Challenge Corporation and facilitated by the terrific Allen Gunn of Aspiration. The results of a pre-event scoping survey were visualized and mapped by the craftsmen over at Vizzuality. You can check out all of that data over at dataalliance.globalintegrity.org, see pictures of the meeting over on Flickr or read comments from participants on Twitter using #governancedata.
As we talked about publicly before the meeting (on multiple blogs, including here and here), our intent with this quick get-together was to explore whether there was sufficient interest in this diverse and ad hoc group to take an exploratory process forward—and, if so, to identify the big questions we’d need to collectively answer to determine the initial contours of a potential alliance. We did not intend to answer any of these big questions during the two days of the meeting or design any solutions or outcomes. Instead, we were focused entirely on sussing out the major, “Gee, we really need to figure that out” issues.
The good news is that there was a strong consensus to take an exploratory process forward. We also managed to identify a number of core, meaty items that need further unpacking in the coming months if a governance data alliance is to add value—a process we’ll be taking forward through a number of ad hoc working groups. Those working groups are open to anyone interested in being part of the conversation, regardless of whether you attended last week’s meeting. Here’s what we’ll be focused on:
Making our assumptions explicit about how better governance data can lead to improved outcomes (or as Toby Mendel from the Centre for Law and Democracy pointed out, we need a clear and compelling theory of change). We all think that better data on governance can—when the data is used—help improve governance and service delivery outcomes. But we have a variety of views about the ways in which better governance data can lead to improved outcomes. Maybe it’s about policy makers being able to make better-informed decisions; maybe it’s about citizens’ groups being able to hold decision makers accountable; maybe it’s about donors being able to incentivize governance reforms. An essential starting point in working out how a Governance Data Alliance can help is to make explicit the ways in which we think better data can lead to better outcomes. This should enable us to focus more clearly on addressing the challenges and obstacles that sometimes prevent better data leading to better outcomes.
Refining and settling on an initial problem statement(s). During the course of our meeting, we identified a range of problems that a potential Governance Data Alliance could help address. Poor communication between governance data producers (which manifests itself in redundant country coverage and coverage gaps) is one; similarly poor communication between producers and users leading to wasted effort in the production of information no one actually uses is another. While users struggle to gather standardized, machine-readable data, unused zombie governance data and methodology repositories continue to propagate (examples are here, here, here, and here). But which of these (and many other challenges) should we collectively seek to address first (or second)?
Membership. Who might participate in a future alliance from all three cohorts (users, producers and enablers)? How would participation be extended, candidate organizations vetted and cats herded so as to keep the collective a manageable yet very large tent? As Rita Ramalho of the IFC reminded us, each of these cohorts is a robust community in its own right! We owe thanks to John Samuel of Development Studies for asking out loud: how can we avoid letting this process and an eventual alliance be dominated by NGOs and actors from Northern countries?
Governance (how meta!). How would an eventual alliance be governed? Who sets the rules of the game, and where does power and decision making reside? If a staff is needed to take the process forward, who should they be and where should they sit? Vincent Lazatin of the Transparency and Accountability Network in the Philippines put in some heroic work in teeing up these issues.
Just what is “governance data?” While we intentionally parked any debate around the definition of “[good] governance” for a later date, we know we need to resolve this to some degree of satisfaction moving forward. Is “governance data” third-party NGO ratings of government performance? Internal public sector administrative data like accurate counts of birth certificates? Household surveys asking about satisfaction with government service delivery? All of those, or something completely different? Ernst & Young’s Kelly Terrill led a breakout session that made clear there is unmet demand for all different aspects of governance data from non-traditional corners as well.
Producer coordination. There are many ways in which governance data producers can better coordinate and improve efficiencies. But should that start with simple communication and awareness-raising around anticipated coverage patterns or extend more aggressively to shared in-country research teams or streamlined methodologies and question sets? Global Integrity’s Hazel Feigenblatt is already helping to coordinate an initial team across several data producers to begin tackling this.
Tackling the feedback loop problem. We all agreed there was a huge need to establish and nurture better feedback loops between governance data producers and users. Vanessa Tucker of Freedom House spoke about the value of face-to-face meetings with governments interested in “unpacking” the data. But in the vast majority of cases, producers have very little understanding of who actually uses their data and whether their data has any impact in terms of behavioral change. Users typically have little access to producers to share concerns or thoughts for improving methodologies and data samples. Shyaka Anastase from the Rwanda Governance Board highlighted how this disconnect can lead to mistrust. Tackling the feedback loop problem could take a number of forms, from a simple “switchboard” service that connects users with producers (and vice-versa) to a more ambitious model where producers and users are permanently and regularly talking to one another. Where’s the right place to start?
Opportunities to leverage improved governance data. Can we identify key development and political issues and agendas where improved governance data (and its uptake and usage) can impact development and policy outcomes? Jamie Roberto Diaz Palacios from the Guatemalan National Program for Competitiveness pointed out the links between governance data and investor interest as a country specific opportunity. But at a global level, how deeply should a potential alliance dive into discussions around the post-2015 development agenda or the “data revolution?”
Funding. Most of the anticipated activities under an eventual alliance would not be cost-free, even the lowest-hanging fruit. How would we source financial support to operationalize the vision? Is there a healthy role for high-intensity users of governance data to recognize more publicly that governance data doesn’t grow on trees, but rather requires continued and non-trivial investment? The philanthropic funders in the room—Elizabeth Eagen, Mark de la Iglesia, and Subarna Mathes from Open Society Foundations; Libby Haight from Hewlett Foundation; and Laura Bacon from Omidyar Network—were incredibly gracious in engaging in these discussions without awkwardness.
While the above issues will be tackled in the working groups moving forward (coordinated by a coordination “super” group that keeps all of those trains running on time), many others will also be addressed and wrestled with in the coming months. And we need your help and interest.
Very soon, we’ll be putting in place a public mechanism for inviting additional friends and colleagues into this process on completely equal terms. While you’ll have as much influence over the outcomes as anyone else, there’s a catch: you’ll need to put in some real effort and sweat equity, possibly several hours each week. Keep an eye on this blog for updates on that front.
In the interim, if you have interest in plugging into things sooner, just give us a shout at nathaniel [dot] heller [at] globalintegrity [dot] org and mandavilleap [at] mcc [dot] gov. Stella Dawson from Thomson Reuters Foundation, who added a wonderful media practitioner’s perspective to the meeting, has also published a summary piece on the event here. We’ll also be publishing more extensive notes and transcripts from the meeting, so keep an eye out for those as well.
Posted on April 3, 2014 by Alicia Phillips Mandaville, Managing Director of Development Policy
It seems that everyone is talking about how data will shape our global future. It is a beautiful and unprecedented level of enthusiasm for data—bring it on! But bring it practically.
While we are getting excited about what data can do for development, let’s also get excited about what we are finally in a position to do about development data. For too long, we have lacked credible numbers about many of the things we care most about—including comparative data on governance—and now people are finally starting to take note. But what can we actually do? As a first step, MCC and Global Integrity (with support from the Omidyar Network and The William and Flora Hewlett Foundation) are convening a group of global governance data users, producers and funders who are trying to identify collective action solutions to the way the current state of play affects the availability and quality of governance data.
MCC relies on independently produced, third-party data to drive the annual process of selecting country partners for large-scale grant investments in economic growth. We are painfully aware of data gaps, especially with regard to measurements of countries' efforts to fight corruption. For MCC’s purposes, the data on our scorecard remains the best available measure of anti-corruption efforts that covers all low and lower middle income countries. However, no single data set can answer every question—particularly for something as complex as corruption.
At their heart, these data challenges are a collective action problem. Plenty of people want more and better data, but no one is really doing anything about it yet. Until the field of governance measurement as a whole is more coordinated, we will have gaps and overlaps in our collective knowledge. Heck, at this point, we still battle huge gaps and overlaps with respect to what data is available in a machine-readable format (what can I pull into my computer without first tediously rearranging and cleaning enormous spreadsheets?).
The consequences of this are a big deal to MCC, but they extend way beyond us. MCC has experienced firsthand how frustrating it is to want to compare national-level education or health outcomes (like literacy or maternal health at delivery) and find the data lacking. It would be terrible if the thing that undermined global focus on governance issues was inadequate data. So we're trying something new.
In mid-April, MCC and Global Integrity are co-hosting a two-day effort to convene global organizations that rely on governance data (governments and donors), organizations that produce governance data (largely third-party NGOs) and organizations that are working to enable improvements in the quality and availability of governance data (philanthropic foundations and academics). If we can get the people who have a data problem (users) in the same room with the people who can solve the problem (data providers and enablers), maybe we can make some real progress. We've been excited by the enthusiasm we've encountered so far.
What will come of this? We aren’t sure. But no matter what, we'll end with much greater clarity between the users and producers of governance metrics (in part through some pre-event surveys that will be public shortly), and that's no small thing. But could we see a commitment to collectively move governance data into more easily useable spaces? A sort of alliance of actors who create or rely on the quality of the information available? What would that look like anyway, and what would it do?
A first step might be as technically wonky as agreement about the formats that data gets published in. Or a public calendar of when new data is available or being used. And while that sounds unexciting to outsiders, in the long run these are exactly the kinds of first steps that enable governance data to be broadly and regularly used by more than just MCC. Whether you care about measuring, monitoring, ranking, or investing – you have to start with useable data.
This kind of opportunity doesn’t come along all the time—people are talking about and calling for data! And not just some data—global data! At MCC, we will continue to celebrate the fact that people are finally paying attention to the promise and potential of development data. That environment makes us keener than ever to get people focused on practical questions attached to resolving collective action problems in the data world. And, of course, keen to hear your perspective and ideas about next steps and solutions!
Posted on March 12, 2014 by Alicia Phillips Mandaville, Managing Director of Development Policy
Yesterday, the Center for Global Development published a data-savvy critique of MCC’s control of corruption selection indicator. They bring to bear some serious empirical analysis, and after reminding the reader that the indicator is a hard hurdle that acts as the sole difference between passing or failing the MCC scorecard for some countries, they raise a number of tough questions about why we use the data that we do. The authors point to the difficulties in measuring corruption accurately, empirical work that shows weak correlation between corruption and development outcomes and the indicator’s slow, opaque relationship with policy reform efforts—and conclude that MCC should deeply question how it can rely on this data as a hard hurdle.
I love this. Seriously.
In January, I promised I would discuss what constitutes a responsible use of data for development or foreign assistance purposes. This is a perfect opportunity to talk about the most fundamental principle: know thy data.
The CGD paper is constructive because it unpacks what is actually rolled up in the data that we rely on for the corruption hurdle—and it does so objectively and with no assertion that this is particularly unfair to any individual country. Rather, they are talking about fundamental data content and behavior. It's technical and it's detailed. It requires math. It’s the stuff most people would prefer to skip over.
But if decision making about a country rests on that data, and if you care about real progress on the measured issue itself, the math matters.
I have been working with this data for years now, and understanding what is and isn't measured—what annual composite data can and can't tell us about any one country—has been a critical part of building a holistic approach to investigating and briefing MCC’s Board of Directors on anti-corruption and accountable governance in candidate countries. That’s not unique to this dataset. What we do now is something we would need to do for any new or improved indicator measuring corruption or accountability.
Which is another reason I am glad to see this paper: It suggests alternative data sources we could look at and is upfront that none of the suggested data is yet available for every country. That isn't just a problem for us. For MCC to use a data set as a hard hurdle—or for others to seriously consider using a data set to measure progress against global development goals—that data set must actually cover all low and lower middle income countries at a decent (preferably annual) frequency. At present, very few anti-corruption measures or proxies do. That's a subject that—as people debate the possibility of a governance-focused goal on the Post-2015 Development Agenda—the world needs to come back to: Why do we still have the same predictable gaps in governance data? And it's a topic you'll hear more about from us.
In the meantime, we have built a practice around making sure MCC remains a responsible user of development data. If you look at the annual Selection Criteria and Methodology Reports, you will see that the section on supplemental information has grown over time. In 2012, we introduced a public guide to supplemental information that includes reference to country performance on international initiatives (like the Extractive Industries Transparency Initiative or Open Government Partnership) that weren't fully operational when MCC got started. And if you look at our approach to corruption, you will see we've built a thoughtful methodology for tracking corruption concerns.
My colleagues and I sincerely welcome the questions raised by this paper and look forward to participating in the conversation.
Posted on March 4, 2014 by John Underwood, MCC chief economist
MCC watchers pay a lot of attention to how our Board of Directors selects countries. Performance-based selection is one of our signature features—but it’s just the first step in an exacting process that MCC and partner countries undertake before taxpayer money is ever spent in the country. The process isn’t easy, and money doesn’t always flow at the end of it. But as MCC’s chief economist, I see it as a real strength of the institution.
This is what happens after MCC’s Board selects a country as eligible for assistance—based on a commitment to good governance and investing in sound economic and social policies—but before we fund projects:
1. Undertake a joint search for the most likely binding constraints to private investment and economic growth. I lead our team of MCC economists who, together with our partner country colleagues, undertake a constraints analysis. The results, informed by and tested through broad in-country consultations, enable us to jointly select activities that are most likely to promote sustainable poverty-reducing economic growth. The binding constraint in many MCC countries is in infrastructure, particularly transportation and energy. Governance issues are also common. Education comes up in several cases, notably in lower middle income countries: without addressing education quantity and quality, these countries will at best move only slowly up the income scale and struggle to create what people want most, jobs. The table below shows MCC’s country-by-country constraints analysis findings to date:
Along with the constraints analysis, countries conduct a social and gender analysis and look for private sector investment opportunities. Both contribute to the constraints analysis findings. In addition, the social and gender analysis looks for barriers that may inhibit groups from benefiting from the proposed investments. The investment opportunities analysis explores possibilities to directly or indirectly leverage private sector investment. Both provide valuable data for the next step.
2. Identify a program to address one or more binding constraints. The partner country, with MCC collaboration and further in-country consultation, undertakes further work to get at root causes behind the binding constraints to growth. The aim is a coherent program logic that explains how policy and institutional reform and investments will help address the constraint. MCC uses cost-benefit analysis to measure the likely impact of proposed projects. It’s a straightforward comparison of costs and benefits; the costs are the MCC-funded grants and related costs funded by the country or other donors, and the benefits are increases in incomes of the country’s targeted households and firms. MCC analyzes proposals as investments, with payoffs going to households and firms. We only include benefits when there is evidence to support the logic and look at who benefits across the income spectrum.
The cost-benefit tool allows a back and forth between country project teams and MCC to improve the cost-effectiveness of projects, notably by looking for cost savings while retaining the benefits. MCC expects projects to pass a “hurdle rate” of at least a 10 percent expected economic rate of return (ERR). As part of project preparations, the country works with MCC to set out the framework for monitoring and evaluation to help keep projects on track during implementation and for careful independent evaluations after completion.
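The hurdle-rate arithmetic can be illustrated with a toy example (invented cash flows, not an actual MCC analysis): the ERR is the discount rate at which the project's stream of costs and benefits nets out to zero, found here by simple bisection:

```python
def npv(rate, cashflows):
    """Net present value of a stream: year-0 costs negative, later benefits positive."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def err(cashflows, lo=0.0, hi=1.0, tol=1e-6):
    """Economic rate of return: the rate at which NPV crosses zero (bisection)."""
    for _ in range(100):
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid  # still positive: the crossing rate is higher
        else:
            hi = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2

# Hypothetical project: $100M grant up front, $16M/year in income gains for 20 years.
flows = [-100] + [16] * 20
rate = err(flows)
passes_hurdle = rate >= 0.10  # MCC's 10 percent hurdle rate
```

For this made-up project, the ERR works out to roughly 15 percent, comfortably above the 10 percent hurdle; halve the benefit stream and it would fall to around 5 percent and fail.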
The rigorous combination of the constraints analysis, social and gender analysis, investment opportunity analysis, program logic development, project cost-benefit analysis leading to an ERR, and planning for monitoring and evaluation helps ensure that MCC will support countries doing the right things and doing them the right way.
Selection may be the most well-known way we use evidence in our decisions, but the demanding, data-driven project development process is just as much a part of MCC’s DNA. I hope it will get the attention it deserves and ultimately benefit from receiving your input on how it is working.
Thanks to Sandra Ospina and Natalie Kottke for contributing to this post.
Posted on February 20, 2014 by Andria Hayes-Birchler, Senior Development Policy Officer
Hillary Rodham Clinton just launched a global review of data on the advancement of women and girls. The former Secretary of State (and former chair of MCC’s Board of Directors) is using her platform at the Bill, Hillary & Chelsea Clinton Foundation to partner with the Bill & Melinda Gates Foundation on No Ceilings: The Full Participation Project, which aims to gather and analyze data on the progress of women globally. I am thrilled that she is focusing on two issues of importance to MCC—gender parity and data—and hope it paves the way for more and better data across development decision-making.
The project aims to track global progress of women and girls since the 1995 United Nations Conference on Women in Beijing. In the nearly two decades since the conference, have women advanced in education? Are they serving as elected officials more frequently? What about women’s economic participation: Are there fewer women living in poverty? Have women’s wages increased in absolute terms? How about relative to men’s? To answer any of these questions, one needs high-quality data and the capacity to analyze it well, and this is exactly the challenge No Ceilings hopes to tackle.
At MCC, we rely on a huge amount of third-party data for making decisions about which countries we work with, which investments are most likely to lead to economic growth and poverty reduction (and for whom) and for measuring and understanding our results. My colleagues and I are deeply interested in ensuring high-quality data exists and that development stakeholders use that data responsibly. We know how powerful data can be in driving decisions. And we know how frustrating it can be when there isn’t good data or the data is weak.
This new initiative could advance the data-in-development conversation, particularly since it:
- Brings accountability to global promises. In 1995, the world came together and promised to advance women’s empowerment. Without data on women’s literacy rates or the incidence of violence against women, for example, it is impossible to know if there has been progress on these promises. Data helps provide answers.
- Has an eye on post-2015 goals. As the Millennium Development Goals race towards their 2015 target date, the global community will need to come together around new post-2015 goals. By highlighting where progress has (and hasn’t) been made towards women’s empowerment over the past two decades, No Ceilings has the potential to inform where the global community can best focus the next wave of commitments.
- Is likely to serve as a “gap analysis.” Although the project primarily aims to analyze existing data, it is likely to highlight all the areas where data is low-quality (or simply non-existent). By identifying the unmet needs for data, No Ceilings has the potential to inspire fresh efforts at capturing new data, much like MCC’s selection scorecard has helped development stakeholders examine the quality of global policy data over the past decade.
- Uses traditional and non-traditional data sources. With Google in the mix, it is likely No Ceilings will have access to data that hasn’t traditionally been explored by development stakeholders. I look forward to seeing what new data, indicators or ideas come out of the data review and analysis.
More than anything, we know that for women and girls to count in economic development projects, they must be counted. Their progress in education, politics and economics must be counted. And as MCC seeks to reward governments that promote women’s economic participation—and ensure women benefit from MCC compacts—this data is a vital tool for tracking progress. I’m eager to see No Ceilings help us do just that.
Posted on January 15, 2014 by Alicia Phillips Mandaville, Managing Director, Development Policy
The start of a new year seems to prompt an awful lot of writing about how the data revolution will change everything—especially in the developing world. It will be bigger than the industrial revolution. It is already disruptive. And the applications and devices that humans can design to use this data are projected to reduce poverty, liberate people, halt the spread of disease, and alter the state-centric nature of the international system. The more disruptive the better! Vive la Révolution!
It’s easy to get caught up in this, as (full disclosure) I am. The availability of machine-readable, comparable information is already changing people’s lives in very practical ways. Data has even become less nerdy and more exciting to talk about: We can refer to “a disruptive future,” and plenty of people think that future kind of looks like an iPhone. Using technical terms in everyday professional conversations is becoming the norm. But underneath the comfortable arm waving about this bright new future, there are some quiet places that have not seen this change.
At a time when people are waxing eloquent about the power of big data to make consumer goods and services ever more tailored and ever more rapid, the world still lacks reliable, comparable country statistics on basic economic, governance and human development outcomes across much of the developing world. UNICEF estimates that one in three children has not been registered and therefore simply does not exist in statistical terms. Education outcomes are often estimated by models based on five-to-10-year-old data. As a proxy for accountable governance, budget transparency data covers only about half of the more than 190 countries in the world.
And the closer you look, the more you find that even the data we have considered reliable has internal flaws that can make it hard to trust (see Morten Jerven's controversial book Poor Numbers). Unlike “big data”—where the law of large numbers more or less evens out the errors of any individual data point—cross-country data comparisons are typically small enough that even a handful of inaccurate data points can alter the outcome.
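The large-numbers point can be made concrete with a toy simulation (synthetic scores, not real governance data): corrupt five data points in a 100,000-point sample and in a 30-country sample, then compare how far a simple "share above a cutoff" statistic moves in each case.

```python
import random

random.seed(0)

def pass_rate(scores, cutoff=50.0):
    """Share of countries scoring above the cutoff."""
    return sum(s > cutoff for s in scores) / len(scores)

def zero_out_top(scores, k=5):
    """Corrupt the k highest values, i.e. a handful of bad data points."""
    return sorted(scores)[:-k] + [0.0] * k

# "Big data" setting: 100,000 points; five bad values barely move the result.
big = [random.gauss(50, 10) for _ in range(100_000)]
big_shift = abs(pass_rate(big) - pass_rate(zero_out_top(big)))

# Cross-country setting: ~30 countries; the same five bad values move it a lot.
small = [random.gauss(50, 10) for _ in range(30)]
small_shift = abs(pass_rate(small) - pass_rate(zero_out_top(small)))
```

In the large sample, five bad points shift the statistic by thousandths of a percentage point; in the 30-country sample, the same five bad points shift it by more than ten percentage points, easily enough to flip a pass/fail determination.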
The first challenge here is obvious. If we want to realize the potential of the data revolution in the world’s poorest countries, we need more and better data. Period. And people are already both demanding it and trying to create it.
But there is a second, less-visible challenge: ensuring that data is used responsibly. Foreign aid and development are fields where much of the data we want to use is just beginning to be collected or is fraught with challenges. But while development professionals grapple with some serious data gaps, we are surrounded by popular examples from other fields of how reliable big data can be: Nate Silver's 2012 election predictions, Target's marketing algorithms that can tell you're pregnant before you tell your friends and even a Brad Pitt movie about data—seriously! It can be tempting to think our world is the same—but it isn't yet.
So if we are using development data, how do we know we are using it responsibly for policy making and aid allocation? That's not an often-asked question, but I think it should be. Are there cross-checking metrics? What would those even look like?! Is transparency the answer? When someone corrects a data error, how should decision makers react (à la the Reinhart and Rogoff data controversy)?
Over this year, I'll come back again and again to the responsible use of data: things worth watching and learning from, characteristics of the responsible (and irresponsible!) use of development data, and efforts to fill data gaps to enhance aid effectiveness. I hope others will, too.
Posted on May 16, 2013 by Sheila Herrling, Vice President for Policy and Evaluation
On April 29th at the G8 International Conference on Open Data for Agriculture, the Millennium Challenge Corporation (MCC) unveiled a new evaluation data catalog to house all the data collected through our independent evaluations. Right now, the public can view metadata from agriculture programs in Armenia, Ghana, El Salvador, and the Philippines on the catalog at data.mcc.gov/evaluations, including descriptive statistics for surveys of an estimated 5,000 households in Armenia, 9,300 households in Ghana, 1,700 individuals in El Salvador, and 2,400 households in the Philippines.
The data catalog is designed to contain all of the information that documents and describes MCC-financed independent evaluations, including the evaluation questions, the types of surveys conducted and their populations of interest, questionnaires, sampling methods, and descriptive statistics for household- and individual-level data. The data catalog is fully searchable down to the variable level, allowing for comparison across datasets. In addition, as the microdata for each survey is reviewed by MCC's Disclosure Review Board and approved for public release, the catalog will host public-use datasets and statistical analysis files for replicating the independent evaluator's results or conducting separate analyses.
The launch of the catalog is just the beginning of a series of planned data releases. We aim to release as much of our independent evaluation data to the public as possible. We’ve developed an institutional process to enable us to do this over the coming months. It is a labor-intensive effort, but that’s a small price to pay for pushing the boundaries of transparency and accountability to get this huge stock of data into the public domain. And we are delighted to be ahead of the curve on President Obama’s just-released Executive Order on Open Data Policy.
While publishing the data is a big deal in and of itself, the really big deal will come in seeing how others use it. We know – and welcome – that it will be used as another accountability check on us and our partner governments. We hope it also will be used by other investors to learn from our experience on how to increase the impact of the dollars they invest. For example, the agricultural data we are releasing may help us better understand why some farmers adopt improved practices more quickly than others, which can lead to program improvements to maximize impact, increase incomes and expand productivity.
Still, it is the unknown uses – the things we never imagined our data could be used for – that will likely prove to be the most exciting. Finance institutions, for example, looking to spur agricultural growth may gather information needed to develop innovative new products for smallholder farmers. Companies that want to evaluate the risks and benefits of operating in certain locations may find market information that is useful for evaluating risk and catalyzing new investments. Governments and civil society organizations can also analyze this data to drive forward their own complementary development and social programs.
MCC is opening our data because it is the right thing to do: American taxpayers deserve to see this part of their investment. But we are also opening our data because it is the smart thing to do. Information and data are tremendous strategic assets. They can help us enhance policies and practices to more fully contribute to economic growth, strengthen democratic institutions, improve the impact of our work, and inspire entrepreneurship, innovation and scientific discovery in the field of development and beyond. Follow our efforts and give us your feedback!