Posted on December 2, 2011 by Franck Wiebe, Chief Economist, MCC
This blog entry was first posted on Devex.com.
Six years after the signing of the Paris Declaration on Aid Effectiveness, the question of how to enhance aid impact remains highly relevant as most of the largest donors reconvene in Busan.
The Millennium Challenge Corp. is a relative newcomer to the foreign assistance community. Described in principle at Monterrey in 2002 and established by U.S. legislation in 2004, MCC was designed to embody many of the Paris Declaration principles. MCC’s experience of putting these principles into practice suggests three ideas that deserve continued attention: better focus of aid dollars within countries, better assessment of the rationale for aid programs, and stronger commitment to evaluating the impact of aid programs.
Better focus of aid programs within countries
Donors have improved coordination among themselves in many countries, reducing overlap and competition, but the pattern of assistance remains scattered and diffuse. In most countries, the array of donor activities may be consistent with broad national development plans, but the aggregation of efforts by development agencies only rarely reflects anything close to a strategy.
This approach misses the opportunity to focus on the most important development challenges that need to be tackled first while unintentionally imposing a greater burden on partner country governance structures. The right strategy for any country cannot be to invest in public sector capacity building in every office; rather, a better strategy is for country governments to work with development agencies on a more limited set of well-defined priorities.
Identifying the appropriate priorities remains a challenge, given that country development plans are broad and far-reaching. MCC has found the data-driven “growth diagnostics” framework to be extremely helpful for sifting through the national development plans to zero in on the most critical challenges facing a country. MCC collaborates with country counterparts to ensure that the results are understood and accepted by both parties, and has found that some countries embrace these analyses, using them to prioritize their own strategies well beyond the scope of the MCC compact and to frame their engagement with other donors.
By now, all agree that country partners need to own and drive this prioritization process. Indeed, aid dollars can be successful only when supporting the reform of domestic institutions and policies undertaken by choice by country partners. Consequently, aid programs need to be connected to explicit, public commitments made and owned by our partner governments.
These pieces come together to build a strategy for more effective and more focused aid. Partner countries identify a small set of development priorities; addressing the binding constraint to economic growth usually needs to be one of them, since in most contexts serious poverty reduction requires growth. Partner countries then commit to a series of policy and institutional changes to address the existing problem. Only then can aid programs be aligned in a meaningful way in support of these reforms.
Assess cost-effectiveness before funding
“Stretching aid dollars” requires a new level of discipline from development agencies and country partners. The practice of benefit-cost analysis fell out of favor – it takes time, data, and technical competence, and unfortunately is vulnerable to political interference (both local counterparts and aid agencies often have agendas of their own) – but needs to be reinstated as an essential tool for assessing trade-offs and opportunity costs. We need to start with the recognition that any good idea has a price at which it is no longer a good idea. Partners should not enter into programs before conducting an objective comparison of the value of benefits to the total cost of delivering them.
MCC has found that such analyses are possible for the vast majority of programs proposed to us by our partner countries. Not surprisingly, we find that some proposed investments cannot be justified given the estimated costs and projected benefits. Such information usually leads to further work on the program design, but sometimes leads to the search for alternative approaches to the same problem or to other priorities that can be tackled in a cost-effective manner. In this way, we have found at MCC that the technical discipline imposed by benefit-cost analysis improves the quality of the portfolio, where quality is explicitly described as delivering measurable results. The principal idea is inescapable: If we wish to enhance aid impact, we need to be willing to scrutinize every significant effort, asking the same fundamental question: Is this proposed activity worth the money and effort being invested?
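The arithmetic behind such a screen is simple, even if assembling credible estimates is not. A minimal sketch in Python, discounting projected benefit and cost streams and comparing them; the figures and the 10 percent discount rate here are hypothetical, chosen only to illustrate the mechanics, not drawn from any MCC program:

```python
# Illustrative benefit-cost screen for a proposed program.
# All figures and the discount rate are hypothetical, not MCC data.

def npv(flows, rate):
    """Net present value of a stream of annual cash flows (year 0 first)."""
    return sum(f / (1 + rate) ** t for t, f in enumerate(flows))

discount_rate = 0.10  # assumed hurdle rate, for illustration

# Year-by-year costs and projected benefits (millions of dollars).
costs    = [50, 30, 20, 5, 5, 5]
benefits = [0, 5, 20, 35, 40, 40]

bcr = npv(benefits, discount_rate) / npv(costs, discount_rate)
print(f"Benefit-cost ratio: {bcr:.2f}")
print("Fund" if bcr > 1 else "Redesign or seek alternatives")
```

In this made-up case the discounted benefits fall just short of the discounted costs (a ratio below 1), which is exactly the kind of result that sends a proposal back for redesign or prompts the search for alternatives.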
Some may object that such an approach stifles innovation – it need not. Where ideas have never been tried before, development partners can enter into small-scale pilots and rigorous experiments designed to generate information that can be used to assess the potential for scale-up. MCC has built such experimentation into several of its country programs, and the U.S. Agency for International Development’s new Development Innovation Ventures is another promising mechanism. But the current clamor for increased innovation should not serve as an excuse for not conducting proper due diligence, using logic and evidence, to assess whether the new idea has any prior basis for expecting cost-effective results.
Invest in more, and more rigorous, impact evaluations
Just as more analysis is needed before development activities are funded, more analysis is required after they are completed to determine what was accomplished and what was not. MCC has found that establishing high expectations and budgeting appropriately – often in the range of 2-4 percent of the total program budget – creates an environment within which independent evaluations of impact can be conducted as part of the core implementation plan. Collecting baseline data that covers expected beneficiaries and the appropriate control population is possible when it is required.
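The value of collecting baseline data on both beneficiaries and a control population can be made concrete with a simple difference-in-differences calculation. The sketch below uses hypothetical numbers purely for illustration; real evaluations must also contend with sampling error and selection into the program:

```python
# Difference-in-differences sketch: how baseline data on beneficiaries
# and a comparison group can support an impact estimate.
# All numbers are hypothetical, for illustration only.

# Average household income (dollars/year) at baseline and endline.
treated_before, treated_after = 1000.0, 1400.0
control_before, control_after = 1000.0, 1250.0

# The change in the control group proxies what would have happened
# anyway (the counterfactual); the excess change among beneficiaries
# is the estimated program impact.
counterfactual_change = control_after - control_before
impact = (treated_after - treated_before) - counterfactual_change
print(f"Estimated impact: ${impact:.0f} per household per year")
```

Without the baseline and control data, an evaluator could only observe the $400 rise among beneficiaries and would have no way to separate the program's contribution from change that would have occurred regardless.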
The cost and effort are substantial, but so is the value. Credible and rigorous impact evaluations – including but not limited to randomized controlled trials – serve three important functions:
First, they impose a discipline on the program development side. The benefit-cost analysis may describe the anticipated program impacts, but when evaluation is seen as part of the design process, program planners are given the opportunity to assess whether the planned intervention can plausibly be expected to deliver as promised, and if not, what modifications are needed to improve the chances for success.
Second, they are an essential element of a learning agenda that seeks to inform not only future donor programs, but also – and more importantly – future public expenditures and practices by our developing country partners. Moreover, the increasing availability of results from impact evaluations pushes donor agencies and country partners to establish mechanisms that reinforce the learning process.
Third, such evaluations are a necessary part of the transparent accountability process through which all relevant parties assess whether they used scarce resources appropriately. MCC has embraced this responsibility to its funders – the U.S. Congress and American taxpayers – and expects its country partners to commit to the same level of transparency locally. In this way, the evaluation of aid projects can help strengthen the processes through which government actors can inform their citizens about accomplishments and citizens can hold their government officials accountable for the prudent use of public resources.
Already a backlash is occurring in some circles, with “randomista” sometimes wielded as a term of criticism. Some critics have written that this “fad” has gone too far. This negative characterization is both untrue and unfortunate. Although MCC funds rigorous independent impact evaluations for close to half of the projects in our portfolio, many other agencies still have few or none. Clearly, there is still room in the development community for greater investments in rigorous evaluations. MCC has found, too, that such “impact evaluation thinking” can inform our less rigorous performance evaluations; we hire credible independent evaluators and ask them to consider the counterfactual and recognize that not all change can be attributed to our programs.
The Paris Declaration created a useful starting framework that describes the processes related to program effectiveness that donors should adopt. But even as we adopt these processes, we need to ensure that we are delivering effective programs – the two are not necessarily synonymous. Busan provides us an opportunity to develop an improved results-focused agenda explicitly aimed at shifting resources from ineffective programs toward the problems that matter most using the most cost-effective delivery mechanisms. Such an agenda goes well beyond “managing for results” rhetoric and establishes a new standard of actually delivering results.
The tools described above are known and available to donors and their country counterparts, and their use could dramatically improve our performance. Developing countries should demand that donors increasingly apply these tools; we should demand no less of ourselves.