How long should a development intervention run? For a BRAC program for women farmers in eastern Uganda, that decision was made by the amount of funding available.
Knowing that the program would end after four years, BRAC partnered with AMA Innovation Lab principal investigator Stephen C. Smith and his team to lay the groundwork for an innovative evaluation of the short-run program’s long-run impacts. The results now show which components had a lasting impact, but for Smith the study highlights the importance of rigorous evaluation and of taking steps to determine a program’s ideal length from the start.
“Development organizations have limited funding that they are entrusted with by donors,” said Smith, a professor of economics and international affairs at The George Washington University. “They want to be confident that those limited funds are used in ways that are effective at achieving development goals.”
Phasing out a development program in Uganda
In 2009, the NGO BRAC launched its program for women smallholder farmers in Uganda. The program trained some of the women in more productive farming techniques and established them as model farmers for their villages. Other women took up an opportunity to become BRAC-sponsored community agricultural promoters who also encouraged local demand for high-yielding seed varieties and other inputs while building up a local supply chain.
Together with BRAC, Smith, a member of the BRAC USA advisory board, developed an innovative reverse randomized controlled trial that phased out each of the two program components separately across a random sample of villages in eastern Uganda starting in 2013. A third group of villages received continued support for both programs until all support ended three years later.
Smith presents the findings in a new paper coauthored with Ram Fishman, Munshi Sulaiman, and Vida Bobić. The results show that three years after the program ended, farmers in these villages still used the improved techniques at the same rates. They also sustained demand for high-yield varieties of seeds.
An analysis of comparable households in the region and an experimental evaluation of the same BRAC program in a different part of Uganda suggested that these were in fact sustained improvements. But because the program launched without an initial randomized controlled trial, the team had to use other methods to identify the initial impacts of the program.
“We found that the supply chain side of the community agricultural promoters was the least sustainable part of the program,” said Smith, “but we don’t have clear evidence that these results are highly generalizable.”
Evaluating development interventions for impact
In 2005, Smith wrote the book Ending Global Poverty as a guide to what works for international development. He interviewed experts at some of the most highly rated NGOs in the world to find out what they thought were the most impressive programs among their peers.
“One of the things I was struck with in trying to find the person who was the designated NGO leader of program design and evaluation was that a lot of NGOs did not have such a person,” said Smith.
Evaluation is important for a number of reasons, Smith said. One of those is to ensure that funds are being used effectively. Another is for scaling. Small programs are a good way to learn what works, but evaluation can also help target efforts to make a big impact.
Far too often, says Smith, funding is the primary factor that determines whether or not a program continues, not whether it’s effective or how long it needs to run in order to be effective. Both of these can be established with rigorous evaluation.
“Planning for a rigorous evaluation of a program’s optimal length starts at the beginning,” said Smith. “Even if a program at modest scale is randomized, if you lose track of the people after the program ends you can measure short-run impacts but not long-run impacts.”
Getting started with rigorous program evaluation
Smith said that having specialized people to focus on integrating evaluation into the design of programs is extremely important. BRAC has a large in-house research and evaluation group, the Research and Evaluation Division (RED), which was first established not long after BRAC was founded in 1972. A parallel research arm based in Kampala was later added to cover BRAC operations outside Bangladesh.
“These are both independent research arms of BRAC that report directly to the executive director,” said Scott MacMillan, a senior advisor at BRAC USA. “It’s an independent line of reporting, which results in tension sometimes between the program and research people but allows for the organization to understand what’s really moving indicators and what’s not.”
While not all organizations have budgets for a whole division, there are other opportunities to integrate evaluation, said Smith. No matter how small the NGO, there should at least be a designated person responsible for monitoring and evaluation. There are also partnership opportunities with researchers at universities and international organizations.
“Some funders are willing to provide a modest increment to the budget to make rigorous evaluation possible. Otherwise, you may not have a lot of money for evaluation but researchers are interested in topics that they could learn generally from,” said Smith. “Your key evaluation people should be on the lookout to collaborate with university and international experts who are doing cutting edge work.”
The value of what can be learned from rigorous evaluation makes a strong case for all organizations to build it into the core of their activities, said Smith. At the very least, evaluation makes it possible to know whether a program works and whether and when it should end.
“You can learn as much from failure as you can from success,” said Smith. “If there’s bad news it’s better to learn it quickly, and improve your programs going forward.”
This post originally appeared on Agrilinks.org.
Alex Russell, (530) 752-4798, email@example.com