Migration Policies from Pilot Test to Full-Scale Implementation in Bangladesh
LEAP PHD STUDENT ERICK BAUMGARTNER REPORTS ON THE RECENT LEAP COFFEE SEMINAR WITH ASHISH SHENOY
The surge in experimental evaluations observed over the last decades is closely related to the pursuit of an efficient allocation of resources in social policy. By measuring how effective different programs are, it is possible to assess whether such policies should be continued and to give policymakers a groundwork for informed decisions when comparing different intervention possibilities. Issues such as external validity and failures to replicate results on a larger scale, however, are a reminder of the difficulties in interpreting these experimental results. Nevertheless, even these discrepancies can be analyzed, providing us with a better understanding of the barriers to scaled implementation and replication.
As stated by Eva Vivalt in her 2020 article (“How much can we generalize from impact evaluations?”), “generalizability is not binary but something that we can measure.” Exploiting a data set of impact evaluation results, the author highlights some systematic differences in experimental results: small-scale interventions consistently report higher effects, while government-implemented programs usually have smaller effect sizes than those implemented by academic or non-governmental organizations.
These results put the implementation of these policies into perspective, showing that there is scope to understand the drivers of these gaps, which would allow for more realistic estimates of large-scale interventions or for measures that preserve their effectiveness at scale. The article presented by Professor Ashish Shenoy at the last LEAP Development Coffee is another contribution to this discussion.
The article discusses the experience of an intervention supporting migration in northern Bangladesh. Agricultural seasonality in this region gives rise to cycles of poverty in the period before the local harvest, when food prices are high and rural wages are low. Temporary migration can therefore help households find better opportunities in other regions during these lean periods, and migration loans are one way to facilitate it.
In the pilot evaluation of a program offering short-term, low-interest migration loans in northern Bangladesh, the estimated effect of the intervention on the probability that a household would have a migrant ranged between 25 and 40 percentage points. In the scaled intervention, however, this increase reached only 6 percentage points, another example of the issues that arise when implementation is scaled up. Further analysis rules out some of the possible drivers of this decrease: there is no evidence of crowd-out effects in general equilibrium, and differences between the treated populations in the scaled intervention and in the pilot would account for at most one third of the difference in treatment effects.
Based on these findings, the authors present evidence that the effect attenuation can be interpreted through the lens of delegation risk: capacity-constrained implementers seek to maximize measurable implementation outcomes. Specifically, in the Bangladesh intervention, implementers trying to maximize the number of individuals taking part in the loan program may concentrate their efforts on people who were already more likely to migrate, thus reaching a higher share of “always-takers”, who would migrate even in the absence of the incentive (a stylized illustration of this mechanism follows below).

This framework provides an important input to the design of public policy, centering the discussion on the optimal design of implementation metrics and the incentives faced by implementers: when the metric is mis-specified, implementers may settle on suboptimal implementation strategies. The role of monitoring should also be highlighted: pilot interventions and randomized controlled trials usually demand a high level of scrutiny over implementation to ensure compliance with methodological standards. These demands are no longer present in scaled implementation, which may lead to weaker oversight of implementers' targeting procedures. These insights are a valuable addition to the ongoing discussion of implementing projects at scale and of the reliability of effects estimated in an experimental context.
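To make the always-taker mechanism concrete, consider a stylized illustration; the notation and numbers below are assumptions chosen for exposition, not estimates from the paper. Among households reached by the program, let $a$ denote the share of always-takers, who migrate with or without the loan, and $c$ the share of compliers, who migrate only if offered the loan. Comparing reached households with comparable households not offered the loan, the measured increase in the probability of having a migrant is

$$\Pr(\text{migrant} \mid \text{offered}) - \Pr(\text{migrant} \mid \text{not offered}) = (a + c) - a = c.$$

If the pilot happened to reach households with a complier share of roughly $c = 0.3$, while implementers at scale concentrated outreach on likely migrants so that the complier share among loan recipients fell to $c = 0.06$, the measured effect would drop from about 30 to 6 percentage points even though the loan remains just as effective for every complier; what changes is who gets reached.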
Mitchell, Harrison, A. Mushfiq Mobarak, Karim Naguib, Maira Emy Reimão, and Ashish Shenoy. “External Validity and Implementation at Scale: Evidence from a Migration Loan Program in Bangladesh.” Working Paper, 2022.
Vivalt, Eva. “How much can we generalize from impact evaluations?” Journal of the European Economic Association 18.6 (2020): 3045-3089. https://doi.org/10.1093/jeea/jvaa019