Joan Hamory Hicks, Michael Kremer, Edward Miguel, Commentary: Deworming externalities and schooling impacts in Kenya: a comment on Aiken et al. (2015) and Davey et al. (2015), International Journal of Epidemiology, Volume 44, Issue 5, October 2015, Pages 1593–1596, https://doi.org/10.1093/ije/dyv129
We thank Aiken et al.1 and Davey et al.2 for carrying out a re-analysis of Miguel and Kremer (2004).3 We interpret the evidence from the re-analysis as strongly supporting the findings of positive deworming treatment externalities and school participation impacts.
Aiken et al.1 usefully correct some errors in Miguel and Kremer.3 Figure 1 presents the original and updated estimates of the key externality and school participation effects side by side (from their Tables 1 and 5, Appendix Tables VII and IX, and our Supplementary Data, available as Supplementary data at IJE online), and shows that results are extremely similar: in addition to direct impacts on worm infections in the treatment vs control schools (Figure 1, Panel A), Aiken et al. find externality effects on untreated pupils within treatment schools (Panel B), and externality effects across schools up to 3 km away (Panel C). These effects are significant at P < 0.05. Similarly, they find direct effects on school participation (Panel E) and within-school externality effects (Panel F) at P < 0.05, and externality effects up to 3 km away (Panel G) at P < 0.10. The direct effects on both infections and school participation are somewhat larger and more precisely estimated with the updated data (Panels A and E). The updated evidence on within-school externalities implies that a key conclusion in Miguel and Kremer3 (that individually randomized studies underestimate true deworming impacts) remains valid.
As Aiken et al. point out, most of the errors (rounding, inaccurately labelled statistical significance, or data set updates) are minor. The replication also corrects errors in the original code used to estimate externalities. Miguel and Kremer3 measured externalities among schools located within 3–6 km that were among the 12 closest schools, rather than among all schools, as reported. The externality effect on infections at 3–6 km was significant in the original analysis; the updated estimate is negative but not significant (Panel D). The point estimate on the 3–6 km externality effect on school participation is negative and not significant in both the original and updated analyses (Panel H). The lack of infection externalities at 3–6 km (with updated data) means there is little reason to expect schooling externalities at this distance. Importantly, standard errors on school participation externality estimates at 3–6 km become very large with the updated data, nearly doubling.
We disagree with Aiken et al.'s claim in their Discussion that there is 'no evidence of a total effect' of deworming on school participation. This assertion is based on an approach that adds unnecessary 'noise' to the estimation. An estimator for overall externalities that extends beyond 3 km, and that places extensive weight (due to the large number of schools at that distance) on the non-significant 3–6 km externality estimate, makes the overall estimate far less precise. In Appendix A (available as Supplementary data at IJE online) we show that, under reasonable assumptions, the estimator that excludes the 3–6 km externalities is preferred under the standard criterion of minimizing mean squared error (see also Aiken et al.4 and Hicks et al.5).
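The mean-squared-error logic above can be illustrated with a small simulation. All numbers below are hypothetical, chosen only to mimic the qualitative situation described in the text (a precise direct effect, a precise 0–3 km externality, and a very noisy, near-zero 3–6 km externality that receives a large weight); they are not the paper's estimates.

```python
import numpy as np

# Sketch: total effect = direct + w03 * ext(0-3 km) + w36 * ext(3-6 km),
# where weights reflect the number of nearby schools in each distance band.
rng = np.random.default_rng(0)
n_sim = 100_000

# Assumed true effects and standard errors (illustrative only)
true_direct, se_direct = 0.06, 0.015
true_ext03, se_ext03 = 0.02, 0.010
true_ext36, se_ext36 = 0.00, 0.040   # 3-6 km: near zero and very noisy
w03, w36 = 1.0, 2.0                  # more schools at 3-6 km -> larger weight

direct = rng.normal(true_direct, se_direct, n_sim)
ext03 = rng.normal(true_ext03, se_ext03, n_sim)
ext36 = rng.normal(true_ext36, se_ext36, n_sim)

true_total = true_direct + w03 * true_ext03 + w36 * true_ext36

est_incl = direct + w03 * ext03 + w36 * ext36   # keeps the noisy 3-6 km term
est_excl = direct + w03 * ext03                 # drops it

mse_incl = np.mean((est_incl - true_total) ** 2)
mse_excl = np.mean((est_excl - true_total) ** 2)
print(mse_incl, mse_excl)
```

Under these assumptions the estimator that drops the noisy 3–6 km term has markedly lower mean squared error: excluding it introduces little or no bias (the true 3–6 km effect is near zero) while removing a large variance component that is amplified by the weight on that distance band.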
Since the estimated direct effect (Figure 1, Panel E) is a lower bound on overall deworming impacts as long as treatment spillovers are either zero or go in the same direction as the direct effect, the appropriate conclusion is that deworming has a large positive impact on school participation. Combining the precise direct effect estimate and 0–3 km externality estimate with the effect at 3–6 km might have made sense with the original data, but with the updated data the 3–6 km effect estimate is simply too noisy to be informative for statistical inference. Aiken et al. also claim that there is little evidence for treatment spillovers across schools. Yet the finding that externality effects at 3–6 km are imprecisely estimated in no way negates the positive effects within schools or across schools within 3 km (Panels C and G).
The argument in Davey et al.,2 that deworming impacts on school participation are not robust to different statistical approaches, is based on several analytical errors.
First, they misclassify pre-treatment control observations as treatment observations. Group 2 schools began receiving deworming in March 1999. The correct coding of treatment for Group 2 thus begins after March 1999, as in Miguel and Kremer3 and Aiken et al.;1 however, Davey et al.2 misclassify the Group 2 observations from early 1999 as treatment observations. They purport to justify the misclassification of 20% of 1999 observations using an 'intention-to-treat' framework, a framework typically utilized when a population was assigned to treatment but only some individuals actually received it. Davey et al. incorrectly apply it to a different situation, in which no individuals were actually treated (i.e. Group 2 prior to March 1999) nor were any supposed to be treated. Their puzzling approach rests on the assertion that the programme sought to provide deworming at the exact start of the calendar year. However, the study's research design necessitated treatment not starting immediately at the start of 1999: extensive data collection was carried out in early 1999 precisely because Group 2 had not yet been phased into treatment, allowing for estimation of impacts after 1 year of treatment (further discussion in Appendix B, available as Supplementary data at IJE online; see also Davey et al.6 and Hicks et al.7).
Second, since all estimators in Davey et al.2 are based on treatment vs control school differences in a context with positive treatment externalities, their estimates are biased towards zero. Furthermore, many of the estimators ignore the study's stepped-wedge design, in which some schools change treatment status in year 2. They instead focus on cross-sectional estimates, and moreover split the data into year subsets and report results separately for the subsets, unnecessarily sacrificing statistical precision. The re-analysis authors' own power calculations imply that this approach is extremely underpowered (Aiken et al.,8 Davey et al.,6 Appendix 1, available as Supplementary data at IJE online). An analysis using both years of data and the stepped-wedge design (the specification that represents the culmination of their own analysis plan, Aiken et al.8) produces large impacts; in their abstract they write: 'When both years were combined, there was strong evidence of an effect on attendance'.
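A pooled stepped-wedge estimate of the kind described above can be sketched on simulated data. The group sizes, effect size, and noise level below are illustrative assumptions, not the study's values; the point is only to show the specification that exploits the phase-in (a treatment indicator that switches on for the second group in year 2, with a year dummy, estimated on both years pooled).

```python
import numpy as np

# Simulated two-year panel: Group 1 treated in both years, Group 2 phased
# in during year 2, Group 3 untreated throughout (stepped-wedge phase-in).
rng = np.random.default_rng(1)
n_per_cell = 25
true_effect = 0.05   # assumed treatment effect on attendance (illustrative)

rows = []
for year in (1, 2):
    for group in (1, 2, 3):
        treated = (group == 1) or (group == 2 and year == 2)
        for _ in range(n_per_cell):
            y = 0.70 + 0.02 * (year == 2) + true_effect * treated
            rows.append((float(treated), float(year == 2), y + rng.normal(0, 0.08)))

# Pooled regression: intercept, treatment indicator, year-2 dummy
X = np.array([(1.0, t, y2) for t, y2, _ in rows])
y = np.array([r[2] for r in rows])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(round(beta[1], 3))  # pooled treatment-effect estimate
```

Splitting this same data by year and estimating each cross-section separately would use fewer observations per estimate and discard the within-school change in treatment status, which is exactly the loss of precision the text describes.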
Figure 2 shows that the attendance result is robust across a range of approaches common in economics and public health. These estimates employ both years of data, as envisioned in the project's prospective design, and use: (i) different statistical models (linear regression, random effects logistic regression); (ii) different samples (the full sample, those eligible for treatment); (iii) regression models adjusted for covariates and unadjusted; (iv) different approaches to weighting (each attendance observation equally, each pupil equally); and (v) the dataset in Davey et al. that incorrectly defines treatment, vs a dataset that correctly defines treatment. All 32 estimates in Figure 2 are positive, large and significant (P < 0.01) (details in Supplementary Data, available as Supplementary data at IJE online). The pre-specified logistic analysis in Aiken et al.'s8 plan is represented by a bold vertical line in Panel B, and this estimate of 1.82 is significant at P < 0.001.
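To give a feel for the magnitude of the pre-specified logistic estimate, the odds ratio of 1.82 can be converted into percentage-point changes in attendance at a few baseline rates. The baseline rates below are illustrative assumptions, not figures from the study:

```python
# Converting an odds ratio to attendance differences at assumed baselines.
OR = 1.82  # pre-specified logistic estimate reported in the text
for p0 in (0.60, 0.70, 0.80, 0.90):
    odds_treated = OR * p0 / (1 - p0)      # scale control odds by the OR
    p1 = odds_treated / (1 + odds_treated)  # convert odds back to a rate
    print(f"baseline {p0:.0%} -> treated {p1:.1%} (+{p1 - p0:.1%})")
```

As the arithmetic shows, an odds ratio of 1.82 corresponds to a sizeable attendance gain at any plausible baseline, consistent with the description of the estimates as large.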
To justify not pooling both years of data, Davey et al.2 raise concerns about the correlation between the number of attendance observations per school and school participation rates, in the treatment vs control schools over time, which they apparently establish by 'eyeballing' a plot of the relationship. We present statistical evidence that this correlation is not significant and does not bias estimates (Supplementary Data, available as Supplementary data at IJE online). Davey et al. also base part of their conclusion on a cluster-level analysis using a non-standard approach to weighting observations. Even in the cluster-level models, we show that deworming has a significant positive effect on school participation for each year separately when standard weighting is applied and treatment is correctly defined (Supplementary Data, available as Supplementary data at IJE online). It is only when one simultaneously makes multiple analytical errors—in weighting observations, defining treatment, and failing to pool the data—that deworming impact estimates are not significant.
In sum, a re-analysis of Miguel and Kremer3 confirms its main conclusions regarding deworming treatment externalities and school participation gains (Figure 1), and these school participation gains are robust across a wide range of adequately powered analysis methods (Figure 2). These results contribute to a growing literature using cluster randomized or quasi-experimental designs to study deworming's socioeconomic impacts, all of which estimate positive long-run impacts on educational and labour market outcomes (Ahuja et al.,9 Bleakley,10 and Ozier11).
Acknowledgements
We thank Kevin Audi, Evan DeFilippis, Felipe Gonzalez, Leah Luben and especially Michael Walker for excellent research assistance. All errors remain our own.
Conflict of interest: MK is Scientific Director for Development Innovations Ventures at USAID. Among its many other activities, USAID supports deworming.
References