There are a few possible sources of bias given the proposed study design. First, because of the repeated measurements that will be done, Hawthorne effects1 may exist, although most likely only for specific components of the study. The main stepped wedge component of the study will involve repeated measurements of the various target respondents in the selected (and randomised) study areas or study clusters. Whilst the study areas or study clusters will remain the same throughout the study, the set of target respondents at each round of measurement is not necessarily the same as in previous rounds, as described in our sampling process. This approach increases within-cluster variation and hence the precision of our indicators, and it will also help minimise Hawthorne effects.
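
To make the allocation concrete, the sketch below shows one way a stepped wedge schedule could be generated, with clusters randomised to the step at which they cross over from control to intervention. The number of areas, the number of steps, the cluster names, and the random seed are illustrative assumptions only, not the actual study parameters.

```python
import random

def stepped_wedge_schedule(clusters, n_steps, seed=None):
    """Randomise clusters to crossover steps and build the rollout schedule.

    Returns a dict mapping each cluster to a list of 0/1 flags, one per
    measurement period: 0 = control (pre-rollout), 1 = intervention.
    Period 0 is a baseline period in which every cluster is still a control.
    """
    rng = random.Random(seed)
    order = clusters[:]
    rng.shuffle(order)                      # random order of crossover

    # Split the shuffled clusters as evenly as possible across the steps.
    per_step = [order[i::n_steps] for i in range(n_steps)]

    n_periods = n_steps + 1                 # baseline + one period per step
    schedule = {}
    for step, group in enumerate(per_step, start=1):
        for cluster in group:
            schedule[cluster] = [1 if period >= step else 0
                                 for period in range(n_periods)]
    return schedule

# Example: nine hypothetical study areas crossing over in three steps.
if __name__ == "__main__":
    areas = [f"area_{i}" for i in range(1, 10)]
    for cluster, flags in sorted(stepped_wedge_schedule(areas, n_steps=3,
                                                        seed=42).items()):
        print(cluster, flags)
```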
 

On the other hand, the incidence sub-study may potentially be the most susceptible to Hawthorne effects, given that the same cohort of study subjects will be measured repeatedly. However, we think this will be kept to a minimum because the study subjects will be chosen from those who attend routine well-baby / well-child clinics in both the control and intervention arms of the study. We argue that, even prior to selection, mothers and children who attend routine well-baby / well-child clinics already have both the physical access and the health-seeking behaviour that predispose and/or motivate them to seek care despite their children not being overtly ill. From this perspective, they are already a relatively different cohort from the general population, and selecting them for the study will not necessarily add an incentive or a motive that they would not otherwise have had prior to the study.
 

As for John Henry effects2, we can effectively stem these mainly because of the stepped wedge cluster design and the within-cluster sampling process (as stated above). In addition, study areas / clusters will be selected such that they are not close to each other and do not share borders (an approach we will also use to prevent contamination). Hence, the clusters that start out as intervention sites and those that start out as controls will not necessarily be in contact with each other, which will discourage comparison. Finally, when the study is introduced and explained to the study areas / study clusters and to the study subjects within these clusters for the purpose of eliciting consent, we will emphasise the fact that all areas will receive the intervention, albeit at different stages over the period of a year, and will minimise the use of the labels "intervention" and "control" at the community and individual level. This will hopefully minimise the effects brought about by subjects being told that they belong to either group.
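
As a rough illustration of the selection constraint, the sketch below greedily screens a candidate list of clusters against a border (adjacency) map so that no two selected clusters share a border. The border map, cluster names, and required number of clusters are hypothetical; the actual selection would use the study's real geographic data.

```python
def select_non_adjacent(candidates, borders, n_required):
    """Greedily pick clusters that do not share a border with any already
    selected cluster.

    candidates : list of cluster names, e.g. ordered by a sampling frame
    borders    : dict mapping each cluster to the set of clusters it borders
    n_required : how many clusters the design needs
    """
    selected = []
    for cluster in candidates:
        if all(cluster not in borders.get(chosen, set()) and
               chosen not in borders.get(cluster, set())
               for chosen in selected):
            selected.append(cluster)
        if len(selected) == n_required:
            return selected
    raise ValueError("Not enough mutually non-adjacent clusters in the frame")

# Hypothetical border map for six areas; A, C and E are mutually non-adjacent.
borders = {
    "A": {"B"}, "B": {"A", "C"}, "C": {"B", "D"},
    "D": {"C", "E"}, "E": {"D", "F"}, "F": {"E"},
}
print(select_non_adjacent(["A", "B", "C", "D", "E", "F"], borders, 3))
# -> ['A', 'C', 'E']
```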
 

Subject-expectancy, on the other hand, is the effect most likely to be prominent in this study. One aspect of subject-expectancy is similar to the Hawthorne and John Henry effects stated above and can be mitigated by the same approaches we have mentioned. The other aspect, in which subjects respond to questions based on what they think is the right or correct answer or on what they think the enumerator or investigator expects to elicit from them, requires additional measures. This effect will most likely be found in the non-quantitative and non-numeric types of measurements in the study. We will minimise it by using, wherever available, standard questionnaires that have been used to measure such indicators, as these have been tested and used by many investigators previously, usually come with general guidelines on how they should be administered, and have a known profile of how they perform / behave as indicators. For those questions or measures that do not have standard versions and that require subjective and retrospective responses, we will create anchoring vignettes to use as a standardisation measure for such responses and then apply item response theory to support the analysis of the responses. Finally, we will triangulate the responses against the qualitative investigations that we will conduct alongside the quantitative component of the study.
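
As an illustration of how anchoring vignettes can standardise subjective responses before any model-based (item response theory) analysis, the sketch below implements the simple non-parametric recoding in which each respondent's self-rating is placed relative to that same respondent's ratings of the vignettes. The ratings shown are hypothetical, and the sketch assumes the respondent rates the vignettes in a consistent order; ties and order violations among the vignette ratings would need the fuller model-based handling that the IRT analysis provides.

```python
def vignette_recode(self_rating, vignette_ratings):
    """Non-parametric anchoring-vignette recoding.

    self_rating      : respondent's rating of themselves (ordinal, e.g. 1-5)
    vignette_ratings : the same respondent's ratings of the vignettes,
                       ordered from the 'worst' to the 'best' vignette.

    Returns a recoded score on a common 1..(2*J + 1) scale, where J is the
    number of vignettes: odd values mean the self-rating fell strictly
    below / between / above the vignettes, even values mean it tied one.
    """
    score = 1
    for v in vignette_ratings:
        if self_rating < v:
            return score          # strictly below this vignette
        if self_rating == v:
            return score + 1      # tied with this vignette
        score += 2                # above it; skip past 'tie' and 'between'
    return score                  # above every vignette

# Hypothetical example with three vignettes rated on a 1-5 scale:
# the respondent rates the vignettes 2, 3 and 5, and themselves 4.
print(vignette_recode(4, [2, 3, 5]))
# -> 5: the self-rating falls between the second and third vignettes
```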
 
 
 

Endnotes

1 See Parsons, H. M. “What Happened at Hawthorne?: New Evidence Suggests the Hawthorne Effect Resulted From Operant Reinforcement Contingencies.” Science 183, no. 4128 (March 8, 1974): 922–32. doi:10.1126/science.183.4128.922; and McCarney, Rob, James Warner, Steve Iliffe, Robbert van Haselen, Mark Griffin, and Peter Fisher. “The Hawthorne Effect: A Randomised, Controlled Trial.” BMC Medical Research Methodology 7, no. 1 (2007): 30–38. doi:10.1186/1471-2288-7-30.

2 Saretsky, Gary. “The OEO P.C. Experiment and the John Henry Effect.” The Phi Delta Kappan 53, no. 9 (May 1, 1972): 579–81. doi:10.2307/20373317.