Unsurprisingly, effective interventions improve health outcomes: fewer children die when they are vaccinated against preventable diseases; HIV-infected patients survive longer when they are treated with antiretroviral therapy (ART); maternal deaths decline when prenatal care is linked to caesarean sections and anti-haemorrhagic agents to address obstructed labour and its complications; and fewer malaria deaths occur, and drug-resistant strains are slower to emerge, when potent anti-malarials are used in combination rather than as monotherapy. Given that the benefit of these interventions is hardly in dispute, how should global health researchers approach the task of documenting and disseminating their impact in what are these days termed resource-poor settings? What role does a journal have in fostering that process?

Randomized clinical trials have long been held up as the gold standard of clinical research. Typically, individuals are randomized to receive one of two treatments; both patients and the clinicians caring for them are blinded to their treatment assignment; and outcomes are rigorously measured among all participants. Since the assignment is random, factors ranging from socioeconomic and nutritional status to comorbid disease should be equally distributed between the two groups, so that confounding does not affect the interpretation of results. This kind of study can only be carried out ethically if the intervention being assessed is in equipoise, meaning that the medical community is in genuine doubt about its clinical merits.

It is troubling, then, that clinical trials have so dominated outcomes research when observational studies of interventions like those cited above, which are clearly not in equipoise, are discredited to the point that they are difficult to publish. One of us recently attended a seminar on impact evaluation in which the speaker announced that all health aid should be evaluated, and that the only valid form of evaluation is a clinical trial. Does this mean we should provide aid only if we are not sure that it works? Such a conclusion is patently absurd, but it is one consequence of privileging a particular study design over our clinical objectives.

Visit The Lancet to read the full article.