What is evidence synthesis?
When it comes to urgent and complex problems such as ending hunger, we need solutions as quickly as possible, and we need to know that they work. The good news is that there has never been more research on these problems—and each day, that stock of knowledge expands as new studies and reports are published.
The problem is synthesizing what all this knowledge means. It can take a year to track down the relevant research on a particular question or problem—because there is so much of it scattered across so many repositories and domains.
Evidence synthesis is an umbrella term for the process of summarizing a body of underlying research, using a carefully worked-out methodology to evaluate interventions. Evidence synthesis is not new, and there are many different types: systematic reviews are probably the best known. The need for evidence synthesis is acute: the more research we produce, the more we need to assess and incorporate the findings of new studies.
In agriculture, the past decade has generated more than two million articles across hundreds of academic journals. More than 60 major agencies publish reports and studies. There is no keyword search for “what works.” There are no meta-tags for specific policy interventions.
This is why we turned to machine learning. We created a searchable evidence map from all this research, from which we could identify and classify the kinds of agricultural interventions that addressed the key elements of SDG 2. In a sense, machine learning gave a voice to this research, letting it tell us what might be relevant for policy makers.
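To make the idea of an evidence map concrete, here is a minimal sketch of the kind of classify-and-count step it implies. Everything here is illustrative: the category names, keywords, and abstracts are invented, and the real Ceres 2030 pipeline used machine-learning classifiers rather than a fixed keyword list.

```python
from collections import Counter

# Hypothetical intervention categories and trigger keywords — not the
# actual Ceres 2030 taxonomy, just a toy stand-in for a trained classifier.
CATEGORIES = {
    "irrigation": {"irrigation", "drip"},
    "extension services": {"extension", "training", "advisory"},
    "improved seed": {"seed", "variety", "cultivar"},
}

def classify(abstract):
    """Return the intervention categories whose keywords appear in the abstract."""
    text = abstract.lower()
    return [cat for cat, keywords in CATEGORIES.items()
            if any(kw in text for kw in keywords)]

def evidence_map(abstracts):
    """Count how many studies touch each intervention category."""
    counts = Counter()
    for abstract in abstracts:
        counts.update(classify(abstract))
    return counts

# Invented example abstracts, for illustration only.
abstracts = [
    "Drip irrigation raised maize yields in semi-arid zones.",
    "Farmer training through extension services improved adoption.",
    "A drought-tolerant seed variety outperformed local cultivars.",
]
print(evidence_map(abstracts).most_common())
```

Once every study is tagged this way, sorting the counts reveals which interventions are most studied — the question the evidence map was built to answer.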
We could see the most studied interventions using the evidence map. We asked our global advisory board of experts to explore the data with us, and to help pick the most important interventions—those that should go on for detailed academic review to better understand how they work, and in what contexts.
One of the challenges of reviewing evidence in agriculture is that almost everything works in some context. Unlike biomedicine, where evidence for “what works” can be examined in carefully controlled clinical isolation, evidence in agriculture is influenced by many diverse economic, geographical, and social factors. Our research questions had to be broad enough to account for these factors yet precise enough to reveal the kind of robust evidence for “what works” that people can use to make investment and policy decisions.
The conclusions of these evidence syntheses will, subject to peer review, be published as a special collection in Nature Research Journals in 2020.
Ceres 2030 is a partnership between Cornell IP-CALS, the International Food Policy Research Institute (IFPRI), and the International Institute for Sustainable Development (IISD).