A new study led by researchers at the Johns Hopkins Bloomberg School of Public Health suggests that the experience level of the people who abstract data from journal papers matters when it comes to capturing outcomes/results, but matters less when it comes to describing baseline characteristics.
The study was published on January 18 in the journal Research Synthesis Methods. The researchers recruited 50 abstractors from a variety of sources, including students at the Bloomberg School and Brown University who had taken courses on systematic review methods, as well as through ads placed with relevant organizations. Participating abstractors were classified by experience level, and each abstractor then abstracted data from six papers.
Jian-Yu E, an ScD candidate in the Bloomberg School’s Department of Epidemiology, is the study’s lead author.
The researchers found that less experienced abstractors were just as accurate as more experienced abstractors for data items related to baseline characteristics (the demographic and clinical data collected at the beginning of a clinical trial, before any intervention). For data items related to outcomes/results, however, which inform meta-analysis, more experienced abstractors abstracted data much more accurately.
Comparing the data abstracted by different abstractors and resolving any discrepancies, a process known as adjudication, is just as important as abstractor experience, the authors note. The researchers concluded that systematic review teams should consider assigning data-abstraction tasks based on experience level but, more importantly, should have a process in place to adjudicate discrepancies.