Member Research and Reports

Pittsburgh Leads Evaluation to Provide Recommendations for Development of Observational Study Standards

It seems intuitive: Want to do a study to improve patient care? Then use real-world evidence to find out what works and what does not – and how to fix it. But without common standards for conducting such observational studies, it can be difficult to gather the support needed to make research-based changes.

[Photo: Dr. Sally Morton]

Dr. Sally Morton, professor and chair of biostatistics at the University of Pittsburgh Graduate School of Public Health and director of the Comparative Effectiveness Research Center in Pitt’s Health Policy Institute, and colleagues recently completed a review of existing standards for conducting observational studies, published in the Journal of Clinical Epidemiology.

“Without a ‘gold standard’ for conducting observational studies, the quality of the research is in the eye of the beholder,” said Dr. Morton. “If the full promise of real-world evidence and big data for health care is to be realized, a common set of agreed-upon minimum standards and guidelines is needed to promote consistency in study quality and broader application of findings to treatment decisions.”

The study, “Standards and Guidelines for Observational Studies: Quality is in the Eye of the Beholder,” reviewed, compared, and contrasted nine sets of standards and guidelines developed by public and private organizations and professional societies in the U.S. and Europe.

These nine sets outline how to conduct observational or real-world studies that leverage information from electronic health records, administrative claims, patient registries, or data networks. The authors evaluated whether each of 23 methodological elements (such as the need to use a study protocol, how to link data from different sources, or how to handle missing information) was present and agreed upon across the sets, and compared how actionable each element was.

The authors found that, of the 23 methodological elements, 14 (61 percent) were addressed by seven or more of the standards and guidelines, reflecting general agreement that these elements are important. However, for all but two of these 14 elements, the sets disagreed on how the element should be addressed or acted upon. For the remaining nine elements, the sets varied in whether they considered the element important or included it at all. Just over half (57 percent) of the 23 methodological elements were considered actionable.

Co-authors on this study are Ms. Monica R. Costlow of Pitt and Dr. Jennifer S. Graff and Dr. Robert W. Dubois, both of the National Pharmaceutical Council (NPC).

The research was funded by the NPC and the University of Pittsburgh.

For more information and an infographic by the NPC that explains the findings, visit http://insideupmc.upmc.com/how-do-we-agree-on-what-good-patient-studies-look-like/.