Medical tests are used to determine whether disease is present, for purposes of diagnosis, screening, assessing the extent of disease, predicting its future course, and guiding treatment selection. In most cases, test results are used to classify patients into two mutually exclusive groups, “test positive” and “test negative”. Test performance can then be expressed as the ability to identify individuals with disease as “test positives” – called sensitivity – and individuals without disease as “test negatives” – called specificity. Individual studies of test performance tend to be small and are often conducted in diverse settings, so systematic reviews of test studies offer a natural framework for evidence synthesis. However, several alternative statistical methods are available for meta-analyzing the results gathered in such reviews.
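As a minimal sketch of the definitions above (the counts and function names here are hypothetical, chosen only for illustration), sensitivity and specificity can be computed directly from the four cells of a 2x2 table of test results against true disease status:

```python
def sensitivity(tp: int, fn: int) -> float:
    """Proportion of diseased individuals the test labels positive."""
    return tp / (tp + fn)


def specificity(tn: int, fp: int) -> float:
    """Proportion of non-diseased individuals the test labels negative."""
    return tn / (tn + fp)


# Hypothetical study: 90 true positives, 10 false negatives,
# 80 true negatives, 20 false positives.
print(sensitivity(90, 10))  # 0.9
print(specificity(80, 20))  # 0.8
```

A single study yields one such pair of proportions; a meta-analysis must combine many of them across studies.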
[Photo: Dr. Issa Dahabreh]
The purpose of this study, led by Dr. Issa Dahabreh, assistant professor in the Department of Health Services, Policy and Practice and founding member of the Center for Evidence Synthesis in Health, was to compare alternative meta-analysis methods for sensitivity and specificity. Specifically, the researchers aimed to compare: (a) univariate and bivariate models; (b) inverse variance, maximum likelihood, and Bayesian estimation of random effects models; and (c) models using a normal approximation versus those using the exact binomial likelihood.
The researchers used two worked examples – thoracic computerized tomography to detect aortic injury and rapid prescreening of Papanicolaou smears to detect cytological abnormalities – to show that different meta-analysis approaches can produce different results. Models using the normal approximation produced sensitivity and specificity estimates closer to 50% and smaller standard errors than models using the binomial likelihood; absolute differences of 5% or greater were found in 12% and 5% of meta-analyses for sensitivity and specificity, respectively. Results from univariate and bivariate random effects models were similar, regardless of estimation method. Maximum likelihood and Bayesian methods produced almost identical summary estimates under the bivariate model; however, Bayesian analyses indicated substantially greater uncertainty around those estimates.
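The shrinkage toward 50% under the normal approximation can be illustrated with a small sketch. This is not the authors' code; it is one common variant of the approximation (a logit transformation with a 0.5 continuity correction, as used in inverse-variance meta-analysis), applied to a hypothetical small study, to show why estimates are pulled toward 50% when counts are small:

```python
import math


def inv_logit(x: float) -> float:
    """Back-transform from the logit (log-odds) scale to a proportion."""
    return 1 / (1 + math.exp(-x))


def normal_approx_logit(events: int, total: int, cc: float = 0.5):
    """Continuity-corrected logit proportion and its approximate standard
    error, the quantities pooled under a normal approximation."""
    a = events + cc          # corrected "successes"
    b = total - events + cc  # corrected "failures"
    est = math.log(a / b)
    se = math.sqrt(1 / a + 1 / b)
    return est, se


# Hypothetical small study: 19 of 20 diseased patients test positive.
raw = 19 / 20                    # observed sensitivity: 0.95
est, se = normal_approx_logit(19, 20)
approx = inv_logit(est)          # ~0.93, pulled toward 50%
print(raw, round(approx, 3), round(se, 3))
```

With only 20 patients, the corrected estimate (about 0.93) sits closer to 50% than the observed proportion (0.95), and the effect grows as counts shrink; an exact binomial likelihood avoids the correction entirely, which is consistent with the direction of the differences the authors report.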
Based on these results, the authors suggest that, whenever possible, the binomial likelihood should be preferred in applied meta-analyses of sensitivity and specificity. This has important implications for the evaluation of medical tests in the future.
This study was published in the Journal of Clinical Epidemiology, 2017 (ahead of print).
For more information: https://www.ncbi.nlm.nih.gov/pubmed/28063915