In a new study in the journal Addiction, Dr. Dennis Gorman, a professor at the Texas A&M School of Public Health, examined 38 high-impact addiction journals, using information from the Center for Open Science and journal author instructions to determine the extent to which each journal had adopted six common methods for ensuring that published articles are credible.
Credibility problems often stem from the use of data analysis methods that are more likely to yield positive results, from producing study hypotheses after the results are already known, and from selectively reporting only positive results. Financial and non-monetary conflicts of interest sometimes play a role as well. Requiring statements that disclose conflicts of interest is one method for improving research credibility. The other procedures include the use of specific guidelines for reporting results; registration of clinical trials, systematic reviews, and other study designs; open sharing of data; and peer-reviewed registered reports that evaluate study methods before data are collected and analyzed.
Thirty-seven of the 38 journals used at least one of these methods. The most common was conflict of interest disclosure, which all but one journal required. Nearly half of the journals required pre-registration of clinical trials, but only four required registration of other types of studies. Such pre-registration is useful because it demonstrates that a hypothesis was formed, and that measures and statistical analyses were specified in detail, before data were collected. Twenty-eight journals had data sharing policies, although these only encouraged data sharing rather than mandating it as a requirement of publication. Thirteen journals recommended the use of reporting guidelines when writing up studies for publication.