Advanced algorithms working from large chemical databases can predict a new chemical’s toxicity better than standard animal tests, suggests a study led by scientists at Johns Hopkins Bloomberg School of Public Health.
In the study, published July 11 in the journal Toxicological Sciences, the researchers mined a large database of known chemicals they had developed to map the relationships between chemical structures and toxic properties. They then showed that this map can be used to automatically predict the toxic properties of any chemical compound — more accurately than a single animal test would.
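The article does not detail the team's algorithm, but the description — mapping structural similarity between chemicals and inferring toxicity from near neighbors — matches a "read-across" approach. The sketch below is an illustrative, simplified version under that assumption: chemicals are represented as sets of structural features, similarity is measured with the Tanimoto (Jaccard) coefficient, and a new compound's toxicity is predicted by majority vote among its most similar known neighbors. The feature names and labels are invented toy data, not from the study.

```python
# Hedged sketch of similarity-based ("read-across") toxicity prediction.
# Assumption: each chemical is a set of structural features; the real study
# used far richer representations and a much larger database.

def tanimoto(a, b):
    """Tanimoto (Jaccard) similarity between two feature sets."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def predict_toxicity(query, known, k=3):
    """Majority vote among the k known chemicals most similar to `query`.

    known: list of (feature_set, is_toxic) pairs.
    """
    ranked = sorted(known, key=lambda item: tanimoto(query, item[0]), reverse=True)
    votes = [tox for _, tox in ranked[:k]]
    return sum(votes) > len(votes) / 2

# Toy database; features stand in for substructures such as rings or halogens.
database = [
    ({"aromatic_ring", "chlorine", "nitro"}, True),
    ({"aromatic_ring", "chlorine"}, True),
    ({"hydroxyl", "alkyl_chain"}, False),
    ({"hydroxyl", "ester"}, False),
    ({"nitro", "amine"}, True),
]

new_compound = {"aromatic_ring", "chlorine", "amine"}
print(predict_toxicity(new_compound, database))  # True: nearest neighbors are toxic
```

The design choice here — ranking by set overlap rather than training a parametric model — mirrors the intuition in the article: structurally similar chemicals tend to share toxic properties, so a large, well-curated database can stand in for new experiments.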
The most advanced toxicity-prediction tool the team developed was on average about 87 percent accurate in reproducing consensus animal-test-based results — across nine common tests, which account for 57 percent of the world's animal toxicology testing. By contrast, repetitions of the same animal tests in the database were only about 81 percent consistent — in other words, any given test had only an 81 percent chance, on average, of yielding the same toxicity result when repeated.
“These results are a real eye-opener — they suggest that we can replace many animal tests with computer-based prediction and get more reliable results,” says principal investigator Dr. Thomas Hartung, the Doerenkamp-Zbinden Chair and professor in the Department of Environmental Health and Engineering at the Bloomberg School.