For toxicological tests, a false negative (i.e., predicting a chemical is negative when it is in fact positive) is one type of error. It is often desirable to minimize the number of false negatives to decrease the risk of missing a toxic chemical.

Computational toxicology supports rapid, high-throughput testing of multiple chemicals, and like all tests, it can generate false negative results. To illustrate, the ICH M7 pharmaceutical impurities guideline1 recommends the use of two complementary (Q)SAR methodologies to predict the results of the bacterial mutagenicity test. By selecting the more conservative of the two outcomes, this approach reduces the number of false negatives compared with using a single methodology. The guideline also highlights the use of an expert review, especially when the results from the two methodologies are conflicting or inconclusive. This subsequent expert review further minimizes the risk of missing a mutagenic impurity.
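The "most conservative outcome" rule can be sketched as a simple ranking over the possible (Q)SAR calls. The outcome names and their ordering below are illustrative assumptions for this sketch, not terminology fixed by ICH M7 itself:

```python
# Hypothetical sketch: combine two (Q)SAR predictions by taking the
# more conservative call. The severity ranking here is an assumption
# used only for illustration.
SEVERITY = {"positive": 3, "indeterminate": 2, "out-of-domain": 1, "negative": 0}

def overall_call(statistical: str, expert: str) -> str:
    """Return the more conservative of the two (Q)SAR outcomes."""
    return max(statistical, expert, key=lambda outcome: SEVERITY[outcome])

# Example: a conflicting result resolves to the conservative (positive) call.
print(overall_call("negative", "positive"))  # -> positive
```

Under this rule, an overall negative call is only possible when both methodologies agree on negative, which is why that combination is the one most in need of a quantified false-negative rate.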

For the different (Q)SAR outcomes (including positive, negative, out-of-domain, and indeterminate) generated by each computational methodology, is it possible to quantify the risk of missing a mutagenic impurity?

Quantifying this risk would help define the scope of any subsequent expert review.

To understand this risk, computational models typically used in assessing pharmaceutical impurities were run over a series of proprietary collections where the results of the bacterial mutagenicity test were known. For each collection, the experimental results and the (Q)SAR predictions (from the two methodologies) were then shared with us. In total, information on approximately 16,000 chemicals was shared. We then grouped the chemicals by the different combinations of (Q)SAR outcomes for the two methodologies (“statistical” and “expert”), such as negative(statistical)&negative(expert) or out-of-domain(statistical)&negative(expert).

For each combination of outcomes, the proportion of positive experimental results was then calculated. For example, 7,978 chemicals were predicted as negative(statistical)&negative(expert) and 8.1% were mutagenic. By computing the percentage of mutagenic chemicals for each combination of outcomes, it is possible to understand the risk of missing a mutagenic impurity. This in turn can be used to determine the scope of any expert review and may also be used as part of a weight-of-evidence assessment.
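The grouping and proportion calculation described above can be sketched in a few lines. The record layout (statistical call, expert call, experimental result) is an assumption for illustration; the published analysis used the actual proprietary collections:

```python
from collections import Counter

def mutagenic_rate_by_outcome(records):
    """Fraction of experimentally mutagenic chemicals per outcome combination.

    records: iterable of (statistical_call, expert_call, is_mutagenic) tuples,
    where is_mutagenic is the known bacterial mutagenicity test result.
    Returns a dict mapping (statistical_call, expert_call) -> fraction positive.
    """
    totals, positives = Counter(), Counter()
    for statistical, expert, is_mutagenic in records:
        key = (statistical, expert)
        totals[key] += 1
        if is_mutagenic:
            positives[key] += 1
    return {key: positives[key] / totals[key] for key in totals}

# Toy data, not the study's results: 1 mutagenic chemical out of 10
# predicted negative by both methodologies gives a 10% rate.
toy = [("negative", "negative", False)] * 9 + [("negative", "negative", True)]
rates = mutagenic_rate_by_outcome(toy)
print(rates[("negative", "negative")])  # -> 0.1
```

Each resulting rate is an empirical estimate of the chance of missing a mutagenic impurity for that combination of (Q)SAR outcomes, which is the quantity used to scope the expert review.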

The full results from this analysis, along with a series of case studies, have been reported in a recent publication2, which was cited in the 2020 European Medicines Agency questions & answers Step 2b document3.

Please contact me (gmyatt@leadscope.com) for more information on how this approach can be used as part of an expert review.

References

1. ICH M7, 2017 (R1) (2017) Assessment and control of DNA reactive (mutagenic) impurities in pharmaceuticals to limit potential carcinogenic risk.

2. Amberg et al., 2019. Principles and procedures for handling out-of-domain and indeterminate results as part of ICH M7 recommended (Q)SAR analyses. Regulatory Toxicology and Pharmacology 102, 53–64. doi:10.1016/j.yrtph.2018.12.007 

3. European Medicines Agency. ICH guideline M7 on assessment and control of DNA reactive (mutagenic) impurities in pharmaceuticals to limit potential carcinogenic risk – questions & answers Step 2b. 2 July 2020. EMA/CHMP/ICH/321999/2020

Published by Glenn Myatt

Glenn J. Myatt is the co-founder and currently head of Leadscope (an Instem company) with over 25 years’ experience in computational chemistry/toxicology. He holds a Bachelor of Science degree in Computing from Oxford Brookes University, a Master of Science degree in Artificial Intelligence from Heriot-Watt University and a Ph.D. in Chemoinformatics from the University of Leeds. He has published 27 papers, six book chapters and three books.