One of the more challenging outcomes from a (Q)SAR model is the out-of-domain (OOD) result. This outcome arises because (Q)SAR models are, in many situations (such as under the ICH M7 guideline), required to perform an applicability domain analysis to satisfy the OECD validation principles [1]. Although a (Q)SAR model may still generate a prediction, an OOD result generally means that the test chemical is outside the chemical space in which the model can make a prediction with a stated reliability. An expert-review is helpful to understand and reassess the reliability of an OOD result.

The procedures discussed in the blog entry titled “The use of chemical analogs in expert reviews” could be utilized in such an expert-review; however, understanding why the prediction is OOD in the first place can cue the assessor to the focus of the review. This is especially important because different model types define their applicability domains differently. A prediction is considered within the applicability domain of Leadscope’s statistical models when at least one structural feature is included in the model’s prediction and the training set contains a sufficiently similar analog. Meeting both criteria indicates that the model ‘knows’ something about the test chemical and has a basis for making a prediction. If one of these criteria is not met (for example, no analog in the training set is sufficiently similar to the test chemical, although structural features are used in the prediction), the result is considered OOD. In some of these cases, the lack of a close neighbor is due to a sub-structure in the test chemical that is unfamiliar to the model. In addition to assessing potentially reactive features and the relevancy of the model features, the prediction for the core sub-structure that is within the applicability domain of the model could serve as part of an expert-review. Assessing whether this sub-structure is potentially reactive is a good starting point, i.e., are there sufficient negative examples in the database to not consider the sub-structure a concern? If so, this review would support a reassessment of the prediction’s reliability.
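To make the two criteria concrete, the domain check described above can be sketched as a simple rule. This is an illustration only: the function name, parameters, and the similarity threshold are assumptions for the sketch, not Leadscope’s actual implementation.

```python
# Illustrative sketch only: names and the similarity cut-off are assumed,
# not the actual Leadscope applicability domain implementation.

def domain_status(n_model_features: int,
                  max_analog_similarity: float,
                  similarity_threshold: float = 0.7) -> str:
    """Classify a prediction as in-domain or out-of-domain (OOD).

    In-domain requires BOTH:
      * at least one structural feature from the model fires, and
      * at least one sufficiently similar training-set analog.
    """
    has_features = n_model_features >= 1
    has_analog = max_analog_similarity >= similarity_threshold

    if has_features and has_analog:
        return "in-domain"
    if has_features:
        return "OOD: no sufficiently similar training-set analog"
    if has_analog:
        return "OOD: no model features in the prediction"
    return "OOD: no model features and no close analogs"
```

The second branch is the case discussed above: the model uses structural features of the test chemical, but no training-set neighbor is close enough, often because the test chemical carries a sub-structure the model has not seen.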

Readily available parameters, such as the prediction probability, are also helpful in such cases. A previous analysis by Amberg et al. (2019) showed that the risk of missing a mutagenic impurity given an OOD statistical result with a probability <0.2 and a negative expert rule-based result is approximately the same as when both methodologies predict negative [2].
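The triage implied by that analysis can be written down as a rule of thumb. The sketch below is a simplified reading of the published observation, not a validated decision procedure; the names are hypothetical, and the 0.2 probability cut-off is the one reported in the analysis above.

```python
# Simplified sketch of the Amberg et al. (2019) observation; names are
# hypothetical and this is not a validated decision procedure.

def supports_negative_call(is_ood: bool,
                           probability: float,
                           expert_rule_negative: bool) -> bool:
    """An OOD statistical result with probability < 0.2, combined with a
    negative expert rule-based result, carries roughly the same risk of a
    missed mutagenic impurity as two negative predictions."""
    return is_ood and probability < 0.2 and expert_rule_negative
```

A combination failing any of the three conditions would instead call for the fuller expert-review discussed throughout this entry.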

In another instance, there may be sufficiently similar analogs in the training set, but the statistical model’s prediction is out of domain due to an absence of model features. Here, an advantage of using complementary statistical and expert rule-based approaches is observed, since the expert rule-based prediction will likely be within its applicability domain: its domain is defined not by model features but by the sufficiency of analogs. The content of the blog entry on the use of chemical analogs in expert reviews is helpful in these situations. Resolving an OOD result ultimately relies on an expert-review; nevertheless, a computational aspect remains, as the model/software provides information that facilitates the review.
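Because the two methodologies define their domains differently, an OOD statistical call need not leave the assessor empty-handed. A minimal sketch of that fallback logic, with all names assumed for illustration:

```python
# Hypothetical sketch of combining complementary methodologies; not an
# actual software workflow.

def resolve_prediction(stat_in_domain: bool, stat_call: str,
                       rule_in_domain: bool, rule_call: str) -> str:
    """If the statistical model is OOD but the expert rule-based model is
    in domain (its domain rests on analog sufficiency, not model features),
    fall back to the rule-based call, flagged for expert-review."""
    if stat_in_domain:
        return stat_call
    if rule_in_domain:
        return rule_call + " (expert rule-based; statistical OOD, review advised)"
    return "indeterminate (both OOD; full expert-review required)"
```

The flag in the fallback branch reflects the point above: the software output supports, but does not replace, the expert-review.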

Please contact me if you would like to discuss this further.

  1. OECD (2014). Guidance Document on the Validation of (Quantitative) Structure-Activity Relationship [(Q)SAR] Models. OECD Series on Testing and Assessment, No. 69. OECD Publishing, Paris.
  2. Amberg, A., et al. (2019). “Principles and Procedures for Handling Out-of-Domain and Indeterminate Results as Part of ICH M7 Recommended (Q)SAR Analyses.” Regulatory Toxicology and Pharmacology.
