Table 3.
Evaluation category | Evaluation criteria |
---|---|
Characteristics | Principle; Prediction (i.e., hazard versus potency [categories or continuous]); Publication; Information sources |
Input data | Test method (in vitro and in chemico): read-out used; validation status; reproducibility; issues (e.g., IP, availability). In silico/expert system data/physicochemical properties: read-out used; availability; reliability; issues (e.g., IP, availability). Expert knowledge: input used; availability |
Prediction algorithm | Type; Availability; Transparency; Requirements for implementation (specific software); Self-learning; Complexity; Sequential information generation; All inputs required?; Predictivity: sample size (total and per category); Predictivity: parameters (sensitivity, specificity, concordance; see the sketch below the table) |
Mechanistic relevance | OECD AOP key events covered; Sequence of OECD AOP events considered; Justification/discussion of the mechanistic relevance |
Applicability domain | Chemical spectrum tested; Limitations (solubility, surfactants); Potential limitations for cosmetic ingredients (e.g., natural extracts cannot be processed by in silico approaches) |
Practical aspects | Costs; Can it be conducted by a CRO [contract research organization]?; Time required (per substance) |
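The predictivity parameters listed under "Prediction algorithm" are standard confusion-matrix statistics. As a minimal illustrative sketch (not taken from the source), assuming binary hazard calls (sensitizer/non-sensitizer) compared against a reference standard such as the LLNA, they could be computed as follows; the data and the function name `predictivity` are hypothetical.

```python
# Minimal sketch: sensitivity, specificity, and concordance (overall accuracy)
# for binary hazard calls, as named in Table 3. All data are hypothetical.
from typing import Sequence


def predictivity(reference: Sequence[int], predicted: Sequence[int]) -> dict:
    """Compare predicted hazard calls against reference calls
    (1 = sensitizer, 0 = non-sensitizer)."""
    tp = sum(1 for r, p in zip(reference, predicted) if r == 1 and p == 1)
    tn = sum(1 for r, p in zip(reference, predicted) if r == 0 and p == 0)
    fp = sum(1 for r, p in zip(reference, predicted) if r == 0 and p == 1)
    fn = sum(1 for r, p in zip(reference, predicted) if r == 1 and p == 0)
    return {
        "sensitivity": tp / (tp + fn),             # correct calls among reference sensitizers
        "specificity": tn / (tn + fp),             # correct calls among reference non-sensitizers
        "concordance": (tp + tn) / len(reference), # overall agreement with the reference
    }


if __name__ == "__main__":
    # Hypothetical reference and defined-approach hazard calls for 10 substances
    reference = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0]
    predicted = [1, 1, 1, 1, 0, 1, 0, 0, 1, 0]
    print(predictivity(reference, predicted))
    # -> {'sensitivity': 0.833..., 'specificity': 0.75, 'concordance': 0.8}
```

The sample-size criterion in the same row matters here: with few substances per category, these proportions carry wide uncertainty, which is why the table asks for both the total and the per-category counts.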