Basic considerations
– Visualize what the ideal model would look like using knowledge from up-to-date research and clinical experience
– Consider face validity; i.e., perceived clinical utility
– Consider practicality; e.g., scoring time
– Consider pharmacologic treatment cutoff values based on the ideal model

Differences in opinion
People can have varying opinions on a perceived ideal model. For instance, opinions can vary with respect to:
– the utility of an item; i.e., is the item needed in the tool, and if so, how much weight should be assigned to it?
– the practicality of an item; e.g., does the time it takes to score the item outweigh the utility it adds to the tool?
– what treatment cutoff value(s) should be used, and how should they be applied?
As a result of these differences in opinion, perceived ideal assessment tools will also differ. This is one reason why new or modified assessment tools continue to be developed.

Issues in current practice
Advances in research and differences in opinion continue to result in different or modified tools being proposed and used. Examples include:
– Empirical research on treatment cutoffs has used NAS as the outcome of interest, rather than the true need to treat, and can therefore provide only suggestive evidence regarding the need for treatment
– Judgments about whether to include an item in a tool should consider not only inter-rater reliability but also the item's utility
– Future tools should be developed based on a formative modeling strategy and can be created by adding items to, or removing items from, existing tools. Other information can also be used in conjunction with such tools
– Assessment tools based on formative modeling can be developed to encompass a variety of exposure types
– Tools tend to be developed by a small group of people before being published and presumably used. Given likely differences in opinion among other clinicians and researchers, implementation and, ultimately, standardization in practice is unlikely
– To help standardize NAS assessment, experts should come together to decide on the best formative model(s) and, ultimately, assessment tool(s) to use
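The formative modeling idea described above, in which a composite score is built as a weighted combination of items and compared against an expert-chosen treatment cutoff, can be illustrated with a minimal sketch. All item names, weights, and the cutoff value below are hypothetical and for illustration only; they are not taken from any published assessment tool.

```python
# Hypothetical sketch of a formative scoring model: the composite score is a
# weighted sum of observed item scores, compared against a treatment cutoff.
# Item names, weights, and the cutoff are illustrative, not from any real tool.

ITEM_WEIGHTS = {
    "tremors": 2.0,
    "excessive_crying": 1.0,
    "poor_feeding": 1.5,
    "sleep_disturbance": 1.0,
}
TREATMENT_CUTOFF = 4.0  # illustrative threshold chosen by hypothetical experts


def formative_score(item_scores: dict) -> float:
    """Composite score as a weighted sum of item scores (formative model)."""
    return sum(ITEM_WEIGHTS[item] * score for item, score in item_scores.items())


def needs_treatment(item_scores: dict) -> bool:
    """Flag pharmacologic treatment when the composite meets the cutoff."""
    return formative_score(item_scores) >= TREATMENT_CUTOFF


# Example observation: 2*1 + 1*2 + 1.5*1 + 1*0 = 5.5, which meets the cutoff.
example = {"tremors": 1, "excessive_crying": 2, "poor_feeding": 1, "sleep_disturbance": 0}
print(formative_score(example))  # 5.5
print(needs_treatment(example))  # True
```

Under this framing, the disagreements listed earlier map directly onto the sketch: differing opinions on item utility correspond to different weights (or dropping an item entirely), and differing opinions on treatment thresholds correspond to different cutoff values.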