Applied Psychological Measurement. 2020 Mar 14;44(4):327–328. doi: 10.1177/0146621620909901

InDisc: An R Package for Assessing Person and Item Discrimination in Typical-Response Measures

Pere J. Ferrando and David Navarro-González
PMCID: PMC7262995  PMID: 32536733

Abstract

InDisc is an R package that implements procedures for estimating and fitting unidimensional Item Response Theory (IRT) Dual Models (DMs). DMs are intended for personality and attitude measures and are, essentially, extended standard IRT models with an extra person parameter that models the discriminating power of the individual. The package consists of a main function, which calls subfunctions for fitting binary, graded, and continuous responses. The program, a detailed user’s guide, and an empirical example are available at no cost to the interested practitioner.

Keywords: dual models, person discrimination, person reliability, IRT

Program Description

A unified approach for obtaining and estimating unidimensional Item Response Theory (IRT) Dual Models (DMs) has been proposed by Ferrando (2019). DMs are intended for personality and attitude measures, are based on a Thurstonian response process, and are, essentially, extended standard IRT models with an extra person parameter that models the discriminating power of the individual. Consequently, both items and individuals are considered as sources of measurement error in DMs.
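
To fix ideas, a schematic version of the binary DM can be written as follows; this is a hedged illustration consistent with the description above, not necessarily the exact parameterization used in Ferrando (2019):

    P(X_{ij} = 1 \mid \theta_i, \delta_i) = \Phi\bigl( \delta_i \, a_j (\theta_i - b_j) \bigr)

Here \theta_i is the trait level, a_j and b_j are the usual item discrimination and location parameters, and \delta_i > 0 is the person discrimination: low values of \delta_i flatten the response function and thus add person-driven measurement error on top of the item-driven error.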

InDisc (for Item and Individual Discrimination) is an R package consisting of a main function (InDisc), which calls all the subfunctions that implement the procedures described in Ferrando (2019) for fitting binary, graded, and continuous response DMs. Estimation is based on a two-stage (calibration and scoring) random-regressors approach (McDonald, 1982). Item calibration at the first stage is the same as in the corresponding standard IRT models, is based on a factor-analytic underlying-variables approach, and uses an unweighted least squares (ULS) minimum-residual criterion as implemented in the “psych” R package (Revelle, 2018). Individual trait scores and individual discriminations are obtained at the second stage using Expected a Posteriori (EAP) Bayes estimation. Overall, the combined ULS-EAP estimation procedure is simple and robust, and it can handle large datasets in terms of both sample size and test length.
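
As a rough sketch of the calibration stage only (not a reproduction of the package’s internal code), the underlying-variables, minimum-residual approach for graded items amounts to fitting a single factor to a polychoric correlation matrix with the “psych” functions; item_scores below stands for a hypothetical N × n matrix of graded item responses:

    # Calibration-stage sketch (illustrative only); 'item_scores' is a
    # hypothetical N x n matrix of graded item responses.
    library(psych)
    R   <- polychoric(item_scores)$rho         # underlying-variables (polychoric) correlations
    cal <- fa(R, nfactors = 1, fm = "minres")  # one-factor ULS / minimum-residual solution
    print(cal$loadings)                        # standardized item loadings

The second (scoring) stage, in which the EAP trait and person-discrimination estimates and their posterior standard deviations are obtained, is handled internally by the package.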

For input, the main function requires (a) the N (individuals) × n (items) data matrix containing the item scores, (b) the number of quadrature points for EAP estimation (default is 30), and (c) the model to be used (“graded” or “linear”). The output is printed in the console in text form (output can also be saved) and includes (a) item parameter estimates with standard errors and measures of model-data fit at the calibration stage and (b) point estimates of the trait level and person discrimination, with the corresponding posterior standard deviations, marginal reliabilities, and measures of appropriateness of the DM at the scoring stage.
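
A minimal usage sketch is given below; the argument names are assumptions based on the description above (data matrix, number of quadrature points, and model keyword), so ?InDisc and the user’s guide should be consulted for the exact signature:

    # Hypothetical call; argument names are assumptions, not the verified API.
    library(InDisc)
    dat <- as.matrix(my_item_scores)   # placeholder N x n matrix of item scores
    res <- InDisc(dat, nquad = 30, model = "graded")
    res                                # prints calibration and scoring output in the console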

InDisc has been developed in R version 3.6.1 and runs with R versions more recent than 3.5.0. In principle, it is operational in any operating system that supports R (Windows, Linux, macOS). The number of variables and respondents the program can handle is not limited. The available documentation includes the internal R help files, which contain one empirical example, as well as a detailed 31-page user’s guide.

Availability, Documentation, and Distribution

The package, the documentation, and a sample illustrative dataset are available at no cost from the CRAN repository (https://CRAN.R-project.org/package=InDisc). The user’s guide can be obtained at no cost from https://psico.fcep.urv.cat/utilitats/InDisc.

Supplemental Material

InDisc_script – Supplemental material for “InDisc: An R Package for Assessing Person and Item Discrimination in Typical-Response Measures” by Pere J. Ferrando and David Navarro-González, Applied Psychological Measurement.

Footnotes

Author’s note: Supplemental material for this article is available online at http://psico.fcep.urv.cat/utilitats/InDisc

Declaration of Conflicting Interests: The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding: The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This project was supported by a grant from the Ministerio de Ciencia, Innovación y Universidades and the European Regional Development Fund (ERDF) (PSI2017-82307-P).

ORCID iDs: Pere J. Ferrando https://orcid.org/0000-0002-3133-5466

David Navarro-González https://orcid.org/0000-0002-9843-5058

References

1. Ferrando, P. J. (2019). A comprehensive IRT approach for modeling binary, graded, and continuous responses with error in persons and items. Applied Psychological Measurement, 43(5), 339–359. https://doi.org/10.1177/0146621618817779
2. McDonald, R. P. (1982). Linear versus nonlinear models in item response theory. Applied Psychological Measurement, 6(4), 379–396. https://doi.org/10.1177/014662168200600402
3. Revelle, W. (2018). psych: Procedures for psychological, psychometric, and personality research. Northwestern University. https://CRAN.R-project.org/package=psych
