TY - JOUR
T1 - Detection of aberrant item score patterns in computerized adaptive testing: An empirical example using the CUSUM
JF - Personality and Individual Differences
Y1 - 2010
A1 - Egberink, I. J. L.
A1 - Meijer, R. R.
A1 - Veldkamp, B. P.
A1 - Schakel, L.
A1 - Smid, N. G.
KW - CAT
KW - computerized adaptive testing
KW - CUSUM approach
KW - Person Fit
AB - The scalability of individual trait scores on a computerized adaptive test (CAT) was assessed by investigating the consistency of individual item score patterns. A sample of N = 428 persons completed a personality CAT as part of a career development procedure. To detect inconsistent item score patterns, we used a cumulative sum (CUSUM) procedure. Combined information from the CUSUM, other personality measures, and interviews showed that similar estimated trait values may have a different interpretation. Implications for computer-based assessment are discussed.
VL - 48
SN - 0191-8869
ER -
TY - JOUR
T1 - Using patterns of summed scores in paper-and-pencil tests and computer-adaptive tests to detect misfitting item score patterns
JF - Journal of Educational Measurement
Y1 - 2004
A1 - Meijer, R. R.
KW - Computer Assisted Testing
KW - Item Response Theory
KW - Person Fit
KW - Test Scores
AB - Two new methods have been proposed to determine unexpected sum scores on subtests (testlets), both for paper-and-pencil tests and computerized adaptive tests. A method based on a conservative bound using the hypergeometric distribution, denoted ρ, was compared with a method in which the probability of each score combination was calculated using a highest density region (HDR). Furthermore, these methods were compared with the standardized log-likelihood statistic with and without a correction for the estimated latent trait value (denoted l_z^* and l_z, respectively). Data were simulated on the basis of the one-parameter logistic model, and both parametric and nonparametric logistic regression were used to obtain estimates of the latent trait. Results showed that it is important to take the trait level into account when comparing subtest scores. In a nonparametric item response theory (IRT) context, an adapted version of the HDR method was a powerful alternative to ρ. In a parametric IRT context, results showed that l_z^* had the highest power when the data were simulated conditionally on the estimated latent trait level.
VL - 41
ER -
TY - JOUR
T1 - Using response times to detect aberrant responses in computerized adaptive testing
JF - Psychometrika
Y1 - 2003
A1 - van der Linden, W. J.
A1 - van Krimpen-Stoop, E. M. L. A.
KW - Adaptive Testing
KW - Behavior
KW - Computer Assisted Testing
KW - computerized adaptive testing
KW - Models
KW - Person Fit
KW - Prediction
KW - Reaction Time
AB - A lognormal model for response times is used to check response times for aberrances in examinee behavior on computerized adaptive tests. Both classical procedures and Bayesian posterior predictive checks are presented. For a fixed examinee, responses and response times are independent; checks based on response times therefore offer information independent of the results of checks on response patterns. Empirical examples of the use of classical and Bayesian checks for detecting two different types of aberrance in response times are presented. The Bayesian checks had higher detection rates than the classical checks, but at the cost of higher false-alarm rates. A guideline for the choice between the two types of checks is offered.
VL - 68
ER -
TY - JOUR
T1 - Outlier detection in high-stakes certification testing
JF - Journal of Educational Measurement
Y1 - 2002
A1 - Meijer, R. R.
KW - Adaptive Testing
KW - computerized adaptive testing
KW - Educational Measurement
KW - Goodness of Fit
KW - Item Analysis (Statistical)
KW - Item Response Theory
KW - Person Fit
KW - Statistical Estimation
KW - Statistical Power
KW - Test Scores
AB - Discusses recent developments in person-fit analysis for computerized adaptive testing (CAT). Methods from statistical process control are presented that have been proposed to classify an item score pattern as fitting or misfitting the underlying item response theory model in CAT. Most person-fit research in CAT is restricted to simulated data; in this study, empirical data from a certification test were used. Alternatives for generating norms are discussed so that bounds can be determined to classify an item score pattern as fitting or misfitting. Using bounds determined from a sample of a high-stakes certification test, the empirical analysis showed that different types of misfit can be distinguished. Further applications of statistical process control methods to detect misfitting item score patterns are discussed.
VL - 39
ER -
TY - CHAP
T1 - Detecting person misfit in adaptive testing using statistical process control techniques
T2 - Computer adaptive testing: Theory and practice
Y1 - 2000
A1 - van Krimpen-Stoop, E. M. L. A.
A1 - Meijer, R. R.
KW - Person Fit
JF - Computer adaptive testing: Theory and practice
PB - Kluwer Academic
CY - Dordrecht, The Netherlands
ER -