Title: Evaluation of Parameter Recovery, Drift, and DIF with CAT Data
Publication Type: Conference Paper
Year of Publication: 2017
Authors: Thompson, N, Stoeger, J
Conference Name: IACAT 2017 Conference
Date Published: 08/2017
Publisher: Niigata Seiryo University
Conference Location: Niigata, Japan
Keywords: CAT, DIF, Parameter Drift, Parameter Recovery
Abstract

Parameter drift and differential item functioning (DIF) analyses are frequent components of a test maintenance plan. That is, after test forms are published, organizations will often calibrate post-publication data at a later date to evaluate whether the performance of the items or the test has changed over time. For example, if item content is leaked, the items might gradually become easier, and item statistics or parameters can reflect this.
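As a concrete illustration of the kind of drift check alluded to here, the sketch below compares hypothetical item difficulties from an original bank calibration with a later recalibration and flags large shifts. The parameter values, the simple mean-shift linking, and the 0.4-logit flagging cut are illustrative assumptions only, not values or procedures from this study.

```python
# Minimal drift-check sketch: flag items whose difficulty shifted between calibrations.
# All numbers below are hypothetical; operational analyses would also use standard
# errors and a proper scale-linking method.
import numpy as np

b_bank = np.array([-1.2, -0.4, 0.3, 0.9, 1.6])    # difficulties at publication
b_recal = np.array([-1.3, -0.9, 0.2, 1.0, 1.5])   # difficulties from later data

# Put the recalibration on the bank metric with a simple mean shift
b_linked = b_recal - (b_recal.mean() - b_bank.mean())

drift = b_linked - b_bank
flagged = np.where(np.abs(drift) > 0.4)[0]         # 0.4 logits is an arbitrary cut
print("drift per item:", np.round(drift, 2))
print("flagged items:", flagged)
```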

When tests are published under a computerized adaptive testing (CAT) paradigm, they are nearly always calibrated with item response theory (IRT). IRT calibrations assume that range restriction is not an issue; that is, each item is administered to examinees across a wide range of ability. CAT data violate this assumption, because adaptive item selection targets each item to a narrow band of examinee ability. However, some organizations still wish to evaluate the continuing performance of the items from a DIF or drift perspective.

This presentation will evaluate just how inaccurate DIF and drift analyses might be when performed on CAT data, using a Monte Carlo parameter recovery methodology. Known item parameters will be used to generate both linear and CAT data sets, which are then calibrated and evaluated for DIF and drift. In addition, we will implement Randomesque item exposure control in some CAT conditions; because this randomization broadens the range of examinees who see each item, it alleviates the range restriction problem somewhat, but whether it improves the parameter recovery calibrations is an empirical question.
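The sketch below outlines one way such a parameter recovery comparison can be set up, assuming a 2PL model, maximum-information item selection, and top-k Randomesque exposure control. The bank size, sample size, test length, and the simplified per-item recalibration (which conditions on the simulated abilities rather than re-estimating them) are illustrative assumptions, not the authors' actual study conditions.

```python
# Monte Carlo parameter recovery sketch: linear vs. CAT administration, with and
# without Randomesque exposure control, under illustrative (assumed) conditions.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n_items, n_examinees, cat_length = 200, 500, 30

# Generating ("true") 2PL item parameters and examinee abilities
a_true = rng.lognormal(0.0, 0.3, n_items)
b_true = rng.normal(0.0, 1.0, n_items)
theta = rng.normal(0.0, 1.0, n_examinees)

def p2pl(th, a, b):
    return 1.0 / (1.0 + np.exp(-a * (th - b)))

def simulate_linear():
    """Every examinee answers every item (no range restriction)."""
    p = p2pl(theta[:, None], a_true[None, :], b_true[None, :])
    return (rng.random((n_examinees, n_items)) < p).astype(float)

def simulate_cat(randomesque_k):
    """Adaptive administration; NaN marks items an examinee never saw."""
    resp = np.full((n_examinees, n_items), np.nan)
    grid = np.linspace(-4, 4, 81)
    prior = np.exp(-grid**2 / 2)
    for i, th in enumerate(theta):
        th_hat, given = 0.0, []
        for _ in range(cat_length):
            p = p2pl(th_hat, a_true, b_true)
            info = a_true**2 * p * (1 - p)
            info[given] = -np.inf
            # Randomesque: pick at random among the k most informative items
            item = rng.choice(np.argsort(info)[-randomesque_k:])
            given.append(item)
            resp[i, item] = float(rng.random() < p2pl(th, a_true[item], b_true[item]))
            # Crude EAP ability update, sufficient for illustration
            like = prior.copy()
            for j in given:
                pj = p2pl(grid, a_true[j], b_true[j])
                like *= pj if resp[i, j] == 1 else 1 - pj
            th_hat = np.sum(grid * like) / np.sum(like)
    return resp

def recover_b(resp):
    """Per-item 2PL fit by ML, conditioning on true thetas (a simplification)."""
    b_hat = np.full(n_items, np.nan)
    for j in range(n_items):
        seen = ~np.isnan(resp[:, j])
        if seen.sum() < 50:                    # skip sparsely exposed items
            continue
        th, y = theta[seen], resp[seen, j]
        def nll(ab):
            p = np.clip(p2pl(th, ab[0], ab[1]), 1e-6, 1 - 1e-6)
            return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
        b_hat[j] = minimize(nll, x0=[1.0, 0.0], method="Nelder-Mead").x[1]
    return b_hat

conditions = {"linear": simulate_linear(),
              "CAT, max-info": simulate_cat(randomesque_k=1),
              "CAT, Randomesque k=5": simulate_cat(randomesque_k=5)}
for name, resp in conditions.items():
    b_hat = recover_b(resp)
    ok = ~np.isnan(b_hat)
    rmse = np.sqrt(np.mean((b_hat[ok] - b_true[ok])**2))
    print(f"{name:>22}: {ok.sum():3d} items calibrated, RMSE(b) = {rmse:.3f}")
```

Comparing the number of items that accumulate enough responses to be recalibrated, and the recovery error for those that do, across the linear, max-info, and Randomesque conditions mirrors the comparison described in the abstract.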

Session Video

URL: https://drive.google.com/open?id=1F7HCZWD28Q97sCKFIJB0Yps0H66NPeKq