02133nas a2200169 4500008004100000245006700041210006500108260005500173520156400228653000801792653000801800653002001808653002301828100002101851700002001872856007101892 2017 eng d00aEvaluation of Parameter Recovery, Drift, and DIF with CAT Data0 aEvaluation of Parameter Recovery Drift and DIF with CAT Data aNiigata, JapanbNiigata Seiryo Universityc08/20173 a
Parameter drift and differential item functioning (DIF) analyses are frequent components of a test maintenance plan. That is, after a test form is published, organizations will often calibrate post-publication data at a later date to evaluate whether the performance of the items or the test has changed over time. For example, if item content is leaked, the items may gradually become easier, and the item statistics or parameters will reflect this.
When tests are published under a computerized adaptive testing (CAT) paradigm, they are nearly always calibrated with item response theory (IRT). IRT calibration assumes that range restriction is not an issue – that is, that each item is administered to examinees across a wide range of ability. CAT data violate this assumption by design, because adaptive item selection targets each item at examinees near its difficulty. However, some organizations still wish to evaluate the continuing performance of their items from a DIF or drift paradigm.
This presentation will evaluate just how inaccurate DIF and drift analyses might be on CAT data, using a Monte Carlo parameter recovery methodology. Known item parameters will be used to generate both linear and CAT data sets, which are then calibrated for DIF and drift. In addition, we will implement randomesque item exposure constraints in some CAT conditions, because this randomization somewhat alleviates the range restriction problem; whether it improves the parameter recovery calibrations, however, is an empirical question.
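The data-generation step of such a parameter recovery study can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a 2PL model with arbitrary generating parameters, and a simple ability-window rule stands in for adaptive item selection to show the range restriction that full CAT administration induces. Calibration of the resulting response matrices (e.g., with standard IRT software) would follow.

```python
import numpy as np

rng = np.random.default_rng(42)
n_examinees, n_items = 1000, 50

# "True" generating parameters (illustrative values only)
a = rng.uniform(0.8, 2.0, n_items)     # 2PL discrimination
b = rng.normal(0.0, 1.0, n_items)      # 2PL difficulty
theta = rng.normal(0.0, 1.0, n_examinees)  # examinee ability

def p_correct(theta, a, b):
    """2PL probability of a correct response, (examinees x items)."""
    return 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b)))

# Linear administration: every examinee answers every item
probs = p_correct(theta, a, b)
responses = (rng.random(probs.shape) < probs).astype(int)

# Stand-in for CAT: each item is answered only by examinees whose
# ability falls near the item's difficulty (range restriction)
mask = np.abs(theta[:, None] - b[None, :]) < 1.0
cat_responses = np.where(mask, responses, -1)  # -1 = not administered

# Observed proportion correct: varies widely under linear testing,
# but clusters near 0.5 under the restricted (CAT-like) condition
linear_p = responses.mean(axis=0)
cat_p = np.array([responses[mask[:, j], j].mean() for j in range(n_items)])
```

Recalibrating `responses` versus `cat_responses` against the known `a` and `b` is what allows the recovery error under each administration mode to be quantified.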
10aCAT10aDIF10aParameter Drift10aParameter Recovery1 aThompson, Nathan1 aStoeger, Jordan uhttps://drive.google.com/open?id=1F7HCZWD28Q97sCKFIJB0Yps0H66NPeKq