Cheng, Ying; Patton, Jeffrey M.; Shao, Can (2015). "a-Stratified Computerized Adaptive Testing in the Presence of Calibration Error." Educational and Psychological Measurement, 75(2), 260–283.

Abstract: a-Stratified computerized adaptive testing with b-blocking (AST), as an alternative to the widely used maximum Fisher information (MFI) item selection method, can effectively balance item pool usage while providing accurate latent trait estimates in computerized adaptive testing (CAT). However, previous comparisons of these methods have treated item parameter estimates as if they were the true population parameter values; consequently, capitalization on chance may occur. In this article, we examined the performance of the AST method under more realistic conditions in which item parameter estimates, rather than true parameter values, are used in the CAT. Its performance was compared against that of the MFI method when the latter is used in conjunction with Sympson–Hetter or randomesque exposure control. Results indicate that the MFI method, even when combined with exposure control, is susceptible to capitalization on chance, particularly when the calibration sample size is small. In contrast, AST is more robust to capitalization on chance. Consistent with previous investigations using true item parameter values, AST yields much more balanced item pool usage, with a small loss in the precision of latent trait estimates. The loss is negligible when the test is as long as 40 items.

URL: http://epm.sagepub.com/content/75/2/260.abstract

Patton, Jeffrey M.; Cheng, Ying; Yuan, Ke-Hai; Diao, Qi (2013). "The Influence of Item Calibration Error on Variable-Length Computerized Adaptive Testing." Applied Psychological Measurement, 37(1), 24–40.

Abstract: Variable-length computerized adaptive testing (VL-CAT) allows both items and test length to be "tailored" to examinees, thereby achieving the measurement goal (e.g., scoring precision or classification) with as few items as possible. Several popular test termination rules depend on the standard error of the ability estimate, which in turn depends on the item parameter values. However, items are chosen on the basis of their parameter estimates, and capitalization on chance may occur. In this article, the authors investigated the effects of capitalization on chance on test length and classification accuracy in several VL-CAT simulations. The results confirm that capitalization on chance occurs in VL-CAT and has complex effects on test length, ability estimation, and classification accuracy. These results have important implications for the design and implementation of VL-CATs.

URL: http://apm.sagepub.com/content/37/1/24.abstract