TY - JOUR
T1 - Multidimensional CAT Item Selection Methods for Domain Scores and Composite Scores With Item Exposure Control and Content Constraints
JF - Journal of Educational Measurement
Y1 - 2014
A1 - Yao, Lihua
AB - The intent of this research was to find an item selection procedure in the multidimensional computer adaptive testing (CAT) framework that yielded higher precision for both the domain and composite abilities, had higher usage of the item pool, and controlled the exposure rate. Five multidimensional CAT item selection procedures (minimum angle; volume; minimum error variance of the linear combination; minimum error variance of the composite score with optimized weight; and Kullback-Leibler information) were studied and compared with two methods for item exposure control (the Sympson-Hetter procedure and the fixed-rate procedure, which simply places an upper limit on the item exposure rate) using simulated data. The maximum priority index method was used for the content constraints. Results showed that the Sympson-Hetter procedure yielded better precision than the fixed-rate procedure but had much lower item pool usage and took more time. The five item selection procedures performed similarly under Sympson-Hetter. For the fixed-rate procedure, there was a trade-off between the precision of the ability estimates and the item pool usage, and the five procedures showed different patterns. It was found that (1) Kullback-Leibler had better precision but lower item pool usage; (2) minimum angle and volume had balanced precision and item pool usage; and (3) the two methods minimizing the error variance had the best item pool usage and comparable overall score recovery but less precision for certain domains. The priority index method for content constraints and item exposure control was implemented successfully.
VL - 51
UR - http://dx.doi.org/10.1111/jedm.12032
ER -

TY - JOUR
T1 - Using Multidimensional CAT to Administer a Short, Yet Precise, Screening Test
JF - Applied Psychological Measurement
Y1 - 2014
A1 - Yao, Lihua
A1 - Pommerich, Mary
A1 - Segall, Daniel O.
AB - Multidimensional computerized adaptive testing (MCAT) provides a mechanism by which the simultaneous goals of accurate prediction and minimal testing time for a screening test can both be met. This article demonstrates the use of MCAT to administer a screening test for the Computerized Adaptive Testing-Armed Services Vocational Aptitude Battery (CAT-ASVAB) under a variety of manipulated conditions. CAT-ASVAB is a test battery administered via unidimensional CAT (UCAT) that is used to qualify applicants for entry into the U.S. military and assign them to jobs. The primary research question is whether using MCAT to administer a screening test can significantly reduce testing time relative to the full-length selection test without significant losses in score precision. Different stopping rules, item selection methods, content constraints, time constraints, and population distributions for the MCAT administration are evaluated through simulation and compared with results from a regular full-length UCAT administration.
VL - 38
UR - http://apm.sagepub.com/content/38/8/614.abstract
ER -

TY - JOUR
T1 - Comparing the Performance of Five Multidimensional CAT Selection Procedures With Different Stopping Rules
JF - Applied Psychological Measurement
Y1 - 2013
A1 - Yao, Lihua
AB - Using simulated data, five multidimensional computerized adaptive testing (MCAT) selection procedures with varying test lengths are examined and compared under different stopping rules. Fixed item exposure rates are used for all the items, and the Priority Index (PI) method is used for the content constraints. Two stopping rules, standard error (SE) and predicted standard error reduction (PSER), are proposed; each MCAT selection process is stopped when either the required precision has been achieved or the number of selected items has reached the maximum limit. The five procedures are as follows: minimum angle (Ag), volume (Vm), minimizing the error variance of the linear combination (V1), minimizing the error variance of the composite score with the optimized weight (V2), and Kullback-Leibler (KL) information. The recovery of the domain (content) scores and the overall score, test length, and test reliability are compared across the five MCAT procedures and between the two stopping rules. It is found that the two stopping rules are implemented successfully and that KL requires the fewest items to reach the same precision level, followed by Vm; Ag requires the most. Under the SE stopping rule, reaching a precision of SE = .35 requires, on average, 40, 55, 63, 63, and 82 items for KL, Vm, V1, V2, and Ag, respectively; PSER yields 38, 45, 53, 58, and 68 items for the same procedures. PSER thus gives only slightly worse precision than SE, but with far fewer items. Overall, KL is recommended for varying-length MCAT.
VL - 37
UR - http://apm.sagepub.com/content/37/1/3.abstract
ER -