TY - JOUR
T1 - Optimal and nonoptimal computer-based test designs for making pass-fail decisions
JF - Applied Measurement in Education
Y1 - 2006
A1 - Hambleton, R. K.
A1 - Xing, D.
KW - adaptive test
KW - credentialing exams
KW - Decision Making
KW - Educational Measurement
KW - multistage tests
KW - optimal computer-based test designs
KW - test form
AB - Now that many credentialing exams are being routinely administered by computer, new computer-based test designs, along with item response theory models, are being aggressively researched to identify specific designs that can increase the decision consistency and accuracy of pass-fail decisions. The purpose of this study was to investigate the impact of optimal and nonoptimal multistage test (MST) designs, linear parallel-form test designs (LPFT), and computer adaptive test (CAT) designs on the decision consistency and accuracy of pass-fail decisions. Realistic testing situations matching those of one of the large credentialing agencies were simulated to increase the generalizability of the findings. The conclusions were clear: (a) With the LPFTs, matching test information functions (TIFs) to the mean of the proficiency distribution produced slightly better results than matching them to the passing score; (b) all of the test designs worked better than test construction using random selection of items, subject to content constraints only; (c) CAT performed better than the other test designs; and (d) if matching a TIF to the passing score, the MST design produced slightly better results than the LPFT design. If an argument for the MST design is to be made, it can be made on the basis of slight improvements over the LPFT design and better expected item bank utilization, candidate preference, and the potential for improved diagnostic feedback, compared with the feedback that is possible with fixed linear test forms. (PsycINFO Database Record (c) 2007 APA, all rights reserved)
PB - Lawrence Erlbaum: US
VL - 19
SN - 0895-7347 (Print); 1532-4818 (Electronic)
ER -

TY - ABST
T1 - Computer-based test designs with optimal and non-optimal tests for making pass-fail decisions
Y1 - 2004
A1 - Hambleton, R. K.
A1 - Xing, D.
CY - Research Report, University of Massachusetts, Amherst, MA
ER -

TY - JOUR
T1 - Small sample estimation in dichotomous item response models: Effect of priors based on judgmental information on the accuracy of item parameter estimates
JF - Applied Psychological Measurement
Y1 - 2003
A1 - Swaminathan, H.
A1 - Hambleton, R. K.
A1 - Sireci, S. G.
A1 - Xing, D.
A1 - Rizavi, S. M.
AB - Large item banks with properly calibrated test items are essential for ensuring the validity of computer-based tests. At the same time, item calibrations with small samples are desirable to minimize the amount of pretesting and limit item exposure. Bayesian estimation procedures show considerable promise with small examinee samples. The purposes of the study were (a) to examine how prior information for Bayesian item parameter estimation can be specified and (b) to investigate the relationship between sample size and the specification of prior information on the accuracy of item parameter estimates. The results of the simulation study were clear: Estimation of item response theory (IRT) model item parameters can be improved considerably. Improvements in the one-parameter model were modest; considerable improvements with the two- and three-parameter models were observed. Both the study of different forms of priors and ways to improve the judgmental data used in forming the priors appear to be promising directions for future research.
VL - 27
N1 - Sage Publications, US
ER -

TY - CONF
T1 - Impact of test design, item quality and item bank size on the psychometric properties of computer-based credentialing exams
T2 - Paper presented at the meeting of National Council on Measurement in Education
Y1 - 2002
A1 - Xing, D.
A1 - Hambleton, R. K.
JF - Paper presented at the meeting of National Council on Measurement in Education
CY - New Orleans
N1 - PDF file, 500 K
ER -

TY - ABST
T1 - Impact of several computer-based testing variables on the psychometric properties of credentialing examinations (Laboratory of Psychometric and Evaluative Research Report No 393)
Y1 - 2001
A1 - Xing, D.
A1 - Hambleton, R. K.
CY - Amherst, MA: University of Massachusetts, School of Education.
ER -

TY - CONF
T1 - Impact of several computer-based testing variables on the psychometric properties of credentialing examinations
T2 - Paper presented at the Annual Meeting of the National Council on Measurement in Education
Y1 - 2001
A1 - Xing, D.
A1 - Hambleton, R. K.
JF - Paper presented at the Annual Meeting of the National Council on Measurement in Education
CY - Seattle WA
ER -

TY - CONF
T1 - Classification accuracy and test security for a computerized adaptive mastery test calibrated with different IRT models
T2 - Paper presented at the annual meeting of the National Council on Measurement in Education
Y1 - 2000
A1 - Robin, F.
A1 - Xing, D.
A1 - Scrams, D.
A1 - Potenza, M.
JF - Paper presented at the annual meeting of the National Council on Measurement in Education
CY - New Orleans LA
ER -