TY - CHAP
T1 - An evaluation of a new procedure for computing information functions for Bayesian scores from computerized adaptive tests
Y1 - 2009
A1 - Ito, K.
A1 - Pommerich, M.
A1 - Segall, D. O.
CY - D. J. Weiss (Ed.), Proceedings of the 2009 GMAC Conference on Computerized Adaptive Testing
N1 - {PDF file, 571 KB}
ER -
TY - CHAP
T1 - The nine lives of CAT-ASVAB: Innovations and revelations
Y1 - 2009
A1 - Pommerich, M.
A1 - Segall, D. O.
A1 - Moreno, K. E.
AB - The Armed Services Vocational Aptitude Battery (ASVAB) is administered annually to more than one million military applicants and high school students. ASVAB scores are used to determine enlistment eligibility, assign applicants to military occupational specialties, and aid students in career exploration. The ASVAB is administered as both a paper-and-pencil (P&P) test and a computerized adaptive test (CAT). CAT-ASVAB holds the distinction of being the first large-scale adaptive test battery to be administered in a high-stakes setting. Approximately two-thirds of military applicants currently take CAT-ASVAB; long-term plans are to replace P&P-ASVAB with CAT-ASVAB at all test sites. Given CAT-ASVAB's pedigree (approximately 20 years in development and 20 years in operational administration), much can be learned from revisiting some of the major highlights of CAT-ASVAB history. This paper traces the progression of CAT-ASVAB through nine major phases of development, including: research and development of the CAT-ASVAB prototype, the initial development of psychometric procedures and item pools, initial and full-scale operational implementation, the introduction of new item pools, the introduction of Windows administration, the introduction of Internet administration, and research and development of the next-generation CAT-ASVAB. A background and history are provided for each phase, including discussions of major research and operational issues, innovative approaches and practices, and lessons learned.
CY - D. J. Weiss (Ed.), Proceedings of the 2009 GMAC Conference on Computerized Adaptive Testing
N1 - {PDF file, 169 KB}
ER -
TY - ABST
T1 - The effect of using item parameters calibrated from paper administrations in computer adaptive test administrations
Y1 - 2007
A1 - Pommerich, M.
KW - Mode effects
AB - Computer administered tests are becoming increasingly prevalent as computer technology becomes more readily available on a large scale. For testing programs that utilize both computer and paper administrations, mode effects are problematic in that they can result in examinee scores that are artificially inflated or deflated. As such, researchers have engaged in extensive studies of whether scores differ across paper and computer presentations of the same tests. The research generally seems to indicate that the more complicated it is to present or take a test on computer, the greater the possibility of mode effects. In a computer adaptive test, mode effects may be a particular concern if items are calibrated using item responses obtained from one administration mode (i.e., paper), and those parameters are then used operationally in a different administration mode (i.e., computer). This paper studies the suitability of using parameters calibrated from a paper administration for item selection and scoring in a computer adaptive administration, for two tests with lengthy passages that required navigation in the computer administration. The results showed that the use of paper-calibrated parameters versus computer-calibrated parameters in computer adaptive administrations had small to moderate effects on the reliability of examinee scores at fairly short test lengths. This effect was generally diminished for longer test lengths. However, the results suggest that in some cases, some loss in reliability might be inevitable if paper-calibrated parameters are used in computer adaptive administrations.
JF - Journal of Technology, Learning, and Assessment
VL - 5
ER -
TY - CONF
T1 - Calibrating CAT pools and online pretest items using marginal maximum likelihood methods
T2 - Paper presented at the annual meeting of the National Council on Measurement in Education
Y1 - 2003
A1 - Pommerich, M.
A1 - Segall, D. O.
JF - Paper presented at the annual meeting of the National Council on Measurement in Education
CY - Chicago IL
N1 - {PDF file, 284 KB}
ER -
TY - CONF
T1 - An examination of item review on a CAT using the specific information item selection algorithm
T2 - Paper presented at the annual meeting of the National Council on Measurement in Education
Y1 - 2001
A1 - Bowles, R.
A1 - Pommerich, M.
JF - Paper presented at the annual meeting of the National Council on Measurement in Education
CY - Seattle WA
N1 - {PDF file, 325 KB}
ER -
TY - CONF
T1 - From simulation to application: Examinees react to computerized testing
T2 - Paper presented at the annual meeting of the National Council on Measurement in Education
Y1 - 2000
A1 - Pommerich, M.
A1 - Burden, T.
JF - Paper presented at the annual meeting of the National Council on Measurement in Education
CY - New Orleans LA
ER -
TY - CONF
T1 - Pretesting alongside an operational CAT
T2 - Paper presented at the annual meeting of the National Council on Measurement in Education
Y1 - 1999
A1 - Davey, T.
A1 - Pommerich, M.
A1 - Thompson, D. T.
JF - Paper presented at the annual meeting of the National Council on Measurement in Education
CY - Montreal, Canada
ER -