01538nas a2200169 4500008004200000245012100042210006900163520094900232100001801181700001701199700002501216700001801241700002201259700002201281700002001303856004501323 In Press d 00aDevelopment of a Computerized Adaptive Test for Anxiety Based on the Dutch–Flemish Version of the PROMIS Item Bank0 aDevelopment of a Computerized Adaptive Test for Anxiety Based on3 aWe used the Dutch–Flemish version of the USA PROMIS adult V1.0 item bank for Anxiety as input for developing a computerized adaptive test (CAT) to measure the entire latent anxiety continuum. First, psychometric analysis of a combined clinical and general population sample (N = 2,010) showed that the 29-item bank has psychometric properties that are required for a CAT administration. Second, a post hoc CAT simulation showed efficient and highly precise measurement, with an average number of 8.64 items for the clinical sample, and 9.48 items for the general population sample. Furthermore, the accuracy of our CAT version was highly similar to that of the full item bank administration, both in final score estimates and in distinguishing clinical subjects from persons without a mental health disorder. We discuss the future directions and limitations of CAT development with the Dutch–Flemish version of the PROMIS Anxiety item bank.1 aFlens, Gerard1 aSmits, Niels1 aTerwee, Caroline, B.1 aDekker, Joost1 aHuijbrechts, Irma1 aSpinhoven, Philip1 ade Beurs, Edwin uhttps://doi.org/10.1177/107319111774674201973nas a2200145 4500008003900000245012800039210006900167300001200236490000700248520147000255100002001725700001801745700001901763856004501782 2020 d00aA Dynamic Stratification Method for Improving Trait Estimation in Computerized Adaptive Testing Under Item Exposure Control0 aDynamic Stratification Method for Improving Trait Estimation in a182-1960 v443 aWhen computerized adaptive testing (CAT) is under stringent item exposure control, the precision of trait estimation will substantially decrease. A new item selection method, the dynamic Stratification method based on Dominance Curves (SDC), which is aimed at improving trait estimation, is proposed to mitigate this problem. The objective function of the SDC in item selection is to maximize the sum of test information for all examinees rather than maximizing item information for individual examinees at a single-item administration, as in conventional CAT. To achieve this objective, the SDC uses dominance curves to stratify an item pool into strata with the number being equal to the test length to precisely and accurately increase the quality of the administered items as the test progresses, reducing the likelihood that a high-discrimination item will be administered to an examinee whose ability is not close to the item difficulty. Furthermore, the SDC incorporates a dynamic process for on-the-fly item–stratum adjustment to optimize the use of quality items. Simulation studies were conducted to investigate the performance of the SDC in CAT under item exposure control at different levels of severity. 
According to the results, the SDC can efficiently improve trait estimation in CAT through greater precision and more accurate trait estimation than those generated by other methods (e.g., the maximum Fisher information method) in most conditions.1 aChen, Jyun-Hong1 aChao, Hsiu-Yi1 aChen, Shu-Ying uhttps://doi.org/10.1177/014662161984382001777nas a2200145 4500008003900000245005500039210005400094300001300148490000700161520135000168100001901518700002601537700002301563856004501586 2019 d00aDeveloping Multistage Tests Using D-Scoring Method0 aDeveloping Multistage Tests Using DScoring Method a988-10080 v793 aThe D-scoring method for scoring and equating tests with binary items proposed by Dimitrov offers some of the advantages of item response theory, such as item-level difficulty information and score computation that reflects the item difficulties, while retaining the merits of classical test theory such as the simplicity of number correct score computation and relaxed requirements for model sample sizes. Because of its unique combination of those merits, the D-scoring method has seen quick adoption in the educational and psychological measurement field. Because item-level difficulty information is available with the D-scoring method and item difficulties are reflected in test scores, it conceptually makes sense to use the D-scoring method with adaptive test designs such as multistage testing (MST). In this study, we developed and compared several versions of the MST mechanism using the D-scoring approach and also proposed and implemented a new framework for conducting MST simulation under the D-scoring method. Our findings suggest that the score recovery performance under MST with D-scoring was promising, as it retained score comparability across different MST paths. We found that MST using the D-scoring method can achieve improvements in measurement precision and efficiency over linear-based tests that use D-scoring method.1 aHan, Kyung, T.1 aDimitrov, Dimiter, M.1 aAl-Mashary, Faisal uhttps://doi.org/10.1177/001316441984142802607nas a2200133 4500008004100000245004800041210004700089260005500136520214600191653002002337653002402357100002102381856007102402 2017 eng d00aDeveloping a CAT: An Integrated Perspective0 aDeveloping a CAT An Integrated Perspective aNiigata, JapanbNiigata Seiryo Universityc08/20173 a
Most resources on computerized adaptive testing (CAT) tend to focus on psychometric aspects such as mathematical formulae for item selection or ability estimation. However, development of a CAT assessment requires a holistic view of project management, financials, content development, product launch and branding, and more. This presentation will develop such a holistic view, which serves several purposes, including providing a framework for validity, estimating costs and ROI, and making better decisions regarding the psychometric aspects.
Thompson and Weiss (2011) presented a 5-step model for developing computerized adaptive tests (CATs). This model will be presented and discussed as the core of this holistic framework, then applied to real-life examples. While most CAT research focuses on developing new quantitative algorithms, this presentation is instead intended to help researchers evaluate and select the algorithms that are most appropriate for their needs. It is therefore ideal for practitioners who are familiar with the basics of item response theory and CAT and wish to explore how they might apply these methodologies to improve their assessments.
Steps include:
1. Feasibility, applicability, and planning studies
2. Develop item bank content or utilize existing bank
3. Pretest and calibrate item bank
4. Determine specifications for final CAT
5. Publish live CAT.
For example, Step 1 includes simulation studies that estimate item bank requirements; those estimates can be used to determine content development costs, which in turn can be integrated into an estimated project cost and timeline. Such information is vital in determining whether the CAT should even be developed in the first place (an illustrative simulation sketch follows the reference below).
References
Thompson, N. A., & Weiss, D. J. (2011). A Framework for the Development of Computerized Adaptive Tests. Practical Assessment, Research & Evaluation, 16(1). Retrieved from http://pareonline.net/getvn.asp?v=16&n=1.
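As a purely illustrative aside (not part of Thompson and Weiss's framework), the following Python sketch shows one way a Step 1 feasibility simulation might estimate average CAT length for candidate item bank sizes. The 2PL model, maximum-information selection, EAP scoring, the stopping rule, and every numeric value are assumptions chosen only for this example.
```python
# Purely illustrative sketch of a Step 1 feasibility simulation (not the
# authors' procedure): estimate average CAT length for candidate bank sizes,
# assuming a 2PL model, maximum-information selection, EAP scoring, and a
# fixed standard-error stopping rule. All values below are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
GRID = np.linspace(-4, 4, 81)          # quadrature grid for EAP scoring
PRIOR = np.exp(-0.5 * GRID ** 2)       # standard normal prior (unnormalized)

def prob(theta, a, b):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def simulate_cat(a, b, theta, se_target=0.30, max_items=40):
    """Administer one simulated CAT; return the number of items used."""
    post = PRIOR.copy()
    used = []
    for _ in range(max_items):
        eap = np.sum(GRID * post) / np.sum(post)
        p = prob(eap, a, b)
        info = a ** 2 * p * (1 - p)    # 2PL item information at the provisional EAP
        info[used] = -np.inf           # never reuse an item
        item = int(np.argmax(info))
        used.append(item)
        correct = rng.random() < prob(theta, a[item], b[item])
        post *= prob(GRID, a[item], b[item]) if correct else 1 - prob(GRID, a[item], b[item])
        eap = np.sum(GRID * post) / np.sum(post)
        se = np.sqrt(np.sum((GRID - eap) ** 2 * post) / np.sum(post))
        if se <= se_target:            # stop once the posterior SE is small enough
            break
    return len(used)

for bank_size in (100, 200, 400):                  # candidate bank sizes to cost out
    a = rng.uniform(0.8, 2.0, bank_size)           # hypothetical discriminations
    b = rng.uniform(-3.0, 3.0, bank_size)          # hypothetical difficulties
    lengths = [simulate_cat(a, b, t) for t in rng.normal(0, 1, 200)]
    print(bank_size, "items in bank ->", round(float(np.mean(lengths)), 1), "items per CAT on average")
```
Comparing average test lengths (and how often the SE target is reached) across bank sizes gives the kind of input the abstract describes for costing content development in Step 1.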
10aCAT Development10aintegrated approach1 aThompson, Nathan uhttps://drive.google.com/open?id=1Jv8bpH2zkw5TqSMi03e5JJJ98QtXf-Cv01568nas a2200181 4500008003900000245011800039210006900157300001100226490000700237520097700244100001801221700001701239700002501256700001801281700002201299700002001321856004501341 2017 d00aDevelopment of a Computer Adaptive Test for Depression Based on the Dutch-Flemish Version of the PROMIS Item Bank0 aDevelopment of a Computer Adaptive Test for Depression Based on a79-1050 v403 aWe developed a Dutch-Flemish version of the patient-reported outcomes measurement information system (PROMIS) adult V1.0 item bank for depression as input for computerized adaptive testing (CAT). As item bank, we used the Dutch-Flemish translation of the original PROMIS item bank (28 items) and additionally translated 28 U.S. depression items that failed to make the final U.S. item bank. Through psychometric analysis of a combined clinical and general population sample (N = 2,010), 8 added items were removed. With the final item bank, we performed several CAT simulations to assess the efficiency of the extended (48 items) and the original item bank (28 items), using various stopping rules. Both item banks resulted in highly efficient and precise measurement of depression and showed high similarity between the CAT simulation scores and the full item bank scores. We discuss the implications of using each item bank and stopping rule for further CAT development.1 aFlens, Gerard1 aSmits, Niels1 aTerwee, Caroline, B.1 aDekker, Joost1 aHuijbrechts, Irma1 ade Beurs, Edwin uhttps://doi.org/10.1177/016327871668416803027nas a2200169 4500008004100000245004800041210004300089260005500132520252300187653001002710653001802720100001902738700001702757700001402774700001602788856005302804 2017 eng d00aThe Development of a Web-Based CAT in China0 aDevelopment of a WebBased CAT in China aNiigata, JapanbNiigata Seiryo Universityc08/20173 aCognitive ability assessment has been widely used as the recruitment tool in hiring potential employees. Traditional cognitive ability tests have been encountering threats from item-exposures and long time for answering. Especially in China, campus recruitment thinks highly of short answering time and anti-cheating. Beisen, as the biggest native online assessment software provider, developed a web-based CAT for cognitive ability which assessing verbal, quantitative, logical and spatial ability in order to decrease answering times, improve assessment accuracy and reduce threats from cheating and faking in online ability test. The web-based test provides convenient testing for examinees who can access easily to the test via internet just by login the test website at any time and any place through any Internet-enabled devices (e.g., laptops, IPADs, and smart phones).
We designed the CAT around strategies for establishing the item bank, setting the starting point, selecting items, scoring, and terminating the test. Additionally, we paid close attention to administering the test via the web. For the CAT procedures, we employed online calibration to establish a stable and expanding item bank, and integrated maximum Fisher information, an α-stratified strategy, and randomization for item selection and for controlling item exposure. Fixed-length and variable-length strategies were combined to terminate the test. To deliver fluid web-based testing, we employed cloud computing techniques and designed each computing process carefully. Distributed computation was used for scoring, executing EAP estimation and item selection at high speed. Caching all items on the servers in advance shortens the process of loading items onto examinees' terminal devices. Horizontally scalable cloud servers cope with high concurrency. The massive computation involved in item selection was converted into a lookup of items in a precomputed information matrix table.
We examined average accuracy, bank usage, and computing performance under both laboratory and operational testing conditions. In an administration involving almost 28,000 examinees, average bank usage was 50%, and 80% of tests terminated at a test information of 10, with an average of 9.6. Under heavy concurrency, testing proceeded without delay, and scoring plus item selection took an average of only 0.23 s per examinee.
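The abstract does not disclose implementation details. As a hedged sketch of the precomputed information-table idea it mentions (the 3PL parameters, grid, and function names below are hypothetical), item information can be tabulated once over a θ grid so that online item selection reduces to a column lookup and an argmax:
```python
# Illustrative sketch (not the vendor's implementation) of a precomputed
# information table: tabulate 3PL item information over a theta grid once,
# so online item selection is a row/column lookup plus argmax.
import numpy as np

rng = np.random.default_rng(1)
GRID = np.linspace(-4, 4, 41)                 # theta grid
a = rng.uniform(0.5, 2.0, 500)                # hypothetical 3PL discriminations
b = rng.uniform(-3.0, 3.0, 500)               # difficulties
c = rng.uniform(0.0, 0.25, 500)               # guessing parameters

def p3pl(theta, a, b, c):
    """3PL probability of a correct response (D = 1.7)."""
    return c + (1 - c) / (1 + np.exp(-1.7 * a * (theta - b)))

# Fisher information for the 3PL, tabulated as items x grid points.
P = p3pl(GRID[None, :], a[:, None], b[:, None], c[:, None])
INFO_TABLE = (1.7 * a[:, None]) ** 2 * ((P - c[:, None]) / (1 - c[:, None])) ** 2 * (1 - P) / P

def select_item(theta_hat, administered):
    """Pick the most informative unused item via the nearest grid column."""
    col = int(np.argmin(np.abs(GRID - theta_hat)))
    info = INFO_TABLE[:, col].copy()
    info[list(administered)] = -np.inf        # exclude already administered items
    return int(np.argmax(info))

print(select_item(0.4, administered={10, 42}))
```
The table is computed once at calibration time; each online selection is then a single column read, which is the sort of conversion from heavy computation to table search that the abstract describes.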
10aChina10aWeb-Based CAT1 aLiang, Chongli1 aWang, Danjun1 aZhou, Dan1 aZhan, Peida uhttp://iacat.org/development-web-based-cat-china01179nas a2200157 4500008003900000245008400039210006900123300001200192490000700204520068700211100002000898700001600918700001800934700002200952856004700974 2017 d00aThe Development of MST Test Information for the Prediction of Test Performances0 aDevelopment of MST Test Information for the Prediction of Test P a570-5860 v773 aThe current study proposes novel methods to predict multistage testing (MST) performance without conducting simulations. This method, called MST test information, is based on analytic derivation of standard errors of ability estimates across theta levels. We compared standard errors derived analytically to the simulation results to demonstrate the validity of the proposed method in both measurement precision and classification accuracy. The results indicate that the MST test information effectively predicted the performance of MST. In addition, the results of the current study highlighted the relationship among the test construction, MST design factors, and MST performance.1 aPark, Ryoungsun1 aKim, Jiseon1 aChung, Hyewon1 aDodd, Barbara, G. uhttp://dx.doi.org/10.1177/001316441666296003946nas a2200181 4500008004100000245009300041210006900134260005500203520329800258653001203556653002403568653002603592653002503618100001403643700002103657700001503678856007103693 2017 eng d00aDIF-CAT: Doubly Adaptive CAT Using Subgroup Information to Improve Measurement Precision0 aDIFCAT Doubly Adaptive CAT Using Subgroup Information to Improve aNiigata, JapanbNiigata Seiryo Universityc08/20173 aDifferential item functioning (DIF) is usually regarded as a test fairness issue in high-stakes tests. In low-stakes tests, it is more of an accuracy problem. However, in low-stakes tests, the same method, deleting items that demonstrate significant DIF, is still employed to treat DIF items. When political concerns are not important, such as in low-stakes tests and instruments that are not used to make decisions about people, deleting items might not be optimal. Computerized adaptive testing (CAT) is more and more frequently used in low-stakes tests. The DIF-CAT method evaluated in this research is designed to cope with DIF in a CAT environment. Using this method, item parameters are separately estimated for the focal group and the reference group in a DIF study, then CATs are administered based on different sets of item parameters for the focal and reference groups.
To evaluate the performance of the DIF-CAT procedure, it was compared in a simulation study to (1) deleting all the DIF items in a CAT bank and (2) ignoring DIF. A 300-item flat item bank and a 300-item peaked item bank were simulated using the three-parameter logistic IRT model with D = 1.7. 40% of the items in each bank showed DIF. The DIF size was 0.5 in b and/or a, while the original b ranged from -3 to 3 and a ranged from 0.3 to 2.1. Three types of DIF were considered: (1) uniform DIF caused by differences in b, (2) non-uniform DIF caused by differences in a, and (3) non-uniform DIF caused by differences in both a and b. 500 normally distributed simulees in each of the reference and focal groups were used in item parameter re-calibration. In the Delete DIF method, only DIF-free items were calibrated. In the Ignore DIF method, all the items were calibrated using all simulees without differentiating the groups. In the DIF-CAT method, the DIF-free items were used as anchor items to estimate the item parameters for the focal and reference groups, and the item parameters from recalibration were used. All simulees used the same item parameters in the Delete method and the Ignore method. CATs for simulees within the two groups used group-specific item parameters in the DIF-CAT method. In the CAT stage, 100 simulees were generated for each of the reference and focal groups, at each of six discrete θ levels ranging from -2.5 to 2.5. CAT test length was fixed at 40 items. Bias, average absolute difference, RMSE, standard error of θ estimates, and person fit were used to compare the performance of the DIF methods. DIF item usage was also recorded for the Ignore method and the DIF-CAT method.
Generally, the DIF-CAT method outperformed both the Delete method and the Ignore method in dealing with DIF items in CAT. The Delete method, which is the most frequently used method for handling DIF, performed the worst of the three methods in a CAT environment, as reflected in multiple indices of measurement precision. Even the Ignore method, which simply left DIF items in the item bank, provided θ estimates of higher precision than the Delete method. This poor performance of the Delete method was probably due to reduction in size of the item bank available for each CAT.
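A minimal sketch, assuming a 2PL bank with two calibrations, of the core DIF-CAT idea described above: the same items carry group-specific parameters, and probabilities and information are computed from the parameters of the examinee's own group. The parameter values are invented for illustration and are not the simulated banks from the study.
```python
# Minimal illustrative sketch (assumptions, not the authors' code) of DIF-CAT:
# one item bank, two group-specific 2PL calibrations, and group-aware scoring.
import numpy as np

# Hypothetical bank: each row holds (a, b) for an item, per group.
params = {
    "reference": np.array([[1.2, -0.5], [0.8, 0.3], [1.5, 1.1]]),
    "focal":     np.array([[1.2,  0.0], [0.8, 0.3], [1.5, 1.6]]),  # items 0 and 2 show DIF in b
}

def item_information(theta, group, item):
    """2PL Fisher information using the group-specific calibration."""
    a, b = params[group][item]
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a ** 2 * p * (1 - p)

# The same provisional theta can yield a different "best" item per group,
# because each group's own parameters are used for selection and scoring.
for group in ("reference", "focal"):
    infos = [item_information(0.0, group, i) for i in range(3)]
    print(group, "-> most informative item:", int(np.argmax(infos)))
```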
10aDIF-CAT10aDoubly Adaptive CAT10aMeasurement Precision10asubgroup information1 aWang, Joy1 aWeiss, David, J.1 aWang, Chun uhttps://drive.google.com/open?id=1Gu4FR06qM5EZNp_Ns0Kt3HzBqWAv3LPy01541nas a2200157 4500008003900000022001400039245009700053210006900150300001400219490000700233520104800240100001901288700001601307700001901323856004101342 2017 d a1745-398400aDual-Objective Item Selection Criteria in Cognitive Diagnostic Computerized Adaptive Testing0 aDualObjective Item Selection Criteria in Cognitive Diagnostic Co a165–1830 v543 aThe development of cognitive diagnostic-computerized adaptive testing (CD-CAT) has provided a new perspective for gaining information about examinees' mastery on a set of cognitive attributes. This study proposes a new item selection method within the framework of dual-objective CD-CAT that simultaneously addresses examinees' attribute mastery status and overall test performance. The new procedure is based on the Jensen-Shannon (JS) divergence, a symmetrized version of the Kullback-Leibler divergence. We show that the JS divergence resolves the noncomparability problem of the dual information index and has close relationships with Shannon entropy, mutual information, and Fisher information. The performance of the JS divergence is evaluated in simulation studies in comparison with the methods available in the literature. Results suggest that the JS divergence achieves parallel or more precise recovery of latent trait variables compared to the existing methods and maintains practical advantages in computation and item pool usage.1 aKang, Hyeon-Ah1 aZhang, Susu1 aChang, Hua-Hua uhttp://dx.doi.org/10.1111/jedm.1213900689nas a2200193 4500008004500000022001400045245012100059210006900180300001000249490000600259653003100265653002300296653002200319653003200341653002500373653001800398100001500416856006400431 2014 Engldsh a2165-659200aDetecting Item Preknowledge in Computerized Adaptive Testing Using Information Theory and Combinatorial Optimization0 aDetecting Item Preknowledge in Computerized Adaptive Testing Usi a37-580 v210acombinatorial optimization10ahypothesis testing10aitem preknowledge10aKullback-Leibler divergence10asimulated annealing.10atest security1 aBelov, D I uhttp://www.iacat.org/jcat/index.php/jcat/article/view/36/1801617nas a2200193 4500008003900000022001400039245007400053210006900127300001400196490000700210520105700217100002101274700001401295700001901309700001701328700001801345700001901363856004101382 2014 d a1745-398400aDetermining the Overall Impact of Interruptions During Online Testing0 aDetermining the Overall Impact of Interruptions During Online Te a419–4400 v513 aWith an increase in the number of online tests, interruptions during testing due to unexpected technical issues seem unavoidable. For example, interruptions occurred during several recent state tests. When interruptions occur, it is important to determine the extent of their impact on the examinees’ scores. There is a lack of research on this topic due to the novelty of the problem. This article is an attempt to fill that void. Several methods, primarily based on propensity score matching, linear regression, and item response theory, were suggested to determine the overall impact of the interruptions on the examinees’ scores. A realistic simulation study shows that the suggested methods have satisfactory Type I error rate and power. 
Then the methods were applied to data from the Indiana Statewide Testing for Educational Progress-Plus (ISTEP+) test that experienced interruptions in 2013. The results indicate that the interruptions did not have a significant overall impact on the student scores for the ISTEP+ test.
1 aSinharay, Sandip1 aWan, Ping1 aWhitaker, Mike1 aKim, Dong-In1 aZhang, Litong1 aChoi, Seung, W uhttp://dx.doi.org/10.1111/jedm.1205202198nas a2200145 4500008003900000245007900039210006900118300001100187490000700198520173800205100001501943700001901958700002301977856005202000 2013 d00aDeriving Stopping Rules for Multidimensional Computerized Adaptive Testing0 aDeriving Stopping Rules for Multidimensional Computerized Adapti a99-1220 v373 aMultidimensional computerized adaptive testing (MCAT) is able to provide a vector of ability estimates for each examinee, which could be used to provide a more informative profile of an examinee’s performance. The current literature on MCAT focuses on the fixed-length tests, which can generate less accurate results for those examinees whose abilities are quite different from the average difficulty level of the item bank when there are only a limited number of items in the item bank. Therefore, instead of stopping the test with a predetermined fixed test length, the authors use a more informative stopping criterion that is directly related to measurement accuracy. Specifically, this research derives four stopping rules that either quantify the measurement precision of the ability vector (i.e., minimum determinant rule [D-rule], minimum eigenvalue rule [E-rule], and maximum trace rule [T-rule]) or quantify the amount of available information carried by each item (i.e., maximum Kullback–Leibler divergence rule [K-rule]). The simulation results showed that all four stopping rules successfully terminated the test when the mean squared error of ability estimation is within a desired range, regardless of examinees’ true abilities. It was found that when using the D-, E-, or T-rule, examinees with extreme abilities tended to have tests that were twice as long as the tests received by examinees with moderate abilities. However, the test length difference with K-rule is not very dramatic, indicating that K-rule may not be very sensitive to measurement precision. In all cases, the cutoff value for each stopping rule needs to be adjusted on a case-by-case basis to find an optimal solution.
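A hedged sketch of how the three precision-based statistics named above (determinant, smallest eigenvalue, and trace) might be computed from an examinee's accumulated Fisher information matrix in an MCAT, assuming a multidimensional 2PL model. The item parameters, provisional θ, and cutoffs are placeholders, and the study's exact formulation (e.g., whether the statistics are taken on the information matrix or on its inverse, the covariance matrix) may differ.
```python
# Hedged sketch of D-, E-, and T-rule statistics for MCAT stopping decisions,
# assuming a multidimensional 2PL model; values below are placeholders only.
import numpy as np

def m2pl_information(theta, a_vec, d):
    """Fisher information matrix contribution of one multidimensional 2PL item."""
    a_vec = np.asarray(a_vec, dtype=float)
    p = 1.0 / (1.0 + np.exp(-(a_vec @ theta + d)))
    return p * (1 - p) * np.outer(a_vec, a_vec)

theta = np.array([0.2, -0.4])                         # provisional ability vector
administered = [([1.1, 0.3], -0.2), ([0.4, 1.2], 0.5), ([0.9, 0.8], 0.0)]
info = sum(m2pl_information(theta, a, d) for a, d in administered)

d_stat = np.linalg.det(info)                          # D-rule statistic (determinant)
e_stat = np.linalg.eigvalsh(info).min()               # E-rule statistic (smallest eigenvalue)
t_stat = np.trace(info)                               # T-rule statistic (trace)

# One plausible reading of the rules: stop once a statistic reaches its cutoff.
print(d_stat >= 2.0, e_stat >= 1.0, t_stat >= 5.0)
```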
1 aWang, Chun1 aChang, Hua-Hua1 aBoughton, Keith, A uhttp://apm.sagepub.com/content/37/2/99.abstract01496nas a2200157 4500008003900000022001400039245006400053210006400117300001400181490000700195520101300202100002401215700002001239700002401259856005501283 2012 d a1745-398400aDetecting Local Item Dependence in Polytomous Adaptive Data0 aDetecting Local Item Dependence in Polytomous Adaptive Data a127–1470 v493 aA rapidly expanding arena for item response theory (IRT) is in attitudinal and health-outcomes survey applications, often with polytomous items. In particular, there is interest in computer adaptive testing (CAT). Meeting model assumptions is necessary to realize the benefits of IRT in this setting, however. Although initial investigations of local item dependence have been studied both for polytomous items in fixed-form settings and for dichotomous items in CAT settings, there have been no publications applying local item dependence detection methodology to polytomous items in CAT despite its central importance to these applications. The current research uses a simulation study to investigate the extension of widely used pairwise statistics, Yen's Q3 Statistic and Pearson's Statistic X2, in this context. The simulation design and results are contextualized throughout with a real item bank of this type from the Patient-Reported Outcomes Measurement Information System (PROMIS).
1 aMislevy, Jessica, L1 aRupp, André, A1 aHarring, Jeffrey, R uhttp://dx.doi.org/10.1111/j.1745-3984.2012.00165.x00541nas a2200181 4500008004500000245006300045210006300108300001400171490000700185100002300192700002000215700002200235700001700257700001600274700001800290700002100308856003000329 2012 Engldsh 00aDevelopment of a computerized adaptive test for depression0 aDevelopment of a computerized adaptive test for depression a1105-11120 v691 aGibbons, Robert, D1 aWeiss, David, J1 aPilkonis, Paul, A1 aFrank, Ellen1 aMoore, Tara1 aKim, Jong Bae1 aKupfer, David, J uWWW.ARCHGENPSYCHIATRY.COM01889nas a2200169 4500008004500000245015100045210006900196490000700265520128400272100001601556700001701572700001401589700001501603700001501618700001401633856007201647 2011 Engldsh 00aDesign of a Computer-Adaptive Test to Measure English Literacy and Numeracy in the Singapore Workforce: Considerations, Benefits, and Implications0 aDesign of a ComputerAdaptive Test to Measure English Literacy an0 v123 aA computer adaptive test CAT) is a delivery methodology that serves the larger goals of the assessment system in which it is embedded. A thorough analysis of the assessment system for which a CAT is being designed is critical to ensure that the delivery platform is appropriate and addresses all relevant complexities. As such, a CAT engine must be designed to conform to the
validity and reliability of the overall system. This design takes the form of adherence to the assessment goals and objectives of the adaptive assessment system. When the assessment is adapted for use in another country, consideration must be given to any necessary revisions including content differences. This article addresses these considerations while drawing, in part, on the process followed in the development of the CAT delivery system designed to test English language workplace skills for the Singapore Workforce Development Agency. Topics include item creation and selection, calibration of the item pool, analysis and testing of the psychometric properties, and reporting and interpretation of scores. The characteristics and benefits of the CAT delivery system are detailed as well as implications for testing programs considering the use of a
CAT delivery system.
Two procedures, Modified Robust Z and the 95% Credible Interval, were compared in a Monte Carlo study. Both procedures evidenced adequate control of false positive DIF results.
Item parameter drift in computerized adaptive testing: Study with eCAT. This study describes the parameter drift analysis conducted on eCAT (a Computerized Adaptive Test to assess the written English level of Spanish speakers). The original calibration of the item bank (N = 3224) was compared to a new calibration obtained from the data provided by most eCAT operative administrations (N = 7254). A Differential Item Functioning (DIF) study was conducted between the original and the new calibrations. The impact of the new parameters on the trait level estimates was obtained by simulation. Results show that parameter drift is found especially for the a and c parameters, that an important number of bank items show DIF, and that the parameter change has a moderate impact on the θ estimates of examinees with a high level of English. It is therefore recommended to replace the original estimates with the new set.
10a*Software10aEducational Measurement/*methods/*statistics & numerical data10aHumans10aLanguage1 aAbad, F J1 aOlea, J1 aAguado, D1 aPonsoda, V1 aBarrada, J R uhttp://iacat.org/content/deterioro-de-par%C3%A1metros-de-los-%C3%ADtems-en-tests-adaptativos-informatizados-estudio-con-ecat00553nas a2200157 4500008004100000245008700041210006900128300001400197490001000211100001300221700001200234700001400246700001400260700001400274856010700288 2010 eng d00aDevelopment and evaluation of a confidence-weighting computerized adaptive testing0 aDevelopment and evaluation of a confidenceweighting computerized a163–1760 v13(3)1 aYen, Y C1 aHo, R G1 aChen, L J1 aChou, K Y1 aChen, Y L uhttp://iacat.org/content/development-and-evaluation-confidence-weighting-computerized-adaptive-testing03099nas a2200445 4500008004100000020004100041245012000082210006900202250001500271260001000286300001100296490000700307520175400314653003802068653002102106653001002127653000902137653002202146653002802168653003302196653001102229653001102240653000902251653001602260653001802276653001902294653003102313653003102344653001602375100001602391700001002407700001402417700001502431700001402446700001502460700001802475700002402493700001802517856011802535 2010 eng d a0161-8105 (Print)0161-8105 (Linking)00aDevelopment and validation of patient-reported outcome measures for sleep disturbance and sleep-related impairments0 aDevelopment and validation of patientreported outcome measures f a2010/06/17 cJun 1 a781-920 v333 aSTUDY OBJECTIVES: To develop an archive of self-report questions assessing sleep disturbance and sleep-related impairments (SRI), to develop item banks from this archive, and to validate and calibrate the item banks using classic validation techniques and item response theory analyses in a sample of clinical and community participants. DESIGN: Cross-sectional self-report study. SETTING: Academic medical center and participant homes. PARTICIPANTS: One thousand nine hundred ninety-three adults recruited from an Internet polling sample and 259 adults recruited from medical, psychiatric, and sleep clinics. INTERVENTIONS: None. MEASUREMENTS AND RESULTS: This study was part of PROMIS (Patient-Reported Outcomes Information System), a National Institutes of Health Roadmap initiative. Self-report item banks were developed through an iterative process of literature searches, collecting and sorting items, expert content review, qualitative patient research, and pilot testing. Internal consistency, convergent validity, and exploratory and confirmatory factor analysis were examined in the resulting item banks. Factor analyses identified 2 preliminary item banks, sleep disturbance and SRI. Item response theory analyses and expert content review narrowed the item banks to 27 and 16 items, respectively. Validity of the item banks was supported by moderate to high correlations with existing scales and by significant differences in sleep disturbance and SRI scores between participants with and without sleep disorders. 
CONCLUSIONS: The PROMIS sleep disturbance and SRI item banks have excellent measurement properties and may prove to be useful for assessing general aspects of sleep and SRI with various groups of patients and interventions.10a*Outcome Assessment (Health Care)10a*Self Disclosure10aAdult10aAged10aAged, 80 and over10aCross-Sectional Studies10aFactor Analysis, Statistical10aFemale10aHumans10aMale10aMiddle Aged10aPsychometrics10aQuestionnaires10aReproducibility of Results10aSleep Disorders/*diagnosis10aYoung Adult1 aBuysse, D J1 aYu, L1 aMoul, D E1 aGermain, A1 aStover, A1 aDodds, N E1 aJohnston, K L1 aShablesky-Cade, M A1 aPilkonis, P A uhttp://iacat.org/content/development-and-validation-patient-reported-outcome-measures-sleep-disturbance-and-sleep02219nas a2200289 4500008004100000020004600041245010800087210006900195250001500264300001200279490000700291520131000298100001801608700001701626700001801643700001401661700001401675700001801689700001401707700001701721700001501738700001301753700001401766700001601780700001301796856012001809 2010 Eng d a1573-2649 (Electronic)0962-9343 (Linking)00aDevelopment of computerized adaptive testing (CAT) for the EORTC QLQ-C30 physical functioning dimension0 aDevelopment of computerized adaptive testing CAT for the EORTC Q a2010/10/26 a479-4900 v203 aPURPOSE: Computerized adaptive test (CAT) methods, based on item response theory (IRT), enable a patient-reported outcome instrument to be adapted to the individual patient while maintaining direct comparability of scores. The EORTC Quality of Life Group is developing a CAT version of the widely used EORTC QLQ-C30. We present the development and psychometric validation of the item pool for the first of the scales, physical functioning (PF). METHODS: Initial developments (including literature search and patient and expert evaluations) resulted in 56 candidate items. Responses to these items were collected from 1,176 patients with cancer from Denmark, France, Germany, Italy, Taiwan, and the United Kingdom. The items were evaluated with regard to psychometric properties. RESULTS: Evaluations showed that 31 of the items could be included in a unidimensional IRT model with acceptable fit and good content coverage, although the pool may lack items at the upper extreme (good PF). There were several findings of significant differential item functioning (DIF). However, the DIF findings appeared to have little impact on the PF estimation. CONCLUSIONS: We have established an item pool for CAT measurement of PF and believe that this CAT instrument will clearly improve the EORTC measurement of PF.1 aPetersen, M A1 aGroenvold, M1 aAaronson, N K1 aChie, W C1 aConroy, T1 aCostantini, A1 aFayers, P1 aHelbostad, J1 aHolzner, B1 aKaasa, S1 aSinger, S1 aVelikova, G1 aYoung, T uhttp://iacat.org/content/development-computerized-adaptive-testing-cat-eortc-qlq-c30-physical-functioning-dimension01659nas a2200145 4500008004100000245004900041210004800090260009700138520115900235100001301394700001101407700001401418700001101432856007001443 2009 eng d00aDeveloping item variants: An empirical study0 aDeveloping item variants An empirical study aD. J. Weiss (Ed.), Proceedings of the 2009 GMAC Conference on Computerized Adaptive Testing.3 aLarge-scale standardized test have been widely used for educational and licensure testing. In computerized adaptive testing (CAT), one of the practical concerns for maintaining large-scale assessments is to ensure adequate numbers of high-quality items that are required for item pool functioning. 
Developing items at specific difficulty levels and for certain areas of test plans is a wellknown challenge. The purpose of this study was to investigate strategies for varying items that can effectively generate items at targeted difficulty levels and specific test plan areas. Each variant item generation model was developed by decomposing selected source items possessing ideal measurement properties and targeting the desirable content domains. 341 variant items were generated from 72 source items. Data were collected from six pretest periods. Items were calibrated using the Rasch model. Initial results indicate that variant items showed desirable measurement properties. Additionally, compared to an average of approximately 60% of the items passing pretest criteria, an average of 84% of the variant items passed the pretest criteria. 1 aWendt, A1 aKao, S1 aGorham, J1 aWoo, A uhttp://iacat.org/content/developing-item-variants-empirical-study02745nas a2200409 4500008004100000020004600041245009400087210006900181250001500250260000800265300001200273490000700285520144500292653001501737653002001752653003101772653003001803653002001833653001901853653002601872653001101898653001101909653000901920653001601929653002601945653003701971653003002008653004402038653001802082653002002100653002802120100002002148700002302168700001602191700001702207856011102224 2009 eng d a1528-8447 (Electronic)1526-5900 (Linking)00aDevelopment and preliminary testing of a computerized adaptive assessment of chronic pain0 aDevelopment and preliminary testing of a computerized adaptive a a2009/07/15 cSep a932-9430 v103 aThe aim of this article is to report the development and preliminary testing of a prototype computerized adaptive test of chronic pain (CHRONIC PAIN-CAT) conducted in 2 stages: (1) evaluation of various item selection and stopping rules through real data-simulated administrations of CHRONIC PAIN-CAT; (2) a feasibility study of the actual prototype CHRONIC PAIN-CAT assessment system conducted in a pilot sample. Item calibrations developed from a US general population sample (N = 782) were used to program a pain severity and impact item bank (kappa = 45), and real data simulations were conducted to determine a CAT stopping rule. The CHRONIC PAIN-CAT was programmed on a tablet PC using QualityMetric's Dynamic Health Assessment (DYHNA) software and administered to a clinical sample of pain sufferers (n = 100). The CAT was completed in significantly less time than the static (full item bank) assessment (P < .001). On average, 5.6 items were dynamically administered by CAT to achieve a precise score. Scores estimated from the 2 assessments were highly correlated (r = .89), and both assessments discriminated across pain severity levels (P < .001, RV = .95). Patients' evaluations of the CHRONIC PAIN-CAT were favorable. PERSPECTIVE: This report demonstrates that the CHRONIC PAIN-CAT is feasible for administration in a clinic. 
The application has the potential to improve pain assessment and help clinicians manage chronic pain.10a*Computers10a*Questionnaires10aActivities of Daily Living10aAdaptation, Psychological10aChronic Disease10aCohort Studies10aDisability Evaluation10aFemale10aHumans10aMale10aMiddle Aged10aModels, Psychological10aOutcome Assessment (Health Care)10aPain Measurement/*methods10aPain, Intractable/*diagnosis/psychology10aPsychometrics10aQuality of Life10aUser-Computer Interface1 aAnatchkova, M D1 aSaris-Baglama, R N1 aKosinski, M1 aBjorner, J B uhttp://iacat.org/content/development-and-preliminary-testing-computerized-adaptive-assessment-chronic-pain02877nas a2200493 4500008004100000020004100041245014100082210006900223250001500292260000800307300001100315490000700326520125100333653003001584653001001614653000901624653004601633653003301679653001101712653003101723653001101754653000901765653003301774653001601807653002401823653004601847653005501893653005501948653004602003653001902049653003102068653001402099100001602113700001502129700001302144700001402157700001502171700001702186700001502203700001702218700001502235700001302250856012002263 2009 eng d a0090-5550 (Print)0090-5550 (Linking)00aDevelopment of an item bank for the assessment of depression in persons with mental illnesses and physical diseases using Rasch analysis0 aDevelopment of an item bank for the assessment of depression in a2009/05/28 cMay a186-970 v543 aOBJECTIVE: The calibration of item banks provides the basis for computerized adaptive testing that ensures high diagnostic precision and minimizes participants' test burden. The present study aimed at developing a new item bank that allows for assessing depression in persons with mental and persons with somatic diseases. METHOD: The sample consisted of 161 participants treated for a depressive syndrome, and 206 participants with somatic illnesses (103 cardiologic, 103 otorhinolaryngologic; overall mean age = 44.1 years, SD =14.0; 44.7% women) to allow for validation of the item bank in both groups. Persons answered a pool of 182 depression items on a 5-point Likert scale. RESULTS: Evaluation of Rasch model fit (infit < 1.3), differential item functioning, dimensionality, local independence, item spread, item and person separation (>2.0), and reliability (>.80) resulted in a bank of 79 items with good psychometric properties. CONCLUSIONS: The bank provides items with a wide range of content coverage and may serve as a sound basis for computerized adaptive testing applications. 
It might also be useful for researchers who wish to develop new fixed-length scales for the assessment of depression in specific rehabilitation settings.10aAdaptation, Psychological10aAdult10aAged10aDepressive Disorder/*diagnosis/psychology10aDiagnosis, Computer-Assisted10aFemale10aHeart Diseases/*psychology10aHumans10aMale10aMental Disorders/*psychology10aMiddle Aged10aModels, Statistical10aOtorhinolaryngologic Diseases/*psychology10aPersonality Assessment/statistics & numerical data10aPersonality Inventory/*statistics & numerical data10aPsychometrics/statistics & numerical data10aQuestionnaires10aReproducibility of Results10aSick Role1 aForkmann, T1 aBoecker, M1 aNorra, C1 aEberle, N1 aKircher, T1 aSchauerte, P1 aMischke, K1 aWesthofen, M1 aGauggel, S1 aWirtz, M uhttp://iacat.org/content/development-item-bank-assessment-depression-persons-mental-illnesses-and-physical-diseases00508nas a2200121 4500008003900000245011000039210006900149300001000218490000600228100001200234700002000246856012000266 2009 d00aDiagnostic classification models and multidimensional adaptive testing: A commentary on Rupp and Templin.0 aDiagnostic classification models and multidimensional adaptive t a58-610 v71 aFrey, A1 aCarstensen, C H uhttp://iacat.org/content/diagnostic-classification-models-and-multidimensional-adaptive-testing-commentary-rupp-and01210nas a2200133 4500008003900000245008600039210006900125300001200194490000700206520076400213100002100977700002500998856005301023 2009 d00aDirect and Inverse Problems of Item Pool Design for Computerized Adaptive Testing0 aDirect and Inverse Problems of Item Pool Design for Computerized a533-5470 v693 aThe recent literature on computerized adaptive testing (CAT) has developed methods for creating CAT item pools from a large master pool. Each CAT pool is designed as a set of nonoverlapping forms reflecting the skill levels of an assumed population of test takers. This article presents a Monte Carlo method to obtain these CAT pools and discusses its advantages over existing methods. Also, a new problem is considered that finds a population ability density function best matching the master pool. An analysis of the solution to this new problem provides testing organizations with effective guidance for maintaining their master pools. Computer experiments with a pool of Law School Admission Test items and its assembly constraints are presented.
1 aBelov, Dmitry, I1 aArmstrong, Ronald, D uhttp://epm.sagepub.com/content/69/4/533.abstract00476nas a2200121 4500008004100000245008700041210006900128300001200197490000700209100001500216700001900231856010400250 2009 eng d00a Direct and inverse problems of item pool design for computerized adaptive testing0 aDirect and inverse problems of item pool design for computerized a533-5470 v691 aBelov, D I1 aArmstrong, R D uhttp://iacat.org/content/direct-and-inverse-problems-item-pool-design-computerized-adaptive-testing00678nas a2200157 4500008004100000245017700041210006900218260003100287100001600318700001200334700001300346700001500359700001400374700001300388856011900401 2008 eng d00aDeveloping a progressive approach to using the GAIN in order to reduce the duration and cost of assessment with the GAIN short screener, Quick and computer adaptive testing0 aDeveloping a progressive approach to using the GAIN in order to aWashington D.C., USAc20081 aDennis, M L1 aFunk, R1 aTitus, J1 aRiley, B B1 aHosman, S1 aKinne, S uhttp://iacat.org/content/developing-progressive-approach-using-gain-order-reduce-duration-and-cost-assessment-gain01724nas a2200181 4500008004100000245011200041210006900153300001100222490000700233520107200240653003401312653001701346653001901363100001601382700001801398700001501416856011101431 2008 eng d00aThe D-optimality item selection criterion in the early stage of CAT: A study with the graded response model0 aDoptimality item selection criterion in the early stage of CAT A a88-1100 v333 aDuring the early stage of computerized adaptive testing (CAT), item selection criteria based on Fisher’s information often produce less stable latent trait estimates than the Kullback-Leibler global information criterion. Robustness against early stage instability has been reported for the D-optimality criterion in a polytomous CAT with the Nominal Response Model and is shown herein to be reproducible for the Graded Response Model. For comparative purposes, the A-optimality and the global information criteria are also applied. Their item selection is investigated as a function of test progression and item bank composition. The results indicate how the selection of specific item parameters underlies the criteria performances evaluated via accuracy and precision of estimation. In addition, the criteria item exposure rates are compared, without the use of any exposure controlling measure. On the account of stability, precision, accuracy, numerical simplicity, and less evidently, item exposure rate, the D-optimality criterion can be recommended for CAT.10acomputerized adaptive testing10aD optimality10aitem selection1 aPassos, V L1 aBerger, M P F1 aTan, F E S uhttp://iacat.org/content/d-optimality-item-selection-criterion-early-stage-cat-study-graded-response-model00469nas a2200121 4500008003900000245008000039210006900119490000800188100002100196700001800217700001900235856009300254 2007 d00aThe design and evaluation of a computerized adaptive test on mobile devices0 adesign and evaluation of a computerized adaptive test on mobile 0 v49.1 aTriantafillou, E1 aGeorgiadou, E1 aEconomides, AA uhttp://iacat.org/content/design-and-evaluation-computerized-adaptive-test-mobile-devices00495nas a2200097 4500008004100000245007100041210006600112260011700178100001700295856008500312 2007 eng d00aThe design of p-optimal item banks for computerized adaptive tests0 adesign of poptimal item banks for computerized adaptive tests aD. J. Weiss (Ed.), Proceedings of the 2007 GMAC Conference on Computerized Adaptive Testing. 
{PDF file, 211 KB}.1 aReckase, M D uhttp://iacat.org/content/design-p-optimal-item-banks-computerized-adaptive-tests00563nas a2200109 4500008004100000245010200041210006900143260009600212100001000308700001700318856011800335 2007 eng d00aDesigning optimal item pools for computerized adaptive tests with Sympson-Hetter exposure control0 aDesigning optimal item pools for computerized adaptive tests wit aD. J. Weiss (Ed.), Proceedings of the 2007 GMAC Conference on Computerized Adaptive Testing1 aGu, L1 aReckase, M D uhttp://iacat.org/content/designing-optimal-item-pools-computerized-adaptive-tests-sympson-hetter-exposure-control00491nas a2200109 4500008004100000245006400041210006400105260009700169100001800266700001600284856008100300 2007 eng d00aDesigning templates based on a taxonomy of innovative items0 aDesigning templates based on a taxonomy of innovative items aD. J. Weiss (Ed.). Proceedings of the 2007 GMAC Conference on Computerized Adaptive Testing.1 aParshall, C G1 aHarmes, J C uhttp://iacat.org/content/designing-templates-based-taxonomy-innovative-items01549nas a2200169 4500008003900000022001400039245006100053210006100114300001400175490000700189520104600196100001901242700002301261700002201284700001801306856005501324 2007 d a1745-398400aDetecting Differential Speededness in Multistage Testing0 aDetecting Differential Speededness in Multistage Testing a117–1300 v443 aA potential undesirable effect of multistage testing is differential speededness, which happens if some of the test takers run out of time because they receive subtests with items that are more time intensive than others. This article shows how a probabilistic response-time model can be used for estimating differences in time intensities and speed between subtests and test takers and detecting differential speededness. An empirical data set for a multistage test in the computerized CPA Exam was used to demonstrate the procedures. Although the more difficult subtests appeared to have items that were more time intensive than the easier subtests, an analysis of the residual response times did not reveal any significant differential speededness because the time limit appeared to be appropriate. In a separate analysis, within each of the subtests, we found minor but consistent patterns of residual times that are believed to be due to a warm-up effect, that is, use of more time on the initial items than they actually need.
1 aLinden, Wim, J1 aBreithaupt, Krista1 aChuah, Siang Chee1 aZhang, Yanwei uhttp://dx.doi.org/10.1111/j.1745-3984.2007.00030.x02414nas a2200325 4500008004100000020002200041245008700063210006900150250001500219300001100234490000700245520140100252653001901653653003001672653001901702653003801721653002101759653002001780653001401800653001501814653003301829653001101862653002401873653001801897100001701915700001501932700001501947700001501962856011101977 2007 eng d a0962-9343 (Print)00aDeveloping tailored instruments: item banking and computerized adaptive assessment0 aDeveloping tailored instruments item banking and computerized ad a2007/05/29 a95-1080 v163 aItem banks and Computerized Adaptive Testing (CAT) have the potential to greatly improve the assessment of health outcomes. This review describes the unique features of item banks and CAT and discusses how to develop item banks. In CAT, a computer selects the items from an item bank that are most relevant for and informative about the particular respondent; thus optimizing test relevance and precision. Item response theory (IRT) provides the foundation for selecting the items that are most informative for the particular respondent and for scoring responses on a common metric. The development of an item bank is a multi-stage process that requires a clear definition of the construct to be measured, good items, a careful psychometric analysis of the items, and a clear specification of the final CAT. The psychometric analysis needs to evaluate the assumptions of the IRT model such as unidimensionality and local independence; that the items function the same way in different subgroups of the population; and that there is an adequate fit between the data and the chosen item response models. Also, interpretation guidelines need to be established to help the clinical application of the assessment. Although medical research can draw upon expertise from educational testing in the development of item banks and CAT, the medical field also encounters unique opportunities and challenges.10a*Health Status10a*Health Status Indicators10a*Mental Health10a*Outcome Assessment (Health Care)10a*Quality of Life10a*Questionnaires10a*Software10aAlgorithms10aFactor Analysis, Statistical10aHumans10aModels, Statistical10aPsychometrics1 aBjorner, J B1 aChang, C-H1 aThissen, D1 aReeve, B B uhttp://iacat.org/content/developing-tailored-instruments-item-banking-and-computerized-adaptive-assessment00595nas a2200169 4500008004100000245009100041210006900132300001200201490000700213100001600220700001400236700001700250700001400267700001500281700001200296856011700308 2007 eng d00aDevelopment and evaluation of a computer adaptive test for “Anxiety” (Anxiety-CAT)0 aDevelopment and evaluation of a computer adaptive test for Anxie a143-1550 v161 aWalter, O B1 aBecker, J1 aBjorner, J B1 aFliege, H1 aKlapp, B F1 aRose, M uhttp://iacat.org/content/development-and-evaluation-computer-adaptive-test-%E2%80%9Canxiety%E2%80%9D-anxiety-cat00492nas a2200109 4500008004100000245006600041210006200107260009700169100002000266700001800286856007800304 2007 eng d00aThe development of a computerized adaptive test for integrity0 adevelopment of a computerized adaptive test for integrity aD. J. 
Weiss (Ed.), Proceedings of the 2007 GMAC Conference on Computerized Adaptive Testing.1 aEgberink, I J L1 aVeldkamp, B P uhttp://iacat.org/content/development-computerized-adaptive-test-integrity00634nas a2200145 4500008004100000245009700041210006900138260009700207100001600304700001200320700001900332700001300351700001300364856011100377 2007 eng d00aDevelopment of a multiple-component CAT for measuring foreign language proficiency (SIMTEST)0 aDevelopment of a multiplecomponent CAT for measuring foreign lan aD. J. Weiss (Ed.). Proceedings of the 2007 GMAC Conference on Computerized Adaptive Testing.1 aSumbling, M1 aSanz, P1 aViladrich, M C1 aDoval, E1 aRiera, L uhttp://iacat.org/content/development-multiple-component-cat-measuring-foreign-language-proficiency-simtest00448nas a2200109 4500008004100000245004200041210004200083260011500125100001300240700001800253856006700271 2006 eng d00aDesigning computerized adaptive tests0 aDesigning computerized adaptive tests aS.M. Downing and T. M. Haladyna (Eds.), Handbook of test development. New Jersey: Lawrence Erlbaum Associates.1 aDavey, T1 aPitoniak, M J uhttp://iacat.org/content/designing-computerized-adaptive-tests02210nas a2200373 4500008004100000245011500041210006900156300001100225490000700236520104400243653002101287653002001308653001001328653000901338653002801347653001101375653003801386653000901424653001601433653004101449653001801490653003701508653003001545100001401575700001301589700001301602700001501615700001701630700001501647700001901662700001601681700001801697856012101715 2005 eng d00aData pooling and analysis to build a preliminary item bank: an example using bowel function in prostate cancer0 aData pooling and analysis to build a preliminary item bank an ex a142-590 v283 aAssessing bowel function (BF) in prostate cancer can help determine therapeutic trade-offs. We determined the components of BF commonly assessed in prostate cancer studies as an initial step in creating an item bank for clinical and research application. We analyzed six archived data sets representing 4,246 men with prostate cancer. Thirty-one items from validated instruments were available for analysis. Items were classified into domains (diarrhea, rectal urgency, pain, bleeding, bother/distress, and other) then subjected to conventional psychometric and item response theory (IRT) analyses. Items fit the IRT model if the ratio between observed and expected item variance was between 0.60 and 1.40. Four of 31 items had inadequate fit in at least one analysis. Poorly fitting items included bleeding (2), rectal urgency (1), and bother/distress (1). A fifth item assessing hemorrhoids was poorly correlated with other items. Our analyses supported four related components of BF: diarrhea, rectal urgency, pain, and bother/distress.10a*Quality of Life10a*Questionnaires10aAdult10aAged10aData Collection/methods10aHumans10aIntestine, Large/*physiopathology10aMale10aMiddle Aged10aProstatic Neoplasms/*physiopathology10aPsychometrics10aResearch Support, Non-U.S. 
Gov't10aStatistics, Nonparametric1 aEton, D T1 aLai, J S1 aCella, D1 aReeve, B B1 aTalcott, J A1 aClark, J A1 aMcPherson, C P1 aLitwin, M S1 aMoinpour, C M uhttp://iacat.org/content/data-pooling-and-analysis-build-preliminary-item-bank-example-using-bowel-function-prostate00494nas a2200121 4500008004100000245010000041210006900141300001200210490001000222100000700232700001400239856011900253 2005 eng d00aDesign and evaluation of an XML-based platform-independent computerized adaptive testing system0 aDesign and evaluation of an XMLbased platformindependent compute a230-2370 v48(2)1 aHo1 aYen, Y -C uhttp://iacat.org/content/design-and-evaluation-xml-based-platform-independent-computerized-adaptive-testing-system00533nas a2200169 4500008004100000245006700041210006300108300001600171490000700187100001400194700001400208700001600222700001700238700001500255700001200270856008100282 2005 eng d00aDevelopment of a computer-adaptive test for depression (D-CAT)0 aDevelopment of a computeradaptive test for depression DCAT a2277–22910 v141 aFliege, H1 aBecker, J1 aWalter, O B1 aBjorner, J B1 aKlapp, B F1 aRose, M uhttp://iacat.org/content/development-computer-adaptive-test-depression-d-cat00623nas a2200109 4500008004100000245009500041210006900136260016800205100001800373700001900391856010300410 2005 eng d00aThe development of the adaptive item language assessment (AILA) for mixed-ability students0 adevelopment of the adaptive item language assessment AILA for mi aProceedings E-Learn 2005 World Conference on E-Learning in Corporate, Government, Healthcare, and Higher Education, 643-650, Vancouver, Canada, AACE, October 2005.1 aGiouroglou, H1 aEconomides, AA uhttp://iacat.org/content/development-adaptive-item-language-assessment-aila-mixed-ability-students01996nas a2200205 4500008004100000020004600041245007900087210006900166260004100235300001400276490000700290520127200297653003001569653002501599653003401624100001301658700001801671700001601689856008501705 2005 eng d a0017-9124 (Print); 1475-6773 (Electronic)00aDynamic assessment of health outcomes: Time to let the CAT out of the bag?0 aDynamic assessment of health outcomes Time to let the CAT out of bBlackwell Publishing: United Kingdom a1694-17110 v403 aBackground: The use of item response theory (IRT) to measure self-reported outcomes has burgeoned in recent years. Perhaps the most important application of IRT is computer-adaptive testing (CAT), a measurement approach in which the selection of items is tailored for each respondent. Objective. To provide an introduction to the use of CAT in the measurement of health outcomes, describe several IRT models that can be used as the basis of CAT, and discuss practical issues associated with the use of adaptive scaling in research settings. Principal Points: The development of a CAT requires several steps that are not required in the development of a traditional measure including identification of "starting" and "stopping" rules. CAT's most attractive advantage is its efficiency. Greater measurement precision can be achieved with fewer items. Disadvantages of CAT include the high cost and level of technical expertise required to develop a CAT. Conclusions: Researchers, clinicians, and patients benefit from the availability of psychometrically rigorous measures that are not burdensome. CAT outcome measures hold substantial promise in this regard, but their development is not without challenges. 
(PsycINFO Database Record (c) 2007 APA, all rights reserved)10acomputer adaptive testing10aItem Response Theory10aself reported health outcomes1 aCook, KF1 aO'Malley, K J1 aRoddey, T S uhttp://iacat.org/content/dynamic-assessment-health-outcomes-time-let-cat-out-bag00395nas a2200109 4500008004100000245005900041210005800100260001700158100001100175700001800186856008100204 2004 eng d00aDetecting exposed test items in computer-based testing0 aDetecting exposed test items in computerbased testing aSan Diego CA1 aHan, N1 aHambleton, RK uhttp://iacat.org/content/detecting-exposed-test-items-computer-based-testing00485nas a2200097 4500008004100000245008700041210006900128260006200197100001500259856011300274 2004 eng d00aDeveloping tailored instruments: Item banking and computerized adaptive assessment0 aDeveloping tailored instruments Item banking and computerized ad aItem Banks, and Computer-Adaptive Testing,” Bethesda MD1 aChang, C-H uhttp://iacat.org/content/developing-tailored-instruments-item-banking-and-computerized-adaptive-assessment-200485nas a2200097 4500008003900000245008700039210006900126260006200195100001700257856011300274 2004 d00aDeveloping tailored instruments: Item banking and computerized adaptive assessment0 aDeveloping tailored instruments Item banking and computerized ad aItem Banks, and Computer-Adaptive Testing,” Bethesda MD1 aBjorner, J B uhttp://iacat.org/content/developing-tailored-instruments-item-banking-and-computerized-adaptive-assessment-100542nas a2200145 4500008004100000245008900041210006900130300001200199490000700211653003400218100001400252700001400266700001500280856010100295 2004 eng d00aThe development and evaluation of a software prototype for computer-adaptive testing0 adevelopment and evaluation of a software prototype for computera a109-1230 v4310acomputerized adaptive testing1 aLilley, M1 aBarker, T1 aBritton, C uhttp://iacat.org/content/development-and-evaluation-software-prototype-computer-adaptive-testing01800nas a2200277 4500008004100000245007600041210006900117300001100186490000600197520090300203653001501106653002901121653003001150653002001180653001101200653005001211653001801261653003201279653004101311653001801352100001401370700001301384700001301397700001901410856009301429 2003 eng d00aDeveloping an initial physical function item bank from existing sources0 aDeveloping an initial physical function item bank from existing a124-360 v43 aThe objective of this article is to illustrate incremental item banking using health-related quality of life data collected from two samples of patients receiving cancer treatment. The kinds of decisions one faces in establishing an item bank for computerized adaptive testing are also illustrated. Pre-calibration procedures include: identifying common items across databases; creating a new database with data from each pool; reverse-scoring "negative" items; identifying rating scales used in items; identifying pivot points in each rating scale; pivot anchoring items at comparable rating scale categories; and identifying items in each instrument that measure the construct of interest. A series of calibrations were conducted in which a small proportion of new items were added to the common core and misfitting items were identified and deleted until an initial item bank has been developed.10a*Databases10a*Sickness Impact Profile10aAdaptation, Psychological10aData Collection10aHumans10aNeoplasms/*physiopathology/psychology/therapy10aPsychometrics10aQuality of Life/*psychology10aResearch Support, U.S. 
Gov't, P.H.S.10aUnited States1 aBode, R K1 aCella, D1 aLai, J S1 aHeinemann, A W uhttp://iacat.org/content/developing-initial-physical-function-item-bank-existing-sources00505nas a2200121 4500008004100000245009900041210006900140100001300209700001600222700001800238700001500256856011200271 2003 eng d00aDevelopment and psychometric evaluation of the Flexilevel Scale of Shoulder Function (FLEX-SF)0 aDevelopment and psychometric evaluation of the Flexilevel Scale 1 aCook, KF1 aRoddey, T S1 aGartsman, G M1 aOlson, S L uhttp://iacat.org/content/development-and-psychometric-evaluation-flexilevel-scale-shoulder-function-flex-sf00381nas a2200085 4500008004100000245007700041210006900118100001500187856009300202 2003 eng d00aDevelopment of the Learning Potential Computerised Adaptive Test (LPCAT)0 aDevelopment of the Learning Potential Computerised Adaptive Test1 aDe Beer, M uhttp://iacat.org/content/development-learning-potential-computerised-adaptive-test-lpcat02788nas a2200121 4500008004100000245013500041210006900176300000900245490000700254520226900261100001502530856012102545 2003 eng d00aDevelopment, reliability, and validity of a computerized adaptive version of the Schedule for Nonadaptive and Adaptive Personality0 aDevelopment reliability and validity of a computerized adaptive a34850 v633 aComputerized adaptive testing (CAT) and Item Response Theory (IRT) techniques were applied to the Schedule for Nonadaptive and Adaptive Personality (SNAP) to create a more efficient measure with little or no cost to test reliability or validity. The SNAP includes 15 factor analytically derived and relatively unidimensional traits relevant to personality disorder. IRT item parameters were calibrated on item responses from a sample of 3,995 participants who completed the traditional paper-and-pencil (P&P) SNAP in a variety of university, community, and patient settings. Computerized simulations were conducted to test various adaptive testing algorithms, and the results informed the construction of the CAT version of the SNAP (SNAP-CAT). A validation study of the SNAP-CAT was conducted on a sample of 413 undergraduates who completed the SNAP twice, separated by one week. Participants were randomly assigned to one of four groups who completed (1) a modified P&P version of the SNAP (SNAP-PP) twice (n = 106), (2) the SNAP-PP first and the SNAP-CAT second (n = 105), (3) the SNAP-CAT first and the SNAP-PP second (n = 102), and (4) the SNAP-CAT twice (n = 100). Results indicated that the SNAP-CAT was 58% and 60% faster than the traditional P&P version, at Times 1 and 2, respectively, and mean item savings across scales were 36% and 37%, respectively. These savings came with minimal cost to reliability or validity, and the two test forms were largely equivalent. Descriptive statistics, rank-ordering of scores, internal factor structure, and convergent/discriminant validity were highly comparable across testing modes and methods of scoring, and very few differences between forms replicated across testing sessions. In addition, participants overwhelmingly preferred the computerized version to the P&P version. However, several specific problems were identified for the Self-harm and Propriety scales of the SNAP-CAT that appeared to be broadly related to IRT calibration difficulties. Reasons for these anomalous findings are discussed, and follow-up studies are suggested. Despite these specific problems, the SNAP-CAT appears to be a viable alternative to the traditional P&P SNAP. 
(PsycINFO Database Record (c) 2003 APA, all rights reserved).1 aSimms, L J uhttp://iacat.org/content/development-reliability-and-validity-computerized-adaptive-version-schedule-nonadaptive-and01443nas a2200241 4500008004100000245008000041210006900121300001200190490000700202520067300209653003000882653002800912653002500940653002300965653001600988653002201004653002101026100001301047700001601060700001001076700001601086856009901102 2002 eng d00aData sparseness and on-line pretest item calibration-scaling methods in CAT0 aData sparseness and online pretest item calibrationscaling metho a207-2180 v393 aCompared and evaluated 3 on-line pretest item calibration-scaling methods (the marginal maximum likelihood estimate with 1 expectation maximization [EM] cycle [OEM] method, the marginal maximum likelihood estimate with multiple EM cycles [MEM] method, and M. L. Stocking's Method B) in terms of item parameter recovery when the item responses to the pretest items in the pool are sparse. Simulations of computerized adaptive tests were used to evaluate the results yielded by the three methods. The MEM method produced the smallest average total error in parameter estimation, and the OEM method yielded the largest total error (PsycINFO Database Record (c) 2005 APA )10aComputer Assisted Testing10aEducational Measurement10aItem Response Theory10aMaximum Likelihood10aMethodology10aScaling (Testing)10aStatistical Data1 aBan, J-C1 aHanson, B A1 aYi, Q1 aHarris, D J uhttp://iacat.org/content/data-sparseness-and-line-pretest-item-calibration-scaling-methods-cat01637nas a2200133 4500008004100000245008400041210006900125300001200194490000700206520114900213100002701362700001601389856009801405 2002 eng d00aDetection of person misfit in computerized adaptive tests with polytomous items0 aDetection of person misfit in computerized adaptive tests with p a164-1800 v263 aItem scores that do not fit an assumed item response theory model may cause the latent trait value to be inaccurately estimated. For a computerized adaptive test (CAT) using dichotomous items, several person-fit statistics for detecting misfitting item score patterns have been proposed. Both for paper-and-pencil (P&P) tests and CATs, detection of person misfit with polytomous items is hardly explored. In this study, the nominal and empirical null distributions of the standardized log-likelihood statistic for polytomous items are compared both for P&P tests and CATs. Results showed that the empirical distribution of this statistic differed from the assumed standard normal distribution for both P&P tests and CATs. Second, a new person-fit statistic based on the cumulative sum (CUSUM) procedure from statistical process control was proposed. By means of simulated data, critical values were determined that can be used to classify a pattern as fitting or misfitting. The effectiveness of the CUSUM to detect simulees with item preknowledge was investigated. Detection rates using the CUSUM were high for realistic numbers of disclosed items. 
1 aKrimpen-Stoop, E M L A1 aMeijer, R R uhttp://iacat.org/content/detection-person-misfit-computerized-adaptive-tests-polytomous-items00462nas a2200097 4500008004100000245008700041210006900128260003900197100001500236856011300251 2002 eng d00aDeveloping tailored instruments: Item banking and computerized adaptive assessment0 aDeveloping tailored instruments Item banking and computerized ad a” Bethesda, Maryland, June 23-251 aThissen, D uhttp://iacat.org/content/developing-tailored-instruments-item-banking-and-computerized-adaptive-assessment-300477nas a2200109 4500008004100000245009900041210006900140260001900209100001400228700001400242856011100256 2002 eng d00aThe development and evaluation of a computer-adaptive testing application for English language0 adevelopment and evaluation of a computeradaptive testing applica aUnited Kingdom1 aLilley, M1 aBarker, T uhttp://iacat.org/content/development-and-evaluation-computer-adaptive-testing-application-english-language02890nas a2200349 4500008004100000245008300041210006900124260000800193300001100201490000700212520176600219653003001985653002802015653001502043653001002058653000902068653002202077653001102099653001902110653001102129653000902140653001602149653006202165653006102227653003302288653003602321653003102357653002602388100001402414700001602428856009602444 2002 eng d00aDevelopment of an index of physical functional health status in rehabilitation0 aDevelopment of an index of physical functional health status in cMay a655-650 v833 aOBJECTIVE: To describe (1) the development of an index of physical functional health status (FHS) and (2) its hierarchical structure, unidimensionality, reproducibility of item calibrations, and practical application. DESIGN: Rasch analysis of existing data sets. SETTING: A total of 715 acute, orthopedic outpatient centers and 62 long-term care facilities in 41 states participating with Focus On Therapeutic Outcomes, Inc. PATIENTS: A convenience sample of 92,343 patients (40% male; mean age +/- standard deviation [SD], 48+/-17y; range, 14-99y) seeking rehabilitation between 1993 and 1999. INTERVENTIONS: Not applicable. MAIN OUTCOME MEASURES: Patients completed self-report health status surveys at admission and discharge. The Medical Outcomes Study 36-Item Short-Form Health Survey's physical functioning scale (PF-10) is the foundation of the physical FHS. The Oswestry Low Back Pain Disability Questionnaire, Neck Disability Index, Lysholm Knee Questionnaire, items pertinent to patients with upper-extremity impairments, and items pertinent to patients with more involved neuromusculoskeletal impairments were cocalibrated into the PF-10. RESULTS: The final FHS item bank contained 36 items (patient separation, 2.3; root mean square measurement error, 5.9; mean square +/- SD infit, 0.9+/-0.5; outfit, 0.9+/-0.9). Analyses supported empirical item hierarchy, unidimensionality, reproducibility of item calibrations, and content and construct validity of the FHS-36. CONCLUSIONS: Results support the reliability and validity of FHS-36 measures in the present sample. 
Analyses show the potential for a dynamic, computer-controlled, adaptive survey for FHS assessment applicable for group analysis and clinical decision making for individual patients.10a*Health Status Indicators10a*Rehabilitation Centers10aAdolescent10aAdult10aAged10aAged, 80 and over10aFemale10aHealth Surveys10aHumans10aMale10aMiddle Aged10aMusculoskeletal Diseases/*physiopathology/*rehabilitation10aNervous System Diseases/*physiopathology/*rehabilitation10aPhysical Fitness/*physiology10aRecovery of Function/physiology10aReproducibility of Results10aRetrospective Studies1 aHart, D L1 aWright, B D uhttp://iacat.org/content/development-index-physical-functional-health-status-rehabilitation00321nas a2200097 4500008004100000245004300041210003900084260002200123100001700145856006100162 2002 eng d00aThe Development of STAR Early Literacy0 aDevelopment of STAR Early Literacy aDesert Springs CA1 aMcBride, J R uhttp://iacat.org/content/development-star-early-literacy00550nas a2200097 4500008003900000245013500039210006900174260007200243100001500315856012200330 2002 d00aDEVELOPMENT, RELIABILITY, AND VALIDITY OF A COMPUTERIZED ADAPTIVE VERSION OF THE SCHEDULE FOR NONADAPTIVE AND ADAPTIVE PERSONALITY0 aDEVELOPMENT RELIABILITY AND VALIDITY OF A COMPUTERIZED ADAPTIVE aUnpublished Ph. D. dissertation, University of Iowa, Iowa City Iowa1 aSimms, L J uhttp://iacat.org/content/development-reliability-and-validity-computerized-adaptive-version-schedule-nonadaptive-an-000476nas a2200133 4500008004100000245007400041210006900115260001200184100001100196700001600207700001000223700001400233856009500247 2001 eng d00aData sparseness and online pretest calibration/scaling methods in CAT0 aData sparseness and online pretest calibrationscaling methods in aSeattle1 aBan, J1 aHanson, B A1 aYi, Q1 aHarris, D uhttp://iacat.org/content/data-sparseness-and-online-pretest-calibrationscaling-methods-cat00422nas a2200121 4500008004100000245005900041210005900100260001500159100001700174700001900191700001200210856007800222 2001 eng d00aDeriving a stopping rule for sequential adaptive tests0 aDeriving a stopping rule for sequential adaptive tests aSeattle WA1 aGrabovsky, I1 aChang, Hua-Hua1 aYing, Z uhttp://iacat.org/content/deriving-stopping-rule-sequential-adaptive-tests00450nas a2200097 4500008004100000245008100041210006900122260004200191100001900233856010000252 2001 eng d00aDetection of misfitting item-score patterns in computerized adaptive testing0 aDetection of misfitting itemscore patterns in computerized adapt aEnschede, The Netherlands: Febodruk B1 aStoop, E M L A uhttp://iacat.org/content/detection-misfitting-item-score-patterns-computerized-adaptive-testing00501nam a2200097 4500008004100000245009300041210006900134260007600203100001300279856011100292 2001 eng d00aDevelopment and evaluation of test assembly procedures for computerized adaptive testing0 aDevelopment and evaluation of test assembly procedures for compu aUnpublished doctoral dissertation, University of Massachusetts, Amherst1 aRobin, F uhttp://iacat.org/content/development-and-evaluation-test-assembly-procedures-computerized-adaptive-testing00548nas a2200157 4500008004100000245008100041210006900122300001200191490000700203100002000210700001600230700001600246700001700262700001400279856009700293 2001 eng d00aDevelopment of an adaptive multimedia program to collect patient health data0 aDevelopment of an adaptive multimedia program to collect patient a320-3240 v211 aSutherland, L A1 aCampbell, M1 aOrnstein, K1 aWildemuth, B1 aLobach, D 
uhttp://iacat.org/content/development-adaptive-multimedia-program-collect-patient-health-data00453nas a2200097 4500008004100000245009000041210006900131260002500200100003300225856009700258 2001 eng d00aThe Development of STAR Early Literacy: A report of the School Renaissance Institute.0 aDevelopment of STAR Early Literacy A report of the School Renais aMadison, WI: Author.1 aSchool-Renaissance-Institute uhttp://iacat.org/content/development-star-early-literacy-report-school-renaissance-institute01965nas a2200193 4500008004100000245008600041210006900127300001000196490000700206520131400213653001401527653003001541653002501571653001101596653003601607653001401643100001501657856009901672 2001 eng d00aDevelopments in measurement of persons and items by means of item response models0 aDevelopments in measurement of persons and items by means of ite a65-940 v283 aThis paper starts with a general introduction into measurement of hypothetical constructs typical of the social and behavioral sciences. After the stages ranging from theory through operationalization and item domain to preliminary test or questionnaire have been treated, the general assumptions of item response theory are discussed. The family of parametric item response models for dichotomous items is introduced and it is explained how parameters for respondents and items are estimated from the scores collected from a sample of respondents who took the test or questionnaire. Next, the family of nonparametric item response models is explained, followed by the 3 classes of item response models for polytomous item scores (e.g., rating scale scores). Then, to what degree the mean item score and the unweighted sum of item scores for persons are useful for measuring items and persons in the context of item response theory is discussed. Methods for fitting parametric and nonparametric models to data are briefly discussed. Finally, the main applications of item response models are discussed, which include equating and item banking, computerized and adaptive testing, research into differential item functioning, person fit research, and cognitive modeling. (PsycINFO Database Record (c) 2005 APA )10aCognitive10aComputer Assisted Testing10aItem Response Theory10aModels10aNonparametric Statistical Tests10aProcesses1 aSijtsma, K uhttp://iacat.org/content/developments-measurement-persons-and-items-means-item-response-models01613nas a2200193 4500008004100000245008600041210006900127300001200196490000700208520094500215653002101160653003001181653004101211653000901252653001701261100001601278700001701294856010801311 2001 eng d00aDifferences between self-adapted and computerized adaptive tests: A meta-analysis0 aDifferences between selfadapted and computerized adaptive tests a235-2470 v383 aSelf-adapted testing has been described as a variation of computerized adaptive testing that reduces test anxiety and thereby enhances test performance. The purpose of this study was to gain a better understanding of these proposed effects of self-adapted tests (SATs); meta-analysis procedures were used to estimate differences between SATs and computerized adaptive tests (CATs) in proficiency estimates and post-test anxiety levels across studies in which these two types of tests have been compared. After controlling for measurement error the results showed that SATs yielded proficiency estimates that were 0.12 standard deviation units higher and post-test anxiety levels that were 0.19 standard deviation units lower than those yielded by CATs. 
The authors speculate about possible reasons for these differences and discuss advantages and disadvantages of using SATs in operational settings. (PsycINFO Database Record (c) 2005 APA )10aAdaptive Testing10aComputer Assisted Testing10aScores computerized adaptive testing10aTest10aTest Anxiety1 aPitkin, A K1 aVispoel, W P uhttp://iacat.org/content/differences-between-self-adapted-and-computerized-adaptive-tests-meta-analysis00475nas a2200121 4500008004100000245005900041210005900100260005900159300001400218100001800232700002300250856008000273 2000 eng d00aDesigning item pools for computerized adaptive testing0 aDesigning item pools for computerized adaptive testing aDendrecht, The NetherlandsbKluwer Academic Publishers a149–1621 aVeldkamp, B P1 avan der Linden, WJ uhttp://iacat.org/content/designing-item-pools-computerized-adaptive-testing00600nas a2200109 4500008004100000245009300041210006900134260012700203100002700330700001600357856011700373 2000 eng d00aDetecting person misfit in adaptive testing using statistical process control techniques0 aDetecting person misfit in adaptive testing using statistical pr aW. J. van der Linden, and C. A. W. Glas (Editors). Computerized Adaptive Testing: Theory and Practice. Norwell MA: Kluwer.1 aKrimpen-Stoop, E M L A1 aMeijer, R R uhttp://iacat.org/content/detecting-person-misfit-adaptive-testing-using-statistical-process-control-techniques-000571nas a2200133 4500008004100000245009300041210006900134260004900203300001200252653001500264100002700279700001600306856011500322 2000 eng d00aDetecting person misfit in adaptive testing using statistical process control techniques0 aDetecting person misfit in adaptive testing using statistical pr aDordrecht, The NetherlandsbKluwer Academic. a201-21910aperson Fit1 aKrimpen-Stoop, E M L A1 aMeijer, R R uhttp://iacat.org/content/detecting-person-misfit-adaptive-testing-using-statistical-process-control-techniques00514nas a2200109 4500008004100000245012100041210006900162260001900231100001700250700001600267856012100283 2000 eng d00aDetecting test-takers who have memorized items in computerized-adaptive testing and muti-stage testing: A comparison0 aDetecting testtakers who have memorized items in computerizedada aNew Orleans LA1 aPatsula, L N1 aMcLeod, L D uhttp://iacat.org/content/detecting-test-takers-who-have-memorized-items-computerized-adaptive-testing-and-muti-stage00481nas a2200121 4500008004100000245009100041210006900132300001200201490000700213100002000220700001600240856010300256 2000 eng d00aDetection of known items in adaptive testing with a statistical quality control method0 aDetection of known items in adaptive testing with a statistical a373-3890 v251 aVeerkamp, W J J1 aGlas, C E W uhttp://iacat.org/content/detection-known-items-adaptive-testing-statistical-quality-control-method00639nas a2200109 4500008004100000245011000041210006900151260014400220100002700364700001600391856012200407 2000 eng d00aDetection of person misfit in computerized adaptive testing with polytomous items (Research Report 00-01)0 aDetection of person misfit in computerized adaptive testing with aEnschede, The Netherlands: University of Twente, Faculty of Educational Science and Technology, Department of Measurement and Data Analysis1 aKrimpen-Stoop, E M L A1 aMeijer, R R uhttp://iacat.org/content/detection-person-misfit-computerized-adaptive-testing-polytomous-items-research-report-00-0100577nas a2200097 4500008004100000245016800041210006900209260006600278100001300344856012200357 2000 eng d00aDevelopment and evaluation of test 
assembly procedures for computerized adaptive testing (Laboratory of Psychometric and Evaluative Methods Research Report No 391)0 aDevelopment and evaluation of test assembly procedures for compu aAmherst MA: University of Massachusetts, School of Education.1 aRobin, F uhttp://iacat.org/content/development-and-evaluation-test-assembly-procedures-computerized-adaptive-testing-laboratory02840nas a2200193 4500008004100000245013800041210006900179300000900248490000700257520208000264653003002344653002002374653005202394653002202446653001802468653002402486100001602510856012002526 2000 eng d00aThe development of a computerized version of Vandenberg's mental rotation test and the effect of visuo-spatial working memory loading0 adevelopment of a computerized version of Vandenbergs mental rota a39380 v603 aThis dissertation focused on the generation and evaluation of web-based versions of Vandenberg's Mental Rotation Test. Memory and spatial visualization theory were explored in relation to the addition of a visuo-spatial working memory component. Analysis of the data determined that there was a significant difference between scores on the MRT Computer and MRT Memory test. The addition of a visuo-spatial working memory component did significantly affect results at the .05 alpha level. Reliability and discrimination estimates were higher on the MRT Memory version. The computerization of the paper and pencil version of the MRT did not significantly affect scores but did affect the time required to complete the test. The population utilized in the quasi-experiment consisted of 107 university students from eight institutions in engineering graphics related courses. The subjects completed two researcher-developed, Web-based versions of Vandenberg's Mental Rotation Test and the original paper and pencil version of the Mental Rotation Test. One version of the test included a visuo-spatial working memory loading. Significant contributions of this study included developing and evaluating computerized versions of Vandenberg's Mental Rotation Test. Previous versions of Vandenberg's Mental Rotation Test did not take advantage of the ability of the computer to incorporate an interaction factor, such as a visuo-spatial working memory loading, into the test. The addition of an interaction factor results in a more discriminating test which will lend itself well to computerized adaptive testing practices. Educators in engineering graphics related disciplines should strongly consider the use of spatial visualization tests to aid in establishing the effects of modern computer systems on fundamental design/drafting skills. Regular testing of spatial visualization skills will assist in the creation of a more relevant curriculum. Computerized tests which are valid and reliable will assist in making this task feasible. 
(PsycINFO Database Record (c) 2005 APA )10aComputer Assisted Testing10aMental Rotation10aShort Term Memory computerized adaptive testing10aTest Construction10aTest Validity10aVisuospatial Memory1 aStrong, S D uhttp://iacat.org/content/development-computerized-version-vandenbergs-mental-rotation-test-and-effect-visuo-spatial03270nas a2200217 4500008004100000245020800041210007000249300001000319490000700329520241300336653002102749653002402770653003202794653001302826100001902839700001802858700001702876700001802893700001702911856012402928 2000 eng d00aDiagnostische programme in der Demenzfrüherkennung: Der Adaptive Figurenfolgen-Lerntest (ADAFI) [Diagnostic programs in the early detection of dementia: The Adaptive Figure Series Learning Test (ADAFI)]0 aDiagnostische programme in der Demenzfrüherkennung Der Adaptive a16-290 v133 aZusammenfassung: Untersucht wurde die Eignung des computergestützten Adaptiven Figurenfolgen-Lerntests (ADAFI), zwischen gesunden älteren Menschen und älteren Menschen mit erhöhtem Demenzrisiko zu differenzieren. Der im ADAFI vorgelegte Aufgabentyp der fluiden Intelligenzdimension (logisches Auffüllen von Figurenfolgen) hat sich in mehreren Studien zur Erfassung des intellektuellen Leistungspotentials (kognitive Plastizität) älterer Menschen als günstig für die genannte Differenzierung erwiesen. Aufgrund seiner Konzeption als Diagnostisches Programm fängt der ADAFI allerdings einige Kritikpunkte an Vorgehensweisen in diesen bisherigen Arbeiten auf. Es konnte gezeigt werden, a) daß mit dem ADAFI deutliche Lokationsunterschiede zwischen den beiden Gruppen darstellbar sind, b) daß mit diesem Verfahren eine gute Vorhersage des mentalen Gesundheitsstatus der Probanden auf Einzelfallebene gelingt (Sensitivität: 80 %, Spezifität: 90 %), und c) daß die Vorhersageleistung statusdiagnostischer Tests zur Informationsverarbeitungsgeschwindigkeit und zum Arbeitsgedächtnis geringer ist. Die Ergebnisse weisen darauf hin, daß die plastizitätsorientierte Leistungserfassung mit dem ADAFI vielversprechend für die Frühdiagnostik dementieller Prozesse sein könnte.The aim of this study was to examine the ability of the computerized Adaptive Figure Series Learning Test (ADAFI) to differentiate among old subjects at risk for dementia and old healthy controls. Several studies on the subject of measuring the intellectual potential (cognitive plasticity) of old subjects have shown the usefulness of the fluid intelligence type of task used in the ADAFI (completion of figure series) for this differentiation. Because the ADAFI has been developed as a Diagnostic Program it is able to counter some critical issues in those studies. It was shown a) that distinct differences between both groups are revealed by the ADAFI, b) that the prediction of the cognitive health status of individual subjects is quite good (sensitivity: 80 %, specifity: 90 %), and c) that the prediction of the cognitive health status with tests of processing speed and working memory is worse than with the ADAFI. 
The results indicate that the ADAFI might be a promising plasticity-oriented tool for the measurement of cognitive decline in the elderly, and thus might be useful for the early detection of dementia.10aAdaptive Testing10aAt Risk Populations10aComputer Assisted Diagnosis10aDementia1 aSchreiber, M D1 aSchneider, RJ1 aSchweizer, A1 aBeckmann, J F1 aBaltissen, R uhttp://iacat.org/content/diagnostische-programme-der-demenzfr%C3%BCherkennung-der-adaptive-figurenfolgen-lerntest-adafi00403nas a2200121 4500008004100000245005400041210005300095300001200148490000700160100001700167700001900184856007800203 2000 eng d00aDoes adaptive testing violate local independence?0 aDoes adaptive testing violate local independence a149-1560 v651 aMislevy, R J1 aChang, Hua-Hua uhttp://iacat.org/content/does-adaptive-testing-violate-local-independence00591nas a2200109 4500008004100000245008400041210006900125260014400194100001800338700002300356856010200379 1999 eng d00aDesigning item pools for computerized adaptive testing (Research Report 99-03)0 aDesigning item pools for computerized adaptive testing Research aEnschede, The Netherlands: University of Twente, Faculty of Educational Science and Technology, Department of Measurement and Data Analysis1 aVeldkamp, B P1 avan der Linden, WJ uhttp://iacat.org/content/designing-item-pools-computerized-adaptive-testing-research-report-99-0300395nas a2200121 4500008004100000245005500041210005500096300001200151490000700163100001700170700001300187856007300200 1999 eng d00aDetecting item memorization in the CAT environment0 aDetecting item memorization in the CAT environment a147-1600 v231 aMcLeod, L D1 aLewis, C uhttp://iacat.org/content/detecting-item-memorization-cat-environment00423nas a2200109 4500008004100000245006800041210006800109260002100177100001600198700001800214856008100232 1999 eng d00aDetecting items that have been memorized in the CAT environment0 aDetecting items that have been memorized in the CAT environment aMontreal, Canada1 aMcLeod, L D1 aSchnipke, D L uhttp://iacat.org/content/detecting-items-have-been-memorized-cat-environment00517nas a2200109 4500008004100000245006300041210006300104260012100167100001900288700001600307856008400323 1999 eng d00aDeveloping computerized adaptive tests for school children0 aDeveloping computerized adaptive tests for school children aF. Drasgow and J. B. Olson-Buchanan (Eds.), Innovations in computerized assessment (pp. 93-115). Mahwah NJ: Erlbaum.1 aKingsbury, G G1 aHouser, R L uhttp://iacat.org/content/developing-computerized-adaptive-tests-school-children00482nas a2200097 4500008004100000245011700041210006900158260002100227100002000248856011600268 1999 eng d00aThe development and cognitive evaluation of an audio-assisted computer-adaptive test for eighth-grade mathematics0 adevelopment and cognitive evaluation of an audioassisted compute aMontreal, Canada1 aWilliams, V S L uhttp://iacat.org/content/development-and-cognitive-evaluation-audio-assisted-computer-adaptive-test-eight-grade00559nas a2200097 4500008004100000245009700041210006900138260012200207100001500329856011700344 1999 eng d00aDevelopment and introduction of a computer adaptive Graduate Record Examination General Test0 aDevelopment and introduction of a computer adaptive Graduate Rec aF. Drasgow and J. B. Olson-Buchanan (Eds.). Innovations in computerized assessment (pp. 117-135). 
Mahwah NJ: Erlbaum.1 aMills, C N uhttp://iacat.org/content/development-and-introduction-computer-adaptive-graduate-record-examination-general-test00657nas a2200133 4500008004100000245012100041210006900162260010800231100001600339700001700355700001600372700001500388856012000403 1999 eng d00aThe development of a computerized adaptive selection system for computer programmers in a financial services company0 adevelopment of a computerized adaptive selection system for comp aF. Drasgow and J. B. Olsen (Eds.), Innovations in computerized assessment (pp. 7-33). Mahwah NJ: Erlbaum.1 aZickar, M J1 aOverton, R C1 aTaylor, L R1 aHarms, H J uhttp://iacat.org/content/development-computerized-adaptive-selection-system-computer-programmers-financial-services00381nas a2200109 4500008004100000245006400041210006000105300001200165490000700177100001500184856007200199 1999 eng d00aThe development of an adaptive test for placement in French0 adevelopment of an adaptive test for placement in French a122-1350 v101 aLaurier, M uhttp://iacat.org/content/development-adaptive-test-placement-french00590nas a2200109 4500008004100000245011100041210006900152260010500221100001600326700001600342856012200358 1999 eng d00aDevelopment of the computerized adaptive testing version of the Armed Services Vocational Aptitude Battery0 aDevelopment of the computerized adaptive testing version of the aF. Drasgow and J. Olson-Buchanan (Eds.). Innovations in computerized assessment. Mahwah NJ: Erlbaum.1 aSegall, D O1 aMoreno, K E uhttp://iacat.org/content/development-computerized-adaptive-testing-version-armed-services-vocational-aptitude-battery00499nas a2200121 4500008004100000245009700041210006900138300001000207100001400217700001700231700001600248856011300264 1999 eng d00aDynamic health assessments: The search for more practical and more precise outcomes measures0 aDynamic health assessments The search for more practical and mor a11-131 aWare, Jr.1 aBjorner, J B1 aKosinski, M uhttp://iacat.org/content/dynamic-health-assessments-search-more-practical-and-more-precise-outcomes-measures00576nas a2200121 4500008004100000245009500041210006900136260009200205100001300297700001500310700001800325856011100343 1998 eng d00aDeveloping, maintaining, and renewing the item inventory to support computer-based testing0 aDeveloping maintaining and renewing the item inventory to suppor aComputer-Based Testing: Building the Foundation for Future Assessments, Philadelphia PA1 aWay, W D1 aSteffen, M1 aAnderson, G S uhttp://iacat.org/content/developing-maintaining-and-renewing-item-inventory-support-computer-based-testing00489nas a2200109 4500008004100000245007700041210006900118260006500187100001600252700001400268856009700282 1998 eng d00aDevelopment and evaluation of online calibration procedures (TCN 96-216)0 aDevelopment and evaluation of online calibration procedures TCN aChampaign IL: Algorithm Design and Measurement Services, Inc1 aLevine, M L1 aWilliams. uhttp://iacat.org/content/development-and-evaluation-online-calibration-procedures-tcn-96-21600480nas a2200109 4500008004100000245007800041210006900119260004600188100001700234700001900251856010000270 1998 eng d00aDoes adaptive testing violate local independence? 
(Research Report 98-33)0 aDoes adaptive testing violate local independence Research Report aPrinceton NJ: Educational Testing Service1 aMislevy, R J1 aChang, Hua-Hua uhttp://iacat.org/content/does-adaptive-testing-violate-local-independence-research-report-98-3300349nas a2200097 4500008004100000245005300041210005300094260001600147100001500163856007300178 1997 eng d00aDetecting misbehaving items in a CAT environment0 aDetecting misbehaving items in a CAT environment aChicago, IL1 aSwygert, K uhttp://iacat.org/content/detecting-misbehaving-items-cat-environment00349nas a2200097 4500008004100000245005100041210005100092260001500143100002300158856007000181 1997 eng d00aDetection of aberrant response patterns in CAT0 aDetection of aberrant response patterns in CAT aChicago IL1 avan der Linden, WJ uhttp://iacat.org/content/detection-aberrant-response-patterns-cat00471nas a2200133 4500008004100000245007300041210006900114300001000183490000700193100001300200700001100213700001800224856009500242 1997 eng d00aDeveloping and scoring an innovative computerized writing assessment0 aDeveloping and scoring an innovative computerized writing assess a21-410 v341 aDavey, T1 aGodwin1 aMittelholz, D uhttp://iacat.org/content/developing-and-scoring-innovative-computerized-writing-assessment00480nas a2200121 4500008004100000245008900041210006900130300000900199490000700208100001800215700001800233856010700251 1997 eng d00aDiagnostic adaptive testing: Effects of remedial instruction as empirical validation0 aDiagnostic adaptive testing Effects of remedial instruction as e a3-200 v341 aTatsuoka, K K1 aTatsuoka, M M uhttp://iacat.org/content/diagnostic-adaptive-testing-effects-remedial-instruction-empirical-validation01729nas a2200169 4500008004100000245009900041210006900140300001200209490000700221520112300228653002101351653003001372653000801402653002301410100001601433856011001449 1997 eng d00aThe distribution of indexes of person fit within the computerized adaptive testing environment0 adistribution of indexes of person fit within the computerized ad a115-1270 v213 aThe extent to which a trait estimate represents the underlying latent trait of interest can be estimated by using indexes of person fit. Several statistical methods for indexing person fit have been proposed to identify nonmodel-fitting response vectors. These person-fit indexes have generally been found to follow a standard normal distribution for conventionally administered tests. The present investigation found that within the context of computerized adaptive testing (CAT) these indexes tended not to follow a standard normal distribution. As the item pool became less discriminating, as the CAT termination criterion became less stringent, and as the number of items in the pool decreased, the distributions of the indexes approached a standard normal distribution. It was determined that under these conditions the indexes' distributions approached standard normal distributions because more items were being administered. However, even when over 50 items were administered in a CAT the indexes were distributed in a fashion that was different from what was expected. 
(PsycINFO Database Record (c) 2006 APA )10aAdaptive Testing10aComputer Assisted Testing10aFit10aPerson Environment1 aNering, M L uhttp://iacat.org/content/distribution-indexes-person-fit-within-computerized-adaptive-testing-environment00860nas a2200205 4500008004100000245004600041210004600087260001200133300000800145490000600153520029600159653002900455653001500484653001100499653001800510653002400528653001800552100001700570856006700587 1996 eng d00aDispelling myths about the new NCLEX exam0 aDispelling myths about the new NCLEX exam cJan-Feb a6-70 v93 aThe new computerized NCLEX system is working well. Most new candidates, employers, and board of nursing representatives like the computerized adaptive testing system and the fast report of results. But, among the candidates themselves some myths have grown which cause them needless anxiety.10a*Educational Measurement10a*Licensure10aHumans10aNursing Staff10aPersonnel Selection10aUnited States1 aJohnson, S H uhttp://iacat.org/content/dispelling-myths-about-new-nclex-exam02572nas a2200133 4500008004100000245009100041210006900132300000900201490000700210520206600217653003402283100001402317856010702331 1996 eng d00aDynamic scaling: An ipsative procedure using techniques from computer adaptive testing0 aDynamic scaling An ipsative procedure using techniques from comp a58240 v563 aThe purpose of this study was to create a prototype method for scaling items using computer adaptive testing techniques and to demonstrate the method with a working model program. The method can be used to scale items, rank individuals with respect to the scaled items, and to re-scale the items with respect to the individuals' responses. When using this prototype method, the items to be scaled are part of a database that contains not only the items, but measures of how individuals respond to each item. After completion of all presented items, the individual is assigned an overall scale value which is then compared with each item responded to, and an individual "error" term is stored with each item. After several individuals have responded to the items, the item error terms are used to revise the placement of the scaled items. This revision feature allows the natural adaptation of one general list to reflect subgroup differences, for example, differences among geographic areas or ethnic groups. It also provides easy revision and limited authoring of the scale items by the computer program administrator. This study addressed the methodology, the instrumentation needed to handle the scale-item administration, data recording, item error analysis, and scale-item database editing required by the method, and the behavior of a prototype vocabulary test in use. Analyses were made of item ordering, response profiles, item stability, reliability and validity. Although slow, the movement of unordered words used as items in the prototype program was accurate as determined by comparison with an expert word ranking. Person scores obtained by multiple administrations of the prototype test were reliable and correlated at.94 with a commercial paper-and-pencil vocabulary test, while holding a three-to-one speed advantage in administration. 
Although based upon self-report data, dynamic scaling instruments like the model vocabulary test could be very useful for self-assessment, for pre (PsycINFO Database Record (c) 2003 APA, all rights reserved).10acomputerized adaptive testing1 aBerg, S R uhttp://iacat.org/content/dynamic-scaling-ipsative-procedure-using-techniques-computer-adaptive-testing00324nas a2200109 4500008004100000245003400041210003300075260001800108100001600126700001700142856005500159 1995 eng d00aDoes cheating on CAT pay: Not0 aDoes cheating on CAT pay Not aSan Francisco1 aGershon, RC1 aBergstrom, B uhttp://iacat.org/content/does-cheating-cat-pay-not00563nas a2200121 4500008004100000245011900041210006900160260004700229100001300276700001600289700001700305856011900322 1994 eng d00aDIF analysis for pretest items in computer-adaptive testing (Educational Testing Service Research Rep No RR 94-33)0 aDIF analysis for pretest items in computeradaptive testing Educa aPrinceton NJ: Educational Testing Service.1 aZwick, R1 aThayer, D T1 aWingersky, M uhttp://iacat.org/content/dif-analysis-pretest-items-computer-adaptive-testing-educational-testing-service-research00487nas a2200097 4500008004100000245010200041210006900143260004600212100001600258856011500274 1993 eng d00aDeriving comparable scores for computer adaptive and conventional tests: An example using the SAT0 aDeriving comparable scores for computer adaptive and conventiona aPrinceton NJ: Educational Testing Service1 aEignor, D R uhttp://iacat.org/content/deriving-comparable-scores-computer-adaptive-and-conventional-tests-example-using-sat00435nas a2200109 4500008004100000245008300041210006900124300001200193490000700205100001700212856009600229 1993 eng d00aThe development and evaluation of a computerized adaptive test of tonal memory0 adevelopment and evaluation of a computerized adaptive test of to a111-1360 v411 aVispoel, W P uhttp://iacat.org/content/development-and-evaluation-computerized-adaptive-test-tonal-memory00477nas a2200121 4500008004100000245008100041210006900122300000900191490000700200653003400207100002100241856009300262 1992 eng d00aThe development and evaluation of a system for computerized adaptive testing0 adevelopment and evaluation of a system for computerized adaptive a43040 v5210acomputerized adaptive testing1 aTorre Sanchez, R uhttp://iacat.org/content/development-and-evaluation-system-computerized-adaptive-testing00512nas a2200109 4500008004100000245005600041210005200097260014600149100001700295700001600312856007400328 1992 eng d00aThe development of alternative operational concepts0 adevelopment of alternative operational concepts aProceedings of the 34th Annual Conference of the Military Testing Association. 
San Diego, CA: Navy Personnel Research and Development Center.1 aMcBride, J R1 aCurran, L T uhttp://iacat.org/content/development-alternative-operational-concepts00461nas a2200097 4500008004100000245010100041210006900142260001700211100001300228856012200241 1992 eng d00aDifferential item functioning analysis for computer-adaptive tests and other IRT-scored measures0 aDifferential item functioning analysis for computeradaptive test aSan Diego CA1 aZwick, R uhttp://iacat.org/content/differential-item-functioning-analysis-computer-adaptive-tests-and-other-irt-scored-measures00441nas a2200109 4500008004100000245007700041210006900118260001500187100001900202700001700221856009300238 1991 eng d00aThe development and evaluation of a computerized adaptive testing system0 adevelopment and evaluation of a computerized adaptive testing sy aChicago IL1 aDe la Torre, R1 aVispoel, W P uhttp://iacat.org/content/development-and-evaluation-computerized-adaptive-testing-system00486nas a2200109 4500008004100000245010800041210006900149260001500218100001300233700001500246856011500261 1991 eng d00aDevelopment and evaluation of hierarchical testlets in two-stage tests using integer linear programming0 aDevelopment and evaluation of hierarchical testlets in twostage aChicago IL1 aLam, T L1 aGoong, Y Y uhttp://iacat.org/content/development-and-evaluation-hierarchical-testlets-two-stage-tests-using-integer-linear00364nas a2200085 4500008004100000245006800041210006800109100001200177856008900189 1990 eng d00aDichotomous search strategies for computerized adaptive testing0 aDichotomous search strategies for computerized adaptive testing1 aXiao, B uhttp://iacat.org/content/dichotomous-search-strategies-computerized-adaptive-testing00646nas a2200097 4500008004100000245013800041210006900179260016300248100001800411856011900429 1989 eng d00aDie Optimierung der Meßgenauigkeit beim branched adaptiven Testen [Optimization of measurement precision for branched-adaptive testing]0 aDie Optimierung der Meßgenauigkeit beim branched adaptiven Testen aK. D. Kubinger (Ed.), Moderne Testtheorie: Ein Abriß samt neuesten Beiträgen [Modern test theory: Overview and new issues] (pp. 187-218). 
Weinheim, Germany: Beltz.1 aKubinger, K D uhttp://iacat.org/content/die-optimierung-der-mebgenauikeit-beim-branched-adaptiven-testen-optimization-measurement00536nas a2200109 4500008004100000245011600041210006900157260005300226100001400279700001700293856011600310 1988 eng d00aThe development and evaluation of a microcomputerized adaptive placement testing system for college mathematics0 adevelopment and evaluation of a microcomputerized adaptive place a1986 (San Francisco CA) and 1987 (Washington DC)1 aHsu, T -C1 aShermis, M D uhttp://iacat.org/content/development-and-evaluation-microcomputerized-adaptive-placement-testing-system-college00529nas a2200097 4500008004100000245014000041210006900181260004500250100001500295856012100310 1986 eng d00aDetermining the sensitivity of CAT-ASVAB scores to changes in item response curves with the medium of administration (Report No. 86-189)0 aDetermining the sensitivity of CATASVAB scores to changes in ite aAlexandria VA: Center for Naval Analyses1 aDivgi, D R uhttp://iacat.org/content/determining-sensitivity-cat-asvab-scores-changes-item-response-curves-medium-administration00505nas a2200097 4500008004100000245011700041210006900158260004800227100001400275856011800289 1985 eng d00aDevelopment of a microcomputer-based adaptive testing system: Phase II Implementation (Research Report ONR 85-5)0 aDevelopment of a microcomputerbased adaptive testing system Phas aSt. Paul MN: Assessment Systems Corporation1 aVale, C D uhttp://iacat.org/content/development-microcomputer-based-adaptive-testing-system-phase-ii-implementation-research00423nas a2200097 4500008004100000245008500041210006900126260002000195100001700215856009300232 1984 eng d00aThe design of a computerized adaptive testing system for administering the ASVAB0 adesign of a computerized adaptive testing system for administeri aNew Orleans, LA1 aMcBride, J R uhttp://iacat.org/content/design-computerized-adaptive-testing-system-administering-asvab00612nas a2200097 4500008004100000245006000041210005900101260026000160100001400420856008000434 1982 eng d00aDesign of a Microcomputer-Based Adaptive Testing System0 aDesign of a MicrocomputerBased Adaptive Testing System aD. J. Weiss (Ed.), Proceedings of the 1979 Item Response Theory and Computerized Adaptive Testing Conference (pp. 360-371). Minneapolis: University of Minnesota, Department of Psychology, Psychometric Methods Program, Computerized Adaptive Testing Laboratory.1 aVale, C D uhttp://iacat.org/content/design-microcomputer-based-adaptive-testing-system00442nas a2200097 4500008004100000245009100041210006900132260001900201100001700220856010700237 1982 eng d00aDevelopment of a computerized adaptive testing system for enlisted personnel selection0 aDevelopment of a computerized adaptive testing system for enlist aWashington, DC1 aMcBride, J R uhttp://iacat.org/content/development-computerized-adaptive-testing-system-enlisted-personnel-selection00523nas a2200097 4500008004100000245004800041210004700089260020000136100001700336856007200353 1982 eng d00aDiscussion: Adaptive and sequential testing0 aDiscussion Adaptive and sequential testing aD. J. Weiss (Ed.). Proceedings of the 1982 Computerized Adaptive Testing Conference (pp. 290-294). 
Minneapolis MN: University of Minnesota, Department of Psychology, Psychometric Methods Program.1 aReckase, M D uhttp://iacat.org/content/discussion-adaptive-and-sequential-testing00431nas a2200109 4500008004100000245007900041210006900120300001200189490000700201100001400208856009900222 1981 eng d00aDesign and implementation of a microcomputer-based adaptive testing system0 aDesign and implementation of a microcomputerbased adaptive testi a399-4060 v131 aVale, C D uhttp://iacat.org/content/design-and-implementation-microcomputer-based-adaptive-testing-system00568nam a2200097 4500008004100000245011100041210006900152260012300221100001400344856011200358 1980 eng d00aDevelopment and evaluation of an adaptive testing strategy for use in multidimensional interest assessment0 aDevelopment and evaluation of an adaptive testing strategy for u aUnpublished doctoral dissertation, University of Minnesota. Dissertation Abstracts International, 42(11-B), 4248-42491 aVale, C D uhttp://iacat.org/content/development-and-evaluation-adaptive-testing-strategy-use-multidimensional-interest00496nas a2200097 4500008004100000245002600041210002500067260024000092100001600332856005000348 1980 eng d00aDiscussion: Session 10 aDiscussion Session 1 aD. J. Weiss (Ed.), Proceedings of the 1979 Computerized Adaptive Testing Conference (pp. 51-55). Minneapolis MN: University of Minnesota, Department of Psychology, Psychometric Methods Program, Computerized Adaptive Testing Laboratory.1 aWaters, B K uhttp://iacat.org/content/discussion-session-100516nas a2200097 4500008004100000245002600041210002500067260026000092100001600352856005000368 1980 eng d00aDiscussion: Session 30 aDiscussion Session 3 aD. J. Weiss (Ed.), Proceedings of the 1979 Item Response Theory and Computerized Adaptive Testing Conference (pp. 140-143). Minneapolis: University of Minnesota, Department of Psychology, Psychometric Methods Program, Computerized Adaptive Testing Laboratory.1 aNovick, M R uhttp://iacat.org/content/discussion-session-300601nas a2200109 4500008004100000245014400041210006900185260008500254100001600339700001700355856011900372 1979 eng d00aThe danger of relying solely on diagnostic adaptive testing when prior and subsequent instructional methods are different (CERL Report E-5)0 adanger of relying solely on diagnostic adaptive testing when pri aUrbana IL: University of Illinois, Computer-Based Education Research Laboratory.1 aTatsuoka, K1 aBirenbaum, M uhttp://iacat.org/content/danger-relying-solely-diagnostic-adaptive-testing-when-prior-and-subsequent-instructional00356nas a2200109 4500008004100000245005000041210005000091300001200141490000600153100001800159856006900177 1977 eng d00aDescription of components in tailored testing0 aDescription of components in tailored testing a153-1570 v91 aPatience, W M uhttp://iacat.org/content/description-components-tailored-testing00379nas a2200097 4500008004100000245001500041210001500056260015500071100001300226856004200239 1976 eng d00aDiscussion0 aDiscussion aC. K. Clark (Ed.), Proceedings of the First Conference on Computerized Adaptive Testing (pp. 113-117). Washington DC: U.S. Government Printing Office.1 aLord, FM uhttp://iacat.org/content/discussion-200384nas a2200097 4500008004100000245001500041210001500056260015900071100001400230856004200244 1976 eng d00aDiscussion0 aDiscussion aC. K. Clark (Ed.), Proceedings of the First Conference on Computerized Adaptive Testing (pp. 118-119). Washington DC: U.S. 
Government Printing Office.1 aGreen, BF uhttp://iacat.org/content/discussion-000440nas a2200097 4500008004100000245001500041210001500056260021600071100001300287856004200300 1975 eng d00aDiscussion0 aDiscussion aD. J. Weiss (Ed.), Computerized adaptive trait measurement: Problems and Prospects (Research Report 75-5), pp. 44-46. Minneapolis: University of Minnesota, Department of Psychology, Psychometric Methods Program.1 aLinn, RL uhttp://iacat.org/content/discussion-100439nas a2200097 4500008004100000245001500041210001500056260021600071100001400287856004000301 1975 eng d00aDiscussion0 aDiscussion aD. J. Weiss (Ed.), Computerized adaptive trait measurement: Problems and Prospects (Research Report 75-5), pp. 46-49. Minneapolis: University of Minnesota, Department of Psychology, Psychometric Methods Program.1 aBock, R D uhttp://iacat.org/content/discussion00539nas a2200121 4500008004100000245006900041210006700110260010600177100001700283700001400300700001600314856008700330 1974 eng d00aDevelopment of a programmed testing system (Technical Paper 259)0 aDevelopment of a programmed testing system Technical Paper 259 aArlington VA: US Army Research Institute for the Behavioral and Social Sciences. NTIS No. AD A001534)1 aBayroff, A G1 aRoss, R M1 aFischl, M A uhttp://iacat.org/content/development-programmed-testing-system-technical-paper-25900444nas a2200121 4500008004100000245007300041210006900114300001200183490000700195100001300202700001600215856009100231 1969 eng d00aThe development and evaluation of several programmed testing methods0 adevelopment and evaluation of several programmed testing methods a129-1460 v291 aLinn, RL1 aCleary, T A uhttp://iacat.org/content/development-and-evaluation-several-programmed-testing-methods00619nam a2200097 4500008003900000245014200039210006900181260014200250100001800392856011100410 1969 d00aThe development, implementation, and evaluation of a computer-assisted branched test for a program of individually prescribed instruction0 adevelopment implementation and evaluation of a computerassisted aDoctoral dissertation, University of Pittsburgh. Dissertation Abstracts International, 30-09A, 3856. (University Microfilms No. 70-4530).1 aFerguson, R L uhttp://iacat.org/content/development-implementation-and-evaluation-computer-assisted-branched-test-program00533nas a2200121 4500008004100000245009800041210006900139260004600208100001300254700001400267700001600281856011400297 1968 eng d00aThe development and evaluation of several programmed testing methods (Research Bulletin 68-5)0 adevelopment and evaluation of several programmed testing methods aPrinceton NJ: Educational Testing Service1 aLinn, RL1 aRock, D A1 aCleary, T A uhttp://iacat.org/content/development-and-evaluation-several-programmed-testing-methods-research-bulletin-68-5