00347nas a2200097 4500008004100000245005100041210005100092300001000143100002300153856007300176 2010 eng d00aConstrained Adaptive Testing with Shadow Tests0 aConstrained Adaptive Testing with Shadow Tests a31-561 avan der Linden, WJ uhttp://iacat.org/content/constrained-adaptive-testing-shadow-tests-000363nas a2200109 4500008004100000245004600041210004600087300001200133100001800145700002300163856006700186 2010 eng d00aDesigning Item Pools for Adaptive Testing0 aDesigning Item Pools for Adaptive Testing a231-2451 aVeldkamp, B P1 avan der Linden, WJ uhttp://iacat.org/content/designing-item-pools-adaptive-testing00354nam a2200121 4500008004100000245003300041210003300074260002300107300000800130100002300138700001600161856005500177 2010 eng d00aElements of Adaptive Testing0 aElements of Adaptive Testing aNew YorkbSpringer a4371 avan der Linden, WJ1 aGlas, C A W uhttp://iacat.org/content/elements-adaptive-testing00466nas a2200121 4500008004100000245007900041210006900120300001200189100001600201700002300217700001700240856008700257 2010 eng d00aEstimation of the Parameters in an Item-Cloning Model for Adaptive Testing0 aEstimation of the Parameters in an ItemCloning Model for Adaptiv a289-3141 aGlas, C A W1 avan der Linden, WJ1 aGeerlings, H uhttp://iacat.org/content/estimation-parameters-item-cloning-model-adaptive-testing00445nas a2200121 4500008004100000245006200041210006200103260002300165300000900188100002300197700001700220856008600237 2010 eng d00aItem Selection and Ability Estimation in Adaptive Testing0 aItem Selection and Ability Estimation in Adaptive Testing aNew YorkbSpringer a3-301 avan der Linden, WJ1 aPashley, P J uhttp://iacat.org/content/item-selection-and-ability-estimation-adaptive-testing-001557nas a2200157 4500008004100000020004100041245008400082210006900166250001500235300001100250490000700261520099300268100001601261700002301277856009901300 2010 eng d a0007-1102 (Print)0007-1102 (Linking)00aMarginal likelihood inference for a model for item responses and 
response times0 aMarginal likelihood inference for a model for item responses and a2010/01/30 a603-260 v633 a
Marginal maximum-likelihood procedures for parameter estimation and testing the fit of a hierarchical model for speed and accuracy on test items are presented. The model is a composition of two first-level models for dichotomous responses and response times along with multivariate normal models for their item and person parameters. It is shown how the item parameters can easily be estimated using Fisher's identity. To test the fit of the model, Lagrange multiplier tests of the assumptions of subpopulation invariance of the item parameters (i.e., no differential item functioning), the shape of the response functions, and three different types of conditional independence were derived. Simulation studies were used to show the feasibility of the estimation and testing procedures and to estimate the power and Type I error rate of the latter. In addition, the procedures were applied to an empirical data set from a computerized adaptive test of language comprehension.
1 aGlas, C A W1 avan der Linden, WJ uhttp://iacat.org/content/marginal-likelihood-inference-model-item-responses-and-response-times00474nas a2200109 4500008004100000245008900041210007100130300001100201100001400212700002300226856011500249 2010 eng d00aMultidimensional Adaptive Testing with Kullback–Leibler Information Item Selection0 aMultidimensional Adaptive Testing with Kullback–Leibler Informat a77-1021 aMulder, J1 avan der Linden, WJ uhttp://iacat.org/content/multidimensional-adaptive-testing-kullback%E2%80%93leibler-information-item-selection00292nas a2200085 4500008004100000245004000041210004000081100002300121856006200144 2010 eng d00aSequencing an Adaptive Test Battery0 aSequencing an Adaptive Test Battery1 avan der Linden, WJ uhttp://iacat.org/content/sequencing-adaptive-test-battery01899nas a2200169 4500008004100000020004100041245008600082210006900168250001500237260000800252300001200260490000700272520131100279100001401590700002301604856010201627 2009 Eng d a0033-3123 (Print)0033-3123 (Linking)00aMultidimensional Adaptive Testing with Optimal Design Criteria for Item Selection0 aMultidimensional Adaptive Testing with Optimal Design Criteria f a2010/02/02 cJun a273-2960 v743 aSeveral criteria from the optimal design literature are examined for use with item selection in multidimensional adaptive testing. In particular, it is examined what criteria are appropriate for adaptive testing in which all abilities are intentional, some should be considered as a nuisance, or the interest is in the testing of a composite of the abilities. Both the theoretical analyses and the studies of simulated data in this paper suggest that the criteria of A-optimality and D-optimality lead to the most accurate estimates when all abilities are intentional, with the former slightly outperforming the latter. The criterion of E-optimality showed occasional erratic behavior for this case of adaptive testing, and its use is not recommended. 
If some of the abilities are nuisances, application of the criterion of A(s)-optimality (or D(s)-optimality), which focuses on the subset of intentional abilities, is recommended. For the measurement of a linear combination of abilities, the criterion of c-optimality yielded the best results. The preferences of each of these criteria for items with specific patterns of parameter values were also assessed. It was found that the criteria differed mainly in their preferences of items with different patterns of values for their discrimination parameters.1 aMulder, J1 avan der Linden, WJ uhttp://iacat.org/content/multidimensional-adaptive-testing-optimal-design-criteria-item-selection00353nas a2200109 4500008004100000245004500041210004500086300001100131490001100142100002300153856006700176 2008 eng d00aAdaptive models of psychological testing0 aAdaptive models of psychological testing a3–110 v216(1)1 avan der Linden, WJ uhttp://iacat.org/content/adaptive-models-psychological-testing00347nas a2200109 4500008003900000245004500039210004500084300000800129490000800137100002300145856006900168 2008 d00aAdaptive Models of Psychological Testing0 aAdaptive Models of Psychological Testing a1-20 v2161 avan der Linden, WJ uhttp://iacat.org/content/adaptive-models-psychological-testing-001289nas a2200133 4500008004100000245005700041210005700098300000900155490000800164520084700172653003401019100002301053856007901076 2008 eng d00aSome new developments in adaptive testing technology0 aSome new developments in adaptive testing technology a3-110 v2163 aIn an ironic twist of history, modern psychological testing has returned to an adaptive format quite common when testing was not yet standardized. 
Important stimuli to the renewed interest in adaptive testing have been the development of item-response theory in psychometrics, which models the responses on test items using separate parameters for the items and test takers, and the use of computers in test administration, which enables us to estimate the parameter for a test taker and select the items in real time. This article reviews a selection from the latest developments in the technology of adaptive testing, such as constrained adaptive item selection, adaptive testing using rule-based item generation, multidimensional adaptive testing, adaptive use of test batteries, and the use of response times in adaptive testing.
10acomputerized adaptive testing1 avan der Linden, WJ uhttp://iacat.org/content/some-new-developments-adaptive-testing-technology00402nas a2200109 4500008004100000245006400041210006400105300001100169490000700180100002300187856008200210 2008 eng d00aUsing response times for item selection in adaptive testing0 aUsing response times for item selection in adaptive testing a5–200 v331 avan der Linden, WJ uhttp://iacat.org/content/using-response-times-item-selection-adaptive-testing00514nas a2200097 4500008004100000245008600041210006900127260009700196100002300293856010000316 2007 eng d00aThe shadow-test approach: A universal framework for implementing adaptive testing0 ashadowtest approach A universal framework for implementing adapt aD. J. Weiss (Ed.), Proceedings of the 2007 GMAC Conference on Computerized Adaptive Testing.1 avan der Linden, WJ uhttp://iacat.org/content/shadow-test-approach-universal-framework-implementing-adaptive-testing00470nas a2200109 4500008004100000245004400041210004400085260012600129100002300255700001600278856006600294 2007 eng d00aStatistical aspects of adaptive testing0 aStatistical aspects of adaptive testing aC. R. Rao and S. Sinharay (Eds.), Handbook of statistics (Vol. 27: Psychometrics) (pp. 801838). 
Amsterdam: North-Holland.1 avan der Linden, WJ1 aGlas, C A W uhttp://iacat.org/content/statistical-aspects-adaptive-testing01959nas a2200265 4500008004100000020002200041245008200063210006900145260002600214300001000240490000700250520112900257653001501386653003401401653001401435653001701449653002401466653001501490653002201505653001501527100002301542700001301565700001801578856009701596 2006 eng d a1076-9986 (Print)00aAssembling a computerized adaptive testing item pool as a set of linear tests0 aAssembling a computerized adaptive testing item pool as a set of bSage Publications: US a81-990 v313 aTest-item writing efforts typically results in item pools with an undesirable correlational structure between the content attributes of the items and their statistical information. If such pools are used in computerized adaptive testing (CAT), the algorithm may be forced to select items with less than optimal information, that violate the content constraints, and/or have unfavorable exposure rates. Although at first sight somewhat counterintuitive, it is shown that if the CAT pool is assembled as a set of linear test forms, undesirable correlations can be broken down effectively. It is proposed to assemble such pools using a mixed integer programming model with constraints that guarantee that each test meets all content specifications and an objective function that requires them to have maximal information at a well-chosen set of ability values. An empirical example with a previous master pool from the Law School Admission Test (LSAT) yielded a CAT with nearly uniform bias and mean-squared error functions for the ability estimator and item-exposure rates that satisfied the target for all items in the pool. 
10aAlgorithms10acomputerized adaptive testing10aitem pool10alinear tests10amathematical models10astatistics10aTest Construction10aTest Items1 avan der Linden, WJ1 aAriel, A1 aVeldkamp, B P uhttp://iacat.org/content/assembling-computerized-adaptive-testing-item-pool-set-linear-tests01580nas a2200205 4500008004100000020002200041245005000063210005000113260002600163300001200189490000700201520094200208653003401150653002801184653001901212653002001231653003301251100002301284856006701307 2006 eng d a0146-6216 (Print)00aEquating scores from adaptive to linear tests0 aEquating scores from adaptive to linear tests bSage Publications: US a493-5080 v303 aTwo local methods for observed-score equating are applied to the problem of equating an adaptive test to a linear test. In an empirical study, the methods were evaluated against a method based on the test characteristic function (TCF) of the linear test and traditional equipercentile equating applied to the ability estimates on the adaptive test for a population of test takers. The two local methods were generally best. Surprisingly, the TCF method performed slightly worse than the equipercentile method. Both methods showed strong bias and uniformly large inaccuracy, but the TCF method suffered from extra error due to the lower asymptote of the test characteristic function. It is argued that the worse performances of the two methods are a consequence of the fact that they use a single equating transformation for an entire population of test takers and therefore have to compromise between the individual score distributions. 
10acomputerized adaptive testing10aequipercentile equating10alocal equating10ascore reporting10atest characteristic function1 avan der Linden, WJ uhttp://iacat.org/content/equating-scores-adaptive-linear-tests00449nas a2200109 4500008004100000245008700041210006900128300001200197490000700209100002300216856010000239 2005 eng d00aA comparison of item-selection methods for adaptive tests with content constraints0 acomparison of itemselection methods for adaptive tests with cont a283-3020 v421 avan der Linden, WJ uhttp://iacat.org/content/comparison-item-selection-methods-adaptive-tests-content-constraints-002148nas a2200229 4500008004100000020002200041245008700063210006900150260004100219300001200260490000700272520135500279653002101634653001501655653002401670653002601694653002501720653002101745653003101766100002301797856009801820 2005 eng d a0022-0655 (Print)00aA comparison of item-selection methods for adaptive tests with content constraints0 acomparison of itemselection methods for adaptive tests with cont bBlackwell Publishing: United Kingdom a283-3020 v423 aIn test assembly, a fundamental difference exists between algorithms that select a test sequentially or simultaneously. Sequential assembly allows us to optimize an objective function at the examinee's ability estimate, such as the test information function in computerized adaptive testing. But it leads to the non-trivial problem of how to realize a set of content constraints on the test—a problem more naturally solved by a simultaneous item-selection method. Three main item-selection methods in adaptive testing offer solutions to this dilemma. The spiraling method moves item selection across categories of items in the pool proportionally to the numbers needed from them. Item selection by the weighted-deviations method (WDM) and the shadow test approach (STA) is based on projections of the future consequences of selecting an item. 
These two methods differ in that the former calculates a projection of a weighted sum of the attributes of the eventual test and the latter a projection of the test itself. The pros and cons of these methods are analyzed. An empirical comparison between the WDM and STA was conducted for an adaptive version of the Law School Admission Test (LSAT), which showed equally good item-exposure rates but violations of some of the constraints and larger bias and inaccuracy of the ability estimator for the WDM.10aAdaptive Testing10aAlgorithms10acontent constraints10aitem selection method10ashadow test approach10aspiraling method10aweighted deviations method1 avan der Linden, WJ uhttp://iacat.org/content/comparison-item-selection-methods-adaptive-tests-content-constraints00511nas a2200109 4500008004100000245008200041210006900123260006700192100002300259700001800282856010100300 2005 eng d00aConstraining item exposure in computerized adaptive testing with shadow tests0 aConstraining item exposure in computerized adaptive testing with aLaw School Admission Council Computerized Testing Report 02-031 avan der Linden, WJ1 aVeldkamp, B P uhttp://iacat.org/content/constraining-item-exposure-computerized-adaptive-testing-shadow-tests-100549nas a2200109 4500008004100000245010300041210006900144260006800213100002300281700001900304856011600323 2005 eng d00aImplementing content constraints in alpha-stratified adaptive testing using a shadow test approach0 aImplementing content constraints in alphastratified adaptive tes aLaw School Admission Council, Computerized Testing Report 01-091 avan der Linden, WJ1 aChang, Hua-Hua uhttp://iacat.org/content/implementing-content-constraints-alpha-stratified-adaptive-testing-using-shadow-test-001608nas a2200217 4500008004100000020002200041245008200063210006900145260004300214300001200257490000700269520084600276653003401122653002601156653003501182653001601217653001701233100002301250700001801273856009901291 2004 eng d a1076-9986 (Print)00aConstraining item 
exposure in computerized adaptive testing with shadow tests0 aConstraining item exposure in computerized adaptive testing with bAmerican Educational Research Assn: US a273-2910 v293 aItem-exposure control in computerized adaptive testing is implemented by imposing item-ineligibility constraints on the assembly process of the shadow tests. The method resembles Sympson and Hetter’s (1985) method of item-exposure control in that the decisions to impose the constraints are probabilistic. The method does not, however, require time-consuming simulation studies to set values for control parameters before the operational use of the test. Instead, it can set the probabilities of item ineligibility adaptively during the test using the actual item-exposure rates. An empirical study using an item pool from the Law School Admission Test showed that application of the method yielded perfect control of the item-exposure rates and had negligible impact on the bias and mean-squared error functions of the ability estimator. 10acomputerized adaptive testing10aitem exposure control10aitem ineligibility constraints10aProbability10ashadow tests1 avan der Linden, WJ1 aVeldkamp, B P uhttp://iacat.org/content/constraining-item-exposure-computerized-adaptive-testing-shadow-tests01903nas a2200217 4500008004100000020002200041245007000063210006900133260004100202300001200243490000700255520117100262653003201433653003301465653001801498653002401516100001301540700001801553700002301571856009101594 2004 eng d a0022-0655 (Print)00aConstructing rotating item pools for constrained adaptive testing0 aConstructing rotating item pools for constrained adaptive testin bBlackwell Publishing: United Kingdom a345-3590 v413 aPreventing items in adaptive testing from being over- or underexposed is one of the main problems in computerized adaptive testing. 
Though the problem of overexposed items can be solved using probabilistic item-exposure control methods, such methods are unable to deal with the problem of underexposed items. Using a system of rotating item pools, on the other hand, is a method that potentially solves both problems. In this method, a master pool is divided into (possibly overlapping) smaller item pools, which are required to have similar distributions of content and statistical attributes. These pools are rotated among the testing sites to realize desirable exposure rates for the items. A test assembly model, motivated by Gulliksen's matched random subtests method, was explored to help solve the problem of dividing a master pool into a set of smaller pools. Different methods to solve the model are proposed. An item pool from the Law School Admission Test was used to evaluate the performances of computerized adaptive tests from systems of rotating item pools constructed using these methods. (PsycINFO Database Record (c) 2007 APA, all rights reserved)10acomputerized adaptive tests10aconstrained adaptive testing10aitem exposure10arotating item pools1 aAriel, A1 aVeldkamp, B P1 avan der Linden, WJ uhttp://iacat.org/content/constructing-rotating-item-pools-constrained-adaptive-testing00452nas a2200109 4500008004100000245007900041210006900120260001700189100002300206700001800229856009500247 2004 eng d00aA sequential Bayesian procedure for item calibration in multistage testing0 asequential Bayesian procedure for item calibration in multistage aSan Diego CA1 avan der Linden, WJ1 aMead, Alan, D uhttp://iacat.org/content/sequential-bayesian-procedure-item-calibration-multistage-testing00470nas a2200121 4500008004100000245008000041210006900121300001200190490000700202100002300209700001900232856009700251 2003 eng d00aAlpha-stratified adaptive testing with large numbers of content constraints0 aAlphastratified adaptive testing with large numbers of content c a107-1200 v271 avan der Linden, WJ1 aChang, Hua-Hua 
uhttp://iacat.org/content/alpha-stratified-adaptive-testing-large-numbers-content-constraints00563nas a2200097 4500008004100000245008000041210006900121260015300190100002300343856009900366 2003 eng d00aBayesian checks on outlying response times in computerized adaptive testing0 aBayesian checks on outlying response times in computerized adapt aH. Yanai, A. Okada, K. Shigemasu, Y. Kano, Y. and J. J. Meulman, (Eds.), New developments in psychometrics (pp. 215-222). New York: Springer-Verlag.1 avan der Linden, WJ uhttp://iacat.org/content/bayesian-checks-outlying-response-times-computerized-adaptive-testing01587nas a2200145 4500008004100000245005200041210005200093300001200145490000700157520113200164653003401296100001601330700002301346856007201369 2003 eng d00aComputerized adaptive testing with item cloning0 aComputerized adaptive testing with item cloning a247-2610 v273 a(from the journal abstract) To increase the number of items available for adaptive testing and reduce the cost of item writing, the use of techniques of item cloning has been proposed. An important consequence of item cloning is possible variability between the item parameters. To deal with this variability, a multilevel item response (IRT) model is presented which allows for differences between the distributions of item parameters of families of item clones. A marginal maximum likelihood and a Bayesian procedure for estimating the hyperparameters are presented. In addition, an item-selection procedure for computerized adaptive testing with item cloning is presented which has the following two stages: First, a family of item clones is selected to be optimal at the estimate of the person parameter. Second, an item is randomly selected from the family for administration. Results from simulation studies based on an item pool from the Law School Admission Test (LSAT) illustrate the accuracy of these item pool calibration and adaptive testing procedures. 
(PsycINFO Database Record (c) 2003 APA, all rights reserved).10acomputerized adaptive testing1 aGlas, C A W1 avan der Linden, WJ uhttp://iacat.org/content/computerized-adaptive-testing-item-cloning00459nas a2200109 4500008004100000245008200041210006900123260001500192100002300207700001800230856010100248 2003 eng d00aConstraining item exposure in computerized adaptive testing with shadow tests0 aConstraining item exposure in computerized adaptive testing with aChicago IL1 avan der Linden, WJ1 aVeldkamp, B P uhttp://iacat.org/content/constraining-item-exposure-computerized-adaptive-testing-shadow-tests-000462nas a2200121 4500008004100000245007000041210006900111260001500180100001300195700001600208700002300224856009300247 2003 eng d00aConstructing rotating item pools for constrained adaptive testing0 aConstructing rotating item pools for constrained adaptive testin aChicago IL1 aAriel, A1 aVeldkamp, B1 avan der Linden, WJ uhttp://iacat.org/content/constructing-rotating-item-pools-constrained-adaptive-testing-000439nas a2200097 4500008004100000245008400041210006900125100002300194700001800217856010600235 2003 eng d00aControlling item exposure and item eligibility in computerized adaptive testing0 aControlling item exposure and item eligibility in computerized a1 avan der Linden, WJ1 aVeldkamp, B P uhttp://iacat.org/content/controlling-item-exposure-and-item-eligibility-computerized-adaptive-testing00500nas a2200109 4500008004100000245010400041210006900145260001500214100001800229700002300247856012000270 2003 eng d00aImplementing an alternative to Sympson-Hetter item-exposure control in constrained adaptive testing0 aImplementing an alternative to SympsonHetter itemexposure contro aChicago IL1 aVeldkamp, B P1 avan der Linden, WJ uhttp://iacat.org/content/implementing-alternative-sympson-hetter-item-exposure-control-constrained-adaptive-testing00510nas a2200121 4500008004100000245010300041210006900144300001200213490000700225100002300232700001900255856011400274 2003 eng 
d00aImplementing content constraints in alpha-stratified adaptive testing using a shadow test approach0 aImplementing content constraints in alphastratified adaptive tes a107-1200 v271 avan der Linden, WJ1 aChang, Hua-Hua uhttp://iacat.org/content/implementing-content-constraints-alpha-stratified-adaptive-testing-using-shadow-test01432nas a2200205 4500008004100000245008800041210007000129300001200199490000700211520067700218653002100895653003000916653002400946653002500970653002600995653005201021100001901073700002301092856011101115 2003 eng d00aOptimal stratification of item pools in α-stratified computerized adaptive testing0 aOptimal stratification of item pools in αstratified computerized a262-2740 v273 aA method based on 0-1 linear programming (LP) is presented to stratify an item pool optimally for use in α-stratified adaptive testing. Because the 0-1 LP model belongs to the subclass of models with a network flow structure, efficient solutions are possible. The method is applied to a previous item pool from the computerized adaptive testing (CAT) version of the Graduate Record Exams (GRE) Quantitative Test. The results indicate that the new method performs well in practical situations. It improves item exposure control, reduces the mean squared error in the θ estimates, and increases test reliability. 
(PsycINFO Database Record (c) 2005 APA ) (journal abstract)10aAdaptive Testing10aComputer Assisted Testing10aItem Content (Test)10aItem Response Theory10aMathematical Modeling10aTest Construction computerized adaptive testing1 aChang, Hua-Hua1 avan der Linden, WJ uhttp://iacat.org/content/optimal-stratification-item-pools-%CE%B1-stratified-computerized-adaptive-testing00461nas a2200109 4500008004100000245007700041210006900118260003000187100002300217700001800240856009300258 2003 eng d00aA sequential Bayes procedure for item calibration in multi-stage testing0 asequential Bayes procedure for item calibration in multistage te aManuscript in preparation1 avan der Linden, WJ uhttp://iacat.org/content/sequential-bayes-procedure-item-calibration-multi-stage-testing01516nas a2200157 4500008004100000245009500041210006900136300001200205490000700217520090100224653002101125653003001146653004501176100002301221856011401244 2003 eng d00aSome alternatives to Sympson-Hetter item-exposure control in computerized adaptive testing0 aSome alternatives to SympsonHetter itemexposure control in compu a249-2650 v283 aThe Hetter and Sympson (1997; 1985) method is a probabilistic item-exposure control method in computerized adaptive testing. Setting its control parameters to admissible values requires an iterative process of computer simulations that has been found to be time consuming, particularly if the parameters have to be set conditional on a realistic set of values for the examinees’ ability parameter. Formal properties of the method are identified that help us explain why this iterative process can be slow and does not guarantee admissibility. In addition, some alternatives to the SH method are introduced. The behavior of these alternatives was estimated for an adaptive test from an item pool from the Law School Admission Test (LSAT). 
Two of the alternatives showed attractive behavior and converged smoothly to admissibility for all items in a relatively small number of iteration steps. 10aAdaptive Testing10aComputer Assisted Testing10aTest Items computerized adaptive testing1 avan der Linden, WJ uhttp://iacat.org/content/some-alternatives-sympson-hetter-item-exposure-control-computerized-adaptive-testing01512nas a2200229 4500008004100000245008700041210006900128300001200197490000700209520075300216653002100969653001300990653003001003653003401033653001101067653001501078653001501093653001801108100002301126700002701149856010601176 2003 eng d00aUsing response times to detect aberrant responses in computerized adaptive testing0 aUsing response times to detect aberrant responses in computerize a251-2650 v683 aA lognormal model for response times is used to check response times for aberrances in examinee behavior on computerized adaptive tests. Both classical procedures and Bayesian posterior predictive checks are presented. For a fixed examinee, responses and response times are independent; checks based on response times offer thus information independent of the results of checks on response patterns. Empirical examples of the use of classical and Bayesian checks for detecting two different types of aberrances in response times are presented. The detection rates for the Bayesian checks outperformed those for the classical checks, but at the cost of higher false-alarm rates. 
A guideline for the choice between the two types of checks is offered.10aAdaptive Testing10aBehavior10aComputer Assisted Testing10acomputerized adaptive testing10aModels10aperson Fit10aPrediction10aReaction Time1 avan der Linden, WJ1 aKrimpen-Stoop, E M L A uhttp://iacat.org/content/using-response-times-detect-aberrant-responses-computerized-adaptive-testing00534nas a2200109 4500008004100000245011000041210006900151260004200220100002300262700001800285856012100303 2002 eng d00aConstraining item exposure in computerized adaptive testing with shadow tests (Research Report No. 02-06)0 aConstraining item exposure in computerized adaptive testing with aUniversity of Twente, The Netherlands1 avan der Linden, WJ1 aVeldkamp, B P uhttp://iacat.org/content/constraining-item-exposure-computerized-adaptive-testing-shadow-tests-research-report-no-0201689nas a2200277 4500008004100000020001000041245006500051210006400116260009700180300001100277520074500288653002101033653002201054653002501076653002801101653002501129653001601154653001801170653005501188653001501243653001201258100001801270700002301288700001301311856008701324 2002 eng d a02-0900aMathematical-programming approaches to test item pool design0 aMathematicalprogramming approaches to test item pool design aTwente, The NetherlandsbUniversity of Twente, Faculty of Educational Science and Technology a93-1083 a(From the chapter) This paper presents an approach to item pool design that has the potential to improve on the quality of current item pools in educational and psychological testing and hence to increase both measurement precision and validity. The approach consists of the application of mathematical programming techniques to calculate optimal blueprints for item pools. These blueprints can be used to guide the item-writing process. Three different types of design problems are discussed, namely for item pools for linear tests, item pools computerized adaptive testing (CAT), and systems of rotating item pools for CAT. 
The paper concludes with an empirical example of the problem of designing a system of rotating item pools for CAT.10aAdaptive Testing10aComputer Assisted10aComputer Programming10aEducational Measurement10aItem Response Theory10aMathematics10aPsychometrics10aStatistical Rotation computerized adaptive testing10aTest Items10aTesting1 aVeldkamp, B P1 avan der Linden, WJ1 aAriel, A uhttp://iacat.org/content/mathematical-programming-approaches-test-item-pool-design00495nas a2200097 4500008004100000245010600041210006900147260004100216100002300257856011700280 2002 eng d00aModifications of the Sympson-Hetter method for item-exposure control in computerized adaptive testing0 aModifications of the SympsonHetter method for itemexposure contr aManuscript submitted for publication1 avan der Linden, WJ uhttp://iacat.org/content/modifications-sympson-hetter-method-item-exposure-control-computerized-adaptive-testing01155nas a2200133 4500008004100000245007100041210006900112300001200181490000700193520069200200100001800892700002300910856008800933 2002 eng d00aMultidimensional adaptive testing with constraints on test content0 aMultidimensional adaptive testing with constraints on test conte a575-5880 v673 aThe case of adaptive testing under a multidimensional response model with large numbers of constraints on the content of the test is addressed. The items in the test are selected using a shadow test approach. The 0–1 linear programming model that assembles the shadow tests maximizes posterior expected Kullback-Leibler information in the test. The procedure is illustrated for five different cases of multidimensionality. 
These cases differ in (a) the numbers of ability dimensions that are intentional or should be considered as “nuisance dimensions” and (b) whether the test should or should not display a simple structure with respect to the intentional ability dimensions.1 aVeldkamp, B P1 avan der Linden, WJ uhttp://iacat.org/content/multidimensional-adaptive-testing-constraints-test-content01201nas a2200121 4500008004100000245007000041210006900111300001200180490000700192520076700199100002300966856009000989 2001 eng d00aComputerized adaptive testing with equated number-correct scoring0 aComputerized adaptive testing with equated numbercorrect scoring a343-3550 v253 aA constrained computerized adaptive testing (CAT) algorithm is presented that can be used to equate CAT number-correct (NC) scores to a reference test. As a result, the CAT NC scores also are equated across administrations. The constraints are derived from van der Linden & Luecht’s (1998) set of conditions on item response functions that guarantees identical observed NC score distributions on two test forms. An item bank from the Law School Admission Test was used to compare the results of the algorithm with those for equipercentile observed-score equating, as well as the prediction of NC scores on a reference test using its test response function. The effects of the constraints on the statistical properties of the θ estimator in CAT were examined. 
1 avan der Linden, WJ uhttp://iacat.org/content/computerized-adaptive-testing-equated-number-correct-scoring00585nas a2200109 4500008004100000245012400041210006900165260008200234100001900316700002300335856011700358 2001 eng d00aImplementing content constraints in a-stratified adaptive testing using a shadow test approach (Research Report 01-001)0 aImplementing content constraints in astratified adaptive testing aUniversity of Twente, Department of Educational Measurement and Data Analysis1 aChang, Hua-Hua1 avan der Linden, WJ uhttp://iacat.org/content/implementing-content-constraints-stratified-adaptive-testing-using-shadow-test-approach00377nas a2200109 4500008004100000245005100041210005100092260001500143100001600158700002300174856007000197 2001 eng d00aModeling variability in item parameters in CAT0 aModeling variability in item parameters in CAT aSeattle WA1 aGlas, C A W1 avan der Linden, WJ uhttp://iacat.org/content/modeling-variability-item-parameters-cat00476nas a2200109 4500008004100000245008600041210006900127260001500196100002300211700002700234856010500261 2001 eng d00aUsing response times to detect aberrant behavior in computerized adaptive testing0 aUsing response times to detect aberrant behavior in computerized aSeattle WA1 avan der Linden, WJ1 aKrimpen-Stoop, E M L A uhttp://iacat.org/content/using-response-times-detect-aberrant-behavior-computerized-adaptive-testing01256nas a2200145 4500008004100000245006500041210006500106300001000171490000700181520076500188653003400953100002300987700001601010856008401026 2000 eng d00aCapitalization on item calibration error in adaptive testing0 aCapitalization on item calibration error in adaptive testing a35-530 v133 a(from the journal abstract) In adaptive testing, item selection is sequentially optimized during the test. Because the optimization takes place over a pool of items calibrated with estimation error, capitalization on chance is likely to occur. 
How serious the consequences of this phenomenon are depends not only on the distribution of the estimation errors in the pool or the conditional ratio of the test length to the pool size given ability, but may also depend on the structure of the item selection criterion used. A simulation study demonstrated a dramatic impact of capitalization on estimation errors on ability estimation. Four different strategies to minimize the likelihood of capitalization on error in computerized adaptive testing are discussed.10acomputerized adaptive testing1 avan der Linden, WJ1 aGlas, C A W uhttp://iacat.org/content/capitalization-item-calibration-error-adaptive-testing00437nam a2200109 4500008004100000245005500041210005400096260005900150100002300209700001600232856007900248 2000 eng d00aComputerized adaptive testing: Theory and practice0 aComputerized adaptive testing Theory and practice aDordrecht, The NetherlandsbKluwer Academic Publishers1 avan der Linden, WJ1 aGlas, C A W uhttp://iacat.org/content/computerized-adaptive-testing-theory-and-practice00469nas a2200097 4500008004100000245005100041210005100092260013400143100002300277856007100300 2000 eng d00aConstrained adaptive testing with shadow tests0 aConstrained adaptive testing with shadow tests aW. J. van der Linden and C. A. W. Glas (eds.), Computerized adaptive testing: Theory and practice (pp.27-52). Norwell MA: Kluwer.1 avan der Linden, WJ uhttp://iacat.org/content/constrained-adaptive-testing-shadow-tests00544nas a2200109 4500008004100000245006700041210006600108260013200174100002300306700001600329856008900345 2000 eng d00aCross-validating item parameter estimation in adaptive testing0 aCrossvalidating item parameter estimation in adaptive testing aA. Boorsma, M. A. J. van Duijn, and T. A. B. Snijders (Eds.) (pp. 205-219), Essays on item response theory. 
New York: Springer.1 avan der Linden, WJ1 aGlas, C A W uhttp://iacat.org/content/cross-validating-item-parameter-estimation-adaptive-testing00475nas a2200121 4500008004100000245005900041210005900100260005900159300001400218100001800232700002300250856008000273 2000 eng d00aDesigning item pools for computerized adaptive testing0 aDesigning item pools for computerized adaptive testing aDendrecht, The NetherlandsbKluwer Academic Publishers a149–1621 aVeldkamp, B P1 avan der Linden, WJ uhttp://iacat.org/content/designing-item-pools-computerized-adaptive-testing01168nas a2200205 4500008004100000245005600041210005300097300001200150490000700162520055300169653002200722653002500744653002500769653002200794653001500816100002300831700001800854700001500872856007500887 2000 eng d00aAn integer programming approach to item bank design0 ainteger programming approach to item bank design a139-1500 v243 aAn integer programming approach to item bank design is presented that can be used to calculate an optimal blueprint for an item bank, in order to support an existing testing program. The results are optimal in that they minimize the effort involved in producing the items as revealed by current item writing patterns. Also presented is an adaptation of the models, which can be used as a set of monitoring tools in item bank management. The approach is demonstrated empirically for an item bank that was designed for the Law School Admission Test. 
10aAptitude Measures10aItem Analysis (Test)10aItem Response Theory10aTest Construction10aTest Items1 avan der Linden, WJ1 aVeldkamp, B P1 aReese, L M uhttp://iacat.org/content/integer-programming-approach-item-bank-design00481nas a2200121 4500008004100000245006200041210006200103260005900165300001100224100002300235700001700258856008400275 2000 eng d00aItem selection and ability estimation in adaptive testing0 aItem selection and ability estimation in adaptive testing aDordrecht, The NetherlandsbKluwer Academic Publishers a1–251 avan der Linden, WJ1 aPashley, P J uhttp://iacat.org/content/item-selection-and-ability-estimation-adaptive-testing00610nas a2200109 4500008004100000245009500041210006900136260014400205100001800349700002300367856011000390 2000 eng d00aMultidimensional adaptive testing with constraints on test content (Research Report 00-11)0 aMultidimensional adaptive testing with constraints on test conte aEnschede, The Netherlands: University of Twente, Faculty of Educational Science and Technology, Department of Measurement and Data Analysis1 aVeldkamp, B P1 avan der Linden, WJ uhttp://iacat.org/content/multidimensional-adaptive-testing-constraints-test-content-research-report-00-1100606nas a2200097 4500008004100000245011100041210006900152260014400221100002300365856012000388 2000 eng d00aOptimal stratification of item pools in a-stratified computerized adaptive testing (Research Report 00-07)0 aOptimal stratification of item pools in astratified computerized aEnschede, The Netherlands: University of Twente, Faculty of Educational Science and Technology, Department of Measurement and Data Analysis1 avan der Linden, WJ uhttp://iacat.org/content/optimal-stratification-item-pools-stratified-computerized-adaptive-testing-research-report00645nas a2200109 4500008004100000245011000041210006900151260014400220100002300364700002700387856012100414 2000 eng d00aUsing response times to detect aberrant behavior in computerized adaptive testing (Research Report 00-09)0 
aUsing response times to detect aberrant behavior in computerized aEnschede, The Netherlands: University of Twente, Faculty of Educational Science and Technology, Department of Measurement and Data Analysis1 avan der Linden, WJ1 aKrimpen-Stoop, E M L A uhttp://iacat.org/content/using-response-times-detect-aberrant-behavior-computerized-adaptive-testing-research-report00555nas a2200097 4500008004100000245008100041210006900122260014400191100002300335856009900358 1999 eng d00aAdaptive testing with equated number-correct scoring (Research Report 99-02)0 aAdaptive testing with equated numbercorrect scoring Research Rep aEnschede, The Netherlands: University of Twente, Faculty of Educational Science and Technology, Department of Measurement and Data Analysis1 avan der Linden, WJ uhttp://iacat.org/content/adaptive-testing-equated-number-correct-scoring-research-report-99-0200591nas a2200109 4500008004100000245008400041210006900125260014400194100001800338700002300356856010200379 1999 eng d00aDesigning item pools for computerized adaptive testing (Research Report 99-03 )0 aDesigning item pools for computerized adaptive testing Research aEnschede, The Netherlands: University of Twente, Faculty of Educational Science and Technology, Department of Measurement and Data Analysis1 aVeldkamp, B P1 avan der Linden, WJ uhttp://iacat.org/content/designing-item-pools-computerized-adaptive-testing-research-report-99-0300419nas a2200109 4500008004100000245007200041210006900113300001000182490000700192100002300199856008700222 1999 eng d00aEmpirical initialization of the trait estimator in adaptive testing0 aEmpirical initialization of the trait estimator in adaptive test a21-290 v231 avan der Linden, WJ uhttp://iacat.org/content/empirical-initialization-trait-estimator-adaptive-testing00596nas a2200097 4500008004100000245006700041210006500108260021200173100002300385856009000408 1999 eng d00aHet ontwerpen van adaptieve examens [Designing adaptive tests]0 aHet ontwerpen van adaptieve 
examens Designing adaptive tests aJ. M Pieters, Tj. Plomp, and L.E. Odenthal (Eds.), Twintig jaar Toegepaste Onderwijskunde: Een kaleidoscopisch overzicht van Twents onderwijskundig onderzoek (pp. 249-267). Enschede: Twente University Press.1 avan der Linden, WJ uhttp://iacat.org/content/het-ontwerpen-van-adaptieve-examens-designing-adaptive-tests01086nas a2200133 4500008004100000245007800041210006900119300001200188490000700200520059200207653003400799100002300833856009600856 1999 eng d00aMultidimensional adaptive testing with a minimum error-variance criterion0 aMultidimensional adaptive testing with a minimum errorvariance c a398-4120 v243 aAdaptive testing under a multidimensional logistic response model is addressed. An algorithm is proposed that minimizes the (asymptotic) variance of the maximum-likelihood estimator of a linear combination of abilities of interest. The criterion results in a closed-form expression that is easy to evaluate. In addition, it is shown how the algorithm can be modified if the interest is in a test with a "simple ability structure". The statistical properties of the adaptive ML estimator are demonstrated for a two-dimensional item pool with several linear combinations of the abilities. 10acomputerized adaptive testing1 avan der Linden, WJ uhttp://iacat.org/content/multidimensional-adaptive-testing-minimum-error-variance-criterion01186nas a2200157 4500008004100000245010900041210006900150300001200219490000700231520058300238653003400821100002300855700001600878700001800894856011600912 1999 eng d00aUsing response-time constraints to control for differential speededness in computerized adaptive testing0 aUsing responsetime constraints to control for differential speed a195-2100 v233 aAn item-selection algorithm is proposed for neutralizing the differential effects of time limits on computerized adaptive test scores. 
The method is based on a statistical model for distributions of examinees’ response times on items in a bank that is updated each time an item is administered. Predictions from the model are used as constraints in a 0-1 linear programming model for constrained adaptive testing that maximizes the accuracy of the trait estimator. The method is demonstrated empirically using an item bank from the Armed Services Vocational Aptitude Battery. 10acomputerized adaptive testing1 avan der Linden, WJ1 aScrams, D J1 aSchnipke, D L uhttp://iacat.org/content/using-response-time-constraints-control-differential-speededness-computerized-adaptive00390nas a2200109 4500008004100000245005800041210005800099300001200157490000700169100002300176856008100199 1998 eng d00aBayesian item selection criteria for adaptive testing0 aBayesian item selection criteria for adaptive testing a201-2160 v631 avan der Linden, WJ uhttp://iacat.org/content/bayesian-item-selection-criteria-adaptive-testing-000598nas a2200109 4500008004100000245008900041210006900130260014400199100002300343700001600366856010600382 1998 eng d00aCapitalization on item calibration error in adaptive testing (Research Report 98-07)0 aCapitalization on item calibration error in adaptive testing Res aEnschede, The Netherlands: University of Twente, Faculty of Educational Science and Technology, Department of Measurement and Data Analysis1 avan der Linden, WJ1 aGlas, C A W uhttp://iacat.org/content/capitalization-item-calibration-error-adaptive-testing-research-report-98-0701435nas a2200145 4500008004100000245005300041210005100094300001200145490000700157520098100164653003401145100002301179700001501202856007201217 1998 eng d00aA model for optimal constrained adaptive testing0 amodel for optimal constrained adaptive testing a259-2700 v223 aA model for constrained computerized adaptive testing is proposed in which the information in the test at the trait level (θ) estimate is maximized subject to a number of possible constraints on the
content of the test. At each item-selection step, a full test is assembled to have maximum information at the current θ estimate, fixing the items already administered. Then the item with maximum information is selected. All test assembly is optimal because a linear programming (LP) model is used that automatically updates to allow for the attributes of the items already administered and the new value of the θ estimator. The LP model also guarantees that each adaptive test always meets the entire set of constraints. A simulation study using a bank of 753 items from the Law School Admission Test showed that the θ estimator for adaptive tests of realistic lengths did not suffer any loss of efficiency from the presence of 433 constraints on the item selection process. 10acomputerized adaptive testing1 avan der Linden, WJ1 aReese, L M uhttp://iacat.org/content/model-optimal-constrained-adaptive-testing
control for differential speed aEnschede, The Netherlands: University of Twente, Faculty of Educational Science and Technology, Department of Measurement and Data Analysis1 avan der Linden, WJ1 aScrams, D J1 aSchnipke, D L uhttp://iacat.org/content/using-response-time-constraints-control-differential-speededness-adaptive-testing-research00349nas a2200097 4500008004100000245005100041210005100092260001500143100002300158856007000181 1997 eng d00aDetection of aberrant response patterns in CAT0 aDetection of aberrant response patterns in CAT aChicago IL1 avan der Linden, WJ uhttp://iacat.org/content/detection-aberrant-response-patterns-cat00419nas a2200097 4500008004100000245007800041210006900119260001200188100002300200856009800223 1997 eng d00aMultidimensional adaptive testing with a minimum error-variance criterion0 aMultidimensional adaptive testing with a minimum errorvariance c aChicago1 avan der Linden, WJ uhttp://iacat.org/content/multidimensional-adaptive-testing-minimum-error-variance-criterion-000560nas a2200097 4500008004100000245010200041210006900143260010900212100002300321856011800344 1997 eng d00aMultidimensional adaptive testing with a minimum error-variance criterion (Research Report 97-03)0 aMultidimensional adaptive testing with a minimum errorvariance c aEnschede, The Netherlands: University of Twente, Department of Educational Measurement and Data Analysis1 avan der Linden, WJ uhttp://iacat.org/content/multidimensional-adaptive-testing-minimum-error-variance-criterion-research-report-97-0300388nas a2200109 4500008004100000245005800041210005800099300001200157490000700169100002300176856007900199 1996 eng d00aBayesian item selection criteria for adaptive testing0 aBayesian item selection criteria for adaptive testing a201-2160 v631 avan der Linden, WJ uhttp://iacat.org/content/bayesian-item-selection-criteria-adaptive-testing00499nas a2200097 4500008004100000245008200041210006900123260008500192100002300277856010100300 1996 eng d00aBayesian item 
selection criteria for adaptive testing (Research Report 96-01)0 aBayesian item selection criteria for adaptive testing Research R aTwente, The Netherlands: Department of Educational Measurement and Data Analysis1 avan der Linden, WJ uhttp://iacat.org/content/bayesian-item-selection-criteria-adaptive-testing-research-report-96-0100347nas a2200097 4500008004100000245004800041210004800089260001900137100002300156856007000179 1995 eng d00aBayesian item selection in adaptive testing0 aBayesian item selection in adaptive testing aMinneapolis MN1 avan der Linden, WJ uhttp://iacat.org/content/bayesian-item-selection-adaptive-testing00404nas a2200121 4500008004100000245005300041210005300094300001200147490001000159100002300169700001600192856007400208 1989 eng d00aSome procedures for computerized ability testing0 aSome procedures for computerized ability testing a175-1870 v13(2)1 avan der Linden, WJ1 aZwarts, M A uhttp://iacat.org/content/some-procedures-computerized-ability-testing