Gierl, Mark, & Bulut, Okan (2017). Generating Rationales to Support Formative Feedback in Adaptive Testing. Niigata, Japan: Niigata Seiryo University, 08/2017.
Abstract:
Computer adaptive testing offers many important benefits to support and promote life-long learning. Computers permit testing on demand, thereby allowing students to take the test at any time during instruction; items on computerized tests are scored immediately, thereby providing students with instant feedback; and computerized tests permit continuous administration, thereby allowing students more choice about when they write their exams. But despite these important benefits, the advent of computer adaptive testing has also raised formidable challenges, particularly in the area of item development. Educators must have access to large numbers of diverse, high-quality test items to implement computerized adaptive testing because items are continuously administered to students. Hence, hundreds or even thousands of items are needed to develop the test item banks necessary for computer adaptive testing. Unfortunately, educational test items, as they are currently created, are time-consuming and expensive to develop because each individual item is written initially by a content specialist and then reviewed, edited, and revised by groups of content specialists to ensure the items yield reliable and valid information. Hence, item development is one of the most important problems that must be solved before we can migrate to computer adaptive testing to support life-long learning, because large numbers of high-quality, content-specific test items are required.
One promising item development method that may be used to address this challenge is automatic item generation. Automatic item generation is a relatively new but rapidly evolving research area in which cognitive and psychometric modelling practices are used to produce hundreds of new test items with the aid of computer technology. The purpose of our presentation is to describe a new methodology for generating both the items and the rationales required to solve each generated item, in order to produce the feedback needed to support life-long learning. Our item generation methodology will first be described. To ensure our description is practical, the method will also be illustrated with generated items from the health sciences to demonstrate how item generation can promote life-long learning for medical educators and practitioners.
Keywords: Adaptive Testing; formative feedback; Item generation
URL: https://drive.google.com/open?id=1O5KDFtQlDLvhNoDr7X4JO4arpJkIHKUP

Verschoor, Angela (2017). Grow a Tiger out of Your CAT. Niigata, Japan: Niigata Seiryo University, 08/2017.
Abstract:
The main focus in the community of test developers and researchers is on improving adaptive test procedures and methodologies. Yet, the transition from research projects to larger-scale operational CATs is facing its own challenges. Usually, these operational CATs find their origin in government tenders. “Scalability”, “Interoperability” and “Transparency” are three keywords often found in these documents. Scalability is concerned with parallel system architectures which are based upon stateless selection algorithms. Design capacities often range from 10,000 to well over 100,000 concurrent students. Interoperability is implemented in standards like QTI, standards that were not designed with adaptive testing in mind. Transparency is being realized by open source software: the adaptive test should not be a black box. These three requirements often complicate the development of an adaptive test, or sometimes even conflict.
Keywords: interoperability; Scalability; transparency
URL: http://iacat.org/grow-tiger-out-your-cat

Chen, Shu-Ying, Lei, Pui-Wa, Chen, Jyun-Hong, & Liu, Tzu-Chen (2014). General Test Overlap Control: Improved Algorithm for CAT and CCT. Applied Psychological Measurement, 38(3), 229-244.
Abstract: This article proposed a new online test overlap control algorithm that is an improvement of Chen’s algorithm for controlling the general test overlap rate, that is, item pooling among a group of examinees. Chen’s algorithm is not very efficient in that it controls not only item pooling between the current examinee and prior examinees but also item pooling between previous examinees, which would already have been controlled for when they were current examinees. The proposed improvement increases efficiency by considering only item pooling between the current and previous examinees, and its improved performance over Chen’s is demonstrated in a simulated computerized adaptive testing (CAT) environment. Moreover, the proposed algorithm is adapted for computerized classification testing (CCT) using the sequential probability ratio test procedure and is evaluated against some existing exposure control procedures. The proposed algorithm appears to work best in controlling the general test overlap rate among the exposure control procedures examined, without sacrificing much classification precision, though longer tests might be required for more stringent control of item pooling among larger groups. Given the capability of the proposed algorithm in controlling item pooling among a group of examinees of any size, and its ease of implementation, it appears to be a good test overlap control method.
URL: http://apm.sagepub.com/content/38/3/229.abstract
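For illustration only (this is not code from the article): the efficiency idea described in the abstract above — checking item pooling only between the current examinee and those tested before, rather than re-checking all prior pairs — can be sketched as follows. The function names, the fixed test length, and the definition of pairwise overlap as the proportion of shared items are assumptions of this sketch.

# Illustrative sketch (assumed names, not from the article): track how much the
# current examinee's test overlaps with tests already administered. Overlap among
# prior examinees is not recomputed, since each prior pair was checked when its
# second member was the "current" examinee.
from typing import List, Set

def overlap_rate(test_a: Set[int], test_b: Set[int], test_length: int) -> float:
    """Proportion of items shared by two examinees' fixed-length tests."""
    return len(test_a & test_b) / test_length

def mean_overlap_with_priors(current: Set[int], priors: List[Set[int]], test_length: int) -> float:
    """Average overlap of the current test with all previously administered tests."""
    if not priors:
        return 0.0
    return sum(overlap_rate(current, p, test_length) for p in priors) / len(priors)

# Example with three prior examinees and 5-item tests:
priors = [{1, 2, 3, 4, 5}, {2, 3, 6, 7, 8}, {1, 6, 9, 10, 11}]
current = {2, 3, 6, 12, 13}
print(mean_overlap_with_priors(current, priors, test_length=5))  # 0.4

In an operational implementation the running overlap statistics would feed back into item selection or exposure control; that part is omitted here.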
Han, K. T. (2009). A gradual maximum information ratio approach to item selection in computerized adaptive testing. In D. J. Weiss (Ed.), Proceedings of the 2009 GMAC Conference on Computerized Adaptive Testing.
URL: http://iacat.org/content/gradual-maximum-information-ratio-approach-item-selection-computerized-adaptive-testing-0

Han, K. T. (2009). Gradual maximum information ratio approach to item selection in computerized adaptive testing. McLean, VA: Graduate Management Admissions Council.
URL: http://iacat.org/content/gradual-maximum-information-ratio-approach-item-selection-computerized-adaptive-testing

Talento-Miller, E., & Guo, F. (2009). Guess what? Score differences with rapid replies versus omissions on a computerized adaptive test. In D. J. Weiss (Ed.), Proceedings of the 2009 GMAC Conference on Computerized Adaptive Testing.
URL: http://iacat.org/content/guess-what-score-differences-rapid-replies-versus-omissions-computerized-adaptive-test

Embretson, S. E., & Kyllomen, P. (2002). Generating abstract reasoning items with cognitive theory. Mahwah, NJ: Lawrence Erlbaum Associates, Inc. (pp. 219-250).
Abstract: (From the chapter) Developed and evaluated a generative system for abstract reasoning items based on cognitive theory. The cognitive design system approach was applied to generate matrix completion problems. Study 1 involved developing the cognitive theory with 191 college students who were administered Set I and Set II of the Advanced Progressive Matrices. Study 2 examined item generation by cognitive theory. Study 3 explored the psychometric properties and construct representation of abstract reasoning test items with 728 young adults. Five structurally equivalent forms of Abstract Reasoning Test (ART) items were prepared from the generated item bank and administered to the Ss. In Study 4, the nomothetic span of construct validity of the generated items was examined with 728 young adults who were administered ART items, and 217 young adults who were administered ART items and the Advanced Progressive Matrices. Results indicate the matrix completion items were effectively generated by the cognitive design system approach.
Keywords: Cognitive Processes; Measurement; Reasoning; Test Construction; Test Items; Test Validity; Theories
URL: http://iacat.org/content/generating-abstract-reasoning-items-cognitive-theory

Mills, C. N., & Steffen, M. (2000). The GRE computer adaptive test: Operational issues. In W. J. van der Linden & C. A. W. Glas (Eds.), Computerized adaptive testing: Theory and practice (pp. 75-99). Dordrecht, Netherlands: Kluwer.
URL: http://iacat.org/content/gre-computer-adaptive-test-operational-issues
Embretson, S. E. (1999). Generating items during testing: Psychometric issues and models. Psychometrika, 64, 407-433.
URL: http://iacat.org/content/generating-items-during-testing-psychometric-issues-and-models

Almond, R. G., & Mislevy, R. J. (1999). Graphical models and computerized adaptive testing. Applied Psychological Measurement, 23, 223-237.
Abstract: Considers computerized adaptive testing from the perspective of graphical modeling (GM). GM provides methods for making inferences about multifaceted skills and knowledge and for extracting data from complex performances. Provides examples from language-proficiency assessment. (SLD)
Keywords: computerized adaptive testing
URL: http://iacat.org/content/graphical-models-and-computerized-adaptive-testing

Krass, I. A. (1997). Getting more precision on computer adaptive testing. University of Tennessee, Knoxville, TN.
URL: http://iacat.org/content/getting-more-precision-computer-adaptive-testing

Thomasson, G. L. (1997). The goal of equity within and between computerized adaptive tests and paper and pencil forms. Chicago, IL.
URL: http://iacat.org/content/goal-equity-within-and-between-computerized-adaptive-tests-and-paper-and-pencil-forms

Chang, Hua-Hua, & Ying, Z. (1996). A global information approach to computerized adaptive testing. Applied Psychological Measurement, 20, 213-229. (ISSN 0146-6216)
Abstract: [Item selection in computerized adaptive testing is typically] based on Fisher information (or item information). At each stage, an item is selected to maximize the Fisher information at the currently estimated trait level (θ). However, this application of Fisher information could be much less efficient than assumed if the estimators are not close to the true θ, especially at early stages of an adaptive test, when the test length (number of items) is too short to provide an accurate estimate of the true θ. It is argued here that selection procedures based on global information should be used, at least at early stages of a test, when θ estimates are not likely to be close to the true θ. For this purpose, an item selection procedure based on average global information is proposed. Results from pilot simulation studies comparing the usual maximum item information selection with the proposed global information approach are reported, indicating that the new method leads to improvement in terms of bias and mean squared error reduction under many circumstances.
Index terms: computerized adaptive testing, Fisher information, global information, information surface, item information, item response theory, Kullback-Leibler information, local information, test information.
URL: http://iacat.org/content/global-information-approach-computerized-adaptive-testing
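As a point of reference for the contrast drawn in the abstract above (the notation below is standard item response theory, not reproduced from the article): for a dichotomous item j with correct-response probability P_j(θ), maximum-information selection picks the item maximizing Fisher information at the current estimate θ̂, whereas the global approach averages Kullback-Leibler information over an interval around θ̂; the half-width δ is an assumption of this sketch.

I_j(\hat{\theta}) = \frac{\left[P_j'(\hat{\theta})\right]^2}{P_j(\hat{\theta})\,\left[1 - P_j(\hat{\theta})\right]} \qquad \text{(Fisher item information at } \hat{\theta}\text{)}

K_j(\theta \,\|\, \hat{\theta}) = P_j(\hat{\theta}) \log \frac{P_j(\hat{\theta})}{P_j(\theta)} + \left[1 - P_j(\hat{\theta})\right] \log \frac{1 - P_j(\hat{\theta})}{1 - P_j(\theta)} \qquad \text{(Kullback-Leibler item information)}

KI_j(\hat{\theta}) = \int_{\hat{\theta} - \delta}^{\hat{\theta} + \delta} K_j(\theta \,\|\, \hat{\theta})\, d\theta \qquad \text{(averaged, or global, information index)}

In both cases the item with the largest value of the criterion is administered next; averaging over an interval makes the criterion less sensitive to an inaccurate θ̂ early in the test, which is the motivation given in the abstract.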
Chang, H.-H., & Ying, Z. (1996). A Global Information Approach to Computerized Adaptive Testing. Applied Psychological Measurement, 20, 213-229.
URL: http://iacat.org/content/global-information-approach-computerized-adaptive-testing-1

Chang, Hua-Hua (1995). A global information approach to computerized adaptive testing. San Francisco, CA.
URL: http://iacat.org/content/global-information-approach-computerized-adaptive-testing-0

American Council on Education (1995). Guidelines for computer-adaptive test development and use in education. Washington, DC: Author.
URL: http://iacat.org/content/guidelines-computer-adaptive-test-development-and-use-education

Berger, M. P. F. (1994). A general approach to algorithmic design of fixed-form tests, adaptive tests, and testlets. Applied Psychological Measurement, 18, 141-153.
URL: http://iacat.org/content/general-approach-algorithmic-design-fixed-form-tests-adaptive-tests-and-testlets

Berger, M. P. F. (1994). A General Approach to Algorithmic Design of Fixed-Form Tests, Adaptive Tests, and Testlets. Applied Psychological Measurement, 18, 141-153.
URL: http://iacat.org/content/general-approach-algorithmic-design-fixed-form-tests-adaptive-tests-and-testlets-0

Wang, X., Bradlow, E. T., & Wainer, H. (1992). A general Bayesian model for testlets: Theory and applications (Research Report 92-21; GRE Board Professional Report No. 99-01P). Princeton, NJ: Educational Testing Service.
URL: http://iacat.org/content/general-bayesian-model-testlets-theory-and-applications-research-report-92-21-gre-board

Wolfe, J. H., & Larson, G. E. (1990). Generative adaptive testing with digit span items. San Diego, CA: Testing Systems Department, Navy Personnel Research and Development Center.
URL: http://iacat.org/content/generative-adaptive-testing-digit-span-items

Xiao, B. (1989). Golden section search strategies for computerized adaptive testing. Berkeley, CA.
URL: http://iacat.org/content/golden-section-search-strategies-computerized-adaptive-testing
Reckase, M. D. (1978). A generalization of sequential analysis to decision making with tailored testing. Oklahoma City, OK.
URL: http://iacat.org/content/generalization-sequential-analysis-decision-making-tailored-testing

Lewy, A., & Doron, R. (1977). Group tailored tests and some problems of their utilization. Leyden, The Netherlands, 06/1977.
URL: http://iacat.org/content/group-tailored-tests-and-some-problems-their-utlization

Samejima, F. (1976). The graded response model of latent trait theory and tailored testing. In C. K. Clark (Ed.), Proceedings of the First Conference on Computerized Adaptive Testing (pp. 5-17). Washington, DC: U.S. Government Printing Office.
URL: http://iacat.org/content/graded-response-model-latent-trait-theory-and-tailored-testing