02772nas a2200229 4500008004100000022001400041245015400055210006900209260000900278300000800287490000700295520193000302653001802232653003702250653001102287653002702298653002302325653003702348100002002385700001902405856011802424 2012 eng d a1471-228800aComparison of two Bayesian methods to detect mode effects between paper-based and computerized adaptive assessments: a preliminary Monte Carlo study.0 aComparison of two Bayesian methods to detect mode effects betwee c2012 a1240 v123 a
BACKGROUND: Computerized adaptive testing (CAT) is being applied to health outcome measures developed as paper-and-pencil (P&P) instruments. Differences in how respondents answer items administered by CAT vs. P&P can increase error in CAT-estimated measures if not identified and corrected.
METHOD: Two methods for detecting item-level mode effects are proposed using Bayesian estimation of posterior distributions of item parameters: (1) a modified robust Z (RZ) test, and (2) 95% credible intervals (CrI) for the CAT-P&P difference in item difficulty. A simulation study was conducted under the following conditions: (1) data-generating model (one- vs. two-parameter IRT model); (2) moderate vs. large DIF sizes; (3) percentage of DIF items (10% vs. 30%); and (4) mean difference in θ estimates across modes of 0 vs. 1 logits. This resulted in a total of 16 conditions with 10 generated datasets per condition.
RESULTS: Both methods evidenced good to excellent false-positive control, with RZ providing better control of false positives and CrI providing slightly higher power, irrespective of measurement model. False positives increased when items were very easy to endorse and when there were mode differences in mean trait level. True positives were predicted by CAT item usage, absolute item difficulty, and item discrimination. Overall, RZ outperformed CrI due to its better control of false-positive DIF.
CONCLUSIONS: Whereas false positives were well controlled, particularly for RZ, power to detect DIF was suboptimal. Research is needed to examine the robustness of these methods under varying prior assumptions concerning the distribution of item and person parameters and when data fail to conform to prior assumptions. False identification of DIF when items were very easy to endorse is a problem warranting additional investigation.
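The robust Z flagging described in this abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the `diffs` values and the 1.96 cutoff are hypothetical, and the common robust scaling of 0.74 × IQR (which approximates the standard deviation under normality) is assumed.

```python
def robust_z(diffs):
    # robust Z: distance from the median, scaled by 0.74 * IQR
    # (0.74 * IQR approximates the SD for normally distributed values)
    s = sorted(diffs)
    n = len(s)

    def quantile(q):
        # linear interpolation between order statistics
        pos = (n - 1) * q
        lo = int(pos)
        hi = min(lo + 1, n - 1)
        return s[lo] + (pos - lo) * (s[hi] - s[lo])

    med = quantile(0.5)
    iqr = quantile(0.75) - quantile(0.25)
    return [(d - med) / (0.74 * iqr) for d in diffs]

# hypothetical P&P-minus-CAT item difficulty differences; the last item drifts
diffs = [0.02, -0.05, 0.01, 0.00, -0.03, 0.04, -0.01, 0.90]
rz = robust_z(diffs)
flagged = [i for i, z in enumerate(rz) if abs(z) > 1.96]  # indices of suspect items
```

Items flagged by |RZ| > 1.96 would then be candidates for recalibration or removal, mirroring the false-positive/true-positive trade-off discussed in the RESULTS.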
10aBayes Theorem10aData Interpretation, Statistical10aHumans10aMathematical Computing10aMonte Carlo Method10aOutcome Assessment (Health Care)1 aRiley, Barth, B1 aCarle, Adam, C uhttp://iacat.org/content/comparison-two-bayesian-methods-detect-mode-effects-between-paper-based-and-computerized01448nas a2200205 4500008004100000020004100041245016200082210006900244250001500313300001100328490000700339520065200346653002500998653001501023653003701038653002401075100001401099700001501113856011401128 2009 eng d a1529-7713 (Print)1529-7713 (Linking)00aConsiderations about expected a posteriori estimation in adaptive testing: adaptive a priori, adaptive correction for bias, and adaptive integration interval0 aConsiderations about expected a posteriori estimation in adaptiv a2009/07/01 a138-560 v103 aIn a computerized adaptive test, we would like to obtain an acceptable precision of the proficiency level estimate using an optimal number of items. Unfortunately, decreasing the number of items is accompanied by a certain degree of bias when the true proficiency level differs significantly from the a priori estimate. 
The authors suggest that it is possible to reduce the bias, and even the standard error of the estimate, by applying to each provisional estimate one or a combination of the following strategies: adaptive correction for bias proposed by Bock and Mislevy (1982), adaptive a priori estimate, and adaptive integration interval.10a*Bias (Epidemiology)10a*Computers10aData Interpretation, Statistical10aModels, Statistical1 aRaiche, G1 aBlais, J G uhttp://iacat.org/content/considerations-about-expected-posteriori-estimation-adaptive-testing-adaptive-priori02871nas a2200313 4500008004100000020002200041245010100063210006900164250001500233260000800248300001200256490000700268520179500275653005102070653002002121653003702141653002602178653001902204653001102223653003002234653004602264653003502310653002802345653001302373100002102386700001702407700001502424856011802439 2007 eng d a0315-162X (Print)00aImproving patient reported outcomes using item response theory and computerized adaptive testing0 aImproving patient reported outcomes using item response theory a a2007/06/07 cJun a1426-310 v343 aOBJECTIVE: Patient reported outcomes (PRO) are considered central outcome measures for both clinical trials and observational studies in rheumatology. More sophisticated statistical models, including item response theory (IRT) and computerized adaptive testing (CAT), will enable critical evaluation and reconstruction of currently utilized PRO instruments to improve measurement precision while reducing item burden on the individual patient. METHODS: We developed a domain hierarchy encompassing the latent trait of physical function/disability from the more general to most specific. Items collected from 165 English-language instruments were evaluated by a structured process including trained raters, modified Delphi expert consensus, and then patient evaluation. Each item in the refined data bank will undergo extensive analysis using IRT to evaluate response functions and measurement precision.
CAT will allow for real-time questionnaires of potentially smaller numbers of questions tailored directly to each individual's level of physical function. RESULTS: Physical function/disability domain comprises 4 subdomains: upper extremity, trunk, lower extremity, and complex activities. Expert and patient review led to consensus favoring use of present-tense "capability" questions using a 4- or 5-item Likert response construct over past-tense "performance" items. Floor and ceiling effects, attribution of disability, and standardization of response categories were also addressed. CONCLUSION: By applying statistical techniques of IRT through use of CAT, existing PRO instruments may be improved to reduce questionnaire burden on individual patients while increasing measurement precision that may ultimately lead to reduced sample size requirements for costly clinical trials.10a*Rheumatic Diseases/physiopathology/psychology10aClinical Trials10aData Interpretation, Statistical10aDisability Evaluation10aHealth Surveys10aHumans10aInternational Cooperation10aOutcome Assessment (Health Care)/*methods10aPatient Participation/*methods10aResearch Design/*trends10aSoftware1 aChakravarty, E F1 aBjorner, J B1 aFries, J F uhttp://iacat.org/content/improving-patient-reported-outcomes-using-item-response-theory-and-computerized-adaptive02512nas a2200265 4500008004100000020004100041245013200082210006900214250001500283260000800298300001100306490000700317520154000324653003101864653003701895653003301932653002401965653001101989653002402000653002702024653003302051653003002084100001602114856011602130 2006 eng d a0025-7079 (Print)0025-7079 (Linking)00aOverview of quantitative measurement methods. 
Equivalence, invariance, and differential item functioning in health applications0 aOverview of quantitative measurement methods Equivalence invaria a2006/10/25 cNov aS39-490 v443 aBACKGROUND: Reviewed in this article are issues relating to the study of invariance and differential item functioning (DIF). The aim of factor analyses and DIF, in the context of invariance testing, is the examination of group differences in item response conditional on an estimate of disability. Discussed are parameters and statistics that are not invariant and cannot be compared validly in crosscultural studies with varying distributions of disability in contrast to those that can be compared (if the model assumptions are met) because they are produced by models such as linear and nonlinear regression. OBJECTIVES: The purpose of this overview is to provide an integrated approach to the quantitative methods used in this special issue to examine measurement equivalence. The methods include classical test theory (CTT), factor analytic, and parametric and nonparametric approaches to DIF detection. Also included in the quantitative section is a discussion of item banking and computerized adaptive testing (CAT). METHODS: Factorial invariance and the articles discussing this topic are introduced. A brief overview of the DIF methods presented in the quantitative section of the special issue is provided together with a discussion of ways in which DIF analyses and examination of invariance using factor models may be complementary. 
CONCLUSIONS: Although factor analytic and DIF detection methods share features, they provide unique information and can be viewed as complementary in informing about measurement equivalence.10a*Cross-Cultural Comparison10aData Interpretation, Statistical10aFactor Analysis, Statistical10aGuidelines as Topic10aHumans10aModels, Statistical10aPsychometrics/*methods10aStatistics as Topic/*methods10aStatistics, Nonparametric1 aTeresi, J A uhttp://iacat.org/content/overview-quantitative-measurement-methods-equivalence-invariance-and-differential-item02839nas a2200349 4500008004100000020004600041245011400087210006900201250001500270260001100285300000700296490000600303520169400309653002702003653002002030653002102050653002002071653004702091653003702138653001802175653001102193653001902204653001602223653002002239653003002259100001402289700001402303700001702317700002002334700001402354856012102368 2004 eng d a1477-7525 (Electronic)1477-7525 (Linking)00aPractical methods for dealing with 'not applicable' item responses in the AMC Linear Disability Score project0 aPractical methods for dealing with not applicable item responses a2004/06/18 cJun 16 a290 v23 aBACKGROUND: Whenever questionnaires are used to collect data on constructs, such as functional status or health related quality of life, it is unlikely that all respondents will respond to all items. This paper examines ways of dealing with responses in a 'not applicable' category to items included in the AMC Linear Disability Score (ALDS) project item bank. METHODS: The data examined in this paper come from the responses of 392 respondents to 32 items and form part of the calibration sample for the ALDS item bank. The data are analysed using the one-parameter logistic item response theory model. 
The four practical strategies for dealing with this type of response are: cold deck imputation; hot deck imputation; treating the missing responses as if these items had never been offered to those individual patients; and using a model which takes account of the 'tendency to respond to items'. RESULTS: The item and respondent population parameter estimates were very similar for the strategies involving hot deck imputation; treating the missing responses as if these items had never been offered to those individual patients; and using a model which takes account of the 'tendency to respond to items'. The estimates obtained using the cold deck imputation method were substantially different. CONCLUSIONS: The cold deck imputation method was not considered suitable for use in the ALDS item bank. The other three methods described can be usefully implemented in the ALDS item bank, depending on the purpose of the data analysis to be carried out. These three methods may be useful for other data sets examining similar constructs, when item response theory based methods are used.10a*Disability Evaluation10a*Health Surveys10a*Logistic Models10a*Questionnaires10aActivities of Daily Living/*classification10aData Interpretation, Statistical10aHealth Status10aHumans10aPilot Projects10aProbability10aQuality of Life10aSeverity of Illness Index1 aHolman, R1 aGlas, C A1 aLindeboom, R1 aZwinderman, A H1 aHaan, R J uhttp://iacat.org/content/practical-methods-dealing-not-applicable-item-responses-amc-linear-disability-score-project01770nas a2200289 4500008004100000245007700041210006900118300001400187490000700201520080000208653002501008653003101033653003701064653003801101653001901139653001001158653002701168653004601195653002001241653002801261653003201289653001801321100001401339700001701353700001501370856009501385 2000 eng d00aItem response theory and health outcomes measurement in the 21st century0 aItem response theory and health outcomes measurement in the 21st aII28-II420 v383 aItem 
response theory (IRT) has a number of potential advantages over classical test theory in assessing self-reported health outcomes. IRT models yield invariant item and latent trait estimates (within a linear transformation), standard errors conditional on trait level, and trait estimates anchored to item content. IRT also facilitates evaluation of differential item functioning, inclusion of items with different response formats in the same scale, and assessment of person fit and is ideally suited for implementing computer adaptive testing. Finally, IRT methods can be helpful in developing better health outcome measures and in assessing change over time. These issues are reviewed, along with a discussion of some of the methodological and practical challenges in applying IRT methods.10a*Models, Statistical10aActivities of Daily Living10aData Interpretation, Statistical10aHealth Services Research/*methods10aHealth Surveys10aHuman10aMathematical Computing10aOutcome Assessment (Health Care)/*methods10aResearch Design10aSupport, Non-U.S. Gov't10aSupport, U.S. Gov't, P.H.S.10aUnited States1 aHays, R D1 aMorales, L S1 aReise, S P uhttp://iacat.org/content/item-response-theory-and-health-outcomes-measurement-21st-century