Title | Adaptive Testing With a Hierarchical Item Response Theory Model |
Publication Type | Journal Article |
Year of Publication | 2019 |
Authors | Wang, W, Kingston, N |
Journal | Applied Psychological Measurement |
Volume | 43 |
Number | 1 |
Pagination | 51-67 |
Abstract | The hierarchical item response theory (H-IRT) model is highly flexible, allowing a general factor and subfactors within an overall structure of two or more levels. When an H-IRT model with a large number of dimensions is used for an adaptive test, the computational burden of interim scoring and selection of subsequent items is heavy. An alternative approach for any high-dimensional adaptive test is to reduce dimensionality for interim scoring and item selection and then revert to full dimensionality for final score reporting, thereby significantly reducing the computational burden. This study compared the accuracy and efficiency of final scoring for multidimensional, local multidimensional, and unidimensional item selection and interim scoring methods, using both simulated and real item pools. The simulation study was conducted under 10 conditions (i.e., five test lengths and two H-IRT models) with a simulated sample of 10,000 students. The real-item-pool study used item parameters from an actual 45-item adaptive test, also with a simulated sample of 10,000 students. Results indicate that the theta estimates produced by the local multidimensional and unidimensional item selection and interim scoring methods were nearly as accurate as those produced by the multidimensional method, especially in the real-item-pool study. In addition, the multidimensional method required the longest computation time and the unidimensional method the shortest. |
URL | https://doi.org/10.1177/0146621618765714 |
DOI | 10.1177/0146621618765714 |