Keynote Addresses

[tabs] [tab title=”Presidential Address”]

Larry M. Rudner, President, IACAT

Vice President for Research and Chief Psychometrician, Graduate Management Admission Council

CAT for Small Sample Classification Testing

Abstract
Much of the research on computer adaptive testing (CAT) has been based on Item Response Theory (IRT), and most applications of CAT use IRT to place examinees on a continuum. Even programs whose ultimate goal is classification often first rank-order examinees and then compare ability estimates with one or more cut scores. Yet IRT is not the only model suitable for CAT, and theta estimates are not always needed. In his Presidential Address, Lawrence Rudner will call for research and applications using alternative models that may prove more responsive to program needs. An overview of one such model, Measurement Decision Theory, will be presented. This simple Bayesian model seeks to identify the likelihood of group classifications (e.g., qualified vs. not qualified) given the pattern of candidate responses. Its advantages include the need for only small calibration samples, applicability to multidimensional data, management of false positives and false negatives, and shorter tests.
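For orientation, the core computation in measurement decision theory is a direct application of Bayes' rule to the observed response pattern. The sketch below is added purely for illustration (it is not the presenter's implementation; the priors and group-conditional item probabilities are invented): given the probability of a correct response within each group, it returns the posterior probability of each classification.

```python
# Minimal sketch of the measurement decision theory classification step
# (illustrative only; priors and item probabilities below are invented).
import numpy as np

groups = ["qualified", "not_qualified"]
priors = np.array([0.5, 0.5])              # prior probability of each group

# P(correct | group) per item (rows: items, columns: groups),
# estimated in practice from a small calibration sample
p_correct = np.array([
    [0.85, 0.40],
    [0.75, 0.30],
    [0.90, 0.55],
    [0.65, 0.25],
])

responses = np.array([1, 1, 0, 1])         # observed 0/1 response pattern

# Likelihood of the pattern under each group, assuming local independence
likelihood = np.prod(
    np.where(responses[:, None] == 1, p_correct, 1 - p_correct), axis=0
)

# Posterior classification probabilities via Bayes' rule
posterior = priors * likelihood
posterior /= posterior.sum()

for group, prob in zip(groups, posterior):
    print(f"P({group} | responses) = {prob:.3f}")
```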

Bio
Lawrence M. Rudner is Vice President of Research and Development at the Graduate Management Admission Council. Rudner’s work at GMAC® has included test validation, adaptive testing, professional standards, QTI specifications, test security, data forensics, and contract monitoring for the GMAT® exam. He conducted some of the first research on several lasting measurement topics, including the use of IRT to assess item bias, parameter invariance, assessment of person fit, validity of a composite measure, and classification accuracy. He has received the Award for Outstanding Dissemination of Educational Measurement Concepts to the Public from the National Council on Measurement in Education and the Career Achievement Award from the Association of Test Publishers. Prior to joining GMAC, he was the Director of the ERIC Clearinghouse on Assessment and Evaluation. He is also the founder and co-editor of the online journal Practical Assessment, Research & Evaluation, which is now the most widely read journal in the field.
[/tab] [tab title=”Keynotes”]

[well size=”sm”]

Speaker, Robert J. Mislevy

Frederic M. Lord Chair in Measurement and Statistics, ETS; Professor Emeritus, University of Maryland

An Extended Taxonomy of Variants of Computerized Adaptive Testing

Abstract
This presentation builds on foundational work on probabilistic frames of reference and principled assessment design to explore the role of adaptation in assessment. Assessments are characterized in terms of their claim status, observation status, and locus of control. The relevant claims and observations constitute a frame of discernment for the assessment. Adaptation occurs when the frame is permitted to evolve with respect to the claims or observations (or both); adaptive features may be controlled by the examiner or the examinee. Familiar computerized adaptive testing with item response theory (CAT-IRT) falls comfortably within the taxonomy, but so do a variety of other kinds of adaptation that can be useful in more personalized learning environments.

Bio
Robert Mislevy is the Frederic M. Lord Chair in Measurement and Statistics at ETS. He is an Emeritus Professor of Measurement and Statistics at the University of Maryland, where he was also an Affiliate Professor of Survey Methods and of Second Language Acquisition. His research applies developments in technology, statistics, and cognitive science to practical problems in assessment. He developed the “plausible values” methodology for the National Assessment of Educational Progress, worked with Cisco Systems to develop simulation-based assessments of network engineering, and, with Linda Steinberg and Russell Almond, created an “evidence-centered” assessment design framework. Dr. Mislevy’s publications include Automated scoring of complex tasks in computer-based testing, Psychometric considerations in game-based assessment, and the “Cognitive Psychology” chapter in Educational Measurement (4th Edition). He has received career contributions awards from the American Educational Research Association (AERA) and the National Council on Measurement in Education (NCME), and won NCME’s Award for Technical Contributions to Measurement three times. He is a past president of the Psychometric Society and a member of the National Academy of Education. [/well] [well size=”sm”]

 

Fritz Drasgow

University of Illinois at Urbana-Champaign

Computer Adaptive Testing of Personality for High Stakes Settings

Abstract
There is a large literature documenting the fact that various aspects of personality are related to behavior in many situations. The practical usefulness of measures of personality for high stakes decisions, however, is less clear because of the notorious faking problem. A research program was initiated fifteen years ago to address this problem. The latent structure of personality, the response format for an assessment instrument, the underlying response process, and the adaptive algorithm have been examined in this work and the Tailored Adaptive Personality Assessment System (TAPAS) is the result. Its validity for predicting aspects of performance in military settings will be described.

Bio
Fritz Drasgow is Interim Dean of the School of Labor and Employment Relations and Professor of Psychology at the University of Illinois at Urbana-Champaign. His research focuses on psychological measurement, computerized testing, and modeling. His recent work focuses on personality assessment. With colleagues Stephen Stark and Oleksandr Chernyshenko, Drasgow has developed the “Tailored Adaptive Personality Assessment System” (TAPAS), which is based on an ideal point item response model. It uses a multidimensional forced choice response format and is adaptive so that items are selected in an efficient manner. Drasgow is a former chairperson of the American Psychological Association’s Committee on Psychological Tests and Assessments, the U.S. Department of Defense’s Advisory Committee on Military Personnel Testing, the American Psychological Association’s Taskforce on Internet Testing, and the American Institute of Certified Public Accountants’ Psychometric Oversight Committee. He is a former President of the Society for Industrial and Organizational Psychology (SIOP) and received the SIOP Distinguished Scientific Contributions Award.
[/well]
[well size=”sm”]

Gunter Maris, CITO – University of Amsterdam

Network Psychometrics: A New Perspective on Learning

Abstract
Network psychometrics is rapidly emerging as a new field of psychometric research. A recent breakthrough has uncovered the intimate relation between common Item Response Theory (IRT) models and the network models developed in the field of statistical mechanics close to a century ago. At about the same time that Ernst Zermelo formulated the very first IRT model (now known as the Bradley-Terry-Luce, BTL, model), Wilhelm Lenz developed the first network model (now known as the Ising model). The connection between the two seemingly distinct classes of models has profound implications for our understanding of measurement, learning, and their integration. This presentation proposes a theoretical framework for network psychometrics and explores some of its implications for adaptive learning. Special attention will be given to the active role a measurement model has to play in the learning process. Specifically, rather than using measurements only to passively monitor students’ learning, the measurement model is used for steering the very learning process itself.
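For readers unfamiliar with the two model families named above, the following display is added here purely for orientation (it is not taken from the talk): a Rasch-type IRT model, in which responses depend on a latent trait, alongside the Ising network model, in which binary variables interact directly through pairwise weights. The connection alluded to in the abstract is that integrating the latent trait out of models of the first kind yields distributions of the second kind.

```latex
% Rasch-type IRT model: response to item i depends on a latent trait theta
\[
  P(X_i = 1 \mid \theta) \;=\; \frac{\exp(\theta - \beta_i)}{1 + \exp(\theta - \beta_i)}
\]
% Ising network model: binary variables x_i interact through pairwise weights omega_{ij}
\[
  P(\mathbf{x}) \;=\; \frac{1}{Z}\,
  \exp\!\Big( \sum_i \tau_i x_i \;+\; \sum_{i<j} \omega_{ij}\, x_i x_j \Big)
\]
```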

Bio
Gunter Maris earned his Ph.D. from the University of Nijmegen in 2001 for research on statistical contributions to psychological modeling, under the supervision of Eric Maris, after which he joined CITO as a Research Scientist and Statistical Consultant. Since that time, his research has focused on the foundations and development of statistical measurement models, computational statistics, adaptive learning, and network psychometrics. In 2007, he was appointed professor of psychometrics in the Psychological Methods group of the University of Amsterdam, on behalf of CITO. In 2013, Gunter Maris was awarded a prestigious research grant from the Dutch organization for scientific research (NWO) for developing an online adaptive learning environment based on the classical TuxMath game.

[/well]
[well size=”sm”]

Mary Pommerich, Defense Manpower Data Center

Letting the CAT-ASVAB Out of the Bag: History, Experiences, and What the Future May Hold

Abstract
CAT-ASVAB holds the distinction of being the first large-scale adaptive test battery to be administered in a high-stakes setting. It has been in operational use for almost 25 years as part of the Department of Defense’s Enlistment Testing Program. For testing programs that are new to CAT administration (or have been operational for less time than the CAT-ASVAB), much can be learned from revisiting the history and experiences of CAT-ASVAB, while keeping an eye toward what the future may hold for the ASVAB testing program. This presentation will provide background on the ASVAB testing program, highlighting significant events in CAT-ASVAB history, problems triggered by initial CAT-ASVAB implementation decisions, the current state of affairs, and where the program is headed in the future. Significant near- and longer-term goals that will be discussed include the elimination of paper-and-pencil testing, the implementation of unproctored at-home testing with verification testing, a new approach to pretesting items, and a revision to the battery. The driving motivations behind these changes and the challenges faced in implementing them will be discussed.

Bio
Mary Pommerich received her PhD in Quantitative Psychology from the University of North Carolina at Chapel Hill and has been a practicing psychometrician in the testing industry for 20 years. She is currently a mathematical statistician in the Personnel Testing Division of the Defense Manpower Data Center (DMDC), home of the Armed Services Vocational Aptitude Battery (ASVAB) testing program. There she provides psychometric support for defense testing programs and serves as the chair of the Technical Committee for the Manpower Accession Policy Working Group. Prior to her time at DMDC, she spent the first seven years of her career honing her skills at ACT. Mary has been working with operational CAT programs and conducting CAT research since the late 1990s.

[/tab]

[tab title=”Invited”]
[well size=”sm”]

Daniel Wakeman

Vice President & Chief Information Officer, Educational Testing Service (ETS)

Challenges and Solutions to Delivering Digital Assessments

Abstract
With the advent of the Common Core assessments to be delivered online by the 2014–2015 school year, many states will be conducting large-scale, sophisticated, online high-stakes testing for the first time. In some states, the mode of delivery will be adaptive. How can states best prepare to deliver such assessments? What works and what does not? Wakeman will discuss his experiences delivering high-stakes computer-based testing for over 15 years for tests such as the GRE® test, which is an adaptive test, and the TOEFL® test, a linear design. He will report on the many innovative ways to ensure assessments are delivered on time and uninterrupted across more than 100 countries and thousands of test centers practically simultaneously. Using technologies such as cloud computing, caching, encryption, and distributed computing, among others, it has been possible to ensure the smooth delivery of millions of tests. This presentation will address each of the unique technical challenges the K12 environment will face in delivering the Common Core assessments and present methods that schools, districts, and states can use to ensure their successful administration.

Bio
As Vice President and Chief Information Officer, Daniel S. Wakeman is responsible for all ETS information technology assets and activity. In this capacity, he also is responsible for ensuring that ETS strategy and tactics are appropriately informed and influenced by information technology. Wakeman has over 25 years of IT leadership experience, including being CTO and founder of a business-to-business exchange. He has also held IT positions at DuPont®, Dow® Chemical, IBM®, D&N Bank, and the U.S. Air Force. Wakeman holds a bachelor of science degree in computer science from Michigan Technological University and received a master’s degree in business administration from Central Michigan University. [/well] [well size=”sm”]

Liberty J. Munson, PhD

Principal Psychometrician, Microsoft® Learning Experiences

Microsoft’s Innovative Solutions to Exam Delivery in a Leaner, Meaner World

Abstract
For many certification programs where piracy is a significant concern, one of the key benefits of computer adaptive testing is that it limits exposure of the item pool. However, computer adaptive testing requires hundreds of items and thousands of responses — requirements that even the world’s largest IT certification program can find difficult to meet. Because Microsoft delivers more than 100 different certification exams, computer adaptive testing is cost-prohibitive and resource-intensive. As a result, Microsoft has developed several innovative delivery methodologies that leverage the ideas behind, and benefits of, computer adaptive testing. This presentation will describe Microsoft’s experiences with computer adaptive testing, including candidate reactions, and how Microsoft has leveraged some of the fundamental principles behind computer adaptive testing in alternate, cost-effective, and innovative ways, including dynamic item selection/forms assembly, dynamic item creation, and continuous publication. Munson will also discuss the importance of leveraging delivery methodologies that offer benefits similar to those of computer adaptive testing in proactively protecting exam content and addressing piracy concerns.
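As one concrete illustration of the "dynamic item selection/forms assembly" idea, the generic sketch below (not Microsoft's actual delivery method; the bank, content domains, and blueprint are invented) assembles a unique fixed-length form for each candidate from a larger bank under content-blueprint constraints, limiting item exposure without full adaptive delivery.

```python
# Generic sketch of dynamic form assembly under a content blueprint
# (illustrative only; not Microsoft's method; bank and blueprint are invented).
import random

domains = ["networking", "security", "storage"]              # hypothetical content domains
bank = {f"item{i:03d}": domains[i % 3] for i in range(120)}  # 40 items per domain

blueprint = {"networking": 15, "security": 15, "storage": 10}  # items per form, per domain

def assemble_form(bank, blueprint, candidate_id):
    """Draw a unique fixed-length form for one candidate that meets the blueprint."""
    rng = random.Random(candidate_id)      # per-candidate seed -> a different form each time
    form = []
    for domain, count in blueprint.items():
        pool = sorted(item for item, d in bank.items() if d == domain)
        form.extend(rng.sample(pool, count))
    rng.shuffle(form)
    return form

form_a = assemble_form(bank, blueprint, "candidate-A")
form_b = assemble_form(bank, blueprint, "candidate-B")
overlap = len(set(form_a) & set(form_b))
print(f"{len(form_a)} items per form; overlap between two candidates: {overlap}")
```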

Bio
Liberty Munson, Ph.D., is the Principal Psychometrician for Microsoft® Learning Experiences. She is responsible for ensuring that psychometric standards are rigorously applied during all phases of the certification exam lifecycle and that the design and implementation of Microsoft’s Certification program results in valid and reliable assessments of candidate skills. Prior to joining Microsoft, she worked at Boeing® where she developed a wide variety of selection tests, including multiple-choice exams, team-based exercises, problem-solving activities, and structured interviews; she also assisted with the development and analysis of Boeing’s internal certification exams and acted as a co-project manager overseeing development, administration, and analysis of Boeing’s Employee Survey. She received her B.S. in Psychology from Iowa State University and her M.A. and Ph.D. in Industrial/Organizational Psychology with minors in Quantitative Psychology and Human Resource Management from the University of Illinois at Urbana-Champaign.

[/well] [well size=”sm”]

Carol A. Chapelle
Iowa State University

Computer-adaptive Testing and Innovation in Language Assessment

Abstract
Computer-adaptive testing (CAT) that works toward an innovative agenda in language assessment raises new challenges. An innovative agenda encompasses more than the efficiency-oriented motives of creating short, reliable tests. Innovation in language assessment means using technology as a resource for improving testing methods in a variety of ways, increasing uses of assessment, particularly for student learning, as well as creating new knowledge about the intersection of technology with assessment (Chapelle & Douglas, 2006). An agenda for innovation in CAT has been conceptualized by Levy, Behrens, and Mislevy (2006) and specified more fully for language assessment (Mislevy, Chapelle, Chung, & Xu, 2008). This agenda points toward specific challenges that can be addressed incrementally. This paper will describe how an innovative CAT agenda creates the need to improve knowledge in three areas of language assessment: how to chart paths for development of language ability, how to generate formative feedback to learners, and how to conceptualize a student model for a given assessment. By drawing upon research on assessment and learning of academic writing for university students of English as a second language, I will illustrate research in progress intended to shed light on each challenge. I will suggest that academic language assessment provides an instructive domain for exploration of issues common to many areas of assessment because the construct underlying task performance is conceptualized as consisting of both context-independent and situated capacities.

Bio
Carol A. Chapelle, Distinguished Professor of Liberal Arts and Sciences and Professor of TESL/applied linguistics, is Co-Editor of the Cambridge Applied Linguistics Series. Her research explores issues at the intersection of computer technology and applied linguistics. Her books include Computer applications in second language acquisition: Foundations for teaching, testing, and research (Cambridge University Press, 2001) and English language learning and technology: Lectures on applied linguistics in the age of information and communication technology (John Benjamins, 2003). She is Past President of the American Association for Applied Linguistics (2006-2007) and former editor of TESOL Quarterly (1999-2004). Her papers have appeared in journals such as TESOL Quarterly, Language Learning, Language Testing, and Language Learning & Technology, as well as in handbooks and encyclopedias of applied linguistics. She teaches courses in applied linguistics at Iowa State University and has taught in Arizona, Denmark, Hawai’i, Michigan, Spain, and Quebec. She has lectured at conferences in Canada, Chile, England, France, Germany, Japan, Iceland, Mexico, Morocco, Scotland, Singapore, South Korea, Spain, and Taiwan.

[/well]
[well size=”sm”]

Robert Gibbons, Ph.D.
University of Chicago

The Future of Mental Health Measurement

Abstract
Mental health measurement has been based primarily on subjective judgment and classical test theory. Typically, impairment level is determined by a total score, requiring that all respondents be administered the same items. An alternative to full-scale administration is adaptive testing, in which different individuals may receive different scale items that are targeted to their specific impairment level. This approach to testing is referred to as computerized adaptive testing (CAT) and is immediately applicable to mental health measurement problems. We have developed CAT depression, anxiety, and mania tests based on multidimensional item response theory (IRT), well suited to mental health constructs, that can be administered adaptively such that each individual responds only to those items that are most informative for assessing his or her level of severity. The shift in paradigm is from small fixed-length tests with questionable psychometric properties to large item banks from which an optimal small subset of items is adaptively drawn for each individual, targeted to their level of impairment. For longitudinal studies, the previous impairment estimate is then used as a starting point for the next adaptive test administration, further decreasing the number of items that need to be administered. Results to date reveal that depressive severity can be measured using an average of only 12 items (2 minutes) from a bank of 400 items, yet maintains a correlation of r = 0.95 with the full 400-item scores. Similar results are seen for anxiety and mania. Using an average of only 4 items (less than 1 minute), we have derived a diagnostic screening test for major depressive disorder that has a sensitivity of 0.95 and a specificity of 0.87; for the same subjects, the sensitivity of the PHQ-9 is 0.70 with similar specificity.
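The longitudinal point above, that the previous impairment estimate becomes the starting point for the next administration, can be illustrated with a deliberately simplified sketch. The operational tests described here use multidimensional bifactor IRT; the code below (added for illustration, with invented item parameters) uses a unidimensional 2PL model only to show the mechanism of carrying an earlier estimate forward as the prior for the next adaptive administration.

```python
# Simplified sketch of re-using a prior severity estimate (illustrative only;
# the operational CAT-DI/CAT-ANX use multidimensional bifactor IRT).
import numpy as np

def item_prob(theta, a, b):
    """2PL probability of endorsing an item at severity level theta."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def eap_estimate(responses, a, b, prior_mean=0.0, prior_sd=1.0):
    """EAP severity estimate over a grid, with a normal prior."""
    grid = np.linspace(-4, 4, 161)
    prior = np.exp(-0.5 * ((grid - prior_mean) / prior_sd) ** 2)
    likelihood = np.ones_like(grid)
    for r, a_i, b_i in zip(responses, a, b):
        p = item_prob(grid, a_i, b_i)
        likelihood *= p if r == 1 else 1 - p
    posterior = prior * likelihood
    posterior /= posterior.sum()
    mean = np.sum(grid * posterior)
    sd = np.sqrt(np.sum((grid - mean) ** 2 * posterior))
    return mean, sd

# Time 1: diffuse prior
a1, b1 = np.array([1.2, 0.9, 1.5]), np.array([-0.5, 0.3, 1.0])
theta1, sd1 = eap_estimate([1, 1, 0], a1, b1)

# Time 2: the previous estimate becomes the prior, so fewer new items
# are needed to reach a comparable posterior precision
a2, b2 = np.array([1.1, 1.3]), np.array([0.0, 0.8])
theta2, sd2 = eap_estimate([1, 0], a2, b2, prior_mean=theta1, prior_sd=sd1)

print(f"Time 1: severity {theta1:.2f} (posterior SD {sd1:.2f})")
print(f"Time 2: severity {theta2:.2f} (posterior SD {sd2:.2f})")
```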

Literature References
Gibbons R.D., & Hedeker D.R. Full-information item bi-factor analysis. Psychometrika, 57, 423-436, 1992.

Gibbons R.D., Bock R.D., Hedeker D., Weiss D., Segawa E., Bhaumik D.K., Kupfer D., Frank E., Grochocinski V., Stover A. Full-Information Item Bi-Factor Analysis of Graded Response Data. Applied Psychological Measurement, 31, 4-19, 2007.

Gibbons R.D., Weiss D.J., Kupfer D.J., Frank E., Fagiolini A., Grochocinski V.J., Bhaumik D.K., Stover A., Bock R.D., Immekus J.C. Using computerized adaptive testing to reduce the burden of mental health assessment. Psychiatric Services, 59, 361-368, 2008.

Gibbons R.D., Weiss D.J., Pilkonis P.A., Frank E., Moore T., Kim J.B., Kupfer D.K. The CAT-DI: A computerized adaptive test for depression. JAMA Psychiatry, 69, 1104-1112, 2012.

Gibbons R.D., Hooker G., Finkelman M.D., Weiss D.J., Pilkonis P.A., Frank E., Moore T., Kupfer D.J. The CAD-MDD: A computerized adaptive diagnostic screening tool for depression. Journal of Clinical Psychiatry, 74, 669-674, 2013.

Gibbons R.D., Weiss D.J., Pilkonis P.A., Frank E., Moore T., Kim J.B., Kupfer D.J. Development of the CAT-ANX: A computerized adaptive test for anxiety. American Journal of Psychiatry, published online first, ajp.psychiatryonline.org.

This work was supported by grant R01-MH66302 from the National Institute of Mental Health. Dr. Gibbons has financial interests in Adaptive Testing Technologies, through which these adaptive tests will be made available commercially.

Bio
Robert Gibbons received his doctorate in statistics and psychometrics from the University of Chicago in 1981. He spent the first 30 years of his career at the University of Illinois at Chicago (1981-2010), where he directed the Center for Health Statistics, a consortium of 15 statisticians working in both theoretical and applied areas of statistics. In 2010, Professor Gibbons joined the faculty of the University of Chicago, where he is Professor of Biostatistics in the Departments of Health Studies, Medicine, and Psychiatry and continues to direct the Center for Health Statistics. Support for his research includes numerous grants and contracts from the NIH, NIMH, ONR, NCI, and the MacArthur Foundation. Professor Gibbons is a Fellow of the American Statistical Association and a member of the Institute of Medicine of the National Academy of Sciences. He has authored more than 250 peer-reviewed scientific papers and five books. Professor Gibbons’s most recent awards include Pritzker Scholar (University of Chicago, 2011), the Rema Lapouse Award for contributions to Psychiatric Epidemiology (American Public Health Association, 2012), and the Long-Term Excellence Award in Health Policy Statistics (American Statistical Association, 2013).

[/well]

[/tab]

[tab title=”Symposia”]
Symposia

Implementing MST in Survey Assessments

Organizer: Alina von Davier

Lessons learned from Multistage adaptive testing in international adult populations
Kentaro Yamamoto, ETS.

Towards adaptive testing in PISA 2015: Challenges and solutions
Matthias von Davier, ETS.

Math Computer-Based Study: An empirical study of an adaptive approach to group-score assessments
Andreas Oranje, ETS.

Data collection and calibration designs for implementing multistage adaptive testing in NAEP
Longjuan Liang, ETS


Advances in personality CAT
Organizer: Stephen Clark

Thurstonian model for multidimensional forced choice responses
Anna Brown (U. Kent)

MCMC calibration and scoring of multidimensional forced choice (MFC) pairs and rank responses
Jacob Seybert (ETS)

CAT and aberrance detection with multidimensional pairwise preference measures
Stephen Stark (USF)


Cognitive Diagnostic CAT
Organizer: Duanli Yan, ETS

Multistage Testing Using Diagnostic Models
Matthias von Davier, ETS
Ying Cheng, University of Notre Dame

New Item Selection Methods for CD-CAT
Jimmy de la Torre & Mehmet Kaplan, Rutgers University
Juan Ramón Barrada, University of Zaragoza, Spain

The Termination Rules in Adaptive Diagnostic Testing
Yuehmei Chien & Chingwei David Shin, Pearson
Ning Yan, Independent consultant

Leveraging CAT Response Characteristics to Support Diagnostic Reporting
William Lorié, Questar Assessment

CD-CAT – From Smart Testing to Smart Learning
Hua-Hua Chang, University of Illinois at Urbana-Champaign


[/tab]

[tab title=”Workshops”]

[well size=”sm”]

Computerized Multistage Adaptive Testing

Abstract
This workshop provides a general overview of the multistage test (MST) design and its important concepts and processes. It describes the MST design, why it is needed, and how it differs from other test designs, such as linear tests and computer adaptive tests (CAT). The focus of the workshop will be on MST theory and applications, including alternative scoring and estimation methods, classification tests, routing and scoring, linking, and test security, as well as a live demonstration of the MST software MSTGen (Han, 2013). This workshop is based on the edited volume by Yan, von Davier, & Lewis (2014). The volume is structured to take the reader through all the operational aspects of the test, from the design to the post-administration analyses. In particular, the chapters by Yan, Lewis, and von Davier; Lewis and Smith; Lee, Lewis, and von Davier; Haberman and von Davier; and Han and Kosinski are the basis for this workshop. MSTGen (Han, 2013), a computer software tool for MST simulation studies, will be introduced by Han. MSTGen supports both conventional MST by routing mode and the new MST by shaping mode, and examples of both MST modes will be covered. The software is offered at no cost, and participants are encouraged to bring their own computers for a brief hands-on training.
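As a toy illustration of the routing step in a multistage design (this sketch is not MSTGen and is not drawn from the edited volume; the modules and cut score are invented), a simple 1-2 design routes each test taker from a medium-difficulty routing module to an easier or harder second-stage module based on number-correct:

```python
# Toy sketch of number-correct routing in a 1-2 multistage design
# (illustrative only; not MSTGen; modules and cut score are invented).

def route(stage1_responses, cut_score=3):
    """Route to the harder second-stage module if number-correct meets the cut score."""
    return "stage2_hard" if sum(stage1_responses) >= cut_score else "stage2_easy"

modules = {
    "stage1_router": ["R1", "R2", "R3", "R4", "R5"],  # medium-difficulty routing items
    "stage2_easy":   ["E1", "E2", "E3", "E4", "E5"],  # easier second-stage module
    "stage2_hard":   ["H1", "H2", "H3", "H4", "H5"],  # harder second-stage module
}

stage1_responses = [1, 0, 1, 1, 0]                    # 3 of 5 routing items correct
next_module = route(stage1_responses)

print("Stage 1 module:", modules["stage1_router"])
print("Routed to:", next_module, "->", modules[next_module])
```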

Level: Intermediate

Prerequisite: Basic understanding of item response theory and CAT.

Duanli Yan, ETS
Alina von Davier, ETS
Chris Han, GMAC
Charlie Lewis, ETS

Bios
Duanli Yan is a Manager of Data Analysis and Computational Research for the Automated Scoring group in the Research & Development division at ETS. She is also an Adjunct Professor at Rutgers University. She holds a Ph.D. in Psychometrics from Fordham University. Dr. Yan has been the statistical coordinator for the EXADEP™ test and the TOEIC® Institutional programs, a Development Scientist for innovative research applications, and a Psychometrician for several operational programs. Dr. Yan was the 2011 recipient of the ETS Presidential Award. She is a co-editor of the volume Computerized Multistage Testing: Theory and Applications and a co-author of the book Bayesian Networks in Educational Assessment. Dr. Yan has been an invited symposium organizer and presenter for many conferences, such as those of the National Council on Measurement in Education (NCME), the International Association for Computerized Adaptive Testing (IACAT), and the International Meeting of the Psychometric Society (IMPS).

Alina A. von Davier

Alina A. von Davier is a Research Director and leader of the Center for Advanced Psychometrics at ETS. She is also an Adjunct Professor at Fordham University. Her Ph.D. in mathematics was earned at the Otto von Guericke University of Magdeburg, Germany, and her M.S. in mathematics is from the University of Bucharest, Romania. At ETS, von Davier is responsible for developing a team of experts and a psychometric research agenda in support of the next generation of assessments. She is also responsible for fostering research relationships between ETS and the psychometric field, nationally and internationally. She edited a volume on test equating, Statistical Models for Test Equating, Scaling, and Linking, which was selected as the 2013 winner of the Division D Significant Contribution to Educational Measurement and Research Methodology award. She is a co-author of a book on the kernel method of test equating and was a guest co-editor for a special issue on population invariance of linking functions for Applied Psychological Measurement. She authored a book on testing causal hypotheses and numerous papers published in psychometric journals. Most recently, von Davier co-edited a volume on multistage testing. Prior to joining ETS, she worked in Germany at the Universities of Trier, Magdeburg, Kiel, and Jena, and at ZUMA in Mannheim, and in Romania at the Institute of Psychology of the Romanian Academy.

Kyung (Chris) T. Han

Kyung (Chris) T. Han is a Senior Psychometrician and Director at the Graduate Management Admission Council. Han received his doctorate in Research and Evaluation Methods from the University of Massachusetts at Amherst. He received the Alicia Cascallar Award for an Outstanding Paper by an Early Career Scholar in 2012 and the Jason Millman Promising Measurement Scholar Award in 2013 from the National Council on Measurement in Education (NCME). He has presented and published numerous papers and book chapters on a variety of topics from item response theory, test validity, and test equating to adaptive testing. He also has developed several psychometric software programs including WinGen, IRTEQ, MSTGen, and SimulCAT, which are used widely in the measurement field.

 

 

Charles Lewis

Charles Lewis is a Professor Emeritus at Fordham University. He received his Ph.D. from Princeton University in 1970, with a major in statistics and psychometrics. Dr. Lewis has been a Distinguished Presidential Appointee at ETS. His research interests include validity and fairness in testing, statistical models in test theory, applications of Bayesian statistics, and generalized linear models.   [/well]  [well size=”sm”]

Introduction to Computerized Adaptive Testing

Abstract
This workshop provides an overview of the primary components and algorithms involved in CAT, including development of an item bank, calibration with item response theory, the starting rule, the item selection rule, the scoring method, and the termination criterion. It will also present a five-step process for evaluating the feasibility of CAT and developing a real CAT assessment, with a focus on validity documentation. The workshop is intended for researchers who are familiar with classical psychometrics and educational/psychological assessment but are new to CAT.
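The components listed above fit together in a simple loop: select the most informative remaining item at the current ability estimate, score the response, update the estimate, and stop when the standard error is small enough or the maximum length is reached. The sketch below is a minimal illustration of that loop under a 2PL model with invented item parameters (it is not any particular product's implementation); run over many simulated examinees, the same loop also underlies the simulation studies used to evaluate a CAT design.

```python
# Minimal sketch of an item-level CAT loop under a 2PL model (illustrative only;
# item parameters are invented and this is not a specific product's algorithm).
import numpy as np

rng = np.random.default_rng(1)
a = rng.uniform(0.8, 2.0, size=50)       # item discriminations
b = rng.normal(0.0, 1.0, size=50)        # item difficulties
grid = np.linspace(-4, 4, 161)           # quadrature grid for EAP scoring

def prob(theta, i):
    """2PL probability of a correct response to item i."""
    return 1.0 / (1.0 + np.exp(-a[i] * (theta - b[i])))

def eap(administered, responses):
    """EAP ability estimate and posterior SD given the responses so far."""
    posterior = np.exp(-0.5 * grid ** 2)           # standard normal prior
    for i, r in zip(administered, responses):
        p = prob(grid, i)
        posterior *= p if r == 1 else 1 - p
    posterior /= posterior.sum()
    mean = np.sum(grid * posterior)
    se = np.sqrt(np.sum((grid - mean) ** 2 * posterior))
    return mean, se

def fisher_info(theta, i):
    """Fisher information of item i at theta (used for item selection)."""
    p = prob(theta, i)
    return a[i] ** 2 * p * (1 - p)

true_theta = 0.7                                   # simulated examinee
administered, responses = [], []
theta_hat, se = 0.0, 1.0

# Loop: select the most informative item, score it, re-estimate, check termination
while se > 0.35 and len(administered) < 20:
    remaining = [i for i in range(50) if i not in administered]
    item = max(remaining, key=lambda i: fisher_info(theta_hat, i))
    responses.append(int(rng.random() < prob(true_theta, item)))
    administered.append(item)
    theta_hat, se = eap(administered, responses)

print(f"Items used: {len(administered)}, estimate: {theta_hat:.2f} (SE {se:.2f})")
```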

Level: Beginner

Prerequisite: Basic understanding of Item Response Theory

Nathan Thompson, VP Assessment Systems Corporation

Bio
Nathan Thompson is the Vice President of Assessment Systems Corporation, a leading provider of technology and psychometric solutions to the testing and assessment industry. He oversees consulting and business development activities at ASC, but is primarily interested in the development of software tools that make test development more efficient and defensible. He has led the development of ASC’s CAT platform, FastTest, and developed a number of CAT assessments on that system. Dr. Thompson received a Ph.D. in Psychometric Methods from the University of Minnesota, with a minor in Industrial/Organizational Psychology.

CAT simulations: how and why to perform them

Abstract
In this workshop, the goals and usefulness of simulations for constructing CATs will be discussed. Customized software will be demonstrated and distributed, and participants will practice using the software on several examples. Participants are invited to bring their own laptops (Windows) for the hands-on practice.

Angela Verschoor, Cito
Theo Eggen, Cito

Bios
Angela Verschoor is a Senior Researcher at Cito, the Netherlands. With a background in discrete optimization, her interests include the development and application of automated test assembly (ATA), optimal design, and computerized adaptive testing (CAT). She has been responsible for the design of pretests for large-scale projects such as the Final Primary Education Test in the Netherlands. Other recent projects have included the introduction of ATA and CAT in, among others, the Netherlands, Georgia, Russia, Kazakhstan, the Philippines, and Switzerland.

Theo J.H.M. Eggen is a Senior Research Scientist at the Psychometric Research Center of Cito and a full professor of psychometrics at the University of Twente in the Netherlands. Consultancy and research on educational and psychometric issues of test development are his main activities. His specializations are item response theory, quality of tests, (inter)national assessment, and computerized adaptive testing. He has major experience as a consultant in educational measurement at Cito, at the university, and internationally. He is the author of research articles and textbook chapters. He is scientific director of the Research Center for Examinations and Certification (RCEC).
[/well]
[well size=”sm”]

Building and delivering online CAT using open-source Concerto platform

Abstract
During this hands-on workshop, participants will learn how to build and deliver an online computerized adaptive test using Concerto v4, an open-source R-based adaptive testing platform. We will start with an introduction to Concerto, build HTML-based item templates, import item content and parameters, and combine it all into a fully functional online test.

Level: Beginner

Prerequisite: Basic understanding of Item Response Theory

If you are new to R, we strongly recommend reading and trying the examples in the first 10 chapters of the official introduction to R (http://cran.r-project.org/doc/manuals/R-intro.html). It isn’t a prerequisite and takes only 2 hours; plus, you’ll gain an extremely useful skill beyond developing online CAT. Laptops will not be supplied, so please bring your own and make sure that the conference’s Internet connection is properly configured. BEFORE the workshop, please download and install R (http://cran.rstudio.com) and RStudio (http://www.rstudio.com). We will be available 15 minutes before the workshop to help you with that if necessary.

Michal Kosinski, University of Cambridge

Bio
Michal Kosinski, one of Concerto’s authors, is the Deputy Director of The Psychometrics Centre at the University of Cambridge and the Leader of the e-Psychometrics Unit. He is additionally a Research Consultant at Microsoft® Research. He combines a solid psychological background with extensive skills in the areas of machine learning, data mining, and programming. His current research focuses on the digital environment and encompasses the relationship between digital footprint and psychological traits, crowd-sourcing platforms, auctioning platforms, and online psychometrics.

 

[/well] [/tab] [tab title=”Committees”]
[well size=”sm”]
Program Committee
Isaac Bejar, ETS
Alina von Davier, ETS
Duanli Yan, ETS

Scientific Committee
John Barnard, VP IACAT, EPEC Pty Ltd
Cliff Donath, Exec. Dir. IACAT, Philips Healthcare
Theo Eggen, Past President IACAT, CITO
Larry Rudner, President IACAT, GMAC
Nathan Thompson, Membership Dir. IACAT, Assessment Systems Corporation
David Weiss, President Emeritus IACAT, University of Minnesota

[/tab] [/tabs]
