There are four parallel pre-conference workshops, which will be held on Friday 18 August 2017 from 0845 until 1200. They are:

 

WORKSHOP 1: Introduction to developing a CAT

Level: Beginner     

Pre-requisite/s: Basic understanding of item response theory.

Presenters: John Barnard (EPEC), Nate Thompson (Assessment Systems Corporation)

Abstract: New to CAT? This workshop is intended for researchers who are familiar with classical psychometrics and educational/psychological assessment and who are interested in leveraging the benefits of item response theory (IRT) and adaptive testing. The goal is to provide the theoretical and practical background practitioners need to understand the advantages, begin developing a CAT program, and find deeper resources.

The first portion of the workshop will focus on item response theory, which serves as the backbone of the vast majority of CATs. We will discuss the drawbacks of classical test theory, the development of IRT, and comparisons of IRT models. We will also present information on how to link multiple test forms, evaluate fit, and use IRT calibration software. The second portion will present the components and algorithms of a CAT, including development of an item bank, pilot testing and calibration, the starting rule, the item selection rule, the scoring method, and the termination criterion. Advanced/optional algorithms such as item exposure constraints and content balancing will also be discussed. A follow-up discussion will present a five-step process for evaluating the feasibility of CAT and developing a real CAT assessment, with a focus on validity documentation, and will address practical issues such as item bank management and CAT maintenance.
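
To make the adaptive loop concrete, the following Python sketch illustrates the basic components listed above: maximum-information item selection under a 2PL model, EAP interim scoring, and a standard-error termination criterion. The item bank, function names and settings are illustrative assumptions, not material from the workshop.

import numpy as np

def p_2pl(theta, a, b):
    """Probability of a correct response under the 2PL model."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = p_2pl(theta, a, b)
    return a**2 * p * (1.0 - p)

def eap_estimate(responses, a, b, grid=np.linspace(-4, 4, 81)):
    """EAP ability estimate and posterior SD, standard-normal prior on a grid."""
    prior = np.exp(-0.5 * grid**2)
    like = np.ones_like(grid)
    for u, ai, bi in zip(responses, a, b):
        p = p_2pl(grid, ai, bi)
        like *= p**u * (1 - p)**(1 - u)
    post = prior * like
    post /= post.sum()
    theta = np.sum(grid * post)
    se = np.sqrt(np.sum((grid - theta)**2 * post))
    return theta, se

def run_cat(bank_a, bank_b, answer, start_theta=0.0, max_items=30, se_target=0.3):
    """Administer a maximum-information CAT until SE(theta) <= se_target."""
    administered, responses = [], []
    theta, se = start_theta, np.inf
    while len(administered) < max_items and se > se_target:
        info = item_information(theta, bank_a, bank_b)
        info[administered] = -np.inf           # never re-administer an item
        item = int(np.argmax(info))            # maximum-information selection
        administered.append(item)
        responses.append(answer(item, theta))  # scored 0/1 response from the examinee
        theta, se = eap_estimate(responses, bank_a[administered], bank_b[administered])
    return theta, se, administered

Here bank_a and bank_b are NumPy arrays of calibrated item parameters, and answer is whatever routine returns the examinee's scored response to the selected item.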

Bio: John Barnard is the founder and Executive Director of EPEC Pty Ltd, a private company in Melbourne, Australia, which specialises in psychometrics and online assessment. He has extensive experience in assessment, having pioneered the implementation of IRT in South Africa and published CATs for student selection in the 1980s before migrating to Australia in 1996, where he has been active in numerous national and international projects.

John holds three doctorates and a dual appointment as professor. He is a full member of a number of professional organisations, most recently as a founding member of IACAT, where he was elected Vice President in 2014 and became President in 2015. John is also a member of the International Assessments Joint National Advisory Committee (IAJNAC), a consulting editor of JCAT and a member of the International Editorial Board of the SA Journal of Science and Technology. His most recent research in online diagnostic testing is based on a new measurement paradigm, Option Probability Theory (OPT), which he has been developing over the past decade.

Bio: Nathan Thompson is the Vice President of Client Services and Psychometrics at Assessment Systems Corporation, a leading provider of technology and psychometric solutions to the testing and assessment industry. He has extensive experience in psychometrics and test development, having worked in this role at a certifying agency and two testing services providers, as well as in management of the business of testing. He oversees consulting and business development activities at ASC, but is primarily interested in the development of software tools that make test development more efficient and defensible, having spearheaded the development of software such as Iteman, Xcalibre and FastTest.

He is a founding member of, and Membership Director for, the International Association for Computerized Adaptive Testing (IACAT), and serves on the annual conference committee. Dr. Thompson received a Ph.D. in Psychometric Methods from the University of Minnesota, with a minor in Industrial/Organizational Psychology. He also holds a B.A. from Luther College in Decorah, IA, with a triple major in Latin, Mathematics and Psychology.

 


 

WORKSHOP 2: CAT Simulations: How and Why?

Level: Intermediate

Pre-requisite/s: Basic understanding of item response theory (IRT) and computerized adaptive testing (CAT).

Presenters: Angela Verschoor (CITO), Theo J. H. M. Eggen (CITO)

Abstract: In this workshop, the goals and usefulness of simulations for constructing CATs will be discussed. The measurement characteristics of a CAT can be studied, and its settings fixed, before the test is published. This information is collected through simulation studies that use the available IRT-calibrated item bank and the proposed target population, and the performance of proposed selection algorithms and constraints can be evaluated in the same way. Customized software will be demonstrated and distributed, and participants can practice using it on several examples. Participants are invited to bring their own laptops (Windows®) for the practical exercises.
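
As a minimal illustration of what such a simulation study looks like, the Python sketch below draws simulees from an assumed standard-normal target population, runs each through a fixed-length maximum-information CAT over a hypothetical Rasch-calibrated bank, and reports bias and RMSE of the ability estimates. It is not the customized software used in the workshop.

import numpy as np

rng = np.random.default_rng(2017)

# Hypothetical calibrated item bank: 300 Rasch items with difficulties in [-3, 3].
n_items = 300
b = rng.uniform(-3, 3, n_items)
grid = np.linspace(-4, 4, 81)
prior = np.exp(-0.5 * grid**2)

def simulate_cat(true_theta, test_length=20):
    """Simulate one examinee through a fixed-length maximum-information Rasch CAT."""
    used, theta, like = [], 0.0, np.ones_like(grid)
    for _ in range(test_length):
        info = np.exp(theta - b) / (1 + np.exp(theta - b))**2  # Rasch item information
        info[used] = -np.inf
        item = int(np.argmax(info))
        used.append(item)
        p_true = 1 / (1 + np.exp(-(true_theta - b[item])))
        u = rng.random() < p_true                               # simulated 0/1 response
        p_grid = 1 / (1 + np.exp(-(grid - b[item])))
        like *= p_grid if u else (1 - p_grid)
        post = prior * like
        theta = np.sum(grid * post) / post.sum()                # EAP interim estimate
    return theta

# Draw simulees from the proposed target population and evaluate recovery.
true_thetas = rng.normal(0.0, 1.0, 1000)
estimates = np.array([simulate_cat(t) for t in true_thetas])
bias = np.mean(estimates - true_thetas)
rmse = np.sqrt(np.mean((estimates - true_thetas)**2))
print(f"bias = {bias:.3f}, RMSE = {rmse:.3f}")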

Bio: Angela Verschoor is a Senior Researcher at CITO, the Netherlands. With a background in discrete optimization, she focuses on the development and application of automated test assembly (ATA), optimal design and computerized adaptive testing (CAT). She has been responsible for the design of pretests for large-scale projects such as the Final Primary Education Test in the Netherlands.

Other recent projects include the introduction of ATA and CAT in the Netherlands, Georgia, Russia, Kazakhstan, the Philippines and Switzerland, among other countries.

 

Bio: Theo Eggen is a Senior Research Scientist at the Psychometric Research Center of CITO, and a full Professor of Psychometrics at the University of Twente in the Netherlands. Theo consults and undertakes research on educational and psychometric issues in test development. He specializes in item response theory (IRT), assessing the quality of tests and national and international assessments, and computerized adaptive testing (CAT). Theo has extensive experience as a consultant in educational measurement at CITO, at the University of Twente and internationally.

He is also the author of research articles and textbook chapters, and is the Director of the Research Center for Examinations and Certification (RCEC).

 


 

WORKSHOP 3: The Shadow-Test Approach to Adaptive Testing

Level: Intermediate

Presenters: Michelle D. Barrett (Pacific Metrics Corporation), Wim J. van der Linden (Pacific Metrics Corporation and the University of Twente)

Abstract: The shadow-test approach is not “just another item-selection technique” for adaptive testing, but an integrated approach to the configuration and management of the entire process of adaptive testing. The purpose of this workshop is to (i) introduce the conceptual ideas underlying the approach, (ii) show how it can be used to combine all content, statistical, practical, and logical requirements into a single adaptive optimization model, (iii) embed adaptive calibration of field-test items into operational testing, (iv) use the approach to deliver tests in a fully adaptive, multistage or linear on-the-fly format, or any hybrid of these, (v) introduce all statistical and computational aspects, and (vi) discuss practical implementation issues (transitioning from fixed-form to adaptive testing, accommodating changes in item pool composition or test specifications, dealing with testing interruptions, etc.). The workshop consists of a mixture of lectures, demonstrations, and an opportunity to work with a CAT simulator program based on the shadow-test approach, offered to participants free of charge. Participants are encouraged to bring their own laptop computers and item-pool metadata to work with the simulator.
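
A minimal sketch of a single shadow-test step is given below, using the open-source pulp package as a generic mixed-integer solver: a full-length test that meets the (purely illustrative) content constraints and contains all items already administered is assembled to maximize information at the current ability estimate, and the most informative free item in it is administered next. The data, constraints and function names are assumptions for illustration, not the simulator distributed in the workshop.

import numpy as np
import pulp  # generic mixed-integer programming interface; any solver could be used

def select_next_item(info, administered, content, content_targets, test_length):
    """One shadow-test step: assemble the best full-length test that contains all
    items already given and meets the constraints, then return its most
    informative unused item."""
    n = len(info)
    prob = pulp.LpProblem("shadow_test", pulp.LpMaximize)
    x = [pulp.LpVariable(f"x_{i}", cat="Binary") for i in range(n)]
    # Objective: maximize information at the current ability estimate.
    prob += pulp.lpSum(info[i] * x[i] for i in range(n))
    # The shadow test always has the full, fixed test length.
    prob += pulp.lpSum(x) == test_length
    # Content constraints: exact number of items per content area.
    for area, count in content_targets.items():
        prob += pulp.lpSum(x[i] for i in range(n) if content[i] == area) == count
    # All items already administered must be part of the shadow test.
    for i in administered:
        prob += x[i] == 1
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    free = [i for i in range(n) if x[i].value() > 0.5 and i not in administered]
    return max(free, key=lambda i: info[i])  # administer the most informative free item

# Hypothetical use: 200-item pool, 30-item test, two content areas.
rng = np.random.default_rng(1)
info = rng.uniform(0.1, 1.0, 200)                 # item information at the current theta
content = rng.choice(["algebra", "geometry"], 200)
next_item = select_next_item(info, administered=[5, 17], content=content,
                             content_targets={"algebra": 18, "geometry": 12},
                             test_length=30)

Because the full-length model is re-solved at every step with the updated ability estimate, the completed test satisfies all specifications no matter when the CAT terminates.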

Bio: Michelle Barrett is the Director of Assessment Technology at Pacific Metrics Corporation. In this role, she is responsible for leading teams of psychometricians and software engineers to design and develop leading assessment technology solutions, including test delivery, computerized adaptive testing and optimal test assembly. Previously, she worked in a similar role at CTB/McGraw-Hill, where her team's software performed psychometric analysis for multiple large-scale assessments and served adaptive tests to students using summative and formative assessment products. She has also worked as a Senior Consultant in the assessment division at the Colorado Department of Education. Dr Barrett has research interests in adaptive testing and response model parameter linking. She is also interested in exploring new and modified software development practices to speed the deployment of psychometric innovation into scalable, production-level software. She holds a Bachelor's degree from Stanford University, a Master's degree from the Harvard Graduate School of Education and a Graduate Certificate in large-scale assessment from the University of Maryland, and received her PhD in research methods, data analysis and measurement from the University of Twente, the Netherlands.

Bio: Wim van der Linden is Distinguished Scientist and Director of Research and Innovation at Pacific Metrics Corporation. He is also Professor Emeritus of Measurement and Data Analysis at the University of Twente, the Netherlands. He has published widely on such topics as item response theory, adaptive testing, optimal test assembly, parameter linking, observed-score equating, and response-time modeling and applications. He authored Linear Models for Optimal Test Design (2005) and was the editor (with C. A. W. Glas) of Elements of Adaptive Testing (2010) and of the three-volume Handbook of Item Response Theory (2016). Dr van der Linden is a past president of the Psychometric Society and NCME, a recipient of the ATP, NCME and Psychometric Society career achievement awards and of the AERA E. F. Lindquist Award, and holds an honorary doctorate from the University of Umeå, Sweden.

 

 

WORKSHOP 4: Multivariate CAT

Level: Advanced

Pre-requisite/s: Sound understanding of CAT. Since the workshop will be a mixture of lectures and demonstrations, with a sample source code also being delivered in the workshop, participants are encouraged to bring their own laptop computers to test out the algorithms. 

Presenters: Chun Wang (University of Minnesota), Ping Chen (Beijing Normal University)

Abstract: Many constructs measured by psychological testing can be conceptualized in terms of multi-unidimensional structures (such as personality traits or vocational interests) or hierarchical structures (such as cognitive ability). Advanced measurement theory, such as multidimensional item response theory (MIRT), is evolving rapidly to meet the needs of complex psychological testing. Multidimensional CAT (MCAT), by coupling the strength of multidimensional IRT with adaptive testing, provides a promising way to measure psychological constructs with greater precision and reduced test length. The purpose of this workshop is, first, to introduce the key building blocks of multidimensional CAT, including item selection methods, interim scoring methods, stopping rules, and online calibration methods. Second, we aim to illustrate applications of multidimensional CAT in the education and health domains and to discuss the practical challenges.
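
As one concrete example of such a building block, the Python sketch below implements D-optimal item selection for a two-dimensional 2PL (M2PL) model: the next item is the one that maximizes the determinant of the updated Fisher information matrix at the interim ability estimate. The bank, prior term and function names are illustrative assumptions rather than the workshop's sample code.

import numpy as np

def m2pl_prob(theta, a, d):
    """Probability of a correct response under the multidimensional 2PL model."""
    return 1.0 / (1.0 + np.exp(-(a @ theta + d)))

def d_optimal_item(theta, A, d, info_so_far, administered):
    """Select the item maximizing the determinant of the updated Fisher
    information matrix at the interim ability estimate (D-optimality)."""
    best_item, best_det = None, -np.inf
    for i in range(len(d)):
        if i in administered:
            continue
        p = m2pl_prob(theta, A[i], d[i])
        candidate = info_so_far + p * (1 - p) * np.outer(A[i], A[i])
        det = np.linalg.det(candidate)
        if det > best_det:
            best_item, best_det = i, det
    return best_item

# Illustrative two-dimensional bank: discrimination matrix A and intercepts d.
rng = np.random.default_rng(0)
A = rng.uniform(0.5, 2.0, size=(100, 2))
d = rng.normal(0.0, 1.0, size=100)
theta_hat = np.zeros(2)
info = np.eye(2) * 0.1  # small prior contribution keeps the determinant nonzero
print(d_optimal_item(theta_hat, A, d, info, administered=set()))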

Bio: Professor Chun Wang completed her Master of Science and PhD at the University of Illinois and is currently Assistant Professor at the University of Minnesota. To date she has received 18 first-class awards from the Psychometric Society, IACAT, NCME, universities and associations, and has published 28 papers in journals such as Psychometrika, Applied Psychological Measurement, the Journal of Educational Measurement and the International Journal of Testing. She has also contributed book chapters, given a long list of invited presentations at prominent conferences, held grants, reviewed for journals, served as an editorial board member, and is affiliated with a number of professional organizations.

Chun became interested in CAT in 2007 during her graduate studies at the University of Illinois and presented her first CAT paper, on controlling item exposure in unidimensional CATs, at the 2008 Psychometric Society meeting. Her research has since focused on two major directions, namely multidimensional CAT (MCAT) and cognitive diagnostic CAT (CD-CAT). CD-CAT combines CAT and cognitive diagnosis and thus offers the advantages of both, whilst also introducing complexity in item selection, interim scoring and item bank management.

Bio: Ping Chen is an Associate Professor at the Collaborative Innovation Center of Assessment toward Basic Education Quality (CICA-BEQ), Beijing Normal University (BNU), China. He is also the Director of the Department of Statistics and Measurement Technology at CICA-BEQ, where he is responsible for providing technical support and guidance for the large-scale national assessment in China. Ping received a Ph.D. in psychological measurement and assessment from BNU in 2011, and was awarded one year of financial support from the China Scholarship Council to study as a visiting student at the University of Illinois at Urbana-Champaign. With an interdisciplinary background in psychometrics and computer science, his research interests include item response theory, computerized adaptive testing, online item calibration, cognitive diagnostic assessment, standard setting, and large-scale data processing and analysis. He has published lead-author papers in peer-reviewed journals such as Psychometrika, the British Journal of Mathematical and Statistical Psychology, the Journal of Educational Measurement and Acta Psychologica Sinica, among others.